---
title: Manage an Endpoint
summary: Learn how to create, develop, test, deploy, and delete an endpoint in a Data App in the TiDB Cloud console.
---
An endpoint in Data Service (beta) is a web API that you can customize to execute SQL statements. You can specify parameters for the SQL statements, such as the value used in the `WHERE` clause. When a client calls an endpoint and provides values for the parameters in a request URL, the endpoint executes the SQL statement with the provided parameters and returns the results as part of the HTTP response.
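For example, a hypothetical endpoint defined with the SQL statement `SELECT * FROM users WHERE id = ${user_id}` (the table and parameter names here are illustrative, not from this document) could be called by appending the parameter to the request URL:

```shell
# Hypothetical endpoint whose SQL is: SELECT * FROM users WHERE id = ${user_id}
# The client supplies user_id in the request URL; the endpoint binds the value
# into the SQL statement and returns the query results in the HTTP response.
ENDPOINT='https://<region>.data.tidbcloud.com/api/v1beta/app/<App ID>/endpoint/users'

# The request URL a client would call (placeholders kept as in the rest of this document):
echo "${ENDPOINT}?user_id=42"
```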
This document describes how to manage your endpoints in a Data App in the TiDB Cloud console.
Before you create an endpoint, make sure of the following:

- You have created a cluster and a Data App. For more information, see Create a Data App.
- The databases, tables, and columns that the endpoint will operate on already exist in the target cluster.

Before you call an endpoint, make sure that you have created an API key in the Data App. For more information, see Create an API key.
In Data Service, you can automatically generate endpoints, manually create endpoints, or add predefined system endpoints.
Tip:
You can also create an endpoint from a SQL file in SQL Editor. For more information, see Generate an endpoint from a SQL file.
In TiDB Cloud Data Service, you can generate one or multiple endpoints automatically in one go as follows:
- Navigate to the Data Service page of your project.

- In the left pane, locate your target Data App, click + to the right of the App name, and then click Autogenerate Endpoint. The dialog for endpoint generation is displayed.

- In the dialog, do the following:

  - Select the target cluster, database, and table for the endpoint to be generated.

    Note:

    The Table drop-down list includes only user-defined tables with at least one column, excluding system tables and any tables without a column definition.
  - Select at least one HTTP operation (such as `GET (Retrieve)`, `POST (Create)`, and `PUT (Update)`) for the endpoint to be generated.

    For each operation you select, TiDB Cloud Data Service will generate a corresponding endpoint. If you select a batch operation (such as `POST (Batch Create)`), the generated endpoint lets you operate on multiple rows in a single request.

    If the table you selected contains vector data types, you can enable the Vector Search Operations option and select a vector distance function to generate a vector search endpoint that automatically calculates vector distances based on your selected distance function. The supported vector distance functions include the following:

    - `VEC_L2_DISTANCE` (default): calculates the L2 distance (Euclidean distance) between two vectors.
    - `VEC_COSINE_DISTANCE`: calculates the cosine distance between two vectors.
    - `VEC_NEGATIVE_INNER_PRODUCT`: calculates the distance by using the negative of the inner product between two vectors.
    - `VEC_L1_DISTANCE`: calculates the L1 distance (Manhattan distance) between two vectors.
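To illustrate what a generated vector search endpoint does, the SQL it runs orders rows by the selected distance function. The following sketch builds such a statement with `VEC_COSINE_DISTANCE`; the column name `embedding` and the exact statement shape are illustrative assumptions, and the statement actually generated by Data Service may differ:

```shell
# Sketch of the kind of statement a vector search endpoint executes.
# "embedding" and "sample_table" are illustrative names; the generated
# SQL uses whichever distance function you selected in the dialog.
QUERY_VECTOR='[0.1, 0.2, 0.3]'
SQL="SELECT *, VEC_COSINE_DISTANCE(embedding, '${QUERY_VECTOR}') AS distance FROM sample_table ORDER BY distance LIMIT 10"
echo "$SQL"
```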
  - (Optional) Configure a timeout and tag for the operations. All the generated endpoints automatically inherit the configured properties, which can be modified later as needed.

  - (Optional) The Auto-Deploy Endpoint option (disabled by default) controls whether the generated endpoints are deployed directly. When it is enabled, the draft review process is skipped, and the generated endpoints are deployed immediately without further manual review or approval.

- Click Generate.

  The generated endpoint is displayed at the top of the endpoint list.
- Check the generated endpoint name, SQL statements, properties, and parameters of the new endpoint.

  - Endpoint name: the generated endpoint name is in the `/<name of the selected table>` format, and the request method (such as `GET`, `POST`, and `PUT`) is displayed before the endpoint name. For example, if the selected table name is `sample_table` and the selected operation is `POST (Create)`, the generated endpoint is displayed as `POST /sample_table`.

    - If a batch operation is selected, TiDB Cloud Data Service appends `/bulk` to the name of the generated endpoint. For example, if the selected table name is `sample_table` and the selected operation is `POST (Batch Create)`, the generated endpoint is displayed as `POST /sample_table/bulk`.
    - If `POST (Vector Similarity Search)` is selected, TiDB Cloud Data Service appends `/vector_search` to the name of the generated endpoint. For example, if the selected table name is `sample_table` and the selected operation is `POST (Vector Similarity Search)`, the generated endpoint is displayed as `POST /sample_table/vector_search`.
    - If an endpoint with the same request method and endpoint name already exists, TiDB Cloud Data Service appends `_dump_<random letters>` to the name of the generated endpoint. For example, `/sample_table_dump_EUKRfl`.
  - SQL statements: TiDB Cloud Data Service automatically writes SQL statements for the generated endpoints according to the table column specifications and the selected endpoint operations. You can click the endpoint name to view its SQL statements in the middle section of the page.
  - Endpoint properties: TiDB Cloud Data Service automatically configures the endpoint path, request method, timeout, and tag according to your selection. You can find the properties in the right pane of the page.
  - Endpoint parameters: TiDB Cloud Data Service automatically configures parameters for the generated endpoints. You can find the parameters in the right pane of the page.

If you want to modify the details of a generated endpoint, such as its name, SQL statements, properties, or parameters, refer to the instructions in Develop an endpoint.
To create an endpoint manually, perform the following steps:
- Navigate to the Data Service page of your project.
- In the left pane, locate your target Data App, click + to the right of the App name, and then click Create Endpoint.
- Update the default name if necessary. The newly created endpoint is added to the top of the endpoint list.
- Configure the new endpoint according to the instructions in Develop an endpoint.
Data Service provides an endpoint library with predefined system endpoints that you can add directly to your Data App, reducing your endpoint development effort. Currently, the library includes only the `/system/query` endpoint, which enables you to execute any SQL statement by passing the statement in the predefined `sql` parameter.
To add a predefined system endpoint to your Data App, perform the following steps:
- Navigate to the Data Service page of your project.

- In the left pane, locate your target Data App, click + to the right of the App name, and then click Manage Endpoint Library.

  A dialog for endpoint library management is displayed. Currently, only Execute Query (that is, the `/system/query` endpoint) is provided in the dialog.

- To add the `/system/query` endpoint to your Data App, toggle the Execute Query switch to Added.

  Tip:

  To remove an added predefined endpoint from your Data App, toggle the Execute Query switch to Removed.

- Click Save.

  Note:

  - After you click Save, the change takes effect in production immediately: an added endpoint becomes accessible and a removed endpoint becomes inaccessible right away.
  - If a non-predefined endpoint with the same path and method already exists in the current App, the creation of the system endpoint will fail.

  The added system endpoint is displayed at the top of the endpoint list.
- Check the endpoint name, SQL statements, properties, and parameters of the new endpoint.

  Note:

  The `/system/query` endpoint is powerful and versatile but can be potentially destructive. Use it with discretion and make sure your queries are secure and well-considered to prevent unintended consequences.

  - Endpoint name: the endpoint name and path is `/system/query`, and the request method is `POST`.
  - SQL statements: the `/system/query` endpoint does not come with any SQL statements. You can find the SQL editor in the middle section of the page and write your desired SQL statements there. Note that the SQL statements written in the SQL editor for the `/system/query` endpoint are saved in the SQL editor so that you can develop and test them further next time, but they are not saved in the endpoint configuration.
  - Endpoint properties: in the right pane of the page, you can find the endpoint properties on the Properties tab. Unlike custom endpoints, only the `timeout` and `max rows` properties can be customized for system endpoints.
  - Endpoint parameters: in the right pane of the page, you can find the endpoint parameters on the Params tab. The parameters of the `/system/query` endpoint are configured automatically and cannot be modified.
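Putting this together, a call to the `/system/query` endpoint can be sketched as follows. The placeholders are the same as elsewhere in this document, and the exact request-body shape is an assumption based on the predefined `sql` parameter being a `Body` parameter of a `POST` endpoint; check the endpoint's generated code example for the authoritative form:

```shell
# Sketch only: the "sql" field name follows the predefined parameter described
# above; verify the exact body shape in the endpoint's generated code example.
BODY='{"sql": "SELECT DATABASE();"}'
URL='https://<region>.data.tidbcloud.com/api/v1beta/app/<App ID>/endpoint/system/query'

# Replace <Public Key>, <Private Key>, <region>, and <App ID> before running:
CMD="curl --digest --user '<Public Key>:<Private Key>' --request POST '${URL}' --header 'content-type: application/json' --data-raw '${BODY}'"
echo "$CMD"
```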
For each endpoint, you can write SQL statements to execute on a TiDB cluster, define parameters for the SQL statements, or manage the name and version.
Note:
If you have connected your Data App to GitHub with Auto Sync & Deployment enabled, you can also update the endpoint configurations using GitHub. Any changes you made in GitHub will be deployed in TiDB Cloud Data Service automatically. For more information, see Deploy automatically with GitHub.
On the right pane of the endpoint details page, you can click the Properties tab to view and configure properties of the endpoint.
- Path: the path that users use to access the endpoint.

  - The length of the path must be less than 64 characters.
  - The combination of the request method and the path must be unique within a Data App.
  - Only letters, numbers, underscores (`_`), slashes (`/`), and parameters enclosed in curly braces (such as `{var}`) are allowed in a path. Each path must start with a slash (`/`) and end with a letter, number, or underscore (`_`). For example, `/my_endpoint/get_id`.
  - For parameters enclosed in `{ }`, only letters, numbers, and underscores (`_`) are allowed. Each parameter enclosed in `{ }` must start with a letter or underscore (`_`).
  Note:

  - In a path, each parameter must be at a separate level and does not support prefixes or suffixes.

    Valid paths: `/var/{var}` and `/{var}`

    Invalid paths: `/var{var}` and `/{var}var`

  - Paths with the same method and prefix might conflict, as in the following example:

    - `GET /var/{var1}`
    - `GET /var/{var2}`

    These two paths conflict with each other because `GET /var/123` matches both.

  - Paths with parameters have lower priority than paths without parameters. For example:

    - `GET /var/{var1}`
    - `GET /var/123`

    These two paths do not conflict because `GET /var/123` takes precedence.

  - Path parameters can be used directly in SQL. For more information, see Configure parameters.
- Endpoint URL: (read-only) the default URL is automatically generated based on the region where the corresponding cluster is located, the service URL of the Data App, and the path of the endpoint. For example, if the path of the endpoint is `/my_endpoint/get_id`, the endpoint URL is `https://<region>.data.tidbcloud.com/api/v1beta/app/<App ID>/endpoint/my_endpoint/get_id`. To configure a custom domain for the Data App, see Custom Domain in Data Service.

- Request Method: the HTTP method of the endpoint. The following methods are supported:

  - `GET`: use this method to query or retrieve data, such as a `SELECT` statement.
  - `POST`: use this method to insert or create data, such as an `INSERT` statement.
  - `PUT`: use this method to update or modify data, such as an `UPDATE` statement.
  - `DELETE`: use this method to delete data, such as a `DELETE` statement.
- Description (Optional): the description of the endpoint.

- Timeout(ms): the timeout for the endpoint, in milliseconds.

- Max Rows: the maximum number of rows that the endpoint can operate on or return.

- Tag: the tag used for identifying a group of endpoints.

- Pagination: this property is available only when the request method is `GET` and the last SQL statement of the endpoint is a `SELECT` operation. When Pagination is enabled, you can paginate the results by specifying `page` and `page_size` as query parameters when calling the endpoint, such as `https://<region>.data.tidbcloud.com/api/v1beta/app/<App ID>/endpoint/my_endpoint/get_id?page=<Page Number>&page_size=<Page Size>`. For more information, see Call an endpoint.

  Note:

  - If you do not include the `page` and `page_size` parameters in the request, the default behavior is to return the maximum number of rows specified in the Max Rows property on a single page.
  - The `page_size` must be less than or equal to the Max Rows property. Otherwise, an error is returned.
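For example, to request the second page with 10 rows per page (placeholders as elsewhere in this document):

```shell
# Build a paginated request URL for a GET endpoint with Pagination enabled.
BASE='https://<region>.data.tidbcloud.com/api/v1beta/app/<App ID>/endpoint/my_endpoint/get_id'
PAGE=2
PAGE_SIZE=10   # must be less than or equal to the Max Rows property
echo "${BASE}?page=${PAGE}&page_size=${PAGE_SIZE}"
```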
- Cache Response: this property is available only when the request method is `GET`. When Cache Response is enabled, TiDB Cloud Data Service can cache the response returned by your `GET` requests within a specified time-to-live (TTL) period.

- Time-to-live(s): this property is available only when Cache Response is enabled. You can use it to specify the time-to-live (TTL) period in seconds for the cached response. During the TTL period, if you make the same `GET` request again, Data Service returns the cached response directly instead of fetching data from the target database again, which improves your query performance.

- Batch Operation: this property is visible only when the request method is `POST` or `PUT`. When Batch Operation is enabled, you can operate on multiple rows in a single request. For example, you can insert multiple rows of data in a single `POST` request by putting an array of data objects in the `items` field of an object in the `--data-raw` option of your curl command when calling the endpoint.

  Note:

  An endpoint with Batch Operation enabled supports both array and object formats for the request body: `[{dataObject1}, {dataObject2}]` and `{items: [{dataObject1}, {dataObject2}]}`. For better compatibility with other systems, it is recommended that you use the object format `{items: [{dataObject1}, {dataObject2}]}`.
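The two accepted request-body formats look like the following (the `age` field is illustrative, not from this document):

```shell
# Array format: a bare JSON array of data objects.
ARRAY_BODY='[{"age": 25}, {"age": 30}]'

# Object format (recommended for better compatibility): the same data objects
# wrapped in an "items" field.
OBJECT_BODY='{"items": [{"age": 25}, {"age": 30}]}'

echo "$OBJECT_BODY"
```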
On the SQL editor of the endpoint details page, you can write and run the SQL statements for an endpoint. You can also simply type `--` followed by your instructions to let AI generate SQL statements automatically.
- Select a cluster.

  Note:

  Only clusters that are linked to the Data App are displayed in the drop-down list. To manage the linked clusters, see Manage linked clusters.

  On the upper part of the SQL editor, select a cluster on which you want to execute SQL statements from the drop-down list. Then, you can view all databases of this cluster in the Schema tab on the right pane.

- Depending on your endpoint type, do one of the following to select a database:

  - Predefined system endpoints: on the upper part of the SQL editor, select the target database from the drop-down list.
  - Other endpoints: write a SQL statement to specify the target database in the SQL editor. For example, `USE database_name;`.
- Write SQL statements.

  In the SQL editor, you can write statements such as table join queries, complex queries, and aggregate functions. You can also simply type `--` followed by your instructions to let AI generate SQL statements automatically.

  To define a parameter, insert it as a variable placeholder such as `${ID}` in the SQL statement. For example, `SELECT * FROM table_name WHERE id = ${ID}`. Then, you can click the Params tab on the right pane to change the parameter definition and test values. For more information, see Parameters.

  When you define an array parameter, the parameter is automatically converted to multiple comma-separated values in the SQL statement. To make sure that the SQL statement is valid, you need to add parentheses (`()`) around the parameter in some SQL statements (such as `IN`). For example, if you define an array parameter `ID` with the test value `1,2,3`, use `SELECT * FROM table_name WHERE id IN (${ID})` to query the data.

  Note:

  - The parameter name is case-sensitive.
  - A parameter cannot be used as a table name or column name.
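To make the array expansion concrete, here is how the `IN` example reads after the test value is substituted (shell variable expansion stands in for the substitution Data Service performs):

```shell
# With the array parameter ID bound to the test value 1,2,3, the statement
#   SELECT * FROM table_name WHERE id IN (${ID})
# is executed with the comma-separated values substituted in place:
ID='1,2,3'
echo "SELECT * FROM table_name WHERE id IN (${ID})"
```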
- Run SQL statements.

  If you have inserted parameters in the SQL statements, make sure that you have set test values or default values for the parameters in the Params tab on the right pane. Otherwise, an error is returned.

  For macOS:

  - If you have only one statement in the editor, to run it, press ⌘ + Enter or click Run.
  - If you have multiple statements in the editor, to run one or several of them sequentially, place your cursor on your target statement or select the lines of the target statements with your cursor, and then press ⌘ + Enter or click Run.
  - To run all statements in the editor sequentially, press ⇧ + ⌘ + Enter, or select the lines of all statements with your cursor and click Run.

  For Windows or Linux:

  - If you have only one statement in the editor, to run it, press Ctrl + Enter or click Run.
  - If you have multiple statements in the editor, to run one or several of them sequentially, place your cursor on your target statement or select the lines of the target statements with your cursor, and then press Ctrl + Enter or click Run.
  - To run all statements in the editor sequentially, press Shift + Ctrl + Enter, or select the lines of all statements with your cursor and click Run.

  After running the statements, you can see the query results immediately in the Result tab at the bottom of the page.

  Note:

  The returned result has a size limit of 8 MiB.
On the right pane of the endpoint details page, you can click the Params tab to view and manage the parameters used in the endpoint.
In the Definition section, you can view and manage the following properties for a parameter:
- The parameter name: the name can only include letters, digits, and underscores (`_`) and must start with a letter or an underscore (`_`). DO NOT use `page` or `page_size` as a parameter name; these names are reserved for pagination of request results.

- Required: specifies whether the parameter is required in the request. For path parameters, this configuration is required and cannot be modified. For other parameters, the default configuration is not required.

- Type: specifies the data type of the parameter. For path parameters, only `STRING` and `INTEGER` are supported. For other parameters, `STRING`, `NUMBER`, `INTEGER`, `BOOLEAN`, and `ARRAY` are supported.

  When using a `STRING` type parameter, you do not need to add quotation marks (`'` or `"`). For example, `foo` is valid for the `STRING` type and is processed as `"foo"`, whereas `"foo"` is processed as `"\"foo\""`.

- Enum Value: (optional) specifies the valid values for the parameter and is available only when the parameter type is `STRING`, `INTEGER`, or `NUMBER`.

  - If you leave this field empty, the parameter can be any value of the specified type.
  - To specify multiple valid values, you can separate them with a comma (`,`). For example, if you set the parameter type to `STRING` and specify this field as `foo, bar`, the parameter value can only be `foo` or `bar`.
- ItemType: specifies the item type of an `ARRAY` type parameter.

- Default Value: specifies the default value of the parameter.

  - For the `ARRAY` type, you need to separate multiple values with a comma (`,`).
  - Make sure that the value can be converted to the type of the parameter. Otherwise, the endpoint returns an error.
  - If you do not set a test value for a parameter, the default value is used when testing the endpoint.

- Location: indicates the location of the parameter. This property cannot be modified.

  - For path parameters, this property is `Path`.
  - For other parameters, if the request method is `GET` or `DELETE`, this property is `Query`. If the request method is `POST` or `PUT`, this property is `Body`.
In the Test Values section, you can view and set test values for the parameters. These values are used as the parameter values when you test the endpoint. Make sure that each value can be converted to the type of the corresponding parameter. Otherwise, the endpoint returns an error.
To rename an endpoint, perform the following steps:
- Navigate to the Data Service page of your project.
- In the left pane, click the name of your target Data App to view its endpoints.
- Locate the endpoint you want to rename, click ... > Rename, and enter a new name for the endpoint.
Note:
Predefined system endpoints do not support renaming.
To test an endpoint, perform the following steps:
Tip:
If you have imported your Data App to Postman, you can also test endpoints of the Data App in Postman. For more information, see Run Data App in Postman.
- Navigate to the Data Service page of your project.

- In the left pane, click the name of your target Data App to view its endpoints.

- Click the name of the endpoint you want to test to view its details.

- (Optional) If the endpoint contains parameters, set test values before testing:

  - On the right pane of the endpoint details page, click the Params tab.
  - Expand the Test Values section and set test values for the parameters.

    If you do not set a test value for a parameter, the default value is used.

- Click Test in the upper-right corner.

  Tip:

  Alternatively, you can also press F5 to test the endpoint.
After testing the endpoint, you can see the response as JSON at the bottom of the page. For more information about the JSON response, refer to Response of an endpoint.
Note:
If you have connected your Data App to GitHub with Auto Sync & Deployment enabled, any Data App changes you made in GitHub will be deployed in TiDB Cloud Data Service automatically. For more information, see Deploy automatically with GitHub.
To deploy an endpoint, perform the following steps:
- Navigate to the Data Service page of your project.

- In the left pane, click the name of your target Data App to view its endpoints.

- Locate the endpoint you want to deploy, click the endpoint name to view its details, and then click Deploy in the upper-right corner.

- If Review Draft is enabled for your Data App, a dialog is displayed for you to review the changes you made. You can choose whether to discard the changes based on the review.

- Click Deploy to confirm the deployment. You will get the Endpoint has been deployed prompt if the endpoint is successfully deployed.

  On the right pane of the endpoint details page, you can click the Deployments tab to view the deployment history.
To call an endpoint, you can send an HTTPS request to either an undeployed draft version or a deployed online version of the endpoint.
Tip:
If you have imported your Data App to Postman, you can also call endpoints of the Data App in Postman. For more information, see Run Data App in Postman.
Before calling an endpoint, you need to create an API key. For more information, refer to Create an API key.
TiDB Cloud Data Service generates code examples to help you call an endpoint. To get the code example, perform the following steps:
- Navigate to the Data Service page of your project.

- In the left pane, click the name of your target Data App to view its endpoints.

- Locate the endpoint you want to call and click ... > Code Example. The Code Example dialog box is displayed.

  Tip:

  Alternatively, you can also click the endpoint name to view its details and click ... > Code Example in the upper-right corner.

- In the dialog box, select the environment and authentication method that you want to use to call the endpoint, and then copy the code example.

  Note:

  - The code examples are generated based on the properties and parameters of the endpoint.
  - Currently, TiDB Cloud Data Service only provides the curl code example.

  - Environment: choose Test Environment or Online Environment depending on your need. Online Environment is available only after you deploy the endpoint.

  - Authentication method: choose Basic Authentication or Digest Authentication.

    - Basic Authentication transmits your API key as base64-encoded text.
    - Digest Authentication transmits your API key in an encrypted form, which is more secure.

    Compared with Basic Authentication, the curl code of Digest Authentication includes an additional `--digest` option.
Here is an example of a curl code snippet for a `POST` request that enables Batch Operation and uses Digest Authentication.

To call a draft version of the endpoint, you need to add the `endpoint-type: draft` header:

```shell
curl --digest --user '<Public Key>:<Private Key>' \
  --request POST 'https://<region>.data.tidbcloud.com/api/v1beta/app/<App ID>/endpoint/<Endpoint Path>' \
  --header 'content-type: application/json' \
  --header 'endpoint-type: draft' \
  --data-raw '{
    "items": [
      {
        "age": "${age}",
        "career": "${career}"
      }
    ]
  }'
```

You must deploy your endpoint first before checking the code example in the online environment.

To call the current online version of the endpoint, use the following command:

```shell
curl --digest --user '<Public Key>:<Private Key>' \
  --request POST 'https://<region>.data.tidbcloud.com/api/v1beta/app/<App ID>/endpoint/<Endpoint Path>' \
  --header 'content-type: application/json' \
  --data-raw '{
    "items": [
      {
        "age": "${age}",
        "career": "${career}"
      }
    ]
  }'
```
Note:

- By requesting the regional domain `<region>.data.tidbcloud.com`, you can directly access the endpoint in the region where the TiDB cluster is located.
- Alternatively, you can also request the global domain `data.tidbcloud.com` without specifying a region. In this way, TiDB Cloud Data Service internally redirects the request to the target region, but this might result in additional latency. If you choose this way, make sure to add the `--location-trusted` option to your curl command when calling an endpoint.
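For example, the online-environment call above, switched to the global domain, drops the region prefix and adds `--location-trusted` (a sketch with the same placeholders as the rest of this document):

```shell
# Global-domain variant: no <region> prefix in the URL, and --location-trusted
# added so that curl re-sends credentials after the internal redirect.
URL='https://data.tidbcloud.com/api/v1beta/app/<App ID>/endpoint/<Endpoint Path>'
CMD="curl --digest --location-trusted --user '<Public Key>:<Private Key>' --request POST '${URL}' --header 'content-type: application/json'"
echo "$CMD"
```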
- Paste the code example into your application, edit the example according to your needs, and then run it.

  - You need to replace the `<Public Key>` and `<Private Key>` placeholders with your API key. For more information, refer to Manage an API key.

  - If the request method of your endpoint is `GET` and Pagination is enabled for the endpoint, you can paginate the results by updating the values of `page=<Page Number>` and `page_size=<Page Size>` with your desired values. For example, to get the second page with 10 items per page, use `page=2` and `page_size=10`.

  - If the request method of your endpoint is `POST` or `PUT`, fill in the `--data-raw` option according to the rows of data that you want to operate on.

    - For endpoints with Batch Operation enabled, the `--data-raw` option accepts an object with an `items` field containing an array of data objects, so you can operate on multiple rows of data using one endpoint.
    - For endpoints with Batch Operation not enabled, the `--data-raw` option only accepts one data object.

  - If the endpoint contains parameters, specify the parameter values when calling the endpoint.
After calling an endpoint, you can see the response in JSON format. For more information, see Response and Status Codes of Data Service.
Note:
If you have connected your Data App to GitHub with Auto Sync & Deployment enabled, undeploying an endpoint of this Data App will also delete the configuration of this endpoint on GitHub.
To undeploy an endpoint, perform the following steps:
- Navigate to the Data Service page of your project.
- In the left pane, click the name of your target Data App to view its endpoints.
- Locate the endpoint you want to undeploy, and click ... > Undeploy.
- Click Undeploy to confirm the undeployment.
Note:
Before you delete an endpoint, make sure that the endpoint is not online. Otherwise, the endpoint cannot be deleted. To undeploy an endpoint, refer to Undeploy an endpoint.
To delete an endpoint, perform the following steps:
- Navigate to the Data Service page of your project.
- In the left pane, click the name of your target Data App to view its endpoints.
- Click the name of the endpoint you want to delete, and then click ... > Delete in the upper-right corner.
- Click Delete to confirm the deletion.