From 943442c5b4a06fa13df8b52df82782350ed1c476 Mon Sep 17 00:00:00 2001 From: Joyinqin Date: Sat, 16 Jan 2021 10:56:54 +0800 Subject: [PATCH 01/14] en: Deploy a TiDB Cluster across Multiple Kubernetes clusters --- en/TOC.md | 1 + ...tidb-cluster-across-multiple-kubernetes.md | 663 ++++++++++++++++++ ...tidb-cluster-across-multiple-kubernetes.md | 36 +- 3 files changed, 682 insertions(+), 18 deletions(-) create mode 100644 en/deploy-tidb-cluster-across-multiple-kubernetes.md diff --git a/en/TOC.md b/en/TOC.md index b8601976e9..63769a8387 100644 --- a/en/TOC.md +++ b/en/TOC.md @@ -23,6 +23,7 @@ - [Deploy TiDB Cluster](deploy-on-general-kubernetes.md) - [Initialize TiDB Cluster](initialize-a-cluster.md) - [Access TiDB Cluster](access-tidb.md) + - [Deploy a TiDB Cluster across Multiple Kubernetes Clusters](deploy-tidb-cluster-across-multiple-kubernetes.md) - [Deploy Heterogeneous Cluster](deploy-heterogeneous-tidb-cluster.md) - [Deploy TiFlash](deploy-tiflash.md) - [Deploy TiCDC](deploy-ticdc.md) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md new file mode 100644 index 0000000000..e0956b0f2c --- /dev/null +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -0,0 +1,663 @@ +--- +title: Deploy a TiDB Cluster across Multiple Kubernetes Clusters +summary: Learn how to deploy a TiDB cluster across multiple Kubernetes clusters. +--- + +# Deploy a TiDB Cluster across Multiple Kubernetes Clusters + +To deploy a TiDB cluster across multiple Kubernetes clusters refers to deploying **one** TiDB cluster on multiple network-interconnected Kubernetes clusters. Each component of the cluster is distributed on multiple Kubernetes clusters to achieve disaster recovery among Kubernetes clusters. The interconnected network of Kubernetes cluster means that Pod IP can be accessed in any cluster and between clusters, and Pod FQDN records can be parsed in any cluster and between clusters. + +## Prerequisites + +You need to configure the Kubernetes network and DNS so that the Kubernetes cluster meets the following conditions: + +- The TiDB components on each Kubernetes cluster can access the Pod IP of all TiDB components in and between clusters.各 Kubernetes. +- The TiDB components on each Kubernetes cluster can parse the Pod FQDN of all TiDB components in and between clusters. + +## Supported scenarios + +Currently supported scenarios: + +- Newly deployed a TiDB cluster across multiple Kubernetes clusters. +- Deploy new clusters that enable this feature on other Kubernetes clusters and join the clusters that also enable this feature. + +Experimental supported scenarios: + +- For clusters with existing data that disable this feature, change to enable this feature. If you need to use it in a production environment, it is recommended to complete this requirement through data migration. + +Unsupported scenarios: + +- Two interconnected existing data clusters. This scenario should be completed through data migration. + +## Deploy a cluster across multiple Kubernetes clusters + +If you deploy a TiDB cluster across multiple Kubernetes clusters, by default, you have already deployed Kubernetes clusters required for this scenario, and then perform the following deployment on this basis. + +The following takes the deployment of two clusters as an example. Cluster one is the initial cluster. Create it according to the configuration given below. After cluster one is running normally, create cluster two according to the configuration given below. 
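Note that the two clusters live in two different Kubernetes clusters, so the `kubectl` commands in the following sections are expected to run against different contexts. A minimal sketch of switching contexts is shown below; the context names `kubectl-context-1` and `kubectl-context-2` are examples only and are not created by this document:

{{< copyable "shell-regular" >}}

```bash
# List the contexts configured for kubectl and pick the ones that point
# at the two Kubernetes clusters used in this document.
kubectl config get-contexts

# Run the "initial cluster" steps against the first Kubernetes cluster.
kubectl config use-context kubectl-context-1

# Run the "join" steps against the second Kubernetes cluster.
kubectl config use-context kubectl-context-2
```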
After creating and deploying clusters, two clusters run normally. + +### Deploy the initial cluster + +Set the following environment variables according to the actual situation. You need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual use, where `cluster1_name` is the cluster name of cluster one, and `cluster1_cluster_domain` is the [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction) of cluster one, and `cluster1_namespace` is the namespace of cluster one. + +{{< copyable "shell-regular" >}} + +```bash + +cluster1_name="cluster1" +cluster1_cluster_domain="cluster1.com" +cluster1_namespace="pingcap" +``` + +Run the following command: + +{{< copyable "shell-regular" >}} + +```bash +cat << EOF | kubectl apply -f -n ${cluster1_namespace} - +apiVersion: pingcap.com/v1alpha1 +kind: TidbCluster +metadata: + name: "${cluster1_name}" +spec: + version: v4.0.9 + timezone: UTC + pvReclaimPolicy: Delete + enableDynamicConfiguration: true + configUpdateStrategy: RollingUpdate + clusterDomain: "${cluster1_cluster_domain}" + discovery: {} + pd: + baseImage: pingcap/pd + replicas: 1 + requests: + storage: "10Gi" + config: {} + tikv: + baseImage: pingcap/tikv + replicas: 1 + requests: + storage: "10Gi" + config: {} + tidb: + baseImage: pingcap/tidb + replicas: 1 + service: + type: ClusterIP + config: {} +EOF +``` + +### Deploy the new cluster to join the initial cluster + +You can wait for the cluster one to complete the deployment, then create cluster two. In actual situation, cluster two can join any existing cluster in multiple clusters. + +Refer to the following example and fill in the relevant information such as `Name`, `Cluster Domain`, and `Namespace` of cluster one and cluster two according to the actual situation: + +{{< copyable "shell-regular" >}} + +```bash +cluster1_name="cluster1" +cluster1_cluster_domain="cluster1.com" +cluster1_namespace="pingcap" +cluster2_name="cluster2" +cluster2_cluster_domain="cluster2.com" +cluster2_namespace="pingcap" +``` + +Run the following command: + +{{< copyable "shell-regular" >}} + +```bash +cat << EOF | kubectl apply -f -n ${cluster2_namespace} - +apiVersion: pingcap.com/v1alpha1 +kind: TidbCluster +metadata: + name: "${cluster2_name}" +spec: + version: v4.0.9 + timezone: UTC + pvReclaimPolicy: Delete + enableDynamicConfiguration: true + configUpdateStrategy: RollingUpdate + clusterDomain: "${cluster2_cluster_domain}" + cluster: + name: "${cluster1_name}" + namespace: "${cluster1_namespace}" + clusterDomain: "${cluster1_clusterdomain}" + discovery: {} + pd: + baseImage: pingcap/pd + replicas: 1 + requests: + storage: "10Gi" + config: {} + tikv: + baseImage: pingcap/tikv + replicas: 1 + requests: + storage: "10Gi" + config: {} + tidb: + baseImage: pingcap/tidb + replicas: 1 + service: + type: ClusterIP + config: {} +EOF +``` + +## Deploy enabling TLS between TiDB components across multiple Kubernetes clusters + +You can follow the steps below to enable TLS between TiDB components across multiple Kubernetes clusters. + +### Issue the root certificate + +#### Issue the root certificate using `cfssl` + +If you use `cfssl`, the CA certificate issue process is no different from the general issue process. You need to save the CA certificate created for the first time, and use this CA certificate when issuing certificates for TiDB components later. 
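The following is only a rough sketch of that one-time CA creation with `cfssl` (the document linked below is the authoritative reference); the file names and the `names` fields are examples:

{{< copyable "shell-regular" >}}

```bash
# Create the CA signing request. The CN and names here are placeholders.
cat << EOF > ca-csr.json
{
  "CN": "TiDB",
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "US",
      "L": "CA",
      "ST": "San Francisco"
    }
  ]
}
EOF

# Generate ca.pem and ca-key.pem, and keep both files in a safe place:
# every Kubernetes cluster reuses this same CA when signing component certificates.
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```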
When creating a component certificate in a cluster, you do not need to create a CA certificate again and only need to complete step one to four in the [Enabling TLS between TiDB components](enable-tls-between-components.md#using-cfssl) once to complete the issuance of the CA certificate. You need to start from step five for the issue of certificates between other cluster components. + +#### Use the cert-manager system to issue a root certificate + +If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certificate` in the initial cluster, and export the `CA Secret` to other new clusters that want to join. Other clusters only need to create component certificates to issue `Issuer` (Refers to the Issuer named ${cluster_name}-tidb-issuer in the [TLS document](enable-tls-between-components.md#using-cert-manager +)). Use this CA to configure `Issuer`, the detailed process is as follows: + +1. Create a `CA Issuer` and a `CA Certificate` in the initial cluster. + + Set the following environment variables according to the actual situation: + + {{< copyable "shell-regular" >}} + + ```bash + cluster_name="cluster1" + namespace="pingcap" + ``` + + Run the following command: + + {{< copyable "shell-regular" >}} + + ```bash + cat <}} + + ```bash + kubectl get secret cluster1-ca-secret -n ${namespace} -o yaml > ca.yaml + ``` + + Delete irrelevant information in the `Secret YAML` file. The `YAML` file after deletion is as follows, where the information in `data` has been omitted: + + ```yaml + apiVersion: v1 + data: + ca.crt: LS0...LQo= + tls.crt: LS0t....LQo= + tls.key: LS0t...tCg== + kind: Secret + metadata: + name: cluster1-ca-secret + type: kubernetes.io/tls + ``` + +3. Import the exported CA to other clusters. + + You need to configure the `namespace` so that related components can access the CA certificate: + + {{< copyable "shell-regular" >}} + + ```bash + kubectl apply -f ca.yaml -n ${namespace} + ·``` + +4. Create a component certificate in the initial cluster and the new cluster to issue `Issuer` using this CA. + + 1. Create a certificate issuing `Issuer` between TiDB components in the initial cluster. + + Set the following environment variables according to the actual situation: + + {{< copyable "shell-regular" >}} + + ```bash + cluster_name="cluster1" + namespace="pingcap" + ca_secret_name="cluster1-ca-secret" + ``` + + Run the following command: + + {{< copyable "shell-regular" >}} + + ```bash + cat << EOF | kubectl apply -f - + apiVersion: cert-manager.io/v1alpha2 + kind: Issuer + metadata: + name: ${cluster_name}-tidb-issuer + namespace: ${namespace} + spec: + ca: + secretName: ${ca_secret_name} + EOF + ``` + + 2. Create a certificate issuing `Issuer` between TiDB components in the new cluster. + + Set the following environment variables according to the actual situation. Among them, `ca_secret_name` needs to point to the `Secret` that you just imported to store the `CA`. 
You can use the `cluster_name` and `namespace` in the following operations: + + {{< copyable "shell-regular" >}} + ```bash + cluster_name="cluster2" + namespace="pingcap" + ca_secret_name="cluster1-ca-secret" + ``` + + Run the following command: + + {{< copyable "shell-regular" >}} + + ```bash + cat << EOF | kubectl apply -f - + apiVersion: cert-manager.io/v1alpha2 + kind: Issuer + metadata: + name: ${cluster_name}-tidb-issuer + namespace: ${namespace} + spec: + ca: + secretName: ${ca_secret_name} + EOF + ``` + +### Issue certificates for the TiDB components of each Kubernetes cluster + +You need to issue a component certificate for each TiDB component on the Kubernetes cluster. When issuing a component certificate, you need to add an authorization record ending with `.${cluster_domain}` to the hosts, for example, `${cluster_name}-pd.${namespace}.svc.${cluster_domain}`. + +#### Use the cfssl system to issue certificates for TiDB components + +If you use `cfssl`, take the certificate used to create the PD component as an example, the `pd-server.json` file is as follows. + +Set the following environment variables according to the actual situation. + +{{< copyable "shell-regular" >}} + +```bash +cluster_name=cluster2 +cluster_domain=cluster2.com +namespace=pingcap +``` + +You can create the `pd-server.json` by the following command: + +{{< copyable "shell-regular" >}} + +```bash +cat << EOF > pd-server.json +{ + "CN": "TiDB", + "hosts": [ + "127.0.0.1", + "::1", + "${cluster_name}-pd", + "${cluster_name}-pd.${namespace}", + "${cluster_name}-pd.${namespace}.svc", + "${cluster_name}-pd.${namespace}.svc.${cluster_domain}", + "${cluster_name}-pd-peer", + "${cluster_name}-pd-peer.${namespace}", + "${cluster_name}-pd-peer.${namespace}.svc", + "${cluster_name}-pd-peer.${namespace}.svc.${cluster_domain}", + "*.${cluster_name}-pd-peer", + "*.${cluster_name}-pd-peer.${namespace}", + "*.${cluster_name}-pd-peer.${namespace}.svc", + "*.${cluster_name}-pd-peer.${namespace}.svc.${cluster_domain}" + ], + "key": { + "algo": "ecdsa", + "size": 256 + }, + "names": [ + { + "C": "US", + "L": "CA", + "ST": "San Francisco" + } + ] +} +EOF +``` + +#### Use the cert-manager system to issue certificates for TiDB components + +If you use `cert-manager`, take the certificate used to create the PD component as an example, `Certifcates` is shown below. + +Set the following environment variables according to the actual situation. 
+ +{{< copyable "shell-regular" >}} + +```bash +cluster_name="cluster2" +namespace="pingcap" +cluster_domain="cluster2.com" +``` + +Run the following command: + +{{< copyable "shell-regular" >}} + +```bash +cat << EOF | kubectl apply -f - +apiVersion: cert-manager.io/v1alpha2 +kind: Certificate +metadata: + name: ${cluster_name}-pd-cluster-secret + namespace: ${namespace} +spec: + secretName: ${cluster_name}-pd-cluster-secret + duration: 8760h # 365d + renewBefore: 360h # 15d + organization: + - PingCAP + commonName: "TiDB" + usages: + - server auth + - client auth + dnsNames: + - "${cluster_name}-pd" + - "${cluster_name}-pd.${namespace}" + - "${cluster_name}-pd.${namespace}.svc" + - "${cluster_name}-pd.${namespace}.svc.${cluster_domain}" + - "${cluster_name}-pd-peer" + - "${cluster_name}-pd-peer.${namespace}" + - "${cluster_name}-pd-peer.${namespace}.svc" + - "${cluster_name}-pd-peer.${namespace}.svc.${cluster_domain}" + - "*.${cluster_name}-pd-peer" + - "*.${cluster_name}-pd-peer.${namespace}" + - "*.${cluster_name}-pd-peer.${namespace}.svc" + - "*.${cluster_name}-pd-peer.${namespace}.svc.${cluster_domain}" + ipAddresses: + - 127.0.0.1 + - ::1 + issuerRef: + name: ${cluster_name}-tidb-issuer + kind: Issuer + group: cert-manager.io +EOF +``` + +You need to refer to the TLS related documents, issue the corresponding certificates for the components, and create the `Secret` in the corresponding Kubernetes cluster. + +For other TLS related information, refer to the following documents: + +- [Enable TLS between TiDB Components](enable-tls-between-components.md) +- [Enable TLS for the MySQL Client](enable-tls-for-mysql-client.md) + +### Deploy the initial cluster + +To deploy and initialize the cluster, use the following command. In actual use, you need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual situation, where `cluster1_name` is the cluster name of cluster one, `cluster1_cluster_domain` is the `Cluster Domain` of cluster one, and `cluster1_namespace` is the namespace of cluster one. The following `YAML` file enables the TLS feature, and each component starts to verify the certificates issued by the `CN` for the `CA` of `TiDB` by configuring the `cert-allowed-cn`. + +Set the following environment variables according to the actual situation. + +{{< copyable "shell-regular" >}} + +```bash +cluster1_name="cluster1" +cluster1_cluster_domain="cluster1.com" +cluster1_namespace="pingcap" + +Run the following command: + +cat << EOF | kubectl apply -f -n ${cluster1_namespace} - +apiVersion: pingcap.com/v1alpha1 +kind: TidbCluster +metadata: + name: "${cluster1_name}" +spec: + version: v4.0.9 + timezone: UTC + tlsCluster: + enabled: true + pvReclaimPolicy: Delete + enableDynamicConfiguration: true + configUpdateStrategy: RollingUpdate + clusterDomain: "${cluster1_cluster_domain}" + discovery: {} + pd: + baseImage: pingcap/pd + replicas: 1 + requests: + storage: "10Gi" + config: + security: + cert-allowed-cn: + - TiDB + tikv: + baseImage: pingcap/tikv + replicas: 1 + requests: + storage: "10Gi" + config: + security: + cert-allowed-cn: + - TiDB + tidb: + baseImage: pingcap/tidb + replicas: 1 + service: + type: ClusterIP + tlsClient: + enabled: true + config: + security: + cert-allowed-cn: + - TiDB +EOF +``` + +### Deploy a new cluster to join the initial cluster + +You can wait for the cluster one to complete the deployment. After completing the deployment, you can create cluster two. The related command are as follows. 
In actual use, cluster one might not the initial cluster. You can specify any cluster in multiple clusters to join. + +Set the following environment variables according to the actual situation: + +{{< copyable "shell-regular" >}} + +```bash +cluster1_name="cluster1" +cluster1_cluster_domain="cluster1.com" +cluster1_namespace="pingcap" +cluster2_name="cluster2" +cluster2_cluster_domain="cluster2.com" +cluster2_namespace="pingcap" +``` + +Run the following command: + +{{< copyable "shell-regular" >}} + +```bash +cat << EOF | kubectl apply -f -n ${cluster2_namespace} - +apiVersion: pingcap.com/v1alpha1 +kind: TidbCluster +metadata: + name: "${cluster2_name}" +spec: + version: v4.0.9 + timezone: UTC + tlsCluster: + enabled: true + pvReclaimPolicy: Delete + enableDynamicConfiguration: true + configUpdateStrategy: RollingUpdate + clusterDomain: "${cluster2_cluster_domain}" + cluster: + name: "${cluster1_name}" + namespace: "${cluster1_namespace}" + clusterDomain: "${cluster1_clusterdomain}" + discovery: {} + pd: + baseImage: pingcap/pd + replicas: 1 + requests: + storage: "10Gi" + config: + security: + cert-allowed-cn: + - TiDB + tikv: + baseImage: pingcap/tikv + replicas: 1 + requests: + storage: "10Gi" + config: + security: + cert-allowed-cn: + - TiDB + tidb: + baseImage: pingcap/tidb + replicas: 1 + service: + type: ClusterIP + tlsClient: + enabled: true + config: + security: + cert-allowed-cn: + - TiDB +EOF +``` + +## Exit and recycle clusters that already joined + +When you need to make a cluster exit from the joined TiDB cluster deployed across Kubernetes and reclaim resources, you can achieve the above requirements through the scaling out. In this scenario, some requirements of scaling in need to be met. The restrictions are as follows: + +- After scaling in, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three. + +Take the cluster two created in the above document as an example. First, set the number of copies of PD, TiKV, TiDB to `0`. If you enable other components such as TiFlash, TiCDC, Pump, etc., set the number of these copies to `0`: + +{{< copyable "shell-regular" >}} + +```bash +kubectl patch tc cluster2 --type merge -p '{"spec":{"pd":{"replicas":0},"tikv":{"replicas":0},"tidb":{"replicas":0}}}' +``` + +Wait for the status of cluster two to become `Ready`, and scale out related components to `0` copy: + +{{< copyable "shell-regular" >}} + +```bash +kubectl get pods -l app.kubernetes.io/instance=cluster2 -n pingcap +``` + +The Pod list is displayed as `No resources found.`. At this time, Pods have all been scaled out, and cluster two has exited the cluster. Check the cluster status of cluster two: +Pod 列表显示为 `No resources found.`,此时 Pod 已经被全部缩容,集群 2 已经退出集群,查看集群 2 的集群状态: + +{{< copyable "shell-regular" >}} + +```bash +kubectl get tc cluster2 +``` + +The result shows that cluster two is in the `Ready` state. At this time, you can delete the object and reclaim related resources. + +{{< copyable "shell-regular" >}} + +```bash +kubectl delete tc cluster2 +``` + +Through the above steps, you can complete exit and resources reclaim of the joined clusters. + +## Enable the existing data cluster across multiple Kubernetes cluster feature as the initial TiDB cluster 已有数据集群开启跨多个 Kubernetes 集群功能并作为 TiDB 集群的初始集群 + +> **Warning:** +> +> Currently, this is an experimental feature and might cause data loss. Please use it carefully. + +1. 
Update `.spec.clusterDomain` configuration: + + Configure the following parameters according to the `clusterDomain` in your Kubernetes cluster information: + + > **Warning:** + > + > Currently, you need to configure `clusterDomain` with correct information. After modifying the configuration, you can not modify it again. + + {{< copyable "shell-regular" >}} + + ```bash + kubectl patch tidbcluster cluster1 --type merge -p '{"spec":{"clusterDomain":"cluster1.com"}}' + ``` + + After completing the modification, the TiDB cluster performs the rolling update. + +2. Update the `PeerURL` information of PD: + + After completing the rolling update, you need to use `port-forward` to expose PD's API interface, and use API interface of PD to update `PeerURL` of PD. + + 1. Use `port-forward` to expose API interface of PD: + + {{< copyable "shell-regular" >}} + + ```bash + kubectl port-forward pods/cluster1-pd-0 2380:2380 2379:2379 -n pingcap + ``` + + 2. Access `PD API` to obtain `members` information. Note that after using `port-forward`, the terminal is occupied. You need to perform the following operations in another terminal: + + {{< copyable "shell-regular" >}} + + ```bash + curl http://127.0.0.1:2379/v2/members + ``` + + > **Note:** + > + > If the cluster enables TLS, you need to configure a certificate when using the curl command. For example: + > + > `curl --cacert /var/lib/pd-tls/ca.crt --cert /var/lib/pd-tls/tls.crt --key /var/lib/pd-tls/tls.key https://127.0.0.1:2379/v2/members` + + After running the command, the output is as follows: + + ```output + {"members":[{"id":"6ed0312dc663b885","name":"cluster1-pd-0.cluster1-pd-peer.pingcap.svc.cluster1.com","peerURLs":["http://cluster1-pd-0.cluster1-pd-peer.pingcap.svc:2380"],"clientURLs":["http://cluster1-pd-0.cluster1-pd-peer.pingcap.svc.cluster1.com:2379"]},{"id":"bd9acd3d57e24a32","name":"cluster1-pd-1.cluster1-pd-peer.pingcap.svc.cluster1.com","peerURLs":["http://cluster1-pd-1.cluster1-pd-peer.pingcap.svc:2380"],"clientURLs":["http://cluster1-pd-1.cluster1-pd-peer.pingcap.svc.cluster1.com:2379"]},{"id":"e04e42cccef60246","name":"cluster1-pd-2.cluster1-pd-peer.pingcap.svc.cluster1.com","peerURLs":["http://cluster1-pd-2.cluster1-pd-peer.pingcap.svc:2380"],"clientURLs":["http://cluster1-pd-2.cluster1-pd-peer.pingcap.svc.cluster1.com:2379"]}]} + ``` + + 3. Record the `id` of each PD instance, and use the `id` to update the `peerURL` of each member in turn: + + {{< copyable "shell-regular" >}} + + ```bash + member_ID="6ed0312dc663b885" + member_peer_url="http://cluster1-pd-0.cluster1-pd-peer.pingcap.svc.cluster1.com:2380" + curl http://127.0.0.1:2379/v2/members/${member_ID} -XPUT \ + -H "Content-Type: application/json" -d '{"peerURLs":["${member_peer_url}"]}' + ``` + +For more examples and development information, refer to [`multi-cluster`](https://github.com/pingcap/tidb-operator/tree/master/examples/multi-cluster). 
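If the cluster has several PD members, repeating the update command for each member by hand is error-prone. The following is a minimal sketch of the same update done in a loop. It assumes a cluster without TLS, that `jq` is available, and that the member names returned by the API are the FQDNs shown in the output above. Note that the JSON body is built with double quotes so that the shell actually expands `${member_peer_url}`:

{{< copyable "shell-regular" >}}

```bash
# Update every PD member's peerURL to the FQDN form that contains the cluster domain,
# reusing the member names returned by the members API.
curl -s http://127.0.0.1:2379/v2/members | jq -r '.members[] | .id + " " + .name' |
while read -r member_id member_name; do
    member_peer_url="http://${member_name}:2380"
    curl "http://127.0.0.1:2379/v2/members/${member_id}" -XPUT \
        -H "Content-Type: application/json" -d "{\"peerURLs\":[\"${member_peer_url}\"]}"
done

# Query the members API again and confirm that every peerURL now contains
# the cluster domain (for example, cluster1.com).
curl -s http://127.0.0.1:2379/v2/members
```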
\ No newline at end of file diff --git a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md index 2cfecffc35..8d4fd4b9d5 100644 --- a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -9,7 +9,7 @@ summary: 本文档介绍如何实现跨多个 Kubernetes 集群部署 TiDB 集 ## 前置条件 -您需要配置 Kubernetes 的网络和 DNS,使得 Kubernetes 集群满足以下条件: +需要配置 Kubernetes 的网络和 DNS,使得 Kubernetes 集群满足以下条件: - 各 Kubernetes 集群上的 TiDB 组件有能力访问集群内和集群间所有 TiDB 组件的 Pod IP。 - 各 Kubernetes 集群上的 TiDB 组件有能力解析集群内和集群间所有 TiDB 组件的 Pod FQDN。 @@ -31,13 +31,13 @@ summary: 本文档介绍如何实现跨多个 Kubernetes 集群部署 TiDB 集 ## 跨多个 Kubernetes 集群部署集群 -部署跨多个 Kubernetes 集群的 TiDB 集群,默认您已部署好此场景所需要的 Kubernetes 集群,在此基础上进行下面的部署工作。 +部署跨多个 Kubernetes 集群的 TiDB 集群,默认你已部署好此场景所需要的 Kubernetes 集群,在此基础上进行下面的部署工作。 下面以部署两个集群为例进行介绍,其中集群 1 为初始集群,按照下面给出的配置进行创建,集群 1 正常运行后,按照下面给出配置创建集群 2,等集群完成创建和部署工作后,两个集群正常运行。 ### 部署初始集群 -根据实际情况设置以下环境变量,实际使用中需要根据您的实际情况设置 `cluster1_name` 和 `cluster1_cluster_domain` 变量的内容,其中 `cluster1_name` 为集群 1 的集群名称,`cluster1_cluster_domain` 为集群 1 的 [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction), `cluster1_namespace` 为集群 1 的命名空间。 +根据实际情况设置以下环境变量,实际使用中需要根据你的实际情况设置 `cluster1_name` 和 `cluster1_cluster_domain` 变量的内容,其中 `cluster1_name` 为集群 1 的集群名称,`cluster1_cluster_domain` 为集群 1 的 [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction), `cluster1_namespace` 为集群 1 的命名空间。 {{< copyable "shell-regular" >}} @@ -90,7 +90,7 @@ EOF 等待集群 1 完成部署后,创建集群 2。在实际使用中,集群 2 可以加入多集群内的任意一个已有集群。 -您可以参考下面的范例,根据实际情况设置填入集群 1 和集群 2 的 `Name`、`Cluster Domain`、`Namespace` 等相关信息: +可以参考下面的范例,根据实际情况设置填入集群 1 和集群 2 的 `Name`、`Cluster Domain`、`Namespace` 等相关信息: {{< copyable "shell-regular" >}} @@ -149,17 +149,17 @@ EOF ## 跨多个 Kubernetes 集群部署开启组件间 TLS 的 TiDB 集群 -您可以按照以下步骤为跨多个 Kubernetes 集群部署的 TiDB 集群开启组件间 TLS。 +可以按照以下步骤为跨多个 Kubernetes 集群部署的 TiDB 集群开启组件间 TLS。 ### 签发根证书 #### 使用 cfssl 系统签发根证书 -如果您使用 `cfssl`,签发 CA 证书的过程与一般签发过程没有差别,您需要保存好第一次创建的 CA 证书,并且在后面为 TiDB 组件签发证书时都使用这个 CA 证书,即在为其他集群创建组件证书时,不需要再次创建 CA 证书,您只需要完成一次[为 TiDB 组件间开启 TLS](enable-tls-between-components.md#使用-cfssl-系统颁发证书) 文档中 1 ~ 4 步操作,完成 CA 证书签发,为其他集群组件间证书签发操作从第 5 步开始即可。 +如果你使用 `cfssl`,签发 CA 证书的过程与一般签发过程没有差别,需要保存好第一次创建的 CA 证书,并且在后面为 TiDB 组件签发证书时都使用这个 CA 证书,即在为其他集群创建组件证书时,不需要再次创建 CA 证书,你只需要完成一次[为 TiDB 组件间开启 TLS](enable-tls-between-components.md#使用-cfssl-系统颁发证书) 文档中 1 ~ 4 步操作,完成 CA 证书签发,为其他集群组件间证书签发操作从第 5 步开始即可。 #### 使用 cert-manager 系统签发根证书 -如果您使用 `cert-manager`,只需要在初始集群创建 `CA Issuer` 和创建 `CA Certificate`,并导出 `CA Secret` 给其他准备加入的新集群,其他集群只需要创建组件证书签发 `Issuer`(在 [TLS 文档](enable-tls-between-components.md#使用-cert-manager-系统颁发证书)中指名字为 `${cluster_name}-tidb-issuer` 的 `Issuer`),配置 `Issuer` 使用该 CA,具体过程如下: +如果你使用 `cert-manager`,只需要在初始集群创建 `CA Issuer` 和创建 `CA Certificate`,并导出 `CA Secret` 给其他准备加入的新集群,其他集群只需要创建组件证书签发 `Issuer`(在 [TLS 文档](enable-tls-between-components.md#使用-cert-manager-系统颁发证书)中指名字为 `${cluster_name}-tidb-issuer` 的 `Issuer`),配置 `Issuer` 使用该 CA,具体过程如下: 1. 在初始集群上创建 `CA Issuer` 和创建 `CA Certificate`。 @@ -229,7 +229,7 @@ EOF 3. 将导出的 CA 导入到其他集群。 - 您需要配置这里的 `namespace` 使得相关组件可以访问到 CA 证书: + 你需要配置这里的 `namespace` 使得相关组件可以访问到 CA 证书: {{< copyable "shell-regular" >}} @@ -270,7 +270,7 @@ EOF 2. 
在新集群上,创建组件间证书签发 `Issuer`。 - 根据实际情况设置以下环境变量,其中 `ca_secret_name` 需要指向您刚才导入的存放 `CA` 的 `Secret`,`cluster_name` 和 `namespace` 在下面的操作中需要用到: + 根据实际情况设置以下环境变量,其中 `ca_secret_name` 需要指向你刚才导入的存放 `CA` 的 `Secret`,`cluster_name` 和 `namespace` 在下面的操作中需要用到: {{< copyable "shell-regular" >}} @@ -299,7 +299,7 @@ EOF ### 为各个 Kubernetes 集群的 TiDB 组件签发证书 -您需要为每个 Kubernetes 集群上的 TiDB 组件都签发组件证书。在签发组件证书时,需要在 hosts 中加上以 `.${cluster_domain}` 结尾的授权记录, 例如 `${cluster_name}-pd.${namespace}.svc.${cluster_domain}`。 +你需要为每个 Kubernetes 集群上的 TiDB 组件都签发组件证书。在签发组件证书时,需要在 hosts 中加上以 `.${cluster_domain}` 结尾的授权记录, 例如 `${cluster_name}-pd.${namespace}.svc.${cluster_domain}`。 #### 使用 cfssl 系统为 TiDB 组件签发证书 @@ -412,7 +412,7 @@ spec: EOF ``` -您需要参考 TLS 相关文档,为组件签发对应的证书,并在相应 Kubernetes 集群中创建 Secret。 +需要参考 TLS 相关文档,为组件签发对应的证书,并在相应 Kubernetes 集群中创建 Secret。 其他 TLS 相关信息,可参考以下文档: @@ -421,7 +421,7 @@ EOF ### 部署初始集群 -通过如下命令部署初始化集群,实际使用中需要根据您的实际情况设置 `cluster1_name` 和 `cluster1_cluster_domain` 变量的内容,其中 `cluster1_name` 为集群 1 的集群名称,`cluster1_cluster_domain` 为集群 1 的 `Cluster Domain`,`cluster1_namespace` 为集群 1 的命名空间。下面的 YAML 文件已经开启了 TLS 功能,并通过配置 `cert-allowed-cn`,使得各个组件开始验证由 `CN` 为 `TiDB` 的 `CA` 所签发的证书。 +通过如下命令部署初始化集群,实际使用中需要根据你的实际情况设置 `cluster1_name` 和 `cluster1_cluster_domain` 变量的内容,其中 `cluster1_name` 为集群 1 的集群名称,`cluster1_cluster_domain` 为集群 1 的 `Cluster Domain`,`cluster1_namespace` 为集群 1 的命名空间。下面的 YAML 文件已经开启了 TLS 功能,并通过配置 `cert-allowed-cn`,使得各个组件开始验证由 `CN` 为 `TiDB` 的 `CA` 所签发的证书。 根据实际情况设置以下环境变量: @@ -557,11 +557,11 @@ EOF ## 退出和回收已加入集群 -当您需要让一个集群从所加入的跨 Kubernetes 部署的 TiDB 集群退出并回收资源时,可以通过缩容流程来实现上述需求。在此场景下,需要满足缩容的一些限制,限制如下: +当你需要让一个集群从所加入的跨 Kubernetes 部署的 TiDB 集群退出并回收资源时,可以通过缩容流程来实现上述需求。在此场景下,需要满足缩容的一些限制,限制如下: - 缩容后,集群中 TiKV 副本数应大于 PD 中设置的 `max-replicas` 数量,默认情况下 TiKV 副本数量需要大于 3。 -我们以上面文档创建的集群 2 为例,先将 PD、TiKV、TiDB 的副本数设置为 0,如果开启了 TiFlash、TiCDC、Pump 等其他组件,也请一并将其副本数设为 0: +以上面文档创建的集群 2 为例,先将 PD、TiKV、TiDB 的副本数设置为 0,如果开启了 TiFlash、TiCDC、Pump 等其他组件,也请一并将其副本数设为 0: {{< copyable "shell-regular" >}} @@ -585,7 +585,7 @@ Pod 列表显示为 `No resources found.`,此时 Pod 已经被全部缩容, kubectl get tc cluster2 ``` -结果显示集群 2 为 `Ready` 状态,此时我们可以删除该对象,对相关资源进行回收。 +结果显示集群 2 为 `Ready` 状态,此时可以删除该对象,对相关资源进行回收。 {{< copyable "shell-regular" >}} @@ -593,7 +593,7 @@ kubectl get tc cluster2 kubectl delete tc cluster2 ``` -通过上述步骤,我们完成了已加入集群的退出和资源回收。 +通过上述步骤完成已加入集群的退出和资源回收。 ## 已有数据集群开启跨多个 Kubernetes 集群功能并作为 TiDB 集群的初始集群 @@ -603,11 +603,11 @@ kubectl delete tc cluster2 1. 更新 `.spec.clusterDomain` 配置: - 根据您的 Kubernetes 集群信息中的 `clusterDomain` 配置下面的参数: + 根据你的 Kubernetes 集群信息中的 `clusterDomain` 配置下面的参数: > **警告:** > - > 目前需要您使用正确的信息配置 `clusterDomain`,配置修改后无法再次修改。 + > 目前需要你使用正确的信息配置 `clusterDomain`,配置修改后无法再次修改。 {{< copyable "shell-regular" >}} From 45a35ec58b2d57af9278ae83cb142f80d22378bb Mon Sep 17 00:00:00 2001 From: Joyinqin Date: Sat, 16 Jan 2021 16:27:37 +0800 Subject: [PATCH 02/14] refine the language --- ...tidb-cluster-across-multiple-kubernetes.md | 35 +++++++++---------- 1 file changed, 17 insertions(+), 18 deletions(-) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index e0956b0f2c..113a1c83f8 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -5,13 +5,13 @@ summary: Learn how to deploy a TiDB cluster across multiple Kubernetes clusters. 
# Deploy a TiDB Cluster across Multiple Kubernetes Clusters -To deploy a TiDB cluster across multiple Kubernetes clusters refers to deploying **one** TiDB cluster on multiple network-interconnected Kubernetes clusters. Each component of the cluster is distributed on multiple Kubernetes clusters to achieve disaster recovery among Kubernetes clusters. The interconnected network of Kubernetes cluster means that Pod IP can be accessed in any cluster and between clusters, and Pod FQDN records can be parsed in any cluster and between clusters. +To deploy a TiDB cluster across multiple Kubernetes clusters refers to deploying **one** TiDB cluster on multiple network-interconnected Kubernetes clusters. Each component of the cluster is distributed on multiple Kubernetes clusters to achieve disaster recovery among Kubernetes clusters. The interconnected network of Kubernetes clusters means that Pod IP can be accessed in any cluster and between clusters, and Pod FQDN records can be parsed in any cluster and between clusters. ## Prerequisites You need to configure the Kubernetes network and DNS so that the Kubernetes cluster meets the following conditions: -- The TiDB components on each Kubernetes cluster can access the Pod IP of all TiDB components in and between clusters.各 Kubernetes. +- The TiDB components on each Kubernetes cluster can access the Pod IP of all TiDB components in and between clusters. - The TiDB components on each Kubernetes cluster can parse the Pod FQDN of all TiDB components in and between clusters. ## Supported scenarios @@ -147,17 +147,17 @@ spec: EOF ``` -## Deploy enabling TLS between TiDB components across multiple Kubernetes clusters +## Deploy the TiDB cluster with TLS enabled between TiDB components across multiple Kubernetes clusters -You can follow the steps below to enable TLS between TiDB components across multiple Kubernetes clusters. +You can follow the steps below to enable TLS between TiDB components for TiDB clusters deployed across multiple Kubernetes clusters. ### Issue the root certificate #### Issue the root certificate using `cfssl` -If you use `cfssl`, the CA certificate issue process is no different from the general issue process. You need to save the CA certificate created for the first time, and use this CA certificate when issuing certificates for TiDB components later. When creating a component certificate in a cluster, you do not need to create a CA certificate again and only need to complete step one to four in the [Enabling TLS between TiDB components](enable-tls-between-components.md#using-cfssl) once to complete the issuance of the CA certificate. You need to start from step five for the issue of certificates between other cluster components. +If you use `cfssl`, the CA certificate issue process is no different from the general issue process. You need to save the CA certificate created for the first time, and use this CA certificate when issuing certificates for TiDB components later. When creating a component certificate in a cluster, you do not need to create a CA certificate again and only need to complete step one to four in the [Enabling TLS between TiDB components](enable-tls-between-components.md#using-cfssl) once to complete the issue of the CA certificate. You need to start from step five for the issue of certificates between other cluster components. 
-#### Use the cert-manager system to issue a root certificate +#### Use the `cert-manager` system to issue a root certificate If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certificate` in the initial cluster, and export the `CA Secret` to other new clusters that want to join. Other clusters only need to create component certificates to issue `Issuer` (Refers to the Issuer named ${cluster_name}-tidb-issuer in the [TLS document](enable-tls-between-components.md#using-cert-manager )). Use this CA to configure `Issuer`, the detailed process is as follows: @@ -271,7 +271,7 @@ If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certi 2. Create a certificate issuing `Issuer` between TiDB components in the new cluster. - Set the following environment variables according to the actual situation. Among them, `ca_secret_name` needs to point to the `Secret` that you just imported to store the `CA`. You can use the `cluster_name` and `namespace` in the following operations: + Set the following environment variables according to the actual situation. Among them, `ca_secret_name` needs to point to the `Secret` that you just import to store the `CA`. You can use the `cluster_name` and `namespace` in the following operations: {{< copyable "shell-regular" >}} ```bash @@ -354,7 +354,7 @@ cat << EOF > pd-server.json EOF ``` -#### Use the cert-manager system to issue certificates for TiDB components +#### Use the `cert-manager` system to issue certificates for TiDB components If you use `cert-manager`, take the certificate used to create the PD component as an example, `Certifcates` is shown below. @@ -412,9 +412,9 @@ spec: EOF ``` -You need to refer to the TLS related documents, issue the corresponding certificates for the components, and create the `Secret` in the corresponding Kubernetes cluster. +You need to refer to the TLS-related documents, issue the corresponding certificates for the components, and create the `Secret` in the corresponding Kubernetes clusters. -For other TLS related information, refer to the following documents: +For other TLS-related information, refer to the following documents: - [Enable TLS between TiDB Components](enable-tls-between-components.md) - [Enable TLS for the MySQL Client](enable-tls-for-mysql-client.md) @@ -554,9 +554,9 @@ spec: EOF ``` -## Exit and recycle clusters that already joined +## Exit and reclaim clusters that already joined -When you need to make a cluster exit from the joined TiDB cluster deployed across Kubernetes and reclaim resources, you can achieve the above requirements through the scaling out. In this scenario, some requirements of scaling in need to be met. The restrictions are as follows: +When you need to make a cluster exit from the joined TiDB cluster deployed across Kubernetes and reclaim resources, you can achieve the above requirements through the scaling in. In this scenario, some requirements of scaling in need to be met. The restrictions are as follows: - After scaling in, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three. @@ -568,7 +568,7 @@ Take the cluster two created in the above document as an example. 
First, set the kubectl patch tc cluster2 --type merge -p '{"spec":{"pd":{"replicas":0},"tikv":{"replicas":0},"tidb":{"replicas":0}}}' ``` -Wait for the status of cluster two to become `Ready`, and scale out related components to `0` copy: +Wait for the status of cluster two to become `Ready`, and scale in related components to `0` copy: {{< copyable "shell-regular" >}} @@ -576,8 +576,7 @@ Wait for the status of cluster two to become `Ready`, and scale out related comp kubectl get pods -l app.kubernetes.io/instance=cluster2 -n pingcap ``` -The Pod list is displayed as `No resources found.`. At this time, Pods have all been scaled out, and cluster two has exited the cluster. Check the cluster status of cluster two: -Pod 列表显示为 `No resources found.`,此时 Pod 已经被全部缩容,集群 2 已经退出集群,查看集群 2 的集群状态: +The Pod list is displayed as `No resources found.`. At this time, Pods have all been scaled in, and cluster two exits the cluster. Check the cluster status of cluster two: {{< copyable "shell-regular" >}} @@ -585,7 +584,7 @@ Pod 列表显示为 `No resources found.`,此时 Pod 已经被全部缩容, kubectl get tc cluster2 ``` -The result shows that cluster two is in the `Ready` state. At this time, you can delete the object and reclaim related resources. +The result shows that cluster two is in the `Ready` status. At this time, you can delete the object and reclaim related resources. {{< copyable "shell-regular" >}} @@ -595,7 +594,7 @@ kubectl delete tc cluster2 Through the above steps, you can complete exit and resources reclaim of the joined clusters. -## Enable the existing data cluster across multiple Kubernetes cluster feature as the initial TiDB cluster 已有数据集群开启跨多个 Kubernetes 集群功能并作为 TiDB 集群的初始集群 +## Enable the existing data cluster across multiple Kubernetes cluster feature as the initial TiDB cluster > **Warning:** > @@ -639,7 +638,7 @@ Through the above steps, you can complete exit and resources reclaim of the join > **Note:** > - > If the cluster enables TLS, you need to configure a certificate when using the curl command. For example: + > If the cluster enables TLS, you need to configure the certificate when using the curl command. For example: > > `curl --cacert /var/lib/pd-tls/ca.crt --cert /var/lib/pd-tls/tls.crt --key /var/lib/pd-tls/tls.key https://127.0.0.1:2379/v2/members` From 2512ee13ad8da87f0a803e14488d6d4a75824e4f Mon Sep 17 00:00:00 2001 From: JoyinQ <56883733+Joyinqin@users.noreply.github.com> Date: Mon, 18 Jan 2021 19:32:41 +0800 Subject: [PATCH 03/14] Apply suggestions from code review Co-authored-by: Ran --- ...ploy-tidb-cluster-across-multiple-kubernetes.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index 113a1c83f8..a0b5d009da 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -5,7 +5,7 @@ summary: Learn how to deploy a TiDB cluster across multiple Kubernetes clusters. # Deploy a TiDB Cluster across Multiple Kubernetes Clusters -To deploy a TiDB cluster across multiple Kubernetes clusters refers to deploying **one** TiDB cluster on multiple network-interconnected Kubernetes clusters. Each component of the cluster is distributed on multiple Kubernetes clusters to achieve disaster recovery among Kubernetes clusters. 
The interconnected network of Kubernetes clusters means that Pod IP can be accessed in any cluster and between clusters, and Pod FQDN records can be parsed in any cluster and between clusters. +To deploy a TiDB cluster across multiple Kubernetes clusters refers to deploying **one** TiDB cluster on multiple interconnected Kubernetes clusters. Each component of the cluster is distributed on multiple Kubernetes clusters to achieve disaster recovery among Kubernetes clusters. The interconnected network of Kubernetes clusters means that Pod IP can be accessed in any cluster and between clusters, and Pod FQDN records can be parsed in any cluster and between clusters. ## Prerequisites @@ -18,20 +18,20 @@ You need to configure the Kubernetes network and DNS so that the Kubernetes clus Currently supported scenarios: -- Newly deployed a TiDB cluster across multiple Kubernetes clusters. +- Deploy a new TiDB cluster across multiple Kubernetes clusters. - Deploy new clusters that enable this feature on other Kubernetes clusters and join the clusters that also enable this feature. -Experimental supported scenarios: +Experimentally supported scenarios: -- For clusters with existing data that disable this feature, change to enable this feature. If you need to use it in a production environment, it is recommended to complete this requirement through data migration. +- Enable this feature for a cluster that already has data. If you need to perform this action in a production environment, it is recommended to complete this requirement through data migration. Unsupported scenarios: -- Two interconnected existing data clusters. This scenario should be completed through data migration. +- You cannot interconnect two clusters that already have data. You might perform this action through data migration. ## Deploy a cluster across multiple Kubernetes clusters -If you deploy a TiDB cluster across multiple Kubernetes clusters, by default, you have already deployed Kubernetes clusters required for this scenario, and then perform the following deployment on this basis. +Before you deploy a TiDB cluster across multiple Kubernetes clusters, you need to first deploy the Kubernetes clusters required for this operation. The following deployment assumes that you have completed Kubernetes deployment. The following takes the deployment of two clusters as an example. Cluster one is the initial cluster. Create it according to the configuration given below. After cluster one is running normally, create cluster two according to the configuration given below. After creating and deploying clusters, two clusters run normally. @@ -659,4 +659,4 @@ Through the above steps, you can complete exit and resources reclaim of the join -H "Content-Type: application/json" -d '{"peerURLs":["${member_peer_url}"]}' ``` -For more examples and development information, refer to [`multi-cluster`](https://github.com/pingcap/tidb-operator/tree/master/examples/multi-cluster). \ No newline at end of file +For more examples and development information, refer to [`multi-cluster`](https://github.com/pingcap/tidb-operator/tree/master/examples/multi-cluster). 
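As a quick sanity check of the network prerequisites described at the beginning of this document, you can verify from one Kubernetes cluster that a Pod FQDN in the other cluster resolves. This is only a sketch; the image and FQDN below are examples and need to be adjusted to your environment:

{{< copyable "shell-regular" >}}

```bash
# From the Kubernetes cluster that hosts cluster2, check that a PD Pod FQDN
# of cluster1 can be resolved.
kubectl run dns-check -n pingcap --rm -it --restart=Never --image=busybox:1.33 -- \
    nslookup cluster1-pd-0.cluster1-pd-peer.pingcap.svc.cluster1.com
```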
From f07fa654ebe552a20500b9bb0d18acf21069bf14 Mon Sep 17 00:00:00 2001 From: Joyinqin Date: Tue, 19 Jan 2021 11:34:09 +0800 Subject: [PATCH 04/14] refine the doc --- ...tidb-cluster-across-multiple-kubernetes.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index a0b5d009da..3231bcd124 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -33,11 +33,11 @@ Unsupported scenarios: Before you deploy a TiDB cluster across multiple Kubernetes clusters, you need to first deploy the Kubernetes clusters required for this operation. The following deployment assumes that you have completed Kubernetes deployment. -The following takes the deployment of two clusters as an example. Cluster one is the initial cluster. Create it according to the configuration given below. After cluster one is running normally, create cluster two according to the configuration given below. After creating and deploying clusters, two clusters run normally. +The following takes the deployment of two clusters as an example. Cluster #1 is the initial cluster. Create it according to the configuration given below. After cluster #1 is running normally, create cluster #2 according to the configuration given below. After creating and deploying clusters, two clusters run normally. ### Deploy the initial cluster -Set the following environment variables according to the actual situation. You need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual use, where `cluster1_name` is the cluster name of cluster one, and `cluster1_cluster_domain` is the [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction) of cluster one, and `cluster1_namespace` is the namespace of cluster one. +Set the following environment variables according to the actual situation. You need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual use, where `cluster1_name` is the cluster name of cluster #1, and `cluster1_cluster_domain` is the [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction) of cluster #1, and `cluster1_namespace` is the namespace of cluster #1. {{< copyable "shell-regular" >}} @@ -89,9 +89,9 @@ EOF ### Deploy the new cluster to join the initial cluster -You can wait for the cluster one to complete the deployment, then create cluster two. In actual situation, cluster two can join any existing cluster in multiple clusters. +You can wait for the cluster #1 to complete the deployment, then create cluster #2. In actual situation, cluster #2 can join any existing cluster in multiple clusters. -Refer to the following example and fill in the relevant information such as `Name`, `Cluster Domain`, and `Namespace` of cluster one and cluster two according to the actual situation: +Refer to the following example and fill in the relevant information such as `Name`, `Cluster Domain`, and `Namespace` of cluster #1 and cluster #2 according to the actual situation: {{< copyable "shell-regular" >}} @@ -421,7 +421,7 @@ For other TLS-related information, refer to the following documents: ### Deploy the initial cluster -To deploy and initialize the cluster, use the following command. 
In actual use, you need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual situation, where `cluster1_name` is the cluster name of cluster one, `cluster1_cluster_domain` is the `Cluster Domain` of cluster one, and `cluster1_namespace` is the namespace of cluster one. The following `YAML` file enables the TLS feature, and each component starts to verify the certificates issued by the `CN` for the `CA` of `TiDB` by configuring the `cert-allowed-cn`. +To deploy and initialize the cluster, use the following command. In actual use, you need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual situation, where `cluster1_name` is the cluster name of cluster #1, `cluster1_cluster_domain` is the `Cluster Domain` of cluster #1, and `cluster1_namespace` is the namespace of cluster #1. The following `YAML` file enables the TLS feature, and each component starts to verify the certificates issued by the `CN` for the `CA` of `TiDB` by configuring the `cert-allowed-cn`. Set the following environment variables according to the actual situation. @@ -483,7 +483,7 @@ EOF ### Deploy a new cluster to join the initial cluster -You can wait for the cluster one to complete the deployment. After completing the deployment, you can create cluster two. The related command are as follows. In actual use, cluster one might not the initial cluster. You can specify any cluster in multiple clusters to join. +You can wait for the cluster #1 to complete the deployment. After completing the deployment, you can create cluster #2. The related command are as follows. In actual use, cluster #1 might not the initial cluster. You can specify any cluster in multiple clusters to join. Set the following environment variables according to the actual situation: @@ -560,7 +560,7 @@ When you need to make a cluster exit from the joined TiDB cluster deployed acros - After scaling in, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three. -Take the cluster two created in the above document as an example. First, set the number of copies of PD, TiKV, TiDB to `0`. If you enable other components such as TiFlash, TiCDC, Pump, etc., set the number of these copies to `0`: +Take the cluster #2 created in the above document as an example. First, set the number of copies of PD, TiKV, TiDB to `0`. If you enable other components such as TiFlash, TiCDC, Pump, etc., set the number of these copies to `0`: {{< copyable "shell-regular" >}} @@ -568,7 +568,7 @@ Take the cluster two created in the above document as an example. First, set the kubectl patch tc cluster2 --type merge -p '{"spec":{"pd":{"replicas":0},"tikv":{"replicas":0},"tidb":{"replicas":0}}}' ``` -Wait for the status of cluster two to become `Ready`, and scale in related components to `0` copy: +Wait for the status of cluster #2 to become `Ready`, and scale in related components to `0` copy: {{< copyable "shell-regular" >}} @@ -576,7 +576,7 @@ Wait for the status of cluster two to become `Ready`, and scale in related compo kubectl get pods -l app.kubernetes.io/instance=cluster2 -n pingcap ``` -The Pod list is displayed as `No resources found.`. At this time, Pods have all been scaled in, and cluster two exits the cluster. Check the cluster status of cluster two: +The Pod list is displayed as `No resources found.`. 
At this time, Pods have all been scaled in, and cluster #2 exits the cluster. Check the cluster status of cluster #2: {{< copyable "shell-regular" >}} @@ -584,7 +584,7 @@ The Pod list is displayed as `No resources found.`. At this time, Pods have all kubectl get tc cluster2 ``` -The result shows that cluster two is in the `Ready` status. At this time, you can delete the object and reclaim related resources. +The result shows that cluster #2 is in the `Ready` status. At this time, you can delete the object and reclaim related resources. {{< copyable "shell-regular" >}} From 2da55fb133a3abb91af2838997dc20583f900c78 Mon Sep 17 00:00:00 2001 From: JoyinQ <56883733+Joyinqin@users.noreply.github.com> Date: Thu, 21 Jan 2021 09:30:23 +0800 Subject: [PATCH 05/14] Apply suggestions from code review Co-authored-by: Ran --- ...tidb-cluster-across-multiple-kubernetes.md | 39 +++++++++++-------- 1 file changed, 22 insertions(+), 17 deletions(-) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index 3231bcd124..e2088cd689 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -37,7 +37,7 @@ The following takes the deployment of two clusters as an example. Cluster #1 is ### Deploy the initial cluster -Set the following environment variables according to the actual situation. You need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual use, where `cluster1_name` is the cluster name of cluster #1, and `cluster1_cluster_domain` is the [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction) of cluster #1, and `cluster1_namespace` is the namespace of cluster #1. +Set the following environment variables according to the actual situation. You need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual use. `cluster1_name` is the cluster name of cluster #1, `cluster1_cluster_domain` is the [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction) of cluster #1, and `cluster1_namespace` is the namespace of cluster #1. {{< copyable "shell-regular" >}} @@ -89,7 +89,7 @@ EOF ### Deploy the new cluster to join the initial cluster -You can wait for the cluster #1 to complete the deployment, then create cluster #2. In actual situation, cluster #2 can join any existing cluster in multiple clusters. +You can wait for the cluster #1 to complete the deployment, and then create cluster #2. In the actual situation, cluster #2 can join any existing cluster in multiple clusters. Refer to the following example and fill in the relevant information such as `Name`, `Cluster Domain`, and `Namespace` of cluster #1 and cluster #2 according to the actual situation: @@ -147,20 +147,23 @@ spec: EOF ``` -## Deploy the TiDB cluster with TLS enabled between TiDB components across multiple Kubernetes clusters +## Deploy the TLS-enabled TiDB cluster across multiple Kubernetes clusters You can follow the steps below to enable TLS between TiDB components for TiDB clusters deployed across multiple Kubernetes clusters. ### Issue the root certificate -#### Issue the root certificate using `cfssl` +#### Use `cfssl` -If you use `cfssl`, the CA certificate issue process is no different from the general issue process. 
You need to save the CA certificate created for the first time, and use this CA certificate when issuing certificates for TiDB components later. When creating a component certificate in a cluster, you do not need to create a CA certificate again and only need to complete step one to four in the [Enabling TLS between TiDB components](enable-tls-between-components.md#using-cfssl) once to complete the issue of the CA certificate. You need to start from step five for the issue of certificates between other cluster components. +If you use `cfssl`, the CA certificate issue process is the same as the general issue process. You need to save the CA certificate created for the first time, and use this CA certificate when you issue certificates for TiDB components later. + +In other words, when you create a component certificate in a cluster, you do not need to create a CA certificate again. Complete step 1 ~ 4 in [Enabling TLS between TiDB components](enable-tls-between-components.md#using-cfssl) once to issue the CA certificate. After that, start from step 5 to issue certificates between other cluster components. #### Use the `cert-manager` system to issue a root certificate -If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certificate` in the initial cluster, and export the `CA Secret` to other new clusters that want to join. Other clusters only need to create component certificates to issue `Issuer` (Refers to the Issuer named ${cluster_name}-tidb-issuer in the [TLS document](enable-tls-between-components.md#using-cert-manager -)). Use this CA to configure `Issuer`, the detailed process is as follows: +If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certificate` in the initial cluster, and export the `CA Secret` to other new clusters that want to join. + +For other clusters, you only need to create a component certificate `Issuer` (refers to `${cluster_name}-tidb-issuer` in the [TLS document](enable-tls-between-components.md#using-cert-manager)) and configure the `Issuer` to use the `CA`. The detailed process is as follows: 1. Create a `CA Issuer` and a `CA Certificate` in the initial cluster. @@ -206,7 +209,7 @@ If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certi 2. Export the CA and delete irrelevant information. - First, you need to export the `Secret` that stores the CA. The name of the `Secret` can be obtained from the `.spec.secretName` of the first step `Certificate`. + First, you need to export the `Secret` that stores the CA. The name of the `Secret` can be obtained from `.spec.secretName` of the `Certificate` YAML file in the first step. {{< copyable "shell-regular" >}} @@ -214,7 +217,7 @@ If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certi kubectl get secret cluster1-ca-secret -n ${namespace} -o yaml > ca.yaml ``` - Delete irrelevant information in the `Secret YAML` file. The `YAML` file after deletion is as follows, where the information in `data` has been omitted: + Delete irrelevant information in the Secret YAML file. After the deletion, the YAML file is as follows (the information in `data` is omitted): ```yaml apiVersion: v1 @@ -236,11 +239,11 @@ If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certi ```bash kubectl apply -f ca.yaml -n ${namespace} - ·``` + ``` -4. Create a component certificate in the initial cluster and the new cluster to issue `Issuer` using this CA. +4. 
Create a component certificate `Issuer` in the initial cluster and the new cluster, and configure it to use this CA. - 1. Create a certificate issuing `Issuer` between TiDB components in the initial cluster. + 1. Create an `Issuer` that issues certificates between TiDB components in the initial cluster. Set the following environment variables according to the actual situation: @@ -271,7 +274,7 @@ If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certi 2. Create a certificate issuing `Issuer` between TiDB components in the new cluster. - Set the following environment variables according to the actual situation. Among them, `ca_secret_name` needs to point to the `Secret` that you just import to store the `CA`. You can use the `cluster_name` and `namespace` in the following operations: + Set the following environment variables according to the actual situation. Among them, `ca_secret_name` points to the imported `Secret` that stores the `CA`. You can use the `cluster_name` and `namespace` in the following operations: {{< copyable "shell-regular" >}} ```bash @@ -303,9 +306,9 @@ You need to issue a component certificate for each TiDB component on the Kuberne #### Use the cfssl system to issue certificates for TiDB components -If you use `cfssl`, take the certificate used to create the PD component as an example, the `pd-server.json` file is as follows. +The following example shows how to use `cfssl` to create a certificate used by PD. The `pd-server.json` file is as follows. -Set the following environment variables according to the actual situation. +Set the following environment variables according to the actual situation: {{< copyable "shell-regular" >}} @@ -356,7 +359,7 @@ EOF #### Use the `cert-manager` system to issue certificates for TiDB components -If you use `cert-manager`, take the certificate used to create the PD component as an example, `Certifcates` is shown below. +The following example shows how to use `cert-manager` to create a certificate used by PD. `Certifcates` is shown below. Set the following environment variables according to the actual situation. @@ -421,7 +424,9 @@ For other TLS-related information, refer to the following documents: ### Deploy the initial cluster -To deploy and initialize the cluster, use the following command. In actual use, you need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual situation, where `cluster1_name` is the cluster name of cluster #1, `cluster1_cluster_domain` is the `Cluster Domain` of cluster #1, and `cluster1_namespace` is the namespace of cluster #1. The following `YAML` file enables the TLS feature, and each component starts to verify the certificates issued by the `CN` for the `CA` of `TiDB` by configuring the `cert-allowed-cn`. +This section introduces how to deploy and initialize the cluster. + +In actual use, you need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual situation, where `cluster1_name` is the cluster name of cluster #1, `cluster1_cluster_domain` is the `Cluster Domain` of cluster #1, and `cluster1_namespace` is the namespace of cluster #1. The following `YAML` file enables the TLS feature, and each component starts to verify the certificates issued by the `CN` for the `CA` of `TiDB` by configuring the `cert-allowed-cn`. Set the following environment variables according to the actual situation. 
From 892e3cd132cf0caf433d1b9fbb608c1e7be23004 Mon Sep 17 00:00:00 2001 From: JoyinQ <56883733+Joyinqin@users.noreply.github.com> Date: Thu, 21 Jan 2021 09:32:00 +0800 Subject: [PATCH 06/14] Apply suggestions from code review Co-authored-by: Ran --- en/deploy-tidb-cluster-across-multiple-kubernetes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index e2088cd689..2d74201e34 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -159,7 +159,7 @@ If you use `cfssl`, the CA certificate issue process is the same as the general In other words, when you create a component certificate in a cluster, you do not need to create a CA certificate again. Complete step 1 ~ 4 in [Enabling TLS between TiDB components](enable-tls-between-components.md#using-cfssl) once to issue the CA certificate. After that, start from step 5 to issue certificates between other cluster components. -#### Use the `cert-manager` system to issue a root certificate +#### Use `cert-manager` If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certificate` in the initial cluster, and export the `CA Secret` to other new clusters that want to join. From 97434ece01745db6dcf000360d9e41684d9dfb82 Mon Sep 17 00:00:00 2001 From: Joyinqin Date: Thu, 21 Jan 2021 09:52:14 +0800 Subject: [PATCH 07/14] Update deploy-tidb-cluster-across-multiple-kubernetes.md --- ...tidb-cluster-across-multiple-kubernetes.md | 27 ++++++++++--------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index 2d74201e34..97d09e392a 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -209,17 +209,17 @@ For other clusters, you only need to create a component certificate `Issuer` (re 2. Export the CA and delete irrelevant information. - First, you need to export the `Secret` that stores the CA. The name of the `Secret` can be obtained from `.spec.secretName` of the `Certificate` YAML file in the first step. + First, you need to export the `Secret` that stores the CA. The name of the `Secret` can be obtained from `.spec.secretName` of the `Certificate` YAML file in the first step. - {{< copyable "shell-regular" >}} + {{< copyable "shell-regular" >}} - ```bash - kubectl get secret cluster1-ca-secret -n ${namespace} -o yaml > ca.yaml - ``` + ```bash + kubectl get secret cluster1-ca-secret -n ${namespace} -o yaml > ca.yaml + ``` - Delete irrelevant information in the Secret YAML file. After the deletion, the YAML file is as follows (the information in `data` is omitted): + Delete irrelevant information in the Secret YAML file. After the deletion, the YAML file is as follows (the information in `data` is omitted): - ```yaml + ```yaml apiVersion: v1 data: ca.crt: LS0...LQo= @@ -233,9 +233,9 @@ For other clusters, you only need to create a component certificate `Issuer` (re 3. Import the exported CA to other clusters. 
-    You need to configure the `namespace` so that related components can access the CA certificate:
+    You need to configure the `namespace` so that related components can access the CA certificate:
 
-    {{< copyable "shell-regular" >}}
+    {{< copyable "shell-regular" >}}
 
    ```bash
    kubectl apply -f ca.yaml -n ${namespace}
@@ -272,12 +272,13 @@ For other clusters, you only need to create a component certificate `Issuer` (re
        EOF
        ```
 
-    2. Create a certificate issuing `Issuer` between TiDB components in the new cluster.
+    2. Create an `Issuer` that issues certificates between TiDB components in the new cluster.
 
-        Set the following environment variables according to the actual situation. Among them, `ca_secret_name` points to the imported `Secret` that stores the `CA`. You can use the `cluster_name` and `namespace` in the following operations:
+        Set the following environment variables according to the actual situation. Among them, `ca_secret_name` points to the imported `Secret` that stores the `CA`. You can use the `cluster_name` and `namespace` in the following operations:
 
        {{< copyable "shell-regular" >}}
-        ```bash
+
+        ```bash
        cluster_name="cluster2"
        namespace="pingcap"
        ca_secret_name="cluster1-ca-secret"
@@ -304,7 +305,7 @@ For other clusters, you only need to create a component certificate `Issuer` (re
 
 You need to issue a component certificate for each TiDB component on the Kubernetes cluster. When issuing a component certificate, you need to add an authorization record ending with `.${cluster_domain}` to the hosts, for example, `${cluster_name}-pd.${namespace}.svc.${cluster_domain}`.
 
-#### Use the cfssl system to issue certificates for TiDB components
+#### Use the `cfssl` system to issue certificates for TiDB components
 
 The following example shows how to use `cfssl` to create a certificate used by PD. The `pd-server.json` file is as follows.
 
From 321359051f6830d9bceffc80dfb808a6f86d5d94 Mon Sep 17 00:00:00 2001
From: JoyinQ <56883733+Joyinqin@users.noreply.github.com>
Date: Thu, 21 Jan 2021 16:43:26 +0800
Subject: [PATCH 08/14] Apply suggestions from code review

Co-authored-by: Ran
---
 ...ploy-tidb-cluster-across-multiple-kubernetes.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md
index 97d09e392a..05249b8d36 100644
--- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md
+++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -489,7 +489,7 @@ EOF
 
 ### Deploy a new cluster to join the initial cluster
 
-You can wait for the cluster #1 to complete the deployment. After completing the deployment, you can create cluster #2. The related command are as follows. In actual use, cluster #1 might not the initial cluster. You can specify any cluster in multiple clusters to join.
+You can wait for cluster #1 to complete the deployment. After completing the deployment, you can create cluster #2. The related commands are as follows. In actual use, cluster #1 might not be the initial cluster. You can specify cluster #2 to join any cluster among the multiple clusters.
Set the following environment variables according to the actual situation:
 
@@ -560,13 +560,13 @@ spec:
 EOF
 ```
 
-## Exit and reclaim clusters that already joined
+## Exit and reclaim clusters that have already joined a cross-Kubernetes cluster
 
-When you need to make a cluster exit from the joined TiDB cluster deployed across Kubernetes and reclaim resources, you can achieve the above requirements through the scaling in. In this scenario, some requirements of scaling in need to be met. The restrictions are as follows:
+When you need to make one cluster exit from the TiDB cluster deployed across Kubernetes clusters and reclaim its resources, you can perform the operation by scaling in that cluster. In this scenario, the following scaling-in requirements must be met.
 
-- After scaling in, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three.
+- After scaling in the cluster, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three.
 
-Take the cluster #2 created in the above document as an example. First, set the number of copies of PD, TiKV, TiDB to `0`. If you enable other components such as TiFlash, TiCDC, Pump, etc., set the number of these copies to `0`:
+Take cluster #2 created in [the last section](#deploy-a-new-cluster-to-join-the-initial-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you enable other components such as TiFlash, TiCDC, and Pump, set the number of these replicas to `0`:
 
 {{< copyable "shell-regular" >}}
 
@@ -574,7 +574,7 @@ Take the cluster #2 created in the above document as an example. First, set the
 kubectl patch tc cluster2 --type merge -p '{"spec":{"pd":{"replicas":0},"tikv":{"replicas":0},"tidb":{"replicas":0}}}'
 ```
 
-Wait for the status of cluster #2 to become `Ready`, and scale in related components to `0` copy:
+Wait for the status of cluster #2 to become `Ready` and the related components to be scaled in to `0` replicas:
 
 {{< copyable "shell-regular" >}}
 
@@ -582,7 +582,7 @@ Wait for the status of cluster #2 to become `Ready`, and scale in related compon
 kubectl get pods -l app.kubernetes.io/instance=cluster2 -n pingcap
 ```
 
-The Pod list is displayed as `No resources found.`. At this time, Pods have all been scaled in, and cluster #2 exits the cluster. Check the cluster status of cluster #2:
+The Pod list shows `No resources found`. At this time, Pods have all been scaled in, and cluster #2 exits the cluster. Check the cluster status of cluster #2:
 
 {{< copyable "shell-regular" >}}
 
From 4d58417513a59d75f0c12dca440e669051bf3f15 Mon Sep 17 00:00:00 2001
From: Joyinqin
Date: Thu, 21 Jan 2021 17:16:41 +0800
Subject: [PATCH 09/14] Update deploy-tidb-cluster-across-multiple-kubernetes.md

---
 en/deploy-tidb-cluster-across-multiple-kubernetes.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md
index 05249b8d36..e43bc390a4 100644
--- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md
+++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -437,9 +437,11 @@ Set the following environment variables according to the actual situation.
cluster1_name="cluster1" cluster1_cluster_domain="cluster1.com" cluster1_namespace="pingcap" +``` Run the following command: +``` cat << EOF | kubectl apply -f -n ${cluster1_namespace} - apiVersion: pingcap.com/v1alpha1 kind: TidbCluster From 92fa00aa3e700f415398bff6e252bce2538a2ccb Mon Sep 17 00:00:00 2001 From: Joyinqin Date: Fri, 28 May 2021 15:07:57 +0800 Subject: [PATCH 10/14] zh: Update info on EKS and dashboard access --- zh/access-dashboard.md | 10 +++++++--- zh/deploy-on-aws-eks.md | 10 +++++++++- 2 files changed, 16 insertions(+), 4 deletions(-) diff --git a/zh/access-dashboard.md b/zh/access-dashboard.md index 2d6c41fb93..d1b2722a8f 100644 --- a/zh/access-dashboard.md +++ b/zh/access-dashboard.md @@ -6,7 +6,11 @@ aliases: ['/docs-cn/tidb-in-kubernetes/stable/access-dashboard/','/docs-cn/tidb- # TiDB Dashboard 指南 -TiDB Dashboard 是 TiDB 4.0 专门用来帮助观察与诊断整个 TiDB 集群的可视化面板,你可以在 [TiDB Dashboard](https://docs.pingcap.com/zh/tidb/stable/dashboard-intro) 了解详情。本篇文章将介绍如何在 Kubernetes 环境下访问 TiDB Dashboard。 +> **警告:** +> +> PD 的 `/dashboard` 路径中提供了 TiDB Dashboard。除此以外的其他路径可能没有访问控制。 + +TiDB Dashboard 是从 TiDB 4.0 开始引入的专门用来帮助观察与诊断整个 TiDB 集群的可视化面板,你可以在 [TiDB Dashboard](https://docs.pingcap.com/zh/tidb/stable/dashboard-intro) 了解详情。本篇文章将介绍如何在 Kubernetes 环境下访问 TiDB Dashboard。 > **注意:** > @@ -14,7 +18,7 @@ TiDB Dashboard 是 TiDB 4.0 专门用来帮助观察与诊断整个 TiDB 集群 ## 前置条件 -你需要使用 v1.1.1 版本及以上的 TiDB Operator 以及 4.0.1 版本及以上的 TiDB 集群,才能在 Kubernetes 环境中流畅使用 `Dashboard`。 你需要在 `TidbCluster` 对象文件中通过以下方式开启 `Dashboard` 快捷访问: +你需要使用 v1.1.1 版本及以上的 TiDB Operator 以及 4.0.1 版本及以上的 TiDB 集群,才能在 Kubernetes 环境中流畅使用 `Dashboard`。你需要在 `TidbCluster` 对象文件中通过以下方式开启 `Dashboard` 快捷访问: ```yaml apiVersion: pingcap.com/v1alpha1 @@ -26,7 +30,7 @@ spec: enableDashboardInternalProxy: true ``` -## 快速上手 +## 通过端口转发访问 TiDB Dashboard > **注意:** > diff --git a/zh/deploy-on-aws-eks.md b/zh/deploy-on-aws-eks.md index ed891bc650..8462f3f599 100644 --- a/zh/deploy-on-aws-eks.md +++ b/zh/deploy-on-aws-eks.md @@ -182,6 +182,10 @@ curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/examples 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) +> **注意:** +> +> 默认情况下,`tidb-cluster.yaml` 文件中的配置为 TiDB 的 LoadBalancer 设置了“内部”方案。这意味着 LoadBalancer 只能在 VPC 内部访问,而不能在外部访问。要通过 MySQL 协议访问 TiDB,你需要使用一个堡垒主机或使用 `kubectl port-forward`。如果你想在互联网上公开 TiDB,并且意识到这样做的风险,你可以在 `tidb-cluster.yaml` 文件中将 LoadBalancer 的方案从“内部”改为“面向互联网”。 + 执行以下命令,在 EKS 集群中部署 TidbCluster 和 TidbMonitor CR。 {{< copyable "shell-regular" >}} @@ -302,7 +306,7 @@ MySQL [(none)]> show status; > - [MySQL 8.0 默认认证插件](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_default_authentication_plugin)从 `mysql_native_password` 更新为 `caching_sha2_password`,因此如果使用 MySQL 8.0 客户端访问 TiDB 服务(TiDB 版本 < v4.0.7),并且用户账户有配置密码,需要显示指定 `--default-auth=mysql_native_password` 参数。 > - TiDB(v4.0.2 起)默认会定期收集使用情况信息,并将这些信息分享给 PingCAP 用于改善产品。若要了解所收集的信息详情及如何禁用该行为,请参见 [TiDB 遥测功能使用文档](https://docs.pingcap.com/zh/tidb/stable/telemetry)。 -### 访问 Grafana 监控 +## 访问 Grafana 监控 先获取 Grafana 的 LoadBalancer 域名: @@ -328,6 +332,10 @@ basic-grafana LoadBalancer 10.100.199.42 a806cfe84c12a4831aa3313e792e3eed- > > Grafana 默认用户名和密码均为 admin。 +## 访问 TiDB Dashboard + +如果想要安全地访问 TiDB Dashboard,详情可以参见[访问 TiDB Dashboard](access-dashboard.md)。 + ## 升级 TiDB 集群 要升级 TiDB 集群,可以通过 `kubectl edit tc basic -n tidb-cluster` 命令修改 `spec.version`。 From f0d53b8d889a12e71b3f6cb077009fe7f7aa7aaf Mon Sep 17 00:00:00 2001 From: Joyinqin Date: Fri, 28 May 2021 15:14:31 +0800 Subject: [PATCH 11/14] Update 
deploy-tidb-cluster-across-multiple-kubernetes.md --- en/deploy-tidb-cluster-across-multiple-kubernetes.md | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index e43bc390a4..8f0b9318e4 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -3,9 +3,13 @@ title: Deploy a TiDB Cluster across Multiple Kubernetes Clusters summary: Learn how to deploy a TiDB cluster across multiple Kubernetes clusters. --- +> **Warning:** +> +> This is still an experimental feature. It is **NOT** recommended that you use it in the production environment. + # Deploy a TiDB Cluster across Multiple Kubernetes Clusters -To deploy a TiDB cluster across multiple Kubernetes clusters refers to deploying **one** TiDB cluster on multiple interconnected Kubernetes clusters. Each component of the cluster is distributed on multiple Kubernetes clusters to achieve disaster recovery among Kubernetes clusters. The interconnected network of Kubernetes clusters means that Pod IP can be accessed in any cluster and between clusters, and Pod FQDN records can be parsed in any cluster and between clusters. +To deploy a TiDB cluster across multiple Kubernetes clusters refers to deploying **one** TiDB cluster on multiple interconnected Kubernetes clusters. Each component of the cluster is distributed on multiple Kubernetes clusters to achieve disaster recovery among Kubernetes clusters. The interconnected network of Kubernetes clusters means that Pod IP can be accessed in any cluster and between clusters, and Pod FQDN records can be looked up by querying the DNS service in any cluster and between clusters. 
## Prerequisites @@ -442,7 +446,7 @@ cluster1_namespace="pingcap" Run the following command: ``` -cat << EOF | kubectl apply -f -n ${cluster1_namespace} - +cat << EOF | kubectl apply -n ${cluster1_namespace} -f - apiVersion: pingcap.com/v1alpha1 kind: TidbCluster metadata: @@ -511,7 +515,7 @@ Run the following command: {{< copyable "shell-regular" >}} ```bash -cat << EOF | kubectl apply -f -n ${cluster2_namespace} - +cat << EOF | kubectl apply -n ${cluster2_namespace} -f - apiVersion: pingcap.com/v1alpha1 kind: TidbCluster metadata: From 54de666178736832480c9c39ac2e012e8ac34dcf Mon Sep 17 00:00:00 2001 From: JoyinQ <56883733+Joyinqin@users.noreply.github.com> Date: Fri, 28 May 2021 17:09:06 +0800 Subject: [PATCH 12/14] Apply suggestions from code review Co-authored-by: DanielZhangQD <36026334+DanielZhangQD@users.noreply.github.com> --- zh/deploy-on-aws-eks.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/zh/deploy-on-aws-eks.md b/zh/deploy-on-aws-eks.md index 8462f3f599..b62171eb3e 100644 --- a/zh/deploy-on-aws-eks.md +++ b/zh/deploy-on-aws-eks.md @@ -184,7 +184,7 @@ curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.12/examples > **注意:** > -> 默认情况下,`tidb-cluster.yaml` 文件中的配置为 TiDB 的 LoadBalancer 设置了“内部”方案。这意味着 LoadBalancer 只能在 VPC 内部访问,而不能在外部访问。要通过 MySQL 协议访问 TiDB,你需要使用一个堡垒主机或使用 `kubectl port-forward`。如果你想在互联网上公开 TiDB,并且意识到这样做的风险,你可以在 `tidb-cluster.yaml` 文件中将 LoadBalancer 的方案从“内部”改为“面向互联网”。 +> 默认情况下,`tidb-cluster.yaml` 文件中 TiDB 服务的 LoadBalancer 配置为 "internal"。这意味着 LoadBalancer 只能在 VPC 内部访问,而不能在外部访问。要通过 MySQL 协议访问 TiDB,你需要使用一个堡垒机或使用 `kubectl port-forward`。如果你想在互联网上公开访问 TiDB,并且知晓这样做的风险,你可以在 `tidb-cluster.yaml` 文件中将 LoadBalancer 从 "internal" 改为 "internet-facing"。 执行以下命令,在 EKS 集群中部署 TidbCluster 和 TidbMonitor CR。 From b72f69e9caa6cd22162ec2daca7c6784fc6d70a1 Mon Sep 17 00:00:00 2001 From: JoyinQ <56883733+Joyinqin@users.noreply.github.com> Date: Mon, 31 May 2021 09:06:00 +0800 Subject: [PATCH 13/14] Apply suggestions from code review Co-authored-by: Grace Cai --- zh/access-dashboard.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/zh/access-dashboard.md b/zh/access-dashboard.md index d1b2722a8f..a3c8c535e6 100644 --- a/zh/access-dashboard.md +++ b/zh/access-dashboard.md @@ -8,7 +8,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/stable/access-dashboard/','/docs-cn/tidb- > **警告:** > -> PD 的 `/dashboard` 路径中提供了 TiDB Dashboard。除此以外的其他路径可能没有访问控制。 +> TiDB Dashboard 位于 PD 的 `/dashboard` 路径中。其他路径可能无法访问控制。 TiDB Dashboard 是从 TiDB 4.0 开始引入的专门用来帮助观察与诊断整个 TiDB 集群的可视化面板,你可以在 [TiDB Dashboard](https://docs.pingcap.com/zh/tidb/stable/dashboard-intro) 了解详情。本篇文章将介绍如何在 Kubernetes 环境下访问 TiDB Dashboard。 From 68385bd308b403406deb6890ea7c1288a085e6cb Mon Sep 17 00:00:00 2001 From: Joyinqin Date: Wed, 2 Jun 2021 14:20:32 +0800 Subject: [PATCH 14/14] update --- en/TOC.md | 1 - ...tidb-cluster-across-multiple-kubernetes.md | 674 ------------------ ...tidb-cluster-across-multiple-kubernetes.md | 36 +- 3 files changed, 18 insertions(+), 693 deletions(-) delete mode 100644 en/deploy-tidb-cluster-across-multiple-kubernetes.md diff --git a/en/TOC.md b/en/TOC.md index 63769a8387..b8601976e9 100644 --- a/en/TOC.md +++ b/en/TOC.md @@ -23,7 +23,6 @@ - [Deploy TiDB Cluster](deploy-on-general-kubernetes.md) - [Initialize TiDB Cluster](initialize-a-cluster.md) - [Access TiDB Cluster](access-tidb.md) - - [Deploy a TiDB Cluster across Multiple Kubernetes Clusters](deploy-tidb-cluster-across-multiple-kubernetes.md) - [Deploy Heterogeneous 
Cluster](deploy-heterogeneous-tidb-cluster.md) - [Deploy TiFlash](deploy-tiflash.md) - [Deploy TiCDC](deploy-ticdc.md) diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md deleted file mode 100644 index 8f0b9318e4..0000000000 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ /dev/null @@ -1,674 +0,0 @@ ---- -title: Deploy a TiDB Cluster across Multiple Kubernetes Clusters -summary: Learn how to deploy a TiDB cluster across multiple Kubernetes clusters. ---- - -> **Warning:** -> -> This is still an experimental feature. It is **NOT** recommended that you use it in the production environment. - -# Deploy a TiDB Cluster across Multiple Kubernetes Clusters - -To deploy a TiDB cluster across multiple Kubernetes clusters refers to deploying **one** TiDB cluster on multiple interconnected Kubernetes clusters. Each component of the cluster is distributed on multiple Kubernetes clusters to achieve disaster recovery among Kubernetes clusters. The interconnected network of Kubernetes clusters means that Pod IP can be accessed in any cluster and between clusters, and Pod FQDN records can be looked up by querying the DNS service in any cluster and between clusters. - -## Prerequisites - -You need to configure the Kubernetes network and DNS so that the Kubernetes cluster meets the following conditions: - -- The TiDB components on each Kubernetes cluster can access the Pod IP of all TiDB components in and between clusters. -- The TiDB components on each Kubernetes cluster can parse the Pod FQDN of all TiDB components in and between clusters. - -## Supported scenarios - -Currently supported scenarios: - -- Deploy a new TiDB cluster across multiple Kubernetes clusters. -- Deploy new clusters that enable this feature on other Kubernetes clusters and join the clusters that also enable this feature. - -Experimentally supported scenarios: - -- Enable this feature for a cluster that already has data. If you need to perform this action in a production environment, it is recommended to complete this requirement through data migration. - -Unsupported scenarios: - -- You cannot interconnect two clusters that already have data. You might perform this action through data migration. - -## Deploy a cluster across multiple Kubernetes clusters - -Before you deploy a TiDB cluster across multiple Kubernetes clusters, you need to first deploy the Kubernetes clusters required for this operation. The following deployment assumes that you have completed Kubernetes deployment. - -The following takes the deployment of two clusters as an example. Cluster #1 is the initial cluster. Create it according to the configuration given below. After cluster #1 is running normally, create cluster #2 according to the configuration given below. After creating and deploying clusters, two clusters run normally. - -### Deploy the initial cluster - -Set the following environment variables according to the actual situation. You need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual use. `cluster1_name` is the cluster name of cluster #1, `cluster1_cluster_domain` is the [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction) of cluster #1, and `cluster1_namespace` is the namespace of cluster #1. 
- -{{< copyable "shell-regular" >}} - -```bash - -cluster1_name="cluster1" -cluster1_cluster_domain="cluster1.com" -cluster1_namespace="pingcap" -``` - -Run the following command: - -{{< copyable "shell-regular" >}} - -```bash -cat << EOF | kubectl apply -f -n ${cluster1_namespace} - -apiVersion: pingcap.com/v1alpha1 -kind: TidbCluster -metadata: - name: "${cluster1_name}" -spec: - version: v4.0.9 - timezone: UTC - pvReclaimPolicy: Delete - enableDynamicConfiguration: true - configUpdateStrategy: RollingUpdate - clusterDomain: "${cluster1_cluster_domain}" - discovery: {} - pd: - baseImage: pingcap/pd - replicas: 1 - requests: - storage: "10Gi" - config: {} - tikv: - baseImage: pingcap/tikv - replicas: 1 - requests: - storage: "10Gi" - config: {} - tidb: - baseImage: pingcap/tidb - replicas: 1 - service: - type: ClusterIP - config: {} -EOF -``` - -### Deploy the new cluster to join the initial cluster - -You can wait for the cluster #1 to complete the deployment, and then create cluster #2. In the actual situation, cluster #2 can join any existing cluster in multiple clusters. - -Refer to the following example and fill in the relevant information such as `Name`, `Cluster Domain`, and `Namespace` of cluster #1 and cluster #2 according to the actual situation: - -{{< copyable "shell-regular" >}} - -```bash -cluster1_name="cluster1" -cluster1_cluster_domain="cluster1.com" -cluster1_namespace="pingcap" -cluster2_name="cluster2" -cluster2_cluster_domain="cluster2.com" -cluster2_namespace="pingcap" -``` - -Run the following command: - -{{< copyable "shell-regular" >}} - -```bash -cat << EOF | kubectl apply -f -n ${cluster2_namespace} - -apiVersion: pingcap.com/v1alpha1 -kind: TidbCluster -metadata: - name: "${cluster2_name}" -spec: - version: v4.0.9 - timezone: UTC - pvReclaimPolicy: Delete - enableDynamicConfiguration: true - configUpdateStrategy: RollingUpdate - clusterDomain: "${cluster2_cluster_domain}" - cluster: - name: "${cluster1_name}" - namespace: "${cluster1_namespace}" - clusterDomain: "${cluster1_clusterdomain}" - discovery: {} - pd: - baseImage: pingcap/pd - replicas: 1 - requests: - storage: "10Gi" - config: {} - tikv: - baseImage: pingcap/tikv - replicas: 1 - requests: - storage: "10Gi" - config: {} - tidb: - baseImage: pingcap/tidb - replicas: 1 - service: - type: ClusterIP - config: {} -EOF -``` - -## Deploy the TLS-enabled TiDB cluster across multiple Kubernetes clusters - -You can follow the steps below to enable TLS between TiDB components for TiDB clusters deployed across multiple Kubernetes clusters. - -### Issue the root certificate - -#### Use `cfssl` - -If you use `cfssl`, the CA certificate issue process is the same as the general issue process. You need to save the CA certificate created for the first time, and use this CA certificate when you issue certificates for TiDB components later. - -In other words, when you create a component certificate in a cluster, you do not need to create a CA certificate again. Complete step 1 ~ 4 in [Enabling TLS between TiDB components](enable-tls-between-components.md#using-cfssl) once to issue the CA certificate. After that, start from step 5 to issue certificates between other cluster components. - -#### Use `cert-manager` - -If you use `cert-manager`, you only need to create a `CA Issuer` and a `CA Certificate` in the initial cluster, and export the `CA Secret` to other new clusters that want to join. 
- -For other clusters, you only need to create a component certificate `Issuer` (refers to `${cluster_name}-tidb-issuer` in the [TLS document](enable-tls-between-components.md#using-cert-manager)) and configure the `Issuer` to use the `CA`. The detailed process is as follows: - -1. Create a `CA Issuer` and a `CA Certificate` in the initial cluster. - - Set the following environment variables according to the actual situation: - - {{< copyable "shell-regular" >}} - - ```bash - cluster_name="cluster1" - namespace="pingcap" - ``` - - Run the following command: - - {{< copyable "shell-regular" >}} - - ```bash - cat <}} - - ```bash - kubectl get secret cluster1-ca-secret -n ${namespace} -o yaml > ca.yaml - ``` - - Delete irrelevant information in the Secret YAML file. After the deletion, the YAML file is as follows (the information in `data` is omitted): - - ```yaml - apiVersion: v1 - data: - ca.crt: LS0...LQo= - tls.crt: LS0t....LQo= - tls.key: LS0t...tCg== - kind: Secret - metadata: - name: cluster1-ca-secret - type: kubernetes.io/tls - ``` - -3. Import the exported CA to other clusters. - - You need to configure the `namespace` so that related components can access the CA certificate: - - {{< copyable "shell-regular" >}} - - ```bash - kubectl apply -f ca.yaml -n ${namespace} - ``` - -4. Create a component certificate `Issuer` in the initial cluster and the new cluster, and configure it to use this CA. - - 1. Create an `Issuer` that issues certificates between TiDB components in the initial cluster. - - Set the following environment variables according to the actual situation: - - {{< copyable "shell-regular" >}} - - ```bash - cluster_name="cluster1" - namespace="pingcap" - ca_secret_name="cluster1-ca-secret" - ``` - - Run the following command: - - {{< copyable "shell-regular" >}} - - ```bash - cat << EOF | kubectl apply -f - - apiVersion: cert-manager.io/v1alpha2 - kind: Issuer - metadata: - name: ${cluster_name}-tidb-issuer - namespace: ${namespace} - spec: - ca: - secretName: ${ca_secret_name} - EOF - ``` - - 2. Create an `Issuer` that issues certificates between TiDB components in the new cluster. - - Set the following environment variables according to the actual situation. Among them, `ca_secret_name` points to the imported `Secret` that stores the `CA`. You can use the `cluster_name` and `namespace` in the following operations: - - {{< copyable "shell-regular" >}} - - ```bash - cluster_name="cluster2" - namespace="pingcap" - ca_secret_name="cluster1-ca-secret" - ``` - - Run the following command: - - {{< copyable "shell-regular" >}} - - ```bash - cat << EOF | kubectl apply -f - - apiVersion: cert-manager.io/v1alpha2 - kind: Issuer - metadata: - name: ${cluster_name}-tidb-issuer - namespace: ${namespace} - spec: - ca: - secretName: ${ca_secret_name} - EOF - ``` - -### Issue certificates for the TiDB components of each Kubernetes cluster - -You need to issue a component certificate for each TiDB component on the Kubernetes cluster. When issuing a component certificate, you need to add an authorization record ending with `.${cluster_domain}` to the hosts, for example, `${cluster_name}-pd.${namespace}.svc.${cluster_domain}`. - -#### Use the `cfssl` system to issue certificates for TiDB components - -The following example shows how to use `cfssl` to create a certificate used by PD. The `pd-server.json` file is as follows. 
- -Set the following environment variables according to the actual situation: - -{{< copyable "shell-regular" >}} - -```bash -cluster_name=cluster2 -cluster_domain=cluster2.com -namespace=pingcap -``` - -You can create the `pd-server.json` by the following command: - -{{< copyable "shell-regular" >}} - -```bash -cat << EOF > pd-server.json -{ - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "${cluster_name}-pd", - "${cluster_name}-pd.${namespace}", - "${cluster_name}-pd.${namespace}.svc", - "${cluster_name}-pd.${namespace}.svc.${cluster_domain}", - "${cluster_name}-pd-peer", - "${cluster_name}-pd-peer.${namespace}", - "${cluster_name}-pd-peer.${namespace}.svc", - "${cluster_name}-pd-peer.${namespace}.svc.${cluster_domain}", - "*.${cluster_name}-pd-peer", - "*.${cluster_name}-pd-peer.${namespace}", - "*.${cluster_name}-pd-peer.${namespace}.svc", - "*.${cluster_name}-pd-peer.${namespace}.svc.${cluster_domain}" - ], - "key": { - "algo": "ecdsa", - "size": 256 - }, - "names": [ - { - "C": "US", - "L": "CA", - "ST": "San Francisco" - } - ] -} -EOF -``` - -#### Use the `cert-manager` system to issue certificates for TiDB components - -The following example shows how to use `cert-manager` to create a certificate used by PD. `Certifcates` is shown below. - -Set the following environment variables according to the actual situation. - -{{< copyable "shell-regular" >}} - -```bash -cluster_name="cluster2" -namespace="pingcap" -cluster_domain="cluster2.com" -``` - -Run the following command: - -{{< copyable "shell-regular" >}} - -```bash -cat << EOF | kubectl apply -f - -apiVersion: cert-manager.io/v1alpha2 -kind: Certificate -metadata: - name: ${cluster_name}-pd-cluster-secret - namespace: ${namespace} -spec: - secretName: ${cluster_name}-pd-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - organization: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "${cluster_name}-pd" - - "${cluster_name}-pd.${namespace}" - - "${cluster_name}-pd.${namespace}.svc" - - "${cluster_name}-pd.${namespace}.svc.${cluster_domain}" - - "${cluster_name}-pd-peer" - - "${cluster_name}-pd-peer.${namespace}" - - "${cluster_name}-pd-peer.${namespace}.svc" - - "${cluster_name}-pd-peer.${namespace}.svc.${cluster_domain}" - - "*.${cluster_name}-pd-peer" - - "*.${cluster_name}-pd-peer.${namespace}" - - "*.${cluster_name}-pd-peer.${namespace}.svc" - - "*.${cluster_name}-pd-peer.${namespace}.svc.${cluster_domain}" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io -EOF -``` - -You need to refer to the TLS-related documents, issue the corresponding certificates for the components, and create the `Secret` in the corresponding Kubernetes clusters. - -For other TLS-related information, refer to the following documents: - -- [Enable TLS between TiDB Components](enable-tls-between-components.md) -- [Enable TLS for the MySQL Client](enable-tls-for-mysql-client.md) - -### Deploy the initial cluster - -This section introduces how to deploy and initialize the cluster. - -In actual use, you need to set the contents of the `cluster1_name` and `cluster1_cluster_domain` variables according to your actual situation, where `cluster1_name` is the cluster name of cluster #1, `cluster1_cluster_domain` is the `Cluster Domain` of cluster #1, and `cluster1_namespace` is the namespace of cluster #1. 
The following `YAML` file enables the TLS feature, and each component starts to verify the certificates issued by the `CN` for the `CA` of `TiDB` by configuring the `cert-allowed-cn`. - -Set the following environment variables according to the actual situation. - -{{< copyable "shell-regular" >}} - -```bash -cluster1_name="cluster1" -cluster1_cluster_domain="cluster1.com" -cluster1_namespace="pingcap" -``` - -Run the following command: - -``` -cat << EOF | kubectl apply -n ${cluster1_namespace} -f - -apiVersion: pingcap.com/v1alpha1 -kind: TidbCluster -metadata: - name: "${cluster1_name}" -spec: - version: v4.0.9 - timezone: UTC - tlsCluster: - enabled: true - pvReclaimPolicy: Delete - enableDynamicConfiguration: true - configUpdateStrategy: RollingUpdate - clusterDomain: "${cluster1_cluster_domain}" - discovery: {} - pd: - baseImage: pingcap/pd - replicas: 1 - requests: - storage: "10Gi" - config: - security: - cert-allowed-cn: - - TiDB - tikv: - baseImage: pingcap/tikv - replicas: 1 - requests: - storage: "10Gi" - config: - security: - cert-allowed-cn: - - TiDB - tidb: - baseImage: pingcap/tidb - replicas: 1 - service: - type: ClusterIP - tlsClient: - enabled: true - config: - security: - cert-allowed-cn: - - TiDB -EOF -``` - -### Deploy a new cluster to join the initial cluster - -You can wait for the cluster #1 to complete the deployment. After completing the deployment, you can create cluster #2. The related commands are as follows. In actual use, cluster #1 might not the initial cluster. You can specify cluster #2 to join any cluster in the multiple clusters. - -Set the following environment variables according to the actual situation: - -{{< copyable "shell-regular" >}} - -```bash -cluster1_name="cluster1" -cluster1_cluster_domain="cluster1.com" -cluster1_namespace="pingcap" -cluster2_name="cluster2" -cluster2_cluster_domain="cluster2.com" -cluster2_namespace="pingcap" -``` - -Run the following command: - -{{< copyable "shell-regular" >}} - -```bash -cat << EOF | kubectl apply -n ${cluster2_namespace} -f - -apiVersion: pingcap.com/v1alpha1 -kind: TidbCluster -metadata: - name: "${cluster2_name}" -spec: - version: v4.0.9 - timezone: UTC - tlsCluster: - enabled: true - pvReclaimPolicy: Delete - enableDynamicConfiguration: true - configUpdateStrategy: RollingUpdate - clusterDomain: "${cluster2_cluster_domain}" - cluster: - name: "${cluster1_name}" - namespace: "${cluster1_namespace}" - clusterDomain: "${cluster1_clusterdomain}" - discovery: {} - pd: - baseImage: pingcap/pd - replicas: 1 - requests: - storage: "10Gi" - config: - security: - cert-allowed-cn: - - TiDB - tikv: - baseImage: pingcap/tikv - replicas: 1 - requests: - storage: "10Gi" - config: - security: - cert-allowed-cn: - - TiDB - tidb: - baseImage: pingcap/tidb - replicas: 1 - service: - type: ClusterIP - tlsClient: - enabled: true - config: - security: - cert-allowed-cn: - - TiDB -EOF -``` - -## Exit and reclaim clusters that already join a cross-Kubernetes cluster - -When you need to make a cluster exit from the joined TiDB cluster deployed across Kubernetes and reclaim resources, you can perform the operation by scaling in the cluster. In this scenario, the following requirements of scaling-in need to be met. - -- After scaling in the cluster, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three. 
- -Take the cluster #2 created in [the last section](#deploy-a-new-cluster-to-join-the-initial-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you enable other components such as TiFlash, TiCDC, and Pump, set the number of these replicas to `0`: - -{{< copyable "shell-regular" >}} - -```bash -kubectl patch tc cluster2 --type merge -p '{"spec":{"pd":{"replicas":0},"tikv":{"replicas":0},"tidb":{"replicas":0}}}' -``` - -Wait for the status of cluster #2 to become `Ready`, and scale in related components to `0` replica: - -{{< copyable "shell-regular" >}} - -```bash -kubectl get pods -l app.kubernetes.io/instance=cluster2 -n pingcap -``` - -The Pod list shows `No resources found`. At this time, Pods have all been scaled in, and cluster #2 exits the cluster. Check the cluster status of cluster #2: - -{{< copyable "shell-regular" >}} - -```bash -kubectl get tc cluster2 -``` - -The result shows that cluster #2 is in the `Ready` status. At this time, you can delete the object and reclaim related resources. - -{{< copyable "shell-regular" >}} - -```bash -kubectl delete tc cluster2 -``` - -Through the above steps, you can complete exit and resources reclaim of the joined clusters. - -## Enable the existing data cluster across multiple Kubernetes cluster feature as the initial TiDB cluster - -> **Warning:** -> -> Currently, this is an experimental feature and might cause data loss. Please use it carefully. - -1. Update `.spec.clusterDomain` configuration: - - Configure the following parameters according to the `clusterDomain` in your Kubernetes cluster information: - - > **Warning:** - > - > Currently, you need to configure `clusterDomain` with correct information. After modifying the configuration, you can not modify it again. - - {{< copyable "shell-regular" >}} - - ```bash - kubectl patch tidbcluster cluster1 --type merge -p '{"spec":{"clusterDomain":"cluster1.com"}}' - ``` - - After completing the modification, the TiDB cluster performs the rolling update. - -2. Update the `PeerURL` information of PD: - - After completing the rolling update, you need to use `port-forward` to expose PD's API interface, and use API interface of PD to update `PeerURL` of PD. - - 1. Use `port-forward` to expose API interface of PD: - - {{< copyable "shell-regular" >}} - - ```bash - kubectl port-forward pods/cluster1-pd-0 2380:2380 2379:2379 -n pingcap - ``` - - 2. Access `PD API` to obtain `members` information. Note that after using `port-forward`, the terminal is occupied. You need to perform the following operations in another terminal: - - {{< copyable "shell-regular" >}} - - ```bash - curl http://127.0.0.1:2379/v2/members - ``` - - > **Note:** - > - > If the cluster enables TLS, you need to configure the certificate when using the curl command. 
For example: - > - > `curl --cacert /var/lib/pd-tls/ca.crt --cert /var/lib/pd-tls/tls.crt --key /var/lib/pd-tls/tls.key https://127.0.0.1:2379/v2/members` - - After running the command, the output is as follows: - - ```output - {"members":[{"id":"6ed0312dc663b885","name":"cluster1-pd-0.cluster1-pd-peer.pingcap.svc.cluster1.com","peerURLs":["http://cluster1-pd-0.cluster1-pd-peer.pingcap.svc:2380"],"clientURLs":["http://cluster1-pd-0.cluster1-pd-peer.pingcap.svc.cluster1.com:2379"]},{"id":"bd9acd3d57e24a32","name":"cluster1-pd-1.cluster1-pd-peer.pingcap.svc.cluster1.com","peerURLs":["http://cluster1-pd-1.cluster1-pd-peer.pingcap.svc:2380"],"clientURLs":["http://cluster1-pd-1.cluster1-pd-peer.pingcap.svc.cluster1.com:2379"]},{"id":"e04e42cccef60246","name":"cluster1-pd-2.cluster1-pd-peer.pingcap.svc.cluster1.com","peerURLs":["http://cluster1-pd-2.cluster1-pd-peer.pingcap.svc:2380"],"clientURLs":["http://cluster1-pd-2.cluster1-pd-peer.pingcap.svc.cluster1.com:2379"]}]} - ``` - - 3. Record the `id` of each PD instance, and use the `id` to update the `peerURL` of each member in turn: - - {{< copyable "shell-regular" >}} - - ```bash - member_ID="6ed0312dc663b885" - member_peer_url="http://cluster1-pd-0.cluster1-pd-peer.pingcap.svc.cluster1.com:2380" - curl http://127.0.0.1:2379/v2/members/${member_ID} -XPUT \ - -H "Content-Type: application/json" -d '{"peerURLs":["${member_peer_url}"]}' - ``` - -For more examples and development information, refer to [`multi-cluster`](https://github.com/pingcap/tidb-operator/tree/master/examples/multi-cluster). diff --git a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md index 8d4fd4b9d5..2cfecffc35 100644 --- a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -9,7 +9,7 @@ summary: 本文档介绍如何实现跨多个 Kubernetes 集群部署 TiDB 集 ## 前置条件 -需要配置 Kubernetes 的网络和 DNS,使得 Kubernetes 集群满足以下条件: +您需要配置 Kubernetes 的网络和 DNS,使得 Kubernetes 集群满足以下条件: - 各 Kubernetes 集群上的 TiDB 组件有能力访问集群内和集群间所有 TiDB 组件的 Pod IP。 - 各 Kubernetes 集群上的 TiDB 组件有能力解析集群内和集群间所有 TiDB 组件的 Pod FQDN。 @@ -31,13 +31,13 @@ summary: 本文档介绍如何实现跨多个 Kubernetes 集群部署 TiDB 集 ## 跨多个 Kubernetes 集群部署集群 -部署跨多个 Kubernetes 集群的 TiDB 集群,默认你已部署好此场景所需要的 Kubernetes 集群,在此基础上进行下面的部署工作。 +部署跨多个 Kubernetes 集群的 TiDB 集群,默认您已部署好此场景所需要的 Kubernetes 集群,在此基础上进行下面的部署工作。 下面以部署两个集群为例进行介绍,其中集群 1 为初始集群,按照下面给出的配置进行创建,集群 1 正常运行后,按照下面给出配置创建集群 2,等集群完成创建和部署工作后,两个集群正常运行。 ### 部署初始集群 -根据实际情况设置以下环境变量,实际使用中需要根据你的实际情况设置 `cluster1_name` 和 `cluster1_cluster_domain` 变量的内容,其中 `cluster1_name` 为集群 1 的集群名称,`cluster1_cluster_domain` 为集群 1 的 [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction), `cluster1_namespace` 为集群 1 的命名空间。 +根据实际情况设置以下环境变量,实际使用中需要根据您的实际情况设置 `cluster1_name` 和 `cluster1_cluster_domain` 变量的内容,其中 `cluster1_name` 为集群 1 的集群名称,`cluster1_cluster_domain` 为集群 1 的 [Cluster Domain](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction), `cluster1_namespace` 为集群 1 的命名空间。 {{< copyable "shell-regular" >}} @@ -90,7 +90,7 @@ EOF 等待集群 1 完成部署后,创建集群 2。在实际使用中,集群 2 可以加入多集群内的任意一个已有集群。 -可以参考下面的范例,根据实际情况设置填入集群 1 和集群 2 的 `Name`、`Cluster Domain`、`Namespace` 等相关信息: +您可以参考下面的范例,根据实际情况设置填入集群 1 和集群 2 的 `Name`、`Cluster Domain`、`Namespace` 等相关信息: {{< copyable "shell-regular" >}} @@ -149,17 +149,17 @@ EOF ## 跨多个 Kubernetes 集群部署开启组件间 TLS 的 TiDB 集群 -可以按照以下步骤为跨多个 Kubernetes 集群部署的 TiDB 集群开启组件间 TLS。 +您可以按照以下步骤为跨多个 Kubernetes 集群部署的 TiDB 集群开启组件间 TLS。 ### 签发根证书 #### 使用 cfssl 系统签发根证书 -如果你使用 
`cfssl`,签发 CA 证书的过程与一般签发过程没有差别,需要保存好第一次创建的 CA 证书,并且在后面为 TiDB 组件签发证书时都使用这个 CA 证书,即在为其他集群创建组件证书时,不需要再次创建 CA 证书,你只需要完成一次[为 TiDB 组件间开启 TLS](enable-tls-between-components.md#使用-cfssl-系统颁发证书) 文档中 1 ~ 4 步操作,完成 CA 证书签发,为其他集群组件间证书签发操作从第 5 步开始即可。 +如果您使用 `cfssl`,签发 CA 证书的过程与一般签发过程没有差别,您需要保存好第一次创建的 CA 证书,并且在后面为 TiDB 组件签发证书时都使用这个 CA 证书,即在为其他集群创建组件证书时,不需要再次创建 CA 证书,您只需要完成一次[为 TiDB 组件间开启 TLS](enable-tls-between-components.md#使用-cfssl-系统颁发证书) 文档中 1 ~ 4 步操作,完成 CA 证书签发,为其他集群组件间证书签发操作从第 5 步开始即可。 #### 使用 cert-manager 系统签发根证书 -如果你使用 `cert-manager`,只需要在初始集群创建 `CA Issuer` 和创建 `CA Certificate`,并导出 `CA Secret` 给其他准备加入的新集群,其他集群只需要创建组件证书签发 `Issuer`(在 [TLS 文档](enable-tls-between-components.md#使用-cert-manager-系统颁发证书)中指名字为 `${cluster_name}-tidb-issuer` 的 `Issuer`),配置 `Issuer` 使用该 CA,具体过程如下: +如果您使用 `cert-manager`,只需要在初始集群创建 `CA Issuer` 和创建 `CA Certificate`,并导出 `CA Secret` 给其他准备加入的新集群,其他集群只需要创建组件证书签发 `Issuer`(在 [TLS 文档](enable-tls-between-components.md#使用-cert-manager-系统颁发证书)中指名字为 `${cluster_name}-tidb-issuer` 的 `Issuer`),配置 `Issuer` 使用该 CA,具体过程如下: 1. 在初始集群上创建 `CA Issuer` 和创建 `CA Certificate`。 @@ -229,7 +229,7 @@ EOF 3. 将导出的 CA 导入到其他集群。 - 你需要配置这里的 `namespace` 使得相关组件可以访问到 CA 证书: + 您需要配置这里的 `namespace` 使得相关组件可以访问到 CA 证书: {{< copyable "shell-regular" >}} @@ -270,7 +270,7 @@ EOF 2. 在新集群上,创建组件间证书签发 `Issuer`。 - 根据实际情况设置以下环境变量,其中 `ca_secret_name` 需要指向你刚才导入的存放 `CA` 的 `Secret`,`cluster_name` 和 `namespace` 在下面的操作中需要用到: + 根据实际情况设置以下环境变量,其中 `ca_secret_name` 需要指向您刚才导入的存放 `CA` 的 `Secret`,`cluster_name` 和 `namespace` 在下面的操作中需要用到: {{< copyable "shell-regular" >}} @@ -299,7 +299,7 @@ EOF ### 为各个 Kubernetes 集群的 TiDB 组件签发证书 -你需要为每个 Kubernetes 集群上的 TiDB 组件都签发组件证书。在签发组件证书时,需要在 hosts 中加上以 `.${cluster_domain}` 结尾的授权记录, 例如 `${cluster_name}-pd.${namespace}.svc.${cluster_domain}`。 +您需要为每个 Kubernetes 集群上的 TiDB 组件都签发组件证书。在签发组件证书时,需要在 hosts 中加上以 `.${cluster_domain}` 结尾的授权记录, 例如 `${cluster_name}-pd.${namespace}.svc.${cluster_domain}`。 #### 使用 cfssl 系统为 TiDB 组件签发证书 @@ -412,7 +412,7 @@ spec: EOF ``` -需要参考 TLS 相关文档,为组件签发对应的证书,并在相应 Kubernetes 集群中创建 Secret。 +您需要参考 TLS 相关文档,为组件签发对应的证书,并在相应 Kubernetes 集群中创建 Secret。 其他 TLS 相关信息,可参考以下文档: @@ -421,7 +421,7 @@ EOF ### 部署初始集群 -通过如下命令部署初始化集群,实际使用中需要根据你的实际情况设置 `cluster1_name` 和 `cluster1_cluster_domain` 变量的内容,其中 `cluster1_name` 为集群 1 的集群名称,`cluster1_cluster_domain` 为集群 1 的 `Cluster Domain`,`cluster1_namespace` 为集群 1 的命名空间。下面的 YAML 文件已经开启了 TLS 功能,并通过配置 `cert-allowed-cn`,使得各个组件开始验证由 `CN` 为 `TiDB` 的 `CA` 所签发的证书。 +通过如下命令部署初始化集群,实际使用中需要根据您的实际情况设置 `cluster1_name` 和 `cluster1_cluster_domain` 变量的内容,其中 `cluster1_name` 为集群 1 的集群名称,`cluster1_cluster_domain` 为集群 1 的 `Cluster Domain`,`cluster1_namespace` 为集群 1 的命名空间。下面的 YAML 文件已经开启了 TLS 功能,并通过配置 `cert-allowed-cn`,使得各个组件开始验证由 `CN` 为 `TiDB` 的 `CA` 所签发的证书。 根据实际情况设置以下环境变量: @@ -557,11 +557,11 @@ EOF ## 退出和回收已加入集群 -当你需要让一个集群从所加入的跨 Kubernetes 部署的 TiDB 集群退出并回收资源时,可以通过缩容流程来实现上述需求。在此场景下,需要满足缩容的一些限制,限制如下: +当您需要让一个集群从所加入的跨 Kubernetes 部署的 TiDB 集群退出并回收资源时,可以通过缩容流程来实现上述需求。在此场景下,需要满足缩容的一些限制,限制如下: - 缩容后,集群中 TiKV 副本数应大于 PD 中设置的 `max-replicas` 数量,默认情况下 TiKV 副本数量需要大于 3。 -以上面文档创建的集群 2 为例,先将 PD、TiKV、TiDB 的副本数设置为 0,如果开启了 TiFlash、TiCDC、Pump 等其他组件,也请一并将其副本数设为 0: +我们以上面文档创建的集群 2 为例,先将 PD、TiKV、TiDB 的副本数设置为 0,如果开启了 TiFlash、TiCDC、Pump 等其他组件,也请一并将其副本数设为 0: {{< copyable "shell-regular" >}} @@ -585,7 +585,7 @@ Pod 列表显示为 `No resources found.`,此时 Pod 已经被全部缩容, kubectl get tc cluster2 ``` -结果显示集群 2 为 `Ready` 状态,此时可以删除该对象,对相关资源进行回收。 +结果显示集群 2 为 `Ready` 状态,此时我们可以删除该对象,对相关资源进行回收。 {{< copyable "shell-regular" >}} @@ -593,7 +593,7 @@ kubectl get tc cluster2 kubectl delete tc cluster2 ``` -通过上述步骤完成已加入集群的退出和资源回收。 
+通过上述步骤,我们完成了已加入集群的退出和资源回收。 ## 已有数据集群开启跨多个 Kubernetes 集群功能并作为 TiDB 集群的初始集群 @@ -603,11 +603,11 @@ kubectl delete tc cluster2 1. 更新 `.spec.clusterDomain` 配置: - 根据你的 Kubernetes 集群信息中的 `clusterDomain` 配置下面的参数: + 根据您的 Kubernetes 集群信息中的 `clusterDomain` 配置下面的参数: > **警告:** > - > 目前需要你使用正确的信息配置 `clusterDomain`,配置修改后无法再次修改。 + > 目前需要您使用正确的信息配置 `clusterDomain`,配置修改后无法再次修改。 {{< copyable "shell-regular" >}}
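The manual PD member update shown earlier on this page (querying `/v2/members` with curl and then sending a `PUT` request with the new `peerURL` for each member) is easy to get wrong when repeated by hand. The following is a minimal bash sketch of that loop, offered only as a supplement to the steps above: it assumes `jq` is installed, the PD client port is already forwarded to `127.0.0.1:2379`, TLS between components is not enabled, and the existing peer URLs end in `.svc:2380` as in the sample output shown above.

```bash
# Rewrite each PD member's peerURL so that it carries the cluster domain.
# Assumes the port-forward to PD is active and jq is available.
cluster_domain="cluster1.com"

members=$(curl -s http://127.0.0.1:2379/v2/members)

for id in $(echo "${members}" | jq -r '.members[].id'); do
  old_url=$(echo "${members}" | jq -r ".members[] | select(.id == \"${id}\") | .peerURLs[0]")
  # Insert the cluster domain before the port, e.g. ...svc:2380 -> ...svc.cluster1.com:2380
  new_url=$(echo "${old_url}" | sed "s/\.svc:/.svc.${cluster_domain}:/")
  curl -s "http://127.0.0.1:2379/v2/members/${id}" -X PUT \
    -H "Content-Type: application/json" \
    -d "{\"peerURLs\":[\"${new_url}\"]}"
done
```

After the loop finishes, querying `/v2/members` again should show every `peerURL` carrying the cluster domain.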