en: updated docs links to stable instead of version-specific URLs (#393)

Co-authored-by: Ran <huangran@pingcap.com>
Kolbe Kegel and ran-huang authored Jun 28, 2020
1 parent a5e9c99 commit f760ea1

Showing 17 changed files with 25 additions and 25 deletions.
6 changes: 3 additions & 3 deletions en/backup-and-restore-using-helm-charts.md
@@ -22,8 +22,8 @@ For TiDB Operator 1.1 or later versions, it is recommended that you use the back

TiDB in Kubernetes supports two backup strategies using Helm charts:

-* [Full backup](#full-backup) (scheduled or ad-hoc): use [`mydumper`](https://pingcap.com/docs/v3.0/reference/tools/mydumper) to take a logical backup of the TiDB cluster.
-* [Incremental backup](#incremental-backup): use [TiDB Binlog](https://pingcap.com/docs/v3.0/reference/tidb-binlog/overview) to replicate data from the TiDB cluster to another database or execute a real-time backup of the data.
+* [Full backup](#full-backup) (scheduled or ad-hoc): use [`mydumper`](https://pingcap.com/docs/stable/reference/tools/mydumper) to take a logical backup of the TiDB cluster.
+* [Incremental backup](#incremental-backup): use [TiDB Binlog](https://pingcap.com/docs/stable/tidb-binlog/tidb-binlog-overview/) to replicate data from the TiDB cluster to another database or execute a real-time backup of the data.

Currently, TiDB in Kubernetes only supports automatic [restoration](#restore) for full backup taken by `mydumper`. Restoring the incremental backup data by `TiDB Binlog` requires manual operations.

@@ -135,7 +135,7 @@ The `pingcap/tidb-backup` helm chart helps restore a TiDB cluster using backup d
## Incremental backup
-Incremental backup uses [TiDB Binlog](https://pingcap.com/docs/v3.0/reference/tidb-binlog/overview) to collect binlog data from TiDB and provide near real-time backup and replication to downstream platforms.
+Incremental backup uses [TiDB Binlog](https://pingcap.com/docs/stable/tidb-binlog/tidb-binlog-overview/) to collect binlog data from TiDB and provide near real-time backup and replication to downstream platforms.
For the detailed guide of maintaining TiDB Binlog in Kubernetes, refer to [TiDB Binlog](deploy-tidb-binlog.md).
2 changes: 1 addition & 1 deletion en/backup-to-aws-s3-using-br.md
@@ -9,7 +9,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/backup-to-aws-s3-using-br/']

# Back up Data to S3-Compatible Storage Using BR

-This document describes how to back up the data of a TiDB cluster in AWS Kubernetes to the AWS storage using Helm charts. "Backup" in this document refers to full backup (ad-hoc full backup and scheduled full backup). [BR](https://pingcap.com/docs/v3.1/reference/tools/br/br) is used to get the logic backup of the TiDB cluster, and then this backup data is sent to the AWS storage.
+This document describes how to back up the data of a TiDB cluster in AWS Kubernetes to the AWS storage using Helm charts. "Backup" in this document refers to full backup (ad-hoc full backup and scheduled full backup). [BR](https://pingcap.com/docs/stable/br/backup-and-restore-tool/) is used to get the logic backup of the TiDB cluster, and then this backup data is sent to the AWS storage.

The backup method described in this document is implemented using Custom Resource Definition (CRD) in TiDB Operator v1.1 or later versions.

4 changes: 2 additions & 2 deletions en/backup-to-gcs.md
@@ -7,7 +7,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/backup-to-gcs/']

# Back up Data to GCS

-This document describes how to back up the data of the TiDB cluster in Kubernetes to [Google Cloud Storage (GCS)](https://cloud.google.com/storage/docs/). "Backup" in this document refers to full backup (ad-hoc full backup and scheduled full backup). [`mydumper`](https://pingcap.com/docs/v3.0/reference/tools/mydumper) is used to get the logic backup of the TiDB cluster, and then this backup data is sent to the remote GCS.
+This document describes how to back up the data of the TiDB cluster in Kubernetes to [Google Cloud Storage (GCS)](https://cloud.google.com/storage/docs/). "Backup" in this document refers to full backup (ad-hoc full backup and scheduled full backup). [`mydumper`](https://pingcap.com/docs/stable/reference/tools/mydumper) is used to get the logic backup of the TiDB cluster, and then this backup data is sent to the remote GCS.

The backup method described in this document is implemented using CustomResourceDefinition (CRD) in TiDB Operator v1.1 or later versions. For the backup method implemented using Helm Charts, refer to [Back up and Restore TiDB Cluster Data Using Helm Charts](backup-and-restore-using-helm-charts.md).

@@ -243,4 +243,4 @@ From the above example, you can see that the `backupSchedule` configuration cons
+ `.spec.maxBackups`: A backup retention policy, which determines the maximum number of backup items to be retained. When this value is exceeded, the outdated backup items will be deleted. If you set this configuration item to `0`, all backup items are retained.
+ `.spec.maxReservedTime`: A backup retention policy based on time. For example, if you set the value of this configuration to `24h`, only backup items within the recent 24 hours are retained. All backup items out of this time are deleted. For the time format, refer to [`func ParseDuration`](https://golang.org/pkg/time/#ParseDuration). If you have set the maximum number of backup items and the longest retention time of backup items at the same time, the latter setting takes effect.
+ `.spec.schedule`: The time scheduling format of Cron. Refer to [Cron](https://en.wikipedia.org/wiki/Cron) for details.
-+ `.spec.pause`: `false` by default. If this parameter is set to `true`, the scheduled scheduling is paused. In this situation, the backup operation will not be performed even if the scheduling time is reached. During this pause, the backup [Garbage Collection](https://pingcap.com/docs/v3.0/reference/garbage-collection/overview) (GC) runs normally. If you change `true` to `false`, the full backup process is restarted.
++ `.spec.pause`: `false` by default. If this parameter is set to `true`, the scheduled scheduling is paused. In this situation, the backup operation will not be performed even if the scheduling time is reached. During this pause, the backup [Garbage Collection](https://pingcap.com/docs/stable/reference/garbage-collection/overview) (GC) runs normally. If you change `true` to `false`, the full backup process is restarted.
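
A minimal sketch of how the `backupSchedule` fields above might fit together (illustration only, not part of the diff; the names, the namespace, and the abbreviated `backupTemplate` section are hypothetical placeholders):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo-backup-schedule-gcs   # hypothetical name
  namespace: backup-test           # hypothetical namespace
spec:
  # Standard Cron expression: run a full backup every day at 02:00.
  schedule: "0 2 * * *"
  # Keep at most 5 backup items; older ones are deleted.
  maxBackups: 5
  # Time-based retention; when both policies are set, this one takes effect.
  maxReservedTime: "24h"
  # Set to true to pause scheduling; backup GC keeps running while paused.
  pause: false
  backupTemplate:
    # Cluster connection and GCS storage details (bucket, credentials Secret,
    # and so on) go here; see the full examples in backup-to-gcs.md.
    gcs:
      secretName: gcs-secret       # hypothetical Secret name
      bucket: my-backup-bucket     # hypothetical bucket
```
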
4 changes: 2 additions & 2 deletions en/backup-to-s3.md
@@ -7,7 +7,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/backup-to-s3/']

# Back up Data to S3-Compatible Storage Using Mydumper

-This document describes how to back up the data of the TiDB cluster in Kubernetes to the S3-compatible storage. "Backup" in this document refers to full backup (ad-hoc full backup and scheduled full backup). For the underlying implementation, [`mydumper`](https://pingcap.com/docs/v3.0/reference/tools/mydumper) is used to get the logic backup of the TiDB cluster, and then this backup data is sent to the S3-compatible storage.
+This document describes how to back up the data of the TiDB cluster in Kubernetes to the S3-compatible storage. "Backup" in this document refers to full backup (ad-hoc full backup and scheduled full backup). For the underlying implementation, [`mydumper`](https://pingcap.com/docs/stable/mydumper-overview/) is used to get the logic backup of the TiDB cluster, and then this backup data is sent to the S3-compatible storage.

The backup method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator v1.1 or later versions. For the backup method implemented based on Helm Charts, refer to [Back up and Restore TiDB Cluster Data Based on Helm Charts](backup-and-restore-using-helm-charts.md).

@@ -540,4 +540,4 @@ From the examples above, you can see that the `backupSchedule` configuration con
+ `.spec.maxBackups`: A backup retention policy, which determines the maximum number of backup items to be retained. When this value is exceeded, the outdated backup items will be deleted. If you set this configuration item to `0`, all backup items are retained.
+ `.spec.maxReservedTime`: A backup retention policy based on time. For example, if you set the value of this configuration to `24h`, only backup items within the recent 24 hours are retained. All backup items out of this time are deleted. For the time format, refer to [`func ParseDuration`](https://golang.org/pkg/time/#ParseDuration). If you have set the maximum number of backup items and the longest retention time of backup items at the same time, the latter setting takes effect.
+ `.spec.schedule`: The time scheduling format of Cron. Refer to [Cron](https://en.wikipedia.org/wiki/Cron) for details.
-+ `.spec.pause`: `false` by default. If this parameter is set to `true`, the scheduled scheduling is paused. In this situation, the backup operation will not be performed even if the scheduling time is reached. During this pause, the backup [Garbage Collection](https://pingcap.com/docs/v3.0/reference/garbage-collection/overview) (GC) runs normally. If you change `true` to `false`, the full backup process is restarted.
++ `.spec.pause`: `false` by default. If this parameter is set to `true`, the scheduled scheduling is paused. In this situation, the backup operation will not be performed even if the scheduling time is reached. During this pause, the backup [Garbage Collection](https://pingcap.com/docs/stable/garbage-collection-overview/) (GC) runs normally. If you change `true` to `false`, the full backup process is restarted.
2 changes: 1 addition & 1 deletion en/configure-a-tidb-cluster.md
@@ -156,7 +156,7 @@ spec:
cpu: 1
```

-For all the configurable parameters of TiDB, refer to [TiDB Configuration File](https://pingcap.com/docs/v3.1/reference/configuration/tidb-server/configuration-file/).
+For all the configurable parameters of TiDB, refer to [TiDB Configuration File](https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/).

> **Note:**
>
2 changes: 1 addition & 1 deletion en/configure-backup.md
@@ -62,7 +62,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/configure-backup/']

### `restoreOptions`

-- The optional parameter specified to [`loader`](https://pingcap.com/docs/v3.0/reference/tools/loader) used when backing up data
+- The optional parameter specified to [`loader`](https://pingcap.com/docs/stable/reference/tools/loader) used when backing up data
- Default: "-t 16"
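
A minimal sketch of the corresponding `values.yaml` entry (illustration only, not part of the diff):

```yaml
# Extra options passed to loader during restore; the default "-t 16"
# runs 16 concurrent restore threads.
restoreOptions: "-t 16"
```
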

### `gcp.bucket`
2 changes: 1 addition & 1 deletion en/deploy-tidb-binlog.md
@@ -7,7 +7,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/deploy-tidb-binlog/']

# Deploy TiDB Binlog

-This document describes how to maintain [TiDB Binlog](https://pingcap.com/docs/v3.0/reference/tidb-binlog/overview) of a TiDB cluster in Kubernetes.
+This document describes how to maintain [TiDB Binlog](https://pingcap.com/docs/stable/tidb-binlog/tidb-binlog-overview/) of a TiDB cluster in Kubernetes.

## Prerequisites

2 changes: 1 addition & 1 deletion en/enable-tls-for-mysql-client.md
@@ -646,4 +646,4 @@ kubectl get secret -n ${namespace} ${cluster_name}-tidb-client-secret -ojsonpat
mysql -uroot -p -P 4000 -h ${tidb_host} --ssl-cert=client-tls.crt --ssl-key=client-tls.key --ssl-ca=client-ca.crt
```
-Finally, to verify whether TLS is successfully enabled, refer to [checking the current connection](https://pingcap.com/docs/v3.1/how-to/secure/enable-tls-clients/#check-whether-the-current-connection-uses-encryption).
+Finally, to verify whether TLS is successfully enabled, refer to [checking the current connection](https://pingcap.com/docs/stable/enable-tls-between-clients-and-servers/#check-whether-the-current-connection-uses-encryption).
6 changes: 3 additions & 3 deletions en/faq.md
@@ -20,7 +20,7 @@ The default time zone setting for each component container of a TiDB cluster in
* If the cluster is running:

* In the `values.yaml` file of the TiDB cluster, modify the `timezone` setting. For example, you can set it to `timezone: Asia/Shanghai` (see the sketch after this list) and then upgrade the TiDB cluster.
-* Refer to [Time Zone Support](https://pingcap.com/docs/v3.0/how-to/configure/time-zone) to modify TiDB service time zone settings.
+* Refer to [Time Zone Support](https://pingcap.com/docs/stable/how-to/configure/time-zone) to modify TiDB service time zone settings.
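
A minimal sketch of the first option (illustration only, not part of the diff), assuming the top-level `timezone` key of the cluster's `values.yaml`:

```yaml
# Time zone for all TiDB cluster components; apply it with `helm upgrade`.
timezone: Asia/Shanghai
```
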

## Can HPA or VPA be configured on TiDB components?

@@ -47,7 +47,7 @@ In terms of the deployment topology relationship between the TiDB cluster and Ti

TiDB Operator does not yet support automatically orchestrating TiSpark.

-If you want to add the TiSpark component to TiDB in Kubernetes, you must maintain Spark on your own in **the same** Kubernetes cluster. You must ensure that Spark can access the IPs and ports of PD and TiKV instances, and install the TiSpark plugin for Spark. [TiSpark](https://pingcap.com/docs/v3.0/reference/tispark/#deploy-tispark-on-the-existing-spark-cluster) offers a detailed guide for you to install the TiSpark plugin.
+If you want to add the TiSpark component to TiDB in Kubernetes, you must maintain Spark on your own in **the same** Kubernetes cluster. You must ensure that Spark can access the IPs and ports of PD and TiKV instances, and install the TiSpark plugin for Spark. [TiSpark](https://pingcap.com/docs/stable/tispark-overview/#deploy-tispark-on-the-existing-spark-cluster) offers a detailed guide for you to install the TiSpark plugin.

To maintain Spark in Kubernetes, refer to [Spark on Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html).

@@ -95,4 +95,4 @@ Three possible reasons:

```shell
kubectl get deployment --all-namespaces |grep tidb-scheduler
-```
+```
2 changes: 1 addition & 1 deletion en/monitor-a-tidb-cluster.md
@@ -20,7 +20,7 @@ You can monitor the TiDB cluster with Prometheus and Grafana. When you create a

The monitoring data is not persisted by default. To persist the monitoring data, you can set `spec.persistent` to `true` in `TidbMonitor`. When you enable this option, you also need to set `spec.storageClassName` to a storage class that exists in the current cluster and supports persisting data; otherwise, there is a risk of data loss.
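
A minimal sketch of a `TidbMonitor` with persistence enabled (illustration only, not part of the diff; apart from `spec.persistent` and `spec.storageClassName`, the field names and values are assumptions):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: basic-monitor         # hypothetical name
spec:
  clusters:
    - name: basic             # hypothetical TidbCluster to monitor
  # Persist monitoring data instead of keeping it only inside the Pod.
  persistent: true
  # Must name a storage class that exists in the cluster and supports
  # persistent volumes; otherwise monitoring data may be lost.
  storageClassName: standard  # hypothetical storage class
  storage: 10Gi               # requested volume size (assumed field)
  # Prometheus, Grafana, and other component settings are omitted here.
```
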

-For configuration details on the monitoring system, refer to [TiDB Cluster Monitoring](https://pingcap.com/docs/v3.0/how-to/monitor/monitor-a-cluster).
+For configuration details on the monitoring system, refer to [TiDB Cluster Monitoring](https://pingcap.com/docs/stable/how-to/monitor/monitor-a-cluster).

### View the monitoring dashboard

2 changes: 1 addition & 1 deletion en/prerequisites.md
@@ -156,7 +156,7 @@ The TiDB cluster uses many file descriptors by default. The `ulimit` of the work

## Hardware and deployment requirements

-+ 64-bit generic hardware server platform in the Intel x86-64 architecture and 10 Gigabit NIC (network interface card), which are the same as the server requirements for deploying a TiDB cluster using binary. For details, refer to [Hardware recommendations](https://pingcap.com/docs/v3.0/how-to/deploy/hardware-recommendations/).
++ 64-bit generic hardware server platform in the Intel x86-64 architecture and 10 Gigabit NIC (network interface card), which are the same as the server requirements for deploying a TiDB cluster using binary. For details, refer to [Hardware recommendations](https://pingcap.com/docs/stable/how-to/deploy/hardware-recommendations/).

+ The server's disk, memory and CPU choices depend on the capacity planning of the cluster and the deployment topology. It is recommended to deploy three master nodes, three etcd nodes, and several worker nodes to ensure high availability of the online Kubernetes cluster.
2 changes: 1 addition & 1 deletion en/restore-data-using-tidb-lightning.md
@@ -237,7 +237,7 @@ If the lightning fails to restore data, follow the steps below to do manual inte
4. Get the startup script by running `cat /proc/1/cmdline`.
-5. Diagnose the lightning following the [troubleshooting guide](https://pingcap.com/docs/v3.0/how-to/troubleshoot/tidb-lightning#tidb-lightning-troubleshooting).
+5. Diagnose the lightning following the [troubleshooting guide](https://pingcap.com/docs/stable/troubleshoot-tidb-lightning/).
## Destroy TiDB Lightning
2 changes: 1 addition & 1 deletion en/restore-from-aws-s3-using-br.md
@@ -7,7 +7,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/restore-from-aws-s3-using-br/']

# Restore Data from S3-Compatible Storage Using BR

-This document describes how to restore the TiDB cluster data backed up using TiDB Operator in Kubernetes. [BR](https://pingcap.com/docs/v3.1/reference/tools/br/br) is used to perform the restoration.
+This document describes how to restore the TiDB cluster data backed up using TiDB Operator in Kubernetes. [BR](https://pingcap.com/docs/stable/br/backup-and-restore-tool/) is used to perform the restoration.

The restoration method described in this document is implemented based on Custom Resource Definition (CRD) in TiDB Operator v1.1 or later versions.

2 changes: 1 addition & 1 deletion en/tidb-cluster-chart-config.md
@@ -50,7 +50,7 @@ This document describes the configuration of the tidb-cluster chart.
| `pd.nodeSelector` | `pd.nodeSelector` makes sure that PD Pods are dispatched only to nodes that have this key-value pair as a label. For details, refer to [nodeselector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) | `{}` |
| `pd.tolerations` | `pd.tolerations` applies to PD Pods, allowing PD Pods to be dispatched to nodes with specified taints. For details, refer to [taint-and-toleration](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration) | `{}` |
| `pd.annotations` | Add specific `annotations` to PD Pods | `{}` |
-| `tikv.config` | Configuration of TiKV in configuration file format. To view the default TiKV configuration file, refer to [`tikv/etc/config-template.toml`](https://github.com/tikv/tikv/blob/master/etc/config-template.toml) and select the tag of the corresponding TiKV version. To view the descriptions of parameters, refer to [TiKV configuration description](https://pingcap.com/docs/v3.0/reference/configuration/tikv-server/configuration-file/) and select the corresponding document version. Here you only need to modify the configuration according to the format in the configuration file.<br/><br/>The following two configuration items need to be configured explicitly:<br/><br/>`[storage.block-cache]`<br/>&nbsp;&nbsp;`shared = true`<br/>&nbsp;&nbsp;`capacity = "1GB"`<br/>Recommended: set `capacity` to 50% of `tikv.resources.limits.memory`<br/><br/>`[readpool.coprocessor]`<br/>&nbsp;&nbsp;`high-concurrency = 8`<br/>&nbsp;&nbsp;`normal-concurrency = 8`<br/>&nbsp;&nbsp;`low-concurrency = 8`<br/>Recommended: set to 80% of `tikv.resources.limits.cpu` | If the TiDB Operator version <= v1.0.0-beta.3, the default value is<br/>`nil`<br/>If the TiDB Operator version > v1.0.0-beta.3, the default value is<br/>`log-level = "info"`<br/>For example:<br/>&nbsp;&nbsp;`config:` \|<br/>&nbsp;&nbsp;&nbsp;&nbsp;`log-level = "info"` |
+| `tikv.config` | Configuration of TiKV in configuration file format. To view the default TiKV configuration file, refer to [`tikv/etc/config-template.toml`](https://github.com/tikv/tikv/blob/master/etc/config-template.toml) and select the tag of the corresponding TiKV version. To view the descriptions of parameters, refer to [TiKV configuration description](https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/) and select the corresponding document version. Here you only need to modify the configuration according to the format in the configuration file.<br/><br/>The following two configuration items need to be configured explicitly:<br/><br/>`[storage.block-cache]`<br/>&nbsp;&nbsp;`shared = true`<br/>&nbsp;&nbsp;`capacity = "1GB"`<br/>Recommended: set `capacity` to 50% of `tikv.resources.limits.memory`<br/><br/>`[readpool.coprocessor]`<br/>&nbsp;&nbsp;`high-concurrency = 8`<br/>&nbsp;&nbsp;`normal-concurrency = 8`<br/>&nbsp;&nbsp;`low-concurrency = 8`<br/>Recommended: set to 80% of `tikv.resources.limits.cpu` | If the TiDB Operator version <= v1.0.0-beta.3, the default value is<br/>`nil`<br/>If the TiDB Operator version > v1.0.0-beta.3, the default value is<br/>`log-level = "info"`<br/>For example:<br/>&nbsp;&nbsp;`config:` \|<br/>&nbsp;&nbsp;&nbsp;&nbsp;`log-level = "info"` |
| `tikv.replicas` | Number of Pods in TiKV | `3` |
| `tikv.image` | Image of TiKV | `pingcap/tikv:v3.0.0-rc.1` |
| `tikv.imagePullPolicy` | Pull strategy for TiKV image| `IfNotPresent` |
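
A minimal `values.yaml` sketch of the two `tikv.config` items that the table above says must be set explicitly (illustration only, not part of the diff):

```yaml
tikv:
  config: |
    log-level = "info"

    [storage.block-cache]
    shared = true
    # Recommended: about 50% of tikv.resources.limits.memory
    capacity = "1GB"

    [readpool.coprocessor]
    # Recommended: about 80% of tikv.resources.limits.cpu
    high-concurrency = 8
    normal-concurrency = 8
    low-concurrency = 8
```
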