automatically set the scheduler K8s version #313

Merged: 4 commits, Mar 15, 2019
2 changes: 1 addition & 1 deletion in charts/tidb-operator/templates/scheduler-deployment.yaml

```diff
@@ -35,7 +35,7 @@ spec:
         - -v={{ .Values.scheduler.logLevel }}
         - -port=10262
       - name: kube-scheduler
-        image: {{ required "scheduler.kubeSchedulerImage is required! Its verison must be the same as your kubernetes cluster version" .Values.scheduler.kubeSchedulerImage }}
+        image: {{ required "scheduler.kubeSchedulerImageName is required" .Values.scheduler.kubeSchedulerImageName }}:{{ .Values.scheduler.kubeSchedulerImageTag | default (split "-" .Capabilities.KubeVersion.GitVersion)._0 }}
         resources:
 {{ toYaml .Values.scheduler.resources | indent 12 }}
         command:
```
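For context, `.Capabilities.KubeVersion.GitVersion` is the version string Helm reads from the target cluster, and on managed offerings it often carries a vendor suffix. The `(split "-" ...)._0` expression keeps only the part before the first `-`, which is what the GKE tutorial previously computed by hand. A rough shell equivalent, with an illustrative GKE-style version string:

```sh
# Roughly what the chart default now does (the version shown is illustrative):
kubectl version --short | awk '/Server/{print $NF}'                             # e.g. v1.12.7-gke.10
kubectl version --short | awk '/Server/{print $NF}' | awk -F '-' '{print $1}'   # v1.12.7
# The scheduler image then resolves to gcr.io/google-containers/hyperkube:v1.12.7
```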
5 changes: 3 additions & 2 deletions in charts/tidb-operator/values.yaml

```diff
@@ -50,5 +50,6 @@ scheduler:
     requests:
       cpu: 80m
       memory: 50Mi
-  # this hyperkube verison must be the same as your kubernetes cluster version
-  # kubeSchedulerImage: gcr.io/google-containers/hyperkube:v1.12.1
+  kubeSchedulerImageName: gcr.io/google-containers/hyperkube
+  # This will default to matching your kubernetes version
+  # kubeSchedulerImageTag:
```
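If the derived default is not appropriate (for example, when no hyperkube tag exists for your exact server version), both values can still be set explicitly. A sketch, reusing the image name and tag from the old commented-out default:

```sh
# Pin the scheduler image instead of relying on the version derived from the cluster:
helm install ./charts/tidb-operator -n tidb-admin --namespace=tidb-admin \
  --set scheduler.kubeSchedulerImageName=gcr.io/google-containers/hyperkube \
  --set scheduler.kubeSchedulerImageTag=v1.12.1
```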
3 changes: 1 addition & 2 deletions in docs/google-kubernetes-tutorial.md

```diff
@@ -93,10 +93,9 @@ When you see `Running`, it's time to hit `Control + C` and proceed to the next s
 
 The first TiDB component we are going to install is the TiDB Operator, using a Helm Chart. TiDB Operator is the management system that works with Kubernetes to bootstrap your TiDB cluster and keep it running. This step assumes you are in the `tidb-operator` working directory:
 
-    KUBE_VERSION=$(kubectl version --short | awk '/Server/{print $NF}' | awk -F '-' '{print $1}') &&
     kubectl apply -f ./manifests/crd.yaml &&
     kubectl apply -f ./manifests/gke-storage.yml &&
-    helm install ./charts/tidb-operator -n tidb-admin --namespace=tidb-admin --set scheduler.kubeSchedulerImage=gcr.io/google-containers/hyperkube:${KUBE_VERSION}
+    helm install ./charts/tidb-operator -n tidb-admin --namespace=tidb-admin
 
 We can watch the operator come up with:
```
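Since the chart now derives the scheduler tag itself, the tutorial no longer needs the `KUBE_VERSION` shell variable. One hypothetical way to confirm the derived tag after installing, assuming the scheduler Deployment is named `tidb-scheduler` (the container name `kube-scheduler` comes from the chart above):

```sh
# Print the kube-scheduler container image; the tag should match the cluster version.
# The Deployment name "tidb-scheduler" is an assumption for illustration.
kubectl -n tidb-admin get deployment tidb-scheduler \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="kube-scheduler")].image}'
```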
1 change: 1 addition & 0 deletions in docs/operation-guide.md

```diff
@@ -115,6 +115,7 @@ $ kubectl get pv -l app.kubernetes.io/namespace=${namespace},app.kubernetes.io/m
 
 > **Note:** the above command will delete the data permanently. Think twice before executing it.
 
+
 ## Monitor
 
 The TiDB cluster is monitored with Prometheus and Grafana. When a TiDB cluster is created, a Prometheus and Grafana pod will be created and configured to scrape and visualize metrics.
```
14 changes: 11 additions & 3 deletions in docs/setup.md

```diff
@@ -4,13 +4,16 @@
 
 Before deploying the TiDB Operator, make sure the following requirements are satisfied:
 
-* Kubernetes v1.10 or later
+* Kubernetes v1.10 or greater
 * [DNS addons](https://kubernetes.io/docs/tasks/access-application-cluster/configure-dns-cluster/)
 * [PersistentVolume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
 * [RBAC](https://kubernetes.io/docs/admin/authorization/rbac) enabled (optional)
-* [Helm](https://helm.sh) v2.8.2 or later
+* [Helm](https://helm.sh) v2.8.2 or greater
+* Kubernetes v1.12 is required for zone-aware persistent volumes.
 
-> **Note:** Though TiDB Operator can use network volume to persist TiDB data, it is highly recommended to set up [local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) for better performance. Because TiDB already replicates data, network volume will add extra replicas which is redundant.
+> **Note:** Although TiDB Operator can use network volumes to persist TiDB data, this is slower due to redundant replication. It is highly recommended to set up [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) for better performance.
 
+> **Note:** Network volumes in a multi-availability-zone setup require Kubernetes v1.12 or greater. We do recommend using network volumes for backups in the tidb-backup chart.
+
 ## Kubernetes
 
@@ -151,3 +154,8 @@ $ helm upgrade tidb-operator charts/tidb-operator
 When a new version of tidb-operator comes out, simply updating the `operatorImage` in values.yaml and running the above command should be enough. For safety, though, you should get the new charts from the tidb-operator repo, merge the old values.yaml with the new values.yaml, and then upgrade as above.
 
 TiDB Operator is only needed for TiDB cluster maintenance: once a TiDB cluster is up and running, you can stop TiDB Operator and the cluster will keep working well until you need maintenance such as scaling or upgrading.
+
+## Upgrade Kubernetes
+
+When you have a major version change of Kubernetes, you need to make sure that `kubeSchedulerImageTag` matches it. By default, this value is derived by Helm during install/upgrade, so you need to perform a `helm upgrade` to reset it.
+
```
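A minimal sketch of that refresh, using the same command as the upgrade section above; leaving `kubeSchedulerImageTag` unset lets Helm re-read the cluster version at render time:

```sh
# After a Kubernetes control-plane upgrade, re-render the chart so the
# scheduler tag is re-derived from the new server version.
helm upgrade tidb-operator charts/tidb-operator
```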

3 changes: 2 additions & 1 deletion in images/tidb-operator-e2e/tidb-operator-values.yaml

```diff
@@ -50,4 +50,5 @@ scheduler:
     requests:
       cpu: 80m
       memory: 50Mi
-  kubeSchedulerImage: mirantis/hypokube:final
+  kubeSchedulerImageName: mirantis/hypokube
+  kubeSchedulerImageTag: final
```