
Commit

…-aws into updates

* 'master' of https://github.com/kubernetes-incubator/kube-aws:
  Allow toggling Metrics Server installation
  Correct values for the `kubernetes.io/cluster/<Cluster ID>` tags. Resolves kubernetes-retired#1025
  Fix dashboard doco links
  Fix install-kube-system when node drainer is enabled. Follow-up for kubernetes-retired#1043
  Two fixes to 0.9.9 rc.3 (kubernetes-retired#1043)
  Quick start and high availability guides
camilb committed Dec 8, 2017
2 parents 1942c85 + 2267305 commit 0ce26f9
Showing 13 changed files with 192 additions and 142 deletions.
13 changes: 7 additions & 6 deletions core/controlplane/config/encrypted_assets.go
@@ -440,11 +440,6 @@ func ReadOrEncryptAssets(dirname string, manageCertificates bool, caKeyRequiredO
}

func (r *RawAssetsOnMemory) WriteToDir(dirname string, includeCAKey bool) error {
workerCAKeyDefaultSymlinkTo := ""
if includeCAKey {
workerCAKeyDefaultSymlinkTo = "ca-key.pem"
}

assets := []struct {
name string
data []byte
@@ -453,7 +448,6 @@ func (r *RawAssetsOnMemory) WriteToDir(dirname string, includeCAKey bool) error
}{
{"ca.pem", r.CACert, true, ""},
{"worker-ca.pem", r.WorkerCACert, true, "ca.pem"},
{"worker-ca-key.pem", r.WorkerCAKey, true, workerCAKeyDefaultSymlinkTo},
{"apiserver.pem", r.APIServerCert, true, ""},
{"apiserver-key.pem", r.APIServerKey, true, ""},
{"worker.pem", r.WorkerCert, true, ""},
@@ -480,6 +474,13 @@ func (r *RawAssetsOnMemory) WriteToDir(dirname string, includeCAKey bool) error
overwrite bool
ifEmptySymlinkTo string
}{"ca-key.pem", r.CAKey, true, ""})

assets = append(assets, struct {
name string
data []byte
overwrite bool
ifEmptySymlinkTo string
}{"worker-ca-key.pem", r.WorkerCAKey, true, "ca-key.pem"})
}

for _, asset := range assets {
112 changes: 62 additions & 50 deletions core/controlplane/config/templates/cloud-config-controller
@@ -781,108 +781,120 @@ write_files:
#!/bin/bash -e

kubectl() {
/usr/bin/docker run --rm --net=host -v /srv/kubernetes:/srv/kubernetes {{.HyperkubeImage.RepoWithTag}} /hyperkube kubectl "$@"
# --request-timeout=1s is intended to instruct kubectl to give up discovering unresponsive apiservice(s) after a short period
# so that temporary flakiness/unresponsiveness of a specific apiservice before apiserver/controller-manager fully starts doesn't
# affect the whole controller bootstrap process.
/usr/bin/docker run --rm --net=host -v /srv/kubernetes:/srv/kubernetes {{.HyperkubeImage.RepoWithTag}} /hyperkube kubectl --request-timeout=1s "$@"
}

ks() {
kubectl --namespace kube-system "$@"
}

# Try to batch as many files as possible to reduce the total delay caused by flakiness in the API aggregation layer
# See https://github.com/kubernetes-incubator/kube-aws/issues/1039
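# e.g. `applyall "${mfdir}/a.yaml" "${mfdir}/b.yaml"` turns into a single `kubectl apply -f ${mfdir}/a.yaml,${mfdir}/b.yaml` call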
applyall() {
kubectl apply -f $(echo "$@" | tr ' ' ',')
}

while ! kubectl get ns kube-system; do
echo Waiting until kube-system created.
sleep 3
done

# See https://github.com/kubernetes-incubator/kube-aws/issues/1039#issuecomment-348978375
if ks get apiservice v1beta1.metrics.k8s.io && ! ps ax | grep '[h]yperkube proxy'; then
echo "apiserver is up but kube-proxy isn't up. We have likely encountered #1039."
echo "Temporary deleting the v1beta1.metrics.k8s.io apiservice as a work-around for #1039"
ks delete apiservice v1beta1.metrics.k8s.io

echo Waiting until controller-manager stabilizes and it creates a kube-proxy pod.
until ps ax | grep '[h]yperkube proxy'; do
echo Sleeping 3 seconds.
sleep 3
done
echo kube-proxy started. apiserver should be responsive again.
fi

mfdir=/srv/kubernetes/manifests
rbac=/srv/kubernetes/rbac

{{ if .UseCalico }}
/bin/bash /opt/bin/populate-tls-calico-etcd
kubectl apply -f "${mfdir}/calico.yaml"
applyall "${mfdir}/calico.yaml"
{{ end }}

{{ if .Addons.MetricsServer.Enabled -}}
applyall \
"${mfdir}/metrics-server-sa.yaml" \
"${mfdir}/metrics-server-de.yaml" \
"${mfdir}/metrics-server-svc.yaml" \
"${rbac}/cluster-roles/metrics-server.yaml" \
"${rbac}/cluster-role-bindings/metrics-server.yaml" \
"${rbac}/role-bindings/metrics-server.yaml" \
"${mfdir}/metrics-server-apisvc.yaml"
{{- end }}

{{ if .Experimental.NodeDrainer.Enabled }}
for manifest in {kube-node-drainer-ds,kube-node-drainer-asg-status-updater-de}; do
kubectl apply -f "${mfdir}/$manifest.yaml"
done
applyall "${mfdir}"/{kube-node-drainer-ds,kube-node-drainer-asg-status-updater-de}".yaml"
{{ end }}

# Secrets
kubectl apply -f "${mfdir}/kubernetes-dashboard-se.yaml"
applyall "${mfdir}/kubernetes-dashboard-se.yaml"

# Configmaps
for manifest in {kube-dns,kube-proxy}; do
kubectl apply -f "${mfdir}/$manifest-cm.yaml"
done
applyall "${mfdir}"/{kube-dns,kube-proxy}"-cm.yaml"

# Service Accounts
for manifest in {kube-dns,heapster,kube-proxy,kubernetes-dashboard,metrics-server}; do
kubectl apply -f "${mfdir}/$manifest-sa.yaml"
done
applyall "${mfdir}"/{kube-dns,heapster,kube-proxy,kubernetes-dashboard}"-sa.yaml"

# Install tiller by default
kubectl apply -f "${mfdir}/tiller.yaml"
applyall "${mfdir}/tiller.yaml"

{{ if .KubeDns.NodeLocalResolver }}
# DNS Masq Fix
kubectl apply -f "${mfdir}/dnsmasq-node-ds.yaml"
applyall "${mfdir}/dnsmasq-node-ds.yaml"
{{ end }}

# Deployments
for manifest in {kube-dns,kube-dns-autoscaler,kubernetes-dashboard,{{ if .Addons.ClusterAutoscaler.Enabled }}cluster-autoscaler,{{ end }}heapster{{ if .KubeResourcesAutosave.Enabled }},kube-resources-autosave{{ end }},metrics-server}; do
kubectl apply -f "${mfdir}/$manifest-de.yaml"
done
applyall "${mfdir}"/{kube-dns,kube-dns-autoscaler,kubernetes-dashboard,{{ if .Addons.ClusterAutoscaler.Enabled }}cluster-autoscaler,{{ end }}heapster{{ if .KubeResourcesAutosave.Enabled }},kube-resources-autosave{{ end }}}"-de.yaml"

# Daemonsets
for manifest in {kube-proxy,}; do
kubectl apply -f "${mfdir}/$manifest-ds.yaml"
done
applyall "${mfdir}"/kube-proxy"-ds.yaml"

# Services
for manifest in {kube-dns,heapster,kubernetes-dashboard,metrics-server}; do
kubectl apply -f "${mfdir}/$manifest-svc.yaml"
done
applyall "${mfdir}"/{kube-dns,heapster,kubernetes-dashboard}"-svc.yaml"

{{- if .Addons.Rescheduler.Enabled }}
kubectl apply -f "${mfdir}/kube-rescheduler-de.yaml"
applyall "${mfdir}/kube-rescheduler-de.yaml"
{{- end }}

# API Services
for manifest in {metrics-server,}; do
kubectl apply -f "${mfdir}/$manifest-apisvc.yaml"
done

mfdir=/srv/kubernetes/rbac

# Cluster roles and bindings
for manifest in {node-extensions,metrics-server}; do
kubectl apply -f "${mfdir}/cluster-roles/$manifest.yaml"
done
for manifest in {kube-admin,system-worker,node,node-proxier,node-extensions,heapster,metrics-server}; do
kubectl apply -f "${mfdir}/cluster-role-bindings/$manifest.yaml"
done
applyall "${mfdir}/cluster-roles/node-extensions.yaml"

applyall "${mfdir}/cluster-role-bindings"/{kube-admin,system-worker,node,node-proxier,node-extensions,heapster}".yaml"

{{ if .KubernetesDashboard.AdminPrivileges }}
kubectl apply -f "${mfdir}/cluster-role-bindings/kubernetes-dashboard-admin.yaml"
applyall "${mfdir}/cluster-role-bindings/kubernetes-dashboard-admin.yaml"
{{- end }}

# Roles and bindings
for manifest in {pod-nanny,kubernetes-dashboard}; do
kubectl apply -f "${mfdir}/roles/$manifest.yaml"
done
for manifest in {heapster-nanny,kubernetes-dashboard,metrics-server}; do
kubectl apply -f "${mfdir}/role-bindings/$manifest.yaml"
done
applyall "${mfdir}/roles"/{pod-nanny,kubernetes-dashboard}".yaml"

applyall "${mfdir}/role-bindings"/{heapster-nanny,kubernetes-dashboard}".yaml"

{{ if .Experimental.TLSBootstrap.Enabled }}
for manifest in {node-bootstrapper,kubelet-certificate-bootstrap}; do
kubectl apply -f "${mfdir}/cluster-roles/$manifest.yaml"
done
applyall "${mfdir}/cluster-roles"/{node-bootstrapper,kubelet-certificate-bootstrap}".yaml"

for manifest in {node-bootstrapper,kubelet-certificate-bootstrap}; do
kubectl apply -f "${mfdir}/cluster-role-bindings/$manifest.yaml"
done
applyall "${mfdir}/cluster-role-bindings"/{node-bootstrapper,kubelet-certificate-bootstrap}".yaml"
{{ end }}

{{if .Experimental.Kube2IamSupport.Enabled }}
mfdir=/srv/kubernetes/manifests
kubectl apply -f "${mfdir}/kube2iam-rbac.yaml"
kubectl apply -f "${mfdir}/kube2iam-ds.yaml";
applyall "${mfdir}/kube2iam-rbac.yaml"
applyall "${mfdir}/kube2iam-ds.yaml";
{{ end }}

- path: /etc/kubernetes/cni/docker_opts_cni.env
4 changes: 4 additions & 0 deletions core/controlplane/config/templates/cluster.yaml
@@ -1194,6 +1194,10 @@ addons:
rescheduler:
enabled: false

# Metrics Server (https://github.com/kubernetes-incubator/metrics-server)
metricsServer:
enabled: false

# Experimental features will change in backward-incompatible ways
experimental:
# Enable admission controllers
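
With the new key in place, enabling the Metrics Server is a one-line change under `addons` in `cluster.yaml`. A minimal sketch (the `addons.metricsServer.enabled` path matches the template above and the `config.go` change below; the value shown is simply the non-default setting):

```yaml
addons:
  # Deploys the metrics-server manifests and RBAC objects via install-kube-system
  metricsServer:
    enabled: true
```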
6 changes: 3 additions & 3 deletions core/controlplane/config/templates/stack-template.json
@@ -554,7 +554,7 @@
}],
"HostedZoneTags" : [{
"Key": "kubernetes.io/cluster/{{$.ClusterName}}",
"Value": "true"
"Value": "owned"
}]
}
},
@@ -664,7 +664,7 @@
{
"Key": "kubernetes.io/cluster/{{$.ClusterName}}",
"PropagateAtLaunch": "true",
"Value": "true"
"Value": "owned"
},
{
"Key": "Name",
@@ -1618,7 +1618,7 @@
"Tags": [
{
"Key": "kubernetes.io/cluster/{{.ClusterName}}",
"Value": "true"
"Value": "owned"
},
{
"Key": "Name",
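
For context on the tag correction above (kubernetes-retired#1025): the AWS cloud provider identifies cluster resources by the `kubernetes.io/cluster/<Cluster ID>` tag, whose expected value is `owned` (or `shared` for resources reused across clusters) rather than `true`. A CloudFormation-style YAML sketch of the corrected tag; the surrounding structure is illustrative, the actual templates are JSON:

```yaml
Tags:
  - Key: kubernetes.io/cluster/{{$.ClusterName}}
    Value: owned   # "owned" for cluster-managed resources, "shared" for resources shared between clusters
```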
2 changes: 1 addition & 1 deletion core/nodepool/config/templates/stack-template.json
@@ -144,7 +144,7 @@
{
"Key": "kubernetes.io/cluster/{{ .ClusterName }}",
"PropagateAtLaunch": "true",
"Value": "true"
"Value": "owned"
},
{
"Key": "kube-aws:node-pool:name",
1 change: 1 addition & 0 deletions core/root/config/config.go
@@ -144,6 +144,7 @@ func ConfigFromBytes(data []byte, plugins []*pluginmodel.Plugin) (*Config, error
{c.Addons, "addons"},
{c.Addons.Rescheduler, "addons.rescheduler"},
{c.Addons.ClusterAutoscaler, "addons.clusterAutoscaler"},
{c.Addons.MetricsServer, "addons.metricsServer"},
}

for i, np := range c.Worker.NodePools {
5 changes: 3 additions & 2 deletions docs/SUMMARY.md
@@ -1,6 +1,7 @@
# Summary

* [Home](README.md)
* [Quick Start](tutorials/quick-start.md)
* [Getting Started](getting-started/README.md)
* [Prerequisites](getting-started/prerequisites.md)
* [Step 1: Configure](getting-started/step-1-configure.md)
@@ -20,10 +21,10 @@
* [Developer Guide](guides/developer-guide.md)
* [Operator Guide](guides/operator-guide.md)
* [Advanced Topics](advanced-topics/README.md)
* [etcd Backup & Restore](advanced-topics/etcd-backup-and-restore.md)
* [CloudFormation Updates in CLI](advanced-topics/cloudformation-updates-in-cli.md)
* [etcd Backup & Restore](advanced-topics/etcd-backup-and-restore.md)
* [Kubernetes Dashboard Access](advanced-topics/kubernetes-dashboard.md)
* [Use An Existing VPC](advanced-topics/use-an-existing-vpc.md)
* [Troubleshooting](troubleshooting/README.md)
* [Known Limitations](troubleshooting/known-limitations.md)
* [Common Problems](troubleshooting/common-problems.md)
* [Quick Start \(WIP\)](tutorials/quick-start.md)
4 changes: 2 additions & 2 deletions docs/advanced-topics/README.md
@@ -1,6 +1,6 @@
# Advanced Topics

* [etcd Backup & Restore](etcd-backup-and-restore.md) - how to backup and restore etcd either manually or automatically
* [CloudFormation Streaming](cloudformation-updates-in-cli.md) - stream CloudFormation updates during CLI commands `kube-aws up` and `kube-aws update`
* [etcd Backup & Restore](etcd-backup-and-restore.md) - how to backup and restore etcd either manually or automatically
* [Kubernetes Dashboard Access](kubernetes-dashboard.md) - how to expose and access the Kubernetes Dashboard
* [Use An Existing VPC](use-an-existing-vpc.md) - how to deploy a Kubernetes cluster to an existing VPC
* [Kubernetes Dashboard Access and Authentication](kubernetes-dashboard.md) - how to expose and access the Kubernetes Dashboard
12 changes: 12 additions & 0 deletions docs/advanced-topics/high-availability.md
@@ -0,0 +1,12 @@
# High Availability

To achieve high availability using kube-aws, it is recommended to (a `cluster.yaml` sketch follows the list):

* Specify at least 3 for `etcd.count` in `cluster.yaml`. See [Optimal Cluster Size](https://coreos.com/etcd/docs/latest/v2/admin_guide.html#optimal-cluster-size) for details on etcd sizing recommendations
* Specify at least 2 for `controller.count` in `cluster.yaml`
* Use 2 or more worker nodes
* Avoid `t2.medium` or smaller instances for etcd and controller nodes. See [this issue](https://github.com/kubernetes-incubator/kube-aws/issues/138) for some additional discussion.
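
A minimal `cluster.yaml` sketch reflecting these recommendations (only `etcd.count` and `controller.count` come from the guidance above; the remaining keys and values are illustrative assumptions about the schema):

```yaml
controller:
  count: 2                 # at least 2 controller nodes
  instanceType: m4.large   # assumption: anything larger than t2.medium
etcd:
  count: 3                 # an odd number >= 3 keeps etcd quorum through a single node failure
  instanceType: m4.large   # assumption
worker:
  nodePools:               # assumption: worker capacity is defined per node pool
    - name: nodepool1
      count: 2             # 2 or more worker nodes
```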

# Additional Reading

There's some additional documentation about [Building High-Availability Clusters](https://kubernetes.io/docs/admin/high-availability/) on the main Kubernetes documentation site. Although kube-aws takes care of most of those concerns for you, it can be worth a read for a deeper understanding.
4 changes: 3 additions & 1 deletion docs/cli-reference/README.md
@@ -15,16 +15,18 @@ Initialize the base configuration for a cluster ready for customization prior to
| `hosted-zone-id` | The hosted zone in which a Route53 record set for a k8s API endpoint is created | none |
| `key-name` | The AWS key-pair for SSH access to nodes | none |
| `kms-key-arn` | The ARN of the AWS KMS key for encrypting TLS assets | none |
| `no-record-set` | Instruct kube-aws to not manage Route53 record sets for your K8S API | `false` |
| `region` | The AWS region to deploy to | none |

### `init` example

```bash
$ kube-aws init \
--cluster-name=my-cluster \
--external-dns-name=my-cluster-endpoint.mydomain.com \
--region=us-west-1 \
--availability-zone=us-west-1c \
--hosted-zone-id=xxxxxxxxxxxxxx \
--external-dns-name=my-cluster-endpoint.mydomain.com \
--key-name=key-pair-name \
--kms-key-arn="arn:aws:kms:us-west-1:xxxxxxxxxx:key/xxxxxxxxxxxxxxxxxxx"
```