Some tidying up
brtkwr committed Feb 12, 2021
1 parent b8460cf commit 494c295
Showing 28 changed files with 235 additions and 170 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -2,3 +2,5 @@
 terraform.tfvars
 terraform.tfstate
 terraform.tfstate.backup
+*.bak
+*.qcow2
116 changes: 55 additions & 61 deletions README.md
@@ -3,78 +3,74 @@
 Using this repository will deploy two separate Kubernetes clusters: one with
 Calico and another with Flannel.
 
-## Autoscaling
-
-Prerequisites:
+## Prerequisites:
 
 - OpenStack Queens, Magnum Stein 8.1.0 minimum
-- Terraform v0.12.10, provider.openstack v1.23.0
+- Terraform v0.14.6, provider.openstack v1.37.1
 - Kubernetes client (to be able to run `kubectl`)
 
-Deployment:
+Install dependencies:
 
-- Initialise terraform
+    ./scripts/install-deps.sh
 
-      terraform init --upgrade
+## Deployment:
 
-- Copy sample variable file:
+Initialise terraform
 
-      cp terraform.tfvars{.sample,}
+    terraform init --upgrade
 
-- Edit `terraform.tfvars` and fill in details like `external_network_id` and `keypair_name`.
+Copy sample variable file:
 
-- Source your OpenStack cloud environment variables:
+    cp terraform.tfvars{.sample,}
 
-      source openrc.sh
+Edit `terraform.tfvars` and fill in details like `external_network_id` and `keypair_name`.
 
-- To upload the latest Fedora CoreOS image:
+Source your OpenStack cloud environment variables:
 
-      ./upload-coreos.sh # requires Magnum Train 9.1.0 minimum and Heat Train.
-      ./upload-atomic.sh # if using older Magnum releases
+    source openrc.sh
 
-- To deploy the clusters (replace with `atomic.tfvars` or `podman.tfvars` if using Magnum release older than Train 9.1.0):
+To upload the latest Fedora CoreOS image:
 
-      ./cluster.sh coreos.tfvars # requires Magnum Train (9.1.0) and Heat Train minimum.
-      ./cluster.sh podman.tfvars # requires Magnum Train (9.1.0) and Heat Queens minimum.
-      ./cluster.sh atomic.tfvars # requires Magnum Stein (8.1.0) and Heat Queens minimum.
+    ./scripts/upload-coreos.sh # requires Magnum Train 9.1.0 minimum and Heat Train.
+    ./scripts/upload-atomic.sh # if using older Magnum releases
 
-- Optionally attach a floating IP to the Calico master node:
+To deploy the clusters (replace with `atomic.tfvars` or `podman.tfvars` if using Magnum release older than Train 9.1.0):
 
-      openstack server add floating ip `openstack server list -f value -c Name | grep 'calico.*master-0'` <floating-ip>
+    ./scripts/cluster.sh tfvars/coreos.tfvars # requires Magnum Train (9.1.0) and Heat Train minimum.
+    ./scripts/cluster.sh tfvars/podman.tfvars # requires Magnum Train (9.1.0) and Heat Queens minimum.
+    ./scripts/cluster.sh tfvars/atomic.tfvars # requires Magnum Stein (8.1.0) and Heat Queens minimum.
 
-- Or to the Flannel master node:
+## Autoscaling
 
-      openstack server add floating ip `openstack server list -f value -c Name | grep 'flannel.*master-0'` <floating-ip>
+SSH into the master node:
 
-- SSH into the master node:
+    kubectl create deployment test-autoscale --image=nginx
+    kubectl scale deployment test-autoscale --replicas=100
 
-      kubectl create deployment test-autoscale --image=nginx
-      kubectl scale deployment test-autoscale --replicas=100
+Sample output of `kubectl logs deploy/cluster-autoscaler -n kube-system`:
 
-- Sample output of `kubectl logs deploy/cluster-autoscaler -n kube-system`:
-
-      I1017 13:26:11.617165 1 leaderelection.go:217] attempting to acquire leader lease kube-system/cluster-autoscaler...
-      I1017 13:26:11.626499 1 leaderelection.go:227] successfully acquired lease kube-system/cluster-autoscaler
-      I1017 13:26:13.804795 1 magnum_manager_heat.go:293] For stack ID 3e981ac7-4a6e-47a7-9d16-7874f5e108a0, stack name is k8s-sb7k6mtqieim
-      I1017 13:26:13.974239 1 magnum_manager_heat.go:310] Found nested kube_minions stack: name k8s-sb7k6mtqieim-kube_minions-33izbolw5kvp, ID 2f7b5dff-9960-4ae2-8572-abed511d0801
-      I1017 13:32:25.461803 1 scale_up.go:689] Scale-up: setting group default-worker size to 3
-      I1017 13:32:28.400053 1 magnum_nodegroup.go:101] Increasing size by 1, 2->3
-      I1017 13:33:02.387803 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_IN_PROGRESS status
-      I1017 13:36:11.528032 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_COMPLETE status
-      I1017 13:36:21.550679 1 scale_up.go:689] Scale-up: setting group default-worker size to 5
-      I1017 13:36:24.157717 1 magnum_nodegroup.go:101] Increasing size by 2, 3->5
-      I1017 13:36:58.062981 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_IN_PROGRESS status
-      I1017 13:40:07.134681 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_COMPLETE status
-      W1017 13:50:14.668777 1 reflector.go:289] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:190: watch of *v1.Pod ended with: too old resource version: 15787 (16414)
-      I1017 14:00:17.891270 1 scale_down.go:882] Scale-down: removing empty node k8s-sb7k6mtqieim-minion-2
-      I1017 14:00:17.891315 1 scale_down.go:882] Scale-down: removing empty node k8s-sb7k6mtqieim-minion-3
-      I1017 14:00:17.891323 1 scale_down.go:882] Scale-down: removing empty node k8s-sb7k6mtqieim-minion-4
-      I1017 14:00:23.255551 1 magnum_manager_heat.go:344] Resolved node k8s-sb7k6mtqieim-minion-2 to stack index 2
-      I1017 14:00:23.255579 1 magnum_manager_heat.go:344] Resolved node k8s-sb7k6mtqieim-minion-4 to stack index 4
-      I1017 14:00:23.255584 1 magnum_manager_heat.go:344] Resolved node k8s-sb7k6mtqieim-minion-3 to stack index 3
-      I1017 14:00:24.283658 1 magnum_manager_heat.go:280] Waited for stack UPDATE_IN_PROGRESS status
-      I1017 14:01:25.030818 1 magnum_manager_heat.go:280] Waited for stack UPDATE_COMPLETE status
-      I1017 14:01:58.970490 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_IN_PROGRESS status
+    I1017 13:26:11.617165 1 leaderelection.go:217] attempting to acquire leader lease kube-system/cluster-autoscaler...
+    I1017 13:26:11.626499 1 leaderelection.go:227] successfully acquired lease kube-system/cluster-autoscaler
+    I1017 13:26:13.804795 1 magnum_manager_heat.go:293] For stack ID 3e981ac7-4a6e-47a7-9d16-7874f5e108a0, stack name is k8s-sb7k6mtqieim
+    I1017 13:26:13.974239 1 magnum_manager_heat.go:310] Found nested kube_minions stack: name k8s-sb7k6mtqieim-kube_minions-33izbolw5kvp, ID 2f7b5dff-9960-4ae2-8572-abed511d0801
+    I1017 13:32:25.461803 1 scale_up.go:689] Scale-up: setting group default-worker size to 3
+    I1017 13:32:28.400053 1 magnum_nodegroup.go:101] Increasing size by 1, 2->3
+    I1017 13:33:02.387803 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_IN_PROGRESS status
+    I1017 13:36:11.528032 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_COMPLETE status
+    I1017 13:36:21.550679 1 scale_up.go:689] Scale-up: setting group default-worker size to 5
+    I1017 13:36:24.157717 1 magnum_nodegroup.go:101] Increasing size by 2, 3->5
+    I1017 13:36:58.062981 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_IN_PROGRESS status
+    I1017 13:40:07.134681 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_COMPLETE status
+    W1017 13:50:14.668777 1 reflector.go:289] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:190: watch of *v1.Pod ended with: too old resource version: 15787 (16414)
+    I1017 14:00:17.891270 1 scale_down.go:882] Scale-down: removing empty node k8s-sb7k6mtqieim-minion-2
+    I1017 14:00:17.891315 1 scale_down.go:882] Scale-down: removing empty node k8s-sb7k6mtqieim-minion-3
+    I1017 14:00:17.891323 1 scale_down.go:882] Scale-down: removing empty node k8s-sb7k6mtqieim-minion-4
+    I1017 14:00:23.255551 1 magnum_manager_heat.go:344] Resolved node k8s-sb7k6mtqieim-minion-2 to stack index 2
+    I1017 14:00:23.255579 1 magnum_manager_heat.go:344] Resolved node k8s-sb7k6mtqieim-minion-4 to stack index 4
+    I1017 14:00:23.255584 1 magnum_manager_heat.go:344] Resolved node k8s-sb7k6mtqieim-minion-3 to stack index 3
+    I1017 14:00:24.283658 1 magnum_manager_heat.go:280] Waited for stack UPDATE_IN_PROGRESS status
+    I1017 14:01:25.030818 1 magnum_manager_heat.go:280] Waited for stack UPDATE_COMPLETE status
+    I1017 14:01:58.970490 1 magnum_nodegroup.go:67] Waited for cluster UPDATE_IN_PROGRESS status
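The scaling decisions are easy to pick out of a busy autoscaler log by filtering on the Go source files that emit them (`scale_up.go`, `scale_down.go`). A small self-contained sketch, using lines excerpted from the sample output above; the grep pattern is an editorial suggestion, not part of the repository:

```shell
# Filter cluster-autoscaler output down to scale-up/scale-down decisions.
# In a live cluster the input would come from:
#   kubectl logs deploy/cluster-autoscaler -n kube-system
log='I1017 13:26:11.626499 1 leaderelection.go:227] successfully acquired lease kube-system/cluster-autoscaler
I1017 13:32:25.461803 1 scale_up.go:689] Scale-up: setting group default-worker size to 3
I1017 14:00:17.891270 1 scale_down.go:882] Scale-down: removing empty node k8s-sb7k6mtqieim-minion-2'
printf '%s\n' "$log" | grep -E 'scale_(up|down)\.go'
```

Only the `Scale-up` and `Scale-down` lines survive the filter; leader-election and Heat-polling chatter is dropped.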

## Cinder Volumes

@@ -151,20 +147,18 @@ You can then proceed to spawn a PVC `kubectl apply -f https://raw.githubusercont
 
 ## Helm
 
-To use helm with tiller installed on the cluster, first of all, install `helm`:
-
-    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
+Helm should be installed as part of the `./scripts/install-deps.sh` script.
 
-Now source `magnum-tiller.sh` to use tiller installed in the `magnum-tiller` namespace.
+If using Helm v2, source `magnum-tiller.sh` to use tiller installed in the `magnum-tiller` namespace.
 
-    source magnum-tiller.sh
-    helm version
+    source ./scripts/magnum-tiller.sh
+    helm2 version
 
 If there is a mismatch between the installed versions of the helm client and tiller on the server, upgrade tiller.
 
-    helm init --upgrade
+    helm2 init --upgrade
 
-NOTE: magnum currently uses Helm 2, and tiller has been deprecated in Helm 3.
+NOTE: Magnum currently uses Helm 2, and tiller has been deprecated in Helm 3.

## Ingress

9 changes: 0 additions & 9 deletions cluster.sh

This file was deleted.

37 changes: 30 additions & 7 deletions main.tf
@@ -1,5 +1,26 @@
-provider "openstack" {
-  version = "1.29.0"
+terraform {
+  required_providers {
+    openstack = {
+      source  = "terraform-provider-openstack/openstack"
+      version = ">=1.31.0"
+    }
+    local = {
+      source = "hashicorp/local"
+    }
+    null = {
+      source = "hashicorp/null"
+    }
+  }
 }
 
+data "local_file" "public_key" {
+  filename = pathexpand("~/.ssh/id_rsa.pub")
+}
+
+
+resource "openstack_compute_keypair_v2" "keypair" {
+  name       = var.keypair_name
+  public_key = data.local_file.public_key.content
+}
+
 resource "openstack_containerinfra_clustertemplate_v1" "templates" {
@@ -18,7 +39,7 @@ resource "openstack_containerinfra_clustertemplate_v1" "templates" {
   master_lb_enabled   = var.master_lb_enabled
   fixed_network       = var.fixed_network
   fixed_subnet        = var.fixed_subnet
-  floating_ip_enabled = var.floating_ip_enabled
+  insecure_registry   = var.insecure_registry
 
   lifecycle {
     create_before_destroy = true
@@ -31,16 +52,18 @@ resource "openstack_containerinfra_cluster_v1" "clusters" {
   cluster_template_id = openstack_containerinfra_clustertemplate_v1.templates[each.value.template].id
   master_count        = var.master_count
   node_count          = var.node_count
-  keypair             = var.keypair_name
+  keypair             = openstack_compute_keypair_v2.keypair.id
   create_timeout      = var.create_timeout
   labels              = merge(var.labels, var.label_overrides, lookup(each.value, "label_overrides", {}))
   docker_volume_size  = var.docker_volume_size
+  floating_ip_enabled = var.floating_ip_enabled
 }
 
 resource "local_file" "kubeconfigs" {
-  for_each = var.clusters
-  content  = lookup(lookup(openstack_containerinfra_cluster_v1.clusters, each.key, {}), "kubeconfig", { raw_config : null }).raw_config
-  filename = pathexpand("~/.kube/${each.key}/config")
+  for_each   = var.clusters
+  content    = lookup(lookup(openstack_containerinfra_cluster_v1.clusters, each.key, {}), "kubeconfig", { raw_config : null }).raw_config
+  filename   = pathexpand("~/.kube/${each.key}/config")
+  depends_on = [openstack_containerinfra_cluster_v1.clusters]
 }
 
 resource "null_resource" "kubeconfig" {
16 changes: 15 additions & 1 deletion scripts/README.md
@@ -1,6 +1,20 @@
 # Scripts
 
+## upload-coreos.sh
+
+Upload the latest stable Fedora CoreOS image to Glance.
+
+## upload-atomic.sh [deprecated]
+
+Upload the last ever Fedora Atomic image to Glance.
+
 ## pull\_retag\_push.py
 
 Pull, retag, and push a list of images to a local container registry.
 
 Usage:
 
-    ./pull_retag_push.py -r localhost:5000 -i master.txt worker.txt
+    ./pull_retag_push.py -r localhost:5000 -i images.txt
 
 Pulling images in scripts/master.txt
 kubernetesui/dashboard:v2.0.0 | exists locally
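Retagging for a local registry amounts to prefixing the registry host to each image reference, keeping repository and tag intact. A self-contained sketch of that renaming step (the real script presumably drives the Docker API; here the `docker tag` invocations are only echoed, and the registry host is taken from the usage above):

```shell
# Compute the retag commands for a local registry without needing docker.
registry=localhost:5000
for image in kubernetesui/dashboard:v2.0.0 quay.io/coreos/etcd:v3.4.6; do
  # The target reference is simply "<registry>/<original reference>".
  echo "docker tag $image $registry/$image"
done
```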
8 changes: 8 additions & 0 deletions scripts/cluster.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+set -x
+DIR=`dirname $0`/..
+TFDIR=`realpath $DIR`
+TFVARS=${1:-$TFDIR/tfvars/coreos.tfvars}
+TFSTATE=${2:-$TFDIR/terraform.tfstate}
+ACTION=${3:-apply -auto-approve}
+terraform $ACTION -state=$TFSTATE -var-file=$TFVARS
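The wrapper above resolves its arguments with bash default parameter expansion (`${N:-default}`) and relies on word splitting of the unquoted `$ACTION` to pass multi-word actions through. A minimal self-contained sketch of that pattern, echoing the final command instead of invoking terraform, with the `$TFDIR` prefix dropped for brevity:

```shell
#!/bin/bash
# Mirror of scripts/cluster.sh's argument handling, runnable without
# OpenStack credentials or terraform installed.
TFVARS=${1:-tfvars/coreos.tfvars}   # first arg, else the CoreOS defaults
TFSTATE=${2:-terraform.tfstate}     # second arg, else local state file
ACTION=${3:-apply -auto-approve}    # third arg may carry several words
echo terraform $ACTION -state=$TFSTATE -var-file=$TFVARS
```

Run with no arguments this prints `terraform apply -auto-approve -state=terraform.tfstate -var-file=tfvars/coreos.tfvars`; passing `"destroy -auto-approve"` as the third argument would tear the clusters down instead of applying.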
File renamed without changes.
56 changes: 35 additions & 21 deletions scripts/worker.txt → scripts/images.txt
@@ -1,29 +1,43 @@
rancher/hyperkube:v1.19.1-rancher1
k8s.gcr.io/hyperkube:v1.18.8
docker.io/openstackmagnum/heat-container-agent:victoria-dev
k8scloudprovider/cinder-csi-plugin:v1.18.0
gcr.io/google_containers/pause:3.1
quay.io/calico/cni:v3.13.1
quay.io/calico/node:v3.13.1
quay.io/calico/pod2daemon-flexvol:v3.13.1
quay.io/calico/cni:v3.13.1
quay.io/coreos/prometheus-config-reloader:v0.37.0
quay.io/coreos/prometheus-operator:v0.37.0
quay.io/prometheus/node-exporter:v0.18.1
rancher/hyperkube:v1.19.7-rancher2
rancher/hyperkube:v1.20.2-rancher1
quay.io/coreos/flannel-cni:v0.3.0
quay.io/coreos/flannel:v0.12.0-amd64
k8s.gcr.io/hyperkube:v1.18.2
coredns/coredns:1.6.6
k8scloudprovider/k8s-keystone-auth:v1.18.0
k8scloudprovider/magnum-auto-healer:latest
k8scloudprovider/openstack-cloud-controller-manager:v1.18.0
k8scloudprovider/openstack-cloud-controller-manager:v1.19.0
k8scloudprovider/openstack-cloud-controller-manager:v1.20.0
kubernetesui/dashboard:v2.0.0
kubernetesui/metrics-scraper:v1.0.4
openstackmagnum/cluster-autoscaler:v1.18.1
quay.io/calico/kube-controllers:v3.13.1
quay.io/coreos/etcd:v3.4.6
directxman12/k8s-prometheus-adapter-amd64:v0.5.0
gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2
gcr.io/google_containers/metrics-server-amd64:v0.3.5
grafana/grafana:6.6.2
jettech/kube-webhook-certgen:v1.0.0
k8s.gcr.io/defaultbackend:1.4
k8s.gcr.io/node-problem-detector:v0.6.2
k8scloudprovider/cinder-csi-plugin:v1.18.0
kiwigrid/k8s-sidecar:0.1.99
quay.io/coreos/configmap-reload:v0.0.1
quay.io/coreos/kube-state-metrics:v1.9.4
quay.io/prometheus/prometheus:v2.15.2
quay.io/prometheus/alertmanager:v0.20.0
squareup/ghostunnel:v1.5.2
quay.io/k8scsi/csi-resizer:v0.3.0
quay.io/k8scsi/csi-snapshotter:v1.2.2
quay.io/k8scsi/csi-provisioner:v1.4.0
gcr.io/google_containers/metrics-server-amd64:v0.3.5
quay.io/coreos/prometheus-config-reloader:v0.37.0
quay.io/coreos/prometheus-operator:v0.37.0
quay.io/k8scsi/csi-attacher:v2.0.0
jettech/kube-webhook-certgen:v1.0.0
quay.io/prometheus/node-exporter:v0.18.1
quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
directxman12/k8s-prometheus-adapter-amd64:v0.5.0
k8s.gcr.io/node-problem-detector:v0.6.2
gcr.io/google_containers/pause:3.1
k8s.gcr.io/defaultbackend:1.4
gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2
quay.io/coreos/configmap-reload:v0.0.1
quay.io/k8scsi/csi-provisioner:v1.4.0
quay.io/k8scsi/csi-resizer:v0.3.0
quay.io/k8scsi/csi-snapshotter:v1.2.2
quay.io/prometheus/alertmanager:v0.20.0
quay.io/prometheus/prometheus:v2.15.2
squareup/ghostunnel:v1.5.2
8 changes: 4 additions & 4 deletions install-deps.sh → scripts/install-deps.sh
@@ -19,29 +19,29 @@ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s http
 sudo mv kubectl /usr/local/bin/kubectl
 
 # Install helm2 and helm (version 3)
-VERSION=v2.16.4 && \
+VERSION=v2.17.0 && \
 curl -L https://get.helm.sh/helm-$VERSION-linux-amd64.tar.gz --output helm.tar.gz && \
 mkdir -p tmp && \
 tar -xzf helm.tar.gz -C tmp/ && \
 sudo mv tmp/linux-amd64/helm /usr/local/bin/helm2 && \
 rm -rf helm.tar.gz tmp
 
-VERSION=v3.1.2 && \
+VERSION=v3.5.0 && \
 curl -L https://get.helm.sh/helm-$VERSION-linux-amd64.tar.gz --output helm.tar.gz && \
 mkdir -p tmp && \
 tar -xzf helm.tar.gz -C tmp/ && \
 sudo mv tmp/linux-amd64/helm /usr/local/bin/helm && \
 rm -rf helm.tar.gz tmp
 
 # Install known latest terraform
-VERSION=0.12.26 && \
+VERSION=0.14.6 && \
 curl -L https://releases.hashicorp.com/terraform/${VERSION}/terraform_${VERSION}_linux_amd64.zip --output terraform.zip && \
 unzip terraform.zip && \
 rm terraform.zip && \
 sudo mv terraform /usr/local/bin/terraform
 
 # Install latest known sonobuoy
-VERSION=0.18.0 && \
+VERSION=0.20.0 && \
 curl -L "https://github.com/vmware-tanzu/sonobuoy/releases/download/v${VERSION}/sonobuoy_${VERSION}_linux_amd64.tar.gz" --output sonobuoy.tar.gz && \
 mkdir -p tmp && \
 tar -xzf sonobuoy.tar.gz -C tmp/ && \
File renamed without changes.
17 changes: 0 additions & 17 deletions scripts/master.txt

This file was deleted.

File renamed without changes.
4 changes: 2 additions & 2 deletions scripts/pull_retag_push.py
@@ -23,8 +23,8 @@ def pull(image, max_width):
     try:
         d.images.pull(image)
         result = "pulled"
-    except docker.errors.NotFound:
-        result = "not found"
+    except Exception:
+        result = "error"
     cols = image.ljust(max_width), result
     print(" | ".join(cols))

File renamed without changes.
