This repository hosts a Kubemark provider implementation for the cluster-api project.
Learn how to engage with the Kubernetes community on the community page.
You can reach the maintainers of this project at:
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.
The Dockerfiles use `as builder` in the `FROM` instruction, which is not currently supported by RH's docker fork (see kubernetes-sigs/kubebuilder#268). One needs to run the `imagebuilder` command instead of `docker build`.
Note: this info is RH-only; it needs to be backported every time the README.md is synced with the upstream one.
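As a hedged sketch of that substitution (the `-t` flag and build-context argument are assumed to mirror `docker build`; check the imagebuilder documentation), with a hypothetical image name:

```shell
# Hypothetical image name, for illustration only.
IMG="${IMG:-cluster-api-provider-kubemark:dev}"

# Printed rather than executed, since neither tool may be installed here:
echo "upstream: docker build -t $IMG ."
echo "RH fork:  imagebuilder -t $IMG ."
```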
- Install kvm

  Depending on your virtualization manager you can choose a different driver. In order to install kvm, you can run (as described in the drivers documentation):

  ```sh
  $ sudo yum install libvirt-daemon-kvm qemu-kvm libvirt-daemon-config-network
  $ systemctl start libvirtd
  $ sudo usermod -a -G libvirt $(whoami)
  $ newgrp libvirt
  ```
  To install the kvm2 driver:

  ```sh
  curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
    && chmod +x docker-machine-driver-kvm2 \
    && sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
    && rm docker-machine-driver-kvm2
  ```
- Deploying the cluster

  To install minikube v0.30.0, you can run:

  ```sh
  $ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.30.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
  ```

  To deploy the cluster:

  ```sh
  # minikube start --vm-driver kvm2 --kubernetes-version v1.11.3 --v 5
  ```
- Deploying the cluster-api stack manifests

  ```sh
  $ cd config/default && kustomize build | kubectl apply --validate=false -f -
  ```
Currently the kubemark actuator allows you to configure the following test scenarios:
- have a node report `Unready` status (e.g. for 5s), then `Ready` (e.g. for the next 40s), and again, periodically:

  ```yaml
  apiVersion: kubemarkproviderconfig.k8s.io/v1alpha1
  kind: KubemarkMachineProviderConfig
  unhealthyDuration: 5s
  healthyDuration: 40s
  turnUnhealthyPeriodically: true
  image: gofed/kubemark:v1.11.3-6
  ```
- have a node report `Unready` status (e.g. 40s after kubelet startup) indefinitely:

  ```yaml
  apiVersion: kubemarkproviderconfig.k8s.io/v1alpha1
  kind: KubemarkMachineProviderConfig
  unhealthyDuration: 40s
  turnUnhealthyAfter: true
  image: gofed/kubemark:v1.11.3-6
  ```
Other configuration options:

- `deletionTimeout` - how much time to wait before a machine gets deleted from the cluster after its deletion timestamp is set
- `numCores` - the number of cores a kubemark node will report
- `memoryCapacity` - the memory capacity a kubemark node will report
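For illustration, these options can sit alongside the scenario fields in a single provider config. The values below are placeholders, and the exact value formats (e.g. for `memoryCapacity`) should be checked against the provider's types:

```yaml
apiVersion: kubemarkproviderconfig.k8s.io/v1alpha1
kind: KubemarkMachineProviderConfig
# Wait 30s after the deletion timestamp before removing the machine (placeholder value).
deletionTimeout: 30s
# Resources the kubemark node will report (placeholder values; formats assumed).
numCores: 2
memoryCapacity: 4Gi
image: gofed/kubemark:v1.11.3-6
```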
The provided kubemark (through `gofed/kubemark-machine-controllers:d4f6edb`) is a slightly updated version of the upstream kubemark. The PRs listed below allow kubemark to force the kubelet to make a node go `Unready` and/or back.
Upstream PRs:
- Setting ProviderID when running Kubemark: kubernetes/kubernetes#73393
- Injecting external Kubelet runtime health checker to simulate node failures: kubernetes/kubernetes#73398
- Allowing Kubemark to conditionally disrupt Kubelet runtime: kubernetes/kubernetes#73399
- Allowing kubemark to read in-cluster kubeconfig: TBD
How to build the kubemark image

- Clone the `k8s.io/kubernetes` repo under `$GOPATH/src/k8s.io/kubernetes`
- Check out the required version (e.g. `$ git checkout v1.14.3`)
- Apply the PRs (and rebase if needed)
- Run `make WHAT="cmd/kubemark"`
- Copy the binary and change directory:

  ```sh
  $ cp _output/bin/kubemark cluster/images/kubemark/
  $ cd cluster/images/kubemark/
  ```

- Build the docker image: `make build REGISTRY=... IMAGE_TAG=...`
- Push the docker image to the registry: `docker push $REGISTRY/`
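The steps above can be collected into a small dry-run script. This is a sketch, not the project's tooling: `docker.io/example`, `v1.14.3-dev`, and the pushed image name `kubemark:$IMAGE_TAG` are assumptions, and the script only prints each command so it can be reviewed before anything runs for real.

```shell
# Dry-run sketch of the build steps above; registry and tag are placeholders.
REGISTRY="${REGISTRY:-docker.io/example}"
IMAGE_TAG="${IMAGE_TAG:-v1.14.3-dev}"
KUBE_SRC="${GOPATH:-$HOME/go}/src/k8s.io/kubernetes"

# Print each command instead of executing it (drop the echo to run for real).
run() { echo "$@"; }

run git -C "$KUBE_SRC" checkout v1.14.3
run make -C "$KUBE_SRC" WHAT=cmd/kubemark
run cp "$KUBE_SRC/_output/bin/kubemark" "$KUBE_SRC/cluster/images/kubemark/"
run make -C "$KUBE_SRC/cluster/images/kubemark" build REGISTRY="$REGISTRY" IMAGE_TAG="$IMAGE_TAG"
# Assumed image name; check the kubemark Makefile for the exact tag it produces.
run docker push "$REGISTRY/kubemark:$IMAGE_TAG"
```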
Available kubemark images

- `docker.io/gofed/kubemark:v1.14.3-beta.0-2`
- `docker.io/gofed/kubemark:v1.13.7-beta.0-2`
Set through the `image` field of `KubemarkMachineProviderConfig`.
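As a sketch, picking one of the images above via the provider config (other fields omitted):

```yaml
apiVersion: kubemarkproviderconfig.k8s.io/v1alpha1
kind: KubemarkMachineProviderConfig
image: docker.io/gofed/kubemark:v1.14.3-beta.0-2
```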