Read the following guide if you're interested in contributing to cluster-api.
We'd love to accept your patches! Before we can take them, we have to jump a couple of legal hurdles.
Please fill out either the individual or corporate Contributor License Agreement (CLA). More information about the CLA and instructions for signing it can be found here.
NOTE: Only original source code from you and other people that have signed the CLA can be accepted into the repository.
If you're new to the project and want to help, but don't know where to start, we have a semi-curated list of issues that should not need deep knowledge of the system. Have a look and see if anything sounds interesting. Alternatively, read some of the docs on other controllers and try to write your own, file and fix any/all issues that come up, including gaps in documentation!
- If you haven't already done so, sign a Contributor License Agreement (see details above).
- Fork the desired repo, develop and test your code changes.
- Submit a pull request.
All changes must be code reviewed. Coding conventions and standards are explained in the official developer docs. Expect reviewers to request that you avoid common Go style mistakes in your PRs.
Cluster API ships older versions through release-X.X branches; backports are usually reserved for critical bug fixes.
Some release branches might ship with both Go modules and dep (e.g. release-0.1). Users backporting patches should always make sure that the vendored Go module dependencies match the ones declared in Gopkg.lock and Gopkg.toml by running dep ensure.
Cluster API maintainers may add "LGTM" (Looks Good To Me) or an equivalent comment to indicate that a PR is acceptable. Any change requires at least one LGTM. No pull requests can be merged until at least one Cluster API maintainer signs off with an LGTM.
To gain viewing permissions to Google Docs in this project, please join either the kubernetes-dev or kubernetes-sig-cluster-lifecycle Google Group.
Anyone may comment on issues and submit reviews for pull requests. However, in order to be assigned an issue or pull request, you must be a member of the Kubernetes SIGs GitHub organization.
If you are a Kubernetes GitHub organization member, you are eligible for membership in the Kubernetes SIGs GitHub organization and can request membership by opening an issue against the kubernetes/org repo.
However, if you are a member of any of the related Kubernetes GitHub organizations but not of the Kubernetes org, you will need explicit sponsorship for your membership request. You can read more about Kubernetes membership and sponsorship here.
Cluster API maintainers can assign you an issue or pull request by leaving a
/assign <your Github ID>
comment on the issue or pull request.
This document is meant to help OSS contributors implement support for providers (cloud or on-prem).
As part of adding support for a provider (cloud or on-prem), you will need to:
- Create tooling that conforms to the Cluster API (described further below)
- A machine controller that can run independent of the cluster. This controller should handle the lifecycle of the machines, whether it's run in-cluster or out-cluster.
The machine controller should be able to act on a subset of machines that form a cluster (for example using a label selector).
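As a rough, non-authoritative sketch, the snippet below shows one way a machine controller might restrict itself to the Machines of a single cluster using a label selector. It assumes a controller-runtime-style client, the cluster.k8s.io/v1alpha1 API group of this era, and a hypothetical cluster.k8s.io/cluster-name label key; adapt both to your provider's conventions.

```go
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listClusterMachines returns only the Machines labelled as belonging to the
// given cluster. The "cluster.k8s.io/cluster-name" label key is an assumption
// made for illustration; use whatever labelling convention your provider adopts.
func listClusterMachines(ctx context.Context, c client.Client, clusterName string) (*unstructured.UnstructuredList, error) {
	machines := &unstructured.UnstructuredList{}
	machines.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "cluster.k8s.io",
		Version: "v1alpha1",
		Kind:    "MachineList",
	})
	err := c.List(ctx, machines, client.MatchingLabels{"cluster.k8s.io/cluster-name": clusterName})
	return machines, err
}
```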
To minimize code duplication and maximize flexibility, bootstrap clusters with an external Cluster Management API Stack. A Cluster Management API Stack contains all the components needed to provide the Kubernetes Cluster Management API for a cluster. See the Bootstrap Process Design Details document for more information.
A new Machine can be created in a declarative way, specifying versions of various components such as the kubelet. It should also be able to specify provider-specific information such as OS image, instance type, disk configuration, etc., though this will not be portable.
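For illustration only, here is what such a declarative Machine might look like when built as an unstructured object. The spec layout (versions.kubelet, versions.controlPlane, providerSpec.value) follows the v1alpha1 API of this era, and the GCE-flavoured providerSpec values are assumptions that are not portable across providers.

```go
package example

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// newMachine sketches a declaratively specified Machine.
func newMachine() *unstructured.Unstructured {
	return &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "cluster.k8s.io/v1alpha1",
		"kind":       "Machine",
		"metadata": map[string]interface{}{
			"name": "my-first-machine",
		},
		"spec": map[string]interface{}{
			// Versions of the components to run on this machine.
			"versions": map[string]interface{}{
				"kubelet":      "1.13.4",
				"controlPlane": "1.13.4",
			},
			// Provider-specific details (OS image, instance type, disks, ...);
			// these example values are hypothetical and not portable.
			"providerSpec": map[string]interface{}{
				"value": map[string]interface{}{
					"machineType": "n1-standard-2",
					"os":          "ubuntu-1804-lts",
				},
			},
		},
	}}
}
```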
When a cluster is first created with a cluster config file, there is no control plane node or API server yet, so the user will need to bootstrap a cluster. While the implementation details are specific to the provider, the following guidance (sketched in code after the list) should help you:
- Your tool should spin up the external apiserver and the machine controller.
- POST the objects to the apiserver.
- The machine controller creates resources (Machines, etc.).
- Pivot the apiserver and the machine controller into the cluster.
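The sketch below restates those steps as code under stated assumptions: every type and method name here (ManagementStack, Apply, WaitForControlPlane, PivotTo) is a hypothetical placeholder, since the real sequencing, APIs, and error handling are provider-specific.

```go
package bootstrap

import "context"

// ManagementStack is a hypothetical abstraction over the external ("bootstrap")
// Cluster Management API Stack: an apiserver plus the provider's machine controller.
type ManagementStack interface {
	Apply(ctx context.Context, manifests [][]byte) error        // POST Cluster/Machine objects
	WaitForControlPlane(ctx context.Context) error              // wait for the target control plane
	PivotTo(ctx context.Context, targetKubeconfig []byte) error // move objects and controllers over
	Stop() error
}

// Bootstrap walks through the steps listed above; all names are illustrative
// and the real flow is provider-specific.
func Bootstrap(ctx context.Context, stack ManagementStack, manifests [][]byte, targetKubeconfig []byte) error {
	// The external apiserver and machine controller are assumed to be running (stack).
	// POST the objects to the external apiserver; its machine controller then
	// creates the actual resources (Machines etc).
	if err := stack.Apply(ctx, manifests); err != nil {
		return err
	}
	if err := stack.WaitForControlPlane(ctx); err != nil {
		return err
	}
	// Pivot the apiserver and the machine controller into the new cluster.
	if err := stack.PivotTo(ctx, targetKubeconfig); err != nil {
		return err
	}
	return stack.Stop()
}
```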
While not mandatory, it is suggested that new providers support configurable machine setups for creating new machines. This allows flexibility in which startup scripts are used and which versions are supported, instead of hardcoding startup scripts into the machine controller. You can find an example implementation for GCE here.
For GCE, a ConfigMap named machine-setup holds the list of valid machine setup configs, and the YAML file is volume-mounted into the machine controller.
A config type defines a set of parameters that can be taken from the machine object being created, and maps those parameters to startup scripts and other relevant information. In GCE, the OS, machine roles, and version info are the parameters that map to a GCP image path and metadata (which contains the startup script).
When creating a new machine, there should be a check for whether the machine setup is supported. This is done by looking through the valid configs parsed out of the YAML for a config with matching parameters. If a match is found, then the machine can be created with the startup script found in the config. If no match is found, then the given machine configuration is not supported. Getting the script onto the machine and running it on startup is a provider-specific implementation detail.
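To make that concrete, here is a hedged sketch of what such a config type and matching check could look like in Go; the type and field names are illustrative, loosely modelled on the GCE example, and not the actual implementation.

```go
package machinesetup

import (
	"fmt"
	"reflect"
)

// Params are the values taken from the Machine object being created.
type Params struct {
	OS       string   `json:"os" yaml:"os"`
	Roles    []string `json:"roles" yaml:"roles"`
	Versions Versions `json:"versions" yaml:"versions"`
}

type Versions struct {
	Kubelet      string `json:"kubelet" yaml:"kubelet"`
	ControlPlane string `json:"controlPlane" yaml:"controlPlane"`
}

// Config maps a set of parameters to the startup script and other
// information needed to create a matching machine.
type Config struct {
	Params   Params            `json:"machineParams" yaml:"machineParams"`
	Image    string            `json:"image" yaml:"image"`       // e.g. a GCP image path
	Metadata map[string]string `json:"metadata" yaml:"metadata"` // contains the startup script
}

// MatchConfig looks through the configs parsed out of the YAML for one whose
// parameters match the machine being created; no match means the requested
// machine setup is not supported.
func MatchConfig(configs []Config, want Params) (*Config, error) {
	for i := range configs {
		if reflect.DeepEqual(configs[i].Params, want) {
			return &configs[i], nil
		}
	}
	return nil, fmt.Errorf("unsupported machine setup: %+v", want)
}
```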
More details can be found in the design doc, but note that it is GCE specific.
When the client deletes a Machine object, your controller's reconciler should trigger the deletion of the machine (e.g. the VM or physical host) that backs that Machine object. The delete is provider-specific, but usually requires deleting the VM and freeing up any external resources (like IP addresses).
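A minimal sketch of the provider-specific half of that deletion is shown below; the cloudClient interface and its method names are hypothetical stand-ins for your provider's SDK.

```go
package machineactuator

import (
	"context"
	"fmt"
)

// cloudClient is a hypothetical wrapper around the provider's SDK; the method
// names are placeholders, not a real API.
type cloudClient interface {
	DeleteInstance(ctx context.Context, name string) error
	ReleaseAddress(ctx context.Context, name string) error
}

// deleteBackingMachine tears down the VM behind a Machine object and frees the
// external resources (such as IP addresses) that were allocated for it.
func deleteBackingMachine(ctx context.Context, cloud cloudClient, machineName string) error {
	if err := cloud.DeleteInstance(ctx, machineName); err != nil {
		return fmt.Errorf("deleting instance %q: %v", machineName, err)
	}
	if err := cloud.ReleaseAddress(ctx, machineName); err != nil {
		return fmt.Errorf("releasing address for %q: %v", machineName, err)
	}
	return nil
}
```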
Your provider should also support machine upgrades and downgrades. These include:
- A specific Machine can have its kubelet version upgraded or downgraded.
- A specific Machine can have its OS image upgraded or downgraded.
A sample implementation for an upgrader is provided here. Each machine is upgraded serially, which can amount to:
for _, machine := range machines {
	upgradeMachine(machine)
}
The specific upgrade logic will be implemented as part of the machine controller and is specific to the provider. The user-provided provider config will be in machine.Spec.ProviderSpec.
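As an illustration, the snippet below decodes the user-provided provider config out of machine.Spec.ProviderSpec before the upgrade logic acts on it; it assumes the v1alpha1 types at sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1 and a hypothetical, GCE-flavoured provider config type.

```go
package upgrade

import (
	"encoding/json"
	"fmt"

	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

// ProviderConfig is a hypothetical provider config; substitute your
// provider's own config type.
type ProviderConfig struct {
	MachineType string `json:"machineType"`
	Zone        string `json:"zone"`
}

// providerConfig extracts the user-provided config from the Machine's
// providerSpec so the upgrade logic can act on it.
func providerConfig(machine *clusterv1.Machine) (*ProviderConfig, error) {
	if machine.Spec.ProviderSpec.Value == nil {
		return nil, fmt.Errorf("machine %q has no providerSpec value", machine.Name)
	}
	cfg := &ProviderConfig{}
	if err := json.Unmarshal(machine.Spec.ProviderSpec.Value.Raw, cfg); err != nil {
		return nil, fmt.Errorf("decoding providerSpec for machine %q: %v", machine.Name, err)
	}
	return cfg, nil
}
```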
Discussion around in-place vs replace upgrades is here.
Whether you are a user or contributor, official support channels include:
- GitHub issues: https://github.com/kubernetes-sigs/cluster-api/issues/new
- Slack: chat with us in the #cluster-api channel
- Email: kubernetes-sig-cluster-lifecycle mailing list
Before opening a new issue or submitting a new pull request, it's helpful to search the project - it's likely that another user has already reported the issue you're facing, or it's a known issue that we're already aware of.