---
title: Advanced configurations with kubeadm (Kustomize)
authors:
- "@fabriziopandini"
owning-sig: sig-cluster-lifecycle
participating-sigs:
- sig-cluster-lifecycle
reviewers:
- "@neolit123"
- "@rosti"
- "@ereslibre"
- "@detiber"
- "@vincepri"
- "@chuckha"
approvers:
- "@timothysc"
- "@luxas"
editor: "@fabriziopandini"
creation-date: 2019-07-22
last-updated: 2019-07-22
status: implementable
---

# Advanced configurations with kubeadm (Kustomize)

## Table of Contents

<!-- toc -->
- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [User Stories](#user-stories)
- [Story 1](#story-1)
- [Story 2](#story-2)
- [Story 3](#story-3)
- [Implementation Details](#implementation-details)
- [Kustomize integration with kubeadm](#kustomize-integration-with-kubeadm)
- [Providing and storing Kustomize patches to kubeadm](#providing-and-storing-kustomize-patches-to-kubeadm)
- [Storing and retrieving Kustomize patches for kubeadm](#storing-and-retrieving-kustomize-patches-for-kubeadm)
- [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
- [Test Plan](#test-plan)
- [Graduation Criteria](#graduation-criteria)
- [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
- [Version Skew Strategy](#version-skew-strategy)
- [Implementation History](#implementation-history)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
<!-- /toc -->

## Release Signoff Checklist

- [x] kubernetes/enhancements issue in release milestone, which links to KEP
- [x] KEP approvers have set the KEP status to `implementable`
- [x] Design details are appropriately documented
- [x] Test plan is in place, giving consideration to SIG Architecture and SIG Testing input
- [ ] Graduation criteria is in place
- [x] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

## Summary

This KEP is aimed at defining a new kubeadm feature that allows users to bootstrap
a Kubernetes cluster with static pod customizations not supported by the kubeadm component configuration.

## Motivation

Kubeadm currently allows you to define a limited set of configuration options for a
Kubernetes cluster via the kubeadm component configuration or the corresponding CLI flags.

More specifically, the kubeadm component configuration provides an abstraction that allows users:

1. To define configuration settings at the cluster level using the `ClusterConfiguration`
config object
2. To define a limited set of configurations at the node level using the
`NodeRegistrationOptions` object or the `localAPIEndpoint` object (see the sketch below)
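
For reference, the sketch below shows roughly what these objects look like; the API version
and all values are illustrative only (`v1beta2` was the current kubeadm config version at the
time of writing):

```
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
# node-level settings
localAPIEndpoint:
  advertiseAddress: 192.168.0.10
  bindPort: 6443
nodeRegistration:
  name: control-plane-1
  kubeletExtraArgs:
    node-labels: "example.com/role=edge"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# cluster-level settings
kubernetesVersion: v1.16.0
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
```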

This abstraction is well suited for the most common cluster configurations/use cases, but
there are other use cases that cannot be achieved with the kubeadm component configuration
as of today. Some examples:

- It is not possible to add sidecars, e.g. for components serving authorization webhooks.
- It is not possible to set/alter timeouts for liveness probes in control plane components.

This KEP aims to introduce a new feature that will enable users to take full control of the
static pod manifests generated by kubeadm at the node level, in contrast with the kubeadm
component configuration, which mostly allows cluster-wide configuration of control-plane/etcd
args only.

With this new feature, users should no longer be required to manually alter the static
pod manifests stored in `/etc/kubernetes/manifests` after kubeadm init/join.

### Goals

Considering the complexity of this topic, this document is expected to be subject
to some iterations. The goals of the current iteration are to:

- Get initial approval on the Summary and Motivation paragraphs.
- Identify a semantic for defining “advanced configurations” for control-plane/etcd
  static pod manifests.
- Define the UX for passing “advanced configurations” to kubeadm init and to kubeadm join.
- Define mechanics, limitations, and constraints for preserving “advanced configurations”
  during the cluster lifecycle, and more specifically for supporting the kubeadm upgrade workflow.
- Ensure the proper functioning of “advanced configurations” with kubeadm phases.
- Clarify what is in the scope of kubeadm and what instead should be the responsibility
  of the users / of higher-level tools in the stack, e.g. Cluster API.

### Non-Goals

- To provide any validation or guarantee about the security, conformance, or
  consistency of “advanced configurations” for control-plane/etcd settings.
  As is already the case for `extraArgs` fields in the kubeadm component configuration or in the
  Kubelet/KubeProxy component config, the responsibility for proper usage of those
  advanced configuration options belongs to higher-level tools/users.
- To deprecate the kubeadm component configuration, because:
  - The component configuration provides an abstraction well suited for the most common use
    cases (that can be addressed with cluster-wide configurations on control-plane/etcd command line args only).
  - The component configuration implicitly defines the main cluster variants the kubeadm team
    is committed to support and monitor in the Kubernetes test grid.
- To define how to manage “advanced configurations” once the kubeadm and [`etcdadm`](https://github.com/kubernetes-sigs/etcdadm)
  projects integrate. This will be defined in following iterations of this KEP.
- To define how to manage “advanced configurations” for the addons
  (this is postponed until the integration between kubeadm and the [`addons`](https://github.com/kubernetes-sigs/addon-operators)
  project).

## Proposal

### User Stories

#### Story 1
As a cluster administrator, I want to add a sidecar to the kube-apiserver pod for running a
component serving authorization webhooks.

#### Story 2
As a cluster administrator, I want to set timeouts for the kube-apiserver liveness
probes for edge clusters.

#### Story 3
As a cluster administrator, I want to upgrade my cluster while preserving all the
“advanced configurations” already in place.

### Implementation Details

This proposal explores the usage of Kustomize as a first option for implementing kubeadm
“advanced configurations“; please refer to
[Declarative application management in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/declarative-application-management.md)
and the [Kustomize KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/0008-kustomize.md)
for background information about Kustomize.

#### Kustomize integration with kubeadm

By adopting Kustomize in the context of kubeadm, this proposal assumes that kubeadm will:

- Generate the static pod manifests as usual.
- Use the generated artifacts as a starting point for applying patches
  containing “advanced configurations”.

This has some implications:

1. The higher-level tools/users have to express “advanced configurations” using
one of the two alternative techniques supported by Kustomize: the [strategic
merge patch](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#patchstrategicmerge)
or the [JSON patch](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#patchjson6902) (a sketch of both styles is shown after this list).
2. The higher-level tools/users have to provide patches before running kubeadm;
this point is going to be further discussed in the following paragraphs.
3. Kubeadm is responsible for coordinating the execution of Kustomize within the
init/join/upgrade workflows.
4. As a consequence of the previous point, higher-level tools/users are not
required to define `kustomization.yaml` files nor to set up a local folder structure.
5. `patchesJson6902` can't be used without a `kustomization.yaml` file defining the `target`
for a patch; this limitation is not considered blocking for starting the implementation;
however, a solution for this problem should be defined before graduating to beta
(e.g. require a `kustomization.yaml` in case the users want to use `patchesJson6902`).
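
For illustration, here is a minimal sketch of the two patch styles, assuming the hypothetical
goal of raising the kube-apiserver liveness probe timeout (Story 2); file names and values
are examples only:

```
# strategic merge patch (e.g. kubeadm-patches/probe-timeout.yaml):
# merged into the kubeadm-generated manifest by matching kind/name/namespace
# and the container name.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    livenessProbe:
      timeoutSeconds: 30

# JSON 6902 patch with the same intent; as noted in point 5 above, this style
# also requires a kustomization.yaml declaring the target:
- op: replace
  path: /spec/containers/0/livenessProbe/timeoutSeconds
  value: 30
```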

Additionally, in order to simplify the first implementation of this KEP, this
proposal is going to assume that Kustomize patches for kubeadm are always defined
specifically for the node where kubeadm is going to be executed.

This point could be reconsidered in the future, e.g. by introducing cluster-wide
patches and/or patches for a subset of nodes.

The resulting workflow for using Kustomize will be the following:

1. Create a folder with some patches (a filled-in example of a patch file is shown after step 2), e.g.

```
mkdir kubeadm-patches

cat <<EOF >./kubeadm-patches/patch1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
{your kube-apiserver patch here}
EOF

cat <<EOF >./kubeadm-patches/patch2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
{your kube-controller-manager patch here}
EOF
```

2. Run kubeadm passing the new patches, e.g.

```
kubeadm init --experimental-kustomize kubeadm-patches/
or
kubeadm init phase control-plane ... --experimental-kustomize kubeadm-patches/
```
or
```
kubeadm join --experimental-kustomize kubeadm-patches/
or
kubeadm join phase control-plane-prepare control-plane --experimental-kustomize kubeadm-patches/
```
or
```
kubeadm upgrade apply --experimental-kustomize kubeadm-patches/
```
or
```
kubeadm upgrade node --experimental-kustomize kubeadm-patches/
or
kubeadm upgrade node phase control-plane --experimental-kustomize kubeadm-patches/
```

NB. `--experimental-kustomize` is the proposed name for the flag, to be renamed to `--kustomize`
when beta is reached. The `-k` abbreviation can be reserved or even omitted entirely.
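
As a concrete illustration of what a patch file from step 1 might contain, the following is a
hypothetical strategic merge patch that adds a sidecar container to the kube-apiserver pod
(Story 1); the container name and image are assumptions, not part of this proposal:

```
# e.g. kubeadm-patches/patch1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  # entries in the containers list merge by name, so an unknown name results
  # in an additional (sidecar) container being added to the pod
  - name: authz-webhook
    image: example.com/authz-webhook:v0.1.0
```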

#### Providing and storing Kustomize patches to kubeadm

Before kubeadm init, Kustomize patches should be provided to kubeadm
by higher-level tools/users; patches should be defined in a custom location
on the machine file system, and this location could be passed to
kubeadm init/join with a CLI flag.

In order to simplify the first implementation of this KEP, this proposal assumes
that the same approach is also used for kubeadm join; this point could be reconsidered
in the future, e.g. by defining a method that allows higher-level tools/users to
define Kustomize patches using a new CRD.

#### Storing and retrieving Kustomize patches for kubeadm

Kustomize patches should be preserved during the whole cluster lifecycle, mainly
to allow kubeadm to preserve changes during the kubeadm upgrade workflow.

In order to simplify the first implementation of this KEP, this proposal assumes
that Kustomize patches will remain stored in the custom location on the machine file system
for as long as necessary, and that this location will be passed to kubeadm upgrade with a CLI
flag; this point could be reconsidered in the future, e.g. by defining a method that
allows higher-level tools/users to define Kustomize patches using a new CRD.
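
For example (directory name and target version below are illustrative only), the same patch
folder used at init time is kept on the node and passed again at upgrade time:

```
# at cluster creation
kubeadm init --experimental-kustomize kubeadm-patches/

# later, on the same node, the same directory is re-used so that the
# regenerated static pod manifests get the same patches re-applied
kubeadm upgrade apply v1.16.0 --experimental-kustomize kubeadm-patches/
```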

### Risks and Mitigations

_Confusion between the kubeadm component configuration and Kustomize_
Kubeadm already offers a way to implement cluster settings, that is, the kubeadm component
configuration and component configs. Adding a new feature for supporting “advanced configurations”
can confuse users.

kubeadm maintainers should take care of making the differences crystal clear in release notes
and feature announcements:

- The component configuration provides an abstraction well suited for the most common use cases
  that can be addressed with cluster-wide configurations on control-plane/etcd command line args only,
  while “advanced configurations”/Kustomize allows users full control of the static
  pod manifests generated by kubeadm at the node level.
- The component configuration implicitly defines the main cluster variants the kubeadm team is
  committed to support and monitor in the Kubernetes test grid, while higher-level
  tools/users are responsible for the security, conformance, and consistency of
  “advanced configurations” for control-plane/etcd static pod manifests.

_Misleading expectations about the level of flexibility_
In order to guarantee that kubeadm respects “advanced configurations” during
init, join, upgrades, or single-phase execution, it is necessary to define some trade-offs
around _what_ can be customized and _how_.

Even though the proposed solution is based on user feedback/issues, the kubeadm maintainers
want to be sure the implementation provides the expected level of flexibility and, in
order to ensure that, we will wait for at least one K8s release cycle for users to provide
feedback before moving forward with graduating the feature to beta.

Similarly, the kubeadm maintainers should work with [`etcdadm`](https://github.com/kubernetes-sigs/etcdadm)
project and [`addons`](https://github.com/kubernetes-sigs/addon-operators) project
to ensure a consistent approach across different components.

_Component version change during upgrades_
Static pod manifests are managed by kubeadm, while “advanced configurations”/kustomize patches
will be managed by users.

During upgrades, kubeadm might change a static pod manifest in a way that prevents some
patches from applying cleanly anymore.

The kubeadm maintainers will work on release notes to make potential breaking changes more visible;
additionally, upgrade instructions will be updated to recommend a `--dry-run` to check the
expected changes before upgrades.
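
A minimal sketch of that recommendation (the target version is illustrative):

```
# perform a dry run of the upgrade, with patches applied, to check the
# expected changes without modifying the node
kubeadm upgrade apply v1.16.0 --dry-run --experimental-kustomize kubeadm-patches/

# review the output for patches that no longer apply cleanly, then run the
# same command without --dry-run
```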

_Kustomize errors during kubeadm workflows_
When executing “advanced configurations”/Kustomize patches within kubeadm workflows, we are introducing
an external element that can potentially generate errors during command execution.

Error management should be adapted to this new possible risk, ensuring that the node remains in a
consistent state in case of errors.

## Design Details

### Test Plan

Add at least one periodic E2E test job exercising “advanced configurations” during init,
join and upgrades.

Please note that, in accordance with the split of responsibilities defined in the previous paragraphs,
the new E2E test will focus _only_ on the mechanics of applying “advanced configurations”/Kustomize
patches, not on the possible combinations of patches, nor on the security, conformance, or consistency
of the resulting Kubernetes cluster.

### Graduation Criteria

This proposal, in its initial version, covers only the creation of a new alpha feature.
Graduation criteria will be defined in following iterations of this proposal, taking
user feedback into consideration as well.

### Upgrade / Downgrade Strategy

As stated in goals, kubeadm will preserve “advanced configurations” during upgrades,
and more specifically, it will re-apply patches after each upgrade.

Downgrades are not supported by kubeadm, and not considered by this proposal.

### Version Skew Strategy

This proposal does not impact kubeadm compliance with the official K8s version skew policy;
higher-level tools/users are responsible for the security, conformance, and consistency of
“advanced configurations” that can impact the aforementioned policy.

## Implementation History

Tuesday, July 30, 2019
- the `Summary` and `Motivation` sections being merged signaling SIG acceptance
- the `Proposal` section being merged signaling agreement on a proposed design
- the date implementation started

## Drawbacks

Kubeadm already offers a way to implement cluster settings, that is, the kubeadm API
and the support for providing component configs. Adding a new feature for supporting
“advanced configurations” can confuse users.

See risks and mitigations.

## Alternatives

There are many alternatives to “Kustomize” in the ecosystem; see [Declarative application management in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/declarative-application-management.md).

While there is great value in several different approaches, “Kustomize” was selected as
the first choice for this proposal because it already has first-class support in
kubectl (starting from v1.14).