Kustomization

The Kustomization API defines a pipeline for fetching, decrypting, building, validating and applying Kubernetes manifests.

Specification

A Kustomization object defines the source of Kubernetes manifests by referencing an object managed by source-controller, the path to the Kustomization file within that source, and the interval at which the kustomize build output is applied on the cluster.
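For example, a minimal Kustomization that applies the manifests from a GitRepository could look like this (a sketch; the source name, path and interval below are illustrative):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 10m
  path: "./kustomize"
  prune: true
  sourceRef:
    kind: GitRepository
    name: podinfo

The full API specification follows: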

type KustomizationSpec struct {
	// DependsOn may contain a dependency.CrossNamespaceDependencyReference slice
	// with references to Kustomization resources that must be ready before this
	// Kustomization can be reconciled.
	// +optional
	DependsOn []dependency.CrossNamespaceDependencyReference `json:"dependsOn,omitempty"`

	// Decrypt Kubernetes secrets before applying them on the cluster.
	// +optional
	Decryption *Decryption `json:"decryption,omitempty"`

	// The interval at which to reconcile the Kustomization.
	// +required
	Interval metav1.Duration `json:"interval"`

	// The interval at which to retry a previously failed reconciliation.
	// When not specified, the controller uses the KustomizationSpec.Interval
	// value to retry failures.
	// +optional
	RetryInterval *metav1.Duration `json:"retryInterval,omitempty"`

	// The KubeConfig for reconciling the Kustomization on a remote cluster.
	// When specified, KubeConfig takes precedence over ServiceAccountName.
	// +optional
	KubeConfig *KubeConfig `json:"kubeConfig,omitempty"`

	// Path to the directory containing the kustomization.yaml file, or the
	// set of plain YAMLs a kustomization.yaml should be generated for.
	// Defaults to 'None', which translates to the root path of the SourceRef.
	// +optional
	Path string `json:"path,omitempty"`

	// PostBuild describes which actions to perform on the YAML manifest
	// generated by building the kustomize overlay.
	// +optional
	PostBuild *PostBuild `json:"postBuild,omitempty"`

	// Enables garbage collection.
	// +required
	Prune bool `json:"prune"`

	// A list of resources to be included in the health assessment.
	// +optional
	HealthChecks []meta.NamespacedObjectKindReference `json:"healthChecks,omitempty"`

	// Strategic merge and JSON patches, defined as inline YAML objects,
	// capable of targeting objects based on kind, label and annotation selectors.
	// +optional
	Patches []kustomize.Patch `json:"patches,omitempty"`

	// Strategic merge patches, defined as inline YAML objects.
	// +optional
	PatchesStrategicMerge []apiextensionsv1.JSON `json:"patchesStrategicMerge,omitempty"`

	// JSON 6902 patches, defined as inline YAML objects.
	// +optional
	PatchesJSON6902 []kustomize.JSON6902Patch `json:"patchesJson6902,omitempty"`

	// Images is a list of (image name, new name, new tag or digest)
	// for changing image names, tags or digests. This can also be achieved with a
	// patch, but this operator is simpler to specify.
	// +optional
	Images []kustomize.Image `json:"images,omitempty"`

	// The name of the Kubernetes service account to impersonate
	// when reconciling this Kustomization.
	// +optional
	ServiceAccountName string `json:"serviceAccountName,omitempty"`

	// Reference of the source where the kustomization file is.
	// +required
	SourceRef CrossNamespaceSourceReference `json:"sourceRef"`

	// This flag tells the controller to suspend subsequent kustomize executions,
	// it does not apply to already started executions. Defaults to false.
	// +optional
	Suspend bool `json:"suspend,omitempty"`

	// TargetNamespace sets or overrides the namespace in the
	// kustomization.yaml file.
	// +optional
	TargetNamespace string `json:"targetNamespace,omitempty"`

	// Timeout for validation, apply and health checking operations.
	// Defaults to 'Interval' duration.
	// +optional
	Timeout *metav1.Duration `json:"timeout,omitempty"`

	// Validate the Kubernetes objects before applying them on the cluster.
	// The validation strategy can be 'client' (local dry-run), 'server' (APIServer dry-run) or 'none'.
	// +kubebuilder:validation:Enum=none;client;server
	// +optional
	Validation string `json:"validation,omitempty"`

	// Force instructs the controller to recreate resources
	// when patching fails due to an immutable field change.
	// +kubebuilder:default:=false
	// +optional
	Force bool `json:"force,omitempty"`
}

The decryption section defines how decryption is handled for Kubernetes manifests:

type Decryption struct {
	// Provider is the name of the decryption engine.
	// +kubebuilder:validation:Enum=sops
	// +required
	Provider string `json:"provider"`

	// The secret name containing the private OpenPGP keys used for decryption.
	// +optional
	SecretRef *meta.LocalObjectReference `json:"secretRef,omitempty"`
}

KubeConfig references a Kubernetes Secret containing a kubeconfig used to apply the manifests to another cluster. This can be used with Cluster API:

type KubeConfig struct {
	// SecretRef holds the name to a secret that contains a 'value' key with
	// the kubeconfig file as the value. It must be in the same namespace as
	// the Kustomization.
	// It is recommended that the kubeconfig is self-contained, and the secret
	// is regularly updated if credentials such as a cloud-access-token expire.
	// Cloud specific `cmd-path` auth helpers will not function without adding
	// binaries and credentials to the Pod that is responsible for reconciling
	// the Kustomization.
	// +required
	SecretRef meta.LocalObjectReference `json:"secretRef,omitempty"`
}

Image contains the name and the new name, tag or digest that will replace the original container image:

type Image struct {
	// Name of the image to be replaced.
	// +required
	Name string `json:"name"`

	// NewName is the name of the image used to replace the original one.
	// +optional
	NewName string `json:"newName,omitempty"`

	// NewTag is the image tag used to replace the original tag.
	// +optional
	NewTag string `json:"newTag,omitempty"`

	// Digest is the image digest used to replace the original tag.
	// When Digest is set, the NewTag value is ignored.
	// +optional
	Digest string `json:"digest,omitempty"`
}

The post-build section defines which actions to perform on the YAML manifest after kustomize build:

type PostBuild struct {
	// Substitute holds a map of key/value pairs.
	// The variables defined in your YAML manifests
	// that match any of the keys defined in the map
	// will be substituted with the set value.
	// Includes support for bash string replacement functions
	// e.g. ${var:=default}, ${var:position} and ${var/substring/replacement}.
	// +optional
	Substitute map[string]string `json:"substitute,omitempty"`

	// SubstituteFrom holds references to ConfigMaps and Secrets containing
	// the variables and their values to be substituted in the YAML manifests.
	// The ConfigMap and the Secret data keys represent the var names and they
	// must match the vars declared in the manifests for the substitution to happen.
	// +optional
	SubstituteFrom []SubstituteReference `json:"substituteFrom,omitempty"`
}

The status sub-resource records the result of the last reconciliation:

type KustomizationStatus struct {
	// ObservedGeneration is the last reconciled generation.
	// +optional
	ObservedGeneration int64 `json:"observedGeneration,omitempty"`

	// +optional
	Conditions []metav1.Condition `json:"conditions,omitempty"`

	// The last successfully applied revision.
	// The revision format for Git sources is <branch|tag>/<commit-sha>.
	// +optional
	LastAppliedRevision string `json:"lastAppliedRevision,omitempty"`

	// LastAttemptedRevision is the revision of the last reconciliation attempt.
	// +optional
	LastAttemptedRevision string `json:"lastAttemptedRevision,omitempty"`

	// LastHandledReconcileAt is the last manual reconciliation request (by
	// annotating the Kustomization) handled by the reconciler.
	// +optional
	LastHandledReconcileAt string `json:"lastHandledReconcileAt,omitempty"`

	// The last successfully applied revision metadata.
	// +optional
	Snapshot *Snapshot `json:"snapshot"`
}

Status condition types:

const (
	// ReadyCondition is the name of the condition that
	// records the readiness status of a Kustomization.
	ReadyCondition string = "Ready"
)

Status condition reasons:

const (
	// ReconciliationSucceededReason represents the fact that the
	// reconciliation of the Kustomization has succeeded.
	ReconciliationSucceededReason string = "ReconciliationSucceeded"

	// ReconciliationFailedReason represents the fact that the
	// reconciliation of the Kustomization has failed.
	ReconciliationFailedReason string = "ReconciliationFailed"

	// ProgressingReason represents the fact that the
	// reconciliation of the Kustomization is underway.
	ProgressingReason string = "Progressing"

	// DependencyNotReadyReason represents the fact that
	// one of the dependencies of the Kustomization is not ready.
	DependencyNotReadyReason string = "DependencyNotReady"

	// PruneFailedReason represents the fact that the
	// pruning of the Kustomization failed.
	PruneFailedReason string = "PruneFailed"

	// ArtifactFailedReason represents the fact that the
	// artifact download of the kustomization failed.
	ArtifactFailedReason string = "ArtifactFailed"

	// BuildFailedReason represents the fact that the
	// kustomize build of the Kustomization failed.
	BuildFailedReason string = "BuildFailed"

	// HealthCheckFailedReason represents the fact that
	// one of the health checks of the Kustomization failed.
	HealthCheckFailedReason string = "HealthCheckFailed"

	// ValidationFailedReason represents the fact that the
	// validation of the Kustomization manifests has failed.
	ValidationFailedReason string = "ValidationFailed"
)

Source reference

The Kustomization spec.sourceRef is a reference to an object managed by source-controller. When the source revision changes, it generates a Kubernetes event that triggers a kustomize build and apply.

Source supported types:

  • GitRepository
  • Bucket

Note that the source should contain the kustomization.yaml and all the Kubernetes manifests and configuration files referenced in the kustomization.yaml. If your Git repository or S3 bucket contains only plain manifests, then a kustomization.yaml will be automatically generated.

Generate kustomization.yaml

If your repository contains plain Kubernetes manifests, a kustomization.yaml file is automatically generated for all the Kubernetes manifests in the spec.path directory and its sub-directories. All YAML files present under that path are expected to be valid Kubernetes manifests; non-Kubernetes YAML files must be excluded using a .sourceignore file or the spec.ignore field on the GitRepository object.

Example of excluding CI workflows and SOPS config files:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 5m
  url: https://github.com/stefanprodan/podinfo
  ignore: |
    .git/
    .github/
    .sops.yaml
    .gitlab-ci.yml

It is recommended to generate the kustomization.yaml yourself and store it in Git; this way you can validate your manifests in CI. Assuming your manifests are inside ./clusters/my-cluster, you can generate a kustomization.yaml with:

cd clusters/my-cluster

# create kustomization
kustomize create --autodetect --recursive

# validate kustomization
kustomize build | kubeval --ignore-missing-schemas

Reconciliation

The Kustomization spec.interval tells the controller at which interval to fetch the Kubernetes manifests from the source, build the Kustomization and apply it on the cluster. Valid time units are s, m and h, e.g. interval: 5m; the minimum value should be over 60 seconds.

The Kustomization execution can be suspended by setting spec.suspend to true.

With spec.force you can tell the controller to replace the resources in-cluster if patching fails due to immutable field changes.
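
Both fields can be toggled on an existing object with kubectl, e.g. (a sketch; the Kustomization name podinfo is an assumption):

# suspend subsequent reconciliations
kubectl patch kustomization/podinfo --type merge -p '{"spec":{"suspend":true}}'

# allow the controller to recreate resources on immutable field changes
kubectl patch kustomization/podinfo --type merge -p '{"spec":{"force":true}}'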

The controller can be told to reconcile the Kustomization outside of the specified interval by annotating the Kustomization object with:

const (
	// ReconcileAtAnnotation is the annotation used for triggering a
	// reconciliation outside of the defined schedule.
	ReconcileAtAnnotation string = "reconcile.fluxcd.io/requestedAt"
)

On-demand execution example:

kubectl annotate --overwrite kustomization/podinfo reconcile.fluxcd.io/requestedAt="$(date +%s)"

List all Kubernetes objects reconciled from a Kustomization:

kubectl get all --all-namespaces \
-l=kustomize.toolkit.fluxcd.io/name="<Kustomization name>" \
-l=kustomize.toolkit.fluxcd.io/namespace="<Kustomization namespace>"

Garbage collection

To enable garbage collection, set spec.prune to true.

Garbage collection means that the Kubernetes objects that were previously applied on the cluster but are missing from the current source revision are removed from the cluster automatically. Garbage collection is also performed when a Kustomization object is deleted, triggering the removal of all Kubernetes objects previously applied on the cluster.

To keep track of the Kubernetes objects reconciled from a Kustomization, the following metadata is injected into the manifests:

labels:
  kustomize.toolkit.fluxcd.io/name: "<Kustomization name>"
  kustomize.toolkit.fluxcd.io/namespace: "<Kustomization namespace>"
annotations:
  kustomize.toolkit.fluxcd.io/checksum: "<manifests checksum>"

The checksum annotation value is updated if the content of spec.path changes. When pruning is disabled, the checksum annotation is omitted.

You can disable pruning for certain resources by either labeling or annotating them with:

kustomize.toolkit.fluxcd.io/prune: disabled
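
For example, to exclude a ConfigMap from garbage collection (a sketch; the resource name is an assumption):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    kustomize.toolkit.fluxcd.io/prune: disabled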

Note that Kubernetes objects generated by other controllers that have ownerReference.blockOwnerDeletion=true are skipped from garbage collection.

Health assessment

A Kustomization can contain a series of health checks used to determine the rollout status of the deployed workloads and the ready status of custom resources.

A health check entry can reference one of the following types:

  • Kubernetes builtin kinds: Deployment, DaemonSet, StatefulSet, PersistentVolumeClaim, Pod, PodDisruptionBudget, Job, CronJob, Service, Secret, ConfigMap, CustomResourceDefinition
  • Toolkit kinds: HelmRelease, HelmRepository, GitRepository, etc
  • Custom resources that are compatible with kstatus

Assuming the Kustomization source contains a Kubernetes Deployment named backend, a health check can be defined as follows:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: backend
  namespace: default
spec:
  interval: 5m
  path: "./webapp/backend/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: webapp
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: backend
      namespace: dev
  timeout: 2m

After applying the kustomize build output, the controller verifies if the rollout completed successfully. If the deployment was successful, the Kustomization ready condition is marked as true; if the rollout failed, or if it takes more than the specified timeout to complete, the ready condition is set to false. If the deployment becomes healthy on the next execution, the Kustomization is marked as ready.

When a Kustomization contains HelmRelease objects, instead of checking the underlying Deployments, you can define a health check that waits for the HelmReleases to be reconciled with:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: webapp
  namespace: default
spec:
  interval: 15m
  path: "./releases/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: webapp
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v1beta1
      kind: HelmRelease
      name: frontend
      namespace: dev
    - apiVersion: helm.toolkit.fluxcd.io/v1beta1
      kind: HelmRelease
      name: backend
      namespace: dev
  timeout: 5m

If all the HelmRelease objects are successfully installed or upgraded, then the Kustomization will be marked as ready.

Kustomization dependencies

When applying a Kustomization, you may need to make sure other resources exist before the workloads defined in your Kustomization are deployed. For example, a namespace must exist before applying resources to it.

With spec.dependsOn you can specify that the execution of a Kustomization follows another. When you add dependsOn entries to a Kustomization, that Kustomization is applied only after all of its dependencies are ready. The readiness state of a Kustomization is determined by its last apply status condition.

Assuming two Kustomizations:

  • cert-manager - reconciles the cert-manager CRDs and controller
  • certs - reconciles the cert-manager custom resources

You can instruct the controller to apply the cert-manager Kustomization before certs:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: cert-manager
  namespace: flux-system
spec:
  interval: 5m
  path: "./cert-manager/controller"
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: cert-manager
      namespace: cert-manager
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: certs
  namespace: flux-system
spec:
  dependsOn:
    - name: cert-manager
  interval: 5m
  path: "./cert-manager/certs"
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system

When combined with health assessment, a Kustomization will run after all its dependencies' health checks are passing. For example, a service mesh proxy injector should be running before deploying applications inside the mesh.

Note that circular dependencies between Kustomizations must be avoided, otherwise the interdependent Kustomizations will never be applied on the cluster.

Role-based access control

By default, a Kustomization apply runs under the cluster admin account and can create, modify and delete cluster level objects (namespaces, CRDs, etc.) and namespaced objects (deployments, ingresses, etc.). For certain Kustomizations, a cluster admin may wish to control what types of Kubernetes objects can be reconciled and in which namespaces. To restrict a Kustomization, one can assign a service account under which the reconciliation is performed.

Assuming you want to restrict a group of Kustomizations to a single namespace, you can create an account with a role binding that grants access only to that namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: webapp
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-reconciler
  namespace: webapp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: webapp-reconciler
  namespace: webapp
rules:
  - apiGroups: ['*']
    resources: ['*']
    verbs: ['*']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: webapp-reconciler
  namespace: webapp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: webapp-reconciler
subjects:
- kind: ServiceAccount
  name: webapp-reconciler
  namespace: webapp

Note that the namespace, RBAC and service account manifests should be placed in a Git source and applied with a Kustomization. The Kustomizations that run under that service account should depend on the one that contains the account (using dependsOn).

Create a Kustomization that prevents altering the cluster state outside of the webapp namespace:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: backend
  namespace: webapp
spec:
  serviceAccountName: webapp-reconciler
  dependsOn:
    - name: common
  interval: 5m
  path: "./webapp/backend/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: webapp

When the controller reconciles the backend Kustomization, it will impersonate the webapp-reconciler account. If the Kustomization contains cluster level objects like CRDs, or objects belonging to a different namespace, the reconciliation will fail, since the account it runs under has no permissions to alter objects outside of the webapp namespace.

Override kustomize config

The Kustomization has a set of fields to extend and/or override the Kustomize patches and namespace on all the Kubernetes objects reconciled by the resource, offering support for the following Kustomize directives:

Target namespace

To configure the Kustomize namespace and overwrite the namespace of all the Kubernetes objects reconciled by the Kustomization, spec.targetNamespace can be defined:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  # ...omitted for brevity
  targetNamespace: test

The targetNamespace is expected to exist.

Patches

To add Kustomize patches entries to the configuration, and patch resources using either a strategic merge patch or a JSON patch, spec.patches items must contain a target selector and a patch document. The patch can target a single resource or multiple resources:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  # ...omitted for brevity
  patches:
    - patch: |-
        apiVersion: v1
        kind: Pod
        metadata:
          name: not-used
          labels:
            app.kubernetes.io/part-of: test-app
      target:
        labelSelector: "app=podinfo"
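
The same spec.patches field also accepts JSON 6902 patch documents, for example (a sketch, assuming a Deployment named podinfo):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  # ...omitted for brevity
  patches:
    - patch: |-
        - op: add
          path: /metadata/annotations/environment
          value: production
      target:
        kind: Deployment
        name: podinfo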

Strategic Merge patches

To add Kustomize patchesStrategicMerge entries to the configuration, spec.patchesStrategicMerge can be defined with a list of strategic merge patches in YAML format:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  # ...omitted for brevity
  patchesStrategicMerge:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: podinfo
    spec:
      template:
        spec:
          serviceAccount: custom-service-account

JSON 6902 patches

To add Kustomize patchesJson6902 entries to the configuration, and patch resources using the JSON 6902 standard, the spec.patchesJson6902 items must contain a target selector and a JSON 6902 patch document:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  # ...omitted for brevity
  patchesJson6902:
  - target:
      version: v1
      kind: Deployment
      name: podinfo
    patch:
    - op: add
      path: /metadata/annotations/key
      value: value

Images

To add Kustomize images entries to the configuration, and overwrite the name, tag or digest of container images without creating patches, spec.images can be defined:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  # ...omitted for brevity
  images:
  - name: podinfo
    newName: my-registry/podinfo
    newTag: v1
  - name: podinfo
    newTag: 1.8.0
  - name: podinfo
    newName: my-podinfo
  - name: podinfo
    digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3

Variable substitution

With spec.postBuild.substitute you can provide a map of key/value pairs holding the variables to be substituted in the final YAML manifest, after kustomize build.

With spec.postBuild.substituteFrom you can provide a list of ConfigMaps and Secrets from which the variables are loaded. The ConfigMap and Secret data keys are used as the var names.

This offers basic templating for your manifests, including support for bash string replacement functions, e.g.:

  • ${var:=default}
  • ${var:position}
  • ${var:position:length}
  • ${var/substring/replacement}

Note that the name of a variable can contain only alphanumeric and underscore characters. The controller validates the var names using this regular expression: ^[_[:alpha:]][_[:alpha:][:digit:]]*$.

Assuming you have manifests with the following variables:

apiVersion: v1
kind: Namespace
metadata:
  name: apps
  labels:
    environment: ${cluster_env:=dev}
    region: "${cluster_region}"

You can specify the variables and their values in the Kustomization definition under substitute and/or substituteFrom post build section:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
spec:
  interval: 5m
  path: "./apps/"
  postBuild:
    substitute:
      cluster_env: "prod"
      cluster_region: "eu-central-1"
    substituteFrom:
      - kind: ConfigMap
        name: cluster-vars
      - kind: Secret
        name: cluster-secret-vars
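
The referenced cluster-vars ConfigMap could look like this (a sketch; the data keys must match the var names used in your manifests):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-vars
data:
  cluster_env: "prod"
  cluster_region: "eu-central-1"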

The var values which are specified in-line with substitute take precedence over the ones in substituteFrom.

Note that if you want to avoid var substitutions in scripts embedded in ConfigMaps or container commands, you must use the format $var instead of ${var}. All undefined variables in the format ${var} will be substituted with an empty string, unless a default is provided, e.g. ${var:=default}.

You can disable the variable substitution for certain resources by either labeling or annotating them with:

kustomize.toolkit.fluxcd.io/substitute: disabled
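
For example, to leave a ConfigMap that embeds a shell script untouched (a sketch; the resource name is an assumption):

apiVersion: v1
kind: ConfigMap
metadata:
  name: bootstrap-script
  annotations:
    kustomize.toolkit.fluxcd.io/substitute: disabled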

Variable substitution happens only if at least one variable or a resource to substitute from is defined. This may cause issues if you rely on expressions that should evaluate to a default, even when no other variables are configured. To work around this, you can set an arbitrary key/value pair to enable the substitution of variables. For example:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
spec:
  ...
  postBuild:
    substitute:
      var_substitution_enabled: "true"

You can replicate the controller post-build substitutions locally using kustomize and Drone's envsubst:

$ go install github.com/drone/envsubst/cmd/envsubst

$ export cluster_region=eu-central-1
$ kustomize build ./apps/ | $GOPATH/bin/envsubst 
---
apiVersion: v1
kind: Namespace
metadata:
  name: apps
  labels:
    environment: dev
    region: eu-central-1

Remote Clusters / Cluster-API

If the kubeConfig field is set, objects will be applied, health-checked, pruned, and deleted for the default cluster specified in that KubeConfig instead of using the in-cluster ServiceAccount.

The secret defined in the kubeConfig.SecretRef must exist in the same namespace as the Kustomization. On every reconciliation, the KubeConfig bytes will be loaded from the value or value.yaml key of the secret's data, and the secret can thus be regularly updated if cluster-access-tokens have to rotate due to expiration.

This composes well with Cluster API bootstrap providers such as CAPBK (kubeadm), CAPA (AWS) and others.

To reconcile a Kustomization to a CAPI controlled cluster, put the Kustomization in the same namespace as your Cluster object, and set the kubeConfig.secretRef.name to <cluster-name>-kubeconfig:

apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: stage  # the kubeconfig Secret will contain the Cluster name
  namespace: capi-stage
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 10.100.0.0/16
    serviceDomain: stage-cluster.local
    services:
      cidrBlocks:
      - 10.200.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: stage-control-plane
    namespace: capi-stage
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: DockerCluster
    name: stage
    namespace: capi-stage
---
# ... unrelated Cluster API objects omitted for brevity ...
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: cluster-addons
  namespace: capi-stage
spec:
  interval: 5m
  path: "./config/addons/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: cluster-addons
  kubeConfig:
    secretRef:
      name: stage-kubeconfig  # Cluster API creates this for the matching Cluster

The Cluster and Kustomization can be created at the same time. The Kustomization will eventually reconcile once the cluster is available.

If you wish to target clusters created by other means than CAPI, you can create a ServiceAccount on the remote cluster, generate a KubeConfig for that account, and then create a secret on the cluster where kustomize-controller is running e.g.:

kubectl create secret generic prod-kubeconfig \
    --from-file=value.yaml=./kubeconfig

Note that the KubeConfig should be self-contained and not rely on binaries, environment, or credential files from the kustomize-controller Pod. This matches the constraints of KubeConfigs from current Cluster API providers. KubeConfigs with cmd-path in them likely won't work without a custom, per-provider installation of kustomize-controller.

Secrets decryption

In order to store secrets safely in a public or private Git repository, you can use Mozilla SOPS and encrypt your Kubernetes Secrets data with OpenPGP and age keys.

OpenPGP

Generate a GPG key without a passphrase using gnupg.
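
A key can be created non-interactively with gpg in batch mode, e.g. (a sketch; the key name and parameters are assumptions):

gpg --batch --full-generate-key <<EOF
%no-protection
Key-Type: rsa
Key-Length: 4096
Expire-Date: 0
Name-Real: my-cluster.example.com
EOF

# retrieve the key fingerprint for use with sops
gpg --list-secret-keys my-cluster.example.com

Then use sops to encrypt a Kubernetes Secret: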

sops --pgp=FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4 \
--encrypt --encrypted-regex '^(data|stringData)$' --in-place my-secret.yaml

Commit and push the encrypted file to Git.

Note that you should encrypt only the data section; encrypting the Kubernetes Secret metadata, kind or apiVersion is not supported by kustomize-controller.

Create a secret in the default namespace with the OpenPGP private key; the key file name must end with .asc for it to be detected as an OpenPGP key:

gpg --export-secret-keys --armor FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4 |
kubectl -n default create secret generic sops-gpg \
--from-file=sops.asc=/dev/stdin

Configure decryption by referencing the private key secret:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: my-secrets
  namespace: default
spec:
  interval: 5m
  path: "./"
  sourceRef:
    kind: GitRepository
    name: my-secrets
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg

Age

Generate an age key with age using age-keygen, then use sops to encrypt a Kubernetes secret:

$ age-keygen -o age.agekey
Public key: age1helqcqsh9464r8chnwc2fzj8uv7vr5ntnsft0tn45v2xtz0hpfwq98cmsg
$ sops --age=age1helqcqsh9464r8chnwc2fzj8uv7vr5ntnsft0tn45v2xtz0hpfwq98cmsg \
--encrypt --encrypted-regex '^(data|stringData)$' --in-place my-secret.yaml

Commit and push the encrypted file to Git.

Note that you should encrypt only the data section; encrypting the Kubernetes Secret metadata, kind or apiVersion is not supported by kustomize-controller.

Create a secret in the default namespace with the age private key; the key file name must end with .agekey for it to be detected as an age key:

cat age.agekey |
kubectl -n default create secret generic sops-age \
--from-file=age.agekey=/dev/stdin

Configure decryption by referencing the private key secret:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: my-secrets
  namespace: default
spec:
  interval: 5m
  path: "./"
  sourceRef:
    kind: GitRepository
    name: my-secrets
  decryption:
    provider: sops
    secretRef:
      name: sops-age

Kustomize secretGenerator

SOPS encrypted data can be stored as a base64 encoded Secret, which enables the use of Kustomize secretGenerator as follows:

$ echo "my-secret-token" | sops -e /dev/stdin > token.encrypted
$ cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

secretGenerator:
 - name: token
   files:
   - token=token.encrypted
EOF

Commit and push token.encrypted and kustomization.yaml to Git.

The kustomize-controller scans the values of Kubernetes Secrets, and when it detects that the values are SOPS encrypted, it decrypts them before applying them on the cluster.

Status

When the controller completes a Kustomization apply, it reports the result in the status sub-resource.

A successful reconciliation sets the ready condition to true and updates the revision field:

status:
  conditions:
  - lastTransitionTime: "2020-09-17T19:28:48Z"
    message: "Applied revision: master/a1afe267b54f38b46b487f6e938a6fd508278c07"
    reason: ReconciliationSucceeded
    status: "True"
    type: Ready
  lastAppliedRevision: master/a1afe267b54f38b46b487f6e938a6fd508278c07
  lastAttemptedRevision: master/a1afe267b54f38b46b487f6e938a6fd508278c07

You can wait for the kustomize-controller to complete a reconciliation with:

kubectl wait kustomization/backend --for=condition=ready
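
The wait can be bounded with the --timeout flag, e.g.:

kubectl wait kustomization/backend --for=condition=ready --timeout=2m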

On success, the controller logs the applied Kubernetes objects:

{
  "level": "info",
  "ts": "2020-09-17T07:27:11.921Z",
  "logger": "controllers.Kustomization",
  "msg": "Kustomization applied in 1.436096591s",
  "kustomization": "default/backend",
  "output": {
    "service/backend": "created",
    "deployment.apps/backend": "created",
    "horizontalpodautoscaler.autoscaling/backend": "created"
  }
}

A failed reconciliation sets the ready condition to false:

status:
  conditions:
  - lastTransitionTime: "2020-09-17T07:26:48Z"
    message: "The Service 'backend' is invalid: spec.type: Unsupported value: 'Ingress'"
    reason: ValidationFailed
    status: "False"
    type: Ready
  lastAppliedRevision: master/a1afe267b54f38b46b487f6e938a6fd508278c07
  lastAttemptedRevision: master/7c500d302e38e7e4a3f327343a8a5c21acaaeb87

Note that the last applied revision is updated only on a successful reconciliation.

When a reconciliation fails, the controller logs the error and issues a Kubernetes event:

{
  "level": "error",
  "ts": "2020-09-17T07:27:11.921Z",
  "logger": "controllers.Kustomization",
  "kustomization": "default/backend",
  "error": "The Service 'backend' is invalid: spec.type: Unsupported value: 'Ingress'"
}