
Developer Instructions

This document covers the basics needed to work with the Kyverno codebase.

It contains instructions to build, run, and test Kyverno.

Open the project in a devcontainer (recommended)

  • Clone the project to your local machine.

  • Make sure that you have the Visual Studio Code editor installed on your system.

  • Make sure that you have WSL (Ubuntu preferred) and Docker installed on your system, and in WSL too (the docker.sock UNIX socket file is required for the devcontainer to communicate with Docker running on the host machine); see the sanity check after this list.

  • Open the project in Visual Studio Code. Once the project is open, press F1, type "wsl", and click on "Reopen in WSL".

  • If you haven't already done so, install the Dev Containers extension in Visual Studio Code.

  • Once the extension is installed, you should see a green icon in the bottom left corner of the window.

  • After you have installed the Dev Containers extension, it should automatically detect the .devcontainer folder inside the project opened in WSL and prompt you to reopen the project in a container.

  • If it doesn't prompt you, press Ctrl + Shift + P, search for "Reopen in Container", and click on it.

  • If everything goes well, the project should open in your devcontainer.

  • Then follow the steps below to configure the project.
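Before reopening in the container, you can run a quick sanity check from a WSL shell to confirm the Docker socket is reachable (this assumes docker is on your PATH inside WSL):

# confirm the Docker UNIX socket exists
ls -l /var/run/docker.sock

# confirm Docker responds from inside WSL
docker version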

Tools

Building and/or testing Kyverno requires additional tooling.

We use make to simplify installing the tools we use.

Tools are installed in the .tools folder when possible; this keeps installed tools local to the Kyverno repository. The .tools folder is ignored by git, and binaries should not be committed.

Note: If you don't install tools, they will be downloaded/installed as necessary when running make targets.

You can manually install tools by running:

make install-tools

To remove installed tools, run:

make clean-tools

Building local binaries

The Kyverno repository contains code for three different binaries:

  • kyvernopre: Binary to update/clean up existing resources in clusters. This is typically run as an init container before the Kyverno controller starts.
  • kyverno: The Kyverno controller binary.
  • cli: The Kyverno command line interface.

Note: You can build all binaries at once by running make build-all.

Building kyvernopre locally

To build the kyvernopre binary on your local system, run:

make build-kyverno-init

The binary should be created at ./cmd/kyverno-init/kyvernopre.

Building kyverno locally

To build the kyverno binary on your local system, run:

make build-kyverno

The binary should be created at ./cmd/kyverno/kyverno.

Building cli locally

To build the cli binary on your local system, run:

make build-cli

The binary should be created at ./cmd/cli/kubectl-kyverno/kubectl-kyverno.

Building local images

In the same spirit as building local binaries, you can build local docker images.

ko is used to build images; please refer to Building local images with ko.

Image tags

Building images uses repository tags. To fetch repository tags into your fork, run the following commands:

git remote add upstream https://github.com/kyverno/kyverno
git fetch upstream --tags

Building local images with ko

When building local images with ko, you can't specify the registry used to create the image names; it will always be ko.local.

Note: You can build all local images at once by running make ko-build-all.

Building kyvernopre image locally

To build the kyvernopre image on your local system, run:

make ko-build-kyverno-init

The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/initcontainer.

Building kyverno image locally

To build the kyverno image on your local system, run:

make ko-build-kyverno

The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/kyverno.

Building cli image locally

To build the cli image on your local system, run:

make ko-build-cli

The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/cli/kubectl-kyverno.

Pushing images

Pushing images is very similar to building local images, except that built images will be published to a remote image registry.

ko is used to build and publish images; please refer to Pushing images with ko.

When pushing images you can specify the registry you want to publish images to by setting the REGISTRY environment variable (default value is ghcr.io).
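For example, a hypothetical invocation publishing to a custom registry (registry.example.com is a placeholder):

# publish the kyverno image to a custom registry (placeholder host)
REGISTRY=registry.example.com make ko-publish-kyverno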

Images tagging strategy

When publishing images, we are using the following strategy:

  • All published images are tagged with latest. Images tagged with latest should not be considered stable and can come from multiple release branches or main.
  • In addition to latest, dev images are tagged with the pattern <major>.<minor>-dev-N-<git hash>, where N is a two-digit number starting at one for each major-minor combination and incremented by one on each subsequent tagged image.
  • In addition to latest, release images are tagged with the pattern <major>.<minor>.<patch>-<pre release>. The pre release part is optional and only applies to pre releases (-beta.1, -rc.2, ...). Illustrative tags are shown below.
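For illustration, assuming version 1.12, a third dev build at commit abc1234, and an rc.2 pre release (all values hypothetical), published tags would look like:

latest
1.12-dev-03-abc1234    # dev image
1.12.0                 # release image
1.12.0-rc.2            # pre release image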

Pushing images with ko

Authenticating to the remote registry is done automatically in the Makefile with ko login.

To allow authentication you will need to set REGISTRY_USERNAME and REGISTRY_PASSWORD environment variables before invoking targets responsible for pushing images.

Note: You can push all images at once by running make ko-publish-all or make ko-publish-all-dev.
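For example, a hypothetical session (the username and token values are placeholders):

# export credentials used by ko login (placeholder values)
export REGISTRY_USERNAME=my-user
export REGISTRY_PASSWORD=my-token

# push all dev images
make ko-publish-all-dev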

Pushing kyvernopre image

To push the kyvernopre image to a remote registry, run:

# push stable image
make ko-publish-kyverno-init

or

# push dev image
make ko-publish-kyverno-init-dev

The resulting image should be available remotely, named ghcr.io/kyverno/kyvernopre (by default, when the REGISTRY environment variable is not set).

Pushing kyverno image

To push the kyverno image to a remote registry, run:

# push stable image
make ko-publish-kyverno

or

# push dev image
make ko-publish-kyverno-dev

The resulting image should be available remotely, named ghcr.io/kyverno/kyverno (by default, when the REGISTRY environment variable is not set).

Pushing cli image

To push the cli image to a remote registry, run:

# push stable image
make ko-publish-cli

or

# push dev image
make ko-publish-cli-dev

The resulting image should be available remotely, named ghcr.io/kyverno/kyverno-cli (by default, when the REGISTRY environment variable is not set).

Deploying a local build

After building local images, it is often useful to deploy them in a local cluster.

We use KinD to create local clusters easily, and provide make targets for the steps below:

Create a local cluster

If you already have a local KinD cluster running, you can skip this step.

To create a local KinD cluster, run:

make kind-create-cluster

You can override the Kubernetes version by setting the KIND_IMAGE environment variable (default value is kindest/node:v1.29.1).

You can also override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).
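For example, a hypothetical invocation overriding both (the node image version is illustrative):

# create a KinD cluster with a custom node image and cluster name
KIND_IMAGE=kindest/node:v1.28.0 KIND_NAME=kyverno make kind-create-cluster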

Build and load local images

To build local images and load them on a local KinD cluster, run:

# build kyvernopre image and load it in KinD cluster
make kind-load-kyverno-init

or

# build kyverno image and load it in KinD cluster
make kind-load-kyverno

or

# build kyvernopre and kyverno images and load them in KinD cluster
make kind-load-all

You can override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).

Deploy with helm

To build local images, load them on a local KinD cluster, and deploy helm charts, run:

# build images, load them in KinD cluster and deploy kyverno helm chart
make kind-deploy-kyverno

or

# deploy kyverno-policies helm chart
make kind-deploy-kyverno-policies

or

# build images, load them in KinD cluster and deploy helm charts
make kind-deploy-all

This will build local images, load them into every node of the KinD cluster, and deploy the kyverno and/or kyverno-policies helm charts in the cluster (overriding image repositories and tags).

You can override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).

Code generation

We use code generation tools to create the portions of code described in the sections below.

Note: You can run make codegen-all to build all generated code at once.

Generating kubernetes API client

Based on the APIs' golang code definitions, you can generate the corresponding Kubernetes client by running:

# generate clientset, listers and informers
make codegen-client-all

or

# generate clientset
make codegen-client-clientset

or

# generate listers
make codegen-client-listers

or

# generate informers
make codegen-client-informers

This will output generated files in the /pkg/client package.

Generating API deep copy functions

Based on the APIs' golang code definitions, you can generate the corresponding deep copy functions by running:

# generate all deep copy functions
make codegen-deepcopy-all

or

# generate kyverno deep copy functions
make codegen-deepcopy-kyverno

or

# generate policy reports deep copy functions
make codegen-deepcopy-report

This will output files named zz_generated.deepcopy.go in every API package.

Generating CRD definitions

Based on the APIs' golang code definitions, you can generate the corresponding CRD manifests by running:

# generate all CRDs
make codegen-crds-all

or

# generate Kyverno CRDs
make codegen-crds-kyverno

or

# generate policy reports CRDs
make codegen-crds-report

This will output CRD manifests in /config/crds.

Generating API docs

Based on the APIs' golang code definitions, you can generate the corresponding API reference docs by running:

# generate API docs
make codegen-api-docs

This will output API docs in /docs/crd.

Generating helm charts CRDs

Based on the APIs' golang code definitions, you can generate the corresponding CRD definitions for helm charts by running:

# generate helm CRDs
make codegen-helm-crds

This will output CRD templates in /charts/kyverno/templates/crds.yaml.

Note: You can run make codegen-helm-all to generate CRDs and docs at once.

Generating helm charts docs

Based on the helm charts' default values, you can generate the corresponding helm chart docs by running:

# generate helm docs
make codegen-helm-docs

This will output docs in each helm chart's respective README.md.

Note: You can run make codegen-helm-all to generate CRDs and docs at once.

Debugging local code

Running Kyverno on a local machine without deploying it in a remote cluster can be useful, especially for debugging purposes. You can run Kyverno locally or in your IDE of choice with a few steps:

  1. Create a local cluster
    • You can create a simple cluster with KinD with make kind-create-cluster
  2. Deploy Kyverno manifests except the Kyverno Deployment
    • Kyverno is going to run on your local machine, so it should not run in the cluster at the same time
    • You can deploy the manifests by running make debug-deploy
  3. There are multiple environment variables that need to be configured. The variables can be found here. Their values can be set using the command export NAME=value
  4. To run Kyverno locally against the remote cluster you will need to provide --kubeconfig and --serverIP arguments:
    • --kubeconfig must point to your kubeconfig file (usually ~/.kube/config)
    • --serverIP must be set to <local ip>:9443 (<local ip> is the private IP address of your local machine)
    • --backgroundServiceAccountName must be set to system:serviceaccount:kyverno:kyverno-background-controller
    • --caSecretName must be set to kyverno-svc.kyverno.svc.kyverno-tls-ca
    • --tlsSecretName must be set to kyverno-svc.kyverno.svc.kyverno-tls-pair

Once you are ready with the steps above, Kyverno can be started locally with:

go run ./cmd/kyverno/ --kubeconfig ~/.kube/config --serverIP=<local-ip>:9443 --backgroundServiceAccountName=system:serviceaccount:kyverno:kyverno-background-controller --caSecretName=kyverno-svc.kyverno.svc.kyverno-tls-ca  --tlsSecretName=kyverno-svc.kyverno.svc.kyverno-tls-pair

You will need to adapt those steps to run debug sessions in your IDE of choice, but the general idea remains the same.

Profiling

Enable profiling

To profile the Kyverno application running inside a Kubernetes pod, set the --profile flag to true in install.yaml. The default profiling port is 6060, and it can be configured via --profile-port.

  --profile
        Set this flag to 'true', to enable profiling.
  --profile-port string
        Enable profiling at given port, defaults to 6060. (default "6060")
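For example, a minimal sketch of the relevant container arguments in the Kyverno Deployment (surrounding manifest fields omitted; the container name and placement are assumptions, so adapt this to your install.yaml):

containers:
  - name: kyverno
    args:
      - --profile=true
      - --profile-port=6060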

Expose the endpoint on a local port

You can access the application in the pod by port forwarding with kubectl, for example:

$ kubectl -n kyverno get pod
NAME                                             READY   STATUS      RESTARTS       AGE
kyverno-admission-controller-57df6c565f-pxpnh    1/1     Running     0              20s
kyverno-background-controller-766589695-dhj9m    1/1     Running     0              20s
kyverno-cleanup-controller-54466dfbc6-5mlrc      1/1     Running     0              19s
kyverno-cleanup-update-requests-28695530-ft975   1/1     Running     0              19s
kyverno-reports-controller-76c49549f4-tljwm      1/1     Running     0              20s

Check the port of the pod you'd like to forward using the command below.

$ kubectl get pod kyverno-admission-controller-57df6c565f-pxpnh -n kyverno  --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
9443

Use the port exposed above to run port-forward with the command below.

$ kubectl -n kyverno port-forward kyverno-admission-controller-57df6c565f-pxpnh 6060:9443
Forwarding from 127.0.0.1:6060 -> 9443
Forwarding from [::1]:6060 -> 9443

The HTTP endpoint will now be available as a local port.

Alternatively, use a Service of the type LoadBalancer to expose Kyverno. An example Service manifest is given below:

apiVersion: v1
kind: Service
metadata:
  name: pproc-service
  namespace: kyverno
spec:
  selector:
    app: kyverno
  ports:
    - protocol: TCP
      port: 6060
      targetPort: 6060
  type: LoadBalancer

Generate the data

You can then capture the memory profile with curl, piping the data to a file:

$ curl http://localhost:6060/debug/pprof/heap  > heap.pprof

Capture the CPU profile with curl, piping the data to a file:

curl "http://localhost:6060/debug/pprof/profile?seconds=60" > cpu.pprof

Analyze the data

To analyze the data:

go tool pprof heap.pprof
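As a quick illustration (assuming the cpu.pprof file captured above), the interactive top command lists the hottest functions, and the -http flag serves pprof's web UI instead; the port is arbitrary:

# inspect the CPU profile interactively, then type "top" at the (pprof) prompt
go tool pprof cpu.pprof

# or serve the pprof web UI on a local port
go tool pprof -http=:8080 cpu.pprof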

Read more about profiling

API Design

See docs/dev/api

Controllers Design

See docs/dev/controllers

Logging

See docs/dev/logging/logging.md

Feature Flags

See docs/dev/feature-flags

Reports Design

See docs/dev/reports

Troubleshooting

See docs/dev/troubleshooting

Selecting Issues

When you are ready to contribute, you can select an issue from the Good First Issues list.