This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

Commit

Enable testing charts with test values (#4157)
* Refactor and fix shellcheck issues

* Make constants readonly and uppercase

* Remove .editorconfig

* Enable testing charts with test values

* Remove dummy chart
unguiculus authored and k8s-ci-robot committed Mar 22, 2018
1 parent f386fa9 commit b5c3e51
Showing 3 changed files with 168 additions and 120 deletions.
18 changes: 9 additions & 9 deletions REVIEW_GUIDELINES.md
@@ -4,7 +4,7 @@ Anyone is welcome to review pull requests. Besides our [technical requirements](

## Process

The process to get a pull request merged is fairly simple. First, all required tests need to pass and the contributor needs to have a signed CLA. See [Charts Testing](https://github.com/kubernetes/charts/blob/master/test/README.md) for details on our CI system and how you can provide custom values for testing. If there is a problem with some part of the test, such as a timeout issue, please contact one of the charts repository maintainers by commenting `cc @kubernetes/charts-maintainers`.

The charts repository uses the OWNERS files to provide merge access. If a chart has an OWNERS file, an approver listed in that file can approve the pull request. If the chart does not have an OWNERS file, an approver in the OWNERS file at the root of the repository can approve the pull request.

@@ -39,7 +39,7 @@ Resources and labels should follow some conventions. The standard resource metad
name: {{ template "myapp.fullname" . }}
labels:
app: {{ template "myapp.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
chart: {{ template "myapp.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
```
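For reference, the `myapp.chart` helper used in the new `chart` label above is conventionally defined in the chart's `templates/_helpers.tpl`. A minimal sketch (the helper name and truncation follow common Helm convention and are not mandated by this commit):

```
{{/* templates/_helpers.tpl — hypothetical sketch of the chart label helper */}}
{{- define "myapp.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```

Truncating to 63 characters keeps the rendered value a valid Kubernetes label.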
@@ -87,16 +87,16 @@ We officially support compatibility with the current and the previous minor vers
## Kubernetes Native Workloads

While reviewing charts that contain workloads such as [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), [DaemonSets](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) and [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/), the points below should be considered. These are best practices rather than strict requirements.

1. Any workloads that are stateless and long-running (servers) in nature are to be created as Deployments. Deployments in turn create ReplicaSets.
2. Unless there is a compelling reason, ReplicaSets and ReplicationControllers should be avoided as workload types.
3. Workloads that are stateful in nature, such as databases, key-value stores, message queues, and in-memory caches, are to be created as StatefulSets.
4. It is recommended that Deployments and StatefulSets configure their workloads with a [Pod Disruption Budget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) for high availability.
5. For workloads that replicate data, such as databases and key-value stores, it is recommended to configure inter-pod anti-affinity.
6. It is recommended to configure a default workload update strategy that is suitable for the chart.
7. Batch workloads are to be created using Jobs.
8. It is best to always create workloads with the latest supported [API version](https://v1-8.docs.kubernetes.io/docs/api-reference/v1.8/), as older versions are either deprecated or soon to be deprecated.
9. It is generally advisable to leave [resource limits](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) configurable rather than hard-coding them, unless the workload requires a bare minimum quantity to function.
10. As much as possible, complex pre-app setup should be configured using [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/).
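As an illustration of item 4, a minimal Pod Disruption Budget template might look like the following. This is a sketch only: the `myapp` named templates and the `podDisruptionBudget.minAvailable` value are hypothetical, and `policy/v1beta1` was the current API group at the time of this commit.

```
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: {{ template "myapp.fullname" . }}
  labels:
    app: {{ template "myapp.name" . }}
    release: {{ .Release.Name }}
spec:
  minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
  selector:
    matchLabels:
      app: {{ template "myapp.name" . }}
      release: {{ .Release.Name }}
```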

More [configuration](https://kubernetes.io/docs/concepts/configuration/overview/) best practices.
24 changes: 15 additions & 9 deletions test/README.md
@@ -27,32 +27,38 @@ The configuration of the Pull Request trigger is [in the config.json](https://gi

This snippet tells Test Infra to run the [test/e2e.sh](https://github.com/kubernetes/charts/blob/master/test/e2e.sh)
when testing is triggered on a pull request. The e2e.sh script will use the [Charts test image](https://github.com/kubernetes/charts/blob/master/test/Dockerfile)
to run the [test/changed.sh](https://github.com/kubernetes/charts/blob/master/test/changed.sh) script. This script
is the main logic for validation of a pull request. It intends to only test charts that have changed in this PR.

The logic is as follows:

1. [Get credentials for the Kubernetes cluster used for testing.](https://github.com/kubernetes/charts/blob/master/test/changed.sh#L42)
1. [Install and initialize Helm](https://github.com/kubernetes/charts/blob/master/test/changed.sh#L47)
1. [For any charts that have changed](https://github.com/kubernetes/charts/blob/master/test/changed.sh#L62):
1. [Get credentials for the Kubernetes cluster used for testing.](https://github.com/kubernetes/charts/blob/master/test/changed.sh#L128)
1. [Install and initialize Helm](https://github.com/kubernetes/charts/blob/master/test/changed.sh#L143)
1. [For any charts that have changed](https://github.com/kubernetes/charts/blob/master/test/changed.sh#L161):
- Download dependent charts, if any, with `helm dep build`
- Run `helm install` in a new namespace for this PR build
- Use the [test/verify-release.sh](https://github.com/kubernetes/charts/blob/master/test/verify-release.sh) to ensure that if any pods were launched that they get to the `Running` state
- Run `helm test` on the release
- Delete the release

#### Providing Custom Test Values

Testing charts with default values may not be suitable in all cases. For instance, charts may require some values to be set that should not be part of the chart's default `values.yaml` (such as secret keys). Furthermore, it may be desirable to test a chart with different configurations.

In order to enable custom test values, create a directory `ci` in the chart's directory and add any number of `*-values.yaml` files to this directory. Only files with a suffix `-values.yaml` are considered. Instead of using the defaults, the chart is then installed and tested separately for each of these files using the `--values` flag.
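For example, a chart that wants to be tested with two configurations could ship the following layout (the file names here are hypothetical; only the `-values.yaml` suffix matters to the test script):

```
stable/myapp/
├── Chart.yaml
├── values.yaml
└── ci/
    ├── default-values.yaml
    └── ha-values.yaml
```

The chart is then installed and tested once with `--values ci/default-values.yaml` and once with `--values ci/ha-values.yaml`, instead of once with the chart defaults.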

#### Triggering

In order for the tests to be kicked off, one of the [Kubernetes members](https://github.com/orgs/kubernetes/people) must add the
"ok-to-test" label. This can also be done by commenting "/ok-to-test" on the pull request.

This check is there to ensure that PRs are spot-checked for any nefarious code. There are two things to check for:

1. No changes have been made to files in the test folder that unnecessarily alter the testing procedures.
1. The chart does not use images whose provenance cannot be traced (provenance is usually established via the sources metadata field).

## Repo Syncing

The syncing of charts to the stable and incubator repos happens from a Jenkins instance that is polling for changes
to the master branch. On each change it will use the [test/repo-sync.sh](https://github.com/kubernetes/charts/blob/master/test/repo-sync.sh)
246 changes: 144 additions & 102 deletions test/changed.sh
@@ -17,139 +17,181 @@ set -o errexit
set -o nounset
set -o pipefail
set -o xtrace
shopt -s nullglob

git remote add k8s https://github.com/kubernetes/charts
git fetch k8s master

NAMESPACE="${JOB_TYPE}-${PULL_INFO}-${BUILD_ID}"
CHANGED_FOLDERS=`git diff --find-renames --name-only $(git merge-base k8s/master HEAD) stable/ incubator/ | awk -F/ '{print $1"/"$2}' | uniq`
CURRENT_RELEASE=""
readonly NAMESPACE="${JOB_TYPE}-${PULL_INFO}-${BUILD_ID}"
readonly CHANGED_FOLDERS=$(git diff --find-renames --name-only "$(git merge-base k8s/master HEAD)" stable/ incubator/ | awk -F/ '{ print $1"/"$2 }' | uniq)

# Exit early if no charts have changed
if [ -z "$CHANGED_FOLDERS" ]; then
exit 0
if [[ -z "$CHANGED_FOLDERS" ]]; then
exit 0
fi
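The `CHANGED_FOLDERS` pipeline above reduces changed file paths to unique chart directories. A self-contained illustration with sample paths (fabricated input standing in for real `git diff --name-only` output):

```shell
# Sample changed-file paths, mimicking `git diff --name-only` output
printf 'stable/mysql/values.yaml\nstable/mysql/Chart.yaml\nincubator/foo/templates/svc.yaml\n' |
  awk -F/ '{ print $1 "/" $2 }' | uniq
# prints:
# stable/mysql
# incubator/foo
```

Note that `uniq` only collapses adjacent duplicates, which is sufficient here because `git diff` lists a chart's files together.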

# include the semvercompare function
curDir="$(dirname "$0")"
source "$curDir/semvercompare.sh"
readonly CURRENT_DIR="$(dirname "$0")"
source "$CURRENT_DIR/semvercompare.sh"

release_index=1
exitCode=0
current_release=""

# Cleanup any releases and namespaces left over from the test
function cleanup {
if [ -n "$CURRENT_RELEASE" ]; then

# Before reporting the logs from all pods provide a helm status for
# the release
helm status ${CURRENT_RELEASE} || true

# List all logs for all containers in all pods for the namespace which was
# created for this PR test run
kubectl get pods --show-all --no-headers --namespace ${NAMESPACE} | awk '{ print $1 }' | while read line; do
if [[ $line != "" ]]; then
echo "===Details from pod $line:==="

echo "...Description of pod $line:..."
kubectl describe pod --namespace ${NAMESPACE} ${line} || true
echo "...End of description for pod $line...\n"

# There can be multiple containers within a pod. We need to iterate
# over each of those
containers=`kubectl get pods --show-all -o jsonpath="{.spec.containers[*].name}" --namespace ${NAMESPACE} $line`
for cname in ${containers}; do
echo "---Logs from container $cname in pod $line:---"
kubectl logs --namespace ${NAMESPACE} -c ${cname} ${line} || true
echo "---End of logs for container $cname in pod $line---\n"
done
cleanup_release() {
if [[ -n "$current_release" ]]; then

# Before reporting the logs from all pods provide a helm status for
# the release
helm status "${current_release}" || true

# List all logs for all containers in all pods for the namespace which was
# created for this PR test run
kubectl get pods --show-all --no-headers --namespace "$NAMESPACE" | awk '{ print $1 }' | while read -r pod; do
if [[ -n "$pod" ]]; then
printf '===Details from pod %s:===\n' "$pod"

printf '...Description of pod %s:...\n' "$pod"
kubectl describe pod --namespace "$NAMESPACE" "$pod" || true
printf '...End of description for pod %s...\n\n' "$pod"

# There can be multiple containers within a pod. We need to iterate
# over each of those
containers=$(kubectl get pods --show-all -o jsonpath="{.spec.containers[*].name}" --namespace "$NAMESPACE" "$pod")
for container in $containers; do
printf -- '---Logs from container %s in pod %s:---\n' "$container" "$pod"
kubectl logs --namespace "$NAMESPACE" -c "$container" "$pod" || true
printf -- '---End of logs for container %s in pod %s---\n\n' "$container" "$pod"
done

printf '===End of details for pod %s===\n' "$pod"
fi
done

helm delete --purge "$current_release" > cleanup_log 2>&1 || true
fi
}

echo "===End of details for pod $line===\n"
fi
done
# Cleanup any namespace left over from the test
cleanup_namespace() {
kubectl delete ns "$NAMESPACE" >> cleanup_log 2>&1 || true
}
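The `awk`/`while read -r` pattern used in `cleanup_release` can be sketched in isolation with fabricated `kubectl get pods` output (no cluster required):

```shell
# Fabricated pod listing standing in for `kubectl get pods --no-headers`
printf 'pod-a 1/1 Running\npod-b 0/1 Pending\n' |
  awk '{ print $1 }' |
  while read -r pod; do
    printf '===Details from pod %s:===\n' "$pod"
  done
# prints:
# ===Details from pod pod-a:===
# ===Details from pod pod-b:===
```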

helm delete --purge ${CURRENT_RELEASE} > cleanup_log 2>&1 || true
trap 'cleanup_release; cleanup_namespace' EXIT

dosemvercompare() {
# Note, the trap and automatic exiting are disabled for the semver comparison
# because it catches its own errors. If the comparison fails exitCode is set
# to 1. So, trapping and exiting is re-enabled and then the exit is handled
trap - EXIT
set +e
semvercompare "$1"
trap 'cleanup_release; cleanup_namespace' EXIT
set -e
if [[ "$exitCode" == 1 ]]; then
exit 1
fi
kubectl delete ns ${NAMESPACE} >> cleanup_log 2>&1 || true
}
trap cleanup EXIT

function dosemvercompare {
# Note, the trap and automatic exiting are disabled for the semver comparison
# because it catches its own errors. If the comparison fails exitCode is set
# to 1. So, trapping and exiting is re-enabled and then the exit is handled
trap - EXIT
set +e
semvercompare ${1}
trap cleanup EXIT
set -e
if [ $exitCode == 1 ]; then
exit 1
fi

test_release() {
values_file="${1:-}"
current_release="$release_name-$release_index"
release_index=$((release_index + 1))

if [[ -n "$values_file" ]]; then
echo "Testing chart with values file: $values_file..."
helm install --timeout 600 --name "$current_release" --namespace "$NAMESPACE" --values "$values_file" . | tee install_output
else
echo "Chart does not provide test values. Testing chart with defaults..."
helm install --timeout 600 --name "$current_release" --namespace "$NAMESPACE" . | tee install_output
fi

"$CURRENT_DIR/verify-release.sh" "$NAMESPACE"

kubectl get pods --namespace "$NAMESPACE"
kubectl get svc --namespace "$NAMESPACE"
kubectl get deployments --namespace "$NAMESPACE"
kubectl get endpoints --namespace "$NAMESPACE"

helm test "$current_release"

if [[ -n "${VERIFICATION_PAUSE:-}" ]]; then
cat install_output
sleep "$VERIFICATION_PAUSE"
fi

cleanup_release

# Setting the current release to none to avoid the cleanup and error
# handling for a release that no longer exists.
current_release=""
}
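`test_release` relies on `"${1:-}"` so it can be called with or without a values-file argument under `set -o nounset`. The defaulting behaves like this (hypothetical function, not part of the script):

```shell
describe_run() {
  values_file="${1:-}"   # expands to the empty string when no argument is passed
  if [[ -n "$values_file" ]]; then
    echo "testing with $values_file"
  else
    echo "testing with defaults"
  fi
}

describe_run ci/smoke-values.yaml  # prints: testing with ci/smoke-values.yaml
describe_run                       # prints: testing with defaults
```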

if [ ! -f "${KUBECONFIG:=}" ];then
# Get credentials for test cluster
gcloud auth activate-service-account --key-file="${GOOGLE_APPLICATION_CREDENTIALS}"
gcloud container clusters get-credentials jenkins --project kubernetes-charts-ci --zone us-west1-a
# Get credentials for test cluster
gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"
gcloud container clusters get-credentials jenkins --project kubernetes-charts-ci --zone us-west1-a
fi

# Install and initialize helm/tiller
HELM_URL=https://storage.googleapis.com/kubernetes-helm
HELM_TARBALL=helm-v2.8.2-linux-amd64.tar.gz
INCUBATOR_REPO_URL=https://kubernetes-charts-incubator.storage.googleapis.com/
readonly HELM_URL=https://storage.googleapis.com/kubernetes-helm
readonly HELM_TARBALL=helm-v2.8.2-linux-amd64.tar.gz
readonly INCUBATOR_REPO_URL=https://kubernetes-charts-incubator.storage.googleapis.com/

pushd /opt
wget -q ${HELM_URL}/${HELM_TARBALL}
tar xzfv ${HELM_TARBALL}
PATH=/opt/linux-amd64/:$PATH
wget -q "$HELM_URL/$HELM_TARBALL"
tar xzfv "$HELM_TARBALL"
PATH="/opt/linux-amd64/:$PATH"
popd

helm init --client-only
helm repo add incubator ${INCUBATOR_REPO_URL}
helm repo add incubator "$INCUBATOR_REPO_URL"

mkdir /opt/bin
pushd /opt/bin
# Install tools to check chart versions
# Install YAML Command line reader
wget -q -O yaml https://github.com/mikefarah/yaml/releases/download/1.13.1/yaml_linux_amd64
chmod +x yaml

# Install SemVer testing tool
wget -q -O vert https://github.com/Masterminds/vert/releases/download/v0.1.0/vert-v0.1.0-linux-amd64
chmod +x vert
# Install tools to check chart versions
# Install YAML Command line reader
wget -q -O yaml https://github.com/mikefarah/yaml/releases/download/1.13.1/yaml_linux_amd64
chmod +x yaml

# Install SemVer testing tool
wget -q -O vert https://github.com/Masterminds/vert/releases/download/v0.1.0/vert-v0.1.0-linux-amd64
chmod +x vert
popd
PATH=/opt/bin/:$PATH
PATH="/opt/bin/:$PATH"

# Iterate over each of the changed charts
# Lint, install and delete
for directory in ${CHANGED_FOLDERS}; do
if [ "$directory" == "incubator/common" ]; then
continue
elif [ -d $directory ]; then
CHART_NAME=`echo ${directory} | cut -d '/' -f2`

# A semver comparison is here as well as in the circleci tests. The circleci
# tests provide almost immediate feedback to chart authors. This test is also
# re-run right before the bot merges a PR so we can make sure the chart
# version is always incremented.
dosemvercompare ${directory}
RELEASE_NAME="${CHART_NAME:0:7}-${BUILD_ID}"
CURRENT_RELEASE=${RELEASE_NAME}
helm dep build ${directory}
helm install --timeout 600 --name ${RELEASE_NAME} --namespace ${NAMESPACE} ${directory} | tee install_output
./test/verify-release.sh ${NAMESPACE}
kubectl get pods --namespace ${NAMESPACE}
kubectl get svc --namespace ${NAMESPACE}
kubectl get deployments --namespace ${NAMESPACE}
kubectl get endpoints --namespace ${NAMESPACE}
helm test ${RELEASE_NAME}

if [ -n $VERIFICATION_PAUSE ]; then
cat install_output
sleep $VERIFICATION_PAUSE
fi
helm delete --purge ${RELEASE_NAME}
# Lint, install and delete
for directory in $CHANGED_FOLDERS; do
if [[ "$directory" == "incubator/common" ]]; then
continue
elif [[ -d "$directory" ]]; then
chart_name=$(echo "$directory" | cut -d '/' -f2)

# Setting the current release to none to avoid the cleanup and error
# handling for a release that no longer exists.
CURRENT_RELEASE=""
fi
# A semver comparison is here as well as in the circleci tests. The circleci
# tests provide almost immediate feedback to chart authors. This test is also
# re-run right before the bot merges a PR so we can make sure the chart
# version is always incremented.
dosemvercompare "$directory"

helm dep build "$directory"

release_name="${chart_name:0:7}-${BUILD_ID}"

pushd "$directory"
has_test_values=

for values_file in ./ci/*-values.yaml; do
test_release "$values_file"
has_test_values=true
done

if [[ -z "$has_test_values" ]]; then
test_release
fi
popd
fi
done
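The values-file discovery inside `test_release` depends on the `shopt -s nullglob` set at the top of the script, so the glob expands to nothing (rather than a literal `./ci/*-values.yaml`) when no `ci` directory exists. A standalone sketch with hypothetical file names, run in a temporary directory:

```shell
shopt -s nullglob
workdir="$(mktemp -d)"
mkdir -p "$workdir/ci"
touch "$workdir/ci/ha-values.yaml" "$workdir/ci/smoke-values.yaml"
cd "$workdir"

has_test_values=
for values_file in ./ci/*-values.yaml; do
  echo "would install with --values $values_file"
  has_test_values=true
done

# Fall back to chart defaults only when no ci/*-values.yaml files exist
if [[ -z "$has_test_values" ]]; then
  echo "would install with chart defaults"
fi
# prints:
# would install with --values ./ci/ha-values.yaml
# would install with --values ./ci/smoke-values.yaml
```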

cleanup_namespace
