tektoncd uses Prow for CI automation.
- Prow runs in the tektoncd GCP project
- Ingress is configured to `prow.tekton.dev`
- Prow results are displayed via gubernator
- Instructions for creating the Prow cluster
- Instructions for updating Prow and Prow's Tekton Pipelines instance
- Instructions for updating Prow configuration
See the Prow docs and the community docs for more on Prow and the PR process.
If you need to re-create the Prow cluster (which includes the boskos running inside), you will need to:
- Create a new cluster
- Create the necessary secrets
- Apply the new Prow and boskos configuration
- Set up ingress
- Update GitHub webhook(s)
To create a cluster of the right size, using the same GCP project:
```bash
export PROJECT_ID=tekton-releases
export CLUSTER_NAME=tekton-plumbing
gcloud container clusters create $CLUSTER_NAME \
  --scopes=cloud-platform \
  --enable-basic-auth \
  --issue-client-certificate \
  --project=$PROJECT_ID \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --image-type=cos \
  --num-nodes=8 \
  --cluster-version=latest
```
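Once the cluster exists, point `kubectl` at it (using the same variables as above; this mirrors the `get-credentials` commands used later in this doc):

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --zone us-central1-a --project $PROJECT_ID
```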
In order to operate, Prow needs the following secrets, which are referred to by the following names in our config:
- GCP secret `test-account`: a token for the service account `prow-account@tekton-releases.iam.gserviceaccount.com`. This account can interact with GCP resources such as uploading Prow results to GCS (which is done directly from the containers started by Prow, configured in config.yaml) and interacting with boskos clusters.
- GitHub secrets: `hmac-token` for authenticating GitHub and `oauth-token`, which is a GitHub access token for `tekton-robot`, used by Prow itself as well as by containers started by Prow via the Prow config. See the GitHub secret Prow docs.
- Nightly release secret `nightly-account`: a token for the nightly-release GCP service account
Apply the secret manifests to the cluster:

```bash
kubectl apply -f oauth-token.yaml
kubectl apply -f hmac-token.yaml
kubectl apply -f gcp-token.yaml
kubectl apply -f nightly.yaml
```
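If you need to regenerate these manifests from raw token and key files, something like the following works. This is only a sketch: the key names follow the upstream Prow defaults (`hmac`, `oauth`, `service-account.json`) and the local file paths are hypothetical, so check prow.yaml for the names actually mounted.

```bash
# Regenerate the secret manifests from local credential files (paths are placeholders)
kubectl create secret generic hmac-token --from-file=hmac=/path/to/hmac-token \
  --dry-run -o yaml > hmac-token.yaml
kubectl create secret generic oauth-token --from-file=oauth=/path/to/oauth-token \
  --dry-run -o yaml > oauth-token.yaml
kubectl create secret generic test-account --from-file=service-account.json=/path/to/prow-account.json \
  --dry-run -o yaml > gcp-token.yaml
kubectl create secret generic nightly-account --from-file=service-account.json=/path/to/nightly-account.json \
  --dry-run -o yaml > nightly.yaml
```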
To verify that you have created all the secrets, you can look for referenced secrets and service accounts in the Prow setup, the Prow config and the boskos config.
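A quick way to cross-check is to compare what exists in the cluster against what the manifests reference (a sketch; adjust if your secrets live outside the default namespace):

```bash
# Secrets and service accounts currently in the cluster
kubectl get secrets
kubectl get serviceaccounts
# Secret names referenced by the manifests
grep -hE 'secretName|secret:' prow/prow.yaml prow/config.yaml boskos/boskos.yaml | sort -u
```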
Apply the Prow and boskos configuration:
```bash
# Deploy boskos
kubectl apply -f boskos/boskos.yaml # Must be applied first to create the namespace
kubectl apply -f boskos/boskos-config.yaml
kubectl apply -f boskos/storage-class.yaml

# Deploy Prow
kubectl apply -f prow/prow.yaml

# Update Prow with the right configuration
kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
kubectl create configmap plugins --from-file=plugins.yaml=prow/plugins.yaml --dry-run -o yaml | kubectl replace configmap plugins -f -
```
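To confirm that the deployments came up, something like this can be used (the component names reflect the upstream Prow defaults; the namespace layout may differ):

```bash
kubectl get pods --all-namespaces | grep -E 'deck|hook|tide|plank|sinker|horologium|boskos'
```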
To get ingress working properly, you must:
- Install and configure cert-manager. `cert-manager` can be installed via `Helm` using this guide (a Helm sketch follows this list).
- Apply the ingress resource and update the `prow.tekton.dev` DNS configuration.
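For reference, a Helm-based cert-manager install typically looks like the following. This is only a sketch: the chart version, flags and namespace may differ from what the linked guide describes.

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
# Install cert-manager into its own namespace, including its CRDs
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```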
To apply the ingress resource:
```bash
# Apply the ingress resource, configured to use `prow.tekton.dev`
kubectl apply -f prow/ingress.yaml
```
To see the IP of the ingress in the new cluster:
```bash
kubectl get ingress ing
```
You should be able to navigate to this endpoint in your browser and see the Prow landing page.
Then you can update https://prow.tekton.dev to point at the cluster's ingress address. (Not sure who has access to this domain name registration, someone in the Linux Foundation? dlorenc@ can provide more info.)
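Once the DNS record has been updated you can confirm it resolves to the ingress IP (assuming `dig` is available locally):

```bash
dig +short prow.tekton.dev
# This should print the same address reported by `kubectl get ingress ing`
```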
You will need to configure GitHub's webhook(s) to point at the ingress of the new Prow cluster (or you can use the domain name). For tektoncd this is configured at the Org level.

- github.com/tektoncd -> Settings -> Webhooks -> `http://some-ingress-ip/hook`

Update the value of the webhook with `http://ingress-address/hook` (see kicking the tires to get the ingress IP).
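After updating the webhook you can do a quick reachability check of the hook endpoint (a sketch only; the authoritative signal is the "Recent Deliveries" section on the webhook's settings page):

```bash
# Any HTTP status line (rather than a connection error) means the endpoint is reachable
curl -sI http://prow.tekton.dev/hook | head -n 1
```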
Prow has been installed by taking the starter.yaml and modifying it for our needs.
Updating (e.g. bumping the versions of the images being used) requires:
- If you are feeling cautious and motivated, manually back up the config values by hand (see prow.yaml to see what values will be changed). One way to do this is sketched after this list.
- Manually update the `image` values and apply any other config changes found in the starter.yaml to our prow.yaml.
- Update the `utility_images` in our config.yaml if the version of the `plank` component is changed.
- Apply the new configuration with:
```bash
# Step 1: Configure kubectl to use the cluster, doesn't have to be via gcloud but gcloud makes it easy
gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases

# Step 2: Update Prow itself
kubectl apply -f prow/prow.yaml

# Step 3: Update the configuration used by Prow
kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -

# Step 4: Remember to configure kubectl to connect to your regular cluster!
gcloud container clusters get-credentials ...
```
- Verify that the changes are working by opening a PR and manually looking at the logs of each check, in case Prow has gotten into a state where failures are being reported as successes.
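One way to take the manual backup mentioned above (a sketch; it simply snapshots the live objects before re-applying):

```bash
# Snapshot the live config and deployments before applying changes
kubectl get configmap config plugins -o yaml > backup-configmaps.yaml
kubectl get deployments -o yaml > backup-deployments.yaml
```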
These values have been removed from the original starter.yaml:
- The `ConfigMap` values `plugins` and `config`, because they are generated from config.yaml and plugins.yaml
- The `Services` which were manually configured with a `ClusterIP` and other routing information (`deck`, `tide`, `hook`)
- The `Ingress` `ing` - configuration for this is in ingress.yaml
- The `statusreconciler` Deployment, etc. - created #54 to investigate adding this
- The `Role` values giving `pod` permissions in the `default` namespace as well as `test-pods` - the intention seems to be that `test-pods` be used to run the pods themselves, but we don't currently have that configured in our config.yaml
Tekton Pipelines is also installed in the `prow` cluster so that Prow can trigger the execution of `PipelineRuns`.
Since Prow only works with select versions of Tekton Pipelines, the version currently installed in the cluster is v0.3.1:
```bash
kubectl apply --filename https://storage.googleapis.com/tekton-releases/previous/v0.3.1/release.yaml
```
See also Tekton Pipelines installation instructions.
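To confirm the install, the Tekton Pipelines controller and webhook should be running (the release installs into the `tekton-pipelines` namespace):

```bash
kubectl get pods --namespace tekton-pipelines
```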
The prow configuration includes a `periodic` job which invokes the Tekton Pipelines nightly release Pipeline.
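For context, a periodic entry in a Prow config.yaml has roughly this shape; the name, interval and image below are purely illustrative and are not the actual nightly-release job:

```yaml
periodics:
- name: example-nightly-job          # illustrative name only
  interval: 24h                      # run roughly once a day
  agent: kubernetes
  spec:
    containers:
    - image: gcr.io/example/runner:latest   # illustrative image only
      command:
      - /some-entrypoint
```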
Since Prow + Pipelines in this org are a WIP (see #922), the only job (besides nightly releases) that is currently configured is the hello scott Pipeline. This Pipeline (`special-hi-scott-pipeline`) is executed on every PR to this repo (`plumbing`) via the `try-out-prow-plus-tekton` Prow job.
TODO(#1) Apply config.yaml changes automatically
Changes to config.yaml are not automatically reflected in the Prow cluster and must be manually applied.
```bash
# Step 1: Configure kubectl to use the cluster, doesn't have to be via gcloud but gcloud makes it easy
gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases

# Step 2: Update the configuration used by Prow
kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -

# Step 3: Remember to configure kubectl to connect to your regular cluster!
gcloud container clusters get-credentials ...
```