Prow

tektoncd uses Prow for CI automation.

See the Prow docs and the community docs for more on Prow and the PR process.

Creating the Prow cluster

If you need to re-create the Prow cluster (which includes the boskos instance running inside it), you will need to:

  1. Create a new cluster
  2. Create the necessary secrets
  3. Apply the new Prow and boskos configuration
  4. Set up ingress
  5. Update the GitHub webhook(s)

Creating the cluster

To create a cluster of the right size, using the same GCP project:

export PROJECT_ID=tekton-releases
export CLUSTER_NAME=tekton-plumbing

gcloud container clusters create $CLUSTER_NAME \
 --scopes=cloud-platform \
 --enable-basic-auth \
 --issue-client-certificate \
 --project=$PROJECT_ID \
 --zone=us-central1-a \
 --machine-type=n1-standard-4 \
 --image-type=cos \
 --num-nodes=8 \
 --cluster-version=latest
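
Once the cluster is up, point kubectl at it before running any of the kubectl commands below (using the same variables as above):

gcloud container clusters get-credentials $CLUSTER_NAME \
 --zone=us-central1-a \
 --project=$PROJECT_ID

# Sanity check: the new nodes should be listed
kubectl get nodes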

Creating the secrets

In order to operate, Prow needs the following secrets, referred to by these names in our config:

  • GCP secret: test-account is a token for the service account prow-account@tekton-releases.iam.gserviceaccount.com. This account can interact with GCP resources such as uploading Prow results to GCS (which is done directly from the containers started by Prow, configured in config.yaml) and interacting with boskos clusters.
  • GitHub secrets: hmac-token, for authenticating GitHub webhooks, and oauth-token, a GitHub access token for tekton-robot, used by Prow itself as well as by containers started by Prow via the Prow config. See the GitHub secret Prow docs.
  • Nightly release secret: nightly-account, a token for the nightly-release GCP service account.

Assuming each secret is available as a local yaml file, apply them with:
kubectl apply -f oauth-token.yaml
kubectl apply -f hmac-token.yaml
kubectl apply -f gcp-token.yaml
kubectl apply -f nightly.yaml

To verify that you have created all the secrets, look for referenced secrets and service accounts in the Prow setup, the Prow config, and the boskos config.
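
Each of these is a standard Kubernetes Secret. For reference, a minimal sketch of what hmac-token.yaml might contain (upstream Prow expects the token under the hmac key; the value shown is a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: hmac-token
stringData:
  # Generate a fresh token with e.g.: openssl rand -hex 20
  hmac: <your-hmac-token>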

Start it

Apply the Prow and boskos configuration:

# Deploy boskos
kubectl apply -f boskos/boskos.yaml # Must be applied first to create the namespace
kubectl apply -f boskos/boskos-config.yaml
kubectl apply -f boskos/storage-class.yaml

# Deploy Prow
kubectl apply -f prow/prow.yaml

# Update Prow with the right configuration
kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
kubectl create configmap plugins --from-file=plugins.yaml=prow/plugins.yaml --dry-run -o yaml | kubectl replace configmap plugins -f -
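
To sanity-check that everything came up (assuming, per the starter.yaml this setup is based on, that the Prow components run in the default namespace and boskos in its own namespace):

# Prow components (deck, hook, tide, etc.)
kubectl get pods

# boskos, in the namespace created by boskos/boskos.yaml
kubectl get pods --namespace=boskos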

Ingress

To get ingress working properly, you must:

  • Install and configure cert-manager. cert-manager can be installed via Helm using this guide.
  • Apply the ingress resource and update the prow.tekton.dev DNS configuration.

To apply the ingress resource:

# Apply the ingress resource, configured to use `prow.tekton.dev`
kubectl apply -f prow/ingress.yaml

To see the IP of the ingress in the new cluster:

kubectl get ingress ing

You should be able to navigate to this endpoint in your browser and see the Prow landing page.
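
You can also probe it from the command line (substituting the ingress IP from the previous step; this just checks that Deck answers):

curl -sI http://<ingress-ip>/ | head -n 1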

Then you can update https://prow.tekton.dev to point at the cluster ingress address. (It is not clear who has access to this domain name registration, possibly someone in the Linux Foundation; dlorenc@ can provide more info.)

Update GitHub webhook

You will need to configure GitHub's webhook(s) to point at the ingress of the new Prow cluster. (Or you can use the domain name.)

For tektoncd this is configured at the Org level.

  • github.com/tektoncd -> Settings -> Webhooks -> http://some-ingress-ip/hook

Update the value of the webhook to http://ingress-address/hook (see the Ingress section above for how to get the ingress IP).
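
To double-check the configured webhooks without clicking through the UI, you can list them via the GitHub API (a sketch; this endpoint requires a token with the admin:org_hook scope):

curl -s -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/orgs/tektoncd/hooks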

Updating Prow itself

Prow has been installed by taking the starter.yaml and modifying it for our needs.

Updating (e.g. bumping the versions of the images being used) requires:

  1. If you are feeling cautious and motivated, back up the current config values by hand (see prow.yaml for the values that will be changed).

  2. Manually update the image values and apply any other config changes found in the starter.yaml to our prow.yaml.

  3. Update the utility_images in our config.yaml if the version of the plank component is changed.

  4. Apply the new configuration with:

     # Step 1: Configure kubectl to use the cluster; this doesn't have to be via gcloud, but gcloud makes it easy
     gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases
    
     # Step 2: Update Prow itself
     kubectl apply -f prow/prow.yaml
    
     # Step 3: Update the configuration used by Prow
     kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
    
     # Step 4: Remember to configure kubectl to connect to your regular cluster!
     gcloud container clusters get-credentials ...
  5. Verify that the changes are working by opening a PR and manually looking at the logs of each check, in case Prow has gotten into a state where failures are being reported as successes.
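
Before applying (step 4 above), you can preview what will actually change in the cluster (kubectl diff is available in reasonably recent kubectl versions):

kubectl diff -f prow/prow.yaml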

These values have been removed from the original starter.yaml:

  • The ConfigMap values plugins and config because they are generated from config.yaml and plugins.yaml
  • The Services which were manually configured with a ClusterIP and other routing information (deck, tide, hook)
  • The Ingress ing - Configuration for this is in ingress.yaml
  • The statusreconciler Deployment, etc. - Created #54 to investigate adding this.
  • The Role values that give pod permissions in the default namespace as well as test-pods - the intention seems to be that test-pods be used to run the test pods themselves, but we don't currently have that configured in our config.yaml.

Tekton Pipelines with Prow

Tekton Pipelines is also installed in the prow cluster so that Prow can trigger the execution of PipelineRuns.

Since Prow only works with select versions of Tekton Pipelines, the version currently installed in the cluster is v0.3.1:

kubectl apply --filename https://storage.googleapis.com/tekton-releases/previous/v0.3.1/release.yaml
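
To verify the installation (Tekton Pipelines deploys its components into the tekton-pipelines namespace):

kubectl get pods --namespace tekton-pipelines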

See also Tekton Pipelines installation instructions.

Nightly Tekton Pipelines release

The prow configuration includes a periodic job which invokes the Tekton Pipelines nightly release Pipeline.

Hello World Pipeline

Since Prow + Pipelines in this org are a WIP (see #922), the only job (besides nightly releases) that is currently configured is the hello scott Pipeline.

This Pipeline (special-hi-scott-pipeline) is executed on every PR to this repo (plumbing) via the try-out-prow-plus-tekton Prow job.
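
For reference, wiring a Prow job to a Pipeline happens in config.yaml; a rough sketch of what such an entry might look like (the exact agent name and fields depend on the Prow version in use; the job and pipeline names below are the ones mentioned above):

presubmits:
  tektoncd/plumbing:
  - name: try-out-prow-plus-tekton
    agent: tekton-pipeline
    pipeline_run_spec:
      pipelineRef:
        name: special-hi-scott-pipeline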

Updating Prow configuration

TODO(#1) Apply config.yaml changes automatically

Changes to config.yaml are not automatically reflected in the Prow cluster and must be manually applied.

# Step 1: Configure kubectl to use the cluster; this doesn't have to be via gcloud, but gcloud makes it easy
gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases

# Step 2: Update the configuration used by Prow
kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -

# Step 3: Remember to configure kubectl to connect to your regular cluster!
gcloud container clusters get-credentials ...
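
Optionally, before applying, you can validate the configuration with Prow's checkconfig tool (a sketch using the published image; adjust the tag to match the Prow version in use):

docker run --rm -v "$(pwd)/prow:/prow" gcr.io/k8s-prow/checkconfig:latest \
  --config-path=/prow/config.yaml \
  --plugin-config=/prow/plugins.yaml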