This document will walk you through deploying your own Prow instance to a new Kubernetes cluster. If you encounter difficulties, please open an issue so that we can make this process easier.
Prow runs in any Kubernetes cluster. Our `tackle` utility helps deploy it correctly, or you can perform each of the steps manually. Both of these approaches are focused on Kubernetes Engine but should work on any Kubernetes distro with no or minimal changes.
Before using `tackle` or deploying prow manually, ensure you have created a GitHub account for prow to use. Prow will ignore most GitHub events generated by this account, so it is important that this account be separate from any users or automation you wish to interact with prow. For example, you still need to do this even if you're just setting up a prow instance to work against your own personal repos.
- Ensure the bot user has the following permissions:
  - Write access to the repos you plan on handling
  - Owner access (and org membership) for the orgs you plan on handling (note it is possible to handle specific repos in an org without this)
- Create a personal access token for the GitHub bot account, adding the following scopes (more details here):
  - Must have the `public_repo` and `repo:status` scopes
  - Add the `repo` scope if you plan on handling private repos
  - Add the `admin:org_hook` scope if you plan on handling a GitHub org
- Set this token aside for later (we'll assume you wrote it to a file on your workstation at `/path/to/oauth/secret`); the sketch below shows one way to store it.
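A minimal sketch of storing the token, assuming you paste it from the GitHub UI and want it readable only by your user (the path is simply the placeholder used throughout this guide):

# Paste the token, press Enter, then Ctrl-D to finish
$ cat > /path/to/oauth/secret
# Restrict the file so only your user can read it
$ chmod 600 /path/to/oauth/secret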
Prow's `tackle` utility walks you through deploying a new instance of prow in a couple of minutes; try it out!
You need a few things:
- The `bazel` build tool installed and working
- The prow `tackle` utility. It is recommended to run it via `bazel run //prow/cmd/tackle` from the `test-infra` directory; alternatively you can install it by running `go get -u k8s.io/test-infra/prow/cmd/tackle` (in that case you would also need `go` installed and working). Note: the `tackle` utility assumes you have the `gcloud` application in your `$PATH`; if you are deploying on another cloud, skip to the Manual deployment below.
- Optionally, credentials to a Kubernetes cluster (otherwise, `tackle` will help you create one on GCP)
To install prow, run the following from the `test-infra` directory and follow the on-screen instructions:
# Ideally use https://bazel.build, alternatively try:
# go get -u k8s.io/test-infra/prow/cmd/tackle && tackle
$ bazel run //prow/cmd/tackle
This will help you through the following steps:
- Choosing a kubectl context (and creating a cluster / getting its credentials if necessary)
- Deploying prow into that cluster
- Configuring GitHub to send prow webhooks for your repos. This is where you'll provide the absolute `/path/to/oauth/secret` path.
See the Next Steps section after running this utility.
If you do not want to use the `tackle` utility above, here is the manual set of commands that `tackle` would run.
Prow runs in a kubernetes cluster, so first figure out which cluster you want to deploy prow into. If you already have a cluster created you can skip to the Create cluster role bindings step.
You can use the GCP cloud console to set up a project and create a new Kubernetes Engine cluster.
I'm assuming that the `PROJECT` and `ZONE` environment variables are set if you are using GCP. Skip this step if you are using another service to host your Kubernetes cluster.
$ export PROJECT=your-project
$ export ZONE=us-west1-a
Run the following to create the cluster. This will also set up `kubectl` to point to the new cluster on GCP.
$ gcloud container --project "${PROJECT}" clusters create prow \
--zone "${ZONE}" --machine-type n1-standard-4 --num-nodes 2
As of 1.8, Kubernetes uses Role-Based Access Control (“RBAC”) to drive authorization decisions, allowing `cluster-admin` to dynamically configure policies. To create cluster resources you need to grant a user the `cluster-admin` role in all namespaces for the cluster.
For Prow on GCP, you can use the following command.
$ kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)
For Prow on other platforms, the following command will likely work.
$ kubectl create clusterrolebinding cluster-admin-binding-"${USER}" \
--clusterrole=cluster-admin --user="${USER}"
On some platforms the `USER` variable may not map correctly to the user in-cluster. If you see an error of the following form, this is likely the case.
Error from server (Forbidden): error when creating
"config/prow/cluster/starter-gcs.yaml": roles.rbac.authorization.k8s.io "<account>" is
forbidden: attempt to grant extra privileges:
[PolicyRule{Resources:["pods/log"], APIGroups:[""], Verbs:["get"]}
PolicyRule{Resources:["prowjobs"], APIGroups:["prow.k8s.io"], Verbs:["get"]}
PolicyRule{Resources:["prowjobs"], APIGroups:["prow.k8s.io"], Verbs:["list"]}] user=&{<CLUSTER_USER>
[system:authenticated] map[]}...
Run the previous command substituting `USER` with `CLUSTER_USER` from the error message above to solve this issue.
$ kubectl create clusterrolebinding cluster-admin-binding-"<CLUSTER_USER>" \
--clusterrole=cluster-admin --user="<CLUSTER_USER>"
There are relevant docs on Kubernetes Authentication that may help if neither of the above works.
You will need two secrets to talk to GitHub. The `hmac-token` is the token that you give to GitHub for validating webhooks. Generate it using any reasonable randomness generator, e.g. `openssl rand -hex 20`.
$ openssl rand -hex 20 > /path/to/hook/secret
$ kubectl create secret generic hmac-token --from-file=hmac=/path/to/hook/secret
The `github-token` is the OAuth2 token you created above for the GitHub bot account. If you need to create one, go to https://github.com/settings/tokens.
kubectl create secret generic github-token --from-file=token=/path/to/oauth/secret
There are two sample manifests to get you started:
- `starter-s3.yaml` sets up minio as blob storage for logs and is particularly well suited to quickly get something working
- `starter-gcs.yaml` uses GCS as blob storage and requires additional configuration to set up the bucket and ServiceAccounts. See the Configure a GCS bucket section below for details.
Regardless of which object storage you choose, the below adjustments are always needed (a scripted example follows this list):
- The GitHub token, by replacing the `<<insert-token-here>>` string
- The hmac token, by replacing the `<< insert-hmac-token-here >>` string
- The domain, by replacing the `<< your-domain.com >>` string
- Optionally, you can update the `cert-manager.io/cluster-issuer:` annotation if you use cert-manager
- Your GitHub organization(s), by replacing the `<< your_github_org >>` string
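If you prefer to script these substitutions instead of editing the manifest by hand, a rough GNU sed sketch like the following can do it (the copied file name, domain, and org are illustrative placeholders):

# Work on a copy of the starter manifest so the original stays pristine
$ cp config/prow/cluster/starter-s3.yaml starter.yaml
# Substitute the placeholders in place (GNU sed syntax)
$ sed -i \
    -e "s/<<insert-token-here>>/$(cat /path/to/oauth/secret)/g" \
    -e "s/<< insert-hmac-token-here >>/$(cat /path/to/hook/secret)/g" \
    -e "s/<< your-domain.com >>/prow.example.com/g" \
    -e "s/<< your_github_org >>/my-org/g" \
    starter.yaml

If you go this route, apply the edited `starter.yaml` instead of the stock file in the next step.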
Apply the manifest you edited above by executing one of the following two commands:
kubectl apply -f config/prow/cluster/starter-s3.yaml
kubectl apply -f config/prow/cluster/starter-gcs.yaml
After a moment, the cluster components will be running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
crier-69b6bd8f48-6sg24 1/1 Running 0 9m54s
deck-7f6867c46c-j7nnh 1/1 Running 0 2m5s
deck-7f6867c46c-mkxzk 1/1 Running 0 2m5s
ghproxy-fdd45dfb6-582fh 1/1 Running 0 9m54s
hook-7cc4df66f7-r2qpl 1/1 Running 1 9m53s
hook-7cc4df66f7-shnjq 1/1 Running 1 9m53s
horologium-7976c7f597-ss86t 1/1 Running 0 9m53s
minio-d756b6477-d4w4k 1/1 Running 0 9m53s
prow-controller-manager-657767bb69-5qzhp 1/1 Running 0 9m53s
sinker-8b645d469-jjw8r 1/1 Running 0 9m53s
statusreconciler-669697d466-zqfsj 1/1 Running 0 3m11s
tide-65489c49b8-rpnn2 1/1 Running 0 3m2s
Find out your external address. It might take a couple minutes for the IP to show up.
kubectl get ingress prow
NAME CLASS HOSTS ADDRESS PORTS AGE
prow <none> prow.<<yourdomain.com>> an.ip.addr.ess 80, 443 22d
Go to that address in a web browser and verify that the "echo-test" job has a green check-mark next to it. At this point you have a prow cluster that is ready to start receiving GitHub events!
You have two options to do this:
- You can do this with the `update-hook` utility:
# Note /path/to/hook/secret and /path/to/oauth/secret from earlier secrets step
# Note the an.ip.addr.ess from previous ingress step
# Ideally use https://bazel.build, alternatively try:
# go get -u k8s.io/test-infra/experiment/update-hook && update-hook
$ bazel run //experiment/update-hook -- \
--hmac-path=/path/to/hook/secret \
--github-token-path=/path/to/oauth/secret \
--hook-url http://an.ip.addr.ess/hook \
--repo my-org/my-repo \
--repo my-whole-org \
--confirm=false # Remove =false to actually add hook
Look for the `http://an.ip.addr.ess/hook` you added above.
A green check mark (for a ping event, if you click edit and view the details of the event) suggests everything is working!
- If you do not want to use the `update-hook` utility, you can go to the GitHub web page and add the hook manually:
  - Go to your org or repo and click `Settings -> Webhooks`, then click `Add webhook`.
  - Change the `Payload URL` to the `http://an.ip.addr.ess/hook` you are planning to add.
  - Change the `Content type` to `application/json`, and change your `Secret` to the `hmac-path` secret you created above.
  - Change the trigger to `Send me **everything**`.
  - Click `Add webhook`.
If you need to configure webhooks for multiple orgs or repos, the manual process does not scale well: it is error-prone, and it is painful to replace the hmac token if it is accidentally leaked. In that case, it's recommended to use the hmac tool to automatically manage the webhooks and hmac tokens for you via declarative configuration.
You now have a working Prow cluster (Woohoo!), but it isn't doing anything interesting yet. This section will help you complete any additional setup that your instance may need.
If you want to persist logs and output in GCS, you need to follow the steps below.
When configuring Prow jobs to use the Pod utilities with `decorate: true`, job metadata, logs, and artifacts will be uploaded to a GCS bucket in order to persist results from tests and allow the job overview page to load those results at a later point. In order to run these jobs, it is required to set up a GCS bucket for job outputs. If your Prow deployment is targeted at an open source community, it is strongly suggested to make this bucket world-readable.
In order to configure the bucket, follow these steps:
1. provision a new service account for interaction with the bucket
2. create the bucket
3. (optionally) expose the bucket contents to the world
4. grant access to admin the bucket for the service account
5. serialize a key for the service account
6. upload the key to a `Secret` under the `service-account.json` key
7. edit the `plank` configuration for `default_decoration_configs['*'].gcs_credentials_secret` to point to the `Secret` above
After downloading the `gcloud` tool and authenticating, the following collection of commands will execute the above steps for you:
$ gcloud iam service-accounts create prow-gcs-publisher # step 1
identifier="$( gcloud iam service-accounts list --filter 'name:prow-gcs-publisher' --format 'value(email)' )"
$ gsutil mb gs://prow-artifacts/ # step 2
$ gsutil iam ch allUsers:objectViewer gs://prow-artifacts # step 3
$ gsutil iam ch "serviceAccount:${identifier}:objectAdmin" gs://prow-artifacts # step 4
$ gcloud iam service-accounts keys create --iam-account "${identifier}" service-account.json # step 5
$ kubectl -n test-pods create secret generic gcs-credentials --from-file=service-account.json # step 6
Before we can update plank's `default_decoration_configs['*']`, we'll need to retrieve the version of plank using the following:
$ kubectl get pod -lapp=plank -o jsonpath='{.items[0].spec.containers[0].image}' | cut -d: -f2
v20191108-08fbf64ac
Then, we can use that tag to retrieve the corresponding utility images in `default_decoration_configs['*']` in `config.yaml`:
For more information on how the pod utility images for prow are versioned, see autobump.
plank:
default_decoration_configs:
'*':
utility_images: # using the tag we identified above
clonerefs: "gcr.io/k8s-prow/clonerefs:v20191108-08fbf64ac"
initupload: "gcr.io/k8s-prow/initupload:v20191108-08fbf64ac"
entrypoint: "gcr.io/k8s-prow/entrypoint:v20191108-08fbf64ac"
sidecar: "gcr.io/k8s-prow/sidecar:v20191108-08fbf64ac"
gcs_configuration:
bucket: prow-artifacts # the bucket we just made
path_strategy: explicit
gcs_credentials_secret: gcs-credentials # the secret we just made
There are two ways to configure jobs:
- Using the inrepoconfig feature to configure jobs inside the repo under test
- Using the static config, by editing the `config` configmap; some samples below:
Add the following to `config.yaml`:
periodics:
- interval: 10m
name: echo-test
decorate: true
spec:
containers:
- image: alpine
command: ["/bin/date"]
postsubmits:
YOUR_ORG/YOUR_REPO:
- name: test-postsubmit
decorate: true
spec:
containers:
- image: alpine
command: ["/bin/printenv"]
presubmits:
YOUR_ORG/YOUR_REPO:
- name: test-presubmit
decorate: true
always_run: true
skip_report: true
spec:
containers:
- image: alpine
command: ["/bin/printenv"]
Again, run the following to test the files, replacing the paths as necessary:
$ bazel run //prow/cmd/checkconfig -- --plugin-config=path/to/plugins.yaml --config-path=path/to/config.yaml
Now run the following to update the configmap.
$ kubectl create configmap config \
--from-file=config.yaml=path/to/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
We create a `make` rule:
update-config: get-cluster-credentials
kubectl create configmap config --from-file=config.yaml=config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
Presubmits and postsubmits are triggered by the `trigger` plugin. Be sure to enable that plugin by adding it to the list you created in the last section.
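For reference, a minimal sketch of the corresponding `plugins.yaml` entry looks roughly like this (the org/repo name is a placeholder, and the exact layout can differ slightly between Prow versions):

plugins:
  YOUR_ORG/YOUR_REPO:
  - trigger

Update the `plugins` configmap the same way you update `config`, e.g. `kubectl create configmap plugins --from-file=plugins.yaml=path/to/plugins.yaml --dry-run -o yaml | kubectl replace configmap plugins -f -`.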
Now when you open a PR it will automatically run the presubmit that you added to this file. You can see it on your prow dashboard. Once you are happy that it is stable, switch `skip_report` in the above `config.yaml` to `false`. Then, it will post a status on the PR. When you make a change to the config and push it with `make update-config`, you do not need to redeploy any of your cluster components. They will pick up the change within a few minutes.
When you push or merge a new change to the git repo, the postsubmit job will run.
For more information on the job environment, see jobs.md.
You may choose to run test pods in a separate cluster entirely. This is a good practice to keep testing isolated from Prow's service components and secrets. It can also be used to split job execution across different clusters.
One can use a Kubernetes `kubeconfig` file (i.e. `Config` object) to instruct Prow components to use the build cluster(s). All contexts in the `kubeconfig` are used as build clusters, and the `InClusterConfig` (or `current-context`) is the default.

Create a secret containing a `kubeconfig` like this:
apiVersion: v1
clusters:
- name: default
cluster:
certificate-authority-data: fake-ca-data-default
server: https://1.2.3.4
- name: other
cluster:
certificate-authority-data: fake-ca-data-other
server: https://5.6.7.8
contexts:
- name: default
context:
cluster: default
user: default
- name: other
context:
cluster: other
user: other
current-context: default
kind: Config
preferences: {}
users:
- name: default
user:
token: fake-token-default
- name: other
user:
token: fake-token-other
Use gencred to create the `kubeconfig` file (and credentials) for accessing the cluster(s):
NOTE: `gencred` will merge new entries into the specified `output` file on successive invocations by default.
Create a default cluster context (if one does not already exist):
NOTE: If executing `gencred` with `bazel` like below, ensure `--output` is an absolute path.
$ bazel run //gencred -- \
--context=<kube-context> \
--name=default \
--output=/tmp/kubeconfig.yaml \
--serviceaccount
Create one or more build cluster contexts:
NOTE: the `current-context` of the existing `kubeconfig` will be preserved.
$ bazel run //gencred -- \
--context=<kube-context> \
--name=other \
--output=/tmp/kubeconfig.yaml \
--serviceaccount
Create a secret containing the `kubeconfig.yaml` in the cluster:
$ kubectl --context=<kube-context> create secret generic kubeconfig --from-file=config=/tmp/kubeconfig.yaml
Mount this secret into the prow components that need it (at minimum: `plank`, `sinker`, and `deck`) and set the `--kubeconfig` flag to the location you mount it at. For instance, you will need to merge the following into the plank deployment:
spec:
containers:
- name: plank
args:
- --kubeconfig=/etc/kubeconfig/config # basename matches --from-file key
volumeMounts:
- name: kubeconfig
mountPath: /etc/kubeconfig
readOnly: true
volumes:
- name: kubeconfig
secret:
defaultMode: 0644
secretName: kubeconfig # example above contains a `config` key
Configure jobs to use the non-default cluster with the `cluster:` field. The above example `kubeconfig.yaml` defines two clusters, `default` and `other`, to schedule jobs, which we can use as follows:
periodics:
- name: cluster-unspecified
# cluster:
interval: 10m
decorate: true
spec:
containers:
- image: alpine
command: ["/bin/date"]
- name: cluster-default
cluster: default
interval: 10m
decorate: true
spec:
containers:
- image: alpine
command: ["/bin/date"]
- name: cluster-other
cluster: other
interval: 10m
decorate: true
spec:
containers:
- image: alpine
command: ["/bin/date"]
This results in:
- The `cluster-unspecified` and `cluster-default` jobs run in the `default` cluster.
- The `cluster-other` job runs in the `other` cluster.
See gencred for more details about how to create/update `kubeconfig.yaml`.
PRs satisfying a set of predefined criteria can be configured to be automatically merged by Tide.
Tide can be enabled by modifying `config.yaml`.
See how to configure tide for more details.
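As a rough illustration (the repository and labels are placeholders; the tide documentation describes the full schema), a minimal Tide stanza in `config.yaml` might look like this:

tide:
  merge_method:
    YOUR_ORG/YOUR_REPO: squash
  queries:
  - repos:
    - YOUR_ORG/YOUR_REPO
    labels:
    - lgtm
    - approved
    missingLabels:
    - do-not-merge/hold

With a configuration like this, Tide merges PRs in that repo once they carry the `lgtm` and `approved` labels and do not carry the hold label.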
GitHub OAuth is required for PR Status and for the rerun button on Prow Status. To enable these features, follow the instructions in `github_oauth_setup.md`.
Use cert-manager for automatic LetsEncrypt integration. If you already have a cert, then follow the official docs to set up HTTPS termination. Promote your ingress IP to a static IP. On GKE, run:
$ gcloud compute addresses create [ADDRESS_NAME] --addresses [IP_ADDRESS] --region [REGION]
Point the DNS record for your domain at that ingress IP. The convention for naming is `prow.org.io`, but of course that's not a requirement.
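If your zone is hosted in Google Cloud DNS, one hypothetical way to add the A record (the zone name, domain, and IP below are placeholders) is:

$ gcloud dns record-sets transaction start --zone="my-zone"
$ gcloud dns record-sets transaction add "an.ip.addr.ess" \
    --name="prow.org.io." --ttl=300 --type=A --zone="my-zone"
$ gcloud dns record-sets transaction execute --zone="my-zone"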
Then, install cert-manager as described in its readme. You don't need to run it in a separate namespace.
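Once cert-manager is running, a ClusterIssuer roughly like the following can back the `cert-manager.io/cluster-issuer` annotation mentioned earlier (the issuer name, email, and ingress class are placeholders, and the apiVersion depends on your cert-manager release):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com  # replace with a real contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx  # match your ingress controller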