Here is the high-level architecture overview that shows what we want to achieve:
A similar project: https://github.com/developer-guy/google-cloud-function-stdout-falco-alert
- gcloud 342.0.0
- kubectl 1.20.5
As the blog title already says, we need to create a GKE cluster with Workload Identity enabled:
$ GOOGLE_PROJECT_ID=$(gcloud config get-value project)
$ CLUSTER_NAME=falco-falcosidekick-demo
$ gcloud container clusters create $CLUSTER_NAME --workload-pool ${GOOGLE_PROJECT_ID}.svc.id.goog
$ gcloud container clusters get-credentials $CLUSTER_NAME
We need to create a new Service Account in the target $GOOGLE_PROJECT_ID
and bind the IAM roles it needs to access our Cloud Function:
$ SA_ACCOUNT=falco-falcosidekick-sa
$ gcloud iam service-accounts create $SA_ACCOUNT
$ gcloud projects add-iam-policy-binding ${GOOGLE_PROJECT_ID} \
--member="serviceAccount:${SA_ACCOUNT}@${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/cloudfunctions.developer"
$ gcloud projects add-iam-policy-binding ${GOOGLE_PROJECT_ID} \
--member="serviceAccount:${SA_ACCOUNT}@${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/cloudfunctions.invoker"
At the beginning, we already enabled the Workload Identity feature for our GKE cluster by setting the --workload-pool
flag. What we need to do here is add the iam.workloadIdentityUser
role for the given Service Account. ($FALCO_NAMESPACE is the namespace where we will install Falco later; we use falco.)
$ gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:${GOOGLE_PROJECT_ID}.svc.id.goog[${FALCO_NAMESPACE}/falco-falcosidekick]" \
${SA_ACCOUNT}@${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com
We need to annotate the falco-falcosidekick
ServiceAccount resource, so the Falcosidekick Kubernetes ServiceAccount can impersonate the GCP Service Account:
$ kubectl annotate serviceaccount \
--namespace $FALCO_NAMESPACE \
falco-falcosidekick \
iam.gke.io/gcp-service-account=${SA_ACCOUNT}@${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com
To limit what the function can do in our cluster, we need to ensure this ServiceAccount has only the permissions it needs, by using RBAC bindings. It should only be able to perform the DELETE action on the Pod resource:
$ kubectl create serviceaccount pod-destroyer
$ kubectl create clusterrole pod-destroyer \
--verb=delete \
--resource=pod # give only pod resource access for the delete operation
$ kubectl create clusterrolebinding pod-destroyer \
--clusterrole pod-destroyer \
--serviceaccount default:pod-destroyer
To obtain the token from its Secret, we first need to get the pod-destroyer
ServiceAccount resource:
$ POD_DESTROYER_TOKEN=$(kubectl get secrets $(kubectl get serviceaccounts pod-destroyer -o json \
| jq -r '.secrets[0].name') -o json \
| jq -r '.data.token' \
| base64 --decode)
Add the pod-destroyer user to your KUBECONFIG:
# Generate your KUBECONFIG
$ kubectl config view --minify --flatten > kubeconfig_pod-destroyer.yaml
# Set the token at the end of yaml
$ cat << EOF >> kubeconfig_pod-destroyer.yaml
users:
- name: user.name
  user:
    token: $POD_DESTROYER_TOKEN
EOF
We can test it with auth can-i, using the generated kubeconfig, to check whether the roles are set correctly:
$ kubectl --kubeconfig kubeconfig_pod-destroyer.yaml auth can-i list deployments # no
$ kubectl --kubeconfig kubeconfig_pod-destroyer.yaml auth can-i delete pod # yes
$ kubectl access-matrix # github.com/corneliusweig/rakkess
This is where Secret Manager gets involved in our architecture: we had to find a way to initialize the Kubernetes client inside our function. Simply put, we store the pod-destroyer
's KUBECONFIG in Secret Manager and read it back from the function; a rough sketch of this follows the commands below.
First, create a new secret called pod-destroyer
:
$ gcloud secrets create pod-destroyer --replication-policy="automatic"
Then add an IAM policy binding on the secret so our Service Account can access its versions:
$ gcloud secrets add-iam-policy-binding pod-destroyer \
--role roles/secretmanager.secretAccessor \
--member serviceAccount:$SA_ACCOUNT@$GOOGLE_PROJECT_ID.iam.gserviceaccount.com
Push our generated kubeconfig_pod-destroyer.yaml
file as a new secret version:
$ gcloud secrets versions add pod-destroyer --data-file=kubeconfig_pod-destroyer.yaml
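Inside the function, that secret is read back and turned into a Kubernetes client. Here is a minimal sketch in Go of how this can be done; the package name, the NewClientsetFromSecret helper, and the secretName value are illustrative assumptions, not the exact code from the repository:

```go
// A minimal sketch, not the exact repository code.
package function

import (
	"context"
	"fmt"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	secretmanagerpb "google.golang.org/genproto/googleapis/cloud/secretmanager/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// NewClientsetFromSecret reads the latest version of the pod-destroyer secret,
// e.g. "projects/<PROJECT_ID>/secrets/pod-destroyer/versions/latest",
// and builds a Kubernetes clientset from the kubeconfig stored in it.
func NewClientsetFromSecret(ctx context.Context, secretName string) (kubernetes.Interface, error) {
	client, err := secretmanager.NewClient(ctx)
	if err != nil {
		return nil, fmt.Errorf("create secretmanager client: %w", err)
	}
	defer client.Close()

	// Access the secret version; the payload is kubeconfig_pod-destroyer.yaml.
	resp, err := client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
		Name: secretName,
	})
	if err != nil {
		return nil, fmt.Errorf("access secret version: %w", err)
	}

	// Turn the raw kubeconfig bytes into a rest.Config, then into a clientset.
	restConfig, err := clientcmd.RESTConfigFromKubeConfig(resp.GetPayload().GetData())
	if err != nil {
		return nil, fmt.Errorf("parse kubeconfig: %w", err)
	}
	return kubernetes.NewForConfig(restConfig)
}
```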
Finally, we are ready to deploy our Cloud Function!
In this demonstration our function will simply delete the pwned Pod, as we already pointed out in the architecture diagram.
You can find the Go code here.
$ git clone https://github.com/Dentrax/kubernetes-response-engine-based-on-gke-and-gcloudfunctions.git
$ cd kubernetes-response-engine-based-on-gke-and-gcloudfunctions
...
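The core logic is along these lines: a minimal sketch of the HTTP entry point, assuming the Falcosidekick payload carries the Pod name and namespace in the standard Falco output fields k8s.pod.name and k8s.ns.name, reusing the hypothetical NewClientsetFromSecret helper from the previous sketch, and reading the secret resource name from an assumed SECRET_NAME environment variable; the actual code in the repository may differ:

```go
// A sketch of the function entry point; the payload field names and the
// SECRET_NAME environment variable are assumptions for illustration.
package function

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// falcoAlert is the subset of the Falcosidekick payload we care about.
type falcoAlert struct {
	Rule         string                 `json:"rule"`
	Priority     string                 `json:"priority"`
	OutputFields map[string]interface{} `json:"output_fields"`
}

// KillThePwnedPod deletes the Pod referenced by the incoming Falco alert.
func KillThePwnedPod(w http.ResponseWriter, r *http.Request) {
	var alert falcoAlert
	if err := json.NewDecoder(r.Body).Decode(&alert); err != nil {
		http.Error(w, "invalid payload", http.StatusBadRequest)
		return
	}

	namespace, _ := alert.OutputFields["k8s.ns.name"].(string)
	pod, _ := alert.OutputFields["k8s.pod.name"].(string)
	if namespace == "" || pod == "" {
		http.Error(w, "no pod information in the alert", http.StatusBadRequest)
		return
	}

	ctx := context.Background()
	// SECRET_NAME is assumed to look like
	// "projects/<PROJECT_ID>/secrets/pod-destroyer/versions/latest".
	clientset, err := NewClientsetFromSecret(ctx, os.Getenv("SECRET_NAME"))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Delete the pwned Pod using the limited pod-destroyer credentials.
	if err := clientset.CoreV1().Pods(namespace).Delete(ctx, pod, metav1.DeleteOptions{}); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "deleted pod %s/%s after rule %q fired\n", namespace, pod, alert.Rule)
}
```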
We need to pass the extra --service-account
flag so the function gets access to Secret Manager.
Deploy the function:
$ FUNCTION_NAME=KillThePwnedPod
$ gcloud functions deploy $FUNCTION_NAME \
--runtime go113 --trigger-http \
--service-account $SA_ACCOUNT@$GOOGLE_PROJECT_ID.iam.gserviceaccount.com
Allow unauthenticated invocations of new function [KillThePwnedPod]? (y/N)? N
...
Now, get the name of the function:
$ CLOUD_FUNCTION_NAME=$(gcloud functions describe --format=json $FUNCTION_NAME | jq -r '.name')
It is time to install Falco and Falcosidekick with the Cloud Functions output type enabled:
$ export FALCO_NAMESPACE=falco
$ kubectl create namespace $FALCO_NAMESPACE
$ helm upgrade --install falco falcosecurity/falco \
--namespace $FALCO_NAMESPACE \
--set ebpf.enabled=true \
--set falcosidekick.enabled=true \
--set falcosidekick.config.gcp.cloudfunctions.name=${CLOUD_FUNCTION_NAME} \
--set falcosidekick.webui.enabled=true
Try to run a busybox image and execute a command:
$ kubectl run busybox --image=busybox --restart='Never' -- sh -c "sleep 600"
Try to exec into it, which should trigger a Falco rule:
$ kubectl exec -it busybox -- sh -c "uptime"
Check the logs of Falco and Falcosidekick to see what happened. For Falcosidekick:
$ kubectl logs deployment/falco-falcosidekick --namespace falco
2021/06/14 21:01:24 [INFO] : GCPCloudFunctions - Call Cloud Function OK
..