CKS-Lab-Challenges
Practice CKS Challenge Labs: https://kodekloud.com/courses/cks-challenges/
Here's the summary of activities performed during this lab:
- Task 1 - PVC to PV binding
- Task 2 - Image Scanning using Aquasec Trivy
- Task 3 - Ingress and Egress Network Policy Implementation
- Task 4 - Secure Deployment using AppArmor Profile
- Task 5 - Expose Deployment with 'ClusterIP' Type Service
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Kubectl autocomplete BASH
source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
You can also use a shorthand alias for kubectl that also works with completion:
alias k=kubectl
complete -o default -F __start_kubectl k
PersistentVolume 'alpha-pv' has already been created. Inspect its parameters, i.e. 'accessModes' & 'capacity', and modify the PVC accordingly
Apply the changes and ensure the PVC status changes from 'Pending' to 'Bound'
k edit pvc -n alpha
k apply --force -f /tmp/kubectl-edit-668393621.yaml
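Most PVC spec fields are immutable, so 'kubectl edit' rejects the change and saves it to the temp file shown above; force-applying that file deletes and recreates the PVC. A minimal sketch of the corrected claim is below; the access mode, storage size and claim name are assumptions, read the real values from 'kubectl get pv alpha-pv -o yaml':

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: alpha-claim              # claim name assumed
  namespace: alpha
spec:
  accessModes:
    - ReadWriteOnce              # must match the accessModes of 'alpha-pv' (assumed value)
  resources:
    requests:
      storage: 1Gi               # must not exceed the capacity of 'alpha-pv' (assumed value)
  # storageClassName: set only if 'alpha-pv' defines one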
List '6' NGINX images
docker images | grep nginx
docker images | grep nginx |awk '{print $1 ":" $2}' |sort -u
Scan each image and identify the image with the 'least number of CRITICAL vulnerabilities'
which trivy
trivy image --help
trivy image --severity=CRITICAL bitnami/nginx:latest
trivy image --severity=CRITICAL nginx:1.13
trivy image --severity=CRITICAL nginx:1.14
trivy image --severity=CRITICAL nginx:1.16
trivy image --severity=CRITICAL nginx:1.17
trivy image --severity=CRITICAL nginx:alpine
trivy image --severity=CRITICAL nginx:latest
Docker image nginx:alpine has 0 CRITICAL vulnerabilities. Modify the 'alpha-xyz' deployment to use the 'nginx:alpine' image and apply the configuration
k apply -f alpha-xyz.yaml
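For reference, a rough sketch of what alpha-xyz.yaml looks like after the image swap; the container name, replica count and PVC/volume wiring shown here are assumptions, not copied from the lab file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpha-xyz
  namespace: alpha
spec:
  replicas: 1                    # assumed
  selector:
    matchLabels:
      app: alpha-xyz
  template:
    metadata:
      labels:
        app: alpha-xyz
    spec:
      containers:
      - name: nginx              # container name assumed
        image: nginx:alpine      # image with 0 CRITICAL vulnerabilities
        volumeMounts:
        - name: data-volume      # assumed volume name
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data-volume
        persistentVolumeClaim:
          claimName: alpha-claim # the PVC bound to 'alpha-pv' (name assumed)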
Expose 'alpha-xyz' deployment as a 'ClusterIP' type service
kubectl expose deploy alpha-xyz --name alpha-svc --port 80 --target-port 80 --type ClusterIP --namespace=alpha --dry-run=client -oyaml > alpha-svc.yaml
k apply -f alpha-svc.yaml
k get svc -n alpha
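The --dry-run output written to alpha-svc.yaml is roughly the following; the selector is whatever labels 'kubectl expose' copies from the deployment (shown here with the assumed 'app: alpha-xyz' label):

apiVersion: v1
kind: Service
metadata:
  name: alpha-svc
  namespace: alpha
spec:
  type: ClusterIP
  selector:
    app: alpha-xyz               # assumed deployment label
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP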
Restrict external POD from accessing 'alpha-svc' on port 80
k apply -f enp.yaml
Allow Inbound access only from 'middleware' POD
k apply -f inp.yaml
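A sketch of what the two policies could look like. The pod labels ('app: alpha-xyz', 'app: middleware', 'app: external') and the namespace of the external pod are assumptions; the egress policy is also the bluntest possible reading of the requirement, and the actual enp.yaml may be narrower:

# inp.yaml - 'restrict-inbound': only the middleware pod may reach the alpha-xyz pods on port 80
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-inbound
  namespace: alpha
spec:
  podSelector:
    matchLabels:
      app: alpha-xyz             # assumed label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: middleware        # assumed label
    ports:
    - protocol: TCP
      port: 80
---
# enp.yaml - 'external-network-policy': deny all egress from the 'external' pod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: external-network-policy
  namespace: default             # namespace of the 'external' pod (assumed)
spec:
  podSelector:
    matchLabels:
      app: external              # assumed label
  policyTypes:
  - Egress
  egress: []                     # no egress rules listed -> all outbound traffic blocked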
Move the AppArmor profile to the given location on the controlplane node, then load and enforce the AppArmor profile 'custom-nginx'
cp /root/usr.sbin.nginx /etc/apparmor.d/usr.sbin.nginx
cat usr.sbin.nginx | grep profile
apparmor_parser -a /etc/apparmor.d/usr.sbin.nginx
k replace --force -f alpha-xyz.yaml
Verify AppArmor status
apparmor_status | grep custom-nginx
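Loading the profile alone is not enough; the 'alpha-xyz' pod template must also reference it. On CKS-era clusters this is done with the beta AppArmor annotation (the container name 'nginx' below is an assumption):

spec:
  template:
    metadata:
      annotations:
        # localhost/<profile name as declared inside usr.sbin.nginx>
        container.apparmor.security.beta.kubernetes.io/nginx: localhost/custom-nginx

On Kubernetes v1.30 and later the same intent can be expressed through 'securityContext.appArmorProfile', but the annotation form above is what these labs expect.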
- Deployment 'alpha-xyz'
- Service 'alpha-svc'
- NetworkPolicy 'restrict-inbound'
- NetworkPolicy 'external-network-policy'
- AppArmor Profile 'custom-nginx'
- [✔️] PVC to PV binding
- [✔️] Image Scanning using Aquasec Trivy
- [✔️] Ingress and Egress Network Policy Implementation
- [✔️] Secure Deployment using AppArmor Profile
- [✔️] Expose Deployment with 'ClusterIP' Type Service
Here's the summary of activities performed during this lab:
- Task 1 - Edit and Build Docker Image using Dockerfile
- Task 2 - Inspect and fix security issues using kubesec
- Task 3 - Use startupProbe to remove shell access
- Task 4 - Access Secret using environment variables within deployment
- Task 5 - Implement Ingress Network Policy
Copy the required files (app.py, requirements.txt, templates) and build the Docker image
docker build -t kodekloud/webapp-color:stable .
Scan YAML using kubesec
which kubesec
kubesec scan staging-webapp.yaml
Re-create 'dev-webapp' and 'staging-webapp' PODS after fixing security issues
k get pod -n dev
k replace --force -f dev-webapp.yaml -n dev
k replace --force -f staging-webapp.yaml -n staging
Try shell access and see it fail now that the shells have been removed by the startupProbe
k exec -it dev-webapp -n dev -- sh
k exec -it staging-webapp -n staging -- sh
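The exact securityContext changes come from the kubesec report for each pod; the fragment below only illustrates the shape of the fix plus the startupProbe that deletes the shells, so treat every concrete value as a placeholder:

containers:
- name: webapp-color             # container name assumed
  image: kodekloud/webapp-color:stable
  securityContext:
    # per the kubesec report, remove 'privileged: true' and the added
    # 'SYS_ADMIN' capability; the field below is illustrative hardening only
    allowPrivilegeEscalation: false
  startupProbe:
    exec:
      command: ["rm", "/bin/sh", "/bin/ash"]   # removes shell access once the pod starts
    initialDelaySeconds: 5
    periodSeconds: 5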
Create the generic Secret 'prod-db' and access it using 'envFrom' within the 'prod-web' deployment
k get deployments.apps prod-web -n prod -oyaml > prod-web.yaml
cat prod-web.yaml | grep -A 7 "env:"
kubectl create secret generic prod-db --from-literal DB_Host=prod-db --from-literal DB_User=root --from-literal DB_Password=paswrd -n prod
k describe secrets prod-db
k edit deployments.apps prod-web -n prod
k get pod -n prod
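Inside the 'prod-web' pod template the hard-coded env entries are replaced by a reference to the new Secret; the relevant fragment looks roughly like this (container name assumed, everything else unchanged):

containers:
- name: prod-web                 # container name assumed
  envFrom:
  - secretRef:
      name: prod-db              # exposes DB_Host, DB_User and DB_Password as env variables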
Create NetworkPolicy 'prod-netpol' and allow traffic only within 'prod' namespace. Deny traffic from other namespaces.
k apply -f prod-np.yaml
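A standard same-namespace-only policy for prod-np.yaml: the empty podSelector applies it to every pod in 'prod', and the single ingress rule with an empty podSelector admits traffic only from pods in that same namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prod-netpol
  namespace: prod
spec:
  podSelector: {}          # applies to all pods in the prod namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # any pod, but only within the same (prod) namespace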
- Dockerfile 'Dockerfile'
- POD 'dev-webapp'
- POD 'staging-webapp'
- Deployment 'prod-web'
- NetworkPolicy 'prod-netpol'
- [✔️] Edit and Build Docker Image using Dockerfile
- [✔️] Inspect and fix security issues using kubesec
- [✔️] Use startupProbe to remove shell access
- [✔️] Access Secret using environment variables within deployment
- [✔️] Implement Ingress Network Policy
Here's the summary of activities performed during this lab:
- Task 1 - Use AquaSec 'kube-bench' to identify and fix issues related to controlplane and worker node components
- Task 2 - Inspect and fix kube-apiserver auditing issues
- Task 3 - Fix kubelet security issues
- Task 4 - Inspect and fix etcd / kube-controller-manager / kube-scheduler security issues
Task 1 - Use AquaSec 'kube-bench' to identify and fix issues related to controlplane and worker node components
Install 'kube-bench' tool
curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.6.2/kube-bench_0.6.2_linux_amd64.tar.gz -o kube-bench_0.6.2_linux_amd64.tar.gz
tar -xvf kube-bench_0.6.2_linux_amd64.tar.gz
mkdir -p /var/www/html/
Run 'kube-bench'
./kube-bench run --config-dir /opt/cfg --config /opt/cfg/config.yaml > /var/www/html/index.html
Identify 'failed' issues
cd /var/www/html/
cat index.html |egrep "INFO|FAIL"
Kubelet
ps -ef |grep kubelet
/var/lib/kubelet/config.yaml
cat /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
cd /etc/kubernetes/manifests
Apply fixes as diagnosed using the 'kube-bench' tool
vi kube-apiserver.yaml

Add/update the following flags under 'command':
- --profiling=false
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
- --insecure-port=0
- --audit-log-path=/var/log/apiserver/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=100

Add a volumeMount for the audit log directory:
- mountPath: /var/log/apiserver/
  name: audit-log
  readOnly: false

Add the corresponding volume:
- name: audit-log
  hostPath:
    path: /var/log/apiserver/
    type: DirectoryOrCreate
Ensure process is running post changes
crictl ps -a
Apply fixes on both the controlplane and the worker node
ps -ef |grep kubelet
/var/lib/kubelet/config.yaml
vi /var/lib/kubelet/config.yaml
Add:
protectKernelDefaults: true
systemctl restart kubelet
k get nodes
ssh node01
ps -ef |grep kubelet
vi /var/lib/kubelet/config.yaml
Apply the same 'protectKernelDefaults: true' fix on node01 and restart kubelet
Fix ETCD issues
kubectl config set-context --current --namespace kube-system
ls -l /var/lib/ | grep etcd
chown etcd:etcd /var/lib/etcd
Fix Kube Controller Manager issues
vi kube-controller-manager.yaml
- --profiling=false
Fix Kube Scheduler issues
vi kube-scheduler.yaml
- --profiling=false
Restart kubelet and ensure all services are up and running after fixing security issues
crictl ps -a
systemctl restart kubelet
- etcd 'etcd.yaml'
- kubelet-config 'kubelet-config.yaml'
- kube-apiserver 'kube-apiserver.yaml'
- kube-scheduler 'kube-scheduler.yaml'
- kube-controller-manager 'kube-controller-manager.yaml'
[✔️] Task 1 - Use AquaSec 'kube-bench' to identify and fix issues related to controlplane and worker node components
[✔️] Task 2 - Inspect and fix kube-apiserver auditing issues
[✔️] Task 3 - Fix kubelet security issues
[✔️] Task 4 - Inspect and fix etcd / kube-controller-manager / kube-scheduler security issues
- Task 1 - Configure Auditing using Audit Policy
- Task 2 - Apply auditing to kube-apiserver
- Task 3 - Install & Analyze Falco
Check ConfigMaps/Roles for Citadel Namespace
k get all -n citadel
k get cm -n citadel
k get role -n citadel
Audit Policy:
Create a single rule in the audit policy, as per the given requirement, at path '/etc/kubernetes/audit-policy.yaml'
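A sketch of a single-rule policy, assuming the requirement is to record RequestResponse-level events for objects in the 'citadel' namespace; the level and the resource list below are assumptions, follow the exact wording of the task:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse         # assumed level
  namespaces: ["citadel"]
  resources:
  - group: ""                    # core API group
    resources: ["pods", "configmaps", "secrets"]   # assumed resource list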
Modify kube-apiserver and create required mountPaths
cd /etc/kubernetes/manifests/
vi kube-apiserver.yaml
Make the changes required by the given scenario.

Add the audit flags under 'command':
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log

Add volumeMounts for the policy file and the log directory:
- mountPath: /etc/kubernetes/audit-policy.yaml
  name: audit
  readOnly: true
- mountPath: /var/log/kubernetes/audit/
  name: audit-log
  readOnly: false

Add the corresponding volumes:
- name: audit
  hostPath:
    path: /etc/kubernetes/audit-policy.yaml
    type: File
- name: audit-log
  hostPath:
    path: /var/log/kubernetes/audit/
    type: DirectoryOrCreate
Check kube-apiserver Status and ensure it is running
systemctl restart kubelet
crictl ps | grep api
journalctl | grep apiserver
Install the Falco utility and start it as a systemd service
cat /etc/os-release
URL : https://falco.org/docs/getting-started/installation/#debian
curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
echo "deb https://download.falco.org/packages/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
apt-get update -y
apt-get -y install linux-headers-$(uname -r)
apt-get install -y falco
Check process status
crictl ps
crictl pods
systemctl restart kubelet
systemctl restart falco
systemctl status falco
Configure Falco to save event output to given path: /opt/falco.log
cd /etc/falco
ls
vim falco.yaml
file_output:
  enabled: true
  keep_alive: false
  filename: /opt/falco.log
Restart Falco once the changes have been made
systemctl restart falco
systemctl status falco
Inspect the API server audit logs and identify the user causing the abnormal behaviour
cd /var/log/kubernetes/audit/
ls -al
//audit.log
cat audit.log |grep citadel |egrep -v "\"get|\"watch|\"list" |jq
Find the name of the 'user', 'role' and 'rolebinding' responsible for the event
k get sa -n citadel
k get role -n citadel
k get role important_role_do_not_delete -n citadel
k get rolebindings.rbac.authorization.k8s.io -n citadel
k get rolebindings.rbac.authorization.k8s.io important_binding_do_not_delete -n citadel
k get rolebindings.rbac.authorization.k8s.io important_binding_do_not_delete -n citadel -oyaml
Save the names of the 'user', 'role' and 'rolebinding' responsible for the event to the file '/opt/blacklist_users'
echo "agent-smith,important_role_do_not_delete,important_binding_do_not_delete" > /opt/blacklist_users
Inspect the 'falco' logs and identify the pod that has events generated because of packages being updated on it
cat falco.log
05:32:46.651495067: Error Package management process launched in container (user=root user_loginuid=-1 command=apt install nginx container_id=e23544847bbf container_name=k8s_eden-software2_eden-software2_eden-prime_07eb74da-63d8-4ac5-8310-ab49fa93cbc6_0 image=ubuntu:latest)
Identify the container ID
//container_id=e23544847bbf
crictl ps |grep "e23544847bbf"
Identify the namespace and pod name using the POD ID ('91efe3fe8bed6') shown in the 'crictl ps' output
crictl pods| grep "91efe3fe8bed6"
91efe3fe8bed6 54 minutes ago Ready eden-software2 eden-prime 0 (default)
Save the namespace and pod name to file '/opt/compromised_pods'
echo "eden-prime,eden-software2" > /opt/compromised_pods
Delete the POD flagged in the 'Security Report' file '/opt/compromised_pods' (the 'eden-software2' POD in the 'eden-prime' namespace)
k delete pod eden-software2 -n eden-prime
Identify and delete the role and rolebinding causing the constant deletion and creation of the configmaps and pods in the 'citadel' namespace
k get role -n citadel
k get rolebinding -n citadel
Take Action
k delete role important_role_do_not_delete -n citadel
k delete rolebinding important_binding_do_not_delete -n citadel
- Audit Policy 'audit-policy.yaml'
- KUBE API Server 'kube-apiserver'
[✔️] Task 1 - Configure Auditing using Audit Policy
[✔️] Task 2 - Apply auditing to kube-apiserver
[✔️] Task 3 - Install & Analyze Falco
- LinkedIn: https://www.linkedin.com/in/tariq-a-sheikh/
- Credly: https://www.credly.com/users/tariqsheikh