Please refer to the Policy Authoring Guide for details about the structure of our policy files.
Name | Group | Description | CIS Benchmark |
---|---|---|---|
Enable node auto-repair | Availability | GKE node pools should have node auto-repair enabled so that Kubernetes Engine automatically repairs unhealthy nodes | CIS GKE 1.4: 5.5.2 |
Ensure redundancy of the Control Plane | Availability | GKE cluster should be regional for maximum availability of the control plane during upgrades and zonal outages | |
Ensure redundancy of the node pools | Availability | GKE node pools should be regional (multiple zones) for maximum node availability during zonal outages | |
Enable Cloud Monitoring and Logging | Maintenance | GKE cluster should use Cloud Logging and Monitoring | CIS GKE 1.4: 5.7.1 |
Enable Compute Engine persistent disk CSI driver | Management | Automatic deployment and management of the Compute Engine persistent disk CSI driver. The driver provides support for features like customer managed encryption keys or volume snapshots. | |
Enable GKE upgrade notifications | Management | GKE cluster should proactively receive notifications about GKE upgrades and available GKE versions | |
Enable binary authorization in the cluster | Management | GKE cluster should enable Binary Authorization, a deploy-time security control that ensures only trusted container images are deployed, to gain tighter control over the container environment. | CIS GKE 1.4: 5.10.5 |
Enable maintenance windows | Management | GKE cluster should use maintenance windows and exclusions to improve upgrade predictability and to align updates with off-peak business hours. | |
Ensure acceptable version skew in a cluster | Management | Difference between cluster control plane version and node pools version should be no more than 2 minor versions. | |
Use GKE Autopilot mode | Management | GKE Autopilot mode is the recommended way to operate a GKE cluster | |
Use VPC-native cluster | Management | GKE cluster node pools should be VPC-native, as per our best practices | CIS GKE 1.4: 5.6.2 |
Enable GKE L4 ILB Subsetting | Scalability | GKE cluster should use GKE L4 ILB Subsetting if nodes > 250 | |
Enable GKE node local DNS cache | Scalability | GKE cluster should use node local DNS cache | |
Enable node pool auto-scaling | Scalability | GKE node pools should have autoscaling configured to properly resize the number of nodes according to traffic | |
GKE Nodes Limit | Scalability | The total number of nodes in a cluster, checked against GKE cluster size limits | |
Number of HPAs in a cluster | Scalability | The optimal number of Horizontal Pod Autoscalers in a cluster | |
Number of Pods in a cluster | Scalability | The total number of Pods running in a cluster | |
Number of Pods per node | Scalability | The total number of Pods running on a single node | |
Number of containers in a cluster | Scalability | The total number of containers running in a cluster | |
Number of namespaces in a cluster | Scalability | The total number of namespaces in a cluster | |
Number of nodes in a node pool zone | Scalability | The total number of nodes running in a single node pool zone | |
Number of secrets with KMS encryption | Scalability | The total number of secrets when KMS secret encryption is enabled | |
Number of services in a cluster | Scalability | The total number of services running in a cluster | |
Number of services per namespace | Scalability | The total number of services running in a single namespace | |
Change default Service Accounts in Node Auto-Provisioning | Security | Node Auto-Provisioning configuration should not allow default Service Accounts | CIS GKE 1.4: 5.2.1 |
Change default Service Accounts in node pools | Security | GKE node pools should have a dedicated service account with a restricted set of permissions | CIS GKE 1.4: 5.2.1 |
Configure Container-Optimized OS for Node Auto-Provisioning node pools | Security | Nodes in Node Auto-Provisioning should use Container-Optimized OS | CIS GKE 1.4: 5.5.1 |
Configure Container-Optimized OS for node pools | Security | GKE node pools should use Container-Optimized OS which is maintained by Google and optimized for running Docker containers with security and efficiency. | CIS GKE 1.4: 5.5.1 |
Disable control plane basic authentication | Security | Disable Basic Authentication (basic auth) for API server authentication as it uses static passwords which need to be rotated. | CIS GKE 1.4: 5.8.1 |
Disable control plane certificate authentication | Security | Disable Client Certificates, which require certificate rotation, for authentication. Instead, use another authentication method like OpenID Connect. | CIS GKE 1.4: 5.8.2 |
Disable legacy ABAC authorization | Security | GKE cluster should use RBAC instead of legacy ABAC authorization | CIS GKE 1.4: 5.8.4 |
Enable Customer-Managed Encryption Keys for persistent disks | Security | Use Customer-Managed Encryption Keys (CMEK) to encrypt node boot and dynamically-provisioned attached Google Compute Engine Persistent Disks (PDs) using keys managed within Cloud Key Management Service (Cloud KMS). | CIS GKE 1.4: 5.9.1 |
Enable GKE intranode visibility | Security | GKE cluster should have intranode visibility enabled | CIS GKE 1.4: 5.6.1 |
Enable Google Groups for RBAC | Security | GKE cluster should have RBAC security Google group enabled | CIS GKE 1.4: 5.8.3 |
Enable Kubernetes Network Policies | Security | GKE cluster should have Network Policies or Dataplane V2 enabled | CIS GKE 1.4: 5.6.7 |
Enable Kubernetes secrets encryption | Security | GKE cluster should use encryption for kubernetes application secrets | CIS GKE 1.4: 5.3.1 |
Enable Secure boot for node pools | Security | Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails | CIS GKE 1.4: 5.5.7 |
Enable Security Posture dashboard | Security | The Security Posture feature enables scanning of clusters and running workloads against standards and industry best practices. The dashboard displays the scan results and provides actionable recommendations for concerns. | |
Enable Shielded Nodes | Security | GKE cluster should use shielded nodes | CIS GKE 1.4: 5.5.5 |
Enable Workload vulnerability scanning | Security | Workload vulnerability scanning is a set of capabilities in the security posture dashboard that automatically scans for known vulnerabilities in your container images and in specific language packages during the runtime phase of the software delivery lifecycle. | |
Enable control plane private endpoint | Security | Control Plane endpoint should be locked from external access | CIS GKE 1.4: 5.6.4 |
Enable integrity monitoring for Node Auto-Provisioning node pools | Security | Nodes in Node Auto-Provisioning should use integrity monitoring | CIS GKE 1.4: 5.5.6 |
Enable integrity monitoring for node pools | Security | GKE node pools should have integrity monitoring enabled to detect changes in VM boot measurements | CIS GKE 1.4: 5.5.6 |
Enable node auto-upgrade | Security | GKE node pools should have node auto-upgrade enabled so that Kubernetes Engine keeps nodes up to date with the control plane version | CIS GKE 1.4: 5.5.3 |
Enroll cluster in Release Channels | Security | GKE cluster should be enrolled in release channels | CIS GKE 1.4: 5.5.4 |
Ensure redundancy of Node Auto-Provisioning node pools | Security | Node Auto-Provisioning configuration should cover more than one zone | |
Limit Control Plane endpoint access | Security | Control Plane endpoint access should be limited to authorized networks only | CIS GKE 1.4: 5.6.3 |
Use GKE Workload Identity | Security | GKE cluster should have Workload Identity enabled | CIS GKE 1.4: 5.2.2 |
Use private nodes | Security | GKE cluster should be private to ensure network isolation | CIS GKE 1.4: 5.6.5 |
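As an illustration, several of the cluster-level and node-pool policies above can be satisfied at creation time with `gcloud`. This is a minimal sketch, not a complete hardening recipe; the cluster name, node pool name, project ID, region, and autoscaling bounds are placeholder values:

```shell
# Regional, VPC-native cluster enrolled in a release channel, with
# Shielded Nodes, intranode visibility, and Workload Identity enabled.
# "my-cluster", "my-project", and the region are placeholders.
gcloud container clusters create my-cluster \
    --region us-central1 \
    --release-channel regular \
    --enable-ip-alias \
    --enable-shielded-nodes \
    --enable-intra-node-visibility \
    --workload-pool=my-project.svc.id.goog

# Node pool with auto-repair, auto-upgrade, and autoscaling enabled.
gcloud container node-pools create my-pool \
    --cluster my-cluster \
    --region us-central1 \
    --enable-autorepair \
    --enable-autoupgrade \
    --enable-autoscaling --min-nodes 1 --max-nodes 5
```

Passing `--region` (rather than `--zone`) is what makes both the control plane and the node pool regional, covering the redundancy policies above.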