An example of deploying a web app on GKE. Consists of:
- GKE cluster with a single node pool
  - VPC-native, private and using container-native load balancing
  - access to the cluster master is limited to a single whitelisted IP: check the `K8S_MASTER_ALLOWED_IP` env variable below
- Cloud SQL Postgres instance with private networking
  - connects to GKE through a private IP, ensuring traffic is never exposed to the public internet
- Cloud Storage and Cloud CDN for serving static assets
- Cloud Load Balancing routing `/api/*` to GKE and the rest to the static assets bucket
  - implemented in a bit of a roundabout way since `ingress-gce` lacks support for backend buckets: we're passing the GKE backend's name in Terraform variables and attaching it to our default URL map (see the Terraform sketch after this list)
- Cloud DNS for domain management
  - check `ROOT_DOMAIN_NAME_<ENV>` below
- Terraform-defined infrastructure
  - using `kubectl` directly instead of the Kubernetes Terraform provider, as the latter is missing an Ingress type, among others
- CircleCI pipeline
  - push to any non-`master` branch triggers a `dev` deployment & push to the `master` branch triggers a `test` deployment
  - `prod` deployment is triggered by an additional approval step in the CircleCI UI
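A minimal Terraform sketch of that URL map workaround could look like the following; all resource and variable names here (e.g. `gke_backend_service_name`, `static_assets`) are illustrative, not necessarily the ones used in this repo:

```hcl
# Hypothetical sketch: ingress-gce cannot attach a backend bucket, so the
# backend service it creates for the GKE app is passed in by name and wired
# into a hand-built URL map alongside the static assets bucket.

variable "gke_backend_service_name" {
  description = "Name of the backend service created by ingress-gce for the GKE app"
  type        = string
}

data "google_compute_backend_service" "api" {
  name = var.gke_backend_service_name
}

resource "google_storage_bucket" "static_assets" {
  name     = "example-static-assets" # illustrative bucket name
  location = "EU"
}

resource "google_compute_backend_bucket" "static_assets" {
  name        = "static-assets"
  bucket_name = google_storage_bucket.static_assets.name
  enable_cdn  = true
}

resource "google_compute_url_map" "default" {
  name            = "web-app"
  default_service = google_compute_backend_bucket.static_assets.self_link

  host_rule {
    hosts        = ["*"]
    path_matcher = "main"
  }

  path_matcher {
    name            = "main"
    default_service = google_compute_backend_bucket.static_assets.self_link

    # /api/* goes to the GKE backend service, everything else to the bucket
    path_rule {
      paths   = ["/api/*"]
      service = data.google_compute_backend_service.api.self_link
    }
  }
}
```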
The following steps need to be completed manually before automation kicks in:
- Create a new Google Cloud project for each environment
- For each Google Cloud project,
  - set up a Cloud Storage bucket for storing remote Terraform state
  - set up an IAM service account to be used by Terraform. Attach the `Editor` and `Compute Network Admin` roles to the created account
- Set environment variables in your CircleCI project (replacing `ENV` with an uppercase `DEV`, `TEST` and `PROD`):
  - `GOOGLE_PROJECT_ID_<ENV>`: env-specific Google project id
  - `GCLOUD_SERVICE_KEY_<ENV>`: env-specific service account key
  - `DB_PASSWORD_<ENV>`: env-specific password for the Postgres user that the application uses
  - `ROOT_DOMAIN_NAME_<ENV>`: env-specific root domain name, e.g. `dev.example.com`
  - `K8S_MASTER_ALLOWED_IP`: IP from which to access the cluster master's public endpoint, i.e. the IP you run `kubectl` from (read more)
    - In CircleCI we temporarily whitelist the test host IP in order to run `kubectl`
- Enable the following Google Cloud APIs:
  - `cloudresourcemanager.googleapis.com`
  - `compute.googleapis.com`
  - `container.googleapis.com`
  - `containerregistry.googleapis.com`
  - `dns.googleapis.com`
  - `servicenetworking.googleapis.com`
  - `sqladmin.googleapis.com`
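With the state bucket in place, each environment's Terraform configuration can point at it through a GCS backend block along these lines (the bucket name and prefix are placeholders, not necessarily what this repo uses):

```hcl
# Illustrative backend configuration; the real bucket name and prefix live in
# the per-environment Terraform directories.
terraform {
  backend "gcs" {
    bucket = "my-project-terraform-state" # the state bucket created above
    prefix = "dev"                        # one prefix per environment
  }
}
```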
You might also want to acquire a domain and update your domain registration to point to Google Cloud DNS name servers.
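If the zone is managed with Terraform, the name servers to configure at the registrar can be read off the managed zone, roughly as sketched here (zone and output names are made up for illustration):

```hcl
# Illustrative only: a managed zone for the environment's root domain and an
# output listing the name servers to set at the domain registrar.
resource "google_dns_managed_zone" "root" {
  name     = "root-zone"
  dns_name = "dev.example.com." # note the trailing dot
}

output "name_servers" {
  value = google_dns_managed_zone.root.name_servers
}
```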
You can also sidestep CI and deploy locally:
- Install `terraform`, `gcloud` and `kubectl`
- Log in to Google Cloud: `gcloud auth application-default login`
- Update infra: `cd terraform/dev && terraform init && terraform apply`
- Build the application's Docker image and push it to Google Container Registry:

  ```sh
  cd app
  export PROJECT_ID="$(gcloud config get-value project -q)"
  docker build -t gcr.io/${PROJECT_ID}/gke-app:v1 .
  gcloud docker -- push gcr.io/${PROJECT_ID}/gke-app:v1
  ```
- Authenticate `kubectl`: `gcloud container clusters get-credentials $(terraform output cluster_name) --zone=$(terraform output cluster_zone)`
- Render the Kubernetes config template: `terraform output k8s_rendered_template > k8s.yml` (a sketch of how such an output can be produced follows this list)
- Update Kubernetes resources: `kubectl apply -f k8s.yml`
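For context, a `k8s_rendered_template` output like the one above can be produced by rendering a manifest template inside Terraform; the template path and variables below are hypothetical, not necessarily what this repo does:

```hcl
variable "project_id" {
  type = string
}

# Render the Kubernetes manifest template and expose it as an output so that
# `terraform output k8s_rendered_template > k8s.yml` works as described above.
output "k8s_rendered_template" {
  value = templatefile("${path.module}/k8s.yml.tpl", {
    image = "gcr.io/${var.project_id}/gke-app:v1"
  })
}
```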
Read here on how to connect to the Cloud SQL instance with a local `psql` client.
Possible future improvements:
- Cloud SQL high availability & automated backups
- regional GKE cluster
- GKE autoscaling
- Cloud Armor DDoS protection
- SSL
- tune down service account privileges
- possible CI improvements:
  - add a step to clean up old container images from GCR
  - prompt for extra approval on infra changes in `master`
  - don't rebuild the Docker image from `test` to `prod`