# Terra Ops
This project structure allows engineers to:
- Work only within the environments they are permitted to access.
- Add or remove modules as needed to build their applications within those environments.
- Deploy specific applications/modules (e.g. `networking`, `addons`, `argocd-app-1`) within their specific environments.
Each folder under `envs` (`dev`, `prod`, etc.) is associated with a specific Terraform workspace.
```
├── infra
│   ├── envs
│   │   ├── dev
│   │   │   ...
│   │   │   ├── app-a
│   │   │   │   ├── data.tf
│   │   │   │   ├── main.tf
│   │   │   │   ├── outputs.tf
│   │   │   │   └── .terraform.lock.hcl
│   │   │   ├── eks
│   │   │   │   ├── data.tf
│   │   │   │   ├── main.tf
│   │   │   │   ├── outputs.tf
│   │   │   │   └── .terraform.lock.hcl
│   │   │   ├── loadbalancer
│   │   │   │   ├── data.tf
│   │   │   │   ├── main.tf
│   │   │   │   └── .terraform.lock.hcl
│   │   │   └── networking
│   │   │       ├── main.tf
│   │   │       ├── outputs.tf
│   │   │       └── .terraform.lock.hcl
│   │   ...
│   │   └── prod
│   │       └── networking
│   │           └── main.tf
│   └── modules
│       ...
│       ├── applications
│       │   ├── app-a
│       │   │   ├── README.md
│       │   │   └── main.tf
│       │   ├── app-b
│       │   │   ├── README.md
│       │   │   ├── main.tf
│       │   │   ├── outputs.tf
│       │   │   └── variables.tf
│       │   └── app-c
│       │       ├── README.md
│       │       ├── main.tf
│       │       └── variables.tf
│       ├── eks
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   └── variables.tf
│       ├── loadbalancer
│       │   ├── iam
│       │   │   └── AWSLoadBalancerController.json
│       │   ├── main.tf
│       │   └── variables.tf
│       ├── networking
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   ├── providers.tf
│       │   └── variables.tf
│       ...
```
- The repository is cloned to the local machine.
- A Terraform workspace is set up within a specific local environment folder, e.g. `dev`, `feature-7890`.
- The engineer adds modules/apps as needed within that folder and pushes the changes to the repo.
- The GitHub Actions pipeline triggers validations and Terraform provisioning/deployment based on the pushed changes.
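For illustration, an application/module folder inside an environment typically contains a thin `main.tf` that instantiates one of the shared modules. A minimal sketch, assuming hypothetical module inputs (the real modules may expose different variables):

```hcl
# infra/envs/dev/networking/main.tf -- illustrative sketch only;
# the inputs shown here (app_name, env, vpc_cidr) are assumptions.
module "networking" {
  source = "../../../modules/networking"

  app_name = "sample-app"
  env      = terraform.workspace # e.g. "dev", matching this env folder
  vpc_cidr = "10.0.0.0/16"
}
```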
A `tf.sh` script is available in each environment folder to facilitate quick validations and tests, and it can be customized by engineers as needed. By default the script provides the following capabilities:

- Provisioning of the `VPC`, `Subnets` and `EKS Cluster`.
- Application deployment through Kubernetes `Ingress`, `Service` and `Deployment` resources.
```shell
$ cd infra/envs/dev
$ ./tf.sh apply
```

The same script can be used to remove the infrastructure and any apps within a specific environment:

```shell
$ cd infra/envs/dev
$ ./tf.sh destroy
```
- Located at: `infra/modules/networking`
- Located at: `infra/modules/loadbalancer`

Located at: `infra/modules/applications/app-helm-installer-example`
```hcl
resource "helm_release" "hello" {
  name       = "hello"
  repository = "https://helm.github.io/examples"
  chart      = "hello-world"
  namespace  = "hello-ns"
  values     = [file("${path.module}/values/custom.yaml")]

  set {
    name  = "awsRegion"
    value = "us-east-1"
  }
}
```
Located at: `infra/modules/eks`
```hcl
resource "aws_eks_cluster" "eks" {
  name     = "${var.app_name}-${var.env}-${var.eks_name}"
  version  = var.eks_version
  role_arn = aws_iam_role.eks.arn

  vpc_config {
    endpoint_private_access = false
    endpoint_public_access  = true

    subnet_ids = [
      var.private_zone1,
      var.private_zone2
    ]
  }
  ...
```
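The variables referenced above suggest an input interface along these lines. A hedged sketch of what the module's `variables.tf` could declare; the names are taken from the resource above, but the types and default are assumptions:

```hcl
# Illustrative variables.tf for the eks module (types and default are assumed).
variable "app_name" { type = string }
variable "env" { type = string }
variable "eks_name" { type = string }

variable "eks_version" {
  type    = string
  default = "1.30"
}

# Subnet IDs for the two private availability zones the cluster spans
variable "private_zone1" { type = string }
variable "private_zone2" { type = string }
```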
Check which identity is currently used to make calls via the CLI:

```shell
$ aws sts get-caller-identity
```

Update the local kubeconfig to connect to the EKS cluster in AWS:

```shell
$ aws eks update-kubeconfig \
    --name sample-app-dev-sample-eks \
    --region us-east-1
```

Check for access:

```shell
$ kubectl get nodes
```
Located at: `infra/addons/main.tf`
```hcl
resource "helm_release" "metrics_server" {
  name       = "metrics-server"
  repository = "https://kubernetes-sigs.github.io/metrics-server/"
  chart      = "metrics-server"
  namespace  = "kube-system"
  version    = "3.12.1"
  values     = [file("${path.module}/values/metrics-server.yaml")]

  depends_on = [var.eks_node_group_general]
}
```
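Note that the `depends_on` above points at a module input rather than a resource, so the addon module has to declare that variable. A minimal sketch of such a declaration; the loose `any` type is an assumption:

```hcl
# Hypothetical declaration matching the depends_on above. It is typed loosely
# because the value is only used as an ordering dependency: the helm_release
# waits until the EKS node group passed in by the caller exists.
variable "eks_node_group_general" {
  description = "EKS node group the addon releases should wait for"
  type        = any
}
```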
Located at: `infra/modules/external-lb-with-cluster-ip-service`
```hcl
...
resource "kubernetes_deployment" "second" {
  metadata {
    name      = "second"
    namespace = "second-ns"

    labels = {
      App = "second"
    }
  }

  spec {
    replicas = 2
    ...

resource "kubernetes_service" "second" {
  metadata {
    name      = "second-service"
    namespace = "second-ns"
  }

  spec {
    selector = {
      ...

resource "kubernetes_ingress_v1" "second_ingress" {
  wait_for_load_balancer = true

  metadata {
    name      = "second-ingress"
    namespace = "second-ns"

    annotations = {
      "alb.ingress.kubernetes.io/scheme"           = "internet-facing"
      "alb.ingress.kubernetes.io/target-type"      = "ip"
      "alb.ingress.kubernetes.io/healthcheck-path" = "/health"
    }
  }

  spec {
    ingress_class_name = "alb"
    ...
```
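The truncated `spec` above would normally continue with a rule routing traffic to the `ClusterIP` service. A hedged sketch of how it could look; the path and service port shown are assumptions, not the repository's actual values:

```hcl
# Possible continuation of the ingress spec (path and port are assumed).
spec {
  ingress_class_name = "alb"

  rule {
    http {
      path {
        path      = "/"
        path_type = "Prefix"

        backend {
          service {
            name = kubernetes_service.second.metadata[0].name
            port {
              number = 80
            }
          }
        }
      }
    }
  }
}
```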
Useful CLI commands: https://developer.hashicorp.com/terraform/cli/workspaces
In order to access the server UI you have the following options:

- Port-forward the service:
  `kubectl port-forward service/argocd-server -n argocd 8080:443`
  then open the browser at https://localhost:8080 and accept the certificate.
- Enable ingress in the values file (`server.ingress.enabled`) and either:
  - Add the annotation for SSL passthrough: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/ingress.md#option-1-ssl-passthrough
  - Add the `--insecure` flag to `server.extraArgs` in the values file and terminate SSL at your ingress: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/ingress.md#option-2-multiple-ingress-objects-and-hosts
After reaching the UI for the first time you can log in with username `admin` and the random password generated during the installation. You can find the password by running:

```shell
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# or using jq
kubectl -n argocd get secret argocd-initial-admin-secret -o json | jq .data.password -r | base64 -d

# or when the attribute to extract contains a "." in its name -> "tls.crt"
# e.g. extracting the certificate key from a sealed-secrets key secret after installing the `kubeseal` CLI
kubectl get secrets sealed-secrets-keyr4vzg -n kube-system -o json | jq -r '.data."tls.crt"' | base64 -d
```

(You should delete the initial secret afterwards, as suggested by the Getting Started guide: https://github.com/argoproj/argo-cd/blob/master/docs/getting_started.md#4-login-using-the-cli)
- (only for testing purposes) Adjust the `argocd` Helm configuration to place the `LoadBalancer` within a `public subnet` so it can be reached through the internet:
```hcl
resource "helm_release" "argocd" {
  depends_on = [var.eks_node_group_general]

  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  version          = "4.5.2"
  namespace        = "argocd"
  create_namespace = true

  set {
    name  = "server.service.type"
    value = "LoadBalancer"
  }

  set {
    name  = "server.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-type"
    value = "external"
  }

  set {
    name  = "server.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-scheme"
    value = "internet-facing"
  }
}
```
- Watches for changes within the `dev` environment/folder and triggers provisioning/deployment.
- Watches for changes within the `prod` environment/folder and triggers provisioning/deployment.
- [WIP] Configures remote state and associates the `dev` folder with the `dev` workspace in Terraform.
- [WIP] Prometheus & cAdvisor & Grafana
- [WIP] Remote state backend, based on https://github.com/kunduso/add-aws-ecr-ecs-fargate/blob/main/deploy/backend.tf:
```hcl
terraform {
  backend "s3" {
    bucket  = "ecs-app-XXX"
    encrypt = true
    key     = "terraform-state/terraform.tfstate"
    region  = "us-east-1"
  }
}
```
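Since each environment folder maps to a Terraform workspace, it is worth noting that the S3 backend stores non-default workspace states under a configurable prefix in the same bucket, controlled by the `workspace_key_prefix` argument (default `env:`). A sketch with an explicit prefix:

```hcl
# Same backend with an explicit workspace prefix. With this configuration the
# state for the "dev" workspace would live at:
#   s3://ecs-app-XXX/workspaces/dev/terraform-state/terraform.tfstate
terraform {
  backend "s3" {
    bucket               = "ecs-app-XXX"
    encrypt              = true
    key                  = "terraform-state/terraform.tfstate"
    region               = "us-east-1"
    workspace_key_prefix = "workspaces" # default is "env:"
  }
}
```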
[WIP] Showcase one sample application imported from another GitHub repo instead of a folder within the `modules` folder.
- https://spacelift.io/blog/argocd-terraform
- https://spacelift.io/blog/terraform-gitops
- https://spacelift.io/blog/terraform-remote-state#benefits-of-using-terraform-remote-state
Got something interesting you'd like to add or change? Please feel free to open a Pull Request.
If you want to say thank you and/or support the active development of Terra Ops:
- Add a GitHub Star to the project.
- Tweet about the project on your Twitter.
- Write a review or tutorial on Medium, Dev.to or personal blog.