awoisoak/devops-sandbox


Repository to play with different devops projects.

Project Bash

This script automatically builds an environment with a Python Flask web server that exposes port 9000. To build the environment, simply execute the script:

bash photo-shop.sh
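
Once the script finishes, a quick smoke test (a sketch, assuming the Flask server answers on the local machine) is to request the port it exposes:

# The web server should reply on port 9000 if the environment came up correctly
curl http://localhost:9000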

[deploy screenshot]


Project CI/CD (GitHub Actions)

Pipeline created with GitHub Actions for the Camera Exposure Calculator app.

ci.yaml

This workflow builds and uploads the corresponding artifacts for each commit pushed to the repository.

google_play_deployment.yml.yaml

This workflow allows the deployment of the app to Google Play with a simple interface.

[deploy workflow screenshot]

All sensitive data for the deployments is kept in GitHub Secrets, including a base64 encoding of the signing key, which is decoded during the process so the app can be signed. Using Gradle Play Publisher within the app code, the workflow publishes the app listing to Google Play and deploys a 10% rollout in the given track. Lastly, it creates a GitHub Release with the app bundle attached.
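
As a sketch of the signing-key handling (the file and variable names here are illustrative, not necessarily the repository's actual ones): the key is encoded once before being stored as a secret, and decoded back into a file inside the workflow.

# Encode the keystore locally before pasting the result into a GitHub Secret
base64 < release.keystore > keystore.b64
# Inside the workflow, decode the secret back into a file before the signing step
echo "$SIGNING_KEY_BASE64" | base64 --decode > release.keystore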


Project Docker Compose

This project uses Docker Compose to run a load balancer, a database, and a given number of photo-shop web servers, each running in its own Docker container.

To run the infrastructure, specify how many web server instances you want to launch:

docker compose up --scale web=3 &

Repeating the command with a different number will scale the web servers up or down with no downtime.
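
To check which containers are running after a rescale:

# List the containers managed by this Compose project
docker compose ps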

The load balancer will be accessible at:

localhost:8080

The page returned will display the pictures registered in the database along with the specific web server instance that processed the request.
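
Since requests are balanced across the instances, repeating the request should rotate through the replicas (a sketch; the exact output depends on the photo-shop page):

# Each request may be answered by a different web container
for i in 1 2 3; do curl -s http://localhost:8080; done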


Project Kubernetes

This project runs a Kubernetes cluster with two deployments: one for a photo-shop web server and another for a MariaDB database.

db-deploy.yaml

The database deployment consists of a single replica. As explained in the MariaDB Docker image documentation, the initialization of a fresh database is done via scripts found in /docker-entrypoint-initdb.d. The scripts are defined via a ConfigMap (db-configmap.yaml) and placed in that directory via a mounted volume.
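
To confirm the init scripts actually landed in the container, the mounted directory can be listed (a sketch; the deployment name is assumed from the manifest file name):

# List the init scripts mounted from the ConfigMap into the mariadb container
kubectl exec deploy/db-deploy -- ls /docker-entrypoint-initdb.d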

db-service.yaml

ClusterIP service to expose the database.

db-configmap.yaml

ConfigMap containing the SQL script used to initialize the database.

web-deploy.yaml

Frontend deployment of photo-shop consisting of 2 replicas.

web-service.yaml

NodePort service to expose the frontend outside of the cluster.

To deploy the whole cluster:

kubectl apply -f .

You should be able to access the web server running in the pods through the generated NodePort ip:port:

kubernetes git:(master) ✗ kubectl get service
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
db-service           ClusterIP   10.108.255.121   <none>        3306/TCP         19m
kubernetes           ClusterIP   10.96.0.1        <none>        443/TCP          24m
photo-shop-service   NodePort    10.101.35.13     <none>        9000:30000/TCP   19m
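
On a regular cluster, the frontend can then be reached at any node's address plus the NodePort (30000 in the listing above; NODE_IP below is a placeholder for one of your nodes):

# Request the photo-shop frontend through the NodePort
curl http://$NODE_IP:30000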

However, if you are running K8s on minikube, you will have to obtain the ip:port from this command:

kubernetes git:(master) ✗ minikube service photo-shop-service --url
http://127.0.0.1:51781

Project AWS Terraform

The Terraform files will deploy the following infrastructure:

[aws infrastructure diagram]

  • The VPC contains 3 subnets: 1 public and 2 private.
  • The web server is located in the public one so it can be accessed by users.
  • The database is located in one of the private subnets.
  • RDS needs a 'DB subnet group', which requires at least 2 subnets in different Availability Zones so a standby can be created if needed in the future (+info)
  • The web server only accepts HTTP and SSH connections from outside (SSH should be limited to the admin IP in production).
  • The database only accepts connections on port 3306 from the EC2 instance.

This project assumes that Terraform was set up with the corresponding AWS account.

To see the exact changes that Terraform will apply:

terraform plan -var-file="secrets.tfvars"

To trigger the infrastructure setup:

terraform apply -var-file="secrets.tfvars"

All components use Free Tier resources, but make sure you destroy them once you stop working with them to avoid being charged:

terraform destroy -var-file="secrets.tfvars"

To initialize the DB with some data, we have to do it through an SSH tunnel (SSH port forwarding) via the EC2 instance:

ssh -i "$PRIVATE_KEY_PATH" -N -L 6666:$DB_ADDRESS:3306 ec2-user@$WEB_ADDRESS

With the tunnel above set up, all connections against localhost:6666 will be forwarded to port 3306 of the RDS instance, allowing us to populate the DB:

mysql -u $DB_USER -p$DB_PASSWORD -h 127.0.0.1 -P 6666 < scripts/setup_db.sql
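
A quick way to confirm the import worked is to list the databases through the same tunnel (a sketch, reusing the variables above):

# Still through the tunnel: the database created by the setup script should appear
mysql -u $DB_USER -p$DB_PASSWORD -h 127.0.0.1 -P 6666 -e "SHOW DATABASES;"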

Project GCP Terraform

Similar infrastructure to the one from the AWS project, with some GCP peculiarities. To configure a Cloud SQL instance with a private IP, it is required to have private services access, which allows us to create private connections between our VPC network and the underlying Google service producer's VPC network.

Setup Infrastructure

This project assumes that Terraform was set up with the corresponding GCP account.

To see the exact changes that Terraform will apply:

terraform plan -var-file="secrets.tfvars"

To trigger the infrastructure setup:

terraform apply -var-file="secrets.tfvars"

Make sure you destroy all resources once you stop working with them to avoid being charged (if a storage bucket was created to initialize the DB, make sure you remove it manually too):

terraform destroy -var-file="secrets.tfvars"

Database Initialization

We can initialize the DB with an SQL script uploaded via the GCP web interface. For that we need to create a bucket where the SQL script is uploaded: Cloud SQL > IMPORT > BUCKET > setup-db.sql
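
The same flow can also be scripted (a sketch: the bucket name is illustrative, the instance name is assumed to be 'database' as in the gcloud commands below, and the Cloud SQL service account needs read access to the bucket):

# Create a bucket, upload the init script, and import it into the Cloud SQL instance
gsutil mb gs://photoshop-db-init
gsutil cp scripts/setup_db.sql gs://photoshop-db-init/
gcloud sql import sql database gs://photoshop-db-init/setup_db.sql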

Since the photo-shop web server uses a DB user called 'user', we need to create it via gcloud:

    gcloud sql users create user \
    --host=% \
    --instance=database \
    --password=password

Troubleshooting

  • Cloud SQL comes with a root user by default. To set a password for it:

      gcloud sql users set-password root \
      --host=% \
      --instance=database \
      --prompt-for-password
    

(root:% has almost no privileges by default, but the users created via gcloud will.)

  • Confirm DB connection from Compute Engine

      mysql --user=root -p -h DB_PRIVATE_IP
    
  • To connect to Compute Engine from our local terminal (instead of using Cloud Shell), add our public key to the ~/.ssh/authorized_keys file in Compute Engine and make sure it has the corresponding permissions:

      chmod 600 ~/.ssh/authorized_keys
    
  • Confirm SSH connection to Compute Engine

      ssh -i $HOME/.ssh/google_compute_engine  awoisoak_devops@COMPUTE_ENGINE_PUBLIC_IP        
    

Since we manually created a storage bucket to upload the scripts/setup_db.sql file, make sure you delete it to avoid being charged (it won't be deleted by Terraform!).


Project GCP Terraform (Cloud Run with remote backend)

There are two Terraform projects:

  • 'infrastructure'
  • 'pre-infrastructure'

The 'infrastructure' tf project defines a Cloud Run service with a prepopulated image of the photo-shop web server from the Artifact Registry. When the 'infrastructure' project is set up, the Artifact Registry must already exist and contain the required Docker image. Because of that, it is not created within this Terraform project.

The 'pre-infrastructure' Terraform project is in charge of two main tasks:

  • Setting up the Artifact Registry where the photo-shop Docker image used by 'infrastructure' will be uploaded.

  • Creating the bucket where 'infrastructure' will push its state files.

The idea is to keep the main 'infrastructure' tf state saved privately in protected storage, which furthermore allows state locking.

The state of 'pre-infrastructure' does not include any sensitive information, so it can be pushed to the repo itself in plain text.

To set up the whole thing, follow these steps:

Setup pre-infrastructure tf project

cd pre-infrastructure
terraform init
terraform apply

Pull the photo-shop image from Docker Hub

docker pull awoisoak/photo-shop

Tag the image to point at the Artifact Registry repository created

docker tag awoisoak/photo-shop us-west1-docker.pkg.dev/cloud-run-photoshop/my-repository/photo-shop

Upload the image to the Artifact Registry repository

docker push us-west1-docker.pkg.dev/cloud-run-photoshop/my-repository/photo-shop

Setup infrastructure tf project

cd ../infrastructure
terraform init
terraform apply

The output of the last command should display the URL of the photo-shop web server running as a service within Google Cloud Run.
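
If the URL is needed again later, the saved outputs can be re-displayed without re-applying:

# Re-print the Terraform outputs (including the Cloud Run URL) from the stored state
terraform output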

Once you are finished, don't forget to tear down both infrastructures:

terraform destroy && cd ../pre-infrastructure && terraform destroy

Project Ansible

Ansible requires remote hosts to have Python installed, so this project uses its own Docker images.

Note: We could install Python when executing 'docker compose up' and, if needed, let Ansible know the Python path via the inventory (e.g. ansible_python_interpreter=/usr/bin/python3). However, this brings issues with nginx, and the development process in general takes much longer, since we would need to update and install dependencies every single time we bring Docker Compose up or down.

Making use of a static inventory like inventory.txt is not ideal, since the number of web servers is hardcoded and the user might want to scale up or down depending on the circumstances.

Because of that, in this scenario having a dynamic Docker container inventory is a better approach. To install it:

ansible-galaxy collection install community.docker
pip install docker

Now we can generate a dynamic Docker inventory by adding an inventory.docker.yaml file.
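
A minimal version of that file only needs to name the inventory plugin (a sketch, written here via a heredoc; the collection's docker_containers plugin discovers the running containers):

# Create a minimal dynamic-inventory file for the community.docker plugin
cat > inventory.docker.yaml <<'EOF'
plugin: community.docker.docker_containers
EOF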

To get the dynamic list of Docker hosts:

ansible-inventory -i inventory.docker.yaml --list --yaml

To get all possible metadata that can be used to build groups:

ansible-inventory -i inventory.docker.yaml --list | grep -i docker_

Once everything is set up, we can trigger Docker Compose, which will launch a load balancer, a database, and the number of web servers we specify.

docker compose up --scale web=3

We can now create our playbook.yaml to automate all kinds of tasks on the different servers.
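
A minimal starting point for that playbook could be the following (a sketch; the ping task only verifies connectivity, and real tasks would replace it):

# Write a minimal connectivity-check playbook
cat > playbook.yaml <<'EOF'
- hosts: all
  tasks:
    - name: Verify Ansible can reach each container
      ansible.builtin.ping:
EOF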

docker compose up --scale web=3
ansible-playbook -i inventory.txt playbook.yaml

Project Prometheus

A Prometheus server is set up together with an instance of AlertManager, Grafana, and a set of servers and exporters. AlertManager is configured to send alert notifications via email.
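
A quick way to check that the targets are being scraped (a sketch, assuming Prometheus listens on its default port 9090):

# Query the Prometheus API for the health of every scrape target
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'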

How metrics are generated:

TODO: The photo-shop instances running behind the load balancer can't be scraped by Prometheus. The web server instances should probably push metrics instead.

[grafana dashboard screenshot]


Project Python

Different handy Python projects for DevOps

Spreadsheet

Run operations given a spreadsheet and add new fields to it

AWS

Run several operations against an AWS account:

- Create VPCs
- Create EC2 instances
- Monitor EC2 instances
- Create snapshots
- Create volumes and restore backup snapshots into them
- Clean up resources

Web monitoring

Monitor websites:

- Handle different environments:
    - Web server running in a local container
    - Web server running in an EC2 instance
- Send alerts via email when applications are not available
- Connect to a remote EC2 instance via SSH
- Try to recover the application in different scenarios:
    - Restarting local containers
    - Restarting the remote host and/or its services
