___ ___ _ __ _ _
| \ ___ __ __ / _ \ | '_ \ ___ o O O | || | __ _ __ __ ___ _ _
| |) | / -_) \ V / | (_) | | .__/ (_-< o | __ | / _` | \ V / / -_) | ' \
|___/ \___| _\_/_ \___/ |_|__ /__/_ TS__[O] |_||_| \__,_| _\_/_ \___| |_||_|
_|"""""|_|"""""|_|"""""|_|"""""|_|"""""|_|"""""| {======|_|"""""|_|"""""|_|"""""|_|"""""|_|"""""|
Centralized hub for my DevOps wisdom: a curated collection of scripts, guides, and best practices to streamline development and operations. Your go-to resource for mastering the art of seamless software delivery.
| Subject | Code | DONE |
|---|---|---|
| Enriched web application with automated tests | APP | ✅ |
| Continuous Integration and Continuous Delivery (and Deployment) | CICD | ✅ |
| Infrastructure as code using Ansible | IAC | ✅ |
| Containerisation with Docker | D | ✅ |
| Orchestration with Docker Compose | DC | ✅ |
| Orchestration with Kubernetes | KUB | ✅ |
| Service mesh using Istio | IST | ✅ |
| Monitoring | MON | ✅ |
| Accurate project documentation in README.md file | DOC | ✅ |
| CI job for automated build and publish to DockerHub of the USER API image | CICD | ✅ |
| Automated K8S deployments with Helm for variabilisation | K8S | ✅ |
| Implementation of new API methods (Update, Delete, Get all keys) | APP | ✅ |
| Improved tests and new tests for every new API method | CICD | ✅ |
| API documentation using Swagger UI | APP | ✅ |
| API health endpoint | APP | ✅ |
| Complete DevOps toolbox | | ✅ |
| Ready to use DevOps Infrastructures | INF | ✅ |
- Prerequisites
- USER API
- CI/CD Pipeline
- Infrastructure as Code
- Docker Image
- Docker Compose
- Orchestration with K8S
- Istio in K8S
- Helm Integration
- Bonuses
- Useful Links
- Authors
To run this project you'll need the following software installed on your device.
You'll find the documentation and installation instructions for each of them in the Useful Links section.
This project has been designed to run on a Linux machine. Most of the steps will still work on a Windows machine, but you may encounter problems with scripting and VM management. These problems are not due to the project itself but to the execution environment, which we don't control.
It is a basic NodeJS web application exposing a REST API that creates and stores user parameters in a Redis database. It allows the user to perform CRUD operations.
The source code of the application is available at Source Code
This application is written in NodeJS and uses a Redis database. Follow the instructions below to perform a complete installation of the application.
- Clone our repo to your computer:
git clone https://github.com/tristanqtn/ece-devops-ING4-SI-03/
- Navigate to the freshly downloaded repo:
cd ece-devops-ING4-SI-03
cd userapi
- Since node modules are not included in this repo, install them manually using the following command:
npm install
If you've followed the instructions above, the entire project is installed on your machine and you have the tools (NodeJS and Redis) to run this application locally. The type of installation you've just performed is comparable to a dev-type installation.
Furthermore, the aim of this project is to deploy this same application in a variety of environments, so in the rest of this documentation you'll learn how to deploy the application using different methods.
Here are a few explanations concerning the usage of the application in local mode. This type of deployment requires that Redis and NodeJS are already installed on the hosting device. Redis must be running when you use the application. To make sure Redis is running, use the command `redis-cli PING`; Redis should answer with `PONG`.
Start a web server: in the `./userapi` folder, run the following command to launch the application.
npm start
For dev mode:
npm run dev
It will start a web server available in your browser at http://localhost:3000.
Now the application is running on your device and you should be able to access the application home page at USER API - home. This home page explains how to use the whole application.
Here's a list of the operations available through the REST API. For API testing we strongly recommend using Postman.
- Create a user
This method will allow you to insert a new user in the Redis DB. Send a POST (REST protocol) request using the following command:
curl --header "Content-Type: application/json" \
--request POST \
--data '{"username":"tristanqtn","firstname":"tristan","lastname":"querton"}' \
http://localhost:3000/user
Or using Postman, send a POST request to http://localhost:3000/user with the following JSON body:
{
"username": "tristanqtn",
"firstname": "tristan",
"lastname": "querton"
}
The API should respond with the following JSON message:
{ "status": "success", "msg": "OK" }
- Retrieve the information of a specific user
This method will allow you to retrieve the `firstname` and `lastname` of a user stored in Redis using its `username`. To do so, send a GET request to the API at http://localhost:3000/user/:username, where `username` is the username of the user whose information you want.
Use the following bash command to send the GET request:
curl http://localhost:3000/user/:username
Or using Postman, send a GET request to http://localhost:3000/user/:username with the correct `username` parameter.
The API should respond with the following JSON message:
{
"status": "success",
"msg": {
"firstname": "tristan",
"lastname": "querton"
}
}
- Retrieve all keys in the Redis database
This method will allow you to retrieve all keys stored in Redis. To do so, send a GET request to the API at http://localhost:3000/user/keys.
Use the following bash command to send the GET request:
curl http://localhost:3000/user/keys
Or using Postman, send a GET request to http://localhost:3000/user/keys.
The API should respond with the following JSON message:
{
"status": "success",
"msg": ["tristan", "apolline"]
}
- Update the information of a specific user
This method will allow you to update the information of a user already stored in the Redis DB. Make sure to use the `username` of an existing user. Send a PUT (REST protocol) request using the following command:
curl --header "Content-Type: application/json" \
--request PUT \
--data '{"username":"tristanqtn","firstname":"tristan","lastname":"querton"}' \
http://localhost:3000/user
Or using Postman, send a PUT request to http://localhost:3000/user with the following JSON body:
{
"username": "tristanqtn",
"firstname": "tristan",
"lastname": "querton"
}
The API should respond with the following JSON message:
{ "status": "success", "msg": "OK" }
- Delete a specific user
This method will allow you to delete a user stored in Redis using its `username`. To do so, send a DELETE request to the API at http://localhost:3000/user/:username, where `username` is the username of the user you want to delete.
Use the following bash command to send the DELETE request:
curl -X DELETE http://localhost:3000/user/:username
Or using Postman, send a DELETE request to http://localhost:3000/user/:username with the correct `username` parameter.
The API should respond with the following JSON message:
{
"status": "success",
"msg": 1
}
- Health Endpoint
An API endpoint has been created to report the current health state of the application. Send a GET request to http://localhost:3000/health (or curl it with `curl http://localhost:3000/health`) and the API should respond with a message similar to the following one:
{ "uptime": 389.8366598, "status": "OK", "timestamp": 1700817327148 }
This application has been covered with tests. These tests will be useful for creating CI/CD pipelines. They are also useful for checking the integrity of the application after code has been added or modifications have been made. To run these tests, make sure Redis is running with the command `redis-cli PING` (Redis should answer with `PONG`). Then run the following command, which will automatically start the server and then perform the suite of tests.
npm run test
The code of the test scripts is available at Tests
Here's a list of all tests that will be performed:
Configure
- load default json configuration file
- load custom configuration
Redis
- should connect to Redis
User
Create
- create a new user
- passing wrong user parameters
- avoid creating an existing user
Get
- get a user by username
- can not get a user when it does not exist
Get keys
- get the key of an existing user
Delete
- delete an existing user
- prevent deleting a non-existing user
User REST API
POST /user
- create a new user
- pass wrong parameters
GET /user
- get an existing user
- can not get a user when it does not exist
GET /user/keys
- get the key of an existing user
Delete /user
- delete an existing user
- can not delete a user when it does not exist
PUT /user
- update an existing user
- pass wrong parameters
- can not delete a user when it does not exist
The expected output of the execution of the test suite is shown in the following screenshot:
If you don't have Redis installed and can't install it, don't worry: we've created a Docker Compose file. Run the `docker compose up` command in the folder ./tools/standalone_redis/; this will start a standalone Redis server with the correct port mapping.
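For reference, here is a minimal sketch of what such a standalone Redis Compose file can look like (the actual file lives in ./tools/standalone_redis/; the service name here is an assumption):

```yaml
# Minimal sketch of a standalone Redis compose file -- the real one is in ./tools/standalone_redis/
services:
  redis:
    image: redis:latest   # official Redis image
    ports:
      - "6379:6379"       # expose the native Redis port to the host
```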
A Swagger generator has been added to the API. The API description is available at API Docs
Using GitHub Actions we have created a CI/CD pipeline. This pipeline runs on every push or accepted pull request. It is made up of two jobs: one ensures the Continuous Integration part and the other the Continuous Deployment. We could have performed both within a single job, but it is best practice to at least split integration and deployment.
The code of this CI/CD pipeline is available at CI/CD
CI/CD Pipelines are not executed locally but on a GitHub server similar to a production environment. Thus, for each job we need to tell the server which dependencies are required. We run those pipelines on a clean remote server because we want to reproduce a production environment.
CI stands for Continuous Integration. This job is responsible for making sure that the added code (pushed or merged) integrates correctly with the existing code. Verification of correct integration is carried out by tests written by the developers. Before running the tests, the pipeline installs the needed dependencies (Redis and NodeJS) on the container running the job. If all these tests pass without error, it means that the new code integrates well with the old one.
BONUS: If these tests pass, we can move on to the second step of the integration, which in our case is building and publishing the Docker image. Doing it by hand each time is boring and repetitive, so we've created a second job in the GitHub Action that automatically builds and pushes the image to DockerHub. Thanks to this job, the version available on DockerHub is always the latest. This job depends on the success of the testing job, because we don't want to build and publish a buggy application that didn't pass all tests.
The last job of this pipeline deploys the application to Azure. To do so, we've created a Resource Group in Azure that hosts an Azure Web App Service. Using the `publishProfile` of this Azure resource, we're able to connect GitHub to Azure and automate the deployment. This job also depends on the success of the testing job, because we don't want to deploy a buggy application that didn't pass all tests.
App running in Azure: IMPORTANT: This version of the userapi has deliberately not been connected to a Redis DB, because that would result in a publicly accessible DB, which is a major security threat.
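For illustration, here is a condensed sketch of how such a three-job workflow can be wired together in GitHub Actions. Job names, the Node version, secret names, and action versions are assumptions; the actual workflow is the one linked above.

```yaml
# Illustrative sketch only -- names, versions and secrets are assumptions,
# the real workflow is in the repository's .github/workflows folder.
name: CI/CD
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test: # Continuous Integration: run the test suite against a clean environment
    runs-on: ubuntu-latest
    services:
      redis: # Redis dependency required by the tests
        image: redis
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm install
        working-directory: userapi
      - run: npm run test
        working-directory: userapi

  build-and-push: # BONUS: build and publish the Docker image
    needs: test # only runs if the tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: userapi
          push: true
          tags: tristanqtn/userapi-devops:latest

  deploy-azure: # Continuous Deployment to the Azure Web App Service
    needs: test # never deploy code that failed the tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/webapps-deploy@v3
        with:
          app-name: userapi-devops # assumed Web App name
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: userapi
```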
When all of these three jobs are finished, the new version of the application has been deployed and the latest image pushed to DockerHub. We end up with the following working tree:
At this step we will deploy our application inside a VM dedicated to running the app. The VM is created with Vagrant, and its provisioning and configuration are handled by Ansible. Together they define the VM, copy the necessary files, and execute Ansible playbooks and shell scripts to set up the desired environment.
This part refers to the Vagrantfile.
- VM Definition:
A VM named "nodeapp_server" is defined using the CentOS 7 box. Port forwarding is set up to map port 3000 on the guest VM to port 3000 on the host machine. The VM is configured with specific resources (memory and CPUs) for both VirtualBox and VMware providers.
- File Provisioning:
The local ../userapi directory is copied to the VM's $HOME/nodeapp directory. At this step we are installing our NodeJS app code inside the VM.
- Ansible Provisioning:
Ansible is used for provisioning with the local playbook playbooks/run.yml. Only the roles with the tags "install" and "integrity" will be executed.
- Shell Provisioning:
A shell script app_launcher.sh is executed on every provision. This script runs the integrity tests once again, then opens port 3000 of the VM to external connections, and finally starts our NodeJS application.
We need to configure this VM in order to host the application on it; this configuration process is handled by Ansible.
The installation is handled by Ansible: we provide it with a playbook that describes all the required steps to be performed. Here's an explanation of those steps (a condensed sketch of the playbook is shown after this list). At the end of this playbook the VM is completely configured to host the app (Redis is installed and running, firewall ports are opened, the language runtime is installed, the app launcher is ready, ...).
- Install Required Packages: uses the `yum` module to ensure that various packages (e.g., `curl`, `redis`, `nodejs`, etc.) are installed and up to date.
- Enable and Start SSH: ensures that the SSH service (`sshd`) is started and enabled for automatic startup.
- Enable HTTP+HTTPS Access: configures firewalld to enable permanent access for the HTTP and HTTPS services.
- Reload Firewalld: reloads the firewalld service to apply the new configuration.
- Starting Redis: uses the `command` module to start the Redis service using `systemctl`.
- Install Node Packages: uses the `command` module to install the Node.js packages for the application directory (`/home/vagrant/nodeapp/`) using npm.
- Changing Permissions for App Launcher: changes the permissions of the `app_launcher.sh` script to make it executable.
This Ansible playbook automates the installation and configuration of necessary packages and services for a system running a Node.js application. It also includes tasks for firewall configuration, service management, and script execution. The playbook ensures that the system is properly configured to run our application.
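As a reference, here is a condensed sketch of what these provisioning tasks can look like in Ansible. The real playbook (iac/playbooks/run.yml) organises them into tagged roles; the module choices below follow the descriptions above, and the app_launcher.sh path is an assumption.

```yaml
# Condensed, illustrative sketch of the provisioning tasks described above.
# The actual playbook lives in iac/playbooks/run.yml and uses tagged roles.
- hosts: all
  become: true
  tasks:
    - name: Install required packages
      yum:
        name: [curl, redis, nodejs]
        state: latest
      tags: install

    - name: Enable and start SSH
      service:
        name: sshd
        state: started
        enabled: true
      tags: install

    - name: Enable HTTP and HTTPS access permanently
      firewalld:
        service: "{{ item }}"
        permanent: true
        state: enabled
      loop: [http, https]
      tags: install

    - name: Reload firewalld
      command: firewall-cmd --reload
      tags: install

    - name: Start Redis
      command: systemctl start redis
      tags: install

    - name: Install Node packages
      command: npm install
      args:
        chdir: /home/vagrant/nodeapp/
      tags: install

    - name: Make the app launcher executable
      file:
        path: /home/vagrant/nodeapp/app_launcher.sh   # assumed path inside the VM
        mode: "0755"
      tags: install
```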
Health checks are performed by another playbook. This one ensures that the environment is ready to host the application.
Running Integrity Tests:
- Uses the `command` module to execute a health check command.
- Runs the `npm run test` command for the Node.js application located in `/home/vagrant/nodeapp/` and checks that all tests pass correctly.
This Ansible task is responsible for triggering the integrity tests for the Node.js application, providing a mechanism to verify that the application is in a healthy and expected state. The health check is essential for ensuring the reliability and correctness of the deployed application.
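A minimal sketch of such an integrity-check task (the actual task lives in the playbook's "integrity" role):

```yaml
# Illustrative sketch of the integrity check described above
- name: Running integrity tests
  command: npm run test          # a non-zero exit code (failing test) fails the play
  args:
    chdir: /home/vagrant/nodeapp/
  tags: integrity
```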
To run this NodeJS app in the Vagrant VM, make sure Vagrant and VirtualBox are installed and configured. Then browse to the `./iac` folder.
cd iac
Run the following command in the `./iac` folder to start the VM provisioning; the NodeJS app will start automatically if the provisioning step completes correctly.
vagrant up
To destroy the VM, first stop the NodeJS application and run this command:
vagrant destroy
In order to make the application usable in environments such as Docker Compose or Kubernetes, we first need to create the Docker image of the application. To do so we've created the Dockerfile, which is responsible for building the image.
Note that for this image there is no need to include folders and files such as `./test`, `eslintrc.json`, the `Dockerfile` itself, ... Thus we've added a .dockerignore to tell Docker which files and folders aren't required in the image.
Before performing the following instructions make sure Docker is installed and running on your device.
Browse to the `./userapi` folder.
cd userapi
Build the image with this command:
docker build -t userapi-devops .
Before publishing the image make sure you have a DockerHub account.
docker tag userapi-devops $YOUR_USERNAME/userapi-devops:latest
docker login
docker push $YOUR_USERNAME/userapi-devops:latest
The image is now available online. Thus you can perform the installation of this image on any compatible device without needing the source code.
Thanks to the DockerHub platform, the image of our application is available online for everyone. Find it using this link.
As explained above, we've created a bonus CI/CD pipeline job to automate all the steps involved in building and publishing the Docker image. Now we don't need to build and publish each new version of the USER API, because it will be done automatically for us on each merge or push to the main branch.
Run the container. Note that this container requires a Redis DB to work properly: make sure another container is hosting a Redis DB with port 6379 open, or that a Redis instance is installed and running on the device hosting the container. In order to access the application, a port binding is required as follows.
Locally built image:
docker run -p 3000:3000 -d userapi-devops
Image available on DockerHub:
docker run -p 3000:3000 -d tristanqtn/userapi-devops:latest
If you don't have Redis installed and can't install it, don't worry: we've created a Docker Compose file. Run the `docker compose up` command in the folder ./tools/standalone_redis/; this will start a standalone Redis server with the correct port mapping.
You can see in the screenshot above that the container is running. We've also created a Redis container to make sure that the USER API runs correctly. The two containers are completely independent, even though the API uses the Redis container.
Docker Compose is a powerful tool that simplifies the deployment of multi-container applications. To create a docker compose we have to create a docker-compose.yaml file.
In this Docker Compose, two services are defined: `redis` and `userapi`. The redis service is based on the latest Redis image, exposing its port `6379` (the native Redis port). This container hosts a Redis instance that will be used by the other container hosting the application. The userapi service, encapsulating a Node.js web application, relies on our custom image `tristanqtn/userapi-devops:latest` (the image we built and published previously) and exposes its functionality on port `3000`.
Importantly, the userapi service specifies dependencies using the `depends_on` directive, ensuring that the Redis service is fully initialized before the Node.js application starts. Additionally, the environment variables `REDIS_HOST` and `REDIS_PORT` are set, establishing communication between the services.
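Putting the two services together, the compose file looks roughly like the sketch below (values follow the description above; refer to the actual docker-compose.yaml in the repo for the exact content):

```yaml
# Sketch of the compose file described above -- see the repo's docker-compose.yaml for the real thing
services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"                       # native Redis port

  userapi:
    image: tristanqtn/userapi-devops:latest
    ports:
      - "3000:3000"                       # expose the web application
    depends_on:
      - redis                             # start only once the Redis service is up
    environment:
      REDIS_HOST: redis                   # the service name resolves to the Redis container
      REDIS_PORT: 6379
```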
Before starting the Docker Compose, make sure Docker is running on your device and the `docker-compose` extension is installed. Then run the following command:
docker compose up
A cluster of two containers should now be running on your device. Give them time to start; when you see the following line in your command prompt: `nodejs-webapp | Server listening the port 3000`, you can start using the application.
To stop the Docker Compose cluster you can either press `CTRL + C` or use the following Docker command (which will delete the whole cluster):
docker compose down
We didn't implement persistent volumes in the Docker Compose because we thought it would be more challenging, and thus more interesting, to set them up in the K8S environment.
Kubernetes is an open-source container orchestration platform, and these configurations are part of its persistent storage system. Before reading and experimenting with this part, make sure you understand these K8S terms: cluster, node, pod, deployment, service, PV and PVC.
These tools (Persistent Volume and Persistent Volume Claim) facilitate the dynamic provisioning and consumption of persistent storage in a Kubernetes cluster, ensuring data persistence for applications like Redis that require durable storage beyond the lifecycle of individual pods.
This YAML file defines a Kubernetes PersistentVolume (PV) named `redis-pv`. It specifies attributes such as the storage capacity (1Gi), access mode (ReadWriteOnce), reclaim policy (Retain), and a host path on the underlying node where the volume is physically stored ("/mnt/data"), here inside the Minikube node.
A K8S PV represents physical storage resources in the cluster. But this storage needs to be claimed by a pod to be used; that is the role of the PVC.
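A sketch of this PV, following the attributes listed above (see redis-pv.yaml in the k8s folder for the exact manifest):

```yaml
# Sketch of the PersistentVolume described above
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
spec:
  capacity:
    storage: 1Gi                     # storage capacity
  accessModes:
    - ReadWriteOnce                  # read-write by a single node
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data                  # physical location inside the Minikube node
```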
This YAML file defines a Kubernetes PersistentVolumeClaim (PVC) named `redis-pvc`. It specifies that the claim requires 1Gi of storage with a ReadWriteOnce access mode. A PVC is a request for storage that can be fulfilled by a PV; in this case, it is designed to bind to the previously defined `redis-pv`.
A K8S PVC is a request for storage by a user or a pod.
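A sketch of this PVC (see redis-pvc.yaml in the k8s folder for the exact manifest):

```yaml
# Sketch of the PersistentVolumeClaim described above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteOnce                  # must match the access mode of redis-pv
  resources:
    requests:
      storage: 1Gi                   # claim the 1Gi offered by redis-pv
```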
We're using PV and PVC to make the data stored in the Redis pod persistent, thus if this pod has to restart or to be down for a moment, the data won't be lost because it's stored persistently in the PV mounted inside Minikube.
Services facilitate seamless connectivity between the different components of the application within the Kubernetes cluster and can also be used to enable external access to a pod. In our case we define two services: one exposing the native Redis port so that the Redis pod can be used by the NodeJS app, and another one responsible for exposing the port of the NodeJS app to external access (outside of the node).
The redis-service file defines a Kubernetes Service named `redis-service`. Services in Kubernetes enable communication between different sets of pods. This service is configured to route traffic to pods with the label `app: redis`, based on the specified selector. It exposes the Redis pod on port `6379` within the cluster.
The nodejs-app-service file defines a Kubernetes Service named `nodejs-app-service`. Similar to `redis-service`, it facilitates communication between pods, but in this case it selects pods with the label `app: nodejs-app`. It exposes the Node.js application to external traffic on port `3000`.
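A sketch of these two services, based on the description above (the service type of nodejs-app-service is an assumption; see service.yaml in the k8s folder for the exact manifest):

```yaml
# Sketch of the two Services described above
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  selector:
    app: redis                       # routes traffic to the Redis pod
  ports:
    - port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-app-service
spec:
  type: NodePort                     # assumption: exposes the app outside the node
  selector:
    app: nodejs-app                  # routes traffic to the Node.js pod(s)
  ports:
    - port: 3000
      targetPort: 3000
```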
Now that a PV and a PVC have been defined to enable data persistence, and that services ensure connectivity between pods inside and outside the node, we can finally deploy the application.
This deployment file enables the orchestrated deployment and scaling of the Redis and Node.js applications in a Kubernetes cluster. The applications are configured to communicate seamlessly and leverage persistent storage for data durability.
The redis-deployment defines a Kubernetes Deployment named `redis-deployment` for the Redis database. It ensures that one replica of the Redis pod is always running. The pod specification includes a Redis container, using the latest Redis image, which mounts a persistent volume `redis-storage` at the path `/data` for data persistence. The container exposes port `6379`, and the volume is dynamically provisioned using the `redis-pvc` PersistentVolumeClaim.
The nodejs-app-deployment defines a Kubernetes Deployment named `nodejs-app-deployment` for the USER API application. It ensures that one replica of the Node.js pod is always running. The pod specification includes a Node.js container, using our image, exposing port `3000`. The environment variables `REDIS_HOST` and `REDIS_PORT` are set to establish communication with the Redis service `redis-service`, making the application aware of the Redis pod's location.
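A sketch of the two deployments, following the description above (labels match the selectors used by the services; see deployment.yaml in the k8s folder for the exact manifest):

```yaml
# Sketch of the two Deployments described above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:latest
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-storage
              mountPath: /data              # Redis data directory
      volumes:
        - name: redis-storage
          persistentVolumeClaim:
            claimName: redis-pvc            # binds to the PVC defined earlier
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: nodejs-app
          image: tristanqtn/userapi-devops:latest
          ports:
            - containerPort: 3000
          env:
            - name: REDIS_HOST
              value: redis-service          # reach Redis through its Service
            - name: REDIS_PORT
              value: "6379"
```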
- Make sure Minikube and kubectl are installed on your device. Then start the Minikube node and check its status with the following two commands:
minikube start
minikube status
Then apply these files in order: `redis-pv.yaml` => `redis-pvc.yaml` => `service.yaml` => `deployment.yaml`. This will deploy all the needed resources and finally the application.
kubectl apply -f redis-pv.yaml
kubectl apply -f redis-pvc.yaml
kubectl apply -f service.yaml
kubectl apply -f deployment.yaml
Make sure everything is OK with the following commands. All pods should be running with no restart loops. Check the logs of the NodeJS app pod to make sure that the application is healthy and running on port `3000`.
kubectl get pods
kubectl logs $NAME_OF_NODEJS_APP_POD
Since the app is running in a pod and the pod is inside a node, you have to create a tunnel directly to the NodeJS app with this command (it uses the service defined before, which opens the NodeJS pod to outside connections on port `3000`):
minikube service nodejs-app-service
To delete the application and all deployments you can either perform a clean exit with the following commands or destroy the node with `minikube delete`:
kubectl delete deployment redis-deployment
kubectl delete deployment nodejs-app-deployment
kubectl delete service nodejs-app-service
kubectl delete service redis-service
kubectl delete pvc redis-pvc
kubectl delete pv redis-pv
A simple script to perform this cleaning has been created here.
Deleting services, PV, PVC and deployments:
Deleting minikube node directly:
In the previous step we deployed our application in a K8S cluster, but stopping there would mean missing out on the advanced functionality of Kubernetes. So in this part of the project we're going to build on the work done previously in K8S and take it to the next level. With Istio, we'll implement a service mesh in our application, and with the help of Prometheus and Grafana, we'll be able to monitor the K8S cluster in real time and set alerts in the event of failure.
In short, we'll be configuring the K8S environment to monitor our application and the health of the cluster, as well as managing intra-cluster routing.
The first step is to configure an empty K8S cluster with Istio. To do so, browse into the `istio` folder:
cd istio
Then we must give the node more resources than usual, because installing Istio consumes a lot of resources. Make sure no Minikube node is running or already defined, then create the new node:
minikube delete
minikube start --cpus 6 --memory 8192
Just like `kubectl`, there's an `istioctl` command used to manage Istio inside a node. To add the istioctl command to your command prompt you should locally update the `PATH` environment variable. To do so, run the `pwd` command to get the location of your working directory, then update the `PATH` variable with this path. Don't forget to append the `bin` folder at the end of the path.
pwd
/home/tristan/Documents/GitHub/ece-devops-ING4-SI-03/istio/
export PATH=$PATH:/home/tristan/Documents/GitHub/ece-devops-ING4-SI-03/istio/bin
Now the `istioctl` command should be available in your instance of the command prompt.
IMPORTANT: This configuration is specific to the window of your command prompt; if you open a new command prompt, the `istioctl` command won't be defined.
At this point you should have an empty K8S cluster running. Check with the two following commands that the cluster is empty and nothing is running:
kubectl get ns
kubectl get pods
It is time to install Istio inside our cluster:
istioctl install
Istio should create two pods and a namespace; check that the installation has completed correctly with these two commands:
kubectl get ns
kubectl get pod -n istio-system
The result of the Istio installation should look like this:
In order to deploy our application pods under Istio management, we have to add a label to the default namespace inside the node.
kubectl label namespace default istio-injection=enabled
Check that the label has been added to the default namespace with this command:
kubectl get ns default --show-labels
NAME STATUS AGE LABELS
default Active 18d istio-injection=enabled,kubernetes.io/metadata.name=default
It's now time to deploy the NodeJS application inside the node. We have created a manifest file, which is simply the concatenation of all the files in the k8s folder. Thus, when executing this `manifest.yaml` with the `kubectl apply` command, it will create 3 pods for the NodeJS app (because we want to do some service mesh, see the fragment below), a pod for Redis, and a service for each pod, and it will manage the persistent volume for Redis.
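The only notable difference with the manifests of the previous section is the replica count of the USER API deployment; a fragment sketch (see istio/manifest.yaml for the full file):

```yaml
# Fragment sketch -- in istio/manifest.yaml the USER API deployment is scaled out
# so the service mesh has several replicas to balance traffic across
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app-deployment
spec:
  replicas: 3        # three USER API pods for the service-mesh experiments
  # ... the rest of the deployment is identical to the k8s version
```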
kubectl apply -f manifest.yaml
persistentvolume/redis-pv created
persistentvolumeclaim/redis-pvc created
service/redis-service created
service/nodejs-app-service created
deployment.apps/redis-deployment created
deployment.apps/nodejs-app-deployment created
Note the difference with the previous section: now that we deploy with Istio, each pod contains two containers instead of one. This is due to the way Istio works, as it injects a sidecar proxy container into each pod.
kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-app-deployment-946675b48-qldht 2/2 Running 0 20s
redis-deployment-57fb5c959b-cldrz 2/2 Running 0 20s
You can even check the architecture of an Istio managed pod with the following command:
kubectl describe pod $YOUR_POD_NAME
Now our application is deployed inside an Istio-managed cluster. In the following steps we'll plug addons into the cluster to fully exploit all Istio features.
To deploy the addons we just have to run the following command inside the istio folder. This will install Grafana, Prometheus, Kiali, ...
kubectl apply -f addons/
Make sure all pods have been created with the correct namespace:
kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-5f9b8c6c5d-4tmvk 1/1 Running 0 31m
istio-ingressgateway-56558c9fd7-f75bg 1/1 Running 0 48m
istiod-7d4885fc54-m9bws 1/1 Running 0 48m
jaeger-db6bdfcb4-nr5q4 1/1 Running 0 31m
kiali-cc67f8648-hj2gg 1/1 Running 0 31m
prometheus-5d5d6d6fc-zt8rr 2/2 Running 0 31m
Some services have been created to be able to access all addons. Make sure they are running and healthy with this command.
kubectl get services -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.101.141.166 <none> 3000/TCP 82s
istio-ingressgateway LoadBalancer 10.111.32.227 <pending> 15021:31482/TCP,80:32750/TCP,443:30541/TCP 18m
istiod ClusterIP 10.96.130.232 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 18m
jaeger-collector ClusterIP 10.100.255.191 <none> 14268/TCP,14250/TCP,9411/TCP,4317/TCP,4318/TCP 82s
kiali ClusterIP 10.102.186.192 <none> 20001/TCP,9090/TCP 81s
loki-headless ClusterIP None <none> 3100/TCP 81s
prometheus ClusterIP 10.97.230.76 <none> 9090/TCP 81s
tracing ClusterIP 10.104.184.68 <none> 80/TCP,16685/TCP 82s
zipkin ClusterIP 10.107.243.53 <none> 9411/TCP 82s
To access a service we have to do some port forwarding between the node port and the localhost port. To do so we use the following command.
kubectl port-forward svc/$SERVICE_NAME -n istio-system $SERVICE_PORT
For example, to access the Kiali dashboard, we use the command given below.
kubectl port-forward svc/kiali -n istio-system 20001
Kiali should now be accessible at: Kiali dashboard. You're now in the Kiali dashboard and can play around with service mesh and workload balancing. The 3 replicas of the NodeJS application are meant to be used for practicing service mesh.
Kiali dashboard:
Pods managed by Istio:
App graph (connections appear when the app is in use):
We use the same port-forwarding technique to access the monitoring dashboards. For example, to access the Grafana dashboard:
kubectl port-forward svc/grafana -n istio-system 3000
For the Prometheus dashboard:
kubectl port-forward svc/prometheus -n istio-system 9090
We are now able to access either Grafana or Prometheus. Using these two tools, we have defined a very basic metrics capture. This metrics capture checks the health of containers in the cluster and also measures the consumption of resources in the node.
Grafana Dashboard:
Grafana metrics control dashboard:
Prometheus dashboard:
For this part of the project we encountered many problems. These problems were not resource-related, as we were using a sufficiently powerful machine (32GB of RAM and 16 CPUs). To this day, we can't explain why we had so many problems. Despite these problems, the mission was accomplished: the service mesh has been set up and can be configured thanks to Kiali, and the pods and resources are monitored by Prometheus and Grafana.
We weren't able to create alerts between Prometheus and Grafana.
Still in keeping with DevOps logic and culture, launching a K8S cluster managed by Istio can be laborious, so we created a shell script that automates the launch and configuration of the cluster, as well as the installation of Istio, addons and our entire application.
Helm is a package manager for Kubernetes that simplifies deploying and managing applications on Kubernetes clusters. It allows us to define, install, and upgrade even the most complex Kubernetes applications. We chose to add this Helm deployment as a bonus, to show that K8S deployments can be done with variabilisation. It's interesting to use such features when deploying the same application in different K8S configurations.
Using Helm will make it easier to deploy and uninstall our application in a K8S cluster.
We've integrated Helm into our project to streamline the deployment process and manage Kubernetes manifests efficiently. Helm provides a standardized way to package, distribute, and manage Kubernetes applications, making it easier to share and reproduce deployments.
Follow these steps to deploy the project using Helm:
Edit the values.yaml file in the `helm` directory to configure your deployment settings, such as the image repository, tag, and any other customizable values.
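For illustration, a values.yaml for this kind of chart could look like the sketch below. The key names here are assumptions; check helm/values.yaml for the actual structure.

```yaml
# Illustrative sketch only -- key names are assumptions, see helm/values.yaml for the real structure
image:
  repository: tristanqtn/userapi-devops   # Docker image to deploy
  tag: latest
service:
  port: 3000                               # port exposed by the USER API service
redis:
  storage: 1Gi                             # size of the persistent volume claim
replicaCount: 1                            # number of USER API pods
```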
Run the following command to install the Helm chart (from the root of this repo):
helm install mydeployment ./helm -f helm/values.yaml
Replace `mydeployment` with your desired release name. Once the installation has been performed by Helm, you should end up with something like this:
We strongly recommend you to try to deploy our application in an Istio managed cluster using Helm.
Check the status of your deployment:
kubectl get deployments
kubectl get services
kubectl get pv
kubectl get pvc
When the pods are started, check the API health by sending a GET request to the `/health` route. If the answer is positive, then go ahead and use the API.
Since the app is running in a pod and the pod is inside a node, you have to create a tunnel directly to the NodeJS app with this command (it uses the service defined before, which opens the NodeJS pod to outside connections on port `3000`):
minikube service nodejs-app-service
If you make changes to your application or configuration, you can upgrade the deployment using:
helm upgrade mydeployment ./helm -f helm/values.yaml
To uninstall and delete the deployment, run:
helm uninstall mydeployment
Here's a list of all additional features we've added to our project:
- Automated deployment using variables with Helm
- CI job for automated build and publish to DockerHub of the USER API image
- API health endpoint
- Implementation of new API methods
- Update the information of a user
- Delete a user
- Get all keys stored in Redis
- Improved tests and new tests for every new API method
- API documentation using Swagger UI
- Complete DevOps toolbox
In keeping with the DevOps spirit, we also created a number of shell scripts that were useful throughout the project. These scripts are basic, but they save a lot of time, because you can run them and concentrate on other things, without having to wait for each command to finish before launching the next one.
This toolbox contains the following scripts:
- automated Docker build and publish for USER API app
- automated deployment of the USER API app in the K8S cluster (including PV, PVC, services and pods deployment)
- automated cluster cleaner for K8S (deleting PV, PVC, services and deployments)
- standalone Dockerized Redis container for local dev
- automated deployment of the NodeJS app in the K8S cluster management with Istio for service mesh and monitoring
IMPORTANT: Those scripts are path-sensitive; they have been created to be executed from the root of the project /ece-devops-ING4-SI-03. Please run them as follows:
ece-devops-ING4-SI-03$ chmod +x tools/k8s/user_api_launcher.sh
ece-devops-ING4-SI-03$ sh tools/k8s/user_api_launcher.sh
ece-devops-ING4-SI-03$ chmod +x tools/k8s/user_api_cleaner.sh
ece-devops-ING4-SI-03$ sh tools/k8s/user_api_cleaner.sh
ece-devops-ING4-SI-03$ chmod +x tools/standalone_redis/start.sh
ece-devops-ING4-SI-03$ sh tools/standalone_redis/start.sh
ece-devops-ING4-SI-03$ chmod +x tools/docker/image_builder.sh
ece-devops-ING4-SI-03$ sh tools/docker/image_builder.sh
ece-devops-ING4-SI-03$ chmod +x ./tools/k8s/setup_istio.sh
ece-devops-ING4-SI-03$ sh ./tools/k8s/setup_istio.sh
- USER API
- CI/CD Pipeline
- Infrastructure as Code
- Docker Image
- Docker Compose
- Orchestration with K8S
- Istio service mesh and monitoring
- Helm Integration
- Tools and Software used:
- Apolline PETIT: apolline.petit@edu.ece.fr
- Tristan QUERTON: tristan.querton@edu.ece.fr