NET 4255: High Availability Web Services

Introduction

  • Clone this repository
  • Each challenge must be validated by a professor and then committed and pushed to your own GitHub repository.
  • Before starting a given step, present the sketch of your infrastructure to a professor.
  • We strongly recommend that you use Alpine Linux as the base operating system for your Docker image.

Challenge 0: Install

On your own computer install the following software:

  • Docker Desktop
  • Conda installation steps:
    • Run the following commands (Linux systems only, WSL works too):
    mkdir -p ~/miniconda3
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
    bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
    rm ~/miniconda3/miniconda.sh
    source ~/miniconda3/bin/activate
    conda init --all
    • Create & enter your Python virtual environment:
    conda create -n [env-name]
    conda activate [env-name]
    • Install pip in your env:
    conda install pip
    • Install packages inside your Python virtual environment using pip:
    pip install [package1] [package2] ...
  • An IDE (VS Code, etc.)

Challenge 1: Create a simple web page and develop a Dockerfile for your website (2 pts)

  • Build a one page Flask application which contains the following elements:
    • Your name
    • Your project name
    • Version of your website (i.e. V1)
    • The server hostname
    • The current date
  • Build a Docker image that contains your Flask web page
  • Test your application
  • Push your Docker image to Docker Hub
  • Draw a schema of your system (e.g. with draw.io)
    • Show the system
    • Show the container IP address
    • Show the container ports

Notes about Flask

Flask is a micro web framework written in Python. It is classified as a microframework because it does not require particular tools or libraries. It has no database abstraction layer, form validation, or any other components where pre-existing third-party libraries provide common functions. However, Flask supports extensions that can add application features as if they were implemented in Flask itself.

A minimal Flask application looks something like this:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"

Now you can run the web server you just created

flask --app hello run
* Serving Flask app 'hello'
* Running on http://127.0.0.1:5000 (Press CTRL+C to quit)

See the quickstart guide for more information
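
For Challenge 1, the minimal example above can be extended to show the hostname and the current date. The sketch below is one possible way to do it, not a required implementation; the project name, author name, and V1 label are placeholders to replace with your own values.

import socket
from datetime import datetime

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Placeholder content: replace with your own name and project name
    return (
        "<h1>My project (V1)</h1>"
        "<p>Author: Your Name</p>"
        f"<p>Hostname: {socket.gethostname()}</p>"
        f"<p>Date: {datetime.now().isoformat()}</p>"
    )

if __name__ == "__main__":
    # Listen on all interfaces so the page is reachable from outside the container
    app.run(host="0.0.0.0", port=5000)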

References

Challenge 2: Create a Docker Compose file to deploy a MongoDB database server (2 pts)

  • Find an already-made MongoDB Docker image (for instance the official MongoDB Docker image)
  • Use Docker Compose to deploy your MongoDB database (see the sketch after this list)
  • Test your application and add a record to your database manually with the mongosh shell (see the installation requirements for MongoDB)
  • Update the schema of your infrastructure (e.g. with draw.io)
    • Show the system
    • Show the container IP addresses or hostnames
    • Show the container ports
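
A minimal sketch of such a compose file, assuming the official mongo image and the default port; the volume name is an illustrative choice, not a requirement.

# docker-compose.yml (sketch)
services:
  mongodb:
    image: mongo:7           # official MongoDB image
    ports:
      - "27017:27017"        # expose the default MongoDB port on the host
    volumes:
      - mongo-data:/data/db  # keep the data outside the container

volumes:
  mongo-data: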

References

Why MongoDB?

MongoDB is classified as a NoSQL database program and stores JSON-like documents. MongoDB provides high availability with replica sets. A replica set consists of two or more copies of the data. Each replica-set member may act in the role of primary or secondary replica at any time. All writes and reads are done on the primary replica by default. Secondary replicas maintain a copy of the primary's data using built-in replication. When a primary replica fails, the replica set automatically conducts an election to determine which secondary should become the primary. Secondaries can optionally serve read operations, but that data is only eventually consistent by default. It is also easier to create a master/slave database cluster with a NoSQL database such as MongoDB.

Challenge 3: Create a Docker Compose file to deploy a simple web service (Flask + MongoDB) (2 pts)

  • Update your Flask application and add the following items:
    • Your name
    • Your project name
    • Version of your website (i.e. V1)
    • The server hostname
    • The current date
  • Your Flask application should connect to the MongoDB database each time a request is served (see the sketch after this list):
    • It connects to the MongoDB database through pymongo
    • For each request, it will record in the mongodb database:
      • The IP address of the client
      • The current date
  • Your Flask application should display the last 10 records of the database
  • Update the Flask app version displayed on the page to V2
  • Finally, deploy your services using a Docker Compose file with the following elements:
    • Docker service for your website (your flask application)
    • Docker service for your mongodb database
    • Network to connect the previous services
  • Push your Docker images to Docker Hub with the correct tags.
  • Update the schema of your infrastructure (e.g. with draw.io)
    • Show the system
    • Show the container IP address
    • Show the container ports
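
A sketch of the database part of the Flask application, assuming pymongo is installed and the MongoDB service is reachable under the hostname mongodb (the service name from the compose file); the database and collection names are placeholders.

from datetime import datetime

from flask import Flask, request
from pymongo import MongoClient

app = Flask(__name__)
# "mongodb" is assumed to be the service name declared in the compose file
client = MongoClient("mongodb://mongodb:27017/")
visits = client["webdb"]["visits"]  # placeholder database/collection names

@app.route("/")
def index():
    # Record the client IP address and the current date for every request
    visits.insert_one({"ip": request.remote_addr, "date": datetime.now()})
    # Fetch the 10 most recent records and display them
    last = visits.find().sort("date", -1).limit(10)
    rows = "".join(f"<li>{r['ip']} - {r['date']}</li>" for r in last)
    return f"<h1>My project (V2)</h1><ul>{rows}</ul>"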

References

Challenge 4: Install a load balancer for your infrastructure (1 pt)

Add an NGINX load balancer to your infrastructure and update your Docker Compose file with the following elements (a sketch follows the list):

  • 2 Flask applications
    • Deploy 1 Flask app without database connections
    • Deploy 1 Flask app with database connections
  • 1 NGINX load balancer which balances the load between the two web servers
  • 1 MongoDB database
  • Network
  • Update the schema of your infrastructure (e.g. with draw.io)
    • Show the system
    • Show the container IP address and the hostname of each container
    • Show the container ports
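
One possible layout of the compose file, assuming your two Flask images are already on Docker Hub and that you provide an nginx.conf whose upstream block points to the two web services; the bracketed image names are placeholders.

# docker-compose.yml (sketch)
services:
  webnodb:
    image: "[dockerhub-user]/webnodb:v1"   # Flask app without database access
    networks: [appnet]
  webdb:
    image: "[dockerhub-user]/webdb:v2"     # Flask app with database access
    networks: [appnet]
  mongodb:
    image: mongo:7
    networks: [appnet]
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # nginx.conf contains an upstream block such as:
      #   upstream web { server webnodb:5000; server webdb:5000; }
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks: [appnet]

networks:
  appnet: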

References

Challenge 5: Learn Kubernetes with the online tutorial (1 pt)

  • Ask the professor to get access to the Kubernetes cluster.
  • Install kubectl
  • Use the provided credentials to log in to the Rancher web interface. You have access to the cluster dashboard at the same address.
  • Install the kubeconfig file on your computer:
    • In Rancher web interface: go to "Cluster Management".
    • Click on the "net4255" cluster, then on "Download KubeConfig".
    • After this step you should have a net4255.yml file on your laptop, inside your Downloads folder. This file will enable you to access the Kubernetes cluster.
    • Move the net4255.yml file to ~/.kube:
    mkdir -p ~/.kube
    mv ~/Downloads/net4255.yml ~/.kube
    • Export the kubeconfig file:
    export KUBECONFIG=$HOME/.kube/net4255.yml
  • Now you should be able to connect to the Kubernetes cluster. Run the following command to check the connection:
kubectl cluster-info
...

References

Challenge 6: Launch your first Pod on the command line (1 pt)

Create your first deployment on the command line with the kubectl command:

  • Create a deployment for the "webnodb" container in your own namespace (see the command sketch below):
    • Without any extra replicas
    • Without any service
  • Check that your deployment is successful in the Rancher interface and with the command line:
kubectl get deployments -o wide
...
kubectl get pods -o wide
...
  • Test if the "webnodb" Pod is correctly running with port-forwarding, using the Pod's name:
kubectl port-forward pods/[your pod name] :[pod port] --namespace=[your namespace]
Forwarding from 127.0.0.1:54127 -> 5000

In this example, you can connect with your browser to http://127.0.0.1:54127 to access the "webnodb" website.
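
As a sketch, the deployment for this challenge could be created along these lines; the bracketed image name and namespace are placeholders for your own values.

# Create the deployment from your Docker Hub image (1 replica by default, no service)
kubectl create deployment webnodb --image=[dockerhub-user]/webnodb:v1 --namespace=[your namespace]

# Check the result
kubectl get deployments --namespace=[your namespace] -o wide
kubectl get pods --namespace=[your namespace] -o wide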

References

Challenge 7: Create your first deployment file with a ClusterIP service (1 pt)

  • Create a Deployment file for your container "webnodb" (the one without database)
  • Be careful: create your Deployment and the ClusterIP service in your own namespace!
  • Show on a new schema how a request is served from the service to the pods:
    • The schema should explain which port is used at each step and what IP address is used by each component (nodes, pods, services)
  • Request the following resources per pod (see the manifest sketch after this list):
    • CPU resource: 1/10 CPU per pod
    • Memory (RAM): 100 MB per pod
  • Limit your pod resources as follows:
    • CPU resource: 1/5 CPU per pod
    • Memory (RAM): 200 MB per pod
  • Connect to the cluster through a Proxy with the following command (you can still use port-forwarding to check if the pod is running, for debug purposes):
kubectl proxy
Starting to serve on 127.0.0.1:8001

Now you should be able to access the webnodb web page at the following URL: http://127.0.0.1:8001/api/v1/namespaces/[namespace_name]/services/[service_name]/proxy/

  • Update the schema of your infrastructure (e.g. with draw.io)
    • Show the system
    • Show the container IP address and the hostname of each container
    • Show the container ports
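
A minimal sketch of such a manifest, assuming the Flask container listens on port 5000; the bracketed image name and namespace, and the labels, are placeholders. The requests below correspond to 1/10 CPU and roughly 100 MB, the limits to 1/5 CPU and roughly 200 MB.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webnodb
  namespace: "[your namespace]"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webnodb
  template:
    metadata:
      labels:
        app: webnodb
    spec:
      containers:
        - name: webnodb
          image: "[dockerhub-user]/webnodb:v1"   # placeholder image
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m        # 1/10 CPU
              memory: 100Mi    # ~100 MB
            limits:
              cpu: 200m        # 1/5 CPU
              memory: 200Mi    # ~200 MB
---
apiVersion: v1
kind: Service
metadata:
  name: webnodb
  namespace: "[your namespace]"
spec:
  type: ClusterIP
  selector:
    app: webnodb
  ports:
    - port: 80          # port exposed by the service
      targetPort: 5000  # port of the Flask container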

References

Challenge 8: Deploy your website (webdb) and its MongoDB database on the Kubernetes cluster (2 pts)

  • Deploy the "webdb" web service with 3 replica and its related service (ClusterIP)
  • Deploy the "mongodb" database and its related service (ClusterIP)
  • Connect the "webdb" Pod to the database using KubeDNS
  • Explain the difference between a NodePort Service and a ClusterIP service
  • Validate your deployment ("webdb", "mongodb") by using port-forwarding & KubeProxy
  • Request the following resources per pod:
    • CPU resource: 1/10 CPU per pod
    • Memory (RAM): 100 MB per pod
  • Limit your pod resources as follows:
    • CPU resource: 1/5 CPU per pod
    • Memory (RAM): 200 MB per pod
  • Update the schema of your infrastructure (e.g. with draw.io)
    • Show the system
    • Show the container IP address and the hostname of each container
    • Show the container ports
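
For the KubeDNS part, a ClusterIP service in front of the database gives it a stable DNS name. A sketch, assuming the service is called mongodb and the database pods carry the label app: mongodb (both are placeholder choices):

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: "[your namespace]"
spec:
  type: ClusterIP
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
# From the webdb pods the database is then reachable through KubeDNS as
# mongodb://mongodb.[your namespace].svc.cluster.local:27017
# (or simply mongodb://mongodb:27017 from inside the same namespace).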

References

Challenge 9: Liveness Probes (1 pt)

  • Define the Liveness Probe for each container in your chart (an illustration follows this list).
  • Note that each application may require a specific type of probe.
  • Explain why you have chosen a particular type of probe for a particular application.
    • What is your liveness probing strategy for the web servers?
    • What is your liveness probing strategy for the database?
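
As an illustration only (other strategies are valid), an HTTP probe fits a web server that answers requests, while a TCP probe is a simple way to check that MongoDB still accepts connections; the paths, ports, and timings below are placeholder values to adapt.

# Inside the Flask container spec: probe an HTTP route that proves the app is alive
livenessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 10

# Inside the MongoDB container spec: check that the port still accepts connections
livenessProbe:
  tcpSocket:
    port: 27017
  initialDelaySeconds: 10
  periodSeconds: 20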

References

Challenge 10: Expose your services (1 pt)

  • Create an Ingress in order to expose your web applications ("webnodb" and "webdb"); a sketch follows this list
    • In the cluster, we use MetalLB (bare metal load balancer) to handle incoming Ingress traffic. Here you can find some documentation about the specifics of MetalLB (only useful for the schema).
    • Don't forget to deploy your Ingress in your own namespace!
  • The Kubernetes cluster has a public IP address with the hostname net4255.luxbulb.org. Create a service that redirects HTTP traffic for the following URL to your respective deployment:
  • Test your Ingress
  • Update the schema of your infrastructure (e.g. with draw.io)
    • Show the system
    • Show the container IP address and the hostname of each container
    • Show the container ports
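
A sketch of such an Ingress, assuming an NGINX ingress controller behind MetalLB, the hostname given above, and placeholder paths and service names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: "[your namespace]"
spec:
  rules:
    - host: net4255.luxbulb.org
      http:
        paths:
          - path: /[your-prefix]-webnodb   # placeholder path
            pathType: Prefix
            backend:
              service:
                name: webnodb
                port:
                  number: 80
          - path: /[your-prefix]-webdb     # placeholder path
            pathType: Prefix
            backend:
              service:
                name: webdb
                port:
                  number: 80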

References

Challenge 11: Automate your deployment with Helm (1 pt)

  • Create a Helm chart to deploy the whole infrastructure
  • Use ConfigMaps to store the database hostname and port information (see the sketch below)
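
A sketch of such a ConfigMap as a chart template, with placeholder names and keys; the comment shows one way a Deployment template could consume it.

# templates/db-configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  DB_HOST: mongodb        # hostname of the MongoDB service
  DB_PORT: "27017"        # ConfigMap values must be strings

# In the webdb Deployment template, the container can then load these keys with:
#   envFrom:
#     - configMapRef:
#         name: db-config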

References

Challenge 12: Update the MongoDB database Deployment with a StatefulSet (1 pt)

  • Instead of a traditional Deployment, use a StatefulSet to deploy your "mongodb" database (see the sketch after this list)
  • Use a Persistent Volume to store the database content:
    • Storage resource: 0.1 GB
  • Update the previous "mongodb" service to a headless service
  • There should be only one replica of the "mongodb" database
  • What is a StatefulSet and in which cases is it useful?
  • What is a headless service, and how are pods named with a headless service?
  • Update your previous Helm chart accordingly
  • Your first database in the StatefulSet (example: mongo-0) should have a valid DNS hostname
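
A sketch of the StatefulSet and its headless service (clusterIP: None), assuming the official mongo image and a 0.1 GB volume claim; the names and bracketed namespace are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: "[your namespace]"
spec:
  clusterIP: None          # headless: each pod gets a DNS name such as mongo-0.mongodb
  selector:
    app: mongodb
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: "[your namespace]"
spec:
  serviceName: mongodb     # must reference the headless service
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo:7
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Mi   # ~0.1 GB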

References

Challenge 13: Rolling update (1 pt)

  • Update the version number on the HTML page of each website ("webdb" and "webnodb"), rebuild their respective Docker images, and bump their version numbers accordingly
  • Deploy your new containers as a rolling update (see the sketch below)
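
A sketch of the rolling update itself, with placeholder image, version, container, and namespace values:

# Point the deployment to the new image version; Kubernetes replaces the pods progressively
kubectl set image deployment/webdb webdb=[dockerhub-user]/webdb:v3 --namespace=[your namespace]

# Follow the rollout and, if something goes wrong, roll back
kubectl rollout status deployment/webdb --namespace=[your namespace]
kubectl rollout undo deployment/webdb --namespace=[your namespace]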

References

Challenge 14: Automatic scaling (1 pt)

  • Create a deployment that spins up a new pod when the CPU utilization of a pod crosses a certain threshold (e.g. 60% utilization); a sketch follows this list
  • Limit the maximum number of pods deployed to 10
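
A sketch of a HorizontalPodAutoscaler for this behaviour, assuming the metrics server is available on the cluster and that the target Deployment is called webnodb (a placeholder name):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webnodb-hpa
  namespace: "[your namespace]"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webnodb
  minReplicas: 1
  maxReplicas: 10                  # hard cap asked for in this challenge
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # scale out above 60% average CPU utilization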

References

Challenge 15: Create a Network Policy (1 pt)

  • Create a Network Policy that restricts access to the database to the IP addresses of your web Pods only (see the sketch after this list)
  • Test your network policy
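
A sketch of such a policy, assuming the database pods carry the label app: mongodb and the web pods the label app: webdb; adapt the labels to your own manifests.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mongodb-allow-web-only
  namespace: "[your namespace]"
spec:
  podSelector:
    matchLabels:
      app: mongodb           # the policy applies to the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: webdb     # only the web pods may open connections
      ports:
        - protocol: TCP
          port: 27017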

References

Challenge 16: Create a distributed database system (I) (2 pts)

  • Create a master-slave architecture with MongoDB
  • Don't use an already-made Helm chart to achieve this challenge
  • Manually configure each instance of the MongoDB database to be part of a Replica Set, meaning that a given master database is replicated to all the slaves in the cluster (a sketch follows this list)
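
As a rough sketch of the manual configuration, assuming three MongoDB pods started with --replSet rs0 behind a headless service called mongodb (the pod names mongo-0/1/2 and the namespace are placeholders), the replica set could be initiated from the first pod along these lines:

kubectl exec -it mongo-0 --namespace=[your namespace] -- mongosh --eval '
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-0.mongodb:27017" },
    { _id: 1, host: "mongo-1.mongodb:27017" },
    { _id: 2, host: "mongo-2.mongodb:27017" }
  ]
})'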

References

Challenge 17: Create a distributed database system (II) (1 pt)

  • Create a Helm chart that deploys a master-slave architecture with MongoDB

Challenge 18: Deploy a Redis cache in your infrastructure (1 pt)

  • Update your site (with and without db) to display a counter showing the number of visits (a sketch follows this list).
    • Your webpage should display the current number of visits and remain consistent across replicas.
    • Each time a page is loaded, you should increment the number of visits to that page.
  • Explain the advantage of using a Redis cache in this case.
  • Update the drawing of your new infrastructure (services, etc.)
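
A sketch of the counter, assuming the redis Python package and a Redis service reachable under the hostname redis; the key name is a placeholder.

from flask import Flask
import redis

app = Flask(__name__)
# "redis" is assumed to be the hostname of the Redis service in your cluster
cache = redis.Redis(host="redis", port=6379)

@app.route("/")
def index():
    # INCR is atomic in Redis, so the count stays consistent across all web replicas
    count = cache.incr("page_visits")
    return f"<p>This page has been visited {count} times.</p>"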

References

Challenge 19: Implement Hooks in Helm

  • Use hooks to configure your ConfigMaps before and after the deployment of your Helm chart (see the sketch below)
  • (Optional) Use hooks to save your database before updating the Helm chart
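
A sketch of the hook mechanism: the annotations below turn an ordinary manifest of the chart into a pre-install/pre-upgrade hook; the ConfigMap content is a placeholder for whatever your deployment actually needs.

# templates/pre-install-configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade        # created before the rest of the chart
    "helm.sh/hook-weight": "0"                     # ordering among hooks
    "helm.sh/hook-delete-policy": before-hook-creation
data:
  DB_HOST: mongodb
  DB_PORT: "27017"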

References
