CI CD
A workflow has been created that ensures the frontend code passes style checks (with ESLint) and builds successfully. It is configured to run when PRs are made that include changes to the frontend source code.
The definition for the workflow can be found in .github/workflows/webapp-test.yml
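For context, a workflow of this shape might look roughly like the sketch below. This is not the contents of webapp-test.yml; the path filter, Node version, and npm script names are assumptions.

```yaml
# Hypothetical sketch of a frontend check workflow. The path filter, Node
# version, and script names are assumptions, not copied from webapp-test.yml.
name: Webapp checks

on:
  pull_request:
    paths:
      - "webapp/**" # assumed location of the frontend source

jobs:
  lint-and-build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: webapp # assumed frontend directory
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: "14"
      - name: Install dependencies
        run: npm ci
      - name: Lint with ESLint
        run: npm run lint
      - name: Build
        run: npm run build
```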
Another workflow runs unit tests for the backend; currently this is just server/src/test_db.py. The workflow runs when PRs are made that include changes to backend source code.
Definition: .github/workflows/backend-test.yml
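Again as a rough sketch, with the path filter, Python version, and install step assumed rather than taken from the real backend-test.yml:

```yaml
# Hypothetical sketch of a backend test workflow. The path filter, Python
# version, and install step are assumptions, not copied from backend-test.yml.
name: Backend tests

on:
  pull_request:
    paths:
      - "server/**" # assumed location of the backend source

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.8"
      - name: Install dependencies
        run: pip install pytest -r server/requirements.txt # assumed requirements file
      - name: Run unit tests
        run: python -m pytest server/src/test_db.py
```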
A rudimentary deployment configuration/pipeline has been set up using:
- Docker
- GitHub Actions & GitHub Releases
- Amazon ECR & Amazon ECS
Links to where to find out about these services / tools are at the top of the corresponding sections. Currently there is an IAM programmatic user with limited permissions set up in my (margeobur's) personal AWS account. The access keys are stored as secrets in the main repo.
Below is a diagram of the overall deployment process
To be honest, it all began with the "Deploy to Amazon ECS" workflow being present on the first page of the starter workflows showcase. It looked easy to set up, so I figured it would be worth using - we could simply serve the webapp and backend from the same web server. I realised that it might be better to deploy them separately to more appropriately tailored services, such as S3 (S3 can very easily be used to host static files) or use something other than AWS altogether. However, I figured Docker is easy to configure, plus I already had an AWS account set up with Educate credits and I've had quite a bit of AWS experience.
So, this is the configuration we (I) went with :)
What's a (Docker) container? Put simply, it's like a tiny virtual machine image with a lightweight OS and only the dependencies you need, along with your application.
A Dockerfile in the root of the repo defines two container images: one for building the frontend and one for building the "production" image with the backend and frontend included. The frontend image is not strictly necessary - we could build the frontend outside of any containers and copy the files in - but it leaves room for possibly wanting separate images later.
The backend image that gets created is what will eventually be deployed and run as our application server. If you have Docker installed, you can build it locally with `docker build -t split:test .`
`split:test` is just a tag (name) given to the finished image. Docker finds the Dockerfile by searching the given directory (`.` in this case). The first build will be slow, but subsequent builds should be quick.
Run the image with `docker run -it -p 80:80 split:test`, then navigate to http://localhost in your browser (no port necessary, since the container's port 80 is mapped to port 80 on your machine).
Elastic Container Registry (ECR) simply stores our Docker image for us. ECS will fetch it to run later.
Elastic Container Service (ECS) is what runs our image. ECS is structured as follows:
- Clusters are groups of container instances.
- A container instance is basically a virtual machine that can run a Docker image.
- Container instances can simply be EC2 instances (floating virtual machines).
- Alternatively, there is this neat service called Fargate that abstracts away individual EC2 instances - all you have to worry about is picking how much compute power you want!
- A task definition is essentially a checklist of config info that describes how you want to run an image.
- e.g. "I want to run split:v1.0.0 on a Fargate container instance using AWS VPC networking"
- A service in ECS is essentially an application or part of an application. You can use it to manage how many copies of a task should be running and to keep them running, replacing them if they stop.
In the case of Sp/it, we have a single Fargate container instance with the lowest compute profile. We have a single service that has one task running at all times (our Docker image with the CherryPy server).
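To make the "checklist of config info" idea concrete, the key settings in a Fargate task definition look roughly like this. It is shown as YAML for readability (the file ECS actually consumes is JSON), and the names and values are illustrative assumptions rather than Sp/it's actual configuration:

```yaml
# Illustrative Fargate task definition settings (assumed values, shown as YAML;
# ECS itself takes JSON).
family: split-task
requiresCompatibilities: [FARGATE]  # run on Fargate instead of our own EC2 instances
networkMode: awsvpc                 # the networking mode Fargate requires
cpu: "256"                          # smallest Fargate compute profile
memory: "512"
containerDefinitions:
  - name: split                     # hypothetical container name
    image: <account-id>.dkr.ecr.us-east-1.amazonaws.com/split:v1.0.0
    portMappings:
      - containerPort: 80           # the CherryPy server listens on port 80
```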
All deployment actions are performed from one workflow, defined in .github/workflows/aws-ecr-deployment.yml.
This workflow gets triggered on release (instructions on how to release here). At the moment, the workflow only gets triggered when a manual release is created. There is another workflow that creates an automatic release when tags are pushed, but this doesn't cause a release creation event (most likely because releases created by a workflow using the default GITHUB_TOKEN don't trigger other workflows - a bit of an odd design on GitHub's part), so at the moment it does nothing.
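For illustration, the two trigger styles look roughly like this (assumed excerpts, not copied from the actual workflow files):

```yaml
# Excerpt 1 (assumed) -- trigger of the deployment workflow: fires when a
# release is created manually through the GitHub UI or API.
on:
  release:
    types: [created]
---
# Excerpt 2 (assumed) -- trigger of the auto-release workflow: fires when a
# version tag is pushed. A release that this workflow then creates with the
# default GITHUB_TOKEN does not fire the `release` event above, so the
# deployment workflow never sees it.
on:
  push:
    tags:
      - "v*"
```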
The workflow performs the following steps:
- Sets up AWS credentials (using the access key)
- Logs into AWS ECR
- Builds the Docker image (just like you would on your own machine) and pushes it to ECR
- "Renders" the task definiton (inserts the name of the newly built image).
- Deploys the task definiton to ECS.
The act of deploying to ECS causes the task to be restarted based on the new definition (which points to the new image, ergo has the updated code).
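Put together, the workflow is roughly shaped like the sketch below. The step layout mirrors the list above, but the region, repository, container, cluster, and service names, as well as the task definition path, are placeholders rather than the real values in aws-ecr-deployment.yml:

```yaml
# Hypothetical sketch of the deployment workflow -- resource names, region,
# and file paths are placeholders, not the real configuration.
name: Deploy to Amazon ECS

on:
  release:
    types: [created]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1 # assumed region

      - name: Log in to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build and push the Docker image
        env:
          IMAGE: ${{ steps.login-ecr.outputs.registry }}/split:${{ github.event.release.tag_name }}
        run: |
          docker build -t "$IMAGE" .
          docker push "$IMAGE"

      - name: Render the task definition with the new image
        id: render-task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json # assumed path
          container-name: split                 # assumed container name
          image: ${{ steps.login-ecr.outputs.registry }}/split:${{ github.event.release.tag_name }}

      - name: Deploy the task definition to ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.render-task-def.outputs.task-definition }}
          service: split-service # assumed service name
          cluster: split-cluster # assumed cluster name
          wait-for-service-stability: true
```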
Currently, the database is wiped every time we deploy. This is because the database is part of the image (stored as a file in the same directory as the server source). To get around this, we could use a remote DB (Amazon RDS would do) or put the SQLite DB file in a different location, such as on a separate EBS Volume (basically a virtual hard drive) that persists between deployments. The latter would be the easier option as it would require no changes to the code.
The public IP address of the running task also changes every time we deploy; substantial network configuration with AWS VPC might be needed to get around this.