The Udagram application monorepo
The Udagram source code is maintained using the "monorepo" strategy with Lerna, so each component of the application is split into its own package. The packages are:

- `frontend`
- `reverse-proxy`
- `feed-svc`
- `user-svc`

While each package is also split out into its own read-only GitHub repository, the source of truth is always the monorepo.
Required:
Recommended:
Once the repository is cloned and all requirements are installed, run:

```shell
npm i
```

This will install all project dependencies and bootstrap/hoist packages using Lerna.
After dependencies are installed, compile the applications using:

```shell
npm run build:dev
```

Technically, you only need to run `npm run frontend:build:dev`, as the compiled front-end app is mounted into the frontend container when running `docker-compose` locally.
The following environment variables are required to run the Udagram application locally:

- The AWS region where Udagram resources are deployed, e.g. `us-east-1`.
- The AWS CLI profile to use. Profiles are configured in `~/.aws/credentials`.
- The username to use when connecting to the Udagram RDS database.
- The password to use when connecting to the Udagram RDS database.
- The name of the Udagram RDS database.
- The hostname of the Udagram RDS database.
- The dialect of the Udagram RDS database, e.g. `postgres`.
- The name of the AWS S3 bucket used for storing Udagram images.
- The JWT key secret used for Udagram API auth.
Before starting the application, please request a `.env` file that contains the default values for these environment variables and place it at the root of this repository.
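For orientation only, a `.env` file of this shape is a reasonable sketch. Every variable name and value below is an assumption (only `UDAGRAM_AWS_PROFILE` is named elsewhere in this README) — use the names and values from the `.env` file you are given, not these:

```shell
# Hypothetical .env sketch — names/values are assumptions, not the real defaults
UDAGRAM_AWS_REGION=us-east-1
UDAGRAM_AWS_PROFILE=udagram
UDAGRAM_DB_USERNAME=udagram_user
UDAGRAM_DB_PASSWORD=changeme
UDAGRAM_DB_NAME=udagram
UDAGRAM_DB_HOST=localhost
UDAGRAM_DB_DIALECT=postgres
UDAGRAM_S3_BUCKET=udagram-images
UDAGRAM_JWT_SECRET=changeme
```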
Before running the application locally, you'll need to configure AWS credentials that provide access to retrieve signed URLs from the Udagram S3 bucket. After retrieving the credentials, run `aws configure` and enter the access key ID and secret access key. The profile name should match the value set in the `UDAGRAM_AWS_PROFILE` environment variable.
Docker Compose is used to run the application during local development. Run:

```shell
docker-compose up
```

This starts all containers with the local packages mounted into them, providing a pleasant developer experience with hot reloading.
Once the application containers are running, you can access the frontend web client in your browser at `http://localhost:8100`. The API is available at `http://localhost:8080/api/{version}/{resource}`.
Postman collections/environments for the API are located in the `postman/` directory at the root of this repo. You can import these to facilitate development/testing of the API. Use the `udagram_local` environment for local development.
Each application component is packaged into a container image. These images are regularly built and pushed to Docker Hub. You can check out each image:
- uncledrewbie/udagram-frontend
- uncledrewbie/udagram-reverse-proxy
- uncledrewbie/udagram-feed-svc
- uncledrewbie/udagram-user-svc
To locally build the application images, you can run:

```shell
npm run build-images ${TAG}
```

where `TAG` is the image tag that will be used. If `TAG` is omitted, the images will just use the `latest` tag.
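The tag-defaulting behavior can be sketched as a small shell snippet. This is an illustration of the fallback logic, not the actual `build-images` script:

```shell
# Illustration only — the real build-images script may differ.
# Take the first CLI argument as the tag, falling back to "latest".
TAG="${1:-latest}"
echo "building udagram images with tag: ${TAG}"
```

Run with no argument, `TAG` expands to `latest`; run with e.g. `v1.2.3`, that value is used instead.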
Please consult the `scripts` section of the root `package.json` file for available commands. There you'll find many helper scripts for linting/testing/building/etc. the codebase, as well as pre-scoped commands for each package maintained by Lerna.
The Udagram app is hosted in AWS using EKS. The web application can be accessed at https://udagram-dev.com. The API can be accessed at https://api.udagram-dev.com.
Given the cost of EKS and that this is a toy application, those domains are not guaranteed to be available.
The Kubernetes resources for each Udagram service are defined in `./k8s/apps/${APP}`. Notice that the service definitions for the `frontend` and `reverse-proxy` applications have `type: LoadBalancer`. This results in a classic load balancer being provisioned in AWS for each service, which acts as the entrypoint for external (public) HTTP traffic.
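A minimal sketch of what such a service definition looks like follows. The metadata name, selector, and ports here are assumptions — the real manifests live under `./k8s/apps/`:

```yaml
# Hypothetical Service manifest — name, selector, and ports are assumptions
apiVersion: v1
kind: Service
metadata:
  name: udagram-frontend
spec:
  type: LoadBalancer          # on EKS, this provisions a classic ELB by default
  selector:
    app: udagram-frontend
  ports:
    - port: 80
      targetPort: 8100
```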
Scripts for working with the K8s resources are provided in `./k8s/`:

- Deploys all K8s resources for all apps using `kubectl`. Accepts a single parameter that is the application image tag (defaults to `latest`) to deploy.
- Destroys all K8s resources for all apps using `kubectl`.
WARNING: Destroying the service resource in EKS will deprovision the ELBs in AWS. This will expose the fact that the AWS environment is not truly ephemeral as the ELBs are configured in Route53 Record Sets for the udagram-dev.com Hosted Zone!
CI/CD is handled by TravisCI. The TravisCI configuration lives in the `.travis.yml` file at the root of the repository, with helper scripts under `./scripts/travisci/`.
Every push/PR goes through CI, which performs the following stages:
The CI job executes `npm run build:prod`, which compiles all Udagram packages. See `./scripts/travisci/compile.sh`.
The CI job executes `npm run ci`, which performs validation/tests across all Udagram packages. See `./scripts/travisci/test.sh`.
When commits are merged into the `master` branch, additional jobs are run in the TravisCI pipeline:

Packages that have updates are split out into each package's read-only GitHub repository using the `git subtree` command. See `./scripts/travisci/monorepo-split.sh`.
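Roughly, a split for one package looks like the following. The package path, branch name, and remote URL are assumptions (the actual logic is in `monorepo-split.sh`), so the commands are printed as a dry run rather than executed:

```shell
# Dry-run sketch of a subtree split for one package — the package path and
# remote URL are assumptions, not taken from the actual script.
PKG="frontend"
SPLIT_BRANCH="split/${PKG}"
echo "git subtree split --prefix packages/${PKG} -b ${SPLIT_BRANCH}"
echo "git push git@github.com:uncledrewbie/udagram-${PKG}.git ${SPLIT_BRANCH}:master"
```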
Each Udagram application image is built and tagged both as `latest` and with the most recent git commit SHA. Each image is then pushed to its respective repository in the Docker Hub registry. See `./scripts/travisci/build-app-images.sh`.
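For a single image, the dual tagging amounts to something like this. The image name comes from the list above, but the SHA value and the exact docker invocation are assumptions, so the commands are printed as a dry run:

```shell
# Dry-run sketch of dual tagging — the SHA value and docker flags are assumed
SHA="abc1234"                           # in CI this comes from the git commit
IMAGE="uncledrewbie/udagram-frontend"
echo "docker build -t ${IMAGE}:latest -t ${IMAGE}:${SHA} ."
echo "docker push ${IMAGE}:latest"
echo "docker push ${IMAGE}:${SHA}"
```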
Each Udagram application is deployed to the udagram-dev AWS EKS cluster. This requires/uses `kubectl` and the AWS CLI to configure the `~/.kube/config` file so that the cluster can be reached via API. The latest commit SHA is used as the application image tag for each K8s deployment resource.
After all K8s resources have been applied to the EKS cluster, the deploy script uses `kubectl rollout status` to wait for the rollout of each application to complete before considering the CD stage complete. See `./scripts/travisci/deploy.sh`.
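The wait step can be sketched as a loop over the deployments. The deployment names below are assumptions based on the image names above, so the commands are printed as a dry run rather than executed:

```shell
# Dry-run sketch — deployment names are assumed from the image names
for app in udagram-frontend udagram-reverse-proxy udagram-feed-svc udagram-user-svc; do
  echo "kubectl rollout status deployment/${app} --timeout=300s"
done
```

`kubectl rollout status` blocks until the deployment's new replicas are ready or the timeout elapses, which is what makes it a natural gate for the CD stage.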
Postman's Newman CLI tool is used to execute integration tests (live API requests) against the deployed Udagram application.