We saw how difficult managing a multi-container project can be, and even though we might simplify the process by using shell scripts, docker-compose is the easiest option when dealing with such projects.
According to the official documentation:
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
NOTE: Although Compose works in all environments, it's more focused on development and testing. Using Compose in a production environment is not recommended at all.
Inside the api directory of the notes-api project, let's create a development Dockerfile named Dockerfile.dev with these lines:
# stage one
FROM node:lts-alpine as builder
# install dependencies for node-gyp
RUN apk add --no-cache python make g++
WORKDIR /app
COPY ./package.json .
RUN npm install ## we want the development dependencies also
# stage two
FROM node:lts-alpine
ENV NODE_ENV=development ## development not production
USER node
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
COPY . .
COPY --from=builder /app/node_modules /home/node/app/node_modules
# nodemon is a tool that gives us the hot-reload feature
CMD [ "./node_modules/.bin/nodemon", "--config", "nodemon.json", "bin/www" ]
This project has two containers:

- notes-db - a database server powered by PostgreSQL
- notes-api - a REST API powered by Express.js
In the world of compose, each container that makes up the application is known as a service. In composing a project, the first step is to define those services.
Just as the Docker daemon uses a Dockerfile for building images, Docker Compose uses a docker-compose.yaml file to read the service definitions from.
Let's create our docker-compose.yaml file:
version: "3.8"

services:
    db:
        image: postgres:12
        container_name: notes-db-dev
        volumes:
            - db-data:/var/lib/postgresql/data
        environment:
            POSTGRES_DB: notesdb
            POSTGRES_PASSWORD: secret
    api:
        build:
            context: ./api
            dockerfile: Dockerfile.dev
        image: notes-api:dev
        container_name: notes-api-dev
        environment:
            DB_HOST: db ## same as the database service name
            DB_DATABASE: notesdb
            DB_PASSWORD: secret
        volumes:
            - /home/node/app/node_modules
            - ./api:/home/node/app
        ports:
            - 3000:3000

volumes:
    db-data:
        name: notes-db-dev-data
Every valid docker-compose.yaml file starts by defining the file version. At the time of writing, 3.8 is the latest version. You can look up the latest version in the official documentation.
Blocks in a YAML file are defined by indentation. I will go through each of the blocks and explain what they do.
- The services block holds the definitions for each of the services or containers in the application. db and api are the two services that comprise this project.
- The db block defines a new service in the application and holds the information necessary to start the container. Every service requires either a pre-built image or a Dockerfile to run a container. For the db service we're using the official PostgreSQL image.
- Unlike the db service, a pre-built image for the api service doesn't exist. So we'll use the Dockerfile.dev file.
- The volumes block defines any named volume needed by any of the services. At the moment it only lists the db-data volume used by the db service.
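Before starting anything, we can ask Compose to validate the file and print the resolved configuration. Any indentation or syntax mistake will show up as an error here:

docker-compose config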
Let's now have a closer look at the individual services:
The definition of the db service is as follows:

db:
    image: postgres:12
    container_name: notes-db-dev
    volumes:
        - db-data:/var/lib/postgresql/data
    environment:
        POSTGRES_DB: notesdb
        POSTGRES_PASSWORD: secret
- The image key holds the image repository and tag used for this container. We're using the postgres:12 image for running the database container.
- The container_name indicates the name of the container. By default containers are named following the <project directory name>_<service name> syntax. You can override that using container_name.
- The volumes array holds the volume mappings for the service and supports named volumes, anonymous volumes, and bind mounts. The syntax <source>:<destination> is identical to what we've seen before.
- The environment map holds the values of the various environment variables needed for the service.
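Hard-coding credentials in the YAML file is fine for local development. As a side note, Compose can also read such values from a separate file through the env_file key; a small sketch, assuming a hypothetical file named .env.db that contains the POSTGRES_* variables:

db:
    image: postgres:12
    container_name: notes-db-dev
    env_file:
        - .env.db ## hypothetical file holding POSTGRES_DB and POSTGRES_PASSWORD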
The definition code for the api service is as follows:
api:
    build:
        context: ./api
        dockerfile: Dockerfile.dev
    image: notes-api:dev
    container_name: notes-api-dev
    environment:
        DB_HOST: db ## same as the database service name
        DB_DATABASE: notesdb
        DB_PASSWORD: secret
    volumes:
        - /home/node/app/node_modules
        - ./api:/home/node/app
    ports:
        - 3000:3000
- The api service doesn't come with a pre-built image. Instead it has a build configuration. Under the build block we define the context and the name of the Dockerfile for building an image.
- The image key holds the name of the image to be built. If not assigned, the image will be named following the <project directory name>_<service name> syntax.
- Inside the environment map, the DB_HOST variable demonstrates a feature of Compose. That is, we can refer to another service in the same application by using its name. So the db here will be replaced by the IP address of the db service container. The DB_DATABASE and DB_PASSWORD variables have to match up with POSTGRES_DB and POSTGRES_PASSWORD respectively from the db service definition.
- In the volumes map, you can see an anonymous volume and a bind mount described. The syntax is identical to what you've seen in previous sections.
- The ports map defines any port mapping. The syntax <host port>:<container port> is identical to the --publish option you used before.
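To connect this back to what we did manually before, the api service definition above is roughly equivalent to a docker container run invocation like the following. This is only a sketch for comparison, since Compose also takes care of building the image, creating the default network, and attaching the container to it:

docker container run \
    --detach \
    --name notes-api-dev \
    --env DB_HOST=db \
    --env DB_DATABASE=notesdb \
    --env DB_PASSWORD=secret \
    --volume /home/node/app/node_modules \
    --volume "$(pwd)/api:/home/node/app" \
    --publish 3000:3000 \
    notes-api:dev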
Finally, the code for the volumes block is as follows:
volumes:
    db-data:
        name: notes-db-dev-data
Any named volume used in any of the services has to be defined here. If we don't define a name, the volume will be named following the <project directory name>_<volume key> syntax, and the key here is db-data.
There are a few ways of starting the services defined in a YAML file. up is the first command we'll learn. This command builds any missing images, creates the containers, and starts them in one go.
Every docker-compose command should be executed in the same directory as the docker-compose.yaml file.
This is how we would run docker-compose for our project:
docker-compose --file docker-compose.yaml up --detach
## --file | -f is only required when the YAML file is not named docker-compose.yaml
The start command only starts existing containers but doesn't create missing ones, just like docker container start.
The --build option for the up command forces a rebuild of the images.
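So if we change the Dockerfile.dev or the dependencies and want the images rebuilt as part of starting up, we can combine the two:

docker-compose up --build --detach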
Although containers started by Compose can be listed with the docker container ls command, there is also the ps command for listing only the containers defined in the YAML file.
docker-compose ps
    Name                      Command                  State                  Ports
------------------------------------------------------------------------------------------------------
notes-api-dev   docker-entrypoint.sh ./nod ...   Up      0.0.0.0:3000->3000/tcp,:::3000->3000/tcp
notes-db-dev    docker-entrypoint.sh postgres    Up      5432/tcp
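Since the db service uses a named volume, we can also confirm that it was created by listing the volumes on the system:

docker volume ls

The list should include the notes-db-dev-data volume defined in the YAML file.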
We can execute commands inside a running service with the exec command, just like docker container exec. The generic syntax is as follows:

docker-compose exec <service name> <command>
## example
docker-compose exec api npm run db:migrate
> notes-api@ db:migrate /home/node/app
> knex migrate:latest
Using environment: development
Batch 1 run: 1 migrations
Unlike the container exec command, you don't need to pass the -it
flag for interactive sessions. docker-compose
does that automatically.
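The exec command works for any service. For example, after running the migration we could hop into the db service and check that the tables actually exist, assuming the default postgres superuser:

docker-compose exec db psql -U postgres -d notesdb -c '\dt'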
We can also use the logs
command to retrieve logs from a running service. The generic syntax for the command is as follows:
docker-compose logs <service-name>
## example
docker-compose logs api
Attaching to notes-api-dev
notes-api-dev | [nodemon] 2.0.12
notes-api-dev | [nodemon] reading config ./nodemon.json
notes-api-dev | [nodemon] to restart at any time, enter `rs`
notes-api-dev | [nodemon] or send SIGHUP to 1 to restart
notes-api-dev | [nodemon] ignoring: *.test.js
notes-api-dev | [nodemon] watching path(s): *.*
notes-api-dev | [nodemon] watching extensions: js,mjs,json
notes-api-dev | [nodemon] starting `node bin/www`
notes-api-dev | [nodemon] forking
notes-api-dev | [nodemon] child pid: 20
notes-api-dev | [nodemon] watching 18 files
notes-api-dev | app running -> http://127.0.0.1:3000
This is just a portion of the log output. You can hook into the output stream of the service and get the logs in real-time by using the -f
or --follow
option. Any later log will show up instantly in the terminal as long as you don't exit by pressing ctrl + c
or closing the window. The container will keep running even if you exit out of the log window.
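Following the api service logs from our project would look like this:

docker-compose logs --follow api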
There are a couple of approaches to stopping running services. The first one is the down command. The down command stops all running containers and removes them from the system. It also removes any networks.
docker-compose down --volumes ## --volumes removes the named volumes as well
Stopping notes-db-dev ... done
Stopping notes-api-dev ... done
Removing notes-db-dev ... done
Removing notes-api-dev ... done
Removing network notes-api_default
Removing volume notes-db-dev-data
We'll be adding a front-end to our previous notes API application. Let's have a look at how the application will work:
Instead of the back-end accepting requests directly, all requests will first be received by an NGINX service (let's call it the router).
The router will then check whether the requested endpoint contains /api. If it does, the router will route the request to the back-end; if not, it will route the request to the front-end.
When we run a front-end application, it doesn't run inside a container. It runs in the browser, served from a container. As a result, Compose networking doesn't work as expected and the front-end application fails to find the api service.
NGINX, on the other hand, runs inside a container and can communicate with the different services across the entire application.
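A simplified sketch of such a routing configuration could look like the following; the upstream names match the compose service names, while the client port is an assumption for the sake of the example:

upstream client {
    server client:8080; ## assumed dev-server port for the front-end
}

upstream api {
    server api:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://client;
    }

    location /api {
        proxy_pass http://api;
    }
}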
We can check out the /notes-api/nginx/development.conf and /notes-api/nginx/production.conf files for the actual NGINX configuration. The code for /notes-api/nginx/Dockerfile.dev is as follows:
FROM nginx:stable-alpine
COPY ./development.conf /etc/nginx/conf.d/default.conf
All it does is copy the configuration file to /etc/nginx/conf.d/default.conf
inside the container.
Let's start writing the docker-compose.yaml file. Apart from the api and db services there will be the client and nginx services. There will also be some network definitions that we'll get into shortly.
The only thing that needs some explanation is the network configuration. The code for the networks block is as follows:
networks:
    frontend:
        name: fullstack-notes-application-network-frontend
        driver: bridge
    backend:
        name: fullstack-notes-application-network-backend
        driver: bridge
We have two bridge networks. By default, Compose creates a bridge network and attaches all containers to that. In this project, however, we wanted proper network isolation. So we defined two networks, one for the front-end services and one for the back-end services.
We've also added a networks block to each of the service definitions. This way the api and db services will be attached to one network and the client service will be attached to a separate network. But the nginx service will be attached to both networks so that it can act as a router between the front-end and back-end services.
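The service definitions are not repeated here in full, but the per-service attachment is just a networks list under each service. A trimmed sketch, assuming a client image named notes-client:dev and the nginx directory as the router's build context:

services:
    db:
        image: postgres:12
        networks:
            - backend
    api:
        image: notes-api:dev
        networks:
            - backend
    client:
        image: notes-client:dev ## assumed image name for the front-end
        networks:
            - frontend
    nginx:
        build:
            context: ./nginx ## assumed build context for the router
            dockerfile: Dockerfile.dev
        networks:
            - backend
            - frontend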
Finally, we can start all the services by executing the following command:
docker-compose up --detach