
docker: add initial docker host playbook and dockerfiles #992

Closed
rvagg wants to merge 4 commits from the rvagg/docker-host branch

Conversation

@rvagg (Member) commented on Nov 9, 2017

Moving on from #989, which tried to use the Infrastructure Containers on Joyent to run Alpine. This sets up a "Docker host" similar to the initial work started in #437 (that PR shows how we're running our Alpine 3.4 test host(s) in CI right now).

Configuring a Docker host means first setting up a machine that can handle the number of containers we need, then using host_vars to configure the containers. At the moment I have two DigitalOcean 4 vCPU / 8 GB machines running, each set up with host_vars of this form:

containers:
  - { name: 'test-digitalocean-alpine34_container-x64-1', os: 'alpine34', secret: 'abc123' }
  - { name: 'test-digitalocean-alpine35_container-x64-1', os: 'alpine35', secret: 'abc456' }
  - { name: 'test-digitalocean-alpine36_container-x64-1', os: 'alpine36', secret: 'abc567' }
  - { name: 'test-digitalocean-ubuntu1604_container-x64-1', os: 'ubuntu1604', secret: 'abc890' }

So we have 2 x Alpine 3.4, 2 x Alpine 3.5, 2 x Alpine 3.6 and 2 x Ubuntu 16.04. The Alpine workers are labelled in CI like alpine36-container-x64, and all three versions are now included in node-test-commit-linux. The Ubuntu containers carry the label jenkins-beta, as @refack suggested we keep some floating workers for trying out new jobs. They're not assigned to anything, so feel free to use them in CI as you like; just don't attach them permanently to anything, I suppose.
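To make the wiring concrete: each entry's name and secret have to reach the Jenkins agent inside the matching container. A minimal sketch of how a role might consume the list (the env-file path and this particular task are assumptions for illustration, not necessarily what this PR does):

# Illustrative only: one env file per container, consumed at docker run time
- name: write per-container environment for the Jenkins agent
  copy:
    dest: "/root/docker-env/{{ item.name }}.env"
    content: |
      JENKINS_AGENT_NAME={{ item.name }}
      JENKINS_AGENT_SECRET={{ item.secret }}
    mode: "0600"
  with_items: "{{ containers }}"

A file like that can then be handed to the container with docker run --env-file.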

Dockerfile templates are in ansible/roles/docker/templates/ and we can add extra types of images there as required. For example, I could imagine falling back to Docker to run our CentOS 5 instances if we lost our current hosts and got desperate.
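The mechanism behind the templates is the usual Ansible one: render a Dockerfile per container from the template matching its os, then build an image from it. A minimal sketch, where the build-context paths and image tag are assumptions rather than this PR's actual values:

# Illustrative only: per-container build context, Dockerfile, and image
- name: ensure a build context directory exists per container
  file:
    path: "/root/docker/{{ item.name }}"
    state: directory
  with_items: "{{ containers }}"

- name: render a Dockerfile for each container from its os template
  template:
    src: "{{ item.os }}.Dockerfile.j2"
    dest: "/root/docker/{{ item.name }}/Dockerfile"
  with_items: "{{ containers }}"

- name: build an image per container
  command: docker build -t node-ci:{{ item.name }} /root/docker/{{ item.name }}
  with_items: "{{ containers }}"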

On the host, each container's /home/iojs/ directory is bind-mounted from a subdirectory of the host's /home/iojs/ named after the worker. So on test-digitalocean-ubuntu1604-docker-x64-1 we have:

# ls /home/iojs/
test-digitalocean-alpine34_container-x64-1  test-digitalocean-alpine36_container-x64-1
test-digitalocean-alpine35_container-x64-1  test-digitalocean-ubuntu1604_container-x64-1

Each of those directories looks like a standard /home/iojs, complete with slave.jar, .ccache and tmp.
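Those per-worker directories are just bind mounts: the host directory is created ahead of time and passed to docker run as roughly -v /home/iojs/<name>:/home/iojs. A minimal sketch of the directory task (the iojs owner/group and mode are assumptions):

# Illustrative only: one bind-mount source directory per worker
- name: create a per-worker home directory on the host
  file:
    path: "/home/iojs/{{ item.name }}"
    state: directory
    owner: iojs
    group: iojs
    mode: "0755"
  with_items: "{{ containers }}"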

The containers are managed by systemd, with the worker name embedded in the unit name, e.g.:

# systemctl list-units | grep jenkins
  jenkins-test-digitalocean-alpine34_container-x64-1.service                                            loaded    active running   Jenkins Slave in Docker for test-digitalocean-alpine34_container-x64-1
  jenkins-test-digitalocean-alpine35_container-x64-1.service                                            loaded    active running   Jenkins Slave in Docker for test-digitalocean-alpine35_container-x64-1
  jenkins-test-digitalocean-alpine36_container-x64-1.service                                            loaded    active running   Jenkins Slave in Docker for test-digitalocean-alpine36_container-x64-1
  jenkins-test-digitalocean-ubuntu1604_container-x64-1.service                                          loaded    active running   Jenkins Slave in Docker for test-digitalocean-ubuntu1604_container-x64-1

So you could, for example, systemctl restart jenkins-test-digitalocean-ubuntu1604_container-x64-1.
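Behind those units is the standard pattern of templating one service file per container and then enabling it. A minimal sketch (the template filename is an assumption):

# Illustrative only: one templated unit per container, enabled at boot
- name: install a systemd unit per container
  template:
    src: jenkins-docker.service.j2
    dest: "/lib/systemd/system/jenkins-{{ item.name }}.service"
  with_items: "{{ containers }}"

- name: enable and start each worker unit
  systemd:
    name: "jenkins-{{ item.name }}.service"
    enabled: yes
    state: started
    daemon_reload: yes
  with_items: "{{ containers }}"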

The two hosts included in the inventory here are the Docker hosts, and build/test have access to them to do all of this. Jenkins doesn't know about the hosts, only the workers, but you don't have ssh access to the workers. In Jenkins I've added descriptions for the workers to make it clear where each one lives if you ever need to "manage" it; e.g. https://ci.nodejs.org/computer/test-digitalocean-alpine36_container-x64-1/ says "Docker container running on test-digitalocean-ubuntu1604_docker-x64-1 in /home/iojs/test-digitalocean-alpine36_container-x64-1".

@rvagg (Member, Author) commented on Nov 13, 2017

As mentioned in #108 (comment), I've started work on a shared-openssl job for 1.1.0g based on a Docker container. I've been tinkering with node-test-commit-linux-fips and think we should roll it into the same set of jobs, along with other shared-library builds and a Debug build. Basically we'd use a pool of Ubuntu 16.04 containers, running on 2 or 3 beefier machines (with JOBS set relatively high so they can soak up unused capacity where possible), that can run all of these things. node-test-commit-linux-fips currently downloads and compiles the FIPS OpenSSL on every run, but I think we should bake this into the container along with the other shared libraries. Then our configs live in Dockerfiles inside our Ansible scripts, we can update everything in one place, and we avoid the extra build overhead for what is identical on each run.
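Concretely, "baking it in" means the download-and-compile step moves from the Jenkins job into the image build, so it runs once per image rather than once per test run. An Ansible-side sketch of triggering that build (the image tag and context path are assumptions):

# Illustrative only: build the sharedlibs image with the libraries precompiled
- name: build the sharedlibs test image
  command: docker build -t node-ci:ubuntu1604-sharedlibs /root/docker/ubuntu1604-sharedlibs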

@rvagg force-pushed the rvagg/docker-host branch from 1d4a453 to 6e2721f on Nov 13, 2017. The new commits note:

Joyent 16.04 has a user config that leads to iojs being 1001:1001, so don't make assumptions inside the container.

SoftLayer doesn't come with a primed apt cache, so it needs to be fetched before aptitude is installed.
A comment from @refack was marked as off-topic.
@rvagg closed this on Nov 14, 2017
@rvagg deleted the rvagg/docker-host branch on Nov 14, 2017 at 22:13
@rvagg (Member, Author) commented on Nov 14, 2017

merged for now, can tweak in additional PRs

@rvagg (Member, Author) commented on Nov 14, 2017

oh, this also includes the "sharedlibs" Ubuntu container that can run OpenSSL 1.1.0 and OpenSSL FIPS tests; I'm currently playing with it here: https://ci.nodejs.org/view/All/job/node-test-commit-linux-linked/
