• Install Ubuntu and Docker - https://docs.docker.com/engine/installation/linux/ (OR)
• Install Docker Toolbox on Windows - https://docs.docker.com/docker-for-windows/
• This includes Docker installation inside of Jenkins.
• This maintains the created containers as children.
• This would be preferable if you want to manage a completely clean Docker environment inside Jenkins.
• This uses the underlying host's Docker installation.
• This maintains the created containers as siblings.
Advantages:
• Enables sharing of images with host OS
o Eliminates storing images multiple times
o Makes it possible for Jenkins to automate local image creation
• Eliminates a virtualization layer (lxc)
• Any settings in the host's Docker daemon will apply to the Jenkins container as well
• Easier to set up; you just need to map the host's Docker executable and daemon socket onto the container
• Host and Jenkins container will use the same version of Docker, always.
• No privileged mode needed
• Permits the jenkins user to run docker without the sudo prefix.
• Allows greater flexibility at runtime.
• Ability to reuse the image cache from the host.
Note: In general, for testing and production environments, DooD is chosen instead of DinD, as illustrated below.
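As a rough illustration of the DooD approach (the image name and ports are the ones used later in this guide; the Docker binary path may differ on your host), the host's Docker socket, and optionally its Docker executable, are simply bind-mounted into the Jenkins container:
docker run -d -p 8080:8080 -p 50000:50000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/local/bin/docker:/usr/bin/docker \
    myjenkins_cloudfoundary_dood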
1. Once Docker Toolbox is installed on Windows, two shortcuts will appear on the desktop as shown below.
2. Double-click the "Docker Quickstart Terminal" shortcut icon and execute the command
"pwd" to find out which directory is mapped on Windows.
3. Create a workspace directory by executing the command "mkdir docker-workspace".
4. All subsequent steps use the above directory as the base directory.
(For instance, here the base directory is /c/users/Administrator/docker-workspace)
1. Create a directory “Jenkins” under “docker-workspace” directory by executing the command
“mkdir Jenkins”
2. Create a directory “Master” under “Jenkins” directory by executing the command “mkdir Master”
3. Create a file named "Dockerfile" under the "Master" directory and fill it with the content below.
# Master image based on the official Jenkins image
FROM jenkins:latest
MAINTAINER Kranthi Kumar Bitra <kranthi.b76@gmail.com>
USER root
# Install sudo and supervisor
RUN apt-get update \
&& apt-get install -y sudo supervisor \
&& rm -rf /var/lib/apt/lists/*
# Install Docker (only the client is used; it talks to the host daemon through the mounted socket)
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker root
USER root
RUN mkdir -p /var/log/supervisor
RUN mkdir -p /var/log/jenkins
# Download the Cloud Foundry CLI (extracted to / as /cf) and install its IBM Containers plugin
RUN curl -L "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar -zx
RUN /cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x86 -f
# Supervisor configuration that starts Jenkins, used as the container entry point
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
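Once the image is built (step 7 below), one quick way to confirm the Cloud Foundry CLI was extracted correctly is to run it ad hoc from the image (a sketch, assuming the image tag used in step 7):
docker run --rm --entrypoint /cf myjenkins_cloudfoundary_dood --version   # should print the cf CLI version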
4. Create a file named "supervisord.conf" under the "Master" directory and fill it with the content below.
[supervisord]
user=root
nodaemon=true
[program:chown]
priority=20
command=chown -R root:root /var/jenkins_home
[program:root]
user=root
autostart=true
autorestart=true
command=/usr/local/bin/jenkins.sh
redirect_stderr=true
stdout_logfile=/var/log/jenkins/%(program_name)s.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=10
environment = JENKINS_HOME="/var/jenkins_home",HOME="/var/jenkins_home",USER="root"
5. Create a file named "plugins.txt" under the "Master" directory and fill it with the content below.
structs:latest
workflow-step-api:latest
workflow-scm-step:latest
scm-api:latest
scm-sync-configuration:latest
credentials:latest
mailer:latest
junit:latest
subversion:latest
script-security:latest
matrix-project:latest
ssh-credentials:latest
git-client:latest
git:latest
greenballs:latest
ssh-slaves:latest
token-macro:latest
durable-task:latest
docker-plugin:latest
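Note that the Dockerfile above does not reference plugins.txt directly; with the official jenkins image, such a list is usually consumed during the image build by the plugin-installation helper bundled in the image. A rough sketch follows (the helper's name and invocation, plugins.sh or install-plugins.sh, vary between image versions, so treat this as an assumption to verify against your base image):
# assuming plugins.txt has been copied into the image, e.g. to /usr/share/jenkins/ref/plugins.txt
/usr/local/bin/install-plugins.sh $(cat /usr/share/jenkins/ref/plugins.txt)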
6. Create a file named "docker-compose.yml" under the "Master" directory and fill it with the content below.
myjenkins:
  image: myjenkins_cloudfoundary_dood
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./jenkins_home:/var/jenkins_home
  ports:
    - "8080:8080"
    - "50000:50000"
7. In the command prompt, go to the directory where the above four files exist and execute the commands
below to start Jenkins with DooD.
a. docker build -t myjenkins_cloudfoundary_dood . (this will take a few minutes to build
the image and only needs to be executed once)
b. To confirm whether the image was created, execute the command
"docker images myjenkins_cloudfoundary_dood"
c. docker-compose up
d. To confirm whether the Jenkins container was created, execute the command
"docker ps -a"; it should be in "Up" status
• Access Jenkins on Windows with the URL http://192.168.99.100:8080.
Once it is accessed, the "Unlock Jenkins" page will be shown.
• To get the administrator password, go to the "/c/users/Administrator/docker-workspace/
Jenkins/Master/jenkins_home/secrets" directory and open the "initialAdminPassword" file.
• Copy the content of the above file, paste it into the "Administrator password" text area
and click the "Continue" button.
• Now, it will redirect to the "Customize Jenkins" page; select "Install suggested plugins".
• After a few minutes, all the suggested plugins will be installed automatically.
• Now, it will redirect to "Create First Admin User"; enter the user details and click the
"Save and Finish" button.
• Now, the setup-complete page appears; click the "Start using Jenkins" button.
• Click the "Manage Jenkins" link in order to install a few plugins required for this Continuous
Integration setup.
• Now, click the "Manage Plugins" link to go to the "Plugin Manager" page.
• Select the "Available" tab, use the "Filter" to select all the plugins mentioned below, and
click the "Install without restart" button.
Build Pipeline Plugin
Build Graph View Plugin
Node and Label Parameter Plugin
Docker Plugin
Docker Commons Plugin
Docker Build Step Plugin
Copy Artifact Plugin
Groovy
HTML Publisher Plugin
• Once all plugins are installed, the status will change from "Pending" to "Success".
• Go to "Manage Jenkins" and click the "Manage Nodes" link to create slave JNLP agents.
• To create a slave node, click the "New Node" link.
• Name the node "agent1", select the "Permanent Agent" radio button and click the "OK" button.
• Specify the Remote root directory as "/var/jenkins_home" and click the "Save" button.
• Repeat the above three steps to create the nodes "agent2" and "agent3".
• Once the three nodes are created, the agents will be listed under the "Manage Nodes" link as below.
• Now, select each agent and copy its secret key, which will be used when creating the Jenkins
slave JNLP agents (see the sketch below).
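As a rough illustration, each agent container connects back to the master with slave.jar and its own secret; the host IP and agent name below are the example values used later in this guide, and the secret placeholder must be replaced with the value copied above:
java -jar /usr/share/jenkins/slave.jar \
    -jnlpUrl http://192.168.99.100:8080/computer/agent1/slave-agent.jnlp \
    -secret <secret copied from the agent1 node page>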
• Download Apache Maven from https://maven.apache.org/download.cgi
• Copy the Maven folder to the "/c/users/Administrator/docker-workspace/Jenkins/Master
/jenkins_home" directory.
• Go to "Manage Jenkins" and click the "Global Tool Configuration" link to set the Maven path.
• Select "Add Maven", un-check "Install automatically", specify the "Name" as "Maven_3.3.9" and
"MAVEN_HOME" as "/var/jenkins_home/apache-maven-3.3.9", and click the "Save" button
(a quick check of the path inside the container is sketched below).
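To sanity-check that the copied Maven distribution is visible inside the running Jenkins container, something like the following can be used (the container name is whatever docker-compose assigned; adjust as needed):
docker ps                                                                      # find the Jenkins container name
docker exec <jenkins-container> ls /var/jenkins_home/apache-maven-3.3.9/bin    # should list mvn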
• Create a directory named "registry" under the "docker-workspace" directory by executing the
command "mkdir registry".
• Go to the "registry" directory and create two sub-directories, "nginx" and "data".
• In the "registry" folder, create a file named "docker-compose.yml" and fill it with the content below.
nginx:
  image: "nginx:latest"
  ports:
    - 5043:443
  links:
    - registry:registry
  volumes:
    - ./nginx/:/etc/nginx/conf.d
registry:
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data
• Go to the "nginx" folder, create a file named "registry.conf" and fill it with the content below.
upstream docker-registry {
  server registry:5000;
}
server {
  listen 443;
  server_name myregistrydomain.com;
  # SSL
  # ssl on;
  # ssl_certificate /etc/nginx/conf.d/domain.crt;
  # ssl_certificate_key /etc/nginx/conf.d/domain.key;
  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;
  # required to avoid HTTP 411: see Issue #1486
  # (https://github.com/docker/docker/issues/1486)
  chunked_transfer_encoding on;
  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping,
    # catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }
    # To add basic authentication to v2 use auth_basic setting plus add_header
    # auth_basic "registry.localhost";
    # auth_basic_user_file /etc/nginx/conf.d/registry.password;
    # add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;
    proxy_pass http://docker-registry;
    proxy_set_header Host $http_host; # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}
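The SSL directives above are left commented out. If you enable them later, a self-signed certificate pair matching the referenced paths can be generated in the nginx folder, for example (a sketch; for anything beyond local testing, a CA-signed certificate is preferable):
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout domain.key -x509 -days 365 -out domain.crt \
    -subj "/CN=myregistrydomain.com"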
• Go to the "registry" folder and execute the command below to run the private registry.
o docker-compose up
o To confirm whether the registry-related containers were created, execute the
command "docker ps -a"; they should be in "Up" status (a quick API check is sketched below).
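As an additional sanity check (assuming the Docker Machine IP 192.168.99.100 used elsewhere in this guide, and that the SSL directives in registry.conf are still commented out), the registry's catalog endpoint can be queried through the nginx proxy:
curl http://192.168.99.100:5043/v2/_catalog   # an empty registry returns {"repositories":[]}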
• Create a directory "Slaves" under the "Jenkins" directory by executing the command "mkdir Slaves".
• Create three directories, "agent1", "agent2" and "agent3", under the "Slaves" directory.
• Now repeat the steps below under each agent directory.
• Create a file named "Dockerfile" and fill it with the content below.
# Slave image based on the official Java 8 JDK image
FROM java:8-jdk
MAINTAINER Kranthi Kumar Bitra <kranthi.b76@gmail.com>
ENV HOME /var/jenkins_home
# Create the jenkins user with /var/jenkins_home as its home directory
RUN useradd -c "Jenkins user" -d $HOME -m jenkins
RUN usermod -G users jenkins
USER root
# Install sudo and supervisor
RUN apt-get update \
&& apt-get install -y sudo supervisor \
&& rm -rf /var/lib/apt/lists/*
# Install Docker (only the client is used; it talks to the host daemon through the mounted socket)
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker jenkins
USER root
RUN mkdir -p /var/log/supervisor
RUN mkdir -p /var/log/jenkins
# Download the Cloud Foundry CLI (extracted to / as /cf) and install its IBM Containers plugin
RUN curl -L "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar -zx
RUN /cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x86 -f
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
# Download the Jenkins remoting (slave) jar used to connect back to the master
RUN curl -A "Mozilla/5.0" --create-dirs -sSLo /usr/share/jenkins/slave.jar \
    http://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/2.53/remoting-2.53.jar \
&& chmod 755 /usr/share/jenkins && chmod 644 /usr/share/jenkins/slave.jar
COPY jenkins-slave /usr/local/bin/jenkins-slave
WORKDIR /var/jenkins_home
USER jenkins
• Create a file named "supervisord.conf" and fill it with the content below.
[supervisord]
user=root
nodaemon=true
[program:chown]
priority=20
command=chown -R jenkins:jenkins /var/jenkins_home
[program:jenkins]
user=jenkins
autostart=true
autorestart=true
command=/usr/local/bin/jenkins.sh
redirect_stderr=true
stdout_logfile=/var/log/jenkins/%(program_name)s.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=10
environment = JENKINS_HOME="/var/jenkins_home",HOME="/var/jenkins_home",USER="jenkins"
• Create a file named "docker-compose.yml" and fill it with the content below.
myjenkinsAgent:
  image: myjenkins_cloudfoundary_dood_jnlp_agent
  command: >
    java -jar /usr/share/jenkins/slave.jar
    -jnlpUrl http://192.168.99.100:8080/computer/agent1/slave-agent.jnlp
    -secret 4f9b27619dc67a88161e05d6afe36b0508643b74989e5cf29e80c89ace3075a8
  volumes:
    - /usr/local/bin/docker:/bin/docker
    - /var/run/docker.sock:/var/run/docker.sock
• In the command prompt, go to the directory where the above three files exist and execute the
commands below to start the Jenkins slave as a JNLP agent.
o docker build -t myjenkins_cloudfoundary_dood_jnlp_agent .
(this will take a few minutes to build the image and only needs to be executed once)
o To confirm whether the above step succeeded, check with the command
shown in the image below.
o docker-compose up
o To confirm that the three slave agent containers were created, check with the command
shown in the image below (see also the sketch that follows).
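In case the referenced screenshots are unavailable, the same checks can be made from the Docker Quickstart Terminal roughly as follows (image and agent names as used above):
docker images myjenkins_cloudfoundary_dood_jnlp_agent   # the built slave image should be listed
docker ps -a                                            # the agent containers should show an "Up" status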
1. Job to create the artefact for the "product catalogue" Spring Boot project.
• Source Code Management – the Git project URL is given
• Build Triggers – this gets executed whenever a change is pushed to GitHub
• Build – create a jar for the Spring Boot project
• Post-build Actions – archive the artefacts, which will be used by the successor Jenkins jobs
2. Job to build the Docker image for the "product catalogue" Spring Boot project.
• Build Triggers – this will get executed only when the above artefact project build is stable
• Build – copy the artefacts generated by the previous project
• Build – build the Docker image for the product-catalogue project
3. Job to publish the Docker image to the private registry.
• Build Triggers – this will get executed only when the above Docker image build is stable
• Build – publish the image to the Docker registry (see the sketch after this list)
4. Job to run the project as a Docker container from the image published to the Docker registry.
• Build Triggers – this will get executed only when the above publish of the Docker image is stable
• Build – run the published image from the Docker registry
5. Build Graph View
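For jobs 3 and 4 above, the "Execute shell" build steps would look roughly like the following (a sketch assuming the private registry from the previous section at localhost:5043 and the image name and port used later in the pipeline example; adjust to your setup):
# Job 3 - publish the image to the private registry
docker tag retailstore/product-catalogue-service localhost:5043/retailstore/product-catalogue-service
docker push localhost:5043/retailstore/product-catalogue-service
# Job 4 - run a container from the published image
docker run -d --name product-catalogue-deployment -p 8870:8870 localhost:5043/retailstore/product-catalogue-service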
1. Instead of “Freestyle project”, select project type as “Pipeline”
2. Under the "Build Triggers" tab, select the option "This project is parameterized" and add a parameter
of type "Node". Here, we can control which nodes can be involved while executing a
pipeline, as below.
3. Write the pipeline script with clear functionality to be performed at each stage of the workflow
stage 'Create artifact'
node('agent1') {
git url: 'https://github.com/kranthiB/product-catalogue-service.git'
def mvnHome = tool 'Maven_3.3.9'
sh "${mvnHome}/bin/mvn clean install -DskipTests"
}
stage 'Archive artifact'
node{
archive '**/*.jar,**/Dockerfile'
}
stage 'Create artifact'
node ('master'){
git url: 'https://github.com/kranthiB/product-catalogue-service.git'
def mvnHome = tool 'Maven_3.3.9'
sh "${mvnHome}/bin/mvn clean package"
}
stage 'Archive artifact'
node {
archive '**/*.jar,**/Dockerfile'
}
stage 'Build Docker Image'
node('master') {
sh "cp ~/workspace/Product_Catalog_Local_Pipeline/target/
product-catalogue-service-1.0.jar ~/workspace/Product_Catalog_Local_Pipeline/"
sh "docker build -f ~/workspace/Product_Catalog_Local_Pipeline/src/main/
docker/Dockerfile -t retailstore/product-catalogue-service ."
}
stage 'Run Integration Tests'
node('master') {
def mvnHome = tool 'Maven_3.3.9'
sh "${mvnHome}/bin/mvn test"
}
stage 'Generate Reports'
node('master') {
def mvnHome = tool 'Maven_3.3.9'
sh "${mvnHome}/bin/mvn site"
}
stage 'Publish Code Coverage Report'
node('master') {
publishHTML(target: [allowMissing: false, alwaysLinkToLastBuild: false,
keepAll: false, reportDir: 'target/site/jacoco', reportFiles: 'index.html',
reportName: 'Code Coverage Report'])
}
stage 'Publish Project Report'
node('master') {
publishHTML(target: [allowMissing: false, alwaysLinkToLastBuild: false,
keepAll: false, reportDir: 'target/site', reportFiles: 'project-reports.html',
reportName: 'Project Report'])
}
stage 'Publish Docker Image'
node('master'){
sh "docker tag retailstore/product-catalogue-service localhost:5043/
retailstore/product-catalogue-service"
sh "docker push localhost:5043/retailstore/product-catalogue-service"
}
stage 'Run Docker Image'
node('master'){
sh "if [ \$(docker ps -aqf 'name=product-catalogue-deployment') ] ;
then docker rm -f \$(docker ps -aqf 'name=product-catalogue-deployment');
else echo \" No container found\" ; fi"
sh "docker run -d --name product-catalogue-deployment -p 8870:8870
localhost:5043/retailstore/product-catalogue-service"
}
4. Build Pipeline View
5. In the above stage view, we are publishing two reports, "Code Coverage Report" and
"Project Report", in two different stages.
6. Click "Code Coverage Report" to see the related results.
7. Click "Project Report" to see the related results.
8. On clicking "FindBugs":
9. On clicking "Checkstyle":
10. On clicking "Surefire Report":
• In a single pipeline job, we can specify all stages of our workflow, whereas in the Graph view each stage
of the flow is a separate "Freestyle" job.
• In the Pipeline view, the master dynamically chooses the slave node on which to run a particular stage
depending on availability, whereas in the Graph view we have to statically specify the slave node,
which is fixed.
• Download and install the Cloud Foundry CLI from the URL below:
https://github.com/cloudfoundry/cli
• Install the ibm-containers cf CLI plugin using the below command
cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x86 -f
• Log in to IBM Bluemix using the login command as specified below
cf login -a https://api.ng.bluemix.net -u prokarmapoc@gmail.com -p prokarm@2013 -o prokarmapoc -s poc
• Set the namespace for the IBM containers (this is a one-time setup)
cf ic namespace set poc_ic
• Log in to IBM Containers
cf ic login
• Tag the built Docker image for the IBM Container Registry
docker tag retailstore/product-catalogue-service registry.ng.bluemix.net/poc_ic/product-catalogue-service:latest
• Push the image to the IBM Container Registry
docker push registry.ng.bluemix.net/poc_ic/product-catalogue-service
• Run the container from the image pushed to the IBM registry as below (a quick status check is sketched after this step)
cf ic run -p 169.44.117.16:8870:8870 --name product-catalogue-service registry.ng.bluemix.net/poc_ic/product-catalogue-service:latest
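To verify that the container actually started on Bluemix, the IBM Containers plugin mirrors the familiar Docker commands; for example (a sketch, exact output depends on the account):
cf ic ps    # the product-catalogue-service container should appear with a Running status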
Create a pipeline project in Jenkins and add the pipeline script below. This will perform the following steps:
• Create the jar for the sample Spring Boot project
• Archive the generated jar
• Build a Docker image using the archived jar and the Dockerfile
• Push the generated Docker image to the IBM Container Registry and run an instance in Bluemix from the registry
stage 'Create artifact'
node {
git url: 'https://github.com/kranthiB/product-catalogue-service.git'
def mvnHome = tool 'Maven_3.3.9'
sh "${mvnHome}/bin/mvn clean install -DskipTests"
}
stage 'Archive artifact'
node {
archive '**/*.jar,**/Dockerfile'
}
stage 'Build Docker Image'
node {
sh "pwd"
sh "cp ~/workspace/Product_Catalog_BlueMix_Pipeline/target/
product-catalogue-service-1.0.jar ~/workspace/Product_Catalog_BlueMix_Pipeline/"
sh "docker build -f ~/workspace/Product_Catalog_BlueMix_Pipeline/src/main/
docker/Dockerfile -t retailstore/product-catalogue-service ."
}
stage 'Deploy To IBM BLUEMIX'
node {
sh "/cf login -a https://api.ng.bluemix.net -u prokarmapoc@gmail.com -p prokarm@2013
-o prokarmapoc -s poc"
sh "/cf ic login"
sh "docker tag retailstore/product-catalogue-service
registry.ng.bluemix.net/poc_ic/product-catalogue-service:latest"
sh "docker push registry.ng.bluemix.net/poc_ic/product-catalogue-service"
sh "/cf ic run -p 169.44.8.154:8870:8870 --name product-catalogue-service
registry.ng.bluemix.net/poc_ic/product-catalogue-service:latest"
}
• Log in to IBM DevOps Services using Bluemix credentials: https://hub.jazz.net
• Click "Create Project" in order to map the existing Git URL.
• Select the GitHub repository to link.
• Select the Bluemix space and click the "Create" button.
• Once the project is created, it can be viewed in the dashboard.
• On selecting the project, options are provided to edit the code or to Build & Deploy it.
• Create a pipeline with two stages, Build and Run, as below.
• The configuration under the Build stage is as shown below.
• The configuration under the Run stage is as shown below.