Remove dependency on playbook2image, rebase directly on OS. #4742
@@ -1,8 +1,12 @@
.*
bin
docs
hack
inventory
test
utils
**/*.md
*.spec
*.ini
*.txt
setup*
@@ -1,51 +1,43 @@
# Using playbook2image as a base
# See https://github.com/openshift/playbook2image for details on the image
# including documentation for the settings/env vars referenced below
FROM registry.centos.org/openshift/playbook2image:latest
FROM centos:7

MAINTAINER OpenShift Team <dev@lists.openshift.redhat.com>

USER root

# install ansible and deps
RUN INSTALL_PKGS="python-lxml pyOpenSSL python2-cryptography openssl java-1.8.0-openjdk-headless httpd-tools openssh-clients" \
    && yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
    && EPEL_PKGS="ansible python-passlib python2-boto" \
    && yum install -y epel-release \
    && yum install -y --setopt=tsflags=nodocs $EPEL_PKGS \
    && rpm -q $INSTALL_PKGS $EPEL_PKGS \
    && yum clean all

LABEL name="openshift/origin-ansible" \
      summary="OpenShift's installation and configuration tool" \
      description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
      url="https://github.com/openshift/openshift-ansible" \
      io.k8s.display-name="openshift-ansible" \
      io.k8s.description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
      io.openshift.expose-services="" \
      io.openshift.tags="openshift,install,upgrade,ansible"
      io.openshift.tags="openshift,install,upgrade,ansible" \
      atomic.run="once"
Review comment: 👍
USER root
ENV USER_UID=1001 \
    HOME=/opt/app-root/src \
    WORK_DIR=/usr/share/ansible/openshift-ansible \
    OPTS="-v"

# Create a symlink to /opt/app-root/src so that files under /usr/share/ansible are accessible.
# This is required since the system-container uses by default the playbook under
# /usr/share/ansible/openshift-ansible. With this change we won't need to keep two different
# configurations for the two images.
RUN mkdir -p /usr/share/ansible/ && ln -s /opt/app-root/src /usr/share/ansible/openshift-ansible
# Add image scripts and files for running as a system container
COPY images/installer/root /
# Include playbooks, roles, plugins, etc. from this repo
COPY . ${WORK_DIR}

RUN INSTALL_PKGS="skopeo openssl java-1.8.0-openjdk-headless httpd-tools" && \
    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
    rpm -V $INSTALL_PKGS && \
    yum clean all
RUN /usr/local/bin/user_setup \
    && rm /usr/local/bin/usage.ocp

USER ${USER_UID}

# The playbook to be run is specified via the PLAYBOOK_FILE env var.
# This sets a default of openshift_facts.yml as it's an informative playbook
# that can help test that everything is set properly (inventory, sshkeys)
ENV PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
Review comment: Just curious, why was …
    OPTS="-v" \
    INSTALL_OC=true

# playbook2image's assemble script expects the source to be available in
# /tmp/src (as per the source-to-image specs) so we import it there
ADD . /tmp/src

# Running the 'assemble' script provided by playbook2image will install
# dependencies specified in requirements.txt and install the 'oc' client
# as per the INSTALL_OC environment setting above
RUN /usr/libexec/s2i/assemble

# Add files for running as a system container
COPY images/installer/system-container/root /

CMD [ "/usr/libexec/s2i/run" ]
WORKDIR ${WORK_DIR}
ENTRYPOINT [ "/usr/local/bin/entrypoint" ]
CMD [ "/usr/local/bin/run" ]

Review comment: Now that this has been re-arranged I think it would make sense to complete the work by moving the contents under … The …
Review comment: Will go ahead and add a commit for this
Review comment: Changed in e02bc5d
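The symlink created in the RUN line above can be exercised outside a container; the sketch below uses a temp directory standing in for the image's real /opt and /usr paths (file names here are illustrative, not from the PR):

```shell
#!/bin/bash -e
# Reproduce the Dockerfile's symlink trick locally: a temp dir stands in
# for the image filesystem so nothing outside it is touched.
tmp=$(mktemp -d)
mkdir -p "$tmp/opt/app-root/src" "$tmp/usr/share/ansible"
ln -s "$tmp/opt/app-root/src" "$tmp/usr/share/ansible/openshift-ansible"

# A file written under the source path is visible through the symlinked
# path, so one copy of the playbooks serves both locations.
echo "hosts: all" > "$tmp/opt/app-root/src/demo.yml"
cat "$tmp/usr/share/ansible/openshift-ansible/demo.yml"   # prints "hosts: all"
rm -rf "$tmp"
```

This is why the image no longer needs two separate configurations for the plain-container and system-container cases: both default paths resolve to the same files.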
@@ -1,55 +1,46 @@
FROM openshift3/playbook2image
FROM rhel7.3:7.3-released

MAINTAINER OpenShift Team <dev@lists.openshift.redhat.com>

# override env vars from base image
ENV SUMMARY="OpenShift's installation and configuration tool" \
    DESCRIPTION="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster"
USER root

# Playbooks, roles, and their dependencies are installed from packages.
RUN INSTALL_PKGS="atomic-openshift-utils atomic-openshift-clients python-boto openssl java-1.8.0-openjdk-headless httpd-tools" \
    && yum repolist > /dev/null \
    && yum-config-manager --enable rhel-7-server-ose-3.6-rpms \
    && yum-config-manager --enable rhel-7-server-rh-common-rpms \
    && yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
    && rpm -q $INSTALL_PKGS \
    && yum clean all

LABEL name="openshift3/ose-ansible" \
Review comment: Same question as with …
      summary="$SUMMARY" \
      description="$DESCRIPTION" \
      summary="OpenShift's installation and configuration tool" \
      description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
      url="https://github.com/openshift/openshift-ansible" \
      io.k8s.display-name="openshift-ansible" \
      io.k8s.description="$DESCRIPTION" \
      io.k8s.description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
      io.openshift.expose-services="" \
      io.openshift.tags="openshift,install,upgrade,ansible" \
      com.redhat.component="aos3-installation-docker" \
      version="v3.6.0" \
      release="1" \
      architecture="x86_64"

# Playbooks, roles and their dependencies are installed from packages.
# Unlike in Dockerfile, we don't invoke the 'assemble' script here
# because all content and dependencies (like 'oc') is already
# installed via yum.
USER root
RUN INSTALL_PKGS="atomic-openshift-utils atomic-openshift-clients python-boto skopeo openssl java-1.8.0-openjdk-headless httpd-tools" && \
    yum repolist > /dev/null && \
    yum-config-manager --enable rhel-7-server-ose-3.6-rpms && \
    yum-config-manager --enable rhel-7-server-rh-common-rpms && \
    yum install -y $INSTALL_PKGS && \
    yum clean all

# The symlinks below are a (hopefully temporary) hack to work around the fact that this
# image is based on python s2i which uses the python27 SCL instead of system python,
# and so the system python modules we need would otherwise not be in the path.
RUN ln -s /usr/lib/python2.7/site-packages/{boto,passlib} /opt/app-root/lib64/python2.7/

USER ${USER_UID}
      architecture="x86_64" \
      atomic.run="once"

# The playbook to be run is specified via the PLAYBOOK_FILE env var.
# This sets a default of openshift_facts.yml as it's an informative playbook
# that can help test that everything is set properly (inventory, sshkeys).
# As the playbooks are installed via packages instead of being copied to
# $APP_HOME by the 'assemble' script, we set the WORK_DIR env var to the
# location of openshift-ansible.
ENV PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
Review comment: Just curious, why was …
Review comment: I want the image to do something helpful if you just run it without reading the docs (or if you miss salient items). To me the most helpful thing seems to print a usage message immediately and exit. If we specify PLAYBOOK_FILE then we're going to run a playbook, and running a "usage" playbook seemed like overkill when we can just... run
    ANSIBLE_CONFIG=/usr/share/atomic-openshift-utils/ansible.cfg \
ENV USER_UID=1001 \
    HOME=/opt/app-root/src \
    WORK_DIR=/usr/share/ansible/openshift-ansible \
    ANSIBLE_CONFIG=/usr/share/atomic-openshift-utils/ansible.cfg \
    OPTS="-v"

# Add files for running as a system container
COPY system-container/root /
# Add image scripts and files for running as a system container
COPY root /

RUN /usr/local/bin/user_setup \
    && mv /usr/local/bin/usage{.ocp,}

USER ${USER_UID}

CMD [ "/usr/libexec/s2i/run" ]
WORKDIR ${WORK_DIR}
ENTRYPOINT [ "/usr/local/bin/entrypoint" ]
CMD [ "/usr/local/bin/run" ]
@@ -0,0 +1,17 @@
#!/bin/bash -e
#
# This file serves as the main entrypoint to the openshift-ansible image.
#
# For more information see the documentation:
# https://github.com/openshift/openshift-ansible/blob/master/README_CONTAINER_IMAGE.md


# Patch /etc/passwd file with the current user info.
# The current user's entry must be correctly defined in this file in order for
# the `ssh` command to work within the created container.

if ! whoami &>/dev/null; then
  echo "${USER:-default}:x:$(id -u):$(id -g):Default User:$HOME:/sbin/nologin" >> /etc/passwd
fi

exec "$@"
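The line the entrypoint appends follows the standard seven-field passwd layout (name:password:UID:GID:GECOS:home:shell). A quick local sketch of what gets written, using a temp file in place of /etc/passwd since this is only a demo:

```shell
#!/bin/bash
# Build the same passwd entry the entrypoint above would append, but write
# it to a temp file rather than the real /etc/passwd.
passwd_file=$(mktemp)
entry="${USER:-default}:x:$(id -u):$(id -g):Default User:${HOME}:/sbin/nologin"
echo "$entry" >> "$passwd_file"

# A valid passwd entry has exactly 7 colon-separated fields.
awk -F: '{print "fields:", NF}' "$passwd_file"   # prints "fields: 7"
rm -f "$passwd_file"
```

This matters because OpenShift runs containers with an arbitrary, randomly assigned UID that has no passwd entry, and `ssh` refuses to run for a user it cannot resolve.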
@@ -0,0 +1,46 @@
#!/bin/bash -e
#
# This file serves as the default command to the openshift-ansible image.
# Runs a playbook with inventory as specified by environment variables.
#
# For more information see the documentation:
# https://github.com/openshift/openshift-ansible/blob/master/README_CONTAINER_IMAGE.md

# SOURCE and HOME DIRECTORY: /opt/app-root/src

if [[ -z "${PLAYBOOK_FILE}" ]]; then
  echo
  echo "PLAYBOOK_FILE must be provided."
  exec /usr/local/bin/usage
fi

INVENTORY="$(mktemp)"
if [[ -v INVENTORY_FILE ]]; then
  # Make a copy so that ALLOW_ANSIBLE_CONNECTION_LOCAL below
  # does not attempt to modify the original
  cp -a ${INVENTORY_FILE} ${INVENTORY}
elif [[ -v INVENTORY_URL ]]; then
  curl -o ${INVENTORY} ${INVENTORY_URL}
elif [[ -v DYNAMIC_SCRIPT_URL ]]; then
  curl -o ${INVENTORY} ${DYNAMIC_SCRIPT_URL}
  chmod 755 ${INVENTORY}
else
  echo
  echo "One of INVENTORY_FILE, INVENTORY_URL or DYNAMIC_SCRIPT_URL must be provided."
  exec /usr/local/bin/usage
fi
INVENTORY_ARG="-i ${INVENTORY}"

if [[ "$ALLOW_ANSIBLE_CONNECTION_LOCAL" = false ]]; then
  sed -i s/ansible_connection=local// ${INVENTORY}
fi

if [[ -v VAULT_PASS ]]; then
  VAULT_PASS_FILE=.vaultpass
  echo ${VAULT_PASS} > ${VAULT_PASS_FILE}
  VAULT_PASS_ARG="--vault-password-file ${VAULT_PASS_FILE}"
fi

cd ${WORK_DIR}

exec ansible-playbook ${INVENTORY_ARG} ${VAULT_PASS_ARG} ${OPTS} ${PLAYBOOK_FILE}
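Note that the script's `[[ -v VAR ]]` tests check whether a variable is set at all, not whether it is non-empty (unlike the `[[ -z ... ]]` check used for PLAYBOOK_FILE). A small bash illustration of the difference:

```shell
#!/bin/bash
# [[ -v NAME ]] (bash 4.2+) is true when NAME is set, even to the empty
# string, so INVENTORY_FILE="" would still take the first branch above.
unset INVENTORY_FILE
if [[ -v INVENTORY_FILE ]]; then echo "set"; else echo "unset"; fi   # prints "unset"

INVENTORY_FILE=""
if [[ -v INVENTORY_FILE ]]; then echo "set"; else echo "unset"; fi   # prints "set"
```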
@@ -0,0 +1,33 @@
#!/bin/bash -e
cat <<"EOF"

The origin-ansible image provides several options to control the behaviour of the containers.
For more details on these options see the documentation:

  https://github.com/openshift/openshift-ansible/blob/master/README_CONTAINER_IMAGE.md

At a minimum, when running a container using this image you must provide:

* ssh keys so that Ansible can reach your hosts. These should be mounted as a volume under
  /opt/app-root/src/.ssh
* An inventory file. This can be mounted inside the container as a volume and specified with the
  INVENTORY_FILE environment variable. Alternatively you can serve the inventory file from a web
  server and use the INVENTORY_URL environment variable to fetch it.
* The playbook to run. This is set using the PLAYBOOK_FILE environment variable.

Here is an example of how to run a containerized origin-ansible with
the openshift_facts playbook, which collects and displays facts about your
OpenShift environment. The inventory and ssh keys are mounted as volumes
(the latter requires setting the uid in the container and SELinux label
in the key file via :Z so they can be accessed) and the PLAYBOOK_FILE
environment variable is set to point to the playbook within the image:

  docker run -tu `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z,ro \
    -v /etc/ansible/hosts:/tmp/inventory:Z,ro \
    -e INVENTORY_FILE=/tmp/inventory \
    -e OPTS="-v" \
    -e PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
    openshift/origin-ansible

EOF
Review comment: Is there any reason to move the USER and RUN commands before the LABELs? If not, wouldn't it be better to leave the LABELs first to keep the metadata type of content at the top? The caching-related benefits of that are probably not relevant in our case, but still wondering if it would make sense from a file organization POV?

Review comment: It's really just that when iterating on changes, it helps to put the slowest thing (the install) first so it can be most likely to be cached. Not a huge deal to me but I don't like having to re-run the install just because I tweaked the labels. Labels are still pretty close to the top... but we can move it if you like.

Review comment: Heh, in the "normal" use case it's usually the other way around: when you build the same image over time, labels change less than the result of 'yum install', so having the labels before the RUN should help caching. However, this doesn't really apply to our build system (I think), so it was more of a comment around the logical structure of the file. To me it looks better to have the metadata first, but up to you.

Review comment: Our build system doesn't use cache, so as far as I'm concerned the only optimizations to consider are for iterative local development. So I vote to leave it as is unless I hear loud complaining :)
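The trade-off in this thread can be sketched with a minimal, hypothetical Dockerfile (not the one from this PR): with the slow install first, editing a LABEL only rebuilds the cheap layers that come after the RUN.

```dockerfile
# Hypothetical sketch of the ordering discussed above, not a real image.
FROM centos:7

# Slowest step first: a later LABEL tweak does not invalidate this layer's
# cache entry, so iterating on metadata skips the reinstall.
RUN yum install -y --setopt=tsflags=nodocs ansible \
    && yum clean all

# Metadata after the install: changing these lines re-runs only the layers
# from this point down.
LABEL name="example/ansible-runner" \
      summary="Illustration of layer-cache ordering only"

CMD [ "ansible", "--version" ]
```

The inverse ordering (LABEL first) wins when labels are stable across rebuilds but package contents churn, which is the "normal" case the second commenter describes.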