Remove dependency on playbook2image, rebase directly on OS. #4742

Merged

Conversation

juanvallejo
Contributor

@juanvallejo juanvallejo commented Jul 12, 2017

Remove dependency on playbook2image by only adding the functionality that we currently use from it directly to our container image.

Draft: https://docs.google.com/document/d/1AlvlI0dw_vGHD_Fx4psWxbAuCXIc7blJjT554bGfdUo
Trello card: https://trello.com/c/BHJYd3Y5

TODO

  • Build from rhel7 (rhel7.3:7.3-released?) or (upstream) centos7
  • Update yum install to include ansible
  • Combine o-a + p2i labels and ENV
  • Remove ln -s lib hack
  • Copy /etc/passwd-munging logic (could we add comments for why it’s there…)
  • Copy .s2i run script to images/installer and make it the cmd
  • Modify to print usage instead of just exiting when there is no inventory/playbook specified
  • Test with installing logging and metrics on multi-host cluster; test check playbooks
  • Test that it does what we expect when run with runc
  • Test that it does what we expect when run with atomic install (currently being tested by Luke)
  • Merge changes to dist-git aos3-installation-docker and build
  • Have QE take a good look at it
  • Probably lots of room for improvement re ssh key/config handling, inventory generation, output redirection, etc.

Known issues

  • Running playbooks/byo/config.yml deploys OpenShift in a remote host successfully but fails when deploying logging with: Failed to get information on remote file (/tmp/openshift-logging-ansible-PERtiY/signing.conf): /bin/sh: sudo: command not found
    • when openshift_hosted_logging_deploy=true

cc @sosiouxme @rhcarvalho @codificat @brenton

@juanvallejo juanvallejo force-pushed the jvallejo/remove-dependency-on-p2i branch 2 times, most recently from e2e62ee to 2bc1a2d Compare July 12, 2017 20:15
@rhcarvalho
Contributor

Out of curiosity, why do we want to part ways with playbook2image?

@@ -18,6 +18,8 @@ LABEL name="openshift3/ose-ansible" \
version="v3.6.0" \
release="1" \
architecture="x86_64"
vcs-url="https://github.com/openshift/openshift-ansible" \
vcs-type="git" \

# Playbooks, roles and their dependencies are installed from packages.
# Unlike in Dockerfile, we don't invoke the 'assemble' script here
Member

Can probably remove the rest of this comment

Contributor Author

Done

@@ -18,6 +18,8 @@ LABEL name="openshift3/ose-ansible" \
version="v3.6.0" \
release="1" \
architecture="x86_64"
vcs-url="https://github.com/openshift/openshift-ansible" \
vcs-type="git" \
Member

thought there was another label p2i had that o-a didn't... something pep posted in chat

Member

atomic.run="once" \

Also the last line shouldn't have a \


if [[ -z "${PLAYBOOK_FILE}" ]]; then
exec /usr/bin/usage
exit 0
Member

not necessary after exec
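The point about `exec` can be shown with a minimal sketch (an illustrative script, not the PR's actual entrypoint): `exec` replaces the current shell process, so any statement placed after it is dead code.

```shell
#!/bin/bash
# Minimal sketch: 'exec' replaces the shell with the new program,
# so the line after it never runs -- just like the 'exit 0' above.
entrypoint_check() {
  if [[ -z "${PLAYBOOK_FILE:-}" ]]; then
    exec echo "usage: set PLAYBOOK_FILE"
    echo "never reached"   # dead code
  fi
  echo "running ${PLAYBOOK_FILE}"
}

# Run in a command-substitution subshell so 'exec' only replaces that subshell.
out="$(PLAYBOOK_FILE= entrypoint_check)"
echo "$out"   # prints the usage message
```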

curl -o ${INVENTORY} ${DYNAMIC_SCRIPT_URL}
chmod 755 ${INVENTORY}
else
echo "One of INVENTORY_FILE, INVENTORY_URL or DYNAMIC_SCRIPT_URL must be provided"
Member

can this run usage after?

Contributor Author

Sure, done

@juanvallejo juanvallejo force-pushed the jvallejo/remove-dependency-on-p2i branch 2 times, most recently from fe203fb to 8a82808 Compare July 12, 2017 23:02
@sosiouxme
Member

sosiouxme commented Jul 13, 2017 via email

RUN pip install -Iv ansible==2.2.0.0

COPY ./images/installer/bin /usr/bin
COPY ./images/installer/bin/user_setup /tmp
Contributor

Since we are optimizing the image, we could reduce the number of layers by having a single COPY instruction, and placing all the files under ./images/installer/root. Then, we can group together all RUN instructions into a single one.
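The layout the reviewer suggests might look roughly like this (a sketch only; the `root` directory layout and package list are assumptions, not the PR's final Dockerfile):

```dockerfile
# Hypothetical consolidation: one COPY whose source tree mirrors the image
# filesystem, and one RUN chaining the install/setup steps into one layer.
COPY ./images/installer/root /

RUN INSTALL_PKGS="ansible openssl httpd-tools" \
 && yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
 && yum clean all \
 && /usr/local/bin/user_setup
```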

io.openshift.tags="openshift,install,upgrade,ansible" \
vcs-url="https://github.com/openshift/openshift-ansible" \
vcs-type="git" \
version="alpha"
Contributor

What is this label used for?

Contributor Author

https://docs.docker.com/engine/reference/builder/#label just part of image metadata; however, I seem to have kept the default used by p2i. Not sure what the value should be here; 1.6.0?

Contributor

Sure, some of those labels are used for example in the web console. It made sense for p2i since it was meant to be used as a builder image (thus the io.k8s.* and io.openshift.* labels). Some other labels are used in build systems.

If we have no use for a label, perhaps it is a good time to get rid of it instead of copying from p2i?

A version label seems to be only one more thing that needs to be kept up-to-date.

Contributor

(If we really need a version label, we could have our image build script set the right version on build)

# Create a symlink to /opt/app-root/src so that files under /usr/share/ansible are accessible.
# This is required since the system-container uses by default the playbook under
# /usr/share/ansible/openshift-ansible. With this change we won't need to keep two different
# configurations for the two images.
RUN mkdir -p /usr/share/ansible/ && ln -s /opt/app-root/src /usr/share/ansible/openshift-ansible

# Make home folder writable in order to let playbooks make changes
RUN chmod a+rwx -R /opt/app-root/src

RUN INSTALL_PKGS="skopeo openssl java-1.8.0-openjdk-headless httpd-tools" && \
yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
yum clean all

USER ${USER_UID}
Contributor

This was defined in p2i, now at this point it is undefined.

@@ -1,7 +1,7 @@
# Using playbook2image as a base
# See https://github.com/openshift/playbook2image for details on the image
# including documentation for the settings/env vars referenced below
FROM registry.centos.org/openshift/playbook2image:latest
FROM openshift/base-centos7
Member

uh... why this one? This brings in a bunch of dev stuff we don't need. Why not just centos7?

Contributor Author

Was not sure which one to use but for some reason, openshift/base-centos7 seemed more inviting. Will switch to just centos7

RUN yum install -y epel-release && yum clean all -y

# workaround for https://github.com/openshift/openshift-ansible/issues/3111
# install ansible via pip to lock version
Member

this is resolved with later ansible, should be able to use latest ansible from EPEL or origin repos

# workaround for https://github.com/openshift/openshift-ansible/issues/3111
# install ansible via pip to lock version
RUN yum install -y --setopt=tsflags=nodocs python-pip python-devel && yum clean all -y
RUN pip install -Iv ansible==2.2.0.0
Member

so we should just be able to yum install ansible

Contributor Author

sounds good, should we still install python / worry about having any other deps that Dockerfile.rhel7 installs as well?

Member

nope, don't see any need for them... however we are probably going to need to add RPMs for role dependencies that were pip installed upstream and RPM dependencies downstream.

# dependencies specified in requirements.txt and install the 'oc' client
# as per the INSTALL_OC environment setting above
RUN /usr/libexec/s2i/assemble
RUN /usr/bin/assemble
Member

we don't need any of this assemble stuff

I'm wondering what we need the oc client for either but I guess we should include that in case something is actually using it.

we don't need the INSTALL_OC variable if we just hardcode that bit from assemble

echo "---> Installing 'oc' binary..."
TMPDIR=$(mktemp -d)
cd ${TMPDIR}
OC_BINARY_URL=$(python -c "import requests;releases = requests.get('https://api.github.com/repos/openshift/origin/releases').json();print [s for s in [r for r in releases if not r['prerelease'] and '1.4' in r['name']][0]['assets'] if 'linux-64' in s['browser_download_url']][0]['browser_download_url']")
Member

gets latest origin oc... origin-only of course. err, it's a little unclear to me what this actually ends up with... 1.4 releases?

downstream, RPM deps will pull in oc

Contributor Author

sounds good, will remove

else
echo "One of INVENTORY_FILE, INVENTORY_URL or DYNAMIC_SCRIPT_URL must be provided"
exec /usr/bin/usage
exit 1
Member

exec doesn't return

@@ -18,26 +18,42 @@ LABEL name="openshift3/ose-ansible" \
version="v3.6.0" \
release="1" \
architecture="x86_64"
Member

needs a \ at the end
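For reference, the continuation rule the comment points at: every line of a multi-line LABEL needs a trailing backslash except the last one (label values here are the ones from the diff above):

```dockerfile
LABEL name="openshift3/ose-ansible" \
      version="v3.6.0" \
      release="1" \
      architecture="x86_64" \
      vcs-url="https://github.com/openshift/openshift-ansible" \
      vcs-type="git"
```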

USER root
RUN INSTALL_PKGS="atomic-openshift-utils atomic-openshift-clients python-boto skopeo openssl java-1.8.0-openjdk-headless httpd-tools" && \
yum repolist > /dev/null && \
yum-config-manager --enable rhel-7-server-ose-3.6-rpms && \
yum-config-manager --enable rhel-7-server-rh-common-rpms && \
yum install -y --setopt=tsflags=nodocs ansible
Member

needs && \

Contributor Author

Looking at INSTALL_PKGS, do we still need some of them? (looking at java in particular)

Contributor

I think Java was added to be able to run the logging installer? Not sure. It was added relatively recently, we can check the history.

@juanvallejo juanvallejo force-pushed the jvallejo/remove-dependency-on-p2i branch 3 times, most recently from bf59e7c to e7407e5 Compare July 13, 2017 21:17
@codificat codificat left a comment

Adding some comments/questions/issues found so far, haven't had time to actually build and test the images yet

io.k8s.display-name="openshift-ansible" \
io.k8s.description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
io.openshift.expose-services="" \
io.openshift.tags="openshift,install,upgrade,ansible"
Member

I'm not sure we should be removing these: there are use cases that include running the image inside openshift, for which this metadata would be useful...

&& yum install -y --setopt=tsflags=nodocs --enablerepo=epel ansible \
&& INSTALL_PKGS="python-lxml pyOpenSSL python2-cryptography skopeo openssl java-1.8.0-openjdk-headless httpd-tools" \
&& yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
&& rpm -V $INSTALL_PKGS \
Member

is this rpm verification needed/useful? (just wondering)

Member

I have seen too many instances of doing yum install X Y Z and having Z not available for some reason but yum still installs everything else and reports success. This fails if you don't install everything you wanted.

Member

although rpm -q should work just as well...

Member

Case in point! skopeo comes from extras and wasn't being installed. And we don't actually need it.

Contributor

What @sosiouxme said. We use rpm -V in many of the images we ship with OpenShift, e.g.:

https://github.com/sclorg/postgresql-container/blob/master/9.5/Dockerfile#L44

And yes, it saves time debugging why an image is missing some package we think was installed.
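The fail-fast pattern described above, sketched as a single RUN (the package list is illustrative, not the PR's exact one):

```dockerfile
# 'yum install A B C' can silently skip an unavailable package and still
# exit 0; 'rpm -q' (or 'rpm -V') then fails the build if anything is missing.
RUN INSTALL_PKGS="python-lxml pyOpenSSL openssl httpd-tools" \
 && yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
 && rpm -q $INSTALL_PKGS \
 && yum clean all
```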

PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
OPTS="-v"

COPY ./images/installer/bin /usr/bin
Member

I'm being picky, but this copies a README file into /usr/bin... I'd suggest moving the README.md inside images/installer/bin one level up and expand it a bit (see below)

Contributor

Also leveraging the practice of writing Dockerfiles... what @codificat says + make a single top-level dir root and place files mirroring what we want to see in the image. Things that need not / should not be in the image should be outside of root. E.g.:

https://github.com/sclorg/postgresql-container/blob/master/9.5/Dockerfile#L56

==================================================

Contains image entrypoint as well as any other scripts useful
in aiding an image run.
Member

I believe this file can be moved up one level and briefly document the contents, pointing to other readmes for details... it should make it easier to find and also avoid it ending up in /usr/bin

&& chmod -R ug+x /opt/app-root/{bin,etc} /usr/bin/user_setup \
#
# Make home folder writable in order to let playbooks make changes
&& chmod a+rwx -R /opt/app-root/src \
Member

I'm failing to see why we need this...

Contributor Author

At some point I was getting an error because .ansible could not be written to /opt/app-root/src. Will test again though

io.k8s.display-name="openshift-ansible" \
io.k8s.description="$DESCRIPTION" \
io.openshift.expose-services="" \
io.openshift.tags="openshift,install,upgrade,ansible" \
Member

Ditto about these labels.. they are certainly not required, but I believe it doesn't hurt to have them in this image

com.redhat.component="aos3-installation-docker" \
version="v3.6.0" \
Member

This one is actually required by the build system

USER root
RUN INSTALL_PKGS="atomic-openshift-utils atomic-openshift-clients python-boto skopeo openssl java-1.8.0-openjdk-headless httpd-tools" && \
yum repolist > /dev/null && \
yum-config-manager --enable rhel-7-server-ose-3.6-rpms && \
yum-config-manager --enable rhel-7-server-rh-common-rpms && \
yum install -y --setopt=tsflags=nodocs ansible && \
Member

I believe ansible gets pulled as a dependency - but if we want to install explicitly maybe just add it to INSTALL_PKGS above...

Member

+1 it does.


RUN mkdir -p ${APP_HOME} ${APP_ROOT}/etc ${APP_ROOT}/bin
RUN chmod -R ug+x ${APP_ROOT}/bin ${APP_ROOT}/etc /tmp/user_setup && \
/tmp/user_setup
Member

These can be consolidated in a single RUN.
More importantly, they reference stuff that is defined in ENV and COPY statements that are below - order should be fixed


CMD [ "/usr/libexec/s2i/run" ]
RUN sed "s@${USER_NAME}:x:${USER_UID}:0@${USER_NAME}:x:\${USER_ID}:\${GROUP_ID}@g" /etc/passwd > ${APP_ROOT}/etc/passwd.template
Member

This RUN is now replaced by the entrypoint, which isn't defined... so this line should be replaced with ENTRYPOINT ["/usr/bin/entrypoint"]
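The /etc/passwd munging such an entrypoint performs follows the usual arbitrary-UID pattern; a sketch with illustrative names (not the PR's exact script): OpenShift may run the container with a random UID that has no passwd entry, so the entrypoint appends one at startup.

```shell
#!/bin/bash
# Sketch (illustrative names): build the /etc/passwd line an entrypoint
# would append so that an arbitrary UID resolves to a real user.
passwd_entry() {
  local uid="$1" gid="$2"
  printf 'default:x:%s:%s:Default user:/opt/app-root/src:/sbin/nologin\n' \
    "$uid" "$gid"
}

# At container start the entrypoint would do roughly:
#   passwd_entry "$(id -u)" 0 >> /etc/passwd
passwd_entry 1001 0
```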


USER root
ENV USER_UID=1000 \
Member

In Dockerfile.rhel7 it says 1001 instead - let's pick one and make it the same in both

Contributor Author

Will change to 1001 to match .rhel7. Changed it here to 1000 to test while I finished adding the entrypoint script that patched /etc/passwd with a 1000 user entry


MAINTAINER OpenShift Team <dev@lists.openshift.redhat.com>

USER root

Member

Is there any reason to move the USER and RUN commands before the LABELs? If not, wouldn't it be better to leave the LABELs first to keep the metadata type of content at the top? The caching related benefits of that are probably not relevant in our case, but still wondering if it would make sense from a file organization POV?

Member

It's really just that when iterating on changes, it helps to put the slowest thing (the install) first so it can be most likely to be cached. Not a huge deal to me but I don't like having to re-run the install just because I tweaked the labels. Labels are still pretty close to the top... but we can move it if you like.

Member

Heh, in the "normal" use case it's usually the other way around: when you build the same image over time, labels change less than the result of 'yum install' - so having the labels before the run should help caching.

However, this doesn't really apply to our build system (I think), so it was more of a comment around the logical structure of the file. To me it looks better to have the metadata first, but up to you.

Member

Our build system doesn't use cache so as far as I'm concerned the only optimizations to consider are for iterative local development. So I vote to leave it as is unless I hear loud complaining :)

CMD [ "/usr/libexec/s2i/run" ]
WORKDIR ${WORK_DIR}
ENTRYPOINT [ "/usr/local/bin/entrypoint" ]
CMD [ "/usr/local/bin/run" ]
Member

Now that this has been re-arranged I think it would make sense to complete the work by moving the contents under system-container/root into root so there's only one root.

The README.md inside system-container could be moved higher up as README_SYSTEM_CONTAINER.md, or its contents integrated into README_CONTAINER_IMAGE.md...

Contributor Author

Will go ahead and add a commit for this

Contributor Author

Changed in e02bc5d

&& yum-config-manager --enable rhel-7-server-rh-common-rpms \
&& yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
&& rpm -q $INSTALL_PKGS \
&& yum clean all

LABEL name="openshift3/ose-ansible" \
Member

Same question as with Dockerfile: should LABEL come before USER/RUN?

@juanvallejo juanvallejo force-pushed the jvallejo/remove-dependency-on-p2i branch 2 times, most recently from 68e7fb9 to 08ece2a Compare July 17, 2017 17:01
@sosiouxme
Member

As best I can tell, everything is working. I have never seen the /bin/sh: sudo: command not found issue when deploying logging (and metrics). I also don't have or need the localhost line in my inventory. But it is disquieting that Juan does and makes me feel like users will manage to run into it. But is that something new?

/cc @ashcrow @giuseppe As best I can tell, the atomic install --system container works fine with both origin-ansible and ose-ansible after these changes. Would appreciate your input.

(We don't have the labels for atomic install to operate on in a non-system-container context - something to introduce in the near future hopefully as behavior is very surprising without the --system.)

@codificat codificat left a comment

One pending fix and a minor question

# Add entrypoint and other setup scripts
COPY images/installer/root /
# Add files for running as a system container
COPY images/installer/system-container/root /
Member

With recent changes, system-container/root does not exist anymore and the previous COPY already takes care of the system container files, so this should be removed

Contributor Author

Ah meant to remove this but never pushed the change :/

# Add files for running as a system container
COPY images/installer/system-container/root /
# Include playbooks, roles, plugins, etc. from this repo
COPY . ${WORK_DIR}
Member

It might be a good opportunity to review .dockerignore... e.g. should we add `images` and `hack`? anything else?

Contributor

Why don't we just do `COPY root /`? Or is this supposed to copy openshift-ansible top-level contents?

Member

It's supposed to copy top-level contents - playbooks, roles, plugins, etc. These things are all installed by RPM downstream. We could certainly .ignore a lot of things.

Member

@codificat we actually need images/ in origin-ansible :) and ose-ansible is already sparse. We can omit hack/, inventory/ and most of the top-level conf files. Do the README.md files belong in the image? Not sure how useful they are but they do little harm...

Member

Oh actually... .md files already omitted.

@sosiouxme sosiouxme changed the title [WIP] Remove dependency on playbook2image Remove dependency on playbook2image, rebase directly on OS. Jul 18, 2017
@sosiouxme
Member

I think it's ready once we squash commits.

LABEL name="openshift/origin-ansible" \
summary="OpenShift's installation and configuration tool" \
description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
url="https://github.com/openshift/openshift-ansible" \
io.k8s.display-name="openshift-ansible" \
io.k8s.description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
io.openshift.expose-services="" \
io.openshift.tags="openshift,install,upgrade,ansible"
io.openshift.tags="openshift,install,upgrade,ansible" \
atomic.run="once"
Member

👍

# As the playbooks are installed via packages instead of being copied to
# $APP_HOME by the 'assemble' script, we set the WORK_DIR env var to the
# location of openshift-ansible.
ENV PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
Member

Just curious, why was PLAYBOOK_FILE removed while other env items were kept?

@sosiouxme sosiouxme Jul 18, 2017

I want the image to do something helpful if you just run it without reading the docs (or if you miss salient items). To me the most helpful thing seems to print a usage message immediately and exit. If we specify PLAYBOOK_FILE then we're going to run a playbook, and running a "usage" playbook seemed like overkill when we can just... run usage.

# The playbook to be run is specified via the PLAYBOOK_FILE env var.
# This sets a default of openshift_facts.yml as it's an informative playbook
# that can help test that everything is set properly (inventory, sshkeys)
ENV PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
Member

Just curious, why was PLAYBOOK_FILE removed while other env items were kept?

@ashcrow
Member

ashcrow commented Jul 18, 2017

It seems like reasonable changes to me. Nothing jumps out at me as being problematic but I suggest letting QE know they should re-test the container and system containers just to be sure.

@giuseppe WDYT?

@sosiouxme
Member

We absolutely want QE to scrutinize the major use cases once we merge and rebuild.

https://trello.com/c/BHJYd3Y5/135-drop-playbook2image-dependency

@@ -1,6 +1,6 @@
# Containerized openshift-ansible to run playbooks

The [Dockerfile](images/installer/Dockerfile) in this repository uses the [playbook2image](https://github.com/openshift/playbook2image) source-to-image base image to containerize `openshift-ansible`. The resulting image can run any of the provided playbooks. See [BUILD.md](BUILD.md) for image build instructions.
The [Dockerfile](images/installer/Dockerfile) in this repository uses the [Centos:7](https://hub.docker.com/_/centos/) base image to containerize `openshift-ansible`. The resulting image can run any of the provided playbooks. See [BUILD.md](BUILD.md) for image build instructions.
Contributor Author

@sosiouxme @codificat maybe we should just take the line about the base image out altogether?

Member

Agreed that it can be removed

BUILD.md Outdated
@@ -37,9 +37,9 @@ To build a container image of `openshift-ansible` using standalone **Docker**:

To build an openshift-ansible image using an **OpenShift** [build and image stream](https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html) the straightforward command would be:

oc new-build registry.centos.org/openshift/playbook2image~https://github.com/openshift/openshift-ansible
oc new-build openshift/origin-ansible~https://github.com/openshift/openshift-ansible
Contributor Author

@sosiouxme @codificat wasn't sure if this was the right thing to put here... Would this command actually work?

Also, per the docs in the lines below, technically we are no longer "required" to keep the images/installer structure

Member

We could move Dockerfile back to root ... but we would need to keep images/installer for the downstream build anyway. Not inclined to change this just now.

BUILD.md Outdated
@@ -37,9 +37,9 @@ To build a container image of `openshift-ansible` using standalone **Docker**:

To build an openshift-ansible image using an **OpenShift** [build and image stream](https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html) the straightforward command would be:

-    oc new-build registry.centos.org/openshift/playbook2image~https://github.com/openshift/openshift-ansible
+    oc new-build openshift/origin-ansible~https://github.com/openshift/openshift-ansible
Member:

that doesn't look right... openshift/origin-ansible is not a builder image...

I think it's not that easy to build the image in OpenShift anymore... the Dockerfile is under images/ but the context is the tld...

Maybe we can just leave out the whole Building on OpenShift section for now?

Contributor Author:

> Maybe we can just leave out the whole Building on OpenShift section for now?

Okay, I think that might be the best approach while we figure this out

cc @brenton

Contributor:

works for me.

Member:

I'm sure there's a way to invoke the docker builder but I don't particularly want to try and test it right now, so save that for later.
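Absent an OpenShift build, a plain Docker build sidesteps the layout issue discussed here: `docker build -f` lets the Dockerfile live under `images/installer/` while the repository root remains the build context. A sketch, with the command echoed rather than executed (the image tag is illustrative):

```shell
# The Dockerfile sits under images/, but its COPY directives need the whole
# repo, so pass the repo root (.) as the context and point -f at the file.
cmd="docker build -f images/installer/Dockerfile -t openshift/origin-ansible:latest ."
echo "$cmd"  # echoed here instead of executed; run it from the repo root
```

This mirrors what the standalone Docker instructions in BUILD.md already do.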

We do not need the builder functionality from playbook2image and the
resulting image was overly complicated, so this simply builds on
Centos/RHEL.
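One piece carried over from playbook2image is the /etc/passwd-munging logic from the TODO list: OpenShift runs containers under an arbitrary, unprivileged UID that has no passwd entry, so the entrypoint appends one at startup so tools that look up the current user (ssh, ansible) keep working. A minimal sketch of the idea, demonstrated against a temporary copy rather than the real /etc/passwd (the user name and home directory are illustrative):

```shell
# Simulate the entrypoint's passwd munging against a copy of /etc/passwd.
passwd_file=$(mktemp)
cp /etc/passwd "$passwd_file"

uid=12345  # stands in for whatever random UID OpenShift assigns
if ! getent passwd "$uid" >/dev/null; then
  # No entry for this UID: append one so user lookups succeed at runtime.
  echo "default:x:${uid}:0:default user:/opt/app-root/src:/sbin/nologin" >> "$passwd_file"
fi

grep "^default:x:${uid}:" "$passwd_file"
```

In the real image the file must be group-writable (group 0) so the non-root entrypoint can append to it.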
@sosiouxme sosiouxme force-pushed the jvallejo/remove-dependency-on-p2i branch from b93ad95 to 5497673 Compare July 18, 2017 19:12
@sosiouxme (Member):

aos-ci-test

@openshift-bot:

success: "aos-ci-jenkins/OS_3.6_NOT_containerized, aos-ci-jenkins/OS_3.6_NOT_containerized_e2e_tests" for 5497673 (logs)

@openshift-bot:

error: "aos-ci-jenkins/OS_3.6_containerized, aos-ci-jenkins/OS_3.6_containerized_e2e_tests" for 5497673 (logs)

@sosiouxme (Member):

Errr... test flake? Doesn't look related.

@sosiouxme (Member):

aos-ci-test

@openshift-bot:

success: "aos-ci-jenkins/OS_3.6_NOT_containerized, aos-ci-jenkins/OS_3.6_NOT_containerized_e2e_tests" for 5497673 (logs)

@openshift-bot:

success: "aos-ci-jenkins/OS_3.6_containerized, aos-ci-jenkins/OS_3.6_containerized_e2e_tests" for 5497673 (logs)

@sosiouxme (Member):

[merge] 🎉

@sosiouxme (Member):

Flake, although it looks rather too persistent to be called that. @sdodson should I try the merge again?

@sosiouxme (Member):

[merge] again just for protocol

@rhcarvalho (Contributor):

Flakes openshift/origin#15332 and openshift/origin#13977

[merge]

@openshift-bot:

Evaluated for openshift ansible merge up to 5497673

@openshift-bot:

continuous-integration/openshift-jenkins/merge FAILURE (https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_openshift_ansible/720/) (Base Commit: e432f8a) (PR Branch Commit: 5497673)

@sosiouxme (Member):

Just openshift/origin#15356 again... @sdodson can you merge please?

@rhcarvalho rhcarvalho merged commit d13c11a into openshift:master Jul 20, 2017
@rhcarvalho (Contributor):

For future reference, do we have the size of the image BEFORE and AFTER this change? @juanvallejo @sosiouxme

@rhcarvalho (Contributor):

Also for our future selves, could you please update the PR description to reflect the current state? Are there still known issues, are the checkboxes in the TODO reasonable?

@sosiouxme (Member):

In my testing, ose-ansible's size before was about 1.1G and after, 760M. origin-ansible was 880M and is now 454M. Now that I look at it, I'm not sure why the difference between origin and ose is so large; they should have nearly the same dependencies, and the base image sizes are about the same.

@sosiouxme (Member) commented Jul 20, 2017:

Hm, I should update the origin-ansible job to tag with v3.7 when building from master.
Update: done.

@sosiouxme (Member):

There is no 3.7 branch in dist-git aos3-installation-docker yet. I am curious to see if the build tomorrow morning pulls in changes from master or has been re-pointed at release-3.6.

@juanvallejo juanvallejo deleted the jvallejo/remove-dependency-on-p2i branch July 25, 2017 13:56