
openshift_checks/docker_storage: overlay/2 support #4757

Merged
merged 2 commits into from
Jul 20, 2017

Conversation

sosiouxme
Member

@sosiouxme sosiouxme commented Jul 13, 2017

fix https://bugzilla.redhat.com/show_bug.cgi?id=1469197

When Docker is configured with the overlay or overlay2 storage driver, check that it is supported and usage is below threshold.

Also, document and fix openshift_docker_selinux_enabled option so that running the check need not result in the docker role reconfiguring and restarting docker with --selinux-enabled (which will prevent docker from running with these drivers at all).

@sosiouxme sosiouxme changed the title 20170703 docker storage overlay2 openshift_checks/docker_storage: overlay/2 support Jul 13, 2017
@sosiouxme
Member Author

aos-ci-test

Contributor

@rhcarvalho rhcarvalho left a comment

Gave a first pass, looking good.

@@ -113,6 +113,9 @@ openshift_release=v3.6
# Downgrades are not supported and will error out. Be careful when upgrading docker from < 1.10 to > 1.10.
# docker_version="1.12.1"

# Specify whether to --enable-selinux on Docker (set false if using overlay/2 storage drivers before RHEL 7.4)
Contributor

nit: perhaps overlay2 as that is the name of the driver in config files, etc?

Member Author

There are two drivers, overlay and overlay2. Is overlay/2 a confusing abbreviation for that? Hm, perhaps...

Contributor

Aha, yes, I did not understand it applied to both.

Better say "overlay / overlay2", then.

@@ -113,6 +113,9 @@ openshift_release=v3.6
# Downgrades are not supported and will error out. Be careful when upgrading docker from < 1.10 to > 1.10.
# docker_version="1.12.1"

# Specify whether to --enable-selinux on Docker (set false if using overlay/2 storage drivers before RHEL 7.4)
Contributor

Hmm, instead of suggesting that people manually update their inventory (and they may be starting from an old copy of this example file), wouldn't it be possible and wise to "do the right thing" in the playbook(s) that use this value?

Member Author

@sosiouxme sosiouxme Jul 13, 2017

There's one task in one role that uses it. And as far as I can see, nothing that specifies what storage driver you're using (when I configure the driver in /etc/sysconfig/docker-storage it doesn't get reconfigured). I don't know of any other reason why you would set this to false, and I suspect no one else does either because it didn't even work.

@@ -93,7 +93,7 @@
dest: /etc/sysconfig/docker
regexp: '^OPTIONS=.*$'
line: "OPTIONS='\
{% if ansible_selinux.status | default(None) == '''enabled''' and docker_selinux_enabled | default(true) %} --selinux-enabled {% endif %}\
{% if ansible_selinux.status | default(None) == '''enabled''' and docker_selinux_enabled | default(true) | bool %} --selinux-enabled {% endif %}\
Contributor

Are those triple quotes necessary?!

Member Author

Doubt it but they were there when I got here :)

Contributor

Yes, I know they were there, might be our chance to clean it up :)

),
(
"max_overlay_usage_percent",
"For 'overlay' or 'overlay2' storage driver, usage threshold percentage. Format: float. Default: 90.0",
Contributor

Suggestion: If we define configuration_variables below max_overlay_usage_percent et al, we could reference the class attributes instead of hardcoding the defaults / duplicating.

In [1]: class Foo:
   ...:     bar = 1
   ...:     doc = 'Default bar: {}'.format(bar)
   ...:     

In [2]: Foo.doc
Out[2]: 'Default bar: 1'

Member Author

Good point.


return {"changed": changed}
result['changed'] = result.get('changed', False) or changed
Contributor

👍


# TODO(lmeyer): migrate to base class
@staticmethod
def find_ansible_mount(path, ansible_mounts):
Contributor

Don't we have this code somewhere else?!

Suggestion: instead of migrating to the base class (OpenShiftCheck?), staticmethod -> function and import and reuse the function where needed.

Member Author

@sosiouxme sosiouxme Jul 13, 2017

Ah but see, after task_vars is internalized this can become find_ansible_mount(path) if it's an instance method.

We have something like this elsewhere; I think we should make it generic. Just didn't feel like a refactor for this PR at this time.

Contributor

Yeah, I looked it up, the code we have is not quite the same but we have at least two variations of processing ansible_mounts, in etcd_imagedata_size.py and disk_availability.py.

I'm okay with improving / generalizing later.

@openshift-bot

success: "aos-ci-jenkins/OS_3.6_NOT_containerized, aos-ci-jenkins/OS_3.6_NOT_containerized_e2e_tests" for aa3dfca (logs)

@openshift-bot

success: "aos-ci-jenkins/OS_3.6_containerized, aos-ci-jenkins/OS_3.6_containerized_e2e_tests" for aa3dfca (logs)

@openshift-bot

success: "aos-ci-jenkins/OS_3.5_NOT_containerized, aos-ci-jenkins/OS_3.5_NOT_containerized_e2e_tests" for aa3dfca (logs)

@openshift-bot

success: "aos-ci-jenkins/OS_3.5_containerized, aos-ci-jenkins/OS_3.5_containerized_e2e_tests" for aa3dfca (logs)

@juanvallejo
Contributor

Aside from https://github.com/openshift/openshift-ansible/pull/4757/files#r127292139 (not sure which direction we should take, but I agree it would make sense to handle in playbook), lgtm

@@ -113,6 +113,9 @@ openshift_release=v3.6
# Downgrades are not supported and will error out. Be careful when upgrading docker from < 1.10 to > 1.10.
# docker_version="1.12.1"

# Specify whether to --enable-selinux on Docker (set false if using overlay/2 storage drivers before RHEL 7.4)
Member

@ashcrow ashcrow Jul 13, 2017

nit: May be worth noting that it will change container-daemon.json on container-engine. Eventually everything will move to a json file:

Note: l_docker_selinux_enabled is populated by docker_selinux_enabled

    "selinux-enabled": {{ l_docker_selinux_enabled | lower }},

Member Author

@ashcrow ah, thanks. I overlooked that. Hmm, I see that used here:

roles/docker/tasks/systemcontainer_docker.yml:    l_docker_selinux_enabled: "{{ docker_selinux_enabled | default(true) | to_json }}"
roles/docker/templates/daemon.json:    "selinux-enabled": {{ l_docker_selinux_enabled | lower }},
roles/openshift_docker_facts/tasks/main.yml:      selinux_enabled: "{{ openshift_docker_selinux_enabled | default(None) }}"

Maybe I should be documenting openshift_docker_selinux_enabled instead?

I think I see the drift of your nit... when running Docker as a system container, we're not setting flags on the daemon, but options in a config json. So maybe like this instead:

# Specify whether to run Docker daemon with SELinux enabled in containers.
# (Needs to be false to use overlay/overlay2 storage drivers before RHEL 7.4)
# openshift_docker_selinux_enabled=True

Member

@sosiouxme that works for me!

Member

@ashcrow ashcrow left a comment

I defer to @rhcarvalho on the internal specifics but nothing negatively jumps out at me.

@@ -113,6 +113,10 @@ openshift_release=v3.6
# Downgrades are not supported and will error out. Be careful when upgrading docker from < 1.10 to > 1.10.
# docker_version="1.12.1"

# Specify whether to set flag --enable-selinux on Docker daemon.
# (Needs to be false to use overlay/overlay2 storage drivers before RHEL 7.4)
# docker_selinux_enabled=True
Member

Should this be openshift_docker_selinux_enabled?

@@ -113,6 +113,10 @@ openshift_release=v3.6
# Downgrades are not supported and will error out. Be careful when upgrading docker from < 1.10 to > 1.10.
# docker_version="1.12.1"

# Specify whether to set flag --enable-selinux on Docker daemon.
Member

If this is the same as the ose example don't forget to update the wording.

Member Author

aw geez, I edited the file and didn't save/commit.

@sosiouxme
Member Author

I think I've addressed the feedback, @rhcarvalho feel free to take another pass.

@@ -113,6 +113,10 @@ openshift_release=v3.6
# Downgrades are not supported and will error out. Be careful when upgrading docker from < 1.10 to > 1.10.
# docker_version="1.12.1"

# Specify whether to run Docker daemon with SELinux enabled in containers.
# (Needs to be false to use overlay/overlay2 storage drivers before RHEL 7.4)
Contributor

nit: the True/False here is case-insensitive, isn't it? I'd suggest dropping the parentheses and make it a "command" statement:

Set to False when using the overlay or overlay2 storage driver on RHEL 7.3 or earlier.

What about other operating systems? CentOS, Fedora?

Is it clear that the commented-out values are the defaults? I'm not sure one can reason that way in the example inventory.

Member Author

@sosiouxme sosiouxme Jul 14, 2017

Maybe "uncomment" as a command would be more helpful.

# Specify whether to run Docker daemon with SELinux enabled in containers. Default is True.
# Uncomment below to disable; for example if your kernel does not support the
# Docker overlay/overlay2 storage drivers with SELinux enabled.
#openshift_docker_selinux_enabled=False

Contributor

Yes 👍

@@ -93,7 +93,7 @@
dest: /etc/sysconfig/docker
regexp: '^OPTIONS=.*$'
line: "OPTIONS='\
{% if ansible_selinux.status | default(None) == '''enabled''' and docker_selinux_enabled | default(true) %} --selinux-enabled {% endif %}\
{% if ansible_selinux.status | default(None) == 'enabled' and docker_selinux_enabled | default(true) | bool %} --selinux-enabled {% endif %}\
Contributor

Let me see if I understand this: without bool, setting docker_selinux_enabled=False in the inventory would make the variable a string with value "False", which is a truthy value in Python leading to always true no matter what string/value you set in the inventory. Is that right?

Member Author

exactly.
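For readers following along, a minimal Python sketch of the truthiness trap being confirmed here (this mirrors the behavior, not the actual Ansible/Jinja code path; `to_bool` is a hypothetical stand-in for Jinja's `bool` filter):

```python
def selinux_flag(docker_selinux_enabled):
    """Mimic the Jinja condition WITHOUT the | bool filter."""
    return " --selinux-enabled" if docker_selinux_enabled else ""

# Inventory values arrive as strings; any non-empty string is truthy,
# so docker_selinux_enabled=False in the inventory still enables the flag.
assert selinux_flag("False") == " --selinux-enabled"   # wrong result!

def to_bool(value):
    """Rough equivalent of Jinja's | bool filter for string inputs."""
    return str(value).strip().lower() in ("true", "yes", "on", "1")

assert to_bool("False") is False   # with | bool, the inventory value works
assert to_bool("True") is True
```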

"For 'overlay' or 'overlay2' storage driver, usage threshold percentage. "
"Format: float. Default: {:.1f}".format(max_overlay_usage_percent),
),
]
Contributor

👍


def check_overlay_usage(self, docker_info, task_vars):
"""Check disk usage on OverlayFS backing store volume. Return: result dict."""
path = os.path.join(docker_info.get("DockerRootDir", "/var/lib/docker"), docker_info.get("Driver"))
Contributor

It is strange to use os.path.join and have strings with a specific path separator. In this case we only ever intend this to work on Linux anyway(?).

Don't we need a default value for "Driver"? I think os.path.join will raise an exception if one of the arguments is not a string.

path = "{}/{}".format(docker_info.get("DockerRootDir", "/var/lib/docker"), docker_info.get("Driver", ""))

Member Author

@sosiouxme sosiouxme Jul 18, 2017

If we got to this point in the code, we have a driver. It's just a question of whether it's overlay or overlay2. (Actually if we ever get a docker info with the root dir or driver unset then something is totally messed up. The default was more for the purpose of in-code example of what I expect it to be.) But hey, I suppose for consistency it can default.

Point taken about os.path.join but to me the format you have above is a lot less readable than just using string concatenation.

Contributor

If we got to this point in the code, we have a driver. It's just a question of whether it's overlay or overlay2.

If we guarantee we have a Driver, then why not docker_info['Driver']?

Similarly, if we are sure about DockerRootDir, we could then write:

path = docker_info['DockerRootDir'] + '/' + docker_info['Driver']

or

# not important, but avoids an intermediary string in the previous code
path = '{}/{}'.format(docker_info['DockerRootDir'], docker_info['Driver'])

This is possible, but problematic if the values are not strings:

path = '/'.join(docker_info['DockerRootDir'], docker_info['Driver'])

I care about this being defensive code, so if there is a remote chance one of those keys are not in docker_info, then we can wrap it with a try: ... except KeyError block.

Member Author

It's a remote API call, and while I think the chance is slim we'll ever see it come back missing these keys, I can't completely discount it. The question is how much I want to mess up the code to try to make that remote possibility less confusing to the user if it did happen :) I'll try for something better here.

Member Author

To explain where I landed...

I have absolute confidence docker_info["Driver"] is defined because this overlay-specific function could never be called otherwise.

If DockerRootDir ever actually came back missing yet docker is actually up enough to respond (say the API changes radically), I do not feel bad about assuming it's the usual directory and the user being none the wiser for now.

Contributor

👍
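A sketch of where the discussion landed (the function name is hypothetical; the assumptions are that the caller guarantees Driver is present and that DockerRootDir falls back to the conventional default if the API ever omits it):

```python
import os

def overlay_backing_path(docker_info):
    # Driver is guaranteed by the caller: this overlay-specific code is only
    # reached when the driver is overlay or overlay2, so a KeyError here
    # would indicate a real bug rather than a user-facing condition.
    # DockerRootDir defaults to the usual location if missing.
    return os.path.join(docker_info.get("DockerRootDir", "/var/lib/docker"),
                        docker_info["Driver"])

assert overlay_backing_path({"Driver": "overlay2"}) == "/var/lib/docker/overlay2"
```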


# TODO(lmeyer): migrate to base class
@staticmethod
def find_ansible_mount(path, ansible_mounts):
Contributor

Yeah, I looked it up, the code we have is not quite the same but we have at least two variations of processing ansible_mounts, in etcd_imagedata_size.py and disk_availability.py.

I'm okay with improving / generalizing later.


mount_for_path = {mount['mount']: mount for mount in ansible_mounts}
mount_point = path
while mount_point not in mount_for_path:
Contributor

Suggestion: add a paranoia-compliant way to avoid an infinite loop:

Note that os.path.dirname(mount_point) may give the empty string instead of /:

In [1]: import os

In [2]: os.path.dirname('foo')
Out[2]: ''

Member Author

Then I can just check that it's "" or "/" ... there are no other cases it can devolve to, right?

Contributor

Either '/' or '', yes.
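The paranoia-compliant guard can then look like this (a sketch; raising ValueError is one reasonable failure mode, the merged code may report the error differently):

```python
import os

def find_ansible_mount(path, ansible_mounts):
    """Walk up from path until a known mount point is found.

    os.path.dirname devolves to '' (for relative paths) or '/' (for
    absolute paths), so checking for those two values is enough to
    guarantee the loop terminates.
    """
    mount_for_path = {mount["mount"]: mount for mount in ansible_mounts}
    mount_point = path
    while mount_point not in mount_for_path:
        if mount_point in ("/", ""):
            raise ValueError("no ansible_mounts entry found for " + path)
        mount_point = os.path.dirname(mount_point)
    return mount_for_path[mount_point]

mounts = [{"mount": "/var"}, {"mount": "/"}]
assert find_ansible_mount("/var/lib/docker/overlay2", mounts)["mount"] == "/var"
```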

"DriverStatus": [("Backing Filesystem", "btrfs")],
}),
True,
["'xfs' as the backing storage"],
Contributor

Why is xfs in the expected error message if the backing filesystem is btrfs?

Member Author

because the error message tells you what it's supposed to be as well as what it is
https://github.com/openshift/openshift-ansible/pull/4757/files#diff-4739a080555d5d9b11dc300197eb46b8R223

Contributor

Reading the test, how is one to tell what the file systems are?! I saw "btrfs" in one place, and was confused seeing "'xfs'" nearby.

Member Author

I can see how the test looks confusing, will improve

check = dummy_check()
task_vars = non_atomic_task_vars.copy()
task_vars["ansible_mounts"] = ansible_mounts
if threshold:
Contributor

Suggestion:

if threshold is not None:

For clarity of intent and to avoid surprises. Otherwise only "truthy" thresholds would be interpreted correctly.

])
def test_overlay_usage(ansible_mounts, threshold, expect_fail, expect_msg):
check = dummy_check()
task_vars = non_atomic_task_vars.copy()
Contributor

Perhaps non_atomic_task_vars should be a fixture?

@pytest.fixture()
def non_atomic_task_vars():
    return {"openshift": {"common": {"is_atomic": False}}}

def test_overlay_usage(..., non_atomic_task_vars, ...):

Member Author

But then it's mentally cluttering up the long args, mixing fixtures with parametrized... shrug I can do it if you prefer, it's a weak preference for me.

Contributor

It's not a big deal as is now. The problem would only show itself as we make changes. If for some reason a test mutates non_atomic_task_vars, the tests that run after would start making copies of the mutated value.

If we make it a function that returns the value, we guarantee that we won't introduce that sort of bug.

def non_atomic_task_vars():
    return {"openshift": {"common": {"is_atomic": False}}}

def test_overlay_usage(...):
    # ...
    task_vars = non_atomic_task_vars()
    # ...

Making it a fixture or not is orthogonal to that.

Member Author

+1 will make it a function.
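A small sketch of the shared-state hazard being described (variable names are hypothetical; the key point is that dict.copy() is shallow, so even tests that copy a shared value still mutate its nested dicts):

```python
def non_atomic_task_vars():
    """Return a fresh dict per call so tests cannot pollute each other."""
    return {"openshift": {"common": {"is_atomic": False}}}

# A shared module-level value, as in the original test file.
SHARED_TASK_VARS = {"openshift": {"common": {"is_atomic": False}}}

# dict.copy() is shallow: the nested dicts are still the same objects,
# so mutating through the copy changes the shared value for later tests.
copied = SHARED_TASK_VARS.copy()
copied["openshift"]["common"]["is_atomic"] = True
assert SHARED_TASK_VARS["openshift"]["common"]["is_atomic"] is True  # polluted!

# The function version hands out an independent value each time.
fresh = non_atomic_task_vars()
fresh["openshift"]["common"]["is_atomic"] = True
assert non_atomic_task_vars()["openshift"]["common"]["is_atomic"] is False
```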

Document the option so that those who want to run on OverlayFS can find
it. Fix the task so that setting it to False isn't interpreted as true.
@sosiouxme
Member Author

@rhcarvalho updated - good now?

@sosiouxme
Member Author

aos-ci-test

"DriverStatus": [("Backing Filesystem", "btrfs")],
}),
True,
["storage is type 'btrfs'", "only supported with\n'xfs'"],
Contributor

Hmm the \n there smells like we're being too specific about the message, up to how it is formatted.
This makes it annoying in that to update the message we need also to update the test.

Not a blocker, though.

Member Author

Yeah, it does, which is actually why it wasn't there before. But the \n doesn't truly matter, it's the generic problem of having error string outputs instead of defined exceptions. I'd like to refactor as much as possible into exceptions at some point. It's tech debt for now.

# keep it simple, only check enterprise kernel versions; assume everyone else is good
kernel = docker_info.get("KernelVersion", "[NONE]")
kernel_arr = [int(num) for num in re.findall(r'\d+', kernel)]
if kernel_arr < [3, 10, 0, 514]: # rhel < 7.3
Contributor

How confident are you with this?

I'm thinking about some corner cases...

  • kernel = [NONE] (and thus kernel_arr = [])
  • kernel = the empty string (and thus kernel_arr = [])
  • kernel contains non-numeric portions that get stripped out

Did you go through those?

Member Author

In my head, sure :)
What do we want to say about completely bogus kernel versions coming through? This will fail the check and they'll be in the output so it should be a pretty good clue what went wrong.

As far as non-numeric portions, we are only looking down to the first release segment, and I feel confident RHEL will continue with the x.y.z-r format with all numbers for the OS releases of interest here. I don't think the first release number even changes during the lifetime of a RHEL release. At the very least, the non-numeric bits (e.g. el7) do not break it.
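The behavior described rests on Python's lexicographic list comparison; a sketch of the corner cases raised above (`parse_kernel` is a hypothetical name for the inline expression in the check):

```python
import re

RHEL_73_KERNEL = [3, 10, 0, 514]

def parse_kernel(kernel):
    """Extract all numeric segments of a kernel version string as ints."""
    return [int(num) for num in re.findall(r"\d+", kernel)]

# Trailing non-numeric bits ('el7', 'x86_64') contribute extra numbers, but
# lists compare lexicographically: the extras only matter after the first
# four segments compare equal, so they never flip the result the wrong way.
assert parse_kernel("3.10.0-514.6.1.el7.x86_64") >= RHEL_73_KERNEL  # RHEL 7.3
assert parse_kernel("3.10.0-327.el7.x86_64") < RHEL_73_KERNEL       # RHEL 7.2

# "[NONE]" and "" both parse to [], and [] compares less than any non-empty
# list, so a missing/bogus kernel version fails the check (and the raw value
# appears in the output as a clue).
assert parse_kernel("[NONE]") < RHEL_73_KERNEL
assert parse_kernel("") < RHEL_73_KERNEL
```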


mount = self.find_ansible_mount(path, get_var(task_vars, "ansible_mounts"))
free_bytes = mount.get('size_available', 0)
total_bytes = mount.get('size_total', 1)
Contributor

Hmm, I see 1 is to avoid a division by zero, but could lead to reporting some funny number?
And we still could get a division by zero exception if the size_total is actually zero.

Member Author

If someone has defined a zero-size volume - well, I don't think it's possible, but if it is (or ansible had a bug and 0 came through somehow) what do we want to say about it? Is it a blocker to merging today?

Contributor

I'd rather fail explicitly if total_bytes is 0 than seeing usage = 100.0 * (1 - free_bytes).

What would you think as a user if an error message came out saying you're using -1.5 GB?! I think that's a terrible experience, and I could even imagine myself ignoring the negative sign at first out of incredulity.

Member Author

Yeah, that would suck, I'm just not seeing how it would happen unless ansible collects completely bogus values... free bytes greater than total??

Contributor

I just want us to be on the safe side. It would be plausible to have both readings = 0, in which case the default value of 1 in this line is useless, and we still get a ZeroDivisionError below.

free_bytes = mount.get('size_available', 0)
total_bytes = mount.get('size_total', 1)

usage = 100.0 * (total_bytes - free_bytes) / total_bytes
Contributor

Look for ZeroDivisionError.

Member Author

Updated, how does the error handling look? Anything else?
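One way the guard might look (a sketch of the intent discussed, assuming an explicit failure on a zero size_total is acceptable; not necessarily the exact merged code):

```python
def compute_usage(mount):
    """Compute disk usage percent, failing explicitly on bogus readings."""
    free_bytes = mount.get("size_available", 0)
    total_bytes = mount.get("size_total", 0)
    if total_bytes <= 0:
        # Fail loudly rather than divide by zero or report a funny number
        # like a negative usage percentage.
        raise ValueError(
            "Ansible reported size_total={} for {}; cannot compute usage"
            .format(total_bytes, mount.get("mount", "unknown mount"))
        )
    return 100.0 * (total_bytes - free_bytes) / total_bytes

assert compute_usage({"size_available": 10, "size_total": 100}) == 90.0
```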

@openshift-bot

success: "aos-ci-jenkins/OS_3.6_NOT_containerized, aos-ci-jenkins/OS_3.6_NOT_containerized_e2e_tests" for 72f2cc5 (logs)

@openshift-bot

success: "aos-ci-jenkins/OS_3.6_containerized, aos-ci-jenkins/OS_3.6_containerized_e2e_tests" for 72f2cc5 (logs)

fix bug 1469197
https://bugzilla.redhat.com/show_bug.cgi?id=1469197

When Docker is configured with the overlay or overlay2 storage driver,
check that it is supported and usage is below threshold.
Contributor

@rhcarvalho rhcarvalho left a comment

LGTM 🎉 ; please check with @brenton or @sdodson if we're merging non-bugs to master now, I'm not sure. This does fix a BZ but is big enough to consider a "feature"?

@sosiouxme
Member Author

3.6 is a separate branch now, so we can merge to master. To address the blocker bug there is a parallel PR #4791

It's currently tracked as a blocker. It is in fact more like a feature but it's a new feature on a new feature so low-risk. You guys OK with proceeding as a blocker?

@brenton
Contributor

brenton commented Jul 19, 2017

I'm fine with this merging for 3.6.

@sosiouxme
Member Author

[merge]

@openshift-bot

[test]ing while waiting on the merge queue

@openshift-bot

Evaluated for openshift ansible test up to 2b1c749

@openshift-bot

continuous-integration/openshift-jenkins/test FAILURE (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_openshift_ansible/350/) (Base Commit: e432f8a) (PR Branch Commit: 2b1c749)

@rhcarvalho
Contributor

Flake openshift/origin#15356

[merge]

@openshift-bot

Evaluated for openshift ansible merge up to 2b1c749

@openshift-bot

continuous-integration/openshift-jenkins/merge FAILURE (https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_openshift_ansible/719/) (Base Commit: e432f8a) (PR Branch Commit: 2b1c749)

@sdodson sdodson merged commit a6a0621 into openshift:master Jul 20, 2017
@sosiouxme sosiouxme deleted the 20170703-docker-storage-overlay2 branch July 20, 2017 15:37