ceph-common: purge ceph.conf file #694
Conversation
Force-pushed from 130893f to c587f0d
@andrewschoen looks like we are seeing the same error again on the Ubuntu VM :(
@jdurgin, @andrewschoen, @alfredodeza feel free to chime in :)
test this please
@leseb the trusty failure was because the CI was picking up a node without an extra device attached. I think I've got that cleared up, but we will probably need to run the tests one more time.
I think the rbd_client_log path/dir should also go away. With SELinux enabled this can create issues if they are not in the default directories, since policy exists only for the default directories (for hammer/jewel); for other distros it isn't an issue.
test this please
Looks good in terms of getting rid of settings that are suboptimal for jewel. For other versions, I do think having a per-version config might make sense - e.g. the osd recovery settings commonly had to be adjusted in hammer, which is probably where a lot of these came from. Maybe those could just go in a sample playbook as overrides or something? I'm not too familiar with the structure of things in ansible.
Ok, I'll try to add an example section for those who are looking at deploying Hammer then.
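For the Hammer use case, a minimal sketch of what such an overrides example could look like in group_vars/all. The option names are real Ceph settings, but the section placement and the values shown here are purely illustrative, not recommendations from this PR:

# group_vars/all -- hypothetical Hammer tuning via ceph_conf_overrides; values are examples only
ceph_conf_overrides:
  osd:
    osd recovery max active: 5
    osd max backfills: 2
    osd recovery op priority: 2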
Force-pushed from c587f0d to 33d29df
Test this please
test this please
@leseb what else needs to be done for this one? Can I help?
@andrewschoen I just need to rebase and then add some doc to address the Hammer use case I guess.
Force-pushed from 33d29df to e755dc3
@@ -223,10 +209,14 @@ dummy:
#rbd_client_log_path: /var/log/ceph
#rbd_client_log_file: "{{ rbd_client_log_path }}/qemu-guest-$pid.log" # must be writable by QEMU and allowed by SELinux or AppArmor
<<<<<<< HEAD
I think we want to keep rbd_client_admin_socket_path, but lose the rest of the rbd_* options here.
Force-pushed from e755dc3 to 90f1389
test this please
Force-pushed from 90f1389 to a199704
test this please
[client]
rbd cache = {{ rbd_cache }}
Looks like the defaults for rbd_cache will also need to be removed.
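Anyone who relies on those rbd_* defaults could presumably carry them over through ceph_conf_overrides once they are gone. A rough sketch, assuming a [client] section and illustrative paths/values (check your existing ceph.conf for the real ones):

# group_vars/all -- sketch of re-adding client-side RBD settings; values are examples only
ceph_conf_overrides:
  client:
    rbd cache: true
    rbd cache writethrough until flush: true
    # paths must be writable by QEMU and allowed by SELinux/AppArmor
    admin socket: /var/run/ceph/rbd-client-$pid.asok
    log file: /var/log/ceph/qemu-guest-$pid.log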
Hi @leseb, The restarts on the back of config changes are obviously concerning, but I think what is potentially more concerning is the significant impact on behavior that will result from this change. I've tried to look at what will be removed and made a note of where ceph-ansible and upstream ceph values differ:
These are defaults I'm still working on tracking down:
Obviously, we can override all of these locally, but this is going to have a big impact on anyone who is not watching what is happening in this repo. I think some further discussions need to be had here before this goes in. With all that said, I do think this is the right long-term approach. :) --Matt
Hi @mattt416, you're raising a really good point, and thanks for the in-depth analysis. However, I'm not really sure how to provide a compatible, non-breaking change. Any other ideas?
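One possible mitigation, sketched below under the assumption that operators copy whatever their currently rendered ceph.conf contains into ceph_conf_overrides before upgrading, so the template change produces no functional diff. The keys and values shown are placeholders only:

# group_vars/all -- pin your existing values before upgrading; keys/values below are placeholders
ceph_conf_overrides:
  global:
    cephx require signatures: true      # copy the value your current ceph.conf uses
    mon osd down out interval: 600      # copy the value your current ceph.conf uses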
test this please
test this please
Hi @leseb, TBH I didn't even realise you were using tags! Having those in place helps a lot because it means you can be mindful of breaking changes when the major version is bumped. We will discuss internally what to do for rpc-openstack -- whether we override locally to bring back a similar configuration or try to run with a more vanilla setup and only override things when we have a specific need. Thanks! --Matt
test this please
test this please
test this please
@mattt416 how did the discussion go?
Since #461 we have had the ability to override Ceph's default options. Previously we had to add a new line in the template and then another variable as well; doing a PR for one option was a pain. As a result, we now have tons of options that we need to maintain across all the Ceph versions, yet another painful thing to do.

This commit removes all the Ceph options so they are handled by Ceph directly. If you want to add a new option, feel free to use the ceph_conf_overrides variable in your group_vars/all.

Risks: for those who have been managing their Ceph with ceph-ansible, this is not a trivial change, as it will trigger a change in your ceph.conf and then restart all your Ceph services. Moreover, if you made some specific tweaks, you should update the ceph_conf_overrides variable to reflect your previous changes before running Ansible.

To avoid service restarts you need to know a bit of Ansible, but generally the idea would be to run Ansible against a dummy host to generate the ceph.conf, then scp this file to all your Ceph hosts and you should be good.

Closes: #693

Signed-off-by: Sébastien Han <seb@redhat.com>
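As a loose illustration of the restart-avoidance workaround described above (render ceph.conf once, then push the file out-of-band so none of the role's restart handlers fire), here is a hypothetical helper play; the "ceph" host group name and the /tmp/ceph.conf path are assumptions, and the file is expected to have been generated beforehand on a dummy/staging host:

# push-ceph-conf.yml -- hypothetical helper play, not part of ceph-ansible
- hosts: ceph
  become: true
  tasks:
    - name: push a pre-generated ceph.conf without triggering restart handlers
      copy:
        src: /tmp/ceph.conf
        dest: /etc/ceph/ceph.conf
        owner: root
        group: root
        mode: "0644"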
Force-pushed from a199704 to 47860a8
@leseb I think the main ones are the recovery settings highlighted already in this PR. Others are pretty hardware-dependent, or not too important I think. You could point at the old config before this PR so folks could compare, perhaps.
@jdurgin good point, will do, thanks!
Highlight the variables that were used prior to this patch: #694
Signed-off-by: Sébastien Han <seb@redhat.com>