Ensure Openstack uses its own CinderManager #242
Conversation
Our subclass has its own graph refresh and event-based targeted refresh handling that is necessary for proper integration with the OpenStack cloud manager's inventory, much like we have our own implementation of the Volume class. Otherwise it passes through to the parent class. See also #194
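A minimal Ruby sketch of the pattern that comment describes, with stand-in class and method names invented for illustration (this is not the PR's actual code): the OpenStack subclass overrides only the refresh and event paths it needs and lets everything else fall through to the core CinderManager.

```ruby
# Illustrative only: hypothetical stand-ins for the core manager and the
# OpenStack-specific subclass.
class CinderManager
  def refresh
    puts "core Cinder full refresh"
  end

  def handle_event(event)
    puts "core handling for #{event}"
  end
end

class OpenstackCinderManager < CinderManager
  # Graph refresh wired into the OpenStack cloud manager's inventory.
  def refresh
    puts "OpenStack graph refresh (collect, parse, persist into the cloud manager's inventory)"
  end

  # Event-based targeted refresh for volume events; anything else
  # passes through to the parent class.
  def handle_event(event)
    if event.start_with?("volume.")
      puts "targeted refresh of the affected OpenStack volume"
    else
      super
    end
  end
end

OpenstackCinderManager.new.handle_event("volume.create.end")
```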
What aspects of the refresh are OpenStack-specific as opposed to Cinder-specific?
Specifically, relationships between OpenStack objects, like volume attachments.
Attachments are handled by a provider-specific cross-linker; that's already accounted for in …
We had a requirement to implement graph and targeted refresh for OpenStack's block storage manager. To accomplish that, we created a subclass that uses our specific refresh code, event handling code, and object classes. We certainly couldn't add our specific event handling code to an ostensibly stand-alone core provider, and due to the update to use graph inventory the old cross-linker no longer applied. I see no problem with the OpenStack provider containing Cinder code, considering that the Cinder manager in the core repository relies on OpenStack provider code in order to function anyway (it uses OpenStack provider volume classes, refers to a "cinder_service" method that to my knowledge only exists on the OpenStack CloudManager, and refers to our OpenstackHandle code). It seems to me that there isn't really stand-alone Cinder support in ManageIQ at all, since the CinderManager cannot be created as a stand-alone provider in the UI as far as I am aware, has these OpenStack provider dependencies, and as a consequence seems to only support the OpenStack provider. The documentation seems to confirm this as well (see section 8 of http://manageiq.org/docs/reference/fine/doc-Managing_Providers/miq/). If I am mistaken about those items I would be happy to be corrected, but at the moment I don't see the issue with the OpenStack provider maintaining its own Cinder code.
There's history here. When the Cinder manager was implemented, it was based on a number of requirements (the same is true of the Swift manager). We needed to consider the fact that the Cinder manager could be associated with other types of providers, and it could also be deployed as its own provider, providing service to multiple consumers. It shouldn't rely directly on OpenStack volumes. The initial implementation did not; if it does now, someone changed the code. That's not to say that it can't create volumes subclassed by a specific provider, it just needs to request the volume subclass from the provider rather than hardcoding it. True, just because there's currently no way to have a standalone Cinder manager, or to have a Cinder manager associated with another provider (RHEV, for example), that doesn't mean that's not the direction in which we're headed. The current implementation considers that direction, so it won't have to be reworked in the future. It would have been very easy to make the Cinder manager OpenStack-specific, but then I'd question the utility of making it a separate manager.
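A sketch of the "request the volume subclass from the provider" idea described above, with all names invented for illustration (none of this is the real ManageIQ API): the generic manager never hardcodes a provider class; it asks its parent provider which CloudVolume subclass to use and falls back to the generic one when running stand-alone.

```ruby
# Illustrative only; class names are assumptions, not ManageIQ's real ones.
CloudVolume          = Class.new                # generic, provider-agnostic volume
OpenstackCloudVolume = Class.new(CloudVolume)   # provider-specific subclass

class GenericCinderManager
  def initialize(parent_provider = nil)
    @parent_provider = parent_provider
  end

  # Ask the owning provider for its volume subclass instead of hardcoding it.
  def volume_class
    if @parent_provider.respond_to?(:cloud_volume_class)
      @parent_provider.cloud_volume_class
    else
      CloudVolume
    end
  end
end

openstack_provider = Struct.new(:cloud_volume_class).new(OpenstackCloudVolume)
GenericCinderManager.new(openstack_provider).volume_class # => OpenstackCloudVolume
GenericCinderManager.new.volume_class                     # => CloudVolume
```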
Richard, thank you for the background information. Even so, I think this is fine. It doesn't change anything about the core implementation, and you probably wouldn't want OpenStack-specific event code mingled into that anyway. This PR (and the work it is a continuation of) allows us to meet our requirements without interfering with any possible future work to make the core Cinder manager stand-alone. @Loicavenel, would you mind weighing in here with regard to RFE https://bugzilla.redhat.com/show_bug.cgi?id=1532796? This PR is a continuation of the already-merged work for that.
I'm not sure if it's fine. It may be. The key is to keep the common parts of the implementation separate and shared, and to be able to implement the provider-specific aspects in a consistent, flexible, and extensible manner. As you mentioned, this work is a continuation of work that has already been merged. That work was merged without our knowledge or review, so we have no idea if it's acceptable or not. We should have been included on the review of that code. As it stands now, I can't sign off on this without reviewing those changes.
I understand there was a misunderstanding of the usage/scope of the CinderManager with OpenStack, but regarding this PR, I think it should be merged.
The main part of this work is in #194 and it is based on https://bugzilla.redhat.com/show_bug.cgi?id=1532796.
From my point of view, the common part of the CinderManager might need to be updated to reduce duplicated code, but I don't see that as a big issue since, at least for now, the CinderManager depends on OpenStack anyway.
I disagree. Also, it's not clear that the main part, which was merged without our knowledge or review, should remain in its current state. The existing code was implemented the way it was for a reason. The code should not have been changed without first understanding the original intent and maintaining the original design goals.
We're evaluating the situation. We'll let you know when we determine the proper course.
@roliveri so what is the exact issue now?
Now the issue is that the code lives under the OpenStack gem? As we do not support a separate Cinder manager yet, I see this as a low priority, versus the high priority of delivering targeted refresh for the waiting customer. It was also easier to backport when implemented in the OpenStack gem. For the next step, we would need manageiq-providers-openstack-common to hold common code for both; if Cinder is to be a separate manager, it needs to talk to the identity endpoint at least, maybe to orchestration, etc. (maybe a gem per service). The same approach should be taken for the Swift manager (SwiftStack), or even a Neutron manager, if that ever becomes a thing. @roliveri, since I think this PR aligns with the above, what exactly blocks this PR? And is there a priority set for some team to make a separate Cinder manager? Because as part of that endeavor, we would move the code to a correct common place and implement all the steps above.
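A rough sketch of the gem split described above, assuming a hypothetical stand-alone Cinder gem (only manageiq-providers-openstack-common is named in the comment; every other name here is invented for illustration):

```ruby
# manageiq-providers-openstack-cinder.gemspec (hypothetical gem)
Gem::Specification.new do |spec|
  spec.name    = "manageiq-providers-openstack-cinder"
  spec.version = "0.1.0"
  spec.summary = "Stand-alone Cinder manager built on shared OpenStack code"
  spec.authors = ["ManageIQ"]
  spec.files   = Dir["lib/**/*.rb"]

  # The shared gem would hold what both the cloud manager and a stand-alone
  # Cinder manager need: identity (Keystone) auth, OpenstackHandle, common parsers.
  spec.add_dependency "manageiq-providers-openstack-common"
end
```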
@roliveri #240 seems to be an issue that was present before. I've commented on it so @jerryk55 can also add the same checks for graph refresh, which is the default for now, so the PR won't have any effect for the g-release. Also, @mansam is in charge of OpenStack refresh, so please include him in any PR you do around refresh. He would have commented on the PR with graph refresh knowledge and flagged the need to add specs. Also, @aufi is in charge of OpenStack integration, so he should have the final word on merging anything, since any breakage of the OpenStack provider is his responsibility. So let's communicate both ways and we will be in a happy place. :-)
@Ladas And we're in charge of the storage definition. We get the BZs that result from these changes. Were we included when these changes were made? Changes like this should not have been made without first understanding the intent of the original code, or consulting with us. We just need to ensure this is done in such a way that all of our requirements are fulfilled. That would have been much easier if we had been consulted initially. Like I said, we're evaluating the situation. We'll let you know when we determine the proper course.
@roliveri OK, so I thought the responsibility was passed to the OpenStack team, because it affects OpenStack directly. I'm not sure determining the proper course should be done without the OpenStack team. Cinder and attached volumes are an important part of Compute, so we can't just act like it's a separate thing. So please consult on this with @mansam and @mansam, or we can add you to a weekly integration call, so you know what is happening and vice versa. A delivered feature must stay, because it's important to customers. So a "proper course" can't deviate from that.
@Ladas When I said "proper course", I meant determining a direction in which to proceed. We wouldn't adopt a course or act on it without including the interested parties. Given that the issue at hand involves graph refresh, we will probably be consulting with you, in addition to @mansam and the OpenStack team.
@roliveri sounds good
This pull request is not mergeable. Please rebase and repush.
@Ladas This fixes some problems with #194, but since this pull request isn't merged yet, those issues don't manifest anyway because OpenStack's version of the Cinder refresh doesn't get used. Without this being merged, though, event-based targeted refresh doesn't work for storage in the OpenStack provider, as you already know.
@mansam Ah, @aufi told me this doesn't block anything. If this breaks the block storage targeted refresh, we should mark the BZ as a blocker, since it affects targeted refresh as a whole. @mansam let's put it in a mergeable state and merge it. @roliveri so the next step should be merging this and fixing the targeted refresh (there are customers waiting for this). Are we in agreement on that? If not, please write down the specific things you need fixed, so this can be merged and backported.
@Ladas Please hold off on merging while we continue our discussions. Thank you. In addition, there appear to be conflicts in the above VCR cassettes.
The associated BZ is a blocker. This PR isn't necessarily the proper solution to that BZ.
So, based on the emails that have been exchanged, merge this now (after the conflicts have been resolved) and address moving it out again (along with the graph refresh solution) later.
Force-pushed from 519606e to 0a9a17d
@mansam Can you rebase this and make sure it works as expected? That way we can solve the related blocker BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1532796
@Ladas Yep, working on that. Just had to get some prerequisite stuff merged related to VCRs.
Force-pushed from 3f12d67 to ea9a5ec
Force-pushed from ea9a5ec to c25ab93
Checked commits mansam/manageiq-providers-openstack@ec1c935~...c25ab93 with ruby 2.3.3, rubocop 0.52.1, haml-lint 0.20.0, and yamllint 1.10.0
The BZ/RFE which this solves is https://bugzilla.redhat.com/show_bug.cgi?id=1532796
VCR conflict...
Backported to Gaprindashvili via #275
Without this fix, the OpenStack provider continues to use the core Cinder manager.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1532796