
Ensure Openstack uses its own CinderManager #242

Merged
merged 3 commits into from
Apr 10, 2018

Conversation

mansam
Contributor

@mansam mansam commented Mar 1, 2018

Without this fix, the Openstack provider continues to use the core Cinder manager.

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1532796

@jerryk55
Member

jerryk55 commented Mar 2, 2018

@roliveri @hsong-rh have you seen this?

@roliveri
Member

roliveri commented Mar 5, 2018

Using ManageIQ::Providers::StorageManager::CinderManager was intentional.
It's meant to be the implementation of the provider-independent aspects of the CinderManager.
Does ManageIQ::Providers::Openstack::StorageManager::CinderManager call the code in ManageIQ::Providers::StorageManager::CinderManager?

@mansam
Contributor Author

mansam commented Mar 5, 2018

Our subclass has its own graph refresh and event-based targeted refresh handling that is necessary for proper integration with the Openstack cloud manager's inventory, much like we have our own implementation of the Volume class. Otherwise it passes through to the parent class. See also #194
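For illustration, the inheritance pattern described here (a provider subclass that overrides only the refresh behavior and passes everything else through to the parent) can be sketched in plain Ruby. The class and method names below are hypothetical stand-ins, not the actual ManageIQ source:

```ruby
# Sketch of a provider-specific manager subclass. The parent holds the
# provider-independent behavior; the subclass swaps in only its own
# refresh strategy, assuming hypothetical names throughout.

module Core
  class CinderManager
    # Provider-independent default refresh strategy.
    def refresher
      :core_refresher
    end

    # Shared behavior that subclasses inherit unchanged.
    def supported_features
      [:volumes, :snapshots]
    end
  end
end

module Openstack
  class CinderManager < Core::CinderManager
    # Only the refresh strategy is overridden; all other behavior
    # passes through to the parent class.
    def refresher
      :openstack_graph_refresher
    end
  end
end
```

Under this shape, the subclass's own graph refresh is used while the common Cinder behavior stays in (and is maintained in) the parent.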

@roliveri
Member

roliveri commented Mar 5, 2018

What aspects of the refresh are OpenStack specific as opposed to Cinder specific?

@mansam
Contributor Author

mansam commented Mar 5, 2018

Specifically, the relationships between Openstack objects, like volume attachments.

@roliveri
Member

roliveri commented Mar 5, 2018

Attachments are handled by a provider-specific cross-linker; that's already accounted for in ManageIQ::Providers::StorageManager::CinderManager.

@mansam
Contributor Author

mansam commented Mar 6, 2018

We had a requirement to implement graph and targeted refresh for Openstack's block storage manager. To accomplish that, we created a subclass that uses our specific refresh code, event handling code, and object classes. We certainly couldn't add our specific event handling code to an ostensibly stand-alone core provider, and due to the update to use graph inventory the old crosslinker did not apply.

I see no problem with the Openstack provider containing Cinder code, considering that the Cinder manager in the core repository relies on Openstack provider code in order to function anyway: it uses Openstack provider volume classes, refers to a "cinder_service" method that to my knowledge only exists on the Openstack CloudManager, and refers to our OpenstackHandle code. It seems to me that there isn't really stand-alone Cinder support in ManageIQ at all, since the CinderManager has no way to be created as a stand-alone in the UI as far as I am aware, has these Openstack provider dependencies, and as a consequence seems to only support the Openstack provider. The documentation seems to confirm this as well (see section 8 of http://manageiq.org/docs/reference/fine/doc-Managing_Providers/miq/).

If I am mistaken about those items I would be happy to be corrected, but at the moment I don't see the issue with the Openstack provider maintaining its own Cinder code.

@roliveri
Member

roliveri commented Mar 6, 2018

There's history here. When the Cinder manager was implemented, it was based on a number of requirements (the same is true of the Swift manager). We needed to consider the fact that the Cinder manager could be associated with other types of providers, and it could also be deployed as its own provider, providing service to multiple consumers.

It shouldn't rely directly on OpenStack volumes. The initial implementation did not; if it does now, someone changed the code. That's not to say it can't create volumes subclassed by a specific provider; it just needs to request the volume subclass from the provider rather than hardcoding it.

True, cinder_service is a method implemented by the OpenStack provider, but that doesn't mean it's OpenStack specific. It's part of the contract between the Cinder manager and the provider: each provider that can use or host a Cinder manager needs to implement the method.

Just because there's currently no way to have a standalone Cinder manager, or have a Cinder manager associated with another provider (RHEV for example), that doesn't mean that's not the direction in which we're headed. The current implementation considers that direction, so it won't have to be reworked in the future. It would have been very easy to make the Cinder manager OpenStack specific, but then I'd question the utility of making it a separate manager.
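The contract described here (the manager asks its parent provider for the Cinder connection and for the volume subclass, rather than hardcoding OpenStack) might be sketched as follows. All class and method names besides cinder_service are illustrative assumptions, not the real ManageIQ API:

```ruby
# Sketch of a provider-agnostic Cinder manager that depends only on a
# contract its parent provider implements. Names are hypothetical.

class CinderManager
  def initialize(parent_provider)
    @parent = parent_provider
  end

  # Contract: each provider that hosts a Cinder manager implements
  # cinder_service; the manager depends only on that contract.
  def connection
    @parent.cinder_service
  end

  # Ask the provider for its volume subclass instead of hardcoding one,
  # falling back to a generic class when the provider has none.
  def volume_class
    @parent.respond_to?(:volume_class) ? @parent.volume_class : :generic_volume
  end
end

class OpenstackProvider
  def cinder_service
    :openstack_cinder_connection # a real provider would return a service handle
  end

  def volume_class
    :openstack_cloud_volume
  end
end
```

A provider without its own volume subclass would simply fall back to the generic class, which is what would let the same manager serve multiple provider types.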

@mansam
Contributor Author

mansam commented Mar 7, 2018

Richard, thank you for the background information. Even so, I think this is fine. It doesn't change anything about the core implementation, and you probably wouldn't want Openstack specific event code mingled into that anyway. This PR (and the work it is a continuation of) allows us to meet our requirements without interfering with any possible future work to make the core Cinder manager stand alone.

@Loicavenel, would you mind weighing in here in regards to RFE https://bugzilla.redhat.com/show_bug.cgi?id=1532796? This PR is a continuation of the already merged work for that.

@roliveri
Member

roliveri commented Mar 7, 2018

I'm not sure if it's fine. It may be. The key is to keep the common parts of the implementation separate and shared, and to be able to implement the provider specific aspects in a consistent, flexible and extensible manner.

As you mentioned, this work is a continuation of work that has already been merged. That work was merged without our knowledge or review, so we have no idea if it's acceptable or not. We should have been included on the review of that code. As it stands now, I can't sign off on this without reviewing those changes.

Member

@aufi aufi left a comment

I understand there was a misunderstanding about the usage/scope of the CinderManager with OpenStack, but as for this PR, I think it should be merged.

The main part of this work is in #194 and it is based on https://bugzilla.redhat.com/show_bug.cgi?id=1532796.

From my point of view, the common part of the CinderManager might need to be updated to reduce duplicated code, but I don't see that as a big issue since, at least for now, the CinderManager depends on OpenStack anyway.

@roliveri
Member

roliveri commented Mar 22, 2018

I disagree. Also, it's not clear that the main part, which was merged without our knowledge or review, should remain in its current state. The existing code was implemented the way it was for a reason. The code should not have been changed without first understanding the original intent and maintaining the original design goals.
Also, we have already encountered issues introduced by the changes made in #194 that we had to investigate and debug. That code should not have been merged.

@roliveri
Member

We're evaluating the situation. We'll let you know when we determine the proper course.

@Ladas
Contributor

Ladas commented Mar 23, 2018

@roliveri so what is the exact issue now?

#194

  • was needed to deliver targeted refresh for the whole of OpenStack for the g-release. It was great work from @mansam to deliver this in a very short time frame before the release. I did a thorough review to make sure OpenStack can run in targeted-only mode for the compute, network, and storage models.

Now the issue is that the code lives under the OpenStack gem? As we do not yet support a separate Cinder manager, I see this as low priority versus the high priority of delivering targeted refresh to the waiting customer. It was also easier to backport when implemented in the OpenStack gem.

For the next step, ManageIQ::Providers::StorageManager::CinderManager should not be in core; we need a manageiq-providers-cinder gem. That should hold most of the functionality, and manageiq-providers-openstack should inherit from it. The OpenStack provider should have its own STI subclass for every model, since that is the approach consistent with other providers.

Then we need manageiq-providers-openstack-common to hold code common to both; if Cinder becomes a separate manager, it needs to talk to the identity endpoint at least, and maybe to orchestration, etc. (maybe a gem per service).

The same approach should be taken for the Swift manager (SwiftStack), or even a Neutron manager, if that ever becomes a thing.

@roliveri since I think this PR aligns with the above, what exactly blocks it? And is there a priority set for some team to build a separate Cinder manager? Because as part of that endeavor, we would move the code to the correct common place and implement all of the steps above.
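As a rough sketch of the gem layering proposed in this comment (the gem names come from the proposal itself, but every module, class, and method below is a hypothetical illustration):

```ruby
# Hypothetical layering: a manageiq-providers-cinder base gem holds the
# common Cinder behavior, a shared OpenStack gem holds plumbing needed by
# multiple managers, and the OpenStack provider's STI subclass combines them.

module Cinder
  class StorageManager
    # Common Cinder behavior would live in the base gem.
    def inventory_source
      :cinder_api
    end
  end
end

module OpenstackCommon
  # Shared OpenStack plumbing (identity, orchestration, ...) that both the
  # compute provider and a standalone Cinder manager would need.
  def identity_endpoint
    :keystone
  end
end

module Openstack
  class StorageManager < Cinder::StorageManager
    include OpenstackCommon
    # Provider-specific STI subclass, consistent with other providers;
    # overrides would go here.
  end
end
```

The subclass then sees both the common Cinder behavior (via inheritance) and the shared OpenStack plumbing (via the mixin), without either gem depending on the other.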

@Ladas
Contributor

Ladas commented Mar 23, 2018

@roliveri also, please list the PRs and BZs caused by #194, because none got to me.

@Ladas
Contributor

Ladas commented Mar 23, 2018

@roliveri #240 seems to be an issue that was present before. I've commented on it so @jerryk55 can also add the same checks for graph refresh, which is the default for now, so the PR won't have any effect for the g-release.

Also, @mansam is in charge of OpenStack refresh, so please include him in any PR you do around refresh. He could have commented on the PR with graph-refresh knowledge and on the need to add specs.

Also, @aufi is in charge of OpenStack integration, so he should have the final word on merging anything, since any breakage of the OpenStack provider is his responsibility.

So let's communicate both ways and we will be in a happy place. :-)

@roliveri
Member

@Ladas And we're in charge of the storage definition. We get the BZs that result from these changes. Were we included when these changes were made?

Changes like this should not have been made without first understanding the intent of the original code, or consulting with us. We just need to ensure this is done in such a way that all of our requirements are fulfilled. That would have been much easier if we were consulted initially.

Like I said, We're evaluating the situation. We'll let you know when we determine the proper course.

@Ladas
Contributor

Ladas commented Mar 23, 2018

@roliveri ok, so I thought responsibility had been passed to the OpenStack team, because this affects OpenStack directly.

I'm not sure determining the proper course should be done without the OpenStack team. Cinder and attached volumes are an important part of Compute, so we can't just act like it's a separate thing.

So please consult on this with @mansam and @mansam, or we can add you to the weekly integration call, so you know what is happening and vice versa. A delivered feature must stay, because it's important to customers, so a "proper course" can't deviate from that.

@roliveri
Member

@Ladas When I said "proper course", I meant determining a direction to proceed forward.

We wouldn't adopt a course or act on it without including the interested parties. Given that the issue at hand involves graph refresh, we will probably consult with you, in addition to @mansam and the OpenStack team.

@Ladas
Contributor

Ladas commented Mar 23, 2018

@roliveri sounds good

@Ladas
Contributor

Ladas commented Mar 26, 2018

@mansam @aufi is this blocking something now?

@miq-bot
Member

miq-bot commented Mar 27, 2018

This pull request is not mergeable. Please rebase and repush.

@mansam
Contributor Author

mansam commented Mar 27, 2018

@Ladas This fixes some problems with #194, but since this pull request isn't merged yet, those issues don't manifest anyway, because Openstack's version of the Cinder refresh doesn't get used. Without this merged, though, event-based targeted refresh doesn't work for storage in the Openstack provider, as you already know.

@Ladas
Contributor

Ladas commented Mar 29, 2018

@mansam ah, @aufi told me this doesn't block anything.

If this breaks the block storage targeted refresh, we should mark the BZ as a blocker, since it affects targeted refresh as a whole. @mansam let's put it in a mergeable state and merge it.

@roliveri so the next step should be merging this and fixing the targeted refresh (there are customers waiting for it). Are we in agreement on that? If not, please write out the specific things you need fixed, so this can be merged and backported.

@jerryk55
Member

@Ladas please hold off on merging while we continue our discussions. Thank you. In addition, there appear to be conflicts in the above VCR cassettes.

@roliveri
Member

roliveri commented Apr 2, 2018

The associated BZ is a blocker. This PR isn't necessarily the proper solution to that BZ.

@roliveri
Member

roliveri commented Apr 5, 2018

So, based on the emails that have been exchanged, merge this now (after the conflicts have been resolved) and address moving it out again (along with the graph refresh solution) later.

@mansam mansam force-pushed the ensure-openstack-cinder-manager branch from 519606e to 0a9a17d Compare April 5, 2018 19:52
@Ladas
Contributor

Ladas commented Apr 9, 2018

@mansam can you rebase this and make sure it works as expected, so we can solve the related blocker BZ? https://bugzilla.redhat.com/show_bug.cgi?id=1532796

@mansam
Contributor Author

mansam commented Apr 9, 2018

@Ladas Yep, working on that. Just had to get some prerequisite stuff merged related to VCRs.

@mansam mansam force-pushed the ensure-openstack-cinder-manager branch 3 times, most recently from 3f12d67 to ea9a5ec Compare April 9, 2018 15:52
@mansam mansam force-pushed the ensure-openstack-cinder-manager branch from ea9a5ec to c25ab93 Compare April 9, 2018 16:49
@miq-bot
Member

miq-bot commented Apr 9, 2018

Checked commits mansam/manageiq-providers-openstack@ec1c935~...c25ab93 with ruby 2.3.3, rubocop 0.52.1, haml-lint 0.20.0, and yamllint 1.10.0
12 files checked, 1 offense detected

  • 💣 💥 🔥 🚒 - Linter/Yaml - missing config files

@aufi aufi merged commit 17b926b into ManageIQ:master Apr 10, 2018
@aufi aufi added this to the Sprint 83 Ending Apr 9, 2018 milestone Apr 10, 2018
@aufi
Member

aufi commented Apr 10, 2018

The BZ/RFE which this solves is https://bugzilla.redhat.com/show_bug.cgi?id=1532796

@simaishi
Contributor

VCR conflict...

@simaishi
Contributor

Backported to Gaprindashvili via #275

7 participants