
Targeted Refresh for Cloud VMs #74

Merged · 2 commits · Aug 17, 2017
Conversation

@mansam (Contributor) commented Aug 1, 2017

This pull request introduces the scaffold for targeted refresh and implements it for cloud VMs. This is based significantly on @Ladas's work on the Amazon provider.

@tzumainn requested a review from Ladas on August 1, 2017 20:12
@tzumainn (Contributor) commented Aug 1, 2017

@Ladas could you help us take a look at this? Thanks!

def cloud_networks
  return [] if references(:cloud_networks).blank?
  @cloud_networks = references(:cloud_networks).collect do |network_id|
    safe_get { network_service.networks.get(network_id) }
@Ladas (Contributor), Aug 2, 2017:

Hm, could we use a filtered list call instead? OpenStack should support list filtering by UUIDs, right? In larger environments, e.g. a bigger stack deployment, we can have hundreds of entities here, so that would be a lot of API calls.

@mansam (Contributor, Author):

Are you thinking of making a call to pull the whole inventory and then filtering that, or is there actually API support for requesting a set of UUIDs?

@Ladas (Contributor), Aug 15, 2017:

Yeah, we should be able to filter N records from the API to avoid N+1 API requests. E.g. check the server list endpoint https://developer.openstack.org/api-ref/compute/#list-servers ; the uuid parameter can be used to filter by the instance uuid, and I think we should be able to provide more than one.
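For illustration, a rough sketch of the difference, assuming a compute_service handle analogous to the network_service used in this PR, a references(:vms) helper like the references(:cloud_networks) above, and that Fog forwards the filter hash to Nova's list-servers call (names are illustrative, and filter support varies by service and release):

  # Current approach in this PR: one GET per referenced instance,
  # i.e. N+1 API calls for N references.
  vms = references(:vms).collect do |uuid|
    safe_get { compute_service.servers.get(uuid) }
  end

  # Possible alternative: a single filtered list call, relying on Nova's
  # list-servers uuid filter (GET /servers?uuid=...); whether several
  # uuids can be passed at once is exactly what needs to be verified.
  vms = compute_service.servers.all("uuid" => references(:vms))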

(Contributor):

There are API docs mentioning an IN query, but also that Nova does not support OR filters: https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#filtering

But it seems like these are just guidelines, so I'm not sure who actually follows them. :-)
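(For reference, the guideline's multi-value syntax would look roughly like GET /servers?uuid=in:<uuid1>,<uuid2>; whether Nova actually honours it is the open question here.)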

@Ladas (Contributor), Aug 15, 2017:

I suppose we can merge this as a first version and optimize it where possible later. If we can't get rid of the N+1 queries, though, we will need to put back the condition that falls back to a full refresh past a threshold (the best placement might be the inventory builder). The old refresh condition is here: https://github.com/Ladas/manageiq/blob/04f82ac2f4601144370d760d6f5d221f1cc88ca5/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb#L64

I don't have that condition for AWS, since the API supports filtering everywhere. :-)
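A minimal sketch of that kind of fallback, with an illustrative constant instead of the real ManageIQ setting (helper and constant names are assumptions):

  # Hypothetical helper: if a targeted refresh would need more per-record
  # API calls than the threshold allows, refresh the whole EMS instead.
  FULL_REFRESH_THRESHOLD = 100 # illustrative value, not an actual setting

  def refresh_targets_for(ems, targets)
    return [ems] if targets.size > FULL_REFRESH_THRESHOLD
    targets
  end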

@mansam (Contributor, Author):

Oh, that is awesome. I didn't know it was possible to query like that. Should be easy enough to figure out what's actually supported and fix this PR to get rid of the N+1 requests.

(Contributor):

Yeah, let's do it in the next PR; you will probably need to experiment. I fear the OpenStack API is still heavily inconsistent in this area. :-(

@mansam (Contributor, Author):

Will do, thanks for the advice.

@mansam (Contributor, Author):

This is definitely going to be trickier than hoped; Nova doesn't seem to respect the "in:" directive.

@Ladas (Contributor), Aug 16, 2017:

Yeah, I wonder which services will. :-) Btw, the guideline did mention that Nova might not support any kind of OR condition. :-(

@@ -863,5 +863,107 @@ def expect_sync_cloud_tenants_with_tenants_is_queued
  expect(sync_cloud_tenant.method_name).to eq("sync_cloud_tenants_with_tenants")
  expect(sync_cloud_tenant.state).to eq(MiqQueue::STATE_READY)
end

def assert_targeted_vm(vm_name, attributes)
@Ladas (Contributor), Aug 2, 2017:

It would be nice to have a test that uses the existing Vm asserts; I do that in AWS to make sure the Vm comes out the same for targeted and full refresh.

@mansam (Contributor, Author):

I'm not using exactly the same asserts, since it didn't seem possible to get the external network information from what is available on just the VM. It looked like it would take essentially a full inventory pull from Neutron to get some of it, so I left the external network data out of the targeted refresh and dropped the external network checks from the test.

(Contributor):

Right, so what I do in AWS now is a full VM refresh: I expand more targets out of the Vm, so the result is a Vm with all of its network connections (IPs) and volumes.

@mansam (Contributor, Author):

Sorry, what do you mean by "full VM refresh"? I'm not sure if you're referring to a different sort of approach, or just using a different name. :)

@Ladas (Contributor), Aug 15, 2017:

So for AWS, every time there is a reference to a Vm (i.e. a Vm target), we unpack the list of references by looking at the DB:

https://github.com/Ladas/manageiq-providers-amazon/blob/6938025f2cd6a4b0504909a1be22a87e30dbc7c0/app/models/manageiq/providers/amazon/inventory/collector/target_collection.rb#L209

and by looking at the API:
https://github.com/Ladas/manageiq-providers-amazon/blob/6938025f2cd6a4b0504909a1be22a87e30dbc7c0/app/models/manageiq/providers/amazon/inventory/collector/target_collection.rb#L229

So we basically send more targets to the refresh by scanning the edges of the Vm (see the sketch below).

A full Vm refresh then means that after one targeted refresh you get the full info about the Vm, where the most important relations are probably IP addresses and volumes. That makes it easier to use e.g. in an automate workflow, where one step is the sync refresh of the Vm and you want to work with the data in the next step.

Also, for later releases the refresh should be configurable, so we can invoke either a targeted Vm refresh or a full targeted Vm refresh, based on what is needed.

This also addresses the fact that events might come in the wrong order, in which case the graph refresh will not be able to populate some edges. (For the 5.0 release, we will be solving that with a skeletal pre-create and a post-processing step where the missing edges are planned for the next refresh.)
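A rough sketch of the "scan the edges" idea for the OpenStack side (method, helper, and association names such as manager and add_target are assumptions modeled loosely on the linked Amazon collector, not code from this PR):

  # For each referenced Vm already in the DB, add its related records as
  # extra refresh targets, so a single targeted Vm refresh also brings in
  # ports and networks.
  def infer_related_vm_targets_from_db
    manager.vms.where(:ems_ref => references(:vms)).each do |vm|
      vm.network_ports.each  { |port| add_target(:network_ports, port.ems_ref) }
      vm.cloud_networks.each { |net|  add_target(:cloud_networks, net.ems_ref) }
    end
  end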

@mansam (Contributor, Author):

Oh, I think I see what you mean: a "targeted Vm refresh" is just the Vm object itself, and a "full targeted Vm refresh" is the Vm and its relationships. That makes sense now. I'm refreshing the VM and its relationships, but it seemed like not all the information needed to populate the network data was available, since you only get the IP address (not its ID), and you don't have enough port information to reconstruct the whole network from just the VM and its relationships, because objects from the OpenStack API don't carry references in both directions. Though if Neutron supports filtered querying like you mentioned, maybe I can get around that. I was working on this before two weeks of PTO, so my memory of the exact nature of the problem is fuzzy.

(Contributor):

It will require some tweaking; you can also do more API queries to fetch e.g. the network_ports, given that you can filter that API by e.g. a Vm uuid.
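For example, something along these lines, assuming Fog's port collection forwards filters to Neutron's list-ports call (which accepts a device_id query parameter):

  # Fetch only the ports attached to a given instance instead of listing
  # every port in the tenant (roughly GET /v2.0/ports?device_id=<vm_uuid>).
  ports = network_service.ports.all(:device_id => vm_uuid)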

@Ladas (Contributor) commented Aug 2, 2017

@mansam it's great, just one concern with get vs. the filtered list API.

For the specs, we should add moooore in next PRs, so every possible target has at least some coverage. :-D

In AWS I have https://github.com/Ladas/manageiq-providers-amazon/blob/cd02dc09e805424ef90cf30f082f710000c09319/spec/models/manageiq/providers/amazon/cloud_manager/vcr_specs/targeted_refresh_spec.rb#L32 and I would like to add much more later.


Next steps: analyzing events and calling refresh from automate. I have not yet merged the handler here:
ManageIQ/manageiq-content#159

It calls def self.miq_refresh(obj, _inputs) from the miq_ae_engine gem, going to:
https://github.com/Ladas/manageiq/blob/0accaeba1f38f16b39eca47018c8fc0aa5e4608a/app/models/ems_event/automate.rb#L5

and uses the event parser to extract targets from the event:
https://github.com/Ladas/manageiq-providers-amazon/blob/7eed07f92bf4589394de19002d7ad606754a707f/app/models/manageiq/providers/amazon/cloud_manager/event_target_parser.rb#L1


Then from the automate event handler we just call "/System/event_handlers/refresh", and the targets are extracted by the parser (or we fall back to a full refresh based on settings),

compared to the old way, "/System/event_handlers/event_action_refresh?target=ems", where we would need to specify the targets in each event handler.

@Ladas (Contributor) commented Aug 15, 2017

@mansam just run rubocop -a on the files Rubocop complains about and I am OK with merging this. Let's first build a working targeted refresh, then optimize/clean it up. :-)

@Ladas (Contributor) left a review:

Cool, I am good with merging this and moving forward. 👍 (We should have a little less than a month until the devel freeze.)

@tzumainn (Contributor):

@mansam are any of the codeclimate issues relevant?

@mansam (Contributor, Author) commented Aug 17, 2017

@tzumainn I've fixed the only ones that were relevant, so this should be ok to merge once the tests go green again.

@miq-bot (Member) commented Aug 17, 2017

Some comments on commits mansam/manageiq-providers-openstack@2087eec~...670c192

spec/models/manageiq/providers/openstack/cloud_manager/stubbed_refresher_spec.rb

  • ⚠️ - 130 - Detected allow_any_instance_of. This RSpec method is highly discouraged, please only use when absolutely necessary.

@miq-bot (Member) commented Aug 17, 2017

Checked commits mansam/manageiq-providers-openstack@2087eec~...670c192 with ruby 2.2.6, rubocop 0.47.1, and haml-lint 0.20.0
21 files checked, 1 offense detected

app/models/manageiq/providers/openstack/inventory/collector/target_collection.rb

@tzumainn merged commit cd4a0c8 into ManageIQ:master on Aug 17, 2017
@tzumainn added this to the Sprint 67 Ending Aug 21, 2017 milestone on Aug 17, 2017