Remove vmi/vm watch and reduce default sync time to 90 seconds #216
Conversation
Signed-off-by: David Vossel <davidvossel@gmail.com>
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: davidvossel. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/ok-to-test
Pull Request Test Coverage Report for Build 3856719842
💛 - Coveralls
I think we can do both, so we have responsiveness and we don't miss anything. What do you think?
How would you do this?
You have a pair of "knobs":
I think with both knobs you can have both features; a sketch of the two follows below.
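(For illustration, a minimal controller-runtime sketch, assuming the two knobs are the manager-wide resync period and the per-controller watches on the KubeVirt types; the function name is a placeholder, and the top-level `SyncPeriod` field location varies by controller-runtime version:)

```go
import (
	"time"

	kubevirtv1 "kubevirt.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"

	infrav1 "sigs.k8s.io/cluster-api-provider-kubevirt/api/v1alpha1"
)

func setupController(r ctrl.Reconciler) error {
	// Knob 1: how often the cache forces a full resync, which drives
	// periodic reconciles even when no watch event arrives.
	syncPeriod := 90 * time.Second

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		SyncPeriod: &syncPeriod,
	})
	if err != nil {
		return err
	}

	// Knob 2: event-driven watches on the KubeVirt types, which only work
	// when the VM/VMI CRDs are registered in the cluster the cache reads from.
	return ctrl.NewControllerManagedBy(mgr).
		For(&infrav1.KubeVirtMachine{}).
		Owns(&kubevirtv1.VirtualMachine{}).
		Complete(r)
}
```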
The problem is that we need the capk controller to be able to function when the VM/VMI objects are not registered in the infra cluster. I could dynamically detect whether VM/VMIs are present at launch and only watch if they are, but I'd rather have a single code path to test rather than multiple (one when the CRDs are available, one when they are not). So just using a reduced sync period with no watches seems to satisfy that requirement, at the cost of efficiency, which might be okay for us at our current scale.
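(For context, the dynamic detection being ruled out here could look roughly like the following discovery check; a hypothetical sketch, not code from this PR:)

```go
import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// kubevirtCRDsPresent reports whether the kubevirt.io/v1 API (the VM/VMI
// types) is registered, so watches could be installed conditionally at launch.
func kubevirtCRDsPresent(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	if _, err := dc.ServerResourcesForGroupVersion("kubevirt.io/v1"); err != nil {
		if apierrors.IsNotFound(err) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}
```

Branching on a check like this is exactly what produces the two code paths (watching vs. polling) the comment wants to avoid testing separately.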
An alternative is to run the polling for external clusters and, as a result, enqueue requests to the controller so they enter the reconcile queue; a sketch of that follows.
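(A hedged sketch of that alternative using controller-runtime's channel source; the `Watches` signature shown is the pre-v0.15 one, and `machinesNeedingSync` is a hypothetical helper:)

```go
import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/source"

	infrav1 "sigs.k8s.io/cluster-api-provider-kubevirt/api/v1alpha1"
)

// machinesNeedingSync is a placeholder; a real implementation would query
// the external infra cluster for VM/VMI changes.
func machinesNeedingSync(ctx context.Context) []*infrav1.KubeVirtMachine {
	return nil
}

func setupWithPoller(ctx context.Context, mgr ctrl.Manager, r ctrl.Reconciler) error {
	events := make(chan event.GenericEvent)

	// Poller: periodically inspect the external cluster and push one
	// GenericEvent per affected KubeVirtMachine so it re-enters the queue.
	go func() {
		ticker := time.NewTicker(90 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				for _, m := range machinesNeedingSync(ctx) {
					events <- event.GenericEvent{Object: m}
				}
			}
		}
	}()

	return ctrl.NewControllerManagedBy(mgr).
		For(&infrav1.KubeVirtMachine{}).
		Watches(&source.Channel{Source: events}, &handler.EnqueueRequestForObject{}).
		Complete(r)
}
```

This keeps the reconciler itself event-driven while confining the external-cluster polling to one goroutine.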
/lgtm |
Issues #100 and #78 have one thing in common: they both involve problems derived from tracking VM/VMIs on external infra.
So far, we've been treating external infra as an advanced use case, not the default. The result is that external infra has been treated as a second-class citizen compared to the use case where the capi components and VM/VMIs run on the same k8s cluster.
I'd like to return to a single code path that satisfies both the use case where the capk/capi components run on the same cluster as the VM/VMIs (centralized infra) and the use case where the controller components run on a separate cluster from the VM/VMIs (external infra).
To achieve this, we can't assume that the VM/VMI objects are even registered on the same cluster as the capi/capk controllers, which means we can't watch these objects by default using the default in-cluster config. I propose we return to depending on syncing the KubeVirtMachine and KubeVirtCluster objects regularly in order to pick up VM/VMI changes (basically polling). For polling to be responsive enough to pick up things like IP changes after a VM reboot, I think we should lower the default polling interval to 60 seconds.
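(Whether this lands as a manager-wide resync or a per-object requeue, the polling effect is the same; a minimal sketch of the requeue flavor, with a placeholder reconciler type:)

```go
import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// KubeVirtMachineReconciler is a placeholder; the real reconciler holds
// clients for both the management cluster and the infra cluster.
type KubeVirtMachineReconciler struct{}

// Reconcile re-reads the VM/VMI state from the infra cluster and always
// requeues, so changes such as a new IP after a VM reboot are picked up
// within one polling interval even with no watches installed.
func (r *KubeVirtMachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... fetch the KubeVirtMachine, look up its VM/VMI on the infra
	// cluster, and update status (addresses, readiness) ...
	return ctrl.Result{RequeueAfter: 60 * time.Second}, nil
}
```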