reconcile sts/deploy if replica count changed #115

Merged
merged 1 commit into druid-io:master on Dec 16, 2020

Conversation

himanshug
Member

Fixes #108

Description

This PR introduces a mechanism to compare resources generated by druid-operator against what is currently in the cluster, so they can be reconciled if they were changed directly. Currently it only does so for Deployment/StatefulSet replica counts, but it could be extended further as needed.
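
For illustration, a minimal sketch of that reconcile pattern (not the exact code in this PR; the helper name, package name, and controller-runtime client usage are assumptions):

package reconciler // hypothetical package name

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconcileStatefulSetReplicas compares the operator-generated StatefulSet with
// the one currently in the cluster and pushes an update if the replica counts
// have diverged (the only field reconciled so far).
func reconcileStatefulSetReplicas(ctx context.Context, c client.Client, desired, existing *appsv1.StatefulSet) error {
	if *desired.Spec.Replicas == *existing.Spec.Replicas {
		return nil // nothing drifted, leave the object alone
	}
	existing.Spec.Replicas = desired.Spec.Replicas
	return c.Update(ctx, existing)
}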


This PR has:

  • been tested on a real K8S cluster to ensure creation of a brand new Druid cluster works.
  • been tested for backward compatibility on a real K8S cluster by applying the changes introduced here on an existing Druid cluster. If there are any backward incompatible changes then they have been noted in the PR description.
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious for an unfamiliar reader.
  • [ ] added documentation for new or modified features or behaviors.

Comment on lines +848 to +853
func statefulSetIsEquals(obj1, obj2 object) bool {
o1 := obj1.(*appsv1.StatefulSet)
o2 := obj2.(*appsv1.StatefulSet)
return *o1.Spec.Replicas == *o2.Spec.Replicas
}

Contributor

@himanshug in case we add the spec here, I wonder if this will conflict with sts scaling (#97).

Regardless, are you planning to keep only replicas for now, or extend it to the spec too?

Member Author

@himanshug himanshug Dec 16, 2020

For now, I am leaving it at just replica count mismatch, but we can add more checks here based on real-world needs.

From a feature perspective, #97 and this are different things; however, both might end up reusing some code for checking equality if that makes sense.
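
For illustration, a hypothetical sketch of how the comparison could be extended later (not part of this PR; the extra field and package name are made up):

package reconciler // hypothetical package name

import appsv1 "k8s.io/api/apps/v1"

// statefulSetIsEqualsExtended shows how more fields could be folded into the
// comparison; only the replica count check matches what this PR actually does.
func statefulSetIsEqualsExtended(o1, o2 *appsv1.StatefulSet) bool {
	if *o1.Spec.Replicas != *o2.Spec.Replicas {
		return false
	}
	// Hypothetical extra check: reconcile if the first container image drifted.
	if len(o1.Spec.Template.Spec.Containers) > 0 && len(o2.Spec.Template.Spec.Containers) > 0 &&
		o1.Spec.Template.Spec.Containers[0].Image != o2.Spec.Template.Spec.Containers[0].Image {
		return false
	}
	return true
}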

@AdheipSingh
Contributor

question:

  1. What will happen if someone deletes a deployment/sts? With this PR, will the operator recreate the sts/deploy?

@himanshug
Member Author

What will happen if someone deletes a deployment/sts? With this PR, will the operator recreate the sts/deploy?

operator would recreate those within 10 seconds in the worst case.

@himanshug himanshug merged commit b9d2f99 into druid-io:master Dec 16, 2020
@himanshug himanshug deleted the work_108 branch December 17, 2020 13:34
@connie-ru-wang

Hi, is it possible that this PR broke the HPA functionality? I have a working HPA in a cluster with operator:0.0.4, but the HPA doesn't work in a cluster with operator:0.0.5 because it looks like the operator immediately reconciles the sts size back down when the HPA tries to scale up.

@AdheipSingh
Contributor

@connie-ru-wang
Yeah, I think that might have broken the HPA. We can add a condition to check whether HPA is enabled, or maybe add a flag for force-maintaining the replica count at the operator level (a rough sketch of such a guard is below).
Regardless, do you mind telling which components you are using HPA for?

cc @himanshug
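
A rough sketch of the suggested guard (the annotation key, helper name, and package name are hypothetical, not an agreed design):

package reconciler // hypothetical package name

import appsv1 "k8s.io/api/apps/v1"

// shouldReconcileReplicas returns false when the workload opts into HPA-driven
// scaling, so the operator does not fight the autoscaler over .spec.replicas.
func shouldReconcileReplicas(sts *appsv1.StatefulSet) bool {
	// Hypothetical opt-out annotation; the real mechanism would need to be agreed upon.
	return sts.Annotations["druid-operator.example/hpa-enabled"] != "true"
}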

@AdheipSingh
Contributor

BTW @connie-ru-wang it would be nice if you could create a separate issue for this, so that it can be tracked.
Thanks

@connie-ru-wang

connie-ru-wang commented Apr 8, 2021

@AdheipSingh I've been testing HPA with routers, brokers, and historicals. For historicals, though, we will most likely need v2beta2 support for the specific behavior policies that prevent thrashing (a sketch of such a policy follows below): https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior

Also just opened an issue for this: #160
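
For reference, an illustrative Go snippet of the kind of v2beta2 scaling behavior mentioned above (the values and package name are made-up examples, not recommendations):

package reconciler // hypothetical package name

import autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"

func int32Ptr(i int32) *int32 { return &i }

// exampleBehavior slows down scale-down with a stabilization window and a
// conservative policy, which is the thrashing protection referenced above.
var exampleBehavior = autoscalingv2beta2.HorizontalPodAutoscalerBehavior{
	ScaleDown: &autoscalingv2beta2.HPAScalingRules{
		StabilizationWindowSeconds: int32Ptr(600), // wait 10 minutes before scaling down
		Policies: []autoscalingv2beta2.HPAScalingPolicy{
			{Type: autoscalingv2beta2.PodsScalingPolicy, Value: 1, PeriodSeconds: 300},
		},
	},
}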

@AdheipSingh AdheipSingh mentioned this pull request Apr 16, 2021
Development

Successfully merging this pull request may close these issues.

Watch deployments/Sts/services with watcher