
Create HPA v2 Stable API #102362

Closed
josephburnett opened this issue May 27, 2021 · 17 comments · Fixed by #102534
Assignees
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@josephburnett
Member

josephburnett commented May 27, 2021

What would you like to be added:

We need to create a new version, v2 (no suffix), along with conversion routines, etc.

Why is this needed:

HPA v2beta2 is graduating to stable!

What to do:

  1. Read Changing the API, in particular Making a New API Version.
  2. Tell @josephburnett on #sig-autoscaling-api "Hey, I want to do this." (just so two people don't start the same work)
  3. Create the new API and conversion routines.

Details:

The HPA controller uses v2beta2 as the internal structure. So your conversion routines will be from v2 -> v2beta2 and back. We will migrate the controller and associated code to v2 stable as a separate change.
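The v2 -> v2beta2 -> v2 round trip described above can be sketched as below. This is a minimal, hypothetical illustration: the type and function names are simplified stand-ins, not the real generated structs in k8s.io/api/autoscaling.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the select-policy enums in
// k8s.io/api/autoscaling/v2beta2 and the proposed v2 package. The real
// structs carry many more fields.
type V2Beta2PolicySelect string // values: "Max", "Min", "Disabled"
type V2PolicySelect string      // values: "MaxChange", "MinChange", "Disabled"

// v2 -> v2beta2: the controller keeps v2beta2 as its internal
// structure, so a v2 object is converted down before the controller
// consumes it.
func selectPolicyV2ToV2Beta2(in V2PolicySelect) (V2Beta2PolicySelect, error) {
	switch in {
	case "MaxChange":
		return "Max", nil
	case "MinChange":
		return "Min", nil
	case "Disabled":
		return "Disabled", nil
	}
	return "", fmt.Errorf("unknown select policy %q", in)
}

// v2beta2 -> v2: the inverse mapping, used when serving stored objects
// back at the v2 version.
func selectPolicyV2Beta2ToV2(in V2Beta2PolicySelect) (V2PolicySelect, error) {
	switch in {
	case "Max":
		return "MaxChange", nil
	case "Min":
		return "MinChange", nil
	case "Disabled":
		return "Disabled", nil
	}
	return "", fmt.Errorf("unknown select policy %q", in)
}

func main() {
	down, _ := selectPolicyV2ToV2Beta2("MaxChange")
	up, _ := selectPolicyV2Beta2ToV2(down)
	fmt.Println(down, up) // the round trip must be lossless
}
```

Because the renames are pure one-to-one value mappings, the conversion in each direction is total over the known values and the round trip loses no information.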

Create the new API by copying v2beta2. We are graduating the API as-is without adding any new fields. However, there are a few fields and enum values we would like to rename because they are confusing.

  • Rename MaxPolicySelect to MaxChangePolicySelect and the value from "Max" to "MaxChange". The enum MaxPolicySelect chooses the smallest absolute value when scaling down (i.e. the largest change), which has been confusing to customers.
  • Rename MinPolicySelect to MinChangePolicySelect and the value from "Min" to "MinChange".
  • Rename ResourceMetricSourceType to PodResourceMetricSourceType and the value from "Resource" to "PodResource". The new "ContainerResource" type makes "Resource" ambiguous.
  • Rename the associated ResourceMetricSource struct.
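The renames in the list above could look roughly like this in the new package. This is a hypothetical sketch only: both "packages" are collapsed into one file for illustration (hence the artificial V2 suffix on one type name), and the real declarations carry doc comments and live in separate generated files.

```go
package main

import "fmt"

// v2beta2 (old names).
type ScalingPolicySelect string

const (
	MaxPolicySelect ScalingPolicySelect = "Max" // allows the largest change
	MinPolicySelect ScalingPolicySelect = "Min" // allows the smallest change
)

// v2 (proposed new names from this issue). In the real v2 package the
// type would keep the name ScalingPolicySelect; the V2 suffix here only
// avoids a collision within this single-file sketch.
type ScalingPolicySelectV2 string

const (
	MaxChangePolicySelect ScalingPolicySelectV2 = "MaxChange"
	MinChangePolicySelect ScalingPolicySelectV2 = "MinChange"
)

// Metric source rename: "Resource" becomes "PodResource" so it is no
// longer ambiguous with the newer "ContainerResource" type.
type MetricSourceType string

const (
	PodResourceMetricSourceType       MetricSourceType = "PodResource"
	ContainerResourceMetricSourceType MetricSourceType = "ContainerResource"
)

func main() {
	fmt.Println(MaxChangePolicySelect, PodResourceMetricSourceType)
}
```

The point of each rename is that the identifier now states what it selects (the largest or smallest *change*, a *pod-level* resource), so the constants read unambiguously next to their newer siblings.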

Generate new code (see Generating Code).

Send a pull request (one squashed commit please) and notify #sig-autoscaling-api so we can take a look too. It will probably need API review.

Feel free to reach out for help on #sig-autoscaling-api if you have any questions!

@josephburnett josephburnett added the kind/feature Categorizes issue or PR as related to a new feature. label May 27, 2021
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 27, 2021
@k8s-ci-robot
Contributor

@josephburnett: The label(s) sig/sig-autoscaling cannot be applied, because the repository doesn't have them.

In response to this:

/sig sig-autoscaling
/add help wanted

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@josephburnett
Member Author

/sig autoscaling

@k8s-ci-robot k8s-ci-robot added sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 27, 2021
@josephburnett
Member Author

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 27, 2021
@josephburnett
Member Author

/help

@k8s-ci-robot
Contributor

@josephburnett:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label May 27, 2021
@wangyysde
Member

/assign

@danishprakash
Contributor

@wangyysde Just a bump: are you working on this? I couldn't find a message on Slack about it. Thanks!

@josephburnett
Member Author

Hey @wangyysde, thanks for volunteering! Can you touch base with me in #sig-autoscaling-api on the Kubernetes Slack? If you don't want to use Slack, just email me directly at josephburnett@google.com.

@AbdulBasitAlvi
Contributor

AbdulBasitAlvi commented May 31, 2021

I would also like to start working on creating the new HPA v2 API. If no one is working on it, I can pick it up. @josephburnett @wangyysde

@josephburnett
Member Author

@AbdulBasitAlvi great! Let's coordinate the work on #sig-autoscaling-api. Ping me there or stop by the sig-autoscaling working group meeting.

@wangyysde
Member

@AbdulBasitAlvi Have you started working on this? If not, I will try it.

@wangyysde
Member

@josephburnett Sorry, I don't have a Slack account.

@josephburnett
Member Author

Just to follow up: @wangyysde, you have this issue. Thanks for the PR. @AbdulBasitAlvi, I will follow up with you on Slack to find other interesting work. :)

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:


/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
