Support verticalpodautoscaler v1 #1718

Closed
iamcan13 opened this issue Apr 15, 2022 · 15 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@iamcan13

iamcan13 commented Apr 15, 2022

What would you like to be added:
Support verticalpodautoscalers/v1

Why is this needed:
Since the vertical pod autoscaler CRD has been updated from v1beta2 to v1, kube-state-metrics should support the new version of the API. (release note)

Describe the solution you'd like
Generate client code for version v1 and update the dependencies.
The legacy client dependency is on v1beta2; below is one of the references I found.

vpaautoscaling "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2"
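Assuming the v1 types live under the usual path in the autoscaler repo, the change would mostly be an import-path swap like the following (a sketch, not the actual diff):

```go
// Before: client types generated from the v1beta2 API group
// vpaautoscaling "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2"

// After: switch to the v1 API group
vpaautoscaling "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
```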

Additional context

@iamcan13 iamcan13 added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 15, 2022
@Serializator
Contributor

@fpetkovski, is this something to be looked into or would the quote below from the README negate this issue?

Resources in Kubernetes can evolve, i.e., the group version for a resource may change from alpha to beta and finally GA in different Kubernetes versions. For now, kube-state-metrics will only use the oldest API available in the latest release.

I'm not too familiar with the compatibility and versioning of KSM. The quote says "will only use the oldest API available in the latest release", what does this mean exactly?

Would this mean if autoscaling.k8s.io/v1beta2 is still available in Kubernetes 1.24 then KSM will keep using v1beta2 until this API version is removed in one of the next Kubernetes releases and then start using autoscaling.k8s.io/v1?

@fpetkovski
Contributor

KSM aims to have backwards compatibility with the latest 3 Kubernetes versions. However, VPA is an external CRD so I'm not sure if we should apply the same rules since the CRD is not linked to Kubernetes releases. I think it should be safe to switch to v1 and users that use the old API can upgrade VPA independently of Kubernetes.

@mrueg @dgrisonnet any thoughts from your side?

@dgrisonnet
Member

Adding VPA to kube-state-metrics was a mistake, since its scope is to only expose metrics about the core resources of Kubernetes. This leads to the problem we have here, where we can't really move forward with an update since the API isn't tied to Kubernetes directly.

In my opinion, we should slowly deprecate it in favor of adding it as a configuration-based CRD. In the meantime, I think we shouldn't break the existing compatibility and stick to v1beta2.

As for the deprecation of VPA, I would recommend doing it over 2 minor releases of ksm:

v2.6.0:

  • mark all VPA metrics as deprecated since v2.6.0 in order to notify our users
  • provide a document guiding our users through the migration with an example of a CRD config that would provide the same functionalities as before

v2.8.0:

  • remove the VPA resource completely in ksm
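The migration guide would presumably center on a Custom Resource State config along these lines (a sketch only; the metric and label names here are illustrative, not the final migration doc):

```yaml
kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: autoscaling.k8s.io
        version: v1
        kind: VerticalPodAutoscaler
      labelsFromPath:
        namespace: [metadata, namespace]
        verticalpodautoscaler: [metadata, name]
      metrics:
        - name: "target_info"
          help: "Target of the VPA object."
          each:
            type: Info
            info:
              labelsFromPath:
                target_api_version: [spec, targetRef, apiVersion]
                target_kind: [spec, targetRef, kind]
                target_name: [spec, targetRef, name]
```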

@fpetkovski
Contributor

I think this makes sense. Now that we have a way to export metrics from CR fields, we can start deprecating the built-in VPA support.

@mrueg
Member

mrueg commented Jul 8, 2022

Should we plan work on this for v3?

I think there are a couple of larger changes that we might want to include (e.g. refactor command-line arguments, remove specific metrics, rename others, etc.)

@fpetkovski
Contributor

We could mark VPA as deprecated in 2.6 and advise users to use CRD metrics. I would say the removal of the feature should happen in v3 at the latest, but I am also fine with having it in a 2.x release in case someone has time to do the work.

@dgrisonnet
Member

I agree with Filip, we shouldn't necessarily wait for 3.x to remove the metrics since we have a way to deprecate them now.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 17, 2022
@mindw
Contributor

mindw commented Oct 26, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 26, 2022
@mrueg
Member

mrueg commented Nov 27, 2022

We decided to deprecate built-in VPA support in v2.7.0, please consider using Custom Resource State configurations instead. See #1790 #1835

@mrueg mrueg closed this as completed Nov 27, 2022
@QuentinBisson
Contributor

Did anyone manage to create a CustomResourceState configuration that provides the same metrics? I'm new to Custom Resource State, but recreating the removed metrics doesn't seem straightforward.

@dgrisonnet
Member

@mrueg wrote this doc https://github.com/kubernetes/kube-state-metrics/blob/main/docs/customresourcestate-metrics.md#verticalpodautoscaler to migrate VPA to CustomResourceState. It might be worth extending it to include all the VPA metrics that are going to be removed.

@QuentinBisson
Contributor

I'd love to do it, but I'm not sure how to get the metrics coming from a Kubernetes resource list like:

spec:
  resourcePolicy:
    containerPolicies:
      - containerName: container
        minAllowed: v1.ResourceList

The 2 issues I had were:

1. How do I get the containerName as a label if the path I use is [spec, resourcePolicy, containerPolicies, minAllowed]? It's not possible to use path: [spec, resourcePolicy, containerPolicies] with valueFrom: [minAllowed, cpu], because quantities are not parsed successfully.
2. The current CPU value in the VPA CR is in millicores, and the collector in KSM divided the value by 1000 to report it in cores. Is that possible with the current custom resource collector, or should we make this a millicore metric?

@mrueg
Member

mrueg commented Apr 5, 2023

Quantile/Percentage support is in the main branch and will be shipped with v2.9.0: #1989

I would suggest opening a new issue about the missing "complete replacement config for VPA in CRM" and continuing the conversation there.
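If I read the quantity-parsing change (#1989) correctly, a config along these lines should cover point 1 once v2.9.0 ships: pointing `path` at the containerPolicies list makes each element its own series, with `labelsFromPath` resolved relative to the element (metric and label names here are illustrative):

```yaml
metrics:
  - name: "spec_resourcepolicy_container_policies_minallowed_cpu"
    each:
      type: Gauge
      gauge:
        path: [spec, resourcePolicy, containerPolicies]
        labelsFromPath:
          container: [containerName]
        valueFrom: [minAllowed, cpu]
```

Quantities such as `250m` should then be parsed and emitted in cores (0.25), which would also address point 2 without a manual division.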

@QuentinBisson
Contributor

Added the request #2041 :)
