
Change metrics store to use sync.Map #1028

Closed

Conversation


@zhengl7 zhengl7 commented Jan 18, 2020

What this PR does / why we need it:
The cache map for metrics cannot be accessed concurrently in the current version, so a mutex guards map reads and writes. As a result, heavy writing to the map contends with reads and can cause the Prometheus scrape call to time out. Using sync.Map removes the need for the mutex.

The non-concurrent map implementation led to the issues reported in #995: Prometheus scrapes time out constantly once the cluster gets busy enough creating new pods. After applying a build of kube-state-metrics with this change to the same cluster, the timeouts are completely gone.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #995
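
For reference, a minimal sketch of the before/after shape of this change (type and field names here are illustrative only, not the actual MetricsStore code):

```go
package store

import "sync"

// Before: a plain map guarded by a mutex. Every write blocks every read,
// so a burst of object churn can stall the Prometheus scrape path.
type mutexStore struct {
	mu      sync.RWMutex
	metrics map[string][]byte // keyed by object UID (illustrative)
}

func newMutexStore() *mutexStore {
	return &mutexStore{metrics: map[string][]byte{}}
}

func (s *mutexStore) Add(uid string, m []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.metrics[uid] = m
}

func (s *mutexStore) WriteAll(write func([]byte)) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	for _, m := range s.metrics {
		write(m)
	}
}

// After: sync.Map does its own synchronization, so writers no longer block
// readers on a single store-wide lock. Its zero value is ready to use.
type syncMapStore struct {
	metrics sync.Map // uid (string) -> []byte
}

func (s *syncMapStore) Add(uid string, m []byte) {
	s.metrics.Store(uid, m)
}

func (s *syncMapStore) WriteAll(write func([]byte)) {
	s.metrics.Range(func(_, v interface{}) bool {
		write(v.([]byte))
		return true // keep iterating over all entries
	})
}
```

Note that sync.Map is optimized for keys that are written once and read many times, or accessed by disjoint goroutines; for other access patterns it can allocate more than a plain locked map, which may explain part of the benchmark differences below.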

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Jan 18, 2020
@k8s-ci-robot
Contributor

Welcome @zhengl7!

It looks like this is your first PR to kubernetes/kube-state-metrics 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kube-state-metrics has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Jan 18, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: zhengl7
To complete the pull request process, please assign lilic
You can assign the PR to them by writing /assign @lilic in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@zhengl7
Author

zhengl7 commented Jan 18, 2020

Results of running `make test-benchmark-compare` locally:

### Result
old=release-1.8 new=syncmap_metrics_store
benchmark                                       old ns/op       new ns/op       delta
BenchmarkKubeStateMetrics/GenerateMetrics-8     1014742988      1005576671      -0.90%
BenchmarkKubeStateMetrics/MakeRequests-8        10689884448     21356121974     +99.78%
BenchmarkPodStore-8                             28217           31236           +10.70%
BenchmarkMetricWrite/value-1-8                  565             551             -2.48%
BenchmarkMetricWrite/value-35.7-8               704             687             -2.41%

benchmark                                    old MB/s     new MB/s     speedup
BenchmarkKubeStateMetrics/MakeRequests-8     925.12       970.94       1.05x

benchmark                                       old allocs     new allocs     delta
BenchmarkKubeStateMetrics/GenerateMetrics-8     824843         864269         +4.78%
BenchmarkKubeStateMetrics/MakeRequests-8        410539         455643         +10.99%
BenchmarkPodStore-8                             488            523            +7.17%
BenchmarkMetricWrite/value-1-8                  7              7              +0.00%
BenchmarkMetricWrite/value-35.7-8               7              7              +0.00%

benchmark                                       old bytes       new bytes       delta
BenchmarkKubeStateMetrics/GenerateMetrics-8     83766392        87583568        +4.56%
BenchmarkKubeStateMetrics/MakeRequests-8        26515301456     52930885600     +99.62%
BenchmarkPodStore-8                             24304           25984           +6.91%
BenchmarkMetricWrite/value-1-8                  536             536             +0.00%
BenchmarkMetricWrite/value-35.7-8               536             536             +0.00%

@zhengl7 zhengl7 requested a review from lilic January 18, 2020 00:18
The cache map for metrics cannot be accessed concurrently in the current version. As a result, heavy writing to the map contends with reads and can cause the Prometheus scrape call to time out. Using sync.Map removes the mutex.
@zhengl7 zhengl7 force-pushed the syncmap_metrics_store branch from 620c817 to 61c94d9 on January 18, 2020 00:34
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 18, 2020
@zhengl7
Author

zhengl7 commented Jan 18, 2020

Force pushing after rebasing

@zhengl7
Author

zhengl7 commented Jan 18, 2020

/assign @lilic

@zhengl7
Author

zhengl7 commented Jan 18, 2020

OK, the tests are failing and I need to understand why before going further. Should I close the PR and come back later when it's ready?

@zhengl7 zhengl7 closed this Jan 18, 2020
@lilic
Member

lilic commented Jan 20, 2020

Hmm, those do look like very large performance regressions. 🤔

Pokom added a commit to grafana/kube-state-metrics that referenced this pull request Jun 5, 2024
There are a few documented scenarios where `kube-state-metrics` will
lock up (kubernetes#995, kubernetes#1028). I believe a much simpler solution to ensure
`kube-state-metrics` doesn't lock up and require a restart to serve
`/metrics` requests is to add default read and write timeouts and to
allow them to be configurable. At Grafana, we've experienced a few
scenarios where `kube-state-metrics` running in larger clusters falls
behind and starts getting scraped multiple times. When this occurs,
`kube-state-metrics` becomes completely unresponsive and requires a
reboot. This is somewhat easily reproducible (I'll provide a script in
an issue) and causes other critical workloads (KEDA, VPA) to fail in
weird ways.

Adds two flags:
- `server-read-timeout`
- `server-write-timeout`

Updates the metrics http server to set the `ReadTimeout` and
`WriteTimeout` to the configured values.
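
A minimal sketch of that approach (the flag names come from the commit message above; the defaults, handler, and wiring shown here are illustrative assumptions, not the actual kube-state-metrics code):

```go
package main

import (
	"flag"
	"log"
	"net/http"
	"time"
)

// metricsHandler is a stand-in for the real /metrics handler.
func metricsHandler() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("# metrics would be rendered here\n"))
	})
}

func main() {
	// Assumed defaults; the real flag defaults may differ.
	readTimeout := flag.Duration("server-read-timeout", 60*time.Second,
		"maximum duration for reading an incoming request")
	writeTimeout := flag.Duration("server-write-timeout", 60*time.Second,
		"maximum duration before timing out a response write")
	flag.Parse()

	mux := http.NewServeMux()
	mux.Handle("/metrics", metricsHandler())

	srv := &http.Server{
		Addr:         ":8080",
		Handler:      mux,
		ReadTimeout:  *readTimeout,  // slow or stalled clients are cut off
		WriteTimeout: *writeTimeout, // an overlong scrape response is aborted
	}
	log.Fatal(srv.ListenAndServe())
}
```

With the timeouts set, a scrape that can no longer be served in time fails fast instead of piling up connections, so the process stays responsive rather than appearing locked up.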