
Cherry-pick #16128 to 7.6: Improve kubernetes.pod.cpu.usage.limit.pct field description #16193

Merged 1 commit into elastic:7.6 on Feb 7, 2020

Conversation

ChrsMark (Member) commented on Feb 7, 2020

Cherry-pick of PR #16128 to 7.6 branch. Original message:

What does this PR do?

This PR improves the kubernetes.pod.cpu.usage.limit.pct field description to make it clear how the pct is calculated when at least one container of a Pod has no CPU limit.

Why is it important?

When at least one container of a Pod has no CPU limit, the situation can be tricky, since in that case limit.pct falls back to node.pct.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo3
  namespace: beats
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"
  - name: cpu-demo-ctr2
    image: vish/stress
    args:
    - -cpus
    - "1"

In this case the sum of CoresLimit calculated at

coresLimit += perfMetrics.ContainerCoresLimit.GetWithDefault(cuid, nodeCores)

will be greater than nodeCores, because cpu-demo-ctr2 has no limit and hence adds nodeCores to the sum.
Given this fact, later on at

if coresLimit > nodeCores {
    coresLimit = nodeCores
}

coresLimit becomes equal to nodeCores, making cpu.usage.node.pct and cpu.usage.limit.pct equal as well.
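To make the fall-back concrete, here is a minimal standalone Go sketch of the behaviour described above. It is not the actual Beats code: the container struct, the podCPULimitPct helper, and the numbers in main are illustrative assumptions.

package main

import "fmt"

// container holds a per-container CPU limit in cores;
// a nil coresLimit means the container has no limit configured.
type container struct {
	name       string
	coresLimit *float64
}

// podCPULimitPct mirrors the logic quoted above: containers without a
// limit contribute nodeCores to the sum (GetWithDefault), and the sum is
// then capped at nodeCores, so the result collapses to usage/nodeCores,
// i.e. the same value as node.pct.
func podCPULimitPct(usageCores, nodeCores float64, containers []container) float64 {
	coresLimit := 0.0
	for _, c := range containers {
		if c.coresLimit != nil {
			coresLimit += *c.coresLimit
		} else {
			// no limit set -> fall back to the node's cores
			coresLimit += nodeCores
		}
	}
	if coresLimit > nodeCores {
		coresLimit = nodeCores
	}
	return usageCores / coresLimit
}

func main() {
	limit := 1.0
	containers := []container{
		{name: "cpu-demo-ctr", coresLimit: &limit}, // limits: cpu "1"
		{name: "cpu-demo-ctr2"},                    // no limit
	}
	usage, nodeCores := 2.0, 4.0
	// Both print 0.50: limit.pct equals node.pct because cpu-demo-ctr2
	// has no limit, so coresLimit (1 + 4 = 5) is capped at nodeCores (4).
	fmt.Printf("limit.pct=%.2f node.pct=%.2f\n",
		podCPULimitPct(usage, nodeCores, containers),
		usage/nodeCores)
}

If every container in the Pod had an explicit CPU limit, coresLimit would stay below nodeCores and limit.pct would diverge from node.pct, which is exactly the distinction the improved field description calls out.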

Related issues

cc @jsoriano @exekias

ChrsMark requested a review from a team as a code owner on February 7, 2020 12:50
ChrsMark merged commit 3c6fb98 into elastic:7.6 on Feb 7, 2020