Use chainguard dev image and set Pod Security Standard to restricted profile for the Kubecost copier #32

Open · wants to merge 2 commits into main
Conversation

@avrodrigues5 (Collaborator) commented on May 22, 2024

What does this PR change?

  • Uses the cgr.dev/chainguard/wolfi-base:latest image
  • Introduces the Pod Security Standards Restricted profile for the copier pod (see the enforcement sketch after this list)
  • Adds standard labels to the copier pod
  • Moves the information about the disk autoscaler operation from the debug log level to info, since info is the default level
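
For context, the Restricted profile can also be enforced at the namespace level through Pod Security Admission. This is not part of this PR; it is a minimal sketch, assuming the disk-scaler-demo namespace used in the example below, of how a cluster operator could require the Restricted profile for every pod in that namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: disk-scaler-demo
  labels:
    # Reject any pod that does not satisfy the Restricted profile
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Also warn and audit on violations
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted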

Does this PR rely on any other PRs?

None

How does this PR impact users?

Better security profile for the intermediate copier pod

Links to Issues or tickets this PR addresses or fixes

Closes #8

What risks are associated with merging this PR? What is required to fully test this PR?

It was tested for scale-up and scale-down operations.

The pod that gets created has the following YAML:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-05-22T23:18:14Z"
  deletionGracePeriodSeconds: 30
  deletionTimestamp: "2024-05-22T23:18:56Z"
  labels:
    app.kubernetes.io/component: copy-pod
    app.kubernetes.io/instance: kubecost-data-mover-pod-lonjx
    app.kubernetes.io/managed-by: kubecost-disk-autoscaler
    app.kubernetes.io/name: kubecost-data-mover-pod
    app.kubernetes.io/part-of: kubecost-disk-autoscaler
    app.kubernetes.io/version: ""
  name: kubecost-data-mover-pod-lonjx
  namespace: disk-scaler-demo
  resourceVersion: "89982731"
  uid: be70a682-7e35-40c1-8ada-222e8ef0b554
spec:
  containers:
  - args:
    - sleep infinity
    command:
    - /bin/sh
    - -c
    - --
    image: cgr.dev/chainguard/wolfi-base:latest
    imagePullPolicy: Always
    name: temp-container
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /oldData
      name: orig-vol-mount
    - mountPath: /newData
      name: backup-vol-mount
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-q9vz4
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-192-168-149-220.us-east-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: orig-vol-mount
    persistentVolumeClaim:
      claimName: test-pvc
  - name: backup-vol-mount
    persistentVolumeClaim:
      claimName: test-pvc-iylfc
  - name: kube-api-access-q9vz4
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-05-22T23:18:24Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-05-22T23:18:19Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-05-22T23:18:24Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-05-22T23:18:24Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-05-22T23:18:19Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://4d91c09e5880bede5bccfd3ee1ee7e082dbcd659bfc7d321ec30a52f484332b8
    image: cgr.dev/chainguard/wolfi-base:latest
    imageID: cgr.dev/chainguard/wolfi-base@sha256:d8386fa1d2ebddb69689fdb639817004d1ba97ce358e26ff06a8c21e02fc11ae
    lastState: {}
    name: temp-container
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-05-22T23:18:24Z"
  hostIP: 192.168.149.220
  hostIPs:
  - ip: 192.168.149.220
  phase: Running
  podIP: 192.168.131.77
  podIPs:
  - ip: 192.168.131.77
  qosClass: BestEffort
  startTime: "2024-05-22T23:18:19Z"

How was this PR tested?

By creating two deployments, waiting for Kubecost to produce a recommendation, and then letting the disk autoscaler perform its respective function.

@avrodrigues5 requested a review from chipzoller on May 22, 2024
@chipzoller (Collaborator) left a comment


Glad to see the CI is already paying off.

Can we add resource requests and limits here so the copier gets the highest level of QoS? I don't imagine, even with large datasets, there is a particularly large amount of CPU or memory consumed. Have you done any profiling/monitoring here to get a sense of what we could configure?
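
For reference, Kubernetes assigns the Guaranteed QoS class only when every container in the pod sets CPU and memory requests equal to limits (the pod above currently lands in BestEffort). A minimal sketch for the temp-container spec follows, with placeholder values that would need the profiling mentioned above to confirm:

containers:
- name: temp-container
  image: cgr.dev/chainguard/wolfi-base:latest
  resources:
    requests:
      cpu: 100m        # placeholder value, to be confirmed by profiling the copy workload
      memory: 128Mi    # placeholder value
    limits:
      cpu: 100m        # must equal the request for Guaranteed QoS
      memory: 128Mi    # must equal the request for Guaranteed QoS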

Successfully merging this pull request may close these issues.

Better defaults for intermediary Pod data mover