No topology key found on hw nodes #400

Closed
SNB-hz opened this issue Mar 27, 2023 · 9 comments · Fixed by #743
Labels: bug, pinned

Comments

SNB-hz commented Mar 27, 2023

In clusters with hardware nodes, a new PVC and its workload can get stuck in the Pending state if they are scheduled without a nodeAffinity.

Steps to reproduce:

  • run a cluster that includes a hardware worker, and label the hw node with instance.hetzner.cloud/is-root-server=true as mentioned in the README
  • install CSI driver according to instructions
  • apply the test PVC and Pod mentioned in the README, using the default StorageClass with WaitForFirstConsumer volumeBindingMode (sketched below)
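
A minimal sketch of what such manifests look like (names, size, and image are illustrative, not the exact README contents):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: hcloud-volumes   # default StorageClass, volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-test-pod
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: csi-pvc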

Expected Behaviour:

hcloud-csi-controller should provide the desired / required topology constraints to the k8s scheduler, which then schedules the pod on a node fulfilling the topology requirements.
As the hardware node does not run csi-driver and cannot mount hetzner cloud volumes, the workload should not be scheduled there.

Observed Behaviour:

  • Both pvc and pod are stuck in Pending state.
  • the csi-provisioner container of the CSI controller Deployment logs this error:
'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "hcloud-volumes": error generating accessibility requirements: no topology key found on CSINode hardwarenode.testcluster

More Info:
Tested with csi-driver 2.1.1 as well as 2.2.0, together with csi-provisioner 3.4.0

  • the DaemonSet for hcloud-csi-node does not run on the hw node
  • because of this, the csinode object for the node lists no driver:
kubectl get csinode
NAME                     DRIVERS       AGE
virtualnode.testcluster     1           1d
hardwarenode.testcluster    0           1d
  • the csinode object of the virtual node looks ok:
kubectl get csinode virtualnode.testcluster -oyaml
apiVersion: storage.k8s.io/v1
kind: CSINode
...
spec:
  drivers:
  - allocatable:
      count: 16
    name: csi.hetzner.cloud
    nodeID: "12769030"
    topologyKeys:
    - csi.hetzner.cloud/location
  • the csinode object of the hardware node lists no driver and therefore no topology key, as the node intentionally runs no hcloud-csi-node pod due to the DaemonSet's nodeAffinity:
kubectl get csinode hardwarenode.testcluster -oyaml
apiVersion: storage.k8s.io/v1
kind: CSINode
...
spec:
  drivers: null

Theory

It seems we are hitting this issue in csi-provisioner.
As the hardware node has no csi-driver pod, its CSINode object lists no driver and no topology key. csi-provisioner fails while building the preferred topology for the scheduler, so the Pod and PVC never finish scheduling and remain Pending forever.
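
To quickly see which CSINode objects actually report a topology key, something like this works (plain kubectl, just custom output columns):

    kubectl get csinode -o custom-columns='NAME:.metadata.name,DRIVER:.spec.drivers[*].name,TOPOLOGY_KEYS:.spec.drivers[*].topologyKeys'

The hardware node shows up with <none> in both columns, which is exactly what csi-provisioner trips over.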

Workaround

This issue can be avoided by making sure the object that uses the PVC (StatefulSet, Pod etc.) cannot be scheduled on the hardware node in the first place. This can be done by specifying a nodeAffinity:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: instance.hetzner.cloud/is-root-server
                operator: NotIn
                values:
                - "true"

Proposed Solution

The external-provisioner issue lists a few possible solutions on the csi-driver side, such as running the csi-driver on all nodes, including hardware nodes.
The CSI controller would then need to be aware of which nodes are virtual and which are hardware when providing topology preferences to the k8s scheduler.

@hypery2k

I'm having the same issue; it seems that the wrong node gets selected:

- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      volume.beta.kubernetes.io/storage-provisioner: csi.hetzner.cloud
      volume.kubernetes.io/selected-node: production-agent-large-srd
      volume.kubernetes.io/storage-provisioner: csi.hetzner.cloud
    creationTimestamp: "2023-03-24T05:54:30Z"
    finalizers:
    - kubernetes.io/pvc-protection
    labels:
      app.kubernetes.io/component: primary
      app.kubernetes.io/instance: pcf-app
      app.kubernetes.io/name: postgresql
    name: data-pcf-app-postgresql-0
    namespace: pen-testing
    resourceVersion: "4164426"
    uid: 0c39bdac-5540-4a34-b274-151a6409cdbf
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 8Gi
    storageClassName: hcloud-volumes
    volumeMode: Filesystem
  status:
    phase: Pending
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      meta.helm.sh/release-name: reconmap-app
      meta.helm.sh/release-namespace: pen-testing
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-provisioner: csi.hetzner.cloud
      volume.kubernetes.io/selected-node: production-storage-yhq
      volume.kubernetes.io/storage-provisioner: csi.hetzner.cloud
    creationTimestamp: "2023-03-22T08:09:04Z"
    finalizers:
    - kubernetes.io/pvc-protection
    labels:
      app: mysql
      app.kubernetes.io/managed-by: Helm
    name: reconmap-app-mysql-pv-claim
    namespace: pen-testing
    resourceVersion: "3367563"
    uid: e355ac30-2136-4193-8264-04e33bc335c8
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
    storageClassName: hcloud-volumes
    volumeMode: Filesystem
    volumeName: pvc-e355ac30-2136-4193-8264-04e33bc335c8
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 20Gi
    phase: Bound

Seeing these logs:

17m         Normal    WaitForFirstConsumer   persistentvolumeclaim/data-pcf-app-postgresql-0   waiting for first consumer to be created before binding
12m         Normal    ExternalProvisioning   persistentvolumeclaim/data-pcf-app-postgresql-0   waiting for a volume to be created, either by external provisioner "csi.hetzner.cloud" or manually created by system administrator
12m         Normal    Provisioning           persistentvolumeclaim/data-pcf-app-postgresql-0   External provisioner is provisioning volume for claim "pen-testing/data-pcf-app-postgresql-0"
12m         Warning   ProvisioningFailed     persistentvolumeclaim/data-pcf-app-postgresql-0   failed to provision volume with StorageClass "hcloud-volumes": error generating accessibility requirements: no topology key found on CSINode production-agent-large-srd
10m         Normal    WaitForFirstConsumer   persistentvolumeclaim/data-pcf-app-postgresql-0   waiting for first consumer to be created before binding
6s          Normal    ExternalProvisioning   persistentvolumeclaim/data-pcf-app-postgresql-0   waiting for a volume to be created, either by external provisioner "csi.hetzner.cloud" or manually created by system administrator
61s         Normal    Provisioning           persistentvolumeclaim/data-pcf-app-postgresql-0   External provisioner is provisioning volume for claim "pen-testing/data-pcf-app-postgresql-0"
61s         Warning   ProvisioningFailed     persistentvolumeclaim/data-pcf-app-postgresql-0   failed to provision volume with StorageClass "hcloud-volumes": error generating accessibility requirements: no topology key found on CSINode production-agent-large-srd

When manually updating the volume.kubernetes.io/selected-node annotation to production-storage-yhq, it works.
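
For anyone who wants to apply the same manual fix, a one-liner like this should do it (PVC, namespace, and node names taken from the output above; adjust for your cluster):

    kubectl -n pen-testing annotate pvc data-pcf-app-postgresql-0 \
      volume.kubernetes.io/selected-node=production-storage-yhq --overwrite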

samcday commented Apr 15, 2023

As per the hint in the linked issue, perhaps this can be easily solved by setting allowedTopologies on the StorageClass? That is, assuming the StorageClass has an allowedTopologies selector that accurately matches hcloud nodes only, we can be sure the Kubernetes scheduler won't try to schedule a Pod with hcloud PVC attachment(s) on non-hcloud nodes.
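
For illustration, such a StorageClass might look roughly like this; the label key and value are assumptions here (whatever label reliably marks cloud-only nodes, which would also have to be reported as a topology key by the driver):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hcloud-volumes
    provisioner: csi.hetzner.cloud
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    allowedTopologies:
      - matchLabelExpressions:
          - key: instance.hetzner.cloud/provided-by   # assumed label key, for illustration only
            values:
              - cloud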

This only solves the issue for Kube; I have no idea about Swarm/Nomad.

@github-actions

This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.

@github-actions github-actions bot added the Stale label Jul 14, 2023
@github-actions github-actions bot closed this as not planned Aug 14, 2023
@apricote apricote removed the Stale label Aug 14, 2023
@apricote apricote reopened this Aug 14, 2023
@github-actions

This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.

@github-actions github-actions bot added the Stale label Nov 12, 2023
@apricote apricote added bug Something isn't working pinned and removed Stale labels Nov 13, 2023
@lukasmetzner lukasmetzner self-assigned this Oct 9, 2024
lukasmetzner added a commit that referenced this issue Oct 29, 2024
Due to a bug in the scheduler, a node with no driver instance might be
picked and the volume gets stuck in Pending, as the "no capacity ->
reschedule" recovery is never triggered
[[0]](kubernetes/kubernetes#122109),
[[1]](kubernetes-csi/external-provisioner#544).

- See #400

---------

Co-authored-by: lukasmetzner <lukas@metzner.io>
Co-authored-by: Julian Tölle <julian.toelle@hetzner-cloud.de>

samcday commented Oct 29, 2024

Great to see such a thorough and satisfying conclusion/solution here! 👍

@apricote (Member)

(never sure if you are sarcastic or not)

You can check the updated docs to learn more about it: https://github.com/hetznercloud/csi-driver/tree/main/docs/kubernetes#integration-with-root-servers

We ended up going with the allowedTopologies in the StorageClass as you suggested in #400 (comment)

The necessary label is automatically added by hcloud-cloud-controller-manager if the customer is running that in their cluster.
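
If you want to double-check that the label is present on your nodes, a query like this should do it (label key assumed from the linked docs; verify against them):

    kubectl get nodes -L instance.hetzner.cloud/provided-by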

samcday commented Oct 29, 2024

I'm impressed that my customary acerbic wit has left such an indelible mark ;)

I wasn't being sarcastic at all! I had also tripped over the corresponding stuff in cluster-autoscaler - hence being impressed with the thoroughness of the fix here! (and of course that the fix took a similar shape to what I proposed also leaves me feeling additionally chuffed xD)

lukasmetzner added a commit that referenced this issue Nov 11, 2024
Due to a bug in the scheduler, a node with no driver instance might be
picked and the volume gets stuck in Pending, as the "no capacity ->
reschedule" recovery is never triggered
[[0]](kubernetes/kubernetes#122109),
[[1]](kubernetes-csi/external-provisioner#544).

- See #400

---------

Co-authored-by: lukasmetzner <lukas@metzner.io>
Co-authored-by: Julian Tölle <julian.toelle@hetzner-cloud.de>
lukasmetzner pushed a commit that referenced this issue Nov 12, 2024
### ⚠️ Removed Feature from v2.10.0

We have reverted a workaround for an upstream issue in the Kubernetes
scheduler where nodes without the CSI Plugin (e.g. Robot servers) would
still be considered for scheduling, but then creating and attaching the
volume fails with no automatic reconciliation of this error.

Due to variations in the CSI specification implementation, these changes
disrupted Nomad clusters, requiring us to revert them. We are actively
working on placing this workaround behind a feature flag, allowing
Kubernetes users to bypass the upstream issue.

This affects you, if you have set the Helm value
`allowedTopologyCloudServer` in v2.10.0. If you are affected by the
Kubernetes upstream issue, we will provide a fix in the next minor
version v2.11.0.

Learn more about this in
[#400](#400) and
[#771](#771).

### Bug Fixes

- reverted NodeGetInfo response as it breaks Nomad clusters (#776)

Co-authored-by: releaser-pleaser <>
@lukasmetzner (Contributor)

Hi,

We encountered compatibility issues with Nomad clusters due to differences in CSI Spec implementations, which led us to revert our recent changes. We’ve now released v2.10.1 to address this. Moving forward, we’ll implement a feature flag to reintroduce this workaround, scheduled for release in v2.11.0.

We apologize for any inconvenience this may have caused.

Best regards,
Lukas

@lukasmetzner lukasmetzner reopened this Nov 12, 2024
lukasmetzner added a commit that referenced this issue Nov 19, 2024
We are reintroducing a feature originally present in v2.10.0 to prevent
pods from getting stuck in the `pending` state in clusters with
non-cloud nodes. This feature is now optional and can be enabled via the
Helm Chart. By default, it remains disabled to avoid compatibility
issues with Nomad clusters, which have a different CSI spec
implementation.

Learn more about it in #400.
@lukasmetzner (Contributor)

v2.11.0 got released with the new feature flag enableProvidedByTopology 🎉
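
A sketch of enabling it via the Helm chart (repo and chart names are assumptions here, and the exact values key may differ; check the chart's values.yaml):

    helm upgrade --install hcloud-csi hcloud/hcloud-csi \
      --namespace kube-system \
      --set enableProvidedByTopology=true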
