
losetup in buildroot VM does not support -j: MapBlockVolume failed: exit status 32 #8284

Closed
anencore94 opened this issue May 27, 2020 · 8 comments · Fixed by #10255 or #10704
Labels
area/guest-vm: General configuration issues with the minikube guest VM
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@anencore94
Contributor

anencore94 commented May 27, 2020

Steps to reproduce the issue:

  1. Create a PV, PVC, and pod with spec.volumeMode: Block
  • The same YAML file succeeds on other k8s clusters installed with kubespray and kubeadm.
  • The same YAML, differing only in spec.volumeMode: Filesystem, also succeeds.
  • A minimal sketch of such manifests appears after the event logs below.
  2. However, creating the pod fails in minikube:
    The pod gets stuck in ContainerCreating status, and the error message differs depending on the minikube version (the errors in describe pod, journalctl -u kubelet, and minikube logs were similar, so only the pod-describe events are included).
  • in minikube v1.10.1 (k8s v1.18.2)
Events:
  Type     Reason           Age    From               Message
  ----     ------           ----   ----               -------
  Normal   Scheduled        5m25s  default-scheduler  Successfully assigned default/busybox-vol to minikube
  Warning  FailedMapVolume  5m25s  kubelet, minikube  MapVolume.MapBlockVolume failed for volume "example-pv-volume" : blkUtil.MapDevice failed. devicePath: /home/docker/aaa, globalMapPath:/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume, podUID: dd860442-0b3f-4dd3-bf13-7994f0d93b80, bindMount: true: failed to bind mount devicePath: /home/docker/aaa to linkPath /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80 --scope -- mount  -o bind /home/docker/aaa /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80
Output: Running scope as unit: run-r3be8748ee1704f7d9b575e971e6acc8b.scope
mount: /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80: mount point is not a directory.
  Warning  FailedMapVolume  5m24s  kubelet, minikube  MapVolume.MapBlockVolume failed for volume "example-pv-volume" : blkUtil.MapDevice failed. devicePath: /home/docker/aaa, globalMapPath:/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume, podUID: dd860442-0b3f-4dd3-bf13-7994f0d93b80, bindMount: true: failed to bind mount devicePath: /home/docker/aaa to linkPath /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80 --scope -- mount  -o bind /home/docker/aaa /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80
Output: Running scope as unit: run-re84bf69ccf79445dbbc71358e5b23c43.scope
mount: /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80: mount point is not a directory.
  Warning  FailedMapVolume  5m24s  kubelet, minikube  MapVolume.MapBlockVolume failed for volume "example-pv-volume" : blkUtil.MapDevice failed. devicePath: /home/docker/aaa, globalMapPath:/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume, podUID: dd860442-0b3f-4dd3-bf13-7994f0d93b80, bindMount: true: failed to bind mount devicePath: /home/docker/aaa to linkPath /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/dd860442-0b3f-4dd3-bf13-7994f0d93b80: mount failed: exit status 32
  • in minikube v1.8.2 (k8s v1.17.1)
    (in this version the node hostname is m01, not minikube)
Events:
  Type     Reason           Age              From               Message
  ----     ------           ----             ----               -------
  Normal   Scheduled        5s               default-scheduler  Successfully assigned default/busybox-vol to m01
  Warning  FailedMapVolume  1s (x4 over 5s)  kubelet, m01       MapVolume.MapBlockVolume failed for volume "example-pv-volume" : blkUtil.AttachFileDevice failed. globalMapPath:/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume, podUID: 43ef6e11-853f-46fa-a55b-99305d629cbd: GetLoopDevice failed for path /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/43ef6e11-853f-46fa-a55b-99305d629cbd: losetup -j /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/example-pv-volume/43ef6e11-853f-46fa-a55b-99305d629cbd failed: exit status 1
  • in minikube v1.10.1-beta2 (k8s v1.15.3 and v1.18.1)
    • v1.15.3 reports nothing specific, only FailedMapVolume
    • v1.18.1 reports the same error as minikube v1.10.1 (k8s v1.18.2)
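
For reference, here is a minimal sketch of the manifests from step 1 (as mentioned above). The volume name example-pv-volume, pod name busybox-vol, and backing file path /home/docker/aaa come from the event logs; the PVC name, capacity, storage class, and in-container devicePath are illustrative assumptions:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-volume
spec:
  capacity:
    storage: 1Gi                  # illustrative size
  volumeMode: Block               # the failing mode; Filesystem works
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage # illustrative class
  local:
    path: /home/docker/aaa        # backing file from the events above
  nodeAffinity:                   # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - {key: kubernetes.io/hostname, operator: In, values: [minikube]}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc               # hypothetical name
spec:
  volumeMode: Block
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-vol
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeDevices:                # raw block device instead of volumeMounts
    - name: vol
      devicePath: /dev/xvda       # illustrative in-container device path
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: example-pvc
EOF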

Full output of failed command:

  • I guess this happens because the losetup binary in minikube lacks the needed options; the error message in minikube v1.8.2 is the hint
    • related issues in kubernetes/kubernetes:
  1. Output of losetup in minikube
$ sudo ls /usr/sbin -al | grep losetup
lrwxrwxrwx 1 root root      14 May 11 21:37 losetup -> ../bin/busybox

$ losetup -j
losetup: invalid option -- 'j'
BusyBox v1.29.3 (2020-05-11 14:37:18 PDT) multi-call binary.

Usage: losetup [-r] [-o OFS] {-f|LOOPDEV} FILE - associate loop devices
	losetup -d LOOPDEV - disassociate
	losetup -a - show status
	losetup -f - show next free loop device

	-o OFS	Start OFS bytes into FILE
	-r	Read-only
	-f	Show/use next free loop device
  2. Expected output of losetup, as on Ubuntu (5.3.0-51-generic)
$ losetup --help | grep j
 -j, --associated <file>       list all devices associated with <file>
  • It seems the losetup shipped in minikube does not support the options kubelet needs.

Full output of minikube start command used, if not already included:

  • Ubuntu 18.04
  • minikube v1.10.1, v1.10.1-beta2, v1.8.2
  • kubernetes v1.18.2, etc.
  • virtualbox
  • Docker 19.03.8
  • Enabled addons: default-storageclass, storage-provisioner
    • also tried with disabling storage-provisioner
@anencore94 anencore94 changed the title losetup bin is invalid in minikube losetup is invalid in minikube May 27, 2020
@tstromberg tstromberg changed the title losetup is invalid in minikube losetup in buildroot VM does not support -j: MapBlockVolume failed: exit status 32 May 28, 2020
@tstromberg
Contributor

The root cause here is that there are different versions of losetup. It seems that this functionality is expecting one that supports -j. Just curious - does this work if you use minikube start --driver=docker?

The Docker driver uses an Ubuntu based environment rather than the buildroot VM.
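
A quick way to compare, if you want to see it for yourself (a sketch; the --version output shown is what a collaborator reports later in this thread):

# Start a cluster with the Docker driver and check which losetup it ships.
$ minikube start --driver=docker
$ minikube ssh -- losetup --version
losetup from util-linux 2.34   # util-linux build, supports -j
# The buildroot VM's BusyBox losetup instead answers with its usage text.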

@tstromberg tstromberg added area/guest-vm General configuration issues with the minikube guest VM help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence. labels May 28, 2020
@anencore94
Contributor Author

anencore94 commented May 29, 2020

@tstromberg Yep, minikube with the Docker driver works well, and I can see that losetup in the Docker-driver minikube has the -j option, so your comment looks right. Thanks.

Should the VirtualBox-driver minikube base image then be changed to ship that losetup version?

@afbjorklund
Collaborator

afbjorklund commented May 29, 2020

I think we can change from busybox to util-linux by setting BR2_PACKAGE_UTIL_LINUX_LOSETUP=y

That's the one that kicbase uses:

docker@docker:~$ losetup --version
losetup from util-linux 2.34

This is in the buildroot config file: deploy/iso/minikube-iso/configs/minikube_defconfig
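
A sketch of that change, assuming the ISO is rebuilt afterwards (the build invocation is an assumption; check minikube's ISO build docs for the exact target):

# Enable util-linux's losetup in the buildroot config...
$ echo 'BR2_PACKAGE_UTIL_LINUX_LOSETUP=y' >> deploy/iso/minikube-iso/configs/minikube_defconfig
# ...then rebuild the ISO (assumed target name).
$ make out/minikube.iso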

nixpanic added a commit to nixpanic/ceph-csi that referenced this issue Jul 28, 2020
minikube has /sbin/losetup from Busybox, and that does not work with
raw-block PVCs. Use the losetup executable from the host in the VM
instead.

See-also: kubernetes/minikube#8284
Signed-off-by: Niels de Vos <ndevos@redhat.com>
@nixpanic
Contributor

I just hit this problem on minikube 1.12.1 and kvm2 too. Copying the /sbin/losetup executable from the CentOS 8 host into the minikube VM makes it work (but is a very ugly workaround).

I'll try to build an ISO with BR2_PACKAGE_UTIL_LINUX_LOSETUP=y and see if that works.
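
For anyone needing that interim workaround, a sketch of the copy (the SSH key path and docker user are what minikube's VM drivers typically use; whether the host binary runs against the VM's libraries is not guaranteed, though it worked from the CentOS 8 host above):

# Replace the BusyBox losetup symlink in the VM with the host's util-linux binary.
$ scp -i ~/.minikube/machines/minikube/id_rsa \
      /sbin/losetup docker@$(minikube ip):/tmp/losetup
$ minikube ssh -- sudo mv /tmp/losetup /usr/sbin/losetup
$ minikube ssh -- losetup --version   # should now report util-linux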

mergify bot pushed a commit to ceph/ceph-csi that referenced this issue Jul 31, 2020
minikube has /sbin/losetup from Busybox, and that does not work with
raw-block PVCs. Use the losetup executable from the host in the VM
instead.

See-also: kubernetes/minikube#8284
Signed-off-by: Niels de Vos <ndevos@redhat.com>
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 26, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 25, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to the /close command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

antonmyagkov added a commit to ydb-platform/nbs that referenced this issue Aug 6, 2024
issue: #1588

Changes:

• Remove links to Nebius internal infrastructure (container registry, nbs pull secrets)
• Build the local CSI Driver without publishing it to a remote container registry
• Fix nbs-csi-driver parameters and mountPath
• Remove nbs-server manifests (run nbsd-lightweight instead)
• Run minikube with the docker driver to avoid the "blkUtil.AttachFileDevice failed" error: losetup in buildroot VM does not support -j: MapBlockVolume failed: exit status 32 kubernetes/minikube#8284
• Add instructions for creating a volume manually via blockstore-client