guestfs does not create root directory for mounting #55349

Closed
tash opened this issue Nov 18, 2019 · 4 comments
Labels
info-needed waiting for more info

tash commented Nov 18, 2019

Description of Issue

The relevant loop in salt/modules/guestfs.py (mount):

    while True:
        if os.listdir(root):
            # Stuff is in there, don't use it
            hash_type = getattr(hashlib, __opts__.get('hash_type', 'md5'))
            rand = hash_type(os.urandom(32)).hexdigest()
            root = os.path.join(
                tempfile.gettempdir(),
                'guest',
                location.lstrip(os.sep).replace('/', '.') + rand
            )
            log.debug('Establishing new root as %s', root)
        else:
            break
    cmd = 'guestmount -i -a {0} --{1} {2}'.format(location, access, root)
    __salt__['cmd.run'](cmd, python_shell=False)

If the directory root has files in it, a new temporary root directory is computed, but not created. Some versions of guestmount do not create the given root directory implicitly.
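
For illustration, os.listdir() raises OSError for a path that does not exist, so once the loop moves on to a freshly computed (but never created) root, it fails before guestmount is even run. A minimal standalone sketch (the path name below is made up):

    import os
    import tempfile

    # Mimics the second pass through the loop: the new root was computed but never created.
    missing_root = os.path.join(tempfile.gettempdir(), 'guest', 'example-nonexistent-root')
    try:
        os.listdir(missing_root)
    except OSError as exc:
        print(exc)  # [Errno 2] No such file or directory: '/tmp/guest/example-nonexistent-root'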

salt-call -l debug virt.init gitlab 2 4096 start=False image=/central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2

Logging:

[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/testdefault.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/testdefault.conf
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: test02
[DEBUG   ] Configuration file path: /etc/salt/minion
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG   ] Grains refresh requested. Refreshing grains.
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Including configuration from '/etc/salt/minion.d/testdefault.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/testdefault.conf
[DEBUG   ] Connecting to master. Attempt 1 of 1
[DEBUG   ] "salt-master" Not an IP address? Assuming it is a hostname.
[DEBUG   ] Master URI: tcp://10.10.81.124:4506
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'test02', u'tcp://10.10.81.124:4506')
[DEBUG   ] Generated random reconnect delay between '1000ms' and '11000ms' (9280)
[DEBUG   ] Setting zmq_reconnect_ivl to '9280ms'
[DEBUG   ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'test02', u'tcp://10.10.81.124:4506', 'clear')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.10.81.124:4506
[DEBUG   ] Trying to connect to: tcp://10.10.81.124:4506
[DEBUG   ] salt.crypt.get_rsa_pub_key: Loading public key
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] salt.crypt._get_key_with_evict: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.get_rsa_pub_key: Loading public key
[DEBUG   ] Connecting the Minion to the Master publish port, using the URI: tcp://10.10.81.124:4505
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Determining pillar cache
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'test02', u'tcp://10.10.81.124:4506', u'aes')
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'test02', u'tcp://10.10.81.124:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.10.81.124:4506
[DEBUG   ] Trying to connect to: tcp://10.10.81.124:4506
[DEBUG   ] salt.crypt.get_rsa_key: Loading private key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded virt.init
[DEBUG   ] LazyLoaded config.get
[DEBUG   ] Using hyperisor kvm
[DEBUG   ] NIC profile is [{u'mac': u'52:54:00:1C:64:32', u'source': u'virbr0', u'model': u'virtio', u'type': u'bridge', u'name': u'eth0'}]
[DEBUG   ] /central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2 image from module arguments will be used for disk "system" instead of None
[DEBUG   ] Creating disk for VM [ gitlab ]: {u'system': {u'model': u'virtio', u'format': u'qcow2', u'image': u'/central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2', u'pool': u'/nas-01/images', u'size': u'8192'}}
[DEBUG   ] Image directory from config option `virt.images` is /nas-01/images
[DEBUG   ] Image destination will be /nas-01/images/gitlab/system.qcow2
[DEBUG   ] Image destination directory is /nas-01/images/gitlab
[DEBUG   ] Create disk from specified image /central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2
[DEBUG   ] LazyLoaded cp.cache_file
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'test02', u'tcp://10.10.81.124:4506', u'aes')
[DEBUG   ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'test02', u'tcp://10.10.81.124:4506')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://10.10.81.124:4506
[DEBUG   ] Trying to connect to: tcp://10.10.81.124:4506
[DEBUG   ] LazyLoaded cmd.run
[INFO    ] Executing command 'qemu-img info /central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2' in directory '/root'
[DEBUG   ] stdout: image: /central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2
file format: qcow2
virtual size: 24G (25769803776 bytes)
disk size: 231M
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
[DEBUG   ] output: image: /central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2
file format: qcow2
virtual size: 24G (25769803776 bytes)
disk size: 231M
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
[DEBUG   ] Copying /central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2 to /nas-01/images/gitlab/system.qcow2
[DEBUG   ] Resize qcow2 image to 8192M
[INFO    ] Executing command 'qemu-img resize /nas-01/images/gitlab/system.qcow2 8192M' in directory '/root'
[ERROR   ] Command '[u'qemu-img', u'resize', u'/nas-01/images/gitlab/system.qcow2', u'8192M']' failed with return code: 1
[ERROR   ] stdout: qemu-img: qcow2 doesn't support shrinking images yet
[ERROR   ] retcode: 1
[ERROR   ] Command 'qemu-img resize /nas-01/images/gitlab/system.qcow2 8192M' failed with return code: 1
[ERROR   ] output: qemu-img: qcow2 doesn't support shrinking images yet

[DEBUG   ] Apply umask and remove exec bit
[DEBUG   ] Seed command is seed.apply
[DEBUG   ] LazyLoaded seed.apply
[DEBUG   ] LazyLoaded file.stats
[DEBUG   ] Mounting file at /nas-01/images/gitlab/system.qcow2
[DEBUG   ] LazyLoaded guestfs.mount
[DEBUG   ] LazyLoaded mount.mount
[DEBUG   ] Using root /tmp/guest/nas-01.images.gitlab.system.qcow2
[DEBUG   ] Establishing new root as /tmp/guest/nas-01.images.gitlab.system.qcow237768243661fc3142cbce85d693868a41915eb36625bc58dbda23ef284546b1f
[ERROR   ] An un-handled exception was caught by salt's global exception handler:
OSError: [Errno 2] No such file or directory: '/tmp/guest/nas-01.images.gitlab.system.qcow237768243661fc3142cbce85d693868a41915eb36625bc58dbda23ef284546b1f'
Traceback (most recent call last):
  File "/usr/bin/salt-call", line 11, in <module>
    salt_call()
  File "/usr/lib/python2.7/site-packages/salt/scripts.py", line 410, in salt_call
    client.run()
  File "/usr/lib/python2.7/site-packages/salt/cli/call.py", line 57, in run
    caller.run()
  File "/usr/lib/python2.7/site-packages/salt/cli/caller.py", line 134, in run
    ret = self.call()
  File "/usr/lib/python2.7/site-packages/salt/cli/caller.py", line 212, in call
    ret['return'] = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/salt/modules/virt.py", line 773, in init
    priv_key=priv_key,
  File "/usr/lib/python2.7/site-packages/salt/modules/seed.py", line 142, in apply_
    mpt = _mount(path, ftype, mount_point)
  File "/usr/lib/python2.7/site-packages/salt/modules/seed.py", line 80, in _mount
    mpt = __salt__['mount.mount'](path, device=root, util=util)
  File "/usr/lib/python2.7/site-packages/salt/modules/mount.py", line 1186, in mount
    return __salt__['guestfs.mount'](name, root=device)
  File "/usr/lib/python2.7/site-packages/salt/modules/guestfs.py", line 56, in mount
    if os.listdir(root):
OSError: [Errno 2] No such file or directory: '/tmp/guest/nas-01.images.gitlab.system.qcow237768243661fc3142cbce85d693868a41915eb36625bc58dbda23ef284546b1f'
Traceback (most recent call last):
  File "/usr/bin/salt-call", line 11, in <module>
    salt_call()
  File "/usr/lib/python2.7/site-packages/salt/scripts.py", line 410, in salt_call
    client.run()
  File "/usr/lib/python2.7/site-packages/salt/cli/call.py", line 57, in run
    caller.run()
  File "/usr/lib/python2.7/site-packages/salt/cli/caller.py", line 134, in run
    ret = self.call()
  File "/usr/lib/python2.7/site-packages/salt/cli/caller.py", line 212, in call
    ret['return'] = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/salt/modules/virt.py", line 773, in init
    priv_key=priv_key,
  File "/usr/lib/python2.7/site-packages/salt/modules/seed.py", line 142, in apply_
    mpt = _mount(path, ftype, mount_point)
  File "/usr/lib/python2.7/site-packages/salt/modules/seed.py", line 80, in _mount
    mpt = __salt__['mount.mount'](path, device=root, util=util)
  File "/usr/lib/python2.7/site-packages/salt/modules/mount.py", line 1186, in mount
    return __salt__['guestfs.mount'](name, root=device)
  File "/usr/lib/python2.7/site-packages/salt/modules/guestfs.py", line 56, in mount
    if os.listdir(root):
OSError: [Errno 2] No such file or directory: '/tmp/guest/nas-01.images.gitlab.system.qcow237768243661fc3142cbce85d693868a41915eb36625bc58dbda23ef284546b1f'

Setup

libvirt-bash-completion-4.5.0-10.el7_6.2.x86_64
libvirt-client-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-nwfilter-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-lxc-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-storage-disk-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-storage-4.5.0-10.el7_6.2.x86_64
libvirt-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.2.x86_64
libvirt-libs-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-network-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-config-nwfilter-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-qemu-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-storage-scsi-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-storage-logical-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-secret-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-nodedev-4.5.0-10.el7_6.2.x86_64
libvirt-glib-1.0.0-1.el7.x86_64
libvirt-python-4.5.0-1.el7.x86_64
libvirt-daemon-driver-storage-core-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-config-network-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-storage-mpath-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-storage-rbd-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-interface-4.5.0-10.el7_6.2.x86_64

qemu-kvm-common-ev-2.10.0-21.el7_5.7.1.x86_64
qemu-kvm-tools-ev-2.10.0-21.el7_5.7.1.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.2.x86_64
libvirt-daemon-driver-qemu-4.5.0-10.el7_6.2.x86_64
qemu-kvm-ev-2.10.0-21.el7_5.7.1.x86_64
qemu-img-ev-2.10.0-21.el7_5.7.1.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch

libguestfs-1.38.2-12.el7.x86_64
libguestfs-tools-c-1.38.2-12.el7.x86_64
libguestfs-tools-1.38.2-12.el7.noarch
perl-Sys-Guestfs-1.38.2-12.el7.x86_64
libguestfs-bash-completion-1.38.2-12.el7.noarch

Steps to Reproduce Issue

Call is made locally on the Hypervisor (test02):
salt-call -l debug virt.init gitlab 2 4096 start=False image=/central/vm/test/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Snapshot9.114.qcow2

Versions Report

Salt Version:
           Salt: 2018.3.4

Dependency Versions:
           cffi: 1.6.0
       cherrypy: Not Installed
       dateutil: 1.5
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.2
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: 2.14
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.5 (default, Oct 30 2018, 23:45:53)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.3.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4

System Versions:
           dist: centos 7.5.1804 Core
         locale: UTF-8
        machine: x86_64
        release: 3.10.0-957.el7.x86_64
         system: Linux
        version: CentOS Linux 7.5.1804 Core

tash commented Nov 18, 2019

    while True:
        if os.listdir(root):
            # Stuff is in there, don't use it
            hash_type = getattr(hashlib, __opts__.get('hash_type', 'md5'))
            rand = hash_type(os.urandom(32)).hexdigest()
            root = os.path.join(
                tempfile.gettempdir(),
                'guest',
                location.lstrip(os.sep).replace('/', '.') + rand
            )
            log.debug('Establishing new root as %s', root)
        else:
            break
    cmd = 'guestmount -i -a {0} --{1} {2}'.format(location, access, root)
    __salt__['cmd.run'](cmd, python_shell=False)

One possible fix is to create the directory in place:

    while True:
        if os.listdir(root):
            # Stuff is in there, don't use it
            hash_type = getattr(hashlib, __opts__.get('hash_type', 'md5'))
            rand = hash_type(os.urandom(32)).hexdigest()
            root = os.path.join(
                tempfile.gettempdir(),
                'guest',
                location.lstrip(os.sep).replace('/', '.') + rand
            )
            os.mkdir(root)
            log.debug('Establishing new root as %s', root)
        else:
            break
    cmd = 'guestmount -i -a {0} --{1} {2}'.format(location, access, root)
    __salt__['cmd.run'](cmd, python_shell=False)

waynew commented Dec 12, 2019

I haven't been able to reproduce this yet - I'm just trying to create an image and then mount it

dd if=/dev/zero of=/tmp/fun.img bs=1M count=10
mkfs.ext4 /tmp/fun.img

I then downloaded the alpine mini root filesystem, extracted it, mounted my image, and copied the filesystem over. I may have had to run e2fsck -f /tmp/fun.img.

Then I ran

salt-call --local guestfs.mount /tmp/fun.img

And it worked just fine.

Have you tried just running guestfs.mount on your image?

waynew added the info-needed waiting for more info label on Dec 12, 2019
waynew added this to the Blocked milestone on Dec 12, 2019

cbosdo commented Dec 17, 2019

I managed to reproduce this here with master. Here is how I did it:

mkdir -p /tmp/guest/var.testsuite-data.disk-image-template.qcow2/foobar
salt-call --local guestfs.mount /var/testsuite-data/disk-image-template.qcow2

The problem is that if the computed root folder is not empty, we loop to find a free one... but those new candidate directories are never created.
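
One way to handle that (similar in spirit to the fix proposed earlier in this thread; only a sketch, not a tested patch) is to create each candidate directory as soon as it is computed, for example with os.makedirs:

    while True:
        if os.listdir(root):
            # Directory is already in use, derive a fresh candidate
            hash_type = getattr(hashlib, __opts__.get('hash_type', 'md5'))
            rand = hash_type(os.urandom(32)).hexdigest()
            root = os.path.join(
                tempfile.gettempdir(),
                'guest',
                location.lstrip(os.sep).replace('/', '.') + rand
            )
            # Create it before the next os.listdir()/guestmount call;
            # os.makedirs also covers a missing 'guest' parent directory.
            if not os.path.isdir(root):
                os.makedirs(root)
            log.debug('Establishing new root as %s', root)
        else:
            break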

waynew commented Jan 16, 2020

Closing this as #55672 was merged.
