diff --git a/xml/book_virtualization.xml b/xml/book_virtualization.xml
index 48f316d302..f99666ae50 100644
--- a/xml/book_virtualization.xml
+++ b/xml/book_virtualization.xml
@@ -83,6 +83,7 @@
+
diff --git a/xml/libvirt_managing.xml b/xml/libvirt_managing.xml
index 6b10905c25..8a70cc9218 100644
--- a/xml/libvirt_managing.xml
+++ b/xml/libvirt_managing.xml
@@ -996,519 +996,6 @@ Metadata: yes
-
- Migrating &vmguest;s
-
-
- One of the major advantages of virtualization is that &vmguest;s are
- portable. When a &vmhost; needs to go down for maintenance, or when the
- host gets overloaded, the guests can easily be moved to another &vmhost;.
- &kvm; and &xen; even support live migrations during which
- the &vmguest; is constantly available.
-
-
-
- Migration requirements
-
- To successfully migrate a &vmguest; to another &vmhost;, the following
- requirements need to be met:
-
-
-
- The post-copy live migration (not supported or recommended for production) requires setting the
- unprivileged_userfaultfd system value to
- 1 from kernel version 5.11 onward.
-
-&prompt.sudo;sysctl -w vm.unprivileged_userfaultfd=1
-
- Since kernel versions before 5.11 did not require setting
- unprivileged_userfaultfd to use the post-copy option, &libvirt;
- provides the setting in
- /usr/lib/sysctl.d/60-qemu-postcopy-migration.conf
- file to preserve the old behavior.
-
-
-
-
-
- The source and destination systems must have the same architecture.
-
-
-
-
- Storage devices must be accessible from both machines, for example,
- via NFS or iSCSI, and must be configured as a storage pool on both
- machines. For more information, see
- .
-
-
- This is also true for CD-ROM or floppy images that are connected
- during the move. However, you can disconnect them before the move
- as described in
- .
-
-
-
-
- &libvirtd; needs to run on both &vmhost;s and you must be able to
- open a remote &libvirt; connection between the target and the
- source host (or vice versa). Refer to
- for details.
-
-
-
-
- If a firewall is running on the target host, ports need to be
- opened to allow the migration. If you do not specify a port during
- the migration process, &libvirt; chooses one from the range
- 49152:49215. Make sure that either this range (recommended) or a
- dedicated port of your choice is opened in the firewall on the
- target host.
-
-
-
-
- Host and target machine should be in the same subnet on the
- network, otherwise networking fails after the migration.
-
-
-
-
- All &vmhost;s participating in migration must have the same UID for
- the qemu user and the same GIDs for the kvm, qemu and libvirt
- groups.
-
-
-
-
- No running or paused &vmguest; with the same name must exist on the
- target host. If a shut down machine with the same name exists, its
- configuration is overwritten.
-
-
-
-
- All CPU models except host cpu model are
- supported when migrating &vmguest;s.
-
-
-
-
- disk device type is not
- migratable.
-
-
-
-
- File system pass-through feature is incompatible with migration.
-
-
-
-
- The &vmhost; and &vmguest; need to have proper timekeeping
- installed. See .
-
-
-
-
- No physical devices can be passed from host to guest. Live
- migration is currently not supported when using devices with PCI
- pass-through or . If live migration
- needs to be supported, you need to use software virtualization
- (paravirtualization or full virtualization).
-
-
-
-
- Cache mode setting is an important setting for migration. See:
- .
-
-
-
-
- Backward migration, for example, from &slsa; 15 SP2 to 15 SP1, is
- not supported.
-
-
-
-
- SUSE strives to support live migration of &vmguest;s from a
- &vmhost; running a service pack under LTSS to a &vmhost; running a
- newer service pack, within the same &slsa; major version. For
- example, &vmguest; migration from an &slsa; 12 SP2 host to an
- &slsa; 12 SP5 host. SUSE only performs minimal testing of
- LTSS-to-newer migration scenarios and recommends thorough on-site
- testing before attempting to migrate critical &vmguest;s.
-
-
-
-
- The image directory should be located in the same path on both
- hosts.
-
-
-
-
- All hosts should be on the same level of microcode (especially the
- spectre microcode updates). This can be achieved by installing the
- latest updates of &productname; on all hosts.
-
-
-
-
-
-
- Migrating with &vmm;
-
- When using the &vmm; to migrate &vmguest;s, it does not matter on which
- machine it is started. You can start &vmm; on the source or the target
- host or even on a third host. In the latter case you need to be able to
- open remote connections to both the target and the source host.
-
-
-
-
- Start &vmm; and establish a connection to the target or the source
- host. If the &vmm; was started neither on the target nor the source
- host, connections to both hosts need to be opened.
-
-
-
-
- Right-click the &vmguest; that you want to migrate and choose
- Migrate. Make sure the guest is running or
- paused—it is not possible to migrate guests that are shut
- down.
-
-
- Increasing the speed of the migration
-
- To increase the speed of the migration, pause the &vmguest;. This
- is the equivalent of the former so-called offline
- migration option of &vmm;.
-
-
-
-
-
- Choose a New Host for the &vmguest;. If the
- desired target host does not show up, make sure that you are
- connected to the host.
-
-
- To change the default options for connecting to the remote host,
- under Connection, set the
- Mode, and the target host's
- Address (IP address or host name) and
- Port. If you specify a Port,
- you must also specify an Address.
-
-
- Under Advanced options, choose whether the move
- should be permanent (default) or temporary, using
- Temporary move.
-
-
- Additionally, there is the option Allow unsafe,
- which allows migrating without disabling the cache of the &vmhost;.
- This can speed up the migration but only works when the current
- configuration allows for a consistent view of the &vmguest; storage
- without using
- cache="none"/0_DIRECT.
-
-
- Bandwidth option
-
- In recent versions of &vmm;, the option of setting a bandwidth
- for the migration has been removed. To set a specific bandwidth,
- use virsh instead.
-
-
-
-
-
- To perform the migration, click Migrate.
-
-
- When the migration is complete, the Migrate
- window closes and the &vmguest; is now listed on the new host in
- the &vmm; window. The original &vmguest; is still available on the
- target host (in shut down state).
-
-
-
-
-
-
- Migrating with virsh
-
- To migrate a &vmguest; with virsh
- , you need to have direct or remote shell
- access to the &vmhost;, because the command needs to be run on the
- host. The migration command looks like this:
-
-&prompt.user;virsh migrate [OPTIONS] VM_ID_or_NAMECONNECTION_URI [--migrateuri tcp://REMOTE_HOST:PORT]
-
- The most important options are listed below. See virsh help
- migrate for a full list.
-
-
-
-
-
-
- Does a live migration. If not specified, the guest is paused
- during the migration (offline migration).
-
-
-
-
-
-
-
- Does an offline migration and does not restart the &vmguest; on
- the target host.
-
-
-
-
-
-
-
- By default a migrated &vmguest; is migrated temporarily, so its
- configuration is automatically deleted on the target host if it
- is shut down. Use this switch to make the migration persistent.
-
-
-
-
-
-
-
- When specified, the &vmguest; definition on the source host is
- deleted after a successful migration (however, virtual disks
- attached to this guest are not deleted).
-
-
-
-
-
-
-
- Parallel migration can be used to increase migration data
- throughput in cases where a single migration thread is not
- capable of saturating the network link between source and
- destination hosts. On hosts with 40 GB network interfaces,
- it may require four migration threads to saturate the link. With
- parallel migration, the time required to migrate large memory VMs
- can be reduced.
-
-
-
-
-
- The following examples use &wsIVname; as the source system and
- &wsIname; as the target system; the &vmguest;'s name is
- opensuse131 with Id 37.
-
-
-
- Offline migration with default parameters
-
-&prompt.user;virsh migrate 37 qemu+ssh://&exampleuser_plain;@&wsIname;/system
-
-
-
- Transient live migration with default parameters
-
-&prompt.user;virsh migrate --live opensuse131 qemu+ssh://&exampleuser_plain;@&wsIname;/system
-
-
-
- Persistent live migration; delete VM definition on source
-
-&prompt.user;virsh migrate --live --persistent --undefinesource 37 \
-qemu+tls://&exampleuser_plain;@&wsIname;/system
-
-
-
- Offline migration using port 49152
-
-&prompt.user;virsh migrate opensuse131 qemu+ssh://&exampleuser_plain;@&wsIname;/system \
---migrateuri tcp://@&wsIname;:49152
-
-
-
-
- Transient compared to persistent migrations
-
- By default virsh migrate creates a temporary
- (transient) copy of the &vmguest; on the target host. A shut down
- version of the original guest description remains on the source host.
- A transient copy is deleted from the server after it is shut down.
-
-
- To create a permanent copy of a guest on the target host, use the
- switch . A shut down version of the
- original guest description remains on the source host, too. Use the
- option together with
- for a real move where a
- permanent copy is created on the target host and the version on the
- source host is deleted.
-
-
- It is not recommended to use
- without the option, since this results
- in the loss of both &vmguest; definitions when the guest is shut down
- on the target host.
-
-
-
-
-
-
-
- Step-by-step example
-
-
- Exporting the storage
-
- First you need to export the storage, to share the Guest image
- between host. This can be done by an NFS server. In the following
- example we want to share the /volume1/VM
- directory for all machines that are on the network 10.0.1.0/24. We
- are using a &sle; NFS server. As root user, edit the
- /etc/exports file and add:
-
-/volume1/VM 10.0.1.0/24 (rw,sync,no_root_squash)
-
- You need to restart the NFS server:
-
-&prompt.sudo;systemctl restart nfsserver
-&prompt.sudo;exportfs
-/volume1/VM 10.0.1.0/24
-
-
- Defining the pool on the target hosts
-
- On each host where you want to migrate the &vmguest;, the pool must
- be defined to be able to access the volume (that contains the Guest
- image). Our NFS server IP address is 10.0.1.99, its share is the
- /volume1/VM directory, and we want to get it
- mounted in the /var/lib/libvirt/images/VM
- directory. The pool name is VM. To define this
- pool, create a VM.xml file with the following
- content:
-
-<pool type='netfs'>
- <name>VM</name>
- <source>
- <host name='10.0.1.99'/>
- <dir path='/volume1/VM'/>
- <format type='auto'/>
- </source>
- <target>
- <path>/var/lib/libvirt/images/VM</path>
- <permissions>
- <mode>0755</mode>
- <owner>-1</owner>
- <group>-1</group>
- </permissions>
- </target>
- </pool>
-
- Then load it into &libvirt; using the pool-define
- command:
-
-&prompt.root;virsh pool-define VM.xml
-
- An alternative way to define this pool is to use the
- virsh command:
-
-&prompt.root;virsh pool-define-as VM --type netfs --source-host 10.0.1.99 \
- --source-path /volume1/VM --target /var/lib/libvirt/images/VM
-Pool VM created
-
- The following commands assume that you are in the interactive shell
- of virsh which can also be reached by using the
- command virsh without any arguments. Then the pool
- can be set to start automatically at host boot (autostart option):
-
-virsh # pool-autostart VM
-Pool VM marked as autostarted
-
- To disable the autostart:
-
-virsh # pool-autostart VM --disable
-Pool VM unmarked as autostarted
-
- Check if the pool is present:
-
-virsh # pool-list --all
- Name State Autostart
--------------------------------------------
- default active yes
- VM active yes
-
-virsh # pool-info VM
-Name: VM
-UUID: 42efe1b3-7eaa-4e24-a06a-ba7c9ee29741
-State: running
-Persistent: yes
-Autostart: yes
-Capacity: 2,68 TiB
-Allocation: 2,38 TiB
-Available: 306,05 GiB
-
- Pool needs to exist on all target hosts
-
- Remember: this pool must be defined on each host where you want to
- be able to migrate your &vmguest;.
-
-
-
-
- Creating the volume
-
- The pool has been defined—now we need a volume which contains
- the disk image:
-
-virsh # vol-create-as VM sled12.qcow2 8G --format qcow2
-Vol sled12.qcow2 created
-
- The volume names shown are used later to install the guest with
- virt-install.
-
-
-
- Creating the &vmguest;
-
- Let's create a &productname; &vmguest; with the
- virt-install command. The VM
- pool is specified with the --disk option,
- cache=none is recommended if you do not want to
- use the --unsafe option while doing the migration.
-
-&prompt.root;virt-install --connect qemu:///system --virt-type kvm --name \
- sled12 --memory 1024 --disk vol=VM/sled12.qcow2,cache=none --cdrom \
- /mnt/install/ISO/SLE-12-Desktop-DVD-x86_64-Build0327-Media1.iso --graphics \
- vnc --os-variant sled12
-Starting install...
-Creating domain...
-
-
- Migrate the &vmguest;
-
- Everything is ready to do the migration now. Run the
- migrate command on the &vmhost; that is currently
- hosting the &vmguest;, and choose the destination.
-
-virsh # migrate --live sled12 --verbose qemu+ssh://IP/Hostname/system
-Password:
-Migration: [ 12 %]
-
-
- Monitoring
diff --git a/xml/libvirt_migrating_vms.xml b/xml/libvirt_migrating_vms.xml
new file mode 100644
index 0000000000..1f04a1af40
--- /dev/null
+++ b/xml/libvirt_migrating_vms.xml
@@ -0,0 +1,607 @@
+
+
+ %entities;
+]>
+
+ Migrating &vmguest;s
+
+
+
+ yes
+
+
+
+ One of the major advantages of virtualization is that &vmguest;s are
+ portable. When a &vmhost; needs maintenance, or when the host becomes
+ overloaded, the guests can be moved to another &vmhost;. &kvm; and &xen;
+ even support live migrations during which the &vmguest; is
+ constantly available.
+
+
+ Types of migration
+
+
+ Depending on the required scenario, there are three ways you can migrate
+ virtual machines (VM).
+
+
+
+
+ Live migration
+
+
+ The source VM continues to run while its configuration and memory are
+ transferred to the target host. When the transfer is complete, the
+ source VM is suspended and the target VM is resumed.
+
+
+ Live migration is useful for VMs that need to be online without any
+ downtime.
+
+
+
+ VMs experiencing heavy I/O load or frequent memory page writes are
+ challenging to live migrate. In such cases, consider using
+ non-live or offline migration.
+
+
+
+
+
+ Non-live migration
+
+
+ The source VM is suspended, and its configuration and memory are
+ transferred to the target host. Then the target VM is resumed.
+
+
+ Non-live migration is more reliable than live migration, although it
+ creates downtime for the VM. If downtime is tolerable, non-live
+ migration can be an option for VMs that are difficult to live
+ migrate.
+
+
+
+
+ Offline migration
+
+
+ The VM definition is transferred to the target host. The source VM
+ is not stopped and the target VM is not resumed.
+
+
+ Offline migration can be used to migrate inactive VMs.
+
+
+
+ The option must be used together
+ with offline migration.
+
+
+
+
+
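The three types correspond to different virsh invocations. As a rough sketch (the guest name and connection URI below are illustrative placeholders, and the mapping assumes the options described later in this chapter), the helper builds each command line without running it:

```shell
#!/bin/sh
# Build (but do not execute) the virsh command line for each migration type.
# Guest name and connection URI are illustrative placeholders.
migrate_cmd() {  # usage: migrate_cmd TYPE GUEST URI
  case "$1" in
    live)     echo "virsh migrate --live $2 $3" ;;
    non-live) echo "virsh migrate $2 $3" ;;   # guest paused during transfer
    offline)  echo "virsh migrate --offline --persistent $2 $3" ;;
  esac
}

migrate_cmd live opensuse131 qemu+ssh://tux@target.example.com/system
```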
+
+
+ Migration requirements
+
+
+ To successfully migrate a &vmguest; to another &vmhost;, the following
+ requirements need to be met:
+
+
+
+
+
+ The source and target systems must have the same architecture.
+
+
+
+
+ Storage devices must be accessible from both machines, for example,
+ via NFS or iSCSI. For more information, see
+ .
+
+
+ This is also true for CD-ROM or floppy images that are connected
+ during the move. However, you can disconnect them before the move as
+ described in .
+
+
+
+
+ &libvirtd; needs to run on both &vmhost;s and you must be able to open
+ a remote &libvirt; connection between the target and the source host
+ (or vice versa). Refer to
+ for details.
+
+
+
+
+ If a firewall is running on the target host, ports need to be opened
+ to allow the migration. If you do not specify a port during the
+ migration process, &libvirt; chooses one from the range 49152:49215.
+ Make sure that either this range (recommended) or a dedicated port of
+ your choice is opened in the firewall on the target
+ host.
+
+
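For example, assuming firewalld is the firewall in use on the target host, the default range could be opened as sketched below; the arithmetic confirms how many ports the range covers:

```shell
#!/bin/sh
# Hypothetical firewalld commands to open libvirt's default migration range
# (shown as comments, not executed; firewall choice and zone are assumptions):
#   firewall-cmd --permanent --add-port=49152-49215/tcp
#   firewall-cmd --reload
first=49152 last=49215
range_size=$(( last - first + 1 ))
echo "$range_size ports"   # size of the default migration port range
```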
+
+
+ The source and target machines should be in the same subnet on the
+ network, otherwise networking fails after the migration.
+
+
+
+
+ All &vmhost;s participating in migration must have the same UID for
+ the qemu user and the same GIDs for the kvm, qemu and libvirt groups.
+
+
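To compare these IDs, you would run id -u qemu and getent group kvm qemu libvirt on each host and check that the results match. The helper below only demonstrates where the GID sits in a getent group line; the sample line is illustrative:

```shell
#!/bin/sh
# On each host, compare the output of:
#   id -u qemu
#   getent group kvm qemu libvirt
# gid_of extracts the GID (third colon-separated field) from a group line.
gid_of() { printf '%s\n' "$1" | cut -d: -f3; }

gid_of 'kvm:x:36:qemu'   # sample line; prints the GID field
```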
+
+
+ A running or paused &vmguest; with the same name must not exist on the
+ target host. If a shut-down machine with the same name exists, its
+ configuration is overwritten.
+
+
+
+
+ All CPU models, except the host CPU model, are
+ supported when migrating &vmguest;s.
+
+
+
+
+ The disk device type is not
+ migratable.
+
+
+
+
+ The file system pass-through feature is incompatible with migration.
+
+
+
+
+ The &vmhost; and &vmguest; need to have proper timekeeping installed.
+ See .
+
+
+
+
+ No physical devices can be passed from host to guest. Live migration
+ is currently not supported when using devices with PCI pass-through or
+ . If live migration needs to be
+ supported, use software virtualization (paravirtualization or full
+ virtualization).
+
+
+
+
+ The cache mode is an important setting for migration. See:
+ .
+
+
+
+
+ Backward migration, for example, from &slsa; 15 SP2 to 15 SP1, is not
+ supported.
+
+
+
+
+ SUSE strives to support live migration of &vmguest;s from a &vmhost;
+ running a service pack under LTSS to a &vmhost; running a newer
+ service pack within the same &slsa; major version. For example,
+ &vmguest; migration from a &slsa; 12 SP2 host to a &slsa; 12 SP5 host.
+ SUSE only performs minimal testing of LTSS-to-newer migration
+ scenarios and recommends thorough on-site testing before attempting to
+ migrate critical &vmguest;s.
+
+
+
+
+ The image directory should be located in the same path on both hosts.
+
+
+
+
+ All hosts should be on the same level of microcode (especially the
+ Spectre microcode updates). This can be achieved by installing the
+ latest updates of &productname; on all hosts.
+
+
+
+
+
+ Live-migrating with &vmm;
+
+
+ When using the &vmm; to migrate &vmguest;s, it does not matter on which
+ machine it is started. You can start &vmm; on the source or the target
+ host or even on a third host. In the latter case, you need to be able to
+ open remote connections to both the target and the source host.
+
+
+
+
+
+ Start &vmm; and establish a connection to the target or the source
+ host. If the &vmm; was started neither on the target nor the source
+ host, connections to both hosts need to be opened.
+
+
+
+
+ Right-click the &vmguest; that you want to migrate and choose
+ Migrate. Make sure the guest is running or
+ paused—it is not possible to migrate guests that are shut down.
+
+
+ Increasing the speed of the migration
+
+ To increase the speed of the migration, pause the &vmguest;. This is
+ the equivalent of non-live migration described in
+ .
+
+
+
+
+
+ Choose a New Host for the &vmguest;. If the desired
+ target host does not show up, make sure that you are connected to the
+ host.
+
+
+ To change the default options for connecting to the remote host, under
+ Connection, set the Mode, and
+ the target host's Address (IP address or host name)
+ and Port. If you specify a Port,
+ you must also specify an Address.
+
+
+ Under Advanced options, choose whether the move
+ should be permanent (default) or temporary, using Temporary
+ move.
+
+
+ Additionally, there is the option Allow unsafe,
+ which allows migrating without disabling the cache of the &vmhost;.
+ This can speed up the migration but only works when the current
+ configuration allows for a consistent view of the &vmguest; storage
+ without using
+ cache="none"/O_DIRECT.
+
+
+ Bandwidth option
+
+ In recent versions of &vmm;, the option of setting a bandwidth for
+ the migration has been removed. To set a specific bandwidth, use
+ virsh instead.
+
+
+
+
+
+ To perform the migration, click Migrate.
+
+
+ When the migration is complete, the Migrate window
+ closes and the &vmguest; is now listed on the new host in the &vmm;
+ window. The original &vmguest; is still available on the source host
+ in the shut-down state.
+
+
+
+
+
+ Migrating with virsh
+
+
+ To migrate a &vmguest; with virsh,
+ you need to have direct or remote shell access to the &vmhost;,
+ because the command needs to be run on the host. The migration
+ command looks like this:
+
+
+&prompt.user;virsh migrate [OPTIONS] VM_ID_or_NAME CONNECTION_URI [--migrateuri tcp://REMOTE_HOST:PORT]
+
+
+ The most important options are listed below. See virsh help
+ migrate for a full list.
+
+
+
+
+
+
+
+ Does a live migration. If not specified, the guest is paused during
+ the migration (non-live migration).
+
+
+
+
+
+
+
+ Leaves the VM paused on the target host during live or non-live
+ migration.
+
+
+
+
+
+
+
+ Persists the migrated VM on the target host. Without this option,
+ the VM is not included in the list of domains reported by
+ virsh list --all when shut down.
+
+
+
+
+
+
+
+ When specified, the &vmguest; definition on the source host is
+ deleted after a successful migration. However, virtual disks
+ attached to this guest are not deleted.
+
+
+
+
+
+
+
+ Parallel migration can be used to increase migration data throughput
+ in cases where a single migration thread is not capable of
+ saturating the network link between source and target hosts. On
+ hosts with 40 GbE network interfaces, it may require four
+ migration threads to saturate the link. With parallel migration, the
+ time required to migrate large memory VMs can be reduced.
+
+
+
+
+
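virsh exposes this through the --parallel and --parallel-connections options. A sketch that only builds the command line (the thread count of four matches the 40 GbE example above; guest name and URI are placeholders):

```shell
#!/bin/sh
# Build (but do not run) a parallel live-migration command; four connections
# are an assumption matching the 40 GbE example in the text.
cmd="virsh migrate --live --parallel --parallel-connections 4 \
opensuse131 qemu+ssh://tux@target.example.com/system"
echo "$cmd"
```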
+
+ The following examples use &wsIVname; as the source system and &wsIname;
+ as the target system; the &vmguest;'s name is
+ opensuse131 with ID 37.
+
+
+
+
+ Non-live migration with default parameters
+
+&prompt.user;virsh migrate 37 qemu+ssh://&exampleuser_plain;@&wsIname;/system
+
+
+
+ Transient live migration with default parameters
+
+&prompt.user;virsh migrate --live opensuse131 qemu+ssh://&exampleuser_plain;@&wsIname;/system
+
+
+
+ Persistent live migration; delete VM definition on source
+
+&prompt.user;virsh migrate --live --persistent --undefinesource 37 \
+qemu+tls://&exampleuser_plain;@&wsIname;/system
+
+
+
+ Non-live migration using port 49152
+
+&prompt.user;virsh migrate opensuse131 qemu+ssh://&exampleuser_plain;@&wsIname;/system \
+--migrateuri tcp://@&wsIname;:49152
+
+
+
+ Live migration transferring all used storage
+
+&prompt.user;virsh migrate --live --persistent --copy-storage-all \
+opensuse156 qemu+ssh://&exampleuser_plain;@&wsIname;/system
+
+
+ When migrating a VM's storage using the
+ option, the storage must be
+ placed in a &libvirt; storage pool. A storage pool with the same
+ type and name as the source pool must exist on the target host.
+
+
+ To obtain the XML representation of the source pool, use the
+ following command:
+
+&prompt.sudo;virsh pool-dumpxml EXAMPLE_POOL > EXAMPLE_POOL.xml
+
+ To create and start the storage pool on the target host, copy its
+ XML representation there and use the following commands:
+
+&prompt.sudo;virsh pool-define EXAMPLE_POOL.xml
+&prompt.sudo;virsh pool-start EXAMPLE_POOL
+
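Before starting the migration, you can confirm that the pool came up on the target host. A sketch that only prints the verification commands (the connection URI and pool name are placeholders):

```shell
#!/bin/sh
# Print (rather than run) the commands that verify a pool on a remote host;
# connection URI and pool name are illustrative placeholders.
verify_pool_cmds() {  # usage: verify_pool_cmds URI POOL
  echo "virsh -c $1 pool-list --all"
  echo "virsh -c $1 pool-info $2"
}

verify_pool_cmds qemu+ssh://tux@target.example.com/system POOL
```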
+
+
+
+
+
+ Transient compared to persistent migrations
+
+ By default, virsh migrate creates a temporary
+ (transient) copy of the &vmguest; on the target host. A shut-down
+ version of the original guest description remains on the source host. A
+ transient copy is deleted from the server after it is shut down.
+
+
+ To create a permanent copy of a guest on the target host, use the switch
+ . A shut-down version of the original guest
+ description remains on the source host, too. Use the option
+ together with
+ for a real move where a
+ permanent copy is created on the target host and the version on the
+ source host is deleted.
+
+
+ It is not recommended to use without
+ the option, since this results in the loss
+ of both &vmguest; definitions when the guest is shut down on the target
+ host.
+
+
+
+
+
+ Step-by-step example
+
+
+ Exporting the storage
+
+ First, you need to export the storage to share the guest image between
+ hosts. This can be done with an NFS server. In the following example, we
+ want to share the /volume1/VM directory with all
+ machines on the network 10.0.1.0/24. We are using a &sle; NFS
+ server. As the root user, edit the /etc/exports file
+ and add:
+
+/volume1/VM 10.0.1.0/24(rw,sync,no_root_squash)
+
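Note that the exports(5) syntax is whitespace-sensitive: a space before the parenthesized options applies them to the world, while the named network gets the defaults. A small check, using an illustrative line:

```shell
#!/bin/sh
# exports(5) gotcha: "host (options)" with a space gives the options to the
# world and the named host gets defaults. check_export flags that mistake.
check_export() {
  case "$1" in
    *' ('*) echo "warning: space before options" ;;
    *)      echo "ok" ;;
  esac
}

check_export '/volume1/VM 10.0.1.0/24(rw,sync,no_root_squash)'   # prints "ok"
```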
+ You need to restart the NFS server:
+
+&prompt.sudo;systemctl restart nfsserver
+&prompt.sudo;exportfs
+/volume1/VM 10.0.1.0/24
+
+
+
+ Defining the pool on the target hosts
+
+ On each host where you want to migrate the &vmguest;, the pool must be
+ defined so that the host can access the volume that contains the guest image.
+ Our NFS server IP address is 10.0.1.99, its share is the
+ /volume1/VM directory, and we want to get it
+ mounted in the /var/lib/libvirt/images/VM
+ directory. The pool name is VM. To define this
+ pool, create a VM.xml file with the following
+ content:
+
+<pool type='netfs'>
+ <name>VM</name>
+ <source>
+ <host name='10.0.1.99'/>
+ <dir path='/volume1/VM'/>
+ <format type='auto'/>
+ </source>
+ <target>
+ <path>/var/lib/libvirt/images/VM</path>
+ <permissions>
+ <mode>0755</mode>
+ <owner>-1</owner>
+ <group>-1</group>
+ </permissions>
+ </target>
+ </pool>
+
+ Then load it into &libvirt; using the pool-define
+ command:
+
+&prompt.root;virsh pool-define VM.xml
+
+ An alternative way to define this pool is to use the
+ virsh command:
+
+&prompt.root;virsh pool-define-as VM --type netfs --source-host 10.0.1.99 \
+ --source-path /volume1/VM --target /var/lib/libvirt/images/VM
+Pool VM created
+
+ The following commands assume that you are in the interactive shell of
+ virsh, which can also be reached by using the command
+ virsh without any arguments. Then the pool can be set
+ to start automatically at host boot (autostart option):
+
+virsh # pool-autostart VM
+Pool VM marked as autostarted
+
+ To disable the autostart:
+
+virsh # pool-autostart VM --disable
+Pool VM unmarked as autostarted
+
+ Check if the pool is present:
+
+virsh # pool-list --all
+ Name State Autostart
+-------------------------------------------
+ default active yes
+ VM active yes
+
+virsh # pool-info VM
+Name: VM
+UUID: 42efe1b3-7eaa-4e24-a06a-ba7c9ee29741
+State: running
+Persistent: yes
+Autostart: yes
+Capacity: 2,68 TiB
+Allocation: 2,38 TiB
+Available: 306,05 GiB
+
+ Pool needs to exist on all target hosts
+
+ Remember: this pool must be defined on each host where you want to be
+ able to migrate your &vmguest;.
+
+
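To avoid repeating this by hand, the pool definition can be pushed to each host over a remote &libvirt; connection. A sketch that only prints the commands (host names are placeholders):

```shell
#!/bin/sh
# Print the remote pool-define command for each target host instead of
# running it; host names are illustrative placeholders.
pool_cmd() { echo "virsh -c qemu+ssh://root@$1/system pool-define VM.xml"; }

for host in host1.example.com host2.example.com; do
  pool_cmd "$host"
done
```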
+
+
+
+ Creating the volume
+
+ The pool has been defined—now we need a volume which contains the
+ disk image:
+
+virsh # vol-create-as VM sled12.qcow2 8G --format qcow2
+Vol sled12.qcow2 created
+
+ The volume name shown is used later to install the guest with
+ virt-install.
+
+
+
+
+ Creating the &vmguest;
+
+ Let us create a &productname; &vmguest; with the
+ virt-install command. The VM
+ pool is specified with the --disk option,
+ cache=none is recommended if you do not want to use
+ the --unsafe option while doing the migration.
+
+&prompt.root;virt-install --connect qemu:///system --virt-type kvm --name \
+ sles15 --memory 1024 --disk vol=VM/sled12.qcow2,cache=none --cdrom \
+ /mnt/install/ISO/SLE-15-Server-DVD-x86_64-Build0327-Media1.iso --graphics \
+ vnc --os-variant sles15
+Starting install...
+Creating domain...
+
+
+
+ Migrate the &vmguest;
+
+ Everything is ready to do the migration now. Run the
+ migrate command on the &vmhost; that is currently
+ hosting the &vmguest;, and choose the target.
+
+virsh # migrate --live sles15 --verbose qemu+ssh://IP/Hostname/system
+Password:
+Migration: [ 12 %]
+
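From another shell on the source host, the progress shown above can also be queried with virsh domjobinfo. A sketch that only builds the monitoring command (GUEST is a placeholder):

```shell
#!/bin/sh
# Build (but do not run) a command that refreshes migration job statistics
# every second; GUEST is an illustrative placeholder.
monitor_cmd() { echo "watch -n 1 virsh domjobinfo $1"; }

monitor_cmd GUEST
```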
+
+