Only administrators can perform live migrations. If your cloud is configured to use cells, you can perform live migration within but not between cells.
Migration enables an administrator to move a virtual-machine instance from one compute host to another. This feature is useful when a compute host requires maintenance. Migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine.
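For context, once migration is configured, an administrator typically triggers a live migration with the nova client. SERVER and HOST_NAME below are placeholders for the instance and the destination compute host:

$ nova live-migration SERVER HOST_NAME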
The migration types are:
Non-live migration (sometimes called simple or cold migration): the instance is shut down and moved to another hypervisor, so it experiences downtime and appears to have been rebooted.
Live migration: the instance keeps running throughout the move with almost no downtime. Live migration can rely on shared storage, on block migration (no shared storage required), or on volumes attached to the instance.
The following sections describe how to configure your hosts and compute nodes for migrations by using the KVM and XenServer hypervisors.
Shared storage: NOVA-INST-DIR/instances/ (for example, /var/lib/nova/instances) must be mounted on shared storage. This guide uses NFS, but other options, including the OpenStack Gluster Connector, are available.
You can tune migration behavior with the live_migration_downtime, live_migration_downtime_steps, and live_migration_downtime_delay configuration parameters. Migration downtime is measured in steps, with an exponential backoff between each step. This means that the maximum permitted downtime starts off small and is increased in ever larger amounts as Compute waits for the migration to complete. This gives the guest a chance to complete the migration successfully, with a minimum amount of downtime.
The examples in this section assume the default shared directory (NOVA-INST-DIR/instances). If you have changed the state_path or instances_path variables, modify the commands accordingly.
You must specify vncserver_listen=0.0.0.0 or live migration will not work correctly.
You must specify the instances_path in each node that runs nova-compute. The mount point for instances_path must be the same value for each node, or live migration will not work correctly.
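As a sketch, the relevant settings on each compute node might look like the following (the paths are examples, and depending on your release vncserver_listen may live in the [vnc] section rather than [DEFAULT]):

[DEFAULT]
vncserver_listen=0.0.0.0
state_path=/var/lib/nova
instances_path=/var/lib/nova/instances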
Prepare at least three servers. In this example, we refer to the servers as HostA, HostB, and HostC.
HostA is the Cloud Controller, and should run these services: nova-api, nova-scheduler, nova-network, cinder-volume, and nova-objectstore.
HostB and HostC are the compute nodes that run nova-compute.
Ensure that NOVA-INST-DIR (set with state_path in the nova.conf file) is the same on all hosts.
In this example, HostA is the NFSv4 server that exports NOVA-INST-DIR/instances.
HostB and HostC are NFSv4 clients that mount that export.
Configuring your system
Configure your DNS or /etc/hosts and ensure it is consistent across all hosts. Make sure that the three hosts can perform name resolution with each other. As a test, use the ping command to ping each host from one another:
$ ping HostA
$ ping HostB
$ ping HostC
Ensure that one compute host can directly access another as the nova user (set as the owner of the nova-compute service). Direct access from one compute host to another is needed to copy the VM file across. It is also needed to detect whether the source and target compute nodes share a storage subsystem.
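One manual way to approximate that shared-storage check, once the NFS mount described below is in place (assuming the default /var/lib/nova/instances path), is to create a file on one compute node and confirm it is visible on the other:

# On HostB:
$ sudo -u nova touch /var/lib/nova/instances/shared_check
# On HostC, the file is visible only if the storage is shared:
$ ls /var/lib/nova/instances/shared_check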
Export NOVA-INST-DIR/instances from HostA, and ensure it is readable and writable by the Compute user on HostB and HostC.
Configure the NFS server at HostA by adding the following line to the /etc/exports file:
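For example, an export entry could look like the following (the network and netmask here are assumptions; use values that cover HostB and HostC, as the next step explains):

NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)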
Change the subnet mask (255.255.0.0) to the appropriate value to include the IP addresses of HostB and HostC. Then restart the NFS server:
# /etc/init.d/nfs-kernel-server restart
# /etc/init.d/idmapd restart
On both compute nodes, enable the execute/search bit on your shared directory to allow qemu to use the images within the directories. On all hosts, run the following command:
$ chmod o+x NOVA-INST-DIR/instances
Configure NFS on HostB and HostC by adding the following line to the /etc/fstab file:
HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0
Ensure that you can mount the exported directory
$ mount -a -v
Check that HostA can see the NOVA-INST-DIR/instances/ directory:
$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/
Perform the same check on HostB and HostC, paying special attention to the permissions (Compute should be able to write):
$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/

$ df -k
Filesystem           1K-blocks      Used  Available Use% Mounted on
/dev/sda1            921514972   4180880  870523828   1% /
none                  16498340      1228   16497112   1% /dev
none                  16502856         0   16502856   0% /dev/shm
none                  16502856       368   16502488   1% /var/run
none                  16502856         0   16502856   0% /var/lock
none                  16502856         0   16502856   0% /lib/init/rw
HostA:               921515008 101921792  772783104  12% /var/lib/nova/instances  (<--- this line is important)
Update the libvirt configurations so that the calls can be made securely. The available methods (for example, an SSH tunnel to libvirtd's UNIX socket, or a libvirtd TCP socket secured with TLS) enable remote access over TCP and are not documented here. Then restart libvirt. After you run the command, ensure that libvirt is successfully restarted:
# stop libvirt-bin && start libvirt-bin
$ ps -ef | grep libvirt
root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
Configure the downtime required for the migration by adjusting these parameters in the nova.conf file:
The live_migration_downtime parameter sets the maximum permitted downtime for a live migration, in milliseconds. This setting defaults to 500 milliseconds.
The live_migration_downtime_steps parameter sets the total number of incremental steps to reach the maximum downtime value. This setting defaults to 10 steps.
The live_migration_downtime_delay parameter sets the amount of time to wait between each step, in seconds. This setting defaults to 75 seconds.
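As a sketch, explicitly setting the defaults described above would look like this in nova.conf (on recent releases these options belong to the [libvirt] section; confirm against your release's configuration reference):

[libvirt]
live_migration_downtime=500
live_migration_downtime_steps=10
live_migration_downtime_delay=75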
Prior to the Kilo release, the Compute service did not use the libvirt live migration function by default. To enable this function, add the following line to the [libvirt] section of the nova.conf file:
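A sketch of that line, using the flags commonly documented for enabling true live migration (verify the exact flag list against your release's configuration reference):

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED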
On versions older than Kilo, the Compute service does not use libvirt's live migration by default because there is a risk that the migration process will never end. This can happen if the guest operating system uses blocks on the disk faster than they can be migrated.
Configuring KVM for block migration is exactly the same as the shared storage configuration described above, except that NOVA-INST-DIR/instances is local to each host rather than shared. No NFS client or server configuration is required.
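For reference, block migration is then requested per instance at migration time rather than through extra configuration. A sketch with placeholder names, using the nova client's --block-migrate option:

$ nova live-migration --block-migrate SERVER HOST_NAME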
Shared storage: An NFS export, visible to all XenServer hosts.
For the supported NFS versions, see the NFS VHD section of the XenServer Administrator's Guide.
To use shared storage live migration with XenServer hypervisors, the hosts must be joined to a XenServer pool. To create that pool, a host aggregate must be created with specific metadata. This metadata is used by the XAPI plug-ins to establish the pool.
Using shared storage live migrations with XenServer Hypervisors
Configure all compute nodes to use the default storage repository (sr) for pool operations. Add this line to your nova.conf configuration files on all compute nodes:
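A sketch of that line (this matches the sr_matching_filter option as commonly documented for selecting the default SR; confirm the option name and section for your release):

sr_matching_filter=default-sr:true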
Create a host aggregate. This command creates the aggregate, and then displays a table that contains the ID of the new aggregate
$ nova aggregate-create POOL_NAME AVAILABILITY_ZONE
Add metadata to the aggregate, to mark it as a hypervisor pool
$ nova aggregate-set-metadata AGGREGATE_ID hypervisor_pool=true
$ nova aggregate-set-metadata AGGREGATE_ID operational_state=created
Make the first compute node part of that aggregate
$ nova aggregate-add-host AGGREGATE_ID MASTER_COMPUTE_NAME
The host is now part of a XenServer pool.
Add hosts to the pool
$ nova aggregate-add-host AGGREGATE_ID COMPUTE_HOST_NAME
The added compute node and the host will shut down to join the host to the XenServer pool. The operation will fail if any server other than the compute node is running or suspended on the host.
Compatible XenServer hypervisors: The hypervisors must support the Storage XenMotion feature. See your XenServer manual to make sure your edition has this feature.