Recover from a failed compute node

If you deployed Compute with a shared file system, you can quickly recover from a failed compute node. Of the two methods covered in these sections, evacuating is the preferred method, even in the absence of shared storage. Evacuating provides many benefits over manual recovery, such as re-attachment of volumes and floating IP addresses.

Manual recovery

To recover a KVM/libvirt compute node, see the previous section. Use the following procedure for all other hypervisors.

Review host information

Identify the VMs on the affected hosts by using tools such as a combination of nova list and nova show, or euca-describe-instances. For example, the following output displays information about instance i-000015b9, which is running on node np-rcc54:

$ euca-describe-instances
i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60

Review the status of the host by querying the Compute database. Some of the important information is shown below. The following example converts an EC2 API instance ID into an OpenStack ID; if you used the nova commands, you can substitute the ID directly. You can find the credentials for your database in /etc/nova/nova.conf.

mysql> SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
*************************** 1. row ***************************
created_at: 2012-06-19 00:48:11
updated_at: 2012-07-03 00:35:11
deleted_at: NULL
...
id: 5561
...
power_state: 5
vm_state: shutoff
...
hostname: at3-ui02
host: np-rcc54
...
uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
...
task_state: NULL
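As a quick cross-check of the CONV('15b9', 16, 10) arithmetic in the query above, you can perform the same hexadecimal-to-decimal conversion in the shell before touching the database; this is only a convenience check, not part of the recovery:

```shell
# Convert the hex suffix of an EC2-style instance ID (i-000015b9)
# to the decimal id column used in the Compute database.
ec2_id="i-000015b9"
hex_part="${ec2_id#i-}"        # strip the "i-" prefix -> 000015b9
printf '%d\n' "0x${hex_part}"  # prints 5561
```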
Recover the VM

After you have determined the status of the VM on the failed host, decide to which compute host the affected VM should be moved. For example, run the following database command to move the VM to np-rcc46:

mysql> UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06';

If you use a hypervisor that relies on libvirt (such as KVM), it is a good idea to update the libvirt.xml file (found in /var/lib/nova/instances/[instance ID]). The important changes to make are:

- Change the DHCPSERVER value to the host IP address of the compute host that is now the VM's new home.
- Update the VNC IP, if it is not already updated, to 0.0.0.0.

Reboot the VM:

$ nova reboot --hard 3f57699a-e773-4650-a443-b4b37eed5a06

In theory, the above database update and nova
reboot command are all that is required to recover a VM from a
failed host. However, if further problems occur, consider looking at
recreating the network filter configuration using virsh,
restarting the Compute services or updating the vm_state
and power_state in the Compute database.

Recover from a UID/GID mismatch

When you run OpenStack Compute with a shared file system or an automated configuration tool, you could encounter a situation where some files on your compute node use the wrong UID or GID. This causes a number of errors, such as an inability to perform live migrations or to start virtual machines.

The following procedure runs on nova-compute hosts, is based on the KVM hypervisor, and could help to restore the situation.

To recover from a UID/GID mismatch:

1. Make sure you do not use numbers that are already used for some other user or group.
2. Set the nova UID in /etc/passwd to the same number on all hosts (for example, 112).
3. Set the libvirt-qemu UID in /etc/passwd to the same number on all hosts (for example, 119).
4. Set the nova group in /etc/group to the same number on all hosts (for example, 120).
5. Set the libvirtd group in /etc/group to the same number on all hosts (for example, 119).
6. Stop the services on the compute node.
7. Change all the files owned by user nova or by group nova. For example:

# find / -uid 108 -exec chown nova {} \;    # note that 108 is the old nova UID, before the change
# find / -gid 120 -exec chgrp nova {} \;

8. Repeat these steps for the files owned by libvirt-qemu, if those need to change.
9. Restart the services.
10. Now you can run the find command to verify that all files use the correct identifiers.

Recover cloud after disaster

Use the following procedures to manage your cloud after a disaster, and to easily
back up its persistent storage volumes. Backups are
mandatory, even outside of disaster scenarios. For a definition of a disaster recovery plan (DRP), see http://en.wikipedia.org/wiki/Disaster_Recovery_Plan.

Disaster recovery example

A disaster could happen to several components of your architecture (for
example, a disk crash, a network loss, or a power cut). In this example, the
following components are configured:

- A cloud controller (nova-api, nova-objectstore, nova-network)
- A compute node (nova-compute)
- A Storage Area Network (SAN) used by OpenStack Block Storage (cinder-volumes)

The worst disaster for a cloud is a power loss, which applies to all three components. Before a power loss:

- From the SAN to the cloud controller, we have an active iSCSI session (used for the "cinder-volumes" LVM volume group).
- From the cloud controller to the compute node, we also have active iSCSI sessions (managed by cinder-volume).
- For every volume, an iSCSI session is made (so 14 EBS volumes equals 14 sessions).
- From the cloud controller to the compute node, we also have iptables/ebtables rules, which allow access from the cloud controller to the running instance.
- Finally, the current state of the instances (in this case, "running") and their volume attachments (mount point, volume ID, volume status, and so on) are saved in the database.

After the power loss occurs and all hardware components restart:

- From the SAN to the cloud, the iSCSI session no longer exists.
- From the cloud controller to the compute node, the iSCSI sessions no longer exist.
- From the cloud controller to the compute node, the iptables and ebtables rules are recreated, since at boot, nova-network reapplies configurations.
- From the cloud controller, instances are in a shutdown state (because they are no longer running).
- In the database, data was not updated at all, since Compute could not have anticipated the crash.

Before going further, and to prevent the administrator from making fatal mistakes, note that the instances will not be lost: no "destroy" or "terminate" command was invoked, so the files for the instances remain on the compute node.

Perform these tasks in the following order.
Do not add any extra steps at this stage.

1. Get the current relation from a volume to its instance, so that you can recreate the attachment.
2. Update the database to clean the stalled state. (After that, you can no longer perform the first step.)
3. Restart the instances. In other words, go from a shutdown to a running state.
4. After the restart, reattach the volumes to their respective instances (optional).
5. SSH into the instances to reboot them.

Recover after a disaster

To perform disaster recovery:

Get the instance-to-volume
relationship

You must determine the current relationship from a volume to its instance, because you will re-create the attachment. You can find this relationship by running nova volume-list. Note that the nova client includes the ability to get volume information from OpenStack Block Storage.

Update the database

Update the database to clean the stalled state. For every volume, use these queries to clean up the database:

mysql> use cinder;
mysql> update volumes set mountpoint=NULL;
mysql> update volumes set status="available" where status <> "error_deleting";
mysql> update volumes set attach_status="detached";
mysql> update volumes set instance_id=0;

You can then run nova volume-list commands to list all volumes.

Restart instances

Restart the instances using the nova reboot
$instance command.

At this stage, depending on your image, some instances completely reboot and become reachable, while others stop at the "plymouth" stage.

DO NOT reboot a second time

Do not reboot instances that are stopped at this point. Instance state depends on whether you added an /etc/fstab entry for that volume. Images built with the cloud-init package remain in a pending state, while others skip the missing volume and start. The idea of this stage is only to ask Compute to reboot every instance, so that the stored state is preserved. For more information about cloud-init, see help.ubuntu.com/community/CloudInit.

Reattach volumes

After the restart, when Compute has restored the right status, you can reattach the volumes to their respective instances using the nova volume-attach command. The following snippet uses a file of listed volumes to reattach them:

#!/bin/bash
# $volumes_tmp_file contains one line per volume: "volume instance mount_point"
while read line; do
    volume=`echo $line | cut -f 1 -d " "`
    instance=`echo $line | cut -f 2 -d " "`
    mount_point=`echo $line | cut -f 3 -d " "`
    echo "ATTACHING VOLUME FOR INSTANCE - $instance"
    nova volume-attach $instance $volume $mount_point
    sleep 2
done < $volumes_tmp_file

At this stage, instances that were pending on the boot sequence
(plymouth) automatically continue their boot and restart normally, while the ones that already booted now see the volume.

SSH into instances

If some services depend on the volume, or if a volume has an entry in fstab, you should now restart the instance. This restart needs to be made from the instance itself, not through nova. SSH into the instance and perform a reboot:

# shutdown -r now

By completing this procedure, you can
successfully recover your cloud.

Follow these guidelines:

- Use the errors=remount-ro option in the fstab file, which prevents data corruption. The system locks any further writes to the disk if it detects an I/O error.
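For example, a hypothetical fstab entry for an attached volume inside an instance could look like this (the device name and mount point are placeholders for your own values):

```
/dev/vdb  /mnt/volume  ext4  defaults,errors=remount-ro  0  2
```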
This configuration option should be added on the cinder-volume server (the one that performs the iSCSI connection to the SAN), and also in the instances' fstab file.

- Do not add the entry for the SAN's disks to the cinder-volume server's fstab file. Some systems hang on that step, which means you could lose access to your cloud controller. To re-establish the session manually, run the following commands before performing the mount:
# iscsiadm -m discovery -t st -p $SAN_IP
# iscsiadm -m node --targetname $IQN -p $SAN_IP -l

- For your instances, if you have the whole /home/ directory on the disk, leave a user's directory with the user's bash files and the authorized_keys file (instead of emptying the /home directory and mapping the disk onto it). This enables you to connect to the instance, even without the volume attached, if you allow connections only through public keys.

Script the DRP

You can download from here a bash script which performs the following steps:

1. An array is created for instances and their attached volumes.
2. The MySQL database is updated.
3. Using euca2ools, all instances are restarted.
4. The volume attachment is made.
5. An SSH connection is performed into every instance using Compute credentials.

The "test mode" allows you to perform the whole sequence for only one instance.

To reproduce the power loss, connect to the compute node which runs
that same instance and close the iSCSI session. Do not detach the volume using the nova volume-detach command; instead, manually close the iSCSI session. The following example command closes the iSCSI session with the number 15:

# iscsiadm -m session -u -r 15

Do not forget the -r flag; otherwise, you close ALL sessions.
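If you are unsure of the session number, you can list the active sessions first. The transcript below is a hypothetical example (the exact output format varies between open-iscsi versions); the bracketed number is the session number to pass to -r:

```
# iscsiadm -m session
tcp: [15] 10.0.0.5:3260,1 iqn.2010-10.org.openstack:volume-3f57699a
```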