diff --git a/doc/admin-guide-cloud/compute/section_compute-recover-nodes.xml b/doc/admin-guide-cloud/compute/section_compute-recover-nodes.xml
new file mode 100644
index 0000000000..591e100298
--- /dev/null
+++ b/doc/admin-guide-cloud/compute/section_compute-recover-nodes.xml
@@ -0,0 +1,405 @@
+
+
+ Recover from a failed compute node
+ If you deployed Compute with a shared file system, you can quickly recover from a failed
+ compute node. Of the two methods covered in these sections, evacuating is the preferred
+ method even in the absence of shared storage. Evacuating provides many benefits over manual
+ recovery, such as re-attachment of volumes and floating IPs.
+
+
+ Manual recovery
+ To recover a KVM/libvirt compute node, see the previous section. Use the
+ following procedure for all other hypervisors.
+
+ Review host information
+
+ Identify the VMs on the affected hosts, using tools such as a
+ combination of nova list and nova show or
+ euca-describe-instances. For example, the following
+ output displays information about instance i-000015b9
+ that is running on node np-rcc54:
+ $euca-describe-instances
+i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60
+
+
+ Review the status of the host by querying the Compute database. Some of the
+ important information is highlighted below. The following example converts an
+ EC2 API instance ID into an OpenStack ID; if you used the
+ nova commands, you can substitute the ID directly. You
+ can find the credentials for your database in
+ /etc/nova.conf.
+ mysql>SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
+*************************** 1. row ***************************
+ created_at: 2012-06-19 00:48:11
+ updated_at: 2012-07-03 00:35:11
+ deleted_at: NULL
+...
+ id: 5561
+...
+ power_state: 5
+ vm_state: shutoff
+...
+ hostname: at3-ui02
+ host: np-rcc54
+...
+ uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
+...
+ task_state: NULL
+...
+
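The EC2-style ID is simply the hexadecimal form of the database `id`, so you can also perform the conversion in the shell before building the SQL query. A sketch, using the example ID from the output above:

```shell
# Convert the hex part of an EC2-style instance ID to the decimal id
# stored in the Compute database (equivalent to SQL CONV('15b9', 16, 10)).
ec2_id="i-000015b9"
hex_part="${ec2_id#i-}"       # strip the "i-" prefix -> 000015b9
db_id=$(( 16#$hex_part ))     # bash base-16 arithmetic
echo "$db_id"                 # prints 5561
```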
+
+ Recover the VM
+
+ After you have determined the status of the VM on the failed host,
+ decide to which compute host the affected VM should be moved. For example, run
+ the following database command to move the VM to
+ np-rcc46:
+ mysql>UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06';
+
+
+ If using a hypervisor that relies on libvirt (such as KVM), it is a
+ good idea to update the libvirt.xml file (found in
+ /var/lib/nova/instances/[instance ID]). The important
+ changes to make are:
+
+
+
+ Change the DHCPSERVER value to the host IP
+ address of the compute host that is now the VM's new
+ home.
+
+
+                            Update the VNC IP address to
+                                0.0.0.0, if it is not already set.
+
+
+
+
+
+ Reboot the VM:
+ $nova reboot --hard 3f57699a-e773-4650-a443-b4b37eed5a06
+
+
+ In theory, the above database update and nova
+ reboot command are all that is required to recover a VM from a
+ failed host. However, if further problems occur, consider looking at
+ recreating the network filter configuration using virsh,
+ restarting the Compute services or updating the vm_state
+ and power_state in the Compute database.
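The two libvirt.xml edits described in the procedure above can be sketched with sed. The instance path, the new DHCP server address, and the exact attribute layout (`name='DHCPSERVER' value='...'` and `listen='...'`) are assumptions; verify them against your own file, and back it up first:

```shell
# Hypothetical sketch of the two libvirt.xml edits. Substitute your own
# instance path and the IP address of the VM's new compute host.
new_dhcp=10.0.0.46   # example: IP address of the new compute host
xml="/var/lib/nova/instances/3f57699a-e773-4650-a443-b4b37eed5a06/libvirt.xml"
cp "$xml" "$xml.bak"
sed -i \
  -e "s/name='DHCPSERVER' value='[^']*'/name='DHCPSERVER' value='$new_dhcp'/" \
  -e "s/listen='[^']*'/listen='0.0.0.0'/" \
  "$xml"
```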
+
+
+ Recover from a UID/GID mismatch
+            When you run OpenStack Compute with a shared file system or an automated
+                configuration tool, you could encounter a situation where some files on your compute
+                node use the wrong UID or GID. This causes a number of errors, such as being
+                unable to perform live migration or to start virtual machines.
+            The following procedure runs on nova-compute hosts, based on the KVM hypervisor, and can help
+                restore the situation:
+
+ To recover from a UID/GID mismatch
+
+                    Ensure that you do not use numbers that are already in use for another
+                        user or group.
+
+
+ Set the nova uid in /etc/passwd to the same number in
+ all hosts (for example, 112).
+
+
+ Set the libvirt-qemu uid in
+ /etc/passwd to the
+ same number in all hosts (for example,
+ 119).
+
+
+ Set the nova group in
+ /etc/group file to
+ the same number in all hosts (for example,
+ 120).
+
+
+ Set the libvirtd group in
+ /etc/group file to
+ the same number in all hosts (for example,
+ 119).
+
+
+ Stop the services on the compute
+ node.
+
+
+ Change all the files owned by user nova or by
+ group nova. For example:
+ #find / -uid 108 -exec chown nova {} \; # note the 108 here is the old nova uid before the change
+#find / -gid 120 -exec chgrp nova {} \;
+
+
+                    Repeat these steps for the files owned by the libvirt-qemu user and
+                        group, if their IDs changed.
+
+
+ Restart the services.
+
+
+                    You can now run the find
+                        command again to verify that all files use the
+                        correct identifiers.
+
+
+
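The remapping steps above can be collected into one reviewable plan. The IDs are the examples from the text (new nova UID 112, old UID 108, nova GID 120), and usermod/groupmod are just one way to apply the /etc/passwd and /etc/group edits; verify every number against your own hosts first:

```shell
# Sketch of the UID/GID remap for the nova user and group, run on each
# compute node with the services stopped.
OLD_NOVA_UID=108
NEW_NOVA_UID=112
NOVA_GID=120

plan() {
    # Print the commands instead of running them, so the plan can be
    # reviewed first; pipe the output to "sh" to execute it.
    echo "usermod -u $NEW_NOVA_UID nova"      # applies the /etc/passwd edit
    echo "groupmod -g $NOVA_GID nova"         # applies the /etc/group edit
    echo "find / -uid $OLD_NOVA_UID -exec chown nova {} \\;"
    echo "find / -gid $NOVA_GID -exec chgrp nova {} \\;"
}
plan
```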
+
+ Recover cloud after disaster
+ Use the following procedures to manage your cloud after a disaster, and to easily
+ back up its persistent storage volumes. Backups are
+ mandatory, even outside of disaster scenarios.
+ For a DRP definition, see http://en.wikipedia.org/wiki/Disaster_Recovery_Plan.
+
+ Disaster recovery example
+ A disaster could happen to several components of your architecture (for
+ example, a disk crash, a network loss, or a power cut). In this example, the
+ following components are configured:
+
+
+ A cloud controller (nova-api,
+ nova-objectstore,
+ nova-network)
+
+
+ A compute node (nova-compute)
+
+
+ A Storage Area Network (SAN) used by OpenStack Block Storage
+ (cinder-volumes)
+
+
+ The worst disaster for a cloud is a power loss, which applies to all three
+ components. Before a power loss:
+
+
+                        From the SAN to the cloud controller, we have an active iSCSI session
+                            (used for the cinder-volumes LVM volume group).
+
+
+ From the cloud controller to the compute node, we also have active
+ iSCSI sessions (managed by cinder-volume).
+
+
+                        For every volume, an iSCSI session is made (so 14 EBS-style volumes equal
+                            14 sessions).
+
+
+ From the cloud controller to the compute node, we also have iptables/
+ ebtables rules which allow access from the cloud controller to the running
+ instance.
+
+
+                        Finally, the database stores the current state of the instances
+                            (in this case, "running"), and their volume
+                            attachments (mount point, volume ID, volume status, and so
+                            on).
+
+
+ After the power loss occurs and all hardware components restart:
+
+
+ From the SAN to the cloud, the iSCSI session no longer exists.
+
+
+ From the cloud controller to the compute node, the iSCSI sessions no
+ longer exist.
+
+
+ From the cloud controller to the compute node, the iptables and
+ ebtables are recreated, since at boot, nova-network
+ reapplies configurations.
+
+
+ From the cloud controller, instances are in a shutdown state (because
+ they are no longer running).
+
+
+ In the database, data was not updated at all, since Compute could not
+ have anticipated the crash.
+
+
+                    Before going further, and to prevent the administrator from making fatal
+                        mistakes, note that the instances are not lost: because no
+                        "destroy" or "terminate" command was
+                        invoked, the files for the instances remain on the compute node.
+ Perform these tasks in the following order.
+ Do not add any extra steps at this stage.
+
+
+
+ Get the current relation from a
+ volume to its instance, so that you
+ can recreate the attachment.
+
+
+ Update the database to clean the
+ stalled state. (After that, you cannot
+ perform the first step).
+
+
+ Restart the instances. In other
+ words, go from a shutdown to running
+ state.
+
+
+ After the restart, reattach the volumes to their respective
+ instances (optional).
+
+
+ SSH into the instances to reboot them.
+
+
+
+
+
+ Recover after a disaster
+
+ To perform disaster recovery
+
+ Get the instance-to-volume
+ relationship
+ You must determine the current relationship from a volume to its
+ instance, because you will re-create the attachment.
+ You can find this relationship by running nova
+ volume-list. Note that the nova client
+ includes the ability to get volume information from OpenStack Block
+ Storage.
+
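One hypothetical way to capture that relationship is to parse the saved nova volume-list output into "volume instance" pairs. The column positions below ($2 for the volume ID, $7 for the attached instance) are assumptions about the client's table layout; verify them against your actual output, and complete each line with the mount point by hand:

```shell
# Save the volume list, then extract "volume instance" pairs for the
# volumes that are in use.
nova volume-list > /tmp/volume-list.txt
awk -F'|' '/in-use/ { gsub(/ /, "", $2); gsub(/ /, "", $7); print $2, $7 }' \
    /tmp/volume-list.txt > /tmp/volumes.txt
```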
+
+ Update the database
+                    Update the database to clean up the stalled state. Restore
+                        every volume by running these queries against the database:
+ mysql>use cinder;
+mysql>update volumes set mountpoint=NULL;
+mysql>update volumes set status="available" where status <>"error_deleting";
+mysql>update volumes set attach_status="detached";
+mysql>update volumes set instance_id=0;
+ You can then run nova volume-list commands to list
+ all volumes.
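The same cleanup can be applied as one batch. A sketch that prints the statements for review before you pipe them to mysql (credentials omitted; supply the ones from your cinder configuration):

```shell
# Batch form of the cleanup queries above. Review the printed output,
# then pipe it to mysql (for example: sh thisscript | mysql cinder).
sql=$(cat <<'SQL'
UPDATE volumes SET mountpoint = NULL;
UPDATE volumes SET status = 'available' WHERE status <> 'error_deleting';
UPDATE volumes SET attach_status = 'detached';
UPDATE volumes SET instance_id = 0;
SQL
)
echo "$sql"
```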
+
+
+ Restart instances
+ Restart the instances using the nova reboot
+ $instance command.
+ At this stage, depending on your image, some instances completely
+ reboot and become reachable, while others stop on the "plymouth"
+ stage.
+
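If you have many instances, this step can be scripted. A sketch that feeds nova reboot from the volume/instance file gathered in the first step (the file name and field order are assumptions; it reboots only the instances recorded there):

```shell
# Reboot every instance recorded in the "volume instance mount_point"
# file (second field = instance ID).
while read -r volume instance mount_point; do
    nova reboot "$instance"
done < /tmp/volumes.txt
```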
+
+ DO NOT reboot a second time
+ Do not reboot instances that are stopped at this point. Instance state
+ depends on whether you added an /etc/fstab entry for
+ that volume. Images built with the cloud-init package
+ remain in a pending state, while others skip the missing volume and start.
+                        The goal of this stage is only to ask Compute to reboot every instance,
+                        so that the stored state is preserved. For more information about
+ cloud-init, see help.ubuntu.com/community/CloudInit.
+
+
+ Reattach volumes
+                    After the restart, once Compute has restored the right status, you can
+                        reattach the volumes to their respective instances by using the nova
+                            volume-attach command. The following snippet reads the
+                        volumes from a file and reattaches them:
+#!/bin/bash
+# Reads "volume instance mount_point" triples, one per line, from the
+# file named by $volumes_tmp_file (set this variable to the list you
+# built in the first step).
+while read -r volume instance mount_point; do
+    echo "ATTACHING VOLUME FOR INSTANCE - $instance"
+    nova volume-attach "$instance" "$volume" "$mount_point"
+    sleep 2
+done < "$volumes_tmp_file"
+                    At this stage, instances that were stuck in the boot sequence
+                        (plymouth) automatically continue booting and start
+                        normally, while instances that had already booted can now see the volume.
+
+
+ SSH into instances
+ If some services depend on the volume, or if a volume has an entry
+ into fstab, you should now simply restart the
+ instance. This restart needs to be made from the instance itself, not
+ through nova.
+ SSH into the instance and perform a reboot:
+ #shutdown -r now
+
+
+ By completing this procedure, you can
+ successfully recover your cloud.
+
+ Follow these guidelines:
+
+
+                    Use the errors=remount-ro mount option in the
+                        fstab file, which prevents data
+                        corruption.
+                    With this option, the file system is remounted read-only as soon as an
+                        I/O error is detected, which blocks further writes. Add this option on the cinder-volume server (the one which
+                        performs the iSCSI connection to the SAN), but also in the instances'
+                        fstab file.
+
+
+ Do not add the entry for the SAN's disks to the cinder-volume's
+ fstab file.
+ Some systems hang on that step, which means you could lose access to
+ your cloud-controller. To re-run the session manually, run the following
+ command before performing the mount:
+#iscsiadm -m discovery -t st -p $SAN_IP
+#iscsiadm -m node --target-name $IQN -p $SAN_IP -l
+
+
+ For your instances, if you have the whole /home/
+ directory on the disk, leave a user's directory with the user's bash
+ files and the authorized_keys file (instead of
+ emptying the /home directory and mapping the disk
+ on it).
+ This enables you to connect to the instance, even without the volume
+ attached, if you allow only connections through public keys.
+
+
+
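The first guideline can be made concrete with a sample fstab entry; note that the full ext* mount option name is errors=remount-ro, and the device name and mount point below are examples only:

```shell
# Sample fstab line for an attached volume inside an instance:
# errors=remount-ro remounts the file system read-only on I/O errors
# instead of risking corruption; nofail lets the instance boot even
# when the volume is absent.
entry='/dev/vdb  /mnt/data  ext4  defaults,errors=remount-ro,nofail  0  2'
echo "$entry"    # append this line to /etc/fstab inside the instance
```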
+
+
+ Script the DRP
+ You can download from here a bash script which performs the following steps:
+
+ An array is created for instances and their attached volumes.
+ The MySQL database is updated.
+ Using euca2ools, all instances are restarted.
+ The volume attachment is made.
+ An SSH connection is performed into every instance using Compute credentials.
+
+ The "test mode" allows you to perform
+ that whole sequence for only one
+ instance.
+            To reproduce the power loss, connect to the compute node that runs
+                that instance and close the iSCSI session. Do not detach the volume with the nova
+                    volume-detach command; instead, manually close the iSCSI session. The following
+                example command closes iSCSI session number 15:
+ #iscsiadm -m session -u -r 15
+ Do not forget the -r
+ flag. Otherwise, you close ALL
+ sessions.
+
+
+
diff --git a/doc/config-reference/compute/section_compute-security.xml b/doc/admin-guide-cloud/compute/section_compute-security.xml
similarity index 100%
rename from doc/config-reference/compute/section_compute-security.xml
rename to doc/admin-guide-cloud/compute/section_compute-security.xml
diff --git a/doc/admin-guide-cloud/compute/section_compute-system-admin.xml b/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
index 0db3c11d63..5236c0ba28 100644
--- a/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
@@ -500,437 +500,6 @@ local0.error @@172.20.1.43:1024
-
- Recover from a failed compute node
- If you have deployed Compute with a shared file
- system, you can quickly recover from a failed compute
- node. Of the two methods covered in these sections,
- the evacuate API is the preferred method even in the
- absence of shared storage. The evacuate API provides
- many benefits over manual recovery, such as
- re-attachment of volumes and floating IPs.
-
-
- Manual recovery
- For KVM/libvirt compute node recovery, see the previous section. Use the
- following procedure for all other hypervisors.
-
- To work with host information
-
- Identify the VMs on the affected hosts, using tools such as a
- combination of nova list and nova show
- or euca-describe-instances. Here's an example using the
- EC2 API - instance i-000015b9 that is running on node np-rcc54:
- i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60
-
-
- You can review the status of the host by using the Compute database.
- Some of the important information is highlighted below. This example
- converts an EC2 API instance ID into an OpenStack ID; if you used the
- nova commands, you can substitute the ID directly.
- You can find the credentials for your database in
- /etc/nova.conf.
- SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
-*************************** 1. row ***************************
- created_at: 2012-06-19 00:48:11
- updated_at: 2012-07-03 00:35:11
- deleted_at: NULL
-...
- id: 5561
-...
- power_state: 5
- vm_state: shutoff
-...
- hostname: at3-ui02
- host: np-rcc54
-...
- uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
-...
- task_state: NULL
-...
-
-
-
- To recover the VM
-
- When you know the status of the VM on the failed host, determine to
- which compute host the affected VM should be moved. For example, run the
- following database command to move the VM to np-rcc46:
- UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06';
-
-
- If using a hypervisor that relies on libvirt (such as KVM), it is a
- good idea to update the libvirt.xml file (found in
- /var/lib/nova/instances/[instance ID]). The important
- changes to make are:
-
-
-
- Change the DHCPSERVER value to the host IP
- address of the compute host that is now the VM's new
- home.
-
-
- Update the VNC IP if it isn't already to:
- 0.0.0.0.
-
-
-
-
-
- Reboot the VM:
- $nova reboot --hard 3f57699a-e773-4650-a443-b4b37eed5a06
-
-
- In theory, the above database update and nova
- reboot command are all that is required to recover a VM from a
- failed host. However, if further problems occur, consider looking at
- recreating the network filter configuration using virsh,
- restarting the Compute services or updating the vm_state
- and power_state in the Compute database.
-
-
-
- Recover from a UID/GID mismatch
- When running OpenStack compute, using a shared file
- system or an automated configuration tool, you could
- encounter a situation where some files on your compute
- node are using the wrong UID or GID. This causes a
- raft of errors, such as being unable to live migrate,
- or start virtual machines.
- The following procedure runs on nova-compute hosts, based on the KVM hypervisor, and could help to
- restore the situation:
-
- To recover from a UID/GID mismatch
-
- Ensure you don't use numbers that are already used for some other
- user/group.
-
-
- Set the nova uid in /etc/passwd to the same number in
- all hosts (for example, 112).
-
-
- Set the libvirt-qemu uid in
- /etc/passwd to the
- same number in all hosts (for example,
- 119).
-
-
- Set the nova group in
- /etc/group file to
- the same number in all hosts (for example,
- 120).
-
-
- Set the libvirtd group in
- /etc/group file to
- the same number in all hosts (for example,
- 119).
-
-
- Stop the services on the compute
- node.
-
-
- Change all the files owned by user nova or
- by group nova. For example:
- find / -uid 108 -exec chown nova {} \; # note the 108 here is the old nova uid before the change
-find / -gid 120 -exec chgrp nova {} \;
-
-
- Repeat the steps for the libvirt-qemu owned files if those needed to
- change.
-
-
- Restart the services.
-
-
- Now you can run the find
- command to verify that all files using the
- correct identifiers.
-
-
-
-
- Compute disaster recovery process
- Use the following procedures to manage your cloud after a disaster, and to easily
- back up its persistent storage volumes. Backups are
- mandatory, even outside of disaster scenarios.
- For a DRP definition, see http://en.wikipedia.org/wiki/Disaster_Recovery_Plan.
-
- A- The disaster recovery process
- presentation
- A disaster could happen to several components of
- your architecture: a disk crash, a network loss, a
- power cut, and so on. In this example, assume the
- following set up:
-
-
- A cloud controller (nova-api,
- nova-objecstore,
- nova-network)
-
-
- A compute node (nova-compute)
-
-
- A Storage Area Network used by
- cinder-volumes (aka
- SAN)
-
-
- The disaster example is the worst one: a power
- loss. That power loss applies to the three
- components. Let's see what
- runs and how it runs before the
- crash:
-
-
- From the SAN to the cloud controller, we
- have an active iscsi session (used for the
- "cinder-volumes" LVM's VG).
-
-
- From the cloud controller to the compute node, we also have active
- iscsi sessions (managed by cinder-volume).
-
-
- For every volume, an iscsi session is made (so 14 ebs volumes equals
- 14 sessions).
-
-
- From the cloud controller to the compute node, we also have iptables/
- ebtables rules which allow access from the cloud controller to the running
- instance.
-
-
- And at least, from the cloud controller to the compute node; saved
- into database, the current state of the instances (in that case "running" ),
- and their volumes attachment (mount point, volume ID, volume status, and so
- on.)
-
-
- Now, after the power loss occurs and all
- hardware components restart, the situation is as
- follows:
-
-
- From the SAN to the cloud, the ISCSI
- session no longer exists.
-
-
- From the cloud controller to the compute
- node, the ISCSI sessions no longer exist.
-
-
-
- From the cloud controller to the compute node, the iptables and
- ebtables are recreated, since, at boot,
- nova-network reapplies the
- configurations.
-
-
- From the cloud controller, instances are in a shutdown state (because
- they are no longer running)
-
-
- In the database, data was not updated at all, since Compute could not
- have anticipated the crash.
-
-
- Before going further, and to prevent the administrator from making fatal
- mistakes, the instances won't be lost, because
- no "destroy" or "terminate" command was invoked, so the files for the instances remain
- on the compute node.
- Perform these tasks in this exact order. Any extra
- step would be dangerous at this stage :
-
-
-
- Get the current relation from a
- volume to its instance, so that you
- can recreate the attachment.
-
-
- Update the database to clean the
- stalled state. (After that, you cannot
- perform the first step).
-
-
- Restart the instances. In other
- words, go from a shutdown to running
- state.
-
-
- After the restart, reattach the volumes to their respective
- instances (optional).
-
-
- SSH into the instances to reboot them.
-
-
-
-
-
- B - Disaster recovery
-
- To perform disaster recovery
-
- Get the instance-to-volume
- relationship
- You must get the current relationship from a volume to its instance,
- because you will re-create the attachment.
- You can find this relationship by running nova
- volume-list. Note that the nova client
- includes the ability to get volume information from Block Storage.
-
-
- Update the database
- Update the database to clean the stalled state. You must restore for
- every volume, using these queries to clean up the database:
- mysql>use cinder;
-mysql>update volumes set mountpoint=NULL;
-mysql>update volumes set status="available" where status <>"error_deleting";
-mysql>update volumes set attach_status="detached";
-mysql>update volumes set instance_id=0;
- Then, when you run nova volume-list commands, all
- volumes appear in the listing.
-
-
- Restart instances
- Restart the instances using the nova reboot
- $instance command.
- At this stage, depending on your image, some instances completely
- reboot and become reachable, while others stop on the "plymouth"
- stage.
-
-
- DO NOT reboot a second time
- Do not reboot instances that are stopped at this point. Instance state
- depends on whether you added an /etc/fstab entry for
- that volume. Images built with the cloud-init package
- remain in a pending state, while others skip the missing volume and start.
- The idea of that stage is only to ask nova to reboot every instance, so the
- stored state is preserved. For more information about
- cloud-init, see help.ubuntu.com/community/CloudInit.
-
-
- Reattach volumes
- After the restart, you can reattach the volumes to their respective
- instances. Now that nova has restored the right status,
- it is time to perform the attachments through a nova
- volume-attach
- This simple snippet uses the created
- file:
- #!/bin/bash
-
-while read line; do
- volume=`echo $line | $CUT -f 1 -d " "`
- instance=`echo $line | $CUT -f 2 -d " "`
- mount_point=`echo $line | $CUT -f 3 -d " "`
- echo "ATTACHING VOLUME FOR INSTANCE - $instance"
- nova volume-attach $instance $volume $mount_point
- sleep 2
-done < $volumes_tmp_file
- At that stage, instances that were
- pending on the boot sequence (plymouth)
- automatically continue their boot, and
- restart normally, while the ones that
- booted see the volume.
-
-
- SSH into instances
- If some services depend on the volume, or if a volume has an entry
- into fstab, it could be good to simply restart the
- instance. This restart needs to be made from the instance itself, not
- through nova. So, we SSH into the instance and perform a
- reboot:
- #shutdown -r now
-
-
- By completing this procedure, you can
- successfully recover your cloud.
-
- Follow these guidelines:
-
-
- Use the errors=remount parameter in the
- fstab file, which prevents data
- corruption.
- The system locks any write to the disk if it detects an I/O error.
- This configuration option should be added into the cinder-volume server (the one which
- performs the ISCSI connection to the SAN), but also into the instances'
- fstab file.
-
-
- Do not add the entry for the SAN's disks to the cinder-volume's
- fstab file.
- Some systems hang on that step, which means you could lose access to
- your cloud-controller. To re-run the session manually, you would run the
- following command before performing the mount:
- #iscsiadm -m discovery -t st -p $SAN_IP $ iscsiadm -m node --target-name $IQN -p $SAN_IP -l
-
-
- For your instances, if you have the whole /home/
- directory on the disk, instead of emptying the
- /home directory and map the disk on it, leave a
- user's directory with the user's bash files and the
- authorized_keys file.
- This enables you to connect to the instance, even without the volume
- attached, if you allow only connections through public keys.
-
-
-
-
-
- C - Scripted DRP
-
- To use scripted DRP
- You can download from here a bash script which performs
- these steps:
-
- The "test mode" allows you to perform
- that whole sequence for only one
- instance.
-
-
- To reproduce the power loss, connect to
- the compute node which runs that same
- instance and close the iscsi session.
- Do not
- detach the volume through
- nova
- volume-detach,
- but instead manually close the iscsi
- session.
-
-
- In this example, the iscsi session is
- number 15 for that instance:
- #iscsiadm -m session -u -r 15
-
-
- Do not forget the -r
- flag. Otherwise, you close ALL
- sessions.
-
-
-
-
+
+
diff --git a/doc/common/section_cli_nova_evacuate.xml b/doc/common/section_cli_nova_evacuate.xml
index 94e141f726..6da91f4e73 100644
--- a/doc/common/section_cli_nova_evacuate.xml
+++ b/doc/common/section_cli_nova_evacuate.xml
@@ -4,34 +4,26 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="nova_cli_evacuate">
Evacuate instances
- If a cloud compute node fails due to a hardware malfunction
- or another reason, you can evacuate instances to make them
- available again.
- You can choose evacuation parameters for your use
- case.
- To preserve user data on server disk, you must configure
- shared storage on the target host. Also, you must validate
- that the current VM host is down. Otherwise the evacuation
+ If a cloud compute node fails due to a hardware malfunction or another reason, you can
+ evacuate instances to make them available again. You can choose evacuation parameters for
+ your use case.
+ To preserve user data on server disk, you must configure shared storage on the target
+ host. Also, you must validate that the current VM host is down; otherwise, the evacuation
fails with an error.
- To find a different host for the evacuated instance,
- run this command to list hosts:
+            To list hosts and find a different host for the evacuated instance, run:
+$nova host-list
- You can pass the instance password to the command by
- using the --password <pwd>
- option. If you do not specify a password, one is
- generated and printed after the command finishes
- successfully. The following command evacuates a server
- without shared storage:
+ Evacuate the instance. You can pass the instance password to the command by using
+ the --password <pwd> option. If you do not specify a
+ password, one is generated and printed after the command finishes successfully. The
+ following command evacuates a server without shared storage from a host that is down
+                to the specified host_b:
+$nova evacuate evacuated_server_name host_b
- The command evacuates an instance from a down host
- to a specified host. The instance is booted from a new
- disk, but preserves its configuration including its
- ID, name, uid, IP address, and so on. The command
- returns a password:
+ The instance is booted from a new disk, but preserves its configuration including
+                    its ID, name, uid, IP address, and so on. The command returns a password:
+-----------+--------------+
| Property | Value |
+-----------+--------------+
@@ -39,14 +31,12 @@
+-----------+--------------+
- To preserve the user disk data on the evacuated
- server, deploy OpenStack Compute with shared file
-        system. To configure your system, see 
+        To preserve the user disk data on the evacuated server, deploy OpenStack Compute
+ with a shared file system. To configure your system, see Configure migrations in
- OpenStack Configuration
- Reference. In this example, the
- password remains unchanged.
+            Configure migrations in OpenStack Configuration
+ Reference. In the following example, the password remains
+            unchanged:
+$nova evacuate evacuated_server_name host_b --on-shared-storage
diff --git a/doc/common/section_trusted-compute-pools.xml b/doc/common/section_trusted-compute-pools.xml
index 83adcce80d..9b91bad26b 100644
--- a/doc/common/section_trusted-compute-pools.xml
+++ b/doc/common/section_trusted-compute-pools.xml
@@ -4,16 +4,14 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="trusted-compute-pools">
Trusted compute pools
- Trusted compute pools enable administrators to designate a
- group of compute hosts as trusted. These hosts use hardware-based
- security features, such as the Intel Trusted Execution
- Technology (TXT), to provide an additional level of security.
- Combined with an external stand-alone web-based remote
- attestation server, cloud providers can ensure that the
- compute node runs only software with verified measurements and
- can ensure a secure cloud stack.
- Through the trusted compute pools, cloud subscribers can
- request services to run on verified compute nodes.
+ Trusted compute pools enable administrators to designate a group of compute hosts as
+ trusted. These hosts use hardware-based security features, such as the Intel Trusted
+ Execution Technology (TXT), to provide an additional level of security. Combined with an
+ external stand-alone, web-based remote attestation server, cloud providers can ensure that
+ the compute node runs only software with verified measurements and can ensure a secure cloud
+ stack.
+ Using the trusted compute pools, cloud subscribers can request services to run on verified
+        compute nodes.
        The remote attestation server performs node verification as
follows:
@@ -26,13 +24,12 @@
measured.
- Measured data is sent to the attestation server when
- challenged by attestation server.
+ Measured data is sent to the attestation server when challenged by the attestation
+ server.
- The attestation server verifies those measurements
- against a good and known database to determine nodes'
- trustworthiness.
+ The attestation server verifies those measurements against a good and known
+                    database to determine node trustworthiness.
        A description of how to set up an attestation service is
@@ -57,27 +54,40 @@
Configure Compute to use trusted compute pools
- Configure the Compute service with the
- connection information for the attestation
- service.
- Specify these connection options in the
- trusted_computing section
- in the nova.conf
- configuration file:
+ Enable scheduling support for trusted compute pools by adding the following
+ lines in the DEFAULT section in the
+ /etc/nova/nova.conf file:
+ [DEFAULT]
+compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
+scheduler_available_filters=nova.scheduler.filters.all_filters
+scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter
+
+
+ Specify the connection information for your attestation service by adding the
+ following lines to the trusted_computing section in the
+ /etc/nova/nova.conf file:
+ [trusted_computing]
+server=10.1.71.206
+port=8443
+server_ca_file=/etc/nova/ssl.10.1.71.206.crt
+# If using OAT v1.5, use this api_url:
+api_url=/AttestationService/resources
+# If using OAT pre-v1.5, use this api_url:
+#api_url=/OpenAttestationWebServices/V1.0
+auth_blob=i-am-openstack
+                    Where:
                    server
- Host name or IP address of the host
- that runs the attestation
- service
+ Host name or IP address of the host that runs the attestation
+                        service.
                    port
- HTTPS port for the attestation
- service
+ HTTPS port for the attestation service.
@@ -90,8 +100,7 @@
api_url
- The attestation service URL
- path.
+ The attestation service's URL path.
@@ -104,31 +113,6 @@
-
- To enable scheduling support for trusted compute
- pools, add the following lines to the
- DEFAULT and
- trusted_computing sections
- in the /etc/nova/nova.conf
- file. Edit the details in the
- trusted_computing section
- based on the details of your attestation
- service:
- [DEFAULT]
-compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
-scheduler_available_filters=nova.scheduler.filters.all_filters
-scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter
-
-[trusted_computing]
-server=10.1.71.206
-port=8443
-server_ca_file=/etc/nova/ssl.10.1.71.206.crt
-# If using OAT v1.5, use this api_url:
-api_url=/AttestationService/resources
-# If using OAT pre-v1.5, use this api_url:
-#api_url=/OpenAttestationWebServices/V1.0
-auth_blob=i-am-openstack
- Restart the nova-compute and Configuration reference
- To customize the trusted compute pools, use the configuration
- option settings documented in .
+ To customize the trusted compute pools, use the following configuration
+ option settings:
+
+ Specify trusted flavors
- You must configure one or more flavors as
- trusted. Users can request
- trusted nodes by specifying a trusted flavor when they
- boot an instance.
- Use the nova flavor-key set command
- to set a flavor as trusted. For example, to set the
- m1.tiny flavor as trusted:
- $nova flavor-key m1.tiny set trust:trusted_host trusted
- To request that their instances run on a trusted host,
- users can specify a trusted flavor on the nova
- boot command:
-
-
-
-
-
-
-
-
+ To designate hosts as trusted:
+
+
+ Configure one or more flavors as trusted by using the nova
+ flavor-key set command. For example, to set the
+ m1.tiny flavor as trusted:
+ $nova flavor-key m1.tiny set trust:trusted_host trusted
+
+            Request that your instance run on a trusted host by specifying a trusted flavor when
+                booting the instance. For example:
+ $nova boot --flavor m1.tiny --key_name myKeypairName --image myImageID newInstanceName
+
+
+
diff --git a/doc/config-reference/ch_computeconfigure.xml b/doc/config-reference/ch_computeconfigure.xml
index 7e64edc496..00a0d57bac 100644
--- a/doc/config-reference/ch_computeconfigure.xml
+++ b/doc/config-reference/ch_computeconfigure.xml
@@ -92,7 +92,6 @@
-