From 39ac6cc258e66076a955463a53647888419867b8 Mon Sep 17 00:00:00 2001 From: Andreas Jaeger Date: Wed, 5 Mar 2014 20:09:43 +0100 Subject: [PATCH] Lowercase compute node It's "compute node", not "Compute node" (similarly compute host). Also, fix capitalization of "live migration". Change-Id: I57ac46b845e217c2607cf99dfabcfaab25d84ea5 --- doc/admin-guide-cloud/ch_compute.xml | 2 +- .../section_networking-scenarios.xml | 10 ++++----- .../section_networking_adv_features.xml | 2 +- .../section_networking_introduction.xml | 2 +- ...ion_ts_failed_attach_vol_no_sysfsutils.xml | 4 ++-- .../section_ts_failed_connect_vol_FC_SAN.xml | 5 ++--- .../section_ts_multipath_warn.xml | 6 ++--- .../section_ts_vol_attach_miss_sg_scan.xml | 4 ++-- .../section_compute_config-firewalls.xml | 2 +- doc/common/section_fibrechannel.xml | 2 +- .../drivers/emc-volume-driver.xml | 22 +++++++++++-------- .../drivers/huawei-storage-driver.xml | 10 ++++----- .../block-storage/drivers/xenapi-nfs.xml | 2 +- .../section_block-storage-overview.xml | 2 +- .../compute/section_hypervisor_hyper-v.xml | 2 +- doc/glossary/glossary-terms.xml | 2 +- doc/install-guide/section_ceilometer-nova.xml | 2 +- doc/install-guide/section_nova-compute.xml | 6 ++--- doc/install-guide/section_nova-kvm.xml | 2 +- .../ch055_security-services-for-instances.xml | 11 +++++++++- ...001-ch005-vm-provisioning-walk-through.xml | 2 +- .../module001-ch011-block-storage.xml | 2 +- .../section_cli_keystone_set_quotas.xml | 2 +- .../section_dashboard_admin_set_quotas.xml | 2 +- .../section_cli_nova_config-drive.xml | 2 +- ..._dashboard_launch_instances_from_image.xml | 4 ++-- 26 files changed, 63 insertions(+), 51 deletions(-) diff --git a/doc/admin-guide-cloud/ch_compute.xml b/doc/admin-guide-cloud/ch_compute.xml index 2749a1194d..5cad3bd1bb 100644 --- a/doc/admin-guide-cloud/ch_compute.xml +++ b/doc/admin-guide-cloud/ch_compute.xml @@ -2314,7 +2314,7 @@ HostC p2 5 10240 150 ID]). 
The important changes to make are to change the DHCPSERVER value to - the host ip address of the Compute host + the host IP address of the compute host that is the VMs new home, and update the VNC IP if it isn't already 0.0.0.0. diff --git a/doc/admin-guide-cloud/section_networking-scenarios.xml b/doc/admin-guide-cloud/section_networking-scenarios.xml index ff657ecd97..3d92b336ca 100644 --- a/doc/admin-guide-cloud/section_networking-scenarios.xml +++ b/doc/admin-guide-cloud/section_networking-scenarios.xml @@ -74,7 +74,7 @@ bridge_mappings = physnet2:br-eth1
Scenario 1: Compute host config - The following figure shows how to configure various Linux networking devices on the Compute host: + The following figure shows how to configure various Linux networking devices on the compute host: @@ -334,14 +334,14 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1
Scenario 2: Compute host config - The following figure shows how to configure Linux networking devices on the Compute host: + The following figure shows how to configure Linux networking devices on the compute host: - The Compute host configuration resembles the + The compute host configuration resembles the configuration in scenario 1. However, in scenario 1, a guest connects to two subnets while in this scenario, the subnets belong to different tenants. @@ -545,14 +545,14 @@ physical_interface_mappings = physnet2:eth1
Scenario 2: Compute host config The following figure shows how the various Linux - networking devices would be configured on the Compute host + networking devices would be configured on the compute host under this scenario. - The configuration on the Compute host is very + The configuration on the compute host is very similar to the configuration in scenario 1. The only real difference is that scenario 1 had a guest connected to two subnets, and in this scenario the subnets belong to diff --git a/doc/admin-guide-cloud/section_networking_adv_features.xml b/doc/admin-guide-cloud/section_networking_adv_features.xml index a53b7f25cd..6138cb272f 100644 --- a/doc/admin-guide-cloud/section_networking_adv_features.xml +++ b/doc/admin-guide-cloud/section_networking_adv_features.xml @@ -61,7 +61,7 @@ physical network A network connecting virtualization hosts - (such as, Compute nodes) with each other + (such as compute nodes) with each other and with other network resources. Each physical network might support multiple virtual networks. The provider extension diff --git a/doc/admin-guide-cloud/section_networking_introduction.xml b/doc/admin-guide-cloud/section_networking_introduction.xml index 14cf6b35cb..0e999fe2ba 100644 --- a/doc/admin-guide-cloud/section_networking_introduction.xml +++ b/doc/admin-guide-cloud/section_networking_introduction.xml @@ -818,7 +818,7 @@ password = "PLUMgrid-director-admin-password" Installation Guide. 
You can use the same configuration file - for many Compute nodes by using a network + for many compute nodes by using a network interface name with a different IP address: openflow_rest_api = <ip-address>:<port-no> ovsdb_interface = <eth0> tunnel_interface = <eth0> diff --git a/doc/admin-guide-cloud/section_ts_failed_attach_vol_no_sysfsutils.xml b/doc/admin-guide-cloud/section_ts_failed_attach_vol_no_sysfsutils.xml index 45e89c4608..60e7451a08 100644 --- a/doc/admin-guide-cloud/section_ts_failed_attach_vol_no_sysfsutils.xml +++ b/doc/admin-guide-cloud/section_ts_failed_attach_vol_no_sysfsutils.xml @@ -5,7 +5,7 @@
Problem This warning and error occurs if you do not have the required - sysfsutils package installed on the Compute node. + sysfsutils package installed on the compute node. WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool is not installed ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] [instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-477a-be9b-47c97626555c] @@ -13,7 +13,7 @@ Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.
Solution - Run the following command on the Compute node to install the + Run the following command on the compute node to install the sysfsutils packages. $ sudo apt-get install sysfsutils diff --git a/doc/admin-guide-cloud/section_ts_failed_connect_vol_FC_SAN.xml b/doc/admin-guide-cloud/section_ts_failed_connect_vol_FC_SAN.xml index facfcf3604..6e2ec22f46 100644 --- a/doc/admin-guide-cloud/section_ts_failed_connect_vol_FC_SAN.xml +++ b/doc/admin-guide-cloud/section_ts_failed_connect_vol_FC_SAN.xml @@ -5,7 +5,7 @@
Problem Compute node failed to connect to a volume in a Fibre Channel (FC) SAN configuration. - The WWN may not be zoned correctly in your FC SAN that links the Compute host to the + The WWN may not be zoned correctly in your FC SAN that links the compute host to the storage array. ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3] Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while attaching at /dev/vdjTRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3] @@ -14,7 +14,6 @@ Traceback (most recent call last):…f07aa4c3d5f3\] ClientException: The server
Solution The network administrator must configure the FC SAN fabric by correctly zoning the WWN - (port names) from your Compute node HBAs. + (port names) from your compute node HBAs.
- diff --git a/doc/admin-guide-cloud/section_ts_multipath_warn.xml b/doc/admin-guide-cloud/section_ts_multipath_warn.xml index 772357ab07..83399c1de5 100644 --- a/doc/admin-guide-cloud/section_ts_multipath_warn.xml +++ b/doc/admin-guide-cloud/section_ts_multipath_warn.xml @@ -7,10 +7,10 @@
Problem Multipath call failed exit. This warning occurs in the Compute log if you do not have the - optional multipath-tools package installed on the Compute node. + optional multipath-tools package installed on the compute node. This is an optional package and the volume attachment does work without the multipath tools installed. If the multipath-tools package is installed on the - Compute node, it is used to perform the volume attachment. The IDs in your message are + compute node, it is used to perform the volume attachment. The IDs in your message are unique to your system. WARNING nova.storage.linuxscsi [req-cac861e3-8b29-4143-8f1b-705d0084e571 admin admin|req-cac861e3-8b29-4143-8f1b-705d0084e571 admin admin] Multipath call failed exit @@ -18,7 +18,7 @@
Solution - Run the following command on the Compute node to install the + Run the following command on the compute node to install the multipath-tools packages. $ sudo apt-get install multipath-tools diff --git a/doc/admin-guide-cloud/section_ts_vol_attach_miss_sg_scan.xml b/doc/admin-guide-cloud/section_ts_vol_attach_miss_sg_scan.xml index ec7b68eb3f..790d1e050b 100644 --- a/doc/admin-guide-cloud/section_ts_vol_attach_miss_sg_scan.xml +++ b/doc/admin-guide-cloud/section_ts_vol_attach_miss_sg_scan.xml @@ -12,7 +12,7 @@ sg_scan file not found. This warning and error occur when the sg3-utils package is not installed - on the Compute node. The IDs in your message are unique to + on the compute node. The IDs in your message are unique to your system: ERROR nova.compute.manager [req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin|req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin] [instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5] @@ -22,7 +22,7 @@ Stdout: '/usr/local/bin/nova-rootwrap: Executable not found: /usr/bin/sg_scan
Solution - Run this command on the Compute node to install the + Run this command on the compute node to install the sg3-utils package: $ sudo apt-get install sg3-utils
diff --git a/doc/common/section_compute_config-firewalls.xml b/doc/common/section_compute_config-firewalls.xml index c137a669c5..63974be267 100644 --- a/doc/common/section_compute_config-firewalls.xml +++ b/doc/common/section_compute_config-firewalls.xml @@ -51,6 +51,6 @@ The iptables firewall now enables incoming connections to the Compute - services. Repeat this process for each Compute node. + services. Repeat this process for each compute node.
diff --git a/doc/common/section_fibrechannel.xml b/doc/common/section_fibrechannel.xml index 92d1525689..bcd7e4cd30 100644 --- a/doc/common/section_fibrechannel.xml +++ b/doc/common/section_fibrechannel.xml @@ -4,7 +4,7 @@ xml:id="fibrechannel"> Fibre Channel support in Compute Fibre Channel support in OpenStack Compute is remote block - storage attached to Compute nodes for VMs. + storage attached to compute nodes for VMs.
In the Grizzly release, Fibre Channel supported only the KVM hypervisor. Compute and Block Storage for Fibre Channel do not support automatic diff --git a/doc/config-reference/block-storage/drivers/emc-volume-driver.xml b/doc/config-reference/block-storage/drivers/emc-volume-driver.xml index 28201fa09c..6d71029aea 100644 --- a/doc/config-reference/block-storage/drivers/emc-volume-driver.xml +++ b/doc/config-reference/block-storage/drivers/emc-volume-driver.xml @@ -144,11 +144,11 @@
Register with VNX - To export a VNX volume to a Compute node, you must + To export a VNX volume to a compute node, you must register the node with VNX. Register the node - On the Compute node 1.1.1.1, do + On the compute node 1.1.1.1, do the following (assume 10.10.61.35 is the iscsi target): $ sudo /etc/init.d/open-iscsi start @@ -156,12 +156,12 @@ $ cd /etc/iscsi $ sudo more initiatorname.iscsi $ iscsiadm -m node - Log in to VNX from the Compute node using the target + Log in to VNX from the compute node using the target corresponding to the SPA port: $ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l Where iqn.1992-04.com.emc:cx.apm01234567890.a0 - is the initiator name of the Compute node. Login to + is the initiator name of the compute node. Log in to Unisphere, go to VNX00000->Hosts->Initiators, Refresh and wait until initiator @@ -173,10 +173,10 @@ IP address myhost1. Click Register. Now host 1.1.1.1 also appears under Hosts->Host List. - Log out of VNX on the Compute node: + Log out of VNX on the compute node: $ sudo iscsiadm -m node -u - Log in to VNX from the Compute node using the target + Log in to VNX from the compute node using the target corresponding to the SPB port: $ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l @@ -247,9 +247,13 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml - To attach VMAX volumes to an OpenStack VM, you must create a Masking View by - using Unisphere for VMAX. The Masking View must have an Initiator Group that - contains the initiator of the OpenStack Compute node that hosts the VM. + + To attach VMAX volumes to an OpenStack VM, you must + create a Masking View by using Unisphere for + VMAX. The Masking View must have an Initiator Group + that contains the initiator of the OpenStack compute + node that hosts the VM. +
diff --git a/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml b/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml index 183f40f1fc..d51115d2c5 100644 --- a/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml +++ b/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml @@ -535,7 +535,7 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d IP address of the iSCSI port provided - for Compute nodes. + for compute nodes.
@@ -544,7 +544,7 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d Linux - The OS type for a Compute node. + The OS type for a compute node. @@ -552,7 +552,7 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d - The IPs for Compute nodes. + The IPs for compute nodes. @@ -560,9 +560,9 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d You can configure one iSCSI target port for - each or all Compute nodes. The driver checks + each or all compute nodes. The driver checks whether a target port IP address is configured - for the current Compute node. If not, select + for the current compute node. If not, select . diff --git a/doc/config-reference/block-storage/drivers/xenapi-nfs.xml b/doc/config-reference/block-storage/drivers/xenapi-nfs.xml index 7f7c54a5da..a314cf69e5 100644 --- a/doc/config-reference/block-storage/drivers/xenapi-nfs.xml +++ b/doc/config-reference/block-storage/drivers/xenapi-nfs.xml @@ -38,7 +38,7 @@ You can use a XenServer as a storage controller and - Compute node at the same time. This minimal + compute node at the same time. This minimal configuration consists of a XenServer/XCP box and an NFS share. diff --git a/doc/config-reference/block-storage/section_block-storage-overview.xml b/doc/config-reference/block-storage/section_block-storage-overview.xml index 080dd48600..3b314ccd96 100644 --- a/doc/config-reference/block-storage/section_block-storage-overview.xml +++ b/doc/config-reference/block-storage/section_block-storage-overview.xml @@ -91,7 +91,7 @@ they can be used as the root store to boot instances. Volumes are persistent R/W block storage devices most commonly attached to the - Compute node through iSCSI. + compute node through iSCSI. Snapshots. 
A read-only point in time copy diff --git a/doc/config-reference/compute/section_hypervisor_hyper-v.xml b/doc/config-reference/compute/section_hypervisor_hyper-v.xml index 87306d0aba..c29324bb35 100644 --- a/doc/config-reference/compute/section_hypervisor_hyper-v.xml +++ b/doc/config-reference/compute/section_hypervisor_hyper-v.xml @@ -25,7 +25,7 @@
Hyper-V configuration The following sections discuss how to prepare the Windows Hyper-V node for operation - as an OpenStack Compute node. Unless stated otherwise, any configuration information + as an OpenStack compute node. Unless stated otherwise, any configuration information should work for both the Windows 2008r2 and 2012 platforms. Local Storage Considerations The Hyper-V compute node needs to have ample storage for storing the virtual machine diff --git a/doc/glossary/glossary-terms.xml b/doc/glossary/glossary-terms.xml index 243b95bdcb..d7af289b03 100644 --- a/doc/glossary/glossary-terms.xml +++ b/doc/glossary/glossary-terms.xml @@ -2987,7 +2987,7 @@ Each entry in a typical ACL specifies a subject and an operation. For instance, network node - Any Compute node that runs the network worker + Any compute node that runs the network worker daemon. diff --git a/doc/install-guide/section_ceilometer-nova.xml b/doc/install-guide/section_ceilometer-nova.xml index 8b505ef5b5..1965b93eda 100644 --- a/doc/install-guide/section_ceilometer-nova.xml +++ b/doc/install-guide/section_ceilometer-nova.xml @@ -11,7 +11,7 @@ details how to install the agent that runs on the compute node. - Install the Telemetry service on the Compute node: + Install the Telemetry service on the compute node: # apt-get install ceilometer-agent-compute # yum install openstack-ceilometer-compute # zypper install openstack-ceilometer-agent-compute diff --git a/doc/install-guide/section_nova-compute.xml b/doc/install-guide/section_nova-compute.xml index 2e7cbc4f81..4a7545f130 100644 --- a/doc/install-guide/section_nova-compute.xml +++ b/doc/install-guide/section_nova-compute.xml @@ -2,10 +2,10 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="nova-compute"> - Configure a Compute node + Configure a compute node After you configure the Compute service on the controller - node, you must configure another system as a Compute node. 
The - Compute node receives requests from the controller node and hosts + node, you must configure another system as a compute node. The + compute node receives requests from the controller node and hosts virtual machine instances. You can run all services on a single node, but the examples in this guide use separate systems. This makes it easy to scale horizontally by adding additional Compute diff --git a/doc/install-guide/section_nova-kvm.xml b/doc/install-guide/section_nova-kvm.xml index 7b6d6e198c..091616a467 100644 --- a/doc/install-guide/section_nova-kvm.xml +++ b/doc/install-guide/section_nova-kvm.xml @@ -2,7 +2,7 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="nova-kvm"> - Enable KVM on the Compute node + Enable KVM on the compute node OpenStack Compute requires hardware virtualization support and certain kernel modules. Use the following procedure to diff --git a/doc/security-guide/ch055_security-services-for-instances.xml b/doc/security-guide/ch055_security-services-for-instances.xml index f49970fb8c..fc46dd7914 100644 --- a/doc/security-guide/ch055_security-services-for-instances.xml +++ b/doc/security-guide/ch055_security-services-for-instances.xml @@ -151,7 +151,16 @@ gpg --verify SHA256SUMS.gpg SHA256SUMSsha256sum -c SHA256SUMS 2>&1 | grep
Instance Migrations - OpenStack and the underlying virtualization layers provide for the Live Migration of images between OpenStack nodes allowing you to seamlessly perform rolling upgrades of your OpenStack Compute nodes without instance downtime. However, Live Migrations also come with their fair share of risk. To understand the risks involved, it is important to first understand how a live migration works. The following are the high level steps preformed during a live migration. + + OpenStack and the underlying virtualization layers provide for + the live migration of images between OpenStack nodes allowing + you to seamlessly perform rolling upgrades of your OpenStack + compute nodes without instance downtime. However, live + migrations also come with their fair share of risk. To + understand the risks involved, it is important to first + understand how a live migration works. The following are the + high-level steps performed during a live migration. + Start instance on destination host Transfer memory diff --git a/doc/training-guides/module001-ch005-vm-provisioning-walk-through.xml b/doc/training-guides/module001-ch005-vm-provisioning-walk-through.xml index 3949965b71..9a94cd5191 100644 --- a/doc/training-guides/module001-ch005-vm-provisioning-walk-through.xml +++ b/doc/training-guides/module001-ch005-vm-provisioning-walk-through.xml @@ -164,7 +164,7 @@ The following diagram shows the system state prior to launching an instance. The image store fronted by the Image Service has some number of predefined images. In the - cloud, there is an available Compute node with available vCPU, + cloud, there is an available compute node with available vCPU, memory and local disk resources. Plus there are a number of predefined volumes in the cinder-volume service. 
diff --git a/doc/training-guides/module001-ch011-block-storage.xml b/doc/training-guides/module001-ch011-block-storage.xml index 8c2c652725..0c6f2cd3e2 100644 --- a/doc/training-guides/module001-ch011-block-storage.xml +++ b/doc/training-guides/module001-ch011-block-storage.xml @@ -165,7 +165,7 @@ Volumes are allocated block storage resources that can be attached to instances as secondary storage or they can be used as the root store to boot instances. Volumes are persistent R/W Block - Storage devices most commonly attached to the Compute node via + Storage devices most commonly attached to the compute node via iSCSI. Snapshots A Snapshot in OpenStack Block Storage is a read-only point in diff --git a/doc/user-guide-admin/section_cli_keystone_set_quotas.xml b/doc/user-guide-admin/section_cli_keystone_set_quotas.xml index 121f536089..c03629f052 100644 --- a/doc/user-guide-admin/section_cli_keystone_set_quotas.xml +++ b/doc/user-guide-admin/section_cli_keystone_set_quotas.xml @@ -22,7 +22,7 @@ the OpenStack Compute Service, the OpenStack Block Storage Service, and the OpenStack Networking Service. Typically, default values are changed because a tenant - requires more than 10 volumes, or more than 1TB on a Compute node. + requires more than 10 volumes, or more than 1TB on a compute node. To view all tenants (projects), run: $ keystone tenant-list diff --git a/doc/user-guide-admin/section_dashboard_admin_set_quotas.xml b/doc/user-guide-admin/section_dashboard_admin_set_quotas.xml index 267348b56a..24107a8ed6 100644 --- a/doc/user-guide-admin/section_dashboard_admin_set_quotas.xml +++ b/doc/user-guide-admin/section_dashboard_admin_set_quotas.xml @@ -18,7 +18,7 @@ cloud resources are optimized. Quotas can be enforced at both the tenant (or project) and the tenant-user level. Typically, you change quotas when a project needs more than 10 - volumes or 1 TB on a Compute node. + volumes or 1 TB on a compute node. 
Using the Dashboard, you can view default Compute and Block Storage quotas for new tenants, as well as update quotas for existing tenants. diff --git a/doc/user-guide/section_cli_nova_config-drive.xml b/doc/user-guide/section_cli_nova_config-drive.xml index 7bf6a727eb..401cb8b440 100644 --- a/doc/user-guide/section_cli_nova_config-drive.xml +++ b/doc/user-guide/section_cli_nova_config-drive.xml @@ -33,7 +33,7 @@ To use configuration drive with libvirt, xenserver, or vmware, you must first install the genisoimage package on each - Compute host. Otherwise, instances do not boot + compute host. Otherwise, instances do not boot properly. Use the mkisofs_cmd flag to diff --git a/doc/user-guide/section_dashboard_launch_instances_from_image.xml b/doc/user-guide/section_dashboard_launch_instances_from_image.xml index 1dfc05ce6a..074da85898 100644 --- a/doc/user-guide/section_dashboard_launch_instances_from_image.xml +++ b/doc/user-guide/section_dashboard_launch_instances_from_image.xml @@ -12,7 +12,7 @@ Launch an instance from an image When you launch an instance from an image, OpenStack creates - a local copy of the image on the Compute node where the + a local copy of the image on the compute node where the instance starts. @@ -134,7 +134,7 @@ Click Launch. The instance - starts on a Compute node in the cloud. + starts on a compute node in the cloud. The Instances category shows
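For a mechanical change like this, the substitution can be sketched as a sed one-liner. This is a rough sketch, not necessarily the command used to produce this patch, and matches would still need manual review: "Compute" stays capitalized where it names the OpenStack Compute service itself, as several hunks above show.

```shell
# Lowercase "Compute node"/"Compute host" in a sample string; across the
# docs tree one would apply the same expressions with `sed -i` to *.xml files
# and then inspect the resulting diff by hand.
echo "the Compute node and the Compute host" \
  | sed -e 's/Compute node/compute node/g' -e 's/Compute host/compute host/g'
# → the compute node and the compute host
```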