Lowercase compute node
It's "compute node", not "Compute node" (similarly compute host). Also, fix capitalization of "live migration". Change-Id: I57ac46b845e217c2607cf99dfabcfaab25d84ea5
parent 3e96e02a62
commit 39ac6cc258
@@ -2314,7 +2314,7 @@ HostC p2 5 10240 150
 ID]</literal>). The important changes
 to make are to change the
 <literal>DHCPSERVER</literal> value to
-the host ip address of the Compute host
+the host ip address of the compute host
 that is the VMs new home, and update the
 VNC IP if it isn't already
 <literal>0.0.0.0</literal>.</para>
@@ -74,7 +74,7 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
 <section xml:id="under_the_hood_openvswitch_scenario1_compute">
 
 <title>Scenario 1: Compute host config</title>
-<para>The following figure shows how to configure various Linux networking devices on the Compute host:</para>
+<para>The following figure shows how to configure various Linux networking devices on the compute host:</para>
 <mediaobject>
 <imageobject>
 <imagedata fileref="../common/figures/under-the-hood-scenario-1-ovs-compute.png" contentwidth="6in"/>
@@ -334,14 +334,14 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
 
 <section xml:id="under_the_hood_openvswitch_scenario2_compute">
 <title>Scenario 2: Compute host config</title>
-<para>The following figure shows how to configure Linux networking devices on the Compute host:
+<para>The following figure shows how to configure Linux networking devices on the compute host:
 </para>
 <mediaobject>
 <imageobject>
 <imagedata fileref="../common/figures/under-the-hood-scenario-2-ovs-compute.png" contentwidth="6in"/>
 </imageobject>
 </mediaobject>
-<note><para>The Compute host configuration resembles the
+<note><para>The compute host configuration resembles the
 configuration in scenario 1. However, in scenario 1, a
 guest connects to two subnets while in this scenario, the
 subnets belong to different tenants.
@@ -545,14 +545,14 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
 <section xml:id="under_the_hood_linuxbridge_scenario2_compute">
 <title>Scenario 2: Compute host config</title>
 <para>The following figure shows how the various Linux
-networking devices would be configured on the Compute host
+networking devices would be configured on the compute host
 under this scenario.</para>
 <mediaobject>
 <imageobject>
 <imagedata fileref="../common/figures/under-the-hood-scenario-2-linuxbridge-compute.png" contentwidth="6in"/>
 </imageobject>
 </mediaobject>
-<note><para>The configuration on the Compute host is very
+<note><para>The configuration on the compute host is very
 similar to the configuration in scenario 1. The only real
 difference is that scenario 1 had a guest connected to two
 subnets, and in this scenario the subnets belong to
@@ -61,7 +61,7 @@
 <td><emphasis role="bold">physical
 network</emphasis></td>
 <td>A network connecting virtualization hosts
-(such as, Compute nodes) with each other
+(such as compute nodes) with each other
 and with other network resources. Each
 physical network might support multiple
 virtual networks. The provider extension
@@ -818,7 +818,7 @@ password = "PLUMgrid-director-admin-password"</programlisting>
 <citetitle>Installation
 Guide</citetitle>.</para>
 <para>You can use the same configuration file
-for many Compute nodes by using a network
+for many compute nodes by using a network
 interface name with a different IP
 address:</para>
 <programlisting language="ini">openflow_rest_api = <ip-address>:<port-no> ovsdb_interface = <eth0> tunnel_interface = <eth0></programlisting>
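To make the reuse concrete, a sketch of what such a per-node agent configuration might look like; the IP address, port, and interface name below are illustrative assumptions, not values from the patched file:

openflow_rest_api = 192.168.10.1:8080
ovsdb_interface = eth0
tunnel_interface = eth0

Because the interface is named rather than given a fixed IP, the same file can be copied to every compute node.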
@@ -5,7 +5,7 @@
 <section xml:id="section_ts_failed_attach_vol_no_sysfsutils_problem">
 <title>Problem</title>
 <para>This warning and error occurs if you do not have the required
-<filename>sysfsutils</filename> package installed on the Compute node.</para>
+<filename>sysfsutils</filename> package installed on the compute node.</para>
 <programlisting>WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool is not installed
 ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin]
 [instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-477a-be9b-47c97626555c]
@@ -13,7 +13,7 @@ Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.</progr
 </section>
 <section xml:id="section_ts_failed_attach_vol_no_sysfsutils_solution">
 <title>Solution</title>
-<para>Run the following command on the Compute node to install the
+<para>Run the following command on the compute node to install the
 <filename>sysfsutils</filename> packages.</para>
 <para>
 <screen><prompt>$</prompt> <userinput>sudo apt-get install sysfsutils</userinput></screen>
@@ -5,7 +5,7 @@
 <section xml:id="section_ts_failed_connect_vol_FC_SAN_problem">
 <title>Problem</title>
 <para>Compute node failed to connect to a volume in a Fibre Channel (FC) SAN configuration.
-The WWN may not be zoned correctly in your FC SAN that links the Compute host to the
+The WWN may not be zoned correctly in your FC SAN that links the compute host to the
 storage array.</para>
 <programlisting>ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
 Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while attaching at /dev/vdjTRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
@@ -14,7 +14,6 @@ Traceback (most recent call last):…f07aa4c3d5f3\] ClientException: The server
 <section xml:id="section_ts_failed_connect_vol_FC_SAN_solution">
 <title>Solution</title>
 <para>The network administrator must configure the FC SAN fabric by correctly zoning the WWN
-(port names) from your Compute node HBAs.</para>
+(port names) from your compute node HBAs.</para>
 </section>
 </section>
-
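As an aside for readers of this section (not part of the change): on a Linux compute node, the HBA port names (WWNs) that need to be zoned can usually be read from sysfs:

$ cat /sys/class/fc_host/host*/port_name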
@@ -7,10 +7,10 @@
 <section xml:id="section_ts_multipath_warn_problem">
 <title>Problem</title>
 <para>Multipath call failed exit. This warning occurs in the Compute log if you do not have the
-optional <filename>multipath-tools</filename> package installed on the Compute node.
+optional <filename>multipath-tools</filename> package installed on the compute node.
 This is an optional package and the volume attachment does work without the multipath
 tools installed. If the <filename>multipath-tools</filename> package is installed on the
-Compute node, it is used to perform the volume attachment. The IDs in your message are
+compute node, it is used to perform the volume attachment. The IDs in your message are
 unique to your system.</para>
 <programlisting>WARNING nova.storage.linuxscsi [req-cac861e3-8b29-4143-8f1b-705d0084e571 admin
 admin|req-cac861e3-8b29-4143-8f1b-705d0084e571 admin admin] Multipath call failed exit
@@ -18,7 +18,7 @@
 </section>
 <section xml:id="section_ts_multipath_warn_solution">
 <title>Solution</title>
-<para>Run the following command on the Compute node to install the
+<para>Run the following command on the compute node to install the
 <filename>multipath-tools</filename> packages.</para>
 <para>
 <screen><prompt>$</prompt> <userinput>sudo apt-get install multipath-tools</userinput></screen>
@@ -12,7 +12,7 @@
 <filename>sg_scan</filename> file not found. This
 warning and error occur when the
 <package>sg3-utils</package> package is not installed
-on the Compute node. The IDs in your message are unique to
+on the compute node. The IDs in your message are unique to
 your system:</para>
 <screen><computeroutput>ERROR nova.compute.manager [req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin|req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin]
 [instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5]
@@ -22,7 +22,7 @@ Stdout: '/usr/local/bin/nova-rootwrap: Executable not found: /usr/bin/sg_scan</c
 </section>
 <section xml:id="section_ts_vol_attach_miss_sg_scan_solution">
 <title>Solution</title>
-<para>Run this command on the Compute node to install the
+<para>Run this command on the compute node to install the
 <package>sg3-utils</package> package:</para>
 <screen><prompt>$</prompt> <userinput>sudo apt-get install sg3-utils</userinput></screen>
 </section>
@@ -51,6 +51,6 @@
 <para>
 The <systemitem>iptables</systemitem> firewall
 now enables incoming connections to the Compute
-services. Repeat this process for each Compute node.
+services. Repeat this process for each compute node.
 </para>
 </section>
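For illustration only (the port range is an assumption about which service ports you expose, not something this change prescribes), such a firewall rule could resemble:

# example: allow incoming VNC console connections to this compute node
$ sudo iptables -I INPUT -p tcp --dport 5900:5999 -j ACCEPT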
@@ -4,7 +4,7 @@
 xml:id="fibrechannel">
 <title>Fibre Channel support in Compute</title>
 <para>Fibre Channel support in OpenStack Compute is remote block
-storage attached to Compute nodes for VMs.</para>
+storage attached to compute nodes for VMs.</para>
 <para>In the Grizzly release, Fibre Channel supported only the KVM
 hypervisor.</para>
 <para>Compute and Block Storage for Fibre Channel do not support automatic
@@ -144,11 +144,11 @@
 </section>
 <section xml:id="register-emc">
 <title>Register with VNX</title>
-<para>To export a VNX volume to a Compute node, you must
+<para>To export a VNX volume to a compute node, you must
 register the node with VNX.</para>
 <procedure>
 <title>Register the node</title>
-<step><para>On the Compute node <literal>1.1.1.1</literal>, do
+<step><para>On the compute node <literal>1.1.1.1</literal>, do
 the following (assume <literal>10.10.61.35</literal>
 is the iscsi target):</para>
 <screen><prompt>$</prompt> <userinput>sudo /etc/init.d/open-iscsi start</userinput>
@@ -156,12 +156,12 @@
 <prompt>$</prompt> <userinput>cd /etc/iscsi</userinput>
 <prompt>$</prompt> <userinput>sudo more initiatorname.iscsi</userinput>
 <prompt>$</prompt> <userinput>iscsiadm -m node</userinput></screen></step>
-<step><para>Log in to VNX from the Compute node using the target
+<step><para>Log in to VNX from the compute node using the target
 corresponding to the SPA port:</para>
 <screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l</userinput></screen>
 <para>Where
 <literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal>
-is the initiator name of the Compute node. Login to
+is the initiator name of the compute node. Login to
 Unisphere, go to
 <literal>VNX00000</literal>->Hosts->Initiators,
 Refresh and wait until initiator
@@ -173,10 +173,10 @@
 IP address <literal>myhost1</literal>. Click <guibutton>Register</guibutton>.
 Now host <literal>1.1.1.1</literal> also appears under
 Hosts->Host List.</para></step>
-<step><para>Log out of VNX on the Compute node:</para>
+<step><para>Log out of VNX on the compute node:</para>
 <screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen></step>
 <step>
-<para>Log in to VNX from the Compute node using the target
+<para>Log in to VNX from the compute node using the target
 corresponding to the SPB port:</para>
 <screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput></screen>
 </step>
@@ -247,9 +247,13 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
 </listitem>
 </itemizedlist>
 <note>
-<para>To attach VMAX volumes to an OpenStack VM, you must create a Masking View by
-using Unisphere for VMAX. The Masking View must have an Initiator Group that
-contains the initiator of the OpenStack Compute node that hosts the VM.</para>
+<para>
+To attach VMAX volumes to an OpenStack VM, you must
+create a Masking View by using Unisphere for
+VMAX. The Masking View must have an Initiator Group
+that contains the initiator of the OpenStack compute
+node that hosts the VM.
+</para>
 </note>
 </section>
 </section>
@@ -535,7 +535,7 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d
 </td>
 <td>
 <para>IP address of the iSCSI port provided
-for Compute nodes.</para>
+for compute nodes.</para>
 </td>
 </tr>
 <tr>
@@ -544,7 +544,7 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d
 <td>
 <para>Linux</para>
 </td>
-<td>The OS type for a Compute node.</td>
+<td>The OS type for a compute node.</td>
 </tr>
 <tr>
 <td><option>HostIP</option></td>
@@ -552,7 +552,7 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d
 <td>
 <para/>
 </td>
-<td>The IPs for Compute nodes.</td>
+<td>The IPs for compute nodes.</td>
 </tr>
 </tbody>
 </table>
@@ -560,9 +560,9 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d
 <orderedlist>
 <listitem>
 <para>You can configure one iSCSI target port for
-each or all Compute nodes. The driver checks
+each or all compute nodes. The driver checks
 whether a target port IP address is configured
-for the current Compute node. If not, select
+for the current compute node. If not, select
 <option>DefaultTargetIP</option>.</para>
 </listitem>
 <listitem>
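A minimal, hypothetical sketch of how these options might sit in the driver's XML configuration file; the element layout and addresses are illustrative assumptions, not taken from this change:

<iSCSI>
    <DefaultTargetIP>192.168.100.1</DefaultTargetIP>
</iSCSI>
<Host OSType="Linux" HostIP="192.168.100.21,192.168.100.22"/>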
@@ -38,7 +38,7 @@
 </itemizedlist>
 <note>
 <para>You can use a XenServer as a storage controller and
-Compute node at the same time. This minimal
+compute node at the same time. This minimal
 configuration consists of a XenServer/XCP box and an
 NFS share.</para>
 </note>
@@ -91,7 +91,7 @@
 they can be used as the root store to boot
 instances. Volumes are persistent R/W block
 storage devices most commonly attached to the
-Compute node through iSCSI.</para>
+compute node through iSCSI.</para>
 </listitem>
 <listitem>
 <para><emphasis role="bold">Snapshots</emphasis>. A read-only point in time copy
@@ -25,7 +25,7 @@
 <section xml:id="installation-architecture-hyper-v">
 <title>Hyper-V configuration</title>
 <para>The following sections discuss how to prepare the Windows Hyper-V node for operation
-as an OpenStack Compute node. Unless stated otherwise, any configuration information
+as an OpenStack compute node. Unless stated otherwise, any configuration information
 should work for both the Windows 2008r2 and 2012 platforms.</para>
 <para><emphasis role="bold">Local Storage Considerations</emphasis></para>
 <para>The Hyper-V compute node needs to have ample storage for storing the virtual machine
@@ -2987,7 +2987,7 @@ Each entry in a typical ACL specifies a subject and an operation. For instance,
 <glossentry>
 <glossterm>network node</glossterm>
 <glossdef>
-<para>Any Compute node that runs the network worker
+<para>Any compute node that runs the network worker
 daemon.</para>
 </glossdef>
 </glossentry>
@@ -11,7 +11,7 @@
 details how to install the agent that runs on the compute
 node.</para>
 <step>
-<para>Install the Telemetry service on the Compute node:</para>
+<para>Install the Telemetry service on the compute node:</para>
 <screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install ceilometer-agent-compute</userinput></screen>
 <screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-ceilometer-compute</userinput></screen>
 <screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-ceilometer-agent-compute</userinput></screen>
@@ -2,10 +2,10 @@
 xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="nova-compute">
-<title>Configure a Compute node</title>
+<title>Configure a compute node</title>
 <para>After you configure the Compute service on the controller
-node, you must configure another system as a Compute node. The
-Compute node receives requests from the controller node and hosts
+node, you must configure another system as a compute node. The
+compute node receives requests from the controller node and hosts
 virtual machine instances. You can run all services on a single
 node, but the examples in this guide use separate systems. This
 makes it easy to scale horizontally by adding additional Compute
@@ -2,7 +2,7 @@
 xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="nova-kvm">
-<title>Enable KVM on the Compute node</title>
+<title>Enable KVM on the compute node</title>
 
 <para>OpenStack Compute requires hardware virtualization support
 and certain kernel modules. Use the following procedure to
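For reference, an illustrative sketch of the usual checks on such a compute node (common practice, not quoted from the patched file):

$ egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero output means hardware virtualization is available
$ sudo modprobe kvm-intel              # use kvm-amd on AMD systems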
@@ -151,7 +151,16 @@ gpg --verify SHA256SUMS.gpg SHA256SUMSsha256sum -c SHA256SUMS 2>&1 | grep
 </section>
 <section xml:id="ch055_security-services-for-instances-idp170576">
 <title>Instance Migrations</title>
-<para>OpenStack and the underlying virtualization layers provide for the Live Migration of images between OpenStack nodes allowing you to seamlessly perform rolling upgrades of your OpenStack Compute nodes without instance downtime. However, Live Migrations also come with their fair share of risk. To understand the risks involved, it is important to first understand how a live migration works. The following are the high level steps preformed during a live migration.</para>
+<para>
+OpenStack and the underlying virtualization layers provide for
+the live migration of images between OpenStack nodes allowing
+you to seamlessly perform rolling upgrades of your OpenStack
+compute nodes without instance downtime. However, live
+migrations also come with their fair share of risk. To
+understand the risks involved, it is important to first
+understand how a live migration works. The following are the
+high level steps preformed during a live migration.
+</para>
 <orderedlist>
 <listitem><para>Start instance on destination host</para> </listitem>
 <listitem><para>Transfer memory</para> </listitem>
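For context, a live migration is typically initiated with the nova client; an illustrative invocation (the instance and host names are placeholders):

$ nova live-migration INSTANCE_NAME DESTINATION_HOST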
@@ -164,7 +164,7 @@
 <para>The following diagram shows the system state prior to
 launching an instance. The image store fronted by the Image
 Service has some number of predefined images. In the
-cloud, there is an available Compute node with available vCPU,
+cloud, there is an available compute node with available vCPU,
 memory and local disk resources. Plus there are a number of
 predefined volumes in the
 <systemitem class="service">cinder-volume</systemitem> service.
@@ -165,7 +165,7 @@
 <para>Volumes are allocated block storage resources that can be
 attached to instances as secondary storage or they can be used as
 the root store to boot instances. Volumes are persistent R/W Block
-Storage devices most commonly attached to the Compute node via
+Storage devices most commonly attached to the compute node via
 iSCSI.</para>
 <para><guilabel>Snapshots</guilabel></para>
 <para>A Snapshot in OpenStack Block Storage is a read-only point in
@@ -22,7 +22,7 @@
 the OpenStack Compute Service, the OpenStack Block Storage Service,
 and the OpenStack Networking Service.</para>
 <para>Typically, default values are changed because a tenant
-requires more than 10 volumes, or more than 1TB on a Compute node.</para>
+requires more than 10 volumes, or more than 1TB on a compute node.</para>
 <note>
 <para>To view all tenants (projects), run:
 <screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput>
@@ -18,7 +18,7 @@
 cloud resources are optimized. Quotas can be enforced at both the tenant
 (or project) and the tenant-user level.</para>
 <para>Typically, you change quotas when a project needs more than 10
-volumes or 1 TB on a Compute node.</para>
+volumes or 1 TB on a compute node.</para>
 <para>Using the Dashboard, you can view default Compute and Block Storage
 quotas for new tenants, as well as update quotas for existing tenants.</para>
 <note>
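For context, an illustrative way to raise such Block Storage quotas from the command line (the tenant ID and values are placeholders):

$ cinder quota-update --volumes 20 --gigabytes 2048 TENANT_ID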
@@ -33,7 +33,7 @@
 <para>To use configuration drive with libvirt,
 xenserver, or vmware, you must first install the
 <package>genisoimage</package> package on each
-Compute host. Otherwise, instances do not boot
+compute host. Otherwise, instances do not boot
 properly.</para>
 
 <para>Use the <literal>mkisofs_cmd</literal> flag to
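A minimal sketch of that flag in nova.conf (the path is an assumption; point it at wherever genisoimage is installed on the compute host):

mkisofs_cmd = /usr/bin/genisoimage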
@@ -12,7 +12,7 @@
 <title>Launch an instance from an image</title>
 <?dbhtml stop-chunking?>
 <para>When you launch an instance from an image, OpenStack creates
-a local copy of the image on the Compute node where the
+a local copy of the image on the compute node where the
 instance starts.</para>
 <procedure>
 <step>
@@ -134,7 +134,7 @@
 </step>
 <step>
 <para>Click <guibutton>Launch</guibutton>. The instance
-starts on a Compute node in the cloud.</para>
+starts on a compute node in the cloud.</para>
 </step>
 <step>
 <para>The <guilabel>Instances</guilabel> category shows