Merge "Change command line to command-line"

This commit is contained in:
Jenkins 2014-02-27 05:36:41 +00:00 committed by Gerrit Code Review
commit 21ddb7fefa
10 changed files with 149 additions and 94 deletions

View File

@ -382,25 +382,25 @@
xlink:href="http://docs.openstack.org/developer/python-glanceclient/"
> Python API</link>.</para>
<para>The OpenStack Image service can be controlled using a
command line tool. For more information about the
OpenStack Image command line tool, see the <link
command-line tool. For more information about the
OpenStack Image command-line tool, see the <link
xlink:href="http://docs.openstack.org/user-guide/content/cli_manage_images.html"
> Image Management</link> section in the
<citetitle>OpenStack User Guide</citetitle>.</para>
<para>Virtual images that have been made available through the
Image service can be stored in a variety of ways. In order
to use these services, you must have a working
installation of the Image service, with a working
installation of the Image Service, with a working
endpoint, and users that have been created in the Identity
service. Additionally, you must meet the environment
Service. Additionally, you must meet the environment
variables required by the Compute and Image
clients.</para>
<para>The Image service supports these back end stores:</para>
<para>The Image Service supports these back end stores:</para>
<variablelist>
<varlistentry>
<term>File system</term>
<listitem>
<para>The OpenStack Image service stores virtual
<para>The OpenStack Image Service stores virtual
machine images in the file system back-end by
default. This simple back end writes image
files to the local file system.</para>
@ -446,14 +446,14 @@
<xi:include href="image/section_glance-property-protection.xml"/>
<section xml:id="section_instance-mgmt">
<title>Instance management tools</title>
<para>OpenStack provides command line, web-based, and
<para>OpenStack provides command-line, web-based, and
API-based instance management tools. Additionally, a
number of third party management tools are available,
number of third-party management tools are available,
using either the native API or the provided EC2-compatible
API.</para>
<para>The OpenStack
<application>python-novaclient</application> package
provides a basic command line utility, which uses the
provides a basic command-line utility, which uses the
<command>nova</command> command. This is available as
a native package for most Linux distributions, or you can
install the latest version using the
@ -463,7 +463,7 @@
</para>
<para>For more information about
<application>python-novaclient</application> and other
available command line tools, see the <link
available command-line tools, see the <link
xlink:href="http://docs.openstack.org/user-guide/content/index.html">
<citetitle>OpenStack End User
Guide</citetitle></link>.</para>
@ -493,7 +493,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
Reference</citetitle> lists configuration options for
customizing this compatibility API on your OpenStack
cloud.</para>
<para>Numerous third party tools and language-specific SDKs
<para>Numerous third-party tools and language-specific SDKs
can be used to interact with OpenStack clouds, using both
native and compatibility APIs. Some of the more popular
third-party tools are:</para>
@ -501,7 +501,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
<varlistentry>
<term>Euca2ools</term>
<listitem>
<para>A popular open source command line tool for
<para>A popular open source command-line tool for
interacting with the EC2 API. This is
convenient for multi-cloud environments where
EC2 is the common API, or for transitioning
@ -1948,7 +1948,7 @@ net.bridge.bridge-nf-call-ip6tables=0</programlisting>
<simplesect>
<title>Use the euca2ools commands</title>
<para>For a command-line interface to EC2 API calls,
use the euca2ools command line tool. See <link
use the euca2ools command-line tool. See <link
xlink:href="http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3"
>http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3</link></para>
</simplesect>

View File

@ -8,7 +8,7 @@
<title>Logging settings</title>
<para>Networking components use the Python logging module for
logging. Logging configuration can be provided in
<filename>neutron.conf</filename> or as command line
<filename>neutron.conf</filename> or as command-line
options. Command-line options override those set in
<filename>neutron.conf</filename>.</para>
<para>To configure logging for Networking components, use one
@ -77,7 +77,7 @@ notification_driver = neutron.openstack.common.notifier.rpc_notifier
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# Defined in rpc_notifier for rpc way, can be comma-separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications</programlisting>
</section>

View File

@ -106,7 +106,7 @@ CACHES = {
the scope of this documentation.</para>
<procedure>
<step>
<para>Start the mysql command line client:</para>
<para>Start the mysql command-line client:</para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen>
</step>
<step>
@ -196,7 +196,8 @@ No fixtures found.</computeroutput></screen>
<section xml:id="dashboard-session-cached-database">
<title>Cached database</title>
<para>To mitigate the performance issues of database queries,
you can use the Django cached_db session back end, which
you can use the Django <command>cached_db</command> session
back end, which
utilizes both your database and caching infrastructure to
perform write-through caching and efficient retrieval.</para>
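A minimal sketch of what the hybrid setting can look like in the dashboard's <filename>local_settings.py</filename> (the memcached location is an illustrative placeholder):

```python
# local_settings.py -- illustrative sketch; the cache LOCATION is a placeholder
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
```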
<para>Enable this hybrid setting by configuring both your
@ -206,7 +207,7 @@ No fixtures found.</computeroutput></screen>
</section>
<section xml:id="dashboard-session-cookies">
<title>Cookies</title>
<para>If you use Django 1.4 or later, the signed_cookies
<para>If you use Django 1.4 or later, the <command>signed_cookies</command>
back end avoids server load and scaling problems.</para>
<para>This back end stores session data in a cookie, which is
stored by the user's browser. The back end uses a

View File

@ -83,16 +83,18 @@
<para>The following outlines the steps of shared nothing live migration.</para>
<orderedlist>
<listitem>
<para>The target hosts ensures that live migration is enabled and properly
configured in Hyper-V.</para>
<para>The target host ensures that live migration is
enabled and properly configured in Hyper-V.</para>
</listitem>
<listitem>
<para>The target hosts checks if the image to be migrated requires a base VHD and
pulls it from Glance if not already available on the target host.</para>
<para>The target host checks whether the image to be
migrated requires a base VHD and pulls it from the
Image Service if not already available on the target
host.</para>
</listitem>
<listitem>
<para>The source hosts ensures that live migration is enabled and properly
configured in Hyper-V.</para>
<para>The source host ensures that live migration is
enabled and properly configured in Hyper-V.</para>
</listitem>
<listitem>
<para>The source host initiates a Hyper-V live migration.</para>
@ -132,18 +134,21 @@
members</para>
</listitem>
<listitem>
<para>The instances_path command line option/flag needs to be the same on all
hosts</para>
<para>The instances_path command-line option/flag needs to be the same on all
hosts.</para>
</listitem>
<listitem>
<para>The openstack-compute service deployed with the setup must run with domain
credentials. You can set the service credentials with:</para>
<para>The <systemitem
class="service">openstack-compute</systemitem> service
deployed with the setup must run with domain
credentials. You can set the service credentials
with:</para>
<screen>
<prompt>C:\</prompt><userinput>sc config openstack-compute obj="DOMAIN\username" password="password"</userinput></screen>
</listitem>
</itemizedlist>
<para><emphasis role="bold">How to set up live migration on Hyper-V</emphasis></para>
<para>To enable shared nothing live migration run the 3 PowerShell instructions below on
<para>To enable 'shared nothing' live migration, run the three PowerShell instructions below on
each Hyper-V host:</para>
<screen>
<prompt>PS C:\</prompt><userinput>Enable-VMMigration</userinput>

View File

@ -163,7 +163,7 @@
<title>Health check</title>
<para>Provides an easy way to monitor whether the swift proxy
server is alive. If you access the proxy with the path
<filename>/healthcheck</filename>, it respond
<filename>/healthcheck</filename>, it responds with
<literal>OK</literal> in the response body, which
monitoring tools can use.</para>
<xi:include
@ -183,7 +183,7 @@
<title>CNAME lookup</title>
<para>Middleware that translates an unknown domain in the host
header to something that ends with the configured
storage_domain by looking up the given domain's CNAME
<code>storage_domain</code> by looking up the given domain's CNAME
record in DNS.</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-cname_lookup.xml"
@ -215,7 +215,7 @@
<varlistentry>
<term><literal>temp_url_expires</literal></term>
<listitem>
<para>An expiration date, in Unix time.</para>
<para>An expiration date, in Unix time</para>
</listitem>
</varlistentry>
</variablelist></para>
@ -245,7 +245,7 @@
<para>The expiry date as a Unix timestamp</para>
</listitem>
<listitem>
<para>the full path to the object</para>
<para>The full path to the object</para>
</listitem>
<listitem>
<para>The secret key set as the
@ -275,20 +275,21 @@ url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=exp
PUT is allowed. Using this in combination with browser
form post translation middleware could also allow
direct-from-browser uploads to specific locations in
Swift. Note that <note>
Object Storage.</para>
<note>
<para>Changing the
<literal>X-Account-Meta-Temp-URL-Key</literal>
invalidates any previously generated temporary
URLs within 60 seconds (the memcache time for the
key). Swift supports up to two keys, specified by
key). Object Storage supports up to two keys, specified by
<literal>X-Account-Meta-Temp-URL-Key</literal>
and
<literal>X-Account-Meta-Temp-URL-Key-2</literal>.
Signatures are checked against both keys, if
present. This is to allow for key rotation without
invalidating all existing temporary URLs.</para>
</note></para>
<para>Swift includes a script called
</note>
<para>Object Storage includes a script called
<command>swift-temp-url</command> that generates the
query parameters automatically:</para>
<screen><prompt>$</prompt> <userinput>bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey</userinput>
@ -296,7 +297,7 @@ url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=exp
temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&amp;
temp_url_expires=1374497657</computeroutput></screen>
<para>Because this command only returns the path, you must
prefix the Swift storage host name (for example,
prefix the Object Storage host name (for example,
<literal>https://swift-cluster.example.com</literal>).</para>
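The signature that <command>swift-temp-url</command> generates can also be computed by hand with Python's <code>hmac</code> module; a sketch, using the example method, expiry, path, and key from above:

```python
import hmac
from hashlib import sha1

def temp_url_sig(method, expires, path, key):
    # HMAC-SHA1 over "METHOD\nEXPIRES\nPATH", hex-encoded
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()

sig = temp_url_sig('GET', 1374497657,
                   '/v1/AUTH_account/container/object', 'mykey')
```

The resulting hex digest is used as the `temp_url_sig` query parameter, with the expiry as `temp_url_expires`.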
<para>With GET Temporary URLs, a
<literal>Content-Disposition</literal> header is set
@ -370,7 +371,8 @@ pipeline = pipeline = healthcheck cache <emphasis role="bold">tempurl</emphasis>
create a new account solely for this usage. Next, you need
to place the containers and objects throughout the system
so that they are on distinct partitions. The
swift-dispersion-populate tool does this by making up
<command>swift-dispersion-populate</command> tool does this
by making up
random container and object names until they fall on
distinct partitions. Last, and repeatedly for the life of
the cluster, you must run the
@ -445,7 +447,7 @@ Sample represents 1.00% of the object partition space
</programlisting>
<para>Alternatively, the dispersion report can also be output
in JSON format. This allows it to be more easily consumed
by third party utilities:</para>
by third-party utilities:</para>
<screen><prompt>$</prompt> <userinput>swift-dispersion-report -j</userinput>
<computeroutput>{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0,
"copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container":
@ -463,7 +465,7 @@ Sample represents 1.00% of the object partition space
objects concurrently and afterwards download them as a
single object. It is different in that it does not rely on
eventually consistent container listings to do so.
Instead, a user defined manifest of the object segments is
Instead, a user-defined manifest of the object segments is
used.</para>
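The user-defined manifest is a JSON list of segment descriptors; a sketch, where the segment paths, etags, and sizes are illustrative placeholders:

```python
import json

# Illustrative static large object manifest: each entry describes a
# previously uploaded segment object.
manifest = [
    {"path": "/container/seg_01",
     "etag": "d41d8cd98f00b204e9800998ecf8427e",
     "size_bytes": 1048576},
    {"path": "/container/seg_02",
     "etag": "d41d8cd98f00b204e9800998ecf8427e",
     "size_bytes": 524288},
]
body = json.dumps(manifest)
```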
<xi:include
href="../../common/tables/swift-proxy-server-filter-slo.xml"
@ -471,7 +473,8 @@ Sample represents 1.00% of the object partition space
</section>
<section xml:id="object-storage-container-quotas">
<title>Container quotas</title>
<para>The container_quotas middleware implements simple quotas
<para>The <code>container_quotas</code> middleware
implements simple quotas
that can be imposed on Object Storage containers by a user with the
ability to set container metadata, most likely the account
administrator. This can be useful for limiting the scope
@ -509,11 +512,13 @@ Sample represents 1.00% of the object partition space
blocks write requests (PUT, POST) if a given
account quota (in bytes) is exceeded, while DELETE requests
are still allowed.</para>
<para>The x-account-meta-quota-bytes metadata entry must be
<para>The <parameter>x-account-meta-quota-bytes</parameter>
metadata entry must be
set to store and enable the quota. Write requests to this
metadata entry are only permitted for resellers. There is
no account quota limitation on a reseller account even if
x-account-meta-quota-bytes is set.</para>
<parameter>x-account-meta-quota-bytes</parameter> is set.
</para>
<para>Any object PUT operations that exceed the quota return a
413 response (request entity too large) with a descriptive
body.</para>
@ -536,21 +541,23 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
</section>
<section xml:id="object-storage-bulk-delete">
<title>Bulk delete</title>
<para>Use bulk-delete to delete multiple files from an account
<para>Use <code>bulk-delete</code> to delete multiple files
from an account
with a single request. Responds to DELETE requests with a
header 'X-Bulk-Delete: true_value'. The body of the DELETE
request is a new line separated list of files to delete.
request is a newline-separated list of files to delete.
The files listed must be URL-encoded and in the
form:</para>
<programlisting>
/container_name/obj_name
</programlisting>
<para>If all files are successfully deleted (or did not
exist), the operation returns HTTPOk. If any files failed
to delete, the operation returns HTTPBadGateway. In both
cases the response body is a JSON dictionary that shows
the number of files that were successfully deleted or not
found. The files that failed are listed.</para>
exist), the operation returns <code>HTTPOk</code>. If any
files failed to delete, the operation returns
<code>HTTPBadGateway</code>. In both cases, the response body
is a JSON dictionary that shows the number of files that were
successfully deleted or not found. The files that failed are
listed.</para>
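Building the request body can be sketched in Python; the container and object names below are illustrative:

```python
from urllib.parse import quote

def bulk_delete_body(paths):
    # Newline-separated list of URL-encoded /container/object paths
    return '\n'.join(quote(p) for p in paths)

body = bulk_delete_body(['/container_name/obj one',
                         '/container_name/obj_two'])
```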
<xi:include
href="../../common/tables/swift-proxy-server-filter-bulk.xml"
/>
@ -590,7 +597,7 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
The name of each file uploaded is appended to the
specified <literal>swift-url</literal>. So, you can upload
directly to the root of container with a url like:
directly to the root of container with a URL like:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>
Optionally, you can include an object prefix to better
separate different users' uploads, such as:
@ -641,7 +648,7 @@ signature = hmac.new(key, hmac_body, sha1).hexdigest()
on the account.</para>
<para>Be certain to use the full path, from the
<literal>/v1/</literal> onward.</para>
<para>The command line tool
<para>The command-line tool
<command>swift-form-signature</command> may be used
(mostly just when testing) to compute expires and
signature.</para>
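What <command>swift-form-signature</command> computes can be sketched with the <code>hmac</code> module; the path, redirect URL, limits, expiry, and key below are placeholders:

```python
import hmac
from hashlib import sha1

def form_signature(path, redirect, max_file_size, max_file_count,
                   expires, key):
    # HMAC-SHA1 over the newline-joined form parameters, hex-encoded
    hmac_body = '%s\n%s\n%s\n%s\n%s' % (
        path, redirect, max_file_size, max_file_count, expires)
    return hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()

sig = form_signature('/v1/AUTH_account/container/object_prefix',
                     'https://example.com/done', 104857600, 10,
                     1374497657, 'mykey')
```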

View File

@ -20,12 +20,16 @@
Configuration for servers and daemons can be expressed together in
the same file for each type of server, or separately. If a required
section for the service trying to start is missing there will be an
error. The sections not used by the service are ignored.
error. Sections not used by the service are ignored.
</para>
<para>
Consider the example of an object storage node. By convention
configuration for the object-server, object-updater,
object-replicator, and object-auditor exist in a single file
Consider the example of an Object Storage node. By convention
configuration for the <systemitem
class="service">object-server</systemitem>, <systemitem
class="service">object-updater</systemitem>, <systemitem
class="service">object-replicator</systemitem>, and
<systemitem class="service">object-auditor</systemitem> exist
in a single file
<filename>/etc/swift/object-server.conf</filename>:
</para>
<programlisting language="ini">
@ -53,7 +57,7 @@ reclaim_age = 259200
Error: missing config path argument
</computeroutput></screen>
<para>
If you omit the object-auditor section this file can not be used
If you omit the object-auditor section, this file cannot be used
as the configuration path when starting the
<command>swift-object-auditor</command> daemon:
</para>
@ -65,10 +69,10 @@ Error: missing config path argument
the files in the directory with the file extension &quot;.conf&quot;
will be combined to generate the configuration object which is
delivered to the Object Storage service. This is referred to generally as
&quot;directory based configuration&quot;.
&quot;directory-based configuration&quot;.
</para>
<para>
Directory based configuration leverages ConfigParser's native
Directory-based configuration leverages ConfigParser's native
multi-file support. Files ending in &quot;.conf&quot; in the given
directory are parsed in lexicographical order. File names starting
with '.' are ignored. A mixture of file and directory configuration
@ -84,7 +88,7 @@ Error: missing config path argument
exist.
</para>
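The lexicographical, last-value-wins behavior can be sketched with Python's own configparser (the directory layout and file names here are illustrative):

```python
import configparser
import glob
import os

def load_conf_dir(path):
    # Parse *.conf files in lexicographical order, skipping names that
    # start with '.'; for a duplicated option, the last file parsed wins.
    files = sorted(f for f in glob.glob(os.path.join(path, '*.conf'))
                   if not os.path.basename(f).startswith('.'))
    parser = configparser.ConfigParser()
    parser.read(files)
    return parser
```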
<para>
When using directory based configuration, if the same option under
When using directory-based configuration, if the same option under
the same section appears more than once in different files, the last
value parsed is said to override previous occurrences. You can
ensure proper override precedence by prefixing the files in the
@ -104,6 +108,6 @@ Error: missing config path argument
</programlisting>
<para>
You can inspect the resulting combined configuration object using
the <command>swift-config</command> command line tool.
the <command>swift-config</command> command-line tool.
</para>
</section>

View File

@ -1771,7 +1771,7 @@ Each entry in a typical ACL specifies a subject and an operation. For instance,
<glossentry>
<glossterm>euca2ools</glossterm>
<glossdef>
<para>A collection of command line tools for
<para>A collection of command-line tools for
administering VMs; most are compatible with
OpenStack.</para>
</glossdef>
@ -2016,7 +2016,7 @@ Each entry in a typical ACL specifies a subject and an operation. For instance,
<glossdef>
<para>The point where a user interacts with a service;
it can be an API endpoint, the horizon dashboard, or
a command line tool.</para>
a command-line tool.</para>
</glossdef>
</glossentry>
</glossdiv>

View File

@ -1,10 +1,19 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://docbook.org/ns/docbook" xmlns:db="http://docbook.org/ns/docbook" version="5.0" xml:id="ch014_best-practices-for-operator-mode-access"><?dbhtml stop-chunking?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://docbook.org/ns/docbook"
xmlns:db="http://docbook.org/ns/docbook"
version="5.0"
xml:id="ch014_best-practices-for-operator-mode-access">
<?dbhtml stop-chunking?>
<title>Management Interfaces</title>
<para>It is necessary for administrators to perform command and control over the cloud for various operational functions. It is important these command and control facilities are understood and secured.</para>
<para>It is necessary for administrators to perform command and
control over the cloud for various operational functions. It is
important that these command and control facilities are understood
and secured.</para>
<para>OpenStack provides several management interfaces for operators and tenants:</para>
<itemizedlist><listitem>
<para>OpenStack Dashboard (Horizon)</para>
<para>OpenStack dashboard (Horizon)</para>
</listitem>
<listitem>
<para>OpenStack API</para>
@ -13,7 +22,9 @@
<para>Secure Shell (SSH)</para>
</listitem>
<listitem>
<para>OpenStack Management Utilities (nova-manage, glance-manage, etc.)</para>
<para>OpenStack Management Utilities (for example,
<systemitem class="service">nova-manage</systemitem>,
<systemitem class="service">glance-manage</systemitem>)</para>
</listitem>
<listitem>
<para>Out-of-Band Management Interfaces (IPMI, etc.)</para>
@ -21,7 +32,7 @@
</itemizedlist>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp49280">
<title>Dashboard</title>
<para>The OpenStack Dashboard (Horizon) provides administrators and tenants a web-based graphical interface to provision and access cloud-based resources. The dashboard communicates with the back-end services via calls to the OpenStack API (discussed above).</para>
<para>The OpenStack dashboard (Horizon) provides administrators and tenants a web-based graphical interface to provision and access cloud-based resources. The dashboard communicates with the back-end services via calls to the OpenStack API (discussed above).</para>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp50608">
<title>Capabilities</title>
<itemizedlist><listitem>
@ -53,7 +64,7 @@
<para>Both the Horizon web service and the OpenStack API it uses to communicate with the back-end are susceptible to web attack vectors such as denial of service and must be monitored.</para>
</listitem>
<listitem>
<para>It is now possible (though there are numerous deployment/security implications) to upload an image file directly from a users hard disk to Glance through Horizon. For multi-GB images it is still strongly recommended that the upload be done using the Glance CLI</para>
<para>It is now possible (though there are numerous deployment/security implications) to upload an image file directly from a user's hard disk to the OpenStack Image Service through the dashboard. For multi-GB images, it is still strongly recommended that the upload be done using the Glance CLI.</para>
</listitem>
<listitem>
<para>Create and manage security groups through the dashboard. Security groups allow L3-L4 packet filtering for security policies to protect virtual machines.</para>
@ -67,11 +78,20 @@
</section>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp63760">
<title>OpenStack API</title>
<para>The OpenStack API is a RESTful web service endpoint to access, provision and automate cloud-based resources.  Operators and users typically access the API through command-line utilities (i.e. Nova, Glance, etc.), language-specific libraries, or third-party tools.</para>
<para>The OpenStack API is a RESTful web service endpoint to
access, provision and automate cloud-based resources. Operators
and users typically access the API through command-line
utilities (for example, <command>nova</command> or
<command>glance</command>), language-specific libraries, or
third-party tools.</para>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp65328">
<title>Capabilities</title>
<itemizedlist><listitem>
<para>To the cloud administrator the API provides an overall view of the size and state of the cloud deployment and allows the creation of users, tenants/projects, assigning users to tenants/projects and specifying resource quotas on a per tenant/project basis.</para>
<para>To the cloud administrator, the API provides an
overall view of the size and state of the cloud deployment
and allows the creation of users, tenants/projects,
assigning users to tenants/projects, and specifying
resource quotas on a per tenant/project basis.</para>
</listitem>
<listitem>
<para>The API provides a tenant interface for provisioning, managing, and accessing their resources.</para>
@ -104,9 +124,11 @@
<title>Management Utilities</title>
<para>The OpenStack Management Utilities are open-source Python
command-line clients that make API calls. There is a client for
each OpenStack service (nova, glance, etc.). In addition to the
each OpenStack service (for example, <systemitem
class="service">nova</systemitem>, <systemitem
class="service">glance</systemitem>). In addition to the
standard CLI client, most of the services have a management
command line which makes direct calls to the database. These
command-line utility which makes direct calls to the database. These
dedicated management utilities are slowly being
deprecated.</para>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp77728">
@ -121,13 +143,18 @@
</section>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp80496">
<title>References</title>
<para><citetitle>OpenStack End User Guide</citetitle> section <link xlink:href="http://docs.openstack.org/user-guide/content/section_cli_overview.html">command line clients overview</link></para>
<para><citetitle>OpenStack End User Guide</citetitle> section <link xlink:href="http://docs.openstack.org/user-guide/content/section_cli_overview.html">command-line clients overview</link></para>
<para><citetitle>OpenStack End User Guide</citetitle> section <link xlink:href="http://docs.openstack.org/user-guide/content/cli_openrc.html">Download and source the OpenStack RC file</link></para>
</section>
</section>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp82336">
<title>Out-of-Band Management Interface</title>
<para>OpenStack management relies on out-of-band management interfaces such as the IPMI protocol to access into nodes running OpenStack components. IPMI is a very popular specification to remotely manage, diagnose and reboot servers whether the operating system is running or the system has crashed.</para>
<para>OpenStack management relies on out-of-band management
interfaces such as the IPMI protocol to access nodes
running OpenStack components. IPMI is a very popular
specification to remotely manage, diagnose, and reboot servers
whether the operating system is running or the system has
crashed.</para>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp83712">
<title>Security Considerations</title>
<itemizedlist><listitem>

View File

@ -19,7 +19,13 @@
<para>The OpenStack Dashboard (Horizon) can provide a VNC console for instances directly on the web page using the HTML5 noVNC client.  This requires the <systemitem class="service">nova-novncproxy</systemitem> service to bridge from the public network to the management network.</para>
</listitem>
<listitem>
<para>The nova command line utility can return a URL for the VNC console for access by the nova Java VNC client. This requires the nova-xvpvncproxy service to bridge from the public network to the management network.</para>
<para>The <command>nova</command> command-line utility can
return a URL for the VNC console for access by the
<systemitem class="service">nova</systemitem> Java VNC
client. This requires the <systemitem
class="service">nova-xvpvncproxy</systemitem> service to
bridge from the public network to the management
network.</para>
</listitem>
</itemizedlist>
</section>
@ -46,7 +52,7 @@
<para>SPICE is supported by the OpenStack Dashboard (Horizon) directly on the instance web page.  This requires the nova-spicehtml5proxy service.</para>
</listitem>
<listitem>
<para>The nova command line utility can return a URL for SPICE console for access by a SPICE-html client.</para>
<para>The nova command-line utility can return a URL for the SPICE console for access by a SPICE-HTML5 client.</para>
</listitem>
</itemizedlist>
</section>

View File

@ -77,7 +77,8 @@
Roles control the actions that a user is allowed to perform. In
the default configuration, most actions do not require a
particular role, but this is configurable by the system
administrator editing the appropriate policy.json file that
administrator editing the appropriate <filename>policy.json</filename>
file that
maintains the rules. For example, a rule can be defined so that
a user cannot allocate a public IP without the admin role. A
user's access to particular images is limited by tenant, but the
@ -127,16 +128,16 @@
typical virtual system within the cloud. There are many ways to
configure the details of an OpenStack cloud and many ways to
implement a virtual system within that cloud. These
configuration details as well as the specific command line
configuration details as well as the specific command-line
utilities and API calls to perform the actions described are
presented in the Image Managementand Volume
Managementchapters.</para>
presented in the Image Management and Volume
Management chapters.</para>
<para>Images are disk images which are templates for virtual
machine file systems. The image service, Glance, is responsible
machine file systems. The OpenStack Image Service is responsible
for the storage and management of images within
OpenStack.</para>
<para>Instances are the individual virtual machines running on
physical compute nodes. The compute service, Nova, manages
physical compute nodes. The OpenStack Compute Service manages
instances. Any number of instances may be started from the same
image. Each instance runs from a copy of the base image, so
runtime changes made by an instance do not change the image it
@ -161,11 +162,13 @@
<para><guilabel>Initial State</guilabel></para>
<para><guilabel>Images and Instances</guilabel></para>
<para>The following diagram shows the system state prior to
launching an instance. The image store fronted by the image
service, Glance, has some number of predefined images. In the
cloud there is an available compute node with available vCPU,
launching an instance. The image store fronted by the Image
Service has some number of predefined images. In the
cloud, there is an available Compute node with available vCPU,
memory, and local disk resources. In addition, there are a number of
predefined volumes in the cinder-volume service.</para>
predefined volumes in the
<systemitem class="service">cinder-volume</systemitem> service.
</para>
<para>Figure 2.1. Base image state with no running
instances</para>
<figure>
@ -177,12 +180,13 @@
</mediaobject>
</figure>
<para><guilabel>Launching an instance</guilabel></para>
<para>To launch an instance the user selects an image, a flavor
and optionally other attributes. In this case the selected
<para>To launch an instance, the user selects an image, a flavor,
and other optional attributes. In this case the selected
flavor provides a root volume (as all flavors do) labeled vda in
the diagram and additional ephemeral storage labeled vdb in the
diagram. The user has also opted to map a volume from the
cinder-volume store to the third virtual disk, vdc, on this
<systemitem class="service">cinder-volume</systemitem>
store to the third virtual disk, vdc, on this
instance.</para>
<para>Figure 2.2. Instance creation from image and run time
state</para>
@ -202,7 +206,8 @@
present as the second disk (vdb). Be aware that the second disk
is an empty disk with an ephemeral life as it is destroyed when
you delete the instance. The compute node attaches to the
requested cinder-volume using iSCSI and maps this to the third
requested <systemitem class="service">cinder-volume</systemitem>
using iSCSI and maps this to the third
disk (vdc) as requested. The vCPU and memory resources are
provisioned and the instance is booted from the first drive. The
instance runs and changes data on the disks indicated in red in
@ -232,8 +237,8 @@
</figure>
<para>Once you launch a VM in OpenStack, there's something more
going on in the background. To understand what's happening
behind the Dashboard, lets take a deeper dive into OpenStacks
behind the dashboard, let's take a deeper dive into OpenStack's
VM provisioning. For launching a VM, you can either use
Command Line Interface or the OpenStack Horizon Dashboard.
the command-line interfaces or the OpenStack dashboard.
</para>
</chapter>