Heading and other consistency/clarity edits - Cloud Admin Guide

Closes-Bug: #1250515
author: diane fleming
Change-Id: Ib1755a3e10ddd348d0575b3c5e6aa1660d5f612e
backport: none

commit 371f556463 (parent 2a14350523)
@@ -38,17 +38,23 @@
 </annotation>
 </legalnotice>
 <abstract>
-<para>OpenStack offers open source software for cloud administrators to manage and troubleshoot
-an OpenStack cloud.</para>
+<para>OpenStack offers open source software for cloud
+administrators to manage and troubleshoot an OpenStack
+cloud.</para>
 </abstract>
 <revhistory>
 <!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
 <revision>
 <date>2013-11-12</date>
-<revdescription><itemizedlist>
-<listitem><para>Adds options for tuning operational status synchronization
-in the NVP plugin.</para></listitem>
-</itemizedlist></revdescription>
+<revdescription>
+<itemizedlist>
+<listitem>
+<para>Adds options for tuning operational
+status synchronization in the NVP
+plug-in.</para>
+</listitem>
+</itemizedlist>
+</revdescription>
 </revision>
 <revision>
 <date>2013-10-17</date>
@@ -65,8 +71,13 @@
 <revdescription>
 <itemizedlist spacing="compact">
 <listitem>
-<para>Moves object storage monitoring section to this guide.</para></listitem>
-<listitem><para>Removes redundant object storage information.</para></listitem>
+<para>Moves object storage monitoring
+section to this guide.</para>
+</listitem>
+<listitem>
+<para>Removes redundant object storage
+information.</para>
+</listitem>
 </itemizedlist>
 </revdescription>
 </revision>
@@ -75,13 +86,27 @@
 <revdescription>
 <itemizedlist spacing="compact">
 <listitem>
-<para>Moved all but config and install info from the following component
-guides to create the new guide:</para>
+<para>Moved all but configuration and
+installation information from these
+component guides to create the new
+guide:</para>
 <itemizedlist>
-<listitem><para>OpenStack Compute Administration Guide</para></listitem>
-<listitem><para>OpenStack Networking Administration Guide</para></listitem>
-<listitem><para>OpenStack Object Storage Administration Guide</para></listitem>
-<listitem><para>OpenStack Block Storage Service Administration Guide</para></listitem>
+<listitem>
+<para>OpenStack Compute
+Administration Guide</para>
+</listitem>
+<listitem>
+<para>OpenStack Networking
+Administration Guide</para>
+</listitem>
+<listitem>
+<para>OpenStack Object Storage
+Administration Guide</para>
+</listitem>
+<listitem>
+<para>OpenStack Block Storage
+Service Administration Guide</para>
+</listitem>
 </itemizedlist>
 </listitem>
 </itemizedlist>
@@ -98,4 +123,4 @@
 <xi:include href="ch_blockstorage.xml"/>
 <xi:include href="ch_networking.xml"/>
 <xi:include href="../common/app_support.xml"/>
-</book>
+</book>
@@ -5,130 +5,136 @@
 xml:id="managing-volumes">
 <?dbhtml stop-chunking?>
 <title>Block Storage</title>
-<para>
-OpenStack Block Storage allows you to add block-level storage to your
-OpenStack Compute instances. It is similar in function to the Amazon
-EC2 Elastic Block Storage (EBS) offering.
-</para>
-<para>
-The OpenStack Block Storage service uses a series of daemon processes.
-Each process will have the prefix
-<systemitem class="service">cinder-</systemitem>, and they will all
-run persistently on the host. The binaries can all be run on from a
-single node, or they can be spread across multiple nodes. They can
-also be run on the same node as other OpenStack services.
-</para>
-<para>
-The default OpenStack Block Storage service implementation is an iSCSI
-solution that uses Logical Volume Manager (LVM) for Linux. It also
-provides drivers that allow you to use a back-end storage device from
-a different vendor, in addition to or instead of the base LVM
-implementation.
-</para>
-<note>
-<para>
-The OpenStack Block Storage service is not a shared storage
-solution like a Storage Area Network (SAN) of NFS volumes, where
-you can attach a volume to multiple servers. With the OpenStack
-Block Storage service, you can attach a volume to only one
-instance at a time.
-</para>
-</note>
-<para>
-This chapter uses a simple example to demonstrate Block Storage. In
-this example, one cloud controller runs the
-<systemitem class="service">nova-api</systemitem>,
-<systemitem class="service">nova-scheduler</systemitem>,
-<systemitem class="service">nova-objectstore</systemitem>,
-<literal>nova-network</literal> and <literal>cinder-*</literal>
-services. There are two additional compute nodes running
-<systemitem class="service">nova-compute</systemitem>.
-</para>
-<para>
-The example in this chapter uses a custom partitioning scheme that
-uses 60GB of space and labels it as a Logical Volume (LV). The network
-uses <literal>FlatManager</literal> as the
-<literal>NetworkManager</literal> setting for OpenStack Compute.
-</para>
-<para>
-The network mode does not interfere with the way Block Storage works,
-but networking must be set up. For more information on networking, see
-<xref linkend="ch_networking"/>.
-</para>
+<para>The OpenStack Block Storage service works through the
+interaction of a series of daemon processes named <systemitem
+class="daemon">cinder-*</systemitem> that reside
+persistently on the host machine or machines. The binaries can
+all be run from a single node, or spread across multiple
+nodes. They can also be run on the same node as other
+OpenStack services.</para>
+<section xml:id="section_block-storage-intro">
+<title>Introduction to Block Storage</title>
+<para>To administer the OpenStack Block Storage service, it is
+helpful to understand a number of concepts. You must make
+certain choices when you configure the Block Storage
+service in OpenStack. The bulk of the options come down to
+two choices, single node or multi-node install. You can
+read a longer discussion about storage decisions in <link
+xlink:href="http://docs.openstack.org/trunk/openstack-ops/content/storage_decision.html"
+>Storage Decisions</link> in the <citetitle>OpenStack
+Operations Guide</citetitle>.</para>
+<para>The OpenStack Block Storage Service enables you to add
+extra block-level storage to your OpenStack Compute
+instances. This service is similar to the Amazon EC2
+Elastic Block Storage (EBS) offering.</para>
+</section>
 <?hard-pagebreak?>
 <section xml:id="section_manage-volumes">
-<title>Manage volumes</title>
-<para>
-To set up Compute to use volumes, you must have
-<systemitem class="service">lvm2</systemitem> installed.
-</para>
-<para>
-This procedure creates and attaches a volume to a server instance.
-</para>
+<title>Manage volumes</title>
+<para>The default OpenStack Block Storage service
+implementation is an iSCSI solution that uses Logical
+Volume Manager (LVM) for Linux.</para>
+<note>
+<para>The OpenStack Block Storage service is not a shared
+storage solution like a Storage Area Network (SAN) of
+NFS volumes, where you can attach a volume to multiple
+servers. With the OpenStack Block Storage service, you
+can attach a volume to only one instance at a
+time.</para>
+<para>The OpenStack Block Storage service also provides
+drivers that enable you to use several vendors'
+back-end storage devices, in addition to or instead of
+the base LVM implementation.</para>
+</note>
+<para>This high-level procedure shows you how to create and
+attach a volume to a server instance.</para>
 <procedure>
 <step>
-<para>
-The <command>cinder create</command> command creates a
-logical volume (LV) in the volume group (VG)
-<parameter>cinder-volumes</parameter>.
-</para>
-<para>
-Create a volume:
-</para>
-<programlisting>$ cinder create</programlisting>
+<para>You must configure both OpenStack Compute and
+the OpenStack Block Storage service through the
+<filename>cinder.conf</filename> file.</para>
 </step>
 <step>
-<para>
-The <command>nova volume-attach</command> command creates
-a unique iSCSI IQN that is exposed to the compute node.
-</para>
-<para>
-Attach the LV to an instance:
-</para>
-<programlisting>$ nova volume-attach</programlisting>
+<para>Create a volume through the <command>cinder
+create</command> command. This command creates
+an LV into the volume group (VG)
+"cinder-volumes."</para>
 </step>
 <step>
-<para>
-The OpenStack Block Storage service and OpenStack Compute
-can now be configured using the
-<filename>cinder.conf</filename> configuration file.
-</para>
+<para>Attach the volume to an instance through the
+<command>nova volume-attach</command> command.
+This command creates a unique iSCSI IQN that is
+exposed to the compute node.</para>
+<substeps>
+<step>
+<para>The compute node, which runs the
+instance, now has an active iSCSI session
+and new local storage (usually a
+<filename>/dev/sdX</filename>
+disk).</para>
+</step>
+<step>
+<para>libvirt uses that local storage as
+storage for the instance. The instance gets
+a new disk, usually a
+<filename>/dev/vdX</filename>
+disk.</para>
+</step>
+</substeps>
+</step>
 </procedure>
-<para>
-The compute node, which runs the instance, now has an active
-iSCSI session. It will also have a new local disk, usually a
-<filename>/dev/sdX</filename> disk. This local storage is
-used by libvirt as storage for the instance. The instance
-itself will usually get a separate new disk, usually a
-<filename>/dev/vdX</filename> disk.
-</para>
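As an illustrative end-to-end sketch of the walkthrough above (the volume name, size, instance name, and device path are placeholders, not values from this commit):

$ cinder create --display-name my-volume 10
$ cinder list
$ nova volume-attach my-instance <volume-id> /dev/vdb
$ nova volume-list

The attach step is what triggers the iSCSI IQN export described in the procedure; the instance then sees the volume as the /dev/vdX disk noted above.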
-<para>
-In some cases, instances can be stored and run from within
-volumes. For more information, see the
-<link xlink:href="http://docs.openstack.org/user-guide/content/boot_from_volume.html">
-Launch an instance from a volume</link> section in the
-<link xlink:href="http://docs.openstack.org/user-guide/content/">
-<citetitle>OpenStack End User Guide</citetitle></link>.
-</para>
-<xi:include href="section_multi_backend.xml"/>
-<xi:include href="section_backup-block-storage-disks.xml"/>
-<xi:include href="section_volume-migration.xml"/>
+<para>For this particular walk through, one cloud controller
+runs <systemitem class="service">nova-api</systemitem>,
+<systemitem class="service"
+>nova-scheduler</systemitem>, <systemitem
+class="service">nova-objectstore</systemitem>,
+<literal>nova-network</literal> and
+<literal>cinder-*</literal> services. Two additional
+compute nodes run <systemitem class="service"
+>nova-compute</systemitem>. The walk through uses a
+custom partitioning scheme that carves out 60 GB of space
+and labels it as LVM. The network uses the
+<literal>FlatManager</literal> and
+<literal>NetworkManager</literal> settings for
+OpenStack Compute (Nova).</para>
+<para>The network mode does not interfere with the way cinder
+works, but you must set up networking for cinder to work.
+For details, see <xref linkend="ch_networking"/>.</para>
+<para>To set up Compute to use volumes, ensure that Block
+Storage is installed along with lvm2. This guide describes
+how to troubleshoot your installation and back up your
+Compute volumes.</para>
+<section xml:id="boot-from-volume">
+<title>Boot from volume</title>
+<para>In some cases, instances can be stored and run from
+inside volumes. For information, see the <link
+xlink:href="http://docs.openstack.org/user-guide/content/boot_from_volume.html"
+>Launch an instance from a volume</link> section
+in the <link
+xlink:href="http://docs.openstack.org/user-guide/content/"
+><citetitle>OpenStack End User
+Guide</citetitle></link>.</para>
+</section>
+<?hard-pagebreak?>
+<xi:include href="section_multi_backend.xml"/>
+<xi:include href="section_backup-block-storage-disks.xml"/>
+<xi:include href="section_volume-migration.xml"/>
 </section>
 <section xml:id="troubleshooting-cinder-install">
-<title>Troubleshoot your installation</title>
-<para>
-This section contains troubleshooting information for Block
-Storage.
-</para>
+<title>Troubleshoot your installation</title>
+<para>This section provides useful tips to help troubleshoot
+your Block Storage Service installation.</para>
 <xi:include href="section_ts_cinder_config.xml"/>
 <xi:include href="section_ts_multipath_warn.xml"/>
 <xi:include href="section_ts_vol_attach_miss_sg_scan.xml"/>
-<xi:include href="section_ts_HTTP_bad_req_in_cinder_vol_log.xml"/>
+<xi:include
+href="section_ts_HTTP_bad_req_in_cinder_vol_log.xml"/>
 <xi:include href="section_ts_attach_vol_fail_not_JSON.xml"/>
 <xi:include href="section_ts_duplicate_3par_host.xml"/>
-<xi:include href="section_ts_failed_attach_vol_after_detach.xml"/>
-<xi:include href="section_ts_failed_attach_vol_no_sysfsutils.xml"/>
+<xi:include
+href="section_ts_failed_attach_vol_after_detach.xml"/>
+<xi:include
+href="section_ts_failed_attach_vol_no_sysfsutils.xml"/>
 <xi:include href="section_ts_failed_connect_vol_FC_SAN.xml"/>
 <xi:include href="section_ts_failed_sched_create_vol.xml"/>
 <xi:include href="section_ts_no_emulator_x86_64.xml"/>
(File diff suppressed because it is too large.)
@@ -3,7 +3,7 @@
 xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="ch_install-dashboard">
-<?dbhtml stop-chunking?>
+<?dbhtml stop-chunking?>
 <title>Dashboard</title>
 <para xmlns:raxm="http://docs.rackspace.com/api/metadata">The
 dashboard, also known as <link
@@ -14,9 +14,9 @@
 the OpenStack Compute cloud controller through the OpenStack
 APIs. For information about installing and configuring the
 dashboard, see the <citetitle>OpenStack Installation
-Guide</citetitle> for your distribution. After you install and
-configure the dashboard, you can complete the
-following tasks:</para>
+Guide</citetitle> for your distribution. After you install
+and configure the dashboard, complete these
+tasks:</para>
 <itemizedlist>
 <listitem>
 <para>Customize your dashboard. See <xref
@@ -33,7 +33,7 @@
 </listitem>
 <listitem xml:id="launch_instances">
 <para>Launch instances with the dashboard. See the <link
-xlink:href="http://docs.openstack.org/user-guide/content/"
+xlink:href="http://docs.openstack.org/user-guide/content/"
 ><citetitle>OpenStack End User
 Guide</citetitle></link>.</para>
 </listitem>
@@ -4,17 +4,25 @@
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="ch-identity-mgmt-config">
 <?dbhtml stop-chunking?>
-<title>Identity Management</title>
-<para>The default identity management system for OpenStack is the OpenStack Identity Service,
-code-named Keystone. Once Identity is installed, it is configured via a primary configuration
-file (<filename>etc/keystone.conf</filename>), possibly a separate logging configuration file,
-and initializing data into Keystone using the command line client.</para>
+<title>Identity management</title>
+<para>The OpenStack Identity Service, code-named Keystone, is the
+default identity management system for OpenStack. After you
+install the Identity Service, you configure it through the
+<filename>etc/keystone.conf</filename> configuration file and,
+possibly, a separate logging configuration file. You initialize
+data into the Identity Service by using the
+<command>keystone</command> command-line client.</para>
 <section xml:id="keystone-admin-concepts">
-<title>Identity Service Concepts</title>
-<xi:include href="../common/section_keystone-concepts-user-management.xml"/>
-<xi:include href="../common/section_keystone-concepts-service-management.xml"/>
-<xi:include href="../common/section_keystone-concepts-group-management.xml"/>
-<xi:include href="../common/section_keystone-concepts-domain-management.xml"/>
+<title>Identity Service concepts</title>
+<xi:include
+href="../common/section_keystone-concepts-user-management.xml"/>
+<xi:include
+href="../common/section_keystone-concepts-service-management.xml"/>
+<xi:include
+href="../common/section_keystone-concepts-group-management.xml"/>
+<xi:include
+href="../common/section_keystone-concepts-domain-management.xml"
+/>
 </section>
 <section xml:id="user-crud">
 <title>User CRUD</title>
@@ -24,8 +32,9 @@
 extension you should define a
 <literal>user_crud_extension</literal> filter, insert it after
 the <literal>*_body</literal> middleware and before the
-<literal>public_service</literal> app in the public_api WSGI
-pipeline in <filename>keystone.conf</filename> e.g.:</para>
+<literal>public_service</literal> application in the
+public_api WSGI pipeline in <filename>keystone.conf</filename>
+e.g.:</para>
 <programlisting language="ini"><?db-font-size 75%?>[filter:user_crud_extension]
 paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory

@@ -36,26 +45,56 @@ pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body j
 <screen><prompt>$</prompt> <userinput>curl -X PATCH http://localhost:5000/v2.0/OS-KSCRUD/users/<userid> -H "Content-type: application/json" \
 -H "X_Auth_Token: <authtokenid>" -d '{"user": {"password": "ABCD", "original_password": "DCBA"}}'</userinput></screen>
 <para>In addition to changing their password all of the users
-current tokens are deleted (if the back end is kvs or
+current tokens are deleted (if the back-end is kvs or
 sql).</para>
 </section>
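For illustration, a sketch of where the filter lands in the public_api pipeline of keystone.conf. The hunk header above truncates the pipeline line, so the middleware list shown here is an assumption based on the stock Havana-era pipeline; only the placement of user_crud_extension before public_service is prescribed by the text:

[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory

[pipeline:public_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service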
 <section xml:id="keystone-logging">
 <title>Logging</title>
-<para>You configure logging externally to the rest of Identity.
-The file specifying the logging configuration is in the
+<para>You configure logging externally to the rest of the Identity
+Service. The file specifying the logging configuration is in the
 <literal>[DEFAULT]</literal> section of the
 <filename>keystone.conf</filename> file under
 <literal>log_config</literal>. To route logging through
 syslog, set <literal>use_syslog=true</literal> option in the
 <literal>[DEFAULT]</literal> section.</para>
 <para>A sample logging file is available with the project in the
-directory <filename>etc/logging.conf.sample</filename>. Like
-other OpenStack projects, Identity uses the python logging
-module, which includes extensive configuration options for
-choosing the output levels and formats.</para>
+<filename>etc/logging.conf.sample</filename> directory. Like
+other OpenStack projects, the Identity Service uses the Python
+logging module, which includes extensive configuration options
+that let you define the output levels and formats.</para>
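A minimal sketch of those two [DEFAULT] options in keystone.conf (the log_config path is illustrative, not a default):

[DEFAULT]
# Point at an external logging configuration file ...
log_config = /etc/keystone/logging.conf
# ... or route all output through syslog instead.
use_syslog = true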
 <para>Review the <filename>etc/keystone.conf</filename> sample
-configuration files distributed with keystone for example
-configuration files for each server application.</para>
+configuration files that are distributed with the Identity
+Service. For example, each server application has its own
+configuration file.</para>
+<para>For services that have separate paste-deploy
+<filename>.ini</filename> files, you can configure
+<literal>auth_token</literal> middleware in the
+<literal>[keystone_authtoken]</literal> section in the main
+configuration file, such as <filename>nova.conf</filename>. For
+example in Compute, you can remove the middleware parameters
+from <filename>api-paste.ini</filename>, as follows:</para>
+<programlisting language="ini"><?db-font-size 75%?>[filter:authtoken]
+paste.filter_factory =
+keystoneclient.middleware.auth_token:filter_factory</programlisting>
+<para>Set these values in the <filename>nova.conf</filename>
+file:</para>
+<programlisting language="ini"><?db-font-size 75%?>[DEFAULT]
+...
+auth_strategy=keystone
+
+[keystone_authtoken]
+auth_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_uri = http://127.0.0.1:5000/
+admin_user = admin
+admin_password = SuperSekretPassword
+admin_tenant_name = service</programlisting>
+<note>
+<para>Middleware parameters in paste config take priority. You
+must remove them to use values in the
+<literal>[keystone_authtoken]</literal> section.</para>
+</note>
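As a quick, hedged sanity check that the middleware accepts tokens after you apply this configuration (credentials match the sample above; the token and tenant ID are placeholders; 8774 is the Compute API port):

$ keystone --username=admin --password=SuperSekretPassword --tenant_name=service token-get
$ curl -H "X-Auth-Token: <token-id>" http://127.0.0.1:8774/v2/<tenant-id>/servers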
 </section>
 <section xml:id="monitoring">
 <title>Monitoring</title>
@@ -129,20 +168,27 @@ keystone --username=admin --password=secrete --tenant_name=admin user-list
 keystone --username=admin --password=secrete --tenant_name=admin tenant-create --name=demo</programlisting>
 </section>
 <section xml:id="auth-token-middleware-with-username-and-password">
-<title>Authentication middleware with user name and password</title>
-<para>You can also configure the Identity Service authentication middleware using the
-<option>admin_user</option> and <option>admin_password</option> options. When using the
-<option>admin_user</option> and <option>admin_password</option> options the
-<option>admin_token</option> parameter is optional. If <option>admin_token</option> is
-specified, it is used only if the specified token is still valid.</para>
-<para>For services that have a separate paste-deploy ini file, you can configure the
-authentication middleware in the [keystone_authtoken] section of the main config file, such as
-<filename>nova.conf</filename>. In Compute, for example, you can remove the middleware
-parameters from <filename>api-paste.ini</filename> as follows:</para>
+<title>Authentication middleware with user name and
+password</title>
+<para>You can also configure the Identity Service authentication
+middleware using the <option>admin_user</option> and
+<option>admin_password</option> options. When using the
+<option>admin_user</option> and
+<option>admin_password</option> options the
+<option>admin_token</option> parameter is optional. If
+<option>admin_token</option> is specified, it is used only if
+the specified token is still valid.</para>
+<para>For services that have a separate paste-deploy .ini file,
+you can configure the authentication middleware in the
+<literal>[keystone_authtoken]</literal> section of the main
+configuration file, such as <filename>nova.conf</filename>. In
+Compute, for example, you can remove the middleware parameters
+from <filename>api-paste.ini</filename>, as follows:</para>
 <programlisting language="ini"><?db-font-size 75%?>[filter:authtoken]
 paste.filter_factory =
 keystoneclient.middleware.auth_token:filter_factory</programlisting>
-<para>And set the following values in <filename>nova.conf</filename> as follows:</para>
+<para>And set the following values in
+<filename>nova.conf</filename> as follows:</para>
 <programlisting language="ini"><?db-font-size 75%?>[DEFAULT]
 ...
 auth_strategy=keystone
@@ -156,11 +202,13 @@ admin_user = admin
 admin_password = SuperSekretPassword
 admin_tenant_name = service</programlisting>
 <note>
-<para>The middleware parameters in the paste config take priority. You must remove them to use
-the values in the [keystone_authtoken] section.</para>
+<para>The middleware parameters in the paste config take
+priority. You must remove them to use the values in the
+[keystone_authtoken] section.</para>
 </note>
-<para>Here is a sample paste config filter that makes use of the <option>admin_user</option> and
-<option>admin_password</option> parameters:</para>
+<para>This sample paste config filter makes use of the
+<option>admin_user</option> and
+<option>admin_password</option> options:</para>
 <programlisting language="ini"><?db-font-size 75%?>[filter:authtoken]
 paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
 service_port = 5000
@@ -170,8 +218,11 @@ auth_host = 127.0.0.1
 auth_token = 012345SECRET99TOKEN012345
 admin_user = admin
 admin_password = keystone123</programlisting>
-<para>Note that using this option requires an admin tenant/role relationship. The admin user is
-granted access to the admin role on the admin tenant.</para>
+<note>
+<para>Using this option requires an admin tenant/role
+relationship. The admin user is granted access to the admin
+role on the admin tenant.</para>
+</note>
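For illustration only, such a relationship can be created with the keystone client; the user, role, and tenant names here are the conventional defaults, not values taken from this guide:

$ keystone user-role-add --user admin --role admin --tenant admin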
 </section>
 <?hard-pagebreak?>
 <xi:include href="../common/section_identity-troubleshooting.xml"/>
@@ -12,7 +12,7 @@
 <section xml:id="section_networking-intro">
 <title>Introduction to Networking</title>
 <para>The Networking service, code-named Neutron, provides an
-API for defining network connectivity and addressing in
+API that lets you define network connectivity and addressing in
 the cloud. The Networking service enables operators to
 leverage different networking technologies to power their
 cloud networking. The Networking service also provides an
@@ -229,8 +229,8 @@
 options to decide on the right networking technology
 for the deployment.</para>
 <para>In the Havana release, OpenStack Networking provides
-the <emphasis role="bold">Modular Layer 2
-(ML2)</emphasis> plug-in that can concurrently use
+the <firstterm>Modular Layer 2
+(ML2)</firstterm> plug-in that can concurrently use
 multiple layer 2 networking technologies that are
 found in real-world data centers. It currently works
 with the existing Open vSwitch, Linux Bridge, and
@@ -239,7 +239,7 @@
 reduces the effort that is required to add and
 maintain them compared to monolithic plug-ins.</para>
 <note>
-<title>Plug-ins deprecation notice:</title>
+<title>Plug-in deprecation notice:</title>
 <para>The Open vSwitch and Linux Bridge plug-ins are
 deprecated in the Havana release and will be
 removed in the Icehouse release. All features have
@@ -391,9 +391,8 @@
 xlink:href="http://docs.openstack.org/havana/config-reference/content/section_networking-options-reference.html"
 >Networking configuration options</link> in
 <citetitle>Configuration
-Reference</citetitle>. The following sections
-explain in detail how to configure specific
-plug-ins.</para>
+Reference</citetitle>. These sections explain how
+to configure specific plug-ins.</para>
 <section xml:id="bigswitch_floodlight_plugin">
 <title>Configure Big Switch, Floodlight REST Proxy
 plug-in</title>
@@ -440,38 +439,38 @@
 Tunneling is easier to deploy because it does
 not require configuring VLANs on network
 switches.</para>
-<para>The following procedure uses
-tunneling:</para>
+<para>This procedure uses tunneling:</para>
 <procedure>
 <title>To configure OpenStack Networking to
 use the OVS plug-in</title>
 <step>
 <para>Edit
 <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
-</filename> to specify the following
-values (for database configuration,
-see <link
+</filename> to specify these values
+(for database configuration, see <link
 xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
 >Install Networking Services</link>
 in <citetitle>Installation
 Guide</citetitle>):</para>
 <programlisting language="ini">enable_tunneling=True
 tenant_network_type=gre
-tunnel_id_ranges=1:1000 # only required for nodes running agents
-local_ip=<net-IP-address-of-node></programlisting>
+tunnel_id_ranges=1:1000
+# only required for nodes running agents
+local_ip=<data-net-IP-address-of-node></programlisting>
 </step>
 <step>
-<para>If you are using the neutron DHCP
-agent, add the following to
-<filename>/etc/neutron/dhcp_agent.ini</filename>:</para>
+<para>If you use the neutron DHCP agent,
+add these lines to the
+<filename>/etc/neutron/dhcp_agent.ini</filename>
+file:</para>
 <programlisting language="ini">dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf</programlisting>
 </step>
 <step>
 <para>Create
 <filename>/etc/neutron/dnsmasq-neutron.conf</filename>,
-and add the following values to lower
-the MTU size on instances and prevent
-packet fragmentation over the GRE
+and add these values to lower the MTU
+size on instances and prevent packet
+fragmentation over the GRE
 tunnel:</para>
 <programlisting language="ini">dhcp-option-force=26,1400</programlisting>
 </step>
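A hedged follow-up to this procedure: restart the OVS agent so the tunneling settings take effect and inspect the bridges. The service name assumes an Ubuntu-style package layout; once tunnels form, the br-tun bridge should list GRE ports:

# sudo service neutron-plugin-openvswitch-agent restart
# ovs-vsctl show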
@@ -514,8 +513,8 @@ allow_overlapping_ips = True</programlisting>
 controller cluster, create a new
 [cluster:<name>] section in the
 <filename>/etc/neutron/plugins/nicira/nvp.ini</filename>
-file, and add the following entries
-(for database configuration, see <link
+file, and add these entries (for
+database configuration, see <link
 xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
 >Install Networking Services</link>
 in <citetitle>Installation
@@ -556,9 +555,9 @@ allow_overlapping_ips = True</programlisting>
 does not update the neutron init
 script to point to the NVP
 configuration file. Instead, you
-must manually update
+must manually update the
 <filename>/etc/default/neutron-server</filename>
-with the following:</para>
+file as follows:</para>
 <programlisting language="ini">NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini</programlisting>
 </warning>
 </listitem>
@@ -581,15 +580,16 @@ nvp_controller_connection=10.0.0.3:443:admin:admin:30:10:2:2
 nvp_controller_connection=10.0.0.4:443:admin:admin:30:10:2:2</programlisting>
 <note>
 <para>To debug <filename>nvp.ini</filename>
-configuration issues, run the following
-command from the host running
-neutron-server:
-<screen><prompt>#</prompt> <userinput>check-nvp-config <path/to/nvp.ini></userinput></screen>This
-command tests whether <systemitem
+configuration issues, run this command
+from the host that runs <systemitem
+class="service"
+>neutron-server</systemitem>:</para>
+<screen><prompt>#</prompt> <userinput>check-nvp-config <path/to/nvp.ini></userinput></screen>
+<para>This command tests whether <systemitem
 class="service"
 >neutron-server</systemitem> can log
-into all of the NVP Controllers, SQL
-server, and whether all of the UUID values
+into all of the NVP Controllers and the
+SQL server, and whether all UUID values
 are correct.</para>
 </note>
 </section>
@@ -624,7 +624,7 @@ password = "PLUMgrid-director-admin-password"</programlisting>
 Guide</citetitle>.</para>
 </step>
 <step>
-<para>To apply the new settings, restart
+<para>To apply the settings, restart
 <systemitem class="service"
 >neutron-server</systemitem>:</para>
 <screen><prompt>#</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
@@ -934,8 +934,7 @@ password = "PLUMgrid-director-admin-password"</programlisting>
 </procedure>
 <section xml:id="dhcp_agent_ovs">
 <title>DHCP agent setup: OVS plug-in</title>
-<para>The following DHCP agent options are
-required in the
+<para>These DHCP agent options are required in the
 <filename>/etc/neutron/dhcp_agent.ini</filename>
 file for the OVS plug-in:</para>
 <programlisting language="bash">[DEFAULT]
@@ -946,8 +945,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
 </section>
 <section xml:id="dhcp_agent_nvp">
 <title>DHCP agent setup: NVP plug-in</title>
-<para>The following DHCP agent options are
-required in the
+<para>These DHCP agent options are required in the
 <filename>/etc/neutron/dhcp_agent.ini</filename>
 file for the NVP plug-in:</para>
 <programlisting language="bash">[DEFAULT]
@@ -959,8 +957,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
 </section>
 <section xml:id="dhcp_agent_ryu">
 <title>DHCP agent setup: Ryu plug-in</title>
-<para>The following DHCP agent options are
-required in the
+<para>These DHCP agent options are required in the
 <filename>/etc/neutron/dhcp_agent.ini</filename>
 file for the Ryu plug-in:</para>
 <programlisting language="bash">[DEFAULT]
@@ -1015,7 +1012,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
 <para>Install the
 <systemitem>neutron-l3-agent</systemitem>
 binary on the network node:</para>
-<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-l3-agent</userinput> </screen>
+<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-l3-agent</userinput></screen>
 </step>
 <step>
 <para>To uplink the node that runs
@@ -1065,9 +1062,8 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
 particular router's network namespace.
 The namespace will have the name
 "qrouter-<UUID of the router>.
-The following commands are examples of
-running commands in the namespace of a
-router with UUID
+These example commands run in the
+router namespace with UUID
 47af3868-0fa8-4447-85f6-1304de32153b:</para>
 <screen><prompt>#</prompt> <userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list</userinput>
 <prompt>#</prompt> <userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip></userinput></screen>
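Two more commonly useful namespace commands, shown with the same example UUID: list all namespaces on the node, and inspect the router's routing table:

# ip netns list
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip route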
@@ -1111,9 +1107,9 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
 <programlisting language="ini">device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver</programlisting>
 </step>
 <step>
-<para>Make sure to set the following parameter
-in <filename>neutron.conf</filename> on
-the host that runs <systemitem
+<para>Set this parameter in the
+<filename>neutron.conf</filename> file
+on the host that runs <systemitem
 class="service"
 >neutron-server</systemitem>:</para>
 <programlisting language="ini">service_plugins = neutron.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin</programlisting>
@@ -1132,9 +1128,9 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
 <title>To configure FWaaS service and
 agent</title>
 <step>
-<para>Make sure to set the following parameter
-in the <filename>neutron.conf</filename>
-file on the host that runs <systemitem
+<para>Set this parameter in the
+<filename>neutron.conf</filename> file
+on the host that runs <systemitem
 class="service"
 >neutron-server</systemitem>:</para>
 <programlisting language="ini">service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin</programlisting>
@@ -1178,50 +1174,45 @@ enabled = True</programlisting>
 Networking server on that same host. However,
 Networking is entirely standalone and can be deployed
 on its own host as well. Depending on your deployment,
-Networking can also include the following
-agents.</para>
-<para>
-<table rules="all">
-<caption>Networking agents</caption>
-<col width="30%"/>
-<col width="70%"/>
-<thead>
-<tr>
-<th>Agent</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td><emphasis role="bold">plug-in
-agent</emphasis>
-(<literal>neutron-*-agent</literal>)</td>
-<td>Runs on each hypervisor to perform
-local vswitch configuration. The agent
-that runs depends on the plug-in that
-you use, and some plug-ins do not
-require an agent.</td>
-</tr>
-<tr>
-<td><emphasis role="bold">dhcp
-agent</emphasis>
-(<literal>neutron-dhcp-agent</literal>)</td>
-<td>Provides DHCP services to tenant
-networks. Some plug-ins use this
-agent.</td>
-</tr>
-<tr>
-<td><emphasis role="bold">l3
-agent</emphasis>
-(<literal>neutron-l3-agent</literal>)</td>
-<td>Provides L3/NAT forwarding to provide
-external network access for VMs on
-tenant networks. Some plug-ins use
-this agent.</td>
-</tr>
-</tbody>
-</table>
-</para>
+Networking can also include these agents.</para>
+<table rules="all">
+<caption>Networking agents</caption>
+<col width="30%"/>
+<col width="70%"/>
+<thead>
+<tr>
+<th>Agent</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><emphasis role="bold">plug-in
+agent</emphasis>
+(<literal>neutron-*-agent</literal>)</td>
+<td>Runs on each hypervisor to perform local
+vswitch configuration. The agent that runs
+depends on the plug-in that you use, and
+some plug-ins do not require an
+agent.</td>
+</tr>
+<tr>
+<td><emphasis role="bold">dhcp
+agent</emphasis>
+(<literal>neutron-dhcp-agent</literal>)</td>
+<td>Provides DHCP services to tenant networks.
+Some plug-ins use this agent.</td>
+</tr>
+<tr>
+<td><emphasis role="bold">l3 agent</emphasis>
+(<literal>neutron-l3-agent</literal>)</td>
+<td>Provides L3/NAT forwarding to provide
+external network access for VMs on tenant
+networks. Some plug-ins use this
+agent.</td>
+</tr>
+</tbody>
+</table>
 <para>These agents interact with the main neutron process
 through RPC (for example, rabbitmq or qpid) or through
 the standard Networking API. Further:</para>
@@ -1451,10 +1442,10 @@ enabled = True</programlisting>
 </tbody>
 </table></para>
 <?hard-pagebreak?>
-<para>The following table summarizes the attributes
-available for each networking abstraction. For
-information about API abstraction and operations,
-see the <link
+<para>This table summarizes the attributes available
+for each networking abstraction. For information
+about API abstraction and operations, see the
+<link
 xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
 >Networking API v2.0 Reference</link>.</para>
 <table rules="all">
@@ -1531,7 +1522,7 @@ enabled = True</programlisting>
 </tbody>
 </table>
 <table rules="all">
-<caption>Subnet Attributes</caption>
+<caption>Subnet attributes</caption>
 <col width="20%"/>
 <col width="15%"/>
 <col width="17%"/>
@@ -1735,9 +1726,9 @@ enabled = True</programlisting>
 the <link
 xlink:href="http://docs.openstack.org/user-guide/content/index.html"
 > OpenStack End User Guide</link>.</para>
-<para>The following table shows example neutron
-commands that enable you to complete basic
-Networking operations:</para>
+<para>This table shows example neutron commands that
+enable you to complete basic Networking
+operations:</para>
 <table rules="all">
 <caption>Basic Networking operations</caption>
 <col width="40%"/>
@@ -1816,9 +1807,9 @@ enabled = True</programlisting>
 <?hard-pagebreak?>
 <section xml:id="advanced_networking">
 <title>Advanced Networking operations</title>
-<para>The following table shows example neutron
-commands that enable you to complete advanced
-Networking operations:</para>
+<para>This table shows example neutron commands that
+enable you to complete advanced Networking
+operations:</para>
 <table rules="all">
 <caption>Advanced Networking operations</caption>
 <col width="40%"/>
@@ -1874,11 +1865,11 @@ enabled = True</programlisting>
 <title>Use Compute with Networking</title>
 <section xml:id="basic_workflow_with_nova">
 <title>Basic Compute and Networking operations</title>
-<para>The following table shows example neutron and
-nova commands that enable you to complete basic
-Compute and Networking operations:</para>
+<para>This table shows example neutron and nova
+commands that enable you to complete basic Compute
+and Networking operations:</para>
 <table rules="all">
-<caption>Basic Compute/Networking
+<caption>Basic Compute and Networking
 operations</caption>
 <col width="40%"/>
 <col width="60%"/>
@@ -1927,7 +1918,7 @@ enabled = True</programlisting>
 logical router ID.</para>
 </note>
 <note xml:id="network_compute_note">
-<title>VM creation and deletion</title>
+<title>Create and delete VMs</title>
 <itemizedlist>
 <listitem>
 <para>When you boot a Compute VM, a port
@@ -1949,9 +1940,9 @@ enabled = True</programlisting>
 </section>
 <section xml:id="advanced_vm_creation">
 <title>Advanced VM creation operations</title>
-<para>The following table shows example nova and
-neutron commands that enable you to complete
-advanced VM creation operations:</para>
+<para>This table shows example nova and neutron
+commands that enable you to complete advanced VM
+creation operations:</para>
 <table rules="all">
 <caption>Advanced VM creation operations</caption>
 <col width="40%"/>
@@ -1997,8 +1988,8 @@ enabled = True</programlisting>
 </note>
 </section>
 <section xml:id="enabling_ping_and_ssh">
-<title>Security groups (enabling ping and SSH on
-VMs)</title>
+<title>Enable ping and SSH on VMs (security
+groups)</title>
 <para>You must configure security group rules
 depending on the type of plug-in you are using. If
 you are using a plug-in that:</para>
@@ -2008,7 +1999,7 @@ enabled = True</programlisting>
 you can configure security group rules
 directly by using <command>neutron
 security-group-rule-create</command>.
-The following example allows
+This example enables
 <command>ping</command> and
 <command>ssh</command> access to your
 VMs.</para>
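The hunk does not show the commands themselves at this point; as a sketch only (the default security group name and the open 0.0.0.0/0 CIDR are assumptions mirroring the nova example below):

# neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 default
# neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 default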
@@ -2023,10 +2014,9 @@ enabled = True</programlisting>
 rules by using the <command>nova
 secgroup-add-rule</command> or
 <command>euca-authorize</command>
-command. The following
-<command>nova</command> commands allow
-<command>ping</command> and
-<command>ssh</command> access to your
+command. These <command>nova</command>
+commands enable <command>ping</command>
+and <command>ssh</command> access to your
 VMs.</para>
 <screen><prompt>#</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
 <prompt>#</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
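To confirm that the rules were added, the contemporary nova client offered a listing subcommand:

# nova secgroup-list-rules default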
@@ -2166,9 +2156,8 @@ enabled = True</programlisting>
 user submitting the request.</para>
 </listitem>
 </itemizedlist>
-<para>The following is an extract from the default
+<para>This extract is from the default
 <filename>policy.json</filename> file:</para>

 <programlisting language="bash">{
 [1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
 "admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
@@ -2209,11 +2198,11 @@ enabled = True</programlisting>
 <emphasis role="italic">mac_address</emphasis>
 attribute for a port only to administrators and the owner
 of the network where the port is attached.</para>
-<para>In some cases, some operations should be restricted to
-administrators only. The following example shows you how
-to modify a policy file to permit tenants to define
-networks and see their resources and permit administrative
-users to perform all other operations:</para>
+<para>In some cases, some operations are restricted to
+administrators only. This example shows you how to modify
+a policy file to permit tenants to define networks and see
+their resources and permit administrative users to perform
+all other operations:</para>
 <programlisting language="bash">{
 "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
 "admin_only": [["role:admin"]], "regular_user": [],
@@ -1,19 +1,18 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <chapter xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
-xmlns:xlink="http://www.w3.org/1999/xlink"
-version="5.0"
+xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="ch_admin-openstack-object-storage">
 <?dbhtml stop-chunking?>
 <title>Object Storage</title>
-<para>Object Storage is a scalable object storage system. It is
-not a file system in the traditional sense. You cannot mount
-this system like traditional SAN or NAS volumes. Because Object
+<para>Object Storage is a scalable object storage system and not a
+file system in the traditional sense. You cannot mount this
+system like traditional SAN or NAS volumes. Because Object
 Storage requires a different way of thinking when it comes to
 storage, take a few moments to review the key concepts in the
 developer documentation at <link
 xlink:href="http://docs.openstack.org/developer/swift/"
 >docs.openstack.org/developer/swift/</link>.</para>
-<!-- <xi:include href="../common/section_about-object-storage.xml"/> -->
+<!-- <xi:include href="../common/section_about-object-storage.xml"/> -->
 <xi:include href="section_object-storage-monitoring.xml"/>
 </chapter>
@@ -2,97 +2,82 @@
 <section xml:id="backup-block-storage-disks"
 xmlns="http://docbook.org/ns/docbook"
 xmlns:xi="http://www.w3.org/2001/XInclude"
-xmlns:xlink="http://www.w3.org/1999/xlink"
-version="5.0">
-<title>Back up your Block Storage disks</title>
-<para>While you can use the snapshot functionality (using
-LVM snapshot), you can also back up your volumes. The
-advantage of this method is that it reduces the size of the
-backup; only existing data will be backed up, instead of the
-entire volume. For this example, assume that a 100 GB volume
-has been created for an instance, while only 4 gigabytes are
-used. This process will back up only those 4 gigabytes, with
-the following tools:</para>
-<orderedlist>
-<listitem>
-<para><command>lvm2</command>, directly
-manipulates the volumes.</para>
-</listitem>
-<listitem>
-<para><command>kpartx</command> discovers the
-partition table created inside the instance.</para>
-</listitem>
-<listitem>
-<para><command>tar</command> creates a
-minimum-sized backup</para>
-</listitem>
-<listitem>
-<para><command>sha1sum</command> calculates the
-backup checksum, to check its consistency</para>
-</listitem>
-</orderedlist>
-<para>
-<emphasis role="bold">1- Create a snapshot of a used volume</emphasis></para>
-<itemizedlist>
-<listitem>
-<para>In order to backup our volume, we first need
-to create a snapshot of it. An LVM snapshot is
-the exact copy of a logical volume, which
-contains data in a frozen state. This prevents
-data corruption, because data will not be
-manipulated during the process of creating the
-volume itself. Remember the volumes
-created through a
-<command>nova volume-create</command>
-exist in an LVM's logical volume.</para>
-<para>Before creating the
-snapshot, ensure that you have enough
-space to save it. As a precaution, you
-should have at least twice as much space
-as the potential snapshot size. If
-insufficient space is available, there is
-a risk that the snapshot could become
-corrupted.</para>
-<para>Use the following command to obtain a list
-of all volumes:
-<screen><prompt>$</prompt> <userinput>lvdisplay</userinput></screen>
-In
-this example, we will refer to a volume called
-<literal>volume-00000001</literal>, which
-is a 10GB volume. This process can be applied
-to all volumes, not matter their size. At the
-end of the section, we will present a script
-that you could use to create scheduled
-backups. The script itself exploits what we
-discuss here.</para>
-<para>First, create the snapshot; this can be
-achieved while the volume is attached to an
-instance :</para>
-<para>
-<screen><prompt>$</prompt> <userinput>lvcreate --size 10G --snapshot --name volume-00000001-snapshot /dev/nova-volumes/volume-00000001</userinput></screen>
-</para>
-<para>We indicate to LVM we want a snapshot of an
-already existing volume with the
-<literal>--snapshot</literal>
-configuration option. The command includes the
-size of the space reserved for the snapshot
-volume, the name of the snapshot, and the path
-of an already existing volume (In most cases,
-the path will be
-<filename>/dev/nova-volumes/<replaceable>$volume_name</replaceable></filename>).</para>
-<para>The size doesn't have to be the same as the
-volume of the snapshot. The size parameter
-designates the space that LVM will reserve for
-the snapshot volume. As a precaution, the size
+xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
+<title>Back up Block Storage Service disks</title>
+<para>While you can use the LVM snapshot to create snapshots, you
+can also use it to back up your volumes. By using LVM
+snapshot, you reduce the size of the backup; only existing
+data is backed up instead of the entire volume.</para>
+<para>To back up a volume, you must create a snapshot of it. An
+LVM snapshot is the exact copy of a logical volume, which
+contains data in a frozen state. This prevents data
+corruption, because data cannot be manipulated during the
+volume creation process. Remember that the volumes created
+through a <command>nova volume-create</command> command exist
+in an LVM logical volume.</para>
+<para>Before you create the snapshot, you must have enough space
+to save it. As a precaution, you should have at least twice as
+much space as the potential snapshot size. If insufficient
+space is available, the snapshot might become
+corrupted.</para>
+<para>For this example, assume that a 100 GB volume named
+<literal>volume-00000001</literal> was created for an
+instance while only 4 GB are used. This example uses these
+commands to back up only those 4 GB:</para>
+<itemizedlist>
+<listitem>
+<para><command>lvm2</command> command. Directly
+manipulates the volumes.</para>
+</listitem>
+<listitem>
+<para><command>kpartx</command> command. Discovers the
+partition table created inside the instance.</para>
+</listitem>
+<listitem>
+<para><command>tar</command> command. Creates a
+minimum-sized backup.</para>
+</listitem>
+<listitem>
+<para><command>sha1sum</command> command. Calculates the
+backup checksum to check its consistency.</para>
+</listitem>
+</itemizedlist>
+<para>You can apply this process to volumes of any size.</para>
+<procedure>
+<title>To back up Block Storage Service disks</title>
+<step>
+<title>Create a snapshot of a used volume</title>
+<substeps>
+<step>
+<para>Use this command to list all volumes:</para>
+<screen><prompt>$</prompt> <userinput>lvdisplay</userinput></screen>
+</step>
+<step>
+<para>Create the snapshot; you can do this while
+the volume is attached to an instance:</para>
+<screen><prompt>$</prompt> <userinput>lvcreate --size 10G --snapshot --name volume-00000001-snapshot /dev/nova-volumes/volume-00000001</userinput></screen>
+<para>Use the <option>--snapshot</option>
+configuration option to tell LVM that you want
+a snapshot of an already existing volume. The
+command includes the size of the space
+reserved for the snapshot volume, the name of
+the snapshot, and the path of an already
+existing volume. Generally, this path is
+<filename>/dev/nova-volumes/<replaceable>$volume_name</replaceable></filename>.</para>
+<para>The size does not have to be the same as the
+volume of the snapshot. The
+<parameter>size</parameter> parameter
+defines the space that LVM reserves for the
+snapshot volume. As a precaution, the size
 should be the same as that of the original
-volume, even if we know the whole space is not
+volume, even if the whole space is not
 currently used by the snapshot.</para>
-<para>We now have a full snapshot, and it only took few seconds !</para>
-<para>Run <command>lvdisplay</command> again to
-verify the snapshot. You should see now your
-snapshot:</para>
-<para>
-<programlisting>--- Logical volume ---
+</step>
+
+<step>
+<para>Run the <command>lvdisplay</command> command
+again to verify the snapshot:</para>
+<programlisting>--- Logical volume ---
 LV Name /dev/nova-volumes/volume-00000001
 VG Name nova-volumes
 LV UUID gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
@@ -128,158 +113,132 @@ Allocation inherit
 Read ahead sectors auto
 - currently set to 256
 Block device 251:14</programlisting>
|
||||
                        </step>
                    </substeps>
                </step>
                <step>
                    <title>Partition table discovery</title>
                    <substeps>
                        <step>
                            <para>To exploit the snapshot with the
                                <command>tar</command> command, mount your
                                partition on the Block Storage Service
                                server.</para>
                            <para>The <command>kpartx</command> utility
                                discovers and maps partition tables. You can
                                use it to view partitions that are created
                                inside the instance. If you do not use the
                                partitions created inside instances, you
                                cannot see their content or create efficient
                                backups.</para>
                            <screen><prompt>$</prompt> <userinput>kpartx -av /dev/nova-volumes/volume-00000001-snapshot</userinput></screen>
                            <note os="debian">
                                <para>On a Debian-based distribution, you can
                                    use the <command>apt-get install
                                    kpartx</command> command to install
                                    <command>kpartx</command>.</para>
                            </note>
                            <para>If the tools successfully find and map the
                                partition table, no errors are
                                returned.</para>
                        </step>
                        <step>
                            <para>To check the partition table map, run this
                                command:</para>
                            <screen><prompt>$</prompt> <userinput>ls /dev/mapper/nova*</userinput></screen>
                            <para>You can see the
                                <literal>nova--volumes-volume--00000001--snapshot1</literal>
                                partition.</para>
                            <para>If you created more than one partition on
                                that volume, you see several partitions; for
                                example:
                                <literal>nova--volumes-volume--00000001--snapshot2</literal>,
                                <literal>nova--volumes-volume--00000001--snapshot3</literal>,
                                and so on.</para>
                        </step>
                        <step>
                            <para>Mount your partition:</para>
                            <screen><prompt>$</prompt> <userinput>mount /dev/mapper/nova--volumes-volume--00000001--snapshot1 /mnt</userinput></screen>
                            <para>If the partition mounts successfully, no
                                errors are returned.</para>
                            <para>You can directly access the data inside the
                                instance. If a message prompts you for a
                                partition or you cannot mount it, determine
                                whether enough space was allocated for the
                                snapshot or whether the
                                <command>kpartx</command> command failed to
                                discover the partition table.</para>
                            <para>Allocate more space to the snapshot and try
                                the process again.</para>
                        </step>
                    </substeps>
                </step>
                <step>
                    <title>Use the <command>tar</command> command to create
                        archives</title>
                    <para>Create a backup of the volume:</para>
                    <screen><prompt>$</prompt> <userinput>tar --exclude={"lost+found","some/data/to/exclude"} -czf volume-00000001.tar.gz -C /mnt/ /backup/destination</userinput></screen>
                    <para>This command creates a <filename>tar.gz</filename>
                        file that contains the data, <emphasis role="italic"
                        >and data only</emphasis>. This ensures that you
                        do not waste space by backing up empty sectors.</para>
                </step>
                <step>
                    <title>Checksum calculation</title>
                    <para>You should always have the checksum for your backup
                        files. The checksum is a unique ID for a file. When
                        you transfer the same file over the network, you can
                        run another checksum calculation to ensure that the
                        file was not corrupted during its transfer: if the
                        checksums are different, the file is corrupted.</para>
                    <para>Run this command to calculate the checksum for your
                        file and save the result to a file:</para>
                    <screen><prompt>$</prompt> <userinput>sha1sum volume-00000001.tar.gz > volume-00000001.checksum</userinput></screen>
                    <note>
                        <para>Use the <command>sha1sum</command> command
                            carefully because the time it takes to complete
                            the calculation is directly proportional to the
                            size of the file.</para>
                        <para>For files larger than around 4 to 6 GB, and
                            depending on your CPU, the process might take a
                            long time.</para>
                    </note>
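                    <para>Later, for example after you transfer the archive
                        to another server, you can verify it against the
                        saved checksum (assuming the archive and the checksum
                        file are in the current directory):</para>
                    <screen><prompt>$</prompt> <userinput>sha1sum -c volume-00000001.checksum</userinput></screen>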
</step>
                <step>
                    <title>Clean up after your backup</title>
                    <para>Now that you have an efficient and consistent
                        backup, use these commands to clean up the file
                        system:</para>
                    <substeps>
                        <step>
                            <para>Unmount the volume:</para>
                            <screen><userinput>umount /mnt</userinput></screen>
                        </step>
                        <step>
                            <para>Delete the partition table:</para>
                            <screen><userinput>kpartx -dv /dev/nova-volumes/volume-00000001-snapshot</userinput></screen>
                        </step>
                        <step>
                            <para>Remove the snapshot:</para>
                            <screen><userinput>lvremove -f /dev/nova-volumes/volume-00000001-snapshot</userinput></screen>
                        </step>
                    </substeps>
                    <para>Repeat these steps for all your volumes.</para>
                </step>
                <step>
                    <title>Automate your backups</title>
                    <para>Because more and more volumes might be allocated to
                        your Block Storage service, you might want to automate
                        your backups. The <link
                        xlink:href="https://github.com/Razique/BashStuff/blob/master/SYSTEMS/OpenStack/SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh"
                        >SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh</link>
                        script assists you with this task. The script performs
                        the operations from the previous example, but also
                        provides a mail report and runs the backup based on
                        the <option>backups_retention_days</option>
                        setting.</para>
                    <para>Launch this script from the server that runs the
                        Block Storage Service.</para>
                    <para>This example shows a mail report:</para>
                    <programlisting>Backup Start Time - 07/10 at 01:00:01
Current retention - 7 days
@ -293,12 +252,13 @@
Removing old backups... : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_0
---------------------------------------
Total backups size - 267G - Used space : 35%
Total execution time - 1 h 75 m and 35 seconds</programlisting>
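                    <para>To run the backup nightly, you could, for example,
                        schedule the script with cron. This crontab entry is
                        a sketch; the script location is an assumption, so
                        adjust the path to where you installed the
                        script:</para>
                    <programlisting># /etc/cron.d/volumes-backup (example; adjust the script path)
0 1 * * * root /usr/local/bin/SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh</programlisting>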
                    <para>The script also enables you to SSH to your instances
                        and run a <command>mysqldump</command> command into
                        them. To make this work, enable the connection to the
                        Compute project keys. If you do not want to run the
                        <command>mysqldump</command> command, you can add
                        <literal>enable_mysql_dump=0</literal> to the
                        script to turn off this functionality.</para>
                </step>
            </procedure>
        </section>
@ -1,29 +1,48 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="multi_backend" xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0">
    <title>Configure a multiple-storage back-end</title>
    <para>This section presents the multi back-end storage feature
        introduced with the Grizzly release. Multi back-end allows
        the creation of several back-end storage solutions that
        serve the same OpenStack Compute configuration. Basically,
        multi back-end launches one <systemitem class="service"
        >cinder-volume</systemitem> for each back-end.</para>
    <para>In a multi back-end configuration, each back-end has a
        name (<literal>volume_backend_name</literal>). Several
        back-ends can have the same name. In that case, the
        scheduler decides which back-end the volume has to be
        created in.</para>
    <para>The name of the back-end is declared as an
        extra-specification of a volume type (for example,
        <literal>volume_backend_name=LVM_iSCSI</literal>). When a
        volume is created, the scheduler chooses an appropriate
        back-end to handle the request, according to the volume
        type specified by the user.</para>
    <simplesect>
        <title>Enable multi back-end</title>
        <para>To enable a multi back-end configuration, you must set
            the <option>enabled_backends</option> flag in the
            <filename>cinder.conf</filename> file. This flag
            defines the names (separated by a comma) of the
            configuration groups for the different back-ends: one
            name is associated to one configuration group for a
            back-end (for example,
            <literal>[lvmdriver-1]</literal>).</para>
        <note>
            <para>The configuration group name is not related to the
                <literal>volume_backend_name</literal>.</para>
        </note>
        <para>The options for a configuration group must be defined
            in the group (or the default options are used). All the
            standard Cinder configuration options
            (<literal>volume_group</literal>,
            <literal>volume_driver</literal>, and so on) might be
            used in a configuration group. Configuration values in
            the <literal>[DEFAULT]</literal> configuration group are
            not used.</para>
        <para>This example shows three back-ends:</para>
        <programlisting language="ini"># a list of back-ends that are served by this compute node
enabled_backends=lvmdriver-1,lvmdriver-2,lvmdriver-3
[lvmdriver-1]
volume_group=cinder-volumes-1
@ -36,62 +55,102 @@
volume_backend_name=LVM_iSCSI
[lvmdriver-3]
volume_group=cinder-volumes-3
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI_b</programlisting>
        <para>In this configuration, <literal>lvmdriver-1</literal>
            and <literal>lvmdriver-2</literal> have the same
            <literal>volume_backend_name</literal>. If a volume
            creation requests the <literal>LVM_iSCSI</literal>
            back-end name, the scheduler uses the capacity filter
            scheduler to choose the most suitable driver, which is
            either <literal>lvmdriver-1</literal> or
            <literal>lvmdriver-2</literal>. The capacity filter
            scheduler is enabled by default. The next section
            provides more information. In addition, this example
            presents a third back-end,
            <literal>lvmdriver-3</literal>, which has a different
            back-end name.</para>
    </simplesect>
    <simplesect>
        <title>Configure the Cinder scheduler for multi back-end</title>
        <para>You must enable the <option>filter_scheduler</option>
            option to use multi back-end. The filter scheduler acts
            in two steps:</para>
        <orderedlist>
            <listitem>
                <para>The filter scheduler filters the available
                    back-ends. By default,
                    <literal>AvailabilityZoneFilter</literal>,
                    <literal>CapacityFilter</literal>, and
                    <literal>CapabilitiesFilter</literal> are
                    enabled.</para>
            </listitem>
            <listitem>
                <para>The filter scheduler weighs the previously
                    filtered back-ends. By default,
                    <literal>CapacityWeigher</literal> is enabled.
                    The <literal>CapacityWeigher</literal> assigns
                    higher scores to back-ends with the most
                    available space.</para>
            </listitem>
        </orderedlist>
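        <para>For reference, these defaults correspond to settings
            like the following in <filename>cinder.conf</filename>.
            This is a sketch: the option names assume the
            Grizzly/Havana-era filter scheduler, so verify them
            against your release before overriding them.</para>
        <programlisting language="ini"># assumed defaults; override only to change the filtering or weighing policy
scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers=CapacityWeigher</programlisting>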
        <para>The scheduler uses the filtering and weighing process
            to pick the best back-end to handle the request, and
            explicitly creates volumes on specific back-ends through
            the use of volume types.</para>
        <note>
            <para>To enable the filter scheduler, add this line to
                the <filename>cinder.conf</filename> configuration
                file:</para>
            <programlisting language="ini">scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler</programlisting>
            <para>Because the Cinder scheduler defaults to
                <option>filter_scheduler</option> in Grizzly, this
                setting is not required.</para>
        </note>
        <!-- TODO: when filter/weighing scheduler documentation will be up, a ref should be added here -->
    </simplesect>
    <simplesect>
        <title>Volume type</title>
        <para>Before you use a volume type, you must declare it to
            Cinder:</para>
        <screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-create lvm</userinput></screen>
        <para>Then, create an extra-specification that links the
            volume type to a back-end name:</para>
        <screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-key lvm set volume_backend_name=LVM_iSCSI</userinput></screen>
        <para>This example creates an <literal>lvm</literal> volume
            type with
            <literal>volume_backend_name=LVM_iSCSI</literal> as an
            extra-specification.</para>
        <para>Create another volume type:</para>
        <screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-create lvm_gold</userinput></screen>
        <screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-key lvm_gold set volume_backend_name=LVM_iSCSI_b</userinput></screen>
        <para>This second volume type is named
            <literal>lvm_gold</literal> and has
            <literal>LVM_iSCSI_b</literal> as its back-end
            name.</para>
        <note>
            <para>To list the extra-specifications, use this
                command:</para>
            <screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin extra-specs-list</userinput></screen>
        </note>
        <note>
            <para>If a volume type points to a
                <literal>volume_backend_name</literal> that does
                not exist in the Cinder configuration, the
                <literal>filter_scheduler</literal> returns an
                error that it cannot find a valid host with the
                suitable back-end.</para>
        </note>
    </simplesect>
    <simplesect>
        <title>Usage</title>
        <para>When you create a volume, you must specify the volume
            type. The extra-specifications of the volume type are
            used to determine which back-end has to be used.</para>
        <screen><prompt>$</prompt> <userinput>cinder create --volume_type lvm --display_name test_multi_backend 1</userinput></screen>
        <para>Considering the <literal>cinder.conf</literal>
            described previously, the scheduler creates this volume
            on <literal>lvmdriver-1</literal> or
            <literal>lvmdriver-2</literal>.</para>
        <screen><prompt>$</prompt> <userinput>cinder create --volume_type lvm_gold --display_name test_multi_backend 1</userinput></screen>
        <para>This second volume is created on
            <literal>lvmdriver-3</literal>.</para>
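        <para>To confirm which back-end a volume was scheduled to,
            you can, as the admin user, inspect the volume's host
            attribute. This is a sketch that assumes the
            Grizzly/Havana-era
            <literal>os-vol-host-attr:host</literal> admin
            attribute:</para>
        <screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin show test_multi_backend</userinput></screen>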
    </simplesect>
</section>
@ -16,7 +16,7 @@
            tenants direct access to a public network that can be used
            to reach the Internet. It might also be used to integrate
            with VLANs in the network that already have a defined
            meaning (for example, enable a VM from the "marketing"
            department to be placed on the same VLAN as bare-metal
            marketing hosts in the same data center).</para>
        <para>The provider extension allows administrators to
@ -35,122 +35,130 @@
            <para>A number of terms are used in the provider extension
                and in the configuration of plug-ins supporting the
                provider extension:</para>
            <table rules="all">
                <caption>Provider extension terminology</caption>
                <col width="20%"/>
                <col width="80%"/>
                <thead>
                    <tr>
                        <th>Term</th>
                        <th>Description</th>
                    </tr>
                </thead>
                <tbody>
                    <tr>
                        <td><emphasis role="bold">virtual
                            network</emphasis></td>
                        <td>A Networking L2 network (identified by a
                            UUID and an optional name) whose ports can
                            be attached as vNICs to Compute instances
                            and to various Networking agents. The Open
                            vSwitch and Linux Bridge plug-ins each
                            support several different mechanisms to
                            realize virtual networks.</td>
                    </tr>
                    <tr>
                        <td><emphasis role="bold">physical
                            network</emphasis></td>
                        <td>A network connecting virtualization hosts
                            (such as Compute nodes) with each other
                            and with other network resources. Each
                            physical network might support multiple
                            virtual networks. The provider extension
                            and the plug-in configurations identify
                            physical networks by using simple string
                            names.</td>
                    </tr>
                    <tr>
                        <td><emphasis role="bold">tenant
                            network</emphasis></td>
                        <td>A virtual network that a tenant or an
                            administrator creates. The physical
                            details of the network are not exposed to
                            the tenant.</td>
                    </tr>
                    <tr>
                        <td><emphasis role="bold">provider
                            network</emphasis></td>
                        <td>A virtual network administratively created
                            to map to a specific network in the data
                            center, typically to enable direct access
                            to non-OpenStack resources on that
                            network. Tenants can be given access to
                            provider networks.</td>
                    </tr>
                    <tr>
                        <td><emphasis role="bold">VLAN
                            network</emphasis></td>
                        <td>A virtual network implemented as packets
                            on a specific physical network containing
                            IEEE 802.1Q headers with a specific VID
                            field value. VLAN networks sharing the
                            same physical network are isolated from
                            each other at L2, and can even have
                            overlapping IP address spaces. Each
                            distinct physical network supporting VLAN
                            networks is treated as a separate VLAN
                            trunk, with a distinct space of VID
                            values. Valid VID values are 1 through
                            4094.</td>
                    </tr>
                    <tr>
                        <td><emphasis role="bold">flat
                            network</emphasis></td>
                        <td>A virtual network implemented as packets
                            on a specific physical network containing
                            no IEEE 802.1Q header. Each physical
                            network can realize at most one flat
                            network.</td>
                    </tr>
                    <tr>
                        <td><emphasis role="bold">local
                            network</emphasis></td>
                        <td>A virtual network that allows
                            communication within each host, but not
                            across a network. Local networks are
                            intended mainly for single-node test
                            scenarios, but can have other uses.</td>
                    </tr>
                    <tr>
                        <td><emphasis role="bold">GRE
                            network</emphasis></td>
                        <td>A virtual network implemented as network
                            packets encapsulated using GRE. GRE
                            networks are also referred to as <emphasis
                            role="italic">tunnels</emphasis>. GRE
                            tunnel packets are routed by the IP
                            routing table for the host, so GRE
                            networks are not associated by Networking
                            with specific physical networks.</td>
                    </tr>
                    <tr>
                        <td><emphasis role="bold">Virtual Extensible
                            LAN (VXLAN) network</emphasis></td>
                        <td>VXLAN is a proposed encapsulation protocol
                            for running an overlay network on existing
                            Layer 3 infrastructure. An overlay network
                            is a virtual network that is built on top
                            of existing network Layer 2 and Layer 3
                            technologies to support elastic compute
                            architectures.</td>
                    </tr>
                </tbody>
            </table>
            <para>The ML2, Open vSwitch, and Linux Bridge plug-ins
                support VLAN networks, flat networks, and local
                networks. Only the ML2 and Open vSwitch plug-ins
                currently support GRE and VXLAN networks, provided
                that the required features exist in the host's Linux
                kernel, Open vSwitch, and iproute2 packages.</para>
</section>
        <section xml:id="provider_attributes">
            <title>Provider attributes</title>
            <para>The provider extension extends the Networking
                network resource with these attributes:</para>
            <table rules="all">
                <caption>Provider network attributes</caption>
                <col width="25%"/>
                <col width="10%"/>
                <col width="25%"/>
@ -229,13 +237,13 @@
                on policy configuration.</para>
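            <para>For example, an administrator can create a VLAN
                provider network by setting these attributes directly
                at creation time. This is a sketch: the network name,
                physical network label, and VLAN ID are placeholder
                values for illustration.</para>
            <screen><prompt>$</prompt> <userinput>neutron net-create public01 --provider:network_type vlan \
  --provider:physical_network physnet1 --provider:segmentation_id 101</userinput></screen>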
</section>
        <section xml:id="provider_api_workflow">
            <title>Provider extension API operations</title>
            <para>To use the provider extension with the default
                policy settings, you must have the administrative
                role.</para>
            <para>This table shows example neutron commands that
                enable you to complete basic provider extension API
                operations:</para>
            <table rules="all">
                <caption>Basic provider extension API
                    operations</caption>
@ -325,7 +333,7 @@
            </section>
        </section>
        <section xml:id="section_l3_router_and_nat">
            <title>L3 routing and NAT</title>
            <para>The Networking API provides abstract L2 network segments
                that are decoupled from the technology used to implement
                the L2 network. Networking includes an API extension that
@ -495,9 +503,8 @@
                the default policy settings enable only administrative
                users to create, update, and delete external
                networks.</para>
            <para>This table shows example neutron commands that
                enable you to complete basic L3 operations:</para>
            <table rules="all">
                <caption>Basic L3 operations</caption>
                <col width="40%"/>
@ -711,10 +718,12 @@
            <note>
                <itemizedlist>
                    <listitem>
                        <para>To use the Compute security group API with
                            Networking, the Networking plug-in must
                            implement the security group API. The
                            following plug-ins currently implement this:
                            ML2, Nicira NVP, Open vSwitch, Linux Bridge,
                            NEC, and Ryu.</para>
                    </listitem>
                    <listitem>
                        <para>You must configure the correct firewall
@ -735,9 +744,9 @@
                </itemizedlist>
            </note>
            <section xml:id="securitygroup_api_abstractions">
                <title>Security group API abstractions</title>
                <table rules="all">
                    <caption>Security group attributes</caption>
                    <col width="20%"/>
                    <col width="20%"/>
                    <col width="20%"/>
@ -784,7 +793,7 @@
                    </tbody>
                </table>
                <table rules="all">
                    <caption>Security group rules</caption>
                    <col width="20%"/>
                    <col width="20%"/>
                    <col width="20%"/>
@ -870,8 +879,8 @@
            </section>
            <section xml:id="securitygroup_workflow">
                <title>Basic security group operations</title>
                <para>This table shows example neutron commands that
                    enable you to complete basic security group
                    operations:</para>
                <table rules="all">
                    <caption>Basic security group operations</caption>
@ -949,8 +958,8 @@
                release offers a reference implementation that is
                based on the HAProxy software load balancer.</para>
            </note>
            <para>This table shows example neutron commands that enable
                you to complete basic LBaaS operations:</para>
            <table rules="all">
                <caption>Basic LBaaS operations</caption>
                <col width="40%"/>
@ -1024,7 +1033,6 @@
            <para>The Firewall-as-a-Service (FWaaS) API is an experimental
                API that enables early adopters and vendors to test their
                networking implementations.</para>
            <para>The FWaaS is backed by a <emphasis role="bold">reference
                implementation</emphasis> that works with the
                Networking OVS plug-in and provides perimeter firewall
@ -1398,15 +1406,13 @@
                    </listitem>
                </itemizedlist>
                <note>
                    <para>The FWaaS features and the preceding workflow can
                        also be accessed from the Horizon user interface.
                        This support is disabled by default, but can be
                        enabled by configuring
                        <filename>#HORIZON_DIR/openstack_dashboard/local/local_settings.py</filename>
                        and setting:</para>
                    <programlisting language="ini">'enable_firewall' = True</programlisting>
                </note>
            </section>
        </section>
@ -1422,8 +1428,8 @@
            two instances to enable fast data plane failover.</para>
            <note>
                <para>The allowed-address-pairs extension is currently
                    only supported by these plug-ins: ML2, Nicira NVP, and
                    Open vSwitch.</para>
            </note>
<section xml:id="section_allowed_address_pairs_workflow">
|
||||
<title>Basic allowed address pairs operations</title>
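                <para>For example, to add an allowed address pair to
                    an existing port, you can update the port with the
                    extension attribute. This is a sketch: the port ID
                    and the IP address are placeholders, and the
                    dict-list argument syntax assumes the Havana-era
                    neutron CLI.</para>
                <screen><prompt>$</prompt> <userinput>neutron port-update <replaceable>PORT_ID</replaceable> --allowed_address_pairs list=true type=dict ip_address=10.0.0.1</userinput></screen>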
@ -1467,7 +1473,7 @@
            extensions for each plug-in.</para>
        <section xml:id="section_nicira_extensions">
            <title>Nicira NVP extensions</title>
            <para>These sections explain Nicira NVP plug-in
                extensions.</para>
            <section xml:id="section_nicira_nvp_plugin_qos_extension">
                <title>Nicira NVP QoS extension</title>
@ -1505,7 +1511,7 @@
                    xml:id="section_nicira_nvp_qos_api_abstractions">
                    <title>Nicira NVP QoS API abstractions</title>
                    <table rules="all">
                        <caption>Nicira NVP QoS attributes</caption>
                        <col width="20%"/>
                        <col width="20%"/>
                        <col width="20%"/>
@ -1578,9 +1584,9 @@
                </section>
                <section xml:id="nicira_nvp_qos_walk_through">
                    <title>Basic Nicira NVP QoS operations</title>
                    <para>This table shows example neutron commands
                        that enable you to complete basic queue
                        operations:</para>
                    <table rules="all">
                        <caption>Basic Nicira NVP QoS
                            operations</caption>
@ -1652,45 +1658,43 @@
                        parameter, which has a default value of
                        5,000.</para>
                    <para>The recommended value for this parameter varies
                        with the NVP version running in the back-end, as
                        shown in the following table.</para>
                    <table rules="all">
                        <caption>Recommended values for
                            max_lp_per_bridged_ls</caption>
                        <col width="50%"/>
                        <col width="50%"/>
                        <thead>
                            <tr>
                                <th>NVP version</th>
                                <th>Recommended value</th>
                            </tr>
                        </thead>
                        <tbody>
                            <tr>
                                <td>2.x</td>
                                <td>64</td>
                            </tr>
                            <tr>
                                <td>3.0.x</td>
                                <td>5,000</td>
                            </tr>
                            <tr>
                                <td>3.1.x</td>
                                <td>5,000</td>
                            </tr>
                            <tr>
                                <td>3.2.x</td>
                                <td>10,000</td>
                            </tr>
                        </tbody>
                    </table>
                    <para>In addition to these network types, the NVP
                        plug-in also supports a special
                        <emphasis>l3_ext</emphasis> network type,
                        which maps external networks to specific NVP
                        gateway services as discussed in the next
                        section.</para>
                </section>
                <section xml:id="section_nicira_nvp_plugin_l3_extension">
@ -1718,29 +1722,42 @@
                </section>
                <section xml:id="section_nicira_nvp_plugin_status_sync">
                    <title>Operational status synchronization in the
                        Nicira NVP plug-in</title>
                    <para>Starting with the Havana release, the Nicira NVP
                        plug-in provides an asynchronous mechanism for
                        retrieving the operational status for neutron
                        resources from the NVP back-end; this applies to
                        <emphasis>network</emphasis>,
                        <emphasis>port</emphasis>, and
                        <emphasis>router</emphasis> resources.</para>
                    <para>The back-end is polled periodically, and the
                        status for every resource is retrieved; then the
                        status in the Neutron database is updated only for
                        the resources for which a status change occurred.
                        Because operational status is now retrieved
                        asynchronously, performance for
                        <literal>GET</literal> operations is
                        consistently improved.</para>
                    <para>Data to retrieve from the back-end are divided
                        into chunks to avoid expensive API requests; this
                        is achieved by leveraging the NVP API's response
                        paging capabilities. The minimum chunk size can be
                        specified by using a configuration option; the
                        actual chunk size is then determined dynamically
                        according to the total number of resources to
                        retrieve, the interval between two synchronization
                        task runs, and the minimum delay between two
                        subsequent requests to the NVP back-end.</para>
                    <para>The operational status synchronization can be
                        tuned or disabled by using the configuration
                        options reported in this table; it is, however,
                        worth noting that the default values work fine in
                        most cases.</para>
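                    <para>For example, with the defaults shown in the
                        table (<literal>state_sync_interval</literal> of
                        120 seconds and
                        <literal>min_sync_req_delay</literal> of 10
                        seconds), the expected number of chunks per
                        synchronization run is 120/10 = 12. If 9,600
                        resources must be synchronized, each chunk then
                        contains 9,600/12 = 800 resources, which exceeds
                        the 500-resource
                        <literal>min_chunk_size</literal> default. The
                        9,600-resource figure is purely
                        illustrative.</para>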
                    <table rules="all">
                        <caption>Configuration options for tuning
                            operational status synchronization in the NVP
                            plug-in</caption>
                        <col width="12%"/>
                        <col width="8%"/>
                        <col width="10%"/>
@ -1761,97 +1778,119 @@
                                <td><literal>nvp_sync</literal></td>
                                <td>120 seconds</td>
                                <td>Integer; no constraint.</td>
                                <td>Interval in seconds between two runs of
                                    the synchronization task. If the
                                    synchronization task takes more than
                                    <literal>state_sync_interval</literal>
                                    seconds to execute, a new instance of
                                    the task is started as soon as the
                                    other is completed. Setting the value
                                    for this option to 0 disables the
                                    synchronization task.</td>
                            </tr>
                            <tr>
                                <td><literal>max_random_sync_delay</literal></td>
                                <td><literal>nvp_sync</literal></td>
                                <td>0 seconds</td>
                                <td>Integer. Must not exceed
                                    <literal>min_sync_req_delay</literal>.</td>
                                <td>When different from zero, a random
                                    delay between 0 and
                                    <literal>max_random_sync_delay</literal>
                                    is added before processing the next
                                    chunk.</td>
                            </tr>
                            <tr>
                                <td><literal>min_sync_req_delay</literal></td>
                                <td><literal>nvp_sync</literal></td>
                                <td>10 seconds</td>
                                <td>Integer. Must not exceed
                                    <literal>state_sync_interval</literal>.</td>
                                <td>The value of this option can be tuned
                                    according to the observed load on the
                                    NVP controllers. Lower values result
                                    in faster synchronization, but might
                                    increase the load on the controller
                                    cluster.</td>
                            </tr>
                            <tr>
                                <td><literal>min_chunk_size</literal></td>
                                <td><literal>nvp_sync</literal></td>
                                <td>500 resources</td>
                                <td>Integer; no constraint.</td>
                                <td>Minimum number of resources to
                                    retrieve from the back-end for each
                                    synchronization chunk. The expected
                                    number of synchronization chunks is
                                    given by the ratio between
                                    <literal>state_sync_interval</literal>
                                    and
                                    <literal>min_sync_req_delay</literal>.
                                    The size of a chunk might increase if
                                    the total number of resources is such
                                    that more than
                                    <literal>min_chunk_size</literal>
                                    resources must be fetched in one chunk
                                    with the current number of
                                    chunks.</td>
                            </tr>
                            <tr>
                                <td><literal>always_read_status</literal></td>
                                <td><literal>nvp_sync</literal></td>
                                <td>False</td>
                                <td>Boolean; no constraint.</td>
                                <td>When this option is enabled, the
                                    operational status is always retrieved
                                    from the NVP back-end at every
                                    <literal>GET</literal> request. In
                                    this case, it is advisable to disable
                                    the synchronization task.</td>
                            </tr>
                        </tbody>
                    </table>
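                    <para>For example, the synchronization behavior
                        might be tuned in the plug-in configuration file
                        as follows. This is a sketch: the values simply
                        restate the defaults from the table, and the
                        section header is an assumption taken from the
                        Group column above.</para>
                    <programlisting language="ini">[nvp_sync]
# run the synchronization task every two minutes (0 disables it)
state_sync_interval=120
# wait at least 10 seconds between two requests to the NVP back-end
min_sync_req_delay=10
# retrieve at least 500 resources per synchronization chunk
min_chunk_size=500</programlisting>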
                    <para>When running multiple Neutron server instances,
                        the status synchronization task should not run on
                        every node; doing so sends unnecessary traffic to
                        the NVP back-end and performs unnecessary DB
                        operations. Set the
                        <option>state_sync_interval</option>
                        configuration option to a non-zero value
                        exclusively on a node designated for back-end
                        status synchronization.</para>
                    <para>Explicitly specifying the <emphasis
                        role="italic">status</emphasis> attribute in
                        Neutron API requests (for example, <literal>GET
                        /v2.0/networks/&lt;net-id&gt;?fields=status&amp;fields=name</literal>)
                        always triggers an explicit query to the NVP
                        back-end, even when asynchronous state
                        synchronization is enabled.</para>
                </section>
            </section>
            <section xml:id="section_bigswitch_extensions">
                <title>Big Switch plug-in extensions</title>
                <para>This section explains the Big Switch Neutron
                    plug-in-specific extension.</para>
<section xml:id="section_bigswitch_extension_routerrules">
|
||||
<title>Big Switch router rules</title>
|
||||
<para>Big Switch allows router rules to be added to each
|
||||
tenant router. These rules can be used to enforce routing
|
||||
policies such as denying traffic between subnets or traffic
|
||||
to external networks. By enforcing these at the router
|
||||
level, network segmentation policies can be enforced across
|
||||
many VMs that have differing security groups.</para>
|
||||
<para>Big Switch allows router rules to be added to
|
||||
each tenant router. These rules can be used to
|
||||
enforce routing policies such as denying traffic
|
||||
between subnets or traffic to external networks.
|
||||
By enforcing these at the router level, network
|
||||
segmentation policies can be enforced across many
|
||||
VMs that have differing security groups.</para>
|
||||
<section xml:id="section_bigswitch_routerrule_fields">
|
||||
<title>Router rule attributes</title>
|
||||
<para>Each tenant router has a set of router rules
|
||||
associated with it. Each router rule has the attributes
|
||||
in the following table. Router rules and their
|
||||
attributes can be set using the
|
||||
<command>neutron router-update</command> command,
|
||||
via the Horizon interface, or through the Neutron API.
|
||||
</para>
|
||||
associated with it. Each router rule has the
|
||||
attributes in this table. Router rules and
|
||||
their attributes can be set using the
|
||||
<command>neutron router-update</command>
|
||||
command, through the Horizon interface or the
|
||||
Neutron API.</para>
|
||||
<table rules="all">
|
||||
<caption>Big Switch Router rule attributes</caption>
|
||||
<caption>Big Switch Router rule
|
||||
attributes</caption>
|
||||
<col width="20%"/>
|
||||
<col width="15%"/>
|
||||
<col width="25%"/>
|
||||
@ -1868,63 +1907,70 @@
|
||||
                <tr>
                    <td>source</td>
                    <td>Yes</td>
                    <td>A valid CIDR or one of the keywords 'any' or
                        'external'</td>
                    <td>The network that a packet's source IP must match
                        for the rule to be applied</td>
                </tr>
                <tr>
                    <td>destination</td>
                    <td>Yes</td>
                    <td>A valid CIDR or one of the keywords 'any' or
                        'external'</td>
                    <td>The network that a packet's destination IP must
                        match for the rule to be applied</td>
                </tr>
                <tr>
                    <td>action</td>
                    <td>Yes</td>
                    <td>'permit' or 'deny'</td>
                    <td>Determines whether or not the matched packets
                        will be allowed to cross the router</td>
                </tr>
                <tr>
                    <td>nexthop</td>
                    <td>No</td>
                    <td>A plus-separated (+) list of next-hop IP
                        addresses (e.g. '1.1.1.1+1.1.1.2')</td>
                    <td>Overrides the default virtual router used to
                        handle traffic for packets that match the
                        rule</td>
                </tr>
            </tbody>
            </table>
        </section>
        <section xml:id="section_bigswitch_routerrule_processorder">
            <title>Order of rule processing</title>
            <para>The order of router rules has no effect. Overlapping
                rules are evaluated using longest prefix matching on the
                source and destination fields. The source field is
                matched first, so it always takes higher precedence over
                the destination field. In other words, longest prefix
                matching is used on the destination field only if there
                are multiple matching rules with the same source.</para>
        </section>
        <section xml:id="section_bigswitch_routerrule_walkthrough">
            <title>Big Switch router rules operations</title>
            <para>Router rules are configured with a router update
                operation in Neutron. The update overrides any previous
                rules, so all rules must be provided at the same
                time.</para>
            <para>Update a router with rules to permit traffic by
                default but block traffic from external networks to the
                10.10.10.0/24 subnet:</para>
            <screen><prompt>#</prompt> <userinput>neutron router-update <replaceable>Router-UUID</replaceable> --router_rules type=dict list=true \
source=any,destination=any,action=permit \
source=external,destination=10.10.10.0/24,action=deny</userinput></screen>
            <para>Specify alternate next-hop addresses for a specific
                subnet:</para>
            <screen><prompt>#</prompt> <userinput>neutron router-update <replaceable>Router-UUID</replaceable> --router_rules type=dict list=true \
source=any,destination=any,action=permit \
source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253</userinput></screen>
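            <para>To confirm which rules are in effect after an update,
                you can display the router and inspect its
                <literal>router_rules</literal> field. This is a
                suggested check; the exact output format depends on your
                client version:</para>
            <screen><prompt>#</prompt> <userinput>neutron router-show <replaceable>Router-UUID</replaceable> -F router_rules</userinput></screen>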
@ -1939,13 +1985,16 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
</section>
<section xml:id="metering">
    <title>L3 metering</title>
    <?hard-pagebreak?>
    <para>The L3 metering API extension enables administrators to
        configure IP ranges and assign a specified label to them to be
        able to measure traffic that goes through a virtual
        router.</para>
    <para>The L3 metering extension is decoupled from the technology
        that implements the measurement. Two abstractions have been
        added: one is the metering label, which can contain metering
        rules. Because a metering label is associated with a tenant, all
        virtual routers in this tenant are associated with this
        label.</para>
    <section xml:id="metering_abstraction">
        <title>L3 metering API abstractions</title>
        <table rules="all">
@ -1973,13 +2022,15 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
            <td>name</td>
            <td>String</td>
            <td>None</td>
            <td>Human-readable name for the metering label. Might not
                be unique.</td>
        </tr>
        <tr>
            <td>description</td>
            <td>String</td>
            <td>None</td>
            <td>The optional description for the metering label.</td>
        </tr>
        <tr>
            <td>tenant_id</td>
@ -2014,31 +2065,34 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
            <td>direction</td>
            <td>String (either ingress or egress)</td>
            <td>ingress</td>
            <td>The direction in which the metering rule is applied,
                either ingress or egress.</td>
        </tr>
        <tr>
            <td>metering_label_id</td>
            <td>uuid-str</td>
            <td>N/A</td>
            <td>
                <para>The metering label ID to associate with this
                    metering rule.</para>
            </td>
        </tr>
        <tr>
            <td>excluded</td>
            <td>Boolean</td>
            <td>False</td>
            <td>Specify whether the remote_ip_prefix will be excluded
                from the traffic counters of the metering label; for
                example, to not count the traffic of a specific IP
                address within a range.</td>
        </tr>
        <tr>
            <td>remote_ip_prefix</td>
            <td>String (CIDR)</td>
            <td>N/A</td>
            <td>Indicates the remote IP prefix to be associated with
                this metering rule.</td>
        </tr>
    </tbody>
    </table>
@ -2046,9 +2100,11 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
    <?hard-pagebreak?>
    <section xml:id="metering_operations">
        <title>Basic L3 metering operations</title>
        <para>Only administrators can manage the L3 metering labels and
            rules.</para>
        <para>This table shows example <command>neutron</command>
            commands that enable you to complete basic L3 metering
            operations:</para>
        <table rules="all">
            <caption>Basic L3 operations</caption>
            <col width="40%"/>
@ -2078,7 +2134,8 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
            </tr>
            <tr>
                <td>
                    <para>Shows information for a specified
                        label.</para>
                </td>
                <td>
                    <screen><prompt>$</prompt> <userinput>neutron meter-label-show <replaceable>label-uuid</replaceable></userinput>
@ -2087,7 +2144,7 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
            </tr>
            <tr>
                <td>
                    <para>Deletes a metering label.</para>
                </td>
                <td>
                    <screen><prompt>$</prompt> <userinput>neutron meter-label-delete <replaceable>label-uuid</replaceable></userinput>
@ -2096,7 +2153,7 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
            </tr>
            <tr>
                <td>
                    <para>Creates a metering rule.</para>
                </td>
                <td>
                    <screen><prompt>$</prompt> <userinput>neutron meter-label-rule-create <replaceable>label-uuid</replaceable> <replaceable>cidr</replaceable> --direction <replaceable>direction</replaceable> --excluded</userinput>
@ -2106,7 +2163,8 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
            </tr>
            <tr>
                <td>
                    <para>Lists all metering label rules.</para>
                </td>
                <td>
                    <screen><prompt>$</prompt> <userinput>neutron meter-label-rule-list</userinput></screen>
@ -2114,14 +2172,15 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
            </tr>
            <tr>
                <td>
                    <para>Shows information for a specified label
                        rule.</para>
                </td>
                <td>
                    <screen><prompt>$</prompt> <userinput>neutron meter-label-rule-show <replaceable>rule-uuid</replaceable></userinput></screen>
                </td>
            </tr>
            <tr>
                <td>Deletes a metering label rule.</td>
                <td>
                    <screen><prompt>$</prompt> <userinput>neutron meter-label-rule-delete <replaceable>rule-uuid</replaceable></userinput></screen>
                </td>
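<para>As a worked example (the label name is illustrative), the
    following sequence creates a label and then adds a rule that meters
    ingress traffic from 10.0.0.0/24 on the virtual routers of the
    tenant:</para>
<screen><prompt>$</prompt> <userinput>neutron meter-label-create mylabel --description "Sample label"</userinput>
<prompt>$</prompt> <userinput>neutron meter-label-rule-create <replaceable>label-uuid</replaceable> 10.0.0.0/24 --direction ingress</userinput></screen>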

@ -12,7 +12,7 @@
    options. Command options override ones in
    <filename>neutron.conf</filename>.</para>
<para>To configure logging for Networking components, use one of these
    methods:</para>
<itemizedlist>
    <listitem>
        <para>Provide logging settings in a logging
@ -85,7 +85,7 @@ notification_topics = notifications</programlisting>
<title>Setting cases</title>
<section xml:id="section_adv_notification_cases_log_rpc">
    <title>Logging and RPC</title>
    <para>These options configure the Networking server to send
        notifications through logging and RPC. The logging options are
        described in <citetitle
@ -125,7 +125,7 @@ notification_topics = notifications</programlisting>
<section xml:id="ch_adv_notification_cases_multi_rpc_topics">
    <title>Multiple RPC topics</title>
    <para>These options configure the Networking server to send
        notifications to multiple RPC topics. RPC notifications go to
        'notifications_one.info' and
@ -1,17 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="ch_running-openstack-object-storage">
    <title>System administration for Object Storage</title>
    <para>By understanding Object Storage concepts, you can better
        monitor and administer your storage solution. The majority of
        the administration information is maintained in developer
        documentation at <link
        xlink:href="http://docs.openstack.org/developer/swift/"
        >docs.openstack.org/developer/swift/</link>.</para>
    <para>See the <link
        xlink:href="http://docs.openstack.org/havana/config-reference/content/"
        ><citetitle>OpenStack Configuration Reference</citetitle></link>
        for a list of configuration options for Object Storage.</para>
</section>

@ -1,36 +1,34 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="ch_introduction-to-openstack-object-storage-monitoring">
    <title>Object Storage monitoring</title>
    <?dbhtml stop-chunking?>
    <para>Excerpted from a blog post by <link
        xlink:href="http://swiftstack.com/blog/2012/04/11/swift-monitoring-with-statsd"
        >Darrell Bishop</link></para>
    <para>An OpenStack Object Storage cluster is a collection of many
        daemons that work together across many nodes. With so many
        different components, you must be able to tell what is going on
        inside the cluster. Tracking server-level metrics like CPU
        utilization, load, memory consumption, disk usage and
        utilization, and so on is necessary, but not sufficient.</para>
    <para>What are the different daemons doing on each server? What is
        the volume of object replication on node8? How long is it
        taking? Are there errors? If so, when did they happen?</para>
    <para>In such a complex ecosystem, you can use multiple approaches
        to get the answers to these questions. This section describes
        several approaches.</para>
    <section xml:id="monitoring-swiftrecon">
        <title>Swift Recon</title>
        <para>The Swift Recon middleware (see <link
            xlink:href="http://swift.openstack.org/admin_guide.html#cluster-telemetry-and-monitoring"
            >http://swift.openstack.org/admin_guide.html#cluster-telemetry-and-monitoring</link>)
            provides general machine statistics, such as load average,
            socket statistics, <code>/proc/meminfo</code> contents, and
            so on, as well as Swift-specific metrics:</para>
        <itemizedlist>
            <listitem>
                <para>The MD5 sum of each ring file.</para>
@ -39,7 +37,7 @@
                <para>The most recent object replication time.</para>
            </listitem>
            <listitem>
                <para>Count of each type of quarantined file: account,
                    container, or object.</para>
            </listitem>
            <listitem>
@ -47,67 +45,80 @@
                    updates) on disk.</para>
            </listitem>
        </itemizedlist>
        <para>Swift Recon is middleware that is installed in the object
            server's pipeline and takes one required option: a local
            cache directory. To track
            <literal>async_pendings</literal>, you must set up an
            additional cron job for each object server. You access data
            by either sending HTTP requests directly to the object
            server or using the <command>swift-recon</command>
            command-line client.</para>
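        <para>A minimal setup might look like the following sketch. The
            pipeline entry and filter section follow the standard Swift
            paste-deploy pattern; the cache path shown is an assumption
            of this example:</para>
        <programlisting language="ini"># /etc/swift/object-server.conf (sketch)
[pipeline:main]
pipeline = recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift</programlisting>
        <para>You can then query the collected data with the
            command-line client, for example:</para>
        <screen><prompt>$</prompt> <userinput>swift-recon --md5</userinput>
<prompt>$</prompt> <userinput>swift-recon -r</userinput></screen>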
        <para>There are some good Object Storage cluster statistics in
            Swift Recon, but the general server metrics overlap with
            existing server monitoring systems. To get the
            Swift-specific metrics into a monitoring system, they must
            be polled. Swift Recon essentially acts as a middleware
            metrics collector. The process that feeds metrics to your
            statistics system, such as <literal>collectd</literal> or
            <literal>gmond</literal>, probably already runs on the
            storage node. So, you can choose to either talk to Swift
            Recon or collect the metrics directly.</para>
        <para>There is an <link
            xlink:href="https://review.openstack.org/#change,6074"
            >upcoming update</link> to Swift Recon that broadens support
            to the account and container servers. The auditors,
            replicators, and updaters can also report statistics, but
            only for the most recent run.</para>
    </section>
    <section xml:id="monitoring-swift-informant">
        <title>Swift-Informant</title>
        <para>Florian Hines developed the Swift-Informant middleware
            (see <link
            xlink:href="http://pandemicsyn.posterous.com/swift-informant-statsd-getting-realtime-telem"
            >http://pandemicsyn.posterous.com/swift-informant-statsd-getting-realtime-telem</link>)
            to get real-time visibility into Object Storage client
            requests. It sits in the pipeline for the proxy server and,
            after each request to the proxy server, sends three metrics
            to a StatsD server (see <link
            xlink:href="http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/"
            >http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/</link>):</para>
        <itemizedlist>
            <listitem>
                <para>A counter increment for a metric like
                    <code>obj.GET.200</code> or
                    <code>cont.PUT.404</code>.</para>
            </listitem>
            <listitem>
                <para>Timing data for a metric like
                    <code>acct.GET.200</code> or
                    <code>obj.GET.200</code>. [The README says the
                    metrics look like
                    <code>duration.acct.GET.200</code>, but I do not see
                    the <literal>duration</literal> in the code. I am
                    not sure what the Etsy server does, but our StatsD
                    server turns timing metrics into five derivative
                    metrics with new segments appended, so it probably
                    works as coded. The first metric turns into
                    <code>acct.GET.200.lower</code>,
                    <code>acct.GET.200.upper</code>,
                    <code>acct.GET.200.mean</code>,
                    <code>acct.GET.200.upper_90</code>, and
                    <code>acct.GET.200.count</code>.]</para>
            </listitem>
            <listitem>
                <para>A counter increase by the bytes transferred for a
                    metric like <code>tfer.obj.PUT.201</code>.</para>
            </listitem>
        </itemizedlist>
        <para>This is good for getting a feel for the quality of service
            clients are experiencing with the timing metrics, as well as
            getting a feel for the volume of the various permutations of
            request server type, command, and response code.
            Swift-Informant also requires no change to core Object
            Storage code because it is implemented as middleware.
            However, it gives you no insight into the workings of the
            cluster past the proxy server. If the responsiveness of one
            storage node degrades, you can only see that some of your
            requests are bad, either as high latency or error status
            codes. You do not know exactly why or where that request
            tried to go. Maybe the container server in question was on a
            good node but the object server was on a different,
            poorly-performing node.</para>
    </section>
    <section xml:id="monitoring-statsdlog">
        <title>Statsdlog</title>
@ -124,59 +135,117 @@
            of what metrics are extracted from the log stream.</para>
        <para>Currently, only the first matching regex triggers a StatsD
            counter increment, and the counter is always incremented by
            one. There is no way to increment a counter by more than one
            or send timing data to StatsD based on the log line content.
            The tool could be extended to handle more metrics for each
            line and data extraction, including timing data. But a
            coupling would still exist between the log textual format
            and the log parsing regexes, which would themselves be more
            complex to support multiple matches for each line and data
            extraction. Also, log processing introduces a delay between
            the triggering event and sending the data to StatsD. It
            would be preferable to increment error counters where they
            occur and send timing data as soon as it is known, to avoid
            coupling between a log string and a parsing regex and to
            prevent a time delay between events and sending data to
            StatsD.</para>
        <para>The next section describes another method for gathering
            Object Storage operational metrics.</para>
    </section>
    <section xml:id="monitoring-statsD">
        <title>Swift StatsD logging</title>
        <para>StatsD (see <link
            xlink:href="http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/"
            >http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/</link>)
            was designed for application code to be deeply instrumented;
            metrics are sent in real-time by the code that just noticed
            or did something. The overhead of sending a metric is
            extremely low: a <code>sendto</code> of one UDP packet. If
            that overhead is still too high, the StatsD client library
            can send only a random portion of samples and StatsD
            approximates the actual number when flushing metrics
            upstream.</para>
        <para>To avoid the problems inherent with middleware-based
            monitoring and after-the-fact log processing, the sending of
            StatsD metrics is integrated into Object Storage itself. The
            submitted change set (see <link
            xlink:href="https://review.openstack.org/#change,6058"
            >https://review.openstack.org/#change,6058</link>) currently
            reports 124 metrics across 15 Object Storage daemons and the
            tempauth middleware. Details of the metrics tracked are in
            the <link
            xlink:href="http://swift.openstack.org/admin_guide.html"
            >Swift Administration Guide</link>.</para>
        <para>The sending of metrics is integrated with the logging
            framework. To enable it, configure
            <code>log_statsd_host</code> in the relevant config file.
            You can also specify the port and a default sample rate. The
            specified default sample rate is used unless a specific call
            to a statsd logging method (see the list below) overrides
            it. Currently, no logging calls override the sample rate,
            but it is conceivable that some metrics may require accuracy
            (sample_rate == 1) while others may not.</para>
        <literallayout class="monospaced">[DEFAULT]
...
log_statsd_host = 127.0.0.1
log_statsd_port = 8125
log_statsd_default_sample_rate = 1</literallayout>
        <para>Then the LogAdapter object returned by
            <code>get_logger()</code>, usually stored in
            <code>self.logger</code>, has these new methods:</para>
        <itemizedlist>
            <listitem>
                <para><code>set_statsd_prefix(self, prefix)</code> Sets
                    the client library's stat prefix value, which gets
                    prepended to every metric. The default prefix is the
                    "name" of the logger (such as "object-server" or
                    "container-auditor"). This is currently used to turn
                    "proxy-server" into one of "proxy-server.Account",
                    "proxy-server.Container", or "proxy-server.Object"
                    as soon as the Controller object is determined and
                    instantiated for the request.</para>
            </listitem>
            <listitem>
                <para><code>update_stats(self, metric, amount,
                    sample_rate=1)</code> Increments the supplied metric
                    by the given amount. This is used when you need to
                    add or subtract more than one from a counter, like
                    incrementing "suffix.hashes" by the number of
                    computed hashes in the object replicator.</para>
            </listitem>
            <listitem>
                <para><code>increment(self, metric,
                    sample_rate=1)</code> Increments the given counter
                    metric by one.</para>
            </listitem>
            <listitem>
                <para><code>decrement(self, metric,
                    sample_rate=1)</code> Lowers the given counter
                    metric by one.</para>
            </listitem>
            <listitem>
                <para><code>timing(self, metric, timing_ms,
                    sample_rate=1)</code> Records that the given metric
                    took the supplied number of milliseconds.</para>
            </listitem>
            <listitem>
                <para><code>timing_since(self, metric, orig_time,
                    sample_rate=1)</code> Convenience method to record a
                    timing metric whose value is "now" minus an existing
                    timestamp.</para>
            </listitem>
        </itemizedlist>
        <para>Note that these logging methods may safely be called
            anywhere you have a logger object. If StatsD logging has not
            been configured, the methods are no-ops. This avoids messy
            conditional logic each place a metric is recorded. These
            example usages show the new logging methods:</para>
        <programlisting language="bash"># swift/obj/replicator.py
def update(self, job):
    # ...
    begin = time.time()
@ -214,7 +283,7 @@ def process_container(self, dbfile):
    else:
        self.logger.increment('no_changes')
        self.no_changes += 1</programlisting>
        <para>The development team of StatsD wanted to use the <link
            xlink:href="https://github.com/sivy/py-statsd"
            >pystatsd</link> client library (not to be confused with a
            <link

@ -2,68 +2,85 @@
<section xml:id="root-wrap-reference"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
    <title>Secure with root wrappers</title>
    <para>The root wrapper enables the Compute unprivileged user to run
        a number of actions as the root user in the safest manner
        possible. Historically, Compute used a specific
        <filename>sudoers</filename> file that listed every command that
        the Compute user was allowed to run, and used
        <command>sudo</command> to run that command as
        <literal>root</literal>. However, this was difficult to maintain
        (the <filename>sudoers</filename> file was in packaging), and
        did not enable complex filtering of parameters (advanced
        filters). The rootwrap was designed to solve those
        issues.</para>
    <simplesect>
        <title>How rootwrap works</title>
        <para>Instead of just calling <command>sudo make me a
            sandwich</command>, Compute services starting with
            <literal>nova-</literal> call <command>sudo nova-rootwrap
            /etc/nova/rootwrap.conf make me a sandwich</command>. A
            generic sudoers entry lets the Compute user run
            nova-rootwrap as root. The nova-rootwrap code looks for
            filter definition directories in its configuration file, and
            loads command filters from them. Then it checks if the
            command requested by Compute matches one of those filters,
            in which case it executes the command (as root). If no
            filter matches, it denies the request.</para>
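        <para>For reference, the generic sudoers entry typically looks
            like the following line; the binary and configuration paths
            can differ by distribution, so treat the exact paths as an
            assumption of this sketch:</para>
        <programlisting># /etc/sudoers.d/nova (sketch; paths may vary)
nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *</programlisting>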
    </simplesect>
    <simplesect>
        <title>Security model</title>
        <para>The escalation path is fully controlled by the root user.
            A sudoers entry (owned by root) allows Compute to run (as
            root) a specific rootwrap executable, and only with a
            specific configuration file (which should be owned by root).
            nova-rootwrap imports the Python modules it needs from a
            cleaned (and system-default) PYTHONPATH. The configuration
            file (also root-owned) points to root-owned filter
            definition directories, which contain root-owned filter
            definition files. This chain ensures that the Compute user
            itself is not in control of the configuration or modules
            used by the nova-rootwrap executable.</para>
    </simplesect>
    <simplesect>
        <title>Details of rootwrap.conf</title>
        <para>You configure nova-rootwrap in the
            <filename>rootwrap.conf</filename> file. Because it is in
            the trusted security path, it must be owned and writable by
            only the root user. Its location is specified both in the
            sudoers entry and in the <filename>nova.conf</filename>
            configuration file with the <code>rootwrap_config=</code>
            entry.</para>
        <para>It uses an INI file format with these sections and
            parameters:</para>
        <table rules="all" frame="border"
            xml:id="rootwrap-conf-table-filter-path" width="100%">
            <caption>rootwrap.conf configuration options</caption>
            <col width="50%"/>
            <col width="50%"/>
            <thead>
                <tr>
                    <td><para>Configuration option=Default
                        value</para></td>
                    <td><para>(Type) Description</para></td>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td><para>[DEFAULT]</para>
                        <para>filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap</para></td>
                    <td><para>(ListOpt) Comma-separated list of
                        directories containing filter definition files.
                        Defines where filters for root wrap are stored.
                        Directories defined on this line should all
                        exist, and be owned and writable only by the
                        root user.</para></td>
                </tr>
            </tbody>
        </table>
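        <para>Putting the table together, a minimal
            <filename>rootwrap.conf</filename> might look like this
            sketch, using the default value shown above:</para>
        <programlisting language="ini">[DEFAULT]
# Comma-separated directories that contain filter definition files
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap</programlisting>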
    </simplesect>
    <simplesect>
        <title>Details of .filters files</title>
@ -73,26 +90,33 @@
            they are in the trusted security path, they need to be owned
            and writable only by the root user. Their location is
            specified in the rootwrap.conf file.</para>
        <para>It uses an INI file format with a [Filters] section and
            several lines, each with a unique parameter name (different
            for each filter that you define):</para>
        <table rules="all" frame="border"
            xml:id="rootwrap-conf-table-filter-name" width="100%">
            <caption>.filters configuration options</caption>
            <col width="50%"/>
            <col width="50%"/>
            <thead>
                <tr>
                    <td><para>Configuration option=Default
                        value</para></td>
                    <td><para>(Type) Description</para></td>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td><para>[Filters]</para>
                        <para>filter_name=kpartx: CommandFilter,
                            /sbin/kpartx, root</para></td>
                    <td><para>(ListOpt) Comma-separated list containing
                        first the Filter class to use, followed by that
                        Filter's arguments (which vary depending on the
                        Filter class selected).</para></td>
                </tr>
            </tbody>
        </table>
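        <para>For example, a minimal filter definition file such as
            <filename>/etc/nova/rootwrap.d/compute.filters</filename>
            (the file name here is illustrative) that allows
            <command>kpartx</command> to run as root would
            contain:</para>
        <programlisting language="ini">[Filters]
kpartx: CommandFilter, /sbin/kpartx, root</programlisting>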
    </simplesect>
</section>
</section>

@ -1,11 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="section_ts_HTTP_bad_req_in_cinder_vol_log">
    <title>Failed to attach volume after detaching</title>
    <section xml:id="section_ts_HTTP_bad_req_in_cinder_vol_log_problem">
        <title>Problem</title>
        <para>These errors appear in the
            <filename>cinder-volume.log</filename> file.</para>
        <screen><?db-font-size 75%?><computeroutput>2013-05-03 15:16:33 INFO [cinder.volume.manager] Updating volume status
2013-05-03 15:16:33 DEBUG [hp3parclient.http]
REQ: curl -i https://10.10.22.241:8080/api/v1/cpgs -X GET -H "X-Hp3Par-Wsapi-Sessionkey: 48dc-b69ed2e5
f259c58e26df9a4c85df110c-8d1e8451" -H "Accept: application/json" -H "User-Agent: python-3parclient"
@ -33,12 +36,13 @@ File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 255, in get r
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 224, in _cs_request **kwargs)
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 198, in _time_request resp, body = self.request(url, method, **kwargs)
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 192, in request raise exceptions.from_response(resp, body)
HTTPBadRequest: Bad request (HTTP 400)</computeroutput></screen>
    </section>
    <section xml:id="section_ts_HTTP_bad_req_in_cinder_vol_log_solution">
        <title>Solution</title>
        <para>You need to update your copy of the
            <filename>hp_3par_fc.py</filename> driver, which contains
            the synchronization code.</para>
    </section>
</section>

@ -1,20 +1,29 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="section_ts_attach_vol_fail_not_JSON">
    <title>Nova volume attach error, not JSON serializable</title>
    <section xml:id="section_ts_attach_vol_fail_not_JSON_problem">
        <title>Problem</title>
        <para>When you attach a nova volume to a VM, you see an error
            with a stack trace in
            <filename>/var/log/nova/nova-volume.log</filename>. The JSON
            serializable issue is caused by an RPC response
            timeout.</para>
    </section>
    <section xml:id="section_ts_attach_vol_fail_not_JSON_solution">
        <title>Solution</title>
        <para>Make sure your iptables allow port 3260 communication on
            the ISC controller. Run this command:</para>
        <screen><prompt>#</prompt> <userinput>iptables -I INPUT <Last Rule No> -p tcp --dport 3260 -j ACCEPT</userinput></screen>
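        <para>To determine the rule number to use for
            <literal><Last Rule No></literal>, you can first list the
            current INPUT rules with their numbers (a suggested check,
            not a required step):</para>
        <screen><prompt>#</prompt> <userinput>iptables -L INPUT -n --line-numbers</userinput></screen>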
        <para>If the port communication is properly configured, you can
            try running this command:</para>
        <screen><prompt>#</prompt> <userinput>service iptables stop</userinput></screen>
        <note os="debian;ubuntu">
            <para>This service does not exist on Debian or
                Ubuntu.</para>
        </note>
        <para>If you continue to get the RPC response time out, your ISC
            controller and KVM host might be incompatible.</para>
    </section>
</section>

@ -1,32 +1,38 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_ts_cinder_config"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0"
    linkend="section_ts_cinder_config">
    <title xml:id="ts_block_config">Troubleshoot the Block Storage
        configuration</title>
    <para>This section helps you solve some basic and common errors that
        you might encounter during setup and configuration of the Cinder
        Block Storage Service. The focus here is on failed creation of
        volumes. The most important thing to know is where to look in
        case of a failure.</para>
    <para>Two log files are especially helpful for solving volume
        creation failures, the <systemitem class="service"
        >cinder-api</systemitem> log and the <systemitem
        class="service">cinder-volume</systemitem> log. The <systemitem
        class="service">cinder-api</systemitem> log is useful for
        determining if you have endpoint or connectivity issues. If you
        send a request to create a volume and it fails, review the
        <systemitem class="service">cinder-api</systemitem> log to
        determine whether the request made it to the Cinder service. If
        the request is logged and you see no errors or trace-backs,
        check the <systemitem class="service">cinder-volume</systemitem>
        log for errors or trace-backs.</para>
    <note>
        <para>Create commands are listed in the <systemitem
            class="service">cinder-api</systemitem> log.</para>
    </note>
    <para>These entries in the
        <filename>cinder.openstack.common.log</filename> file can be
        used to assist in troubleshooting your block storage
        configuration.</para>
    <programlisting language="ini">
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
#debug=false
@ -94,75 +100,92 @@
# syslog facility to receive log lines (string value)
#syslog_log_facility=LOG_USER
#log_config=<None></programlisting>
    <para>Here are some common issues discovered during configuration,
        and some suggested solutions.</para>
    <itemizedlist>
        <listitem>
            <para>Issues with <literal>state_path</literal> and
                <literal>volumes_dir</literal> settings. (See the sample
                <filename>cinder.conf</filename> snippet after this
                list.)</para>
            <para>Cinder uses <command>tgtd</command> as the default
                iscsi helper and implements persistent targets. This
                means that in the case of a tgt restart, or even a node
                reboot, your existing volumes on that node will be
                restored automatically with their original IQN.</para>
            <para>To make this possible, the iSCSI target information
                needs to be stored in a file on creation that can be
                queried in case of restart of the tgt daemon. By
                default, Cinder uses a <literal>state_path</literal>
                variable, which, if installing with Yum or APT, should
                be set to <filename>/var/lib/cinder/</filename>. The
                next part is the <literal>volumes_dir</literal>
                variable; by default, this simply appends a
                "<literal>volumes</literal>" directory to the
                <literal>state_path</literal>. The result is a file-tree
                <filename>/var/lib/cinder/volumes/</filename>.</para>
            <para>While this should all be handled by the installer, it
                can go wrong. If you are having trouble creating volumes
                and this directory does not exist, you should see an
                error message in the <systemitem class="service"
                >cinder-volume</systemitem> log indicating that the
                <literal>volumes_dir</literal> does not exist, and it
                should give you information about which path it was
                looking for.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>The persistent tgt include file.</para>
|
||||
<para>Along with the <literal>volumes_dir</literal> mentioned above, the iSCSI
|
||||
target driver also needs to be configured to look in the correct place for the
|
||||
persist files. This is a simple entry in <filename>/etc/tgt/conf.d</filename>,
|
||||
and you should have created this when you went through the install guide. If you
|
||||
haven't or you're running into issues, verify that you have a file
|
||||
<filename>/etc/tgt/conf.d/cinder.conf</filename>.</para>
|
||||
<para>If the file is not there, you can create with the following
|
||||
command:
|
||||
<screen><prompt>$</prompt> <userinput>sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"</userinput></screen>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>No sign of attach call in the <systemitem class="service"
|
||||
>cinder-api</systemitem> log.</para>
|
||||
<para>This is most likely going to be a minor adjustment to your
|
||||
<filename>nova.conf</filename> file. Make sure that your
|
||||
<filename>nova.conf</filename> has the following
|
||||
entry: <programlisting language="ini">volume_api_class=nova.volume.cinder.API</programlisting></para>
|
||||
<caution>
|
||||
<para>Make certain that you explicitly set <filename>enabled_apis</filename>
|
||||
because the default will include
|
||||
<filename>osapi_volume</filename>: <programlisting language="ini">enabled_apis=ec2,osapi_compute,metadata</programlisting>
|
||||
</para>
|
||||
</caution>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Failed to create iscsi target error in the
|
||||
<filename>cinder-volume.log</filename> file.</para>
|
||||
<programlisting language="bash">2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.</programlisting>
|
||||
<para>You may see this error in <filename>cinder-volume.log</filename> after trying
|
||||
to create a volume that is 1 GB. To fix this issue:</para>
|
||||
<para>Change content of the <filename>/etc/tgt/targets.conf</filename> from "include
|
||||
/etc/tgt/conf.d/*.conf" to: include /etc/tgt/conf.d/cinder_tgt.conf:</para>
|
||||
<programlisting language="bash"> include /etc/tgt/conf.d/cinder_tgt.conf
|
||||
include /etc/tgt/conf.d/cinder.conf
|
||||
default-driver iscsi</programlisting>
|
||||
<para>Then restart tgt and <literal>cinder-*</literal> services so they pick up the
|
||||
new configuration.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
<para>These common issues might occur during configuration. To
|
||||
correct, use these suggested solutions.</para>
    <itemizedlist>
        <listitem>
            <para>Issues with <literal>state_path</literal> and
                <literal>volumes_dir</literal> settings.</para>
            <para>Cinder uses <command>tgtd</command> as the default
                iSCSI helper and implements persistent targets. This
                means that in the case of a tgt restart, or even a
                node reboot, your existing volumes on that node are
                restored automatically with their original
                IQNs.</para>
            <para>To make this possible, the iSCSI target
                information needs to be stored in a file on volume
                creation that can be queried in case of restart of
                the tgt daemon. By default, Cinder uses a
                <literal>state_path</literal> variable, which if
                installing with Yum or APT should be set to
                <filename>/var/lib/cinder/</filename>. The next part
                is the <literal>volumes_dir</literal> variable; by
                default, this simply appends a
                "<literal>volumes</literal>" directory to the
                <literal>state_path</literal>. The result is a file
                tree: <filename>/var/lib/cinder/volumes/</filename>.</para>
            <para>While the installer should handle all this, it can
                go wrong. If you have trouble creating volumes and
                this directory does not exist, you should see an
                error message in the <systemitem class="service"
                >cinder-volume</systemitem> log indicating that the
                <literal>volumes_dir</literal> does not exist, and
                it should provide information about which path it
                was looking for.</para>
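            <para>For example, you can confirm both settings and the
                directory in one pass; the paths shown are the
                defaults and might differ on your system:</para>
            <screen><prompt>#</prompt> <userinput>grep -E '^(state_path|volumes_dir)' /etc/cinder/cinder.conf</userinput>
<prompt>#</prompt> <userinput>ls -ld /var/lib/cinder/volumes/</userinput></screen>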
        </listitem>
        <listitem>
            <para>The persistent tgt include file.</para>
            <para>Along with the <option>volumes_dir</option>
                option, the iSCSI target driver also needs to be
                configured to look in the correct place for the
                persist files. This is a simple entry in the
                <filename>/etc/tgt/conf.d</filename> directory that
                you should have set when you installed OpenStack. If
                issues occur, verify that you have a
                <filename>/etc/tgt/conf.d/cinder.conf</filename>
                file.</para>
            <para>If the file is not present, create it with this
                command:</para>
            <screen><prompt>#</prompt> <userinput>echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf</userinput></screen>
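            <para>Once the include file exists, you can ask the
                target daemon what it actually sees; this assumes
                the <package>tgt</package> administration tool is
                installed:</para>
            <screen><prompt>#</prompt> <userinput>tgt-admin --show</userinput></screen>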
        </listitem>
        <listitem>
            <para>No sign of attach call in the <systemitem
                class="service">cinder-api</systemitem> log.</para>
            <para>This is most likely going to be a minor adjustment
                to your <filename>nova.conf</filename> file. Make
                sure that your <filename>nova.conf</filename> has
                this entry:</para>
            <programlisting language="ini">volume_api_class=nova.volume.cinder.API</programlisting>
            <caution>
                <para>Make certain that you explicitly set
                    <option>enabled_apis</option> because the
                    default includes
                    <literal>osapi_volume</literal>:</para>
                <programlisting language="ini">enabled_apis=ec2,osapi_compute,metadata</programlisting>
            </caution>
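            <para>You can confirm both options without restarting
                anything; this sketch assumes the default
                <filename>/etc/nova/nova.conf</filename>
                location:</para>
            <screen><prompt>#</prompt> <userinput>grep -E '^(volume_api_class|enabled_apis)' /etc/nova/nova.conf</userinput></screen>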
        </listitem>
        <listitem>
            <para>Failed to create iscsi target error in the
                <filename>cinder-volume.log</filename> file.</para>
            <programlisting language="bash">2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.</programlisting>
            <para>You might see this error in
                <filename>cinder-volume.log</filename> after trying
                to create a volume that is 1 GB. To fix this issue,
                change the content of the
                <filename>/etc/tgt/targets.conf</filename> file from
                <literal>include /etc/tgt/conf.d/*.conf</literal> to
                <literal>include
                /etc/tgt/conf.d/cinder_tgt.conf</literal>, as
                follows:</para>
            <programlisting language="bash">include /etc/tgt/conf.d/cinder_tgt.conf
include /etc/tgt/conf.d/cinder.conf
default-driver iscsi</programlisting>
            <para>Restart <systemitem class="service"
                >tgt</systemitem> and <systemitem class="service"
                >cinder-*</systemitem> services so they pick up the
                new configuration.</para>
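            <para>For example, on an Ubuntu-based node the restarts
                might look like this; service names can differ by
                distribution:</para>
            <screen><prompt>#</prompt> <userinput>service tgt restart</userinput>
<prompt>#</prompt> <userinput>service cinder-volume restart</userinput></screen>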
        </listitem>
    </itemizedlist>
</section>
@@ -1,18 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="section_ts_failed_attach_vol_after_detach">
    <title>Failed to attach volume after detaching</title>
    <section
        xml:id="section_ts_failed_attach_vol_after_detach_problem">
        <title>Problem</title>
        <para>Failed to attach a volume after detaching the same
            volume.</para>
    </section>
    <section
        xml:id="section_ts_failed_attach_vol_after_detach_solution">
        <title>Solution</title>
        <para>You must change the device name on the
            <command>nova volume-attach</command> command. The VM
            might not clean up after a <command>nova
            volume-detach</command> command runs. This example shows
            how the <command>nova volume-attach</command> command
            fails when you use the <code>vdb</code>,
            <code>vdc</code>, or <code>vdd</code> device
            names:</para>
        <screen><prompt>#</prompt> <userinput>ls -al /dev/disk/by-path/</userinput>
<computeroutput>total 0
drwxr-xr-x 2 root root 200 2012-08-29 17:33 .
drwxr-xr-x 5 root root 100 2012-08-29 17:33 ..
@@ -23,10 +30,10 @@ lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-p
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1</computeroutput></screen>
        <para>You might also have this problem after attaching and
            detaching the same volume from the same VM with the same
            mount point multiple times. In this case, restart the
            KVM host.</para>
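        <para>For example, assuming the next free device name in the
            listing above is <code>vde</code>, a retry might look
            like this; the server name and volume ID are
            hypothetical:</para>
        <screen><prompt>$</prompt> <userinput>nova volume-attach myserver 573e024d-5235-49ce-8332-be1576d323f8 /dev/vde</userinput></screen>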
    </section>
</section>
@@ -17,7 +17,7 @@ Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.</programlisting>
        <filename>sysfsutils</filename> packages.</para>
        <screen><prompt>$</prompt> <userinput>sudo apt-get install sysfsutils</userinput></screen>
    </section>
</section>
@@ -22,7 +22,7 @@
        <filename>multipath-tools</filename> packages.</para>
        <screen><prompt>$</prompt> <userinput>sudo apt-get install multipath-tools</userinput></screen>
    </section>
</section>
@@ -12,7 +12,7 @@
        <para>On the KVM host, run <code>cat /proc/cpuinfo</code>.
            Make sure the <code>vmx</code> or <code>svm</code> flag
            is set.</para>
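        <para>For example, this quick check counts the
            virtualization flags; a result of <literal>0</literal>
            means the CPU does not expose hardware
            virtualization:</para>
        <screen><prompt>$</prompt> <userinput>egrep -c '(vmx|svm)' /proc/cpuinfo</userinput></screen>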
        <para>Follow the instructions in the <link
            xlink:href="http://docs.openstack.org/havana/config-reference/content/kvm.html#section_kvm_enable">
            enabling KVM section</link> of the
            <citetitle>Configuration Reference</citetitle> to enable
            hardware virtualization support in your BIOS.</para>
@@ -1,30 +1,29 @@
<?xml version="1.0" encoding="UTF-8"?>
<?xml-model href="http://docbook.org/xml/5.0/rng/docbook.rng" schematypens="http://relaxng.org/ns/structure/1.0"?>
<?xml-model href="http://docbook.org/xml/5.0/rng/docbook.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="section_ts_vol_attach_miss_sg_scan"
    linkend="section_ts_vol_attach_miss_sg_scan">
    <title>Failed to Attach Volume, Missing sg_scan</title>
    <section xml:id="section_ts_vol_attach_miss_sg_scan_problem">
        <title>Problem</title>
        <para>Failed to attach volume to an instance,
            <filename>sg_scan</filename> file not found. This
            warning and error occur when the
            <package>sg3-utils</package> package is not installed on
            the Compute node. The IDs in your message are unique to
            your system:</para>
        <screen><computeroutput>ERROR nova.compute.manager [req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin|req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin]
[instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5]
Failed to attach volume 4cc104c4-ac92-4bd6-9b95-c6686746414a at /dev/vdcTRACE nova.compute.manager
[instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5]
Stdout: '/usr/local/bin/nova-rootwrap: Executable not found: /usr/bin/sg_scan</computeroutput></screen>
    </section>
    <section xml:id="section_ts_vol_attach_miss_sg_scan_solution">
        <title>Solution</title>
        <para>Run this command on the Compute node to install the
            <package>sg3-utils</package> package:</para>
        <screen><prompt>$</prompt> <userinput>sudo apt-get install sg3-utils</userinput></screen>
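        <para>You can then confirm that the helper the error message
            was looking for is present:</para>
        <screen><prompt>$</prompt> <userinput>ls -l /usr/bin/sg_scan</userinput></screen>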
    </section>
</section>
@@ -2,53 +2,55 @@
<section xml:id="volume-migration"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
    <title>Migrate volumes</title>
    <para>The Havana release of OpenStack introduces the ability to
        migrate volumes between back-ends. Migrating a volume
        transparently moves its data from the current back-end for
        the volume to a new one. This is an administrator function,
        and can be used for tasks such as storage evacuation (for
        maintenance or decommissioning) or manual optimization (for
        example, for performance, reliability, or cost).</para>
    <para>These workflows are possible for a migration:</para>
    <orderedlist>
        <listitem>
            <para>If the storage can migrate the volume on its own,
                it is given the opportunity to do so. This allows
                the Block Storage driver to enable optimizations
                that the storage might be able to perform. If the
                back-end is not able to perform the migration, the
                Block Storage Service uses one of two generic flows,
                as follows.</para>
        </listitem>
        <listitem>
            <para>If the volume is not attached, the Block Storage
                Service creates a volume and copies the data from
                the original to the new volume. Note that while most
                back-ends support this function, not all do. See the
                driver documentation in the <link
                xlink:href="http://docs.openstack.org/havana/config-reference/content/"
                ><citetitle>OpenStack Configuration
                Reference</citetitle></link> for more
                details.</para>
        </listitem>
        <listitem>
            <para>If the volume is attached to a VM instance, the
                Block Storage Service creates a volume, and calls
                Compute to copy the data from the original to the
                new volume. Currently this is supported only by the
                Compute libvirt driver.</para>
        </listitem>
    </orderedlist>
    <para>As an example, this scenario shows two LVM back-ends and
        migrates an attached volume from one to the other. This
        scenario uses the third migration flow.</para>
    <para>First, list the available back-ends:</para>
    <screen><prompt>$</prompt> <userinput>cinder-manage host list</userinput>
<computeroutput>server1@lvmstorage-1 zone1
server2@lvmstorage-2 zone1</computeroutput></screen>
    <para>Next, as the admin user, you can see the current status
        of the volume (replace the example ID with your own):</para>
    <screen><prompt>$</prompt> <userinput>cinder show 6088f80a-f116-4331-ad48-9afb0dfb196c</userinput>
<computeroutput>+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
@@ -70,47 +72,47 @@ server2@lvmstorage-2 zone1</computeroutput></screen>
| status | in-use |
| volume_type | None |
+--------------------------------+--------------------------------------+</computeroutput></screen>
    <para>Note these attributes:</para>
    <itemizedlist>
        <listitem>
            <para><literal>os-vol-host-attr:host</literal> - the
                volume's current back-end.</para>
        </listitem>
        <listitem>
            <para><literal>os-vol-mig-status-attr:migstat</literal>
                - the status of this volume's migration
                (<literal>None</literal> means that a migration is
                not currently in progress).</para>
        </listitem>
        <listitem>
            <para><literal>os-vol-mig-status-attr:name_id</literal>
                - the volume ID that this volume's name on the
                back-end is based on. Before a volume is ever
                migrated, its name on the back-end storage may be
                based on the volume's ID (see the
                <literal>volume_name_template</literal>
                configuration parameter). For example, if
                <literal>volume_name_template</literal> is kept as
                the default value (<literal>volume-%s</literal>),
                your first LVM back-end has a logical volume named
                <literal>volume-6088f80a-f116-4331-ad48-9afb0dfb196c</literal>.
                During the course of a migration, if you create a
                volume and copy over the data, the volume gets the
                new name but keeps its original ID. This is exposed
                by the <literal>name_id</literal> attribute.</para>
        </listitem>
    </itemizedlist>
    <para>Migrate this volume to the second LVM back-end:</para>
    <screen><prompt>$</prompt> <userinput>cinder migrate 6088f80a-f116-4331-ad48-9afb0dfb196c server2@lvmstorage-2</userinput></screen>
    <para>You can use the <command>cinder show</command> command to
        see the status of the migration. While migrating, the
        <literal>migstat</literal> attribute shows states such as
        <literal>migrating</literal> or
        <literal>completing</literal>. On error,
        <literal>migstat</literal> is set to
        <literal>None</literal> and the <literal>host</literal>
        attribute shows the original host. On success, in this
        example, the output looks like this:</para>
    <screen><computeroutput>+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [...] |
@@ -132,23 +134,22 @@ server2@lvmstorage-2 zone1</computeroutput></screen>
| volume_type | None |
+--------------------------------+--------------------------------------+
</computeroutput></screen>
    <para>Note that <literal>migstat</literal> is None,
        <literal>host</literal> is the new host, and
        <literal>name_id</literal> holds the ID of the volume
        created by the migration. If you look at the second LVM
        back-end, you find the logical volume
        <literal>volume-133d1f56-9ffc-4f57-8798-d5217d851862</literal>.</para>
    <note>
        <para>The migration is not visible to non-admin users (for
            example, through the volume <literal>status</literal>).
            However, some operations are not allowed while a
            migration is taking place, such as attaching/detaching a
            volume and deleting a volume. If a user performs such an
            action during a migration, an error is returned.</para>
    </note>
    <note>
        <para>Migrating volumes that have snapshots is currently not
            allowed.</para>
    </note>
</section>
@@ -4,88 +4,100 @@
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
    <title>Flavors</title>
    <para>Authorized users can use the <command>nova
        flavor-create</command> command to create flavors. To see
        the available flavor-related commands, run:</para>
    <screen><prompt>$</prompt> <userinput>nova help | grep flavor</userinput>
<computeroutput>flavor-access-add Add flavor access for the given tenant.
flavor-access-list Print access information about the given flavor.
flavor-access-remove
                    Remove flavor access for the given tenant.
flavor-create Create a new flavor
flavor-delete Delete a specific flavor
flavor-key Set or unset extra_spec for a flavor.
flavor-list Print a list of available 'flavors' (sizes of
flavor-show Show details about the given flavor.
volume-type-delete Delete a specific flavor</computeroutput></screen>
    <note>
        <para>To modify an existing flavor in the dashboard, you
            must delete the flavor and create a modified one with
            the same name.</para>
    </note>
    <para>Flavors define these elements:</para>
    <table rules="all" width="75%">
        <caption>Flavor elements</caption>
        <col width="15%"/>
        <col width="85%"/>
        <thead>
            <tr>
                <td>Element</td>
                <td>Description</td>
            </tr>
        </thead>
        <tbody>
            <tr>
                <td><literal>Name</literal></td>
                <td>A descriptive name.
                    <replaceable>xx</replaceable>.<replaceable>size_name</replaceable>
                    is typically not required, though some
                    third-party tools may rely on it.</td>
            </tr>
            <tr>
                <td><literal>Memory_MB</literal></td>
                <td>Virtual machine memory in megabytes.</td>
            </tr>
            <tr>
                <td><literal>Disk</literal></td>
                <td>Virtual root disk size in gigabytes. This is an
                    ephemeral disk that the base image is copied
                    into. It is not used when you boot from a
                    persistent volume. The "0" size is a special
                    case that uses the native base image size as the
                    size of the ephemeral root volume.</td>
            </tr>
            <tr>
                <td><literal>Ephemeral</literal></td>
                <td>Specifies the size of a secondary ephemeral data
                    disk. This is an empty, unformatted disk and
                    exists only for the life of the instance.</td>
            </tr>
            <tr>
                <td><literal>Swap</literal></td>
                <td>Optional swap space allocation for the
                    instance.</td>
            </tr>
            <tr>
                <td><literal>VCPUs</literal></td>
                <td>Number of virtual CPUs presented to the
                    instance.</td>
            </tr>
            <tr>
                <td><literal>RXTX_Factor</literal></td>
                <td>Optional property that allows created servers to
                    have a different bandwidth cap than that defined
                    in the network they are attached to. This factor
                    is multiplied by the
                    <literal>rxtx_base</literal> property of the
                    network. The default value is 1.0 (that is, the
                    same as the attached network).</td>
            </tr>
            <tr>
                <td><literal>Is_Public</literal></td>
                <td>Boolean value that indicates whether the flavor
                    is available to all users or private to the
                    tenant it was created in. Defaults to True.</td>
            </tr>
            <tr>
                <td><literal>extra_specs</literal></td>
                <td>Additional optional restrictions on which
                    compute nodes the flavor can run on. This is
                    implemented as key-value pairs that must match
                    against the corresponding key-value pairs on
                    compute nodes. Can be used to implement things
                    like special resources (for example, flavors
                    that can only run on compute nodes with GPU
                    hardware).</td>
            </tr>
        </tbody>
    </table>
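    <para>For example, this command creates a flavor from these
        elements; the name, ID, and sizes are illustrative:</para>
    <screen><prompt>$</prompt> <userinput>nova flavor-create m1.tiny2 100 512 1 1 --ephemeral 0 --swap 0 --rxtx-factor 1.0 --is-public True</userinput></screen>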
    <para>Flavor customization can be limited by the hypervisor in
        use; for example, the libvirt driver enables quotas on CPUs
@@ -94,40 +106,41 @@
    <para>You can configure the CPU limits with three control
        parameters with the <command>nova-manage</command> tool.
        Here is an example of configuring the I/O limit:</para>
    <screen><prompt>#</prompt> <userinput>nova-manage flavor set_key --name m1.small --key quota:read_bytes_sec --value 10240000</userinput></screen>
    <para>There are CPU control parameters for weight shares,
        enforcement intervals for runtime quotas, and a quota for
        maximum allowed bandwidth.</para>
    <para>The optional <literal>cpu_shares</literal> element
        specifies the proportional weighted share for the domain. If
        this element is omitted, the service defaults to the OS
        provided defaults. There is no unit for the value. It is a
        relative measure based on the setting of other VMs. For
        example, a VM configured with value 2048 gets twice as much
        CPU time as a VM configured with value 1024.</para>
    <para>The optional <literal>cpu_period</literal> element
        specifies the enforcement interval (unit: microseconds) for
        QEMU and LXC hypervisors. Within a period, each VCPU of the
        domain is not allowed to consume more than the quota worth
        of runtime. The value should be in range
        <literal>[1000, 1000000]</literal>. A period with value 0
        means no value.</para>
    <para>The optional <literal>cpu_quota</literal> element
        specifies the maximum allowed bandwidth (unit:
        microseconds). A domain with a negative quota value is not
        bandwidth controlled, which means that it has infinite
        bandwidth. The value should be in range
        <literal>[1000, 18446744073709551]</literal> or less than 0.
        A quota with value 0 means no value. You can use this
        feature to ensure that all VCPUs run at the same speed. For
        example:</para>
    <screen><prompt>#</prompt> <userinput>nova flavor-key m1.low_cpu set quota:cpu_quota=10000</userinput>
<prompt>#</prompt> <userinput>nova flavor-key m1.low_cpu set quota:cpu_period=20000</userinput></screen>
    <para>In this example, an instance of the
        <literal>m1.low_cpu</literal> flavor can consume a maximum
        of only 50% of a physical CPU's computing capability.</para>
    <para>Through disk I/O quotas, you can set a maximum disk write
        of 10 MB per second for a VM user. For example:</para>
    <screen><prompt>#</prompt> <userinput>nova flavor-key m1.medium set quota:disk_write_bytes_sec=10240000</userinput></screen>
    <para>The disk I/O options are:</para>
    <itemizedlist>
        <listitem>
            <para>disk_read_bytes_sec</para>
@@ -148,46 +161,44 @@
            <para>disk_total_iops_sec</para>
        </listitem>
    </itemizedlist>
    <para>The vif I/O options are:</para>
    <itemizedlist>
        <listitem>
            <para>vif_inbound_average</para>
        </listitem>
        <listitem>
            <para>vif_inbound_burst</para>
        </listitem>
        <listitem>
            <para>vif_inbound_peak</para>
        </listitem>
        <listitem>
            <para>vif_outbound_average</para>
        </listitem>
        <listitem>
            <para>vif_outbound_burst</para>
        </listitem>
        <listitem>
            <para>vif_outbound_peak</para>
        </listitem>
    </itemizedlist>
    <para>Incoming and outgoing traffic can be shaped
        independently. The bandwidth element can have at most one
        inbound and at most one outbound child element. If you leave
        any of these child elements out, no quality of service (QoS)
        is applied on that traffic direction. So, when you want to
        shape only the network's incoming traffic, use inbound only,
        and vice versa. Each of these elements has one mandatory
        attribute, <literal>average</literal>.</para>
    <para><literal>average</literal> specifies the average bit rate
        on the interface being shaped. There are also two optional
        attributes: <literal>peak</literal>, which specifies the
        maximum rate at which the bridge can send data, and
        <literal>burst</literal>, the amount of bytes that can be
        burst at peak speed. Accepted values for attributes are
        integer numbers. The units for average and peak are
        kilobytes per second, and for burst, kilobytes. The rate is
        shared equally within domains connected to the
        network.</para>
    <para>This example configures a bandwidth limit for instance
        network traffic:</para>
    <screen><prompt>#</prompt> <userinput>nova-manage flavor set_key --name m1.small --key quota:inbound_average --value 10240</userinput>
<prompt>#</prompt> <userinput>nova-manage flavor set_key --name m1.small --key quota:outbound_average --value 10240</userinput></screen>
</section>
@@ -13,9 +13,9 @@
        </listitem>
        <listitem>
            <para><systemitem class="service"
                >glance-registry</systemitem>. Stores, processes,
                and retrieves metadata about images. Metadata
                includes items such as size and type.</para>
        </listitem>
        <listitem>
            <para>Database. Stores image metadata. You can choose your
@@ -3,117 +3,113 @@
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="keystone-user-management">
    <title>User management</title>
    <para>The main components of Identity user management
        are:</para>
    <itemizedlist>
        <listitem>
            <para><emphasis role="bold">User</emphasis>. Represents
                a human user. Has associated information such as
                user name, password, and email. This example creates
                a user named <literal>alice</literal>:</para>
            <screen><prompt>$</prompt> <userinput>keystone user-create --name=alice --pass=mypassword123 --email=alice@example.com</userinput></screen>
        </listitem>
        <listitem>
            <para><emphasis role="bold">Tenant</emphasis>. A
                project, group, or organization. When you make
                requests to OpenStack services, you must specify a
                tenant. For example, if you query the Compute
                service for a list of running instances, you get a
                list of all running instances in the tenant that you
                specified in your query. This example creates a
                tenant named <literal>acme</literal>:</para>
            <screen><prompt>$</prompt> <userinput>keystone tenant-create --name=acme</userinput></screen>
            <note>
                <para>Because the term <emphasis>project</emphasis>
                    was used instead of <emphasis>tenant</emphasis>
                    in earlier versions of OpenStack Compute, some
                    command-line tools use
                    <literal>--project_id</literal> instead of
                    <literal>--tenant-id</literal> or
                    <literal>--os-tenant-id</literal> to refer to a
                    tenant ID.</para>
            </note>
        </listitem>
        <listitem>
            <para><emphasis role="bold">Role</emphasis>. Captures
                the operations that a user can perform in a given
                tenant.</para>
            <para>This example creates a role named
                <literal>compute-user</literal>:</para>
            <screen><prompt>$</prompt> <userinput>keystone role-create --name=compute-user</userinput></screen>
            <note>
                <para>Individual services, such as Compute and the
                    Image Service, assign meaning to roles. In the
                    Identity Service, a role is simply a
                    name.</para>
            </note>
        </listitem>
    </itemizedlist>
    <?hard-pagebreak?>
    <para>The Identity Service assigns a tenant and a role to a
        user. You might assign the <literal>compute-user</literal>
        role to the <literal>alice</literal> user in the
        <literal>acme</literal> tenant:</para>
    <screen><prompt>$</prompt> <userinput>keystone user-list</userinput>
<computeroutput>+--------+---------+-------------------+--------+
| id | enabled | email | name |
+--------+---------+-------------------+--------+
| 892585 | True | alice@example.com | alice |
+--------+---------+-------------------+--------+</computeroutput></screen>
    <screen><prompt>$</prompt> <userinput>keystone role-list</userinput>
<computeroutput>+--------+--------------+
| id | name |
+--------+--------------+
| 9a764e | compute-user |
+--------+--------------+</computeroutput></screen>
    <screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput>
<computeroutput>+--------+------+---------+
| id | name | enabled |
+--------+------+---------+
| 6b8fd2 | acme | True |
+--------+------+---------+</computeroutput></screen>
    <screen><prompt>$</prompt> <userinput>keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2</userinput></screen>
    <para>A user can have different roles in different tenants. For
        example, Alice might also have the <literal>admin</literal>
        role in the <literal>Cyberdyne</literal> tenant. A user can
        also have multiple roles in the same tenant.</para>
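    <para>For example, a second assignment might give
        <literal>alice</literal> the <literal>admin</literal> role
        in another tenant; the role and tenant IDs are placeholders
        for values from your own <command>keystone
        role-list</command> and <command>keystone
        tenant-list</command> output:</para>
    <screen><prompt>$</prompt> <userinput>keystone user-role-add --user=892585 --role=<replaceable>ADMIN_ROLE_ID</replaceable> --tenant-id=<replaceable>CYBERDYNE_TENANT_ID</replaceable></userinput></screen>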
    <para>The
        <filename>/etc/<replaceable>[SERVICE_CODENAME]</replaceable>/policy.json</filename>
        file controls the tasks that users can perform for a given
        service. For example,
        <filename>/etc/nova/policy.json</filename> specifies the
        access policy for the Compute service,
        <filename>/etc/glance/policy.json</filename> specifies the
        access policy for the Image service, and
        <filename>/etc/keystone/policy.json</filename> specifies
        the access policy for the Identity Service.</para>
    <para>The default <filename>policy.json</filename> files in the
        Compute, Identity, and Image service recognize only the
        <literal>admin</literal> role: all operations that do not
        require the <literal>admin</literal> role are accessible by
        any user that has any role in a tenant.</para>
    <para>If you wish to restrict users from performing operations
        in, say, the Compute service, you need to create a role in
        the Identity Service and then modify
        <filename>/etc/nova/policy.json</filename> so that this
        role is required for Compute operations.</para>
    <?hard-pagebreak?>
    <para>For example, this line in
        <filename>/etc/nova/policy.json</filename> specifies that
        there are no restrictions on which users can create volumes:
        if the user has any role in a tenant, they can create
        volumes in that tenant.</para>
    <programlisting language="json">"volume:create": [],</programlisting>
    <para>To restrict creation of volumes to users who have the
        <literal>compute-user</literal> role in a particular tenant,
        you would add <literal>"role:compute-user"</literal>, like
        so:</para>
    <programlisting language="json">"volume:create": ["role:compute-user"],</programlisting>
    <para>To restrict all Compute service requests to require this
        role, the resulting file would look like:</para>
    <programlisting language="json"><?db-font-size 50%?>{
    "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
    "default": [["rule:admin_or_owner"]],
@@ -16,7 +16,7 @@
    <procedure xml:id="evacuate_shared">
        <step>
            <para>To find a different host for the evacuated
                instance, run this command to list hosts:</para>
            <screen><prompt>$</prompt> <userinput>nova host-list</userinput></screen>
        </step>
        <step>