Applied conventions for "e.g."

As per convention, we should avoid using "e.g." (we should use "for
example" instead). Found and replaced them accordingly; didn't touch
instances that looked copy-pasted from stdout.

Change-Id: I45aae1a30b872f916744a6a33f6888abe8cf6eb0
Partial-Bug: #1217503
Author: Don Domingo
Date: 2014-01-20 11:29:26 +10:00
Committed by: Andreas Jaeger
Parent: acc984a12a
Commit: 3e1100a70a
31 changed files with 3407 additions and 2030 deletions


@@ -33,8 +33,8 @@
<literal>user_crud_extension</literal> filter, insert it after
the <literal>*_body</literal> middleware and before the
<literal>public_service</literal> application in the
public_api WSGI pipeline in <filename>keystone.conf</filename>
e.g.:</para>
public_api WSGI pipeline in <filename>keystone.conf</filename>.
For example:</para>
<programlisting language="ini"><?db-font-size 75%?>[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory
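The hunk shows only the filter definition; a fuller sketch of how `user_crud_extension` slots into the `public_api` pipeline in `keystone.conf` might look as follows. Only the filter's position (after the `*_body` middleware, before `public_service`) is taken from the surrounding text; the other pipeline members are illustrative and vary between releases.

```ini
; Hypothetical keystone.conf excerpt. Only user_crud_extension's
; placement relative to json_body/xml_body and public_service is
; taken from the documentation hunk above.
[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory

[pipeline:public_api]
pipeline = sizelimit url_normalize token_auth admin_token_auth xml_body json_body user_crud_extension public_service
```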


@@ -233,8 +233,8 @@
actions for users with the admin role. An authorized
client or an administrative user can view and set the
provider extended attributes through Networking API
calls. See <xref linkend="section_networking_auth"/> for details
on policy configuration.</para>
calls. See <xref linkend="section_networking_auth"/>
for details on policy configuration.</para>
</section>
<section xml:id="provider_api_workflow">
<title>Provider extension API operations</title>
@@ -1733,8 +1733,8 @@
<para>The back-end is polled periodically, and the
status for every resource is retrieved; then the
status in the Networking database is updated only
for the resources for which a status change occurred.
As operational status is now retrieved
for the resources for which a status change
occurred. As operational status is now retrieved
asynchronously, performance for
<literal>GET</literal> operations is
consistently improved.</para>
@@ -1848,22 +1848,21 @@
</tr>
</tbody>
</table>
<para>When running multiple OpenStack Networking server
instances, the status synchronization task should
not run on every node; doing so sends unnecessary
traffic to the NVP back-end and performs unnecessary
DB operations. Set the
<para>When running multiple OpenStack Networking
server instances, the status synchronization task
should not run on every node; doing so sends
unnecessary traffic to the NVP back-end and
performs unnecessary DB operations. Set the
<option>state_sync_interval</option>
configuration option to a non-zero value
exclusively on a node designated for back-end
status synchronization.</para>
<para>Explicitly specifying the <emphasis
role="italic">status</emphasis> attribute in
Neutron API requests (e.g.: <literal>GET
/v2.0/networks/&lt;net-id>?fields=status&amp;fields=name</literal>)
always triggers an explicit query to the NVP
back-end, even when asynchronous state
synchronization is enabled.</para>
<para>The <parameter>fields=status</parameter>
parameter in Networking API requests always
triggers an explicit query to the NVP back end,
even when you enable asynchronous state
synchronization. For example, <code>GET
/v2.0/networks/&lt;net-id>?fields=status&amp;fields=name</code>.</para>
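The repeated `fields` parameters in that request can be assembled with the standard library. This sketch (the network ID is a placeholder) shows the query-string form that forces the explicit NVP back-end query:

```python
from urllib.parse import urlencode

# Repeated "fields" parameters select only the attributes we want;
# including "status" is what triggers the explicit back-end query.
net_id = "NET_ID"  # placeholder network UUID
query = urlencode([("fields", "status"), ("fields", "name")])
url = f"/v2.0/networks/{net_id}?{query}"
print(url)  # /v2.0/networks/NET_ID?fields=status&fields=name
```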
</section>
</section>
<section xml:id="section_bigswitch_extensions">
@@ -1934,8 +1933,8 @@
<td>nexthop</td>
<td>No</td>
<td>A plus-separated (+) list of
next-hop IP addresses (e.g.
'1.1.1.1+1.1.1.2')</td>
next-hop IP addresses. For example,
<literal>1.1.1.1+1.1.1.2</literal>.</td>
<td>Overrides the default virtual
router used to handle traffic for
packets that match the rule</td>
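A quick sketch of how the plus-separated next-hop value described in the table decomposes into individual addresses:

```python
# Parse a plus-separated next-hop list, e.g. as given in the
# nexthop column above.
raw = "1.1.1.1+1.1.1.2"
nexthops = raw.split("+")
print(nexthops)  # ['1.1.1.1', '1.1.1.2']
```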
@@ -1962,7 +1961,8 @@
<para>Router rules are configured with a router
update operation in OpenStack Networking. The
update overrides any previous rules so all
rules must be provided at the same time.</para>
rules must be provided at the same
time.</para>
<para>Update a router with rules to permit traffic
by default but block traffic from external
networks to the 10.10.10.0/24 subnet:</para>
@@ -2083,9 +2083,9 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
<td>False</td>
<td>Specify whether the remote_ip_prefix will
be excluded or not from traffic counters
of the metering label, For example to not
of the metering label (for example, to not
count the traffic of a specific IP address
of a range.</td>
of a range).</td>
</tr>
<tr>
<td>remote_ip_prefix</td>
@@ -2100,10 +2100,10 @@ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny</userinput></screen>
<?hard-pagebreak?>
<section xml:id="metering_operations">
<title>Basic L3 metering operations</title>
<para>Only administrators can manage the L3
metering labels and rules.</para>
<para>This table shows example <command>neutron</command> commands
that enable you to complete basic L3 metering
<para>Only administrators can manage the L3 metering
labels and rules.</para>
<para>This table shows example <command>neutron</command>
commands that enable you to complete basic L3 metering
operations:</para>
<table rules="all">
<caption>Basic L3 operations</caption>


@@ -78,8 +78,8 @@ volume-type-delete Delete a specific flavor</computeroutput></screen>
different bandwidth cap than that defined in the
network they are attached to. This factor is
multiplied by the rxtx_base property of the
network. Default value is 1.0 (that is, the same
as attached network).</td>
network. Default value is 1.0. That is, the same
as attached network.</td>
</tr>
<tr>
<td><literal>Is_Public</literal></td>
@@ -89,13 +89,11 @@ volume-type-delete Delete a specific flavor</computeroutput></screen>
</tr>
<tr>
<td><literal>extra_specs</literal></td>
<td>additional optional restrictions on which compute
nodes the flavor can run on. This is implemented
as key/value pairs that must match against the
corresponding key/value pairs on compute nodes.
Can be used to implement things like special
resources (e.g., flavors that can only run on
compute nodes with GPU hardware).</td>
<td>Key and value pairs that define on which compute
nodes a flavor can run. These pairs must match
corresponding pairs on the compute nodes. Use to
implement special resources, such as flavors that
run on only compute nodes with GPU hardware.</td>
</tr>
</tbody>
</table>
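The `extra_specs` matching rule in the table amounts to requiring every flavor key/value pair to be present on the compute node. A minimal sketch of that comparison (the `gpu` key is hypothetical, and the real scheduler logic is more involved):

```python
def host_matches(extra_specs, host_capabilities):
    """True if every flavor extra_spec pair is matched by the host."""
    return all(host_capabilities.get(k) == v for k, v in extra_specs.items())

flavor_specs = {"gpu": "true"}  # hypothetical extra_specs on a flavor
gpu_node = {"gpu": "true", "arch": "x86_64"}
plain_node = {"arch": "x86_64"}
print(host_matches(flavor_specs, gpu_node))    # True
print(host_matches(flavor_specs, plain_node))  # False
```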
@@ -205,9 +203,8 @@ volume-type-delete Delete a specific flavor</computeroutput></screen>
<para>Flavors can also be assigned to particular projects. By
default, a flavor is public and available to all projects.
Private flavors are only accessible to those on the access
list and are invisible to other projects. To
create and assign a private flavor to a project, run these
commands:</para>
list and are invisible to other projects. To create and assign
a private flavor to a project, run these commands:</para>
<screen><prompt>#</prompt> <userinput>nova flavor-create --is-public false p1.medium auto 512 40 4</userinput>
<prompt>#</prompt> <userinput>nova flavor-access-add 259d06a0-ba6d-4e60-b42d-ab3144411d58 86f94150ed744e08be565c2ff608eef9</userinput></screen>
</section>


@@ -17,7 +17,8 @@
the top level Identity Service configuration file
<filename>keystone.conf</filename> as specified in the
above section. Additionally, the private key should only be
readable by the system user that will run the Identity Service.</para>
readable by the system user that will run the Identity
Service.</para>
<warning>
<para>The certificates can be world readable, but the private
key cannot be. The private key should only be readable by
@@ -135,7 +136,7 @@ SrWY8lF3HrTcJT23sZIleg==</screen>
<orderedlist numeration="arabic">
<listitem>
<para>Request Signing Certificate from External CA
</para>
</para>
</listitem>
<listitem>
<para>Convert certificate and private key to PEM if
@@ -152,8 +153,9 @@ SrWY8lF3HrTcJT23sZIleg==</screen>
<para>One way to request a signing certificate from an
external CA is to first generate a PKCS #10 Certificate
Request Syntax (CRS) using OpenSSL CLI.</para>
<para>First create a certificate request configuration file
(e.g. <filename>cert_req.conf</filename>):</para>
<para>Create a certificate request configuration file. For
example, create the <filename>cert_req.conf</filename>
file, as follows:</para>
<programlisting language="ini">[ req ]
default_bits = 1024
default_keyfile = keystonekey.pem
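The hunk truncates the `cert_req.conf` listing after the first few keys; a fuller sketch of a typical OpenSSL request configuration in the same shape might look like this. The distinguished-name values here are placeholders, not the ones from the original file.

```ini
; Hypothetical cert_req.conf in the openssl req format; the
; [ req ] keys above are from the document, the DN values are
; placeholders for illustration only.
[ req ]
default_bits = 1024
default_keyfile = keystonekey.pem
default_md = sha1
prompt = no
distinguished_name = dn

[ dn ]
countryName = US
organizationName = example
commonName = keystone.example.org
emailAddress = keystone@openstack.org
```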
@@ -174,7 +176,7 @@ emailAddress = keystone@openstack.org
<para>Then generate a CRS with OpenSSL CLI. <emphasis
role="strong">Do not encrypt the generated private
key. Must use the -nodes option.</emphasis>
</para>
</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>openssl req -newkey rsa:1024 -keyout signing_key.pem -keyform PEM \
-out signing_cert_req.pem -outform PEM -config cert_req.conf -nodes</userinput></screen>
@@ -185,7 +187,7 @@ emailAddress = keystone@openstack.org
to request a token signing certificate and make sure to
ask the certificate to be in PEM format. Also, make sure
your trusted CA certificate chain is also in PEM format.
</para>
</para>
</section>
<section xml:id="install-external-signing-certificate">
<title>Install an external signing certificate</title>


@@ -31,7 +31,7 @@
that apply to the storage objects with which the policy group is associated.</td>
</tr>
<tr>
<td><literal>netapp_mirrored</literal><footnote xml:id="netapp-conflict-extra-specs"><para>If both the positive and negative specs for a pair are specified (e.g. <literal>netapp_dedup</literal> and <literal>netapp_nodedup</literal>) and set to the same value within a single <literal>extra_specs</literal> list, then neither spec will be utilized by the driver.</para></footnote></td>
<td><literal>netapp_mirrored</literal><footnote xml:id="netapp-conflict-extra-specs"><para>If both the positive and negative specs for a pair are specified (for example, <literal>netapp_dedup</literal> and <literal>netapp_nodedup</literal>) and set to the same value within a single <literal>extra_specs</literal> list, then neither spec will be utilized by the driver.</para></footnote></td>
<td>Boolean</td>
<td>Limit the candidate volume list to only the ones that are mirrored on the storage controller.</td>
</tr>


@@ -7,16 +7,16 @@
supports multiple storage families and protocols. A storage
family corresponds to storage systems built on different
NetApp technologies such as clustered Data ONTAP® and Data
ONTAP operating in 7-Mode. The storage protocol refers to
the protocol used to initiate data storage and access
operations on those storage systems like iSCSI and NFS. The
NetApp unified driver can be configured to provision and
manage OpenStack volumes on a given storage family using a
specified storage protocol. The OpenStack volumes can then be
used for accessing and storing data using the storage
protocol on the storage family system. The NetApp unified
driver is an extensible interface that can support new
storage families and protocols.</para>
ONTAP operating in 7-Mode. The storage protocol refers to the
protocol used to initiate data storage and access operations
on those storage systems like iSCSI and NFS. The NetApp
unified driver can be configured to provision and manage
OpenStack volumes on a given storage family using a specified
storage protocol. The OpenStack volumes can then be used for
accessing and storing data using the storage protocol on the
storage family system. The NetApp unified driver is an
extensible interface that can support new storage families and
protocols.</para>
<section xml:id="ontap-cluster-family">
<title>NetApp clustered Data ONTAP storage family</title>
<para>The NetApp clustered Data ONTAP storage family
@@ -35,9 +35,9 @@
<para>The iSCSI configuration for clustered Data ONTAP is
a direct interface from Cinder to the clustered Data
ONTAP instance and as such does not require additional
management software to achieve the desired functionality.
It uses NetApp APIs to interact with the clustered Data
ONTAP instance.</para>
management software to achieve the desired
functionality. It uses NetApp APIs to interact with
the clustered Data ONTAP instance.</para>
<simplesect>
<title>Configuration options for clustered Data ONTAP
family with iSCSI protocol</title>
@@ -59,37 +59,49 @@
netapp_login=username
netapp_password=password
</programlisting>
<note><para>You must override the default value of
<literal>netapp_storage_protocol</literal> with
<literal>iscsi</literal> in order to utilize the
iSCSI protocol.</para></note>
<xi:include href="../../../common/tables/cinder-netapp_cdot_iscsi.xml"/>
<note><para>If you specify an account in the
<literal>netapp_login</literal> that only has virtual
storage server (Vserver) administration privileges
(rather than cluster-wide administration privileges),
some advanced features of the NetApp unified driver
will not work and you may see warnings in the Cinder
logs.</para></note>
<tip><para>For more information on these options and
other deployment and operational scenarios, visit the
<link xlink:href="https://communities.netapp.com/groups/openstack">
OpenStack NetApp community.</link></para></tip>
<note>
<para>You must override the default value of
<literal>netapp_storage_protocol</literal>
with <literal>iscsi</literal> in order to
utilize the iSCSI protocol.</para>
</note>
<xi:include
href="../../../common/tables/cinder-netapp_cdot_iscsi.xml"/>
<note>
<para>If you specify an account in the
<literal>netapp_login</literal> that only
has virtual storage server (Vserver)
administration privileges (rather than
cluster-wide administration privileges), some
advanced features of the NetApp unified driver
will not work and you may see warnings in the
Cinder logs.</para>
</note>
<tip>
<para>For more information on these options and
other deployment and operational scenarios,
visit the <link
xlink:href="https://communities.netapp.com/groups/openstack"
> OpenStack NetApp
community.</link></para>
</tip>
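Pulling the options from this hunk together, a minimal back-end stanza for clustered Data ONTAP over iSCSI might look as follows. Host name, port, and credentials are placeholders; the option names and the `iscsi` override are as described in the surrounding text.

```ini
; Hypothetical cinder.conf fragment, assuming the Havana-era
; unified driver option names shown in this section.
[DEFAULT]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
```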
</simplesect>
</section>
<section xml:id="ontap-cluster-nfs">
<title>NetApp NFS configuration for clustered Data
ONTAP</title>
<para>The NetApp NFS configuration for clustered Data
ONTAP is an interface from OpenStack to a clustered Data
ONTAP system for provisioning and managing OpenStack
volumes on NFS exports provided by the clustered Data
ONTAP system that are accessed using the NFS protocol.</para>
<para>The NFS configuration for clustered Data ONTAP is a direct
interface from Cinder to the clustered Data ONTAP instance and
as such does not require any additional management software to
achieve the desired functionality. It uses NetApp APIs
to interact with the clustered Data ONTAP instance.</para>
ONTAP is an interface from OpenStack to a clustered
Data ONTAP system for provisioning and managing
OpenStack volumes on NFS exports provided by the
clustered Data ONTAP system that are accessed using
the NFS protocol.</para>
<para>The NFS configuration for clustered Data ONTAP is a
direct interface from Cinder to the clustered Data
ONTAP instance and as such does not require any
additional management software to achieve the desired
functionality. It uses NetApp APIs to interact with
the clustered Data ONTAP instance.</para>
<simplesect>
<title>Configuration options for the clustered Data
ONTAP family with NFS protocol</title>
@@ -111,72 +123,95 @@
netapp_login=username
netapp_password=password
</programlisting>
<xi:include href="../../../common/tables/cinder-netapp_cdot_nfs.xml"/>
<note><para>If you specify an account in the
<literal>netapp_login</literal> that only has virtual
storage server (Vserver) administration privileges
(rather than cluster-wide administration privileges),
some advanced features of the NetApp unified driver
will not work and you may see warnings in the Cinder
logs.</para></note>
<tip><para>For more information on these options and other deployment and operational scenarios, visit the <link xlink:href="https://communities.netapp.com/groups/openstack">OpenStack NetApp community.</link></para></tip>
<xi:include
href="../../../common/tables/cinder-netapp_cdot_nfs.xml"/>
<note>
<para>If you specify an account in the
<literal>netapp_login</literal> that only
has virtual storage server (Vserver)
administration privileges (rather than
cluster-wide administration privileges), some
advanced features of the NetApp unified driver
will not work and you may see warnings in the
Cinder logs.</para>
</note>
<tip>
<para>For more information on these options and
other deployment and operational scenarios,
visit the <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community.</link></para>
</tip>
</simplesect>
</section>
<section xml:id="ontap-cluster-extraspecs">
<title>NetApp-supported extra specs for clustered Data
ONTAP</title>
<para>Extra specs allow individual vendors to specify additional
filter criteria which the Cinder scheduler can use when evaluating
which volume node should fulfill a volume provisioning request.
When using the NetApp unified driver with a clustered Data ONTAP
storage system, you can leverage extra specs with Cinder volume types
to ensure that Cinder volumes are created on storage backends that
have certain properties (e.g. QoS, mirroring, compression) configured.</para>
<para>Extra specs are associated with Cinder volume types, so that when
users request volumes of a particular volume type, they will be
created on storage backends that meet the list of requirements (e.g.
available space, extra specs, etc). You can use the specs in the
table later in this section when defining Cinder volume types using the
<literal>cinder type-key</literal> command.</para>
<xi:include href="../../../common/tables/cinder-netapp_cdot_extraspecs.xml"/>
<note><para>It is recommended to only set the value of extra
specs to <literal>True</literal> when combining multiple
specs to enforce a certain logic set. If you desire to
remove volumes with a certain feature enabled from
consideration from the Cinder volume scheduler, be sure
to use the negated spec name with a value of <literal>
True</literal> rather than setting the positive spec
to a value of <literal>False</literal>.</para></note>
<para>Extra specs enable vendors to specify extra filter
criteria that the Block Storage scheduler uses when it
determines which volume node should fulfill a volume
provisioning request. When you use the NetApp unified
driver with a clustered Data ONTAP storage system, you
can leverage extra specs with Cinder volume types to
ensure that Cinder volumes are created on storage back
ends that have certain properties. For example, when
you configure QoS, mirroring, or compression for a
storage back end.</para>
<para>Extra specs are associated with Cinder volume types,
so that when users request volumes of a particular
volume type, the volumes are created on storage back
ends that meet the list of requirements. For example,
the back ends have the available space or extra specs.
You can use the specs in the following table when you
define Cinder volume types by using the
<command>cinder type-key</command> command.</para>
<xi:include
href="../../../common/tables/cinder-netapp_cdot_extraspecs.xml"/>
<note>
<para>It is recommended to only set the value of extra
specs to <literal>True</literal> when combining
multiple specs to enforce a certain logic set. If
you desire to remove volumes with a certain
feature enabled from consideration from the Cinder
volume scheduler, be sure to use the negated spec
name with a value of <literal>True</literal>
rather than setting the positive spec to a value
of <literal>False</literal>.</para>
</note>
</section>
</section>
<section xml:id="ontap-7mode-family">
<title>NetApp Data ONTAP operating in 7-Mode storage family</title>
<title>NetApp Data ONTAP operating in 7-Mode storage
family</title>
<para>The NetApp Data ONTAP operating in 7-Mode storage family
represents a configuration group which provides OpenStack
compute instances access to 7-Mode storage systems. At present
it can be configured in Cinder to work with iSCSI and NFS
storage protocols.</para>
compute instances access to 7-Mode storage systems. At
present it can be configured in Cinder to work with iSCSI
and NFS storage protocols.</para>
<section xml:id="ontap-7mode-iscsi">
<title>NetApp iSCSI configuration for Data ONTAP operating in
7-Mode</title>
<para>The NetApp iSCSI configuration for Data ONTAP operating
in 7-Mode is an interface from OpenStack to Data ONTAP
operating in 7-Mode storage systems for provisioning and
managing the SAN block storage entity, that is, a
LUN which can be accessed using iSCSI protocol.</para>
<para>The iSCSI configuration for Data ONTAP operating in 7-Mode
is a direct interface from OpenStack to Data ONTAP operating
in 7-Mode storage system and it does not require additional
management software to achieve the desired functionality. It
uses NetApp ONTAPI to interact with the Data ONTAP operating
in 7-Mode storage system.</para>
<title>NetApp iSCSI configuration for Data ONTAP operating
in 7-Mode</title>
<para>The NetApp iSCSI configuration for Data ONTAP
operating in 7-Mode is an interface from OpenStack to
Data ONTAP operating in 7-Mode storage systems for
provisioning and managing the SAN block storage
entity, that is, a LUN which can be accessed using
iSCSI protocol.</para>
<para>The iSCSI configuration for Data ONTAP operating in
7-Mode is a direct interface from OpenStack to Data
ONTAP operating in 7-Mode storage system and it does
not require additional management software to achieve
the desired functionality. It uses NetApp ONTAPI to
interact with the Data ONTAP operating in 7-Mode
storage system.</para>
<simplesect>
<title>Configuration options for the Data ONTAP operating in
7-Mode storage family with iSCSI protocol</title>
<title>Configuration options for the Data ONTAP
operating in 7-Mode storage family with iSCSI
protocol</title>
<para>Configure the volume driver, storage family and
storage protocol to the NetApp unified driver, Data
ONTAP operating in 7-Mode, and iSCSI respectively by
setting the
storage protocol to the NetApp unified driver,
Data ONTAP operating in 7-Mode, and iSCSI
respectively by setting the
<literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
@@ -191,36 +226,48 @@
netapp_login=username
netapp_password=password
</programlisting>
<note><para>You must override the default value of
<literal>netapp_storage_protocol</literal> with
<literal>iscsi</literal> in order to utilize the
iSCSI protocol.</para></note>
<xi:include href="../../../common/tables/cinder-netapp_7mode_iscsi.xml"/>
<tip><para>For more information on these options and other deployment and operational scenarios, visit the <link xlink:href="https://communities.netapp.com/groups/openstack">OpenStack NetApp community.</link></para></tip>
<note>
<para>You must override the default value of
<literal>netapp_storage_protocol</literal>
with <literal>iscsi</literal> in order to
utilize the iSCSI protocol.</para>
</note>
<xi:include
href="../../../common/tables/cinder-netapp_7mode_iscsi.xml"/>
<tip>
<para>For more information on these options and
other deployment and operational scenarios,
visit the <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community.</link></para>
</tip>
</simplesect>
</section>
<section xml:id="ontap-7mode-nfs">
<title>NetApp NFS configuration for Data ONTAP operating in
7-Mode</title>
<para>The NetApp NFS configuration for Data ONTAP operating in
7-Mode is an interface from OpenStack to Data ONTAP
operating in 7-Mode storage system for provisioning and
managing OpenStack volumes on NFS exports provided by the
Data ONTAP operating in 7-Mode storage system which can
then be accessed using NFS protocol.</para>
<para>The NFS configuration for Data ONTAP operating in 7-Mode
is a direct interface from Cinder to the Data ONTAP
operating in 7-Mode instance and as such does not require
any additional management software to achieve the desired
functionality. It uses NetApp ONTAPI to interact with the
Data ONTAP operating in 7-Mode storage system.</para>
<title>NetApp NFS configuration for Data ONTAP operating
in 7-Mode</title>
<para>The NetApp NFS configuration for Data ONTAP
operating in 7-Mode is an interface from OpenStack to
Data ONTAP operating in 7-Mode storage system for
provisioning and managing OpenStack volumes on NFS
exports provided by the Data ONTAP operating in 7-Mode
storage system which can then be accessed using NFS
protocol.</para>
<para>The NFS configuration for Data ONTAP operating in
7-Mode is a direct interface from Cinder to the Data
ONTAP operating in 7-Mode instance and as such does
not require any additional management software to
achieve the desired functionality. It uses NetApp
ONTAPI to interact with the Data ONTAP operating in
7-Mode storage system.</para>
<simplesect>
<title>Configuration options for the Data ONTAP operating
in 7-Mode family with NFS protocol</title>
<title>Configuration options for the Data ONTAP
operating in 7-Mode family with NFS
protocol</title>
<para>Configure the volume driver, storage family and
storage protocol to the NetApp unified driver, Data
ONTAP operating in 7-Mode, and NFS respectively by
setting the
storage protocol to the NetApp unified driver,
Data ONTAP operating in 7-Mode, and NFS
respectively by setting the
<literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
@@ -235,25 +282,33 @@
netapp_login=username
netapp_password=password
</programlisting>
<xi:include href="../../../common/tables/cinder-netapp_7mode_nfs.xml"/>
<tip><para>For more information on these options and other deployment and operational scenarios, visit the <link xlink:href="https://communities.netapp.com/groups/openstack">OpenStack NetApp community.</link></para></tip>
<xi:include
href="../../../common/tables/cinder-netapp_7mode_nfs.xml"/>
<tip>
<para>For more information on these options and
other deployment and operational scenarios,
visit the <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community.</link></para>
</tip>
</simplesect>
</section>
</section>
<section xml:id="ontap-unified-upgrade-deprecated">
<title>Upgrading prior NetApp drivers to the NetApp unified driver</title>
<title>Upgrading prior NetApp drivers to the NetApp unified
driver</title>
<para>NetApp introduced a new unified block storage driver in
Havana for configuring different storage families and storage
protocols. This requires defining upgrade path for NetApp
drivers which existed in releases prior to Havana.
Havana for configuring different storage families and
storage protocols. This requires defining upgrade path for
NetApp drivers which existed in releases prior to Havana.
This section covers the upgrade configuration for NetApp
drivers to the new unified configuration and a list of
deprecated NetApp drivers.</para>
<section xml:id="ontap-unified-upgrade">
<title>Upgraded NetApp drivers</title>
<para>This section describes how to update Cinder
configuration from a pre-Havana release to the new unified
driver format.</para>
configuration from a pre-Havana release to the new
unified driver format.</para>
<simplesect>
<title>Driver upgrade configuration</title>
<orderedlist>
@@ -286,9 +341,9 @@ netapp_storage_protocol=nfs
</programlisting>
</listitem>
<listitem>
<para>NetApp iSCSI direct driver for Data ONTAP
operating in 7-Mode storage controller in
Grizzly (or earlier)</para>
<para>NetApp iSCSI direct driver for Data
ONTAP operating in 7-Mode storage
controller in Grizzly (or earlier)</para>
<programlisting language="ini">
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
</programlisting>
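Under the unified driver, that Grizzly-era 7-Mode iSCSI setting maps onto the family and protocol options. A sketch of the equivalent stanza, assuming the unified-driver option values used elsewhere in this section:

```ini
; Hypothetical unified-driver equivalent of the deprecated
; NetAppDirect7modeISCSIDriver configuration shown above.
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
```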
@@ -321,8 +376,8 @@ netapp_storage_protocol=nfs
</section>
<section xml:id="ontap-driver-deprecate">
<title>Deprecated NetApp drivers</title>
<para>This section lists the NetApp drivers in previous releases
that are deprecated in Havana.</para>
<para>This section lists the NetApp drivers in previous
releases that are deprecated in Havana.</para>
<orderedlist>
<listitem>
<para>NetApp iSCSI driver for clustered Data
@@ -339,15 +394,15 @@ volume_driver=cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
</programlisting>
</listitem>
<listitem>
<para>NetApp iSCSI driver for Data ONTAP operating in
7-Mode storage controller.</para>
<para>NetApp iSCSI driver for Data ONTAP operating
in 7-Mode storage controller.</para>
<programlisting language="ini">
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS driver for Data ONTAP operating in
7-Mode storage controller.</para>
<para>NetApp NFS driver for Data ONTAP operating
in 7-Mode storage controller.</para>
<programlisting language="ini">
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
</programlisting>
@@ -356,9 +411,9 @@ volume_driver=cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
<note>
<para>See the <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for
support information on deprecated NetApp
drivers in the Havana release.</para>
>OpenStack NetApp community</link> for support
information on deprecated NetApp drivers in the
Havana release.</para>
</note>
</section>
</section>


@@ -1,39 +1,54 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_modifying_images">
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_modifying_images">
<title>Modify images</title>
<?dbhtml stop-chunking?>
<para>Once you have obtained a virtual machine image, you may want to make some changes to it
before uploading it to the OpenStack Image service. Here we describe several tools available
that allow you to modify images.<warning>
<para>Do not attempt to use these tools to modify an image that is attached to a running
virtual machine. These tools are designed to only modify images that are not
<para>Once you have obtained a virtual machine image, you may want
to make some changes to it before uploading it to the
OpenStack Image service. Here we describe several tools
available that allow you to modify images.<warning>
<para>Do not attempt to use these tools to modify an image
that is attached to a running virtual machine. These
tools are designed to only modify images that are not
currently running.</para>
</warning></para>
<section xml:id="guestfish">
<title>guestfish</title>
<para>The <command>guestfish</command> program is a tool from the <link
xlink:href="http://libguestfs.org/">libguestfs</link> project that allows you to
modify the files inside of a virtual machine image.</para>
<para>Note that guestfish doesn't mount the image directly into the local filesystem.
Instead, it provides you with a shell interface that allows you to view, edit, and
delete files. Many of the guestfish commands (e.g., <command>touch</command>,
<command>chmod</command>, <command>rm</command>) are similar to traditional bash
commands.</para>
<para>The <command>guestfish</command> program is a tool from
the <link xlink:href="http://libguestfs.org/"
>libguestfs</link> project that allows you to modify
the files inside of a virtual machine image.</para>
<note>
<para><command>guestfish</command> does not mount the
image directly into the local file system. Instead, it
provides you with a shell interface that enables you
to view, edit, and delete files. Many of
<command>guestfish</command> commands, such as
<command>touch</command>,
<command>chmod</command>, and <command>rm</command>,
resemble traditional bash commands.</para>
</note>
<simplesect>
<title>Example guestfish session</title>
<para>Sometimes, you must modify a virtual machine image
to remove any traces of the MAC address that was
assigned to the virtual network interface card when
the image was first created, because the MAC address
will be different when it boots the next time. This
example shows how to use guestfish to remove
references to the old MAC address by deleting the
<filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
file and removing the <literal>HWADDR</literal> line
from the
<filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename>
file.</para>
<para>Assume that you have a CentOS qcow2 image called
<filename>centos63_desktop.img</filename>. Mount
the image in read-write mode as root, as
follows:</para>
<screen><prompt>#</prompt> <userinput>guestfish --rw -a centos63_desktop.img</userinput>
<computeroutput>
Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
>&lt;fs></computeroutput></screen>
<para>This starts a guestfish session. Note that the
guestfish prompt looks like a fish: <literal>>
&lt;fs></literal>.</para>
<para>We must first use the <command>run</command> command
at the guestfish prompt before we can do anything
else. This will launch a virtual machine, which will
be used to perform all of the file
manipulations.<screen><prompt>>&lt;fs></prompt> <userinput>run</userinput></screen>
We can now view the file systems in the image using the
<command>list-filesystems</command>
command:<screen><prompt>>&lt;fs></prompt> <userinput>list-filesystems</userinput>
<computeroutput>/dev/vda1: ext4
/dev/vg_centosbase/lv_root: ext4
/dev/vg_centosbase/lv_swap: swap</computeroutput></screen>We
need to mount the logical volume that contains the
root partition:
<screen><prompt>>&lt;fs></prompt> <userinput>mount /dev/vg_centosbase/lv_root /</userinput></screen></para>
<para>Next, we want to delete a file. We can use the
<command>rm</command> guestfish command, which
works the same way it does in a traditional
shell.</para>
<para><screen><prompt>>&lt;fs></prompt> <userinput>rm /etc/udev/rules.d/70-persistent-net.rules</userinput></screen>We
want to edit the <filename>ifcfg-eth0</filename> file
to remove the <literal>HWADDR</literal> line. The
<command>edit</command> command will copy the file
to the host, invoke your editor, and then copy the
file back.
<screen><prompt>>&lt;fs></prompt> <userinput>edit /etc/sysconfig/network-scripts/ifcfg-eth0</userinput></screen></para>
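The same HWADDR cleanup can also be scripted rather than done in an interactive editor. The sketch below applies the equivalent sed expression to a local copy of the file; the sample contents and MAC address are illustrative, not taken from a real guest. (Against an image directly, virt-edit's <literal>-e</literal> option accepts a similar non-interactive expression.)

```shell
# Illustrative copy of ifcfg-eth0 (sample contents; a real guest's
# file will differ).
cat > ifcfg-eth0 <<'EOF'
DEVICE="eth0"
BOOTPROTO="dhcp"
HWADDR="52:54:00:12:34:56"
ONBOOT="yes"
EOF

# Drop any HWADDR line in place, which is what the interactive edit
# above accomplishes.
sed -i '/^HWADDR/d' ifcfg-eth0
cat ifcfg-eth0
```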
                <para>If you want to modify this image to load the 8021q
                    kernel module at boot time, you must create an executable
                    script in the <filename>/etc/sysconfig/modules/</filename>
                    directory and add the <command>modprobe</command> line to
                    it:<programlisting>modprobe 8021q</programlisting>Then
                    we mark the script as executable:
<screen>>&lt;fs> <userinput>chmod 0755 /etc/sysconfig/modules/8021q.modules</userinput></screen></para>
<para>We're done, so we can exit using the
<command>exit</command>
command:<screen><prompt>>&lt;fs></prompt> <userinput>exit</userinput></screen></para>
</simplesect>
<simplesect>
<title>Go further with guestfish</title>
<para>There is an enormous amount of functionality in
guestfish and a full treatment is beyond the scope of
this document. Instead, we recommend that you read the
<link
xlink:href="http://libguestfs.org/guestfs-recipes.1.html"
>guestfs-recipes</link> documentation page for a
sense of what is possible with these tools.</para>
</simplesect>
</section>
<section xml:id="guestmount">
<title>guestmount</title>
<para>For some types of changes, you may find it easier to
mount the image's file system directly in the guest. The
<command>guestmount</command> program, also from the
libguestfs project, allows you to do so.</para>
<para>For example, to mount the root partition from our
<filename>centos63_desktop.qcow2</filename> image to
<filename>/mnt</filename>, we can do:</para>
<para>
<screen><prompt>#</prompt> <userinput>guestmount -a centos63_desktop.qcow2 -m /dev/vg_centosbase/lv_root --rw /mnt</userinput></screen>
</para>
        <para>If we didn't know in advance what the mount point is in
            the guest, we could use the <literal>-i</literal> (inspect)
            flag to tell guestmount to automatically determine what
            mount point to
use:<screen><prompt>#</prompt> <userinput>guestmount -a centos63_desktop.qcow2 -i --rw /mnt</userinput></screen>Once
mounted, we could do things like list the installed
packages using
rpm:<screen><prompt>#</prompt> <userinput>rpm -qa --dbpath /mnt/var/lib/rpm</userinput></screen>
Once done, we
unmount:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput></screen></para>
</section>
<section xml:id="virt-tools">
<title>virt-* tools</title>
<para>The <link xlink:href="http://libguestfs.org/"
>libguestfs</link> project has a number of other
useful tools, including:<itemizedlist>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-edit.1.html"
>virt-edit</link> for editing a file
inside of an image.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-df.1.html"
>virt-df</link> for displaying free space
inside of an image.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-resize.1.html"
>virt-resize</link> for resizing an
image.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-sysprep.1.html"
>virt-sysprep</link> for preparing an
image for distribution (for example, delete
SSH host keys, remove MAC address info, or
remove user accounts).</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-sparsify.1.html"
>virt-sparsify</link> for making an image
                            sparse.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-v2v/"
>virt-p2v</link> for converting a physical
                            machine to an image that runs on KVM.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-v2v/"
>virt-v2v</link> for converting Xen and
                            VMware images to KVM images.</para>
</listitem>
</itemizedlist></para>
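As a sketch of how several of those cleanup steps can be combined, virt-sysprep can run a chosen subset of its operations against an image. The operation names below are assumptions based on typical libguestfs builds; confirm what your version supports with <command>virt-sysprep --list-operations</command>.

```shell
# Remove SSH host keys, hard-coded MAC address information, and
# non-system user accounts before distributing an image (operation
# names vary between libguestfs versions; this is a sketch).
virt-sysprep -a centos63_desktop.img \
    --enable ssh-hostkeys,net-hwaddr,user-account
```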
<simplesect>
<title>Modify a single file inside of an image</title>
<para>This example shows how to use
<command>virt-edit</command> to modify a file. The
command can take either a filename as an argument with
the <literal>-a</literal> flag, or a domain name as an
argument with the <literal>-d</literal> flag. The
                following example shows how to use this to modify the
                <filename>/etc/shadow</filename> file in an instance
                with the libvirt domain name
<literal>instance-000000e1</literal> that is
currently running:</para>
<para>
<screen><prompt>#</prompt> <userinput>virsh shutdown instance-000000e1</userinput>
<prompt>#</prompt> <userinput>virt-edit -d instance-000000e1 /etc/shadow</userinput>
</screen>
            </para>
</simplesect>
<simplesect>
<title>Resize an image</title>
            <para>Here's a simple example of how to use
                <command>virt-resize</command> to resize an image.
                Assume we have a 16 GB Windows image in qcow2 format
                that we want to resize to 50 GB. First, we use
                <command>virt-filesystems</command> to identify
                the
partitions:<screen><prompt>#</prompt> <userinput>virt-filesystems --long --parts --blkdevs -h -a /data/images/win2012.qcow2</userinput>
<computeroutput>Name Type MBR Size Parent
/dev/sda1 partition 07 350M /dev/sda
/dev/sda2 partition 07 16G /dev/sda
/dev/sda device - 16G -
</computeroutput></screen></para>
<para>In this case, it's the
<filename>/dev/sda2</filename> partition that we
want to resize. We create a new qcow2 image and use
the <command>virt-resize</command> command to write a
resized copy of the original into the new
                image:<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /data/images/win2012-50gb.qcow2 50G</userinput>
<prompt>#</prompt> <userinput>virt-resize --expand /dev/sda2 /data/images/win2012.qcow2 /data/images/win2012-50gb.qcow2</userinput>
<computeroutput>Examining /data/images/win2012.qcow2 ...
</computeroutput></screen></para>
                <para>After resizing the
                    disk, carefully check that the resized disk boots and
                    works correctly.</para>
            </simplesect>
</section>
<section xml:id="losetup-kpartx-nbd">
<title>Loop devices, kpartx, network block devices</title>
<para>If you don't have access to libguestfs, you can mount
image file systems directly in the host using loop
devices, kpartx, and network block devices.<warning>
<para>Mounting untrusted guest images using the tools
                    described in this section is a security risk; always
                    use libguestfs tools such as guestfish and
guestmount if you have access to them. See <link
xlink:href="https://www.berrange.com/posts/2013/02/20/a-reminder-why-you-should-never-mount-guest-disk-images-on-the-host-os/"
>A reminder why you should never mount guest
disk images on the host OS</link> by Daniel
Berrangé for more details.</para>
</warning></para>
<simplesect>
<title>Mount a raw image (without LVM)</title>
                <para>If you have a raw virtual machine image that is not
                    using LVM to manage its partitions, first use the
                    <command>losetup</command> command to find an
                    unused loop device.
<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<computeroutput>/dev/loop0</computeroutput></screen></para>
<para>In this example, <filename>/dev/loop0</filename> is
free. Associate a loop device with the raw
image:<screen><prompt>#</prompt> <userinput>losetup /dev/loop0 fedora17.img</userinput></screen></para>
<para>If the image only has a single partition, you can
mount the loop device
directly:<screen><prompt>#</prompt> <userinput>mount /dev/loop0 /mnt</userinput></screen></para>
<para>If the image has multiple partitions, use
<command>kpartx</command> to expose the partitions
as separate devices (for example,
<filename>/dev/mapper/loop0p1</filename>), then
mount the partition that corresponds to the root file
system:<screen><prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen></para>
                <para>If the image has, say, three partitions (/boot, /,
                    /swap), there should be one new device created per
partition:<screen><prompt>$</prompt> <userinput>ls -l /dev/mapper/loop0p*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/mapper/loop0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/mapper/loop0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/mapper/loop0p3</computeroutput></screen></para>
</simplesect>
<simplesect>
<title>Mount a raw image (with LVM)</title>
<para>If your partitions are managed with LVM, use losetup
and kpartx as in the previous example to expose the
partitions to the host:</para>
<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<computeroutput>/dev/loop0</computeroutput>
<prompt>#</prompt> <userinput>losetup /dev/loop0 rhel62.img</userinput>
<prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen>
<para>Next, you need to use the <command>vgscan</command>
command to identify the LVM volume groups and then
<command>vgchange</command> to expose the volumes
as devices:</para>
<screen><prompt>#</prompt> <userinput>vgscan</userinput>
<computeroutput>Reading all physical volumes. This may take a while...
Found volume group "vg_rhel62x8664" using metadata type lvm2</computeroutput>
<prompt>#</prompt> <userinput>vgchange -ay</userinput>
<computeroutput> 2 logical volume(s) in volume group "vg_rhel62x8664" now active</computeroutput>
<prompt>#</prompt> <userinput>mount /dev/vg_rhel62x8664/lv_root /mnt</userinput></screen>
<para>Clean up when you're done:</para>
<screen><prompt>#</prompt> <userinput>umount /mnt</userinput>
<prompt>#</prompt> <userinput>vgchange -an vg_rhel62x8664</userinput>
<prompt>#</prompt> <userinput>kpartx -d /dev/loop0</userinput>
<prompt>#</prompt> <userinput>losetup -d /dev/loop0</userinput></screen>
</simplesect>
<simplesect>
<title>Mount a qcow2 image (without LVM)</title>
<para>You need the <literal>nbd</literal> (network block
device) kernel module loaded to mount qcow2 images.
This will load it with support for 16 block devices,
which is fine for our purposes. As
root:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=16</userinput></screen></para>
<para>Assuming the first block device
(<filename>/dev/nbd0</filename>) is not currently
in use, we can expose the disk partitions using the
<command>qemu-nbd</command> and
<command>partprobe</command> commands. As
root:<screen><prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd0 image.qcow2</userinput>
<prompt>#</prompt> <userinput>partprobe /dev/nbd0</userinput></screen></para>
                <para>If the image has, say, three partitions (/boot, /,
                    /swap), there should be one new device created for
                    each partition:</para>
                <screen><prompt>$</prompt> <userinput>ls -l /dev/nbd0*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 48 2012-03-05 15:32 /dev/nbd0
brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/nbd0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/nbd0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/nbd0p3</computeroutput></screen>
                <note>
                    <para>If the network block device you selected was
                        already in use, the initial
                        <command>qemu-nbd</command> command will fail
                        silently, and the
                        <filename>/dev/nbd0p{1,2,3}</filename> device
                        files will not be created.</para>
                </note>
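One way to guard against that silent failure is to wait for the nbd driver to confirm an attached client before probing partitions. This sketch assumes the driver exposes the connection state as /sys/block/nbd0/pid, which is how the Linux nbd module typically behaves:

```shell
# Attach the image, then wait briefly for the kernel to register the
# connection; the pid file appears only once a client is attached.
qemu-nbd -c /dev/nbd0 image.qcow2
for i in 1 2 3 4 5; do
    [ -e /sys/block/nbd0/pid ] && break
    sleep 1
done
if [ ! -e /sys/block/nbd0/pid ]; then
    echo "/dev/nbd0 appears to be in use already" >&2
    exit 1
fi
partprobe /dev/nbd0
```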
<para>If the image partitions are not managed with LVM,
they can be mounted directly:</para>
<screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/nbd0p2 /mnt/image</userinput></screen>
                <para>When you're done, clean up:</para>
                <screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd0</userinput></screen>
</simplesect>
<simplesect>
<title>Mount a qcow2 image (with LVM)</title>
<para>If the image partitions are managed with LVM, after
you use <command>qemu-nbd</command> and
<command>partprobe</command>, you must use
<command>vgscan</command> and <command>vgchange
-ay</command> in order to expose the LVM
partitions as devices that can be
mounted:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=16</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd0 image.qcow2</userinput>
<prompt>#</prompt> <userinput>partprobe /dev/nbd0</userinput><prompt>#</prompt> <userinput>vgscan</userinput>
<prompt>#</prompt> <userinput>vgchange -ay</userinput>
<computeroutput> 2 logical volume(s) in volume group "vg_rhel62x8664" now active</computeroutput>
<prompt>#</prompt> <userinput>mount /dev/vg_rhel62x8664/lv_root /mnt</userinput></screen></para>
<para>When you're done, clean
up:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput>
<prompt>#</prompt> <userinput>vgchange -an vg_rhel62x8664</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd0</userinput></screen></para>

xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_openstack_images">
<title>OpenStack Linux image requirements</title>
<?dbhtml stop-chunking?>
<para>For a Linux-based image to have full functionality in an
OpenStack Compute cloud, there are a few requirements. For
some of these, the requirement can be fulfilled by installing
the <link
xlink:href="https://cloudinit.readthedocs.org/en/latest/"
>cloud-init</link> package. You should read this section
before creating your own image to be sure that the image
supports the OpenStack features you plan on using.<itemizedlist>
<listitem>
<para>Disk partitions and resize root partition on
boot (cloud-init)</para>
</listitem>
<listitem>
<para>No hard-coded MAC address information</para>
            </listitem>
            <listitem>
<para>Disable firewall</para>
</listitem>
<listitem>
<para>Access instance using ssh public key
(cloud-init)</para>
</listitem>
<listitem>
<para>Process user data and other metadata
(cloud-init)</para>
</listitem>
<listitem>
<para>Paravirtualized Xen support in Linux kernel (Xen
hypervisor only with Linux kernel version &lt;
3.0)</para>
</listitem>
</itemizedlist></para>
<section xml:id="support-resizing">
<title>Disk partitions and resize root partition on boot
(cloud-init)</title>
<para>When you create a new Linux image, the first decision
you will need to make is how to partition the disks. The
choice of partition method can affect the resizing
functionality, as described below.</para>
<para>The size of the disk in a virtual machine image is
determined when you initially create the image. However,
OpenStack lets you launch instances with different size
drives by specifying different flavors. For example, if
your image was created with a 5 GB disk, and you launch an
instance with a flavor of <literal>m1.small</literal>. The
resulting virtual machine instance has, by default,
a primary disk size of 10 GB. When an instance's disk is resized
up, zeros are just added to the end.</para>
<para>Your image needs to be able to resize its partitions on
boot to match the size requested by the user. Otherwise,
after the instance boots, you will need to manually resize
the partitions if you want to access the additional
storage you have access to when the disk size associated
with the flavor exceeds the disk size your image was
created with.</para>
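As an illustration of what resizing on boot amounts to, the cloud-utils tooling discussed in this section essentially performs the following two steps early in boot. The device and partition names are assumptions (a single ext3/ext4 root on /dev/vda1); this is a sketch of the mechanism, not a substitute for cloud-init:

```shell
# Grow partition 1 of /dev/vda to consume the free space the larger
# flavor provides, then grow the filesystem to fill the partition.
growpart /dev/vda 1
resize2fs /dev/vda1
```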
<simplesect>
<title>Xen: 1 ext3/ext4 partition (no LVM, no /boot, no
swap)</title>
<para>If you are using the OpenStack XenAPI driver, the
Compute service will automatically adjust the
partition and filesystem for your instance on boot.
Automatic resize will occur if the following are all true:<itemizedlist>
<listitem>
<para><literal>auto_disk_config=True</literal>
is set as a property on the image in the
Image Registry.</para>
</listitem>
<listitem>
<para>The disk on the image has only one
partition.</para>
</listitem>
<listitem>
<para>The file system on the one partition is
ext3 or ext4.</para>
</listitem>
</itemizedlist></para>
<para>Therefore, if you are using Xen, we recommend that
when you create your images, you create a single ext3
or ext4 partition (not managed by LVM). Otherwise,
read on.</para>
</simplesect>
<simplesect>
<title>Non-Xen with cloud-init/cloud-tools: 1 ext3/ext4
partition (no LVM, no /boot, no swap)</title>
<para>Your image must be configured to deal with two issues:<itemizedlist>
<listitem>
<para>The image's partition table describes
the original size of the image</para>
</listitem>
<listitem>
<para>The image's filesystem fills the
original size of the image</para>
</listitem>
</itemizedlist></para>
<para>Then, during the boot process:<itemizedlist>
<listitem>
<para>the partition table must be modified to
be made aware of the additional space<itemizedlist>
<listitem>
<para>If you are not using LVM, you
must modify the table to extend the
existing root partition to
encompass this additional
space</para>
</listitem>
<listitem>
<para>If you are using LVM, you can
                                    add a new LVM entry to the
partition table, create a new LVM
physical volume, add it to the
volume group, and extend the
logical partition with the root
volume</para>
</listitem>
</itemizedlist></para>
</listitem>
<listitem>
<para>the root volume filesystem must be
resized</para>
</listitem>
</itemizedlist></para>
<para>The simplest way to support this in your image is to
install the <link
xlink:href="https://launchpad.net/cloud-utils"
>cloud-utils</link> package (contains the
<command>growpart</command> tool for extending
partitions), the <link
xlink:href="https://launchpad.net/cloud-initramfs-tools"
>cloud-initramfs-tools</link> package (which will
support resizing root partition on the first boot),
and the <link
xlink:href="https://launchpad.net/cloud-init"
>cloud-init</link> package into your image. With
these installed, the image will perform the root
partition resize on boot. For example, in the
<filename>/etc/rc.local</filename> file. These
packages are in the Ubuntu and Debian package
repository, as well as the EPEL repository (for
Fedora/RHEL/CentOS/Scientific Linux guests).</para>
<para>If you are not able to install
<literal>cloud-initramfs-tools</literal>, Robert
Plestenjak has a github project called <link
xlink:href="https://github.com/flegmatik/linux-rootfs-resize"
>linux-rootfs-resize</link> that contains scripts
that will update a ramdisk using
<command>growpart</command> so that the image will
resize properly on boot.</para>
<para>If you are able to install the cloud-utils and
cloud-init packages, we recommend that when you create
your images, you create a single ext3 or ext4
partition (not managed by LVM).</para>
</simplesect>
<simplesect>
<title>Non-Xen without cloud-init/cloud-tools: LVM</title>
<para>If you cannot install cloud-init and cloud-tools
    inside of your guest, and you want to support resize,
    you will need to write a script that your image will
    run on boot to modify the partition table. In this
    case, we recommend using LVM to manage your
    partitions. Due to a limitation in the Linux kernel
    (as of this writing), you cannot modify a partition
    table of a raw disk that has a partition currently
    mounted, but you can do this for LVM.</para>
<para>Your script will need to do something like the following:<orderedlist>
<listitem>
<para>Detect if any additional space is
    available on the disk. For example, parse
    the output of <command>parted /dev/sda
    --script "print free"</command>.</para>
</listitem>
<listitem>
<para>Create a new LVM partition with the
    additional space. For example,
    <command>parted /dev/sda --script
    "mkpart lvm ..."</command>.</para>
</listitem>
<listitem>
<para>Create a new physical volume. For
    example, <command>pvcreate
    /dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
<para>Extend the volume group with this
    physical partition. For example,
    <command>vgextend
    <replaceable>vg00</replaceable>
    /dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
<para>Extend the logical volume containing the
    root partition by the amount of space. For
    example, <command>lvextend
    /dev/mapper/<replaceable>node-root</replaceable>
    /dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
<para>Resize the root file system. For
    example, <command>resize2fs
    /dev/mapper/<replaceable>node-root</replaceable></command>.</para>
</listitem>
</orderedlist></para>
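<para>The steps above can be sketched as a small first-boot shell
    script. This is a minimal, illustrative sketch only: the device,
    volume group, and logical volume names are the same placeholders
    used in the list, step 1 and the <command>mkpart</command>
    arguments are left as comments because they depend on your disk
    layout, and a <literal>DRY_RUN</literal> switch lets you preview
    the commands without touching a disk.</para>
<programlisting language="bash">
```shell
#!/bin/sh
# Sketch of a first-boot resize script following the steps above.
# Device/VG/LV names are illustrative placeholders; DRY_RUN=1 only
# prints each command instead of executing it.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

grow_root() {
    disk=$1 part=$2 vg=$3 lv=$4
    # 1. Detect additional space, e.g. by parsing the output of
    #    `parted "$disk" --script "print free"` (omitted here).
    # 2. Create a new LVM partition in the free space.
    run parted "$disk" --script "mkpart lvm ..."
    # 3. Turn the new partition into a physical volume.
    run pvcreate "$part"
    # 4. Add the physical volume to the volume group.
    run vgextend "$vg" "$part"
    # 5. Grow the root logical volume into the new space.
    run lvextend "/dev/mapper/$lv" "$part"
    # 6. Grow the root file system to match.
    run resize2fs "/dev/mapper/$lv"
}

# Preview the commands without touching any disk:
DRY_RUN=1 grow_root /dev/sda /dev/sda6 vg00 node-root
```
</programlisting>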
<para>You do not need to have a <filename>/boot</filename>
    partition, unless your image is an older Linux
    distribution that requires that
    <filename>/boot</filename> is not managed by LVM.
    You may elect to use a swap partition.</para>
</simplesect>
</section>
<section xml:id="mac-adddress">
    <title>No hard-coded MAC address information</title>
    <para>You must remove the network persistence rules in the
        image as their presence will result in the network
        interface in the instance coming up as an interface other
        than eth0. This is because your image has a record of the
        MAC address of the network interface card when it was
        first installed, and this MAC address will be different
        each time the instance boots up. You should alter the
        following files:<itemizedlist>
<listitem>
<para>Replace
    <filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
    with an empty file (contains network
    persistence rules, including MAC
address)</para>
</listitem>
<listitem>
<para>Replace
<filename>/lib/udev/rules.d/75-persistent-net-generator.rules</filename>
with an empty file (this generates the file
above)</para>
</listitem>
<listitem>
<para>Remove the HWADDR line from
<filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename>
on Fedora-based images</para>
</listitem>
</itemizedlist><note>
<para>If you delete the network persistent rules
    files, you may get a udev kernel warning at boot
    time, which is why we recommend replacing them
    with empty files instead.</para>
</note></para>
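<para>A sketch of these edits as shell commands. To keep the
    example safe to run, it operates on a scratch copy of the guest
    file system tree (the <literal>ROOT</literal> prefix) with
    fabricated sample contents; inside the real guest, or against a
    mounted image, you would point <literal>ROOT</literal> at the
    root of that tree instead.</para>
<programlisting language="bash">
```shell
#!/bin/sh
# Sketch: blank the network persistence rules in a guest image.
# ROOT here is a scratch directory with fabricated sample files so the
# example is safe to try; in a real guest, ROOT would be "" (i.e. /).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/udev/rules.d" "$ROOT/lib/udev/rules.d" \
         "$ROOT/etc/sysconfig/network-scripts"
echo 'SUBSYSTEM=="net", ATTR{address}=="52:54:00:12:34:56", NAME="eth0"' \
    > "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
printf 'DEVICE=eth0\nHWADDR=52:54:00:12:34:56\nBOOTPROTO=dhcp\n' \
    > "$ROOT/etc/sysconfig/network-scripts/ifcfg-eth0"

# Replace both rules files with empty files (emptying rather than
# deleting avoids the udev boot-time warning noted above), and strip
# the HWADDR line on Fedora-based guests.
: > "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
: > "$ROOT/lib/udev/rules.d/75-persistent-net-generator.rules"
sed -i '/^HWADDR=/d' "$ROOT/etc/sysconfig/network-scripts/ifcfg-eth0"
```
</programlisting>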
</section>
<section xml:id="ensure-ssh-server">
<title>Ensure ssh server runs</title>
<para>You must install an ssh server into the image and ensure
    that it starts up on boot, or you will not be able to
    connect to your instance using ssh when it boots inside of
    OpenStack. This package is typically called
    <literal>openssh-server</literal>.</para>
</section>
<section xml:id="disable-firewall">
<title>Disable firewall</title>
<para>In general, we recommend that you disable any firewalls
    inside of your image and use OpenStack security groups to
    restrict access to instances. The reason is that having a
    firewall installed on your instance can make it more
    difficult to troubleshoot networking issues if you cannot
    connect to your instance.</para>
</section>
<section xml:id="ssh-public-key">
<title>Access instance using ssh public key
    (cloud-init)</title>
<para>The typical way that users access virtual machines
    running on OpenStack is to ssh using public key
    authentication. For this to work, your virtual machine
    image must be configured to download the ssh public key
    from the OpenStack metadata service or config drive, at
    boot time.</para>
<simplesect>
<title>Use cloud-init to fetch the public key</title>
<para>The cloud-init package will automatically fetch the
    public key from the metadata server and place the key
    in an account. The account varies by distribution. On
    Ubuntu-based virtual machines, the account is called
    "ubuntu". On Fedora-based virtual machines, the
    account is called "ec2-user".</para>
<para>You can change the name of the account used by
cloud-init by editing the
<filename>/etc/cloud/cloud.cfg</filename> file and
adding a line with a different user. For example, to
configure cloud-init to put the key in an account
named <literal>admin</literal>, edit the configuration
file so it has the line:</para>
<programlisting>user: admin</programlisting>
</simplesect>
<simplesect>
<title>Write a custom script to fetch the public
    key</title>
<para>If you are unable or unwilling to install cloud-init
    inside the guest, you can write a custom script to
    fetch the public key and add it to a user account.</para>
<para>To fetch the ssh public key and add it to the root
    account, edit the <filename>/etc/rc.local</filename>
    file and add the following lines before the line
    “touch /var/lock/subsys/local”. This code fragment is
    taken from the <link
    xlink:href="https://github.com/rackerjoe/oz-image-build/blob/master/templates/centos60_x86_64.tdl"
    >rackerjoe oz-image-build CentOS 6
    template</link>.</para>
<programlisting language="bash">if [ ! -d /root/.ssh ]; then
mkdir -p /root/.ssh
chmod 700 /root/.ssh
@@ -269,9 +343,11 @@ while [ ! -f /root/.ssh/authorized_keys ]; do
fi
done</programlisting>
<note>
<para>Some VNC clients replace : (colon) with ;
    (semicolon) and _ (underscore) with - (hyphen). If
    editing a file over a VNC session, make sure it's
    http: not http; and authorized_keys not
    authorized-keys.</para>
</note>
@@ -279,62 +355,89 @@ done</programlisting>
</section>
<section xml:id="metadata">
<title>Process user data and other metadata
    (cloud-init)</title>
<para>In addition to the ssh public key, an image may need to
    retrieve additional information from OpenStack, such as
    <link
    xlink:href="http://docs.openstack.org/user-guide/content/user-data.html"
    >user data</link> that the user submitted when
    requesting the image. For example, you might want to set
    the host name of the instance to the name given to the
    instance when it is booted. Or, you might wish to
    configure your image so that it executes user data
    content as a script on boot.</para>
<para>This information is accessible through the metadata
    service or the <link
    xlink:href="http://docs.openstack.org/user-guide/content/config-drive.html"
    >config drive</link>. As the OpenStack metadata
    service is compatible with version 2009-04-04 of the
    Amazon EC2 metadata service, consult the Amazon EC2
    documentation on <link
    xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
    >Using Instance Metadata</link> for details on how to
    retrieve user data.</para>
<para>The easiest way to support this type of functionality is
    to install the cloud-init package into your image, which
    is configured by default to treat user data as an
    executable script, and will set the host name.</para>
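<para>For example, a minimal user-data script that cloud-init
    would execute on the instance's first boot. The script body and
    marker path are invented for illustration:</para>
<programlisting language="bash">
```shell
#!/bin/sh
# Illustrative user-data script: with cloud-init installed in the
# image, this runs once on the instance's first boot.  The marker
# path is an arbitrary example.
echo "user data ran at $(date)" > /tmp/user-data-marker
```
</programlisting>
<para>You would supply such a script when launching the instance,
    for example with <command>nova boot --user-data
    userdata.txt ...</command>.</para>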
</section>
<section xml:id="write-to-console">
<title>Ensure image writes boot log to console</title>
<para>You must configure the image so that the kernel writes
    the boot log to the <literal>ttyS0</literal> device. In
    particular, the <literal>console=ttyS0</literal> argument
    must be passed to the kernel on boot.</para>
<para>If your image uses grub2 as the boot loader, there
    should be a line in the grub configuration file (for
    example, <filename>/boot/grub/grub.cfg</filename>) that
    looks something like this:</para>
<programlisting>linux /boot/vmlinuz-3.2.0-49-virtual root=UUID=6d2231e4-0975-4f35-a94f-56738c1a8150 ro console=ttyS0</programlisting>
<para>If <literal>console=ttyS0</literal> does not appear, you
    will need to modify your grub configuration. In general,
    you should not update the grub.cfg directly, since it is
    automatically generated. Instead, you should edit
    <filename>/etc/default/grub</filename> and modify the
    value of the <literal>GRUB_CMDLINE_LINUX_DEFAULT</literal>
    variable:
    <programlisting language="bash">GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"</programlisting></para>
<para>Next, update the grub configuration. On Debian-based
    operating systems such as Ubuntu,
do:<screen><prompt>$</prompt> <userinput>sudo update-grub</userinput></screen></para>
<para>On Fedora-based systems such as RHEL and CentOS,
do:<screen><prompt>$</prompt> <userinput>sudo grub2-mkconfig -o /boot/grub2/grub.cfg</userinput></screen></para>
</section>
<section xml:id="image-xen-pv">
    <title>Paravirtualized Xen support in the kernel (Xen
        hypervisor only)</title>
    <para>Prior to Linux kernel version 3.0, the mainline branch
        of the Linux kernel did not have support for
        paravirtualized Xen virtual machine instances (what Xen
        calls DomU guests). If you are running the Xen hypervisor
        with paravirtualization, and you want to create an image
        for an older Linux distribution that has a pre-3.0
        kernel, you will need to ensure that the image boots a
        kernel that has been compiled with Xen support.</para>
</section>
<section xml:id="image-cache-management">
<title>Manage the image cache</title>
<para>Use options in <filename>nova.conf</filename> to control
    whether, and for how long, unused base images are stored
    in <filename>/var/lib/nova/instances/_base/</filename>. If
    you have configured live migration of instances, all your
    compute nodes share one common
    <filename>/var/lib/nova/instances/</filename>
    directory.</para>
<para>For information about libvirt images in OpenStack, refer
    to <link
    xlink:href="http://www.pixelbeat.org/docs/openstack_libvirt_images/"
    >The life of an OpenStack libvirt image from Pádraig
    Brady</link>.</para>
<table rules="all">
<caption>Image cache management configuration
    options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
@@ -346,43 +449,58 @@ done</programlisting>
<tbody>
<tr>
<td>preallocate_images=none</td>
<td>(StrOpt) VM image preallocation mode: "none"
    =&gt; no storage provisioning is done up
    front, "space" =&gt; storage is fully
    allocated at instance start. If this is set to
    'space', the $instance_dir/ images will be
    <link
    xlink:href="http://www.kernel.org/doc/man-pages/online/pages/man2/fallocate.2.html"
    >fallocate</link>d to immediately
    determine if enough space is available, and to
    possibly improve VM I/O performance due to
    ongoing allocation avoidance, and better
    locality of block allocations.</td>
</tr>
<tr>
<td>remove_unused_base_images=True</td>
<td>(BoolOpt) Should unused base images be
    removed? When set to True, the interval at
    which base images are removed is set with the
    following two settings. If set to False, base
    images are never removed by Compute.</td>
</tr>
<tr>
<td>remove_unused_original_minimum_age_seconds=86400</td>
<td>(IntOpt) Unused unresized base images younger
    than this will not be removed. Default is
    86400 seconds, or 24 hours.</td>
</tr>
<tr>
<td>remove_unused_resized_minimum_age_seconds=3600</td>
<td>(IntOpt) Unused resized base images younger
    than this will not be removed. Default is 3600
    seconds, or one hour.</td>
</tr>
</tbody>
</table>
<para>To see how the settings affect the deletion of a running
    instance, check the directory where the images are
    stored:</para>
<screen><prompt>$</prompt> <userinput>sudo ls -lash /var/lib/nova/instances/_base/</userinput></screen>
<para>Then look for the identifier in
<filename>/var/log/compute/compute.log</filename>:</para>
<screen><computeroutput>2012-02-18 04:24:17 41389 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810
a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810
a0d1d5d3 /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removing base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3</computeroutput></screen>
<para>Since 86400 seconds (24 hours) is the default time for
<literal>remove_unused_original_minimum_age_seconds</literal>,
you can either wait for that time interval to see the base
image removed, or set the value to a shorter time period
in <filename>nova.conf</filename>. Restart all nova
services after changing a setting in
<filename>nova.conf</filename>.</para>
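<para>Combined in <filename>nova.conf</filename>, the options
    from the table look like the following fragment. The values
    shown are illustrative, not recommendations:</para>
<programlisting language="ini">
```ini
# Illustrative /etc/nova/nova.conf fragment: remove unused base
# images after one hour instead of the 24-hour default.
[DEFAULT]
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 3600
remove_unused_resized_minimum_age_seconds = 3600
```
</programlisting>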
</section>
</chapter>


@@ -1,17 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="centos-image">
    <title>Example: CentOS image</title>
    <para>We'll run through an example of installing a CentOS image.
        This will focus mainly on CentOS 6.4. Because the CentOS
        installation process may change across versions, if you are
        using a different version of CentOS, the installer steps may
        differ.</para>
    <simplesect>
        <title>Download a CentOS install ISO</title>
        <para>
<orderedlist>
<listitem>
<para>Navigate to the <link
@@ -19,38 +19,48 @@
>CentOS mirrors</link> page.</para>
</listitem>
<listitem>
<para>Click one of the <literal>HTTP</literal>
    links in the right-hand column next to one of
    the mirrors.</para>
</listitem>
<listitem>
<para>Click the folder link of the CentOS version
    you want to use. For example,
    <literal>6.4/</literal>.</para>
</listitem>
<listitem>
<para>Click the <literal>isos/</literal> folder
    link.</para>
</listitem>
<listitem>
<para>Click the <literal>x86_64/</literal> folder
    link for 64-bit images.</para>
</listitem>
<listitem>
<para>Click the ISO image you want to download.
    The netinstall ISO (for example,
    <filename>CentOS-6.4-x86_64-netinstall.iso</filename>)
    is a good choice since it's a smaller image
    that will download missing packages from the
    Internet during the install process.</para>
</listitem>
</orderedlist>
</para>
</simplesect>
<simplesect>
<title>Start the install process</title>
<para>Start the installation process using either
    <command>virt-manager</command> or
    <command>virt-install</command> as described in the
    previous section. If using
    <command>virt-install</command>, don't forget to connect
    your VNC client to the virtual machine.</para>
<para>We will assume the name of your virtual machine image is
<literal>centos-6.4</literal>, which we need to know
when using <command>virsh</command> commands to manipulate
the state of the image.</para>
<para>If you're using <command>virt-install</command>, the
commands should look something like
this:<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /tmp/centos-6.4.qcow2 10G</userinput>
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name centos-6.4 --ram 1024 \
--cdrom=/data/isos/CentOS-6.4-x86_64-netinstall.iso \
@@ -59,110 +69,134 @@
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--os-type=linux --os-variant=rhel6</userinput></screen></para>
</simplesect>
<simplesect>
    <title>Step through the install</title>
    <para>At the initial Installer boot menu, choose the "Install
        or upgrade an existing system" option. Step through the
        install prompts; the defaults should be fine.</para>
    <mediaobject>
        <imageobject role="fo">
            <imagedata fileref="figures/centos-install.png"
                format="PNG" scale="60"/>
        </imageobject>
        <imageobject role="html">
            <imagedata fileref="figures/centos-install.png"
                format="PNG"/>
        </imageobject>
    </mediaobject>
</simplesect>
<simplesect>
    <title>Configure TCP/IP</title>
    <para>The default TCP/IP settings are fine. In particular,
        ensure that Enable IPv4 support is enabled with DHCP,
        which is the default.</para>
    <mediaobject>
        <imageobject>
            <imagedata fileref="figures/centos-tcpip.png"
                format="PNG" contentwidth="6in"/>
        </imageobject>
    </mediaobject>
</simplesect>
<simplesect>
    <title>Point the installer to a CentOS web server</title>
<para>Choose URL as the installation method.</para>
<mediaobject>
    <imageobject>
        <imagedata fileref="figures/install-method.png"
            format="PNG" contentwidth="6in"/>
    </imageobject>
</mediaobject>
<para>Depending on the version of CentOS, the net installer
    requires that the user specify either a URL, or the web
    site and a CentOS directory that corresponds to one of the
    CentOS mirrors. If the installer asks for a single URL, an
    example of a valid URL would be:
    <literal>http://mirror.umd.edu/centos/6/os/x86_64</literal>.<note>
        <para>Consider using other mirrors as an alternative
            to mirror.umd.edu.</para>
</note></para>
<mediaobject>
    <imageobject>
        <imagedata fileref="figures/url-setup.png"
            format="PNG" contentwidth="6in"/>
    </imageobject>
</mediaobject>
<para>If the installer asks for web site name and CentOS
    directory separately, an example would be:</para>
<para>
<itemizedlist>
<listitem>
<para>Web site name:
    <literal>mirror.umd.edu</literal>
</para>
</listitem>
<listitem>
<para>CentOS directory:
    <literal>centos/6/os/x86_64</literal></para>
</listitem>
</itemizedlist>
</para>
<para>See <link xlink:href="http://www.centos.org/modules/tinycontent/index.php?id=30"
>CentOS mirror page</link> to get a full list of mirrors, click on the "HTTP"
link of a mirror to retrieve the web site name of a mirror.</para>
</simplesect>
<simplesect>
<title>Storage devices</title>
<para>If asked about what type of devices your installation involves, choose "Basic
Storage Devices".</para>
</simplesect>
<simplesect>
<title>Hostname</title>
<para>The installer may ask you to choose a hostname. The default
(<literal>localhost.localdomain</literal>) is fine. We will install the cloud-init
packge later, which will set the hostname on boot when a new instance is provisioned
using this image.</para>
</simplesect>
<simplesect>
<title>Partition the disks</title>
<para>There are different options for partitioning the disks. The default installation
will use LVM partitions, and will create three partitions (<filename>/boot</filename>,
<filename>/</filename>, swap), and this will work fine. Alternatively, you may wish
to create a single ext4 partition, mounted to "<literal>/</literal>", should also work
fine.</para>
<para>If unsure, we recommend you use the installer's default partition scheme, since there
is no clear advantage to one scheme of another.</para>
</simplesect>
<simplesect>
<title>Step through the install</title>
<para>Step through the install, using the default options. The simplest thing to do is
to choose the "Basic Server" install (may be called "Server" install on older versions
<para>See <link
xlink:href="http://www.centos.org/modules/tinycontent/index.php?id=30"
>CentOS mirror page</link> for a full list of
mirrors. Click the "HTTP" link of a mirror to retrieve
its web site name.</para>
</simplesect>
<simplesect>
<title>Storage devices</title>
<para>If asked about what type of devices your installation
involves, choose "Basic Storage Devices".</para>
</simplesect>
<simplesect>
<title>Hostname</title>
<para>The installer may ask you to choose a hostname. The
default (<literal>localhost.localdomain</literal>) is
fine. We will install the cloud-init package later, which
will set the hostname on boot when a new instance is
provisioned using this image.</para>
</simplesect>
<simplesect>
<title>Partition the disks</title>
<para>There are different options for partitioning the disks.
The default installation will use LVM partitions, and will
create three partitions (<filename>/boot</filename>,
<filename>/</filename>, swap), and this will work
fine. Alternatively, you may wish to create a single ext4
partition, mounted to "<literal>/</literal>", which should
also work fine.</para>
<para>If unsure, we recommend you use the installer's default
partition scheme, since there is no clear advantage to one
scheme over another.</para>
</simplesect>
<simplesect>
<title>Step through the install</title>
<para>Step through the install, using the default options. The
simplest thing to do is to choose the "Basic Server"
install (may be called "Server" install on older versions
of CentOS), which will install an SSH server.</para>
</simplesect>
<simplesect>
<title>Detach the CD-ROM and reboot</title>
<para>Once the install completes, you will see the screen
"Congratulations, your CentOS installation is
complete".</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/centos-complete.png"
format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>To eject a disk using <command>virsh</command>, libvirt
requires that you attach an empty disk at the same target
to which the CD-ROM was previously attached, which should
be <literal>hdc</literal>. You can confirm the
appropriate target using the <command>virsh dumpxml
<replaceable>vm-image</replaceable></command>
command.</para>
<screen><prompt>#</prompt> <userinput>virsh dumpxml centos-6.4</userinput>
<computeroutput>&lt;domain type='kvm'>
&lt;name>centos-6.4&lt;/name>
...
...
&lt;/domain>
</computeroutput></screen>
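If you want to pull the CD-ROM target out of the dump programmatically, a minimal sketch with standard shell tools follows. The domain XML here is an abbreviated, hypothetical sample; on a real host you would feed the output of <command>virsh dumpxml</command> into the same pipeline.

```bash
#!/bin/sh
# Sketch: extract the target device of the cdrom disk from a
# libvirt-style domain XML dump. The sample XML is abbreviated;
# a real host would use: virsh dumpxml <domain>
xml='<domain type="kvm">
  <devices>
    <disk type="file" device="disk">
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="block" device="cdrom">
      <target dev="hdc" bus="ide"/>
    </disk>
  </devices>
</domain>'

# Grab the line after the cdrom disk element and pull out its dev attribute.
target=$(printf '%s\n' "$xml" \
    | grep -A1 'device="cdrom"' \
    | sed -n 's/.*target dev="\([^"]*\)".*/\1/p')
echo "$target"
```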
<para>Run the following commands from the host to eject the
disk and reboot using virsh, as root. If you are using
virt-manager, the commands below will work, but you can
also use the GUI to detach and reboot it by manually
stopping and
starting.<screen><prompt>#</prompt> <userinput>virsh attach-disk --type cdrom --mode readonly centos-6.4 "" hdc</userinput>
<prompt>#</prompt> <userinput>virsh destroy centos-6.4</userinput>
<prompt>#</prompt> <userinput>virsh start centos-6.4</userinput></screen></para>
<note>
<para>In theory, the <command>virsh reboot
centos-6.4</command> command can be used instead
of using destroy and start commands. However, in our
testing we were unable to reboot successfully using
the <command>virsh reboot</command> command.</para>
</note>
</simplesect>
<simplesect>
<title>Log in to newly created image</title>
<para>When you boot the first time after install, it may ask
you about authentication tools; you can just choose
"Exit". Then, log in as root using the root password you
specified.</para>
</simplesect>
<simplesect>
<title>Configure to fetch metadata</title>
<para>An instance must perform several steps on start up by
interacting with the metadata service: for example,
retrieving the ssh public key and executing the user
data script. There are several ways to implement this
functionality, including:<itemizedlist>
<listitem>
<para>Install a cloud-init RPM, which is a port of
the Ubuntu <link
xlink:href="https://launchpad.net/cloud-init"
>cloud-init</link> package. This is the
recommended approach.</para>
</listitem>
<listitem>
<para>Modify <filename>/etc/rc.local</filename> to
fetch desired information from the metadata
service, as described below.</para>
</listitem>
</itemizedlist></para>
</simplesect>
<simplesect>
<title>Use cloud-init to fetch the public key</title>
<para>The cloud-init package will automatically fetch the
public key from the metadata server and place the key in
an account. You can install cloud-init inside the CentOS
guest by adding the EPEL
repo:<screen><prompt>#</prompt> <userinput>rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm</userinput>
<prompt>#</prompt> <userinput>yum install cloud-init</userinput></screen></para>
<para>The account varies by distribution. On Ubuntu-based
virtual machines, the account is called "ubuntu". On
Fedora-based virtual machines, the account is called
"ec2-user".</para>
<para>You can change the name of the account used by
cloud-init by editing the
<filename>/etc/cloud/cloud.cfg</filename> file and
adding a line with a different user. For example, to
configure cloud-init to put the key in an account named
<literal>admin</literal>, edit the configuration file
so it has the line:</para>
<programlisting>user: admin</programlisting>
</simplesect>
<simplesect>
<title>Write a script to fetch the public key (if no
cloud-init)</title>
<para>If you are not able to install the cloud-init package in
your image, to fetch the ssh public key and add it to the
root account, edit the <filename>/etc/rc.local</filename>
file and add the following lines before the line
<literal>touch
/var/lock/subsys/local</literal></para>
<programlisting language="bash">if [ ! -d /root/.ssh ]; then
mkdir -p /root/.ssh
chmod 700 /root/.ssh
fi
while [ ! -f /root/.ssh/authorized_keys ]; do
cat /root/.ssh/authorized_keys
echo "*****************"
done</programlisting>
<note>
<para>Some VNC clients replace : (colon) with ;
(semicolon) and _ (underscore) with - (hyphen). Make
sure it's http: not http; and authorized_keys not
authorized-keys.</para>
</note>
<note>
<para>The above script only retrieves the ssh public key
from the metadata server. It does not retrieve
<emphasis role="italic">user data</emphasis>,
which is optional data that can be passed by the user
when requesting a new instance. User data is often
used for running a custom script when an instance
comes up.</para>
<para>As the OpenStack metadata service is compatible with
version 2009-04-04 of the Amazon EC2 metadata service,
consult the Amazon EC2 documentation on <link
xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
>Using Instance Metadata</link> for details on how
to retrieve user data.</para>
</note>
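Because the metadata service is EC2-compatible, the script above is essentially fetching well-known URLs under the link-local address 169.254.169.254. A minimal sketch of the URL layout follows; the paths match the 2009-04-04 EC2 metadata layout referenced above, and the wget commands in the comments are how an instance would retrieve them.

```bash
#!/bin/sh
# Sketch of the EC2-style metadata URLs an instance fetches on boot.
# 169.254.169.254 is the link-local metadata address; 2009-04-04 is
# the EC2 metadata version noted in the text above.
base="http://169.254.169.254/2009-04-04"
key_url="$base/meta-data/public-keys/0/openssh-key"
user_data_url="$base/user-data"
echo "$key_url"
echo "$user_data_url"
# On a running instance you would retrieve them, for example:
#   wget -qO /root/.ssh/authorized_keys "$key_url"
#   wget -qO /tmp/user-data "$user_data_url"
```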
</simplesect>
<simplesect>
<title>Configure console</title>
<para>In order for <command>nova console-log</command> to work
properly on CentOS 6.x guests, you may need to add the
following lines to
<filename>/boot/grub/menu.lst</filename><programlisting>serial --unit=0 --speed=115200
terminal --timeout=10 console serial
# Edit the kernel line to add the console entries
kernel <replaceable>...</replaceable> console=tty0 console=ttyS0,115200n8</programlisting></para>
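The kernel-line edit above can also be scripted. A minimal sketch, using a simplified stand-in kernel line rather than a real <filename>/boot/grub/menu.lst</filename>:

```bash
#!/bin/sh
# Sketch: append the serial-console parameters to a grub kernel
# line, as the manual menu.lst edit above does. The kernel line is
# a simplified stand-in for a real one.
line='kernel /vmlinuz-2.6.32 ro root=/dev/mapper/vg-root'
new=$(printf '%s\n' "$line" \
    | sed 's|$| console=tty0 console=ttyS0,115200n8|')
echo "$new"
```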
</simplesect>
<simplesect>
<title>Shut down the instance</title>
<para>From inside the instance, as
root:<screen><prompt>#</prompt> <userinput>/sbin/shutdown -h now</userinput></screen></para>
</simplesect>
<simplesect>
<title>Clean up (remove MAC address details)</title>
<para>The operating system records the MAC address of the
virtual ethernet card in locations such as
<filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename>
and
<filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
during the installation process. However, each time the image
boots up, the virtual ethernet card will have a different
MAC address, so this information must be deleted from the
configuration file.</para>
<para>There is a utility called
<command>virt-sysprep</command> that performs various
cleanup tasks such as removing the MAC address references.
It will clean up a virtual machine image in
place:<screen><prompt>#</prompt> <userinput>virt-sysprep -d centos-6.4</userinput></screen></para>
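If <command>virt-sysprep</command> is not available, the files named above can be cleaned by hand. A minimal sketch, run here against a scratch copy of <filename>ifcfg-eth0</filename> rather than a real image:

```bash
#!/bin/sh
# Sketch: manual equivalent of virt-sysprep's MAC cleanup, applied
# to a scratch ifcfg-eth0. On a real image you would also delete
# /etc/udev/rules.d/70-persistent-net.rules.
workdir=$(mktemp -d)
cat > "$workdir/ifcfg-eth0" <<'EOF'
DEVICE=eth0
HWADDR=52:54:00:12:34:56
ONBOOT=yes
EOF

# Drop the recorded MAC address so a fresh one is used on next boot.
sed -i '/^HWADDR=/d' "$workdir/ifcfg-eth0"
cleaned=$(cat "$workdir/ifcfg-eth0")
echo "$cleaned"
rm -rf "$workdir"
```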
</simplesect>
<simplesect>
<title>Undefine the libvirt domain</title>
<para>Now that the image is ready to be uploaded to the Image
service, you no longer need to have this virtual machine
image managed by libvirt. Use the <command>virsh undefine
<replaceable>vm-image</replaceable></command>
command to inform
libvirt.<screen><prompt>#</prompt> <userinput>virsh undefine centos-6.4</userinput></screen></para>
</simplesect>
<simplesect>
<title>Image is complete</title>
<para>The underlying image file that you created with
<command>qemu-img create</command> (for example,
<filename>/tmp/centos-6.4.qcow2</filename>) is now
ready for uploading to the OpenStack Image service.</para>
</simplesect>
</section>

View File

<replaceable>key</replaceable>=<replaceable>value</replaceable></literal>
argument to <command>glance image-create</command> or <command>glance
image-update</command>. For
example:</para>
<screen><prompt>$</prompt> <userinput>glance image-update <replaceable>img-uuid</replaceable> --property architecture=arm --property hypervisor_type=qemu</userinput></screen>
<para>If the following properties are set on an image, and the ImagePropertiesFilter scheduler
filter is enabled (which it is by default), then the scheduler will only consider
compute hosts that satisfy these properties:</para>
<variablelist xml:id="image-metadata-properties">
<varlistentry>
<term>architecture</term>
<listitem>
<para>The CPU architecture that must be supported by
the hypervisor. For example,
<literal>x86_64</literal>, <literal>arm</literal>, or <literal>ppc64</literal>. Run
<command>uname -m</command> to get the architecture of a
machine. We strongly recommend using the architecture data vocabulary
defined by the <link xlink:href="http://libosinfo.org">libosinfo project</link>
for this purpose. Recognized values for this field are:
<variablelist xml:id="image-architecture" spacing="compact">
<para>
<link xlink:href="http://en.wikipedia.org/wiki/X86"
>Intel sixth-generation x86</link>
(P6 microarchitecture)
</para>
</listitem>
</varlistentry>
</variablelist></para>
</listitem>
</varlistentry>
</variablelist><para>The following metadata properties are specific to the XenAPI driver:</para><variablelist>
<varlistentry>
<term>auto_disk_config</term>
<listitem>
<varlistentry>
<term>os_type</term>
<listitem>
<para>The operating system installed on the image. For
example,
<literal>linux</literal> or <literal>windows</literal>. The XenAPI
driver contains logic that will take different actions depending on the
value of the os_type parameter of the image. For example, for images
where <literal>os_type=windows</literal>, it creates a FAT32-based
swap partition instead of a Linux swap partition, and it limits the
injected host name to less than 16 characters.</para>
</listitem>
</varlistentry>
</variablelist>
<para>The following metadata properties are specific to
the libvirt API driver:
</varlistentry>
</variablelist></para>
<para>The following metadata properties are specific to the VMware API driver:</para>
<variablelist>
<varlistentry>
<term>vmware_adaptertype</term>
<para>Currently unused, set it to <literal>1</literal>.</para>
</listitem>
</varlistentry>
</variablelist>
<para>To assist end-users in utilizing images, you may wish to put additional
common metadata on Glance images. By community agreement, the following metadata
keys may be used across Glance installations for the purposes described, as follows.</para>
<variablelist>
<varlistentry>
<term>instance_uuid</term>
</listitem>
</varlistentry>
</variablelist>
</section>

View File

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ubuntu-image">
<title>Example: Ubuntu image</title>
<para>We'll run through an example of installing an Ubuntu image.
This will focus mainly on Ubuntu 12.04 (Precise Pangolin)
server. Because the Ubuntu installation process may change
across versions, if you are using a different version of
Ubuntu the installer steps may differ.</para>
<simplesect>
<title>Download an Ubuntu install ISO</title>
<para>In this example, we'll use the network installation ISO,
since it's a smaller image. The 64-bit 12.04 network
installer ISO is at <link
xlink:href="http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/mini.iso"
>http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/mini.iso</link></para>
</simplesect>
<simplesect>
<title>Start the install process</title>
<para>Start the installation process using either
<command>virt-manager</command> or
<command>virt-install</command> as described in the
previous section. If using
<command>virt-install</command>, don't forget to connect
your VNC client to the virtual machine.</para>
<para>We will assume the name of your virtual machine image is
<literal>ubuntu-12.04</literal>, which we need to know
when using <command>virsh</command> commands to manipulate
the state of the image.</para>
<para>If you're using virt-manager, the commands should look
something like
this:<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /tmp/precise.qcow2 10G</userinput>
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name precise --ram 1024 \
--cdrom=/data/isos/precise-64-mini.iso \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--os-type=linux --os-variant=ubuntuprecise</userinput></screen></para>
</simplesect>
<simplesect>
<title>Step through the install</title>
<para>At the initial Installer boot menu, choose the "Install"
option. Step through the install prompts; the defaults
should be fine.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/ubuntu-install.png"
format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
</simplesect>
<simplesect>
<title>Hostname</title>
<para>The installer may ask you to choose a hostname. The
default (<literal>ubuntu</literal>) is fine. We will
install the cloud-init package later, which will set the
hostname on boot when a new instance is provisioned using
this image.</para>
</simplesect>
<simplesect>
<title>Select a mirror</title>
<para>The default mirror proposed by the installer should be
fine.</para>
</simplesect>
<simplesect>
<title>Step through the install</title>
<para>Step through the install, using the default options.
When prompted for a username, the default
(<literal>ubuntu</literal>) is fine.</para>
</simplesect>
<simplesect>
<title>Partition the disks</title>
<para>There are different options for partitioning the disks.
The default installation will use LVM partitions, and will
create three partitions (<filename>/boot</filename>,
<filename>/</filename>, swap), and this will work
fine. Alternatively, you may wish to create a single ext4
partition, mounted to "<literal>/</literal>", which should
also work fine.</para>
<para>If unsure, we recommend you use the installer's default
partition scheme, since there is no clear advantage to one
scheme over another.</para>
</simplesect>
<simplesect>
<title>Automatic updates</title>
<para>The Ubuntu installer will ask how you want to manage
upgrades on your system. This option depends upon your
specific use case. If your virtual machine instances will
be able to connect to the internet, we recommend "Install
security updates automatically".</para>
</simplesect>
<simplesect>
<title>Software selection: OpenSSH server</title>
<para>Choose "OpenSSH server" so that you will be able to SSH
into the virtual machine when it launches inside of an
OpenStack cloud.</para>
<mediaobject>
<imageobject>
<imagedata
fileref="figures/ubuntu-software-selection.png"
format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
</simplesect>
<simplesect>
<title>Install GRUB boot loader</title>
<para>Select "Yes" when asked about installing the GRUB boot
loader to the master boot record.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/ubuntu-grub.png"
format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
</simplesect>
<simplesect>
<title>Detach the CD-ROM and reboot</title>
<para>Select the defaults for all of the remaining options.
When the installation is complete, you will be prompted to
remove the CD-ROM.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/ubuntu-finished.png"
format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>
<note>
<para>When you hit "Continue" the virtual machine will
shut down, even though it says it will
reboot.</para>
</note>
</para>
<para>To eject a disk using <command>virsh</command>, libvirt
requires that you attach an empty disk at the same target
to which the CD-ROM was previously attached, which should
be <literal>hdc</literal>. You can confirm the
appropriate target using the <command>virsh dumpxml
<replaceable>vm-image</replaceable></command>
command.</para>
<screen><prompt>#</prompt> <userinput>virsh dumpxml precise</userinput>
<computeroutput>&lt;domain type='kvm'>
&lt;name>precise&lt;/name>
...
&lt;/domain>
</computeroutput></screen>
<para>Run the following commands in the host as root to start
up the machine again as paused, eject the disk and resume.
If you are using virt-manager, you may instead use the
GUI.<screen><prompt>#</prompt> <userinput>virsh start precise --paused</userinput>
<prompt>#</prompt> <userinput>virsh attach-disk --type cdrom --mode readonly precise "" hdc</userinput>
<prompt>#</prompt> <userinput>virsh resume precise</userinput></screen></para>
<note>
<para>In the example above, we start the instance paused,
eject the disk, and then unpause. In theory, we could
have ejected the disk at the "Installation complete"
screen. However, our testing indicates that the Ubuntu
installer locks the drive so that it cannot be ejected
at that point.</para>
</note>
</simplesect>
<simplesect>
<title>Log in to newly created image</title>
<para>When you boot the first time after install, it may ask
you about authentication tools; you can just choose
'Exit'. Then, log in as root using the root password you
specified.</para>
</simplesect>
<simplesect>
<title>Install cloud-init</title>
<para>The <command>cloud-init</command> script starts on
instance boot and will search for a metadata provider to
fetch a public key from. The public key will be placed in
the default user account for the image.</para>
<para>Install the <package>cloud-init</package> package:</para>
<screen><prompt>#</prompt> <userinput>apt-get install cloud-init</userinput></screen>
<para>When building Ubuntu images
<command>cloud-init</command> must be explicitly
configured for the metadata source in use. The OpenStack
metadata server emulates the EC2 metadata service used by
images in Amazon EC2.</para>
<para>To set the metadata source to be used by the image, run
the <command>dpkg-reconfigure</command> command against
the <package>cloud-init</package> package. When prompted,
select the <literal>EC2</literal> data source:
<screen><prompt>#</prompt> <userinput>dpkg-reconfigure cloud-init</userinput></screen>
</para>
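Running <command>dpkg-reconfigure</command> records the selection in a cloud-init configuration drop-in. As an illustrative sketch (the file name and exact syntax are assumptions; verify against the cloud-init version in your image), the result is equivalent to:

```yaml
# /etc/cloud/cloud.cfg.d/90_dpkg.cfg (illustrative; normally written by
# dpkg-reconfigure rather than edited by hand)
# Restrict cloud-init to the EC2-compatible metadata source
datasource_list: [ Ec2 ]
```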
<para>The account varies by distribution. On Ubuntu-based
virtual machines, the account is called "ubuntu". On
Fedora-based virtual machines, the account is called
"ec2-user".</para>
<para>You can change the name of the account used by
cloud-init by editing the
<filename>/etc/cloud/cloud.cfg</filename> file and
adding a line with a different user. For example, to
configure cloud-init to put the key in an account named
"admin", edit the config file so it has the
line:<programlisting>user: admin</programlisting></para>
</simplesect>
<simplesect>
<title>Shut down the instance</title>
<para>From inside the instance, as
root:<screen><prompt>#</prompt> <userinput>/sbin/shutdown -h now</userinput></screen></para>
</simplesect>
<simplesect>
<title>Clean up (remove MAC address details)</title>
                <para>The operating system records the MAC address of
                    the virtual Ethernet card in locations such as
                    <filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
                    during the installation process. However, each
                    time the image boots, the virtual Ethernet card
                    has a different MAC address, so this information
                    must be deleted from the configuration
                    file.</para>
                <para>The <command>virt-sysprep</command> utility
                    performs various cleanup tasks, such as removing
                    MAC address references. It cleans up a virtual
                    machine image in
                    place:<screen><prompt>#</prompt> <userinput>virt-sysprep -d precise</userinput></screen></para>
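If <command>virt-sysprep</command> is not available, the MAC address cleanup can be done by hand from inside the guest before shutdown. A minimal sketch (the <literal>GUEST_ROOT</literal> variable is an illustrative assumption, so the same snippet can also be run against a mounted guest filesystem):

```shell
# Remove the cached udev rule that pins the old MAC address;
# udev regenerates it with the new address on the next boot.
GUEST_ROOT="${GUEST_ROOT:-}"   # empty = operate on the running system
rm -f "${GUEST_ROOT}/etc/udev/rules.d/70-persistent-net.rules"
```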
</simplesect>
<simplesect>
<title>Undefine the libvirt domain</title>
                <para>Now that the image is ready to be uploaded to the Image
                    service, we no longer need this virtual machine
                    image to be managed by libvirt. Use the <command>virsh undefine
                        <replaceable>vm-image</replaceable></command>
                    command to inform
                    libvirt:<screen><prompt>#</prompt> <userinput>virsh undefine precise</userinput></screen></para>
</simplesect>
<simplesect>
<title>Image is complete</title>
<para>The underlying image file that you created with
<command>qemu-img create</command>, such as
<filename>/tmp/precise.qcow2</filename>, is now ready
for uploading to the OpenStack Image service.</para>
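As a sketch of the next step (the command and flags are from the <command>glance</command> client of this era and the name and path are illustrative, so adjust for your environment), the upload might look like:

```shell
# Upload the finished image file to the Image service (illustrative values)
glance image-create --name "precise" \
    --disk-format qcow2 --container-format bare \
    --file /tmp/precise.qcow2
```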
</simplesect>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://docbook.org/ns/docbook"
xmlns:db="http://docbook.org/ns/docbook" version="5.0"
xml:id="ch004_book-introduction">
<?dbhtml stop-chunking?>
<title>Introduction to OpenStack</title>
<para>This guide provides security insight into OpenStack
deployments. The intended audience is cloud architects, deployers,
and administrators. In addition, cloud users will find the guide
both educational and helpful in provider selection, while auditors
will find it useful as a reference document to support their
compliance certification efforts. This guide is also recommended
for anyone interested in cloud security.</para>
<para>Each OpenStack deployment embraces a wide variety of
technologies, spanning Linux distributions, database systems,
messaging queues, OpenStack components themselves, access control
policies, logging services, security monitoring tools, and much
more. It should come as no surprise that the security issues
involved are equally diverse, and their in-depth analysis would
    require several guides. We strive to find a balance, providing
    enough context to understand OpenStack security issues and their
    handling, and supplying external references for further
    information. The guide can be read from start to finish or
    consulted as a reference when needed.</para>
  <para>We briefly introduce the kinds of clouds: private, public,
    and hybrid, before presenting an overview of the OpenStack
    components and their related security concerns in the remainder
    of the chapter.</para>
<section xml:id="ch004_book-introduction-idp117824">
<title>Cloud types</title>
<para>OpenStack is a key enabler in adoption of cloud technology
and has several common deployment use cases. These are commonly
known as Public, Private, and Hybrid models. The following
sections use the National Institute of Standards and Technology
(NIST) <link
xlink:href="http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf"
>definition of cloud</link> to introduce these different types
of cloud as they apply to OpenStack.</para>
<section xml:id="ch004_book-introduction-idp119328">
<title>Public cloud</title>
<para>According to NIST, a public cloud is one in which the
infrastructure is open to the general public for consumption.
OpenStack public clouds are typically run by a service
provider and can be consumed by individuals, corporations, or
required to meet regulatory requirements, a provider should
ensure tenant isolation as well as protecting management
infrastructure from external attacks.</para>
</section>
<section xml:id="ch004_book-introduction-idp121488">
<title>Private cloud</title>
<para>At the opposite end of the spectrum is the private cloud.
As NIST defines it, a private cloud is provisioned for
exclusive use by a single organization comprising multiple
consumers, such as business units. It may be owned, managed,
and operated by the organization, a third-party, or some
combination of them, and it may exist on or off premises.
        Private cloud use cases are diverse; as a result, their
        individual security concerns vary.</para>
</section>
<section xml:id="ch004_book-introduction-idp123456">
<title>Community cloud</title>
<para>NIST defines a community cloud as one whose  infrastructure is provisioned for the exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third-party, or some combination of them, and it may exist on or off premises.</para>
</section>
<section xml:id="ch004_book-introduction-idp125312">
<title>Hybrid cloud</title>
<para>A hybrid cloud is defined by NIST as a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds). For example an online retailer may have their advertising and catalogue presented on a public cloud that allows for elastic provisioning. This would enable them to handle seasonal loads in a flexible, cost-effective fashion. Once a customer begins to process their order, they are transferred to the more secure private cloud backend that is PCI compliant.</para>
<para>For the purposes of this document, we treat Community
and Hybrid similarly, dealing explicitly only with the
extremes of Public and Private clouds from a security
perspective. Your security measures depend where your
deployment falls upon the private public continuum.</para>
</section>
</section>
<section xml:id="ch004_book-introduction-idp128528">
<title>OpenStack service overview</title>
<para>OpenStack embraces a modular architecture to provide a set of core services that facilitates scalability and elasticity as core design tenets. This chapter briefly reviews OpenStack components, their use cases and security considerations.</para>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="374" contentwidth="976" fileref="static/marketecture-diagram.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/marketecture-diagram.png" format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<section xml:id="ch004_book-introduction-idp134736">
<title>Compute</title>
<para>OpenStack Compute Service (Nova) provides services to support the management of virtual machine instances at scale, instances that host multi-tiered applications, dev/test environments, "Big Data" crunching Hadoop clusters, and/or high performance computing.</para>
<para>The Compute Service facilitates this management through an abstraction layer that interfaces with supported hypervisors, which we address later on in more detail.</para>
<para>Later in the guide, we focus generically on the
<section xml:id="ch004_book-introduction-idp123456">
<title>Community cloud</title>
      <para>NIST defines a community cloud as one whose
        infrastructure is provisioned for the exclusive use by a
        specific community of consumers from organizations that have
        shared concerns, such as mission, security requirements,
        policy, and compliance considerations. It may be owned,
        managed, and operated by one or more of the organizations in
        the community, a third-party, or some combination of them, and
        it may exist on or off premises.</para>
</section>
<section xml:id="ch004_book-introduction-idp125312">
<title>Hybrid cloud</title>
<para>A hybrid cloud is defined by NIST as a composition of two
or more distinct cloud infrastructures, such as private, community, or
public, that remain unique entities, but are bound together by
standardized or proprietary technology that enables data and
application portability, such as cloud bursting for load
        balancing between clouds. For example, an online retailer may
have their advertising and catalogue presented on a public
cloud that allows for elastic provisioning. This would enable
them to handle seasonal loads in a flexible, cost-effective
fashion. Once a customer begins to process their order, they
are transferred to the more secure private cloud backend that
is PCI compliant.</para>
<para>For the purposes of this document, we treat Community and
Hybrid similarly, dealing explicitly only with the extremes of
        Public and Private clouds from a security perspective. Your
        security measures depend on where your deployment falls on the
        private-public continuum.</para>
</section>
</section>
<section xml:id="ch004_book-introduction-idp128528">
<title>OpenStack service overview</title>
    <para>OpenStack embraces a modular architecture to provide a set
      of core services that facilitate scalability and elasticity as
      core design tenets. This chapter briefly reviews OpenStack
      components, their use cases, and security considerations.</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="374" contentwidth="976"
fileref="static/marketecture-diagram.png" format="PNG"
scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/marketecture-diagram.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<section xml:id="ch004_book-introduction-idp134736">
<title>Compute</title>
      <para>OpenStack Compute Service (Nova) provides services to
        support the management of virtual machine instances at scale,
        instances that host multi-tiered applications, dev/test
        environments, "Big Data" crunching Hadoop clusters, or
        high performance computing.</para>
<para>The Compute Service facilitates this management through an
abstraction layer that interfaces with supported hypervisors,
which we address later on in more detail.</para>
<para>Later in the guide, we focus generically on the
virtualization stack as it relates to hypervisors.</para>
<para>For information about the current state of feature
support, see <link
xlink:href="https://wiki.openstack.org/wiki/HypervisorSupportMatrix"
>OpenStack Hypervisor Support Matrix</link>.</para>
<para>The security of Compute is critical for an OpenStack
deployment. Hardening techniques should include support for
strong instance isolation, secure communication between
Compute sub-components, and resiliency of public-facing
<glossterm>API</glossterm> endpoints.</para>
</section>
<section xml:id="ch004_book-introduction-idp138800">
<title>Object Storage</title>
<para>The OpenStack Object Storage Service (Swift) provides
support for storing and retrieving arbitrary data in the
cloud. The Object Storage Service provides both a native API
and an Amazon Web Services S3 compatible API. The service
provides a high degree of resiliency through data replication
and can handle petabytes of data.</para>
<para>It is important to understand that object storage differs
from traditional file system storage. It is best used for
static data such as media files (MP3s, images, videos),
virtual machine images, and backup files.</para>
<para>Object security should focus on access control and
encryption of data in transit and at rest. Other concerns may
relate to system abuse, illegal or malicious content storage,
and cross authentication attack vectors.</para>
</section>
<section xml:id="ch004_book-introduction-idp141536">
<title>Block Storage</title>
<para>The OpenStack Block Storage service (Cinder) provides
persistent block storage for compute instances. The Block
Storage Service is responsible for managing the life-cycle of
block devices, from the creation and attachment of volumes to
instances, to their release.</para>
<para>Security considerations for block storage are similar to
that of object storage.</para>
</section>
<section xml:id="ch004_book-introduction-idp143424">
<title>OpenStack Networking</title>
<para>The OpenStack Networking Service (Neutron, previously
called Quantum) provides various networking services to cloud
users (tenants) such as IP address management,
<glossterm>DNS</glossterm>, <glossterm>DHCP</glossterm>,
load balancing, and security groups (network access rules,
like firewall policies). It provides a framework for software
defined networking (SDN) that allows for pluggable integration
with various networking solutions.</para>
<para>OpenStack Networking allows cloud tenants to manage their
guest network configurations. Security concerns with the
networking service include network traffic isolation,
availability, integrity and confidentiality.</para>
</section>
<section xml:id="ch004_book-introduction-idp145600">
<title>Dashboard</title>
<para>The OpenStack Dashboard Service (Horizon) provides a
web-based interface for both cloud administrators and cloud
tenants. Through this interface administrators and tenants can
provision, manage, and monitor cloud resources. Horizon is
commonly deployed in a public facing manner with all the usual
security concerns of public web portals.</para>
</section>
<section xml:id="ch004_book-introduction-idp147104">
<title>Identity Service</title>
<para>The OpenStack Identity Service (Keystone) is a <emphasis
role="bold">shared service</emphasis> that provides
authentication and authorization services throughout the
entire cloud infrastructure. The Identity Service has
pluggable support for multiple forms of authentication.</para>
<para>Security concerns here pertain to trust in authentication,
management of authorization tokens, and secure
communication.</para>
</section>
<section xml:id="ch004_book-introduction-idp149712">
<title>Image Service</title>
      <para>The OpenStack Image Service (Glance) provides disk image
        management services, including image discovery, registration,
        and delivery services to the Compute service, as
        needed.</para>
<para>Trusted processes for managing the life cycle of disk
images are required, as are all the previously mentioned
issues with respect to data security.</para>
</section>
<section xml:id="ch004_book-introduction-idp152400">
<title>Other supporting technology</title>
<para>OpenStack relies on messaging for internal communication
between several of its services. By default, OpenStack uses
message queues based on the Advanced Message Queue Protocol
(<glossterm>AMQP</glossterm>). Similar to most OpenStack
services, it supports pluggable components. Today the
implementation backend could be
<glossterm>RabbitMQ</glossterm>,
<glossterm>Qpid</glossterm>, or
<glossterm>ZeroMQ</glossterm>.</para>
<para>As most management commands flow through the message
queueing system, it is a primary security concern for any
OpenStack deployment. Message queueing security is discussed
in detail later in this guide.</para>
      <para>Several of the components use databases, though this is
        not explicitly called out. Securing access to the databases
        and their contents is yet another security concern, and is
        consequently discussed in more detail later in this
        guide.</para>
</section>
</section>
</chapter>

<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://docbook.org/ns/docbook"
xmlns:db="http://docbook.org/ns/docbook" version="5.0"
xml:id="ch006_introduction-to-case-studies">
<?dbhtml stop-chunking?>
<title>Introduction to Case Studies</title>
<para>This guide refers to two running case studies, which are
introduced here and referred to at the end of each chapter.</para>
<section xml:id="ch006_introduction-to-case-studies-idp44496">
<title>Case Study : Alice the private cloud builder</title>
<para>Alice deploys a private cloud for use by a government
department in the US. The cloud must comply with relevant
standards, such as FedRAMP. The security paperwork requirements
for this cloud are very high. It must have no direct access to
the internet: its API endpoints, compute instances, and other
resources must be exposed to only systems within the
department's network, which is entirely air-gapped from all
      other networks. The cloud can access other network services on
      the organization's intranet, for example, the authentication
      and logging services.</para>
</section>
<section xml:id="ch006_introduction-to-case-studies-idp46720">
<title>Case Study : Bob the public cloud provider</title>
<para>Bob is a lead architect for a company that deploys a large
      greenfield public cloud. This cloud provides IaaS for the masses
      and gives any consumer with a valid credit card access to
      utility computing and storage, but the primary focus is
enterprise customers. Data privacy concerns are a big priority
for Bob as they are seen as a major barrier to large-scale
adoption of the cloud by organizations.</para>
</section>
</chapter>

<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://docbook.org/ns/docbook"
xmlns:db="http://docbook.org/ns/docbook" version="5.0"
xml:id="ch008_system-roles-types">
<?dbhtml stop-chunking?>
<title>System Documentation Requirements</title>
<para>The system documentation for an OpenStack cloud deployment
should follow the templates and best practices for the Enterprise
Information Technology System in your organization. Organizations
often have compliance requirements which may require an overall
System Security Plan to inventory and document the architecture of
a given system. There are common challenges across the industry
related to documenting the dynamic cloud infrastructure and
keeping the information up-to-date.</para>
<section xml:id="ch008_system-roles-types-idp44832">
<title>System Roles &amp; Types</title>
<para>The two broadly defined types of
nodes that generally make up an OpenStack installation are:</para>
<itemizedlist>
<listitem>
        <para>Infrastructure nodes. The nodes that run the
          cloud-related services, such as the OpenStack Identity
          Service, the message queuing service, storage, networking,
          and other services required to support the operation of the
          cloud.</para>
</listitem>
<listitem>
        <para>Compute, storage, or other resource nodes. The nodes
          that provide storage capacity or virtual machines for your
          cloud.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="ch008_system-roles-types-idp48496">
<title>System Inventory</title>
<para>Documentation should provide a general description of the
OpenStack environment and cover all systems used (production,
development, test, etc.). Documenting system components,
networks, services, and software often provides the bird's-eye
view needed to thoroughly cover and consider security concerns,
attack vectors and possible security domain bridging points. A
system inventory may need to capture ephemeral resources such as
virtual machines or virtual disk volumes that would otherwise be
persistent resources in a traditional IT system.</para>
<section xml:id="ch008_system-roles-types-idp50832">
<title>Hardware Inventory</title>
<para>Clouds without stringent compliance requirements for
written documentation might benefit from having a
Configuration Management Database
(<glossterm>CMDB</glossterm>). CMDBs are normally used for
hardware asset tracking and overall life-cycle management. By
leveraging a CMDB, an organization can quickly identify cloud
infrastructure hardware, such as compute nodes, storage nodes,
and network devices, that exists on the network but might not
be adequately protected or might have been forgotten. The
OpenStack provisioning system might provide some CMDB-like
functions, especially if auto-discovery features of hardware
attributes are available.</para>
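The CMDB check described above can be sketched in a few lines. This is a minimal, illustrative example, not part of any OpenStack API: the inventory records and host names are invented for the sketch, and a real deployment would pull both sides from its CMDB and an auto-discovery source.

```python
# Minimal sketch of a CMDB-style check: flag hardware seen on the
# network that is missing from the asset inventory. All records and
# host names here are illustrative placeholders.
cmdb_inventory = {
    "compute-01": {"role": "compute node", "owner": "ops"},
    "storage-01": {"role": "storage node", "owner": "ops"},
    "switch-01": {"role": "network device", "owner": "netops"},
}

# Hosts reported by an auto-discovery scan (hypothetical result).
discovered_hosts = ["compute-01", "storage-01", "switch-01", "compute-02"]

def unaccounted_hosts(discovered, inventory):
    """Return hosts on the network that the CMDB does not know about."""
    return [host for host in discovered if host not in inventory]

print(unaccounted_hosts(discovered_hosts, cmdb_inventory))  # ['compute-02']
```

Hosts that appear in the scan but not in the inventory are exactly the "forgotten" devices the paragraph above warns about.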
</section>
<section xml:id="ch008_system-roles-types-idp53536">
<title>Software Inventory</title>
<para>Just as with hardware, all software components within the
OpenStack deployment should be documented. Components here
should include system databases; OpenStack software components
and supporting sub-components; and, supporting infrastructure
software such as load-balancers, reverse proxies, and network
address translators. Having an authoritative list like this
may be critical toward understanding total system impact due
to a compromise or vulnerability of a specific class of
software.</para>
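A software inventory of this kind is easy to keep as structured data. The sketch below is illustrative only; the component names, categories, and versions are placeholders, and grouping by class of software supports the impact analysis described above.

```python
# Sketch of a software inventory record set. Component names,
# categories, and versions are illustrative placeholders.
software_inventory = [
    {"component": "nova", "category": "OpenStack service", "version": "2013.2"},
    {"component": "mysql", "category": "system database", "version": "5.5"},
    {"component": "haproxy", "category": "load-balancer", "version": "1.4"},
]

def components_in_category(inventory, category):
    """List component names for one class of software, for example to
    assess total system impact of a vulnerability in that class."""
    return sorted(item["component"] for item in inventory
                  if item["category"] == category)

print(components_in_category(software_inventory, "system database"))
```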
</section>
</section>
<section xml:id="ch008_system-roles-types-idp55328">
<title>Network Topology</title>
<para>A Network Topology should be provided with highlights
specifically calling out the data flows and bridging points
between the security domains. Network ingress and egress points
should be identified along with any OpenStack logical system
boundaries. Multiple diagrams may be needed to provide complete
visual coverage of the system. A network topology document
should include virtual networks created on behalf of tenants by
the system along with virtual machine instances and gateways
created by OpenStack.</para>
</section>
<section xml:id="ch008_system-roles-types-idp57520">
<title>Services, Protocols and Ports</title>
<para>The Service, Protocols and Ports table provides important
additional detail of an OpenStack deployment. A table view of
all services running within the cloud infrastructure can
immediately inform, guide, and help check security procedures.
Firewall configuration, service port conflicts, security
remediation areas, and compliance requirements become easier to
manage when you have concise information available. Consider the
following table:</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="166" contentwidth="967"
fileref="static/services-protocols-ports.png" format="PNG"
scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/services-protocols-ports.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>Referencing a table of services, protocols and ports can
help in understanding the relationship between OpenStack
components. It is highly recommended that OpenStack deployments
have information similar to this on record.</para>
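Such a table is also useful as machine-readable data. The sketch below keeps a services/protocols/ports table as a list of records and checks it for port conflicts; the service names and port numbers are illustrative defaults for this sketch, not a statement of what any given deployment runs.

```python
# Sketch: keep the services/protocols/ports table as data and use it
# to detect port conflicts during a firewall or compliance review.
# Service names and ports are illustrative, including the deliberate
# conflict added on the last row.
services = [
    {"service": "keystone", "protocol": "TCP", "port": 5000},
    {"service": "keystone-admin", "protocol": "TCP", "port": 35357},
    {"service": "glance-api", "protocol": "TCP", "port": 9292},
    {"service": "custom-agent", "protocol": "TCP", "port": 9292},  # conflict
]

def port_conflicts(table):
    """Return (protocol, port) pairs claimed by more than one service."""
    seen = {}
    for row in table:
        seen.setdefault((row["protocol"], row["port"]), []).append(row["service"])
    return {key: names for key, names in seen.items() if len(names) > 1}

print(port_conflicts(services))
```

The same record set can drive firewall rule generation or a periodic audit that compares expected ports against what is actually listening.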
</section>
</chapter>

<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://docbook.org/ns/docbook"
xmlns:db="http://docbook.org/ns/docbook" version="5.0"
xml:id="ch012_configuration-management">
<?dbhtml stop-chunking?>
<title>Continuous Systems Management</title>
<para>A cloud will always have bugs. Some of these will be security
problems. For this reason, it is critically important to be
prepared to apply security updates and general software updates.
This involves smart use of configuration management tools, which
are discussed below. This also involves knowing when an upgrade is
necessary.</para>
<section xml:id="ch012_configuration-management-idp44720">
<title>Vulnerability Management</title>
<para>For announcements regarding security relevant changes,
subscribe to the <link
xlink:href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce"
>OpenStack Announce mailing list</link>. Security
notifications are also posted through downstream packages, for
example through the Linux distributions that you may be
subscribed to as part of the package updates.</para>
<para>The OpenStack components are only a small fraction of the
software in a cloud. It is important to keep up to date with all
of these other components, too. While the specific data sources
will be deployment specific, the key idea is to ensure that a
cloud administrator subscribes to the necessary mailing lists
for receiving notification of any related security updates.
Often this is as simple as tracking an upstream Linux
distribution.</para>
<note>
<para>OpenStack releases security information through two
channels. <itemizedlist>
<listitem>
<para>OpenStack Security Advisories (OSSA) are created by
the OpenStack Vulnerability Management Team (VMT). They
pertain to security holes in core OpenStack services.
More information on the VMT can be found here: <link
xlink:href="https://wiki.openstack.org/wiki/Vulnerability_Management"
>https://wiki.openstack.org/wiki/Vulnerability_Management</link></para>
</listitem>
<listitem>
<para>OpenStack Security Notes (OSSN) were created by the
OpenStack Security Group (OSSG) to support the work of
the VMT. OSSN address issues in supporting software and
common deployment configurations. They're referenced
throughout this guide. Security Notes are archived at
<link xlink:href="https://launchpad.net/ossn/"
>https://launchpad.net/ossn/</link></para>
</listitem>
</itemizedlist>
</para>
</note>
<section xml:id="ch012_configuration-management-idp48592">
<title>Triage</title>
<para>After you are notified of a security update, the next step
is to determine how critical this update is to a given cloud
deployment. In this case, it is useful to have a pre-defined
policy. Existing vulnerability rating systems such as the
common vulnerability scoring system (CVSS) v2 do not properly
account for cloud deployments.</para>
<para>In this example we introduce a scoring matrix that places
vulnerabilities in three categories: Privilege Escalation,
Denial of Service and Information Disclosure. Understanding
the type of vulnerability and where it occurs in your
infrastructure will enable you to make reasoned response
decisions.</para>
<para>Privilege Escalation describes the ability of a user to
act with the privileges of some other user in a system,
bypassing appropriate authorization checks. For example, a
standard Linux user running code or performing an operation
that allows them to conduct further operations with the
privileges of the root user on the system.</para>
<para>Denial of Service refers to an exploited vulnerability
that may cause service or system disruption. This includes
both distributed attacks to overwhelm network resources, and
single-user attacks that are typically caused through resource
allocation bugs or input induced system failure flaws.</para>
<para>Information Disclosure vulnerabilities reveal information
about your system or operations. These vulnerabilities range
from debugging information disclosure, to exposure of critical
security data, such as authentication credentials and
passwords.</para>
<informaltable rules="all" width="80%">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<tbody>
<tr>
<td><para> </para></td>
<td colspan="4"><para><emphasis>Attacker Position /
Privilege Level</emphasis></para></td>
</tr>
<tr>
<td><para><emphasis role="bold"> </emphasis></para></td>
<td><para><emphasis role="bold"
>External</emphasis></para></td>
<td><para><emphasis role="bold">Cloud
User</emphasis></para></td>
<td><para><emphasis role="bold">Cloud
Admin</emphasis></para></td>
<td><para><emphasis role="bold">Control
Plane</emphasis></para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Privilege Elevation (3
levels)</emphasis></para></td>
<td><para>Critical</para></td>
<td><para>n/a</para></td>
<td><para>n/a</para></td>
<td><para>n/a</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Privilege Elevation (2
levels)</emphasis></para></td>
<td><para>Critical</para></td>
<td><para>Critical</para></td>
<td><para>n/a</para></td>
<td><para>n/a</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Privilege Elevation (1
level)</emphasis></para></td>
<td><para>Critical</para></td>
<td><para>Critical</para></td>
<td><para>Critical</para></td>
<td><para>n/a</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Denial of
Service</emphasis></para></td>
<td><para>High</para></td>
<td><para>Medium</para></td>
<td><para>Low</para></td>
<td><para>Low</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Information
Disclosure</emphasis></para></td>
<td><para>Critical / High</para></td>
<td><para>Critical / High</para></td>
<td><para>Medium / Low</para></td>
<td><para>Low</para></td>
</tr>
</tbody>
</informaltable>
<para>The previous table illustrates a generic approach to
measuring the impact of a vulnerability based on where it
occurs in your deployment and the effect. For example, a
single level privilege escalation on a Compute API node
potentially allows a standard user of the API to escalate to
have the same privileges as the root user on the node.</para>
<para>We suggest cloud administrators customize and expand this
table to suit their needs. Then work to define how to handle
the various severity levels. For example, a critical-level
security update might require the cloud to be upgraded on a
specified time line, whereas a low-level update might be more
relaxed.</para>
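The matrix above encodes directly as a lookup table, which makes it easy to customize and to apply consistently during triage. The category and position identifiers below are this sketch's own naming, not an OpenStack convention; combinations absent from the table are treated as "n/a", as in the matrix.

```python
# The triage scoring matrix, encoded as a lookup table. Ratings are
# copied from the table above; extend or adjust to suit your cloud.
SEVERITY = {
    ("privilege_elevation_3", "external"): "Critical",
    ("privilege_elevation_2", "external"): "Critical",
    ("privilege_elevation_2", "cloud_user"): "Critical",
    ("privilege_elevation_1", "external"): "Critical",
    ("privilege_elevation_1", "cloud_user"): "Critical",
    ("privilege_elevation_1", "cloud_admin"): "Critical",
    ("denial_of_service", "external"): "High",
    ("denial_of_service", "cloud_user"): "Medium",
    ("denial_of_service", "cloud_admin"): "Low",
    ("denial_of_service", "control_plane"): "Low",
    ("information_disclosure", "external"): "Critical / High",
    ("information_disclosure", "cloud_user"): "Critical / High",
    ("information_disclosure", "cloud_admin"): "Medium / Low",
    ("information_disclosure", "control_plane"): "Low",
}

def rate(vuln_type, attacker_position):
    """Look up a severity; combinations not in the table are n/a."""
    return SEVERITY.get((vuln_type, attacker_position), "n/a")

print(rate("denial_of_service", "cloud_user"))  # Medium
```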
</section>
<section xml:id="ch012_configuration-management-idp100864">
<title>Testing the Updates</title>
<para>You should test any update before you deploy it in a
production environment. Typically this requires having a
separate test cloud setup that first receives the update. This
cloud should be as close to the production cloud as possible,
in terms of software and hardware. Updates should be tested
thoroughly in terms of performance impact, stability,
application impact, and more. Especially important is to
verify that the problem theoretically addressed by the update,
such as a specific vulnerability, is actually fixed.</para>
</section>
<section xml:id="ch012_configuration-management-idp102976">
<title>Deploying the Updates</title>
<para>Once the updates are fully tested, they can be deployed to
the production environment. This deployment should be fully
automated using the configuration management tools described
below.</para>
</section>
</section>
<section xml:id="ch012_configuration-management-idp104464">
<title>Configuration Management</title>
<para>A production quality cloud should always use tools to
automate configuration and deployment. This eliminates human
error, and allows the cloud to scale much more rapidly.
Automation also helps with continuous integration and
testing.</para>
<para>When building an OpenStack cloud it is strongly recommended
to approach your design and implementation with a configuration
management tool or framework in mind. Configuration management
allows you to avoid the many pitfalls inherent in building,
managing, and maintaining an infrastructure as complex as
OpenStack. By producing the manifests, cookbooks, or templates
required for a configuration management utility, you are able to
satisfy a number of documentation and regulatory reporting
requirements. Further, configuration management can also
function as part of your business continuity (BCP) and disaster
recovery (DR) plans, wherein you can rebuild a node or service
back to a known state after a DR event or a compromise.</para>
<para>Additionally, when combined with a version control system
such as Git or SVN, you can track changes to your environment
over time and remediate unauthorized changes that may occur.
For example, if a <filename>nova.conf</filename> file or other
configuration file falls out of compliance with your standard,
your configuration management tool can revert or replace the
file and bring your configuration back into a known state.
Finally, a configuration management tool can also be used to
deploy updates, simplifying the security patch process.
tools have a broad range of capabilities that are useful in this
space. The key point for securing your cloud is to choose a tool
for configuration management and use it.</para>
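The revert-on-drift behavior described above can be sketched in a few lines. Real configuration management tools do this declaratively and at scale; this minimal example, with temporary files standing in for <filename>nova.conf</filename>, only illustrates the detect-and-restore idea.

```python
# Sketch of drift detection: compare a managed file against a
# known-good copy and restore it when it falls out of compliance.
# File contents and paths are illustrative.
import hashlib
import os
import shutil
import tempfile

def file_digest(path):
    with open(path, "rb") as handle:
        return hashlib.sha256(handle.read()).hexdigest()

def enforce(known_good, managed):
    """Revert the managed file to the known-good state if it drifted."""
    if file_digest(managed) != file_digest(known_good):
        shutil.copyfile(known_good, managed)
        return "reverted"
    return "compliant"

# Demonstration with temporary files standing in for nova.conf.
workdir = tempfile.mkdtemp()
good = os.path.join(workdir, "nova.conf.known_good")
live = os.path.join(workdir, "nova.conf")
with open(good, "w") as handle:
    handle.write("[DEFAULT]\nauth_strategy = keystone\n")
with open(live, "w") as handle:
    handle.write("[DEFAULT]\nauth_strategy = noauth\n")  # unauthorized change

print(enforce(good, live))  # reverted
print(enforce(good, live))  # compliant
```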
<para>There are many configuration management solutions; at the
time of this writing there are two in the marketplace that are
robust in their support of OpenStack environments:
<glossterm>Chef</glossterm> and <glossterm>Puppet</glossterm>.
A non-exhaustive listing of tools in this space is provided
below:</para>
<itemizedlist>
<listitem>
<para>Chef</para>
</listitem>
<listitem>
<para>Puppet</para>
</listitem>
<listitem>
<para>Salt Stack</para>
</listitem>
<listitem>
<para>Ansible</para>
</listitem>
</itemizedlist>
<section xml:id="ch012_configuration-management-idp8400">
<title>Policy Changes</title>
<para>Whenever a policy or configuration management is changed,
it is good practice to log the activity, and backup a copy of
the new set. Often, such policies and configurations are
stored in a version controlled repository such as git.</para>
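The logging-and-backup practice above can be sketched as follows. In practice the snapshots would land in a version-controlled repository such as git; an in-memory history list is used here only to keep the example self-contained, and the policy rules shown are illustrative.

```python
# Sketch: whenever a policy set changes, record a timestamped snapshot
# of the new set. A real deployment would commit the snapshot to a
# version-controlled repository instead of an in-memory list.
import json
import time

def record_policy_change(policy, history):
    """Append a timestamped deep-copied snapshot of the policy set."""
    history.append({
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "policy": json.loads(json.dumps(policy)),  # deep copy via JSON
    })
    return len(history)

history = []
policy = {"compute:start": "rule:admin_or_owner"}  # illustrative rule
record_policy_change(policy, history)
policy["compute:start"] = "rule:admin_api"         # the change
record_policy_change(policy, history)
print(len(history))
```

Because each snapshot is deep-copied, earlier history entries are not affected by later edits to the live policy set.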
</section>
</section>
<section xml:id="ch012_configuration-management-idp10160">
<title>Secure Backup and Recovery</title>
<para>It is important to include backup procedures and policies
in the overall System Security Plan. For a good overview of
OpenStack's backup and recovery capabilities and procedures,
refer to the <citetitle>OpenStack Operations
Guide</citetitle>.</para>
<section xml:id="ch012_configuration-management-idp123104">
<title>Security Considerations</title>
<itemizedlist>
<listitem>
<para>Ensure only authenticated users and backup clients
have access to the backup server.</para>
</listitem>
<listitem>
<para>Use data encryption options for storage and
transmission of backups.</para>
</listitem>
<listitem>
<para>Use dedicated and hardened backup servers. The logs
for the backup server must be monitored daily and be
accessible to only a few individuals.</para>
</listitem>
<listitem>
<para>Test data recovery options regularly. One of the
things that can be restored from secured backups is the
images. In case of a compromise, the best practice would
be to terminate running instances immediately and then
relaunch the instances from the images in the secured
backup repository.</para>
</listitem>
</itemizedlist>
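Two of the considerations above, restricting who can use backups and protecting them in storage and transit, can be illustrated with an integrity tag. This is a deliberately simplified sketch: the key below is a placeholder, and real deployments would manage keys and encryption through their backup tooling.

```python
# Sketch: seal each backup with an HMAC so tampering is detected
# before a restore. The key is an illustrative placeholder; real key
# management is out of scope for this sketch.
import hashlib
import hmac

BACKUP_KEY = b"example-key-rotate-me"  # placeholder secret

def seal(backup_bytes):
    """Return a MAC to store alongside the backup."""
    return hmac.new(BACKUP_KEY, backup_bytes, hashlib.sha256).hexdigest()

def verify(backup_bytes, mac):
    """Check a backup's integrity before restoring from it."""
    return hmac.compare_digest(seal(backup_bytes), mac)

data = b"snapshot-of-image"
tag = seal(data)
print(verify(data, tag))                 # True
print(verify(data + b"tampered", tag))   # False
```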
</section>
<section xml:id="ch012_configuration-management-idp128032">
<title>References</title>
<itemizedlist>
<listitem>
<para><citetitle>OpenStack Operations Guide</citetitle> on
<link
xlink:href="http://docs.openstack.org/trunk/openstack-ops/content/backup_and_recovery.html"
>backup and recovery</link></para>
</listitem>
<listitem>
<para><link
xlink:href="http://www.sans.org/reading_room/whitepapers/backup/security-considerations-enterprise-level-backups_515"
>http://www.sans.org/reading_room/whitepapers/backup/security-considerations-enterprise-level-backups_515</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.music-piracy.com/?p=494"
>OpenStack Security Primer</link></para>
</listitem>
</itemizedlist>
</section>
<section xml:id="ch012_configuration-management-idp10160">
<title>Secure Backup and Recovery</title>
<para>It is important to include Backup procedures and policies in the overall System Security Plan. For a good overview of OpenStack's Backup and Recovery capabilities and procedures, please refer to the OpenStack Operations Guide.</para>
<section xml:id="ch012_configuration-management-idp123104">
<title>Security Considerations</title>
<itemizedlist><listitem>
<para>Ensure only authenticated users and backup clients have access to the backup server.</para>
</listitem>
<listitem>
<para>Use data encryption options for storage and transmission of backups.</para>
</listitem>
<listitem>
<para>Use a dedicated and hardened backup server(s). The backup server's logs should be monitored daily and should be accessible by only few individuals.</para>
</listitem>
<listitem>
<para>Test data recovery options regularly. One of the things that can be restored from secured backups is the images. In case of a compromise, the best practice would be to terminate running instances immediately and then relaunch the instances from the images in the secured backup repository.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="ch012_configuration-management-idp128032">
<title>References</title>
<itemizedlist><listitem>
<para><citetitle>OpenStack Operations Guide</citetitle> on <link xlink:href="http://docs.openstack.org/trunk/openstack-ops/content/backup_and_recovery.html">backup and recovery</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.sans.org/reading_room/whitepapers/backup/security-considerations-enterprise-level-backups_515">http://www.sans.org/reading_room/whitepapers/backup/security-considerations-enterprise-level-backups_515</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.music-piracy.com/?p=494">OpenStack Security Primer</link></para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="ch012_configuration-management-idp131856">
<title>Security Auditing Tools</title>
<para>Security auditing tools can complement the configuration management tools. Security auditing tools automate the process of verifying that a large number of security controls are satisfied for a given system configuration. These tools help to bridge the gap from security configuration guidance documentation (for example, the STIG and NSA Guides) to a specific system installation. For example, <link xlink:href="https://fedorahosted.org/scap-security-guide/">SCAP</link> can compare a running system to a pre-defined profile. SCAP outputs a report detailing which controls in the profile were satisfied, which ones failed, and which ones were not checked.</para>
<para>Configuration management and security auditing tools are a powerful combination: the auditing tools highlight deployment concerns, and the configuration management tools simplify the process of changing each system to address them. Used together in this fashion, these tools help to maintain a cloud that satisfies security requirements ranging from basic hardening to compliance validation.</para>
<para>Configuration management and security auditing tools will introduce another layer of complexity into the cloud. This complexity brings with it additional security concerns. We view this as an acceptable risk trade-off, given their security benefits. Securing the operational use of these tools is beyond the scope of this guide.</para>
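<para>The pass/fail/not-checked report model that SCAP produces can be sketched in a few lines. The following is an illustrative toy, not SCAP itself; the control names and values are invented for the example.</para>

```python
# Toy illustration of the audit report model described above: each
# control in a profile is compared against the running configuration
# and lands in one of three buckets. Control names are invented.
def audit(profile, system):
    report = {"pass": [], "fail": [], "notchecked": []}
    for control, expected in profile.items():
        actual = system.get(control)
        if actual is None:
            report["notchecked"].append(control)   # nothing to compare
        elif actual == expected:
            report["pass"].append(control)
        else:
            report["fail"].append(control)
    return report

profile = {"ssh_root_login": "disabled",
           "selinux": "enforcing",
           "aide_installed": "yes"}
observed = {"ssh_root_login": "disabled",
            "selinux": "permissive"}
print(audit(profile, observed))
```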
</section>
</chapter>
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://docbook.org/ns/docbook" xmlns:db="http://docbook.org/ns/docbook" version="5.0" xml:id="ch013_node-bootstrapping"><?dbhtml stop-chunking?>
<title>Integrity Life-cycle</title>
<para>We define the integrity life cycle as a deliberate process that provides assurance that we are always running the expected software with the expected configurations throughout the cloud. This process begins with secure bootstrapping and is maintained through configuration management and security monitoring. This chapter provides recommendations on how to approach the integrity life-cycle process.</para>
<section xml:id="ch013_node-bootstrapping-idp44768">
<title>Secure Bootstrapping</title>
<para>Nodes in the cloud -- including compute, storage, network, service, and hybrid nodes -- should have an automated provisioning process. This ensures that nodes are provisioned consistently and correctly. This also facilitates security patching, upgrading, bug fixing, and other critical changes. Since this process installs new software that runs at the highest privilege levels in the cloud, it is important to verify that the correct software is installed. This includes the earliest stages of the boot process.</para>
<para>There are a variety of technologies that enable verification of these early boot stages. These typically require hardware support such as the trusted platform module (TPM), Intel Trusted Execution Technology (TXT), dynamic root of trust measurement (DRTM), and Unified Extensible Firmware Interface (UEFI) secure boot. In this book, we refer to all of these collectively as <emphasis>secure boot technologies</emphasis>. We recommend using secure boot, while acknowledging that many of the pieces necessary to deploy it require advanced technical skills to customize the tools for each environment. Utilizing secure boot requires deeper integration and customization than many of the other recommendations in this guide. TPM technology, while common in business-class laptops and desktops for several years, is only now becoming available in servers together with supporting BIOS. Proper planning is essential to a successful secure boot deployment.</para>
<para>A complete tutorial on secure boot deployment is beyond the scope of this book. Instead, here we provide a framework for how to integrate secure boot technologies with the typical node provisioning process. For additional details, cloud architects should refer to the related specifications and software configuration manuals.</para>
<section xml:id="ch013_node-bootstrapping-idp48720">
<title>Node Provisioning</title>
<para>Nodes should use Preboot eXecution Environment (PXE) for provisioning. This significantly reduces the effort required for redeploying nodes. The typical process involves the node receiving various boot stages (i.e., progressively more complex software to execute) from a server.</para>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="203" contentwidth="274" fileref="static/node-provisioning-pxe.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/node-provisioning-pxe.png" format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>We recommend using a separate, isolated network within the management security domain for provisioning. This network handles all PXE traffic, along with the subsequent boot stage downloads depicted above. Note that the node boot process begins with two insecure operations: DHCP and TFTP. The boot process then downloads over SSL the remaining information required to deploy the node, such as an initramfs and a kernel. The process concludes by downloading whatever is needed to complete the deployment. This may be an operating system installer, a basic install managed by <link xlink:href="http://www.opscode.com/chef/">Chef</link> or <link xlink:href="https://puppetlabs.com/">Puppet</link>, or even a complete file system image that is written directly to disk.</para>
<para>While utilizing SSL during the PXE boot process is somewhat more challenging, common PXE firmware projects, such as iPXE, provide this support. Typically this involves building the PXE firmware with knowledge of the allowed SSL certificate chain(s) so that it can properly validate the server certificate. This raises the bar for an attacker by limiting the number of insecure, plaintext network operations.</para>
</section>
<section xml:id="ch013_node-bootstrapping-idp58144">
<title>Verified Boot</title>
<para>In general, there are two different strategies for verifying the boot process. Traditional <emphasis>secure boot</emphasis> will validate the code run at each step in the process, and stop the boot if code is incorrect. <emphasis>Boot attestation</emphasis> will record which code is run at each step, and provide this information to another machine as proof that the boot process completed as expected. In both cases, the first step is to measure each piece of code before it is run. In this context, a measurement is effectively a SHA-1 hash of the code, taken before it is executed. The hash is stored in a platform configuration register (PCR) in the TPM.</para>
<para>Note: SHA-1 is used here because this is what the TPM chips support.</para>
<para>Each TPM has at least 24 PCRs. The TCG Generic Server Specification, v1.0, March 2005, defines the PCR assignments for boot-time integrity measurements. The table below shows a typical PCR configuration. The context indicates if the values are determined based on the node hardware (firmware) or the software provisioned onto the node. Some values are influenced by firmware versions, disk sizes, and other low-level information. Therefore, it is important to have good practices in place around configuration management to ensure that each system deployed is configured exactly as desired.</para>
<informaltable rules="all" width="80%"><colgroup><col/><col/><col/></colgroup>
<tbody>
<tr>
<td><para><emphasis role="bold">Register</emphasis></para></td>
<td><para><emphasis role="bold">What Is Measured</emphasis></para></td>
<td><para><emphasis role="bold">Context</emphasis></para></td>
</tr>
<tr>
<td><para>PCR-00</para></td>
<td><para>Core Root of Trust Measurement (CRTM), Bios code, Host platform extensions</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-01</para></td>
<td><para>Host Platform Configuration</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-02</para></td>
<td><para>Option ROM Code </para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-03</para></td>
<td><para>Option ROM Configuration and Data </para></td>
<td><para>Hardware </para></td>
</tr>
<tr>
<td><para>PCR-04</para></td>
<td><para>Initial Program Loader (IPL) Code. For example, master boot record.</para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-05</para></td>
<td><para>IPL Code Configuration and Data </para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-06</para></td>
<td><para>State Transition and Wake Events </para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-07</para></td>
<td><para>Host Platform Manufacturer Control </para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-08</para></td>
<td><para>Platform specific, often Kernel, Kernel Extensions, and Drivers</para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-09</para></td>
<td><para>Platform specific, often Initramfs</para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-10 to PCR-23</para></td>
<td><para>Platform specific </para></td>
<td><para>Software </para></td>
</tr>
</tbody>
</informaltable>
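<para>The measure-then-extend operation behind these PCR values is simple enough to sketch. The following Python fragment simulates how successive boot stages accumulate into a single register; the stage contents are placeholders, but the extend rule (new value is the SHA-1 of the old value concatenated with the measurement) matches the TPM 1.2 semantics described above.</para>

```python
# Illustrative simulation (not a TPM driver) of measured boot: each
# stage is hashed before execution, and the PCR is "extended" so the
# final value depends on every stage and on their order.
import hashlib

def measure(code: bytes) -> bytes:
    # A measurement is the SHA-1 hash of the code, taken before execution.
    return hashlib.sha1(code).digest()

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM 1.2 PCR extend: new value = SHA-1(old value || measurement).
    return hashlib.sha1(pcr + measurement).digest()

pcr4 = b"\x00" * 20  # PCRs are zeroed at power-on
for stage in (b"master boot record", b"pxe firmware", b"kernel"):
    pcr4 = extend(pcr4, measure(stage))

# Tampering with or reordering any stage produces a different final value.
print(pcr4.hex())
```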
<para>At the time of this writing, very few clouds are using secure boot technologies in a production environment. As a result, these technologies are still somewhat immature. We recommend planning carefully in terms of hardware selection; for example, ensure that you have a TPM and Intel TXT support. Then verify how the node hardware vendor populates the PCR values; for example, which values will be available for validation. Typically the PCR values listed under the software context in the table above are the ones that a cloud architect has direct control over. But even these may change as the software in the cloud is upgraded. Configuration management should be linked into the PCR policy engine to ensure that the validation is always up to date.</para>
<para>Each manufacturer must provide the BIOS and firmware code for their servers. Different servers, hypervisors, and operating systems will choose to populate different PCRs.  In most real world deployments, it will be impossible to validate every PCR against a known good quantity ("golden measurement"). Experience has shown that, even within a single vendor's product line, the measurement process for a given PCR may not be consistent. We recommend establishing a baseline for each server and monitoring the PCR values for unexpected changes. Third-party software may be available to assist in the TPM provisioning and monitoring process, depending upon your chosen hypervisor solution.</para>
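<para>In practice, the "baseline and monitor" recommendation reduces to comparing each server's reported PCR values against a recorded known-good set. A minimal sketch follows; the server names and hex values are placeholders, not real measurements.</para>

```python
# Hedged sketch of per-server PCR baseline monitoring: record a
# known-good baseline once, then flag any register whose reported
# value deviates from it. Values here are invented placeholders.
def pcr_drift(baseline, observed):
    return sorted(reg for reg, good in baseline.items()
                  if observed.get(reg) != good)

baseline = {"PCR-00": "f2a1", "PCR-04": "9c3b", "PCR-08": "77d0"}
healthy = dict(baseline)
patched = dict(baseline, **{"PCR-08": "1e4f"})  # kernel upgrade changed PCR-08

assert pcr_drift(baseline, healthy) == []
print(pcr_drift(baseline, patched))
```

A drift in a software-context register may simply reflect an upgrade, which is why the text recommends linking configuration management into the PCR policy engine.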
<para>The initial program loader (IPL) code will most likely be the PXE firmware, assuming the node deployment strategy outlined above. Therefore, the secure boot or boot attestation process can measure all of the early stage boot code, such as the BIOS and firmware, the PXE firmware, and the node kernel. Ensuring that each node has the correct versions of these pieces installed provides a solid foundation on which to build the rest of the node software stack.</para>
<para>Depending on the strategy selected, in the event of a failure the node will either fail to boot or it can report the failure back to another entity in the cloud. For secure boot, the node will fail to boot and a provisioning service within the management security domain must recognize this and log the event. For boot attestation, the node will already be running when the failure is detected. In this case the node should be immediately quarantined by disabling its network access. Then the event should be analyzed for the root cause. In either case, policy should dictate how to proceed after a failure. A cloud may automatically attempt to reprovision a node a certain number of times. Or it may immediately notify a cloud administrator to investigate the problem. The right policy here will be deployment and failure mode specific.</para>
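<para>The failure-handling policy choices above can be captured in a few lines. This is a hypothetical sketch of a boot attestation failure handler, not code from any OpenStack service; the retry limit and action names are invented.</para>

```python
# Hypothetical handler for a boot attestation failure, following the
# policy options described above: quarantine the node immediately, then
# either re-provision (up to a limit) or escalate to an administrator.
def handle_attestation_failure(node, failure_count, max_reprovisions=3):
    actions = ["log-event", "quarantine"]   # disable network access first
    if failure_count < max_reprovisions:
        actions.append("reprovision")
    else:
        actions.append("notify-admin")      # repeated failures need a human
    return actions

print(handle_attestation_failure("compute-07", failure_count=1))
```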
</section>
<section xml:id="ch013_node-bootstrapping-idp3728">
<title>Node Hardening</title>
<para>At this point we know that the node has booted with the correct kernel and underlying components. There are many paths for hardening a given operating system deployment. The specifics of these steps are outside the scope of this book. We recommend following the guidance from a hardening guide specific to your operating system. For example, the <link xlink:href="http://iase.disa.mil/stigs/">security technical implementation guides</link> (STIG) and the <link xlink:href="http://www.nsa.gov/ia/mitigation_guidance/security_configuration_guides/">NSA guides</link> are useful starting places.</para>
<para>The nature of the nodes makes additional hardening possible. We recommend the following additional steps for production nodes:</para>
<itemizedlist><listitem>
<para>Use a read-only file system where possible. Ensure that writeable file systems do not permit execution. This can be handled through the mount options provided in <literal>/etc/fstab</literal>.</para>
</listitem>
<listitem>
<para>Use a mandatory access control policy to contain the instances, the node services, and any other critical processes and data on the node. See the discussions on sVirt / SELinux and AppArmor below.</para>
</listitem>
<listitem>
<para>Remove any unnecessary software packages. This should result in a very stripped down installation because a compute node has a relatively small number of dependencies.</para>
</listitem>
</itemizedlist>
<para>Finally, the node kernel should have a mechanism to validate that the rest of the node starts in a known good state. This provides the necessary link from the boot validation process to validating the entire system. The steps for doing this will be deployment specific. As an example, a kernel module could verify a hash over the blocks comprising the file system before mounting it using <link xlink:href="https://code.google.com/p/cryptsetup/wiki/DMVerity">dm-verity</link>.</para>
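<para>The dm-verity approach can be illustrated at a much smaller scale. The sketch below hashes fixed-size blocks and refuses a file system image whose blocks deviate from a trusted baseline; real dm-verity does this in-kernel with a Merkle tree, so this is only a model of the idea.</para>

```python
# Illustrative sketch of block-level integrity verification in the
# spirit of dm-verity: hash each fixed-size block and compare against
# a trusted hash list recorded at provisioning time.
import hashlib

BLOCK = 4096

def block_hashes(data: bytes):
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def verify(data: bytes, trusted):
    # Refuse to "mount" if any block deviates from the trusted baseline.
    return block_hashes(data) == trusted

image = b"\x00" * (3 * BLOCK)
baseline = block_hashes(image)                       # recorded once, kept safe
tampered = image[:BLOCK] + b"\x01" + image[BLOCK + 1:]

assert verify(image, baseline)
assert not verify(tampered, baseline)
```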
</section>
  </section>
<section xml:id="ch013_node-bootstrapping-idp11376">
<title>Runtime Verification</title>
<para>Once the node is running, we need to ensure that it remains in a good state over time. Broadly speaking, this includes both configuration management and security monitoring. The goals for each of these areas are different. By checking both, we achieve higher assurance that the system is operating as desired. We discuss configuration management in the management section, and security monitoring below.</para>
<section xml:id="ch013_node-bootstrapping-idp135504">
<title>Intrusion Detection System</title>
<para>Host-based intrusion detection tools are also useful for automated validation of the cloud internals. There are a wide variety of host-based intrusion detection tools available. Some are open source projects that are freely available, while others are commercial. Typically these tools analyze data from a variety of sources and produce security alerts based on rule sets and/or training. Typical capabilities include log analysis, file integrity checking, policy monitoring, and rootkit detection. More advanced -- often custom -- tools can validate that in-memory process images match the on-disk executable and validate the execution state of a running process.</para>
<para>One critical policy decision for a cloud architect is what to do with the output from a security monitoring tool. There are effectively two options. The first is to alert a human to investigate and/or take corrective action. This could be done by including the security alert in a log or events feed for cloud administrators. The second option is to have the cloud take some form of remedial action automatically, in addition to logging the event. Remedial actions could include anything from re-installing a node to performing a minor service configuration. However, automated remedial action can be challenging due to the possibility of false positives.</para>
<para>False positives occur when the security monitoring tool produces a security alert for a benign event. Due to the nature of security monitoring tools, false positives will most certainly occur from time to time. Typically a cloud administrator can tune security monitoring tools to reduce the false positives, but this may also reduce the overall detection rate at the same time. These classic trade-offs must be understood and accounted for when setting up a security monitoring system in the cloud.</para>
<para>The selection and configuration of a host-based intrusion detection tool is highly deployment specific. We recommend starting by exploring the following open source projects which implement a variety of host-based intrusion detection and file monitoring features.</para>
<itemizedlist><listitem>
<para><link xlink:href="http://www.ossec.net/">OSSEC</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://la-samhna.de/samhain/">Samhain</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://sourceforge.net/projects/tripwire/">Tripwire</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://aide.sourceforge.net/">AIDE</link></para>
</listitem>
</itemizedlist>
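<para>As a minimal illustration of the rule-based log analysis these tools share, consider the toy scanner below. The rules and log lines are fabricated for the example; production tools add training, file integrity baselines, and rootkit checks.</para>

```python
# Toy rule-based log scanner: the simplest capability shared by the
# host-based intrusion detection tools listed above. Rules and sample
# log lines are fabricated for illustration.
import re

RULES = [
    (re.compile(r"Failed password for (?:invalid user )?\S+"), "ssh-bruteforce"),
    (re.compile(r"segfault at"), "process-crash"),
]

def scan(lines):
    alerts = []
    for line in lines:
        for pattern, tag in RULES:
            if pattern.search(line):
                alerts.append((tag, line))
    return alerts

log = [
    "sshd[811]: Failed password for invalid user admin from 203.0.113.9",
    "kernel: worker[202]: segfault at 0 ip 00007f9c",
    "sshd[812]: Accepted publickey for cloud-admin",
]
for tag, line in scan(log):
    print(tag, "->", line)
```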
<para>Network intrusion detection tools complement the host-based tools. OpenStack doesn't have a specific network IDS built-in, but OpenStack's networking component, Neutron, provides a plugin mechanism to enable different technologies via the Neutron API. This plugin architecture will allow tenants to develop API extensions to insert and configure their own advanced networking services like a firewall, an intrusion detection system, or a VPN between the VMs.</para>
<para>Similar to host-based tools, the selection and configuration of a network-based intrusion detection tool is deployment specific. <link xlink:href="http://www.snort.org/">Snort</link> is the leading open source networking intrusion detection tool, and a good starting place to learn more.</para>
<para>There are a few important security considerations for network and host-based intrusion detection systems.</para>
<itemizedlist><listitem>
<para>It is important to consider the placement of the Network IDS on the cloud; for example, adding it to the network boundary and/or around sensitive networks. The placement depends on your network environment, but make sure to monitor the impact the IDS may have on your services depending on where you choose to add it. Encrypted traffic, such as SSL, cannot generally be inspected for content by a Network IDS. However, the Network IDS may still provide some benefit in identifying anomalous unencrypted traffic on the network.</para>
</listitem>
<listitem>
<para>In some deployments it may be required to add a host-based IDS on sensitive components on security domain bridges. A host-based IDS may detect anomalous activity by compromised or unauthorized processes on the component. The IDS should transmit alert and log information on the Management network.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="ch013_node-bootstrapping-idp58144">
<title>Verified Boot</title>
<para>In general, there are two different strategies for
verifying the boot process. Traditional <emphasis>secure
boot</emphasis> will validate the code run at each step in
the process, and stop the boot if code is incorrect.
<emphasis>Boot attestation</emphasis> will record which code
is run at each step, and provide this information to another
machine as proof that the boot process completed as expected.
In both cases, the first step is to measure each piece of code
before it is run. In this context, a measurement is
effectively a SHA-1 hash of the code, taken before it is
executed.  The hash is stored in a platform configuration
register (PCR) in the TPM.</para>
<para>Note: SHA-1 is used here because this is what the TPM
chips support.</para>
<para>Each TPM has at least 24 PCRs. The TCG Generic Server
Specification, v1.0, March 2005, defines the PCR assignments
for boot-time integrity measurements. The table below shows a
typical PCR configuration. The context indicates if the values
are determined based on the node hardware (firmware) or the
software provisioned onto the node. Some values are influenced
by firmware versions, disk sizes, and other low-level
information. Therefore, it is important to have good practices
in place around configuration management to ensure that each
system deployed is configured exactly as desired.</para>
<informaltable rules="all" width="80%">
<colgroup>
<col/>
<col/>
<col/>
</colgroup>
<tbody>
<tr>
<td><para><emphasis role="bold"
>Register</emphasis></para></td>
<td><para><emphasis role="bold">What Is
Measured</emphasis></para></td>
<td><para><emphasis role="bold"
>Context</emphasis></para></td>
</tr>
<tr>
<td><para>PCR-00</para></td>
<td><para>Core Root of Trust Measurement (CRTM), Bios
code, Host platform extensions</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-01</para></td>
<td><para>Host Platform Configuration</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-02</para></td>
<td><para>Option ROM Code </para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-03</para></td>
<td><para>Option ROM Configuration and Data </para></td>
<td><para>Hardware </para></td>
</tr>
<tr>
<td><para>PCR-04</para></td>
<td><para>Initial Program Loader (IPL) Code. For example,
master boot record.</para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-05</para></td>
<td><para>IPL Code Configuration and Data </para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-06</para></td>
<td><para>State Transition and Wake Events </para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-07</para></td>
<td><para>Host Platform Manufacturer Control </para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-08</para></td>
<td><para>Platform specific, often Kernel, Kernel
Extensions, and Drivers</para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-09</para></td>
<td><para>Platform specific, often Initramfs</para></td>
<td><para>Software </para></td>
</tr>
<tr>
<td><para>PCR-10 to PCR-23</para></td>
<td><para>Platform specific </para></td>
<td><para>Software </para></td>
</tr>
</tbody>
</informaltable>
<para>At the time of this writing, very few clouds are using
secure boot technologies in a production environment. As a
result, these technologies are still somewhat immature. We
recommend planning carefully in terms of hardware selection.
For example, ensure that you have a TPM and Intel TXT support.
Then verify how the node hardware vendor populates the PCR
values. For example, which values will be available for
validation. Typically the PCR values listed under the software
context in the table above are the ones that a cloud architect
has direct control over. But even these may change as the
software in the cloud is upgraded.  Configuration management
should be linked into the PCR policy engine to ensure that the
validation is always up to date.</para>
<para>Each manufacturer must provide the BIOS and firmware code
for their servers. Different servers, hypervisors, and
operating systems will choose to populate different PCRs.  In
most real world deployments, it will be impossible to validate
every PCR against a known good quantity ("golden
measurement"). Experience has shown that, even within a single
vendor's product line, the measurement process for a given PCR
may not be consistent. We recommend establishing a baseline
for each server and monitoring the PCR values for unexpected
changes. Third-party software may be available to assist in
the TPM provisioning and monitoring process, depending upon
your chosen hypervisor solution.</para>
<para>The initial program loader (IPL) code will most likely be
the PXE firmware, assuming the node deployment strategy
outlined above. Therefore, the secure boot or boot attestation
process can measure all of the early stage boot code, such as,
bios, firmware, and the like, the PXE firmware, and the node
kernel. Ensuring that each node has the correct versions of
these pieces installed provides a solid foundation on which to
build the rest of the node software stack.</para>
<para>Depending on the strategy selected, in the event of a
failure the node will either fail to boot or it can report the
failure back to another entity in the cloud. For secure boot,
the node will fail to boot and a provisioning service within
the management security domain must recognize this and log the
event. For boot attestation, the node will already be running
when the failure is detected. In this case the node should be
immediately quarantined by disabling its network access. Then
the event should be analyzed for the root cause. In either
case, policy should dictate how to proceed after a failure. A
cloud may automatically attempt to re-provision a node a
certain number of times. Or it may immediately notify a cloud
administrator to investigate the problem. The right policy
here will be deployment and failure mode specific.</para>
</section>
</chapter>
<section xml:id="ch013_node-bootstrapping-idp3728">
<title>Node Hardening</title>
<para>At this point we know that the node has booted with the
correct kernel and underlying components. There are many paths
for hardening a given operating system deployment. The
specifics on these steps are outside of the scope of this
book.  We recommend following the guidance from a hardening
guide specific to your operating system.  For example, the
<link xlink:href="http://iase.disa.mil/stigs/">security
technical implementation guides</link> (STIG) and the <link
xlink:href="http://www.nsa.gov/ia/mitigation_guidance/security_configuration_guides/"
>NSA guides</link> are useful starting places.</para>
<para>The nature of the nodes makes additional hardening
possible. We recommend the following additional steps for
production nodes:</para>
<itemizedlist>
<listitem>
<para>Use a read-only file system where possible. Ensure
that writeable file systems do not permit execution.  This
can be handled through the mount options provided in
<literal>/etc/fstab</literal>.</para>
</listitem>
<listitem>
<para>Use a mandatory access control policy to contain the
instances, the node services, and any other critical
processes and data on the node.  See the discussions on
sVirt / SELinux and AppArmor below.</para>
</listitem>
<listitem>
<para>Remove any unnecessary software packages. This should
result in a very stripped down installation because a
compute node has a relatively small number of
dependencies.</para>
</listitem>
</itemizedlist>
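As an illustration of the first item above, restrictive mount options for writable file systems can be expressed in <literal>/etc/fstab</literal>. The devices and mount points in this sketch are hypothetical; adjust them for your deployment:

```
# Root file system mounted read-only; writable file systems
# forbid device nodes, setuid binaries, and execution.
/dev/sda1  /     ext4  defaults,ro                    0  1
/dev/sda2  /var  ext4  defaults,nodev,nosuid,noexec   0  2
/dev/sda3  /tmp  ext4  defaults,nodev,nosuid,noexec   0  2
```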
<para>Finally, the node kernel should have a mechanism to
validate that the rest of the node starts in a known good
state. This provides the necessary link from the boot
validation process to validating the entire system. The steps
for doing this will be deployment specific. As an example, a
kernel module could verify a hash over the blocks comprising
the file system before mounting it using <link
xlink:href="https://code.google.com/p/cryptsetup/wiki/DMVerity"
>dm-verity</link>.</para>
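As a rough conceptual illustration of the dm-verity approach, the following Python sketch hashes fixed-size blocks of an image and collapses them into a single trusted root value that changes if any block is tampered with. Real dm-verity builds a full hash tree and enforces verification in the kernel at read time; this is only the idea in miniature:

```python
import hashlib

BLOCK_SIZE = 4096  # dm-verity's default data block size


def block_hashes(data):
    """Hash each fixed-size block of the image."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]


def root_hash(data):
    """Collapse the per-block hashes into a single trusted root value."""
    return hashlib.sha256(b"".join(block_hashes(data))).digest()


def verify(data, trusted_root):
    """Refuse to 'mount' unless every block still matches the root."""
    return root_hash(data) == trusted_root


image = b"A" * (2 * BLOCK_SIZE)   # stand-in for a file system image
trusted = root_hash(image)        # recorded at provisioning time
assert verify(image, trusted)

tampered = b"B" + image[1:]       # flip one byte in the first block
assert not verify(tampered, trusted)
```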
</section>
</section>
<section xml:id="ch013_node-bootstrapping-idp11376">
<title>Runtime Verification</title>
<para>Once the node is running, we need to ensure that it remains
in a good state over time. Broadly speaking, this includes both
configuration management and security monitoring. The goals for
each of these areas are different. By checking both, we achieve
higher assurance that the system is operating as desired. We
discuss configuration management in the management section, and
security monitoring below.</para>
<section xml:id="ch013_node-bootstrapping-idp135504">
<title>Intrusion Detection System</title>
<para>Host-based intrusion detection tools are also useful for
automated validation of the cloud internals. There are a wide
variety of host-based intrusion detection tools available.
Some are open source projects that are freely available, while
others are commercial. Typically these tools analyze data from
a variety of sources and produce security alerts based on rule
sets and/or training. Typical capabilities include log
analysis, file integrity checking, policy monitoring, and
rootkit detection. More advanced, often custom, tools can
validate that in-memory process images match the on-disk
executable and validate the execution state of a running
process.</para>
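The file integrity checking that these tools automate can be sketched minimally in Python: record a cryptographic digest of each monitored file, then later report any file whose digest has changed. The monitored file below is a temporary stand-in; real tools such as AIDE or Tripwire add signed databases, scheduling, and alerting on top of this core idea:

```python
import hashlib
import os
import tempfile


def baseline(paths):
    """Record a SHA-256 digest for every monitored file."""
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests


def changed_files(paths, known):
    """Return the files whose current digest differs from the baseline."""
    current = baseline(paths)
    return [p for p in paths if current.get(p) != known.get(p)]


# Demo against a temporary file standing in for a system binary.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "sshd")
    with open(target, "wb") as f:
        f.write(b"original build")
    known = baseline([target])
    assert changed_files([target], known) == []

    with open(target, "wb") as f:   # simulate tampering
        f.write(b"trojaned build")
    assert changed_files([target], known) == [target]
```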
<para>One critical policy decision for a cloud architect is what
to do with the output from a security monitoring tool. There
are effectively two options. The first is to alert a human to
investigate and/or take corrective action. This could be done
by including the security alert in a log or events feed for
cloud administrators. The second option is to have the cloud
take some form of remedial action automatically, in addition
to logging the event. Remedial actions could include anything
from re-installing a node to performing a minor service
configuration. However, automated remedial action can be
challenging due to the possibility of false positives.</para>
<para>False positives occur when the security monitoring tool
produces a security alert for a benign event. Due to the
nature of security monitoring tools, false positives will most
certainly occur from time to time. Typically a cloud
administrator can tune security monitoring tools to reduce the
false positives, but this may also reduce the overall
detection rate at the same time. These classic trade-offs must
be understood and accounted for when setting up a security
monitoring system in the cloud.</para>
<para>The selection and configuration of a host-based intrusion
detection tool is highly deployment specific. We recommend
starting by exploring the following open source projects which
implement a variety of host-based intrusion detection and file
monitoring features.</para>
<itemizedlist>
<listitem>
<para><link xlink:href="http://www.ossec.net/"
>OSSEC</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://la-samhna.de/samhain/"
>Samhain</link></para>
</listitem>
<listitem>
<para><link
xlink:href="http://sourceforge.net/projects/tripwire/"
>Tripwire</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://aide.sourceforge.net/"
>AIDE</link></para>
</listitem>
</itemizedlist>
<para>Network intrusion detection tools complement the
host-based tools. OpenStack does not have a specific network
IDS built-in, but OpenStack's networking component, Neutron,
provides a plugin mechanism to enable different technologies
via the Neutron API. This plugin architecture will allow
tenants to develop API extensions to insert and configure
their own advanced networking services like a firewall, an
intrusion detection system, or a VPN between the VMs.</para>
<para>Similar to host-based tools, the selection and
configuration of a network-based intrusion detection tool is
deployment specific. <link xlink:href="http://www.snort.org/"
>Snort</link> is the leading open source networking
intrusion detection tool, and a good starting place to learn
more.</para>
<para>There are a few important security considerations for
network and host-based intrusion detection systems.</para>
<itemizedlist>
<listitem>
<para>It is important to consider the placement of the
Network IDS on the cloud (for example, adding it to the
network boundary and/or around sensitive networks). The
placement depends on your network environment but make
sure to monitor the impact the IDS may have on your
services depending on where you choose to add it.
Encrypted traffic, such as SSL, cannot generally be
inspected for content by a Network IDS. However, the
Network IDS may still provide some benefit in identifying
anomalous unencrypted traffic on the network.</para>
</listitem>
<listitem>
<para>In some deployments it may be required to add
host-based IDS on sensitive components on security domain
bridges.  A host-based IDS may detect anomalous activity
by compromised or unauthorized processes on the component.
The IDS should transmit alert and log information on the
Management network.</para>
</listitem>
</itemizedlist>
</section>
</section>
</chapter>

View File

@@ -137,10 +137,18 @@
<para>Ensure that the network interfaces are on their own private (management or a separate) network. Segregate management domains with firewalls or other network gear.</para>
</listitem>
<listitem>
<para>If you use a web interface to interact with the <glossterm>BMC</glossterm>/IPMI, always use the SSL interface (e.g. https or port 443). This SSL interface should <emphasis role="bold">NOT</emphasis> use self-signed certificates, as is often default, but should have trusted certificates using the correctly defined fully qualified domain names (FQDNs).</para>
<para>If you use a web interface to interact with the
<glossterm>BMC</glossterm>/IPMI, always use the SSL
interface, such as https or port 443. This SSL interface
should <emphasis role="bold">NOT</emphasis> use
self-signed certificates, as is often default, but should
have trusted certificates using the correctly defined
fully qualified domain names (FQDNs).</para>
</listitem>
<listitem>
<para>Monitor the traffic on the management network. The anomalies may be easier to track than on the busier compute nodes</para>
<para>Monitor the traffic on the management network. The
anomalies might be easier to track than on the busier
compute nodes.</para>
</listitem>
</itemizedlist>
<para>Out of band management interfaces also often include graphical machine console access. It is often possible, although not necessarily default, that these interfaces are encrypted. Consult with your system software documentation for encrypting these interfaces.</para>

View File

@@ -41,7 +41,11 @@
<para>Password Policy Enforcement: Requires user passwords to conform to minimum standards for length, diversity of characters, expiration, or failed login attempts.</para>
</listitem>
<listitem>
<para>Multi-factor authentication: The authentication service requires the user to provide information based on something they have (e.g., a one-time password token or X.509 certificate) and something they know (e.g., a password).</para>
<para>Multi-factor authentication: The authentication
service requires the user to provide information based on
something they have, such as a one-time password token or
X.509 certificate, and something they know, such as a
password.</para>
</listitem>
<listitem>
<para>Kerberos</para>
@@ -61,7 +65,15 @@
<title>Service Authorization</title>
<para>As described in the <link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/index.html"><citetitle>OpenStack Cloud Administrator Guide</citetitle></link>, cloud administrators must define a user for each service, with a role of Admin. This service user account provides the service with the authorization to authenticate users.</para>
<para>The Compute and Object Storage services can be configured to use either the "tempAuth" file or Identity Service to store authentication information. The "tempAuth" solution MUST NOT be deployed in a production environment since it stores passwords in plain text.</para>
<para>The Identity Service supports client authentication for SSL which may be enabled. SSL client authentication provides an additional authentication factor, in addition to the username / password, that provides greater reliability on user identification. It reduces the risk of unauthorized access when usernames and passwords may be compromised.  However, there is additional administrative overhead and cost to issue certificates to users that may not be feasible in every deployment.</para>
<para>The Identity Service supports client authentication for
SSL, which may be enabled. SSL client authentication provides
an additional authentication factor, in addition to the
username / password, that provides greater reliability on user
identification. It reduces the risk of unauthorized access
when user names and passwords may be compromised.  However,
there is additional administrative overhead and cost to issue
certificates to users that may not be feasible in every
deployment.</para>
<note>
<para>We recommend that you use client authentication with SSL for the authentication of services to the Identity Service.</para>
</note>
@@ -82,7 +94,13 @@
</section>
<section xml:id="ch024_authentication-idp267040">
<title>Administrative Users</title>
<para>We recommend that admin users authenticate using Identity Service and an external authentication service that supports 2-factor authentication, such as a certificate.  This reduces the risk from passwords that may be compromised. This recommendation is in compliance with NIST 800-53 IA-2(1) guidance in the use of multifactor authentication for network access to privileged accounts.</para>
<para>We recommend that admin users authenticate using
Identity Service and an external authentication service that
supports 2-factor authentication, such as a certificate.  This
reduces the risk from passwords that may be compromised. This
recommendation is in compliance with NIST 800-53 IA-2(1)
guidance in the use of multi-factor authentication for network
access to privileged accounts.</para>
</section>
<section xml:id="ch024_authentication-idp268960">
<title>End Users</title>
@@ -117,7 +135,9 @@
<section xml:id="ch024_authentication-idp276176">
<title>Tokens</title>
<para>Once a user is authenticated, a token is generated and used internally in OpenStack for authorization and access. The default token <emphasis role="bold">lifespan</emphasis> is<emphasis role="bold"> 24 hours</emphasis>. It is recommended that this value be set lower but caution needs to be taken as some internal services will need sufficient time to complete their work. The cloud may not provide services if tokens expire too early. An example of this would be the time needed by the Compute Service to transfer a disk image onto the hypervisor for local caching.</para>
<para>Below is an example of a PKI token. Note that, in practice, the token id value is very long (e.g., around 3500 bytes), but for brevity we shorten it in this example.</para>
<para>The following example shows a PKI token. Note that, in
practice, the token id value is about 3500 bytes. We shorten it
in this example.</para>
<screen> 
"token": {
"expires": "2013-06-26T16:52:50Z",

View File

@@ -1,99 +1,258 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://docbook.org/ns/docbook" xmlns:db="http://docbook.org/ns/docbook" version="5.0" xml:id="ch025_web-dashboard"><?dbhtml stop-chunking?>
<title>Dashboard</title>
<para>Horizon is the OpenStack dashboard, providing access to a majority of the capabilities available in OpenStack. These include provisioning users, defining instance flavors, uploading VM images, managing networks, setting up security groups, starting instances, and accessing the instances via a console.</para>
<para>The dashboard is based on the Django web framework, therefore secure deployment practices for Django apply directly to Horizon. This guide provides a popular set of Django security recommendations, further information can be found by reading the <link xlink:href="https://docs.djangoproject.com/en/1.5/#security">Django deployment and security documentation</link>.</para>
<para>The dashboard ships with reasonable default security settings, and has good <link xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html">deployment and configuration documentation</link>.</para>
<section xml:id="ch025_web-dashboard-idp237648">
<title>Basic Web Server Configuration</title>
<para>The dashboard should be deployed as a Web Services Gateway Interface (WSGI) application behind an HTTPS proxy such as Apache or nginx. If Apache is not already in use, we recommend nginx since it is lighter weight and easier to configure correctly.</para>
<para>When using nginx, we recommend <link xlink:href="http://docs.gunicorn.org/en/latest/deploy.html">gunicorn</link> as the wsgi host with an appropriate number of synchronous workers. We strongly advise against deployments using fastcgi, scgi, or uWSGI. We strongly advise against the use of synthetic performance benchmarks when choosing a wsgi server.</para>
<para>When using Apache, we recommend <link xlink:href="https://docs.djangoproject.com/en/1.5/howto/deployment/wsgi/modwsgi/">mod_wsgi</link> to host dashboard.</para>
</section>
<section xml:id="ch025_web-dashboard-idp240704">
<title>HTTPS</title>
<para>The dashboard should be deployed behind a secure HTTPS server using a valid, trusted certificate from a recognized certificate authority (CA). Private organization-issued certificates are only appropriate when the root of trust is pre-installed in all user browsers.</para>
<para>HTTP requests to the dashboard domain should be configured to redirect to the fully qualified HTTPS URL.</para>
</section>
<section xml:id="ch025_web-dashboard-idp242624">
<title>HTTP Strict Transport Security (HSTS)</title>
<para>It is highly recommended to use HTTP Strict Transport Security (HSTS).</para>
<para>NOTE: If you are using an HTTPS proxy in front of your web server, rather than using an HTTP server with HTTPS functionality, follow the <link xlink:href="https://docs.djangoproject.com/en/1.5/ref/settings/#secure-proxy-ssl-header">Django documentation on modifying the SECURE_PROXY_SSL_HEADER variable</link>.</para>
<para>See the chapter on PKI/SSL Everywhere for more specific recommendations and server configurations for HTTPS configurations, including the configuration of HSTS.</para>
</section>
<section xml:id="ch025_web-dashboard-idp245456">
<title>Frontend Caching</title>
<para>Since dashboard is rendering dynamic content passed directly from OpenStack API requests, we do not recommend frontend caching layers such as varnish. In Django, static media is directly served from Apache or nginx and already benefits from web host caching.</para>
</section>
<section xml:id="ch025_web-dashboard-idp246880">
<title>Domain Names</title>
<para>Many organizations typically deploy web applications at subdomains of an overarching organization domain. It is natural for users to expect a domain of the form openstack.example.org. In this context, there are often many other applications deployed in the same second-level namespace, often serving user-controlled content. This name structure is convenient and simplifies nameserver maintenance.</para>
<para>We strongly recommend deploying horizon to a <emphasis>second-level domain</emphasis>, for example <uri>https://example.com</uri>, and advise against deploying horizon on a<emphasis> shared subdomain</emphasis> of any level, for example <uri>https://openstack.example.org</uri> or <uri>https://horizon.openstack.example.org</uri>. We also advise against deploying to bare internal domains like <uri>https://horizon/</uri>.</para>
<para>This recommendation is based on the limitations browser same-origin-policy. The recommendations in this guide cannot effectively protect users against known attacks if dashboard is deployed on a domain which also hosts user-generated content (e.g. scripts, images, uploads of any kind) even if the user-generated content is on a different subdomain. This approach is used by most major web presences (e.g. googleusercontent.com, fbcdn.com, github.io, twimg.com) to ensure that user generated content stays separate from cookies and security tokens.</para>
<para>Additionally, if you decline to follow this recommendation above about second-level domains, it is vital that you avoid the cookie backed session store and employ HTTP Strict Transport Security (HSTS). When deployed on a subdomain, dashboard's security is only as strong as the weakest application deployed on the same second-level domain.</para>
</section>
<section xml:id="ch025_web-dashboard-idp251760">
<title>Static Media</title>
<para>Dashboard's static media should be deployed to a subdomain of the dashboard domain and served by the web server. The use of an external content delivery network (CDN) is also acceptable. This subdomain should not set cookies or serve user-provided content. The media should also be served with HTTPS.</para>
<para>Django media settings are documented at <link xlink:href="https://docs.djangoproject.com/en/1.5/ref/settings/#static-root">https://docs.djangoproject.com/en/1.5/ref/settings/#static-root</link>.</para>
<para>Dashboard's default configuration uses <link xlink:href="http://django-compressor.readthedocs.org/">django_compressor</link> to compress and minify css and JavaScript content before serving it. This process should be statically done before deploying dashboard, rather than using the default in-request dynamic compression and copying the resulting files along with deployed code or to the CDN server. Compression should be done in a non-production build environment. If this is not practical, we recommend disabling resource minification entirely. Online compression dependencies (less, nodejs) should not be installed on production machines.</para>
</section>
<section xml:id="ch025_web-dashboard-idp255696">
<title>Secret Key</title>
<para>Dashboard depends on a shared SECRET_KEY setting for some security functions. It should be a randomly generated string at least 64 characters long. It must be shared across all active Horizon instances. Compromise of this key may allow a remote attacker to execute arbitrary code. Rotating this key invalidates existing user sessions and caching. Do not commit this key to public repositories.</para>
</section>
<section xml:id="ch025_web-dashboard-idp257248">
<title>Session Backend</title>
<para>Horizon's default session backend (<emphasis>django.contrib.sessions.backends.signed_cookies</emphasis>) stores user data in <emphasis>signed</emphasis> but <emphasis>unencrypted </emphasis>cookies stored in the browser. This approach allows the most simple session backend scaling since each Horizon instance is stateless, but it comes at the cost of <emphasis>storing sensitive access tokens in the client browser</emphasis> and transmitting them with every request. This backend ensures that session data has not been tampered with, but the data itself is not encrypted other than the encryption provided by HTTPS.</para>
<para>If your architecture allows it, we recommend using <emphasis>django.contrib.sessions.backends.cache</emphasis> as your session backend with memcache as the cache. Memcache must not be exposed publicly, and should communicate over a secured private channel. If you choose to use the signed cookies backend, refer to the Django documentation understand the security tradeoffs.</para>
<para>For further details, consult the <link xlink:href="https://docs.djangoproject.com/en/1.5/topics/http/sessions/#configuring-the-session-engine">Django session backend documentation</link>.</para>
</section>
<section xml:id="ch025_web-dashboard-idp262288">
<title>Allowed Hosts</title>
<para>Configure the ALLOWED_HOSTS setting with the domain or domains where Horizon is available. Failure to configure this setting (especially if not following the recommendation above regarding second level domains) opens Horizon to a number of serious attacks. Wildcard domains should be avoided.</para>
<para>For further details, see the <link xlink:href="https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts">Django documentation on settings</link>.</para>
</section>
<section xml:id="ch025_web-dashboard-idp264272">
<title>Cookies</title>
<para>Session Cookies should be set to HTTPONLY:</para>
<screen> 
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://docbook.org/ns/docbook"
xmlns:db="http://docbook.org/ns/docbook" version="5.0"
xml:id="ch025_web-dashboard">
<?dbhtml stop-chunking?>
<title>Dashboard</title>
<para>Horizon is the OpenStack dashboard, providing access to a
majority of the capabilities available in OpenStack. These include
provisioning users, defining instance flavors, uploading VM
images, managing networks, setting up security groups, starting
instances, and accessing the instances via a console.</para>
  <para>The dashboard is based on the Django web framework;
    therefore, secure deployment practices for Django apply directly
    to Horizon. This guide provides a popular set of Django security
    recommendations; further information can be found by reading the
<link
xlink:href="https://docs.djangoproject.com/en/1.5/#security"
>Django deployment and security documentation</link>.</para>
<para>The dashboard ships with reasonable default security settings,
and has good <link
xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html"
>deployment and configuration documentation</link>.</para>
<section xml:id="ch025_web-dashboard-idp237648">
<title>Basic Web Server Configuration</title>
<para>The dashboard should be deployed as a Web Services Gateway
Interface (WSGI) application behind an HTTPS proxy such as
Apache or nginx. If Apache is not already in use, we recommend
nginx since it is lighter weight and easier to configure
correctly.</para>
<para>When using nginx, we recommend <link
xlink:href="http://docs.gunicorn.org/en/latest/deploy.html"
>gunicorn</link> as the wsgi host with an appropriate number
of synchronous workers. We strongly advise against deployments
using fastcgi, scgi, or uWSGI. We strongly advise against the
use of synthetic performance benchmarks when choosing a wsgi
server.</para>
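A typical gunicorn invocation for this setup might look like the following command-line sketch. The WSGI module path, worker count, and bind address are deployment-specific assumptions, not requirements:

```
# Bind locally and let nginx proxy HTTPS traffic to gunicorn.
gunicorn --workers 4 --bind 127.0.0.1:8000 openstack_dashboard.wsgi:application
```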
<para>When using Apache, we recommend <link
xlink:href="https://docs.djangoproject.com/en/1.5/howto/deployment/wsgi/modwsgi/"
>mod_wsgi</link> to host dashboard.</para>
</section>
<section xml:id="ch025_web-dashboard-idp240704">
<title>HTTPS</title>
<para>The dashboard should be deployed behind a secure HTTPS
server using a valid, trusted certificate from a recognized
certificate authority (CA). Private organization-issued
certificates are only appropriate when the root of trust is
pre-installed in all user browsers.</para>
<para>HTTP requests to the dashboard domain should be configured
to redirect to the fully qualified HTTPS URL.</para>
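With nginx, such a redirect can be expressed as a small server block; the server name below is a hypothetical example:

```
# Redirect all plain-HTTP requests to the canonical HTTPS URL.
server {
    listen 80;
    server_name dashboard.example.com;
    return 301 https://$host$request_uri;
}
```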
</section>
<section xml:id="ch025_web-dashboard-idp242624">
<title>HTTP Strict Transport Security (HSTS)</title>
<para>It is highly recommended to use HTTP Strict Transport
Security (HSTS).</para>
<para>NOTE: If you are using an HTTPS proxy in front of your web
server, rather than using an HTTP server with HTTPS
functionality, follow the <link
xlink:href="https://docs.djangoproject.com/en/1.5/ref/settings/#secure-proxy-ssl-header"
>Django documentation on modifying the SECURE_PROXY_SSL_HEADER
variable</link>.</para>
<para>See the chapter on PKI/SSL Everywhere for more specific
recommendations and server configurations for HTTPS
configurations, including the configuration of HSTS.</para>
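As one illustration, nginx can emit an HSTS header from the HTTPS server block. The one-year max-age shown here is a common choice, not a requirement:

```
# Inside the HTTPS server block only; never send HSTS over plain HTTP.
add_header Strict-Transport-Security "max-age=31536000";
```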
</section>
<section xml:id="ch025_web-dashboard-idp245456">
<title>Front end Caching</title>
<para>Since dashboard is rendering dynamic content passed directly
from OpenStack API requests, we do not recommend front end
caching layers such as varnish. In Django, static media is
directly served from Apache or nginx and already benefits from
web host caching.</para>
</section>
<section xml:id="ch025_web-dashboard-idp246880">
<title>Domain Names</title>
<para>Many organizations typically deploy web applications at
subdomains of an overarching organization domain. It is natural
for users to expect a domain of the form
<uri>openstack.example.org</uri>. In this context, there are
often many other applications deployed in the same second-level
namespace, often serving user-controlled content. This name
structure is convenient and simplifies name server
maintenance.</para>
<para>We strongly recommend deploying horizon to a
<emphasis>second-level domain</emphasis>, such as
<uri>https://example.com</uri>, and advise against deploying
horizon on a <emphasis>shared subdomain</emphasis> of any level,
for example <uri>https://openstack.example.org</uri> or
<uri>https://horizon.openstack.example.org</uri>. We also
advise against deploying to bare internal domains like
<uri>https://horizon/</uri>.</para>
    <para>This recommendation is based on the limitations of the
      browser same-origin policy. The recommendations in this guide cannot
effectively protect users against known attacks if dashboard is
deployed on a domain which also hosts user-generated content,
such as scripts, images, or uploads of any kind, even if the
user-generated content is on a different subdomain. This
approach is used by most major web presences, such as
googleusercontent.com, fbcdn.com, github.io, and twimg.com, to
ensure that user generated content stays separate from cookies
and security tokens.</para>
<para>Additionally, if you decline to follow this recommendation
above about second-level domains, it is vital that you avoid the
cookie backed session store and employ HTTP Strict Transport
Security (HSTS). When deployed on a subdomain, dashboard's
security is only as strong as the weakest application deployed
on the same second-level domain.</para>
</section>
<section xml:id="ch025_web-dashboard-idp251760">
<title>Static Media</title>
<para>Dashboard's static media should be deployed to a subdomain
of the dashboard domain and served by the web server. The use of
an external content delivery network (CDN) is also acceptable.
This subdomain should not set cookies or serve user-provided
content. The media should also be served with HTTPS.</para>
<para>Django media settings are documented at <link
xlink:href="https://docs.djangoproject.com/en/1.5/ref/settings/#static-root"
>https://docs.djangoproject.com/en/1.5/ref/settings/#static-root</link>.</para>
<para>Dashboard's default configuration uses <link
xlink:href="http://django-compressor.readthedocs.org/"
>django_compressor</link> to compress and minify css and
JavaScript content before serving it. This process should be
statically done before deploying dashboard, rather than using
the default in-request dynamic compression and copying the
resulting files along with deployed code or to the CDN server.
Compression should be done in a non-production build
environment. If this is not practical, we recommend disabling
resource compression entirely. Online compression dependencies
(less, nodejs) should not be installed on production
machines.</para>
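A build-time compression setup might use django_compressor's offline mode, as in this settings sketch; the command noted in the comment is then run in the build environment rather than in production:

```python
# Illustrative django_compressor settings for build-time asset
# compression; after setting these, run "python manage.py compress"
# in the build environment so nothing is compressed per-request.
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
```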
</section>
<section xml:id="ch025_web-dashboard-idp255696">
<title>Secret Key</title>
<para>Dashboard depends on a shared SECRET_KEY setting for some
security functions. It should be a randomly generated string at
least 64 characters long. It must be shared across all active
Horizon instances. Compromise of this key may allow a remote
attacker to execute arbitrary code. Rotating this key
invalidates existing user sessions and caching. Do not commit
this key to public repositories.</para>
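One way to generate a suitable value is with the Python standard library's secrets module (Python 3.6 or later); the generated string must be kept out of version control:

```python
import secrets

# A randomly generated key for Horizon's SECRET_KEY setting.
# token_urlsafe(64) draws 64 random bytes, yielding roughly 86
# characters, comfortably above the 64-character minimum.
SECRET_KEY = secrets.token_urlsafe(64)
```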
</section>
<section xml:id="ch025_web-dashboard-idp257248">
<title>Session Backend</title>
<para>Horizon's default session backend
(<emphasis>django.contrib.sessions.backends.signed_cookies</emphasis>)
stores user data in <emphasis>signed</emphasis> but
<emphasis>unencrypted </emphasis>cookies stored in the
browser. This approach allows the most simple session backend
scaling since each Horizon instance is stateless, but it comes
at the cost of <emphasis>storing sensitive access tokens in the
client browser</emphasis> and transmitting them with every
request. This backend ensures that session data has not been
tampered with, but the data itself is not encrypted other than
the encryption provided by HTTPS.</para>
<para>If your architecture allows it, we recommend using
<emphasis>django.contrib.sessions.backends.cache</emphasis> as
your session backend with memcache as the cache. Memcache must
not be exposed publicly, and should communicate over a secured
private channel. If you choose to use the signed cookies
backend, refer to the Django documentation to understand the
security trade-offs.</para>
<para>For further details, consult the <link
xlink:href="https://docs.djangoproject.com/en/1.5/topics/http/sessions/#configuring-the-session-engine"
>Django session backend documentation</link>.</para>
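A memcache-backed session configuration might look like the following settings sketch; the memcached location is an assumption for illustration:

```python
# Illustrative Django settings for a memcache-backed session store;
# memcached must listen only on a secured private network.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
```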
</section>
<section xml:id="ch025_web-dashboard-idp262288">
<title>Allowed Hosts</title>
<para>Configure the ALLOWED_HOSTS setting with the domain or
domains where Horizon is available. Failure to configure this
setting (especially if not following the recommendation above
regarding second level domains) opens Horizon to a number of
serious attacks. Wild card domains should be avoided.</para>
<para>For further details, see the <link
xlink:href="https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts"
>Django documentation on settings</link>.</para>
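For example, with a hypothetical dashboard domain:

```python
# Illustrative setting; list only the real dashboard domain(s) and
# avoid wild card entries such as '*' or '.example.com'.
ALLOWED_HOSTS = ['dashboard.example.com']
```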
</section>
<section xml:id="ch025_web-dashboard-idp264272">
<title>Cookies</title>
<para>Session Cookies should be set to HTTPONLY:</para>
<screen> 
SESSION_COOKIE_HTTPONLY = True</screen>
<para>Never configure CSRF or session cookies to have a wildcard domain with a leading dot. Horizon's session and CSRF cookie should be secured when deployed with HTTPS:</para>
<screen> 
<para>Never configure CSRF or session cookies to have a wild card
domain with a leading dot. Horizon's session and CSRF cookie
should be secured when deployed with HTTPS:</para>
<screen> 
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True</screen>
</section>
<section xml:id="ch025_web-dashboard-idp266976">
<title>Password Auto Complete</title>
<para>We recommend that implementers do not change the default
password auto complete behavior. Users choose stronger passwords
in environments that allow them to use the secure browser
password manager. Organizations which forbid the browser
password manager should enforce this policy at the desktop
level.</para>
</section>
<section xml:id="ch025_web-dashboard-idp268448">
<title>Cross Site Request Forgery (CSRF)</title>
<para>Django has a dedicated middleware for <link
xlink:href="https://docs.djangoproject.com/en/1.5/ref/contrib/csrf/#how-it-works"
>cross-site request forgery</link> (CSRF).</para>
<para>Dashboard is designed to discourage developers from
introducing cross-site scripting vulnerabilities with custom
dashboards. However, it is important to audit custom dashboards,
especially ones that are JavaScript-heavy, for inappropriate use
of the @csrf_exempt decorator. Dashboards which do not follow
these recommended security settings should be carefully
evaluated before restrictions are relaxed.</para>
</section>
<section xml:id="ch025_web-dashboard-idp270608">
<title>Cross Site Scripting (XSS)</title>
<para>Unlike many similar systems, OpenStack dashboard allows the
entire Unicode character set in most fields. This means
developers have less latitude to make escaping mistakes that
open attack vectors for cross-site scripting (XSS).</para>
<para>Dashboard provides tools for developers to avoid creating
XSS vulnerabilities, but they only work if developers use them
correctly. Audit any custom dashboards, paying particular
attention to use of the mark_safe function, use of is_safe with
custom template tags, the safe template tag, anywhere auto escape
is turned off, and any JavaScript which might evaluate
improperly escaped data.</para>
</section>
<section xml:id="ch025_web-dashboard-idp272832">
<title>Cross Origin Resource Sharing (CORS)</title>
<para>Configure your web server to send a restrictive CORS header
with each response, allowing only the Horizon domain and
protocol:</para>
<screen>
Access-Control-Allow-Origin: https://example.com/</screen>
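The effect of this header can be made concrete in code. The following hypothetical WSGI middleware (not part of Horizon; shown only as an illustration, with the origin value taken from the example above) attaches the restrictive header to every response:

```python
# Hypothetical middleware sketch -- in production the CORS header is
# normally set in the web server configuration, as described above.
ALLOWED_ORIGIN = 'https://example.com/'  # example value; never '*'

class CorsHeaderMiddleware:
    def __init__(self, app, origin=ALLOWED_ORIGIN):
        self.app = app
        self.origin = origin

    def __call__(self, environ, start_response):
        # Wrap start_response so the restrictive CORS header is added
        # to every response the wrapped application produces.
        def add_header(status, headers, exc_info=None):
            headers.append(('Access-Control-Allow-Origin', self.origin))
            return start_response(status, headers, exc_info)
        return self.app(environ, add_header)
```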
<para>Never allow the wild card origin.</para>
</section>
<section xml:id="ch025_web-dashboard-idp275056">
<title>Horizon Image Upload</title>
<para>We recommend that implementers <link
xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html#file-uploads"
>disable HORIZON_IMAGES_ALLOW_UPLOAD</link> unless they have
implemented a plan to prevent resource exhaustion and denial of
service.</para>
</section>
<section xml:id="ch025_web-dashboard-idp276864">
<title>Upgrading</title>
<para>Django security releases are generally well tested and
aggressively backwards compatible. In almost all cases, new
major releases of Django are also fully backwards compatible
with previous releases. Dashboard implementers are strongly
encouraged to run the latest stable release of Django with
up-to-date security releases.</para>
</section>
<section xml:id="ch025_web-dashboard-idp278672">
<title>Debug</title>
<para>Make sure DEBUG is set to False in production. In Django,
DEBUG displays stack traces and sensitive web server state
information on any exception.</para>
</section>
</chapter>

View File

@@ -1,35 +1,51 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://docbook.org/ns/docbook"
xmlns:db="http://docbook.org/ns/docbook" version="5.0"
xml:id="ch027_storage">
<?dbhtml stop-chunking?>
<title>Object Storage</title>
<para>OpenStack Object Storage (Swift) is a service that provides
storage and retrieval of data over HTTP. Objects (blobs of
data) are stored in an organizational hierarchy that offers
anonymous read-only access or ACL defined access based on the
authentication mechanism.</para>
<para>A consumer can store objects, modify them, or access them
using the HTTP protocol and REST APIs. Backend components of
Object Storage use different protocols for keeping the
information synchronized in a redundant cluster of services.
For more details on the API and the backend components see the
<link
xlink:href="http://docs.openstack.org/api/openstack-object-storage/1.0/content/"
>OpenStack Storage documentation</link>.</para>
<para>For this document the components will be grouped into the
following primary groups:</para>
<orderedlist>
<listitem>
<para>Proxy services</para>
</listitem>
<listitem>
<para>Auth services</para>
</listitem>
<listitem>
<para>Storage services</para>
<itemizedlist>
<listitem>
<para>Account service</para>
</listitem>
<listitem>
<para>Container service</para>
</listitem>
<listitem>
<para>Object service</para>
</listitem>
</itemizedlist>
</listitem>
</orderedlist>
<figure>
<title>An example diagram from the OpenStack Object Storage
Administration Guide (2013)</title>
<mediaobject>
<imageobject>
<imagedata contentdepth="329" contentwidth="494"
@@ -39,94 +55,97 @@
</mediaobject>
</figure>
<note>
<para>An Object Storage environment does not have to
necessarily be on the Internet and could also be a private
cloud with the "Public Switch" being part of the
organization's internal network infrastructure.</para>
</note>
<section xml:id="ch027_storage-idpA">
<title>First thing to secure the network</title>
<para>The first aspect of a secure architecture design for
Object Storage is in the networking component. The Storage
service nodes use rsync between each other for copying
data to provide replication and high availability. In
addition, the proxy service communicates with the Storage
service when relaying data back and forth between the
end-point client and the cloud environment.</para>
<caution>
<para>None of these use any type of encryption or
authentication at this layer/tier.</para>
</caution>
<para>This is why you see a "Private Switch" or private
network ([V]LAN) in architecture diagrams. This data
domain should be separate from other OpenStack data
networks as well. For further discussion on security
domains please see <xref linkend="ch005_security-domains"
/>.</para>
<tip>
<para><emphasis>Rule:</emphasis> Use a private (V)LAN
network segment for your Storage services in the data
domain.</para>
</tip>
<para>This necessitates that the Proxy service nodes have dual
interfaces (physical or virtual):</para>
<orderedlist>
<listitem>
<para>One as a "public" interface for consumers to
reach</para>
</listitem>
<listitem>
<para>Another as a "private" interface with access to
the storage nodes</para>
</listitem>
</orderedlist>
<para>The following figure demonstrates one possible network
architecture.</para>
<figure>
<title>Object storage network architecture with a
management node (OSAM)</title>
<mediaobject>
<imageobject role="html">
<imagedata contentdepth="913" contentwidth="1264"
fileref="static/swift_network_diagram-2.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/swift_network_diagram-2.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</mediaobject>
</figure>
</section>
<!-- First thing to secure The Network -->
<section xml:id="ch027_storage-idpC1">
<title>Securing services general</title>
<section xml:id="ch027_storage-idpC12">
<title>Service runas user</title>
<para>It is recommended that you configure each service to
run under a non-root (UID 0) service account. One
recommendation is the username "swift" with primary
group "swift."</para>
</section>
<section xml:id="ch027_storage-idpC123">
<title>File permissions</title>
<para>/etc/swift contains information about the ring
topology and environment configuration. The following
permissions are recommended:</para>
<screen>
<prompt>#</prompt><userinput>chown -R root:swift /etc/swift/*</userinput>
<prompt>#</prompt><userinput>find /etc/swift/ -type f -exec chmod 640 {} \;</userinput>
<prompt>#</prompt><userinput>find /etc/swift/ -type d -exec chmod 750 {} \;</userinput>
</screen>
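The effect of those commands can be sketched programmatically. The following Python snippet applies the same 640/750 scheme to a scratch directory, so the recommendation can be exercised without touching a real <filename>/etc/swift</filename>; the helper name is illustrative:

```python
# Sketch: apply and verify the recommended permissions (files 640,
# directories 750) on a throwaway directory tree.
import os
import stat
import tempfile

def harden(path, file_mode=0o640, dir_mode=0o750):
    """chmod files to 640 and directories to 750, mirroring the
    find/chmod commands shown above."""
    os.chmod(path, dir_mode)
    for root, dirs, files in os.walk(path):
        for d in dirs:
            os.chmod(os.path.join(root, d), dir_mode)
        for f in files:
            os.chmod(os.path.join(root, f), file_mode)

scratch = tempfile.mkdtemp()
conf = os.path.join(scratch, 'proxy-server.conf')
open(conf, 'w').close()
harden(scratch)
```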
<para>This restricts only root to be able to modify
configuration files while allowing the services to
read them via their group membership in
"swift."</para>
</section>
</section>
<!-- Securing Services General -->
<section xml:id="ch027_storage-idpD1">
<title>Securing storage services</title>
<para>The following are the default listening ports for the
various storage services:</para>
<informaltable>
<thead>
<tr>
@@ -159,198 +178,206 @@
</tbody>
</informaltable>
<para>Authentication does not happen at this level in Object
Storage. If someone was able to connect to a Storage
service node on one of these ports they could access or
modify data without authentication. In order to secure
against this issue you should follow the recommendations
given previously about using a private storage
network.</para>
<section xml:id="ch027_storage-idpD12">
<title>Object storage "account" terminology</title>
<para>An Object Storage "Account" is not a user account or
credential. The following explains the
relations:</para>
<informaltable>
<tbody>
<tr>
<td>OpenStack Object Storage Account</td>
<td>Collection of containers; not user
accounts or authentication. Which users
are associated with the account and how
they may access it depends on the
authentication system used. See
authentication systems later. Referred to
in this document as OSSAccount.</td>
</tr>
<tr>
<td>OpenStack Object Storage Containers</td>
<td>Collection of objects. Metadata on the
container is available for ACLs. The
meaning of ACLs is dependent on the
authentication system used.</td>
</tr>
<tr>
<td>OpenStack Object Storage Objects</td>
<td>The actual data objects. ACLs at the
object level are also possible with
metadata. It is dependent on the
authentication system used.</td>
</tr>
</tbody>
</informaltable>
<tip>
<para>
<?dbhtml bgcolor="#DDFADE" ?><?dbfo bgcolor="#DDFADE" ?>
Another way of thinking about the above would be:
A single shelf (Account) holds zero or more ->
buckets (Containers) which each hold zero or more
-> objects. A garage (Object Storage cloud
environment) may have multiple shelves (Accounts)
with each shelf belonging to zero or more
users.</para>
</tip>
<para>At each level you may have ACLs that dictate who has
what type of access. ACLs are interpreted based on
what authentication system is in use. The two most
common types of authentication providers used are
Keystone and SWAuth. Custom authentication providers
are also possible. Please see the Object Storage
Authentication section for more information.</para>
</section>
</section>
<!-- Securing Storage Services -->
<section xml:id="ch027_storage-idpE1">
<title>Securing proxy services</title>
<para>A Proxy service node should have at least two interfaces
(physical or virtual): one public and one private. The
public interface may be protected via firewalls or service
binding. The public facing service is an HTTP web server
that processes end-point client requests, authenticates
them, and performs the appropriate action. The private
interface does not require any listening services but is
instead used to establish outgoing connections to storage
service nodes on the private storage network.</para>
<section xml:id="ch027_storage-idpE12">
<title>Use SSL/TLS</title>
<para>The built-in or included web server that comes with
Swift supports SSL, but it does not support
transmission of the entire SSL certificate chain. This
causes issues when you use a third party trusted and
signed certificate, such as Verisign, for your cloud. The
current work around is to not use the built-in web
server but an alternative web server instead that
supports sending both the public server certificate as
well as the CA signing authorities intermediate
certificate(s). This allows for end-point clients that
have the CA root certificate in their trust store to
be able to successfully validate your cloud
environment's SSL certificate and chain. An example of
how to do this with mod_wsgi and Apache is given
below. Also consult the <link
xlink:href="http://docs.openstack.org/developer/swift/apache_deployment_guide.html"
>Apache Deployment Guide</link></para>
<screen>sudo apt-get install libapache2-mod-wsgi</screen>
<para>Modify file
<filename>/etc/apache2/envvars</filename>
with</para>
<programlisting>
export APACHE_RUN_USER=swift
export APACHE_RUN_GROUP=swift
</programlisting>
<para>An alternative is to modify your Apache conf file
with</para>
<programlisting>
User swift
Group swift
</programlisting>
<para>Create a "swift" directory in your Apache document
root:</para>
<screen><prompt>#</prompt><userinput>sudo mkdir /var/www/swift/</userinput></screen>
<para>Create the file
<filename>$YOUR_APACHE_DOC_ROOT/swift/proxy-server.wsgi</filename>:</para>
<programlisting>from swift.common.wsgi import init_request_processor
application, conf, logger, log_name = \
init_request_processor('/etc/swift/proxy-server.conf','proxy-server')</programlisting>
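Tying these pieces together, a hypothetical Apache virtual host might look as follows. The paths, port, and host name are placeholders, not values from this guide; the key point is <literal>SSLCertificateChainFile</literal>, which makes Apache send the intermediate certificate(s) along with the server certificate, addressing the chain limitation described above:

```apache
# Illustrative virtual host for the Swift proxy behind mod_wsgi.
# All file paths, the ServerName, and the port are examples.
<VirtualHost *:8080>
    ServerName swift.cloud.example.org

    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/swift-proxy.crt
    SSLCertificateKeyFile   /etc/ssl/private/swift-proxy.key
    # Intermediate certificate(s) sent with the server certificate,
    # completing the chain for end-point clients.
    SSLCertificateChainFile /etc/ssl/certs/ca-intermediates.crt

    WSGIScriptAlias / /var/www/swift/proxy-server.wsgi
</VirtualHost>
```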
</section>
<section xml:id="ch027_storage-idpF1">
<title>HTTP listening port</title>
<para>You should run your Proxy service web server as a
non-root (no UID 0) user such as "swift" mentioned
before. The use of a port greater than 1024 is
required to make this easy and avoid running any part
of the web container as root. Doing so is not a burden
as end-point clients are not typically going to type
in the URL manually into a web browser to browse
around in the object storage. Additionally, for
clients using the HTTP REST API and performing
authentication they will normally automatically grab
the full REST API URL they are to use as provided by
the authentication response. OpenStack's REST API
allows for a client to authenticate to one URL and
then be told to use a completely different URL for the
actual service. Example: Client authenticates to
<uri>https://identity.cloud.example.org:55443/v1/auth</uri>
and gets a response with their authentication key and
Storage URL (the URL of the proxy nodes or load
balancer) of
<uri>https://swift.cloud.example.org:44443/v1/AUTH_8980</uri>.</para>
<para>The method for configuring your web server to start
and run as a non-root user varies by web server and
OS.</para>
</section>
<section xml:id="ch027_storage-idpG1">
<title>Load balancer</title>
<para>If the option of using Apache is not feasible or for
performance you wish to offload your SSL work you may
employ a dedicated network device load balancer. This
is also the common way to provide redundancy and load
balancing when using multiple proxy nodes.</para>
<para>If you choose to offload your SSL ensure that the
network link between the load balancer and your proxy
nodes is on a private (V)LAN segment such that other
nodes on the network (possibly compromised) cannot
wiretap (sniff) the unencrypted traffic. If such a
breach were to occur the attacker could gain access to
end-point client or cloud administrator credentials
and access the cloud data.</para>
<para>The authentication service you use, such as
Keystone or SWAuth, will determine how you configure a
different URL in the responses to end-clients so they
use your load balancer instead of an individual Proxy
service node.</para>
</section>
</section>
<!-- Securing Proxy Services -->
<section xml:id="ch027_storage-idpH1">
<title>Object storage authentication</title>
<para>Object Storage uses WSGI to provide a middleware for
authentication of end-point clients. The authentication
provider defines what roles and user types exist. Some use
traditional username and password credentials while others
may leverage API key tokens or even client-side x.509 SSL
certificates. Custom providers can be integrated using
the WSGI model.</para>
<section xml:id="ch027_storage-idpH12">
<title>Keystone</title>
<para>Keystone is the commonly used Identity provider in
OpenStack. It may also be used for authentication in
Object Storage. Coverage of securing Keystone is
already provided in <xref
linkend="ch024_authentication"/>.</para>
</section>
<section xml:id="ch027_storage-idpH123">
<title>SWAuth</title>
<para>SWAuth is another alternative to Keystone. In
contrast to Keystone it stores the user accounts,
credentials, and metadata in object storage itself.
More information can be found on the SWAuth website at
<link xlink:href="http://gholt.github.io/swauth/"
>http://gholt.github.io/swauth/</link>.</para>
</section>
</section>
<!-- Object Storage Authentication -->
<section xml:id="ch027_storage-idpI1">
<title>Other notable items</title>
<para>In /etc/swift/swift.conf on every service node there is
a "swift_hash_path_suffix" setting. This is provided to
reduce the chance of hash collisions for objects being
stored and avert one user overwriting the data of another
user.</para>
<para>This value should be initially set with a
cryptographically secure random number generator and
consistent across all service nodes. Ensure that it is
protected with proper ACLs and that you have a backup copy
to avoid data loss.</para>
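One way to produce such a value is sketched below. The use of <literal>os.urandom</literal> is an illustration of a cryptographically secure source, not a procedure mandated by this guide:

```python
# Sketch: generate a candidate swift_hash_path_suffix from a
# cryptographically secure random source (32 bytes -> 64 hex chars).
import binascii
import os

suffix = binascii.hexlify(os.urandom(32)).decode('ascii')
print('swift_hash_path_suffix = %s' % suffix)
```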
</section>
</chapter>

View File

@@ -1,7 +1,19 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://docbook.org/ns/docbook" xmlns:db="http://docbook.org/ns/docbook" version="5.0" xml:id="ch051_vss-intro"><?dbhtml stop-chunking?>
<title>Hypervisor Selection</title>
<para>Virtualization provides flexibility and other key benefits
that enable cloud building. However, a virtualization stack also
needs to be secured appropriately to reduce the risks associated
with hypervisor breakout attacks. That is, while a virtualization
stack can provide isolation between instances, or guest virtual
machines, there are situations where that isolation can be less
than perfect. Making intelligent selections for virtualization
stack as well as following the best practices outlined in this
chapter can be included in a layered approach to cloud security.
Finally, securing your virtualization stack is critical in order
to deliver on the promise of multi-tenancy, either between
customers in a public cloud, between business units in a private
cloud, or some mixture of the two in a hybrid cloud.</para>
<para>In this chapter, we discuss the hypervisor selection process. In the chapters that follow, we provide the foundational information needed for securing a virtualization stack.</para>
<section xml:id="ch051_vss-intro-idp236592">
<title>Hypervisors in OpenStack</title>
@@ -140,7 +152,14 @@
</tr>
<tr>
<td><para>TSF Protection</para></td>
<td><para>While in operation, the kernel software and data are protected by the hardware memory protection mechanisms. The memory and process management components of the kernel ensure a user process cannot access kernel storage or storage belonging to other processes.</para><para>Non-kernel TSF software and data are protected by DAC and
process isolation mechanisms. In the evaluated
configuration, the reserved user ID root owns the
directories and files that define the TSF
configuration. In general, files and directories
containing internal TSF data, such as configuration
files and batch job queues, are also protected from
reading by DAC permissions.</para><para>The system and the hardware and firmware components are required to be physically protected from unauthorized access. The system kernel mediates all access to the hardware mechanisms themselves, other than program visible CPU instruction functions.</para><para>In addition, mechanisms for protection against stack overflow attacks are provided.</para></td>
</tr>
</tbody>
</informaltable>


@@ -38,12 +38,15 @@
legacy devices that have their own set of quirks. Putting all of
this together, QEMU has been the source of many security
problems, including hypervisor breakout attacks.</para>
<para>For the reasons stated above, it is important to take proactive steps to harden QEMU. We recommend three specific steps: minimizing the codebase, using compiler hardening, and using mandatory access controls, for example: sVirt, SELinux, or AppArmor.</para>
<para>For the reasons stated above, it is important to take
proactive steps to harden QEMU. We recommend three specific
steps: minimizing the code base, using compiler hardening, and
using mandatory access controls, such as sVirt, SELinux, or
AppArmor.</para>
<section xml:id="ch052_devices-idp490976">
<title>Minimizing the Qemu Codebase</title>
<title>Minimizing the QEMU Code Base</title>
<para>One classic security principle is to remove any unused components from your system. QEMU provides support for many different virtual hardware devices. However, only a small number of devices are needed for a given instance. Most instances will use the virtio devices. However, some legacy instances will need access to specific hardware, which can be specified using glance metadata:</para>
<screen>
glance image-update \
<screen>glance image-update \
    --property hw_disk_bus=ide \
    --property hw_cdrom_bus=ide \
    --property hw_vif_model=e1000 \
@@ -90,10 +93,25 @@ CFLAGS="-arch x86_64 -fstack-protector-all -Wstack-protector --param ssp-buffer-
</section>
<section xml:id="ch052_devices-idp510512">
<title>sVirt: SELinux + Virtualization</title>
<para>With unique kernel-level architecture and National Security Agency (NSA) developed security mechanisms, KVM provides foundational isolation technologies for multitenancy. With developmental origins dating back to 2002, the Secure Virtualization (sVirt) technology is the application of SELinux against modern day virtualization. SELinux, which was designed to apply separation control based upon labels, has been extended to provide isolation between virtual machine processes, devices, data files and system processes acting upon their behalf.</para>
<para>With unique kernel-level architecture and National
Security Agency (NSA) developed security mechanisms, KVM
provides foundational isolation technologies for multi-tenancy.
With developmental origins dating back to 2002, the Secure
Virtualization (sVirt) technology is the application of SELinux
against modern day virtualization. SELinux, which was designed
to apply separation control based upon labels, has been extended
to provide isolation between virtual machine processes, devices,
data files, and system processes acting on their behalf.</para>
<para>OpenStack's sVirt implementation aspires to protect hypervisor hosts and virtual machines against two primary threat vectors:</para>
<itemizedlist><listitem>
<para><emphasis role="bold">Hypervisor threats</emphasis> A compromised application running within a virtual machine attacks the hypervisor to access underlying resources (e.g. the host OS, applications, or devices within the physical machine). This is a threat vector unique to virtualization and represents considerable risk as the underlying real machine can be compromised due to vulnerability in a single virtual application.</para>
<para><emphasis role="bold">Hypervisor threats</emphasis> A
compromised application running within a virtual machine
attacks the hypervisor to access underlying resources, for
example the host OS, applications, or devices within the
physical machine. This is a threat vector unique to
virtualization and represents considerable risk as the
underlying real machine can be compromised due to
vulnerability in a single virtual application.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Virtual Machine (multi-tenant)
@@ -126,9 +144,8 @@ system_u:object_r:svirt_image_t:SystemLow image2
system_u:object_r:svirt_image_t:SystemLow image3
system_u:object_r:svirt_image_t:SystemLow image4</screen>
<para>The svirt_image_t label uniquely identifies image files on disk, allowing the SELinux policy to restrict access. When a KVM-based Compute image is powered on, sVirt appends a random numerical identifier to the image. sVirt is technically capable of assigning numerical identifiers to 524,288 virtual machines per hypervisor node; however, OpenStack deployments are highly unlikely to encounter this limitation.</para>
<para>An example of the sVirt category identifier is shown below:</para>
<screen>
system_u:object_r:svirt_image_t:s0:c87,c520 image1
<para>This example shows the sVirt category identifier:</para>
<screen>system_u:object_r:svirt_image_t:s0:c87,c520 image1
system_u:object_r:svirt_image_t:s0:c419,c172 image2</screen>
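The enforcement hinges on that trailing MCS category pair, so it can be useful to pull it out when comparing labels across instances. A minimal shell sketch, using an illustrative label value rather than output from a real node:

```shell
# Extract the MCS category pair from an sVirt image label. The label value
# below is illustrative; on a real node it would come from `ls -Z` output.
label="system_u:object_r:svirt_image_t:s0:c87,c520"

# Strip everything up to and including the sensitivity level ":s0:".
categories=${label##*:s0:}

echo "$categories"   # c87,c520
```

Two images whose labels share the same category pair would not be isolated from each other by the sVirt policy.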
</section>
<section xml:id="ch052_devices-idp527632">


@@ -1,91 +1,285 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://docbook.org/ns/docbook" xmlns:db="http://docbook.org/ns/docbook" version="5.0" xml:id="ch064_certifications-compliance-statements"><?dbhtml stop-chunking?>
<title>Certification &amp; Compliance Statements</title>
<para>Compliance and security are not exclusive, and must be addressed together. OpenStack deployments are unlikely to satisfy compliance requirements without security hardening. The listing below provides an OpenStack architect foundational knowledge and guidance to achieve compliance against commercial and government certifications and standards.</para>
<section xml:id="ch064_certifications-compliance-statements-idp44896">
<title>Commercial Standards</title>
<para>For commercial deployments of OpenStack, it is recommended that SOC 1/2 combined with ISO 2700 1/2 be considered as a starting point for OpenStack certification activities. The required security activities mandated by these certifications facilitate a foundation of security best practices and common control criteria that can assist in achieving more stringent compliance activities, including government attestations and certifications.</para>
<para>After completing these initial certifications, the remaining certifications are more deployment specific. For example, clouds processing credit card transactions will need PCI-DSS, clouds storing health care information require HIPAA, and clouds within the federal government may require FedRAMP/FISMA, and ITAR, certifications. </para>
<section xml:id="ch064_certifications-compliance-statements-idp47472">
<title>SOC 1 (SSAE 16) / ISAE 3402</title>
<para>Service Organization Controls (SOC) criteria are defined
by the <link xlink:href="http://www.aicpa.org/">American Institute of Certified Public Accountants</link> (AICPA). SOC controls assess relevant financial statements and assertions of a service provider, such as compliance with the Sarbanes-Oxley Act. SOC 1 is a replacement for Statement on Auditing Standards No. 70 (SAS 70) Type II report. These controls commonly include physical data centers in scope.</para>
<para>There are two types of SOC 1 reports:</para>
<itemizedlist><listitem>
<para>Type 1 report on the fairness of the presentation of managements description of the service organizations system and the suitability of the design of the controls to achieve the related control objectives included in the description as of a specified date.</para>
</listitem>
<listitem>
<para>Type 2 report on the fairness of the presentation of managements description of the service organizations system and the suitability of the design and operating effectiveness of the controls to achieve the related control objectives included in the description throughout a specified period</para>
</listitem>
</itemizedlist>
<para>For more details see the <link xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC1Report.aspx">AICPA Report on Controls at a Service Organization Relevant to User Entities Internal Control over Financial Reporting</link>.</para>
</section>
<section xml:id="ch064_certifications-compliance-statements-idp53632">
<title>SOC 2</title>
<para>Service Organization Controls (SOC) 2 is a self attestation of controls that affect the security, availability, and processing integrity of the systems a service organization uses to process users' data and the confidentiality and privacy of information processed by these system. Examples of users are those responsible for governance of the service organization; customers of the service organization; regulators; business partners; suppliers and others who have an understanding of the service organization and its controls.</para>
<para>There are two types of SOC 2 reports:</para>
<itemizedlist><listitem>
<para>Type 1 report on the fairness of the presentation of managements description of the service organizations system and the suitability of the design of the controls to achieve the related control objectives included in the description as of a specified date.</para>
</listitem>
<listitem>
<para>Type 2 report on the fairness of the presentation of managements description of the service organizations system and the suitability of the design and operating effectiveness of the controls to achieve the related control objectives included in the description throughout a specified period.</para>
</listitem>
</itemizedlist>
<para>For more details see the <link xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC2Report.aspx">AICPA Report on Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality or Privacy</link>.</para>
</section>
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://docbook.org/ns/docbook"
xmlns:db="http://docbook.org/ns/docbook" version="5.0"
xml:id="ch064_certifications-compliance-statements">
<?dbhtml stop-chunking?>
<title>Certification &amp; Compliance Statements</title>
<para>Compliance and security are not exclusive, and must be
addressed together. OpenStack deployments are unlikely to satisfy
compliance requirements without security hardening. The listing
below provides an OpenStack architect with foundational
knowledge and guidance to achieve compliance against commercial
and government certifications and standards.</para>
<section
xml:id="ch064_certifications-compliance-statements-idp44896">
<title>Commercial Standards</title>
<para>For commercial deployments of OpenStack, it is recommended
that SOC 1/2 combined with ISO 27001/2 be considered as a
starting point for OpenStack certification activities. The
required security activities mandated by these certifications
facilitate a foundation of security best practices and common
control criteria that can assist in achieving more stringent
compliance activities, including government attestations and
certifications.</para>
<para>After completing these initial certifications, the remaining
certifications are more deployment specific. For example, clouds
processing credit card transactions will need PCI-DSS, clouds
storing health care information require HIPAA, and clouds within
the federal government may require FedRAMP/FISMA and ITAR
certifications.</para>
<section
xml:id="ch064_certifications-compliance-statements-idp47472">
<title>SOC 1 (SSAE 16) / ISAE 3402</title>
<para>Service Organization Controls (SOC) criteria are defined
by the <link xlink:href="http://www.aicpa.org/">American
Institute of Certified Public Accountants</link> (AICPA).
SOC controls assess relevant financial statements and
assertions of a service provider, such as compliance with the
Sarbanes-Oxley Act. SOC 1 is a replacement for Statement on
Auditing Standards No. 70 (SAS 70) Type II report. These
controls commonly include physical data centers in
scope.</para>
<para>There are two types of SOC 1 reports:</para>
<itemizedlist>
<listitem>
<para>Type 1 report on the fairness of the presentation of
management's description of the service organization's
system and the suitability of the design of the controls
to achieve the related control objectives included in the
description as of a specified date.</para>
</listitem>
<listitem>
<para>Type 2 report on the fairness of the presentation of
management's description of the service organization's
system and the suitability of the design and operating
effectiveness of the controls to achieve the related
control objectives included in the description throughout
a specified period.</para>
</listitem>
</itemizedlist>
<para>For more details see the <link
xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC1Report.aspx"
>AICPA Report on Controls at a Service Organization Relevant
to User Entities' Internal Control over Financial
Reporting</link>.</para>
</section>
<section xml:id="ch064_certifications-compliance-statements-idp60416">
<title>SOC 3</title>
<para>Service Organization Controls (SOC) 3 is a trust services report for service organizations. These reports are designed to meet the needs of users who want assurance on the controls at a service organization related to security, availability, processing integrity, confidentiality, or privacy but do not have the need for or the knowledge necessary to make effective use of a SOC 2 Report. These reports are prepared using the AICPA/Canadian Institute of Chartered Accountants (CICA) Trust Services Principles, Criteria, and Illustrations for Security, Availability, Processing Integrity, Confidentiality, and Privacy. Because they are general use reports, SOC 3 Reports can be freely distributed or posted on a website as a seal.</para>
<para>For more details see the <link xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC3Report.aspx">AICPA Trust Services Report for Service Organizations</link>.</para>
<section
xml:id="ch064_certifications-compliance-statements-idp53632">
<title>SOC 2</title>
<para>Service Organization Controls (SOC) 2 is a
self-attestation of controls that affect the security,
availability, and processing integrity of the systems a
service organization uses to process users' data and the
confidentiality and privacy of information processed by these
systems. Examples of users are those responsible for governance
of the service organization; customers of the service
organization; regulators; business partners; suppliers and
others who have an understanding of the service organization
and its controls.</para>
<para>There are two types of SOC 2 reports:</para>
<itemizedlist>
<listitem>
<para>Type 1 report on the fairness of the presentation of
management's description of the service organization's
system and the suitability of the design of the controls
to achieve the related control objectives included in the
description as of a specified date.</para>
</listitem>
<listitem>
<para>Type 2 report on the fairness of the presentation of
management's description of the service organization's
system and the suitability of the design and operating
effectiveness of the controls to achieve the related
control objectives included in the description throughout
a specified period.</para>
</listitem>
</itemizedlist>
<para>For more details see the <link
xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC2Report.aspx"
>AICPA Report on Controls at a Service Organization Relevant
to Security, Availability, Processing Integrity,
Confidentiality or Privacy</link>.</para>
</section>
<section xml:id="ch064_certifications-compliance-statements-idp62832">
<title>ISO 27001/2</title>
<para>The ISO/IEC 27001/2 standards replace BS7799-2, and are specifications for an Information Security Management System (ISMS). An ISMS is a comprehensive set of policies and processes that an organization creates and maintains to manage risk to information assets.  These risks are based upon the confidentiality, integrity, and availability (CIA) of user information. The CIA security triad has been used as a foundation for much of the chapters in this book.</para>
<para>For more details see <link xlink:href="http://www.27000.org/iso-27001.htm">ISO 27001</link>.</para>
</section>
<section
xml:id="ch064_certifications-compliance-statements-idp60416">
<title>SOC 3</title>
<para>Service Organization Controls (SOC) 3 is a trust services
report for service organizations. These reports are designed to
meet the needs of users who want assurance on the controls at a
service organization related to security, availability,
processing integrity, confidentiality, or privacy but do not
have the need for or the knowledge necessary to make effective
use of a SOC 2 Report. These reports are prepared using the
AICPA/Canadian Institute of Chartered Accountants (CICA) Trust
Services Principles, Criteria, and Illustrations for Security,
Availability, Processing Integrity, Confidentiality, and
Privacy. Because they are general use reports, SOC 3 Reports can
be freely distributed or posted on a website as a seal.</para>
<para>For more details see the <link
xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC3Report.aspx"
>AICPA Trust Services Report for Service
Organizations</link>.</para>
</section>
<section
xml:id="ch064_certifications-compliance-statements-idp62832">
<title>ISO 27001/2</title>
<para>The ISO/IEC 27001/2 standards replace BS7799-2, and are
specifications for an Information Security Management System
(ISMS). An ISMS is a comprehensive set of policies and processes
that an organization creates and maintains to manage risk to
information assets. These risks are based upon the
confidentiality, integrity, and availability (CIA) of user
information. The CIA security triad has been used as a
foundation for many of the chapters in this book.</para>
<para>For more details see <link
xlink:href="http://www.27000.org/iso-27001.htm">ISO
27001</link>.</para>
</section>
<section
xml:id="ch064_certifications-compliance-statements-idp65296">
<title>HIPAA / HITECH</title>
<para>The Health Insurance Portability and Accountability Act
(HIPAA) is a United States congressional act that governs the
collection, storage, use and destruction of patient health
records. The act states that Protected Health Information (PHI)
must be rendered "unusable, unreadable, or indecipherable" to
unauthorized persons and that encryption for data 'at-rest' and
'in-flight' should be addressed.</para>
<para>HIPAA is not a certification, but rather a guide for
protecting health care data. As with PCI-DSS, the most
important concern for both PCI and HIPAA compliance is ensuring
that a breach of credit card information or health data does
not occur. In the instance of a breach, the cloud provider will
be scrutinized for compliance with PCI and HIPAA controls. If
proven compliant, the provider can be expected to immediately
implement remedial controls, breach notification
responsibilities, and significant expenditure on additional
compliance activities. If not compliant, the cloud provider can
expect on-site audit teams, fines, potential loss of merchant
ID (PCI), and massive reputational impact.</para>
<para>Users or organizations that possess PHI must support HIPAA
requirements and are HIPAA covered entities. If an entity
intends to use a service, or in this case, an OpenStack cloud
that might use, store or have access to that PHI, then a
Business Associate Agreement must be signed. The BAA is a
contract between the HIPAA covered entity and the OpenStack
service provider that requires the provider to handle that PHI
in accordance with HIPAA requirements. If the service provider
does not handle the PHI appropriately, for example through
security controls and hardening, then it is subject to HIPAA
fines and penalties.</para>
<para>OpenStack architects interpret and respond to HIPAA
statements, with data encryption remaining a core practice.
Currently this would require any protected health information
contained within an OpenStack deployment to be encrypted with
industry standard encryption algorithms. Potential future
OpenStack projects such as object encryption will facilitate
HIPAA guidelines for compliance with the act.</para>
<para>For more details see the <link
xlink:href="https://www.cms.gov/Regulations-and-Guidance/HIPAA-Administrative-Simplification/HIPAAGenInfo/downloads/HIPAALaw.pdf"
>Health Insurance Portability And Accountability
Act</link>.</para>
<section
xml:id="ch064_certifications-compliance-statements-idp4736">
<title>PCI-DSS</title>
<para>The Payment Card Industry Data Security Standard (PCI DSS)
is defined by the Payment Card Industry Standards Council, and
created to increase controls around card holder data to reduce
credit card fraud. Annual compliance validation is assessed by
an external Qualified Security Assessor (QSA) who creates a
Report on Compliance (ROC), or by a Self-Assessment
Questionnaire (SAQ), dependent on the volume of card-holder
transactions.</para>
<para>OpenStack deployments that store, process, or transmit
payment card details are in scope for the PCI-DSS. All
OpenStack components that are not properly segmented from
systems or networks that handle payment data fall under the
guidelines of the PCI-DSS. Segmentation in the context of
PCI-DSS means physical separation (host/network) rather than
multi-tenancy.</para>
<para>For more details see <link
xlink:href="https://www.pcisecuritystandards.org/security_standards/"
>PCI security standards</link>.</para>
</section>
<section xml:id="ch064_certifications-compliance-statements-idp65296">
<title>HIPAA / HITECH</title>
<para>The Health Insurance Portability and Accountability Act (HIPAA) is a United States congressional act that governs the collection, storage, use and destruction of patient health records. The act states that Protected Health Information (PHI) must be rendered "unusable, unreadable, or indecipherable" to unauthorized persons and that encryption for data 'at-rest' and 'inflight' should be addressed.</para>
<para>HIPAA is not a certification, rather a guide for protecting healthcare data.  Similar to the PCI-DSS, the most important issues with both PCI and HIPPA is that a breach of credit card information, and health data, do not occur. In the instance of a breach the cloud provider will be scrutinized for compliance with PCI and HIPPA controls. If proven compliant, the provider can be expected to immediately implement remedial controls, breach notification responsibilities, and significant expenditure on additional compliance activities.  If not compliant, the cloud provider can expect on-site audit teams, fines, potential loss of merchant ID (PCI), and massive reputational impact.</para>
<para>Users or organizations that possess PHI must support HIPAA requirements and are HIPAA covered entities. If an entity intends to use a service, or in this case, an OpenStack cloud that might use, store or have access to that PHI, then a Business Associate Agreement must be signed. The BAA is a contract between the HIPAA covered entity and the OpenStack service provider that requires the provider to handle that PHI in accordance with HIPAA requirements. If the service provider does not handle the PHI (e.g. with security controls and hardening) then they are subject to HIPAA fines and penalties.</para>
<para>OpenStack architects interpret and respond to HIPAA statements, with data encryption remaining a core practice. Currently this would require any protected health information contained within an OpenStack deployment to be encrypted with industry standard encryption algorithms. Potential future OpenStack projects such as object encryption will facilitate HIPAA guidelines for compliance with the act.</para>
<para>For more details see the <link xlink:href="https://www.cms.gov/Regulations-and-Guidance/HIPAA-Administrative-Simplification/HIPAAGenInfo/downloads/HIPAALaw.pdf">Health Insurance Portability And Accountability Act</link>.</para>
<section xml:id="ch064_certifications-compliance-statements-idp4736">
<title>PCI-DSS</title>
<para>The Payment Card Industry Data Security Standard (PCI DSS) is defined by the Payment Card Industry Standards Council, and created to increase controls around card holder data to reduce credit card fraud. Annual compliance validation is assessed by an external Qualified Security Assessor (QSA) who creates a Report on Compliance (ROC), or by a Self-Assessment Questionnaire (SAQ) dependent on volume of card-holder transactions.  </para>
<para>OpenStack deployments which stores, processes, or transmits payment card details are in scope for the PCI-DSS. All OpenStack components that are not properly segmented from systems or networks that handle payment data fall under the guidelines of the PCI-DSS. Segmentation in the context of PCI-DSS does not support multi-tenancy, but rather physical separation (host/network). </para>
<para>For more details see <link xlink:href="https://www.pcisecuritystandards.org/security_standards/">PCI security standards</link>.</para>
</section>
</section>
<section xml:id="ch064_certifications-compliance-statements-idp8448">
<title>Government Standards</title>
<section
xml:id="ch064_certifications-compliance-statements-idp9088">
<title>FedRAMP</title>
<para>"The <link xlink:href="http://www.fedramp.gov">Federal
Risk and Authorization Management Program</link> (FedRAMP)
is a government-wide program that provides a standardized
approach to security assessment, authorization, and continuous
monitoring for cloud products and services". NIST 800-53 is
the basis for both FISMA and FedRAMP which mandates security
controls specifically selected to provide protection in cloud
environments. FedRAMP can be extremely intensive, given the
specificity of its security controls and the volume of
documentation required to meet government standards.</para>
<para>For more details see <link
xlink:href="http://www.gsa.gov/portal/category/102371"
>http://www.gsa.gov/portal/category/102371</link>.</para>
</section>
<section xml:id="ch064_certifications-compliance-statements-idp8448">
<title>Government Standards</title>
<section xml:id="ch064_certifications-compliance-statements-idp9088">
<title>FedRAMP</title>
<para>"The <link xlink:href="http://www.fedramp.gov">Federal Risk and Authorization Management Program</link> (FedRAMP) is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services". NIST 800-53 is the basis for both FISMA and FedRAMP which mandates security controls specifically selected to provide protection in cloud environments. FedRAMP can be extremely intensive from specificity around security controls, and the volume of documentation required to meet government standards.</para>
<para>For more details see <link xlink:href="http://www.gsa.gov/portal/category/102371">http://www.gsa.gov/portal/category/102371</link>.</para>
</section>
<section xml:id="ch064_certifications-compliance-statements-idp10768">
<title>ITAR</title>
<para>The International Traffic in Arms Regulations (ITAR) is a set of
United States government regulations that control the export and import of defense-related articles and services on the United States Munitions List (USML) and related technical data. ITAR is often approached by cloud providers as an "operational alignment" rather than a formal certification. This typically involves implementing a segregated cloud environment following practices based on the NIST 800-53 framework, as per FISMA requirements, complemented with additional controls restricting access to "U.S. Persons" only and background screening.</para>
<para>For more details see <link xlink:href="http://pmddtc.state.gov/regulations_laws/itar_official.html">http://pmddtc.state.gov/regulations_laws/itar_official.html</link>.</para>
</section>
<section xml:id="ch064_certifications-compliance-statements-idp89888">
<title>FISMA</title>
<para>The Federal Information Security Management Act requires that government agencies create a comprehensive plan to implement numerous government security standards, and was enacted within the E-Government Act of 2002. FISMA outlines a process, which utilizing multiple NIST publications, prepares an information system to store and process government data.</para>
<para>This process is broken apart into three primary categories:</para>
<itemizedlist><listitem>
<para><emphasis role="bold">System Categorization</emphasis>The information system will receive a security category as defined in Federal Information Processing Standards Publication 199 (FIPS 199). These categories reflect the potential impact of system compromise.</para>
</listitem>
</itemizedlist>
<itemizedlist><listitem>
<para><emphasis role="bold">Control Selection</emphasis>Based upon system security category as defined in FIPS 199, an organization utilizes FIPS 200 to identify specific security control requirements for the information system. For example, if a system is categorized as “moderate” a requirement may be introduced to mandate “secure passwords.”</para>
</listitem>
<listitem>
<para><emphasis role="bold">Control Tailoring</emphasis>Once system security controls are identified, an OpenStack architect will utilize NIST 800-53 to extract tailored control selection, e.g. specification of what constitutes a “secure password.”</para>
</listitem>
</itemizedlist>
</section>
<section
xml:id="ch064_certifications-compliance-statements-idp10768">
<title>ITAR</title>
<para>The International Traffic in Arms Regulations (ITAR) is a
set of United States government regulations that control the
export and import of defense-related articles and services on
the United States Munitions List (USML) and related technical
data. ITAR is often approached by cloud providers as an
"operational alignment" rather than a formal certification.
This typically involves implementing a segregated cloud
environment following practices based on the NIST 800-53
framework, as per FISMA requirements, complemented with
additional controls restricting access to "U.S. Persons" only
and background screening.</para>
<para>For more details see <link
xlink:href="http://pmddtc.state.gov/regulations_laws/itar_official.html"
>http://pmddtc.state.gov/regulations_laws/itar_official.html</link>.</para>
</section>
</chapter>
<section
xml:id="ch064_certifications-compliance-statements-idp89888">
<title>FISMA</title>
<para>The Federal Information Security Management Act, enacted
within the E-Government Act of 2002, requires that government
agencies create a comprehensive plan to implement numerous
government security standards. FISMA outlines a process that,
utilizing multiple NIST publications, prepares an information
system to store and process government data.</para>
<para>This process is broken apart into three primary
categories:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">System
Categorization</emphasis>: The information system will
receive a security category as defined in Federal
Information Processing Standards Publication 199 (FIPS
199). These categories reflect the potential impact of
system compromise.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Control
Selection</emphasis>: Based upon the system security category as
defined in FIPS 199, an organization utilizes FIPS 200 to
identify specific security control requirements for the
information system. For example, if a system is
categorized as “moderate”, a requirement may be introduced
to mandate “secure passwords.”</para>
</listitem>
<listitem>
<para><emphasis role="bold">Control Tailoring:</emphasis> Once
system security controls are identified, an OpenStack
architect will utilize NIST 800-53 to extract a tailored
control selection; for example, a specification of what
constitutes a “secure password.”</para>
</listitem>
</itemizedlist>
</section>
</section>
</chapter>

View File

@@ -6,7 +6,7 @@
<title>Alice's Private Cloud</title>
<para>Alice is building an OpenStack private cloud for the United States government, specifically to provide elastic compute environments for signal processing. Alice has researched government compliance requirements, and has identified that her private cloud will be required to certify against FISMA and follow the FedRAMP accreditation process, which is required for all federal agencies, departments and contractors to become a Certified Cloud Provider (CCP). In this particular scenario for signal processing, the FISMA controls required will most likely be FISMA High, which indicates possible "severe or catastrophic adverse effects" should the information system become compromised. In addition to these FISMA controls, Alice must ensure her private cloud is FedRAMP certified, as this is a requirement for all agencies that currently utilize or host federal information within a cloud environment.</para>
<para>To meet these strict government regulations Alice undertakes a number of activities. Scoping of requirements is particularly important due to the volume of controls that must be implemented, which will be defined in NIST Publication 800-53.</para>
<para>All technology within her private cloud must be FIPS certified technology, as mandated within NIST 800-53 and FedRAMP. As the U.S. Department of Defense is involved, Security Technical Implementation Guides (STIGs) will come into play, which are the configuration standards for DOD IA and IA-enabled devices / systems. Alice notices a number of complications here as there is no STIG for OpenStack, so she must address several underlying requirements for each OpenStack service, e.g. the networking SRG and Application SRG will both be applicable (<link xlink:href="http://iase.disa.mil/srgs/index.html">list of SRGs</link>). Other critical controls include ensuring that all identities in the cloud use PKI, that SELinux is enabled, that encryption exists for all wire-level communications, and that continuous monitoring is in place and clearly documented. Alice is not concerned with object encryption, as this will be the tenants responsibility rather than the provider.</para>
<para>All technology within her private cloud must be FIPS certified technology, as mandated within NIST 800-53 and FedRAMP. As the U.S. Department of Defense is involved, Security Technical Implementation Guides (STIGs) will come into play, which are the configuration standards for DOD IA and IA-enabled devices / systems. Alice notices a number of complications here as there is no STIG for OpenStack, so she must address several underlying requirements for each OpenStack service; for example, the networking SRG and Application SRG will both be applicable (<link xlink:href="http://iase.disa.mil/srgs/index.html">list of SRGs</link>). Other critical controls include ensuring that all identities in the cloud use PKI, that SELinux is enabled, that encryption exists for all wire-level communications, and that continuous monitoring is in place and clearly documented. Alice is not concerned with object encryption, as this will be the tenant's responsibility rather than the provider's.</para>
<para>If Alice has adequately scoped and executed these compliance activities, she may begin the process to become FedRAMP compliant by hiring an approved third-party auditor. Typically this process takes up to 6 months, after which she will receive an Authority to Operate and can offer OpenStack cloud services to the government.</para>
</section>
<section xml:id="ch066_case-studies-compliance-idp49712">

View File

@@ -1,5 +1,8 @@
<?xml version="1.0" encoding="utf-8"?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="module001-ch008-queues-messaging">
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="module001-ch008-queues-messaging">
<title>OpenStack Messaging and Queues</title>
<figure>
<title>Messaging in OpenStack</title>
@@ -20,7 +23,7 @@
<itemizedlist>
<listitem>
<para>Decoupling between client and servant (that is, the client
does not need to know where the servants reference
does not need to know where the servant reference
is).</para>
</listitem>
<listitem>
@@ -48,55 +51,53 @@
<para>Nova implements RPC (both request+response, and one-way,
respectively nicknamed rpc.call and rpc.cast) over AMQP by
providing an adapter class which takes care of marshaling and
unmarshaling of messages into function calls. Each Nova service
(for example Compute, Scheduler, etc.) create two queues at the
un-marshalling of messages into function calls. Each Nova service,
such as Compute, Scheduler, and so on, creates two queues at the
initialization time, one which accepts messages with routing keys
NODE-TYPE.NODE-ID (for example compute.hostname) and another,
which accepts messages with routing keys as generic NODE-TYPE
(for example compute). The former is used specifically when
NODE-TYPE.NODE-ID, for example, compute.hostname, and another,
which accepts messages with routing keys as generic NODE-TYPE, for example compute. The former is used specifically when
Nova-API needs to redirect commands to a specific node like
euca-terminate instance. In this case, only the compute node
whose host's hypervisor is running the virtual machine can kill
the instance. The API acts as a consumer when RPC calls are
request/response; otherwise it acts as publisher only.</para>
<para><guilabel>Nova RPC Mappings</guilabel></para>
<para>The figure below shows the internals of a message broker
node (referred to as a RabbitMQ node in the diagrams) when a
single instance is deployed and shared in an OpenStack cloud.
Every Nova component connects to the message broker and,
depending on its personality (for example a compute node or a
network node), may use the queue either as an Invoker (such as
API or Scheduler) or a Worker (such as Compute or Network).
Invokers and Workers do not actually exist in the Nova object
model, but we are going to use them as an abstraction for the sake
of clarity. An Invoker is a component that sends messages in the
queuing system via two operations: (i) rpc.call and (ii) rpc.cast;
a Worker is a component that receives messages from the queuing
system and replies accordingly to rpc.call operations.</para>
<para>Figure 2 shows the following internal elements:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Topic Publisher:</emphasis>a Topic
<para><guilabel>Nova RPC Mappings</guilabel></para>
<para>The figure below shows the internals of a message broker node
(referred to as a RabbitMQ node in the diagrams) when a single
instance is deployed and shared in an OpenStack cloud. Every Nova
component connects to the message broker and, depending on its
personality, such as a compute node or a network node, may
use the queue either as an Invoker (such as API or Scheduler) or a
Worker (such as Compute or Network). Invokers and Workers do not
actually exist in the Nova object model, but we are going to use
them as an abstraction for the sake of clarity. An Invoker is a
component that sends messages in the queuing system via two
operations: (i) rpc.call and (ii) rpc.cast; a Worker is a component
that receives messages from the queuing system and replies
accordingly to rpc.call operations.</para>
<para>Figure 2 shows the following internal elements:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Topic Publisher:</emphasis>a Topic
Publisher comes to life when an rpc.call or an rpc.cast
operation is executed; this object is instantiated and used to
push a message to the queuing system. Every publisher connects
always to the same topic-based exchange; its life-cycle is
limited to the message delivery.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Direct Consumer:</emphasis>a
Direct Consumer comes to life if (and only if) an rpc.call
operation is executed; this object is instantiated and used to
receive a response message from the queuing system; Every
consumer connects to a unique direct-based exchange via a
unique exclusive queue; its life-cycle is limited to the
message delivery; the exchange and queue identifiers are
determined by a UUID generator, and are marshaled in the
message sent by the Topic Publisher (only rpc.call
operations).</para>
</listitem>
<listitem>
<para><emphasis role="bold">Topic Consumer:</emphasis>a Topic
</listitem>
<listitem>
<para><emphasis role="bold">Direct Consumer:</emphasis>a Direct
Consumer comes to life if (and only if) an rpc.call operation is
executed; this object is instantiated and used to receive a
response message from the queuing system; Every consumer
connects to a unique direct-based exchange via a unique
exclusive queue; its life-cycle is limited to the message
delivery; the exchange and queue identifiers are determined by
a UUID generator, and are marshaled in the message sent by the
Topic Publisher (only rpc.call operations).</para>
</listitem>
<listitem>
<para><emphasis role="bold">Topic Consumer:</emphasis>a Topic
Consumer comes to life as soon as a Worker is instantiated and
exists throughout its life-cycle; this object is used to
receive messages from the queue and it invokes the appropriate
@@ -108,39 +109,39 @@
key is topic) and the other that is addressed only during
rpc.call operations (and it connects to a unique queue whose
exchange key is topic.host).</para>
</listitem>
<listitem>
<para><emphasis role="bold">Direct Publisher:</emphasis>a
Direct Publisher comes to life only during rpc.call operations
and it is instantiated to return the message required by the
</listitem>
<listitem>
<para><emphasis role="bold">Direct Publisher:</emphasis>a Direct
Publisher comes to life only during rpc.call operations and it
is instantiated to return the message required by the
request/response operation. The object connects to a
direct-based exchange whose identity is dictated by the
incoming message.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Topic Exchange:</emphasis>The
</listitem>
<listitem>
<para><emphasis role="bold">Topic Exchange:</emphasis>The
Exchange is a routing table that exists in the context of a
virtual host (the multi-tenancy mechanism provided by Qpid or
RabbitMQ); its type (such as topic vs. direct) determines the
routing policy; a message broker node will have only one
topic-based exchange for every topic in Nova.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Direct Exchange:</emphasis>this is
a routing table that is created during rpc.call operations;
</listitem>
<listitem>
<para><emphasis role="bold">Direct Exchange:</emphasis>this is a
routing table that is created during rpc.call operations;
there are many instances of this kind of exchange throughout
the life-cycle of a message broker node, one for each rpc.call
invoked.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Queue Element:</emphasis>A Queue
is a message bucket. Messages are kept in the queue until a
</listitem>
<listitem>
<para><emphasis role="bold">Queue Element:</emphasis>A Queue is
a message bucket. Messages are kept in the queue until a
Consumer (either Topic or Direct Consumer) connects to the
queue and fetches them. Queues can be shared or exclusive.
Queues whose routing key is topic are shared amongst Workers
of the same personality.</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<figure>
<title>RabbitMQ</title>
<mediaobject>
@@ -149,33 +150,33 @@
</imageobject>
</mediaobject>
</figure>
<para><guilabel>RPC Calls</guilabel></para>
<para>The diagram below shows the message flow during an rpc.call
operation:</para>
<orderedlist>
<listitem>
<para>a Topic Publisher is instantiated to send the message
request to the queuing system; immediately before the
publishing operation, a Direct Consumer is instantiated to
wait for the response message.</para>
</listitem>
<listitem>
<para>once the message is dispatched by the exchange, it is
fetched by the Topic Consumer dictated by the routing key
(such as topic.host) and passed to the Worker in charge of
the task.</para>
</listitem>
<listitem>
<para>once the task is completed, a Direct Publisher is
allocated to send the response message to the queuing
system.</para>
</listitem>
<listitem>
<para>once the message is dispatched by the exchange, it is
fetched by the Direct Consumer dictated by the routing key
(such as msg_id) and passed to the Invoker.</para>
</listitem>
</orderedlist>
<para><guilabel>RPC Calls</guilabel></para>
<para>The diagram below shows the message flow during an rpc.call
operation:</para>
<orderedlist>
<listitem>
<para>a Topic Publisher is instantiated to send the message
request to the queuing system; immediately before the
publishing operation, a Direct Consumer is instantiated to
wait for the response message.</para>
</listitem>
<listitem>
<para>once the message is dispatched by the exchange, it is
fetched by the Topic Consumer dictated by the routing key
(such as topic.host) and passed to the Worker in charge of
the task.</para>
</listitem>
<listitem>
<para>once the task is completed, a Direct Publisher is
allocated to send the response message to the queuing
system.</para>
</listitem>
<listitem>
<para>once the message is dispatched by the exchange, it is
fetched by the Direct Consumer dictated by the routing key
(such as msg_id) and passed to the Invoker.</para>
</listitem>
</orderedlist>
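The four steps above can be sketched in a few lines of plain Python, with in-memory queues standing in for the broker's topic and direct exchanges. This is a simplified single-process model for illustration only; the names (`rpc_call`, `worker`) and the timeouts are assumptions, not Nova's actual API:

```python
import queue
import threading
import uuid

# Stand-ins for the broker: one shared topic queue per worker personality,
# and one exclusive direct queue per in-flight rpc.call (keyed by msg_id).
topic_queues = {'compute': queue.Queue()}
direct_queues = {}

def rpc_call(topic, payload):
    # Step 1: a Direct Consumer (the reply queue) is created first, then
    # the Topic Publisher pushes the request carrying its msg_id.
    msg_id = str(uuid.uuid4())
    reply_q = queue.Queue()
    direct_queues[msg_id] = reply_q
    topic_queues[topic].put({'msg_id': msg_id, 'payload': payload})
    # Step 4: wait for the Direct Publisher's response keyed by msg_id.
    result = reply_q.get(timeout=2)
    del direct_queues[msg_id]
    return result

def worker(topic, handler):
    # Step 2: the Topic Consumer fetches the request for the Worker.
    msg = topic_queues[topic].get(timeout=2)
    # Step 3: the Direct Publisher returns the result on the msg_id "exchange".
    direct_queues[msg['msg_id']].put(handler(msg['payload']))

t = threading.Thread(target=worker, args=('compute', lambda p: p * 2))
t.start()
print(rpc_call('compute', 21))  # 42
t.join()
```

The essential point the sketch captures is that the reply path (direct exchange plus exclusive queue) exists only for the lifetime of one call, while the request path (topic queue) is shared.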
<figure>
<title>RabbitMQ</title>
<mediaobject>
@@ -184,21 +185,21 @@
</imageobject>
</mediaobject>
</figure>
<para><guilabel>RPC Casts</guilabel></para>
<para>The diagram below shows the message flow during an rpc.cast
operation:</para>
<orderedlist>
<listitem>
<para>A Topic Publisher is instantiated to send the message
request to the queuing system.</para>
</listitem>
<listitem>
<para>Once the message is dispatched by the exchange, it is
fetched by the Topic Consumer dictated by the routing key
(such as topic) and passed to the Worker in charge of the
task.</para>
</listitem>
</orderedlist>
<para><guilabel>RPC Casts</guilabel></para>
<para>The diagram below shows the message flow during an rpc.cast
operation:</para>
<orderedlist>
<listitem>
<para>A Topic Publisher is instantiated to send the message
request to the queuing system.</para>
</listitem>
<listitem>
<para>Once the message is dispatched by the exchange, it is
fetched by the Topic Consumer dictated by the routing key
(such as topic) and passed to the Worker in charge of the
task.</para>
</listitem>
</orderedlist>
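A cast is the degenerate case of the call flow: publish and return, with no reply queue. A minimal sketch, again with an in-memory queue in place of the broker and illustrative names:

```python
import queue

topic_queue = queue.Queue()  # stands in for the shared topic-based queue

def rpc_cast(payload):
    # fire-and-forget: no msg_id, no direct exchange, no response
    topic_queue.put(payload)

def worker_poll(handler):
    # the Topic Consumer hands the message to the Worker; nothing is returned
    handler(topic_queue.get(timeout=2))

seen = []
rpc_cast('reboot instance-0001')
worker_poll(seen.append)
print(seen)  # ['reboot instance-0001']
```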
<figure>
<title>RabbitMQ</title>
<mediaobject>
@@ -207,152 +208,150 @@
</imageobject>
</mediaobject>
</figure>
<para><guilabel>AMQP Broker Load</guilabel></para>
<para>At any given time the load of a message broker node running
either Qpid or RabbitMQ is a function of the following
parameters:</para>
<itemizedlist>
<listitem>
<para>Throughput of API calls: the number of API calls (more
precisely rpc.call ops) being served by the OpenStack cloud
dictates the number of direct-based exchanges, related
queues and direct consumers connected to them.</para>
</listitem>
<listitem>
<para>Number of Workers: there is one queue shared amongst
workers with the same personality; however there are as many
exclusive queues as the number of workers; the number of
workers also dictates the number of routing keys within the
topic-based exchange, which is shared amongst all
workers.</para>
</listitem>
</itemizedlist>
<para>The figure below shows the status of a RabbitMQ node after
Nova components bootstrap in a test environment. Exchanges and
queues being created by Nova components are:</para>
<itemizedlist>
<listitem>
<para>Exchanges</para>
</listitem>
</itemizedlist>
<orderedlist>
<listitem>
<para>nova (topic exchange)</para>
</listitem>
</orderedlist>
<itemizedlist>
<listitem>
<para>Queues</para>
</listitem>
</itemizedlist>
<orderedlist>
<listitem>
<para>compute.phantom (phantom is hostname)</para>
</listitem>
<listitem>
<para>compute</para>
</listitem>
<listitem>
<para>network.phantom (phantom is hostname)</para>
</listitem>
<listitem>
<para>network</para>
</listitem>
<listitem>
<para>scheduler.phantom (phantom is hostname)</para>
</listitem>
<listitem>
<para>scheduler</para>
</listitem>
</orderedlist>
<para><guilabel>RabbitMQ Gotchas</guilabel></para>
<para>Nova uses Kombu to connect to the RabbitMQ environment.
Kombu is a Python library that in turn uses AMQPLib, a library
that implements the standard AMQP 0.8 at the time of writing.
When using Kombu, Invokers and Workers need the following
parameters in order to instantiate a Connection object that
connects to the RabbitMQ server (please note that most of the
following material can also be found in the Kombu documentation;
it has been summarized and revised here for the sake of
clarity):</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Hostname:</emphasis> The hostname
of the AMQP server.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Userid:</emphasis> A valid
username used to authenticate to the server.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Password:</emphasis> The password
<para><guilabel>AMQP Broker Load</guilabel></para>
<para>At any given time the load of a message broker node running
either Qpid or RabbitMQ is a function of the following
parameters:</para>
<itemizedlist>
<listitem>
<para>Throughput of API calls: the number of API calls (more
precisely rpc.call ops) being served by the OpenStack cloud
dictates the number of direct-based exchanges, related queues
and direct consumers connected to them.</para>
</listitem>
<listitem>
<para>Number of Workers: there is one queue shared amongst
workers with the same personality; however there are as many
exclusive queues as the number of workers; the number of
workers also dictates the number of routing keys within the
topic-based exchange, which is shared amongst all
workers.</para>
</listitem>
</itemizedlist>
<para>The figure below shows the status of a RabbitMQ node after
Nova components bootstrap in a test environment. Exchanges and
queues being created by Nova components are:</para>
<itemizedlist>
<listitem>
<para>Exchanges</para>
</listitem>
</itemizedlist>
<orderedlist>
<listitem>
<para>nova (topic exchange)</para>
</listitem>
</orderedlist>
<itemizedlist>
<listitem>
<para>Queues</para>
</listitem>
</itemizedlist>
<orderedlist>
<listitem>
<para>compute.phantom (phantom is hostname)</para>
</listitem>
<listitem>
<para>compute</para>
</listitem>
<listitem>
<para>network.phantom (phantom is hostname)</para>
</listitem>
<listitem>
<para>network</para>
</listitem>
<listitem>
<para>scheduler.phantom (phantom is hostname)</para>
</listitem>
<listitem>
<para>scheduler</para>
</listitem>
</orderedlist>
<para><guilabel>RabbitMQ Gotchas</guilabel></para>
<para>Nova uses Kombu to connect to the RabbitMQ environment. Kombu
is a Python library that in turn uses AMQPLib, a library that
implements the standard AMQP 0.8 at the time of writing. When
using Kombu, Invokers and Workers need the following parameters in
order to instantiate a Connection object that connects to the
RabbitMQ server (please note that most of the following material
can also be found in the Kombu documentation; it has been
summarized and revised here for the sake of clarity):</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Hostname:</emphasis> The hostname of
the AMQP server.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Userid:</emphasis> A valid username
used to authenticate to the server.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Virtual_host:</emphasis> The name
of the virtual host to work with. This virtual host must exist
on the server, and the user must have access to it. Default is
</listitem>
<listitem>
<para><emphasis role="bold">Password:</emphasis> The password
used to authenticate to the server.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Virtual_host:</emphasis> The name of
the virtual host to work with. This virtual host must exist on
the server, and the user must have access to it. Default is
“/”.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Port:</emphasis> The port of the
</listitem>
<listitem>
<para><emphasis role="bold">Port:</emphasis> The port of the
AMQP server. Default is 5672 (amqp).</para>
</listitem>
</itemizedlist>
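Kombu can also take these parameters packed into a single transport URL. A small helper sketches the mapping (the host and credentials below are placeholders; note that the default virtual host “/” must be percent-encoded as %2F when it appears in a URL path):

```python
from urllib.parse import quote

def amqp_url(hostname, userid, password, virtual_host='/', port=5672):
    # Build an AMQP transport URL from the parameters listed above.
    # The default vhost '/' becomes '%2F' when placed in the URL path.
    return 'amqp://%s:%s@%s:%d/%s' % (
        quote(userid, safe=''), quote(password, safe=''),
        hostname, port, quote(virtual_host, safe=''))

print(amqp_url('rabbit.example.com', 'nova', 'secret'))
# amqp://nova:secret@rabbit.example.com:5672/%2F
```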
<para>The following parameters have default values:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Insist:</emphasis> insist on
</listitem>
</itemizedlist>
<para>The following parameters have default values:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Insist:</emphasis> insist on
connecting to a server. In a configuration with multiple
load-sharing servers, the Insist option tells the server that
the client is insisting on a connection to the specified
server. Default is False.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Connect_timeout:</emphasis> the
</listitem>
<listitem>
<para><emphasis role="bold">Connect_timeout:</emphasis> the
timeout in seconds before the client gives up connecting to
the server. The default is no timeout.</para>
</listitem>
<listitem>
<para><emphasis role="bold">SSL:</emphasis> use SSL to connect
</listitem>
<listitem>
<para><emphasis role="bold">SSL:</emphasis> use SSL to connect
to the server. The default is False.</para>
</listitem>
</itemizedlist>
<para>More precisely, Consumers need the following
parameters:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Connection:</emphasis> the above
</listitem>
</itemizedlist>
<para>More precisely, Consumers need the following parameters:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Connection:</emphasis> the above
mentioned Connection object.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Queue:</emphasis>name of the
</listitem>
<listitem>
<para><emphasis role="bold">Queue:</emphasis>name of the
queue.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Exchange:</emphasis>name of the
</listitem>
<listitem>
<para><emphasis role="bold">Exchange:</emphasis>name of the
exchange the queue binds to.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Routing_key:</emphasis>the
</listitem>
<listitem>
<para><emphasis role="bold">Routing_key:</emphasis>the
interpretation of the routing key depends on the value of the
exchange_type attribute.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Direct exchange:</emphasis>if the
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Direct exchange:</emphasis>if the
routing key property of the message and the routing_key
attribute of the queue are identical, then the message is
forwarded to the queue.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Fanout
exchange:</emphasis>messages are forwarded to the queues bound
to the exchange, even if the binding does not have a key.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Topic exchange:</emphasis>if the
</listitem>
<listitem>
<para><emphasis role="bold">Fanout exchange:</emphasis>messages
are forwarded to the queues bound to the exchange, even if the
binding does not have a key.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Topic exchange:</emphasis>if the
routing key property of the message matches the routing key of
the queue according to a primitive pattern-matching scheme, then
the message is forwarded to the queue. The message routing key
@@ -362,79 +361,77 @@
matches zero or more words. For example, “*.stock.#” matches the
routing keys “usd.stock” and “eur.stock.db” but not
“stock.nasdaq”.</para>
</listitem>
</itemizedlist>
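The pattern matching just described can be approximated in a few lines. This is a sketch of the semantics ('*' matches exactly one dot-separated word, '#' matches zero or more), not RabbitMQ's actual implementation:

```python
import re

def topic_matches(pattern, routing_key):
    """AMQP topic semantics: '*' = exactly one word, '#' = zero or more."""
    parts = []
    for word in pattern.split('.'):
        if word == '*':
            parts.append(r'[^.]+')   # one word: anything but a dot
        elif word == '#':
            parts.append(r'.*')      # zero or more words
        else:
            parts.append(re.escape(word))
    regex = r'\.'.join(parts)
    # let a trailing '#' also match zero words (no dot at all)
    regex = regex.replace(r'\..*', r'(\..*)?')
    return re.fullmatch(regex, routing_key) is not None

print(topic_matches('*.stock.#', 'usd.stock'))     # True
print(topic_matches('*.stock.#', 'stock.nasdaq'))  # False
```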
<itemizedlist>
<listitem>
<para><emphasis role="bold">Durable:</emphasis>this flag
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Durable:</emphasis>this flag
determines the durability of both exchanges and queues;
durable exchanges and queues remain active when a RabbitMQ
server restarts. Non-durable exchanges/queues (transient
exchanges/queues) are purged when a server restarts. It is
worth noting that AMQP specifies that durable queues cannot
bind to transient exchanges. Default is True.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Auto_delete:</emphasis>if set, the
</listitem>
<listitem>
<para><emphasis role="bold">Auto_delete:</emphasis>if set, the
exchange is deleted when all queues have finished using it.
Default is False.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Exclusive:</emphasis>exclusive
</listitem>
<listitem>
<para><emphasis role="bold">Exclusive:</emphasis>exclusive
queues (that is, non-shared) may only be consumed from by the
current connection. When exclusive is on, this also implies
auto_delete. Default is False.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Exchange_type:</emphasis>AMQP
</listitem>
<listitem>
<para><emphasis role="bold">Exchange_type:</emphasis>AMQP
defines several default exchange types (routing algorithms)
that cover most of the common messaging use cases.</para>
</listitem>
<listitem>
<para><emphasis role="bold"
>Auto_ack:</emphasis>acknowledgement is handled automatically
once messages are received. By default auto_ack is set to
False, and the receiver is required to manually handle
acknowledgment.</para>
</listitem>
<listitem>
<para><emphasis role="bold">No_ack:</emphasis>it disables
</listitem>
<listitem>
<para><emphasis role="bold">Auto_ack:</emphasis>acknowledgement
is handled automatically once messages are received. By
default auto_ack is set to False, and the receiver is required
to manually handle acknowledgment.</para>
</listitem>
<listitem>
<para><emphasis role="bold">No_ack:</emphasis>it disables
acknowledgement on the server side. This is different from
auto_ack in that acknowledgement is turned off altogether.
This functionality increases performance but at the cost of
reliability. Messages can get lost if a client dies before it
can deliver them to the application.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Auto_declare:</emphasis>if this is
</listitem>
<listitem>
<para><emphasis role="bold">Auto_declare:</emphasis>if this is
True and the exchange name is set, the exchange will be
automatically declared at instantiation. Auto declare is on by
default. Publishers specify most of the parameters of Consumers
(except that they do not specify a queue name), but they can also
specify the following:</para>
</listitem>
<listitem>
<para><emphasis role="bold">Delivery_mode:</emphasis>the
default delivery mode used for messages. The value is an
integer. The following delivery modes are supported by
RabbitMQ:</para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para><emphasis role="bold">1 or “transient”:</emphasis>the
</listitem>
<listitem>
<para><emphasis role="bold">Delivery_mode:</emphasis>the default
delivery mode used for messages. The value is an integer. The
following delivery modes are supported by RabbitMQ:</para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para><emphasis role="bold">1 or “transient”:</emphasis>the
message is transient, which means it is stored in memory only,
and is lost if the server dies or restarts.</para>
</listitem>
<listitem>
<para><emphasis role="bold">2 or “persistent”:</emphasis>the
</listitem>
<listitem>
<para><emphasis role="bold">2 or “persistent”:</emphasis>the
message is persistent, which means the message is stored both
in memory and on disk, and therefore preserved if the server
dies or restarts.</para>
</listitem>
</itemizedlist>
<para>The default value is 2 (persistent). During a send
operation, Publishers can override the delivery mode of messages
so that, for example, transient messages can be sent over a
durable queue.</para>
</listitem>
</itemizedlist>
<para>The default value is 2 (persistent). During a send operation,
Publishers can override the delivery mode of messages so that, for
example, transient messages can be sent over a durable
queue.</para>
</chapter>

View File

@@ -92,8 +92,8 @@
has a default router for Internet traffic.</para>
<para>The router provides L3 connectivity between private
networks, meaning that different tenants can reach each
others instances unless additional filtering (e.g.,
security groups) is used. Because there is only a single
others instances unless additional filtering, such as
security groups, is used. Because there is only a single
router, tenant networks cannot use overlapping IPs. Thus,
it is likely that the admin would create the private
networks on behalf of tenants.</para>
@@ -127,4 +127,4 @@
</imageobject>
</mediaobject>
</figure>
</chapter>
</chapter>

View File

@@ -57,10 +57,10 @@
<para>The Proxy Servers are the public face of Swift and
handle all incoming API requests. Once a Proxy Server
receives a request, it will determine the storage node
based on the URL of the object, e.g.
https://swift.example.com/v1/account/container/object. The
Proxy Servers also coordinate responses, handle failures
and coordinate timestamps.</para>
based on the URL of the object, such as <literal>
https://swift.example.com/v1/account/container/object
</literal>. The Proxy Servers also coordinate responses,
handle failures and coordinate timestamps.</para>
<para>Proxy servers use a shared-nothing architecture and can
be scaled as needed based on projected workloads. A
minimum of two Proxy Servers should be deployed for
@@ -87,8 +87,8 @@
server, a cabinet, a switch, or even a data center.</para>
<para>The partitions of the ring are equally divided among all
the devices in the OpenStack Object Storage installation.
When partitions need to be moved around (for example if a
device is added to the cluster), the ring ensures that a
When partitions need to be moved around, such as when a
device is added to the cluster, the ring ensures that a
minimum number of partitions are moved at a time, and only
one replica of a partition is moved at a time.</para>
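The object-to-partition mapping can be sketched as follows. This is simplified for illustration: real Swift also mixes a cluster-wide hash path prefix/suffix into the MD5, and the partition power of 18 here is an arbitrary example value:

```python
import hashlib

def object_partition(account, container, obj, part_power=18):
    # The partition is the top `part_power` bits of the MD5 of the object
    # path, spreading objects evenly across 2**part_power partitions.
    path = '/%s/%s/%s' % (account, container, obj)
    digest = hashlib.md5(path.encode('utf-8')).digest()
    return int.from_bytes(digest, 'big') >> (128 - part_power)

part = object_partition('AUTH_test', 'photos', 'cat.jpg')
print(0 <= part < 2 ** 18)  # True
```

Because the hash is deterministic, every proxy computes the same partition for the same object; the ring then maps that partition number to devices.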
<para>Weights can be used to balance the distribution of

View File

@@ -214,7 +214,7 @@
corruption is found (in the case of bit rot, for
example), the file is quarantined, and replication
will replace the bad file from another replica. If
other errors are found they are logged (for example,
an object's listing can't be found on any container
server it should be).</para>
other errors are found, they are logged; for example, an
object's listing cannot be found on a container
server where it should be.</para>
</chapter>

View File

@@ -54,7 +54,7 @@ Action methods take parameters that are sucked out of the URL by mapper.connect(
<para>
-------------
Actions return a dictionary, and wsgi.Controller serializes that to JSON or XML based on the request's content-type.
If you define a new controller, you'll need to define a ``_serialization_metadata`` attribute on the class, to tell wsgi.Controller how to convert your dictionary to XML. It needs to know the singular form of any list tag (e.g. ``[servers]`` list contains ``[server]`` tags) and which dictionary keys are to be XML attributes as opposed to subtags (e.g. ``[server id="4"/]`` instead of ``[server][id]4[/id][/server]``).
If you define a new controller, you'll need to define a ``_serialization_metadata`` attribute on the class, to tell wsgi.Controller how to convert your dictionary to XML. It needs to know the singular form of any list tag (for example, a ``[servers]`` list contains ``[server]`` tags) and which dictionary keys are to be XML attributes as opposed to subtags (for example, ``[server id="4"/]`` instead of ``[server][id]4[/id][/server]``).
See `cinder/api/openstack/servers.py` for an example.
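As a hypothetical sketch only (the authoritative schema is whatever wsgi.Controller expects; check the referenced file), such an attribute might look like:

```python
class ServersController(object):
    # Hypothetical metadata: render 'id' as an XML attribute, so a server
    # serializes as <server id="4"/> rather than <server><id>4</id></server>.
    _serialization_metadata = {
        'application/xml': {
            'attributes': {'server': ['id']},
        },
    }

meta = ServersController._serialization_metadata
print('server' in meta['application/xml']['attributes'])  # True
```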
Faults
------
@@ -62,4 +62,4 @@ If you need to return a non-200, you should
return faults.Fault(webob.exc.HTTPNotFound())
</para>
</section>
</section>
</section>

View File

@@ -67,7 +67,7 @@ directory), instead of installing the packages at the system level.
Install the prerequisite packages.
On Ubuntu::
sudo apt-get install python-dev libssl-dev python-pip git-core libmysqlclient-dev libpq-dev
On Fedora-based distributions (e.g., Fedora/RHEL/CentOS/Scientific Linux)::
On Fedora-based distributions like Fedora, RHEL, CentOS and Scientific Linux::
sudo yum install python-devel openssl-devel python-pip git mysql-devel postgresql-devel
</para>
</section>
@@ -142,4 +142,4 @@ code review system. For information on how to submit your branch to Gerrit,
see GerritWorkflow_.
</para>
</section>
</section>
</section>

View File

@@ -21,7 +21,7 @@ through using the Python `eventlet [http://eventlet.net/]`_ and
`greenlet [http://packages.python.org/greenlet/]`_ libraries.
Green threads use a cooperative model of threading: thread context
switches can only occur when specific eventlet or greenlet library calls are
made (e.g., sleep, certain I/O calls). From the operating system's point of
made (for example, sleep and certain I/O calls). From the operating system's point of
view, each OpenStack service runs in a single thread.
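The cooperative model can be demonstrated with the standard library's asyncio, used here only as an analogy for eventlet/greenlet (which may not be installed): context switches happen only at explicit yield points, so the interleaving below is deterministic rather than pre-emptive.

```python
import asyncio

order = []

async def task(name):
    order.append(name + ':start')
    await asyncio.sleep(0)   # the only point where a switch can occur
    order.append(name + ':end')

async def main():
    await asyncio.gather(task('a'), task('b'))

asyncio.run(main())
print(order)  # ['a:start', 'b:start', 'a:end', 'b:end']
```

If the appends happened in ordinary pre-emptive threads, the interleaving could vary from run to run; here it cannot, which is exactly why green threads reduce (but, as the next sentence notes, do not eliminate) race conditions.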
The use of green threads reduces the likelihood of race conditions, but does
not completely eliminate them. In some cases, you may need to use the