Edits on Security Guide

Better follow conventions, especially:
* remove Latinism like via and i.e.
* use variable lists
* Add missing <filename>
* wrap long lines

Change-Id: I2a537df78ddf4fbeb127b058bf05caaf42441d5f
Andreas Jaeger 2014-05-11 16:37:29 +02:00
parent 8d32b87efb
commit aceec15371
20 changed files with 932 additions and 397 deletions

@@ -146,7 +146,23 @@
</section>
<section xml:id="ch005_security-domains-idp116736">
<title>Outbound attacks and reputational risk</title>
<para>Careful consideration should be given to potential outbound abuse from a cloud deployment. Whether public or private, clouds tend to have lots of resource available. An attacker who has established a point of presence within the cloud, either through hacking in or via entitled access (rogue employee), can bring these resources to bear against the internet at large. Clouds with Compute services make for ideal DDoS and brute force engines. This is perhaps a more pressing issue for public clouds as their users are largely unaccountable, and can quickly spin up numerous disposable instances for outbound attacks. Major damage can be inflicted upon a company's reputation if it becomes known for hosting malicious software or launching attacks on other networks. Methods of prevention include egress security groups, outbound traffic inspection, customer education and awareness, and fraud and abuse mitigation strategies.</para>
<para>
Careful consideration should be given to potential outbound
abuse from a cloud deployment. Whether public or private,
clouds tend to have lots of resource available. An attacker
who has established a point of presence within the cloud,
either through hacking or entitled access, such as rogue
employee, can bring these resources to bear against the
internet at large. Clouds with Compute services make for
ideal DDoS and brute force engines. The issue is more
pressing for public clouds as their users are largely
unaccountable, and can quickly spin up numerous disposable
instances for outbound attacks. Major damage can be
inflicted upon a company's reputation if it becomes known
for hosting malicious software or launching attacks on other
networks. Methods of prevention include egress security
groups, outbound traffic inspection, customer education and
awareness, and fraud and abuse mitigation strategies.</para>
</section>
<section xml:id="ch005_security-domains-idp120000">
<title>Attack types</title>

@@ -55,8 +55,8 @@
<para>Nodes should use Preboot eXecution Environment (PXE) for
provisioning. This significantly reduces the effort required
for redeploying nodes. The typical process involves the node
receiving various boot stages (i.e., progressively more
complex software to execute) from a server.</para>
receiving various boot stages&mdash;that is, progressively
more complex software to execute&mdash;from a server.</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="203" contentwidth="274"
@@ -136,50 +136,50 @@
</tr>
<tr>
<td><para>PCR-00</para></td>
<td><para>Core Root of Trust Measurement (CRTM), Bios
<td><para>Core Root of Trust Measurement (CRTM), BIOS
code, Host platform extensions</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-01</para></td>
<td><para>Host Platform Configuration</para></td>
<td><para>Host platform configuration</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-02</para></td>
<td><para>Option ROM Code</para></td>
<td><para>Option ROM code</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-03</para></td>
<td><para>Option ROM Configuration and Data</para></td>
<td><para>Option ROM configuration and data</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-04</para></td>
<td><para>Initial Program Loader (IPL) Code. For example,
<td><para>Initial Program Loader (IPL) code. For example,
master boot record.</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-05</para></td>
<td><para>IPL Code Configuration and Data</para></td>
<td><para>IPL code configuration and data</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-06</para></td>
<td><para>State Transition and Wake Events</para></td>
<td><para>State transition and wake events</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-07</para></td>
<td><para>Host Platform Manufacturer Control</para></td>
<td><para>Host platform manufacturer control</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-08</para></td>
<td><para>Platform specific, often Kernel, Kernel
Extensions, and Drivers</para></td>
<td><para>Platform specific, often kernel, kernel
extensions, and drivers</para></td>
<td><para>Software</para></td>
</tr>
<tr>
@@ -263,7 +263,7 @@
<para>Use a read-only file system where possible. Ensure
that writeable file systems do not permit execution. This
can be handled through the mount options provided in
<literal>/etc/fstab</literal>.</para>
<filename>/etc/fstab</filename>.</para>
</listitem>
<listitem>
<para>Use a mandatory access control policy to contain the

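The mount options mentioned above can be sketched as an `/etc/fstab` entry; the device name and mount point here are hypothetical, not a recommended layout:

```
# /etc/fstab sketch (hypothetical device and mount point):
# a writeable file system mounted so that it does not permit
# execution, setuid binaries, or device files.
/dev/sdb1  /var/lib/data  ext4  defaults,noexec,nosuid,nodev  0 2
```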
@@ -35,8 +35,7 @@
The OpenStack dashboard (horizon) provides administrators and
tenants with a web-based graphical interface to provision and
access cloud-based resources. The dashboard communicates with
the back-end services via calls to the OpenStack API
(discussed above).
the back-end services through calls to the OpenStack API.
</para>
<section xml:id="ch014_best-practices-for-operator-mode-access-idp50608">
<title>Capabilities</title>

@@ -8,23 +8,51 @@
<title>Introduction to SSL/TLS</title>
<para>OpenStack services receive requests on behalf of users on public networks as well as from other internal services over management networks. Inter-service communications can also occur over public networks depending on deployment and architecture choices.</para>
<para>While it is commonly accepted that data over public networks should be secured using cryptographic measures, such as Secure Sockets Layer or Transport Layer Security (SSL/TLS) protocols, it is insufficient to rely on security domain separation to protect internal traffic. Using a security-in-depth approach, we recommend securing all domains with SSL/TLS, including the management domain services. It is important that should a tenant escape their VM isolation and gain access to the hypervisor or host resources, compromise an API endpoint, or any other service, they must not be able to easily inject or capture messages, commands, or otherwise affect or control management capabilities of the cloud. SSL/TLS provides the mechanisms to ensure authentication, non-repudiation, confidentiality, and integrity of user communications to the OpenStack services and between the OpenStack services themselves.</para>
<para>Public Key Infrastructure (PKI) is the set of hardware, software, and policies to operate a secure system which provides authentication, non-repudiation, confidentiality, and integrity. The core components of PKI are:</para>
<itemizedlist><listitem>
<para>End Entity - user, process, or system that is the subject of a certificate</para>
</listitem>
<listitem>
<para>Certification Authority (<glossterm>CA</glossterm>) - defines certificate policies, management, and issuance of certificates</para>
</listitem>
<listitem>
<para>Registration Authority (RA) - an optional system to which a CA delegates certain management functions</para>
</listitem>
<listitem>
<para>Repository - Where the end entity certificates and certificate revocation lists are stored and looked up - sometimes referred to as the "certificate bundle"</para>
</listitem>
<listitem>
<para>Relying Party - The end point that is trusting that the CA is valid.</para>
</listitem>
</itemizedlist>
<para>
Public Key Infrastructure (PKI) is a set of hardware, software,
policies, and procedures required to operate a secure system
that provides authentication, non-repudiation, confidentiality,
and integrity. The core components of PKI are:</para>
<variablelist>
<varlistentry>
<term>End entity</term>
<listitem>
<para>User, process, or system that is the subject of a
certificate.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Certification Authority (<glossterm>CA</glossterm>)</term>
<listitem>
<para>Defines certificate policies, management, and issuance
of certificates.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Registration Authority (RA)</term>
<listitem>
<para>An optional system to which a CA delegates certain
management functions.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Repository</term>
<listitem>
<para>
Where the end entity certificates and certificate
revocation lists are stored and looked up; sometimes
referred to as the <emphasis role="italic">certificate
bundle</emphasis>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Relying party</term>
<listitem>
<para>The endpoint that trusts that the CA is valid.</para>
</listitem>
</varlistentry>
</variablelist>
<para>PKI builds the framework on which to provide encryption algorithms, cipher modes, and protocols for securing data and authentication. We strongly recommend securing all services with Public Key Infrastructure (PKI), including the use of SSL/TLS for API endpoints. It is impossible for the encryption or signing of transports or messages alone to solve all these problems. Hosts themselves must be secure and implement policy, namespaces, and other controls to protect their private credentials and keys. However, the challenges of key management and protection do not reduce the necessity of these controls, or lessen their importance.</para>
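The PKI roles listed above can be illustrated with a throwaway example using the `openssl` command line; the file names and the `api.example.local` subject are purely illustrative, and a one-day self-signed root is used only for demonstration:

```shell
# Sketch only: a toy CA issuing a certificate to an end entity.
# All names (ca.key, api.example.local, ...) are illustrative.
tmp=$(mktemp -d)
cd "$tmp"

# Certification Authority: create a self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -subj "/CN=Example Root CA" -days 1

# End entity: create a key pair and a certificate signing request.
openssl req -newkey rsa:2048 -nodes -keyout svc.key -out svc.csr \
    -subj "/CN=api.example.local"

# The CA issues the end entity certificate.
openssl x509 -req -in svc.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out svc.crt -days 1

# Relying party: verify the certificate against the CA bundle.
openssl verify -CAfile ca.crt svc.crt
```

The final command plays the relying-party role: it succeeds only because the issuing CA is in the trusted bundle.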
<section xml:id="ch017_threat-models-confidence-and-confidentiality-idp51264">
<title>Certification authorities</title>
@@ -62,29 +90,38 @@
</section>
<section xml:id="ch017_threat-models-confidence-and-confidentiality-idp64128">
<title>Summary</title>
<para>Given the complexity of the OpenStack components and the number of deployment possibilities, you must take care to ensure that each component gets the appropriate configuration of SSL certificates, keys, and CAs. The following services will be discussed in later sections of this book where SSL and PKI is available (either natively or possible via SSL proxy):</para>
<itemizedlist><listitem>
<para>
Given the complexity of the OpenStack components and the
number of deployment possibilities, you must take care to
ensure that each component gets the appropriate configuration
of SSL certificates, keys, and CAs. Subsequent sections discuss
the following services:
</para>
<itemizedlist>
<listitem>
<para>Compute API endpoints</para>
</listitem>
<listitem>
<listitem>
<para>Identity API endpoints</para>
</listitem>
<listitem>
<listitem>
<para>Networking API endpoints</para>
</listitem>
<listitem>
<listitem>
<para>Storage API endpoints</para>
</listitem>
<listitem>
<listitem>
<para>Messaging server</para>
</listitem>
<listitem>
<listitem>
<para>Database server</para>
</listitem>
<listitem>
<listitem>
<para>Dashboard</para>
</listitem>
</itemizedlist>
<para>Throughout this book we will use SSL as shorthand to refer to these recommendations for SSL/TLS protocols.</para>
</itemizedlist>
<para>
This guide uses the term SSL as a shorthand to refer to these
recommendations for SSL/TLS protocols.</para>
</section>
</chapter>

@@ -6,14 +6,32 @@
xml:id="ch021_paste-and-middleware">
<?dbhtml stop-chunking?>
<title>API endpoint configuration recommendations</title>
<para>This chapter provides recommendations for improving the security of both public and internal endpoints.</para>
<para>
This chapter recommends security enhancements for both public
and private-facing API endpoints.</para>
<section xml:id="ch021_paste-and-middleware-idp38176">
<title>Internal API communications</title>
<para>OpenStack provides both public facing and private API endpoints. By default, OpenStack components use the publicly defined endpoints. The recommendation is to configure these components to use the API endpoint within the proper security domain.</para>
<para>Services select their respective API endpoints based on the OpenStack service catalog. The issue here is these services may not obey the listed public or internal API end point values. This can lead to internal management traffic being routed to external API endpoints.</para>
<para>
OpenStack provides both public facing and private API
endpoints. By default, OpenStack components use the publicly
defined endpoints. The recommendation is to configure these
components to use the API endpoint within the proper security
domain.</para>
<para>
Services select their respective API endpoints based on the
OpenStack service catalog. These services might not
obey the listed public or internal API endpoint
values. This can lead to internal management traffic being
routed to external API endpoints.</para>
<section xml:id="ch021_paste-and-middleware-idp40496">
<title>Configure internal URLs in Identity service catalog</title>
<para>The Identity Service catalog should be aware of your internal URLs. While this feature is not utilized by default, it may be leveraged through configuration. Additionally, it should be forward-compatible with expectant changes once this behavior becomes the default.</para>
<para>
The Identity service catalog should be aware of your
internal URLs. While this feature is not utilized by
default, it may be leveraged through
configuration. Additionally, it should be forward-compatible
with expectant changes once this behavior becomes the
default.</para>
<para>To register an internal URL for an endpoint:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--region RegionOne \
@@ -24,8 +42,17 @@
</section>
<section xml:id="ch021_paste-and-middleware-idp43360">
<title>Configure applications for internal URLs</title>
<para>Some services can be forced to use specific API endpoints. Therefore, it is recommended that each OpenStack service communicating to the API of another service must be explicitly configured to access the proper internal API endpoint.</para>
<para>Each project may present an inconsistent way of defining target API endpoints. Future releases of OpenStack seek to resolve these inconsistencies through consistent use of the Identity Service catalog.</para>
<para>
You can force some services to use specific API
endpoints. Therefore, each OpenStack service that
communicates with the API of another service should be
explicitly configured to access the proper internal API
endpoint.</para>
<para>
Each project may present an inconsistent way of defining
target API endpoints. Future releases of OpenStack seek to
resolve these inconsistencies through consistent use of the
Identity Service catalog.</para>
<section xml:id="ch021_paste-and-middleware-idp45520">
<title>Configuration example #1: nova</title>
<programlisting language="ini">[DEFAULT]
@@ -44,27 +71,72 @@ s3_use_ssl=True</programlisting>
</section>
<section xml:id="ch021_paste-and-middleware-idp48768">
<title>Paste and middleware</title>
<para>Most API endpoints and other HTTP services in OpenStack utilize the Python Paste Deploy library. This is important to understand from a security perspective as it allows for manipulation of the request filter pipeline through the application's configuration. Each element in this chain is referred to as <emphasis>middleware</emphasis>. Changing the order of filters in the pipeline or adding additional middleware may have unpredictable security impact.</para>
<para>It is not uncommon that implementors will choose to add additional middleware to extend OpenStack's base functionality. We recommend implementors make careful consideration of the potential exposure introduced by the addition of non-standard software components to their HTTP request pipeline.</para>
<para>Additional information on Paste Deploy may be found at
<link xlink:href="http://pythonpaste.org/deploy/">http://pythonpaste.org/deploy/</link>.</para>
<para>
Most API endpoints and other HTTP services in OpenStack use
the Python Paste Deploy library. From a security perspective,
this library enables manipulation of the request filter
pipeline through the application's configuration. Each element
in this chain is referred to as
<emphasis>middleware</emphasis>. Changing the order of filters
in the pipeline or adding additional middleware might have
unpredictable security impact.</para>
<para>
Commonly, implementers add middleware to extend OpenStack's
base functionality. We recommend that implementers
carefully consider the potential exposure introduced by
adding non-standard software components to their HTTP
request pipeline.</para>
<para>For more information about Paste Deploy, see
<link xlink:href="http://pythonpaste.org/deploy/">http://pythonpaste.org/deploy/</link>.
</para>
</section>
<section xml:id="ch021_paste-and-middleware-idp52496">
<title>API endpoint process isolation and policy</title>
<para>API endpoint processes, especially those that reside within the public security domain should be isolated as much as possible. Where deployments allow, API endpoints should be deployed on separate hosts for increased isolation.</para>
<para>
You should isolate API endpoint processes as much as
possible, especially those that reside within the public
security domain. Where deployments allow, deploy API
endpoints on separate hosts for increased isolation.</para>
<section xml:id="ch021_paste-and-middleware-idp53840">
<title>Namespaces</title>
<para>Many operating systems now provide compartmentalization support. Linux supports namespaces to assign processes into independent domains. System compartmentalization is covered in more detail in other parts of the guide.</para>
<para>
Many operating systems now provide compartmentalization
support. Linux supports namespaces to assign processes into
independent domains. Other parts of this guide cover system
compartmentalization in more detail.</para>
</section>
<section xml:id="ch021_paste-and-middleware-idp55232">
<title>Network policy</title>
<para>API endpoints typically bridge multiple security domains, as such particular attention should be paid to the compartmentalization of the API processes. See the <emphasis>Security Domain Bridging</emphasis> section for additional information in this area.</para>
<para>With careful modeling, network ACLs and IDS technologies can be use to enforce explicit point to point communication between network services. As critical cross domain service, this type of explicit enforcement works well for OpenStack's message queue service.</para>
<para>Policy enforcement can be implemented through the configuration of services, host-based firewalls (such as IPTables), local policy (SELinux or AppArmor), and optionally enforced through global network policy.</para>
<para>
Because API endpoints typically bridge multiple security
domains, you must pay particular attention to the
compartmentalization of the API processes. See <xref
linkend="ch005_security-domains-idp61360"/> for additional
information in this area.</para>
<para>
With careful modeling, you can use network ACLs and IDS
technologies to enforce explicit point-to-point
communication between network services. As a critical
cross-domain service, OpenStack's message queue service
works well with this type of explicit enforcement.</para>
<para>
To enforce policies, you can configure services, host-based
firewalls (such as iptables), local policy (SELinux or
AppArmor), and optionally global network policy.</para>
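As a configuration sketch, a host-based firewall policy might admit API traffic only from the management network; the network address and the port (8774, the Compute API) are placeholders for your own deployment values:

```
# iptables sketch (placeholder addresses): accept Compute API
# connections only from the management network, drop the rest.
iptables -A INPUT -p tcp --dport 8774 -s 192.168.10.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8774 -j DROP
```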
</section>
<section xml:id="ch021_paste-and-middleware-idp58704">
<title>Mandatory access controls</title>
<para>API endpoint processes should be isolated from each other and other processes on a machine. The configuration for those processes should be restricted to those processes not only by Discretionary Access Controls, but through Mandatory Access Controls. The goal of these enhanced access control is to aid in the containment and escalation of API endpoint security breaches. With mandatory access controls, such breaches will severely limit access to resources and provide earlier alerting on such events.</para>
<para>
You should isolate API endpoint processes from each other
and other processes on a machine. The configuration for
those processes should be restricted not only by
discretionary access controls, but also by mandatory
access controls. The goal of these enhanced access
controls is to aid in the containment of API endpoint
security breaches. With mandatory access controls, such
breaches are severely limited in their access to resources
and provide earlier alerting on such events.</para>
</section>
</section>
</chapter>

@@ -33,7 +33,7 @@
control logs to identify unauthorized attempts to access
accounts. Possible remediation would include reviewing the
strength of the user password, or blocking the network source
of the attack via firewall rules. Firewall rules on the
of the attack through firewall rules. Firewall rules on the
keystone server that restrict the number of connections could
be used to reduce the attack effectiveness, and thus dissuade
the attacker.</para>
@@ -63,13 +63,14 @@
Identity database may be separate from databases used by other
OpenStack services to reduce the risk of a compromise of the
stored credentials.</para>
<para>When authentication is provided via username and password,
the Identity Service does not enforce policies on password
strength, expiration, or failed authentication attempts as
recommended by NIST Special Publication 800-118 (draft).
Organizations that desire to enforce stronger password
policies should consider using Identity
extensions or external authentication services.</para>
<para>
When you use a user name and password to authenticate,
Identity does not enforce policies on password strength,
expiration, or failed authentication attempts as recommended
by NIST Special Publication 800-118 (draft). Organizations
that desire to enforce stronger password policies should
consider using Identity extensions or external authentication
services.</para>
<para>LDAP simplifies integration of Identity authentication
into an organization's existing directory service and user
account management processes.</para>
@@ -85,9 +86,8 @@
<para>Note that if the LDAP system has attributes defined for
the user such as admin, finance, HR etc, these must be mapped
into roles and groups within Identity for use by the various
OpenStack services. The <emphasis>etc/keystone.conf</emphasis>
file provides the mapping from the LDAP attributes to Identity
attributes.</para>
OpenStack services. The <filename>/etc/keystone.conf</filename>
file maps LDAP attributes to Identity attributes.</para>
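As a sketch, this mapping lives in the `[ldap]` section of <filename>/etc/keystone.conf</filename>; the attribute names below are illustrative and depend entirely on your directory schema:

```
[ldap]
# Illustrative values; adjust to your directory schema.
url = ldap://ldap.example.com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
user_id_attribute = cn
user_name_attribute = sn
user_mail_attribute = mail
```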
<para>The Identity Service <emphasis role="bold">MUST
NOT</emphasis> be allowed to write to LDAP services used for
authentication outside of the OpenStack deployment as this
@@ -198,7 +198,7 @@
<para>The cloud administrator should protect sensitive
configuration files for unauthorized modification. This can be
achieved with mandatory access control frameworks such as
SELinux, including <literal>/etc/keystone.conf</literal> and
SELinux, including <filename>/etc/keystone.conf</filename> and
X.509 certificates.</para>
<para>For client authentication with SSL, you need to issue

@@ -10,7 +10,7 @@
portal to provision their own resources within the limits set by
administrators. These include provisioning users, defining instance flavors,
uploading VM images, managing networks, setting up security groups, starting
instances, and accessing the instances via a console.</para>
instances, and accessing the instances through a console.</para>
<para>The dashboard is based on the Django web framework, therefore
secure deployment practices for Django apply directly to horizon.
This guide provides a popular set of Django security
@@ -42,13 +42,15 @@
</section>
<section xml:id="ch025_web-dashboard-idp240704">
<title>HTTPS</title>
<para>The dashboard should be deployed behind a secure HTTPS
server using a valid, trusted certificate from a recognized
certificate authority (CA). Private organization-issued
certificates are only appropriate when the root of trust is
pre-installed in all user browsers.</para>
<para>HTTP requests to the dashboard domain should be configured
to redirect to the fully qualified HTTPS URL.</para>
<para>
Deploy the dashboard behind a secure
<glossterm>HTTPS</glossterm> server by using a valid, trusted
certificate from a recognized certificate authority
(CA). Private organization-issued certificates are only
appropriate when the root of trust is pre-installed in all user
browsers.</para>
<para>Configure HTTP requests to the dashboard domain to redirect
to the fully qualified HTTPS URL.</para>
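One way to implement this redirect, assuming an Apache front end and a hypothetical `dashboard.example.com` host name, is a minimal port-80 virtual host:

```
# Hypothetical Apache virtual host: redirect all dashboard
# HTTP requests to the fully qualified HTTPS URL.
<VirtualHost *:80>
    ServerName dashboard.example.com
    Redirect permanent / https://dashboard.example.com/
</VirtualHost>
```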
</section>
<section xml:id="ch025_web-dashboard-idp242624">
<title>HTTP Strict Transport Security (HSTS)</title>

@@ -136,8 +136,8 @@
<prompt>#</prompt> <userinput>find /etc/swift/ -type d -exec chmod 750 {} \;</userinput></screen>
<para>This restricts only root to be able to modify
configuration files while allowing the services to
read them via their group membership in
"swift."</para>
read them through their group membership in
the swift group.</para>
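The effect of these commands can be checked on a scratch copy of the tree; the directory and the file name below stand in for the real <filename>/etc/swift</filename> contents:

```shell
# Sketch: reproduce the permission scheme on a throwaway
# directory and confirm the resulting modes.
tmp=$(mktemp -d)
mkdir -p "$tmp/swift"
touch "$tmp/swift/proxy-server.conf"   # illustrative file name

# Directories 750 (rwxr-x---), configuration files 640 (rw-r-----)
find "$tmp/swift" -type f -exec chmod 640 {} \;
find "$tmp/swift" -type d -exec chmod 750 {} \;

stat -c '%a' "$tmp/swift"                    # prints 750
stat -c '%a' "$tmp/swift/proxy-server.conf"  # prints 640
```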
</section>
</section>
<!-- Securing Services General -->
@@ -240,14 +240,15 @@
<section xml:id="ch027_storage-idpE1">
<title>Securing proxy services</title>
<para>A Proxy service node should have at least two interfaces
(physical or virtual): one public and one private. The
public interface may be protected via firewalls or service
binding. The public facing service is an HTTP web server
that processes end-point client requests, authenticates
them, and performs the appropriate action. The private
interface does not require any listening services but is
instead used to establish outgoing connections to storage
service nodes on the private storage network.</para>
(physical or virtual): one public and one
private. Firewalls or service binding might protect the
public interface. The public facing service is an HTTP web
server that processes endpoint client requests,
authenticates them, and performs the appropriate
action. The private interface does not require any
listening services but is instead used to establish
outgoing connections to storage service nodes on the
private storage network.</para>
<section xml:id="ch027_storage-idpE12">
<title>Use SSL/TLS</title>
<para>The built-in or included web server that comes with

@@ -41,7 +41,18 @@
</section>
<section xml:id="ch028_case-studies-identity-management-idp131936">
<title>Bob's public cloud</title>
<para>Bob must support authentication by the general public, so he elects to use provide for username / password authentication. He has concerns about brute force attacks attempting to crack user passwords, so he also uses an external authentication extension that throttles the number of failed login attempts. Bob's Management network is separate from the other networks within his cloud, but can be reached from his corporate network via ssh. As recommended earlier, Bob requires administrators to use two-factor authentication on the Management network to reduce the risk from compromised administrator passwords.</para>
<para>
Because Bob must support authentication for the general
public, he decides to use user name and password
authentication. He has concerns about brute force attacks
attempting to crack user passwords, so he also uses an
external authentication extension that throttles the number of
failed login attempts. Bob's management network is separate
from the other networks within his cloud, but can be reached
from his corporate network through SSH. As recommended
earlier, Bob requires administrators to use two-factor
authentication on the management network to reduce the risk
from compromised administrator passwords.</para>
<para>Bob also deploys the dashboard to manage many aspects of
the cloud. He deploys the dashboard with HSTS to ensure that
only HTTPS is used. He has ensured that the dashboard is

@@ -7,6 +7,15 @@
<?dbhtml stop-chunking?>
<title>State of networking</title>
<para>OpenStack Networking in the Grizzly release enables the end-user or tenant to define, utilize, and consume networking resources in new ways that had not been possible in previous OpenStack Networking releases. OpenStack Networking provides a tenant-facing API for defining network connectivity and IP addressing for instances in the cloud in addition to orchestrating the network configuration. With the transition to an API-centric networking service, cloud architects and administrators should take into consideration best practices to secure physical and virtual network infrastructure and services.</para>
<para>OpenStack Networking was designed with a plug-in architecture that provides extensibility of the API via open source community or third-party services. As you evaluate your architectural design requirements, it is important to determine what features are available in OpenStack Networking core services, any additional services that are provided by third-party products, and what supplemental services are required to be implemented in the physical infrastructure.</para>
<para>
OpenStack Networking was designed with a plug-in architecture
that provides extensibility of the API through open source
community or third-party services. As you evaluate your
architectural design requirements, it is important to determine
what features are available in OpenStack Networking core
services, any additional services that are provided by
third-party products, and what supplemental services are
required to be implemented in the physical
infrastructure.</para>
<para>This section is a high-level overview of what processes and best practices should be considered when implementing OpenStack Networking. We will talk about the current state of services that are available, what future services will be implemented, and the current limitations in this project.</para>
</chapter>

@@ -5,61 +5,172 @@
version="5.0"
xml:id="ch031_neutron-architecture">
<?dbhtml stop-chunking?>
<title>Networking architecture</title>
<para>
OpenStack Networking is a standalone service that often deploys
several processes across a number of nodes. These processes
interact with each other and other OpenStack services. The main
process of the OpenStack Networking service is <systemitem
class="service">neutron-server</systemitem>, a Python daemon that
exposes the OpenStack Networking API and passes tenant requests to
a suite of plug-ins for additional processing.</para>
<para>
The OpenStack Networking components are:</para>
<variablelist>
<varlistentry>
<term>neutron server (<systemitem
class="service">neutron-server</systemitem> and <systemitem
class="service">neutron-*-plugin</systemitem>)</term>
<listitem>
<para>
This service runs on the network node to service the
Networking API and its extensions. It also enforces the
network model and IP addressing of each port. The
neutron-server and plugin agents require access to a
database for persistent storage and access to a message
queue for inter-communication.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>plugin agent (<systemitem class="service">neutron-*-agent</systemitem>)
</term>
<listitem>
<para>Runs on each compute node to manage local virtual
switch (vswitch) configuration. The plug-in that you use
determines which agents run. This service requires message
queue access. <emphasis>Optional depending on
plugin.</emphasis></para>
</listitem>
</varlistentry>
<varlistentry>
<term>DHCP agent (<systemitem class="service">neutron-dhcp-agent</systemitem>)
</term>
<listitem>
<para>
Provides DHCP services to tenant networks. This agent is
the same across all plug-ins and is responsible for
maintaining DHCP configuration. The <systemitem
class="service">neutron-dhcp-agent</systemitem> requires
message queue access.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>L3 agent (<systemitem class="service">neutron-l3-agent</systemitem>)
</term>
<listitem>
<para>
Provides L3/NAT forwarding for external network access of
VMs on tenant networks. Requires message queue
access. <emphasis>Optional depending on
plug-in.</emphasis></para>
</listitem>
</varlistentry>
<varlistentry>
<term>network provider services (SDN server/services)</term>
<listitem>
<para>
Provide additional networking services to tenant
networks. These SDN services might interact with the
<systemitem class="service">neutron-server</systemitem>,
<systemitem class="service">neutron-plugin</systemitem>,
and/or plugin-agents through REST APIs or other
communication channels.</para>
</listitem>
</varlistentry>
</variablelist>
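<para>
The database and message queue dependencies described above are
typically expressed in <filename>neutron.conf</filename>. The
following is an illustrative sketch only: the host names and
credentials are placeholders, not recommended values.</para>

```ini
[DEFAULT]
# Message queue used by the neutron server and agents for RPC
# (placeholder host and credentials).
rabbit_host = mgmt-queue.example.com
rabbit_userid = neutron
rabbit_password = RABBIT_PASS

[database]
# Persistent storage for the server and plug-in (placeholder DSN
# on the isolated management network).
connection = mysql://neutron:NEUTRON_DBPASS@mgmt-db.example.com/neutron
```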
<para>
The following figure shows an architectural and networking
flow diagram of the OpenStack Networking components:</para>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="319" contentwidth="536"
fileref="static/sdn-connections.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/sdn-connections.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<section xml:id="ch031_neutron-architecture-idp61360">
<title>OpenStack Networking service placement on physical servers</title>
<para>
This guide focuses on a standard architecture
that includes a <emphasis>cloud controller</emphasis> host, a
<emphasis>network</emphasis> host, and a set of
<emphasis>compute</emphasis> hypervisors for running
VMs.</para>
<section xml:id="ch031_neutron-architecture-idp63888">
<title>Network connectivity of physical servers</title>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="364" contentwidth="536"
fileref="static/1aa-network-domains-diagram.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/1aa-network-domains-diagram.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<para>A standard OpenStack Networking setup has up to four
distinct physical data center networks:</para>
<variablelist>
<varlistentry>
<term>Management network</term>
<listitem>
<para>
Used for internal communication between OpenStack
components. The IP addresses on this network should be
reachable only within the data center. This network is
considered the Management Security Domain.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Guest network</term>
<listitem>
<para>
Used for VM data communication within the cloud
deployment. The IP addressing requirements of this
network depend on the OpenStack Networking plug-in in
use and the network configuration choices of the
virtual networks made by the tenant. This network is
considered the Guest Security Domain.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>External network</term>
<listitem>
<para>
Used to provide VMs with Internet access in some
deployment scenarios. The IP addresses on this network
should be reachable by anyone on the Internet. This
network is considered to be in the Public Security
Domain.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>API network</term>
<listitem>
<para>
Exposes all OpenStack APIs, including the OpenStack
Networking API, to tenants. The IP addresses on this
network should be reachable by anyone on the
Internet. This may be the same network as the external
network, as it is possible to create a subnet for the
external network that uses an IP allocation range that
covers less than the full range of IP addresses in an IP
block. This network is considered the Public Security
Domain.</para>
</listitem>
</varlistentry>
</variablelist>
<para>
For additional information see the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html">Networking
chapter</link> in the <citetitle>OpenStack Cloud
Administrator Guide</citetitle>.</para>
</section>
</section>
</chapter>

@ -34,7 +34,8 @@
<para>OpenStack Compute supports tenant network traffic access controls directly when deployed with the legacy nova-network service, or may defer access control to the OpenStack Networking service.</para>
<para>Note that legacy nova-network security groups are applied to all virtual interface ports on an instance using iptables.</para>
<para>Security groups allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a virtual interface port. Security group rules are stateful L2-L4 traffic filters.</para>
<para>It is our recommendation that you enable security groups
through OpenStack Networking.</para>
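<para>
A minimal <filename>nova.conf</filename> sketch for deferring
security group enforcement to OpenStack Networking follows; verify
the option values against your release before use.</para>

```ini
# Disable the Compute built-in packet filtering so that rules are
# not applied twice by both services.
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# Proxy all security group calls to the OpenStack Networking API.
security_group_api = neutron
```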
</section>
<section xml:id="ch032_networking-best-practices-idp59008">
<title>L3 routing and NAT</title>
@ -46,18 +47,18 @@
<title>Quality of Service (QoS)</title>
<para>The ability to set QoS on the virtual interface ports of tenant instances is a current deficiency for OpenStack Networking. The application of QoS for traffic shaping and rate-limiting at the physical network edge device is insufficient due to the dynamic nature of workloads in an OpenStack deployment and cannot be leveraged in the traditional way. QoS-as-a-Service (QoSaaS) is currently in development for the OpenStack Networking Havana release as an experimental feature. QoSaaS plans to provide the following services:</para>
<itemizedlist><listitem>
<para>Traffic shaping through DSCP markings</para>
</listitem>
<listitem>
<para>Rate-limiting on a per port/network/tenant basis.</para>
</listitem>
<listitem>
<para>Port mirroring (through open source or third-party plug-ins)</para>
</listitem>
<listitem>
<para>Flow analysis (through open source or third-party plug-ins)</para>
</listitem>
</itemizedlist>
<para>Tenant traffic port mirroring or network flow monitoring is currently not an exposed feature in OpenStack Networking. There are third-party plug-in extensions that do provide port mirroring on a per port/network/tenant basis. If Open vSwitch is used on the networking hypervisor, it is possible to enable sFlow and port mirroring; however, it will require some operational effort to implement.</para>
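<para>
As an illustration of the operational effort involved, sFlow export
can be enabled on an Open vSwitch integration bridge with a command
along the following lines. The bridge name, agent interface,
collector address, and sampling parameters shown are placeholders to
adjust for your monitoring deployment.</para>

```
# Enable sFlow on the br-int bridge, exporting to a placeholder
# collector at 10.0.0.5:6343.
ovs-vsctl -- --id=@sflow create sflow agent=eth0 \
    target="10.0.0.5:6343" sampling=64 polling=10 \
    -- set bridge br-int sflow=@sflow
```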
</section>
<section xml:id="ch032_networking-best-practices-idp69408">
@ -84,16 +85,44 @@
<section xml:id="ch032_networking-best-practices-idp78032">
<title>Networking services limitations</title>
<para>OpenStack Networking has the following known limitations:</para>
<variablelist>
<varlistentry>
<term>Overlapping IP addresses</term>
<listitem>
<para>
If nodes that run either <systemitem
class="service">neutron-l3-agent</systemitem> or
<systemitem
class="service">neutron-dhcp-agent</systemitem> use
overlapping IP addresses, those nodes must use Linux
network namespaces. By default, the DHCP and L3 agents
use Linux network namespaces. However, if the host does
not support these namespaces, run the DHCP and L3 agents
on different hosts.</para>
<para>
If network namespace support is not present, a further
limitation of the L3 agent is that only a single logical
router is supported.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Multi-host DHCP-agent</term>
<listitem>
<para>
OpenStack Networking supports multiple L3 and DHCP
agents with load balancing. However, tight coupling of
the location of the virtual machine is not
supported.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>No IPv6 support for L3 agents</term>
<listitem>
<para>
The <systemitem class="service">neutron-l3-agent</systemitem>,
used by many plug-ins to implement L3 forwarding, supports
only IPv4 forwarding.</para>
</listitem>
</varlistentry>
</variablelist>
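<para>
The network namespace limitation above can be checked for on a given
host. The following is an illustrative sketch, not an official
OpenStack check; it only looks for the kernel's per-process network
namespace handle.</para>

```python
import os

def kernel_supports_net_namespaces():
    """Rough check for Linux network namespace support, which the
    DHCP and L3 agents rely on when tenants use overlapping IP
    addresses."""
    # Kernels built with namespace support expose a per-process
    # "net" namespace handle under /proc.
    return os.path.exists("/proc/self/ns/net")

print(kernel_supports_net_namespaces())
```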
</section>
</chapter>

@ -6,39 +6,59 @@
xml:id="ch033_securing-neutron-services">
<?dbhtml stop-chunking?>
<title>Securing OpenStack Networking services</title>
<para>
To secure OpenStack Networking, you must understand how the
workflow process for tenant instance creation maps to
security domains.</para>
<para>
There are four main services that interact with OpenStack
Networking. In a typical OpenStack deployment these services map
to the following security domains:</para>
<itemizedlist><listitem>
<para>OpenStack dashboard: Public and management</para>
</listitem>
<listitem>
<para>OpenStack Identity: Management</para>
</listitem>
<listitem>
<para>OpenStack compute node: Management and guest</para>
</listitem>
<listitem>
<para>OpenStack network node: Management, guest, and possibly
public depending upon neutron-plugin in use.</para>
</listitem>
<listitem>
<para>SDN services node: Management, guest and possibly
public depending upon product used.</para>
</listitem>
</itemizedlist>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="454" contentwidth="682"
fileref="static/1aa-logical-neutron-flow.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/1aa-logical-neutron-flow.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<para>To isolate sensitive data communication between the
OpenStack Networking services and other OpenStack core
services, configure these communication channels to only allow
communication over an isolated management network.</para>
<section xml:id="ch033_securing-neutron-services-idp55312">
<title>OpenStack Networking service configuration</title>
<section xml:id="ch033_securing-neutron-services-idp56016">
<title>Restrict bind address of the API server: neutron-server</title>
<para>
To restrict the interface or IP address on which the
OpenStack Networking API service binds a network socket for
incoming client connections, specify the
<option>bind_host</option> and <option>bind_port</option> in
the <filename>neutron.conf</filename> file as shown:</para>
<programlisting language="ini">
# Address to bind the API server
bind_host = <replaceable>IP ADDRESS OF SERVER</replaceable>
@ -47,10 +67,23 @@ bind_host = <replaceable>IP ADDRESS OF SERVER</replaceable>
bind_port = 9696</programlisting>
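<para>
As a sanity check, the effective bind settings can be read back with
a standard INI parser. The following sketch uses a placeholder
configuration fragment with a stand-in management address; in
practice you would read the deployed file instead.</para>

```python
import configparser

# Placeholder neutron.conf fragment; 10.0.0.10 stands in for the
# management network address of the API server.
CONF_TEXT = """
[DEFAULT]
bind_host = 10.0.0.10
bind_port = 9696
"""

cfg = configparser.ConfigParser()
cfg.read_string(CONF_TEXT)
# Confirm the API server is bound to the intended address and port.
print(cfg["DEFAULT"]["bind_host"], cfg["DEFAULT"]["bind_port"])
```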
</section>
<section xml:id="ch033_securing-neutron-services-idp58320">
<title>Restrict DB and RPC communication of the OpenStack
Networking services</title>
<para>
Various components of the OpenStack Networking services use
either the messaging queue or database connections to
communicate with other components in OpenStack
Networking.</para>
<para>
It is recommended that you follow the guidelines provided in
the Database Authentication and Access Control chapter in
the Database section for all components that require direct
DB connections.</para>
<para>
It is recommended that you follow the guidelines provided in
the Queue Authentication and Access Control chapter in the
Messaging section for all components that require RPC
communication.</para>
</section>
</section>
</chapter>

@ -46,7 +46,17 @@
</section>
<section xml:id="ch034_tenant-secure-networking-best-practices-idp51440">
<title>Security groups</title>
<para>
The OpenStack Networking Service provides security group
functionality using a mechanism that is more flexible and
powerful than the security group capabilities built into
OpenStack Compute. Thus, when using OpenStack Networking,
<filename>nova.conf</filename> should always disable built-in
security groups and proxy all security group calls to the
OpenStack Networking API. Failure to do so will result in
conflicting security policies being simultaneously applied by
both services. To proxy security groups to OpenStack
Networking, use the following configuration values:</para>
<itemizedlist><listitem>
<para><option>firewall_driver</option> must be set to
<literal>nova.virt.firewall.NoopFirewallDriver</literal> so
@ -95,10 +105,11 @@ quota_security_group_rule = 100
# default driver to use for quota checks
quota_driver = neutron.quota.ConfDriver</programlisting>
<para>
OpenStack Networking also supports per-tenant quota limits
through a quota extension API. To enable per-tenant quotas,
you must set the <literal>quota_driver</literal> option in
<filename>neutron.conf</filename>.</para>
<programlisting language="ini">quota_driver = neutron.db.quota_db.DbQuotaDriver</programlisting>
</section>
</chapter>

@ -132,7 +132,13 @@ qpid_password=password</programlisting>
<section xml:id="ch038_transport-security-idp70304">
<title>Namespaces</title>
<para>Network namespaces are highly recommended for all services running on OpenStack Compute Hypervisors. This helps prevent the bridging of network traffic between VM guests and the management network.</para>
<para>
When using ZeroMQ messaging, each host must run at least one
ZeroMQ message receiver to receive messages from the network
and forward messages to local processes through IPC. It is
possible and advisable to run an independent message
receiver per project within an IPC namespace, along with
other services within the same project.</para>
</section>
<section xml:id="ch038_transport-security-idp72736">
<title>Network policy</title>

@ -65,10 +65,13 @@
</section>
<section xml:id="ch042_database-overview-idp90096">
<title>Configuration example #2: (PostgreSQL)</title>
<para>In file <filename>pg_hba.conf</filename>:</para>
<programlisting>hostssl dbname compute01 hostname md5</programlisting>
<para>Note that this entry only adds the ability to communicate over SSL and is non-exclusive. Other access methods that may allow unencrypted transport should be disabled so that SSL is the sole access method.</para>
<para>
The <literal>md5</literal> parameter defines the
authentication method as a hashed password. We provide a
secure authentication example in the section below.</para>
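<para>
A sketch of making SSL the sole access method, as recommended
above, follows. The database name, user, and network ranges are
placeholders; the final <literal>reject</literal> rule explicitly
turns away any remaining unencrypted connections.</para>

```
# pg_hba.conf sketch: accept only SSL connections from the
# management network, and reject everything else.
hostssl  dbname  compute01  10.0.0.0/24  md5
host     all     all        0.0.0.0/0    reject
```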
</section>
</section>
<section xml:id="ch042_database-overview-idp93120">

@ -6,34 +6,61 @@
xml:id="ch043_database-transport-security">
<?dbhtml stop-chunking?>
<title>Database transport security</title>
<para>
This chapter covers issues related to network communications to
and from the database server. This includes IP address bindings
and encrypting network traffic with SSL.</para>
<section xml:id="ch043_database-transport-security-idp38176">
<title>Database server IP address binding</title>
<para>
To isolate sensitive database communications between the
services and the database, we strongly recommend that the
database server(s) be configured to only allow communications
to and from the database over an isolated management
network. This is achieved by restricting the interface or IP
address on which the database server binds a network socket
for incoming client connections.</para>
<section xml:id="ch043_database-transport-security-idp39696">
<title>Restricting bind address for MySQL</title>
<para>In <filename>my.cnf</filename>:</para>
<programlisting>[mysqld]
...
bind-address &lt;ip address or hostname of management network interface&gt;</programlisting>
</section>
<section xml:id="ch043_database-transport-security-idp41568">
<title>Restricting listen address for PostgreSQL</title>
<para>In <filename>postgresql.conf</filename>:</para>
<programlisting>listen_addresses = &lt;ip address or hostname of management network interface&gt;</programlisting>
</section>
</section>
<section xml:id="ch043_database-transport-security-idp43520">
<title>Database transport</title>
<para>
In addition to restricting database communications to the
management network, we also strongly recommend that the cloud
administrator configure their database backend to require
SSL. Using SSL for the database client connections protects
the communications from tampering and eavesdropping. As will
be discussed in the next section, using SSL also provides the
framework for doing database user authentication through X.509
certificates (commonly referred to as PKI). Below is guidance
on how SSL is typically configured for the two popular
database backends MySQL and PostgreSQL.</para>
<note>
<para>
When installing the certificate and key files, ensure that
the file permissions are restricted, for example
<command>chmod 0600</command>, and the ownership is
restricted to the database daemon user to prevent
unauthorized access by other processes and users on the
database server.</para>
</note>
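<para>
The permission tightening described in the note can be sketched as
follows; a temporary file stands in for a real server key, which a
deployment would replace with the actual certificate and key
paths.</para>

```python
import os
import stat
import tempfile

# Create a stand-in for the server key file.
fd, key_path = tempfile.mkstemp()
os.close(fd)

# Restrict the key to owner read/write only, per the recommendation
# (equivalent to chmod 0600).
os.chmod(key_path, 0o600)

mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(oct(mode))  # prints 0o600
os.remove(key_path)
```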
</section>
<section xml:id="ch043_database-transport-security-idp47184">
<title>MySQL SSL configuration</title>
<para>The following lines should be added in the system-wide
MySQL configuration file:</para>
<para>In <filename>my.cnf</filename>:</para>
<programlisting>[mysqld]
...
ssl-ca=/path/to/ssl/cacert.pem
@ -45,7 +72,10 @@ ssl-key=/path/to/ssl/server-key.pem</programlisting>
</section>
<section xml:id="ch043_database-transport-security-idp50288">
<title>PostgreSQL SSL configuration</title>
<para>
The following lines should be added in the system-wide
PostgreSQL configuration file,
<filename>postgresql.conf</filename>.</para>
<programlisting>ssl = true</programlisting>
<para>Optionally, if you wish to restrict the set of SSL ciphers used for the encrypted connection, see <link xlink:href="http://www.openssl.org/docs/apps/ciphers.html">http://www.openssl.org/docs/apps/ciphers.html</link> for a list of ciphers and the syntax for specifying the cipher string:</para>
<programlisting>ssl_ciphers = 'cipher:list'</programlisting>

@ -30,7 +30,7 @@
</listitem>
<listitem>
<para>Track the destruction of both the tenant data and
metadata through ticketing in a CMDB.</para>
</listitem>
<listitem><para>For Volume storage:</para>
<itemizedlist>
@ -63,7 +63,7 @@
</listitem>
<listitem>
<para>Track the destruction of both the customer data and
metadata through ticketing in a CMDB.</para>
</listitem>
<listitem>
<para>For Volume storage:</para>

@ -121,7 +121,7 @@
<para>Active developer and user communities</para>
</listitem>
<listitem>
<para>Timeliness and availability of updates</para>
</listitem>
<listitem>
<para>Incidence response</para>
@ -351,83 +351,80 @@
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Algorithm</th>
<th>Key length</th>
<th>Intended purpose</th>
<th>Security function</th>
<th>Implementation standard</th>
</tr>
</thead>
<tbody>
<tr>
<td><para><emphasis role="bold"
>Algorithm</emphasis></para></td>
<td><para><emphasis role="bold">Key
Length</emphasis></para></td>
<td><para><emphasis role="bold">Intended
Purpose</emphasis></para></td>
<td><para><emphasis role="bold">Security
Function</emphasis></para></td>
<td><para><emphasis role="bold">Implementation
Standard</emphasis></para></td>
<td>AES</td>
<td>128, 192, or 256 bits</td>
<td>Encryption / decryption</td>
<td>Protected data transfer, protection for data at
rest</td>
<td>RFC 4253</td>
</tr>
<tr>
<td><para>AES</para></td>
<td><para>128, 192, or 256 bits</para></td>
<td><para>Encryption / Decryption</para></td>
<td><para>Protected Data Transfer, Protection for Data at
Rest</para></td>
<td><para>RFC 4253</para></td>
<td>TDES</td>
<td>168 bits</td>
<td>Encryption / decryption</td>
<td>Protected data transfer</td>
<td>RFC 4253</td>
</tr>
<tr>
<td><para>TDES</para></td>
<td><para>168 bits</para></td>
<td><para>Encryption / Decryption</para></td>
<td><para>Protected Data Transfer</para></td>
<td><para>RFC 4253</para></td>
<td>RSA</td>
<td>1024, 2048, or 3072 bits</td>
<td>Authentication, key exchange</td>
<td>Identification and authentication, protected
data transfer</td>
<td>U.S. NIST FIPS PUB 186-3</td>
</tr>
<tr>
<td><para>RSA</para></td>
<td><para>1024, 2048, or 3072 bits</para></td>
<td><para>Authentication, Key Exchange</para></td>
<td><para>Identification and Authentication, Protected
Data Transfer</para></td>
<td><para>U.S. NIST FIPS PUB 186-3</para></td>
<td>DSA</td>
<td>L=1024, N=160 bits</td>
<td>Authentication, key exchange</td>
<td>Identification and authentication, protected
data transfer</td>
<td>U.S. NIST FIPS PUB 186-3</td>
</tr>
<tr>
<td><para>DSA</para></td>
<td><para>L=1024, N=160 bits</para></td>
<td><para>Authentication, Key Exchange</para></td>
<td><para>Identification and Authentication, Protected
Data Transfer</para></td>
<td><para>U.S. NIST FIPS PUB 186-3</para></td>
</tr>
<tr>
<td><para>Serpent</para></td>
<td><para>128, 192, or 256 bits</para></td>
<td><para>Encryption / Decryption</para></td>
<td><para>Protection of Data at Rest</para></td>
<td><para><link
<td>Serpent</td>
<td>128, 192, or 256 bits</td>
<td>Encryption / decryption</td>
<td>Protection of data at rest</td>
<td><link
xlink:href="http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf"
>http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf</link></para></td>
>http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf</link></td>
</tr>
<tr>
            <td>Twofish</td>
            <td>128, 192, or 256 bit</td>
            <td>Encryption / decryption</td>
            <td>Protection of data at rest</td>
            <td><link
                xlink:href="http://www.schneier.com/paper-twofish-paper.html"
                >http://www.schneier.com/paper-twofish-paper.html</link></td>
</tr>
<tr>
<td>SHA-1</td>
<td>-</td>
<td>Message Digest</td>
<td>Protection of data at rest, protected data
transfer</td>
<td>U.S. NIST FIPS 180-3</td>
</tr>
<tr>
<td>SHA-2 (224, 256, 384, or 512 bits)</td>
<td>-</td>
<td>Message Digest</td>
<td>Protection for data at rest, identification and
authentication</td>
<td>U.S. NIST FIPS 180-3</td>
</tr>
</tbody>
</informaltable>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<td>Description</td>
<td>Technology</td>
<td>Explanation</td>
</tr>
</thead>
<tbody>
<tr>
<td>I/O MMU</td>
<td>VT-d / AMD-Vi</td>
<td>Required for protecting
PCI-passthrough</td>
</tr>
<tr>
<td>Intel Trusted Execution Technology</td>
<td>Intel TXT / SEM</td>
<td>Required for dynamic attestation
services</td>
</tr>
<tr>
            <td><anchor
                xml:id="PCI-SIG_I.2FO_virtualization_.28IOV.29"
                />PCI-SIG I/O virtualization</td>
            <td>SR-IOV, MR-IOV, ATS</td>
            <td>Required to allow secure sharing of PCI Express
              devices</td>
</tr>
<tr>
<td>Network virtualization</td>
<td>VT-c</td>
<td>Improves performance of network I/O on
hypervisors</td>
</tr>
</tbody>
</informaltable>

xml:id="ch052_devices">
<?dbhtml stop-chunking?>
<title>Hardening the virtualization layers</title>
<para>
In the beginning of this chapter we discuss the use of both
physical and virtual hardware by instances, the associated
security risks, and some recommendations for mitigating those
risks. We conclude the chapter with a discussion of sVirt, an
open source project for integrating SELinux mandatory access
controls with the virtualization components.</para>
<section xml:id="ch052_devices-idp479920">
<title>Physical hardware (PCI passthrough)</title>
<para>
Many hypervisors offer a functionality known as PCI
passthrough. This allows an instance to have direct access to
a piece of hardware on the node. For example, this could be
used to allow instances to access video cards offering the
compute unified device architecture (CUDA) for high
performance computation. This feature carries two types of
security risks: direct memory access and hardware
infection.</para>
<para>
Direct memory access (DMA) is a feature that permits certain
hardware devices to access arbitrary physical memory addresses
in the host computer. Often video cards have this
capability. However, an instance should not be given arbitrary
physical memory access because this would give it full view of
both the host system and other instances running on the same
node. Hardware vendors use an input/output memory management
unit (IOMMU) to manage DMA access in these
situations. Therefore, cloud architects should ensure that the
hypervisor is configured to utilize this hardware
feature.</para>
<itemizedlist>
<listitem>
<para>KVM: <link
xlink:href="http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM">How
to assign devices with VT-d in KVM</link></para>
</listitem>
<listitem>
<para>Xen: <link xlink:href="http://wiki.xen.org/wiki/VTd_HowTo">VTd Howto</link>
</para>
</listitem>
</itemizedlist>
<note>
<para>
The IOMMU feature is marketed as VT-d by Intel and AMD-Vi by
AMD.</para>
</note>
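As an illustrative sketch only (the kernel parameters are real, but the boot-loader file locations and regeneration commands vary by distribution), enabling the IOMMU on a Linux KVM host typically means passing a kernel boot parameter:

```shell
# /etc/default/grub -- enable the IOMMU at boot. Pick the line matching
# your CPU vendor; file locations and tooling vary by distribution.
GRUB_CMDLINE_LINUX="intel_iommu=on"    # Intel VT-d
# GRUB_CMDLINE_LINUX="amd_iommu=on"    # AMD-Vi

# Then regenerate the boot configuration and reboot, for example:
#   grub2-mkconfig -o /boot/grub2/grub.cfg   (or update-grub on Debian/Ubuntu)
# After reboot, a populated /sys/class/iommu (or DMAR / AMD-Vi messages
# in the kernel log) indicates the IOMMU is active:
#   ls /sys/class/iommu
```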
<para>
A hardware infection occurs when an instance makes a malicious
modification to the firmware or some other part of a
device. As this device is used by other instances, or even the
host OS, the malicious code can spread into these systems. The
end result is that one instance can run code outside of its
security domain. This is a potential problem in any hardware
sharing scenario. The problem is specific to this scenario
because it is harder to reset the state of physical hardware
than virtual hardware.</para>
<para>
Solutions to the hardware infection problem are domain
specific. The strategy is to identify how an instance can
modify hardware state then determine how to reset any
modifications when the instance is done using the
hardware. For example, one option could be to re-flash the
firmware after use. Clearly there is a need to balance
hardware longevity with security as some firmwares will fail
after a large number of writes. TPM technology, described in
<xref linkend="ch013_node-bootstrapping-idp44768"/>, provides
a solution for detecting unauthorized firmware
changes. Regardless of the strategy selected, it is important
to understand the risks associated with this kind of hardware
sharing so that they can be properly mitigated for a given
deployment scenario.
</para>
<para>
Additionally, due to the risk and complexities associated with
PCI passthrough, it should be disabled by default. If enabled
for a specific need, you will need to have appropriate
processes in place to ensure the hardware is clean before
re-issue.</para>
</section>
<section xml:id="ch052_devices-idp488320">
<title>Virtual hardware (QEMU)</title>
<para>
When running a virtual machine, virtual hardware is a software
layer that provides the hardware interface for the virtual
machine. Instances use this functionality to provide network,
storage, video, and other devices that may be needed. With
this in mind, most instances in your environment will
exclusively use virtual hardware, with a minority that will
require direct hardware access. The major open source
hypervisors use QEMU for this functionality. While QEMU fills
an important need for virtualization platforms, it has proven
to be a very challenging software project to write and
maintain. Much of the functionality in QEMU is implemented
with low-level code that is difficult for most developers to
comprehend. Furthermore, the hardware virtualized by QEMU
includes many legacy devices that have their own set of
quirks. Putting all of this together, QEMU has been the source
of many security problems, including hypervisor breakout
attacks.</para>
<para>
For the reasons stated above, it is important to take
proactive steps to harden QEMU. We recommend three specific
steps: minimizing the code base, using compiler hardening, and
using mandatory access controls, such as sVirt, SELinux, or
AppArmor.</para>
<section xml:id="ch052_devices-idp490976">
<title>Minimizing the QEMU code base</title>
<para>
One classic security principle is to remove any unused
components from your system. QEMU provides support for many
different virtual hardware devices. However, only a small
number of devices are needed for a given instance. Most
instances will use the virtio devices. However, some legacy
instances will need access to specific hardware, which can
be specified using glance metadata:</para>
<screen><prompt>$</prompt> <userinput>glance image-update \
--property hw_disk_bus=ide \
--property hw_cdrom_bus=ide \
--property hw_vif_model=e1000 \
f16-x86_64-openstack-sda</userinput></screen>
<para>
A cloud architect should decide what devices to make
available to cloud users. Anything that is not needed should
be removed from QEMU. This step requires recompiling QEMU
after modifying the options passed to the QEMU configure
script. For a complete list of up-to-date options simply run
<command>./configure --help</command> from within the QEMU
source directory. Decide what is needed for your deployment,
and disable the remaining options.</para>
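A build-configuration sketch along these lines trims the emulator down to a single target and drops graphical front ends cloud nodes do not need. The flag names shown are illustrative and change between QEMU releases, so confirm each one against the output of ./configure --help before relying on it:

```shell
# Illustrative QEMU build configuration (run from the QEMU source tree).
# Flag names vary across QEMU releases; verify with ./configure --help.
./configure --target-list=x86_64-softmmu \
            --disable-sdl \
            --disable-gtk \
            --disable-vnc
make
```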
</section>
<section xml:id="ch052_devices-idp494336">
<title>Compiler hardening</title>
        <para>
          The next step is to harden QEMU using compiler hardening
          options. Modern compilers provide a variety of compile time
          options to improve the security of the resulting
          binaries. These features, which we will describe in more
          detail below, include relocation read-only (RELRO), stack
          canaries, never execute (NX), position independent
          executable (PIE), and address space layout randomization
          (ASLR).</para>
        <para>
          Many modern linux distributions already build QEMU with
          compiler hardening enabled, so you may want to verify your
          existing executable before proceeding with the information
          below. One tool that can assist you with this verification
          is called <link
          xlink:href="http://www.trapkit.de/tools/checksec.html"><literal>checksec.sh</literal></link>.</para>
        <variablelist>
          <varlistentry>
            <term>RELocation Read-Only (RELRO)</term>
            <listitem>
              <para>
                Hardens the data sections of an executable. Both full
                and partial RELRO modes are supported by gcc. For QEMU
                full RELRO is your best choice. This will make the
                global offset table read-only and place various
                internal data sections before the program data section
                in the resulting executable.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>Stack canaries</term>
            <listitem>
              <para>
                Places values on the stack and verifies their presence
                to help prevent buffer overflow attacks.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>Never eXecute (NX)</term>
            <listitem>
              <para>
                Also known as Data Execution Prevention (DEP), ensures
                that data sections of the executable can not be
                executed.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>Position Independent Executable (PIE)</term>
            <listitem>
              <para>
                Produces a position independent executable, which is
                necessary for ASLR.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>Address Space Layout Randomization (ASLR)</term>
            <listitem>
              <para>
                This ensures that placement of both code and data
                regions will be randomized. Enabled by the kernel (all
                modern linux kernels support ASLR), when the executable
                is built with PIE.</para>
            </listitem>
          </varlistentry>
        </variablelist>
<para>
Putting this all together, and adding in some additional
useful protections, we recommend the following compiler
options for GCC when compiling QEMU:</para>
<programlisting>CFLAGS="-arch x86_64 -fstack-protector-all -Wstack-protector \
--param ssp-buffer-size=4 -pie -fPIE -ftrapv -D_FORTIFY_SOURCE=2 -O2 \
-Wl,-z,relro,-z,now"</programlisting>
<para>
We recommend testing your QEMU executable file after it is
compiled to ensure that the compiler hardening worked
properly.</para>
<para>
Most cloud deployments will not want to build software such
as QEMU by hand. It is better to use packaging to ensure
that the process is repeatable and to ensure that the end
result can be easily deployed throughout the cloud. The
references below provide some additional details on applying
compiler hardening options to existing packages.</para>
<itemizedlist>
<listitem>
<para>DEB packages: <link xlink:href="http://wiki.debian.org/HardeningWalkthrough">Hardening Walkthrough</link></para>
</listitem>
<listitem>
<para>RPM packages: <link xlink:href="http://fedoraproject.org/wiki/How_to_create_an_RPM_package">How to create an RPM package</link></para>
</listitem>
</itemizedlist>
</section>
<section xml:id="ch052_devices-idp508032">
<title>Mandatory access controls</title>
<para>
Compiler hardening makes it more difficult to attack the
QEMU process. However, if an attacker does succeed, we would
like to limit the impact of the attack. Mandatory access
controls accomplish this by restricting the privileges on
QEMU process to only what is needed. This can be
accomplished using sVirt / SELinux or AppArmor. When using
sVirt, SELinux is configured to run every QEMU process under
a different security context. AppArmor can be configured to
provide similar functionality. We provide more details on
sVirt in the instance isolation section below.</para>
</section>
</section>
<section xml:id="ch052_devices-idp510512">
<title>sVirt: SELinux and virtualization</title>
<para>
With unique kernel-level architecture and National Security
Agency (NSA) developed security mechanisms, KVM provides
foundational isolation technologies for multi tenancy. With
developmental origins dating back to 2002, the Secure
Virtualization (sVirt) technology is the application of
SELinux against modern day virtualization. SELinux, which was
designed to apply separation control based upon labels, has
been extended to provide isolation between virtual machine
processes, devices, data files and system processes acting
upon their behalf.</para>
<para>
OpenStack's sVirt implementation aspires to protect hypervisor
hosts and virtual machines against two primary threat
vectors:</para>
<itemizedlist><listitem>
<para><emphasis role="bold">Hypervisor threats</emphasis> A
compromised application running within a virtual machine
attacks the hypervisor to access underlying resources. For
example, the host OS, applications, or devices within the
physical machine. This is a threat vector unique to
virtualization and represents considerable risk as the
underlying real machine can be compromised due to
vulnerability in a single virtual application.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Virtual Machine (multi-tenant)
threats</emphasis> A compromised application running
within a VM attacks the hypervisor to access/control
another virtual machine and its resources. This is a
threat vector unique to virtualization and represents
considerable risk as a multitude of virtual machine file
images could be compromised due to vulnerability in a
single application. This virtual network attack is a
major concern as the administrative techniques for
protecting real networks do not directly apply to the
virtual environment.</para>
</listitem>
</itemizedlist>
<para>
Each KVM-based virtual machine is a process which is labeled
by SELinux, effectively establishing a security boundary
around each virtual machine. This security boundary is
monitored and enforced by the Linux kernel, restricting the
virtual machine's access to resources outside of its boundary
such as host machine data files or other VMs.</para>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="583" contentwidth="1135"
fileref="static/sVirt Diagram 1.png" format="PNG"
scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/sVirt Diagram 1.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<para>
      As shown above, sVirt isolation is provided regardless of the
      guest operating system running inside the virtual machine.
    </para>
<section xml:id="ch052_devices-idp523744">
<title>Labels and categories</title>
<para>
KVM-based virtual machine instances are labelled with their
own SELinux data type, known as svirt_image_t. Kernel level
protections prevent unauthorized system processes, such as
malware, from manipulating the virtual machine image files
on disk. When virtual machines are powered off, images are
stored as svirt_image_t as shown below:</para>
<programlisting>system_u:object_r:svirt_image_t:SystemLow image1
system_u:object_r:svirt_image_t:SystemLow image2
system_u:object_r:svirt_image_t:SystemLow image3
system_u:object_r:svirt_image_t:SystemLow image4</programlisting>
<para>
The <literal>svirt_image_t</literal> label uniquely
identifies image files on disk, allowing for the SELinux
policy to restrict access. When a KVM-based Compute image is
powered on, sVirt appends a random numerical identifier to
the image. sVirt is technically capable of assigning
numerical identifiers to 524,288 virtual machines per
hypervisor node, however OpenStack deployments are highly
unlikely to encounter this limitation.</para>
<para>This example shows the sVirt category identifier:</para>
<programlisting>system_u:object_r:svirt_image_t:s0:c87,c520 image1
system_u:object_r:svirt_image_t:s0:c419,c172 image2</programlisting>
</section>
<section xml:id="ch052_devices-idp527632">
<title>Booleans</title>
<para>
To ease the administrative burden of managing SELinux, many
enterprise Linux platforms utilize SELinux Booleans to
quickly change the security posture of sVirt.</para>
<para>
Red Hat Enterprise Linux-based KVM deployments utilize the
following sVirt booleans:</para>
<informaltable rules="all" width="80%"><colgroup><col/><col/></colgroup>
<thead>
<tr>
<td><para><emphasis role="bold">sVirt SELinux Boolean</emphasis></para></td>
<td><para><emphasis role="bold">Description</emphasis></para></td>
</tr>
</thead>
<tbody>
<tr>
<td><para>virt_use_common</para></td>
<td><para>Allow virt to use serial/parallel communication ports.</para></td>