Merge "Removing references of out-of-date versions of OpenStack"

This commit is contained in:
Jenkins
2014-07-22 22:35:22 +00:00
committed by Gerrit Code Review
5 changed files with 19 additions and 10 deletions

View File

@@ -107,7 +107,6 @@
<title>References</title>
<para><link xlink:href="http://docs.openstack.org/trunk/config-reference/content/spice-console.html">SPICE Console</link></para>
<para><link xlink:href="https://bugzilla.redhat.com/show_bug.cgi?id=913607">Red Hat bug 913607</link></para>
-<para><link xlink:href="http://openstack.redhat.com/forum/discussion/67/resolved-spice-support-in-rdo-grizzly/p1">SPICE support in RDO Grizzly</link></para>
</section>
</section>
</section>

View File

@@ -113,14 +113,12 @@ charset=utf8&amp;ssl_ca=/etc/mysql/cacert.pem&amp;ssl_cert=/etc/mysql/server-cer
</imageobject>
</inlinemediaobject></para>
<para>Unfortunately, this solution complicates the task of more fine-grained access control and the ability to audit data access. Because the <systemitem class="service">nova-conductor</systemitem> service receives requests over RPC, it highlights the importance of improving the security of messaging. Any node with access to the message queue may execute the methods provided by <systemitem class="service">nova-conductor</systemitem>, effectively modifying the database.</para>
-<para>Finally, it should be noted that as of the Grizzly release, gaps exist where <systemitem class="service">nova-conductor</systemitem> is not used throughout OpenStack Compute. Depending on one's configuration, the use of <systemitem class="service">nova-conductor</systemitem> may not allow deployers to avoid the necessity of providing database GRANTs to individual compute host systems.</para>
<para>Note, as <systemitem
class="service">nova-conductor</systemitem> only applies to
OpenStack Compute, direct database access from compute hosts may
still be necessary for the operation of other OpenStack
components such as Telemetry (ceilometer), Networking, and Block
Storage.</para>
-<para>Implementors should weigh the benefits and risks of both configurations before enabling or disabling the <systemitem class="service">nova-conductor</systemitem> service. We are not yet prepared to recommend the use of <systemitem class="service">nova-conductor</systemitem> in the Grizzly release. However, we do believe that this recommendation will change as additional features are added into OpenStack.</para>
<para>To disable the <systemitem class="service">nova-conductor</systemitem>, place the following into your <filename>nova.conf</filename> file (on your compute hosts):</para>
<programlisting language="ini">[conductor]
use_local = true</programlisting>

View File

@@ -91,7 +91,7 @@
</section>
<section xml:id="management-interfaces-idp62384">
<title>References</title>
-<para><link xlink:href="https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly"><citetitle>Grizzly Release Notes</citetitle></link></para>
+<para><link xlink:href="https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse"><citetitle>Icehouse Release Notes</citetitle></link></para>
</section>
</section>
<section xml:id="management-interfaces-idp63760">

View File

@@ -22,7 +22,7 @@
<section xml:id="networking-services-idp50512">
<title>L2 tunneling</title>
<para>Network tunneling encapsulates each tenant/network combination with a unique "tunnel-id" that is used to identify the network traffic belonging to that combination. The tenant's L2 network connectivity is independent of physical locality or underlying network design. By encapsulating traffic inside IP packets, that traffic can cross Layer-3 boundaries, removing the need for preconfigured VLANs and VLAN trunking. Tunneling adds a layer of obfuscation to network data traffic, reducing the visibility of individual tenant traffic from a monitoring point of view.</para>
-<para>OpenStack Networking currently only supports GRE encapsulation with planned future support of VXLAN due in the Havana release.</para>
+<para>OpenStack Networking currently supports both GRE and VXLAN encapsulation.</para>
<para>The choice of technology to provide L2 isolation is dependent upon the scope and size of tenant networks that will be created in your deployment. If your environment has limited VLAN ID availability or will have a large number of L2 networks, it is our recommendation that you utilize tunneling.</para>
</section>
</section>
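The tunneling recommendation above can be illustrated with a minimal ML2 plug-in configuration sketch. The VNI range and local IP below are illustrative assumptions, not values taken from this guide:

<programlisting language="ini">[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
# Illustrative VNI range; size it to the number of tenant networks expected.
vni_ranges = 1:1000

[ovs]
# Address used as this node's VXLAN tunnel endpoint (example value).
local_ip = 192.0.2.10

[agent]
tunnel_types = vxlan</programlisting>

Because the tunnel-id space is much larger than the 4096-entry VLAN ID space, such a configuration avoids the VLAN exhaustion concern described above.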
@@ -45,7 +45,7 @@
</section>
<section xml:id="networking-services-idp62880">
<title>Quality of Service (QoS)</title>
-<para>The ability to set QoS on the virtual interface ports of tenant instances is a current deficiency for OpenStack Networking. The application of QoS for traffic shaping and rate-limiting at the physical network edge device is insufficient due to the dynamic nature of workloads in an OpenStack deployment and can not be leveraged in the traditional way. QoS-as-a-Service (QoSaaS) is currently in development for the OpenStack Networking Havana release as an experimental feature. QoSaaS is planning to provide the following services:</para>
+<para>The ability to set QoS on the virtual interface ports of tenant instances is a current deficiency for OpenStack Networking. The application of QoS for traffic shaping and rate-limiting at the physical network edge device is insufficient due to the dynamic nature of workloads in an OpenStack deployment and cannot be leveraged in the traditional way. QoS-as-a-Service (QoSaaS) is currently in development for the OpenStack Networking Icehouse release as an experimental feature. QoSaaS plans to provide the following services:</para>
<itemizedlist><listitem>
<para>Traffic shaping through DSCP markings</para>
</listitem>
@@ -63,11 +63,11 @@
</section>
<section xml:id="networking-services-idp69408">
<title>Load balancing</title>
-<para>An experimental feature in the Grizzly release of OpenStack Networking is Load-Balancer-as-a-service (LBaaS). The LBaaS API gives early adopters and vendors a chance to build implementations of the technology. The reference implementation however, is still experimental and should likely not be run in a production environment. The current reference implementation is based on HA-Proxy. There are third-party plug-ins in development for extensions in OpenStack Networking to provide extensive L4-L7 functionality for virtual interface ports.</para>
+<para>Another feature in OpenStack Networking is Load-Balancer-as-a-Service (LBaaS). The LBaaS reference implementation is based on HA-Proxy. There are third-party plug-ins in development for extensions in OpenStack Networking to provide extensive L4-L7 functionality for virtual interface ports.</para>
</section>
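As a sketch of how the LBaaS v1 API is typically exercised, the following neutron client commands create a pool, add a back-end member, and attach a virtual IP. The pool name, member address, and <literal>SUBNET_ID</literal> are placeholders, not values from this guide:

<screen><prompt>$</prompt> neutron lb-pool-create --name web-pool --protocol HTTP \
  --lb-method ROUND_ROBIN --subnet-id SUBNET_ID
<prompt>$</prompt> neutron lb-member-create --address 10.0.0.5 --protocol-port 80 web-pool
<prompt>$</prompt> neutron lb-vip-create --name web-vip --protocol HTTP \
  --protocol-port 80 --subnet-id SUBNET_ID web-pool</screen>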
<section xml:id="networking-services-idp71664">
<title>Firewalls</title>
-<para>FW-as-a-Service (FWaaS) is currently in development for the OpenStack Networking Havana release as an experimental feature. FWaaS will address the need to manage and leverage the rich set of security features provided by typical firewall products which are typically far more comprehensive than what is currently provided by security groups. There are third-party plug-ins in development for extensions in OpenStack Networking to support this.</para>
+<para>FW-as-a-Service (FWaaS) is currently in development for the OpenStack Networking Icehouse release as an experimental feature. FWaaS will address the need to manage and leverage the rich set of security features provided by firewall products, which are typically far more comprehensive than what is currently provided by security groups. There are third-party plug-ins in development for extensions in OpenStack Networking to support this.</para>
<para>It is critical during the design of an OpenStack Networking infrastructure to understand the current features and limitations of the network services that are available. Understanding where the boundaries of your virtual and physical networks lie will help you add the required security controls to your environment.</para>
</section>
</section>

View File

@@ -6,7 +6,15 @@
xml:id="state-of-networking">
<?dbhtml stop-chunking?>
<title>State of networking</title>
-<para>OpenStack Networking in the Grizzly release enables the end-user or tenant to define, utilize, and consume networking resources in new ways that had not been possible in previous OpenStack Networking releases. OpenStack Networking provides a tenant-facing API for defining network connectivity and IP addressing for instances in the cloud in addition to orchestrating the network configuration. With the transition to an API-centric networking service, cloud architects and administrators should take into consideration best practices to secure physical and virtual network infrastructure and services.</para>
+<para>OpenStack Networking enables the end-user or tenant to
+define, utilize, and consume networking resources. OpenStack
+Networking provides a tenant-facing API for defining network
+connectivity and IP addressing for instances in the cloud in
+addition to orchestrating the network configuration. With the
+transition to an API-centric networking service, cloud architects
+and administrators should take into consideration best practices
+to secure physical and virtual network infrastructure and services.
+</para>
<para>
OpenStack Networking was designed with a plug-in architecture
that provides extensibility of the API through open source
@@ -17,5 +25,9 @@
third-party products, and what supplemental services are
required to be implemented in the physical
infrastructure.</para>
-<para>This section is a high-level overview of what processes and best practices should be considered when implementing OpenStack Networking. We will talk about the current state of services that are available, what future services will be implemented, and the current limitations in this project.</para>
+<para>This section is a high-level overview of what processes and best
+practices should be considered when implementing OpenStack
+Networking. We will talk about the current state of services that
+are available, what future services will be implemented, and the
+current limitations in this project.</para>
</chapter>