Merge "Remove old references in prep for Icehouse"

This commit is contained in:
Jenkins 2014-04-12 17:13:00 +00:00 committed by Gerrit Code Review
commit bb16cecfd1
27 changed files with 77 additions and 119 deletions

View File

@@ -22,7 +22,7 @@
<year>2013</year>
<holder>OpenStack Foundation</holder>
</copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
<productname>OpenStack</productname>
<pubdate/>
<legalnotice role="apache2">

View File

@@ -8,7 +8,7 @@
Service: "core properties," which are defined by the system, and
"additional properties," which are arbitrary key/value pairs that
can be set on an image.</para>
-<para>With the Havana release, any such property can be protected
+<para>Any such property can be protected
through configuration. When you put protections on a property, it
limits the users who can perform CRUD operations on the property
based on their user role. The use case is to enable the cloud
@@ -23,4 +23,4 @@
<para>Property protection can be set in
<filename>/etc/glance/property-protections.conf</filename>, using
roles found in <filename>policy.json</filename>.</para>
-</section>
+</section>
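
For context on the property-protections change above: a minimal sketch of a roles-based /etc/glance/property-protections.conf entry (the property regex and role names here are illustrative, not part of this commit):

[^x_owner_.*]
create = admin
read = admin,member
update = admin
delete = admin

Each section header is a regular expression matched against the property name; each CRUD operation lists the roles, taken from policy.json, that may perform it.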

View File

@@ -419,7 +419,7 @@ external_network_bridge = br-ex-2</computeroutput></screen>
dhcp-agents with load being split across those
agents, but the tight coupling of that scheduling
with the location of the VM is not supported in
-Grizzly. The Havana release is expected to include
+Icehouse. The Juno release is expected to include
an exact replacement for the --multi_host flag in
nova-network.</para>
</listitem>
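
A sketch of the multi-agent scheduling described above, using the neutron client of this era (the agent UUID and network name are placeholders):

$ neutron agent-list
$ neutron dhcp-agent-network-add <dhcp-agent-uuid> <network-name>
$ neutron dhcp-agent-list-hosting-net <network-name>

The last command confirms which DHCP agents now serve the network.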

View File

@@ -313,8 +313,8 @@
<td><emphasis role="bold">NEC OpenFlow
Plug-in</emphasis></td>
<td><link
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link></td>
xlink:href="https://wiki.openstack.org/wiki/Neutron/NEC_OpenFlow_Plugin"
>https://wiki.openstack.org/wiki/Neutron/NEC_OpenFlow_Plugin</link></td>
</tr>
<tr>
<td><emphasis role="bold">Open vSwitch

View File

@@ -3,9 +3,8 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0">
<title>Configure a multiple-storage back-end</title>
-<para>This section presents the multi back-end storage feature
-introduced with the Grizzly release. Multi back-end allows the
-creation of several back-end storage solutions serving the
+<para>With multiple storage back-ends configured, you can create
+several back-end storage solutions serving the
same OpenStack Compute configuration. Basically, multi
back-end launches one <systemitem class="service"
>cinder-volume</systemitem> for each back-end.</para>
@@ -95,15 +94,6 @@ volume_backend_name=LVM_iSCSI_b</programlisting>
pick the best back-end to handle the request, and
explicitly creates volumes on specific back-ends through
the use of volume types.</para>
-<note>
-<para>To enable the filter scheduler, add this line to the
-<filename>cinder.conf</filename> configuration
-file:</para>
-<programlisting language="ini">scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler</programlisting>
-<para>While the Block Storage Scheduler defaults to
-<option>filter_scheduler</option> in Grizzly, this
-setting is not required.</para>
-</note>
<!-- TODO: when filter/weighing scheduler documentation will be up, a ref should be added here -->
</simplesect>
<simplesect>
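
To make the multi back-end hunk above concrete, a minimal sketch of a two-back-end cinder.conf plus the volume-type wiring that routes a request to a named back-end (group and type names are illustrative; only volume_backend_name=LVM_iSCSI_b appears in the hunk):

enabled_backends=lvmdriver-a,lvmdriver-b

[lvmdriver-a]
volume_group=cinder-volumes-a
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

[lvmdriver-b]
volume_group=cinder-volumes-b
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI_b

$ cinder type-create lvm_b
$ cinder type-key lvm_b set volume_backend_name=LVM_iSCSI_b
$ cinder create --volume-type lvm_b --display-name test-vol 1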

View File

@@ -4,7 +4,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Migrate volumes</title>
-<para>The Havana release of OpenStack introduces the ability to
+<para>OpenStack has the ability to
migrate volumes between back-ends. Migrating a volume
transparently moves its data from the current back-end for the
volume to a new one. This is an administrator function, and
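
A sketch of the migration workflow this paragraph describes (the volume ID and destination host are placeholders; with multiple back-ends the host takes the host@backend form):

$ cinder migrate <volume-id> <destination-host>
$ cinder show <volume-id>   # the os-vol-mig-status-attr:migstat field tracks progress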

View File

@@ -267,12 +267,12 @@
<para>Be sure to include the software and package
versions that you are using, especially if you are
using a development branch, such as,
<literal>"Icehouse release" vs git commit
<literal>"Juno release" vs git commit
bc79c3ecc55929bac585d04a03475b72e06a3208</literal>.</para>
</listitem>
<listitem>
<para>Any deployment specific information is helpful,
-such as Ubuntu 12.04 or multi-node install.</para>
+such as Ubuntu 14.04 or multi-node install.</para>
</listitem>
</itemizedlist>
<para>The Launchpad Bugs areas are available here:</para>

View File

@@ -74,14 +74,14 @@
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
-| |- devstack-grizzly   |                                        |
+| |- devstack           |                                        |
| | |- nova-conductor   | enabled :-) 2013-07-25T16:50:44.000000 |
| | |- nova-consoleauth | enabled :-) 2013-07-25T16:50:44.000000 |
| | |- nova-scheduler   | enabled :-) 2013-07-25T16:50:44.000000 |
| | |- nova-cert        | enabled :-) 2013-07-25T16:50:44.000000 |
| | |- nova-network     | enabled :-) 2013-07-25T16:50:44.000000 |
| nova                  | available                              |
-| |- devstack-grizzly   |                                        |
+| |- devstack           |                                        |
| | |- nova-compute     | enabled :-) 2013-07-25T16:50:39.000000 |
+-----------------------+----------------------------------------+</computeroutput></screen>
</step>
@@ -155,7 +155,7 @@
| display_name | my-new-volume |
| id | 573e024d-5235-49ce-8332-be1576d323f8 |
| metadata | {} |
-| os-vol-host-attr:host | devstack-grizzly |
+| os-vol-host-attr:host | devstack |
| os-vol-tenant-attr:tenant_id | 66265572db174a7aa66eba661f58eb9e |
| size | 8 |
| snapshot_id | None |

View File

@@ -30,14 +30,14 @@
| status | ACTIVE |
| updated | 2013-07-18T15:08:20Z |
| OS-EXT-STS:task_state | None |
-| OS-EXT-SRV-ATTR:host | devstack-grizzly |
+| OS-EXT-SRV-ATTR:host | devstack |
| key_name | None |
| image | cirros-0.3.2-x86_64-uec (397e713c-b95b-4186-ad46-6126863ea0a9) |
| private network | 10.0.0.3 |
| hostId | 6e1e69b71ac9b1e6871f91e2dfc9a9b9ceca0f05db68172a81d45385 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000005 |
-| OS-EXT-SRV-ATTR:hypervisor_hostname | devstack-grizzly |
+| OS-EXT-SRV-ATTR:hypervisor_hostname | devstack |
| flavor | m1.small (2) |
| id | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
| security_groups | [{u'name': u'default'}] |

View File

@@ -15,29 +15,29 @@
<title>To show host usage statistics</title>
<step><para>List the hosts and the nova-related services that run on
them:</para><screen><prompt>$</prompt> <userinput>nova host-list</userinput>
-<computeroutput>+------------------+-------------+----------+
-| host_name        | service     | zone     |
-+------------------+-------------+----------+
-| devstack-grizzly | conductor   | internal |
-| devstack-grizzly | compute     | nova     |
-| devstack-grizzly | cert        | internal |
-| devstack-grizzly | network     | internal |
-| devstack-grizzly | scheduler   | internal |
-| devstack-grizzly | consoleauth | internal |
-+------------------+-------------+----------+</computeroutput></screen>
+<computeroutput>+-----------+-------------+----------+
+| host_name | service     | zone     |
++-----------+-------------+----------+
+| devstack  | conductor   | internal |
+| devstack  | compute     | nova     |
+| devstack  | cert        | internal |
+| devstack  | network     | internal |
+| devstack  | scheduler   | internal |
+| devstack  | consoleauth | internal |
++-----------+-------------+----------+</computeroutput></screen>
</step>
<step><para>Get a summary of resource usage of all of the instances running
on the host.</para>
-<screen><prompt>$</prompt> <userinput>nova host-describe <replaceable>devstack-grizzly</replaceable></userinput>
-<computeroutput>+------------------+----------------------------------+-----+-----------+---------+
-| HOST             | PROJECT                          | cpu | memory_mb | disk_gb |
-+------------------+----------------------------------+-----+-----------+---------+
-| devstack-grizzly | (total)                          | 2   | 4003      | 157     |
-| devstack-grizzly | (used_now)                       | 3   | 5120      | 40      |
-| devstack-grizzly | (used_max)                       | 3   | 4608      | 40      |
-| devstack-grizzly | b70d90d65e464582b6b2161cf3603ced | 1   | 512       | 0       |
-| devstack-grizzly | 66265572db174a7aa66eba661f58eb9e | 2   | 4096      | 40      |
-+------------------+----------------------------------+-----+-----------+---------+</computeroutput></screen>
+<screen><prompt>$</prompt> <userinput>nova host-describe <replaceable>devstack</replaceable></userinput>
+<computeroutput>+----------+----------------------------------+-----+-----------+---------+
+| HOST     | PROJECT                          | cpu | memory_mb | disk_gb |
++----------+----------------------------------+-----+-----------+---------+
+| devstack | (total)                          | 2   | 4003      | 157     |
+| devstack | (used_now)                       | 3   | 5120      | 40      |
+| devstack | (used_max)                       | 3   | 4608      | 40      |
+| devstack | b70d90d65e464582b6b2161cf3603ced | 1   | 512       | 0       |
+| devstack | 66265572db174a7aa66eba661f58eb9e | 2   | 4096      | 40      |
++----------+----------------------------------+-----+-----------+---------+</computeroutput></screen>
<para>The <literal>cpu</literal> column shows the sum of
the virtual CPUs for instances running on the host.</para>
<para>The <literal>memory_mb</literal> column shows the

View File

@@ -5,6 +5,7 @@
<title>Fibre Channel support in Compute</title>
<para>Fibre Channel support in OpenStack Compute is remote block
storage attached to compute nodes for VMs.</para>
+<!-- TODO: This below statement needs to be verified for current release-->
<para>In the Grizzly release, Fibre Channel supported only the KVM
hypervisor.</para>
<para>Compute and Block Storage for Fibre Channel do not support automatic

View File

@@ -117,12 +117,6 @@
through a VNC connection. Supports browser-based novnc
clients.</para>
</listitem>
-<listitem>
-<para><systemitem class="service">nova-console</systemitem>
-daemon. Deprecated for use with Grizzly. Instead, the
-<systemitem class="service">nova-xvpnvncproxy</systemitem>
-is used.</para>
-</listitem>
<listitem>
<para><systemitem class="service">nova-xvpnvncproxy</systemitem>
daemon. A proxy for accessing running instances through a VNC

View File

@@ -5,11 +5,12 @@
xml:id="identity-groups">
<title>Groups</title>
<para>A group is a collection of users. Administrators can
-create groups and add users to them. Then, rather than
-assign a role to each user individually, assign a role to
-the group. Every group is in a domain. Groups were
-introduced with version 3 of the Identity API (the Grizzly
-release of Identity Service).</para>
+create groups and add users to them. Then, rather than assign
+a role to each user individually, assign a role to the group.
+Every group is in a domain. Groups were introduced with the
+Identity API v3.</para>
+<!--TODO: eventually remove the last sentence, when v3 is
+commonplace -->
<para>Identity API V3 provides the following group-related
operations:</para>
<itemizedlist>
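
As an illustration of the v3 group operations enumerated in the source file (the endpoint, token, and IDs are placeholders, not part of this commit), creating a group and granting it a role on a domain looks roughly like:

$ curl -s -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"group": {"name": "admins", "domain_id": "default"}}' \
    http://identity:5000/v3/groups
$ curl -s -X PUT -H "X-Auth-Token: $TOKEN" \
    http://identity:5000/v3/domains/default/groups/$GROUP_ID/roles/$ROLE_ID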

View File

@@ -18,7 +18,7 @@
horizontally. You can run multiple instances of <systemitem
class="service">nova-conductor</systemitem> on different
machines as needed for scaling purposes.</para>
-<para>In the Grizzly release, the methods exposed by <systemitem
+<para>The methods exposed by <systemitem
class="service">nova-conductor</systemitem> are relatively
simple methods used by <systemitem class="service"
>nova-compute</systemitem> to offload its database

View File

@@ -134,14 +134,13 @@ libvirt_cpu_model=Nehalem</programlisting>
QEMU)</title>
<para>If your <filename>nova.conf</filename> file contains
<literal>libvirt_cpu_mode=none</literal>, libvirt does not specify a CPU model.
-Instead, the hypervisor chooses the default model. This setting is equivalent to the
-Compute service behavior prior to the Folsom release.</para>
+Instead, the hypervisor chooses the default model.</para>
</simplesect>
</section>
<section xml:id="kvm-guest-agent-support">
<title>Guest agent support</title>
-<para>With the Havana release, support for guest agents was added, allowing optional access
-between compute nods and guests through a socket, using the qmp protocol.</para>
+<para>Use guest agents to enable optional access between compute nodes and guests through a
+socket, using the QMP protocol.</para>
<para>To enable this feature, you must set <literal>hw_qemu_guest_agent=yes</literal> as a
metadata parameter on the image you wish to use to create guest-agent-capable instances
from. You can explicitly disable the feature by setting
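
The image-metadata setting described above can be applied with the glance client of this era; a sketch (the image ID is a placeholder):

$ glance image-update <image-id> --property hw_qemu_guest_agent=yes
$ nova boot --flavor m1.small --image <image-id> agent-capable-vm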

View File

@@ -18,13 +18,13 @@ highly available.
==== Running Neutron DHCP Agent
-Since the Grizzly release, OpenStack Networking service has a scheduler that
+The OpenStack Networking service has a scheduler that
lets you run multiple agents across nodes. Also, the DHCP agent can be natively
highly available. For details, see http://docs.openstack.org/trunk/config-reference/content/app_demo_multi_dhcp_agents.html[OpenStack Configuration Reference].
==== Running Neutron L3 Agent
-Since the Grizzly release, the Neutron L3 Agent is scalable thanks to the scheduler
+The Neutron L3 Agent is scalable thanks to the scheduler
that allows distribution of virtual routers across multiple nodes.
But there is no native feature to make these routers highly available.
At this time, the Active / Passive solution exists to run the Neutron L3
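
For the natively highly available DHCP agent mentioned above, the relevant server-side option of this era is dhcp_agents_per_network in neutron.conf; a sketch:

# neutron.conf
dhcp_agents_per_network = 2

With a value of 2, each network is scheduled onto two DHCP agents, so DHCP service survives the loss of one agent node.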

View File

@@ -83,8 +83,7 @@ http://www.rabbitmq.com/ha.html[More information about High availability in Rabb
==== Configure OpenStack Services to use RabbitMQ
-Since the Grizzly Release, most of the OpenStack components using queuing have supported the feature.
-We have to configure them to use at least two RabbitMQ nodes.
+We have to configure the OpenStack components to use at least two RabbitMQ nodes.
Do this configuration on all services using RabbitMQ:
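
A sketch of that configuration as it appears in a service's configuration file such as nova.conf (host names are placeholders; these are the standard rabbit options of this era):

rabbit_hosts=rabbit1:5672,rabbit2:5672
rabbit_retry_interval=1
rabbit_retry_backoff=2
rabbit_max_retries=0
rabbit_ha_queues=True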

View File

@@ -24,7 +24,7 @@
<year>2013</year>
<holder>OpenStack Foundation</holder>
</copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
<productname>OpenStack</productname>
<pubdate></pubdate>
<legalnotice role="cc-by">

View File

@@ -18,7 +18,7 @@
<year>2013</year>
<holder>OpenStack Foundation</holder>
</copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
<productname>OpenStack</productname>
<pubdate/>
<legalnotice role="cc-by">

View File

@@ -35,6 +35,7 @@
<para>The <systemitem class="service">nova-novncproxy</systemitem>and nova-xvpvncproxy services by default open public-facing ports that are token authenticated.</para>
</listitem>
<listitem>
+<!-- TODO - check if havana had this feature -->
<para>By default, the remote desktop traffic is not encrypted. Havana is expected to have VNC connections secured by Kerberos.</para>
</listitem>
</itemizedlist>

View File

@@ -68,8 +68,13 @@
<section xml:id="ch032_networking-best-practices-idp74544">
<title>Network Services Extensions</title>
<para>Here is a list of known plug-ins provided by the open source community or by SDN companies that work with OpenStack Networking:</para>
-<para>Big Switch Controller Plugin, Brocade Neutron Plugin Brocade Neutron Plugin, Cisco UCS/Nexus Plugin, Cloudbase Hyper-V Plugin, Extreme Networks Plugin, Juniper Networks Neutron Plugin, Linux Bridge Plugin, Mellanox Neutron Plugin, MidoNet Plugin, NEC OpenFlow Plugin, Open vSwitch Plugin, PLUMgrid Plugin, Ruijie Networks Plugin, Ryu OpenFlow Controller Plugin, VMware NSX plugin</para>
-<para>For a more detailed comparison of all features provided by plug-ins as of the Folsom release, see <link xlink:href="http://www.sebastien-han.fr/blog/2012/09/28/quantum-plugin-comparison/">Sebastien Han's comparison</link>.</para>
+<para>Big Switch Controller plug-in, Brocade Neutron plug-in,
+Cisco UCS/Nexus plug-in, Cloudbase Hyper-V plug-in, Extreme
+Networks plug-in, Juniper Networks Neutron plug-in, Linux Bridge
+plug-in, Mellanox Neutron plug-in, MidoNet plug-in, NEC OpenFlow
+plug-in, Open vSwitch plug-in, PLUMgrid plug-in, Ruijie Networks
+plug-in, Ryu OpenFlow Controller plug-in, VMware NSX
+plug-in.</para>
</section>
<section xml:id="ch032_networking-best-practices-idp78032">
<title>Networking Services Limitations</title>

View File

@@ -83,7 +83,6 @@ rabbit_password=password
kombu_ssl_keyfile=/etc/ssl/node-key.pem
kombu_ssl_certfile=/etc/ssl/node-cert.pem
kombu_ssl_ca_certs=/etc/ssl/cacert.pem</screen>
-<para>NOTE: A bug exists in the current version of OpenStack Grizzly where if 'kombu_ssl_version' is currently specified in the configuration file for any of the OpenStack services it will cause the following python traceback error: 'TypeError: an integer is required'. The current workaround is to remove 'kombu_ssl_version' from the configuration file. Refer to <link xlink:href="https://bugs.launchpad.net/oslo/+bug/1195431">bug report 1195431</link> for current status.</para>
</section>
<section xml:id="ch038_transport-security-idp62112">
<title>Authentication Configuration Example - Qpid</title>

View File

@@ -27,22 +27,19 @@
<section xml:id="ch055_security-services-for-instances-idp128240">
<title>Scheduling Instances to Nodes</title>
<para>Before an instance is created, a host for the image instantiation must be selected. This selection is performed by the <systemitem class="service">nova-scheduler</systemitem> which determines how to dispatch compute and volume requests.</para>
-<para>The default nova scheduler in Grizzly is the Filter
-Scheduler, although other schedulers exist (see the section
-<link
-xlink:href="http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html">Scheduling</link>
-in the <citetitle>OpenStack Configuration
-Reference</citetitle>). The filter scheduler works in
-collaboration with 'filters' to decide where an instance should
-be started. This process of host selection allows administrators
-to fulfill many different security requirements. Depending on the
-cloud deployment type for example, one could choose to have
-tenant instances reside on the same hosts whenever possible if
-data isolation was a primary concern, conversely one could
-attempt to have instances for a tenant reside on as many
-different hosts as possible for availability or fault tolerance
-reasons. The following diagram demonstrates how the filter
-scheduler works:</para>
+<para>The filter scheduler is the default scheduler for OpenStack Compute,
+although other schedulers exist (see the section <link
+xlink:href="http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html"
+>Scheduling</link> in the <citetitle>OpenStack Configuration
+Reference</citetitle>). The filter scheduler works in collaboration with
+'filters' to decide where an instance should be started. This process of
+host selection allows administrators to fulfill many different security
+requirements. Depending on the cloud deployment type, for example, one
+could choose to have tenant instances reside on the same hosts whenever
+possible if data isolation is a primary concern; conversely, one could
+attempt to have instances for a tenant reside on as many different hosts
+as possible for availability or fault tolerance reasons. The following
+diagram demonstrates how the filter scheduler works:</para>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="400" contentwidth="550" fileref="static/filteringWorkflow1.png" format="PNG" scalefit="1"/>
</imageobject>
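
A sketch of how the scheduling described above is expressed in practice (the option values are illustrative defaults for this era, not taken from the commit): the scheduler and its filter list are set in nova.conf, and the isolation-versus-spreading trade-off is requested per boot with scheduler hints:

scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter

$ nova boot --image <image-id> --flavor m1.small \
    --hint different_host=<instance-uuid> spread-out-instance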

View File

@@ -22,10 +22,10 @@
</affiliation>
</author>
<copyright>
-<year>2013</year>
+<year>2014</year>
<holder>OpenStack Foundation</holder>
</copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
<productname>OpenStack</productname>
<pubdate/>
<legalnotice role="cc-by">

View File

@@ -30,12 +30,6 @@
+----------+--------------+----------+-------------------+
| devstack | nova-compute | disabled | Trial log         |
+----------+--------------+----------+-------------------+</computeroutput></screen>
-<note>
-<para>The Havana release introduces the optional
-<parameter>--reason</parameter> parameter that
-enables you to log a reason for disabling a
-service.</para>
-</note>
</step>
<step>
<para>Check the service list:</para>
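
For reference, the "Trial log" entry in the table above is produced by a command along these lines (the host and binary come from the sample output; --reason is the flag the removed note described):

$ nova service-disable devstack nova-compute --reason "Trial log"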

View File

@@ -37,26 +37,4 @@
+----+---------------------+</computeroutput></screen>
</step>
</procedure>
-<note>
-<para>
-<itemizedlist>
-<listitem>
-<para>Beginning in the Folsom release, the
-<literal>--availability_zone
-<replaceable>zone</replaceable>:<replaceable>host</replaceable></literal>
-parameter replaces the
-<literal>--force_hosts</literal> scheduler
-hint parameter.</para>
-</listitem>
-<listitem>
-<para>Beginning in the Grizzly release, you can
-enable the
-<literal>create:forced_host</literal>
-option in the <filename>policy.json</filename>
-file to specify which roles can launch an
-instance on a specified host.</para>
-</listitem>
-</itemizedlist>
-</para>
-</note>
</section>
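
For reference, the two features covered by the removed note are exercised like this (the image name and host are placeholders; the policy rule value is an illustrative default):

$ nova boot --image <image-id> --flavor m1.tiny \
    --availability-zone nova:devstack forced-host-instance

# policy.json
"compute:create:forced_host": "is_admin:True"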

View File

@@ -22,10 +22,10 @@
</affiliation>
</author>
<copyright>
-<year>2013</year>
+<year>2014</year>
<holder>OpenStack Foundation</holder>
</copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
<productname>OpenStack</productname>
<pubdate/>
<legalnotice role="cc-by">