Remove old references in prep for Icehouse

A standard pre-release patch: a sweep for obsolete text and for references
to newly unsupported releases, removing both.

Feel free to update the patch if you find other things.

Review request: this patch touches many files in a small way.
Please try and focus on the small changes, and not the rest of the
file ... since many of them do need an edit  ;)

Change-Id: I383c51acf149bb6553ddfd33cad382f5dd15fe62
Author: Tom Fifield
Date: 2014-04-09 20:35:31 +08:00
Committed by: annegentle
Parent: 937d3cbff0
Commit: e4aac82b07
27 changed files with 77 additions and 119 deletions

View File

@@ -22,7 +22,7 @@
 <year>2013</year>
 <holder>OpenStack Foundation</holder>
 </copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
 <productname>OpenStack</productname>
 <pubdate/>
 <legalnotice role="apache2">

View File

@@ -8,7 +8,7 @@
Service: "core properties," which are defined by the system, and Service: "core properties," which are defined by the system, and
"additional properties," which are arbitrary key/value pairs that "additional properties," which are arbitrary key/value pairs that
can be set on an image.</para> can be set on an image.</para>
<para>With the Havana release, any such property can be protected <para>Any such property can be protected
through configuration. When you put protections on a property, it through configuration. When you put protections on a property, it
limits the users who can perform CRUD operations on the property limits the users who can perform CRUD operations on the property
based on their user role. The use case is to enable the cloud based on their user role. The use case is to enable the cloud
@@ -23,4 +23,4 @@
 <para>Property protection can be set in
 <filename>/etc/glance/property-protections.conf</filename>, using
 roles found in <filename>policy.json</filename>.</para>
 </section>
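
For context, the file referenced above uses an INI-style layout in which each section header is a regular expression matched against property names and each operation lists the roles allowed to perform it. A minimal sketch, with a hypothetical property-name pattern and role names that are not part of this patch:

[^x_owner_.*]
create = admin,member
read = admin,member
update = admin,member
delete = admin,member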

View File

@@ -419,7 +419,7 @@ external_network_bridge = br-ex-2</computeroutput></screen>
 dhcp-agents with load being split across those
 agents, but the tight coupling of that scheduling
 with the location of the VM is not supported in
-Grizzly. The Havana release is expected to include
+Icehouse. The Juno release is expected to include
 an exact replacement for the --multi_host flag in
 nova-network.</para>
 </listitem>

View File

@@ -313,8 +313,8 @@
<td><emphasis role="bold">NEC OpenFlow <td><emphasis role="bold">NEC OpenFlow
Plug-in</emphasis></td> Plug-in</emphasis></td>
<td><link <td><link
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin" xlink:href="https://wiki.openstack.org/wiki/Neutron/NEC_OpenFlow_Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link></td> >https://wiki.openstack.org/wiki/Neutron/NEC_OpenFlow_Plugin</link></td>
</tr> </tr>
<tr> <tr>
<td><emphasis role="bold">Open vSwitch <td><emphasis role="bold">Open vSwitch

View File

@@ -3,9 +3,8 @@
xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0"> xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0">
<title>Configure a multiple-storage back-end</title> <title>Configure a multiple-storage back-end</title>
<para>This section presents the multi back-end storage feature <para>With multiple storage back-ends configured, you can create
introduced with the Grizzly release. Multi back-end allows the several back-end storage solutions serving the
creation of several back-end storage solutions serving the
same OpenStack Compute configuration. Basically, multi same OpenStack Compute configuration. Basically, multi
back-end launches one <systemitem class="service" back-end launches one <systemitem class="service"
>cinder-volume</systemitem> for each back-end.</para> >cinder-volume</systemitem> for each back-end.</para>
@@ -95,15 +94,6 @@ volume_backend_name=LVM_iSCSI_b</programlisting>
 pick the best back-end to handle the request, and
 explicitly creates volumes on specific back-ends through
 the use of volume types.</para>
-<note>
-<para>To enable the filter scheduler, add this line to the
-<filename>cinder.conf</filename> configuration
-file:</para>
-<programlisting language="ini">scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler</programlisting>
-<para>While the Block Storage Scheduler defaults to
-<option>filter_scheduler</option> in Grizzly, this
-setting is not required.</para>
-</note>
 <!-- TODO: when filter/weighing scheduler documentation will be up, a ref should be added here -->
 </simplesect>
 <simplesect>
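
As background for the multi back-end text above, a minimal cinder.conf sketch; the back-end section names and the LVM driver choice are illustrative, while volume_backend_name=LVM_iSCSI_b matches the value shown in the hunk header:

[DEFAULT]
enabled_backends=lvm-a,lvm-b

[lvm-a]
volume_group=cinder-volumes-a
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

[lvm-b]
volume_group=cinder-volumes-b
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI_b

Each enabled back-end section starts its own cinder-volume service, and a volume type whose extra spec sets volume_backend_name routes requests to the matching back-end.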

View File

@@ -4,7 +4,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Migrate volumes</title> <title>Migrate volumes</title>
<para>The Havana release of OpenStack introduces the ability to <para>OpenStack has the ability to
migrate volumes between back-ends. Migrating a volume migrate volumes between back-ends. Migrating a volume
transparently moves its data from the current back-end for the transparently moves its data from the current back-end for the
volume to a new one. This is an administrator function, and volume to a new one. This is an administrator function, and

View File

@@ -267,12 +267,12 @@
 <para>Be sure to include the software and package
 versions that you are using, especially if you are
 using a development branch, such as,
-<literal>"Icehouse release" vs git commit
+<literal>"Juno release" vs git commit
 bc79c3ecc55929bac585d04a03475b72e06a3208</literal>.</para>
 </listitem>
 <listitem>
 <para>Any deployment specific information is helpful,
-such as Ubuntu 12.04 or multi-node install.</para>
+such as Ubuntu 14.04 or multi-node install.</para>
 </listitem>
 </itemizedlist>
 <para>The Launchpad Bugs areas are available here:</para>

View File

@@ -74,14 +74,14 @@
 | Name | Status |
 +-----------------------+----------------------------------------+
 | internal | available |
-| |- devstack-grizzly | |
+| |- devstack | |
 | | |- nova-conductor | enabled :-) 2013-07-25T16:50:44.000000 |
 | | |- nova-consoleauth | enabled :-) 2013-07-25T16:50:44.000000 |
 | | |- nova-scheduler | enabled :-) 2013-07-25T16:50:44.000000 |
 | | |- nova-cert | enabled :-) 2013-07-25T16:50:44.000000 |
 | | |- nova-network | enabled :-) 2013-07-25T16:50:44.000000 |
 | nova | available |
-| |- devstack-grizzly | |
+| |- devstack | |
 | | |- nova-compute | enabled :-) 2013-07-25T16:50:39.000000 |
 +-----------------------+----------------------------------------+</computeroutput></screen>
 </step>
@@ -155,7 +155,7 @@
 | display_name | my-new-volume |
 | id | 573e024d-5235-49ce-8332-be1576d323f8 |
 | metadata | {} |
-| os-vol-host-attr:host | devstack-grizzly |
+| os-vol-host-attr:host | devstack |
 | os-vol-tenant-attr:tenant_id | 66265572db174a7aa66eba661f58eb9e |
 | size | 8 |
 | snapshot_id | None |

View File

@@ -30,14 +30,14 @@
 | status | ACTIVE |
 | updated | 2013-07-18T15:08:20Z |
 | OS-EXT-STS:task_state | None |
-| OS-EXT-SRV-ATTR:host | devstack-grizzly |
+| OS-EXT-SRV-ATTR:host | devstack |
 | key_name | None |
 | image | cirros-0.3.2-x86_64-uec (397e713c-b95b-4186-ad46-6126863ea0a9) |
 | private network | 10.0.0.3 |
 | hostId | 6e1e69b71ac9b1e6871f91e2dfc9a9b9ceca0f05db68172a81d45385 |
 | OS-EXT-STS:vm_state | active |
 | OS-EXT-SRV-ATTR:instance_name | instance-00000005 |
-| OS-EXT-SRV-ATTR:hypervisor_hostname | devstack-grizzly |
+| OS-EXT-SRV-ATTR:hypervisor_hostname | devstack |
 | flavor | m1.small (2) |
 | id | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
 | security_groups | [{u'name': u'default'}] |

View File

@@ -15,29 +15,29 @@
 <title>To show host usage statistics</title>
 <step><para>List the hosts and the nova-related services that run on
 them:</para><screen><prompt>$</prompt> <userinput>nova host-list</userinput>
-<computeroutput>+------------------+-------------+----------+
+<computeroutput>+-----------+-------------+----------+
 | host_name | service | zone |
-+------------------+-------------+----------+
-| devstack-grizzly | conductor | internal |
-| devstack-grizzly | compute | nova |
-| devstack-grizzly | cert | internal |
-| devstack-grizzly | network | internal |
-| devstack-grizzly | scheduler | internal |
-| devstack-grizzly | consoleauth | internal |
-+------------------+-------------+----------+</computeroutput></screen>
++-----------+-------------+----------+
+| devstack | conductor | internal |
+| devstack | compute | nova |
+| devstack | cert | internal |
+| devstack | network | internal |
+| devstack | scheduler | internal |
+| devstack | consoleauth | internal |
++-----------+-------------+----------+</computeroutput></screen>
 </step>
 <step><para>Get a summary of resource usage of all of the instances running
 on the host.</para>
-<screen><prompt>$</prompt> <userinput>nova host-describe <replaceable>devstack-grizzly</replaceable></userinput>
-<computeroutput>+------------------+----------------------------------+-----+-----------+---------+
+<screen><prompt>$</prompt> <userinput>nova host-describe <replaceable>devstack</replaceable></userinput>
+<computeroutput>+-----------+----------------------------------+-----+-----------+---------+
 | HOST | PROJECT | cpu | memory_mb | disk_gb |
-+------------------+----------------------------------+-----+-----------+---------+
-| devstack-grizzly | (total) | 2 | 4003 | 157 |
-| devstack-grizzly | (used_now) | 3 | 5120 | 40 |
-| devstack-grizzly | (used_max) | 3 | 4608 | 40 |
-| devstack-grizzly | b70d90d65e464582b6b2161cf3603ced | 1 | 512 | 0 |
-| devstack-grizzly | 66265572db174a7aa66eba661f58eb9e | 2 | 4096 | 40 |
-+------------------+----------------------------------+-----+-----------+---------+</computeroutput></screen>
++----------+----------------------------------+-----+-----------+---------+
+| devstack | (total) | 2 | 4003 | 157 |
+| devstack | (used_now) | 3 | 5120 | 40 |
+| devstack | (used_max) | 3 | 4608 | 40 |
+| devstack | b70d90d65e464582b6b2161cf3603ced | 1 | 512 | 0 |
+| devstack | 66265572db174a7aa66eba661f58eb9e | 2 | 4096 | 40 |
++----------+----------------------------------+-----+-----------+---------+</computeroutput></screen>
 <para>The <literal>cpu</literal> column shows the sum of
 the virtual CPUs for instances running on the host.</para>
 <para>The <literal>memory_mb</literal> column shows the

View File

@@ -5,6 +5,7 @@
 <title>Fibre Channel support in Compute</title>
 <para>Fibre Channel support in OpenStack Compute is remote block
 storage attached to compute nodes for VMs.</para>
+<!-- TODO: This below statement needs to be verified for current release-->
 <para>In the Grizzly release, Fibre Channel supported only the KVM
 hypervisor.</para>
 <para>Compute and Block Storage for Fibre Channel do not support automatic

View File

@@ -117,12 +117,6 @@
 through a VNC connection. Supports browser-based novnc
 clients.</para>
 </listitem>
-<listitem>
-<para><systemitem class="service">nova-console</systemitem>
-daemon. Deprecated for use with Grizzly. Instead, the
-<systemitem class="service">nova-xvpnvncproxy</systemitem>
-is used.</para>
-</listitem>
 <listitem>
 <para><systemitem class="service">nova-xvpnvncproxy</systemitem>
 daemon. A proxy for accessing running instances through a VNC

View File

@@ -5,11 +5,12 @@
xml:id="identity-groups"> xml:id="identity-groups">
<title>Groups</title> <title>Groups</title>
<para>A group is a collection of users. Administrators can <para>A group is a collection of users. Administrators can
create groups and add users to them. Then, rather than create groups and add users to them. Then, rather than assign
assign a role to each user individually, assign a role to a role to each user individually, assign a role to the group.
the group. Every group is in a domain. Groups were Every group is in a domain. Groups were introduced with the
introduced with version 3 of the Identity API (the Grizzly Identity API v3.</para>
release of Identity Service).</para> <!--TODO: eventually remove the last sentence, when v3 is
commonplace -->
<para>Identity API V3 provides the following group-related <para>Identity API V3 provides the following group-related
operations:</para> operations:</para>
<itemizedlist> <itemizedlist>

View File

@@ -18,7 +18,7 @@
 horizontally. You can run multiple instances of <systemitem
 class="service">nova-conductor</systemitem> on different
 machines as needed for scaling purposes.</para>
-<para>In the Grizzly release, the methods exposed by <systemitem
+<para>The methods exposed by <systemitem
 class="service">nova-conductor</systemitem> are relatively
 simple methods used by <systemitem class="service"
 >nova-compute</systemitem> to offload its database

View File

@@ -134,14 +134,13 @@ libvirt_cpu_model=Nehalem</programlisting>
 QEMU)</title>
 <para>If your <filename>nova.conf</filename> file contains
 <literal>libvirt_cpu_mode=none</literal>, libvirt does not specify a CPU model.
-Instead, the hypervisor chooses the default model. This setting is equivalent to the
-Compute service behavior prior to the Folsom release.</para>
+Instead, the hypervisor chooses the default model.</para>
 </simplesect>
 </section>
 <section xml:id="kvm-guest-agent-support">
 <title>Guest agent support</title>
-<para>With the Havana release, support for guest agents was added, allowing optional access
-between compute nods and guests through a socket, using the qmp protocol.</para>
+<para>Use guest agents to enable optional access between compute nodes and guests through a
+socket, using the QMP protocol.</para>
 <para>To enable this feature, you must set <literal>hw_qemu_guest_agent=yes</literal> as a
 metadata parameter on the image you wish to use to create guest-agent-capable instances
 from. You can explicitly disable the feature by setting

View File

@@ -18,13 +18,13 @@ highly available.
 ==== Running Neutron DHCP Agent
-Since the Grizzly release, OpenStack Networking service has a scheduler that
+OpenStack Networking service has a scheduler that
 lets you run multiple agents across nodes. Also, the DHCP agent can be natively
 highly available. For details, see http://docs.openstack.org/trunk/config-reference/content/app_demo_multi_dhcp_agents.html[OpenStack Configuration Reference].
 ==== Running Neutron L3 Agent
-Since the Grizzly release, the Neutron L3 Agent is scalable thanks to the scheduler
+The Neutron L3 Agent is scalable thanks to the scheduler
 that allows distribution of virtual routers across multiple nodes.
 But there is no native feature to make these routers highly available.
 At this time, the Active / Passive solution exists to run the Neutron L3

View File

@@ -83,8 +83,7 @@ http://www.rabbitmq.com/ha.html[More information about High availability in Rabb
 ==== Configure OpenStack Services to use RabbitMQ
-Since the Grizzly Release, most of the OpenStack components using queuing have supported the feature.
-We have to configure them to use at least two RabbitMQ nodes.
+We have to configure the OpenStack components to use at least two RabbitMQ nodes.
 Do this configuration on all services using RabbitMQ:
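
As an illustration of the configuration this paragraph leads into, pointing a service at two RabbitMQ nodes is typically a pair of options along these lines (the host names are placeholders):

rabbit_hosts=rabbit1:5672,rabbit2:5672
rabbit_ha_queues=true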

View File

@@ -24,7 +24,7 @@
 <year>2013</year>
 <holder>OpenStack Foundation</holder>
 </copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
 <productname>OpenStack</productname>
 <pubdate></pubdate>
 <legalnotice role="cc-by">

View File

@@ -18,7 +18,7 @@
 <year>2013</year>
 <holder>OpenStack Foundation</holder>
 </copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
 <productname>OpenStack</productname>
 <pubdate/>
 <legalnotice role="cc-by">

View File

@@ -35,6 +35,7 @@
<para>The <systemitem class="service">nova-novncproxy</systemitem>and nova-xvpvncproxy services by default open public-facing ports that are token authenticated.</para> <para>The <systemitem class="service">nova-novncproxy</systemitem>and nova-xvpvncproxy services by default open public-facing ports that are token authenticated.</para>
</listitem> </listitem>
<listitem> <listitem>
<!-- TODO - check if havana had this feature -->
<para>By default, the remote desktop traffic is not encrypted. Havana is expected to have VNC connections secured by Kerberos.</para> <para>By default, the remote desktop traffic is not encrypted. Havana is expected to have VNC connections secured by Kerberos.</para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>

View File

@@ -68,8 +68,13 @@
<section xml:id="ch032_networking-best-practices-idp74544"> <section xml:id="ch032_networking-best-practices-idp74544">
<title>Network Services Extensions</title> <title>Network Services Extensions</title>
<para>Here is a list of known plug-ins provided by the open source community or by SDN companies that work with OpenStack Networking:</para> <para>Here is a list of known plug-ins provided by the open source community or by SDN companies that work with OpenStack Networking:</para>
<para>Big Switch Controller Plugin, Brocade Neutron Plugin Brocade Neutron Plugin, Cisco UCS/Nexus Plugin, Cloudbase Hyper-V Plugin, Extreme Networks Plugin, Juniper Networks Neutron Plugin, Linux Bridge Plugin, Mellanox Neutron Plugin, MidoNet Plugin, NEC OpenFlow Plugin, Open vSwitch Plugin, PLUMgrid Plugin, Ruijie Networks Plugin, Ryu OpenFlow Controller Plugin, VMware NSX plugin</para> <para>Big Switch Controller plug-in, Brocade Neutron plug-in
<para>For a more detailed comparison of all features provided by plug-ins as of the Folsom release, see <link xlink:href="http://www.sebastien-han.fr/blog/2012/09/28/quantum-plugin-comparison/">Sebastien Han's comparison</link>.</para> Brocade Neutron plug-in, Cisco UCS/Nexus plug-in, Cloudbase
Hyper-V plug-in, Extreme Networks plug-in, Juniper Networks
Neutron plug-in, Linux Bridge plug-in, Mellanox Neutron plug-in,
MidoNet plug-in, NEC OpenFlow plug-in, Open vSwitch plug-in,
PLUMgrid plug-in, Ruijie Networks plug-in, Ryu OpenFlow
Controller plug-in, VMware NSX plug-in.</para>
</section> </section>
<section xml:id="ch032_networking-best-practices-idp78032"> <section xml:id="ch032_networking-best-practices-idp78032">
<title>Networking Services Limitations</title> <title>Networking Services Limitations</title>

View File

@@ -83,7 +83,6 @@ rabbit_password=password
 kombu_ssl_keyfile=/etc/ssl/node-key.pem
 kombu_ssl_certfile=/etc/ssl/node-cert.pem
 kombu_ssl_ca_certs=/etc/ssl/cacert.pem</screen>
-<para>NOTE: A bug exists in the current version of OpenStack Grizzly where if 'kombu_ssl_version' is currently specified in the configuration file for any of the OpenStack services it will cause the following python traceback error: 'TypeError: an integer is required'. The current workaround is to remove 'kombu_ssl_version' from the configuration file. Refer to <link xlink:href="https://bugs.launchpad.net/oslo/+bug/1195431">bug report 1195431</link> for current status.</para>
 </section>
 <section xml:id="ch038_transport-security-idp62112">
 <title>Authentication Configuration Example - Qpid</title>

View File

@@ -27,22 +27,19 @@
<section xml:id="ch055_security-services-for-instances-idp128240"> <section xml:id="ch055_security-services-for-instances-idp128240">
<title>Scheduling Instances to Nodes</title> <title>Scheduling Instances to Nodes</title>
<para>Before an instance is created, a host for the image instantiation must be selected. This selection is performed by the <systemitem class="service">nova-scheduler</systemitem> which determines how to dispatch compute and volume requests.</para> <para>Before an instance is created, a host for the image instantiation must be selected. This selection is performed by the <systemitem class="service">nova-scheduler</systemitem> which determines how to dispatch compute and volume requests.</para>
<para>The default nova scheduler in Grizzly is the Filter <para>The filter scheduler is the default scheduler for OpenStack Compute,
Scheduler, although other schedulers exist (see the section although other schedulers exist (see the section <link
<link xlink:href="http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html"
xlink:href="http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html">Scheduling</link> >Scheduling</link> in the <citetitle>OpenStack Configuration
in the <citetitle>OpenStack Configuration Reference</citetitle>). The filter scheduler works in collaboration with
Reference</citetitle>). The filter scheduler works in 'filters' to decide where an instance should be started. This process of
collaboration with 'filters' to decide where an instance should host selection allows administrators to fulfill many different security
be started. This process of host selection allows administrators requirements. Depending on the cloud deployment type for example, one
to fulfill many different security requirements. Depending on the could choose to have tenant instances reside on the same hosts whenever
cloud deployment type for example, one could choose to have possible if data isolation was a primary concern, conversely one could
tenant instances reside on the same hosts whenever possible if attempt to have instances for a tenant reside on as many different hosts
data isolation was a primary concern, conversely one could as possible for availability or fault tolerance reasons. The following
attempt to have instances for a tenant reside on as many diagram demonstrates how the filter scheduler works:</para>
different hosts as possible for availability or fault tolerance
reasons. The following diagram demonstrates how the filter
scheduler works:</para>
<para><inlinemediaobject><imageobject role="html"> <para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="400" contentwidth="550" fileref="static/filteringWorkflow1.png" format="PNG" scalefit="1"/> <imagedata contentdepth="400" contentwidth="550" fileref="static/filteringWorkflow1.png" format="PNG" scalefit="1"/>
</imageobject> </imageobject>
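
For reference, the filter scheduler described in the rewritten paragraph is selected and tuned through nova.conf options along these lines; the filter list shown here is an illustrative subset, not the full default set:

scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,DifferentHostFilter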

View File

@@ -22,10 +22,10 @@
 </affiliation>
 </author>
 <copyright>
-<year>2013</year>
+<year>2014</year>
 <holder>OpenStack Foundation</holder>
 </copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
 <productname>OpenStack</productname>
 <pubdate/>
 <legalnotice role="cc-by">

View File

@@ -30,12 +30,6 @@
 +----------+--------------+----------+-------------------+
 | devstack | nova-compute | disabled | Trial log |
 +----------+--------------+----------+-------------------+</computeroutput></screen>
-<note>
-<para>The Havana release introduces the optional
-<parameter>--reason</parameter> parameter that
-enables you to log a reason for disabling a
-service.</para>
-</note>
 </step>
 <step>
 <para>Check the service list:</para>

View File

@@ -37,26 +37,4 @@
 +----+---------------------+</computeroutput></screen>
 </step>
 </procedure>
-<note>
-<para>
-<itemizedlist>
-<listitem>
-<para>Beginning in the Folsom release, the
-<literal>--availability_zone
-<replaceable>zone</replaceable>:<replaceable>host</replaceable></literal>
-parameter replaces the
-<literal>--force_hosts</literal> scheduler
-hint parameter.</para>
-</listitem>
-<listitem>
-<para>Beginning in the Grizzly release, you can
-enable the
-<literal>create:forced_host</literal>
-option in the <filename>policy.json</filename>
-file to specify which roles can launch an
-instance on a specified host.</para>
-</listitem>
-</itemizedlist>
-</para>
-</note>
 </section>

View File

@@ -22,10 +22,10 @@
 </affiliation>
 </author>
 <copyright>
-<year>2013</year>
+<year>2014</year>
 <holder>OpenStack Foundation</holder>
 </copyright>
-<releaseinfo>havana</releaseinfo>
+<releaseinfo>icehouse</releaseinfo>
 <productname>OpenStack</productname>
 <pubdate/>
 <legalnotice role="cc-by">