Update the usage of unscoped extra specs

Although unscoped extra specs are supported in
AggregateInstanceExtraSpecsFilter and ComputeCapabilitiesFilter,
their use is discouraged because it causes conflicts when both
filters are enabled.

Closes-Bug: 1330962
Change-Id: I2b35bf85f0c2e8c8c386bf0c4bed1e6c21cea453
Yunhong Jiang 2014-06-20 11:14:06 -07:00
parent 38ae296fd0
commit 7e7cb4d0ec

<section xml:id="section_compute-scheduler"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0">
<?dbhtml stop-chunking?>
<title>Scheduling</title>
<para>Compute uses the <systemitem class="service"
>nova-scheduler</systemitem> service to determine how to
dispatch compute and volume requests. For example, the
<systemitem class="service">nova-scheduler</systemitem>
service determines on which host a VM should launch. In the
context of filters, the term <firstterm>host</firstterm> means
a physical node that has a <systemitem class="service"
>nova-compute</systemitem> service running on it. You can
configure the scheduler through a variety of options.</para>
<para>Compute is configured with the following default scheduler
options in the <filename>/etc/nova/nova.conf</filename>
file:</para>
<programlisting language="ini">scheduler_driver=nova.scheduler.multi.MultiScheduler
scheduler_driver_task_period=60
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter</programlisting>
<para>By default, the <parameter>scheduler_driver</parameter> is
configured as a filter scheduler, as described in the next
section. In the default configuration, this scheduler
considers hosts that meet all the following criteria:</para>
<itemizedlist>
<listitem>
<para>Have not been attempted for scheduling purposes
                (<literal>RetryFilter</literal>).</para>
        </listitem>
        <listitem>
            <para>Are in the requested availability zone
                (<literal>AvailabilityZoneFilter</literal>).</para>
        </listitem>
        <listitem>
            <para>Have sufficient RAM available
(<literal>RamFilter</literal>).</para>
</listitem>
<listitem>
<para>Can service the request
(<literal>ComputeFilter</literal>).</para>
</listitem>
<listitem>
            <para>Satisfy the extra specs associated with
                the instance type
                (<literal>ComputeCapabilitiesFilter</literal>).</para>
        </listitem>
        <listitem>
            <para>Satisfy any architecture, hypervisor type,
                or virtual machine mode properties specified
                on the instance's image properties
(<literal>ImagePropertiesFilter</literal>).</para>
</listitem>
</itemizedlist>
<para>The scheduler caches its list of available hosts; use the
<option>scheduler_driver_task_period</option> option to
specify how often the list is updated.</para>
<note>
<para>Do not configure <option>service_down_time</option> to
be much smaller than
<option>scheduler_driver_task_period</option>;
otherwise, hosts appear to be dead while the host list is
being cached.</para>
</note>
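        <para>For example, the following illustrative
            <filename>nova.conf</filename> fragment keeps
            <option>service_down_time</option> comfortably larger
            than the caching period; the values are examples only,
            not recommended defaults:</para>
        <programlisting language="ini">scheduler_driver_task_period=60
service_down_time=180</programlisting>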
<para>For information about the volume scheduler, see the Block
Storage section of <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/managing-volumes.html">
<citetitle>OpenStack Cloud Administrator
Guide</citetitle></link>.</para>
<para>The scheduler chooses a new host when an instance is
migrated.</para>
<para>When evacuating instances from a host, the scheduler service
does not pick the next host. Instances are evacuated to the
host explicitly defined by the administrator. For information
                about instance evacuation, see the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/nova_cli_evacuate.html"
>Evacuate instances</link> section of the
<citetitle>OpenStack Cloud Administrator
Guide</citetitle>.</para>
<section xml:id="filter-scheduler">
<title>Filter scheduler</title>
<para>The Filter Scheduler
the filter, or it is rejected. Hosts that are accepted by
the filter are then processed by a different algorithm to
decide which hosts to use for that request, described in
the <link linkend="weights">Weights</link> section.</para>
<figure xml:id="filter-figure">
<title>Filtering</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/filteringWorkflow1.png"
scale="80"/>
</imageobject>
</mediaobject>
</figure>
<para>The <option>scheduler_available_filters</option>
configuration option in <filename>nova.conf</filename>
provides the Compute service with the list of the filters
                that are used by the scheduler. The default setting
                specifies all of the filters that are included with
                the Compute service:</para>
            <programlisting language="ini">scheduler_available_filters=nova.scheduler.filters.all_filters</programlisting>
            <para>This configuration option can be specified
                multiple times. For example, if you implemented a
                custom filter in Python called
                <literal>myfilter.MyFilter</literal> and you wanted
                to use both the built-in filters and your custom
                filter, your <filename>nova.conf</filename> file would
contain:</para>
<programlisting language="ini">scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=myfilter.MyFilter</programlisting>
<para>The <literal>scheduler_default_filters</literal>
configuration option in <filename>nova.conf</filename>
defines the list of filters that are applied by the
<systemitem class="service"
>nova-scheduler</systemitem> service. The default
filters are:</para>
<programlisting language="ini">scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter</programlisting>
<para>The following sections describe the available
filters.</para>
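            <para>To apply a different set of filters, change this
                option. For example, the following illustrative
                setting schedules only on hosts that are operational
                and have sufficient RAM, skipping the other default
                checks:</para>
            <programlisting language="ini">scheduler_default_filters=RamFilter,ComputeFilter</programlisting>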
</section>
<section xml:id="aggregateimagepropertiesisolationfilter">
<title>AggregateImagePropertiesIsolation</title>
<para>Matches properties defined in an image's metadata
against those of aggregates to determine host
matches:</para>
<itemizedlist>
<listitem>
                        <para>If a host belongs to an aggregate and
                            the aggregate defines one or more
                            metadata items that match an image's
                            properties, that host is a candidate to
                            boot the image's instance.</para>
</listitem>
<listitem>
<para>If a host does not belong to any aggregate,
it can boot instances from all images.</para>
</listitem>
</itemizedlist>
<para>For example, the following aggregate
<systemitem>myWinAgg</systemitem> has the Windows
operating system as metadata (named 'windows'):</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-details MyWinAgg</userinput>
<computeroutput>+----+----------+-------------------+------------+---------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+----------+-------------------+------------+---------------+
| 1 | MyWinAgg | None | 'sf-devel' | 'os=windows' |
+----+----------+-------------------+------------+---------------+</computeroutput></screen>
<para>In this example, because the following Win-2012
image has the <property>windows</property> property,
it boots on the <systemitem>sf-devel</systemitem> host
(all other filters being equal):</para>
<screen><prompt>$</prompt> <userinput>glance image-show Win-2012</userinput>
<computeroutput>+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| container_format | ami |
| created_at | 2013-11-14T13:24:25 |
| ...</computeroutput></screen>
<para>You can configure the
<systemitem>AggregateImagePropertiesIsolation</systemitem>
filter by using the following options in the
<filename>nova.conf</filename> file:</para>
<programlisting language="ini"># Considers only keys matching the given namespace (string).
aggregate_image_properties_isolation_namespace=&lt;None>
# Separator used between the namespace and keys (string).
aggregate_image_properties_isolation_separator=.</programlisting>
</section>
<section xml:id="aggregateinstanceextraspecsfilter">
<title>AggregateInstanceExtraSpecsFilter</title>
<para>Matches properties defined in extra specs for an
instance type against admin-defined properties on a
host aggregate. Works with specifications that are
scoped with
<literal>aggregate_instance_extra_specs</literal>.
                    For backward compatibility, the filter also
                    works with unscoped specifications; this usage
                    is strongly discouraged because it conflicts
                    with the <link
                        linkend="computecapabilitiesfilter"
                        >ComputeCapabilitiesFilter</link> when you
                    enable both filters. For information about how
                    to use this filter, see the <link
                        linkend="host-aggregates">host
                        aggregates</link> section.</para>
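                <para>For example, assuming a hypothetical aggregate
                    with ID 1 and a hypothetical flavor named
                    <literal>ssd.large</literal>, you might tag the
                    aggregate and scope a matching flavor extra spec
                    as follows:</para>
                <screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata 1 ssd=true</userinput>
<prompt>$</prompt> <userinput>nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true</userinput></screen>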
</section>
<section xml:id="aggregate-multi-tenancy-isolation">
<title>AggregateMultiTenancyIsolation</title>
<para>Isolates tenants to specific <link
linkend="host-aggregates">host aggregates</link>.
If a host is in an aggregate that has the
<literal>filter_tenant_id</literal> metadata key,
the host creates instances from only that tenant or
list of tenants. A host can be in different
aggregates. If a host does not belong to an aggregate
with the metadata key, the host can create instances
from all tenants.</para>
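                <para>For example, to restrict a hypothetical
                    aggregate with ID 1 to a single tenant, you might
                    set the metadata key as follows:</para>
                <screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata 1 filter_tenant_id=<replaceable>TENANT_ID</replaceable></userinput></screen>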
</section>
<section xml:id="aggregate-ram-filter">
<title>AggregateRamFilter</title>
                <para>Implements blueprint
                    per-aggregate-resource-ratio. AggregateRamFilter
                    supports per-aggregate
                    <option>ram_allocation_ratio</option>. If the
                    per-aggregate value is not found, the value
                    falls back to the global setting.</para>
</section>
<section xml:id="allhostsfilter">
<title>AllHostsFilter</title>
<para>This is a no-op filter. It does not eliminate any of
the available hosts.</para>
</section>
<section xml:id="availabilityzonefilter">
<title>AvailabilityZoneFilter</title>
<para>Filters hosts by availability zone. You must enable
this filter for the scheduler to respect availability
zones in requests.</para>
</section>
<section xml:id="computecapabilitiesfilter">
<title>ComputeCapabilitiesFilter</title>
<para>Matches properties defined in extra specs for an
instance type against compute capabilities.</para>
<para>If an extra specs key contains a colon
(<literal>:</literal>), anything before the colon
is treated as a namespace and anything after the colon
is treated as the key to be matched. If a namespace is
present and is not <literal>capabilities</literal>,
                    the filter ignores the extra spec. For backward
                    compatibility, the filter also matches the
                    extra specs key when no namespace is present;
                    this usage is strongly discouraged because it
                    conflicts with the <link
                        linkend="aggregateinstanceextraspecsfilter"
                        >AggregateInstanceExtraSpecsFilter</link>
                    when you enable both filters.</para>
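                <para>For example, assuming a hypothetical flavor
                    named <literal>m1.large</literal>, the following
                    illustrative scoped extra spec restricts the
                    flavor to hosts that report QEMU as the
                    hypervisor type:</para>
                <screen><prompt>$</prompt> <userinput>nova flavor-key m1.large set capabilities:hypervisor_type=QEMU</userinput></screen>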
</section>
<section xml:id="computefilter">
<title>ComputeFilter</title>
<para>Passes all hosts that are operational and
enabled.</para>
<para>In general, you should always enable this filter.</para>
</section>
<section xml:id="corefilter">
<title>CoreFilter</title>
<para>Only schedules instances on hosts if sufficient CPU
cores are available. If this filter is not set, the
scheduler might over-provision a host based on cores.
For example, the virtual cores running on an instance
may exceed the physical cores.</para>
<para>You can configure this filter to enable a fixed
amount of vCPU overcommitment by using the
<option>cpu_allocation_ratio</option>
                    configuration option in
                    <filename>nova.conf</filename>. The default
                    setting is:</para>
                <programlisting language="ini">cpu_allocation_ratio=16.0</programlisting>
                <para>With this setting, if 8 vCPUs are on a node,
                    the scheduler allows instances up to 128 vCPU
                    to be run on that node.</para>
<para>To disallow vCPU overcommitment set:</para>
<programlisting language="ini">cpu_allocation_ratio=1.0</programlisting>
<note>
<para>The Compute API always returns the actual
number of CPU cores available on a compute node
regardless of the value of the
<option>cpu_allocation_ratio</option>
                    configuration key. As a result, changes to the
                    <option>cpu_allocation_ratio</option> are not
                    reflected in the command-line clients or the
dashboard. Changes to this configuration key are
only taken into account internally in the
scheduler.</para>
</note>
</section>
<section xml:id="differenthostfilter">
<title>DifferentHostFilter</title>
<para>Schedules the instance on a different host from a
set of instances. To take advantage of this filter,
the requester must pass a scheduler hint, using
<literal>different_host</literal> as the key and a
list of instance uuids as the value. This filter is
the opposite of the <literal>SameHostFilter</literal>.
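                <para>For example, the following command, with
                    illustrative placeholder UUIDs, requests a host
                    that is not running either of the two listed
                    instances:</para>
                <screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint different_host=<replaceable>INSTANCE_1_UUID</replaceable> --hint different_host=<replaceable>INSTANCE_2_UUID</replaceable> server-1</userinput></screen>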
<para>Only schedules instances on hosts if there is
sufficient disk space available for root and ephemeral
storage.</para>
<para>You can configure this filter to enable a fixed
amount of disk overcommitment by using the
<literal>disk_allocation_ratio</literal>
                    configuration option in
                    <filename>nova.conf</filename>. The default
                    setting is:</para>
                <programlisting language="ini">disk_allocation_ratio=1.0</programlisting>
                <para>
<para>Adjusting this value to greater than 1.0 enables
scheduling instances while over committing disk
resources on the node. This might be desirable if you
use an image format that is sparse or copy on write so
that each virtual instance does not require a 1:1
allocation of virtual disk to physical storage.</para>
</section>
<section xml:id="groupaffinityfilter">
<title>GroupAffinityFilter</title>
<note>
                    <para>This filter is deprecated in favor of <link
                            linkend="servergroupaffinityfilter"
                            >ServerGroupAffinityFilter</link>.</para>
</note>
<para>The GroupAffinityFilter ensures that an instance is
scheduled on to a host from a set of group hosts. To
                    take advantage of this filter, the requester
                    must pass a scheduler hint, using
                    <literal>group</literal> as the key and an
                    arbitrary name as the value. Using the
                    <command>nova</command> command-line tool, use the
<literal>--hint</literal> flag. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint group=foo server-1</userinput></screen>
                <para>Do not enable this filter at the same time
                    as <link linkend="groupantiaffinityfilter"
                        >GroupAntiAffinityFilter</link>; otherwise,
                    neither filter works properly.</para>
</section>
<section xml:id="groupantiaffinityfilter">
<title>GroupAntiAffinityFilter</title>
<note>
<para>This filter is deprecated in favor of <link
linkend="servergroupantiaffinityfilter"
>ServerGroupAntiAffinityFilter</link>.</para>
</note>
<para>The GroupAntiAffinityFilter ensures that each
instance in a group is on a different host. To take
                    advantage of this filter, the requester must
                    pass a scheduler hint, using
                    <literal>group</literal> as the key and an
                    arbitrary name as the value. Using the
                    <command>nova</command> command-line tool, use the
<literal>--hint</literal> flag. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint group=foo server-1</userinput></screen>
                <para>Do not enable this filter at the same time
                    as <link linkend="groupaffinityfilter"
                        >GroupAffinityFilter</link>; otherwise,
                    neither filter works properly.</para>
</section>
<section xml:id="imagepropertiesfilter">
<title>ImagePropertiesFilter</title>
and hosts in the <filename>nova.conf</filename> file
using the <literal>isolated_hosts</literal> and
<literal>isolated_images</literal> configuration
options. For example:</para>
<programlisting language="ini">isolated_hosts=server1,server2
isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09</programlisting>
</section>
<section xml:id="jsonfilter">
<title>JsonFilter</title>
the scheduler may over provision a host based on RAM
(for example, the RAM allocated by virtual machine
instances may exceed the physical RAM).</para>
<para>You can configure this filter to enable a fixed
amount of RAM overcommitment by using the
<literal>ram_allocation_ratio</literal>
configuration option in
<filename>nova.conf</filename>. The default setting
is:</para>
<programlisting language="ini">ram_allocation_ratio=1.5</programlisting>
<para>This setting enables 1.5&nbsp;GB instances to run on
any compute node with 1&nbsp;GB of free RAM.</para>
</section>
<section xml:id="retryfilter">
<title>RetryFilter</title>
</section>
<section xml:id="servergroupaffinityfilter">
<title>ServerGroupAffinityFilter</title>
<para>The ServerGroupAffinityFilter ensures that an
instance is scheduled on to a host from a set of group
hosts. To take advantage of this filter, the requester
must create a server group with an
<literal>affinity</literal> policy, and pass a
scheduler hint, using <literal>group</literal> as the
key and the server group UUID as the value. Using the
<command>nova</command> command-line tool, use the
<literal>--hint</literal> flag. For
example:</para>
                <screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint group=<replaceable>SERVER_GROUP_UUID</replaceable> server-1</userinput></screen>
</section>
<section xml:id="servergroupantiaffinityfilter">
<title>ServerGroupAntiAffinityFilter</title>
<para>The ServerGroupAntiAffinityFilter ensures that each
instance in a group is on a different host. To take
advantage of this filter, the requester must create a
server group with an <literal>anti-affinity</literal>
policy, and pass a scheduler hint, using
<literal>group</literal> as the key and the server
group UUID as the value. Using the
<command>nova</command> command-line tool, use the
<literal>--hint</literal> flag. For
                    example:</para>
                <screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint group=<replaceable>SERVER_GROUP_UUID</replaceable> server-1</userinput></screen>
</section>
<section xml:id="simplecidraffinityfilter">
<title>SimpleCIDRAffinityFilter</title>
<para>Schedules the instance based on host IP subnet
range. To take advantage of this filter, the requester
                    must specify a range of valid IP addresses in CIDR
format, by passing two scheduler hints:</para>
<variablelist>
<varlistentry>
<term><literal>build_near_host_ip</literal></term>
<section xml:id="weights">
<title>Weights</title>
<?dbhtml stop-chunking?>
<para>When resourcing instances, the Filter Scheduler filters
and weighs each host in the list of acceptable hosts. Each
time the scheduler selects a host, it virtually consumes
resources on it, and subsequent selections are adjusted
            accordingly. This process is useful when a customer
            requests a large number of instances, because weight is
computed for each requested instance.</para>
<para>All weights are normalized before being summed up; the
host with the largest weight is given the highest
priority.</para>
<figure xml:id="figure_weighing-hosts">
<title>Weighing hosts</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/nova-weighting-hosts.png"
/>
</imageobject>
</mediaobject>
</figure>
<para>If cells are used, cells are weighted by the scheduler
in the same manner as hosts.</para>
<para>Hosts and cells are weighed based on the following
options in the <filename>/etc/nova/nova.conf</filename>
file:</para>
<table rules="all" xml:id="table_host-weighting-options">
<caption>Host weighting options</caption>
<col width="10%" title="Section"/>
<col width="25%" title="Option"/>
<col width="60%" title="Description"/>
            <thead>
                <tr>
                    <th>Section</th>
                    <th>Option</th>
                    <th>Description</th>
                </tr>
            </thead>
            <tbody>
<tr valign="top">
<td>[DEFAULT]</td>
<td><literal>ram_weight_multiplier</literal></td>
<td>By default, the scheduler spreads instances
across all hosts evenly. Set the
<option>ram_weight_multiplier</option>
option to a negative number if you prefer
stacking instead of spreading. Use a
floating-point value.</td>
</tr>
<tr valign="top">
<td>[DEFAULT]</td>
<td><literal>scheduler_host_subset_size</literal></td>
<td>New instances are scheduled on a host that is
chosen randomly from a subset of the N best
hosts. This property defines the subset size
from which a host is chosen. A value of 1
chooses the first host returned by the
weighing functions. This value must be at
least 1. A value less than 1 is ignored, and 1
is used instead. Use an integer value.</td>
</tr>
<tr valign="top">
<td>[DEFAULT]</td>
<td><literal>scheduler_weight_classes</literal></td>
<td>Defaults to
<literal>nova.scheduler.weights.all_weighers</literal>,
which selects the only available weigher, the
RamWeigher. Hosts are then weighed and sorted
with the largest weight winning.</td>
</tr>
<tr valign="top">
<td>[metrics]</td>
<td><literal>weight_multiplier</literal></td>
<td>Multiplier for weighing metrics. Use a
floating-point value.</td>
</tr>
<tr valign="top">
<td>[metrics]</td>
<td><literal>weight_setting</literal></td>
<td>Determines how metrics are weighed. Use a
comma-separated list of metricName=ratio. For
example: "name1=1.0, name2=-1.0" results in:
<literal>name1.value * 1.0 + name2.value *
-1.0</literal>
</td>
</tr>
<tr valign="top">
<td>[metrics]</td>
<td><literal>required</literal></td>
<td><para>Specifies how to treat unavailable metrics:<itemizedlist>
<listitem>
<para>True&mdash;Raises an
exception. To avoid the raised
exception, you should use the
scheduler filter
<literal>MetricFilter</literal> to
filter out hosts with unavailable
metrics.</para>
</listitem>
<listitem>
<para>False&mdash;Treated as a
negative factor in the weighing
process (uses the
<option>weight_of_unavailable</option>
option).</para>
</listitem>
</itemizedlist></para></td>
</tr>
<tr valign="top">
<td>[metrics]</td>
<td><literal>weight_of_unavailable</literal></td>
<td>If <option>required</option> is set to False,
and any one of the metrics set by
<option>weight_setting</option> is
unavailable, the
<option>weight_of_unavailable</option>
value is returned to the scheduler.</td>
</tr>
</tbody>
</table>
        <para>For example:</para>
        <programlisting language="ini">[metrics]
weight_multiplier=1.0
weight_setting=name1=1.0, name2=-1.0
required=false
weight_of_unavailable=-10000.0</programlisting>
<table rules="all" xml:id="table_cell-weighting-options">
<caption>Cell weighting options</caption>
<col width="10%" title="Section"/>
<col width="25%" title="Option"/>
<col width="60%" title="Description"/>
<thead>
<tr>
<th>Section</th>
<th>Option</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr valign="top">
<td>[cells]</td>
<td><literal>mute_weight_multiplier</literal></td>
                    <td>Multiplier to weigh mute children (hosts
                        which have not sent capacity or capability
                        updates for some time). Use a negative,
                        floating-point value.</td>
</tr>
<tr valign="top">
<td>[cells]</td>
<td><literal>mute_weight_value</literal></td>
<td>Weight value assigned to mute children. Use a
positive, floating-point value with a maximum
of '1.0'.</td>
</tr>
<tr valign="top">
<td>[cells]</td>
<td><literal>offset_weight_multiplier</literal></td>
                    <td>Multiplier to weigh cells, so you can specify
                        a preferred cell. Use a floating-point
                        value.</td>
</tr>
<tr valign="top">
<td>[cells]</td>
<td><literal>ram_weight_multiplier</literal></td>
<td>By default, the scheduler spreads instances
across all cells evenly. Set the
<option>ram_weight_multiplier</option>
option to a negative number if you prefer
stacking instead of spreading. Use a
floating-point value.</td>
</tr>
<tr valign="top">
<td>[cells]</td>
<td><literal>scheduler_weight_classes</literal></td>
<td>Defaults to
<literal>nova.cells.weights.all_weighers</literal>,
which maps to all cell weighers included with
Compute. Cells are then weighed and sorted
with the largest weight winning.</td>
                </tr>
            </tbody>
        </table>
<para>For example:</para>
<programlisting language="ini">[cells]
scheduler_weight_classes=nova.cells.weights.all_weighers
mute_weight_multiplier=-10.0
mute_weight_value=1000.0
ram_weight_multiplier=1.0
offset_weight_multiplier=1.0</programlisting>
    </section>
    <xi:include
href="../../common/section_cli_nova_host_aggregates.xml"/>
<section xml:id="compute-scheduler-config-ref">
<title>Configuration reference</title>
<para>To customize the Compute scheduler, use the
configuration option settings documented in <xref
linkend="config_table_nova_scheduling"/>.</para>
</section>
</section>