Correct incorrect uses of "weight" and "weighting"

Also made some other edits to back-end versus back end
Back end is the thing, and back-end is the adjective

Closes-Bug: #1335423

Change-Id: I09fcf289788630b6f016aa5ee373068401c762cf
Author: Diane Fleming dfleming@austin.rr.com
This commit is contained in:
Diane Fleming 2014-06-28 08:33:29 -05:00 committed by Diane Fleming
parent a70c78e269
commit 95c9b4c011
9 changed files with 331 additions and 271 deletions


@@ -2,31 +2,30 @@
<section xml:id="multi_backend" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0">
<title>Configure a multiple-storage back-end</title>
<para>With multiple storage back-ends configured, you can create
several back-end storage solutions serving the
same OpenStack Compute configuration. Basically, multi
back-end launches one <systemitem class="service"
>cinder-volume</systemitem> for each back-end.</para>
<para>In a multi back-end configuration, each back-end has a name
(<literal>volume_backend_name</literal>). Several
back-ends can have the same name. In that case, the scheduler
properly decides which back-end the volume has to be
created in.</para>
<para>The name of the back-end is declared as an
<title>Configure multiple-storage back ends</title>
<para>When you configure multiple-storage back ends, you can
create several back-end storage solutions that serve the same
OpenStack Compute configuration. One <systemitem class="service">cinder-volume</systemitem>
is launched for each back end.</para>
<para>In a multiple-storage back-end configuration, each back end has a name
(<literal>volume_backend_name</literal>). Several back
ends can have the same name. In that case, the scheduler
properly decides which back end the volume has to be created
in.</para>
<para>The name of the back end is declared as an
extra-specification of a volume type (for example,
<literal>volume_backend_name=LVM_iSCSI</literal>). When a
volume is created, the scheduler chooses an appropriate
back-end to handle the request, according to the volume type
volume is created, the scheduler chooses an appropriate back
end to handle the request, according to the volume type
specified by the user.</para>
<simplesect>
<title>Enable multi back-end</title>
<para>To enable a multi back-end configuration, you must set
<title>Enable multiple-storage back ends</title>
<para>To enable multiple-storage back ends, you must set
the <option>enabled_backends</option> flag in the
<filename>cinder.conf</filename> file. This flag
defines the names (separated by a comma) of the
configuration groups for the different back-ends: one name
is associated to one configuration group for a back-end
configuration groups for the different back ends: one name
is associated to one configuration group for a back end
(for example, <literal>[lvmdriver-1]</literal>).</para>
<note>
<para>The configuration group name is not related to the
@@ -40,7 +39,7 @@
used in a configuration group. Configuration values in the
<literal>[DEFAULT]</literal> configuration group are
not used.</para>
<para>These examples show three back-ends:</para>
<para>These examples show three back ends:</para>
<programlisting language="ini">enabled_backends=lvmdriver-1,lvmdriver-2,lvmdriver-3
[lvmdriver-1]
volume_group=cinder-volumes-1
@@ -57,51 +56,52 @@ volume_backend_name=LVM_iSCSI_b</programlisting>
<para>In this configuration, <literal>lvmdriver-1</literal>
and <literal>lvmdriver-2</literal> have the same
<literal>volume_backend_name</literal>. If a volume
creation requests the <literal>LVM_iSCSI</literal>
back-end name, the scheduler uses the capacity filter
scheduler to choose the most suitable driver, which is
either <literal>lvmdriver-1</literal> or
creation requests the <literal>LVM_iSCSI</literal> back
end name, the scheduler uses the capacity filter scheduler
to choose the most suitable driver, which is either
<literal>lvmdriver-1</literal> or
<literal>lvmdriver-2</literal>. The capacity filter
scheduler is enabled by default. The next section provides
more information. In addition, this example presents a
<literal>lvmdriver-3</literal> back-end.</para>
<literal>lvmdriver-3</literal> back end.</para>
</simplesect>
<simplesect>
<title>Configure Block Storage scheduler multi back-end</title>
<title>Configure the Block Storage scheduler for
multiple-storage back ends</title>
<para>You must enable the <option>filter_scheduler</option>
option to use multi back-end. Filter scheduler acts in two
steps:</para>
option to use multiple-storage back ends. The filter scheduler:</para>
<orderedlist>
<listitem>
<para>The filter scheduler filters the available
back-ends. By default,
<para>Filters the available back
ends. By default,
<literal>AvailabilityZoneFilter</literal>,
<literal>CapacityFilter</literal> and
<literal>CapabilitiesFilter</literal> are
enabled.</para>
</listitem>
<listitem>
<para>The filter scheduler weighs the previously
filtered back-ends. By default,
<literal>CapacityWeigher</literal> is enabled.
The <literal>CapacityWeigher</literal> attributes
higher scores to back-ends with the most
available.</para>
<para>Weights the previously
filtered back ends. By default, the
<option>CapacityWeigher</option> option is
enabled. When this option is enabled, the filter
scheduler assigns the highest weight to back ends
with the most available capacity.</para>
</listitem>
</orderedlist>
<para>The scheduler uses the filtering and weighing process to
pick the best back-end to handle the request, and
explicitly creates volumes on specific back-ends through
the use of volume types.</para>
<para>The scheduler uses filters and weights to pick the best
back end to handle the request. The scheduler uses volume
types to explicitly create volumes on specific back
ends.</para>
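The filter-then-weight flow described above can be sketched as a short Python example. This is an illustrative sketch only, not the actual Block Storage scheduler code; the dictionary fields and the `schedule` helper are invented for the example:

```python
# Illustrative filter-then-weight scheduler sketch (not the real
# cinder code; field names and the helper are invented).

def schedule(backends, request):
    # Step 1: filters are binary -- a back end is kept or rejected.
    candidates = [b for b in backends
                  if b["free_gb"] >= request["size_gb"]
                  and b["backend_name"] == request["volume_backend_name"]]
    if not candidates:
        raise RuntimeError("no valid back end found")
    # Step 2: a capacity weigher ranks the survivors by free space;
    # the highest-weighted back end handles the request.
    return max(candidates, key=lambda b: b["free_gb"])

backends = [
    {"backend_name": "LVM_iSCSI", "free_gb": 120, "host": "lvmdriver-1"},
    {"backend_name": "LVM_iSCSI", "free_gb": 300, "host": "lvmdriver-2"},
    {"backend_name": "LVM_iSCSI_b", "free_gb": 500, "host": "lvmdriver-3"},
]
choice = schedule(backends, {"size_gb": 10,
                             "volume_backend_name": "LVM_iSCSI"})
print(choice["host"])  # lvmdriver-2: most free space among matches
```

With two back ends sharing the `LVM_iSCSI` name, the capacity step is what breaks the tie, which mirrors the lvmdriver-1/lvmdriver-2 example earlier in this section.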
<!-- TODO: when filter/weighing scheduler documentation will be up, a ref should be added here -->
</simplesect>
<simplesect>
<title>Volume type</title>
<para>Before using it, a volume type has to be declared to
Block Storage. This can be done by the following command:</para>
Block Storage. This can be done by the following
command:</para>
<screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-create lvm</userinput></screen>
<para>Then, an extra-specification has to be created to link
the volume type to a back-end name. Run this
the volume type to a back end name. Run this
command:</para>
<screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-key lvm set volume_backend_name=LVM_iSCSI</userinput></screen>
<para>This example creates a <literal>lvm</literal> volume
@@ -112,7 +112,7 @@ volume_backend_name=LVM_iSCSI_b</programlisting>
<screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-key lvm_gold set volume_backend_name=LVM_iSCSI_b</userinput></screen>
<para>This second volume type is named
<literal>lvm_gold</literal> and has
<literal>LVM_iSCSI_b</literal> as back-end
<literal>LVM_iSCSI_b</literal> as back end
name.</para>
<note>
<para>To list the extra-specifications, use this
@@ -125,14 +125,14 @@ volume_backend_name=LVM_iSCSI_b</programlisting>
not exist in the Block Storage configuration, the
<literal>filter_scheduler</literal> returns an
error that it cannot find a valid host with the
suitable back-end.</para>
suitable back end.</para>
</note>
</simplesect>
<simplesect>
<title>Usage</title>
<para>When you create a volume, you must specify the volume
type. The extra-specifications of the volume type are used
to determine which back-end has to be used.
to determine which back end has to be used.
<screen><prompt>$</prompt> <userinput>cinder create --volume_type lvm --display_name test_multi_backend 1</userinput></screen>
Considering the <literal>cinder.conf</literal> described
previously, the scheduler creates this volume on


@@ -499,9 +499,8 @@ enabled = True</programlisting>
<para>Plug-ins can have different properties for hardware
requirements, features, performance, scale, or operator
tools. Because Networking supports a large number of
plug-ins, the cloud administrator can weigh options to
decide on the right networking technology for the
deployment.</para>
plug-ins, the cloud administrator must determine the right
networking technology for the deployment.</para>
<para>In the Havana release, OpenStack Networking introduces
the <glossterm
baseform="Modular Layer 2 (ML2) neutron plug-in"


@@ -3,10 +3,14 @@
<!ENTITY % openstack SYSTEM "entities/openstack.ent">
%openstack;
]>
<section xml:id="customize-flavors" xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<section xml:id="customize-flavors"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Flavors</title>
<para>Admin users can use the <command>nova flavor-</command> commands to customize and manage
flavors. To see the available flavor-related commands, run:</para>
<para>Admin users can use the <command>nova flavor-</command>
commands to customize and manage flavors. To see the available
flavor-related commands, run:</para>
<screen><prompt>$</prompt> <userinput>nova help | grep flavor-</userinput>
<computeroutput> flavor-access-add Add flavor access for the given tenant.
flavor-access-list Print access information about the given flavor.
@@ -19,13 +23,20 @@
flavor-show Show details about the given flavor.</computeroutput></screen>
<note>
<itemizedlist>
<listitem><para>Configuration rights can be delegated to additional users
by redefining the access controls for <option>compute_extension:flavormanage</option>
in <filename>/etc/nova/policy.json</filename> on the
<systemitem class="server">nova-api</systemitem> server.</para></listitem>
<listitem><para>To modify an existing flavor in the dashboard, you must
delete the flavor and create a modified one with the same
name.</para></listitem>
<listitem>
<para>Configuration rights can be delegated to
additional users by redefining the access controls
for
<option>compute_extension:flavormanage</option>
in <filename>/etc/nova/policy.json</filename> on
the <systemitem class="server"
>nova-api</systemitem> server.</para>
</listitem>
<listitem>
<para>To modify an existing flavor in the dashboard,
you must delete the flavor and create a modified
one with the same name.</para>
</listitem>
</itemizedlist>
</note>
<para>Flavors define these elements:</para>
@@ -94,190 +105,241 @@
</tr>
<tr>
<td><literal>extra_specs</literal></td>
<td><para>Key and value pairs that define on which compute
nodes a flavor can run. These pairs must match
corresponding pairs on the compute nodes. Use to
implement special resources, such as flavors that
run on only compute nodes with GPU hardware.</para></td>
<td><para>Key and value pairs that define on which
compute nodes a flavor can run. These pairs
must match corresponding pairs on the compute
nodes. Use to implement special resources,
such as flavors that run on only compute nodes
with GPU hardware.</para></td>
</tr>
</tbody>
</table>
<para>Flavor customization can be limited by the hypervisor in use. For example the
<systemitem>libvirt</systemitem> driver enables quotas on CPUs available to a VM, disk
tuning, bandwidth I/O, watchdog behavior, random number generator device control, and
instance VIF traffic control.</para>
<variablelist>
<varlistentry><term>CPU limits</term>
<listitem><para>You can configure the CPU limits with control parameters with the <command>nova</command>
client. For example, to configure the I/O limit, use:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.small set quota:read_bytes_sec=10240000</userinput>
<para>Flavor customization can be limited by the hypervisor in
use. For example, the <systemitem>libvirt</systemitem> driver
enables quotas on CPUs available to a VM, disk tuning,
bandwidth I/O, watchdog behavior, random number generator
device control, and instance VIF traffic control.</para>
<variablelist>
<varlistentry>
<term>CPU limits</term>
<listitem>
<para>You can configure the CPU limits with control
parameters with the <command>nova</command>
client. For example, to configure the I/O limit,
use:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.small set quota:read_bytes_sec=10240000</userinput>
<prompt>$</prompt> <userinput>nova flavor-key m1.small set quota:write_bytes_sec=10240000</userinput></screen>
<para>There are optional CPU control parameters for weight shares, enforcement
intervals for runtime quotas, and a quota for maximum allowed
bandwidth:</para>
<para>
<itemizedlist>
<listitem>
<para><literal>cpu_shares</literal> specifies the proportional
weighted share for the domain. If this element is omitted, the
service defaults to the OS provided defaults. There is no unit
for the value; it is a relative measure based on the setting of
other VMs. For example, a VM configured with value 2048 gets
twice as much CPU time as a VM configured with value
1024.</para>
</listitem>
<listitem>
<para><literal>cpu_period</literal> specifies the enforcement
interval (unit: microseconds) for QEMU and LXC hypervisors.
Within a period, each VCPU of the domain is not allowed to
consume more than the quota worth of runtime. The value should
be in range <literal>[1000, 1000000]</literal>. A period with
value 0 means no value.</para>
</listitem>
<listitem>
<para><literal>cpu_quota</literal> specifies the maximum allowed
bandwidth (unit: microseconds). A domain with a negative-value quota
indicates that the domain has infinite bandwidth, which means that
it is not bandwidth controlled. The value should be in range
<literal>[1000, 18446744073709551]</literal> or less than 0. A
quota with value 0 means no value. You can use this feature to
ensure that all vCPUs run at the same speed. For example:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.low_cpu set quota:cpu_quota=10000</userinput>
<para>Use these optional parameters to control weight
shares, enforcement intervals for runtime quotas,
and a quota for maximum allowed bandwidth:</para>
<itemizedlist>
<listitem>
<para><parameter>cpu_shares</parameter>. Specifies the proportional weighted share
for the domain. If this element is
omitted, the service uses the OS-provided
default. There is no unit for
the value; it is a relative measure based
on the setting of other VMs. For example,
a VM configured with value 2048 gets twice
as much CPU time as a VM configured with
value 1024.</para>
</listitem>
<listitem>
<para><parameter>cpu_period</parameter>. Specifies the enforcement interval (unit:
microseconds) for QEMU and LXC
hypervisors. Within a period, each VCPU of
the domain is not allowed to consume more
than the quota worth of runtime. The value
should be in range <literal>[1000,
1000000]</literal>. A period with
value 0 means no value.</para>
</listitem>
<listitem>
<para><parameter>cpu_quota</parameter>. Specifies the maximum allowed bandwidth
(unit: microseconds). A domain with a
negative-value quota indicates that the
domain has infinite bandwidth, which means
that it is not bandwidth controlled. The
value should be in range <literal>[1000,
18446744073709551]</literal> or less
than 0. A quota with value 0 means no
value. You can use this feature to ensure
that all vCPUs run at the same speed. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.low_cpu set quota:cpu_quota=10000</userinput>
<prompt>$</prompt> <userinput>nova flavor-key m1.low_cpu set quota:cpu_period=20000</userinput></screen>
<para>In this example, the instance of
<literal>m1.low_cpu</literal> can only consume a maximum
of 50% CPU of a physical CPU computing capability.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</varlistentry>
<varlistentry><term>Disk tuning</term>
<listitem><para>Using disk I/O quotas, you can set maximum disk write to 10 MB per second for a VM user. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.medium set disk_write_bytes_sec=10485760</userinput></screen>
<para>The disk I/O options are:</para>
<itemizedlist>
<listitem>
<para>disk_read_bytes_sec</para>
</listitem>
<listitem>
<para>disk_read_iops_sec</para>
</listitem>
<listitem>
<para>disk_write_bytes_sec</para>
</listitem>
<listitem>
<para>disk_write_iops_sec</para>
</listitem>
<listitem>
<para>disk_total_bytes_sec</para>
</listitem>
<listitem>
<para>disk_total_iops_sec</para>
</listitem>
</itemizedlist>
<para>The vif I/O options are:</para>
<itemizedlist>
<listitem>
<para>vif_inbound_average</para>
</listitem>
<listitem>
<para>vif_inbound_burst</para>
</listitem>
<listitem>
<para>vif_inbound_peak</para>
</listitem>
<listitem>
<para>vif_outbound_average</para>
</listitem>
<listitem>
<para>vif_outbound_burst</para>
</listitem>
<listitem>
<para>vif_outbound_peak</para>
</listitem>
</itemizedlist></listitem>
</varlistentry>
<varlistentry><term>Bandwidth I/O</term>
<listitem><para>Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at
most, one inbound and at most, one outbound child element. If you leave any of
these child elements out, no quality of service (QoS) is applied on that
traffic direction. So, if you want to shape only the network's incoming traffic,
use inbound only (and vice versa). Each element has one mandatory attribute
average, which specifies the average bit rate on the interface being shaped.</para>
<para>There are also two optional attributes (integer): <option>peak</option>, which
specifies the maximum rate at which a bridge can send data (kilobytes/second), and
<option>burst</option>, the amount of bytes that can be burst at peak speed
(kilobytes). The rate is shared equally within domains connected to the
network.</para>
<para>The following example configures a bandwidth limit for instance network
traffic:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.small set quota:inbound_average=10240</userinput>
<prompt>$</prompt> <userinput>nova flavor-key m1.small set quota:outbound_average=10240</userinput></screen></listitem>
</varlistentry>
<varlistentry><term>Watchdog behavior</term>
<listitem><para>For the <systemitem>libvirt</systemitem> driver, you can enable and set the behavior of a
virtual hardware watchdog device for each flavor. Watchdog devices keep an eye
on the guest server, and carry out the configured action, if the server hangs.
The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If
<literal>hw_watchdog_action</literal> is not specified, the watchdog is
disabled.</para>
<para>To set the behavior, use:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key <replaceable>FLAVOR-NAME</replaceable> set hw_watchdog_action=<replaceable>ACTION</replaceable></userinput></screen>
<para>Valid <replaceable>ACTION</replaceable> values are:</para>
<itemizedlist>
<listitem>
<para><literal>disabled</literal>&mdash;(default) The device is not
attached.</para></listitem>
<listitem>
<para><literal>reset</literal>&mdash;Forcefully reset the guest.
</para>
</listitem>
<listitem>
<para><literal>poweroff</literal>&mdash;Forcefully power off the
guest.</para>
</listitem>
<listitem>
<para><literal>pause</literal>&mdash;Pause the guest.</para>
</listitem>
<listitem>
<para><literal>none</literal>&mdash;Only enable the watchdog; do
nothing if the server hangs.</para>
</listitem>
</itemizedlist>
<note><para>Watchdog behavior set using a specific image's properties will override behavior set using
flavors.</para></note>
</listitem>
</varlistentry>
<varlistentry>
<term>Random-number generator</term>
<listitem><para>If a random-number generator device has been added to the instance through its image
properties, the device can be enabled and configured using:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key <replaceable>FLAVOR-NAME</replaceable> set hw_rng:allowed=True</userinput>
<prompt>$</prompt> <userinput>nova flavor-key <replaceable>FLAVOR-NAME</replaceable> set hw_rng:rate_bytes=<replaceable>RATE-BYTES</replaceable></userinput>
<prompt>$</prompt> <userinput>nova flavor-key <replaceable>FLAVOR-NAME</replaceable> set hw_rng:rate_period=<replaceable>RATE-PERIOD</replaceable></userinput></screen>
<para>Where:</para>
<itemizedlist>
<listitem>
<para><replaceable>RATE-BYTES</replaceable>&mdash;(Integer) Allowed amount of
bytes that the guest can read from the host's entropy per period.</para>
</listitem>
<listitem>
<para><replaceable>RATE-PERIOD</replaceable>&mdash;(Integer) Duration of the
read period in seconds.</para>
<para>In this example, the instance of
<literal>m1.low_cpu</literal> can only
consume a maximum of 50% CPU of a physical
CPU computing capability.</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
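The 50% figure in the <literal>m1.low_cpu</literal> example follows directly from the ratio of quota to period. A quick arithmetic sketch, using the values from the example (the variable names are just for illustration):

```python
# cpu_quota / cpu_period gives the fraction of one physical CPU that
# each vCPU may consume (values from the m1.low_cpu example).
cpu_quota = 10000   # microseconds of allowed runtime per period
cpu_period = 20000  # enforcement interval, in microseconds

cap = cpu_quota / cpu_period
print(f"{cap:.0%}")  # 50% of one physical CPU's capability
```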
<varlistentry><term>Instance VIF traffic control</term>
<listitem><para>Flavors can also be assigned to particular projects. By
default, a flavor is public and available to all projects.
Private flavors are only accessible to those on the access
list and are invisible to other projects. To create and assign
a private flavor to a project, run these commands:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-create --is-public false p1.medium auto 512 40 4</userinput>
<prompt>$</prompt> <userinput>nova flavor-access-add 259d06a0-ba6d-4e60-b42d-ab3144411d58 86f94150ed744e08be565c2ff608eef9</userinput></screen></listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>Disk tuning</term>
<listitem>
<para>Using disk I/O quotas, you can set maximum disk
write to 10 MB per second for a VM user. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.medium set disk_write_bytes_sec=10485760</userinput></screen>
<para>The disk I/O options are:</para>
<itemizedlist>
<listitem>
<para>disk_read_bytes_sec</para>
</listitem>
<listitem>
<para>disk_read_iops_sec</para>
</listitem>
<listitem>
<para>disk_write_bytes_sec</para>
</listitem>
<listitem>
<para>disk_write_iops_sec</para>
</listitem>
<listitem>
<para>disk_total_bytes_sec</para>
</listitem>
<listitem>
<para>disk_total_iops_sec</para>
</listitem>
</itemizedlist>
<para>The vif I/O options are:</para>
<itemizedlist>
<listitem>
<para>vif_inbound_average</para>
</listitem>
<listitem>
<para>vif_inbound_burst</para>
</listitem>
<listitem>
<para>vif_inbound_peak</para>
</listitem>
<listitem>
<para>vif_outbound_average</para>
</listitem>
<listitem>
<para>vif_outbound_burst</para>
</listitem>
<listitem>
<para>vif_outbound_peak</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>Bandwidth I/O</term>
<listitem>
<para>Incoming and outgoing traffic can be shaped
independently. The bandwidth element can have at
most one inbound and at most one outbound child
element. If you leave out any of these child
elements, no quality of service (QoS) is applied
to that traffic direction. To shape only the
network's incoming traffic, use inbound only, and
vice versa. Each element has one mandatory
attribute, <literal>average</literal>, which
specifies the average bit rate on the interface
being shaped.</para>
<para>There are also two optional integer
attributes: <option>peak</option>, which specifies
the maximum rate at which a bridge can send data
(kilobytes/second), and <option>burst</option>,
the number of kilobytes that can be sent in a
single burst at peak speed. The rate is shared
equally among domains connected to the
network.</para>
<para>The following example configures a bandwidth
limit for instance network traffic:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.small set quota:inbound_average=10240</userinput>
<prompt>$</prompt> <userinput>nova flavor-key m1.small set quota:outbound_average=10240</userinput></screen>
</listitem>
</varlistentry>
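The interplay of the average, peak, and burst attributes can be illustrated with a small calculation. This is a rough sketch under token-bucket assumptions, and the `peak` and `burst` values here are invented for the example (only `average` comes from the commands above):

```python
# Rough token-bucket view of libvirt-style traffic shaping.
average = 10240  # sustained rate cap, KB/s (quota:inbound_average above)
peak = 20480     # assumed short-term ceiling, KB/s
burst = 5120     # assumed burst allowance at peak speed, KB

# A full burst allowance drains at roughly (peak - average) KB/s while
# the bucket refills at the average rate, so the peak rate can be
# sustained for about:
seconds_at_peak = burst / (peak - average)
print(round(seconds_at_peak, 2))  # about half a second at peak
```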
<varlistentry>
<term>Watchdog behavior</term>
<listitem>
<para>For the <systemitem>libvirt</systemitem> driver,
you can enable and set the behavior of a virtual
hardware watchdog device for each flavor. Watchdog
devices keep an eye on the guest server, and carry
out the configured action, if the server hangs.
The watchdog uses the i6300esb device (emulating a
PCI Intel 6300ESB). If
<literal>hw_watchdog_action</literal> is not
specified, the watchdog is disabled.</para>
<para>To set the behavior, use:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key <replaceable>FLAVOR-NAME</replaceable> set hw_watchdog_action=<replaceable>ACTION</replaceable></userinput></screen>
<para>Valid <replaceable>ACTION</replaceable> values
are:</para>
<itemizedlist>
<listitem>
<para><literal>disabled</literal>&mdash;(default)
The device is not attached.</para>
</listitem>
<listitem>
<para><literal>reset</literal>&mdash;Forcefully
reset the guest.</para>
</listitem>
<listitem>
<para><literal>poweroff</literal>&mdash;Forcefully
power off the guest.</para>
</listitem>
<listitem>
<para><literal>pause</literal>&mdash;Pause the
guest.</para>
</listitem>
<listitem>
<para><literal>none</literal>&mdash;Only
enable the watchdog; do nothing if the
server hangs.</para>
</listitem>
</itemizedlist>
<note>
<para>Watchdog behavior set using a specific
image's properties will override behavior set
using flavors.</para>
</note>
</listitem>
</varlistentry>
<varlistentry>
<term>Random-number generator</term>
<listitem>
<para>If a random-number generator device has been
added to the instance through its image
properties, the device can be enabled and
configured using:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key <replaceable>FLAVOR-NAME</replaceable> set hw_rng:allowed=True</userinput>
<prompt>$</prompt> <userinput>nova flavor-key <replaceable>FLAVOR-NAME</replaceable> set hw_rng:rate_bytes=<replaceable>RATE-BYTES</replaceable></userinput>
<prompt>$</prompt> <userinput>nova flavor-key <replaceable>FLAVOR-NAME</replaceable> set hw_rng:rate_period=<replaceable>RATE-PERIOD</replaceable></userinput></screen>
<para>Where:</para>
<itemizedlist>
<listitem>
<para><replaceable>RATE-BYTES</replaceable>&mdash;(Integer)
Allowed amount of bytes that the guest can
read from the host's entropy per
period.</para>
</listitem>
<listitem>
<para><replaceable>RATE-PERIOD</replaceable>&mdash;(Integer)
Duration of the read period in
seconds.</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>Project private flavors</term>
<listitem>
<para>Flavors can also be assigned to particular
projects. By default, a flavor is public and
available to all projects. Private flavors are
only accessible to those on the access list and
are invisible to other projects. To create and
assign a private flavor to a project, run these
commands:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-create --is-public false p1.medium auto 512 40 4</userinput>
<prompt>$</prompt> <userinput>nova flavor-access-add 259d06a0-ba6d-4e60-b42d-ab3144411d58 86f94150ed744e08be565c2ff608eef9</userinput></screen>
</listitem>
</varlistentry>
</variablelist>
</section>


@@ -74,7 +74,7 @@
Storage installation. When partitions need to be moved around (for example, if a device
is added to the cluster), the ring ensures that a minimum number of partitions are moved
at a time, and only one replica of a partition is moved at a time.</para>
<para>Weights can be used to balance the distribution of partitions on drives across the
<para>You can use weights to balance the distribution of partitions on drives across the
cluster. This can be useful, for example, when differently sized drives are used in a
cluster.</para>
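The weight-based balancing works out to a simple proportion. The sketch below shows how many partitions each drive would like given its weight; it is illustrative only (device names and weights are invented), since the real ring builder also accounts for zones, replicas, and minimal partition movement:

```python
# Each device wants partitions in proportion to its weight
# (illustrative sketch, not the actual swift ring-builder code).
part_power = 20
total_parts = 2 ** part_power  # 1,048,576 partitions

devices = {"small-drive": 1.0, "big-drive": 3.0}  # assumed weights
total_weight = sum(devices.values())

desired = {name: total_parts * weight / total_weight
           for name, weight in devices.items()}
print(desired)  # the 3x-weighted drive wants 3x the partitions
```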
<para>The ring is used by the proxy server and several background processes (like
@@ -89,8 +89,8 @@
</figure>
<para>These rings are externally managed, in that the server processes themselves do not
modify the rings, they are instead given new rings modified by other tools.</para>
<para>The ring uses a configurable number of bits from a
paths MD5 hash as a partition index that designates a
<para>The ring uses a configurable number of bits from an
MD5 hash for a path as a partition index that designates a
device. The number of bits kept from the hash is known as
the partition power, and 2 to the partition power
indicates the partition count. Partitioning the full MD5


@@ -122,7 +122,7 @@ self._part_shift</programlisting></para>
assign to each device based on the weight of the
device. For example, for a partition at the power
of 20, the ring has 1,048,576 partitions. One
thousand devices of equal weight will each want
thousand devices of equal weight each want
1,048.576 partitions. The devices are sorted by
the number of partitions they desire and kept in
order throughout the initialization


@@ -274,8 +274,8 @@ cinder type-key platinum set volume_backend_name=hus-2</programlisting>
<literal>svc_1</literal>, <literal>svc_2</literal>,
and <literal>svc_3</literal><footnote
xml:id="hds-no-weight">
<para>There is no relative precedence or weight among
these four labels.</para>
<para>Each of
these four labels has no relative precedence or weight.</para>
</footnote>. Each respective service label associates with
these parameters and tags:</para>
<orderedlist>


@@ -89,18 +89,17 @@
<varlistentry>
<term>scheduler_weight_classes</term>
<listitem>
<para>Weight classes the cells scheduler
should use. By default, uses
"<literal>nova.cells.weights.all_weighers</literal>"
<para>Weight classes that the cells scheduler uses. By default, uses
<literal>nova.cells.weights.all_weighers</literal>
to map to all cells weight algorithms
(weighers) included with Compute.</para>
included with Compute.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ram_weight_multiplier</term>
<listitem>
<para>Multiplier used for weighing ram.
Negative numbers mean you want Compute to
<para>Multiplier used to weight RAM.
Negative numbers indicate that Compute should
stack VMs on one host instead of spreading
out new VMs to more hosts in the cell. The
default value is 10.0.</para>
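The effect of the sign of `ram_weight_multiplier` can be seen in a toy weigher. This is a minimal sketch, not the nova cells weigher code; the host names and free-RAM figures are made up:

```python
# Toy RAM weigher: weight = multiplier * free_ram.  A positive
# multiplier favors the emptiest host (spreading); a negative one
# favors the fullest host (stacking).
hosts = {"host-a": 2048, "host-b": 8192}  # free RAM in MB (assumed)

def best_host(free_ram_by_host, ram_weight_multiplier):
    return max(free_ram_by_host,
               key=lambda h: ram_weight_multiplier * free_ram_by_host[h])

print(best_host(hosts, 10.0))   # host-b: spread onto the emptiest host
print(best_host(hosts, -10.0))  # host-a: stack onto the fullest host
```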
@@ -207,7 +206,7 @@ rabbit_virtual_host=cell1_vhost</programlisting></para>
<varlistentry>
<term><option>scheduler_filter_classes</option></term>
<listitem>
<para>. Specifies the list of filter classes. By
<para>List of filter classes. By
default
<option>nova.cells.filters.all_filters</option>
is specified, which maps to all cells filters
@@ -218,11 +217,11 @@ rabbit_virtual_host=cell1_vhost</programlisting></para>
<varlistentry>
<term><option>scheduler_weight_classes</option></term>
<listitem>
<para>Specifies the list of weight classes. By
<para>List of weight classes. By
default
<option>nova.cells.weights.all_weighers</option>
is specified, which maps to all cell weight
algorithms (weighers) included with Compute.
algorithms included with Compute.
The following modules are available:</para>
<itemizedlist>
<listitem>


@@ -86,7 +86,7 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi
Guide</citetitle>.</para>
<section xml:id="filter-scheduler">
<title>Filter scheduler</title>
<para>The Filter Scheduler
<para>The filter scheduler
(<literal>nova.scheduler.filter_scheduler.FilterScheduler</literal>)
is the default scheduler for scheduling virtual machine
instances. It supports filtering and weighting to make
@@ -96,7 +96,7 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi
<section xml:id="scheduler-filters">
<?dbhtml stop-chunking?>
<title>Filters</title>
<para>When the Filter Scheduler receives a request for a
<para>When the filter scheduler receives a request for a
resource, it first applies filters to determine which
hosts are eligible for consideration when dispatching a
resource. Filters are binary: either a host is accepted by
@@ -607,8 +607,8 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
<section xml:id="weights">
<title>Weights</title>
<?dbhtml stop-chunking?>
<para>When resourcing instances, the Filter Scheduler filters
and weighs each host in the list of acceptable hosts. Each
<para>When resourcing instances, the filter scheduler filters
and weights each host in the list of acceptable hosts. Each
time the scheduler selects a host, it virtually consumes
resources on it, and subsequent selections are adjusted
accordingly. This process is useful when the customer asks
@@ -617,8 +617,8 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
<para>All weights are normalized before being summed up; the
host with the largest weight is given the highest
priority.</para>
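
The normalize-then-sum step this paragraph describes can be sketched as follows (illustrative code under stated assumptions, not the nova implementation; the per-weigher multipliers are an assumed parameter):

```python
# Sketch of weight normalization: each weigher's raw scores are scaled
# to [0, 1], multiplied by that weigher's multiplier, and summed per
# host. The host with the largest total wins. Not nova's actual code.

def normalize(weights):
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return [0.0 for _ in weights]  # all equal: no host is preferred
    return [(w - lo) / (hi - lo) for w in weights]

def total_weights(per_weigher_scores, multipliers):
    # per_weigher_scores: one list of raw scores per weigher,
    # each with one entry per host.
    n_hosts = len(per_weigher_scores[0])
    totals = [0.0] * n_hosts
    for scores, mult in zip(per_weigher_scores, multipliers):
        for i, w in enumerate(normalize(scores)):
            totals[i] += mult * w
    return totals

# Two hosts scored by a RAM weigher and a metrics weigher:
ram_scores = [512, 2048]
metric_scores = [0.9, 0.1]
print(total_weights([ram_scores, metric_scores], [1.0, 0.5]))  # -> [0.5, 1.0]
```

Here the second host wins: its normalized RAM score (1.0, multiplier 1.0) outweighs the first host's normalized metric score (1.0, multiplier 0.5).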
<figure xml:id="figure_weighing-hosts">
<title>Weighing hosts</title>
<figure xml:id="figure_weighting-hosts">
<title>Weighting hosts</title>
<mediaobject>
<imageobject>
<imagedata
@@ -629,7 +629,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
</figure>
<para>If cells are used, cells are weighted by the scheduler
in the same manner as hosts.</para>
<para>Hosts and cells are weighed based on the following
<para>Hosts and cells are weighted based on the following
options in the <filename>/etc/nova/nova.conf</filename>
file:</para>
<table rules="all" xml:id="table_host-weighting-options">
@@ -663,7 +663,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
hosts. This property defines the subset size
from which a host is chosen. A value of 1
chooses the first host returned by the
weighing functions. This value must be at
weighting functions. This value must be at
least 1. A value less than 1 is ignored, and 1
is used instead. Use an integer value.</td>
</tr>
@@ -672,20 +672,20 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
<td><literal>scheduler_weight_classes</literal></td>
<td>Defaults to
<literal>nova.scheduler.weights.all_weighers</literal>,
which selects the only available weigher, the
RamWeigher. Hosts are then weighed and sorted
which selects the
RamWeigher. Hosts are then weighted and sorted
with the largest weight winning.</td>
</tr>
<tr valign="top">
<td>[metrics]</td>
<td><literal>weight_multiplier</literal></td>
<td>Multiplier for weighing metrics. Use a
<td>Multiplier for weighting metrics. Use a
floating-point value.</td>
</tr>
<tr valign="top">
<td>[metrics]</td>
<td><literal>weight_setting</literal></td>
<td>Determines how metrics are weighed. Use a
<td>Determines how metrics are weighted. Use a
comma-separated list of metricName=ratio. For
example: "name1=1.0, name2=-1.0" results in:
<literal>name1.value * 1.0 + name2.value *
@@ -707,7 +707,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
</listitem>
<listitem>
<para>False&mdash;Treated as a
negative factor in the weighing
negative factor in the weighting
process (uses the
<option>weight_of_unavailable</option>
option).</para>
@@ -752,7 +752,7 @@ weight_of_unavailable=-10000.0</programlisting>
<tr valign="top">
<td>[cells]</td>
<td><literal>mute_weight_multiplier</literal></td>
<td>Multiplier to weigh mute children (hosts which
<td>Multiplier to weight mute children (hosts which
have not sent capacity or capacity updates for
some time). Use a negative, floating-point
value.</td>
@@ -767,7 +767,7 @@ weight_of_unavailable=-10000.0</programlisting>
<tr valign="top">
<td>[cells]</td>
<td><literal>offset_weight_multiplier</literal></td>
<td>Multiplier to weigh cells, so you can specify
<td>Multiplier to weight cells, so you can specify
a preferred cell. Use a floating-point
value.</td>
</tr>
@@ -786,8 +786,8 @@ weight_of_unavailable=-10000.0</programlisting>
<td><literal>scheduler_weight_classes</literal></td>
<td>Defaults to
<literal>nova.cells.weights.all_weighers</literal>,
which maps to all cell weighers included with
Compute. Cells are then weighed and sorted
which maps to all cell weighers included with
Compute. Cells are then weighted and sorted
with the largest weight winning.</td>
</tr>
</tbody>
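
To make two of the tabled options concrete, here is an illustrative Python sketch (behaviour inferred from the option descriptions above, not nova's actual code) of subset selection (`scheduler_host_subset_size`) and metrics weighting (`weight_setting`):

```python
import random

# Illustrative sketches of two options from the table above; the
# behaviour is inferred from the documentation, and these helpers
# are hypothetical, not the nova implementations.

def pick_host(weighted_hosts, subset_size):
    """scheduler_host_subset_size: sort hosts by weight (largest
    first), keep the top subset_size entries, pick one at random.
    Values below 1 are ignored and 1 is used instead."""
    if subset_size < 1:
        subset_size = 1
    ranked = sorted(weighted_hosts, key=lambda hw: hw[1], reverse=True)
    return random.choice(ranked[:subset_size])[0]

def parse_weight_setting(setting):
    """weight_setting: parse 'name1=1.0, name2=-1.0' into ratios."""
    ratios = {}
    for pair in setting.split(","):
        name, ratio = pair.split("=")
        ratios[name.strip()] = float(ratio)
    return ratios

def metrics_weight(metric_values, setting, multiplier=1.0):
    # name1.value * ratio1 + name2.value * ratio2 + ...
    ratios = parse_weight_setting(setting)
    return multiplier * sum(metric_values[name] * ratio
                            for name, ratio in ratios.items())

hosts = [("node1", 0.5), ("node2", 0.9), ("node3", 0.7)]
print(pick_host(hosts, 1))  # subset of 1: always the top host -> node2
print(metrics_weight({"name1": 10.0, "name2": 4.0},
                     "name1=1.0, name2=-1.0"))  # 10.0*1.0 + 4.0*-1.0 -> 6.0
```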
@@ -803,7 +803,7 @@ offset_weight_multiplier=1.0</programlisting>
<section xml:id="chance-scheduler">
<title>Chance scheduler</title>
<?dbhtml stop-chunking?>
<para>As an administrator, you work with the Filter Scheduler.
<para>As an administrator, you work with the filter scheduler.
However, the Compute service also uses the Chance
Scheduler,
<literal>nova.scheduler.chance.ChanceScheduler</literal>,


@@ -7719,8 +7719,8 @@
<title>W</title>
<glossentry>
<glossterm>weighing<indexterm class="singular">
<primary>weighing</primary>
<glossterm>weighting<indexterm class="singular">
<primary>weighting</primary>
</indexterm></glossterm>
<glossdef>