From 7e7cb4d0ec13229be12a1532968e4940a0525d5d Mon Sep 17 00:00:00 2001
From: Yunhong Jiang
Date: Fri, 20 Jun 2014 11:14:06 -0700
Subject: [PATCH] Update the usage of unscoped extra specs

Although unscoped extra specs are supported in
AggregateInstanceExtraSpecsFilter and ComputeCapabilitiesFilter, their use
should not be encouraged, because it causes conflicts when both filters
are enabled.

Closes-Bug: 1330962
Change-Id: I2b35bf85f0c2e8c8c386bf0c4bed1e6c21cea453
---
 .../compute/section_compute-scheduler.xml | 528 ++++++++++--------
 1 file changed, 295 insertions(+), 233 deletions(-)

diff --git a/doc/config-reference/compute/section_compute-scheduler.xml b/doc/config-reference/compute/section_compute-scheduler.xml
index 9a52cd2822..bd5b46db02 100644
--- a/doc/config-reference/compute/section_compute-scheduler.xml
+++ b/doc/config-reference/compute/section_compute-scheduler.xml
@@ -6,30 +6,30 @@
xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0">

Scheduling

Compute uses the nova-scheduler service to determine how to dispatch compute and volume requests. For example, the nova-scheduler service determines which host a VM should launch on. The term host means a physical node that has a nova-compute service running on it. You can configure the scheduler through a variety of options.

Compute is configured with the following default scheduler options in the /etc/nova/nova.conf file:

scheduler_driver=nova.scheduler.multi.MultiScheduler
scheduler_driver_task_period=60
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

By default, the scheduler_driver is configured as a filter scheduler, as described in the next section. In the default configuration, this scheduler considers hosts that meet all the following criteria:

Have not been attempted for scheduling purposes

@@ -44,7 +44,7 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi

(RamFilter).

Can service the request (ComputeFilter).

@@ -59,30 +59,31 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi

(ImagePropertiesFilter).

The scheduler caches its list of available hosts; use the scheduler_driver_task_period option to specify how often the list is updated.

Do not configure service_down_time to be much smaller than scheduler_driver_task_period; otherwise, hosts appear to be dead while the host list is being cached.

For information about the volume scheduler, see the Block Storage section of the OpenStack Cloud Administrator Guide.

The scheduler chooses a new host when an instance is migrated.

When evacuating instances from a host, the scheduler service does not pick the next host. Instances are evacuated to the host explicitly defined by the administrator. For information about instance evacuation, see the Evacuate instances section of the OpenStack Cloud Administrator Guide.
Filter scheduler

The Filter Scheduler

@@ -102,18 +103,17 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi

the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a different algorithm to decide which hosts to use for that request, described in the Weights section.

Figure: Filtering
The scheduler_available_filters configuration option in nova.conf provides the Compute service with the list of the filters

@@ -130,10 +130,12 @@

contain:

scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=myfilter.MyFilter

The scheduler_default_filters configuration option in nova.conf defines the list of filters that are applied by the nova-scheduler service. The default filters are:

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

The following sections describe the available filters.

@@ -147,33 +149,35 @@
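For instance, a minimal nova.conf sketch (not part of the original patch) that keeps all built-in filters available, registers the illustrative custom filter myfilter.MyFilter mentioned above, and enables it alongside a trimmed default list might look like this:

scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=myfilter.MyFilter
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,MyFilter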
AggregateImagePropertiesIsolation

Matches properties defined in an image's metadata against those of aggregates to determine host matches:

If a host belongs to an aggregate and the aggregate defines one or more metadata items that match an image's properties, that host is a candidate to boot the image's instance.

If a host does not belong to any aggregate, it can boot instances from all images.

For example, the following aggregate myWinAgg has the Windows operating system as metadata (named 'windows'):

$ nova aggregate-details MyWinAgg
+----+----------+-------------------+------------+---------------+
| Id | Name     | Availability Zone | Hosts      | Metadata      |
+----+----------+-------------------+------------+---------------+
| 1  | MyWinAgg | None              | 'sf-devel' | 'os=windows'  |
+----+----------+-------------------+------------+---------------+

In this example, because the following Win-2012 image has the windows property, it boots on the sf-devel host (all other filters being equal):

$ glance image-show Win-2012
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| container_format | ami                                  |
| created_at       | 2013-11-14T13:24:25                  |
| ...

You can configure the AggregateImagePropertiesIsolation filter by using the following options in the nova.conf file:

# Considers only keys matching the given namespace (string).
aggregate_image_properties_isolation_namespace=<None>

# Separator used between the namespace and keys (string).
aggregate_image_properties_isolation_separator=.
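A minimal sketch of how such a pairing could be created, assuming the standard nova and glance command-line clients and reusing the MyWinAgg aggregate (ID 1) and the Win-2012 image from the example above; the os property name is illustrative:

$ nova aggregate-create MyWinAgg
$ nova aggregate-add-host 1 sf-devel
$ nova aggregate-set-metadata 1 os=windows
$ glance image-update Win-2012 --property os=windows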
AggregateInstanceExtraSpecsFilter

Matches properties defined in extra specs for an instance type against admin-defined properties on a host aggregate. Works with specifications that are scoped with aggregate_instance_extra_specs. For backward compatibility, it also works with non-scoped specifications; this is highly discouraged because it conflicts with the ComputeCapabilitiesFilter filter when you enable both filters. For information about how to use this filter, see the host aggregates section.
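As an illustration of the scoped form that this patch recommends, the following hypothetical sketch tags aggregate 1 with ssd=true and gives a flavor named ssd.large a matching extra spec carrying the aggregate_instance_extra_specs scope:

$ nova aggregate-set-metadata 1 ssd=true
$ nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true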
AggregateMultiTenancyIsolation

Isolates tenants to specific host aggregates. If a host is in an aggregate that has the filter_tenant_id metadata key, the host creates instances from only that tenant or list of tenants. A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, the host can create instances from all tenants.
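For example, a sketch that reserves the hosts in a hypothetical aggregate 2 for a single tenant; the tenant ID is a placeholder:

$ nova aggregate-set-metadata 2 filter_tenant_id=TENANT_ID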
AggregateRamFilter @@ -224,40 +235,46 @@ aggregate_image_properties_isolation_separator=.
AllHostsFilter

This is a no-op filter. It does not eliminate any of the available hosts.
AvailabilityZoneFilter

Filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests.
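With this filter enabled, a request such as the following sketch is only satisfied by hosts in the requested zone; the image, flavor, and zone names are placeholders:

$ nova boot --image IMAGE_ID --flavor 1 --availability-zone ZONE_NAME server-1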
ComputeCapabilitiesFilter

Matches properties defined in extra specs for an instance type against compute capabilities. If an extra specs key contains a colon (:), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace is present and is not capabilities, the filter ignores the namespace. For backward compatibility, the filter also treats the extra specs key as the key to be matched if no namespace is present; this is highly discouraged because it conflicts with the AggregateInstanceExtraSpecsFilter filter when you enable both filters.
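A sketch of the scoped syntax described above: the capabilities prefix is the namespace this filter recognizes, while the specific key and value shown here are only illustrative and depend on what your hypervisor reports.

$ nova flavor-key m1.medium set capabilities:hypervisor_type=QEMU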
ComputeFilter

Passes all hosts that are operational and enabled. In general, you should always enable this filter.
CoreFilter

Only schedules instances on hosts if sufficient CPU cores are available. If this filter is not set, the scheduler might over-provision a host based on cores. For example, the virtual cores running on an instance may exceed the physical cores.

You can configure this filter to enable a fixed amount of vCPU overcommitment by using the cpu_allocation_ratio configuration option in nova.conf.

@@ -270,22 +287,23 @@ aggregate_image_properties_isolation_separator=.

To disallow vCPU overcommitment set:

cpu_allocation_ratio=1.0

The Compute API always returns the actual number of CPU cores available on a compute node regardless of the value of the cpu_allocation_ratio configuration key. As a result, changes to cpu_allocation_ratio are not reflected via the command-line clients or the dashboard. Changes to this configuration key are only taken into account internally in the scheduler.
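As a sketch of the arithmetic, the hypothetical setting below would let a compute node with 8 physical cores accept instances totalling up to 16 vCPUs before this filter rejects further requests:

# 8 physical cores * 2.0 = 16 schedulable vCPUs
cpu_allocation_ratio=2.0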
DifferentHostFilter

Schedules the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using different_host as the key and a list of instance UUIDs as the value. This filter is the opposite of the SameHostFilter.

@@ -305,7 +323,7 @@ aggregate_image_properties_isolation_separator=.

Only schedules instances on hosts if there is sufficient disk space available for root and ephemeral storage.

You can configure this filter to enable a fixed amount of disk overcommitment by using the disk_allocation_ratio configuration option in nova.conf.

@@ -315,14 +333,16 @@ aggregate_image_properties_isolation_separator=.

Adjusting this value to greater than 1.0 enables scheduling instances while overcommitting disk resources on the node. This might be desirable if you use an image format that is sparse or copy on write so that each virtual instance does not require a 1:1 allocation of virtual disk to physical storage.
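For example, a hypothetical setting that lets the scheduler hand out roughly twice the physical disk available on a node:

# Values above 1.0 overcommit disk; 1.0 disables overcommitment.
disk_allocation_ratio=2.0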
GroupAffinityFilter

This filter is deprecated in favor of ServerGroupAffinityFilter.

The GroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To

@@ -333,15 +353,17 @@ aggregate_image_properties_isolation_separator=.

--hint flag. For example:

$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1

This filter should not be enabled at the same time as GroupAntiAffinityFilter or neither filter will work properly.
GroupAntiAffinityFilter

This filter is deprecated in favor of ServerGroupAntiAffinityFilter.

The GroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take

@@ -352,9 +374,10 @@ aggregate_image_properties_isolation_separator=.

--hint flag. For example:

$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1

This filter should not be enabled at the same time as GroupAffinityFilter or neither filter will work properly.
ImagePropertiesFilter

@@ -407,10 +430,9 @@ aggregate_image_properties_isolation_separator=.

and hosts in the nova.conf file using the isolated_hosts and isolated_images configuration options. For example:

isolated_hosts=server1,server2
isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
JsonFilter

@@ -476,16 +498,15 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1

the scheduler may over provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM).

You can configure this filter to enable a fixed amount of RAM overcommitment by using the ram_allocation_ratio configuration option in nova.conf. The default setting is:

ram_allocation_ratio=1.5

This setting enables 1.5 GB instances to run on any compute node with 1 GB of free RAM.
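As a sketch of the arithmetic with the default shown above, a compute node with 8 GB of free RAM is treated as having 12 GB of schedulable RAM:

# 8 GB free RAM * 1.5 = 12 GB that the scheduler may hand out
ram_allocation_ratio=1.5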
RetryFilter @@ -520,12 +541,13 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
ServerGroupAffinityFilter

The ServerGroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must create a server group with an affinity policy, and pass a scheduler hint, using group as the key and the server group UUID as the value. Using the nova command-line tool, use the --hint flag. For example:
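The boot command that normally follows is not shown in this excerpt; a sketch of the usual workflow looks like this, where group-1 is a placeholder name and, depending on your python-novaclient version, the policy may be passed positionally instead of with --policy:

$ nova server-group-create --policy affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1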
ServerGroupAntiAffinityFilter

The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must create a server group with an anti-affinity policy, and pass a scheduler hint, using group as the key and the server group UUID as the value. Using the nova command-line tool, use the --hint flag. For

@@ -548,10 +571,10 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
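Likewise, a sketch for the anti-affinity case, with the same placeholder caveats as above:

$ nova server-group-create --policy anti-affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1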
SimpleCIDRAffinityFilter

Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP addresses in CIDR format, by passing two scheduler hints:

build_near_host_ip

@@ -584,26 +607,33 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
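The second hint and the example command are cut off by the diff; a sketch of a request constrained to the 192.168.1.0/24 subnet might look like the following, assuming cidr as the name of the second hint and placeholder values throughout:

$ nova boot --image IMAGE_ID --flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1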
Weights

When resourcing instances, the Filter Scheduler filters and weighs each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when the customer asks for a large number of instances in the same request, because a weight is computed for each requested instance.

All weights are normalized before being summed up; the host with the largest weight is given the highest priority.

Figure: Weighing hosts

If cells are used, cells are weighted by the scheduler in the same manner as hosts.

Hosts and cells are weighed based on the following options in the /etc/nova/nova.conf file:

@@ -618,58 +648,81 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1

Host weighting options

[DEFAULT] ram_weight_multiplier: By default, the scheduler spreads instances across all hosts evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.

[DEFAULT] scheduler_host_subset_size: New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighing functions. This value must be at least 1. A value less than 1 is ignored, and 1 is used instead. Use an integer value.

[DEFAULT] scheduler_weight_classes: Defaults to nova.scheduler.weights.all_weighers, which selects the only available weigher, the RamWeigher. Hosts are then weighed and sorted with the largest weight winning.

[metrics] weight_multiplier: Multiplier for weighing metrics. Use a floating-point value.

[metrics] weight_setting: Determines how metrics are weighed. Use a comma-separated list of metricName=ratio pairs. For example, "name1=1.0, name2=-1.0" results in: name1.value * 1.0 + name2.value * -1.0

[metrics] required: Specifies how to treat unavailable metrics. True: raises an exception. To avoid the raised exception, use the MetricFilter scheduler filter to filter out hosts with unavailable metrics. False: treated as a negative factor in the weighing process (uses the weight_of_unavailable option).

[metrics] weight_of_unavailable: If required is set to False, and any one of the metrics set by weight_setting is unavailable, the weight_of_unavailable value is returned to the scheduler.
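For example, a hypothetical nova.conf sketch that mildly prefers stacking over spreading, widens the random subset to two hosts, and keeps the default weigher:

[DEFAULT]
scheduler_weight_classes=nova.scheduler.weights.all_weighers
ram_weight_multiplier=-1.0
scheduler_host_subset_size=2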
@@ -684,52 +737,61 @@
weight_setting=name1=1.0, name2=-1.0
required=false
weight_of_unavailable=-10000.0
Cell weighting options

[cells] mute_weight_multiplier: Multiplier to weigh mute children (hosts which have not sent capacity or capacity updates for some time). Use a negative, floating-point value.

[cells] mute_weight_value: Weight value assigned to mute children. Use a positive, floating-point value with a maximum of '1.0'.

[cells] offset_weight_multiplier: Multiplier to weigh cells, so you can specify a preferred cell. Use a floating-point value.

[cells] ram_weight_multiplier: By default, the scheduler spreads instances across all cells evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.

[cells] scheduler_weight_classes: Defaults to nova.cells.weights.all_weighers, which maps to all cell weighers included with Compute. Cells are then weighed and sorted with the largest weight winning.

For example:

[cells]
scheduler_weight_classes=nova.cells.weights.all_weighers

@@ -752,8 +814,8 @@
offset_weight_multiplier=1.0

href="../../common/section_cli_nova_host_aggregates.xml"/>
Configuration reference

To customize the Compute scheduler, use the configuration option settings documented in the table of Compute scheduler configuration options.