conf: Group scheduler options

Move all scheduler options into one of two new groups, 'scheduler' and
'filter_scheduler'. Many of the options are simply renamed to remove
their 'scheduler_' prefix, with some exceptions:

* scheduler_default_filters -> enabled_filters
* scheduler_baremetal_default_filters -> baremetal_enabled_filters
* scheduler_driver_task_period -> periodic_task_interval
* scheduler_tracks_instance_changes -> track_instance_changes

Change-Id: I3f48e52815e80c99612bcd10cb53331a8c995fc3
Co-Authored-By: Stephen Finucane <sfinucan@redhat.com>
Implements: blueprint centralize-config-options-ocata
Alexis Lee 2016-08-02 13:08:54 +01:00 committed by Stephen Finucane
parent f57cc519fd
commit 7d0381c91a
41 changed files with 287 additions and 190 deletions
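In nova.conf terms, the regrouping looks like this (a sketch; the option
values are illustrative, and the old [DEFAULT] spellings keep working as
deprecated aliases via the deprecated_name/deprecated_group wiring in the
hunks below):

    # Before this change
    [DEFAULT]
    scheduler_driver = filter_scheduler
    scheduler_max_attempts = 3
    scheduler_default_filters = RetryFilter,ComputeFilter

    # After this change
    [scheduler]
    driver = filter_scheduler
    max_attempts = 3

    [filter_scheduler]
    enabled_filters = RetryFilter,ComputeFilter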


@@ -318,17 +318,17 @@ Configuring Filters
 To use filters you specify two settings:
-* ``scheduler_available_filters`` - Defines filter classes made available to the
-  scheduler. This setting can be used multiple times.
-* ``scheduler_default_filters`` - Of the available filters, defines those that
-  the scheduler uses by default.
+* ``filter_scheduler.available_filters`` - Defines filter classes made
+  available to the scheduler. This setting can be used multiple times.
+* ``filter_scheduler.enabled_filters`` - Of the available filters, defines
+  those that the scheduler uses by default.
 The default values for these settings in nova.conf are:
 ::
-    --scheduler_available_filters=nova.scheduler.filters.all_filters
-    --scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter'
+    --filter_scheduler.available_filters=nova.scheduler.filters.all_filters
+    --filter_scheduler.enabled_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
 With this configuration, all filters in ``nova.scheduler.filters``
 would be available, and by default the |RamFilter|, |ComputeFilter|,
@@ -350,10 +350,10 @@ settings:
 ::
-    --scheduler_driver=nova.scheduler.FilterScheduler
-    --scheduler_available_filters=nova.scheduler.filters.all_filters
-    --scheduler_available_filters=myfilter.MyFilter
-    --scheduler_default_filters=RamFilter,ComputeFilter,MyFilter
+    --scheduler.driver=nova.scheduler.FilterScheduler
+    --filter_scheduler.available_filters=nova.scheduler.filters.all_filters
+    --filter_scheduler.available_filters=myfilter.MyFilter
+    --filter_scheduler.enabled_filters=RamFilter,ComputeFilter,MyFilter
 .. note:: When writing your own filter, be sure to add it to the list of available filters
    and enable it in the default filters. The "all_filters" setting only includes the
@@ -364,15 +364,15 @@ driver. The standard nova filters and MyFilter are available to the
 FilterScheduler. The RamFilter, ComputeFilter, and MyFilter are used by
 default when no filters are specified in the request.
-Each filter selects hosts in a different way and has different costs. The order of
-``scheduler_default_filters`` affects scheduling performance. The general suggestion
-is to filter out invalid hosts as soon as possible to avoid unnecessary costs.
-We can sort ``scheduler_default_filters`` items by their costs in reverse order.
-For example, ComputeFilter is better before any resource calculating filters
-like RamFilter, CoreFilter.
-In medium/large environments having AvailabilityZoneFilter before any capability or
-resource calculating filters can be useful.
+Each filter selects hosts in a different way and has different costs. The order
+of ``filter_scheduler.enabled_filters`` affects scheduling performance. The
+general suggestion is to filter out invalid hosts as soon as possible to avoid
+unnecessary costs. We can sort ``filter_scheduler.enabled_filters`` items by
+their costs in reverse order. For example, ComputeFilter is better before any
+resource calculating filters like RamFilter, CoreFilter.
+In medium/large environments having AvailabilityZoneFilter before any
+capability or resource calculating filters can be useful.
 Weights
 -------
@@ -396,7 +396,7 @@ and not modify the weight of the object directly, since final weights are normalized
 and computed by ``weight.BaseWeightHandler``.
 The Filter Scheduler weighs hosts based on the config option
-`scheduler_weight_classes`, this defaults to
+`filter_scheduler.weight_classes`, this defaults to
 `nova.scheduler.weights.all_weighers`, which selects the following weighers:
 * |RAMWeigher| Compute weight based on available RAM on the compute node.
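To make the ``myfilter.MyFilter`` example above concrete, here is a minimal
sketch of such a custom filter (the module, class, and threshold are the
hypothetical ones from the docs; ``host_passes`` is the hook that
``BaseHostFilter`` subclasses override, as the filters touched later in this
change show):

    # myfilter.py - hypothetical module from the docs example above
    from nova.scheduler import filters


    class MyFilter(filters.BaseHostFilter):
        """Toy filter: only pass hosts with more than 1 GB of free disk."""

        def host_passes(self, host_state, spec_obj):
            # host_state is the scheduler's in-memory view of one host;
            # spec_obj is the RequestSpec being scheduled.
            return host_state.free_disk_mb > 1024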


@@ -295,8 +295,8 @@ Set the following parameters:
 .. code-block:: bash
-   [DEFAULT]
-   scheduler_default_filters=RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, NUMATopologyFilter
+   [filter_scheduler]
+   enabled_filters=RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, NUMATopologyFilter
    [libvirt]
    virt_type = kvm


@@ -515,7 +515,8 @@ class ComputeManager(manager.Manager):
         self._sync_power_pool = eventlet.GreenPool(
             size=CONF.sync_power_state_pool_size)
         self._syncs_in_progress = {}
-        self.send_instance_updates = CONF.scheduler_tracks_instance_changes
+        self.send_instance_updates = (
+            CONF.filter_scheduler.track_instance_changes)
         if CONF.max_concurrent_builds != 0:
             self._build_semaphore = eventlet.semaphore.Semaphore(
                 CONF.max_concurrent_builds)


@@ -437,7 +437,7 @@ interval_opts = [
                help='Waiting time interval (seconds) between sending the '
                     'scheduler a list of current instance UUIDs to verify '
                     'that its view of instances is in sync with nova. If the '
-                    'CONF option `scheduler_tracks_instance_changes` is '
+                    'CONF option `filter_scheduler.track_instance_changes` is '
                     'False, changing this option will have no effect.'),
     cfg.IntOpt('update_resources_interval',
                default=0,


@@ -1,5 +1,3 @@
-# needs:check_opt_group_and_type
-
 # Copyright 2015 OpenStack Foundation
 # All Rights Reserved.
 #
@@ -17,7 +15,7 @@
 from oslo_config import cfg
-scheduler_opts = [
+default_opts = [
     cfg.StrOpt("scheduler_topic",
                default="scheduler",
                deprecated_for_removal=True,
@@ -39,8 +37,8 @@ Possible values:
 * A valid AMQP topic name
 """),
-    cfg.StrOpt(
-        "scheduler_json_config_location",
+    # TODO(sfinucan): Deprecate this option
+    cfg.StrOpt("scheduler_json_config_location",
         default="",
         help="""
 The absolute path to the scheduler configuration JSON file, if any.
@@ -53,10 +51,17 @@ dynamic configuration.
 Possible values:
 * A valid file path, or an empty string
-"""),
-    cfg.StrOpt("scheduler_host_manager",
+""")]
+
+scheduler_group = cfg.OptGroup(name="scheduler",
+                               title="Scheduler configuration")
+
+scheduler_opts = [
+    cfg.StrOpt("host_manager",
         default="host_manager",
         choices=("host_manager", "ironic_host_manager"),
+        deprecated_name="scheduler_host_manager",
+        deprecated_group="DEFAULT",
         help="""
 The scheduler host manager to use.
@@ -64,10 +69,12 @@ The host manager manages the in-memory picture of the hosts that the scheduler
 uses. The options values are chosen from the entry points under the namespace
 'nova.scheduler.host_manager' in 'setup.cfg'.
 """),
-    cfg.StrOpt("scheduler_driver",
+    cfg.StrOpt("driver",
         default="filter_scheduler",
         choices=("filter_scheduler", "caching_scheduler",
                  "chance_scheduler", "fake_scheduler"),
+        deprecated_name="scheduler_driver",
+        deprecated_group="DEFAULT",
         help="""
 The class of the driver used by the scheduler.
@@ -86,9 +93,10 @@ Possible values:
 ** A custom scheduler driver. In this case, you will be responsible for
    creating and maintaining the entry point in your 'setup.cfg' file
 """),
-    # TODO(sfinucan): Add 'min' paramter
-    cfg.IntOpt("scheduler_driver_task_period",
+    cfg.IntOpt("periodic_task_interval",
         default=60,
+        deprecated_name="scheduler_driver_task_period",
+        deprecated_group="DEFAULT",
         help="""
 Periodic task interval.
@@ -111,9 +119,11 @@ Related options:
 * ``nova-service service_down_time``
 """),
-    cfg.IntOpt("scheduler_max_attempts",
+    cfg.IntOpt("max_attempts",
         default=3,
         min=1,
+        deprecated_name="scheduler_max_attempts",
+        deprecated_group="DEFAULT",
         help="""
 Maximum number of schedule attempts for a chosen host.
@@ -129,10 +139,15 @@ Possible values:
   attempts that can be made when scheduling an instance.
 """)]
+
+filter_scheduler_group = cfg.OptGroup(name="filter_scheduler",
+                                      title="Filter scheduler options")
+
 filter_scheduler_opts = [
     # TODO(sfinucan): Add 'min' paramter
-    cfg.IntOpt("scheduler_host_subset_size",
+    cfg.IntOpt("host_subset_size",
         default=1,
+        deprecated_name="scheduler_host_subset_size",
+        deprecated_group="DEFAULT",
         help="""
 Size of subset of best hosts selected by scheduler.
@@ -172,8 +187,10 @@ Possible values:
 * An integer, where the integer corresponds to the max number of instances
   that can be actively performing IO on any given host.
 """),
+    # TODO(sfinucan): Add 'min' parameter
     cfg.IntOpt("max_instances_per_host",
         default=50,
+        deprecated_group="DEFAULT",
         help="""
 Maximum number of instances that be active on a host.
@@ -191,8 +208,10 @@ Possible values:
 * An integer, where the integer corresponds to the max instances that can be
   scheduled on a host.
 """),
-    cfg.BoolOpt("scheduler_tracks_instance_changes",
+    cfg.BoolOpt("track_instance_changes",
         default=True,
+        deprecated_name="scheduler_tracks_instance_changes",
+        deprecated_group="DEFAULT",
         help="""
 Enable querying of individual hosts for instance information.
@@ -209,13 +228,15 @@ usage data to query the database on each request instead.
 This option is only used by the FilterScheduler and its subclasses; if you use
 a different scheduler, this option has no effect.
 """),
-    cfg.MultiStrOpt("scheduler_available_filters",
+    cfg.MultiStrOpt("available_filters",
         default=["nova.scheduler.filters.all_filters"],
+        deprecated_name="scheduler_available_filters",
+        deprecated_group="DEFAULT",
         help="""
 Filters that the scheduler can use.
 An unordered list of the filter classes the nova scheduler may apply. Only the
-filters specified in the 'scheduler_default_filters' option will be used, but
+filters specified in the 'scheduler_enabled_filters' option will be used, but
 any filter appearing in that option must also be included in this list.
 By default, this is set to all filters that are included with nova.
@@ -230,9 +251,9 @@ Possible values:
 Related options:
-* scheduler_default_filters
+* scheduler_enabled_filters
 """),
-    cfg.ListOpt("scheduler_default_filters",
+    cfg.ListOpt("enabled_filters",
         default=[
             "RetryFilter",
             "AvailabilityZoneFilter",
@@ -244,6 +265,8 @@ Related options:
             "ServerGroupAntiAffinityFilter",
             "ServerGroupAffinityFilter",
             ],
+        deprecated_name="scheduler_default_filters",
+        deprecated_group="DEFAULT",
         help="""
 Filters that the scheduler will use.
@@ -267,7 +290,7 @@ Related options:
 'scheduler_available_filters' option, or a SchedulerHostFilterNotFound
 exception will be raised.
 """),
-    cfg.ListOpt("baremetal_scheduler_default_filters",
+    cfg.ListOpt("baremetal_enabled_filters",
         default=[
             "RetryFilter",
             "AvailabilityZoneFilter",
@@ -278,6 +301,8 @@ Related options:
             "ExactDiskFilter",
             "ExactCoreFilter",
             ],
+        deprecated_name="baremetal_scheduler_default_filters",
+        deprecated_group="DEFAULT",
         help="""
 Filters used for filtering baremetal hosts.
@@ -297,13 +322,15 @@ Related options:
 * If the 'scheduler_use_baremetal_filters' option is False, this option has
   no effect.
 """),
-    cfg.BoolOpt("scheduler_use_baremetal_filters",
+    cfg.BoolOpt("use_baremetal_filters",
+        deprecated_name="scheduler_use_baremetal_filters",
+        deprecated_group="DEFAULT",
         default=False,
         help="""
 Enable baremetal filters.
 Set this to True to tell the nova scheduler that it should use the filters
-specified in the 'baremetal_scheduler_default_filters' option. If you are not
+specified in the 'baremetal_scheduler_enabled_filters' option. If you are not
 scheduling baremetal nodes, leave this at the default setting of False.
 This option is only used by the FilterScheduler and its subclasses; if you use
@@ -312,11 +339,13 @@ a different scheduler, this option has no effect.
 Related options:
 * If this option is set to True, then the filters specified in the
-  'baremetal_scheduler_default_filters' are used instead of the filters
-  specified in 'scheduler_default_filters'.
+  'baremetal_scheduler_enabled_filters' are used instead of the filters
+  specified in 'scheduler_enabled_filters'.
 """),
-    cfg.ListOpt("scheduler_weight_classes",
+    cfg.ListOpt("weight_classes",
         default=["nova.scheduler.weights.all_weighers"],
+        deprecated_name="scheduler_weight_classes",
+        deprecated_group="DEFAULT",
         help="""
 Weighers that the scheduler will use.
@@ -338,6 +367,7 @@ Possible values:
 """),
     cfg.FloatOpt("ram_weight_multiplier",
         default=1.0,
+        deprecated_group="DEFAULT",
         help="""
 Ram weight multipler ratio.
@@ -361,6 +391,7 @@ Possible values:
 """),
     cfg.FloatOpt("disk_weight_multiplier",
         default=1.0,
+        deprecated_group="DEFAULT",
         help="""
 Disk weight multipler ratio.
@@ -378,6 +409,7 @@ Possible values:
 """),
     cfg.FloatOpt("io_ops_weight_multiplier",
         default=-1.0,
+        deprecated_group="DEFAULT",
         help="""
 IO operations weight multipler ratio.
@@ -401,6 +433,7 @@ Possible values:
 """),
     cfg.FloatOpt("soft_affinity_weight_multiplier",
         default=1.0,
+        deprecated_group="DEFAULT",
         help="""
 Multiplier used for weighing hosts for group soft-affinity.
@@ -413,6 +446,7 @@ Possible values:
     cfg.FloatOpt(
         "soft_anti_affinity_weight_multiplier",
         default=1.0,
+        deprecated_group="DEFAULT",
         help="""
 Multiplier used for weighing hosts for group soft-anti-affinity.
@@ -423,10 +457,10 @@ Possible values:
   meaningful, as negative values would make this behave as a soft affinity
   weigher.
 """),
 # TODO(mikal): replace this option with something involving host aggregates
     cfg.ListOpt("isolated_images",
         default=[],
+        deprecated_group="DEFAULT",
         help="""
 List of UUIDs for images that can only be run on certain hosts.
@@ -449,6 +483,7 @@ Related options:
 """),
     cfg.ListOpt("isolated_hosts",
         default=[],
+        deprecated_group="DEFAULT",
         help="""
 List of hosts that can only run certain images.
@@ -471,6 +506,7 @@ Related options:
     cfg.BoolOpt(
         "restrict_isolated_hosts_to_isolated_images",
         default=True,
+        deprecated_group="DEFAULT",
         help="""
 Prevent non-isolated images from being built on isolated hosts.
@@ -487,6 +523,7 @@ Related options:
 """),
     cfg.StrOpt(
         "aggregate_image_properties_isolation_namespace",
+        deprecated_group="DEFAULT",
         help="""
 Image property namespace for use in the host aggregate.
@@ -513,6 +550,7 @@ Related options:
     cfg.StrOpt(
         "aggregate_image_properties_isolation_separator",
         default=".",
+        deprecated_group="DEFAULT",
         help="""
 Separator character(s) for image property namespace and name.
@@ -523,8 +561,8 @@ separator. This option defines the separator to be used.
 This option is only used by the FilterScheduler and its subclasses; if you use
 a different scheduler, this option has no effect. Also note that this setting
-only affects scheduling if the 'aggregate_image_properties_isolation' filter is
-enabled.
+only affects scheduling if the 'aggregate_image_properties_isolation' filter
+is enabled.
 Possible values:
@@ -534,8 +572,7 @@ Possible values:
 Related options:
 * aggregate_image_properties_isolation_namespace
-"""),
-]
+""")]
 trust_group = cfg.OptGroup(name="trusted_computing",
                            title="Trust parameters",
@@ -717,7 +754,6 @@ Related options:
 """),
 ]
-
 metrics_group = cfg.OptGroup(name="metrics",
                              title="Metrics parameters",
                              help="""
@@ -727,7 +763,6 @@ Options under this group allow to adjust how values assigned to metrics are
 calculated.
 """)
-
 metrics_weight_opts = [
     cfg.FloatOpt("weight_multiplier",
         default=1.0,
@@ -840,7 +875,13 @@ Related options:
 def register_opts(conf):
-    conf.register_opts(scheduler_opts + filter_scheduler_opts)
+    conf.register_opts(default_opts)
+    conf.register_group(scheduler_group)
+    conf.register_opts(scheduler_opts, group=scheduler_group)
+    conf.register_group(filter_scheduler_group)
+    conf.register_opts(filter_scheduler_opts, group=filter_scheduler_group)
     conf.register_group(trust_group)
     conf.register_opts(trusted_opts, group=trust_group)
@@ -850,6 +891,7 @@ def register_opts(conf):
 def list_opts():
-    return {"DEFAULT": scheduler_opts + filter_scheduler_opts,
+    return {scheduler_group: scheduler_opts,
+            filter_scheduler_group: filter_scheduler_opts,
             trust_group: trusted_opts,
             metrics_group: metrics_weight_opts}
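The deprecated_name/deprecated_group arguments threaded through the hunks
above are what keep pre-Ocata configuration files working. A self-contained
sketch of the mechanism using only oslo.config (the option mirrors
scheduler.max_attempts from this change; the printed value is just its
default):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    scheduler_group = cfg.OptGroup(name='scheduler',
                                   title='Scheduler configuration')
    conf.register_group(scheduler_group)
    conf.register_opts([
        cfg.IntOpt('max_attempts',
                   default=3,
                   min=1,
                   deprecated_name='scheduler_max_attempts',
                   deprecated_group='DEFAULT'),
    ], group=scheduler_group)

    # Parse with no CLI args or config files. A file that still sets
    # [DEFAULT] scheduler_max_attempts is transparently mapped onto the
    # new [scheduler] max_attempts option.
    conf(args=[], default_config_files=[])
    print(conf.scheduler.max_attempts)  # -> 3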


@@ -38,7 +38,7 @@ class Scheduler(object):
     def __init__(self):
         self.host_manager = driver.DriverManager(
                 "nova.scheduler.host_manager",
-                CONF.scheduler_host_manager,
+                CONF.scheduler.host_manager,
                 invoke_on_load=True).driver
         self.servicegroup_api = servicegroup.API()


@@ -122,10 +122,9 @@ class FilterScheduler(driver.Scheduler):
             LOG.debug("Weighed %(hosts)s", {'hosts': weighed_hosts})
-            scheduler_host_subset_size = max(1,
-                CONF.scheduler_host_subset_size)
-            if scheduler_host_subset_size < len(weighed_hosts):
-                weighed_hosts = weighed_hosts[0:scheduler_host_subset_size]
+            host_subset_size = max(1, CONF.filter_scheduler.host_subset_size)
+            if host_subset_size < len(weighed_hosts):
+                weighed_hosts = weighed_hosts[0:host_subset_size]
             chosen_host = random.choice(weighed_hosts)
             LOG.debug("Selected host: %(host)s", {'host': chosen_host})


@@ -36,8 +36,10 @@ class AggregateImagePropertiesIsolation(filters.BaseHostFilter):
         """Checks a host in an aggregate that metadata key/value match
         with image properties.
         """
-        cfg_namespace = CONF.aggregate_image_properties_isolation_namespace
-        cfg_separator = CONF.aggregate_image_properties_isolation_separator
+        cfg_namespace = (CONF.filter_scheduler.
+                         aggregate_image_properties_isolation_namespace)
+        cfg_separator = (CONF.filter_scheduler.
+                         aggregate_image_properties_isolation_separator)
         image_props = spec_obj.image.properties if spec_obj.image else {}
         metadata = utils.aggregate_metadata_get_by_host(host_state)


@@ -29,7 +29,7 @@ class IoOpsFilter(filters.BaseHostFilter):
     """Filter out hosts with too many concurrent I/O operations."""
     def _get_max_io_ops_per_host(self, host_state, spec_obj):
-        return CONF.max_io_ops_per_host
+        return CONF.filter_scheduler.max_io_ops_per_host
     def host_passes(self, host_state, spec_obj):
         """Use information about current vm and task states collected from
@@ -54,14 +54,16 @@ class AggregateIoOpsFilter(IoOpsFilter):
     """
     def _get_max_io_ops_per_host(self, host_state, spec_obj):
+        max_io_ops_per_host = CONF.filter_scheduler.max_io_ops_per_host
         aggregate_vals = utils.aggregate_values_from_key(
             host_state,
             'max_io_ops_per_host')
         try:
             value = utils.validate_num_values(
-                aggregate_vals, CONF.max_io_ops_per_host, cast_to=int)
+                aggregate_vals, max_io_ops_per_host, cast_to=int)
         except ValueError as e:
             LOG.warning(_LW("Could not decode max_io_ops_per_host: '%s'"), e)
-            value = CONF.max_io_ops_per_host
+            value = max_io_ops_per_host
         return value


@@ -46,10 +46,10 @@ class IsolatedHostsFilter(filters.BaseHostFilter):
         # If the configuration does not list any hosts, the filter will always
         # return True, assuming a configuration error, so letting all hosts
         # through.
-        isolated_hosts = CONF.isolated_hosts
-        isolated_images = CONF.isolated_images
-        restrict_isolated_hosts_to_isolated_images = (CONF.
-                restrict_isolated_hosts_to_isolated_images)
+        isolated_hosts = CONF.filter_scheduler.isolated_hosts
+        isolated_images = CONF.filter_scheduler.isolated_images
+        restrict_isolated_hosts_to_isolated_images = (
+            CONF.filter_scheduler.restrict_isolated_hosts_to_isolated_images)
         if not isolated_images:
             # As there are no images to match, return True if the filter is
             # not restrictive otherwise return False if the host is in the


@@ -29,7 +29,7 @@ class NumInstancesFilter(filters.BaseHostFilter):
     """Filter out hosts with too many instances."""
     def _get_max_instances_per_host(self, host_state, spec_obj):
-        return CONF.max_instances_per_host
+        return CONF.filter_scheduler.max_instances_per_host
     def host_passes(self, host_state, spec_obj):
         num_instances = host_state.num_instances
@@ -52,15 +52,16 @@ class AggregateNumInstancesFilter(NumInstancesFilter):
     """
     def _get_max_instances_per_host(self, host_state, spec_obj):
+        max_instances_per_host = CONF.filter_scheduler.max_instances_per_host
         aggregate_vals = utils.aggregate_values_from_key(
             host_state,
             'max_instances_per_host')
         try:
             value = utils.validate_num_values(
-                aggregate_vals, CONF.max_instances_per_host, cast_to=int)
+                aggregate_vals, max_instances_per_host, cast_to=int)
         except ValueError as e:
             LOG.warning(_LW("Could not decode max_instances_per_host: '%s'"),
                         e)
-            value = CONF.max_instances_per_host
+            value = max_instances_per_host
         return value


@@ -330,13 +330,13 @@ class HostManager(object):
         self.host_state_map = {}
         self.filter_handler = filters.HostFilterHandler()
         filter_classes = self.filter_handler.get_matching_classes(
-                CONF.scheduler_available_filters)
+                CONF.filter_scheduler.available_filters)
         self.filter_cls_map = {cls.__name__: cls for cls in filter_classes}
         self.filter_obj_map = {}
-        self.default_filters = self._choose_host_filters(self._load_filters())
+        self.enabled_filters = self._choose_host_filters(self._load_filters())
         self.weight_handler = weights.HostWeightHandler()
         weigher_classes = self.weight_handler.get_matching_classes(
-                CONF.scheduler_weight_classes)
+                CONF.filter_scheduler.weight_classes)
         self.weighers = [cls() for cls in weigher_classes]
         # Dict of aggregates keyed by their ID
         self.aggs_by_id = {}
@@ -344,14 +344,15 @@ class HostManager(object):
         # to those aggregates
         self.host_aggregates_map = collections.defaultdict(set)
         self._init_aggregates()
-        self.tracks_instance_changes = CONF.scheduler_tracks_instance_changes
+        self.track_instance_changes = (
+            CONF.filter_scheduler.track_instance_changes)
         # Dict of instances and status, keyed by host
         self._instance_info = {}
-        if self.tracks_instance_changes:
+        if self.track_instance_changes:
             self._init_instance_info()
     def _load_filters(self):
-        return CONF.scheduler_default_filters
+        return CONF.filter_scheduler.enabled_filters
     def _init_aggregates(self):
         elevated = context_module.get_admin_context()
@@ -560,7 +561,7 @@ class HostManager(object):
             return []
         hosts = six.itervalues(name_to_cls_map)
-        return self.filter_handler.get_filtered_objects(self.default_filters,
+        return self.filter_handler.get_filtered_objects(self.enabled_filters,
                 hosts, spec_obj, index)
     def get_weighed_hosts(self, hosts, spec_obj):


@@ -81,8 +81,8 @@ class IronicHostManager(host_manager.HostManager):
         return ht == hv_type.IRONIC
     def _load_filters(self):
-        if CONF.scheduler_use_baremetal_filters:
-            return CONF.baremetal_scheduler_default_filters
+        if CONF.filter_scheduler.use_baremetal_filters:
+            return CONF.filter_scheduler.baremetal_enabled_filters
         return super(IronicHostManager, self)._load_filters()
     def host_state_cls(self, host, node, **kwargs):


@@ -48,7 +48,7 @@ class SchedulerManager(manager.Manager):
     def __init__(self, scheduler_driver=None, *args, **kwargs):
         if not scheduler_driver:
-            scheduler_driver = CONF.scheduler_driver
+            scheduler_driver = CONF.scheduler.driver
         self.driver = driver.DriverManager(
                 "nova.scheduler.driver",
                 scheduler_driver,
@@ -60,7 +60,7 @@ class SchedulerManager(manager.Manager):
     def _expire_reservations(self, context):
         QUOTAS.expire(context)
-    @periodic_task.periodic_task(spacing=CONF.scheduler_driver_task_period,
+    @periodic_task.periodic_task(spacing=CONF.scheduler.periodic_task_interval,
                                  run_immediately=True)
     def _run_periodic_tasks(self, context):
         self.driver.run_periodic_tasks(context)


@@ -151,7 +151,7 @@ def populate_filter_properties(filter_properties, host_state):
 def populate_retry(filter_properties, instance_uuid):
-    max_attempts = CONF.scheduler_max_attempts
+    max_attempts = CONF.scheduler.max_attempts
     force_hosts = filter_properties.get('force_hosts', [])
     force_nodes = filter_properties.get('force_nodes', [])
@@ -252,14 +252,15 @@ def parse_options(opts, sep='=', converter=str, name=""):
 def validate_filter(filter):
     """Validates that the filter is configured in the default filters."""
-    return filter in CONF.scheduler_default_filters
+    return filter in CONF.filter_scheduler.enabled_filters
 def validate_weigher(weigher):
     """Validates that the weigher is configured in the default weighers."""
-    if 'nova.scheduler.weights.all_weighers' in CONF.scheduler_weight_classes:
+    weight_classes = CONF.filter_scheduler.weight_classes
+    if 'nova.scheduler.weights.all_weighers' in weight_classes:
         return True
-    return weigher in CONF.scheduler_weight_classes
+    return weigher in weight_classes
 _SUPPORTS_AFFINITY = None
@@ -381,4 +382,4 @@ def retry_on_timeout(retries=1):
         return wrapped
     return outer
-retry_select_destinations = retry_on_timeout(CONF.scheduler_max_attempts - 1)
+retry_select_destinations = retry_on_timeout(CONF.scheduler.max_attempts - 1)
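A note on the ``- 1`` in the last hunk: ``retry_on_timeout(retries=n)``
retries n times after the initial call, so passing ``max_attempts - 1``
keeps the total number of tries equal to ``max_attempts``. A generic sketch
of that accounting (not nova's implementation, which specifically catches
``messaging.MessagingTimeout``):

    import functools


    def retry_on_error(retries=1):
        """Retry the call up to `retries` times after the first attempt."""
        def outer(func):
            @functools.wraps(func)
            def wrapped(*args, **kwargs):
                attempt = 0
                while True:
                    try:
                        return func(*args, **kwargs)
                    except RuntimeError:  # stand-in for MessagingTimeout
                        attempt += 1
                        if attempt > retries:
                            raise
            return wrapped
        return outer

    # With max_attempts = 3, a decorated call runs at most 3 times in total:
    # guarded = retry_on_error(retries=3 - 1)(some_flaky_call)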


@@ -58,7 +58,7 @@ class ServerGroupSoftAffinityWeigher(_SoftAffinityWeigherBase):
     warning_sent = False
     def weight_multiplier(self):
-        if (CONF.soft_affinity_weight_multiplier < 0 and
+        if (CONF.filter_scheduler.soft_affinity_weight_multiplier < 0 and
                 not self.warning_sent):
             LOG.warning(_LW('For the soft_affinity_weight_multiplier only a '
                             'positive value is meaningful as a negative value '
@@ -66,7 +66,7 @@ class ServerGroupSoftAffinityWeigher(_SoftAffinityWeigherBase):
                             'prefer non-collocating placement.'))
             self.warning_sent = True
-        return CONF.soft_affinity_weight_multiplier
+        return CONF.filter_scheduler.soft_affinity_weight_multiplier
 class ServerGroupSoftAntiAffinityWeigher(_SoftAffinityWeigherBase):
@@ -74,7 +74,7 @@ class ServerGroupSoftAntiAffinityWeigher(_SoftAffinityWeigherBase):
     warning_sent = False
     def weight_multiplier(self):
-        if (CONF.soft_anti_affinity_weight_multiplier < 0 and
+        if (CONF.filter_scheduler.soft_anti_affinity_weight_multiplier < 0 and
                 not self.warning_sent):
             LOG.warning(_LW('For the soft_anti_affinity_weight_multiplier '
                             'only a positive value is meaningful as a '
@@ -82,7 +82,7 @@ class ServerGroupSoftAntiAffinityWeigher(_SoftAffinityWeigherBase):
                             'weigher would prefer collocating placement.'))
             self.warning_sent = True
-        return CONF.soft_anti_affinity_weight_multiplier
+        return CONF.filter_scheduler.soft_anti_affinity_weight_multiplier
     def _weigh_object(self, host_state, request_spec):
         weight = super(ServerGroupSoftAntiAffinityWeigher, self)._weigh_object(


@@ -31,7 +31,7 @@ class DiskWeigher(weights.BaseHostWeigher):
     def weight_multiplier(self):
         """Override the weight multiplier."""
-        return CONF.disk_weight_multiplier
+        return CONF.filter_scheduler.disk_weight_multiplier
     def _weigh_object(self, host_state, weight_properties):
         """Higher weights win. We want spreading to be the default."""


@@ -32,7 +32,7 @@ class IoOpsWeigher(weights.BaseHostWeigher):
     def weight_multiplier(self):
         """Override the weight multiplier."""
-        return CONF.io_ops_weight_multiplier
+        return CONF.filter_scheduler.io_ops_weight_multiplier
     def _weigh_object(self, host_state, weight_properties):
         """Higher weights win. We want to choose light workload host


@@ -31,7 +31,7 @@ class RAMWeigher(weights.BaseHostWeigher):
     def weight_multiplier(self):
         """Override the weight multiplier."""
-        return CONF.ram_weight_multiplier
+        return CONF.filter_scheduler.ram_weight_multiplier
     def _weigh_object(self, host_state, weight_properties):
         """Higher weights win. We want spreading to be the default."""


@@ -90,7 +90,7 @@ class _IntegratedTestBase(test.TestCase):
         return self.start_service('compute')
     def _setup_scheduler_service(self):
-        self.flags(scheduler_driver='chance_scheduler')
+        self.flags(group='scheduler', driver='chance_scheduler')
         return self.start_service('scheduler')
     def _setup_services(self):


@@ -80,9 +80,10 @@ class NUMAServersTest(ServersTestBase):
     def _setup_scheduler_service(self):
         self.flags(compute_driver='libvirt.LibvirtDriver')
-        self.flags(scheduler_driver='filter_scheduler')
-        self.flags(scheduler_default_filters=CONF.scheduler_default_filters
-                   + ['NUMATopologyFilter'])
+        self.flags(driver='filter_scheduler', group='scheduler')
+        self.flags(enabled_filters=CONF.filter_scheduler.enabled_filters
+                   + ['NUMATopologyFilter'],
+                   group='filter_scheduler')
         return self.start_service('scheduler')
     def _run_build_test(self, flavor_id, filter_mock, end_status='ACTIVE'):


@@ -69,7 +69,7 @@ class NotificationSampleTestBase(test.TestCase,
         self.useFixture(utils_fixture.TimeFixture(test_services.fake_utcnow()))
-        self.flags(scheduler_driver='chance_scheduler')
+        self.flags(driver='chance_scheduler', group='scheduler')
         # the image fake backend needed for image discovery
         nova.tests.unit.image.fake.stub_out_image_service(self)
         self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset)


@@ -41,7 +41,7 @@ class TestServerGet(test.TestCase):
         nova.tests.unit.image.fake.stub_out_image_service(self)
         self.start_service('conductor', manager=CONF.conductor.manager)
-        self.flags(scheduler_driver='chance_scheduler')
+        self.flags(driver='chance_scheduler', group='scheduler')
         self.start_service('scheduler')
         self.network = self.start_service('network')
         self.compute = self.start_service('compute')


@@ -49,7 +49,7 @@ class TestServerGet(test.TestCase):
         nova.tests.unit.image.fake.stub_out_image_service(self)
         self.start_service('conductor', manager=CONF.conductor.manager)
-        self.flags(scheduler_driver='chance_scheduler')
+        self.flags(driver='chance_scheduler', group='scheduler')
         self.start_service('scheduler')
         self.network = self.start_service('network')
         self.compute = self.start_service('compute')


@@ -67,7 +67,7 @@ class TestSerialConsoleLiveMigrate(test.TestCase):
         self.flags(host="test_compute1")
         self.start_service('conductor', manager=CONF.conductor.manager)
-        self.flags(scheduler_driver='chance_scheduler')
+        self.flags(driver='chance_scheduler', group='scheduler')
         self.start_service('scheduler')
         self.network = self.start_service('network')
         self.compute = self.start_service('compute', host='test_compute1')


@@ -46,7 +46,7 @@ class ComputeManagerTestCase(test.TestCase):
         last exception. The fault message field is limited in size and a long
         message with a traceback displaces the original error message.
         """
-        self.flags(scheduler_max_attempts=3)
+        self.flags(max_attempts=3, group='scheduler')
         flavor = objects.Flavor(
             id=1, name='flavor1', memory_mb=256, vcpus=1, root_gb=1,
             ephemeral_gb=1, flavorid='1', swap=0, rxtx_factor=1.0,


@@ -46,9 +46,9 @@ class ServerGroupTestBase(test.TestCase,
     # Note(gibi): RamFilter is needed to ensure that
     # test_boot_servers_with_affinity_no_valid_host behaves as expected
-    _scheduler_default_filters = ['ServerGroupAntiAffinityFilter',
-                                  'ServerGroupAffinityFilter',
-                                  'RamFilter']
+    _enabled_filters = ['ServerGroupAntiAffinityFilter',
+                        'ServerGroupAffinityFilter',
+                        'RamFilter']
     # Override servicegroup parameters to make the tests run faster
     _service_down_time = 2
@@ -62,8 +62,10 @@ class ServerGroupTestBase(test.TestCase,
     def setUp(self):
         super(ServerGroupTestBase, self).setUp()
-        self.flags(scheduler_default_filters=self._scheduler_default_filters)
-        self.flags(scheduler_weight_classes=self._get_weight_classes())
+        self.flags(enabled_filters=self._enabled_filters,
+                   group='filter_scheduler')
+        self.flags(weight_classes=self._get_weight_classes(),
+                   group='filter_scheduler')
         self.flags(service_down_time=self._service_down_time)
         self.flags(report_interval=self._report_interval)
@@ -460,7 +462,7 @@ class ServerGroupTestV21(ServerGroupTestBase):
 class ServerGroupAffinityConfTest(ServerGroupTestBase):
     api_major_version = 'v2.1'
     # Load only anti-affinity filter so affinity will be missing
-    _scheduler_default_filters = 'ServerGroupAntiAffinityFilter'
+    _enabled_filters = 'ServerGroupAntiAffinityFilter'
     @mock.patch('nova.scheduler.utils._SUPPORTS_AFFINITY', None)
     def test_affinity_no_filter(self):
@@ -477,7 +479,7 @@ class ServerGroupAffinityConfTest(ServerGroupTestBase):
 class ServerGroupAntiAffinityConfTest(ServerGroupTestBase):
     api_major_version = 'v2.1'
     # Load only affinity filter so anti-affinity will be missing
-    _scheduler_default_filters = 'ServerGroupAffinityFilter'
+    _enabled_filters = 'ServerGroupAffinityFilter'
     @mock.patch('nova.scheduler.utils._SUPPORTS_ANTI_AFFINITY', None)
     def test_anti_affinity_no_filter(self):
@@ -520,10 +522,6 @@ class ServerGroupSoftAntiAffinityConfTest(ServerGroupTestBase):
     soft_anti_affinity = {'name': 'fake-name-3',
                           'policies': ['soft-anti-affinity']}
-    # Load only soft affinity filter so anti-affinity will be missing
-    _scheduler_weight_classes = ['nova.scheduler.weights.affinity.'
-                                 'ServerGroupSoftAffinityWeigher']
-
     def _get_weight_classes(self):
         # Load only soft affinity filter so anti-affinity will be missing
         return ['nova.scheduler.weights.affinity.'


@@ -94,8 +94,10 @@ class TestAggImagePropsIsolationFilter(test.NoDBTestCase):
     def test_aggregate_image_properties_isolation_props_namespace(self,
                                                                   agg_mock):
-        self.flags(aggregate_image_properties_isolation_namespace="hw")
-        self.flags(aggregate_image_properties_isolation_separator="_")
+        self.flags(aggregate_image_properties_isolation_namespace='hw',
+                   group='filter_scheduler')
+        self.flags(aggregate_image_properties_isolation_separator='_',
+                   group='filter_scheduler')
         agg_mock.return_value = {'hw_vm_mode': 'hvm', 'img_owner_id': 'foo'}
         spec_obj = objects.RequestSpec(
             context=mock.sentinel.ctx,


@@ -22,7 +22,7 @@ from nova.tests.unit.scheduler import fakes
 class TestNumInstancesFilter(test.NoDBTestCase):
     def test_filter_num_iops_passes(self):
-        self.flags(max_io_ops_per_host=8)
+        self.flags(max_io_ops_per_host=8, group='filter_scheduler')
         self.filt_cls = io_ops_filter.IoOpsFilter()
         host = fakes.FakeHostState('host1', 'node1',
                                    {'num_io_ops': 7})
@@ -30,7 +30,7 @@ class TestNumInstancesFilter(test.NoDBTestCase):
         self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
     def test_filter_num_iops_fails(self):
-        self.flags(max_io_ops_per_host=8)
+        self.flags(max_io_ops_per_host=8, group='filter_scheduler')
         self.filt_cls = io_ops_filter.IoOpsFilter()
         host = fakes.FakeHostState('host1', 'node1',
                                    {'num_io_ops': 8})
@@ -39,7 +39,7 @@ class TestNumInstancesFilter(test.NoDBTestCase):
     @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
     def test_aggregate_filter_num_iops_value(self, agg_mock):
-        self.flags(max_io_ops_per_host=7)
+        self.flags(max_io_ops_per_host=7, group='filter_scheduler')
         self.filt_cls = io_ops_filter.AggregateIoOpsFilter()
         host = fakes.FakeHostState('host1', 'node1',
                                    {'num_io_ops': 7})
@@ -52,7 +52,7 @@ class TestNumInstancesFilter(test.NoDBTestCase):
     @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
     def test_aggregate_filter_num_iops_value_error(self, agg_mock):
-        self.flags(max_io_ops_per_host=8)
+        self.flags(max_io_ops_per_host=8, group='filter_scheduler')
         self.filt_cls = io_ops_filter.AggregateIoOpsFilter()
         host = fakes.FakeHostState('host1', 'node1',
                                    {'num_io_ops': 7})


@@ -30,7 +30,8 @@ class TestIsolatedHostsFilter(test.NoDBTestCase):
         self.flags(isolated_images=[uuids.image_ref],
                    isolated_hosts=['isolated_host'],
                    restrict_isolated_hosts_to_isolated_images=
-                   restrict_isolated_hosts_to_isolated_images)
+                   restrict_isolated_hosts_to_isolated_images,
+                   group='filter_scheduler')
         host_name = 'isolated_host' if host_in_list else 'free_host'
         image_ref = uuids.image_ref if image_in_list else uuids.fake_image_ref
         spec_obj = objects.RequestSpec(image=objects.ImageMeta(id=image_ref))
@@ -58,7 +59,7 @@ class TestIsolatedHostsFilter(test.NoDBTestCase):
         self.assertTrue(self._do_test_isolated_hosts(False, False, False))
     def test_isolated_hosts_no_hosts_config(self):
-        self.flags(isolated_images=[uuids.image_ref])
+        self.flags(isolated_images=[uuids.image_ref], group='filter_scheduler')
         # If there are no hosts in the config, it should only filter out
         # images that are listed
         self.assertFalse(self._do_test_isolated_hosts(False, True, False))
@@ -67,7 +68,7 @@ class TestIsolatedHostsFilter(test.NoDBTestCase):
         self.assertTrue(self._do_test_isolated_hosts(False, False, False))
     def test_isolated_hosts_no_images_config(self):
-        self.flags(isolated_hosts=['isolated_host'])
+        self.flags(isolated_hosts=['isolated_host'], group='filter_scheduler')
         # If there are no images in the config, it should only filter out
         # isolated_hosts
         self.assertTrue(self._do_test_isolated_hosts(False, True, False))


@@ -21,7 +21,7 @@ from nova.tests.unit.scheduler import fakes
 class TestNumInstancesFilter(test.NoDBTestCase):
     def test_filter_num_instances_passes(self):
-        self.flags(max_instances_per_host=5)
+        self.flags(max_instances_per_host=5, group='filter_scheduler')
         self.filt_cls = num_instances_filter.NumInstancesFilter()
         host = fakes.FakeHostState('host1', 'node1',
                                    {'num_instances': 4})
@@ -29,7 +29,7 @@ class TestNumInstancesFilter(test.NoDBTestCase):
         self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
     def test_filter_num_instances_fails(self):
-        self.flags(max_instances_per_host=5)
+        self.flags(max_instances_per_host=5, group='filter_scheduler')
         self.filt_cls = num_instances_filter.NumInstancesFilter()
         host = fakes.FakeHostState('host1', 'node1',
                                    {'num_instances': 5})
@@ -38,7 +38,7 @@ class TestNumInstancesFilter(test.NoDBTestCase):
     @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
     def test_filter_aggregate_num_instances_value(self, agg_mock):
-        self.flags(max_instances_per_host=4)
+        self.flags(max_instances_per_host=4, group='filter_scheduler')
         self.filt_cls = num_instances_filter.AggregateNumInstancesFilter()
         host = fakes.FakeHostState('host1', 'node1',
                                    {'num_instances': 5})
@@ -53,7 +53,7 @@ class TestNumInstancesFilter(test.NoDBTestCase):
     @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
     def test_filter_aggregate_num_instances_value_error(self, agg_mock):
-        self.flags(max_instances_per_host=6)
+        self.flags(max_instances_per_host=6, group='filter_scheduler')
         self.filt_cls = num_instances_filter.AggregateNumInstancesFilter()
         host = fakes.FakeHostState('host1', 'node1', {})
         spec_obj = objects.RequestSpec(context=mock.sentinel.ctx)


@@ -122,9 +122,9 @@ class FilterSchedulerTestCase(test_scheduler.SchedulerTestCase):
                            'pci_requests': None})
     def test_schedule_host_pool(self, mock_get_extra, mock_get_all,
                                 mock_by_host, mock_get_by_binary):
-        """Make sure the scheduler_host_subset_size property works properly."""
-        self.flags(scheduler_host_subset_size=2)
+        """Make sure the host_subset_size property works properly."""
+        self.flags(host_subset_size=2, group='filter_scheduler')
         spec_obj = objects.RequestSpec(
             num_instances=1,
@@ -161,7 +161,7 @@ class FilterSchedulerTestCase(test_scheduler.SchedulerTestCase):
         is larger than number of filtered hosts.
         """
-        self.flags(scheduler_host_subset_size=20)
+        self.flags(host_subset_size=20, group='filter_scheduler')
         spec_obj = objects.RequestSpec(
             num_instances=1,
@@ -195,11 +195,11 @@ class FilterSchedulerTestCase(test_scheduler.SchedulerTestCase):
     def test_schedule_chooses_best_host(self, mock_get_extra, mock_cn_get_all,
                                         mock_get_by_binary,
                                         mock_get_inst_info):
-        """If scheduler_host_subset_size is 1, the largest host with greatest
-        weight should be returned.
+        """If host_subset_size is 1, the largest host with greatest weight
+        should be returned.
         """
-        self.flags(scheduler_host_subset_size=1)
+        self.flags(host_subset_size=1, group='filter_scheduler')
         self.next_weight = 50
     def _fake_weigh_objects(_self, functions, hosts, options):


@@ -58,10 +58,11 @@ class HostManagerTestCase(test.NoDBTestCase):
     @mock.patch.object(host_manager.HostManager, '_init_aggregates')
     def setUp(self, mock_init_agg, mock_init_inst):
         super(HostManagerTestCase, self).setUp()
-        self.flags(scheduler_available_filters=['%s.%s' % (__name__, cls) for
-                                                cls in ['FakeFilterClass1',
-                                                        'FakeFilterClass2']])
-        self.flags(scheduler_default_filters=['FakeFilterClass1'])
+        self.flags(available_filters=[
+            __name__ + '.FakeFilterClass1', __name__ + '.FakeFilterClass2'],
+            group='filter_scheduler')
+        self.flags(enabled_filters=['FakeFilterClass1'],
+                   group='filter_scheduler')
         self.host_manager = host_manager.HostManager()
         self.fake_hosts = [host_manager.HostState('fake_host%s' % x,
                                                   'fake-node') for x in range(1, 5)]
@@ -133,10 +134,10 @@ class HostManagerTestCase(test.NoDBTestCase):
         # should not be called if the list of nodes was passed explicitly
         self.assertFalse(mock_get_all.called)
-    def test_default_filters(self):
-        default_filters = self.host_manager.default_filters
-        self.assertEqual(1, len(default_filters))
-        self.assertIsInstance(default_filters[0], FakeFilterClass1)
+    def test_enabled_filters(self):
+        enabled_filters = self.host_manager.enabled_filters
+        self.assertEqual(1, len(enabled_filters))
+        self.assertIsInstance(enabled_filters[0], FakeFilterClass1)
     @mock.patch.object(host_manager.HostManager, '_init_instance_info')
     @mock.patch.object(objects.AggregateList, 'get_all')

@@ -320,21 +320,23 @@ class IronicHostManagerTestFilters(test.NoDBTestCase):
     @mock.patch.object(host_manager.HostManager, '_init_aggregates')
     def setUp(self, mock_init_agg, mock_init_inst):
         super(IronicHostManagerTestFilters, self).setUp()
-        self.flags(scheduler_available_filters=['%s.%s' % (__name__, cls) for
-                                                cls in ['FakeFilterClass1',
-                                                        'FakeFilterClass2']])
-        self.flags(scheduler_default_filters=['FakeFilterClass1'])
-        self.flags(baremetal_scheduler_default_filters=['FakeFilterClass2'])
+        self.flags(available_filters=[
+            __name__ + '.FakeFilterClass1', __name__ + '.FakeFilterClass2'],
+            group='filter_scheduler')
+        self.flags(enabled_filters=['FakeFilterClass1'],
+                   group='filter_scheduler')
+        self.flags(baremetal_enabled_filters=['FakeFilterClass2'],
+                   group='filter_scheduler')
         self.host_manager = ironic_host_manager.IronicHostManager()
         self.fake_hosts = [ironic_host_manager.IronicNodeState(
             'fake_host%s' % x, 'fake-node') for x in range(1, 5)]
         self.fake_hosts += [ironic_host_manager.IronicNodeState(
             'fake_multihost', 'fake-node%s' % x) for x in range(1, 5)]

-    def test_default_filters(self):
-        default_filters = self.host_manager.default_filters
-        self.assertEqual(1, len(default_filters))
-        self.assertIsInstance(default_filters[0], FakeFilterClass1)
+    def test_enabled_filters(self):
+        enabled_filters = self.host_manager.enabled_filters
+        self.assertEqual(1, len(enabled_filters))
+        self.assertIsInstance(enabled_filters[0], FakeFilterClass1)

     def test_choose_host_filters_not_found(self):
         self.assertRaises(exception.SchedulerHostFilterNotFound,
@@ -348,33 +350,33 @@ class IronicHostManagerTestFilters(test.NoDBTestCase):
         self.assertEqual(1, len(host_filters))
         self.assertIsInstance(host_filters[0], FakeFilterClass2)

-    def test_host_manager_default_filters(self):
-        default_filters = self.host_manager.default_filters
-        self.assertEqual(1, len(default_filters))
-        self.assertIsInstance(default_filters[0], FakeFilterClass1)
+    def test_host_manager_enabled_filters(self):
+        enabled_filters = self.host_manager.enabled_filters
+        self.assertEqual(1, len(enabled_filters))
+        self.assertIsInstance(enabled_filters[0], FakeFilterClass1)

     @mock.patch.object(ironic_host_manager.IronicHostManager,
                        '_init_instance_info')
     @mock.patch.object(host_manager.HostManager, '_init_aggregates')
-    def test_host_manager_default_filters_uses_baremetal(self, mock_init_agg,
+    def test_host_manager_enabled_filters_uses_baremetal(self, mock_init_agg,
                                                          mock_init_inst):
-        self.flags(scheduler_use_baremetal_filters=True)
+        self.flags(use_baremetal_filters=True, group='filter_scheduler')
         host_manager = ironic_host_manager.IronicHostManager()
-        # ensure the defaults come from baremetal_scheduler_default_filters
-        # and not scheduler_default_filters
-        default_filters = host_manager.default_filters
-        self.assertEqual(1, len(default_filters))
-        self.assertIsInstance(default_filters[0], FakeFilterClass2)
+        # ensure the defaults come from baremetal_enabled_filters
+        # and not enabled_filters
+        enabled_filters = host_manager.enabled_filters
+        self.assertEqual(1, len(enabled_filters))
+        self.assertIsInstance(enabled_filters[0], FakeFilterClass2)

     def test_load_filters(self):
-        # without scheduler_use_baremetal_filters
+        # without use_baremetal_filters
         filters = self.host_manager._load_filters()
         self.assertEqual(['FakeFilterClass1'], filters)

     def test_load_filters_baremetal(self):
-        # with scheduler_use_baremetal_filters
-        self.flags(scheduler_use_baremetal_filters=True)
+        # with use_baremetal_filters
+        self.flags(use_baremetal_filters=True, group='filter_scheduler')
         filters = self.host_manager._load_filters()
         self.assertEqual(['FakeFilterClass2'], filters)


@@ -51,7 +51,7 @@ class SchedulerManagerInitTestCase(test.NoDBTestCase):
     def test_init_using_chance_schedulerdriver(self,
                                                mock_init_agg,
                                                mock_init_inst):
-        self.flags(scheduler_driver='chance_scheduler')
+        self.flags(driver='chance_scheduler', group='scheduler')
         driver = self.manager_cls().driver
         self.assertIsInstance(driver, chance.ChanceScheduler)
@@ -60,7 +60,7 @@ class SchedulerManagerInitTestCase(test.NoDBTestCase):
     def test_init_using_caching_schedulerdriver(self,
                                                 mock_init_agg,
                                                 mock_init_inst):
-        self.flags(scheduler_driver='caching_scheduler')
+        self.flags(driver='caching_scheduler', group='scheduler')
         driver = self.manager_cls().driver
         self.assertIsInstance(driver, caching_scheduler.CachingScheduler)
@@ -70,7 +70,7 @@ class SchedulerManagerInitTestCase(test.NoDBTestCase):
                                                mock_init_agg,
                                                mock_init_inst):
         with testtools.ExpectedException(ValueError):
-            self.flags(scheduler_driver='nonexist_scheduler')
+            self.flags(driver='nonexist_scheduler', group='scheduler')


 class SchedulerManagerTestCase(test.NoDBTestCase):
@@ -84,7 +84,7 @@ class SchedulerManagerTestCase(test.NoDBTestCase):
     @mock.patch.object(host_manager.HostManager, '_init_aggregates')
     def setUp(self, mock_init_agg, mock_init_inst):
         super(SchedulerManagerTestCase, self).setUp()
-        self.flags(scheduler_driver=self.driver_plugin_name)
+        self.flags(driver=self.driver_plugin_name, group='scheduler')
         with mock.patch.object(host_manager.HostManager, '_init_aggregates'):
             self.manager = self.manager_cls()
         self.context = context.RequestContext('fake_user', 'fake_project')
@@ -180,7 +180,7 @@ class SchedulerInitTestCase(test.NoDBTestCase):
     def test_init_using_ironic_hostmanager(self,
                                            mock_init_agg,
                                            mock_init_inst):
-        self.flags(scheduler_host_manager='ironic_host_manager')
+        self.flags(host_manager='ironic_host_manager', group='scheduler')
         manager = self.driver_cls().host_manager
         self.assertIsInstance(manager, ironic_host_manager.IronicHostManager)
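Note the ``ValueError`` expected in the nonexistent-driver test above: it is
raised when the override is set, before any driver loading happens, which
suggests the ``[scheduler] driver`` option restricts its accepted values
(e.g. via a ``choices`` list). A hedged sketch of that behaviour with plain
oslo.config; the option declaration below is an assumption for illustration,
not nova's actual definition::

    from oslo_config import cfg

    CONF = cfg.CONF
    # Assumed declaration: a grouped StrOpt restricted by 'choices'.
    CONF.register_opts(
        [cfg.StrOpt('driver', default='filter_scheduler',
                    choices=['filter_scheduler', 'chance_scheduler',
                             'caching_scheduler'])],
        group='scheduler')
    CONF([])

    try:
        # An out-of-choices value is rejected as soon as it is set.
        CONF.set_override('driver', 'nonexist_scheduler', group='scheduler')
    except ValueError as exc:
        print('rejected: %s' % exc)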


@@ -203,7 +203,7 @@ class SchedulerUtilsTestCase(test.NoDBTestCase):
                                       'force-node2'])

     def test_populate_retry_exception_at_max_attempts(self):
-        self.flags(scheduler_max_attempts=2)
+        self.flags(max_attempts=2, group='scheduler')
         msg = 'The exception text was preserved!'
         filter_properties = dict(retry=dict(num_attempts=2, hosts=[],
                                             exc_reason=[msg]))
@@ -243,15 +243,16 @@ class SchedulerUtilsTestCase(test.NoDBTestCase):
                          [('bar', -2.1)])

     def test_validate_filters_configured(self):
-        self.flags(scheduler_default_filters='FakeFilter1,FakeFilter2')
+        self.flags(enabled_filters='FakeFilter1,FakeFilter2',
+                   group='filter_scheduler')
         self.assertTrue(scheduler_utils.validate_filter('FakeFilter1'))
         self.assertTrue(scheduler_utils.validate_filter('FakeFilter2'))
         self.assertFalse(scheduler_utils.validate_filter('FakeFilter3'))

     def test_validate_weighers_configured(self):
-        self.flags(scheduler_weight_classes=
-                   ['ServerGroupSoftAntiAffinityWeigher',
-                    'FakeFilter1'])
+        self.flags(weight_classes=[
+            'ServerGroupSoftAntiAffinityWeigher', 'FakeFilter1'],
+            group='filter_scheduler')
         self.assertTrue(scheduler_utils.validate_weigher(
             'ServerGroupSoftAntiAffinityWeigher'))
@@ -304,8 +305,8 @@ class SchedulerUtilsTestCase(test.NoDBTestCase):
         self.assertIsNone(group_info)

     def _get_group_details_with_filter_not_configured(self, policy):
-        self.flags(scheduler_default_filters=['fake'])
-        self.flags(scheduler_weight_classes=['fake'])
+        self.flags(enabled_filters=['fake'], group='filter_scheduler')
+        self.flags(weight_classes=['fake'], group='filter_scheduler')
         instance = fake_instance.fake_instance_obj(self.context,
                                                    params={'host': 'hostA'})


@@ -91,20 +91,23 @@ class SoftAffinityWeigherTestCase(SoftWeigherTestBase):

     def test_soft_affinity_weight_multiplier_zero_value(self):
         # We do not know the host, all have same weight.
-        self.flags(soft_affinity_weight_multiplier=0.0)
+        self.flags(soft_affinity_weight_multiplier=0.0,
+                   group='filter_scheduler')
         self._do_test(policy='soft-affinity',
                       expected_weight=0.0,
                       expected_host=None)

     def test_soft_affinity_weight_multiplier_positive_value(self):
-        self.flags(soft_affinity_weight_multiplier=2.0)
+        self.flags(soft_affinity_weight_multiplier=2.0,
+                   group='filter_scheduler')
         self._do_test(policy='soft-affinity',
                       expected_weight=2.0,
                       expected_host='host2')

     @mock.patch.object(affinity, 'LOG')
     def test_soft_affinity_weight_multiplier_negative_value(self, mock_log):
-        self.flags(soft_affinity_weight_multiplier=-1.0)
+        self.flags(soft_affinity_weight_multiplier=-1.0,
+                   group='filter_scheduler')
         self._do_test(policy='soft-affinity',
                       expected_weight=0.0,
                       expected_host='host3')
@@ -128,13 +131,15 @@ class SoftAntiAffinityWeigherTestCase(SoftWeigherTestBase):

     def test_soft_anti_affinity_weight_multiplier_zero_value(self):
         # We do not know the host, all have same weight.
-        self.flags(soft_anti_affinity_weight_multiplier=0.0)
+        self.flags(soft_anti_affinity_weight_multiplier=0.0,
+                   group='filter_scheduler')
         self._do_test(policy='soft-anti-affinity',
                       expected_weight=0.0,
                       expected_host=None)

     def test_soft_anti_affinity_weight_multiplier_positive_value(self):
-        self.flags(soft_anti_affinity_weight_multiplier=2.0)
+        self.flags(soft_anti_affinity_weight_multiplier=2.0,
+                   group='filter_scheduler')
         self._do_test(policy='soft-anti-affinity',
                       expected_weight=2.0,
                       expected_host='host3')
@@ -142,7 +147,8 @@ class SoftAntiAffinityWeigherTestCase(SoftWeigherTestBase):

     @mock.patch.object(affinity, 'LOG')
     def test_soft_anti_affinity_weight_multiplier_negative_value(self,
                                                                  mock_log):
-        self.flags(soft_anti_affinity_weight_multiplier=-1.0)
+        self.flags(soft_anti_affinity_weight_multiplier=-1.0,
+                   group='filter_scheduler')
         self._do_test(policy='soft-anti-affinity',
                       expected_weight=0.0,
                       expected_host='host2')


@@ -58,7 +58,7 @@ class DiskWeigherTestCase(test.NoDBTestCase):
         self.assertEqual('host4', weighed_host.obj.host)

     def test_disk_filter_multiplier1(self):
-        self.flags(disk_weight_multiplier=0.0)
+        self.flags(disk_weight_multiplier=0.0, group='filter_scheduler')
         hostinfo_list = self._get_all_hosts()

         # host1: free_disk_mb=5120
@@ -71,7 +71,7 @@ class DiskWeigherTestCase(test.NoDBTestCase):
         self.assertEqual(0.0, weighed_host.weight)

     def test_disk_filter_multiplier2(self):
-        self.flags(disk_weight_multiplier=2.0)
+        self.flags(disk_weight_multiplier=2.0, group='filter_scheduler')
         hostinfo_list = self._get_all_hosts()

         # host1: free_disk_mb=5120
@@ -85,7 +85,7 @@ class DiskWeigherTestCase(test.NoDBTestCase):
         self.assertEqual('host4', weighed_host.obj.host)

     def test_disk_filter_negative(self):
-        self.flags(disk_weight_multiplier=1.0)
+        self.flags(disk_weight_multiplier=1.0, group='filter_scheduler')
         hostinfo_list = self._get_all_hosts()
         host_attr = {'id': 100, 'disk_mb': 81920, 'free_disk_mb': -5120}
         host_state = fakes.FakeHostState('negative', 'negative', host_attr)


@@ -28,7 +28,8 @@ class IoOpsWeigherTestCase(test.NoDBTestCase):

     def _get_weighed_host(self, hosts, io_ops_weight_multiplier):
         if io_ops_weight_multiplier is not None:
-            self.flags(io_ops_weight_multiplier=io_ops_weight_multiplier)
+            self.flags(io_ops_weight_multiplier=io_ops_weight_multiplier,
+                       group='filter_scheduler')
         return self.weight_handler.get_weighed_objects(self.weighers,
                                                        hosts, {})[0]


@@ -58,7 +58,7 @@ class RamWeigherTestCase(test.NoDBTestCase):
         self.assertEqual('host4', weighed_host.obj.host)

     def test_ram_filter_multiplier1(self):
-        self.flags(ram_weight_multiplier=0.0)
+        self.flags(ram_weight_multiplier=0.0, group='filter_scheduler')
         hostinfo_list = self._get_all_hosts()

         # host1: free_ram_mb=512
@@ -71,7 +71,7 @@ class RamWeigherTestCase(test.NoDBTestCase):
         self.assertEqual(0.0, weighed_host.weight)

     def test_ram_filter_multiplier2(self):
-        self.flags(ram_weight_multiplier=2.0)
+        self.flags(ram_weight_multiplier=2.0, group='filter_scheduler')
         hostinfo_list = self._get_all_hosts()

         # host1: free_ram_mb=512
@@ -85,7 +85,7 @@ class RamWeigherTestCase(test.NoDBTestCase):
         self.assertEqual('host4', weighed_host.obj.host)

     def test_ram_filter_negative(self):
-        self.flags(ram_weight_multiplier=1.0)
+        self.flags(ram_weight_multiplier=1.0, group='filter_scheduler')
         hostinfo_list = self._get_all_hosts()
         host_attr = {'id': 100, 'memory_mb': 8192, 'free_ram_mb': -512}
         host_state = fakes.FakeHostState('negative', 'negative', host_attr)


@@ -0,0 +1,35 @@
+---
+upgrade:
+  - |
+    All general scheduler configuration options have been added to the
+    ``scheduler`` group.
+
+    - ``scheduler_driver`` (now ``driver``)
+    - ``scheduler_host_manager`` (now ``host_manager``)
+    - ``scheduler_driver_task_period`` (now ``periodic_task_interval``)
+    - ``scheduler_max_attempts`` (now ``max_attempts``)
+
+    In addition, all filter scheduler configuration options have been added
+    to the ``filter_scheduler`` group.
+
+    - ``scheduler_host_subset_size`` (now ``host_subset_size``)
+    - ``scheduler_max_instances_per_host`` (now ``max_instances_per_host``)
+    - ``scheduler_tracks_instance_changes`` (now ``track_instance_changes``)
+    - ``scheduler_available_filters`` (now ``available_filters``)
+    - ``scheduler_default_filters`` (now ``enabled_filters``)
+    - ``baremetal_scheduler_default_filters`` (now
+      ``baremetal_enabled_filters``)
+    - ``scheduler_use_baremetal_filters`` (now ``use_baremetal_filters``)
+    - ``scheduler_weight_classes`` (now ``weight_classes``)
+    - ``ram_weight_multiplier``
+    - ``disk_weight_multiplier``
+    - ``io_ops_weight_multiplier``
+    - ``soft_affinity_weight_multiplier``
+    - ``soft_anti_affinity_weight_multiplier``
+    - ``isolated_images``
+    - ``isolated_hosts``
+    - ``restrict_isolated_hosts_to_isolated_images``
+    - ``aggregate_image_properties_isolation_namespace``
+    - ``aggregate_image_properties_isolation_separator``
+
+    These options should no longer be included in the ``DEFAULT`` group.
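For deployers, the practical effect of this release note is that the renamed
keys move out of ``DEFAULT`` and into their new sections of ``nova.conf``. A
hedged example layout (the values shown are illustrative, not tuning advice)::

    [scheduler]
    driver = filter_scheduler
    host_manager = host_manager
    periodic_task_interval = 60
    max_attempts = 3

    [filter_scheduler]
    host_subset_size = 1
    track_instance_changes = true
    available_filters = nova.scheduler.filters.all_filters
    enabled_filters = RamFilter,ComputeFilter,AvailabilityZoneFilter
    ram_weight_multiplier = 1.0

Because the old ``DEFAULT``-group names are removed rather than aliased in
this change, any deployment tooling that writes the ``scheduler_``-prefixed
names should be updated at the same time as the packages.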