Remove deprecated scheduler filters

The RetryFilter was deprecated in Train.
The Aggregate[Core|Ram|Disk] filters were also deprecated in Train.
This change removes all four deprecated filters and their docs.

Change-Id: Idc29c759632850d3d767a261c9f385af71348f65
Sean Mooney 2020-08-04 23:09:12 +00:00
parent 8ecc29bfcc
commit 4939d0d1e2
12 changed files with 21 additions and 763 deletions

View File

@@ -336,14 +336,11 @@ related availability zones feature operate under the hood:
Finally, as discussed previously, there are a number of host aggregate-specific
scheduler filters. These are:
- :ref:`AggregateCoreFilter`
- :ref:`AggregateDiskFilter`
- :ref:`AggregateImagePropertiesIsolation`
- :ref:`AggregateInstanceExtraSpecsFilter`
- :ref:`AggregateIoOpsFilter`
- :ref:`AggregateMultiTenancyIsolation`
- :ref:`AggregateNumInstancesFilter`
- :ref:`AggregateRamFilter`
- :ref:`AggregateTypeAffinityFilter`
The following configuration options are applicable to the scheduler

View File

@@ -158,68 +158,6 @@ Compute filters
The following sections describe the available compute filters.
.. _AggregateCoreFilter:
AggregateCoreFilter
-------------------
.. deprecated:: 20.0.0
``AggregateCoreFilter`` is deprecated since the 20.0.0 Train release.
As of the introduction of the placement service in Ocata, the behavior of
this filter :ref:`has changed <bug-1804125>` and it should no longer be used.
In the 18.0.0 Rocky release, nova `automatically mirrors`_ host aggregates
to placement aggregates.
In the 19.0.0 Stein release, initial allocation ratio support was added,
which allows allocation ratios to be managed via the placement API in
addition to the existing capability to manage them via the nova config.
See `Allocation ratios`_ for details.
.. _`automatically mirrors`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-mirror-host-aggregates.html
Filters hosts by CPU core count with a per-aggregate ``cpu_allocation_ratio``
value. If the per-aggregate value is not found, the value falls back to the
global setting. If the host is in more than one aggregate and more than one
value is found, the minimum value will be used.
Refer to :doc:`/admin/aggregates` for more information.
.. important::
Note the ``cpu_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>`
restriction.
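The per-aggregate fallback and minimum-value behaviour described above was
shared by all three ``Aggregate[Core|Ram|Disk]`` filters. The following is a
minimal standalone sketch of that resolution logic; the removed
implementations used ``nova.scheduler.filters.utils.validate_num_values``,
so the helper below is illustrative only:

.. code-block:: python

    def resolve_allocation_ratio(aggregate_values, global_ratio):
        """Illustrative stand-in for the removed filters' ratio lookup.

        :param aggregate_values: metadata values found across the host's
            aggregates, e.g. ``{'2', '3'}``
        :param global_ratio: the configured global default
        """
        ratios = []
        for value in aggregate_values:
            try:
                ratios.append(float(value))
            except ValueError:
                # The real filters logged a warning on malformed metadata
                # and fell back to the configured global ratio.
                return global_ratio
        # No per-aggregate value: fall back; several values: use the minimum.
        return min(ratios) if ratios else global_ratio

    assert resolve_allocation_ratio(set(), 2.0) == 2.0       # fallback
    assert resolve_allocation_ratio({'2', '3'}, 1.0) == 2.0  # minimum wins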
.. _AggregateDiskFilter:
AggregateDiskFilter
-------------------
.. deprecated:: 20.0.0
``AggregateDiskFilter`` is deprecated since the 20.0.0 Train release.
As of the introduction of the placement service in Ocata, the behavior of
this filter :ref:`has changed <bug-1804125>` and it should no longer be used.
In the 18.0.0 Rocky release, nova `automatically mirrors`_ host aggregates
to placement aggregates.
In the 19.0.0 Stein release, initial allocation ratio support was added,
which allows allocation ratios to be managed via the placement API in
addition to the existing capability to manage them via the nova config.
See `Allocation ratios`_ for details.
Filters hosts by disk allocation with a per-aggregate ``disk_allocation_ratio``
value. If the per-aggregate value is not found, the value falls back to the
global setting. If the host is in more than one aggregate and more than one
value is found, the minimum value will be used.
Refer to :doc:`/admin/aggregates` for more information.
.. important::
Note the ``disk_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>`
restriction.
.. _AggregateImagePropertiesIsolation:
AggregateImagePropertiesIsolation
@@ -370,37 +308,6 @@ Refer to :doc:`/admin/aggregates` and :ref:`NumInstancesFilter` for more
information.
.. _AggregateRamFilter:
AggregateRamFilter
------------------
.. deprecated:: 20.0.0
``AggregateRamFilter`` is deprecated since the 20.0.0 Train release.
As of the introduction of the placement service in Ocata, the behavior of
this filter :ref:`has changed <bug-1804125>` and it should no longer be used.
In the 18.0.0 Rocky release, nova `automatically mirrors`_ host aggregates
to placement aggregates.
In the 19.0.0 Stein release, initial allocation ratio support was added,
which allows allocation ratios to be managed via the placement API in
addition to the existing capability to manage them via the nova config.
See `Allocation ratios`_ for details.
Filters hosts by RAM allocation of instances with a per-aggregate
``ram_allocation_ratio`` value. If the per-aggregate value is not found, the
value falls back to the global setting. If the host is in more than one
aggregate and thus more than one value is found, the minimum value will be
used.
Refer to :doc:`/admin/aggregates` for more information.
.. important::
Note the ``ram_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>`
restriction.
.. _AggregateTypeAffinityFilter:
AggregateTypeAffinityFilter
@@ -689,26 +596,6 @@ PciPassthroughFilter
The filter schedules instances on a host if the host has devices that meet the
device requests in the ``extra_specs`` attribute for the flavor.
RetryFilter
-----------
.. deprecated:: 20.0.0
Since the 17.0.0 (Queens) release, the scheduler has provided alternate
hosts for rescheduling, so the scheduler does not need to be called during
a reschedule. This makes the ``RetryFilter`` useless. See the
`Return Alternate Hosts`_ spec for details.
Filters out hosts that have already been attempted for scheduling purposes. If
the scheduler selects a host to respond to a service request, and the host
fails to respond to the request, this filter prevents the scheduler from
retrying that host for the service request.
This filter is only useful if the :oslo.config:option:`scheduler.max_attempts`
configuration option is set to a value greater than one.
.. _Return Alternate Hosts: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/return-alternate-hosts.html
SameHostFilter
--------------
@@ -1012,23 +899,6 @@ The allocation ratio configuration is used both during reporting of compute
node `resource provider inventory`_ to the placement service and during
scheduling.
.. _bug-1804125:
.. note:: Regarding the `AggregateCoreFilter`_, `AggregateDiskFilter`_ and
`AggregateRamFilter`_, starting in 15.0.0 (Ocata) there is a behavior
change where aggregate-based overcommit ratios will no longer be honored
during scheduling for the FilterScheduler. Instead, overcommit values must
be set on a per-compute-node basis in the Nova configuration files.
If you have been relying on per-aggregate overcommit, during your upgrade,
you must change to using per-compute-node overcommit ratios in order for
your scheduling behavior to stay consistent. Otherwise, you may notice
increased NoValidHost scheduling failures as the aggregate-based overcommit
is no longer being considered.
See `bug 1804125 <https://bugs.launchpad.net/nova/+bug/1804125>`_ for more
details.
.. _resource provider inventory: https://docs.openstack.org/api-ref/placement/?expanded=#resource-provider-inventories
Usage scenarios
@@ -1060,8 +930,6 @@ here.
$ openstack resource provider inventory set --resource VCPU:allocation_ratio=1.0 815a5634-86fb-4e1e-8824-8a631fee3e06
Note the :ref:`bug 1804125 <bug-1804125>` restriction.
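For deployers driving this through the API rather than the CLI, here is a
rough sketch of the equivalent placement REST exchange. The endpoint, token
and microversion values are illustrative assumptions; real tooling should
obtain these via keystoneauth:

.. code-block:: python

    import requests

    PLACEMENT = 'http://controller:8778'  # illustrative endpoint
    HEADERS = {
        'X-Auth-Token': '<token>',  # normally obtained from keystone
        'OpenStack-API-Version': 'placement 1.26',
    }
    RP = '815a5634-86fb-4e1e-8824-8a631fee3e06'

    # Fetch the inventories to learn the current provider generation,
    # which placement requires for conflict detection on writes.
    data = requests.get(
        '%s/resource_providers/%s/inventories' % (PLACEMENT, RP),
        headers=HEADERS).json()
    vcpu = data['inventories']['VCPU']
    vcpu['allocation_ratio'] = 1.0
    vcpu['resource_provider_generation'] = data[
        'resource_provider_generation']

    # Update only the VCPU inventory, echoing the generation back.
    resp = requests.put(
        '%s/resource_providers/%s/inventories/VCPU' % (PLACEMENT, RP),
        headers=HEADERS, json=vcpu)
    resp.raise_for_status()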
3. When the deployer wants to **always** use the placement API to set
allocation ratios, then the deployer should ensure that
``[DEFAULT]/xxx_allocation_ratio`` options are all set to None (the

View File

@@ -95,33 +95,12 @@ There are many standard filter classes which may be used
use a comma. E.g., "value1,value2". All hosts are passed if no extra_specs
are specified.
* |ComputeFilter| - passes all hosts that are operational and enabled.
* |AggregateCoreFilter| - DEPRECATED; filters hosts by CPU core number with per-aggregate
:oslo.config:option:`cpu_allocation_ratio` setting. If no
per-aggregate value is found, it will fall back to the global default
:oslo.config:option:`cpu_allocation_ratio`.
If more than one value is found for a host (meaning the host is in two
different aggregates with different ratio settings), the minimum value
will be used.
* |IsolatedHostsFilter| - filter based on
:oslo.config:option:`filter_scheduler.isolated_images`,
:oslo.config:option:`filter_scheduler.isolated_hosts`
and :oslo.config:option:`filter_scheduler.restrict_isolated_hosts_to_isolated_images`
flags.
* |JsonFilter| - allows simple JSON-based grammar for selecting hosts.
* |AggregateRamFilter| - DEPRECATED; filters hosts by RAM with per-aggregate
:oslo.config:option:`ram_allocation_ratio` setting. If no per-aggregate value
is found, it will fall back to the global default
:oslo.config:option:`ram_allocation_ratio`.
If more than one value is found for a host (meaning the host is in two
different aggregates with different ratio settings), the minimum value
will be used.
* |AggregateDiskFilter| - DEPRECATED; filters hosts by disk allocation with per-aggregate
:oslo.config:option:`disk_allocation_ratio` setting. If no per-aggregate value
is found, it will fall back to the global default
:oslo.config:option:`disk_allocation_ratio`.
If more than one value is found for a host (meaning the host is in two or more
different aggregates with different ratio settings), the minimum value will
be used.
* |NumInstancesFilter| - filters compute nodes by number of instances.
Nodes with too many instances will be filtered. The host will be
ignored by the scheduler if more than
@@ -156,8 +135,6 @@ There are many standard filter classes which may be used
set of instances.
* |SameHostFilter| - puts the instance on the same host as another instance in
a set of instances.
* |RetryFilter| - DEPRECATED; filters hosts that have been attempted for
scheduling. Only passes hosts that have not been previously attempted.
* |AggregateTypeAffinityFilter| - limits instance_type by aggregate.
This filter passes hosts if no instance_type key is set or
the instance_type aggregate metadata value contains the name of the
@@ -281,26 +258,6 @@ creation of the new server for the user. The only exception for this rule is
directly. Variable naming, such as the ``$free_ram_mb`` example above, should
be based on those attributes.
The |RetryFilter| filters hosts that have already been attempted for
scheduling. It only passes hosts that have not been previously attempted. If
a compute node raises an exception while spawning an instance, the compute
manager reschedules the instance, adding the failing host to a retry
dictionary so that the RetryFilter will not accept it as a possible
destination. This means that if all of your compute nodes are failing, the
RetryFilter will return 0 hosts and the scheduler will raise a NoValidHost
exception, even though the root cause is a problem on one or more compute
nodes. If you see this in the scheduler logs, the problem is most likely
compute-related and you should check the compute logs.
.. note:: The ``RetryFilter`` is deprecated since the 20.0.0 (Train) release
   and will be removed in an upcoming release. Since the 17.0.0 (Queens)
   release, the scheduler has provided alternate hosts for rescheduling,
   so the scheduler does not need to be called during a reschedule. This
   makes the ``RetryFilter`` useless. See the `Return Alternate Hosts`_
   spec for details.
.. _Return Alternate Hosts: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/return-alternate-hosts.html
The |NUMATopologyFilter| considers the NUMA topology that was specified for the instance
through the use of flavor extra_specs in combination with the image properties, as
described in detail in the related nova-spec document:
@@ -647,11 +604,8 @@ in :mod:`nova.tests.scheduler`.
.. |BaseHostFilter| replace:: :class:`BaseHostFilter <nova.scheduler.filters.BaseHostFilter>`
.. |ComputeCapabilitiesFilter| replace:: :class:`ComputeCapabilitiesFilter <nova.scheduler.filters.compute_capabilities_filter.ComputeCapabilitiesFilter>`
.. |ComputeFilter| replace:: :class:`ComputeFilter <nova.scheduler.filters.compute_filter.ComputeFilter>`
.. |AggregateCoreFilter| replace:: :class:`AggregateCoreFilter <nova.scheduler.filters.core_filter.AggregateCoreFilter>`
.. |IsolatedHostsFilter| replace:: :class:`IsolatedHostsFilter <nova.scheduler.filters.isolated_hosts_filter>`
.. |JsonFilter| replace:: :class:`JsonFilter <nova.scheduler.filters.json_filter.JsonFilter>`
.. |AggregateRamFilter| replace:: :class:`AggregateRamFilter <nova.scheduler.filters.ram_filter.AggregateRamFilter>`
.. |AggregateDiskFilter| replace:: :class:`AggregateDiskFilter <nova.scheduler.filters.disk_filter.AggregateDiskFilter>`
.. |NumInstancesFilter| replace:: :class:`NumInstancesFilter <nova.scheduler.filters.num_instances_filter.NumInstancesFilter>`
.. |AggregateNumInstancesFilter| replace:: :class:`AggregateNumInstancesFilter <nova.scheduler.filters.num_instances_filter.AggregateNumInstancesFilter>`
.. |IoOpsFilter| replace:: :class:`IoOpsFilter <nova.scheduler.filters.io_ops_filter.IoOpsFilter>`
@@ -660,7 +614,6 @@ in :mod:`nova.tests.scheduler`.
.. |SimpleCIDRAffinityFilter| replace:: :class:`SimpleCIDRAffinityFilter <nova.scheduler.filters.affinity_filter.SimpleCIDRAffinityFilter>`
.. |DifferentHostFilter| replace:: :class:`DifferentHostFilter <nova.scheduler.filters.affinity_filter.DifferentHostFilter>`
.. |SameHostFilter| replace:: :class:`SameHostFilter <nova.scheduler.filters.affinity_filter.SameHostFilter>`
.. |RetryFilter| replace:: :class:`RetryFilter <nova.scheduler.filters.retry_filter.RetryFilter>`
.. |AggregateTypeAffinityFilter| replace:: :class:`AggregateTypeAffinityFilter <nova.scheduler.filters.type_filter.AggregateTypeAffinityFilter>`
.. |ServerGroupAntiAffinityFilter| replace:: :class:`ServerGroupAntiAffinityFilter <nova.scheduler.filters.affinity_filter.ServerGroupAntiAffinityFilter>`
.. |ServerGroupAffinityFilter| replace:: :class:`ServerGroupAffinityFilter <nova.scheduler.filters.affinity_filter.ServerGroupAffinityFilter>`

View File

@@ -1,98 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# Copyright (c) 2012 Justin Santa Barbara
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from nova.scheduler import filters
from nova.scheduler.filters import utils
LOG = logging.getLogger(__name__)
class AggregateCoreFilter(filters.BaseHostFilter):
"""DEPRECATED: AggregateCoreFilter with per-aggregate allocation ratio.
Fall back to global cpu_allocation_ratio if no per-aggregate setting found.
"""
RUN_ON_REBUILD = False
def __init__(self):
super(AggregateCoreFilter, self).__init__()
LOG.warning('The AggregateCoreFilter is deprecated since the 20.0.0 '
'Train release. VCPU filtering is performed natively '
'using the Placement service when using the '
'filter_scheduler driver. Operators should define cpu '
'allocation ratios either per host in the nova.conf '
'or via the placement API.')
def _get_cpu_allocation_ratio(self, host_state, spec_obj):
aggregate_vals = utils.aggregate_values_from_key(
host_state,
'cpu_allocation_ratio')
try:
ratio = utils.validate_num_values(
aggregate_vals, host_state.cpu_allocation_ratio, cast_to=float)
except ValueError as e:
LOG.warning("Could not decode cpu_allocation_ratio: '%s'", e)
ratio = host_state.cpu_allocation_ratio
return ratio
def host_passes(self, host_state, spec_obj):
"""Return True if host has sufficient CPU cores.
:param host_state: nova.scheduler.host_manager.HostState
:param spec_obj: filter options
:return: boolean
"""
if not host_state.vcpus_total:
# Fail safe
LOG.warning("VCPUs not set; assuming CPU collection broken")
return True
instance_vcpus = spec_obj.vcpus
cpu_allocation_ratio = self._get_cpu_allocation_ratio(host_state,
spec_obj)
vcpus_total = host_state.vcpus_total * cpu_allocation_ratio
# Only provide a VCPU limit to compute if the virt driver is reporting
# an accurate count of installed VCPUs. (XenServer driver does not)
if vcpus_total > 0:
host_state.limits['vcpu'] = vcpus_total
# Do not allow an instance to overcommit against itself, only
# against other instances.
if instance_vcpus > host_state.vcpus_total:
LOG.debug("%(host_state)s does not have %(instance_vcpus)d "
"total cpus before overcommit, it only has %(cpus)d",
{'host_state': host_state,
'instance_vcpus': instance_vcpus,
'cpus': host_state.vcpus_total})
return False
free_vcpus = vcpus_total - host_state.vcpus_used
if free_vcpus < instance_vcpus:
LOG.debug("%(host_state)s does not have %(instance_vcpus)d "
"usable vcpus, it only has %(free_vcpus)d usable "
"vcpus",
{'host_state': host_state,
'instance_vcpus': instance_vcpus,
'free_vcpus': free_vcpus})
return False
return True
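A worked example of the arithmetic ``host_passes`` applies above, with
illustrative numbers:

    # Illustrative numbers for the checks made by host_passes() above.
    vcpus_total = 4             # physical cores reported by the host
    cpu_allocation_ratio = 2.0  # resolved per-aggregate or global value
    vcpus_used = 6
    instance_vcpus = 3

    limit = vcpus_total * cpu_allocation_ratio  # 8.0 -> limits['vcpu']
    free_vcpus = limit - vcpus_used             # 2.0

    # Check 1: no overcommit against itself -> 3 <= 4, passes.
    # Check 2: enough free capacity -> 3 > 2.0, so the host is rejected.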

View File

@@ -1,95 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from nova.scheduler import filters
from nova.scheduler.filters import utils
LOG = logging.getLogger(__name__)
class AggregateDiskFilter(filters.BaseHostFilter):
"""DEPRECATED: AggregateDiskFilter with per-aggregate disk allocation ratio
Fall back to global disk_allocation_ratio if no per-aggregate setting
found.
"""
RUN_ON_REBUILD = False
def __init__(self):
super(AggregateDiskFilter, self).__init__()
LOG.warning('The AggregateDiskFilter is deprecated since the 20.0.0 '
'Train release. DISK_GB filtering is performed natively '
'using the Placement service when using the '
'filter_scheduler driver. Operators should define disk '
'allocation ratios either per host in the nova.conf '
'or via the placement API.')
def _get_disk_allocation_ratio(self, host_state, spec_obj):
aggregate_vals = utils.aggregate_values_from_key(
host_state,
'disk_allocation_ratio')
try:
ratio = utils.validate_num_values(
aggregate_vals, host_state.disk_allocation_ratio,
cast_to=float)
except ValueError as e:
LOG.warning("Could not decode disk_allocation_ratio: '%s'", e)
ratio = host_state.disk_allocation_ratio
return ratio
def host_passes(self, host_state, spec_obj):
"""Filter based on disk usage."""
requested_disk = (1024 * (spec_obj.root_gb +
spec_obj.ephemeral_gb) +
spec_obj.swap)
free_disk_mb = host_state.free_disk_mb
total_usable_disk_mb = host_state.total_usable_disk_gb * 1024
# Do not allow an instance to overcommit against itself, only against
# other instances. In other words, if there isn't room for even just
# this one instance in total_usable_disk space, consider the host full.
if total_usable_disk_mb < requested_disk:
LOG.debug("%(host_state)s does not have %(requested_disk)s "
"MB usable disk space before overcommit, it only "
"has %(physical_disk_size)s MB.",
{'host_state': host_state,
'requested_disk': requested_disk,
'physical_disk_size':
total_usable_disk_mb})
return False
disk_allocation_ratio = self._get_disk_allocation_ratio(
host_state, spec_obj)
disk_mb_limit = total_usable_disk_mb * disk_allocation_ratio
used_disk_mb = total_usable_disk_mb - free_disk_mb
usable_disk_mb = disk_mb_limit - used_disk_mb
if not usable_disk_mb >= requested_disk:
LOG.debug("%(host_state)s does not have %(requested_disk)s MB "
"usable disk, it only has %(usable_disk_mb)s MB usable "
"disk.", {'host_state': host_state,
'requested_disk': requested_disk,
'usable_disk_mb': usable_disk_mb})
return False
disk_gb_limit = disk_mb_limit / 1024
host_state.limits['disk_gb'] = disk_gb_limit
return True
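Note the unit handling in ``host_passes`` above: flavor root and ephemeral
sizes are in GiB while swap is already expressed in MiB, so only the former
are multiplied by 1024. With illustrative flavor values:

    # Illustrative flavor matching the conversion done in host_passes().
    root_gb, ephemeral_gb, swap_mb = 2, 1, 1024
    requested_disk = 1024 * (root_gb + ephemeral_gb) + swap_mb  # 4096 MiB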

View File

@@ -1,89 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# Copyright (c) 2012 Cloudscaling
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from nova.scheduler import filters
from nova.scheduler.filters import utils
LOG = logging.getLogger(__name__)
class AggregateRamFilter(filters.BaseHostFilter):
"""DEPRECATED: AggregateRamFilter with per-aggregate ram subscription flag.
Fall back to global ram_allocation_ratio if no per-aggregate setting found.
"""
RUN_ON_REBUILD = False
def __init__(self):
super(AggregateRamFilter, self).__init__()
LOG.warning('The AggregateRamFilter is deprecated since the 20.0.0 '
'Train release. MEMORY_MB filtering is performed natively '
'using the Placement service when using the '
'filter_scheduler driver. Operators should define ram '
'allocation ratios either per host in the nova.conf '
'or via the placement API.')
def _get_ram_allocation_ratio(self, host_state, spec_obj):
aggregate_vals = utils.aggregate_values_from_key(
host_state,
'ram_allocation_ratio')
try:
ratio = utils.validate_num_values(
aggregate_vals, host_state.ram_allocation_ratio, cast_to=float)
except ValueError as e:
LOG.warning("Could not decode ram_allocation_ratio: '%s'", e)
ratio = host_state.ram_allocation_ratio
return ratio
def host_passes(self, host_state, spec_obj):
"""Only return hosts with sufficient available RAM."""
requested_ram = spec_obj.memory_mb
free_ram_mb = host_state.free_ram_mb
total_usable_ram_mb = host_state.total_usable_ram_mb
# Do not allow an instance to overcommit against itself, only against
# other instances.
if not total_usable_ram_mb >= requested_ram:
LOG.debug("%(host_state)s does not have %(requested_ram)s MB "
"usable ram before overcommit, it only has "
"%(usable_ram)s MB.",
{'host_state': host_state,
'requested_ram': requested_ram,
'usable_ram': total_usable_ram_mb})
return False
ram_allocation_ratio = self._get_ram_allocation_ratio(host_state,
spec_obj)
memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio
used_ram_mb = total_usable_ram_mb - free_ram_mb
usable_ram = memory_mb_limit - used_ram_mb
if not usable_ram >= requested_ram:
LOG.debug("%(host_state)s does not have %(requested_ram)s MB "
"usable ram, it only has %(usable_ram)s MB usable ram.",
{'host_state': host_state,
'requested_ram': requested_ram,
'usable_ram': usable_ram})
return False
# save oversubscription limit for compute node to test against:
host_state.limits['memory_mb'] = memory_mb_limit
return True

View File

@@ -1,60 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from nova.scheduler import filters
LOG = logging.getLogger(__name__)
class RetryFilter(filters.BaseHostFilter):
"""Filter out nodes that have already been attempted for scheduling
purposes
"""
# NOTE(danms): This does not affect _where_ an instance lands, so not
# related to rebuild.
RUN_ON_REBUILD = False
def __init__(self):
super(RetryFilter, self).__init__()
LOG.warning('The RetryFilter is deprecated since the 20.0.0 Train '
'release. Since the 17.0.0 (Queens) release, the '
'scheduler has provided alternate hosts for rescheduling '
'so the scheduler does not need to be called during a '
'reschedule which makes the RetryFilter useless.')
def host_passes(self, host_state, spec_obj):
"""Skip nodes that have already been attempted."""
retry = spec_obj.retry
if not retry:
return True
# TODO(sbauza): Once the HostState is actually a ComputeNode, we could
# easily get this one...
host = [host_state.host, host_state.nodename]
# TODO(sbauza)... and we wouldn't need to primitive the hosts into
# lists
hosts = [[cn.host, cn.hypervisor_hostname] for cn in retry.hosts]
passes = host not in hosts
if not passes:
LOG.info("Host %(host)s fails. Previously tried hosts: "
"%(hosts)s", {'host': host, 'hosts': hosts})
# Host passes if it's not in the list of previously attempted hosts:
return passes

View File

@@ -1,64 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import objects
from nova.scheduler.filters import core_filter
from nova import test
from nova.tests.unit.scheduler import fakes
class TestAggregateCoreFilter(test.NoDBTestCase):
@mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
def test_aggregate_core_filter_value_error(self, agg_mock):
self.filt_cls = core_filter.AggregateCoreFilter()
spec_obj = objects.RequestSpec(
context=mock.sentinel.ctx, flavor=objects.Flavor(vcpus=1))
host = fakes.FakeHostState('host1', 'node1',
{'vcpus_total': 4, 'vcpus_used': 7,
'cpu_allocation_ratio': 2})
agg_mock.return_value = set(['XXX'])
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
agg_mock.assert_called_once_with(host, 'cpu_allocation_ratio')
self.assertEqual(4 * 2, host.limits['vcpu'])
@mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
def test_aggregate_core_filter_default_value(self, agg_mock):
self.filt_cls = core_filter.AggregateCoreFilter()
spec_obj = objects.RequestSpec(
context=mock.sentinel.ctx, flavor=objects.Flavor(vcpus=1))
host = fakes.FakeHostState('host1', 'node1',
{'vcpus_total': 4, 'vcpus_used': 8,
'cpu_allocation_ratio': 2})
agg_mock.return_value = set([])
# False: fallback to default flag w/o aggregates
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
agg_mock.assert_called_once_with(host, 'cpu_allocation_ratio')
# True: use ratio from aggregates
agg_mock.return_value = set(['3'])
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
self.assertEqual(4 * 3, host.limits['vcpu'])
@mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
def test_aggregate_core_filter_conflict_values(self, agg_mock):
self.filt_cls = core_filter.AggregateCoreFilter()
spec_obj = objects.RequestSpec(
context=mock.sentinel.ctx, flavor=objects.Flavor(vcpus=1))
host = fakes.FakeHostState('host1', 'node1',
{'vcpus_total': 4, 'vcpus_used': 8,
'cpu_allocation_ratio': 1})
agg_mock.return_value = set(['2', '3'])
# use the minimum ratio from aggregates
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
self.assertEqual(4 * 2, host.limits['vcpu'])

View File

@@ -1,55 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import objects
from nova.scheduler.filters import disk_filter
from nova import test
from nova.tests.unit.scheduler import fakes
class TestAggregateDiskFilter(test.NoDBTestCase):
@mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
def test_aggregate_disk_filter_value_error(self, agg_mock):
filt_cls = disk_filter.AggregateDiskFilter()
spec_obj = objects.RequestSpec(
context=mock.sentinel.ctx,
flavor=objects.Flavor(
root_gb=1, ephemeral_gb=1, swap=1024))
host = fakes.FakeHostState('host1', 'node1',
{'free_disk_mb': 3 * 1024,
'total_usable_disk_gb': 4,
'disk_allocation_ratio': 1.0})
agg_mock.return_value = set(['XXX'])
self.assertTrue(filt_cls.host_passes(host, spec_obj))
agg_mock.assert_called_once_with(host, 'disk_allocation_ratio')
@mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
def test_aggregate_disk_filter_default_value(self, agg_mock):
filt_cls = disk_filter.AggregateDiskFilter()
spec_obj = objects.RequestSpec(
context=mock.sentinel.ctx,
flavor=objects.Flavor(
root_gb=2, ephemeral_gb=1, swap=1024))
host = fakes.FakeHostState('host1', 'node1',
{'free_disk_mb': 3 * 1024,
'total_usable_disk_gb': 4,
'disk_allocation_ratio': 1.0})
# Uses global conf.
agg_mock.return_value = set([])
self.assertFalse(filt_cls.host_passes(host, spec_obj))
agg_mock.assert_called_once_with(host, 'disk_allocation_ratio')
agg_mock.return_value = set(['2'])
self.assertTrue(filt_cls.host_passes(host, spec_obj))

View File

@@ -1,64 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import objects
from nova.scheduler.filters import ram_filter
from nova import test
from nova.tests.unit.scheduler import fakes
@mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
class TestAggregateRamFilter(test.NoDBTestCase):
def setUp(self):
super(TestAggregateRamFilter, self).setUp()
self.filt_cls = ram_filter.AggregateRamFilter()
def test_aggregate_ram_filter_value_error(self, agg_mock):
spec_obj = objects.RequestSpec(
context=mock.sentinel.ctx,
flavor=objects.Flavor(memory_mb=1024))
host = fakes.FakeHostState('host1', 'node1',
{'free_ram_mb': 1024, 'total_usable_ram_mb': 1024,
'ram_allocation_ratio': 1.0})
agg_mock.return_value = set(['XXX'])
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
self.assertEqual(1024 * 1.0, host.limits['memory_mb'])
def test_aggregate_ram_filter_default_value(self, agg_mock):
spec_obj = objects.RequestSpec(
context=mock.sentinel.ctx,
flavor=objects.Flavor(memory_mb=1024))
host = fakes.FakeHostState('host1', 'node1',
{'free_ram_mb': 1023, 'total_usable_ram_mb': 1024,
'ram_allocation_ratio': 1.0})
# False: fallback to default flag w/o aggregates
agg_mock.return_value = set()
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
agg_mock.return_value = set(['2.0'])
# True: use ratio from aggregates
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
self.assertEqual(1024 * 2.0, host.limits['memory_mb'])
def test_aggregate_ram_filter_conflict_values(self, agg_mock):
spec_obj = objects.RequestSpec(
context=mock.sentinel.ctx,
flavor=objects.Flavor(memory_mb=1024))
host = fakes.FakeHostState('host1', 'node1',
{'free_ram_mb': 1023, 'total_usable_ram_mb': 1024,
'ram_allocation_ratio': 1.0})
agg_mock.return_value = set(['1.5', '2.0'])
# use the minimum ratio from aggregates
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
self.assertEqual(1024 * 1.5, host.limits['memory_mb'])

View File

@@ -1,56 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import objects
from nova.scheduler.filters import retry_filter
from nova import test
from nova.tests.unit.scheduler import fakes
class TestRetryFilter(test.NoDBTestCase):
def setUp(self):
super(TestRetryFilter, self).setUp()
self.filt_cls = retry_filter.RetryFilter()
self.assertIn('The RetryFilter is deprecated',
self.stdlog.logger.output)
def test_retry_filter_disabled(self):
# Test case where retry/re-scheduling is disabled.
host = fakes.FakeHostState('host1', 'node1', {})
spec_obj = objects.RequestSpec(retry=None)
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_retry_filter_pass(self):
# Node not previously tried.
host = fakes.FakeHostState('host1', 'nodeX', {})
retry = objects.SchedulerRetries(
num_attempts=2,
hosts=objects.ComputeNodeList(objects=[
# same host, different node
objects.ComputeNode(host='host1', hypervisor_hostname='node1'),
# different host and node
objects.ComputeNode(host='host2', hypervisor_hostname='node2'),
]))
spec_obj = objects.RequestSpec(retry=retry)
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_retry_filter_fail(self):
# Node was already tried.
host = fakes.FakeHostState('host1', 'node1', {})
retry = objects.SchedulerRetries(
num_attempts=1,
hosts=objects.ComputeNodeList(objects=[
objects.ComputeNode(host='host1', hypervisor_hostname='node1')
]))
spec_obj = objects.RequestSpec(retry=retry)
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

View File

@@ -0,0 +1,21 @@
---
upgrade:
- |
The following deprecated scheduler filters have been removed:

``RetryFilter``
  Deprecated in Train (20.0.0). The ``RetryFilter`` has not been
  required since Queens, following the completion of the
  `return-alternate-hosts` blueprint.

``AggregateCoreFilter``, ``AggregateRamFilter``, ``AggregateDiskFilter``
  Deprecated in Train (20.0.0). These filters have not worked
  correctly since the introduction of placement in Ocata.

On upgrade, operators should ensure they have not configured any of the
now-removed filters and should instead use placement to control CPU, RAM
and disk allocation ratios.
Refer to the `config reference documentation`__ for more information.
.. __: https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#allocation-ratios