Merge "Remove deprecated Core/Ram/DiskFilter"
commit ab34c941be
@@ -2378,14 +2378,16 @@ disabled_reason_body:
disk_available_least:
  description: |
    The actual free disk on this hypervisor (in GiB). If allocation ratios used
    for overcommit are configured, this may be negative.
    for overcommit are configured, this may be negative. This is intentional as
    it provides insight into the amount by which the disk is overcommitted.
  in: body
  required: true
  type: integer
disk_available_least_total:
  description: |
    The actual free disk on all hypervisors (in GiB). If allocation ratios used
    for overcommit are configured, this may be negative.
    for overcommit are configured, this may be negative. This is intentional as
    it provides insight into the amount by which the disk is overcommitted.
  in: body
  required: true
  type: integer
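The negative-value behaviour described above can be seen with a toy calculation. The numbers below are made up purely for illustration and are not part of this change:

.. code-block:: python

    # Hypothetical host: 100 GiB of local disk, where instances have been
    # allocated 130 GiB of virtual disk but only 40 GiB is physically
    # written (sparse / copy-on-write images).
    local_gb = 100
    local_gb_used = 40
    virtual_gb_allocated = 130

    free_disk_gb = local_gb - local_gb_used                  # 60
    disk_available_least = local_gb - virtual_gb_allocated   # -30

    # A negative disk_available_least means the host is overcommitted by
    # that amount (here, 30 GiB).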
@@ -156,7 +156,7 @@ Filters host by CPU core numbers with a per-aggregate ``cpu_allocation_ratio``
value. If the per-aggregate value is not found, the value falls back to the
global setting. If the host is in more than one aggregate and more than one
value is found, the minimum value will be used. For information about how to
use this filter, see :ref:`host-aggregates`. See also :ref:`CoreFilter`.
use this filter, see :ref:`host-aggregates`.

Note the ``cpu_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>` restriction.
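The fallback-and-minimum behaviour that this and the following aggregate filters document can be summarised with a small sketch. This is an illustration only, not the nova implementation (the real logic is in the filter code further down in this change):

.. code-block:: python

    def pick_allocation_ratio(aggregate_values, global_ratio):
        """Per-aggregate metadata wins; with several aggregates the minimum
        value is used; with none, fall back to the global option."""
        ratios = [float(v) for v in aggregate_values]
        return min(ratios) if ratios else global_ratio

    # Host in two aggregates with cpu_allocation_ratio 4.0 and 8.0:
    pick_allocation_ratio(['4.0', '8.0'], 16.0)   # -> 4.0
    # Host in no aggregate, or with no per-aggregate key set:
    pick_allocation_ratio([], 16.0)               # -> 16.0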
@@ -167,7 +167,7 @@ Filters host by disk allocation with a per-aggregate ``disk_allocation_ratio``
value. If the per-aggregate value is not found, the value falls back to the
global setting. If the host is in more than one aggregate and more than one
value is found, the minimum value will be used. For information about how to
use this filter, see :ref:`host-aggregates`. See also :ref:`DiskFilter`.
use this filter, see :ref:`host-aggregates`.

Note the ``disk_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>`
restriction.
@@ -302,7 +302,7 @@ Filters host by RAM allocation of instances with a per-aggregate
value falls back to the global setting. If the host is in more than one
aggregate and thus more than one value is found, the minimum value will be
used. For information about how to use this filter, see
:ref:`host-aggregates`. See also :ref:`ramfilter`.
:ref:`host-aggregates`.

Note the ``ram_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>` restriction.
@@ -364,50 +364,6 @@ Passes all hosts that are operational and enabled.

In general, you should always enable this filter.

.. _CoreFilter:

CoreFilter
----------

.. deprecated:: 19.0.0

   ``CoreFilter`` is deprecated since the 19.0.0 Stein release. VCPU
   filtering is performed natively using the Placement service when using the
   ``filter_scheduler`` driver. Furthermore, enabling CoreFilter may
   incorrectly filter out `baremetal nodes`_ which must be scheduled using
   custom resource classes.

Only schedules instances on hosts if sufficient CPU cores are available. If
this filter is not set, the scheduler might over-provision a host based on
cores. For example, the virtual cores running on an instance may exceed the
physical cores.

You can configure this filter to enable a fixed amount of vCPU overcommitment
by using the ``cpu_allocation_ratio`` configuration option in ``nova.conf``.
The default setting is:

.. code-block:: ini

   cpu_allocation_ratio = 16.0

With this setting, if 8 vCPUs are on a node, the scheduler allows instances up
to 128 vCPU to be run on that node.
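The arithmetic behind that example is simply the product of the physical core count and the ratio; an illustrative snippet using the values quoted above:

.. code-block:: python

    vcpus_total = 8
    cpu_allocation_ratio = 16.0

    # Maximum virtual CPUs the scheduler would place on this node.
    vcpu_limit = vcpus_total * cpu_allocation_ratio   # 128.0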
To disallow vCPU overcommitment set:

.. code-block:: ini

   cpu_allocation_ratio = 1.0

.. note::

   The Compute API always returns the actual number of CPU cores available on a
   compute node regardless of the value of the ``cpu_allocation_ratio``
   configuration key. As a result changes to the ``cpu_allocation_ratio`` are
   not reflected via the command line clients or the dashboard. Changes to
   this configuration key are only taken into account internally in the
   scheduler.

DifferentHostFilter
-------------------
@@ -442,83 +398,6 @@ With the API, use the ``os:scheduler_hints`` key. For example:
       }
   }

.. _DiskFilter:

DiskFilter
----------

.. deprecated:: 19.0.0

   ``DiskFilter`` is deprecated since the 19.0.0 Stein release. DISK_GB
   filtering is performed natively using the Placement service when using the
   ``filter_scheduler`` driver. Furthermore, enabling DiskFilter may
   incorrectly filter out `baremetal nodes`_ which must be scheduled using
   custom resource classes.

Only schedules instances on hosts if there is sufficient disk space available
for root and ephemeral storage.

You can configure this filter to enable a fixed amount of disk overcommitment
by using the ``disk_allocation_ratio`` configuration option in the
``nova.conf`` configuration file. The default setting disables the possibility
of the overcommitment and allows launching a VM only if there is a sufficient
amount of disk space available on a host:

.. code-block:: ini

   disk_allocation_ratio = 1.0

DiskFilter always considers the value of the ``disk_available_least`` property
and not the one of the ``free_disk_gb`` property of a hypervisor's statistics:

.. code-block:: console

   $ openstack hypervisor stats show
   +----------------------+-------+
   | Field                | Value |
   +----------------------+-------+
   | count                | 1     |
   | current_workload     | 0     |
   | disk_available_least | 14    |
   | free_disk_gb         | 27    |
   | free_ram_mb          | 15374 |
   | local_gb             | 27    |
   | local_gb_used        | 0     |
   | memory_mb            | 15886 |
   | memory_mb_used       | 512   |
   | running_vms          | 0     |
   | vcpus                | 8     |
   | vcpus_used           | 0     |
   +----------------------+-------+

As can be seen from the command output above, the amount of available disk
space can be less than the amount of free disk space. This happens because
the ``disk_available_least`` property accounts for the virtual size rather
than the actual size of images. If you use an image format that is sparse or
copy on write, so that each virtual instance does not require a 1:1 allocation
of virtual disk to physical storage, it may be useful to allow the
overcommitment of disk space.

When disk space is overcommitted, the value of ``disk_available_least`` can
be negative. Rather than rounding up to 0, the original negative value is
reported, as this way a user can see the amount by which they are
overcommitting, and the disk weigher can select a host which is less
overcommitted than another host.
To enable scheduling instances while overcommitting disk resources on the node,
adjust the value of the ``disk_allocation_ratio`` configuration option to
greater than ``1.0``:

.. code-block:: none

   disk_allocation_ratio > 1.0

.. note::

   If the value is set to ``>1``, we recommend keeping track of the free disk
   space, as a value approaching ``0`` may result in incorrect functioning of
   the instances that are using the disk at that moment.

.. _ImagePropertiesFilter:

ImagePropertiesFilter
@@ -709,37 +588,6 @@ PciPassthroughFilter
The filter schedules instances on a host if the host has devices that meet the
device requests in the ``extra_specs`` attribute for the flavor.

.. _RamFilter:

RamFilter
---------

.. deprecated:: 19.0.0

   ``RamFilter`` is deprecated since the 19.0.0 Stein release. MEMORY_MB
   filtering is performed natively using the Placement service when using the
   ``filter_scheduler`` driver. Furthermore, enabling RamFilter may
   incorrectly filter out `baremetal nodes`_ which must be scheduled using
   custom resource classes.

.. _baremetal nodes: https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html

Only schedules instances on hosts that have sufficient RAM available. If this
filter is not set, the scheduler may over-provision a host based on RAM (for
example, the RAM allocated by virtual machine instances may exceed the physical
RAM).

You can configure this filter to enable a fixed amount of RAM overcommitment by
using the ``ram_allocation_ratio`` configuration option in ``nova.conf``. The
default setting is:

.. code-block:: ini

   ram_allocation_ratio = 1.5

This setting enables 1.5 GB instances to run on any compute node with 1 GB of
free RAM.
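The same arithmetic applies to RAM; an illustrative snippet using the default ratio quoted above:

.. code-block:: python

    total_usable_ram_mb = 1024
    ram_allocation_ratio = 1.5

    # Maximum RAM the scheduler would allocate on this node.
    memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio   # 1536.0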
RetryFilter
-----------

@@ -1436,10 +1284,6 @@ The allocation ratio configuration is used both during reporting of compute
node `resource provider inventory`_ to the placement service and during
scheduling.

The (deprecated) `CoreFilter`_, `DiskFilter`_ and `RamFilter`_ filters will use
the allocation ratio from the compute node directly when calculating available
capacity on a given node during scheduling.

The `AggregateCoreFilter`_, `AggregateDiskFilter`_ and `AggregateRamFilter`_
filters allow overriding per-compute allocation ratios by setting an allocation
ratio value using host aggregate metadata. This provides a convenient way to
@@ -1508,6 +1352,41 @@ here.

.. _osc-placement: https://docs.openstack.org/osc-placement/latest/index.html

.. _hypervisor-specific-considerations:

Hypervisor-specific considerations
----------------------------------

Nova provides three configuration options,
:oslo.config:option:`reserved_host_cpus`,
:oslo.config:option:`reserved_host_memory_mb`, and
:oslo.config:option:`reserved_host_disk_mb`, that can be used to set aside some
number of resources that will not be consumed by an instance, whether these
resources are overcommitted or not. Some virt drivers may benefit from the use
of these options to account for hypervisor-specific overhead.

HyperV
   Hyper-V creates a VM memory file on the local disk when an instance starts.
   The size of this file corresponds to the amount of RAM allocated to the
   instance.

   You should configure the
   :oslo.config:option:`reserved_host_disk_mb` config option to
   account for this overhead, based on the amount of memory available
   to instances.

XenAPI
   XenServer memory overhead is proportional to the size of the VM and larger
   flavor VMs become more efficient with respect to overhead. This overhead
   can be calculated using the following formula::

      overhead (MB) = (instance.memory * 0.00781) + (instance.vcpus * 1.5) + 3

   You should configure the
   :oslo.config:option:`reserved_host_memory_mb` config option to
   account for this overhead, based on the size of your hosts and
   instances.
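For a concrete sense of the formula above, a small helper can compute the overhead for a given flavor; this is illustrative only and uses exactly the constants quoted in the formula:

.. code-block:: python

    def xenapi_memory_overhead_mb(memory_mb, vcpus):
        # overhead (MB) = (instance.memory * 0.00781) + (instance.vcpus * 1.5) + 3
        return (memory_mb * 0.00781) + (vcpus * 1.5) + 3

    # e.g. a 4096 MB, 2 vCPU flavor carries roughly 38 MB of overhead.
    print(xenapi_memory_overhead_mb(4096, 2))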
Cells considerations
~~~~~~~~~~~~~~~~~~~~

@@ -184,11 +184,8 @@ Using fake computes for tests

The number of instances supported by fake computes is not limited by physical
constraints. It allows you to perform stress tests on a deployment with few
resources (typically a laptop). But you must avoid using scheduler filters
limiting the number of instances per compute (like RamFilter, DiskFilter,
AggregateCoreFilter), otherwise they will limit the number of instances per
compute.

resources (typically a laptop). Take care to avoid using scheduler filters
that will limit the number of instances per compute, such as ``AggregateCoreFilter``.

Fake computes can also be used in multi hypervisor-type deployments in order to
take advantage of fake and "real" computes during tests:
@@ -95,8 +95,6 @@ There are many standard filter classes which may be used
  use a comma. E.g., "value1,value2". All hosts are passed if no extra_specs
  are specified.
* |ComputeFilter| - passes all hosts that are operational and enabled.
* |CoreFilter| - DEPRECATED; filters based on CPU core utilization. It passes
  hosts with sufficient number of CPU cores.
* |AggregateCoreFilter| - filters hosts by CPU core number with per-aggregate
  :oslo.config:option:`cpu_allocation_ratio` setting. If no
  per-aggregate value is found, it will fall back to the global default
@@ -110,8 +108,6 @@ There are many standard filter classes which may be used
  and :oslo.config:option:`filter_scheduler.restrict_isolated_hosts_to_isolated_images`
  flags.
* |JsonFilter| - allows simple JSON-based grammar for selecting hosts.
* |RamFilter| - DEPRECATED; filters hosts by their RAM. Only hosts with
  sufficient RAM to host the instance are passed.
* |AggregateRamFilter| - filters hosts by RAM with per-aggregate
  :oslo.config:option:`ram_allocation_ratio` setting. If no per-aggregate value
  is found, it will fall back to the global default
@@ -119,11 +115,6 @@ There are many standard filter classes which may be used
  If more than one value is found for a host (meaning the host is in two
  different aggregates with different ratio settings), the minimum value
  will be used.
* |DiskFilter| - DEPRECATED; filters hosts by their disk allocation. Only
  hosts with sufficient disk space to host the instance are passed.
  :oslo.config:option:`disk_allocation_ratio` setting. The virtual disk to
  physical disk allocation ratio, 1.0 by default. The total allowed allocated
  disk size will be physical disk multiplied this ratio.
* |AggregateDiskFilter| - filters hosts by disk allocation with per-aggregate
  :oslo.config:option:`disk_allocation_ratio` setting. If no per-aggregate value
  is found, it will fall back to the global default
@@ -356,8 +347,8 @@ of :oslo.config:option:`filter_scheduler.enabled_filters` affects scheduling
performance. The general suggestion is to filter out invalid hosts as soon as
possible to avoid unnecessary costs. We can sort
:oslo.config:option:`filter_scheduler.enabled_filters`
items by their costs in reverse order. For example, ComputeFilter is better
before any resource calculating filters like RamFilter, CoreFilter.
items by their costs in reverse order. For example, ``ComputeFilter`` is better
before any resource calculating filters like ``NUMATopologyFilter``.

In medium/large environments having AvailabilityZoneFilter before any
capability or resource calculating filters can be useful.
@@ -389,7 +380,7 @@ settings:
   --scheduler.driver=nova.scheduler.FilterScheduler
   --filter_scheduler.available_filters=nova.scheduler.filters.all_filters
   --filter_scheduler.available_filters=myfilter.MyFilter
   --filter_scheduler.enabled_filters=RamFilter,ComputeFilter,MyFilter
   --filter_scheduler.enabled_filters=ComputeFilter,MyFilter

.. note:: When writing your own filter, be sure to add it to the list of available filters
   and enable it in the default filters. The "all_filters" setting only includes the
@@ -397,7 +388,7 @@ settings:

With these settings, nova will use the ``FilterScheduler`` for the scheduler
driver. All of the standard nova filters and MyFilter are available to the
FilterScheduler, but just the RamFilter, ComputeFilter, and MyFilter will be
FilterScheduler, but just the ``ComputeFilter`` and ``MyFilter`` will be
used on each request.
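The settings above assume a custom filter shipped as ``myfilter.MyFilter``. A minimal sketch of such a filter is shown below; it is illustrative only, and the only requirements taken from the surrounding text are subclassing ``BaseHostFilter`` and implementing ``host_passes``:

.. code-block:: python

    from nova.scheduler import filters


    class MyFilter(filters.BaseHostFilter):
        """Toy example: reject hosts whose name starts with 'dev-'."""

        # Match the convention used by the in-tree filters in this change.
        RUN_ON_REBUILD = False

        def host_passes(self, host_state, spec_obj):
            return not host_state.host.startswith('dev-')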
Weights

@@ -558,13 +549,10 @@ in :mod:`nova.tests.scheduler`.
.. |BaseHostFilter| replace:: :class:`BaseHostFilter <nova.scheduler.filters.BaseHostFilter>`
.. |ComputeCapabilitiesFilter| replace:: :class:`ComputeCapabilitiesFilter <nova.scheduler.filters.compute_capabilities_filter.ComputeCapabilitiesFilter>`
.. |ComputeFilter| replace:: :class:`ComputeFilter <nova.scheduler.filters.compute_filter.ComputeFilter>`
.. |CoreFilter| replace:: :class:`CoreFilter <nova.scheduler.filters.core_filter.CoreFilter>`
.. |AggregateCoreFilter| replace:: :class:`AggregateCoreFilter <nova.scheduler.filters.core_filter.AggregateCoreFilter>`
.. |IsolatedHostsFilter| replace:: :class:`IsolatedHostsFilter <nova.scheduler.filters.isolated_hosts_filter>`
.. |JsonFilter| replace:: :class:`JsonFilter <nova.scheduler.filters.json_filter.JsonFilter>`
.. |RamFilter| replace:: :class:`RamFilter <nova.scheduler.filters.ram_filter.RamFilter>`
.. |AggregateRamFilter| replace:: :class:`AggregateRamFilter <nova.scheduler.filters.ram_filter.AggregateRamFilter>`
.. |DiskFilter| replace:: :class:`DiskFilter <nova.scheduler.filters.disk_filter.DiskFilter>`
.. |AggregateDiskFilter| replace:: :class:`AggregateDiskFilter <nova.scheduler.filters.disk_filter.AggregateDiskFilter>`
.. |NumInstancesFilter| replace:: :class:`NumInstancesFilter <nova.scheduler.filters.num_instances_filter.NumInstancesFilter>`
.. |AggregateNumInstancesFilter| replace:: :class:`AggregateNumInstancesFilter <nova.scheduler.filters.num_instances_filter.AggregateNumInstancesFilter>`
@@ -387,23 +387,20 @@ allocation_ratio_opts = [
        default=None,
        min=0.0,
        help="""
This option helps you specify virtual CPU to physical CPU allocation ratio.
Virtual CPU to physical CPU allocation ratio.

From Ocata (15.0.0) this is used to influence the hosts selected by
the Placement API. Note that when Placement is used, the CoreFilter
is redundant, because the Placement API will have already filtered
out hosts that would have failed the CoreFilter.
This option is used to influence the hosts selected by the Placement API. In
addition, the ``AggregateCoreFilter`` will fall back to this configuration
value if no per-aggregate setting is found.

This configuration specifies ratio for CoreFilter which can be set
per compute node. For AggregateCoreFilter, it will fall back to this
configuration value if no per-aggregate setting is found.
.. note::

NOTE: If this option is set to something *other than* ``None`` or ``0.0``, the
allocation ratio will be overwritten by the value of this option, otherwise,
the allocation ratio will not change. Once set to a non-default value, it is
not possible to "unset" the config to get back to the default behavior. If you
want to reset back to the initial value, explicitly specify it to the value of
``initial_cpu_allocation_ratio``.
   If this option is set to something *other than* ``None`` or ``0.0``, the
   allocation ratio will be overwritten by the value of this option, otherwise,
   the allocation ratio will not change. Once set to a non-default value, it is
   not possible to "unset" the config to get back to the default behavior. If
   you want to reset back to the initial value, explicitly specify it to the
   value of ``initial_cpu_allocation_ratio``.

Possible values:
@@ -417,24 +414,20 @@ Related options:
        default=None,
        min=0.0,
        help="""
This option helps you specify virtual RAM to physical RAM
allocation ratio.
Virtual RAM to physical RAM allocation ratio.

From Ocata (15.0.0) this is used to influence the hosts selected by
the Placement API. Note that when Placement is used, the RamFilter
is redundant, because the Placement API will have already filtered
out hosts that would have failed the RamFilter.
This option is used to influence the hosts selected by the Placement API. In
addition, the ``AggregateRamFilter`` will fall back to this configuration value
if no per-aggregate setting is found.

This configuration specifies ratio for RamFilter which can be set
per compute node. For AggregateRamFilter, it will fall back to this
configuration value if no per-aggregate setting found.
.. note::

NOTE: If this option is set to something *other than* ``None`` or ``0.0``, the
allocation ratio will be overwritten by the value of this option, otherwise,
the allocation ratio will not change. Once set to a non-default value, it is
not possible to "unset" the config to get back to the default behavior. If you
want to reset back to the initial value, explicitly specify it to the value of
``initial_ram_allocation_ratio``.
   If this option is set to something *other than* ``None`` or ``0.0``, the
   allocation ratio will be overwritten by the value of this option, otherwise,
   the allocation ratio will not change. Once set to a non-default value, it is
   not possible to "unset" the config to get back to the default behavior. If
   you want to reset back to the initial value, explicitly specify it to the
   value of ``initial_ram_allocation_ratio``.

Possible values:
@@ -448,28 +441,32 @@ Related options:
        default=None,
        min=0.0,
        help="""
This option helps you specify virtual disk to physical disk
allocation ratio.
Virtual disk to physical disk allocation ratio.

From Ocata (15.0.0) this is used to influence the hosts selected by
the Placement API. Note that when Placement is used, the DiskFilter
is redundant, because the Placement API will have already filtered
out hosts that would have failed the DiskFilter.
This option is used to influence the hosts selected by the Placement API. In
addition, the ``AggregateDiskFilter`` will fall back to this configuration
value if no per-aggregate setting is found.

A ratio greater than 1.0 will result in over-subscription of the
available physical disk, which can be useful for more
efficiently packing instances created with images that do not
use the entire virtual disk, such as sparse or compressed
images. It can be set to a value between 0.0 and 1.0 in order
to preserve a percentage of the disk for uses other than
instances.
When configured, a ratio greater than 1.0 will result in over-subscription of
the available physical disk, which can be useful for more efficiently packing
instances created with images that do not use the entire virtual disk, such as
sparse or compressed images. It can be set to a value between 0.0 and 1.0 in
order to preserve a percentage of the disk for uses other than instances.

NOTE: If this option is set to something *other than* ``None`` or ``0.0``, the
allocation ratio will be overwritten by the value of this option, otherwise,
the allocation ratio will not change. Once set to a non-default value, it is
not possible to "unset" the config to get back to the default behavior. If you
want to reset back to the initial value, explicitly specify it to the value of
``initial_disk_allocation_ratio``.
.. note::

   If the value is set to ``>1``, we recommend keeping track of the free disk
   space, as the value approaching ``0`` may result in the incorrect
   functioning of instances using it at the moment.

.. note::

   If this option is set to something *other than* ``None`` or ``0.0``, the
   allocation ratio will be overwritten by the value of this option, otherwise,
   the allocation ratio will not change. Once set to a non-default value, it is
   not possible to "unset" the config to get back to the default behavior. If
   you want to reset back to the initial value, explicitly specify it to the
   value of ``initial_disk_allocation_ratio``.

Possible values:
@@ -483,8 +480,7 @@ Related options:
        default=16.0,
        min=0.0,
        help="""
This option helps you specify initial virtual CPU to physical CPU allocation
ratio.
Initial virtual CPU to physical CPU allocation ratio.

This is only used when initially creating the ``computes_nodes`` table record
for a given nova-compute service.
@@ -500,8 +496,7 @@ Related options:
        default=1.5,
        min=0.0,
        help="""
This option helps you specify initial virtual RAM to physical RAM allocation
ratio.
Initial virtual RAM to physical RAM allocation ratio.

This is only used when initially creating the ``computes_nodes`` table record
for a given nova-compute service.
@@ -517,8 +512,7 @@ Related options:
        default=1.0,
        min=0.0,
        help="""
This option helps you specify initial virtual disk to physical disk allocation
ratio.
Initial virtual disk to physical disk allocation ratio.

This is only used when initially creating the ``computes_nodes`` table record
for a given nova-compute service.
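The ``*_allocation_ratio`` and ``initial_*_allocation_ratio`` help texts above describe a precedence that can be summarised in a short sketch. This is purely an illustration of the documented behaviour, not nova's actual code:

.. code-block:: python

    def effective_allocation_ratio(option_value, current_value, initial_value):
        # A value other than None/0.0 overrides the ratio; None or 0.0 leaves
        # the existing ratio unchanged; the initial_* option only applies when
        # the compute node record is first created.
        if option_value not in (None, 0.0):
            return option_value
        return current_value if current_value is not None else initial_value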
@@ -23,12 +23,26 @@ from nova.scheduler.filters import utils
LOG = logging.getLogger(__name__)


class BaseCoreFilter(filters.BaseHostFilter):
class AggregateCoreFilter(filters.BaseHostFilter):
    """AggregateCoreFilter with per-aggregate CPU subscription flag.

    Fall back to global cpu_allocation_ratio if no per-aggregate setting found.
    """

    RUN_ON_REBUILD = False

    def _get_cpu_allocation_ratio(self, host_state, spec_obj):
        raise NotImplementedError
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'cpu_allocation_ratio')
        try:
            ratio = utils.validate_num_values(
                aggregate_vals, host_state.cpu_allocation_ratio, cast_to=float)
        except ValueError as e:
            LOG.warning("Could not decode cpu_allocation_ratio: '%s'", e)
            ratio = host_state.cpu_allocation_ratio

        return ratio

    def host_passes(self, host_state, spec_obj):
        """Return True if host has sufficient CPU cores.
@@ -73,39 +87,3 @@ class BaseCoreFilter(filters.BaseHostFilter):
            return False

        return True


class CoreFilter(BaseCoreFilter):
    """DEPRECATED: CoreFilter filters based on CPU core utilization."""

    def __init__(self):
        super(CoreFilter, self).__init__()
        LOG.warning('The CoreFilter is deprecated since the 19.0.0 Stein '
                    'release. VCPU filtering is performed natively using the '
                    'Placement service when using the filter_scheduler '
                    'driver. Furthermore, enabling CoreFilter '
                    'may incorrectly filter out baremetal nodes which must be '
                    'scheduled using custom resource classes.')

    def _get_cpu_allocation_ratio(self, host_state, spec_obj):
        return host_state.cpu_allocation_ratio


class AggregateCoreFilter(BaseCoreFilter):
    """AggregateCoreFilter with per-aggregate CPU subscription flag.

    Fall back to global cpu_allocation_ratio if no per-aggregate setting found.
    """

    def _get_cpu_allocation_ratio(self, host_state, spec_obj):
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'cpu_allocation_ratio')
        try:
            ratio = utils.validate_num_values(
                aggregate_vals, host_state.cpu_allocation_ratio, cast_to=float)
        except ValueError as e:
            LOG.warning("Could not decode cpu_allocation_ratio: '%s'", e)
            ratio = host_state.cpu_allocation_ratio

        return ratio
@@ -21,25 +21,28 @@ from nova.scheduler.filters import utils
LOG = logging.getLogger(__name__)


class DiskFilter(filters.BaseHostFilter):
    """DEPRECATED: Disk Filter with over subscription flag."""
class AggregateDiskFilter(filters.BaseHostFilter):
    """AggregateDiskFilter with per-aggregate disk allocation ratio flag.

    Fall back to global disk_allocation_ratio if no per-aggregate setting
    found.
    """

    RUN_ON_REBUILD = False
    DEPRECATED = True

    def __init__(self):
        super(DiskFilter, self).__init__()
        if self.DEPRECATED:
            LOG.warning('The DiskFilter is deprecated since the 19.0.0 Stein '
                        'release. DISK_GB filtering is performed natively '
                        'using the Placement service when using the '
                        'filter_scheduler driver. Furthermore, enabling '
                        'DiskFilter may incorrectly filter out baremetal '
                        'nodes which must be scheduled using custom resource '
                        'classes.')

    def _get_disk_allocation_ratio(self, host_state, spec_obj):
        return host_state.disk_allocation_ratio
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'disk_allocation_ratio')
        try:
            ratio = utils.validate_num_values(
                aggregate_vals, host_state.disk_allocation_ratio,
                cast_to=float)
        except ValueError as e:
            LOG.warning("Could not decode disk_allocation_ratio: '%s'", e)
            ratio = host_state.disk_allocation_ratio

        return ratio

    def host_passes(self, host_state, spec_obj):
        """Filter based on disk usage."""
@@ -81,28 +84,3 @@ class DiskFilter(filters.BaseHostFilter):
        disk_gb_limit = disk_mb_limit / 1024
        host_state.limits['disk_gb'] = disk_gb_limit
        return True


class AggregateDiskFilter(DiskFilter):
    """AggregateDiskFilter with per-aggregate disk allocation ratio flag.

    Fall back to global disk_allocation_ratio if no per-aggregate setting
    found.
    """

    RUN_ON_REBUILD = False
    DEPRECATED = False

    def _get_disk_allocation_ratio(self, host_state, spec_obj):
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'disk_allocation_ratio')
        try:
            ratio = utils.validate_num_values(
                aggregate_vals, host_state.disk_allocation_ratio,
                cast_to=float)
        except ValueError as e:
            LOG.warning("Could not decode disk_allocation_ratio: '%s'", e)
            ratio = host_state.disk_allocation_ratio

        return ratio
@@ -22,12 +22,27 @@ from nova.scheduler.filters import utils
LOG = logging.getLogger(__name__)


class BaseRamFilter(filters.BaseHostFilter):
class AggregateRamFilter(filters.BaseHostFilter):
    """AggregateRamFilter with per-aggregate ram subscription flag.

    Fall back to global ram_allocation_ratio if no per-aggregate setting found.
    """

    RUN_ON_REBUILD = False

    def _get_ram_allocation_ratio(self, host_state, spec_obj):
        raise NotImplementedError
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'ram_allocation_ratio')

        try:
            ratio = utils.validate_num_values(
                aggregate_vals, host_state.ram_allocation_ratio, cast_to=float)
        except ValueError as e:
            LOG.warning("Could not decode ram_allocation_ratio: '%s'", e)
            ratio = host_state.ram_allocation_ratio

        return ratio

    def host_passes(self, host_state, spec_obj):
        """Only return hosts with sufficient available RAM."""
@@ -63,40 +78,3 @@ class BaseRamFilter(filters.BaseHostFilter):
        # save oversubscription limit for compute node to test against:
        host_state.limits['memory_mb'] = memory_mb_limit
        return True


class RamFilter(BaseRamFilter):
    """Ram Filter with over subscription flag."""

    def __init__(self):
        super(RamFilter, self).__init__()
        LOG.warning('The RamFilter is deprecated since the 19.0.0 Stein '
                    'release. MEMORY_MB filtering is performed natively '
                    'using the Placement service when using the '
                    'filter_scheduler driver. Furthermore, enabling RamFilter '
                    'may incorrectly filter out baremetal nodes which must be '
                    'scheduled using custom resource classes.')

    def _get_ram_allocation_ratio(self, host_state, spec_obj):
        return host_state.ram_allocation_ratio


class AggregateRamFilter(BaseRamFilter):
    """AggregateRamFilter with per-aggregate ram subscription flag.

    Fall back to global ram_allocation_ratio if no per-aggregate setting found.
    """

    def _get_ram_allocation_ratio(self, host_state, spec_obj):
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'ram_allocation_ratio')

        try:
            ratio = utils.validate_num_values(
                aggregate_vals, host_state.ram_allocation_ratio, cast_to=float)
        except ValueError as e:
            LOG.warning("Could not decode ram_allocation_ratio: '%s'", e)
            ratio = host_state.ram_allocation_ratio

        return ratio
@@ -2407,9 +2407,7 @@ class ServerMovingTests(integrated_helpers.ProviderUsageBaseTestCase):
        # to make the claim fail, by doing something like returning a too high
        # memory_mb overhead, but the limits dict passed to the claim is empty
        # so the claim test is considering it as unlimited and never actually
        # performs a claim test. Configuring the scheduler to use the RamFilter
        # to get the memory_mb limit at least seems like it should work but
        # it doesn't appear to for some reason...
        # performs a claim test.
        def fake_move_claim(*args, **kwargs):
            # Assert the destination node allocation exists.
            self.assertFlavorMatchesUsage(dest_rp_uuid, self.flavor1)
@@ -18,37 +18,7 @@ from nova import test
from nova.tests.unit.scheduler import fakes


class TestCoreFilter(test.NoDBTestCase):

    def test_core_filter_passes(self):
        self.filt_cls = core_filter.CoreFilter()
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1',
                {'vcpus_total': 4, 'vcpus_used': 7,
                 'cpu_allocation_ratio': 2})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_core_filter_fails_safe(self):
        self.filt_cls = core_filter.CoreFilter()
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_core_filter_fails(self):
        self.filt_cls = core_filter.CoreFilter()
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1',
                {'vcpus_total': 4, 'vcpus_used': 8,
                 'cpu_allocation_ratio': 2})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_core_filter_single_instance_overcommit_fails(self):
        self.filt_cls = core_filter.CoreFilter()
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=2))
        host = fakes.FakeHostState('host1', 'node1',
                {'vcpus_total': 1, 'vcpus_used': 0,
                 'cpu_allocation_ratio': 2})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
class TestAggregateCoreFilter(test.NoDBTestCase):

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_core_filter_value_error(self, agg_mock):
@@ -18,63 +18,7 @@ from nova import test
from nova.tests.unit.scheduler import fakes


class TestDiskFilter(test.NoDBTestCase):

    def test_disk_filter_passes(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(root_gb=1, ephemeral_gb=1, swap=512))
        host = fakes.FakeHostState('host1', 'node1',
            {'free_disk_mb': 11 * 1024, 'total_usable_disk_gb': 13,
             'disk_allocation_ratio': 1.0})
        self.assertTrue(filt_cls.host_passes(host, spec_obj))

    def test_disk_filter_fails(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(
                root_gb=10, ephemeral_gb=1, swap=1024))
        host = fakes.FakeHostState('host1', 'node1',
            {'free_disk_mb': 11 * 1024, 'total_usable_disk_gb': 13,
             'disk_allocation_ratio': 1.0})
        self.assertFalse(filt_cls.host_passes(host, spec_obj))

    def test_disk_filter_oversubscribe(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(
                root_gb=3, ephemeral_gb=3, swap=1024))
        # Only 1Gb left, but with 10x overprovision a 7Gb instance should
        # still fit. Schedule will succeed.
        host = fakes.FakeHostState('host1', 'node1',
            {'free_disk_mb': 1 * 1024, 'total_usable_disk_gb': 12,
             'disk_allocation_ratio': 10.0})
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        self.assertEqual(12 * 10.0, host.limits['disk_gb'])

    def test_disk_filter_oversubscribe_single_instance_fails(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(
                root_gb=10, ephemeral_gb=2, swap=1024))
        # According to the allocation ratio, This host has 119 Gb left,
        # but it doesn't matter because the requested instance is
        # bigger than the whole drive. Schedule will fail.
        host = fakes.FakeHostState('host1', 'node1',
            {'free_disk_mb': 11 * 1024, 'total_usable_disk_gb': 12,
             'disk_allocation_ratio': 10.0})
        self.assertFalse(filt_cls.host_passes(host, spec_obj))

    def test_disk_filter_oversubscribe_fail(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(
                root_gb=100, ephemeral_gb=19, swap=1024))
        # 1GB used... so 119GB allowed...
        host = fakes.FakeHostState('host1', 'node1',
            {'free_disk_mb': 11 * 1024, 'total_usable_disk_gb': 12,
             'disk_allocation_ratio': 10.0})
        self.assertFalse(filt_cls.host_passes(host, spec_obj))
class TestAggregateDiskFilter(test.NoDBTestCase):

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_disk_filter_value_error(self, agg_mock):
@@ -18,46 +18,6 @@ from nova import test
from nova.tests.unit.scheduler import fakes


class TestRamFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestRamFilter, self).setUp()
        self.filt_cls = ram_filter.RamFilter()

    def test_ram_filter_fails_on_memory(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 1023, 'total_usable_ram_mb': 1024,
                 'ram_allocation_ratio': 1.0})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_ram_filter_passes(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 1024, 'total_usable_ram_mb': 1024,
                 'ram_allocation_ratio': 1.0})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_ram_filter_oversubscribe(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': -1024, 'total_usable_ram_mb': 2048,
                 'ram_allocation_ratio': 2.0})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(2048 * 2.0, host.limits['memory_mb'])

    def test_ram_filter_oversubscribe_singe_instance_fails(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 512, 'total_usable_ram_mb': 512,
                 'ram_allocation_ratio': 2.0})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))


@mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
class TestAggregateRamFilter(test.NoDBTestCase):

@@ -0,0 +1,15 @@
---
upgrade:
  - |
    The ``CoreFilter``, ``DiskFilter`` and ``RamFilter``, which were
    deprecated in Stein (19.0.0), are now removed. ``VCPU``,
    ``DISK_GB`` and ``MEMORY_MB`` filtering is performed natively
    using the Placement service. These filters have been warning
    operators at startup that they conflict with proper operation of
    placement and should have been disabled since approximately
    Pike. If you did still have these filters enabled and were relying
    on them to account for virt driver overhead (at the expense of
    scheduler races and retries), see the `scheduler`_ documentation about
    the topic.

    .. _scheduler: https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#hypervisor-specific-considerations