Update Nova configuration documentation
* Actually explain *why* we change options, instead of merely copy-pasting
  an old version of their description.
* Document the new consecutive_build_service_disable_threshold, which may
  be quite confusing at first.
* Do not force options that are not needed with resource classes.
* Remove excessive indentation from the list.
* Small wording fixes here and there.

Change-Id: Ia60fedb8b28936530b24f5f05a654e083fc67499

parent 9daabbbb05
commit 9935093ead

@@ -21,68 +21,93 @@ driver. The configuration file for the Compute service is typically located at

The following configuration file must be modified on the Compute
service's controller nodes and compute nodes.

#. Change these configuration options in the Compute service configuration
   file (for example, ``/etc/nova/nova.conf``):

   .. code-block:: ini

      [default]

      # Defines which driver to use for controlling virtualization.
      # Enable the ironic virt driver for this compute instance.
      compute_driver=ironic.IronicDriver

      # Firewall driver to use with nova-network service.
      # Ironic supports only neutron, so set this to noop.
      firewall_driver=nova.virt.firewall.NoopFirewallDriver

      # Amount of memory in MB to reserve for the host so that it is always
      # available to host processes.
      # It is impossible to reserve any memory on bare metal nodes, so set
      # this to zero.
      reserved_host_memory_mb=0

      [filter_scheduler]

      # Enables querying of individual hosts for instance information.
      # Not possible for bare metal nodes, so set it to False.
      track_instance_changes=False
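
   After restarting the ``nova-compute`` service with these options, it is
   worth confirming that the service has registered itself with Nova. A
   quick check, assuming an ``openstack`` client configured with admin
   credentials:

   .. code-block:: console

      $ openstack compute service list --service nova-compute

   The compute host running the ironic driver should be listed with state
   ``up``.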

#. If you have not switched to using :ref:`scheduling-resource-classes`,
   then the following options should be set as well. They must be removed
   from the configuration file after switching to resource classes.

   .. code-block:: ini

      [scheduler]

      # Use the ironic scheduler host manager. This host manager will
      # consume all CPUs, disk space and RAM from a host, as bare metal
      # hosts cannot be subdivided into multiple instances. Scheduling
      # based on resource classes does not use CPU/disk/RAM, so the default
      # host manager can be used in such cases.
      host_manager=ironic_host_manager

      [filter_scheduler]

      # Size of the subset of best hosts selected by the scheduler.
      # New instances will be scheduled on a host chosen randomly from a
      # subset of the 999 best hosts. The large value is used to avoid race
      # conditions, where several instances could otherwise be scheduled on
      # the same bare metal node. This is not a problem when resource
      # classes are used.
      host_subset_size=999

      # This flag enables a different set of scheduler filters, which is
      # more suitable for bare metal. The CPU, disk and memory filters are
      # replaced with their exact counterparts, to make sure only nodes
      # strictly matching the flavor are picked. These filters do not work
      # with scheduling based only on resource classes.
      use_baremetal_filters=True
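
   When you do switch to resource classes, each bare metal node gets a
   resource class in the Bare Metal service, and flavors request it
   explicitly. A minimal sketch, assuming a node ``node-0`` and the
   resource class name ``baremetal`` (both names are illustrative):

   .. code-block:: console

      $ openstack baremetal node set --resource-class baremetal node-0
      $ openstack flavor set --property resources:CUSTOM_BAREMETAL=1 \
            my-baremetal-flavor

   See :ref:`scheduling-resource-classes` for the full procedure, including
   zeroing out the standard CPU, memory and disk resources in the flavor.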

#. Carefully consider the following option:

   .. code-block:: ini

      [compute]

      # This option will cause nova-compute to set itself to a disabled state
      # if a certain number of consecutive build failures occur. This will
      # prevent the scheduler from continuing to send builds to a compute
      # service that is consistently failing. In the case of bare metal
      # provisioning, however, a compute service is rarely the cause of build
      # failures. Furthermore, bare metal nodes, managed by a disabled
      # compute service, will be remapped to a different one. That may cause
      # the second compute service to also be disabled, and so on, until no
      # compute services are active.
      # If this is not the desired behavior, consider increasing this value or
      # setting it to 0 to disable this behavior completely.
      #consecutive_build_service_disable_threshold = 10
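
      # (Note: if a compute service has already been disabled by this
      # mechanism, it can be re-enabled manually once the underlying
      # failures are resolved, e.g. with:
      #     openstack compute service set --enable <host> nova-compute
      # where <host> is the host name the affected service reports.)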

      [scheduler]

      # This value controls how often (in seconds) the scheduler should
      # attempt to discover new hosts that have been added to cells.
      # If negative (the default), no automatic discovery will occur.
      # As each bare metal node is represented by a separate host, it has
      # to be discovered before the Compute service can deploy on it.
      # It can be done manually after enrolling every node or batch of nodes,
      # or this periodic task can be enabled to do it automatically.
      #discover_hosts_in_cells_interval=-1
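
   The manual discovery mentioned above is done with ``nova-manage`` on a
   controller node, for example:

   .. code-block:: console

      $ nova-manage cell_v2 discover_hosts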

#. Change these configuration options in the ``ironic`` section.
   Replace: