Minor improvements to the resource classes documentation

This change makes it clearer that scheduling based on resource classes
is the default now. Also fixes small issues and nits spotted on review
I22234aafdd195dd76c621b93042a67cdb36f3e65.

Change-Id: Ia9a06d6b59024781069bb6fd9a9fb18e1217a949
This commit is contained in:
Dmitry Tantsur 2017-08-22 18:27:21 +02:00
parent 9935093ead
commit acc6ce3498
3 changed files with 90 additions and 54 deletions


@ -47,11 +47,26 @@ A few things should be checked in this case:
Maintenance mode will also be set on a node if automated cleaning has
failed for it previously.
#. Starting with the Pike release, check that all your nodes have the
   ``resource_class`` field set using the following command::

    openstack --os-baremetal-api-version 1.21 baremetal node list --fields uuid name resource_class

   Then check that the flavor(s) are configured to request these resource
   classes via their properties::

    openstack flavor show <FLAVOR NAME> -f value -c properties

   For example, if your node has resource class ``baremetal-large``, it will
   be matched by a flavor with property ``resources:CUSTOM_BAREMETAL_LARGE``
   set to ``1``. See :doc:`/install/configure-nova-flavors` for more
   details on the correct configuration.
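The listing can be sanity-checked for nodes missing a resource class. This is only an illustrative sketch, not an OpenStack client feature; the sample file below stands in for output you would redirect from the real command:

```shell
# Illustrative sanity check.  "nodes.txt" is a hypothetical file holding the
# kind of columns that "baremetal node list --fields uuid name resource_class
# -f value" might print; here we fake it with sample data:
printf '%s\n' \
  '11111111-2222-3333-4444-555555555555 node-1 baremetal-large' \
  '66666666-7777-8888-9999-000000000000 node-2' > nodes.txt

# Flag any node whose resource_class column is missing:
awk 'NF < 3 { print "node " $1 " has no resource_class set" }' nodes.txt
```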
#. If you do not use scheduling based on resource classes, then the node's
   properties must have been set either manually or via inspection.
   For each node with ``available`` state check that the ``properties``
   JSON field has valid values for the keys ``cpus``, ``cpu_arch``,
   ``memory_mb`` and ``local_gb``. Example of valid properties::
    $ openstack baremetal node show <IRONIC NODE> --fields properties
    +------------+------------------------------------------------------------------------------------+
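A quick way to spot a missing key is to scan the saved properties for each required name. This is only an illustrative sketch; the sample file below stands in for JSON you would save from the real command:

```shell
# Illustrative check.  "properties.json" is a hypothetical file standing in
# for output saved via something like
#   openstack baremetal node show <IRONIC NODE> --fields properties -f json
# Here we fake it with a sample that is missing "local_gb":
printf '%s' '{"properties": {"cpus": "8", "cpu_arch": "x86_64", "memory_mb": "4096"}}' > properties.json

# Report any of the four required keys that are absent:
for key in cpus cpu_arch memory_mb local_gb; do
    grep -q "\"$key\"" properties.json || echo "missing property: $key"
done
```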
@ -103,6 +118,8 @@ A few things should be checked in this case:
check ``openstack hypervisor show <IRONIC NODE>`` to see the status of
individual Ironic nodes as reported to Nova.
.. TODO(dtantsur): explain inspecting the placement API
#. Figure out which Nova Scheduler filter ruled out your nodes. Check the
``nova-scheduler`` logs for lines containing something like::


@ -38,25 +38,31 @@ The flavor is mapped to the bare metal node through the hardware specifications.
Scheduling based on resource classes
====================================
As of the Pike release, a Compute service flavor is able to use the node's
``resource_class`` field (available starting with Bare Metal API version 1.21)
for scheduling, instead of the CPU, RAM, and disk properties defined in
the flavor. A flavor can request *exactly one* instance of a bare metal
resource class.
Start with creating the flavor in the same way as described in
`Scheduling based on properties`_. The ``CPU``, ``RAM_MB`` and ``DISK_GB``
values are not going to be used for scheduling, but the ``DISK_GB``
value will still be used to determine the root partition size.

After creation, associate each flavor with one custom resource class. The name
of a custom resource class that corresponds to a node's resource class (in the
Bare Metal service) is:

* the bare metal node's resource class all upper-cased
* prefixed with ``CUSTOM_``
* all punctuation replaced with an underscore
For example, if the resource class is named ``baremetal-small``, associate
the flavor with this custom resource class via:

.. code-block:: console

    $ nova flavor-key my-baremetal-flavor set resources:CUSTOM_BAREMETAL_SMALL=1
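The naming rule above can be sketched as a small helper. This is illustrative only and not part of any OpenStack tool:

```shell
# Illustrative sketch of the documented naming rule: upper-case the node's
# resource class, replace punctuation with underscores, prepend CUSTOM_.
to_flavor_resource() {
    echo "CUSTOM_$(echo "$1" | tr '[:lower:]' '[:upper:]' | tr -c 'A-Z0-9_\n' '_')"
}

to_flavor_resource baremetal-small   # prints CUSTOM_BAREMETAL_SMALL
```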
Another set of flavor properties should be used to disable scheduling
based on standard properties for a bare metal flavor:
@ -79,12 +85,12 @@ with tagging some nodes with it:
.. code-block:: console
    $ openstack --os-baremetal-api-version 1.21 baremetal node set $NODE_UUID \
        --resource-class baremetal.with-GPU
.. warning::
    It is possible to **add** a resource class to ``active`` nodes, but it is
    not possible to **replace** an existing resource class on them.
Then you can update your flavor to request the resource class instead of
the standard properties:


@ -256,11 +256,50 @@ and may be combined if desired.
$ ironic port-create -n $NODE_UUID -a $MAC_ADDRESS
.. _enrollment-scheduling:

Adding scheduling information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Assign a *resource class* to the node. A *resource class* should represent
   a class of hardware in your data center that corresponds to a Compute
   flavor.

   For example, let's split hardware into these three groups:

   #. nodes with a lot of RAM and a powerful CPU for computational tasks,
   #. nodes with a powerful GPU for OpenCL computing,
   #. smaller nodes for development and testing.

   We can define three resource classes to reflect these hardware groups, named
   ``large-cpu``, ``large-gpu`` and ``small`` respectively. Then, for each node
   in each of the hardware groups, we'll set its ``resource_class``
   appropriately via:
   .. code-block:: console

      $ openstack --os-baremetal-api-version 1.21 baremetal node set $NODE_UUID \
          --resource-class $CLASS_NAME
   The ``--resource-class`` argument can also be used when creating a node:

   .. code-block:: console

      $ openstack --os-baremetal-api-version 1.21 baremetal node create \
          --driver $DRIVER --resource-class $CLASS_NAME

   To use resource classes for scheduling you need to update your flavors as
   described in :doc:`configure-nova-flavors`.
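When tagging many nodes across the hardware groups, the commands can first be generated from a mapping for review. This is an illustrative sketch; the mapping file and its contents are hypothetical:

```shell
# Illustrative sketch only.  "node-classes.txt" is a hypothetical mapping of
# node UUIDs to the resource classes chosen for the hardware groups:
printf '%s\n' \
  'uuid-of-compute-node large-cpu' \
  'uuid-of-dev-node small' > node-classes.txt

# Print (without executing) one tagging command per node, for review:
awk '{ printf "openstack --os-baremetal-api-version 1.21 baremetal node set %s --resource-class %s\n", $1, $2 }' node-classes.txt
```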
   .. warning::
      Scheduling based on resource classes will replace scheduling based on
      properties in the Queens release.

   .. note::
      This is not required for standalone deployments, only for those using
      the Compute service for provisioning bare metal instances.
#. Update the node's properties to match the actual hardware of the node:

   .. code-block:: console
@ -286,6 +325,11 @@ Adding scheduling information
These values can also be discovered during `Hardware Inspection`_.
.. warning::
    If scheduling based on resource classes is not used, the three properties
    ``cpus``, ``memory_mb`` and ``local_gb`` must match the ones defined on the
    flavor created when following :doc:`configure-nova-flavors`.
.. warning::
The value provided for the ``local_gb`` property must match the size of
the root device you're going to deploy on. By default
@ -296,37 +340,6 @@ Adding scheduling information
:ref:`root-device-hints`), the ``local_gb`` value should match the size
of picked target disk.
#. If you wish to perform more advanced scheduling of the instances based on
hardware capabilities, you may add metadata to each node that will be
exposed to the Compute scheduler (see: `ComputeCapabilitiesFilter`_).