docs: Rewrite host aggregate, availability zone docs
These closely related features are the source of a disproportionate number of
bugs and a large amount of confusion among users. The spread of information
across multiple docs probably doesn't help matters. Do what we've already done
for the metadata service and remote consoles and clean these docs up. There
are a number of important changes:

- All documentation related to host aggregates and availability zones is
  placed in one of three documents: '/user/availability-zones',
  '/admin/aggregates' and '/admin/availability-zones'. (Note that there is no
  '/user/aggregates' document since this feature is not user-facing.)
- References to these features are updated to point to the new location.
- A glossary is added. Currently this only contains definitions for host
  aggregates and availability zones.
- nova CLI commands are replaced with their openstack CLI counterparts.
- Some gaps in related documentation are closed.

Change-Id: If847b0085dbfb4c813d4a8d14d99346f8252bc19
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
doc/source/admin/aggregates.rst
@@ -0,0 +1,363 @@
===============
Host aggregates
===============

Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud,
or a region of an OpenStack cloud, based on arbitrary characteristics.
Examples where an administrator may want to do this include where a group of
hosts have additional hardware or performance characteristics.

Host aggregates started out as a way to use Xen hypervisor resource pools, but
have been generalized to provide a mechanism to allow administrators to assign
key-value pairs to groups of machines. Each node can have multiple aggregates,
each aggregate can have multiple key-value pairs, and the same key-value pair
can be assigned to multiple aggregates. This information can be used in the
scheduler to enable advanced scheduling, to set up Xen hypervisor resource
pools or to define logical groups for migration.

Host aggregates are not explicitly exposed to users. Instead administrators map
flavors to host aggregates. Administrators do this by setting metadata on a
host aggregate, and matching flavor extra specifications. The scheduler then
endeavors to match user requests for instances of the given flavor to a host
aggregate with the same key-value pair in its metadata. Compute nodes can be in
more than one host aggregate. Weight multipliers can be controlled on a
per-aggregate basis by setting the desired ``xxx_weight_multiplier`` aggregate
metadata.
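
For example, assuming an aggregate named ``my-aggregate`` already exists, a
CPU weigher multiplier could be applied to just the hosts in that aggregate
like so (both the aggregate name and the value here are illustrative):

.. code-block:: console

   $ openstack aggregate set --property cpu_weight_multiplier=2.0 my-aggregate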

Administrators can optionally expose a host aggregate as an
:term:`availability zone`. Availability zones are different from host
aggregates in that they are explicitly exposed to the user, and hosts can only
be in a single availability zone. Administrators can configure a default
availability zone where instances will be scheduled when the user fails to
specify one. For more information on how to do this, refer to
:doc:`/admin/availability-zones`.


.. _config-sch-for-aggs:

Configure scheduler to support host aggregates
----------------------------------------------

One common use case for host aggregates is when you want to support scheduling
instances to a subset of compute hosts because they have a specific capability.
For example, you may want to allow users to request compute hosts that have SSD
drives if they need access to faster disk I/O, or access to compute hosts that
have GPU cards to take advantage of GPU-accelerated code.

To configure the scheduler to support host aggregates, the
:oslo.config:option:`filter_scheduler.enabled_filters` configuration option
must contain the ``AggregateInstanceExtraSpecsFilter`` in addition to the other
filters used by the scheduler. Add the following line to ``nova.conf`` on the
host that runs the ``nova-scheduler`` service to enable host aggregates
filtering, as well as the other filters that are typically enabled:

.. code-block:: ini

   [filter_scheduler]
   enabled_filters=...,AggregateInstanceExtraSpecsFilter

Example: Specify compute hosts with SSDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This example configures the Compute service to enable users to request nodes
that have solid-state drives (SSDs). You create a ``fast-io`` host aggregate in
the ``nova`` availability zone and you add the ``ssd=true`` key-value pair to
the aggregate. Then, you add the ``node1`` and ``node2`` compute nodes to it.

.. code-block:: console

   $ openstack aggregate create --zone nova fast-io
   +-------------------+----------------------------+
   | Field             | Value                      |
   +-------------------+----------------------------+
   | availability_zone | nova                       |
   | created_at        | 2016-12-22T07:31:13.013466 |
   | deleted           | False                      |
   | deleted_at        | None                       |
   | id                | 1                          |
   | name              | fast-io                    |
   | updated_at        | None                       |
   +-------------------+----------------------------+

   $ openstack aggregate set --property ssd=true 1
   +-------------------+----------------------------+
   | Field             | Value                      |
   +-------------------+----------------------------+
   | availability_zone | nova                       |
   | created_at        | 2016-12-22T07:31:13.000000 |
   | deleted           | False                      |
   | deleted_at        | None                       |
   | hosts             | []                         |
   | id                | 1                          |
   | name              | fast-io                    |
   | properties        | ssd='true'                 |
   | updated_at        | None                       |
   +-------------------+----------------------------+

   $ openstack aggregate add host 1 node1
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | availability_zone | nova                                             |
   | created_at        | 2016-12-22T07:31:13.000000                       |
   | deleted           | False                                            |
   | deleted_at        | None                                             |
   | hosts             | [u'node1']                                       |
   | id                | 1                                                |
   | metadata          | {u'ssd': u'true', u'availability_zone': u'nova'} |
   | name              | fast-io                                          |
   | updated_at        | None                                             |
   +-------------------+--------------------------------------------------+

   $ openstack aggregate add host 1 node2
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | availability_zone | nova                                             |
   | created_at        | 2016-12-22T07:31:13.000000                       |
   | deleted           | False                                            |
   | deleted_at        | None                                             |
   | hosts             | [u'node1', u'node2']                             |
   | id                | 1                                                |
   | metadata          | {u'ssd': u'true', u'availability_zone': u'nova'} |
   | name              | fast-io                                          |
   | updated_at        | None                                             |
   +-------------------+--------------------------------------------------+

Use the :command:`openstack flavor create` command to create the ``ssd.large``
flavor with an ID of 6, 8 GB of RAM, an 80 GB root disk, and 4 vCPUs.

.. code-block:: console

   $ openstack flavor create --id 6 --ram 8192 --disk 80 --vcpus 4 ssd.large
   +----------------------------+-----------+
   | Field                      | Value     |
   +----------------------------+-----------+
   | OS-FLV-DISABLED:disabled   | False     |
   | OS-FLV-EXT-DATA:ephemeral  | 0         |
   | disk                       | 80        |
   | id                         | 6         |
   | name                       | ssd.large |
   | os-flavor-access:is_public | True      |
   | ram                        | 8192      |
   | rxtx_factor                | 1.0       |
   | swap                       |           |
   | vcpus                      | 4         |
   +----------------------------+-----------+

Once the flavor is created, specify one or more key-value pairs that match the
key-value pairs on the host aggregates with scope
``aggregate_instance_extra_specs``. In this case, that is the
``aggregate_instance_extra_specs:ssd=true`` key-value pair. Setting a
key-value pair on a flavor is done using the :command:`openstack flavor set`
command.

.. code-block:: console

   $ openstack flavor set \
       --property aggregate_instance_extra_specs:ssd=true ssd.large

Once it is set, you should see the ``extra_specs`` property of the
``ssd.large`` flavor populated with a key of ``ssd`` and a corresponding value
of ``true``.

.. code-block:: console

   $ openstack flavor show ssd.large
   +----------------------------+-------------------------------------------+
   | Field                      | Value                                     |
   +----------------------------+-------------------------------------------+
   | OS-FLV-DISABLED:disabled   | False                                     |
   | OS-FLV-EXT-DATA:ephemeral  | 0                                         |
   | disk                       | 80                                        |
   | id                         | 6                                         |
   | name                       | ssd.large                                 |
   | os-flavor-access:is_public | True                                      |
   | properties                 | aggregate_instance_extra_specs:ssd='true' |
   | ram                        | 8192                                      |
   | rxtx_factor                | 1.0                                       |
   | swap                       |                                           |
   | vcpus                      | 4                                         |
   +----------------------------+-------------------------------------------+

Now, when a user requests an instance with the ``ssd.large`` flavor,
the scheduler only considers hosts with the ``ssd=true`` key-value pair.
In this example, these are ``node1`` and ``node2``.


Aggregates in Placement
-----------------------

Aggregates also exist in placement and are not the same thing as host
aggregates in nova. These aggregates are defined (purely) as groupings of
related resource providers. Since compute nodes in nova are represented in
placement as resource providers, they can be added to a placement aggregate as
well. For example, get the UUID of the compute node using :command:`openstack
hypervisor list` and add it to an aggregate in placement using
:command:`openstack resource provider aggregate set`.

.. code-block:: console

   $ openstack --os-compute-api-version=2.53 hypervisor list
   +--------------------------------------+---------------------+-----------------+-----------------+-------+
   | ID                                   | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
   +--------------------------------------+---------------------+-----------------+-----------------+-------+
   | 815a5634-86fb-4e1e-8824-8a631fee3e06 | node1               | QEMU            | 192.168.1.123   | up    |
   +--------------------------------------+---------------------+-----------------+-----------------+-------+

   $ openstack --os-placement-api-version=1.2 resource provider aggregate set \
       --aggregate df4c74f3-d2c4-4991-b461-f1a678e1d161 \
       815a5634-86fb-4e1e-8824-8a631fee3e06

Some scheduling filter operations can be performed by placement for increased
speed and efficiency.

.. note::

   The nova-api service attempts (as of nova 18.0.0) to automatically mirror
   the association of a compute host with an aggregate when an administrator
   adds or removes a host to/from a nova host aggregate. This should alleviate
   the need to manually create those association records in the placement API
   using the ``openstack resource provider aggregate set`` CLI invocation.
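
To double-check the result, the aggregates a resource provider belongs to can
also be listed; for example, reusing the compute node UUID from the listing
above:

.. code-block:: console

   $ openstack --os-placement-api-version=1.2 resource provider aggregate list \
       815a5634-86fb-4e1e-8824-8a631fee3e06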


.. _tenant-isolation-with-placement:

Tenant Isolation with Placement
-------------------------------

In order to use placement to isolate tenants, there must be placement
aggregates that match the membership and UUID of nova host aggregates that you
want to use for isolation. The same key pattern in aggregate metadata used by
the :ref:`AggregateMultiTenancyIsolation` filter controls this function, and is
enabled by setting
:oslo.config:option:`scheduler.limit_tenants_to_placement_aggregate` to
``True``.

.. code-block:: console

   $ openstack --os-compute-api-version=2.53 aggregate create myagg
   +-------------------+--------------------------------------+
   | Field             | Value                                |
   +-------------------+--------------------------------------+
   | availability_zone | None                                 |
   | created_at        | 2018-03-29T16:22:23.175884           |
   | deleted           | False                                |
   | deleted_at        | None                                 |
   | id                | 4                                    |
   | name              | myagg                                |
   | updated_at        | None                                 |
   | uuid              | 019e2189-31b3-49e1-aff2-b220ebd91c24 |
   +-------------------+--------------------------------------+

   $ openstack --os-compute-api-version=2.53 aggregate add host myagg node1
   +-------------------+--------------------------------------+
   | Field             | Value                                |
   +-------------------+--------------------------------------+
   | availability_zone | None                                 |
   | created_at        | 2018-03-29T16:22:23.175884           |
   | deleted           | False                                |
   | deleted_at        | None                                 |
   | hosts             | [u'node1']                           |
   | id                | 4                                    |
   | name              | myagg                                |
   | updated_at        | None                                 |
   | uuid              | 019e2189-31b3-49e1-aff2-b220ebd91c24 |
   +-------------------+--------------------------------------+

   $ openstack project list -f value | grep 'demo'
   9691591f913949818a514f95286a6b90 demo

   $ openstack aggregate set \
       --property filter_tenant_id=9691591f913949818a514f95286a6b90 myagg

   $ openstack --os-placement-api-version=1.2 resource provider aggregate set \
       --aggregate 019e2189-31b3-49e1-aff2-b220ebd91c24 \
       815a5634-86fb-4e1e-8824-8a631fee3e06

Note that the ``filter_tenant_id`` metadata key can be optionally suffixed
with any string for multiple tenants, such as ``filter_tenant_id3=$tenantid``.
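
For reference, the corresponding ``nova.conf`` snippet on the host running the
``nova-scheduler`` service would look like the following (the option defaults
to ``False``):

.. code-block:: ini

   [scheduler]
   limit_tenants_to_placement_aggregate = True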


Usage
-----

Much of the configuration of host aggregates is driven from the API or
command-line clients. For example, to create a new aggregate and add hosts to
it using the :command:`openstack` client, run:

.. code-block:: console

   $ openstack aggregate create my-aggregate
   $ openstack aggregate add host my-aggregate my-host

To list all aggregates and show information about a specific aggregate, run:

.. code-block:: console

   $ openstack aggregate list
   $ openstack aggregate show my-aggregate

To set and unset a property on the aggregate, run:

.. code-block:: console

   $ openstack aggregate set --property pinned=true my-aggregate
   $ openstack aggregate unset --property pinned my-aggregate

To rename the aggregate, run:

.. code-block:: console

   $ openstack aggregate set --name my-awesome-aggregate my-aggregate

To remove a host from an aggregate and delete the aggregate, run:

.. code-block:: console

   $ openstack aggregate remove host my-aggregate my-host
   $ openstack aggregate delete my-aggregate

For more information, refer to the :python-openstackclient-doc:`OpenStack
Client documentation <cli/command-objects/aggregate.html>`.


Configuration
-------------

In addition to CRUD operations enabled by the API and clients, the following
configuration options can be used to configure how host aggregates and the
related availability zones feature operate under the hood (a sample snippet
follows the list):

- :oslo.config:option:`default_schedule_zone`
- :oslo.config:option:`scheduler.limit_tenants_to_placement_aggregate`
- :oslo.config:option:`cinder.cross_az_attach`
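
As a minimal illustration, the scheduler-related options above might be set in
``nova.conf`` as follows; the zone name used here is purely an example:

.. code-block:: ini

   [DEFAULT]
   default_schedule_zone = az1

   [scheduler]
   limit_tenants_to_placement_aggregate = True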

Finally, as discussed previously, there are a number of host aggregate-specific
scheduler filters. These are:

- :ref:`AggregateCoreFilter`
- :ref:`AggregateDiskFilter`
- :ref:`AggregateImagePropertiesIsolation`
- :ref:`AggregateInstanceExtraSpecsFilter`
- :ref:`AggregateIoOpsFilter`
- :ref:`AggregateMultiTenancyIsolation`
- :ref:`AggregateNumInstancesFilter`
- :ref:`AggregateRamFilter`
- :ref:`AggregateTypeAffinityFilter`

The following configuration options are applicable to the scheduler
configuration:

- :oslo.config:option:`cpu_allocation_ratio`
- :oslo.config:option:`ram_allocation_ratio`
- :oslo.config:option:`filter_scheduler.max_instances_per_host`
- :oslo.config:option:`filter_scheduler.aggregate_image_properties_isolation_separator`
- :oslo.config:option:`filter_scheduler.aggregate_image_properties_isolation_namespace`


References
----------

- `Curse your bones, Availability Zones! (OpenStack Summit Vancouver 2018)
  <https://www.openstack.org/videos/vancouver-2018/curse-your-bones-availability-zones-1>`__

@@ -1,117 +1,273 @@
=========================================
Select hosts where instances are launched
=========================================
==================
Availability Zones
==================

With the appropriate permissions, you can select which host instances are
launched on and which roles can boot instances on this host.
.. note::

   Starting with the 2.74 microversion, there are two ways to specify a host
   and/or node when creating a server.
   This section provides deployment and admin-user usage information about the
   availability zone feature. For end-user information about availability
   zones, refer to the :doc:`user guide </user/availability-zones>`.

Using Explicit Host and/or Node
-------------------------------
Availability Zones are an end-user visible logical abstraction for partitioning
a cloud without knowing the physical infrastructure. Availability zones are not
modeled in the database; rather, they are defined by attaching specific
metadata information to an :doc:`aggregate </admin/aggregates>`. The addition of
this specific metadata to an aggregate makes the aggregate visible from an
end-user perspective and consequently allows users to schedule instances to a
specific set of hosts, the ones belonging to the aggregate.

We can create servers by using explicit host and/or node. When we use this
way to request where instances are launched, we will still execute scheduler
filters on the requested destination.
However, despite their similarities, there are a few additional differences to
note when comparing availability zones and host aggregates:

.. todo: mention the minimum required release of python-openstackclient for
   the --host and --hypevisor-hostname options to work with "server create".
- A host can be part of multiple aggregates but it can only be in one
  availability zone.

- To select the host where instances are launched, use the ``--host HOST``
  and/or ``--hypervisor-hostname HYPERVISOR`` options on
  the :command:`openstack server create` command.
- By default a host is part of a default availability zone even if it doesn't
  belong to an aggregate. The name of this default availability zone can be
  configured using the :oslo.config:option:`default_availability_zone` config
  option.

For example:
.. warning::

.. code-block:: console

   $ openstack --os-compute-api-version 2.74 server create --image IMAGE \
     --flavor m1.tiny --key-name KEY --host HOST \
     --hypervisor-hostname HYPERVISOR --nic net-id=UUID SERVER

- To specify which roles can launch an instance on a specified host, enable
  the ``compute:servers:create:requested_destination`` rule in the
  ``policy.json`` file. By default, this rule is enabled for only the admin
  role. If you see ``Forbidden (HTTP 403)`` in the response, then you are
  not using the required credentials.

- To view the list of valid compute hosts and nodes, you can follow
  `Finding Host and Node Names`_.

[Legacy] Using Host and/or Node with Availability Zone
------------------------------------------------------

We can create servers by using host and/or node with availability zone. When
we use this way to select hosts where instances are launched, we will not run
the scheduler filters.

- To select the host where instances are launched, use the
  ``--availability-zone ZONE:HOST:NODE`` parameter on the :command:`openstack
  server create` command.

For example:

.. code-block:: console

   $ openstack server create --image IMAGE --flavor m1.tiny --key-name KEY \
     --availability-zone ZONE:HOST:NODE --nic net-id=UUID SERVER

   The use of the default availability zone name in requests can be very
   error-prone. Since the user can see the list of availability zones, they
   have no way to know whether the default availability zone name (currently
   ``nova``) is provided because a host belongs to an aggregate whose AZ
   metadata key is set to ``nova``, or because there is at least one host
   not belonging to any aggregate. Consequently, it is highly recommended
   for users to never ever ask for booting an instance by specifying an
   explicit AZ named ``nova`` and for operators to never set the AZ metadata
   for an aggregate to ``nova``. This can result in some problems due to the
   fact that the instance AZ information is explicitly attached to ``nova``,
   which could break further move operations when either the host is moved
   to another aggregate or when the user would like to migrate the instance.
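
One way to reduce this ambiguity is to give the default availability zone a
more obviously internal name using the
:oslo.config:option:`default_availability_zone` option; the name below is
purely illustrative:

.. code-block:: ini

   [DEFAULT]
   default_availability_zone = internal-unzoned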

.. note::

   HOST and NODE are optional parameters. In such cases, use the
   ``--availability-zone ZONE::NODE``, ``--availability-zone ZONE:HOST`` or
   ``--availability-zone ZONE``.
   Availability zone names must NOT contain ``:`` since it is used by admin
   users to specify hosts where instances are launched in server creation.
   See `Using availability zones to select hosts`_ for more information.

- To specify which roles can launch an instance on a specified host, enable
  the ``os_compute_api:servers:create:forced_host`` rule in the ``policy.json``
  file. By default, this rule is enabled for only the admin role. If you see
  ``Forbidden (HTTP 403)`` in return, then you are not using the required
  credentials.
In addition, other services, such as the :neutron-doc:`networking service <>`
and the :cinder-doc:`block storage service <>`, also provide an availability
zone feature. However, the implementation of these features differs vastly
between these different services. Consult the documentation for these other
services for more information on their implementation of this feature.

- To view the list of valid zones, use the :command:`openstack availability
  zone list --compute` command.

.. code-block:: console
.. _availability-zones-with-placement:

   $ openstack availability zone list --compute
   +-----------+-------------+
   | Zone Name | Zone Status |
   +-----------+-------------+
   | zone1     | available   |
   | zone2     | available   |
   +-----------+-------------+

Availability Zones with Placement
---------------------------------

- To view the list of valid compute hosts and nodes, you can follow
  `Finding Host and Node Names`_.
In order to use placement to honor availability zone requests, there must be
placement aggregates that match the membership and UUID of nova host aggregates
that you assign as availability zones. The same key in aggregate metadata used
by the `AvailabilityZoneFilter` filter controls this function, and is enabled by
setting :oslo.config:option:`scheduler.query_placement_for_availability_zone`
to ``True``.
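
For example, the corresponding ``nova.conf`` snippet on the host running the
``nova-scheduler`` service would be:

.. code-block:: ini

   [scheduler]
   query_placement_for_availability_zone = True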

Finding Host and Node Names
---------------------------
.. code-block:: console

- To view the list of valid compute hosts, use the :command:`openstack compute
  service list --service nova-compute` command.
   $ openstack --os-compute-api-version=2.53 aggregate create myaz
   +-------------------+--------------------------------------+
   | Field             | Value                                |
   +-------------------+--------------------------------------+
   | availability_zone | None                                 |
   | created_at        | 2018-03-29T16:22:23.175884           |
   | deleted           | False                                |
   | deleted_at        | None                                 |
   | id                | 4                                    |
   | name              | myaz                                 |
   | updated_at        | None                                 |
   | uuid              | 019e2189-31b3-49e1-aff2-b220ebd91c24 |
   +-------------------+--------------------------------------+

.. code-block:: console
   $ openstack --os-compute-api-version=2.53 aggregate add host myaz node1
   +-------------------+--------------------------------------+
   | Field             | Value                                |
   +-------------------+--------------------------------------+
   | availability_zone | None                                 |
   | created_at        | 2018-03-29T16:22:23.175884           |
   | deleted           | False                                |
   | deleted_at        | None                                 |
   | hosts             | [u'node1']                           |
   | id                | 4                                    |
   | name              | myaz                                 |
   | updated_at        | None                                 |
   | uuid              | 019e2189-31b3-49e1-aff2-b220ebd91c24 |
   +-------------------+--------------------------------------+

   $ openstack compute service list --service nova-compute
   +----+--------------+---------------+------+---------+-------+----------------------------+
   | ID | Binary       | Host          | Zone | Status  | State | Updated At                 |
   +----+--------------+---------------+------+---------+-------+----------------------------+
   | 10 | nova-compute | compute01     | nova | enabled | up    | 2019-07-09T03:59:19.000000 |
   | 11 | nova-compute | compute02     | nova | enabled | up    | 2019-07-09T03:59:19.000000 |
   | 12 | nova-compute | compute03     | nova | enabled | up    | 2019-07-09T03:59:19.000000 |
   +----+--------------+---------------+------+---------+-------+----------------------------+
   $ openstack aggregate set --property availability_zone=az002 myaz

- To view the list of valid compute nodes, use the :command:`openstack
  hypervisor list` command.
   $ openstack --os-placement-api-version=1.2 resource provider aggregate set --aggregate 019e2189-31b3-49e1-aff2-b220ebd91c24 815a5634-86fb-4e1e-8824-8a631fee3e06

.. code-block:: console
With the above configuration, the `AvailabilityZoneFilter` filter can be
disabled in :oslo.config:option:`filter_scheduler.enabled_filters` while
retaining proper behavior (and doing so with the higher performance of
placement's implementation).

   $ openstack hypervisor list
   +----+---------------------+-----------------+---------------+-------+
   | ID | Hypervisor Hostname | Hypervisor Type | Host IP       | State |
   +----+---------------------+-----------------+---------------+-------+
   | 6  | compute01           | QEMU            | 172.16.50.100 | up    |
   | 7  | compute02           | QEMU            | 172.16.50.101 | up    |
   | 8  | compute03           | QEMU            | 172.16.50.102 | up    |
   +----+---------------------+-----------------+---------------+-------+

Implications for moving servers
-------------------------------

There are several ways to move a server to another host: evacuate, resize,
cold migrate, live migrate, and unshelve. Move operations typically go through
the scheduler to pick the target host *unless* a target host is specified and
the request forces the server to that host by bypassing the scheduler. Only
evacuate and live migrate can forcefully bypass the scheduler and move a
server to a specified host and even then it is highly recommended to *not*
force and bypass the scheduler.

With respect to availability zones, a server is restricted to a zone if:

1. The server was created in a specific zone with the ``POST /servers`` request
   containing the ``availability_zone`` parameter.

2. If the server create request did not contain the ``availability_zone``
   parameter but the API service is configured for
   :oslo.config:option:`default_schedule_zone` then by default the server will
   be scheduled to that zone.

3. The shelved offloaded server was unshelved by specifying the
   ``availability_zone`` with the ``POST /servers/{server_id}/action`` request
   using microversion 2.77 or greater.

If the server was not created in a specific zone then it is free to be moved
to other zones, i.e. the :ref:`AvailabilityZoneFilter <AvailabilityZoneFilter>`
is a no-op.

Knowing this, it is dangerous to force a server to another host with evacuate
or live migrate if the server is restricted to a zone and is then forced to
move to a host in another zone, because that will create an inconsistency in
the internal tracking of where that server should live and may require manually
updating the database for that server. For example, if a user creates a server
in zone A and then the admin force live migrates the server to zone B, and then
the user resizes the server, the scheduler will try to move it back to zone A
which may or may not work, e.g. if the admin deleted or renamed zone A in the
interim.

Resource affinity
~~~~~~~~~~~~~~~~~

The :oslo.config:option:`cinder.cross_az_attach` configuration option can be
used to restrict servers and the volumes attached to servers to the same
availability zone.

A typical use case for setting ``cross_az_attach=False`` is to enforce compute
and block storage affinity, for example in a High Performance Compute cluster.

By default ``cross_az_attach`` is True meaning that the volumes attached to
a server can be in a different availability zone than the server. If set to
False, then when creating a server with pre-existing volumes or attaching a
volume to a server, the server and volume zone must match otherwise the
request will fail. In addition, if the nova-compute service creates the volumes
to attach to the server during server create, it will request that those
volumes are created in the same availability zone as the server, which must
exist in the block storage (cinder) service.
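
For example, enforcing this affinity comes down to a single ``nova.conf``
setting:

.. code-block:: ini

   [cinder]
   cross_az_attach = False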

As noted in the `Implications for moving servers`_ section, forcefully moving
a server to another zone could also break affinity with attached volumes.

.. note::

   ``cross_az_attach=False`` is not widely used nor tested extensively and
   thus suffers from some known issues:

   * `Bug 1694844 <https://bugs.launchpad.net/nova/+bug/1694844>`_
   * `Bug 1781421 <https://bugs.launchpad.net/nova/+bug/1781421>`_


.. _using-availability-zones-to-select-hosts:

Using availability zones to select hosts
----------------------------------------

We can combine availability zones with a specific host and/or node to select
where an instance is launched. For example:

.. code-block:: console

   $ openstack server create --availability-zone ZONE:HOST:NODE ... SERVER

.. note::

   It is possible to use ``ZONE``, ``ZONE:HOST``, and ``ZONE::NODE``.

.. note::

   This is an admin-only operation by default, though you can modify this
   behavior using the ``os_compute_api:servers:create:forced_host`` rule in
   ``policy.json``.
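
As a sketch, a ``policy.json`` entry that also grants this to a hypothetical
``operator`` role might look like:

.. code-block:: json

   {
       "os_compute_api:servers:create:forced_host": "rule:admin_api or role:operator"
   }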

However, as discussed `previously <Implications for moving servers>`_, when
launching instances in this manner the scheduler filters are not run. For this
reason, this behavior is considered legacy behavior and, starting with the 2.74
microversion, it is now possible to specify a host or node explicitly. For
example:

.. code-block:: console

   $ openstack --os-compute-api-version 2.74 server create \
       --host HOST --hypervisor-hostname HYPERVISOR ... SERVER

.. note::

   This is an admin-only operation by default, though you can modify this
   behavior using the ``compute:servers:create:requested_destination`` rule in
   ``policy.json``.

This avoids the need to explicitly select an availability zone and ensures the
scheduler filters are not bypassed.


Usage
-----

Creating an availability zone (AZ) is done by associating metadata with a
:doc:`host aggregate </admin/aggregates>`. For this reason, the
:command:`openstack` client provides the ability to create a host aggregate and
associate it with an AZ in one command. For example, to create a new aggregate,
associating it with an AZ in the process, and add a host to it using the
:command:`openstack` client, run:

.. code-block:: console

   $ openstack aggregate create --zone my-availability-zone my-aggregate
   $ openstack aggregate add host my-aggregate my-host

.. note::

   While it is possible to add a host to multiple host aggregates, it is not
   possible to add them to multiple availability zones. Attempting to add a
   host to multiple host aggregates associated with differing availability
   zones will result in a failure.

Alternatively, you can set this metadata manually for an existing host
aggregate. For example:

.. code-block:: console

   $ openstack aggregate set \
       --property availability_zone=my-availability-zone my-aggregate

To list all host aggregates and show information about a specific aggregate, in
order to determine which AZ the host aggregate(s) belong to, run:

.. code-block:: console

   $ openstack aggregate list --long
   $ openstack aggregate show my-aggregate

Finally, to disassociate a host aggregate from an availability zone, run:

.. code-block:: console

   $ openstack aggregate unset --property availability_zone my-aggregate


Configuration
-------------

Refer to :doc:`/admin/aggregates` for information on configuring both host
aggregates and availability zones.

@@ -24,7 +24,7 @@ By default, the scheduler ``driver`` is configured as a filter scheduler, as
described in the next section. In the default configuration, this scheduler
considers hosts that meet all the following criteria:

* Are in the requested availability zone (``AvailabilityZoneFilter``).
* Are in the requested :term:`availability zone` (``AvailabilityZoneFilter``).

* Can service the request (``ComputeFilter``).

@@ -52,13 +52,14 @@ target host. For information about instance evacuation, see
Prefiltering
~~~~~~~~~~~~

As of the Rocky release, the scheduling process includes a prefilter
step to increase the efficiency of subsequent stages. These prefilters
are largely optional, and serve to augment the request that is sent to
placement to reduce the set of candidate compute hosts based on
attributes that placement is able to answer for us ahead of time. In
addition to the prefilters listed here, also see `Tenant Isolation
with Placement`_ and `Availability Zones with Placement`_.
As of the Rocky release, the scheduling process includes a prefilter step to
increase the efficiency of subsequent stages. These prefilters are largely
optional, and serve to augment the request that is sent to placement to reduce
the set of candidate compute hosts based on attributes that placement is able
to answer for us ahead of time. In addition to the prefilters listed here, also
see :ref:`tenant-isolation-with-placement` and
:ref:`availability-zones-with-placement`.


Compute Image Type Support
--------------------------
@@ -157,6 +158,8 @@ Compute filters

The following sections describe the available compute filters.

.. _AggregateCoreFilter:

AggregateCoreFilter
-------------------

@@ -174,13 +177,20 @@ AggregateCoreFilter

.. _`automatically mirrors`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-mirror-host-aggregates.html

Filters host by CPU core numbers with a per-aggregate ``cpu_allocation_ratio``
Filters host by CPU core count with a per-aggregate ``cpu_allocation_ratio``
value. If the per-aggregate value is not found, the value falls back to the
global setting. If the host is in more than one aggregate and more than one
value is found, the minimum value will be used. For information about how to
use this filter, see :ref:`host-aggregates`.
value is found, the minimum value will be used.

Note the ``cpu_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>` restriction.
Refer to :doc:`/admin/aggregates` for more information.

.. important::

   Note the ``cpu_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>`
   restriction.
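
As an illustration, such a per-aggregate ratio is simply aggregate metadata;
the aggregate name and value below are examples only:

.. code-block:: console

   $ openstack aggregate set --property cpu_allocation_ratio=4.0 my-aggregate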


.. _AggregateDiskFilter:

AggregateDiskFilter
-------------------
@@ -200,13 +210,17 @@ AggregateDiskFilter
Filters host by disk allocation with a per-aggregate ``disk_allocation_ratio``
value. If the per-aggregate value is not found, the value falls back to the
global setting. If the host is in more than one aggregate and more than one
value is found, the minimum value will be used. For information about how to
use this filter, see :ref:`host-aggregates`.
value is found, the minimum value will be used.

Note the ``disk_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>`
restriction.
Refer to :doc:`/admin/aggregates` for more information.

.. _`AggregateImagePropertiesIsolation`:
.. important::

   Note the ``disk_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>`
   restriction.


.. _AggregateImagePropertiesIsolation:

AggregateImagePropertiesIsolation
---------------------------------
@@ -265,6 +279,7 @@ following options in the ``nova.conf`` file:

.. code-block:: ini

   [scheduler]
   # Considers only keys matching the given namespace (string).
   # Multiple values can be given, as a comma-separated list.
   aggregate_image_properties_isolation_namespace = <None>
@@ -279,6 +294,9 @@ following options in the ``nova.conf`` file:
   which are addressed in placement :doc:`/reference/isolate-aggregates`
   request filter.

Refer to :doc:`/admin/aggregates` for more information.


.. _AggregateInstanceExtraSpecsFilter:

AggregateInstanceExtraSpecsFilter
@@ -287,11 +305,15 @@ AggregateInstanceExtraSpecsFilter
Matches properties defined in extra specs for an instance type against
admin-defined properties on a host aggregate. Works with specifications that
are scoped with ``aggregate_instance_extra_specs``. Multiple values can be
given, as a comma-separated list. For backward compatibility, also works with
given, as a comma-separated list. For backward compatibility, also works with
non-scoped specifications; this action is highly discouraged because it
conflicts with :ref:`ComputeCapabilitiesFilter` filter when you enable both
filters. For information about how to use this filter, see the
:ref:`host-aggregates` section.
filters.

Refer to :doc:`/admin/aggregates` for more information.


.. _AggregateIoOpsFilter:

AggregateIoOpsFilter
--------------------
@@ -299,14 +321,18 @@ AggregateIoOpsFilter
Filters host by disk allocation with a per-aggregate ``max_io_ops_per_host``
value. If the per-aggregate value is not found, the value falls back to the
global setting. If the host is in more than one aggregate and more than one
value is found, the minimum value will be used. For information about how to
use this filter, see :ref:`host-aggregates`. See also :ref:`IoOpsFilter`.
value is found, the minimum value will be used.

Refer to :doc:`/admin/aggregates` and :ref:`IoOpsFilter` for more information.


.. _AggregateMultiTenancyIsolation:

AggregateMultiTenancyIsolation
------------------------------

Ensures hosts in tenant-isolated :ref:`host-aggregates` will only be available
to a specified set of tenants. If a host is in an aggregate that has the
Ensures hosts in tenant-isolated host aggregates will only be available to a
specified set of tenants. If a host is in an aggregate that has the
``filter_tenant_id`` metadata key, the host can build instances from only that
tenant or comma-separated list of tenants. A host can be in different
aggregates. If a host does not belong to an aggregate with the metadata key,
@@ -320,12 +346,16 @@ scheduling. A server create request from another tenant Y will result in only
HostA being a scheduling candidate since HostA is not part of the
tenant-isolated aggregate.

.. note:: There is a
   `known limitation <https://bugs.launchpad.net/nova/+bug/1802111>`_ with
   the number of tenants that can be isolated per aggregate using this
   filter. This limitation does not exist, however, for the
   `Tenant Isolation with Placement`_ filtering capability added in the
   18.0.0 Rocky release.
.. note::

   There is a `known limitation
   <https://bugs.launchpad.net/nova/+bug/1802111>`_ with the number of tenants
   that can be isolated per aggregate using this filter. This limitation does
   not exist, however, for the :ref:`tenant-isolation-with-placement`
   filtering capability added in the 18.0.0 Rocky release.


.. _AggregateNumInstancesFilter:

AggregateNumInstancesFilter
---------------------------
@@ -334,8 +364,13 @@ Filters host by number of instances with a per-aggregate
``max_instances_per_host`` value. If the per-aggregate value is not found, the
value falls back to the global setting. If the host is in more than one
aggregate and thus more than one value is found, the minimum value will be
used. For information about how to use this filter, see
:ref:`host-aggregates`. See also :ref:`NumInstancesFilter`.
used.

Refer to :doc:`/admin/aggregates` and :ref:`NumInstancesFilter` for more
information.


.. _AggregateRamFilter:

AggregateRamFilter
------------------
@@ -356,10 +391,17 @@ Filters host by RAM allocation of instances with a per-aggregate
``ram_allocation_ratio`` value. If the per-aggregate value is not found, the
value falls back to the global setting. If the host is in more than one
aggregate and thus more than one value is found, the minimum value will be
used. For information about how to use this filter, see
:ref:`host-aggregates`.
used.

Note the ``ram_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>` restriction.
Refer to :doc:`/admin/aggregates` for more information.

.. important::

   Note the ``ram_allocation_ratio`` :ref:`bug 1804125 <bug-1804125>`
   restriction.


.. _AggregateTypeAffinityFilter:

AggregateTypeAffinityFilter
---------------------------
@@ -369,8 +411,10 @@ This filter passes hosts if no ``instance_type`` key is set or the
``instance_type`` requested. The value of the ``instance_type`` metadata entry
is a string that may contain either a single ``instance_type`` name or a
comma-separated list of ``instance_type`` names, such as ``m1.nano`` or
``m1.nano,m1.small``. For information about how to use this filter, see
:ref:`host-aggregates`.
``m1.nano,m1.small``.

Refer to :doc:`/admin/aggregates` for more information.
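
For example, to restrict an aggregate to two flavors (the names here are
illustrative), the ``instance_type`` metadata key would be set on the
aggregate:

.. code-block:: console

   $ openstack aggregate set --property instance_type=m1.nano,m1.small my-aggregate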

AllHostsFilter
--------------

@@ -385,6 +429,8 @@ AvailabilityZoneFilter
Filters hosts by availability zone. You must enable this filter for the
scheduler to respect availability zones in requests.

Refer to :doc:`/admin/availability-zones` for more information.

.. _ComputeCapabilitiesFilter:

ComputeCapabilitiesFilter
@@ -918,386 +964,6 @@ file. For example to configure metric1 with ratio1 and metric2 with ratio2:

   weight_setting = "metric1=ratio1, metric2=ratio2"

.. _host-aggregates:

Host aggregates and availability zones
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud,
or a region of an OpenStack cloud, based on arbitrary characteristics.
Examples where an administrator may want to do this include where a group of
hosts have additional hardware or performance characteristics.

Host aggregates are not explicitly exposed to users. Instead administrators
map flavors to host aggregates. Administrators do this by setting metadata on
a host aggregate, and matching flavor extra specifications. The scheduler then
endeavors to match user requests for instance of the given flavor to a host
aggregate with the same key-value pair in its metadata. Compute nodes can be
in more than one host aggregate. Weight multipliers can be controlled on a
per-aggregate basis by setting the desired ``xxx_weight_multiplier`` aggregate
metadata.

Administrators are able to optionally expose a host aggregate as an
availability zone. Availability zones are different from host aggregates in
that they are explicitly exposed to the user, and hosts can only be in a single
availability zone. Administrators can configure a default availability zone
where instances will be scheduled when the user fails to specify one.

Command-line interface
----------------------

The :command:`nova` command-line client supports the following
aggregate-related commands.

nova aggregate-list
  Print a list of all aggregates.

nova aggregate-create <name> [<availability-zone>]
  Create a new aggregate named ``<name>``, and optionally in availability zone
  ``[<availability-zone>]`` if specified. The command returns the ID of the
  newly created aggregate. Hosts can be made available to multiple host
  aggregates. Be careful when adding a host to an additional host aggregate
  when the host is also in an availability zone. Pay attention when using the
  :command:`nova aggregate-set-metadata` and :command:`nova aggregate-update`
  commands to avoid user confusion when they boot instances in different
  availability zones. An error occurs if you cannot add a particular host to
  an aggregate zone for which it is not intended.

nova aggregate-delete <aggregate>
  Delete an aggregate with its ``<id>`` or ``<name>``.

nova aggregate-show <aggregate>
  Show details of the aggregate with its ``<id>`` or ``<name>``.

nova aggregate-add-host <aggregate> <host>
  Add host with name ``<host>`` to aggregate with its ``<id>`` or ``<name>``.

nova aggregate-remove-host <aggregate> <host>
  Remove the host with name ``<host>`` from the aggregate with its ``<id>``
  or ``<name>``.

nova aggregate-set-metadata <aggregate> <key=value> [<key=value> ...]
  Add or update metadata (key-value pairs) associated with the aggregate with
  its ``<id>`` or ``<name>``.

nova aggregate-update [--name <name>] [--availability-zone <availability-zone>] <aggregate>
  Update the name and/or availability zone for the aggregate.

nova host-list
  List all hosts by service. It has been deprecated since microversion 2.43.
  Use :command:`nova hypervisor-list` instead.

nova hypervisor-list [--matching <hostname>] [--marker <marker>] [--limit <limit>]
  List hypervisors.

nova host-update [--status <enable|disable>] [--maintenance <enable|disable>] <hostname>
  Put/resume host into/from maintenance. It has been deprecated since
  microversion 2.43. To enable or disable a service,
  use :command:`nova service-enable` or :command:`nova service-disable` instead.

nova service-enable <id>
  Enable the service.

nova service-disable [--reason <reason>] <id>
  Disable the service.

.. note::

   Only administrators can access these commands. If you try to use these
   commands and the user name and tenant that you use to access the Compute
   service do not have the ``admin`` role or the appropriate privileges, these
   errors occur:

   .. code-block:: console

      ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864)

   .. code-block:: console

      ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
|
||||
|
||||
.. _config-sch-for-aggs:
|
||||
|
||||
Configure scheduler to support host aggregates
|
||||
----------------------------------------------
|
||||
|
||||
One common use case for host aggregates is when you want to support scheduling
|
||||
instances to a subset of compute hosts because they have a specific capability.
|
||||
For example, you may want to allow users to request compute hosts that have SSD
|
||||
drives if they need access to faster disk I/O, or access to compute hosts that
|
||||
have GPU cards to take advantage of GPU-accelerated code.
|
||||
|
||||
To configure the scheduler to support host aggregates, the
|
||||
:oslo.config:option:`filter_scheduler.enabled_filters` configuration option must
|
||||
contain the ``AggregateInstanceExtraSpecsFilter`` in addition to the other filters
|
||||
used by the scheduler. Add the following line to ``/etc/nova/nova.conf`` on the
|
||||
host that runs the ``nova-scheduler`` service to enable host aggregates filtering,
|
||||
as well as the other filters that are typically enabled:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[filter_scheduler]
|
||||
enabled_filters=...,AggregateInstanceExtraSpecsFilter
|
||||
|
||||
Example: Specify compute hosts with SSDs
|
||||
----------------------------------------
|
||||
|
||||
This example configures the Compute service to enable users to request nodes
|
||||
that have solid-state drives (SSDs). You create a ``fast-io`` host aggregate in
|
||||
the ``nova`` availability zone and you add the ``ssd=true`` key-value pair to
|
||||
the aggregate. Then, you add the ``node1``, and ``node2`` compute nodes to it.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack aggregate create --zone nova fast-io
|
||||
+-------------------+----------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+----------------------------+
|
||||
| availability_zone | nova |
|
||||
| created_at | 2016-12-22T07:31:13.013466 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| id | 1 |
|
||||
| name | fast-io |
|
||||
| updated_at | None |
|
||||
+-------------------+----------------------------+
|
||||
|
||||
$ openstack aggregate set --property ssd=true 1
|
||||
+-------------------+----------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+----------------------------+
|
||||
| availability_zone | nova |
|
||||
| created_at | 2016-12-22T07:31:13.000000 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| hosts | [] |
|
||||
| id | 1 |
|
||||
| name | fast-io |
|
||||
| properties | ssd='true' |
|
||||
| updated_at | None |
|
||||
+-------------------+----------------------------+
|
||||
|
||||
$ openstack aggregate add host 1 node1
|
||||
+-------------------+--------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------------------+
|
||||
| availability_zone | nova |
|
||||
| created_at | 2016-12-22T07:31:13.000000 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| hosts | [u'node1'] |
|
||||
| id | 1 |
|
||||
| metadata | {u'ssd': u'true', u'availability_zone': u'nova'} |
|
||||
| name | fast-io |
|
||||
| updated_at | None |
|
||||
+-------------------+--------------------------------------------------+
|
||||
|
||||
$ openstack aggregate add host 1 node2
|
||||
+-------------------+--------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------------------+
|
||||
| availability_zone | nova |
|
||||
| created_at | 2016-12-22T07:31:13.000000 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| hosts | [u'node1', u'node2'] |
|
||||
| id | 1 |
|
||||
| metadata | {u'ssd': u'true', u'availability_zone': u'nova'} |
|
||||
| name | fast-io |
|
||||
| updated_at | None |
|
||||
+-------------------+--------------------------------------------------+
|
||||
|
||||
Use the :command:`openstack flavor create` command to create the ``ssd.large``
|
||||
flavor called with an ID of 6, 8 GB of RAM, 80 GB root disk, and 4 vCPUs.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack flavor create --id 6 --ram 8192 --disk 80 --vcpus 4 ssd.large
|
||||
+----------------------------+-----------+
|
||||
| Field | Value |
|
||||
+----------------------------+-----------+
|
||||
| OS-FLV-DISABLED:disabled | False |
|
||||
| OS-FLV-EXT-DATA:ephemeral | 0 |
|
||||
| disk | 80 |
|
||||
| id | 6 |
|
||||
| name | ssd.large |
|
||||
| os-flavor-access:is_public | True |
|
||||
| ram | 8192 |
|
||||
| rxtx_factor | 1.0 |
|
||||
| swap | |
|
||||
| vcpus | 4 |
|
||||
+----------------------------+-----------+
|
||||
|
||||
Once the flavor is created, specify one or more key-value pairs that match the
|
||||
key-value pairs on the host aggregates with scope
|
||||
``aggregate_instance_extra_specs``. In this case, that is the
|
||||
``aggregate_instance_extra_specs:ssd=true`` key-value pair. Setting a
|
||||
key-value pair on a flavor is done using the :command:`openstack flavor set`
|
||||
command.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack flavor set --property aggregate_instance_extra_specs:ssd=true ssd.large
|
||||
|
||||
Once it is set, you should see the ``extra_specs`` property of the
``ssd.large`` flavor populated with a key of ``ssd`` and a corresponding value
of ``true``.

.. code-block:: console

$ openstack flavor show ssd.large
+----------------------------+-------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 80 |
| id | 6 |
| name | ssd.large |
| os-flavor-access:is_public | True |
| properties | aggregate_instance_extra_specs:ssd='true' |
| ram | 8192 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+-------------------------------------------+

Now, when a user requests an instance with the ``ssd.large`` flavor,
the scheduler only considers hosts with the ``ssd=true`` key-value pair.
In this example, these are ``node1`` and ``node2``.

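To see the effect end to end, request an instance with the new flavor; the
scheduler should place it on ``node1`` or ``node2``. This is only a sketch:
the image and network names, as well as the server name, are placeholders for
values from your own cloud.

.. code-block:: console

   $ openstack server create --flavor ssd.large \
       --image <image> --network <network> \
       ssd-instance-1
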
Aggregates in Placement
-----------------------

Aggregates also exist in placement and are not the same thing as host
aggregates in nova. These aggregates are defined purely as groupings of
related resource providers. Since compute nodes in nova are represented in
placement as resource providers, they can be added to a placement aggregate
as well. For example, get the UUID of the compute node using
:command:`openstack hypervisor list` and add it to an aggregate in placement
using :command:`openstack resource provider aggregate set`.

.. code-block:: console

$ openstack --os-compute-api-version=2.53 hypervisor list
+--------------------------------------+---------------------+-----------------+-----------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+--------------------------------------+---------------------+-----------------+-----------------+-------+
| 815a5634-86fb-4e1e-8824-8a631fee3e06 | node1 | QEMU | 192.168.1.123 | up |
+--------------------------------------+---------------------+-----------------+-----------------+-------+

$ openstack --os-placement-api-version=1.2 resource provider aggregate set --aggregate df4c74f3-d2c4-4991-b461-f1a678e1d161 815a5634-86fb-4e1e-8824-8a631fee3e06

Some scheduling filter operations can be performed by placement for
increased speed and efficiency.

.. note::

   The nova-api service attempts (as of nova 18.0.0) to automatically mirror
   the association of a compute host with an aggregate when an administrator
   adds or removes a host to/from a nova host aggregate. This should alleviate
   the need to manually create those association records in the placement API
   using the ``openstack resource provider aggregate set`` CLI invocation.

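To check that this mirroring has happened, the placement aggregates associated
with a compute node's resource provider can be listed with the osc-placement
plugin. The UUID below is the compute node resource provider from the earlier
example.

.. code-block:: console

   $ openstack --os-placement-api-version=1.2 resource provider aggregate list 815a5634-86fb-4e1e-8824-8a631fee3e06
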
Tenant Isolation with Placement
-------------------------------

In order to use placement to isolate tenants, there must be placement
aggregates that match the membership and UUID of nova host aggregates
that you want to use for isolation. The same key pattern in aggregate
metadata used by the `AggregateMultiTenancyIsolation`_ filter controls
this function, and is enabled by setting
``[scheduler]/limit_tenants_to_placement_aggregate=True``.

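The ``[scheduler]/...`` notation above refers to an option in the
``[scheduler]`` section of ``nova.conf``. A minimal sketch of the setting on
the scheduler hosts:

.. code-block:: ini

   [scheduler]
   limit_tenants_to_placement_aggregate = True
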
.. code-block:: console

$ openstack --os-compute-api-version=2.53 aggregate create myagg
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| availability_zone | None |
| created_at | 2018-03-29T16:22:23.175884 |
| deleted | False |
| deleted_at | None |
| id | 4 |
| name | myagg |
| updated_at | None |
| uuid | 019e2189-31b3-49e1-aff2-b220ebd91c24 |
+-------------------+--------------------------------------+

$ openstack --os-compute-api-version=2.53 aggregate add host myagg node1
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| availability_zone | None |
| created_at | 2018-03-29T16:22:23.175884 |
| deleted | False |
| deleted_at | None |
| hosts | [u'node1'] |
| id | 4 |
| name | myagg |
| updated_at | None |
| uuid | 019e2189-31b3-49e1-aff2-b220ebd91c24 |
+-------------------+--------------------------------------+

$ openstack project list -f value | grep 'demo'
9691591f913949818a514f95286a6b90 demo

$ openstack aggregate set --property filter_tenant_id=9691591f913949818a514f95286a6b90 myagg

$ openstack --os-placement-api-version=1.2 resource provider aggregate set --aggregate 019e2189-31b3-49e1-aff2-b220ebd91c24 815a5634-86fb-4e1e-8824-8a631fee3e06

Note that the ``filter_tenant_id`` metadata key can optionally be suffixed
with any string for multiple tenants, such as ``filter_tenant_id3=$tenantid``.

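For example, to isolate the aggregate to a second project as well, set another
suffixed key; the project ID below is a placeholder for a real project UUID.

.. code-block:: console

   $ openstack aggregate set --property filter_tenant_id2=<project_id> myagg
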
Availability Zones with Placement
---------------------------------

In order to use placement to honor availability zone requests, there must be
placement aggregates that match the membership and UUID of nova host aggregates
that you assign as availability zones. The same key in aggregate metadata used
by the `AvailabilityZoneFilter` filter controls this function, and is enabled by
setting ``[scheduler]/query_placement_for_availability_zone=True``.

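As with tenant isolation above, this option lives in the ``[scheduler]``
section of ``nova.conf`` on the scheduler hosts. A minimal sketch:

.. code-block:: ini

   [scheduler]
   query_placement_for_availability_zone = True
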
.. code-block:: console

$ openstack --os-compute-api-version=2.53 aggregate create myaz
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| availability_zone | None |
| created_at | 2018-03-29T16:22:23.175884 |
| deleted | False |
| deleted_at | None |
| id | 4 |
| name | myaz |
| updated_at | None |
| uuid | 019e2189-31b3-49e1-aff2-b220ebd91c24 |
+-------------------+--------------------------------------+

$ openstack --os-compute-api-version=2.53 aggregate add host myaz node1
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| availability_zone | None |
| created_at | 2018-03-29T16:22:23.175884 |
| deleted | False |
| deleted_at | None |
| hosts | [u'node1'] |
| id | 4 |
| name | myaz |
| updated_at | None |
| uuid | 019e2189-31b3-49e1-aff2-b220ebd91c24 |
+-------------------+--------------------------------------+

$ openstack aggregate set --property availability_zone=az002 myaz

$ openstack --os-placement-api-version=1.2 resource provider aggregate set --aggregate 019e2189-31b3-49e1-aff2-b220ebd91c24 815a5634-86fb-4e1e-8824-8a631fee3e06

With the above configuration, the `AvailabilityZoneFilter` filter can be disabled
in ``[filter_scheduler]/enabled_filters`` while retaining proper behavior (and
doing so with the higher performance of placement's implementation).

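As a rough illustration only, since the exact filter list depends on your
deployment, the resulting scheduler configuration might then look something
like the following, with ``AvailabilityZoneFilter`` simply left out of the
enabled filters:

.. code-block:: ini

   [scheduler]
   query_placement_for_availability_zone = True

   [filter_scheduler]
   # AvailabilityZoneFilter is omitted; placement performs the
   # availability zone filtering instead.
   enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
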
XenServer hypervisor pools to support live migration
-----------------------------------------------------

@@ -52,8 +52,8 @@ Qemu

Memory overcommit
  File-backed memory is not compatible with memory overcommit.
  ``ram_allocation_ratio`` must be set to ``1.0`` in ``nova.conf``, and the
  host must not be added to a host aggregate with ``ram_allocation_ratio``
  set to anything but ``1.0``.
  host must not be added to a :doc:`host aggregate </admin/aggregates>`
  with ``ram_allocation_ratio`` set to anything but ``1.0``.

Huge pages
  File-backed memory is not compatible with huge pages. Instances with huge

@@ -20,6 +20,7 @@ operating system, and exposes functionality over a web-based API.

   admin-password-injection.rst
   adv-config.rst
   aggregates
   arch.rst
   availability-zones.rst
   cells.rst

@@ -74,7 +74,7 @@ Configure a flavor to request one virtual GPU:

The enabled vGPU types on the compute hosts are not exposed to API users.
Flavors configured for vGPU support can be tied to host aggregates as a means
to properly schedule those flavors onto the compute hosts that support them.
See the :doc:`/user/aggregates` for more information.
See :doc:`/admin/aggregates` for more information.

Create instances with virtual GPU devices
-----------------------------------------

@@ -152,7 +152,7 @@ Once you are running nova, the following information is extremely useful.

* :doc:`Upgrades </user/upgrade>`: How nova is designed to be upgraded for minimal
  service impact, and the order you should do them in.
* :doc:`Quotas </user/quotas>`: Managing project quotas in nova.
* :doc:`Aggregates </user/aggregates>`: Aggregates are a useful way of grouping
* :doc:`Aggregates </admin/aggregates>`: Aggregates are a useful way of grouping
  hosts together for scheduling purposes.
* :doc:`Filter Scheduler </user/filter-scheduler>`: How the filter scheduler is
  configured, and how that will impact where compute instances land in your

@@ -225,6 +225,7 @@ looking parts of our architecture. These are collected below.

   contributor/ptl-guide
   reference/api-microversion-history.rst
   reference/conductor
   reference/glossary
   reference/gmr
   reference/i18n
   reference/live-migration

@@ -242,8 +243,8 @@ looking parts of our architecture. These are collected below.

   reference/scheduler-hints-vs-flavor-extra-specs
   reference/isolate-aggregates
   user/index
   user/aggregates
   user/architecture
   user/availability-zones
   user/block-device-mapping
   user/cells
   user/cellsv2-layout

25
doc/source/reference/glossary.rst
Normal file
@@ -0,0 +1,25 @@

========
Glossary
========

.. glossary::

   Availability Zone
      Availability zones are a logical subdivision of cloud block storage,
      compute and network services. They provide a way for cloud operators to
      logically segment their compute based on arbitrary factors like
      location (country, datacenter, rack), network layout and/or power
      source.

      For more information, refer to :doc:`/admin/aggregates`.

   Host Aggregate
      Host aggregates can be regarded as a mechanism to further partition an
      :term:`availability zone`; while availability zones are visible to
      users, host aggregates are only visible to administrators. Host
      aggregates provide a mechanism to allow administrators to assign
      key-value pairs to groups of machines. Each node can have multiple
      aggregates, each aggregate can have multiple key-value pairs, and the
      same key-value pair can be assigned to multiple aggregates.

      For more information, refer to :doc:`/admin/aggregates`.

@@ -61,3 +61,9 @@ these are a great place to start reading up on the current plans.

* :doc:`/reference/stable-api`: What stable api means to nova
* :doc:`/reference/scheduler-evolution`: Motivation behind the scheduler /
  placement evolution

Additional Information
======================

* :doc:`/reference/glossary`: A quick reference guide to some of the terms you
  might encounter working on or using nova.

@@ -27,7 +27,7 @@ Extra Specs

In general flavor extra specs are specific to the cloud and how it is
organized for capabilities, and should be abstracted from the end user.
Extra specs are tied to :doc:`host aggregates </user/aggregates>` and a lot
Extra specs are tied to :doc:`host aggregates </admin/aggregates>` and a lot
of them also define how a guest is created in the hypervisor, for example
what the watchdog action is for a VM. Extra specs are also generally
interchangeable with `image properties`_ when it comes to VM behavior, like
@@ -104,7 +104,7 @@ different extra specs mean as they are just a key/value pair. There is some
documentation for some "standard" extra specs though [3]_. However, that is
not an exhaustive list and it does not include anything that different
deployments would define for things like linking a flavor to a set of
host aggregates, for example, when creating flavors
:doc:`host aggregates </admin/aggregates>`, for example, when creating flavors
for baremetal instances, or what the chosen
:doc:`hypervisor driver </admin/configuration/hypervisors>` might support for
flavor extra specs.

@@ -1,204 +0,0 @@
..
      Copyright 2012 OpenStack Foundation
      Copyright 2012 Citrix Systems, Inc.
      Copyright 2012, The Cloudscaling Group, Inc.
      All Rights Reserved.

      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

Host Aggregates
===============

Host aggregates can be regarded as a mechanism to further partition an
availability zone; while availability zones are visible to users, host
aggregates are only visible to administrators. Host aggregates started out as
a way to use Xen hypervisor resource pools, but have been generalized to provide
a mechanism to allow administrators to assign key-value pairs to groups of
machines. Each node can have multiple aggregates, each aggregate can have
multiple key-value pairs, and the same key-value pair can be assigned to
multiple aggregates. This information can be used in the scheduler to enable
advanced scheduling, to set up Xen hypervisor resource pools or to define
logical groups for migration. For more information, including an example of
associating a group of hosts to a flavor, see :ref:`host-aggregates`.


Availability Zones (AZs)
------------------------

Availability Zones are the end-user visible logical abstraction for
partitioning a cloud without knowing the physical infrastructure.
That abstraction doesn't come up in Nova with an actual database model since
the availability zone is actually a specific metadata information attached to
an aggregate. Adding that specific metadata to an aggregate makes the aggregate
visible from an end-user perspective and consequently allows to schedule upon a
specific set of hosts (the ones belonging to the aggregate).

That said, there are a few rules to know that diverge from an API perspective
between aggregates and availability zones:

- one host can be in multiple aggregates, but it can only be in one
  availability zone
- by default a host is part of a default availability zone even if it doesn't
  belong to an aggregate (the configuration option is named
  :oslo.config:option:`default_availability_zone`)

.. warning:: That last rule can be very error-prone. Since the user can see the
   list of availability zones, they have no way to know whether the default
   availability zone name (currently *nova*) is provided because an host
   belongs to an aggregate whose AZ metadata key is set to *nova*, or because
   there is at least one host not belonging to any aggregate. Consequently, it is
   highly recommended for users to never ever ask for booting an instance by
   specifying an explicit AZ named *nova* and for operators to never set the
   AZ metadata for an aggregate to *nova*. That leads to some problems
   due to the fact that the instance AZ information is explicitly attached to
   *nova* which could break further move operations when either the host is
   moved to another aggregate or when the user would like to migrate the
   instance.

.. note:: Availability zone name must NOT contain ':' since it is used by admin
   users to specify hosts where instances are launched in server creation.
   See :doc:`Select hosts where instances are launched </admin/availability-zones>` for more detail.

There is a nice educational video about availability zones from the Rocky
summit which can be found here: https://www.openstack.org/videos/vancouver-2018/curse-your-bones-availability-zones-1

Implications for moving servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are several ways to move a server to another host: evacuate, resize,
cold migrate, live migrate, and unshelve. Move operations typically go through
the scheduler to pick the target host *unless* a target host is specified and
the request forces the server to that host by bypassing the scheduler. Only
evacuate and live migrate can forcefully bypass the scheduler and move a
server to a specified host and even then it is highly recommended to *not*
force and bypass the scheduler.

With respect to availability zones, a server is restricted to a zone if:

1. The server was created in a specific zone with the ``POST /servers`` request
   containing the ``availability_zone`` parameter.
2. If the server create request did not contain the ``availability_zone``
   parameter but the API service is configured for
   :oslo.config:option:`default_schedule_zone` then by default the server will
   be scheduled to that zone.
3. The shelved offloaded server was unshelved by specifying the
   ``availability_zone`` with the ``POST /servers/{server_id}/action`` request
   using microversion 2.77 or greater.

If the server was not created in a specific zone then it is free to be moved
to other zones, i.e. the :ref:`AvailabilityZoneFilter <AvailabilityZoneFilter>`
is a no-op.

Knowing this, it is dangerous to force a server to another host with evacuate
or live migrate if the server is restricted to a zone and is then forced to
move to a host in another zone, because that will create an inconsistency in
the internal tracking of where that server should live and may require manually
updating the database for that server. For example, if a user creates a server
in zone A and then the admin force live migrates the server to zone B, and then
the user resizes the server, the scheduler will try to move it back to zone A
which may or may not work, e.g. if the admin deleted or renamed zone A in the
interim.

Resource affinity
~~~~~~~~~~~~~~~~~

The :oslo.config:option:`cinder.cross_az_attach` configuration option can be
used to restrict servers and the volumes attached to servers to the same
availability zone.

A typical use case for setting ``cross_az_attach=False`` is to enforce compute
and block storage affinity, for example in a High Performance Compute cluster.

By default ``cross_az_attach`` is True meaning that the volumes attached to
a server can be in a different availability zone than the server. If set to
False, then when creating a server with pre-existing volumes or attaching a
volume to a server, the server and volume zone must match otherwise the
request will fail. In addition, if the nova-compute service creates the volumes
to attach to the server during server create, it will request that those
volumes are created in the same availability zone as the server, which must
exist in the block storage (cinder) service.

As noted in the `Implications for moving servers`_ section, forcefully moving
a server to another zone could also break affinity with attached volumes.

.. note:: ``cross_az_attach=False`` is not widely used nor tested extensively
   and thus suffers from some known issues:

   * `Bug 1694844 <https://bugs.launchpad.net/nova/+bug/1694844>`_
   * `Bug 1781421 <https://bugs.launchpad.net/nova/+bug/1781421>`_

Design
------

The OSAPI Admin API is extended to support the following operations:

* Aggregates

  * list aggregates: returns a list of all the host-aggregates
  * create aggregate: creates an aggregate, takes a friendly name, etc. returns an id
  * show aggregate: shows the details of an aggregate (id, name, availability_zone, hosts and metadata)
  * update aggregate: updates the name and availability zone of an aggregate
  * set metadata: sets the metadata on an aggregate to the values supplied
  * delete aggregate: deletes an aggregate, it fails if the aggregate is not empty
  * add host: adds a host to the aggregate
  * remove host: removes a host from the aggregate

* Hosts

  * list all hosts by service

    * It has been deprecated since microversion 2.43. Use `list hypervisors` instead.

  * start host maintenance (or evacuate-host): disallow a host to serve API requests and migrate instances to other hosts of the aggregate

    * It has been deprecated since microversion 2.43. Use `disable service` instead.

  * stop host maintenance (or rebalance-host): put the host back into operational mode, migrating instances back onto that host

    * It has been deprecated since microversion 2.43. Use `enable service` instead.

* Hypervisors

  * list hypervisors: list hypervisors with hypervisor hostname

* Compute services

  * enable service
  * disable service

Using the Nova CLI
------------------

Using the nova command you can create, delete and manage aggregates. The following section outlines the list of available commands.

Usage
~~~~~

::

  * aggregate-list Print a list of all aggregates.
  * aggregate-create <name> [<availability_zone>] Create a new aggregate with the specified details.
  * aggregate-delete <aggregate> Delete the aggregate by its ID or name.
  * aggregate-show <aggregate> Show details of the aggregate specified by its ID or name.
  * aggregate-add-host <aggregate> <host> Add the host to the aggregate specified by its ID or name.
  * aggregate-remove-host <aggregate> <host> Remove the specified host from the aggregate specified by its ID or name.
  * aggregate-set-metadata <aggregate> <key=value> [<key=value> ...]
      Update the metadata associated with the aggregate specified by its ID or name.
  * aggregate-update [--name <name>] [--availability-zone <availability-zone>] <aggregate>
      Update the aggregate's name or availability zone.

  * host-list List all hosts by service.
  * hypervisor-list [--matching <hostname>] [--marker <marker>] [--limit <limit>]
      List hypervisors.

  * host-update [--status <enable|disable>] [--maintenance <enable|disable>] <hostname>
      Put/resume host into/from maintenance.
  * service-enable <id> Enable the service.
  * service-disable [--reason <reason>] <id> Disable the service.

31
doc/source/user/availability-zones.rst
Normal file
@@ -0,0 +1,31 @@

==================
Availability zones
==================

Availability Zones are an end-user visible logical abstraction for partitioning
a cloud without knowing the physical infrastructure. Availability zones can be
used to partition a cloud on arbitrary factors, such as location (country,
datacenter, rack), network layout and/or power source. Because of this
flexibility, the names and purposes of availability zones can vary massively
between clouds.

In addition, other services, such as the :neutron-doc:`networking service <>`
and the :cinder-doc:`block storage service <>`, also provide an availability
zone feature. However, the implementation of these features differs vastly
between these different services. Consult the documentation for these other
services for more information on their implementation of this feature.


Usage
-----

Availability zones can only be created and configured by an admin but they can
be used by an end-user when creating an instance. For example:

.. code-block:: console

$ openstack server create --availability-zone ZONE ... SERVER

It is also possible to specify a destination host and/or node using this
command; however, this is an admin-only operation by default. For more
information, see :ref:`using-availability-zones-to-select-hosts`.
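
A destination host and/or node is expressed by extending the availability zone
argument. This is only a sketch: ``HOST`` and ``NODE`` are placeholders, and
either trailing component may be omitted.

.. code-block:: console

   $ openstack server create --availability-zone ZONE:HOST:NODE ... SERVER
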
@@ -72,8 +72,11 @@ Once you are running nova, the following information is extremely useful.

* :doc:`Quotas </user/quotas>`: Managing project quotas in nova.

* :doc:`Aggregates </user/aggregates>`: Aggregates are a useful way of grouping
  hosts together for scheduling purposes.
* :doc:`Availability Zones </admin/availability-zones>`: Availability Zones are
  an end-user visible logical abstraction for partitioning a cloud without
  knowing the physical infrastructure. They can be used to partition a cloud on
  arbitrary factors, such as location (country, datacenter, rack), network
  layout and/or power source.

* :doc:`Filter Scheduler </user/filter-scheduler>`: How the filter scheduler is
  configured, and how that will impact where compute instances land in your