Use links to placement docs in nova docs

The placement documents have been published at
https://docs.openstack.org/placement/latest/
since I667387ec262680af899a628520c107fa0d4eec24,
so link to them from the nova documents.

Change-Id: I218a6d11fea934e8991e41b4b36203c6ba3e3dbf
Takashi NATSUME 2018-10-30 10:13:24 +09:00
parent 594c653dc1
commit 7dd7d9a5fa
14 changed files with 16 additions and 826 deletions

View File

@@ -95,7 +95,7 @@ Upgrade
make a successful request to the endpoint. The command also checks to
see that there are compute node resource providers checking in with the
Placement service. More information on the Placement service can be found
at :nova-doc:`Placement API <user/placement.html>`.
at :placement-doc:`Placement API <>`.
**16.0.0 (Pike)**

View File

@@ -185,6 +185,7 @@ openstack_projects = [
'oslo.messaging',
'oslo.i18n',
'oslo.versionedobjects',
'placement',
'python-novaclient',
'python-openstackclient',
'reno',

View File

@@ -7,6 +7,9 @@ The static configuration for nova lives in two main files: ``nova.conf`` and
configuring nova to solve specific problems, refer to the :doc:`Nova Admin
Guide </admin/index>`.
For Placement configuration,
see :placement-doc:`Placement Configuration Guide <configuration>`.
Configuration
-------------
@@ -20,8 +23,8 @@ Configuration
* :doc:`Sample Config File <sample-config>`: A sample config
file with inline documentation.
Nova Policy
-----------
Policy
------
Nova, like most OpenStack projects, uses a policy language to restrict
permissions on REST API actions.
@@ -32,19 +35,6 @@ permissions on REST API actions.
* :doc:`Sample Policy File <sample-policy>`: A sample nova
policy file with inline documentation.
Placement Policy
----------------
Placement, like most OpenStack projects, uses a policy language to restrict
permissions on REST API actions.
* :doc:`Policy Reference <placement-policy>`: A complete
reference of all policy points in placement and what they impact.
* :doc:`Sample Policy File <sample-placement-policy>`: A sample
placement policy file with inline documentation.
.. # NOTE(mriedem): This is the section where we hide things that we don't
   # actually want in the table of contents but sphinx build would fail if
   # they aren't in the toctree somewhere.
@@ -55,5 +45,3 @@ permissions on REST API actions.
sample-config
policy
sample-policy
placement-policy
sample-placement-policy

View File

@@ -1,10 +0,0 @@
==================
Placement Policies
==================
The following is an overview of all available policies in Placement.
For a sample configuration file, refer to
:doc:`/configuration/sample-placement-policy`.
.. show-policy::
   :config-file: etc/nova/placement-policy-generator.conf

View File

@@ -1,16 +0,0 @@
============================
Sample Placement Policy File
============================
The following is a sample placement policy file for adaptation and use.
The sample policy can also be viewed in :download:`file form
</_static/placement.policy.yaml.sample>`.
.. important::
The sample policy file is auto-generated from placement when this
documentation is built. You must ensure your version of placement matches
the version of this documentation.
.. literalinclude:: /_static/placement.policy.yaml.sample

View File

@@ -100,7 +100,7 @@ Here are some top tips around engaging with the Nova community:
- IRC
- we talk a lot in #openstack-nova
- we also have #openstack-placement for :doc:`placement </user/placement>`
- we also have #openstack-placement for :placement-doc:`placement <>`
- do ask us questions in there, and we will try to help you
- not sure about asking questions? feel free to listen in around
other people's questions

View File

@@ -100,6 +100,6 @@ Major subsystems in nova have different needs; some of those are documented
here. If you are contributing to one of these please read the subsystem guide
before diving in.
* :doc:`/contributor/placement`
* :placement-doc:`Placement API Developer Notes <contributor/index>`
* :doc:`/user/conductor`

View File

@@ -1,434 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
===============================
Placement API Developer Notes
===============================
Overview
========
The Nova project introduced the :doc:`placement service </user/placement>` as
part of the Newton release. The service provides an HTTP API to manage
inventories of different classes of resources, such as disk or virtual cpus,
made available by entities called resource providers. Information provided
through the placement API is intended to enable more effective accounting of
resources in an OpenStack deployment and better scheduling of various entities
in the cloud.
This document explains the architecture of the system and provides
some guidance on how to maintain and extend the code. For more detail on why
the system was created and how it does its job see :doc:`/user/placement`.
Big Picture
===========
The placement service is straightforward: It is a `WSGI`_ application that
sends and receives JSON, using an RDBMS (usually MySQL) for persistence.
As state is managed solely in the DB, scaling the placement service is done by
increasing the number of WSGI application instances and scaling the RDBMS using
traditional database scaling techniques.
For the sake of consistency, and because there was initially an intent to make
the entities in the placement service available over RPC,
:oslo.versionedobjects-doc:`versioned objects <>` are used to provide the
interface between the HTTP application layer and the SQLAlchemy-driven
persistence layer. Even without RPC, these objects provide useful structuring
and separation of the code.
Though the placement service doesn't aspire to be a `microservice`, it does
aspire to continue to be small and minimally complex. This means a relatively
small amount of middleware that is not configurable, and a limited number of
exposed resources where any given resource is represented by one (and only
one) URL that expresses a noun that is a member of the system. Adding
additional resources should be considered a significant change requiring robust
review from many stakeholders.
The set of HTTP resources represents a concise and constrained grammar for
expressing the management of resource providers, inventories, resource classes,
traits, and allocations. If a solution is initially designed to need more
resources or a more complex grammar, that may be a sign that we need to give our
goals greater scrutiny. Is there a way to do what we want with what we have
already? Can some other service help? Is a new collaborating service required?
Minimal Framework
=================
The API is set up to use a minimal framework that tries to keep the structure
of the application as discoverable as possible and keeps the HTTP interaction
near the surface. The goal of this is to make things easy to trace when
debugging or adding functionality.
Functionality which is required for every request is handled in raw WSGI
middleware that is composed in the `nova.api.openstack.placement.deploy`
module. Dispatch or routing is handled declaratively via the
``ROUTE_DECLARATIONS`` map defined in the
`nova.api.openstack.placement.handler` module.
Mapping is by URL plus request method. The destination is a complete WSGI
application, using a subclass of the `wsgify`_ method from `WebOb`_ to provide
a `Request`_ object that provides convenience methods for accessing request
headers, bodies, and query parameters and for generating responses. In the
placement API these mini-applications are called `handlers`. The `wsgify`
subclass is provided in `nova.api.openstack.placement.wsgi_wrapper` as
`PlacementWsgify`. It is used to make sure that JSON formatted error responses
are structured according to the API-WG `errors`_ guideline.
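For orientation, a route declaration and the handler it points at fit together
roughly as follows. This is only a sketch: the ``/widgets`` URL and the
handler names are invented for illustration and do not appear in the real
``ROUTE_DECLARATIONS`` map::

    # in nova.api.openstack.placement.handler (illustrative entry)
    ROUTE_DECLARATIONS = {
        '/widgets': {
            'GET': widget.list_widgets,
            'POST': widget.create_widget,
        },
    }

    # in a handler module; each target is itself a mini WSGI application
    @wsgi_wrapper.PlacementWsgify
    def list_widgets(req):
        # req is a WebOb Request; return (or raise) a WebOb Response
        ...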
This division between middleware, dispatch and handlers is supposed to
provide clues on where a particular behavior or functionality should be
implemented. Like most such systems, this doesn't always work but is a useful
tool.
Gotchas
=======
This section tries to shed some light on some of the differences between the
placement API and some of the nova APIs or on situations which may be
surprising or unexpected.
* The placement API is somewhat more strict about `Content-Type` and `Accept`
headers in an effort to follow the HTTP RFCs.
If a user-agent sends some JSON in a `PUT` or `POST` request without a
`Content-Type` of `application/json` the request will result in an error.
If a `GET` request is made without an `Accept` header, the response will
default to being `application/json`.
If a request is made with an explicit `Accept` header that does not include
`application/json` then there will be an error, and the error response will,
where possible, be in the requested format (for example, `text/plain`).
* If a URL exists, but a request is made using a method that that URL does not
support, the API will respond with a `405` error. Sometimes in the nova APIs
this can be a `404` (which is wrong, but understandable given the constraints
of the code).
* Because each handler is individually wrapped by the `PlacementWsgify`
decorator any exception that is a subclass of `webob.exc.WSGIHTTPException`
that is raised from within the handler, such as `webob.exc.HTTPBadRequest`,
will be caught by WebOb and turned into a valid `Response`_ containing
headers and body set by WebOb based on the information given when the
exception was raised. It will not be seen as an exception by any of the
middleware in the placement stack.
In general this is a good thing, but it can lead to some confusion if, for
example, you are trying to add some middleware that operates on exceptions.
Other exceptions that are not from `WebOb`_ will raise outside the handlers
where they will either be caught in the `__call__` method of the
`PlacementHandler` app that is responsible for dispatch, or by the
`FaultWrap` middleware.
Microversions
=============
The placement API makes use of `microversions`_ to allow the release of new
features on an opt-in basis. See :doc:`/user/placement` for an up-to-date
history of the available microversions.
The rules around when a microversion is needed are the same as for the
:doc:`compute API </contributor/microversions>`. When adding a new microversion
there are a few bits of required housekeeping that must be done in the code:
* Update the ``VERSIONS`` list in
``nova/api/openstack/placement/microversion.py`` to indicate the new
microversion and give a very brief summary of the added feature.
* Update ``nova/api/openstack/placement/rest_api_version_history.rst``
to add a more detailed section describing the new microversion.
* Add a :reno-doc:`release note <>` with a ``features`` section announcing the
new or changed feature and the microversion.
* If the ``version_handler`` decorator (see below) has been used,
increment ``TOTAL_VERSIONED_METHODS`` in
``nova/tests/unit/api/openstack/placement/test_microversion.py``.
This provides a confirmatory check just to make sure you're paying
attention and as a helpful reminder to do the other things in this
list.
* Include functional gabbi tests as appropriate (see `Using Gabbi`_). At the
least, update the ``latest microversion`` test in
``nova/tests/functional/api/openstack/placement/gabbits/microversion.yaml``.
* Update the `API Reference`_ documentation as appropriate. The source is
located under `placement-api-ref/source/`.
In the placement API, microversions only use the modern form of the
version header::
    OpenStack-API-Version: placement 1.2
If a valid microversion is present in a request it will be placed,
as a ``Version`` object, into the WSGI environment with the
``placement.microversion`` key. Often, accessing this in handler
code directly (to control branching) is the most explicit and
granular way to have different behavior per microversion. A
``Version`` instance can be treated as a tuple of two ints and
compared as such or there is a ``matches`` method.
A ``version_handler`` decorator is also available. It makes it possible to have
multiple different handler methods of the same (fully-qualified by package)
name, each available for a different microversion window. If a request wants a
microversion that's not available, a defined status code is returned (usually
``404`` or ``405``). There is a unit test in place which will fail if there are
version intersections.
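Both styles can be sketched as follows; the version numbers and the handler
name are chosen only for illustration::

    # explicit branching on the Version object in the WSGI environment
    want_version = req.environ['placement.microversion']
    if want_version.matches((1, 14)):
        # behavior for microversion 1.14 and later
        ...

    # or binding a handler to a microversion window with the decorator
    @wsgi_wrapper.PlacementWsgify
    @microversion.version_handler('1.14')
    def get_widget(req):
        ...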
Adding a New Handler
====================
Adding a new URL or a new method (e.g., ``PATCH``) to an existing URL
requires adding a new handler function. In either case a new microversion and
release note are required. When adding an entirely new route a request for a
lower microversion should return a ``404``. When adding a new method to an
existing URL a request for a lower microversion should return a ``405``.
In either case, the ``ROUTE_DECLARATIONS`` dictionary in the
`nova.api.openstack.placement.handler` module should be updated to point to a
function within a module that contains handlers for the type of entity
identified by the URL. Collection and individual entity handlers of the same
type should be in the same module.
As mentioned above, the handler function should be decorated with
``@wsgi_wrapper.PlacementWsgify``, take a single argument ``req`` which is a
WebOb `Request`_ object, and return a WebOb `Response`_.
For ``PUT`` and ``POST`` methods, request bodies are expected to be JSON
based on a content-type of ``application/json``. This may be enforced by using
a decorator: ``@util.require_content('application/json')``. If the body is not
`JSON`, a ``415`` response status is returned.
Response bodies are usually `JSON`. A handler can check the `Accept` header
provided in a request using another decorator:
``@util.check_accept('application/json')``. If the header does not allow
`JSON`, a ``406`` response status is returned.
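Put together, the skeleton of a typical ``PUT`` handler looks something like
this (``update_widget`` is a placeholder name, not a real handler)::

    @wsgi_wrapper.PlacementWsgify
    @util.require_content('application/json')
    def update_widget(req):
        # body validation, data store access and response headers are
        # described in the paragraphs that follow
        ...
        req.response.status = 204
        req.response.content_type = None
        return req.response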
If a handler returns a response body, a ``Last-Modified`` header should be
included with the response. If the entity or entities in the response body
are directly associated with an object (or objects, in the case of a
collection response) that has an ``updated_at`` (or ``created_at``)
field, that field's value can be used as the value of the header (WebOb will
take care of turning the datetime object into a string timestamp). A
``util.pick_last_modified`` is available to help choose the most recent
last-modified when traversing a collection of entities.
If there is no directly associated object (for example, the output is the
composite of several objects) then the ``Last-Modified`` time should be
``timeutils.utcnow(with_timezone=True)`` (the timezone must be set in order
to be a valid HTTP timestamp). For example, the response__ to
``GET /allocation_candidates`` should have a last-modified header of now
because it is composed from queries against many different database entities,
presents a mixture of result types (allocation requests and provider
summaries), and has a view of the system that is only meaningful *now*.
__ https://developer.openstack.org/api-ref/placement/#list-allocation-candidates
If a ``Last-Modified`` header is set, then a ``Cache-Control`` header with a
value of ``no-cache`` must be set as well. This is to avoid user-agents
inadvertently caching the responses.
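In handler code the pattern is roughly as follows; the ``widget`` object is
illustrative, and ``timeutils`` is assumed to come from ``oslo_utils``::

    if widget.updated_at:
        # entity-backed response
        req.response.last_modified = widget.updated_at
    else:
        # composite response; the view is only meaningful *now*
        req.response.last_modified = timeutils.utcnow(with_timezone=True)
    # a Last-Modified header always travels with Cache-Control: no-cache
    req.response.cache_control = 'no-cache'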
`JSON` sent in a request should be validated against a JSON Schema. A
``util.extract_json`` method is available. This takes a request body and a
schema. If multiple schema are used for different microversions of the same
request, the caller is responsible for selecting the right one before calling
``extract_json``.
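For example, a schema module might define something like the following (the
schema and its field are invented for illustration)::

    PUT_WIDGET_SCHEMA = {
        "type": "object",
        "properties": {
            "name": {"type": "string", "maxLength": 200},
        },
        "required": ["name"],
        "additionalProperties": False,
    }

    # in the handler, after selecting the schema for the microversion
    data = util.extract_json(req.body, PUT_WIDGET_SCHEMA)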
When a handler needs to read or write the data store it should use methods on
the objects found in the
`nova.api.openstack.placement.objects.resource_provider` package. Doing so
requires a context which is provided to the handler method via the WSGI
environment. It can be retrieved as follows::
    context = req.environ['placement.context']
.. note:: If your change requires new methods or new objects in the
`resource_provider` package, after you've made sure that you really
do need those new methods or objects (you may not!) make those
changes in a patch that is separate from and prior to the HTTP API
change.
If a handler needs to return an error response, with the advent of `Placement
API Error Handling`_, it is possible to include a code in the JSON error
response. This can be used to distinguish different errors with the same HTTP
response status code (a common case is a generation conflict versus an
inventory in use conflict). Error codes are simple namespaced strings (e.g.,
``placement.inventory.inuse``) for which symbols are maintained in
``nova.api.openstack.placement.errors``. Adding a symbol to a response is done
by using the ``comment`` kwarg to a WebOb exception, like this::
    except exception.InventoryInUse as exc:
        raise webob.exc.HTTPConflict(
            _('update conflict: %(error)s') % {'error': exc},
            comment=errors.INVENTORY_INUSE)
Code that adds newly raised exceptions should include an error code. Find
additional guidelines on use in the docs for
``nova.api.openstack.placement.errors``.
Testing of handler code is described in the next section.
Testing
=======
Most of the handler code in the placement API is tested using `gabbi`_. Some
utility code is tested with unit tests found in
`nova/tests/unit/api/openstack/placement/`. The back-end objects are tested
with a combination of unit and functional tests found in
``nova/tests/unit/api/openstack/placement/objects/test_resource_provider.py``
and `nova/tests/functional/api/openstack/placement/db`. Adding unit and
non-gabbi functional tests is done in the same way as other aspects of nova.
When writing tests for handler code (that is, the code found in
``nova/api/openstack/placement/handlers``) a good rule of thumb is that if you
feel like there needs to be a unit test for some of the code in the handler,
that is a good sign that the piece of code should be extracted to a separate
method. That method should be independent of the handler method itself (the one
decorated by the ``wsgify`` method) and testable as a unit, without mocks if
possible. If the extracted method is useful for multiple resources consider
putting it in the ``util`` package.
As a general guide, handler code should be relatively short and where there are
conditionals and branching, they should be reachable via the gabbi functional
tests. This is merely a design goal, not a strict constraint.
Using Gabbi
-----------
Gabbi was developed in the `telemetry`_ project to provide a declarative way to
test HTTP APIs that preserves visibility of both the request and response of
the HTTP interaction. Tests are written in YAML files where each file is an
ordered suite of tests. Fixtures (such as a database) are set up and torn down
at the beginning and end of each file, not each test. JSON response bodies can
be evaluated with `JSONPath`_. The placement WSGI
application is run via `wsgi-intercept`_, meaning that real HTTP requests are
being made over a file handle that appears to Python to be a socket.
In the placement API the YAML files (aka "gabbits") can be found in
``nova/tests/functional/api/openstack/placement/gabbits``. Fixture definitions
are in ``nova/tests/functional/api/openstack/placement/fixtures/gabbits.py``.
Tests are frequently grouped by handler name (e.g., ``resource-provider.yaml``
and ``inventory.yaml``). This is not a requirement and as we increase the
number of tests it makes sense to have more YAML files with fewer tests,
divided up by the arc of API interaction that they test.
The gabbi tests are integrated into the functional tox target, loaded via
``nova/tests/functional/api/openstack/placement/test_placement_api.py``. If you
want to run just the gabbi tests, one way to do so is::
    tox -efunctional test_placement_api
If you want to run just one yaml file (in this example ``inventory.yaml``)::
    tox -efunctional placement_api.inventory
It is also possible to run just one test from within one file. When you do this,
every test prior to the one you asked for will also be run. This is because
the YAML represents a sequence of dependent requests. Select the test by using
the name in the yaml file, replacing space with ``_``::
    tox -efunctional placement_api.inventory_post_new_ipv4_address_inventory
.. note:: ``tox.ini`` in the nova repository is configured by a ``group_regex``
so that each gabbi YAML is considered a group. Thus, all tests in the
file will be run in the same process when running stestr concurrently
(the default).
Writing More Gabbi Tests
------------------------
The docs for `gabbi`_ try to be complete and explain the `syntax`_ in some
depth. Where something is missing or confusing, please log a `bug`_.
While it is possible to test all aspects of a response (all the response
headers, the status code, every attribute in a JSON structure) in one single
test, doing so will likely make the test harder to read and will certainly make
debugging more challenging. If there are multiple things that need to be
asserted, making multiple requests is reasonable. Since database set up is only
happening once per file (instead of once per test) and since there's no TCP
overhead, the tests run quickly.
While `fixtures`_ can be used to establish entities that are required for
tests, creating those entities via the HTTP API results in tests which are more
descriptive. For example the ``inventory.yaml`` file creates the resource
provider to which it will then add inventory. This makes it easy to explore a
sequence of interactions and a variety of responses with the tests:
* create a resource provider
* confirm it has empty inventory
* add inventory to the resource provider (in a few different ways)
* confirm the resource provider now has inventory
* modify the inventory
* delete the inventory
* confirm the resource provider now has empty inventory
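A compressed sketch of the start of such a file looks like this; the names and
values are illustrative rather than copied from ``inventory.yaml``, and the
``APIFixture`` is defined in the fixtures module mentioned above::

    fixtures:
        - APIFixture

    defaults:
        request_headers:
            x-auth-token: admin
            accept: application/json
            content-type: application/json

    tests:

    - name: create a resource provider
      POST: /resource_providers
      data:
          name: an example provider
      status: 201

    - name: confirm empty inventory
      GET: $LOCATION/inventories
      response_json_paths:
          $.inventories: {}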
Nothing special is required to add a new set of tests: create a YAML file with
a unique name in the same directory as the others. The other files can provide
examples. Gabbi can provide a useful way of doing test driven development of a
new handler: create a YAML file that describes the desired URLs and behavior
and write the code to make it pass.
It's also possible to use gabbi against a running placement service, for
example in devstack. See `gabbi-run`_ to get started.
Futures
=======
Since before it was created there has been a long-term goal for the placement
service to be extracted to its own repository and operate as its own
independent service. There are many reasons for this, but two main ones are:
* Multiple projects, not just nova, will eventually need to manage resource
providers using the placement API.
* A separate service helps to maintain and preserve a strong contract between
the placement service and the consumers of the service.
To lessen the pain of the eventual extraction of placement, the service has
been developed in a way that limits its dependency on the rest of the nova
codebase and keeps it
self-contained:
* Most code is in `nova/api/openstack/placement`.
* Database query code is kept within the objects in
`nova/api/openstack/placement/objects`.
* The methods on the objects are not remotable, as the only intended caller is
the placement API code.
There are some exceptions to the self-contained rule (which are actively being
addressed to prepare for the extraction):
* Some of the code related to a resource class cache is within the `nova.db`
package, while other parts are in ``nova/rc_fields.py``.
* Database models, migrations and tables are described as part of the nova api
database. An optional configuration option,
:oslo.config:option:`placement_database.connection`, can be set to use a
database just for placement (based on the api database schema).
* `nova.i18n` package provides the ``_`` and related functions.
* ``nova.conf`` is used for configuration.
* Unit and functional tests depend on fixtures and other functionality in base
classes provided by nova.
When creating new code for the placement service, please be aware of the plan
for an eventual extraction and avoid creating unnecessary interdependencies.
.. _WSGI: https://www.python.org/dev/peps/pep-3333/
.. _wsgify: http://docs.webob.org/en/latest/api/dec.html
.. _WebOb: http://docs.webob.org/en/latest/
.. _Request: http://docs.webob.org/en/latest/reference.html#request
.. _Response: http://docs.webob.org/en/latest/#response
.. _microversions: http://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html
.. _gabbi: https://gabbi.readthedocs.io/
.. _telemetry: http://specs.openstack.org/openstack/telemetry-specs/specs/kilo/declarative-http-tests.html
.. _wsgi-intercept: http://wsgi-intercept.readthedocs.io/
.. _syntax: https://gabbi.readthedocs.io/en/latest/format.html
.. _bug: https://github.com/cdent/gabbi/issues
.. _fixtures: http://gabbi.readthedocs.io/en/latest/fixtures.html
.. _JSONPath: http://goessner.net/articles/JsonPath/
.. _gabbi-run: http://gabbi.readthedocs.io/en/latest/runner.html
.. _errors: http://specs.openstack.org/openstack/api-wg/guidelines/errors.html
.. _API Reference: https://developer.openstack.org/api-ref/placement/
.. _Placement API Error Handling: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/placement-api-error-handling.html

View File

@@ -81,7 +81,7 @@ resources will help you get started with consuming the API directly.
* `Placement API Reference <https://developer.openstack.org/api-ref/placement/>`_:
The complete reference for the placement API, including all methods and
request / response parameters and their meaning.
* :ref:`Placement API Microversion History <placement-api-microversion-history>`:
* :placement-doc:`Placement API Microversion History <#rest-api-version-history>`:
The placement API evolves over time through `Microversions
<https://developer.openstack.org/api-guide/compute/microversions.html>`_. This
provides the history of all those changes. Consider it a "what's new" in the
@@ -142,8 +142,8 @@ the defaults from the :doc:`install guide </install/index>` will be sufficient.
* :doc:`Cells v2 Planning </user/cellsv2-layout>`: For large deployments, Cells v2
allows sharding of your compute environment. Upfront planning is key to a
successful Cells v2 layout.
* :doc:`Placement service </user/placement>`: Overview of the placement
service, including how it fits in with the rest of nova.
* :placement-doc:`Placement service <>`: Overview of the placement service,
including how it fits in with the rest of nova.
* :doc:`Running nova-api on wsgi <user/wsgi>`: Considerations for using a real
WSGI container instead of the baked-in eventlet web server.
@@ -215,7 +215,6 @@ looking parts of our architecture. These are collected below.
contributor/code-review
contributor/documentation
contributor/microversions
contributor/placement.rst
contributor/policies.rst
contributor/releasenotes
contributor/testing
@@ -254,7 +253,6 @@ looking parts of our architecture. These are collected below.
user/filter-scheduler
user/flavors
user/manage-ip-addresses
user/placement
user/quotas
user/support-matrix
user/upgrade

View File

@@ -44,7 +44,7 @@ OpenStack Compute consists of the following areas and their components:
``nova-placement-api`` service
Tracks the inventory and usage of each provider. For details, see
:doc:`/user/placement`.
:placement-doc:`Placement <>`.
``nova-scheduler`` service
Takes a virtual machine instance request from the queue and determines on

View File

@@ -59,6 +59,6 @@ of a typical Nova deployment.
* Compute: manages communication with hypervisor and virtual machines.
* Conductor: handles requests that need coordination (build/resize), acts as a
database proxy, or handles object conversions.
* `Placement <https://docs.openstack.org/nova/latest/user/placement.html>`__: tracks resource provider inventories and usages.
* :placement-doc:`Placement <>`: tracks resource provider inventories and usages.
While all services are designed to be horizontally scalable, you should have significantly more computes than anything else.

View File

@@ -52,7 +52,7 @@ the defaults from the :doc:`install guide </install/index>` will be sufficient.
allows sharding of your compute environment. Upfront planning is key to a
successful Cells v2 layout.
* :doc:`Placement service </user/placement>`: Overview of the placement
* :placement-doc:`Placement service <>`: Overview of the placement
service, including how it fits in with the rest of nova.
* :doc:`Running nova-api on wsgi </user/wsgi>`: Considerations for using a real

View File

@@ -1,337 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
===============
Placement API
===============
Overview
========
Nova introduced the placement API service in the 14.0.0 Newton release. This
is a separate REST API stack and data model used to track resource provider
inventories and usages, along with different classes of resources. For example,
a resource provider can be a compute node, a shared storage pool, or an IP
allocation pool. The placement service tracks the inventory and usage of each
provider. For example, an instance created on a compute node may be a consumer
of resources such as RAM and CPU from a compute node resource provider, disk
from an external shared storage pool resource provider and IP addresses from
an external IP pool resource provider.
The types of resources consumed are tracked as **classes**. The service
provides a set of standard resource classes (for example ``DISK_GB``,
``MEMORY_MB``, and ``VCPU``) and provides the ability to define custom
resource classes as needed.
Each resource provider may also have a set of traits which describe qualitative
aspects of the resource provider. Traits describe an aspect of a resource
provider that cannot itself be consumed but a workload may wish to specify. For
example, available disk may be solid state drives (SSD).
References
~~~~~~~~~~
The following specifications represent the stages of design and development of
resource providers and the Placement service. Implementation details may have
changed or be partially complete at this time.
* `Generic Resource Pools <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/generic-resource-pools.html>`_
* `Compute Node Inventory <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/compute-node-inventory-newton.html>`_
* `Resource Provider Allocations <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/resource-providers-allocations.html>`_
* `Resource Provider Base Models <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/resource-providers.html>`_
* `Nested Resource Providers`_
* `Custom Resource Classes <http://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/custom-resource-classes.html>`_
* `Scheduler Filters in DB <http://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/resource-providers-scheduler-db-filters.html>`_
* `Scheduler claiming resources to the Placement API <http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/placement-claims.html>`_
* `The Traits API - Manage Traits with ResourceProvider <http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html>`_
* `Request Traits During Scheduling`_
* `filter allocation candidates by aggregate membership`_
* `perform granular allocation candidate requests`_
* `inventory and allocation data migration`_ (reshaping provider trees)
* `handle allocation updates in a safe way`_
.. _Nested Resource Providers: http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/nested-resource-providers.html
.. _Request Traits During Scheduling: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/request-traits-in-nova.html
.. _filter allocation candidates by aggregate membership: https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/alloc-candidates-member-of.html
.. _perform granular allocation candidate requests: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html
.. _inventory and allocation data migration: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html
.. _handle allocation updates in a safe way: https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/add-consumer-generation.html
Deployment
==========
The placement-api service must be deployed at some point after you have
upgraded to the 14.0.0 Newton release but before you can upgrade to the 15.0.0
Ocata release. This is so that the resource tracker in the nova-compute service
can populate resource provider (compute node) inventory and allocation
information which will be used by the nova-scheduler service in Ocata.
Steps
~~~~~
**1. Deploy the API service**
At this time the placement API code is still in Nova alongside the compute REST
API code (nova-api). So once you have upgraded nova-api to Newton you already
have the placement API code; you just need to install the service. Nova
provides a ``nova-placement-api`` WSGI script for running the service with
Apache, nginx or other WSGI-capable web servers. Depending on what packaging
solution is used to deploy OpenStack, the WSGI script may be in ``/usr/bin``
or ``/usr/local/bin``.
.. note:: The placement API service is currently developed within Nova but
it is designed to be as separate as possible from the existing code so
that it can eventually be split into a separate project.
``nova-placement-api``, as a standard WSGI script, provides a module level
``application`` attribute that most WSGI servers expect to find. This means it
is possible to run it with lots of different servers, providing flexibility in
the face of different deployment scenarios. Common scenarios include:
* apache2_ with mod_wsgi_
* apache2 with mod_proxy_uwsgi_
* nginx_ with uwsgi_
* nginx with gunicorn_
In all of these scenarios the host, port and mounting path (or prefix) of the
application are controlled in the web server's configuration, not in the
configuration (``nova.conf``) of the placement application.
When placement was `first added to DevStack`_ it used the ``mod_wsgi`` style.
Later it `was updated`_ to use mod_proxy_uwsgi_. Looking at those changes can
be useful for understanding the relevant options.
DevStack is configured to host placement at ``/placement`` on either the
default port for http or for https (``80`` or ``443``) depending on whether TLS
is being used. Using a default port is desirable.
By default, the placement application will get its configuration for settings
such as the database connection URL from ``/etc/nova/nova.conf``. The directory
the configuration file will be found in can be changed by setting
``OS_PLACEMENT_CONFIG_DIR`` in the environment of the process that starts the
application.
.. note:: When using uwsgi with a front end (e.g., apache2 or nginx) something
needs to ensure that the uwsgi process is running. In DevStack this is done
with systemd_. This is one of many different ways to manage uwsgi.
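Purely as an illustration (not a recommendation), a minimal uwsgi
configuration for hosting the script might look like this; the paths and
socket are examples to be adjusted for your deployment::

    [uwsgi]
    ; serve the standard WSGI script shipped by nova
    wsgi-file = /usr/local/bin/nova-placement-api
    processes = 2
    socket = /var/run/uwsgi/placement.socket
    chmod-socket = 666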
This document refrains from declaring a set of installation instructions for
the placement service. This is because a major point of having a WSGI
application is to make the deployment as flexible as possible. Because the
placement API service is itself stateless (all state is in the database), it is
possible to deploy as many servers as desired behind a load balancing solution
for robust and simple scaling. If you familiarize yourself with installing
generic WSGI applications (using the links in the common scenarios list,
above), those techniques will be applicable here.
.. _apache2: http://httpd.apache.org/
.. _mod_wsgi: https://modwsgi.readthedocs.io/
.. _mod_proxy_uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Apache.html
.. _nginx: http://nginx.org/
.. _uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Nginx.html
.. _gunicorn: http://gunicorn.org/
.. _first added to DevStack: https://review.openstack.org/#/c/342362/
.. _was updated: https://review.openstack.org/#/c/456717/
.. _systemd: https://review.openstack.org/#/c/448323/
**2. Synchronize the database**
In the Newton release the Nova **api** database is the only deployment
option for the placement API service and the resources it manages. After
upgrading the nova-api service for Newton and running the
``nova-manage api_db sync`` command the placement tables will be created.
With the Rocky release, it has become possible to use a separate database for
placement. If :oslo.config:option:`placement_database.connection` is
configured with a database connect string, that database will be used for
storing placement data. Once the database is created, the
``nova-manage api_db sync`` command will create and synchronize both the
nova api and placement tables. If ``[placement_database]/connection`` is not
set, the nova api database will be used.
.. note:: At this time there is no facility for migrating existing placement
data from the nova api database to a placement database. There are
many ways to do this. Which one is best will depend on the environment.
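For example, to opt into the separate placement database, set the connection
string (the URL below is only an example) in ``nova.conf``::

    [placement_database]
    connection = mysql+pymysql://placement:secret@controller/placement?charset=utf8

After that, running ``nova-manage api_db sync`` creates and synchronizes both
the nova api and placement tables.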
**3. Create accounts and update the service catalog**
Create a **placement** service user with an **admin** role in Keystone.
The placement API is a separate service and thus should be registered under
a **placement** service type in the service catalog as that is what the
resource tracker in the nova-compute node will use to look up the endpoint.
Devstack sets up the placement service on the default HTTP port (80) with a
``/placement`` prefix instead of using an independent port.
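With ``python-openstackclient`` the usual sequence is along these lines; the
domain, region, password prompt and endpoint URL are illustrative::

    $ openstack user create --domain default --password-prompt placement
    $ openstack role add --project service --user placement admin
    $ openstack service create --name placement \
        --description "Placement API" placement
    $ openstack endpoint create --region RegionOne \
        placement public http://controller/placement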
**4. Configure and restart nova-compute services**
The 14.0.0 Newton nova-compute service code will begin reporting resource
provider inventory and usage information as soon as the placement API
service is in place and can respond to requests via the endpoint registered
in the service catalog.
``nova.conf`` on the compute nodes must be updated in the ``[placement]``
group to contain credentials for making requests from nova-compute to the
placement-api service.
.. note:: After upgrading nova-compute code to Newton and restarting the
service, the nova-compute service will attempt to make a connection
to the placement API and if that is not yet available a warning will
be logged. The nova-compute service will keep attempting to connect
to the placement API, warning periodically on error until it is
successful. Keep in mind that Placement is optional in Newton, but
required in Ocata, so the placement service should be enabled before
upgrading to Ocata. nova.conf on the compute nodes will need to be
updated in the ``[placement]`` group for credentials to make requests
from nova-compute to the placement-api service.
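A minimal ``[placement]`` section follows this shape; all values are examples
only and must match your Keystone setup::

    [placement]
    region_name = RegionOne
    auth_type = password
    auth_url = http://controller:5000/v3
    project_name = service
    project_domain_name = Default
    username = placement
    user_domain_name = Default
    password = secret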
.. _placement-upgrade-notes:
Upgrade Notes
=============
The following sub-sections provide notes on upgrading to a given target release.
.. note::
As a reminder, the :doc:`nova-status upgrade check </cli/nova-status>` tool
can be used to help determine the status of your deployment and how ready it
is to perform an upgrade.
Ocata (15.0.0)
~~~~~~~~~~~~~~
* The ``nova-compute`` service will fail to start in Ocata unless the
``[placement]`` section of nova.conf on the compute is configured. As
mentioned in the deployment steps above, the Placement service should be
deployed by this point so the computes can register and start reporting
inventory and allocation information. If the computes are deployed
and configured `before` the Placement service, they will continue to try
and reconnect in a loop so that you do not need to restart the nova-compute
process to talk to the Placement service after the compute is properly
configured.
* The ``nova.scheduler.filter_scheduler.FilterScheduler`` in Ocata will
fall back to not using the Placement service as long as there are older
``nova-compute`` services running in the deployment. This allows for rolling
upgrades of the computes to not affect scheduling for the FilterScheduler.
However, the fallback mechanism will be removed in the 16.0.0 Pike release
such that the scheduler will make decisions based on the Placement service
and the resource providers (compute nodes) registered there. This means if
the computes are not reporting into Placement by Pike, build requests will
fail with **NoValidHost** errors.
* While the FilterScheduler technically depends on the Placement service
in Ocata, if you deploy the Placement service `after` you upgrade the
``nova-scheduler`` service to Ocata and restart it, things will still work.
The scheduler will gracefully handle the absence of the Placement service.
However, once all computes are upgraded, the scheduler not being able to make
requests to Placement will result in **NoValidHost** errors.
* It is currently possible to exclude the ``CoreFilter``, ``RamFilter`` and
``DiskFilter`` from the list of enabled FilterScheduler filters such that
scheduling decisions are not based on CPU, RAM or disk usage. Once all
computes are reporting into the Placement service, however, and the
FilterScheduler starts to use the Placement service for decisions, those
excluded filters are ignored and the scheduler will make requests based on
VCPU, MEMORY_MB and DISK_GB inventory. If you wish to effectively ignore
that type of resource for placement decisions, you will need to adjust the
corresponding ``cpu_allocation_ratio``, ``ram_allocation_ratio``, and/or
``disk_allocation_ratio`` configuration options to be very high values, e.g.
9999.0.
* Users of CellsV1 will need to deploy a placement per cell, matching
the scope and cardinality of the regular ``nova-scheduler`` process.
Pike (16.0.0)
~~~~~~~~~~~~~
* The ``nova.scheduler.filter_scheduler.FilterScheduler`` in Pike will
no longer fall back to not using the Placement Service, even if older
computes are running in the deployment.
* The FilterScheduler now requests allocation candidates from the Placement
service during scheduling. The allocation candidates information was
introduced in the Placement API 1.10 microversion, so you should upgrade the
placement service **before** the Nova scheduler service so that the scheduler
can take advantage of the allocation candidate information.
The scheduler gets the allocation candidates from the placement API and
uses those to get the compute nodes, which come from the cell(s). The
compute nodes are passed through the enabled scheduler filters and weighers.
The scheduler then iterates over this filtered and weighed list of hosts and
attempts to claim resources in the placement API for each instance in the
request. Claiming resources involves finding an allocation candidate that
contains an allocation against the selected host's UUID and asking the
placement API to allocate the requested instance resources. We continue
performing this claim request until success or we run out of allocation
candidates, resulting in a NoValidHost error.
For a move operation, such as migration, allocations are made in Placement
against both the source and destination compute node. Once the
move operation is complete, the resource tracker in the *nova-compute*
service will adjust the allocations in Placement appropriately.
For a resize to the same host, allocations are summed on the single compute
node. This could pose a problem if the compute node has limited capacity.
Since resizing to the same host is disabled by default, and generally only
used in testing, this is mentioned for completeness but should not be a
concern for production deployments.
Queens (17.0.0)
~~~~~~~~~~~~~~~
* The minimum Placement API microversion required by the *nova-scheduler*
service is ``1.17`` in order to support `Request Traits During Scheduling`_.
This means you must upgrade the placement service before upgrading any
*nova-scheduler* services to Queens.
Rocky (18.0.0)
~~~~~~~~~~~~~~
* The ``nova-api`` service now requires the ``[placement]`` section to be
configured in nova.conf if you are using a separate config file just for
that service. This is because the ``nova-api`` service now needs to talk
to the placement service in order to (1) delete resource provider allocations
when deleting an instance and the ``nova-compute`` service on which that
instance is running is down (2) delete a ``nova-compute`` service record via
the ``DELETE /os-services/{service_id}`` API and (3) mirror aggregate host
associations to the placement service. This change is idempotent if
``[placement]`` is not configured in ``nova-api`` but it will result in new
warnings in the logs until configured.
* As described above, before Rocky, the placement service used the nova api
database to store placement data. In Rocky, if the ``connection`` setting in
a ``[placement_database]`` group is set in configuration, that group will be
used to describe where and how placement data is stored.
REST API
========
The placement API service has its own `REST API`_ and data model. One
can get a sample of the REST API via the functional test `gabbits`_.
.. _`REST API`: https://developer.openstack.org/api-ref/placement/
.. _gabbits: http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/api/openstack/placement/gabbits
Microversions
~~~~~~~~~~~~~
The placement API uses microversions for making incremental changes to the
API which client requests must opt into.
It is especially important to keep in mind that nova-compute is a client of
the placement REST API and, based on how Nova supports rolling upgrades, the
nova-compute service could be Newton level code making requests to an Ocata
placement API, and vice-versa, an Ocata compute service in a cells v2 cell
could be making requests to a Newton placement API.
.. _placement-api-microversion-history:
.. include:: ../../../nova/api/openstack/placement/rest_api_version_history.rst

View File

@@ -83,7 +83,7 @@ same time.
* Several nova services rely on the external placement service being at the
latest level. Therefore, you must upgrade placement before any nova
services. See the
:ref:`placement upgrade notes <placement-upgrade-notes>` for more
:placement-doc:`placement upgrade notes <#upgrade-notes>` for more
details on upgrading the placement service.
* For maximum safety (no failed API operations), gracefully shutdown all