Make docs build

This patch does pretty much the minimum possible to get `tox -e docs`
building and gating. Much, much more needs to be done to make the
content good.

Also corrects a link in the api-ref which, once these docs are published,
would have pointed to the wrong place.

Change-Id: I5cbc3a3cceeaeaa7be5593658b6a03fa25fb69d0
Eric Fried 2018-09-05 17:06:07 -05:00
parent 8c2bb44181
commit 9b584fcd1d
15 changed files with 375 additions and 830 deletions

View File

@@ -15,6 +15,7 @@
- openstack-tox-py36
- openstack-tox-pep8
- build-openstack-api-ref
- openstack-tox-docs
gate:
jobs:
- openstack-tox-functional
@@ -25,3 +26,4 @@
- openstack-tox-py36
- openstack-tox-pep8
- build-openstack-api-ref
- openstack-tox-docs

View File

@@ -6,7 +6,7 @@
This is a reference for the OpenStack Placement API. To learn more about
OpenStack Placement API concepts, please refer to the
:placement-doc:`Placement Introduction <user/placement.html>`.
:placement-doc:`Placement Introduction <>`.
The Placement API uses JSON for data exchange. As such, the ``Content-Type``
header for APIs sending data payloads in the request body (i.e. ``PUT`` and

View File

@@ -1,79 +0,0 @@
# The following is generated with:
#
# git log --follow --name-status --format='%H' 2d0dfc632f.. -- doc/source | \
# grep ^R | grep .rst | cut -f2- | \
# sed -e 's|doc/source/|redirectmatch 301 ^/nova/([^/]+)/|' -e 's|doc/source/|/nova/$1/|' -e 's/.rst/.html$/' -e 's/.rst/.html/' | \
# sort
redirectmatch 301 ^/nova/([^/]+)/addmethod.openstackapi.html$ /nova/$1/contributor/api-2.html
redirectmatch 301 ^/nova/([^/]+)/admin/flavors2.html$ /nova/$1/admin/flavors.html
redirectmatch 301 ^/nova/([^/]+)/admin/numa.html$ /nova/$1/admin/cpu-topologies.html
redirectmatch 301 ^/nova/([^/]+)/aggregates.html$ /nova/$1/user/aggregates.html
redirectmatch 301 ^/nova/([^/]+)/api_microversion_dev.html$ /nova/$1/contributor/microversions.html
redirectmatch 301 ^/nova/([^/]+)/api_microversion_history.html$ /nova/$1/reference/api-microversion-history.html
redirectmatch 301 ^/nova/([^/]+)/api_plugins.html$ /nova/$1/contributor/api.html
redirectmatch 301 ^/nova/([^/]+)/architecture.html$ /nova/$1/user/architecture.html
redirectmatch 301 ^/nova/([^/]+)/block_device_mapping.html$ /nova/$1/user/block-device-mapping.html
redirectmatch 301 ^/nova/([^/]+)/blueprints.html$ /nova/$1/contributor/blueprints.html
redirectmatch 301 ^/nova/([^/]+)/cells.html$ /nova/$1/user/cells.html
redirectmatch 301 ^/nova/([^/]+)/code-review.html$ /nova/$1/contributor/code-review.html
redirectmatch 301 ^/nova/([^/]+)/conductor.html$ /nova/$1/user/conductor.html
redirectmatch 301 ^/nova/([^/]+)/development.environment.html$ /nova/$1/contributor/development-environment.html
redirectmatch 301 ^/nova/([^/]+)/devref/api.html /nova/$1/contributor/api.html
redirectmatch 301 ^/nova/([^/]+)/devref/cells.html /nova/$1/user/cells.html
redirectmatch 301 ^/nova/([^/]+)/devref/filter_scheduler.html /nova/$1/user/filter-scheduler.html
# catch all, if we hit something in devref assume it moved to
# reference unless we have already triggered a hit above.
redirectmatch 301 ^/nova/([^/]+)/devref/([^/]+).html /nova/$1/reference/$2.html
redirectmatch 301 ^/nova/([^/]+)/feature_classification.html$ /nova/$1/user/feature-classification.html
redirectmatch 301 ^/nova/([^/]+)/filter_scheduler.html$ /nova/$1/user/filter-scheduler.html
redirectmatch 301 ^/nova/([^/]+)/gmr.html$ /nova/$1/reference/gmr.html
redirectmatch 301 ^/nova/([^/]+)/how_to_get_involved.html$ /nova/$1/contributor/how-to-get-involved.html
redirectmatch 301 ^/nova/([^/]+)/i18n.html$ /nova/$1/reference/i18n.html
redirectmatch 301 ^/nova/([^/]+)/man/index.html$ /nova/$1/cli/index.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-api-metadata.html$ /nova/$1/cli/nova-api-metadata.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-api-os-compute.html$ /nova/$1/cli/nova-api-os-compute.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-api.html$ /nova/$1/cli/nova-api.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-cells.html$ /nova/$1/cli/nova-cells.html
# this is gone and never coming back, indicate that to the end users
redirectmatch 301 ^/nova/([^/]+)/man/nova-compute.html$ /nova/$1/cli/nova-compute.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-conductor.html$ /nova/$1/cli/nova-conductor.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-console.html$ /nova/$1/cli/nova-console.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-consoleauth.html$ /nova/$1/cli/nova-consoleauth.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-dhcpbridge.html$ /nova/$1/cli/nova-dhcpbridge.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-idmapshift.html$ /nova/$1/cli/nova-idmapshift.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-manage.html$ /nova/$1/cli/nova-manage.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-network.html$ /nova/$1/cli/nova-network.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-novncproxy.html$ /nova/$1/cli/nova-novncproxy.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-rootwrap.html$ /nova/$1/cli/nova-rootwrap.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-scheduler.html$ /nova/$1/cli/nova-scheduler.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-serialproxy.html$ /nova/$1/cli/nova-serialproxy.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-spicehtml5proxy.html$ /nova/$1/cli/nova-spicehtml5proxy.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-status.html$ /nova/$1/cli/nova-status.html
redirectmatch 301 ^/nova/([^/]+)/man/nova-xvpvncproxy.html$ /nova/$1/cli/nova-xvpvncproxy.html
redirectmatch 301 ^/nova/([^/]+)/notifications.html$ /nova/$1/reference/notifications.html
redirectmatch 301 ^/nova/([^/]+)/placement.html$ /nova/$1/user/placement.html
redirectmatch 301 ^/nova/([^/]+)/placement_dev.html$ /nova/$1/contributor/placement.html
redirectmatch 301 ^/nova/([^/]+)/policies.html$ /nova/$1/contributor/policies.html
redirectmatch 301 ^/nova/([^/]+)/policy_enforcement.html$ /nova/$1/reference/policy-enforcement.html
redirectmatch 301 ^/nova/([^/]+)/process.html$ /nova/$1/contributor/process.html
redirectmatch 301 ^/nova/([^/]+)/project_scope.html$ /nova/$1/contributor/project-scope.html
redirectmatch 301 ^/nova/([^/]+)/quotas.html$ /nova/$1/user/quotas.html
redirectmatch 301 ^/nova/([^/]+)/releasenotes.html$ /nova/$1/contributor/releasenotes.html
redirectmatch 301 ^/nova/([^/]+)/rpc.html$ /nova/$1/reference/rpc.html
redirectmatch 301 ^/nova/([^/]+)/sample_config.html$ /nova/$1/configuration/sample-config.html
redirectmatch 301 ^/nova/([^/]+)/sample_policy.html$ /nova/$1/configuration/sample-policy.html
redirectmatch 301 ^/nova/([^/]+)/scheduler_evolution.html$ /nova/$1/reference/scheduler-evolution.html
redirectmatch 301 ^/nova/([^/]+)/services.html$ /nova/$1/reference/services.html
redirectmatch 301 ^/nova/([^/]+)/stable_api.html$ /nova/$1/reference/stable-api.html
redirectmatch 301 ^/nova/([^/]+)/support-matrix.html$ /nova/$1/user/support-matrix.html
redirectmatch 301 ^/nova/([^/]+)/test_strategy.html$ /nova/$1/contributor/testing.html
redirectmatch 301 ^/nova/([^/]+)/testing/libvirt-numa.html$ /nova/$1/contributor/testing/libvirt-numa.html
redirectmatch 301 ^/nova/([^/]+)/testing/serial-console.html$ /nova/$1/contributor/testing/serial-console.html
redirectmatch 301 ^/nova/([^/]+)/testing/zero-downtime-upgrade.html$ /nova/$1/contributor/testing/zero-downtime-upgrade.html
redirectmatch 301 ^/nova/([^/]+)/threading.html$ /nova/$1/reference/threading.html
redirectmatch 301 ^/nova/([^/]+)/upgrade.html$ /nova/$1/user/upgrade.html
redirectmatch 301 ^/nova/([^/]+)/vendordata.html$ /nova/$1/user/vendordata.html
redirectmatch 301 ^/nova/([^/]+)/vmstates.html$ /nova/$1/reference/vm-states.html
redirectmatch 301 ^/nova/([^/]+)/wsgi.html$ /nova/$1/user/wsgi.html
redirectmatch 301 ^/nova/([^/]+)/user/cellsv2_layout.html$ /nova/$1/user/cellsv2-layout.html

View File

@@ -10,7 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.
#
# nova documentation build configuration file
# placement documentation build configuration file
#
# Refer to the Sphinx documentation for advice on configuring this file:
#
@@ -19,7 +19,10 @@
import os
import sys
from nova.version import version_info
import pbr.version
version_info = pbr.version.VersionInfo('placement')
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
@@ -33,33 +36,30 @@ sys.path.insert(0, os.path.abspath('./'))
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# TODO(efried): Trim this moar
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.todo',
'openstackdocstheme',
'sphinx.ext.coverage',
'sphinx.ext.graphviz',
'sphinx_feature_classification.support_matrix',
'oslo_config.sphinxconfiggen',
# TODO(efried): make this work
# 'oslo_config.sphinxconfiggen',
'oslo_config.sphinxext',
'oslo_policy.sphinxpolicygen',
'oslo_policy.sphinxext',
'ext.versioned_notifications',
'ext.feature_matrix',
'sphinxcontrib.actdiag',
'sphinxcontrib.seqdiag',
]
# openstackdocstheme options
repository_name = 'openstack/nova'
repository_name = 'openstack/placement'
bug_project = 'nova'
bug_tag = ''
config_generator_config_file = '../../etc/nova/nova-config-generator.conf'
sample_config_basename = '_static/nova'
bug_tag = 'docs'
policy_generator_config_file = [
('../../etc/nova/nova-policy-generator.conf', '_static/nova'),
('../../etc/nova/placement-policy-generator.conf', '_static/placement')
('../../etc/placement/placement-policy-generator.conf',
'_static/placement')
]
actdiag_html_image_format = 'SVG'
@@ -77,7 +77,7 @@ source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = u'nova'
project = u'placement'
copyright = u'2010-present, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
@@ -89,14 +89,6 @@ release = version_info.release_string()
# The short X.Y version.
version = version_info.version_string()
# A list of glob-style patterns that should be excluded when looking for
# source files. They are matched against the source file names relative to the
# source directory, using slashes as directory separators on all platforms.
exclude_patterns = [
'api/nova.wsgi.nova-*',
'api/nova.tests.*',
]
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
@@ -109,36 +101,7 @@ show_authors = False
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['nova.']
# -- Options for man page output ----------------------------------------------
# Grouping the document tree for man pages.
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'
_man_pages = [
('nova-api-metadata', u'Cloud controller fabric'),
('nova-api-os-compute', u'Cloud controller fabric'),
('nova-api', u'Cloud controller fabric'),
('nova-cells', u'Cloud controller fabric'),
('nova-compute', u'Cloud controller fabric'),
('nova-console', u'Cloud controller fabric'),
('nova-consoleauth', u'Cloud controller fabric'),
('nova-dhcpbridge', u'Cloud controller fabric'),
('nova-manage', u'Cloud controller fabric'),
('nova-network', u'Cloud controller fabric'),
('nova-novncproxy', u'Cloud controller fabric'),
('nova-spicehtml5proxy', u'Cloud controller fabric'),
('nova-serialproxy', u'Cloud controller fabric'),
('nova-rootwrap', u'Cloud controller fabric'),
('nova-scheduler', u'Cloud controller fabric'),
('nova-xvpvncproxy', u'Cloud controller fabric'),
('nova-conductor', u'Cloud controller fabric'),
]
man_pages = [
('cli/%s' % name, name, description, [u'OpenStack'], 1)
for name, description in _man_pages]
modindex_common_prefix = ['placement.']
# -- Options for HTML output --------------------------------------------------
@@ -151,10 +114,6 @@ html_theme = 'openstackdocs'
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any paths that contain "extra" files, such as .htaccess or
# robots.txt.
html_extra_path = ['_extra']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
@@ -165,7 +124,7 @@ html_last_updated_fmt = '%Y-%m-%d %H:%M'
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'Nova.tex', u'Nova Documentation',
('index', 'Placement.tex', u'Placement Documentation',
u'OpenStack Foundation', 'manual'),
]
@@ -173,87 +132,9 @@ latex_documents = [
# keep this ordered to keep mriedem happy
openstack_projects = [
'ceilometer',
'cinder',
'glance',
'horizon',
'ironic',
'keystone',
'oslo.versionedobjects',
'neutron',
'nova',
'oslo.log',
'oslo.messaging',
'oslo.i18n',
'oslo.versionedobjects',
'python-novaclient',
'python-openstackclient',
'placement',
'reno',
'watcher',
]
# -- Custom extensions --------------------------------------------------------
def monkey_patch_blockdiag():
    """Monkey patch the blockdiag library.

    The default word wrapping in blockdiag is poor, and breaks on a fixed
    text width rather than on word boundaries. There's a patch submitted to
    resolve this [1]_ but it's unlikely to merge anytime soon.

    In addition, blockdiag monkey patches a core library function,
    ``codecs.getreader`` [2]_, to work around some Python 3 issues. Because
    this operates in the same environment as other code that uses this library,
    it ends up causing issues elsewhere. We undo these destructive changes
    pending a fix.

    TODO: Remove this once blockdiag is bumped to 1.6, which will hopefully
    include the fix.

    .. [1] https://bitbucket.org/blockdiag/blockdiag/pull-requests/16/
    .. [2] https://bitbucket.org/blockdiag/blockdiag/src/1.5.3/src/blockdiag/utils/compat.py  # noqa
    """
    import codecs
    from codecs import getreader

    from blockdiag.imagedraw import textfolder

    # oh, blockdiag. Let's undo the mess you made.
    codecs.getreader = getreader

    def splitlabel(text):
        """Split text to lines as generator.

        Every line will be stripped. If text includes characters "\n\n",
        treat as line separator. Ignore '\n' to allow line wrapping.
        """
        lines = [x.strip() for x in text.splitlines()]
        out = []
        for line in lines:
            if line:
                out.append(line)
            else:
                yield ' '.join(out)
                out = []
        yield ' '.join(out)

    def splittext(metrics, text, bound, measure='width'):
        folded = [' ']
        for word in text.split():
            # Try appending the word to the last line
            tryline = ' '.join([folded[-1], word]).strip()
            textsize = metrics.textsize(tryline)
            if getattr(textsize, measure) > bound:
                # Start a new line. Appends `word` even if > bound.
                folded.append(word)
            else:
                folded[-1] = tryline
        return folded

    # monkey patch those babies
    textfolder.splitlabel = splitlabel
    textfolder.splittext = splittext


monkey_patch_blockdiag()
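The hunk above replaces nova's version module with ``pbr``. As a rough sketch of what that provides to the Sphinx configuration (assuming the ``placement`` package metadata is installed in the docs build environment), the ``release`` and ``version`` values used elsewhere in ``conf.py`` come straight from ``pbr.version.VersionInfo``:

.. code-block:: python

   # Sketch only: mirrors the conf.py plumbing shown above, not an addition to it.
   import pbr.version

   version_info = pbr.version.VersionInfo('placement')

   # Full release string (for example "1.0.0.0b1.dev42") used as ``release``.
   release = version_info.release_string()
   # Short X.Y form used as ``version`` in the rendered docs.
   version = version_info.version_string()

   print(release, version)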

View File

@@ -4,36 +4,23 @@ Configuration Guide
The static configuration for nova lives in two main files: ``nova.conf`` and
``policy.json``. These are described below. For a bigger picture view on
configuring nova to solve specific problems, refer to the :doc:`Nova Admin
configuring nova to solve specific problems, refer to the :nova-doc:`Nova Admin
Guide </admin/index>`.
Configuration
-------------
* :doc:`Configuration Guide </admin/configuration/index>`: Detailed
configuration guides for various parts of your Nova system. Helpful reference
for setting up specific hypervisor backends.
.. TODO(efried):: Get these working
* :nova-doc:`Configuration Guide </admin/configuration/index>`: Detailed
configuration guides for various parts of your Nova system. Helpful reference
for setting up specific hypervisor backends.
* :doc:`Config Reference <config>`: A complete reference of all
configuration options available in the ``nova.conf`` file.
* :doc:`Sample Config File <sample-config>`: A sample config
file with inline documentation.
* :doc:`Config Reference <config>`: A complete reference of all
configuration options available in the ``nova.conf`` file.
* :doc:`Sample Config File <sample-config>`: A sample config
file with inline documentation.
Nova Policy
-----------
Nova, like most OpenStack projects, uses a policy language to restrict
permissions on REST API actions.
* :doc:`Policy Reference <policy>`: A complete reference of all
policy points in nova and what they impact.
* :doc:`Sample Policy File <sample-policy>`: A sample nova
policy file with inline documentation.
Placement Policy
----------------
Policy
------
Placement, like most OpenStack projects, uses a policy language to restrict
permissions on REST API actions.
@@ -51,9 +38,9 @@ permissions on REST API actions.
.. toctree::
:hidden:
config
sample-config
policy
sample-policy
placement-policy
sample-placement-policy
.. TODO(efried):: get these working
config
sample-config

View File

@@ -7,4 +7,4 @@ For a sample configuration file, refer to
:doc:`/configuration/sample-placement-policy`.
.. show-policy::
:config-file: etc/nova/placement-policy-generator.conf
:config-file: etc/placement/placement-policy-generator.conf

View File

@@ -2,14 +2,13 @@
API reference guideline
=======================
The API reference should be updated when compute or placement APIs are modified
The API reference should be updated when placement APIs are modified
(microversion is bumped, etc.).
This page describes the guideline for updating the API reference.
API reference
=============
* `Compute API reference <https://developer.openstack.org/api-ref/compute/>`_
* `Placement API reference <https://developer.openstack.org/api-ref/placement/>`_
The guideline to write the API reference
@@ -17,19 +16,9 @@ The guideline to write the API reference
The API reference consists of the following files.
Compute API reference
---------------------
* API reference text: ``api-ref/source/*.inc``
* Parameter definition: ``api-ref/source/parameters.yaml``
* JSON request/response samples: ``doc/api_samples/*``
Placement API reference
-----------------------
* API reference text: ``placement-api-ref/source/*.inc``
* Parameter definition: ``placement-api-ref/source/parameters.yaml``
* JSON request/response samples: ``placement-api-ref/source/samples/*``
* JSON request/response samples: ``api-ref/source/samples/*``
Structure of inc file
---------------------
@@ -228,6 +217,5 @@ Body
Reference
=========
* `Verifying the Nova API Ref <https://wiki.openstack.org/wiki/NovaAPIRef>`_
* `The description for Parameters whose values are null <http://lists.openstack.org/pipermail/openstack-dev/2017-January/109868.html>`_
* `The definition of "Optional" parameter <http://lists.openstack.org/pipermail/openstack-dev/2017-July/119239.html>`_

View File

@@ -18,17 +18,16 @@
Overview
========
The Nova project introduced the :doc:`placement service </user/placement>` as
part of the Newton release. The service provides an HTTP API to manage
inventories of different classes of resources, such as disk or virtual cpus,
made available by entities called resource providers. Information provided
through the placement API is intended to enable more effective accounting of
resources in an OpenStack deployment and better scheduling of various entities
in the cloud.
The Nova project introduced the placement service as part of the Newton
release. The service provides an HTTP API to manage inventories of different
classes of resources, such as disk or virtual cpus, made available by entities
called resource providers. Information provided through the placement API is
intended to enable more effective accounting of resources in an OpenStack
deployment and better scheduling of various entities in the cloud.
The document serves to explain the architecture of the system and to provide
some guidance on how to maintain and extend the code. For more detail on why
the system was created and how it does its job see :doc:`/user/placement`.
the system was created and how it does its job see :doc:`/index`.
Big Picture
===========
@@ -134,12 +133,13 @@ Microversions
=============
The placement API makes use of `microversions`_ to allow the release of new
features on an opt in basis. See :doc:`/user/placement` for an up to date
features on an opt in basis. See :doc:`/index` for an up to date
history of the available microversions.
The rules around when a microversion is needed are the same as for the
:doc:`compute API </contributor/microversions>`. When adding a new microversion
there are a few bits of required housekeeping that must be done in the code:
:nova-doc:`compute API </contributor/microversions>`. When adding a new
microversion there are a few bits of required housekeeping that must be done in
the code:
* Update the ``VERSIONS`` list in
``nova/api/openstack/placement/microversion.py`` to indicate the new
@@ -401,11 +401,14 @@ self-contained:
There are some exceptions to the self-contained rule (which are actively being
addressed to prepare for the extraction):
.. TODO(efried):: Get :oslo.config:option: role working below:
:oslo.config:option:`placement_database.connection`, can be set to use a
* Some of the code related to a resource class cache is within the `placement.db`
package, while other parts are in ``nova/rc_fields.py``.
* Database models, migrations and tables are described as part of the nova api
database. An optional configuration option,
:oslo.config:option:`placement_database.connection`, can be set to use a
`placement_database.connection`, can be set to use a
database just for placement (based on the api database schema).
* `nova.i18n` package provides the ``_`` and related functions.
* ``nova.conf`` is used for configuration.

View File

@@ -1,8 +1,4 @@
..
Copyright 2010-2012 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
@@ -15,186 +11,343 @@
License for the specific language governing permissions and limitations
under the License.
========================
OpenStack Compute (nova)
========================
===============
Placement API
===============
What is nova?
Overview
========
Nova introduced the placement API service in the 14.0.0 Newton release. This
is a separate REST API stack and data model used to track resource provider
inventories and usages, along with different classes of resources. For example,
a resource provider can be a compute node, a shared storage pool, or an IP
allocation pool. The placement service tracks the inventory and usage of each
provider. For example, an instance created on a compute node may be a consumer
of resources such as RAM and CPU from a compute node resource provider, disk
from an external shared storage pool resource provider and IP addresses from
an external IP pool resource provider.
The types of resources consumed are tracked as **classes**. The service
provides a set of standard resource classes (for example ``DISK_GB``,
``MEMORY_MB``, and ``VCPU``) and provides the ability to define custom
resource classes as needed.
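As an illustration of the custom resource class workflow, the sketch below creates one through the HTTP API. It is a sketch only: the endpoint, token, and class name are placeholders, and it assumes the ``POST /resource_classes`` call available from microversion 1.2.

.. code-block:: python

   # Sketch: create a custom resource class via the placement HTTP API.
   # Endpoint, token and class name are placeholders.
   import requests

   PLACEMENT = 'http://controller/placement'
   headers = {
       'X-Auth-Token': '<admin token>',
       'OpenStack-API-Version': 'placement 1.2',
   }

   resp = requests.post(PLACEMENT + '/resource_classes',
                        json={'name': 'CUSTOM_GOLD_SSD'},
                        headers=headers)
   # A 201 response indicates the class was created; requests sets the
   # ``Content-Type: application/json`` header because of ``json=``.
   print(resp.status_code)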
Each resource provider may also have a set of traits which describe qualitative
aspects of the resource provider. Traits describe an aspect of a resource
provider that cannot itself be consumed but a workload may wish to specify. For
example, available disk may be solid state drives (SSD).
References
~~~~~~~~~~
The following specifications represent the stages of design and development of
resource providers and the Placement service. Implementation details may have
changed or be partially complete at this time.
* `Generic Resource Pools <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/generic-resource-pools.html>`_
* `Compute Node Inventory <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/compute-node-inventory-newton.html>`_
* `Resource Provider Allocations <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/resource-providers-allocations.html>`_
* `Resource Provider Base Models <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/resource-providers.html>`_
* `Nested Resource Providers`_
* `Custom Resource Classes <http://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/custom-resource-classes.html>`_
* `Scheduler Filters in DB <http://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/resource-providers-scheduler-db-filters.html>`_
* `Scheduler claiming resources to the Placement API <http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/placement-claims.html>`_
* `The Traits API - Manage Traits with ResourceProvider <http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html>`_
* `Request Traits During Scheduling`_
* `filter allocation candidates by aggregate membership`_
* `perform granular allocation candidate requests`_
* `inventory and allocation data migration`_ (reshaping provider trees)
.. _Nested Resource Providers: http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/nested-resource-providers.html
.. _Request Traits During Scheduling: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/request-traits-in-nova.html
.. _filter allocation candidates by aggregate membership: https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/alloc-candidates-member-of.html
.. _perform granular allocation candidate requests: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html
.. _inventory and allocation data migration: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html
Deployment
==========
The placement-api service must be deployed at some point after you have
upgraded to the 14.0.0 Newton release but before you can upgrade to the 15.0.0
Ocata release. This is so that the resource tracker in the nova-compute service
can populate resource provider (compute node) inventory and allocation
information which will be used by the nova-scheduler service in Ocata.
Steps
~~~~~
**1. Deploy the API service**
At this time the placement API code is still in Nova alongside the compute REST
API code (nova-api). So once you have upgraded nova-api to Newton you already
have the placement API code, you just need to install the service. Nova
provides a ``nova-placement-api`` WSGI script for running the service with
Apache, nginx or other WSGI-capable web servers. Depending on what packaging
solution is used to deploy OpenStack, the WSGI script may be in ``/usr/bin``
or ``/usr/local/bin``.
.. note:: The placement API service is currently developed within Nova but
it is designed to be as separate as possible from the existing code so
that it can eventually be split into a separate project.
``nova-placement-api``, as a standard WSGI script, provides a module level
``application`` attribute that most WSGI servers expect to find. This means it
is possible to run it with lots of different servers, providing flexibility in
the face of different deployment scenarios. Common scenarios include:
* apache2_ with mod_wsgi_
* apache2 with mod_proxy_uwsgi_
* nginx_ with uwsgi_
* nginx with gunicorn_
In all of these scenarios the host, port and mounting path (or prefix) of the
application is controlled in the web server's configuration, not in the
configuration (``nova.conf``) of the placement application.
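To make the module level ``application`` attribute concrete, the following is a minimal local smoke test. It is a sketch under assumptions: the import path is illustrative rather than the packaged ``nova-placement-api`` script, and a valid configuration must exist in the directory named by ``OS_PLACEMENT_CONFIG_DIR``.

.. code-block:: python

   # Sketch only: serve the placement WSGI application with the stdlib server.
   # Real deployments use one of the web servers listed above; the import path
   # below is an assumption for illustration.
   import os
   from wsgiref.simple_server import make_server

   # Point the application at an alternate configuration directory before it
   # is loaded (see the configuration notes below).
   os.environ['OS_PLACEMENT_CONFIG_DIR'] = '/etc/nova'

   from nova.api.openstack.placement.wsgi import init_application

   application = init_application()

   server = make_server('127.0.0.1', 8780, application)
   server.handle_request()  # serve a single request, e.g. GET / for version discovery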
When placement was `first added to DevStack`_ it used the ``mod_wsgi`` style.
Later it `was updated`_ to use mod_proxy_uwsgi_. Looking at those changes can
be useful for understanding the relevant options.
DevStack is configured to host placement at ``/placement`` on either the
default port for http or for https (``80`` or ``443``) depending on whether TLS
is being used. Using a default port is desirable.
By default, the placement application will get its configuration for settings
such as the database connection URL from ``/etc/nova/nova.conf``. The directory
the configuration file will be found in can be changed by setting
``OS_PLACEMENT_CONFIG_DIR`` in the environment of the process that starts the
application.
.. note:: When using uwsgi with a front end (e.g., apache2 or nginx) something
needs to ensure that the uwsgi process is running. In DevStack this is done
with systemd_. This is one of many different ways to manage uwsgi.
This document refrains from declaring a set of installation instructions for
the placement service. This is because a major point of having a WSGI
application is to make the deployment as flexible as possible. Because the
placement API service is itself stateless (all state is in the database), it is
possible to deploy as many servers as desired behind a load balancing solution
for robust and simple scaling. If you familiarize yourself with installing
generic WSGI applications (using the links in the common scenarios list,
above), those techniques will be applicable here.
.. _apache2: http://httpd.apache.org/
.. _mod_wsgi: https://modwsgi.readthedocs.io/
.. _mod_proxy_uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Apache.html
.. _nginx: http://nginx.org/
.. _uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Nginx.html
.. _gunicorn: http://gunicorn.org/
.. _first added to DevStack: https://review.openstack.org/#/c/342362/
.. _was updated: https://review.openstack.org/#/c/456717/
.. _systemd: https://review.openstack.org/#/c/448323/
**2. Synchronize the database**
In the Newton release the Nova **api** database is the only deployment
option for the placement API service and the resources it manages. After
upgrading the nova-api service for Newton and running the
``nova-manage api_db sync`` command the placement tables will be created.
.. TODO(efried):: Get :oslo.config:option: role working below:
placement. If :oslo.config:option:`placement_database.connection` is
With the Rocky release, it has become possible to use a separate database for
placement. If `placement_database.connection` is
configured with a database connect string, that database will be used for
storing placement data. Once the database is created, the
``nova-manage api_db sync`` command will create and synchronize both the
nova api and placement tables. If ``[placement_database]/connection`` is not
set, the nova api database will be used.
.. note:: At this time there is no facility for migrating existing placement
data from the nova api database to a placement database. There are
many ways to do this. Which one is best will depend on the environment.
**3. Create accounts and update the service catalog**
Create a **placement** service user with an **admin** role in Keystone.
The placement API is a separate service and thus should be registered under
a **placement** service type in the service catalog as that is what the
resource tracker in the nova-compute node will use to look up the endpoint.
Devstack sets up the placement service on the default HTTP port (80) with a
``/placement`` prefix instead of using an independent port.
**4. Configure and restart nova-compute services**
The 14.0.0 Newton nova-compute service code will begin reporting resource
provider inventory and usage information as soon as the placement API
service is in place and can respond to requests via the endpoint registered
in the service catalog.
``nova.conf`` on the compute nodes must be updated in the ``[placement]``
group to contain credentials for making requests from nova-compute to the
placement-api service.
.. note:: After upgrading nova-compute code to Newton and restarting the
service, the nova-compute service will attempt to make a connection
to the placement API and if that is not yet available a warning will
be logged. The nova-compute service will keep attempting to connect
to the placement API, warning periodically on error until it is
successful. Keep in mind that Placement is optional in Newton, but
required in Ocata, so the placement service should be enabled before
upgrading to Ocata. nova.conf on the compute nodes will need to be
updated in the ``[placement]`` group for credentials to make requests
from nova-compute to the placement-api service.
.. _placement-upgrade-notes:
Upgrade Notes
=============
Nova is the OpenStack project that provides a way to provision compute
instances (aka virtual servers). Nova supports creating virtual machines,
baremetal servers (through the use of ironic), and has limited support for
system containers. Nova runs as a set of daemons on top of existing Linux
servers to provide that service.
The following sub-sections provide notes on upgrading to a given target release.
It requires the following additional OpenStack services for basic function:
.. note::
* :keystone-doc:`Keystone <>`: This provides identity and authentication for
all OpenStack services.
* :glance-doc:`Glance <>`: This provides the compute image repository. All
compute instances launch from glance images.
* :neutron-doc:`Neutron <>`: This is responsible for provisioning the virtual
or physical networks that compute instances connect to on boot.
As a reminder, the :nova-doc:`nova-status upgrade check </cli/nova-status>`
tool can be used to help determine the status of your deployment and how
ready it is to perform an upgrade.
It can also integrate with other services to include: persistent block
storage, encrypted disks, and baremetal compute instances.
Ocata (15.0.0)
~~~~~~~~~~~~~~
For End Users
=============
* The ``nova-compute`` service will fail to start in Ocata unless the
``[placement]`` section of nova.conf on the compute is configured. As
mentioned in the deployment steps above, the Placement service should be
deployed by this point so the computes can register and start reporting
inventory and allocation information. If the computes are deployed
and configured `before` the Placement service, they will continue to try
and reconnect in a loop so that you do not need to restart the nova-compute
process to talk to the Placement service after the compute is properly
configured.
* The ``nova.scheduler.filter_scheduler.FilterScheduler`` in Ocata will
fallback to not using the Placement service as long as there are older
``nova-compute`` services running in the deployment. This allows for rolling
upgrades of the computes to not affect scheduling for the FilterScheduler.
However, the fallback mechanism will be removed in the 16.0.0 Pike release
such that the scheduler will make decisions based on the Placement service
and the resource providers (compute nodes) registered there. This means if
the computes are not reporting into Placement by Pike, build requests will
fail with **NoValidHost** errors.
* While the FilterScheduler technically depends on the Placement service
in Ocata, if you deploy the Placement service `after` you upgrade the
``nova-scheduler`` service to Ocata and restart it, things will still work.
The scheduler will gracefully handle the absence of the Placement service.
However, once all computes are upgraded, the scheduler not being able to make
requests to Placement will result in **NoValidHost** errors.
* It is currently possible to exclude the ``CoreFilter``, ``RamFilter`` and
``DiskFilter`` from the list of enabled FilterScheduler filters such that
scheduling decisions are not based on CPU, RAM or disk usage. Once all
computes are reporting into the Placement service, however, and the
FilterScheduler starts to use the Placement service for decisions, those
excluded filters are ignored and the scheduler will make requests based on
VCPU, MEMORY_MB and DISK_GB inventory. If you wish to effectively ignore
that type of resource for placement decisions, you will need to adjust the
corresponding ``cpu_allocation_ratio``, ``ram_allocation_ratio``, and/or
``disk_allocation_ratio`` configuration options to be very high values, e.g.
9999.0.
* Users of CellsV1 will need to deploy a placement per cell, matching
the scope and cardinality of the regular ``nova-scheduler`` process.
As an end user of nova, you'll use nova to create and manage servers with
either tools or the API directly.
Pike (16.0.0)
~~~~~~~~~~~~~
Tools for using Nova
--------------------
* The ``nova.scheduler.filter_scheduler.FilterScheduler`` in Pike will
no longer fall back to not using the Placement Service, even if older
computes are running in the deployment.
* The FilterScheduler now requests allocation candidates from the Placement
service during scheduling. The allocation candidates information was
introduced in the Placement API 1.10 microversion, so you should upgrade the
placement service **before** the Nova scheduler service so that the scheduler
can take advantage of the allocation candidate information.
* :horizon-doc:`Horizon <user/launch-instances.html>`: The official web UI for
the OpenStack Project.
* :python-openstackclient-doc:`OpenStack Client <>`: The official CLI for
OpenStack Projects. You should use this as your CLI for most things, it
includes not just nova commands but also commands for most of the projects in
OpenStack.
* :python-novaclient-doc:`Nova Client <user/shell.html>`: For some very
advanced features (or administrative commands) of nova you may need to use
nova client. It is still supported, but the ``openstack`` cli is recommended.
The scheduler gets the allocation candidates from the placement API and
uses those to get the compute nodes, which come from the cell(s). The
compute nodes are passed through the enabled scheduler filters and weighers.
The scheduler then iterates over this filtered and weighed list of hosts and
attempts to claim resources in the placement API for each instance in the
request. Claiming resources involves finding an allocation candidate that
contains an allocation against the selected host's UUID and asking the
placement API to allocate the requested instance resources. We continue
performing this claim request until success or we run out of allocation
candidates, resulting in a NoValidHost error.
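The claim loop just described can be pictured roughly as follows. This is an illustrative sketch, not the actual scheduler code; the helper names are invented.

.. code-block:: python

   # Illustrative sketch of the scheduler claim loop; names are invented.
   class NoValidHost(Exception):
       pass


   def claim_for_instance(instance_uuid, weighed_hosts, candidates_by_host,
                          placement):
       """Try each filtered/weighed host until a placement claim succeeds."""
       for host in weighed_hosts:
           for allocation_request in candidates_by_host.get(host.uuid, []):
               # Ask placement to write the allocations for this candidate.
               if placement.put_allocations(instance_uuid, allocation_request):
                   return host
       raise NoValidHost()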
Writing to the API
------------------
For a move operation, such as migration, allocations are made in Placement
against both the source and destination compute node. Once the
move operation is complete, the resource tracker in the *nova-compute*
service will adjust the allocations in Placement appropriately.
All end user (and some administrative) features of nova are exposed via a REST
API, which can be used to build more complicated logic or automation with
nova. This can be consumed directly, or via various SDKs. The following
resources will help you get started with consuming the API directly.
For a resize to the same host, allocations are summed on the single compute
node. This could pose a problem if the compute node has limited capacity.
Since resizing to the same host is disabled by default, and generally only
used in testing, this is mentioned for completeness but should not be a
concern for production deployments.
* `Compute API Guide <https://developer.openstack.org/api-guide/compute/>`_: The
concept guide for the API. This helps lay out the concepts behind the API to
make consuming the API reference easier.
* `Compute API Reference <http://developer.openstack.org/api-ref/compute/>`_:
The complete reference for the compute API, including all methods and
request / response parameters and their meaning.
* :doc:`Compute API Microversion History </reference/api-microversion-history>`:
The compute API evolves over time through `Microversions
<https://developer.openstack.org/api-guide/compute/microversions.html>`_. This
provides the history of all those changes. Consider it a "what's new" in the
compute API.
* `Placement API Reference <https://developer.openstack.org/api-ref/placement/>`_:
The complete reference for the placement API, including all methods and
request / response parameters and their meaning.
* :ref:`Placement API Microversion History <placement-api-microversion-history>`:
The placement API evolves over time through `Microversions
<https://developer.openstack.org/api-guide/compute/microversions.html>`_. This
provides the history of all those changes. Consider it a "what's new" in the
placement API.
* :doc:`Block Device Mapping </user/block-device-mapping>`: One of the trickier
parts to understand is the Block Device Mapping parameters used to connect
specific block devices to computes. This deserves its own deep dive.
* :doc:`Configuration drive </user/config-drive>`: Provide information to the
guest instance when it is created.
Queens (17.0.0)
~~~~~~~~~~~~~~~
Nova can be configured to emit notifications over RPC.
* The minimum Placement API microversion required by the *nova-scheduler*
service is ``1.17`` in order to support `Request Traits During Scheduling`_.
This means you must upgrade the placement service before upgrading any
*nova-scheduler* services to Queens.
* :ref:`Versioned Notifications <versioned_notification_samples>`: This
provides the list of existing versioned notifications with sample payloads.
Rocky (18.0.0)
~~~~~~~~~~~~~~
For Operators
=============
* The ``nova-api`` service now requires the ``[placement]`` section to be
configured in nova.conf if you are using a separate config file just for
that service. This is because the ``nova-api`` service now needs to talk
to the placement service in order to (1) delete resource provider allocations
when deleting an instance and the ``nova-compute`` service on which that
instance is running is down (2) delete a ``nova-compute`` service record via
the ``DELETE /os-services/{service_id}`` API and (3) mirror aggregate host
associations to the placement service. This change is idempotent if
``[placement]`` is not configured in ``nova-api`` but it will result in new
warnings in the logs until configured.
* As described above, before Rocky, the placement service used the nova api
database to store placement data. In Rocky, if the ``connection`` setting in
a ``[placement_database]`` group is set in configuration, that group will be
used to describe where and how placement data is stored.
Architecture Overview
---------------------
REST API
========
* :doc:`Nova architecture </user/architecture>`: An overview of how all the parts in
nova fit together.
The placement API service has its own `REST API`_ and data model. One
can get a sample of the REST API via the functional test `gabbits`_.
Installation
------------
.. _`REST API`: https://developer.openstack.org/api-ref/placement/
.. _gabbits: http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/api/openstack/placement/gabbits
.. TODO(sdague): links to all the rest of the install guide pieces.
Microversions
~~~~~~~~~~~~~
The detailed install guide for nova. A functioning nova will also require
having installed :keystone-doc:`keystone <install/>`, :glance-doc:`glance
<install/>`, and :neutron-doc:`neutron <install/>`. Ensure that you follow
their install guides first.
The placement API uses microversions for making incremental changes to the
API which client requests must opt into.
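For example, a client opts in to a particular microversion with the ``OpenStack-API-Version`` header. The sketch below uses placeholder endpoint and token values.

.. code-block:: python

   # Sketch: request a specific placement microversion. Without the header the
   # service responds at the minimum (1.0) microversion.
   import requests

   headers = {
       'X-Auth-Token': '<token>',
       'OpenStack-API-Version': 'placement 1.10',
   }
   resp = requests.get('http://controller/placement/', headers=headers)
   print(resp.json())  # version discovery document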
.. toctree::
:maxdepth: 2
It is especially important to keep in mind that nova-compute is a client of
the placement REST API and based on how Nova supports rolling upgrades the
nova-compute service could be Newton level code making requests to an Ocata
placement API, and vice-versa, an Ocata compute service in a cells v2 cell
could be making requests to a Newton placement API.
install/index
.. _placement-api-microversion-history:
Deployment Considerations
-------------------------
.. include:: ../../placement/rest_api_version_history.rst
There is information you might want to consider before doing your deployment,
especially if it is going to be a larger deployment. For smaller deployments
the defaults from the :doc:`install guide </install/index>` will be sufficient.
Configuration
~~~~~~~~~~~~~
* **Compute Driver Features Supported**: While the majority of nova deployments use
libvirt/kvm, you can use nova with other compute drivers. Nova attempts to
provide a unified feature set across these, however, not all features are
implemented on all backends, and not all features are equally well tested.
See the `Configuration Guide <configuration/index>` for information on
configuring the system, including role-based access control policy rules.
* :doc:`Feature Support by Use Case </user/feature-classification>`: A view of
what features each driver supports based on what's important to some large
use cases (General Purpose Cloud, NFV Cloud, HPC Cloud).
* :doc:`Feature Support full list </user/support-matrix>`: A detailed dive through
features in each compute driver backend.
* :doc:`Cells v2 Planning </user/cellsv2-layout>`: For large deployments, Cells v2
allows sharding of your compute environment. Upfront planning is key to a
successful Cells v2 layout.
* :doc:`Placement service </user/placement>`: Overview of the placement
service, including how it fits in with the rest of nova.
* :doc:`Running nova-api on wsgi <user/wsgi>`: Considerations for using a real
WSGI container instead of the baked-in eventlet web server.
Maintenance
-----------
Once you are running nova, the following information is extremely useful.
* :doc:`Admin Guide </admin/index>`: A collection of guides for administrating
nova.
* :doc:`Flavors </user/flavors>`: What flavors are and why they are used.
* :doc:`Upgrades </user/upgrade>`: How nova is designed to be upgraded for minimal
service impact, and the order you should do them in.
* :doc:`Quotas </user/quotas>`: Managing project quotas in nova.
* :doc:`Aggregates </user/aggregates>`: Aggregates are a useful way of grouping
hosts together for scheduling purposes.
* :doc:`Filter Scheduler </user/filter-scheduler>`: How the filter scheduler is
configured, and how that will impact where compute instances land in your
environment. If you are seeing unexpected distribution of compute instances
in your hosts, you'll want to dive into this configuration.
* :doc:`Exposing custom metadata to compute instances </user/vendordata>`: How and
when you might want to extend the basic metadata exposed to compute instances
(either via metadata server or config drive) for your specific purposes.
Reference Material
------------------
* :doc:`Nova CLI Command References </cli/index>`: the complete command reference
for all the daemons and admin tools that come with nova.
* :doc:`Configuration Guide <configuration/index>`: Information on configuring
the system, including role-based access control policy rules.
For Contributors
================
If you are new to Nova, this should help you start to understand what Nova
actually does, and why.
.. toctree::
:maxdepth: 1
contributor/index
There are also a number of technical references on both current and future
looking parts of our architecture. These are collected below.
.. toctree::
:maxdepth: 1
reference/index
Contributors
~~~~~~~~~~~~
See the `Contributor Guide <contributor/index>` for information on how to
contribute to the placement project.
.. # NOTE(mriedem): This is the section where we hide things that we don't
# actually want in the table of contents but sphinx build would fail if
@@ -203,68 +356,9 @@ looking parts of our architecture. These are collected below.
.. toctree::
:hidden:
admin/index
admin/configuration/index
cli/index
configuration/index
contributor/development-environment
contributor/api
contributor/api-2
contributor/index
contributor/api-ref-guideline
contributor/blueprints
contributor/code-review
contributor/documentation
contributor/microversions
contributor/placement.rst
contributor/policies.rst
contributor/releasenotes
contributor/testing
contributor/testing/libvirt-numa
contributor/testing/serial-console
contributor/testing/zero-downtime-upgrade
contributor/how-to-get-involved
contributor/process
contributor/project-scope
reference/api-microversion-history.rst
reference/gmr
reference/i18n
reference/live-migration
reference/notifications
reference/policy-enforcement
reference/rpc
reference/scheduling
reference/scheduler-evolution
reference/services
reference/stable-api
reference/threading
reference/update-provider-tree
reference/vm-states
user/index
user/aggregates
user/architecture
user/block-device-mapping
user/cells
user/cellsv2-layout
user/certificate-validation
user/conductor
user/config-drive
user/feature-classification
user/filter-scheduler
user/flavors
user/manage-ip-addresses
user/placement
user/quotas
user/support-matrix
user/upgrade
user/user-data
user/vendordata
user/wsgi
Search
======
* :ref:`Nova document search <search>`: Search the contents of this document.
* `OpenStack wide search <https://docs.openstack.org>`_: Search the wider
set of OpenStack documentation, including forums.
install/controller-install-obs
install/controller-install-rdo
install/controller-install-ubuntu

View File

@@ -270,7 +270,7 @@ databases, service credentials, and API endpoints.
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
.. include:: note_configuration_vary_by_distribution.rst
.. note::

View File

@@ -269,7 +269,7 @@ databases, service credentials, and API endpoints.
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
.. include:: note_configuration_vary_by_distribution.rst
#. Install the packages:

View File

@@ -270,7 +270,7 @@ databases, service credentials, and API endpoints.
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
.. include:: note_configuration_vary_by_distribution.rst
#. Install the packages:

View File

@@ -0,0 +1,6 @@
.. note::

   Default configuration files vary by distribution. You might need to add
   these sections and options rather than modifying existing sections and
   options. Also, an ellipsis (``...``) in the configuration snippets indicates
   potential default configuration options that you should retain.

View File

@@ -1,335 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
===============
Placement API
===============
Overview
========
Nova introduced the placement API service in the 14.0.0 Newton release. This
is a separate REST API stack and data model used to track resource provider
inventories and usages, along with different classes of resources. For example,
a resource provider can be a compute node, a shared storage pool, or an IP
allocation pool. The placement service tracks the inventory and usage of each
provider. For example, an instance created on a compute node may be a consumer
of resources such as RAM and CPU from a compute node resource provider, disk
from an external shared storage pool resource provider and IP addresses from
an external IP pool resource provider.
The types of resources consumed are tracked as **classes**. The service
provides a set of standard resource classes (for example ``DISK_GB``,
``MEMORY_MB``, and ``VCPU``) and provides the ability to define custom
resource classes as needed.
Each resource provider may also have a set of traits which describe qualitative
aspects of the resource provider. Traits describe an aspect of a resource
provider that cannot itself be consumed but a workload may wish to specify. For
example, available disk may be solid state drives (SSD).
References
~~~~~~~~~~
The following specifications represent the stages of design and development of
resource providers and the Placement service. Implementation details may have
changed or be partially complete at this time.
* `Generic Resource Pools <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/generic-resource-pools.html>`_
* `Compute Node Inventory <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/compute-node-inventory-newton.html>`_
* `Resource Provider Allocations <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/resource-providers-allocations.html>`_
* `Resource Provider Base Models <https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/resource-providers.html>`_
* `Nested Resource Providers`_
* `Custom Resource Classes <http://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/custom-resource-classes.html>`_
* `Scheduler Filters in DB <http://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/resource-providers-scheduler-db-filters.html>`_
* `Scheduler claiming resources to the Placement API <http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/placement-claims.html>`_
* `The Traits API - Manage Traits with ResourceProvider <http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html>`_
* `Request Traits During Scheduling`_
* `filter allocation candidates by aggregate membership`_
* `perform granular allocation candidate requests`_
* `inventory and allocation data migration`_ (reshaping provider trees)
.. _Nested Resource Providers: http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/nested-resource-providers.html
.. _Request Traits During Scheduling: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/request-traits-in-nova.html
.. _filter allocation candidates by aggregate membership: https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/alloc-candidates-member-of.html
.. _perform granular allocation candidate requests: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html
.. _inventory and allocation data migration: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html
Deployment
==========
The placement-api service must be deployed at some point after you have
upgraded to the 14.0.0 Newton release but before you can upgrade to the 15.0.0
Ocata release. This is so that the resource tracker in the nova-compute service
can populate resource provider (compute node) inventory and allocation
information which will be used by the nova-scheduler service in Ocata.
Steps
~~~~~
**1. Deploy the API service**
At this time the placement API code is still in Nova alongside the compute REST
API code (nova-api). So once you have upgraded nova-api to Newton you already
have the placement API code, you just need to install the service. Nova
provides a ``nova-placement-api`` WSGI script for running the service with
Apache, nginx or other WSGI-capable web servers. Depending on what packaging
solution is used to deploy OpenStack, the WSGI script may be in ``/usr/bin``
or ``/usr/local/bin``.
.. note:: The placement API service is currently developed within Nova but
it is designed to be as separate as possible from the existing code so
that it can eventually be split into a separate project.
``nova-placement-api``, as a standard WSGI script, provides a module level
``application`` attribute that most WSGI servers expect to find. This means it
is possible to run it with lots of different servers, providing flexibility in
the face of different deployment scenarios. Common scenarios include:
* apache2_ with mod_wsgi_
* apache2 with mod_proxy_uwsgi_
* nginx_ with uwsgi_
* nginx with gunicorn_
In all of these scenarios the host, port and mounting path (or prefix) of the
application is controlled in the web server's configuration, not in the
configuration (``nova.conf``) of the placement application.
When placement was `first added to DevStack`_ it used the ``mod_wsgi`` style.
Later it `was updated`_ to use mod_proxy_uwsgi_. Looking at those changes can
be useful for understanding the relevant options.
DevStack is configured to host placement at ``/placement`` on either the
default port for http or for https (``80`` or ``443``) depending on whether TLS
is being used. Using a default port is desirable.
By default, the placement application will get its configuration for settings
such as the database connection URL from ``/etc/nova/nova.conf``. The directory
the configuration file will be found in can be changed by setting
``OS_PLACEMENT_CONFIG_DIR`` in the environment of the process that starts the
application.
.. note:: When using uwsgi with a front end (e.g., apache2 or nginx) something
needs to ensure that the uwsgi process is running. In DevStack this is done
with systemd_. This is one of many different ways to manage uwsgi.
This document refrains from declaring a set of installation instructions for
the placement service. This is because a major point of having a WSGI
application is to make the deployment as flexible as possible. Because the
placement API service is itself stateless (all state is in the database), it is
possible to deploy as many servers as desired behind a load balancing solution
for robust and simple scaling. If you familiarize yourself with installing
generic WSGI applications (using the links in the common scenarios list,
above), those techniques will be applicable here.
.. _apache2: http://httpd.apache.org/
.. _mod_wsgi: https://modwsgi.readthedocs.io/
.. _mod_proxy_uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Apache.html
.. _nginx: http://nginx.org/
.. _uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Nginx.html
.. _gunicorn: http://gunicorn.org/
.. _first added to DevStack: https://review.openstack.org/#/c/342362/
.. _was updated: https://review.openstack.org/#/c/456717/
.. _systemd: https://review.openstack.org/#/c/448323/
**2. Synchronize the database**
In the Newton release the Nova **api** database is the only deployment
option for the placement API service and the resources it manages. After
upgrading the nova-api service for Newton and running the
``nova-manage api_db sync`` command the placement tables will be created.
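
For example, once the Newton nova-api code is in place (run this wherever
``nova.conf`` has the api database connection configured):

.. code-block:: console

   $ nova-manage api_db sync
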
With the Rocky release, it has become possible to use a separate database for
placement. If :oslo.config:option:`placement_database.connection` is
configured with a database connection string, that database will be used for
storing placement data. Once the database is created, the
``nova-manage api_db sync`` command will create and synchronize both the
nova api and placement tables. If ``[placement_database]/connection`` is not
set, the nova api database will be used.
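
A minimal sketch of opting into a separate placement database follows; the
user, password, host, and database name in the connection string are purely
illustrative:

.. code-block:: ini

   [placement_database]
   # If this option is unset, placement data continues to be stored in
   # the nova api database.
   connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
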
.. note:: At this time there is no facility for migrating existing placement
data from the nova api database to a placement database. There are
many ways to do this. Which one is best will depend on the environment.
**3. Create accounts and update the service catalog**
Create a **placement** service user with an **admin** role in Keystone.

The placement API is a separate service and thus should be registered under
a **placement** service type in the service catalog, as that is what the
resource tracker in the nova-compute service will use to look up the
endpoint.
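
As a sketch of what this usually involves (the domain, project, region, and
endpoint URL are assumptions to adapt to your cloud), the steps with the
``openstack`` client look something like:

.. code-block:: console

   $ openstack user create --domain default --password-prompt placement
   $ openstack role add --project service --user placement admin
   $ openstack service create --name placement \
       --description "Placement API" placement
   $ openstack endpoint create --region RegionOne \
       placement public http://controller/placement

Repeat the endpoint creation for the internal and admin interfaces if your
deployment uses them.
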
DevStack sets up the placement service on the default HTTP port (80) with a
``/placement`` prefix instead of using an independent port.
**4. Configure and restart nova-compute services**
The 14.0.0 Newton nova-compute service code will begin reporting resource
provider inventory and usage information as soon as the placement API
service is in place and can respond to requests via the endpoint registered
in the service catalog.
``nova.conf`` on the compute nodes must be updated in the ``[placement]``
group to contain credentials for making requests from nova-compute to the
placement-api service.
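
A sketch of what that group might contain follows; the auth URL and password
are placeholders that must match the Keystone user and endpoints created
above:

.. code-block:: ini

   [placement]
   auth_type = password
   auth_url = http://controller/identity
   project_name = service
   project_domain_name = Default
   username = placement
   user_domain_name = Default
   password = PLACEMENT_PASS
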
.. note:: After upgrading the nova-compute code to Newton and restarting the
   service, nova-compute will attempt to make a connection to the placement
   API and, if it is not yet available, a warning will be logged. The
   nova-compute service will keep attempting to connect to the placement API,
   warning periodically on error, until it is successful. Keep in mind that
   Placement is optional in Newton but required in Ocata, so the placement
   service should be enabled before upgrading to Ocata.
.. _placement-upgrade-notes:
Upgrade Notes
=============
The following sub-sections provide notes on upgrading to a given target release.
.. note::
As a reminder, the :doc:`nova-status upgrade check </cli/nova-status>` tool
can be used to help determine the status of your deployment and how ready it
is to perform an upgrade.
Ocata (15.0.0)
~~~~~~~~~~~~~~
* The ``nova-compute`` service will fail to start in Ocata unless the
``[placement]`` section of nova.conf on the compute is configured. As
mentioned in the deployment steps above, the Placement service should be
deployed by this point so the computes can register and start reporting
  inventory and allocation information. If the computes are deployed and
  configured `before` the Placement service, they will keep trying to
  reconnect in a loop, so you do not need to restart the nova-compute process
  to talk to the Placement service after the compute is properly configured.
* The ``nova.scheduler.filter_scheduler.FilterScheduler`` in Ocata will
  fall back to not using the Placement service as long as there are older
  ``nova-compute`` services running in the deployment. This allows rolling
  upgrades of the computes to proceed without affecting scheduling for the
  FilterScheduler.
However, the fallback mechanism will be removed in the 16.0.0 Pike release
such that the scheduler will make decisions based on the Placement service
and the resource providers (compute nodes) registered there. This means if
the computes are not reporting into Placement by Pike, build requests will
fail with **NoValidHost** errors.
* While the FilterScheduler technically depends on the Placement service
in Ocata, if you deploy the Placement service `after` you upgrade the
``nova-scheduler`` service to Ocata and restart it, things will still work.
The scheduler will gracefully handle the absence of the Placement service.
However, once all computes are upgraded, the scheduler not being able to make
requests to Placement will result in **NoValidHost** errors.
* It is currently possible to exclude the ``CoreFilter``, ``RamFilter`` and
``DiskFilter`` from the list of enabled FilterScheduler filters such that
scheduling decisions are not based on CPU, RAM or disk usage. Once all
computes are reporting into the Placement service, however, and the
FilterScheduler starts to use the Placement service for decisions, those
excluded filters are ignored and the scheduler will make requests based on
  VCPU, MEMORY_MB, and DISK_GB inventory. If you wish to effectively ignore
  a type of resource for placement decisions, you will need to set the
  corresponding ``cpu_allocation_ratio``, ``ram_allocation_ratio``, and/or
  ``disk_allocation_ratio`` configuration options to very high values, e.g.
  9999.0 (a configuration sketch follows this list).
* Users of CellsV1 will need to deploy a placement per cell, matching
the scope and cardinality of the regular ``nova-scheduler`` process.
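
As a minimal sketch of the allocation ratio override described above (the
value is a placeholder; similar overrides apply to ``cpu_allocation_ratio``
and ``ram_allocation_ratio``):

.. code-block:: ini

   [DEFAULT]
   # Effectively ignore disk usage when making placement decisions by
   # using a very large allocation ratio.
   disk_allocation_ratio = 9999.0
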
Pike (16.0.0)
~~~~~~~~~~~~~
* The ``nova.scheduler.filter_scheduler.FilterScheduler`` in Pike will
no longer fall back to not using the Placement Service, even if older
computes are running in the deployment.
* The FilterScheduler now requests allocation candidates from the Placement
service during scheduling. The allocation candidates information was
introduced in the Placement API 1.10 microversion, so you should upgrade the
placement service **before** the Nova scheduler service so that the scheduler
can take advantage of the allocation candidate information.
The scheduler gets the allocation candidates from the placement API and
uses those to get the compute nodes, which come from the cell(s). The
compute nodes are passed through the enabled scheduler filters and weighers.
The scheduler then iterates over this filtered and weighed list of hosts and
attempts to claim resources in the placement API for each instance in the
request. Claiming resources involves finding an allocation candidate that
contains an allocation against the selected host's UUID and asking the
  placement API to allocate the requested instance resources. This claim
  request is repeated until it succeeds or the allocation candidates are
  exhausted, at which point scheduling fails with a **NoValidHost** error.
For a move operation, such as migration, allocations are made in Placement
against both the source and destination compute node. Once the
move operation is complete, the resource tracker in the *nova-compute*
service will adjust the allocations in Placement appropriately.
For a resize to the same host, allocations are summed on the single compute
node. This could pose a problem if the compute node has limited capacity.
Since resizing to the same host is disabled by default, and generally only
used in testing, this is mentioned for completeness but should not be a
concern for production deployments.
Queens (17.0.0)
~~~~~~~~~~~~~~~
* The minimum Placement API microversion required by the *nova-scheduler*
service is ``1.17`` in order to support `Request Traits During Scheduling`_.
This means you must upgrade the placement service before upgrading any
*nova-scheduler* services to Queens.
Rocky (18.0.0)
~~~~~~~~~~~~~~
* The ``nova-api`` service now requires the ``[placement]`` section to be
  configured in nova.conf if you are using a separate config file just for
  that service. This is because the ``nova-api`` service now needs to talk
  to the placement service in order to (1) delete resource provider
  allocations when deleting an instance while the ``nova-compute`` service on
  which that instance is running is down, (2) delete a ``nova-compute``
  service record via the ``DELETE /os-services/{service_id}`` API, and (3)
  mirror aggregate host associations to the placement service. This change is
  idempotent if ``[placement]`` is not configured in ``nova-api``, but it
  will result in new warnings in the logs until it is configured.
* As described above, before Rocky, the placement service used the nova api
database to store placement data. In Rocky, if the ``connection`` setting in
a ``[placement_database]`` group is set in configuration, that group will be
used to describe where and how placement data is stored.
REST API
========
The placement API service has its own `REST API`_ and data model. One
can get a sample of the REST API via the functional test `gabbits`_.
.. _`REST API`: https://developer.openstack.org/api-ref/placement/
.. _gabbits: http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/api/openstack/placement/gabbits
Microversions
~~~~~~~~~~~~~
The placement API uses microversions for making incremental changes to the
API, which client requests must opt into.

It is especially important to keep in mind that nova-compute is a client of
the placement REST API and, because of how Nova supports rolling upgrades,
the nova-compute service could be Newton-level code making requests to an
Ocata placement API, or, vice versa, an Ocata compute service in a cells v2
cell could be making requests to a Newton placement API.
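
For illustration, a client opts into a particular microversion with the
``OpenStack-API-Version`` header; the endpoint URL and token below are
placeholders:

.. code-block:: console

   $ curl -s \
       -H "X-Auth-Token: $TOKEN" \
       -H "OpenStack-API-Version: placement 1.10" \
       "http://controller/placement/allocation_candidates?resources=VCPU:1,MEMORY_MB:512"
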
.. _placement-api-microversion-history:
.. include:: ../../../nova/api/openstack/placement/rest_api_version_history.rst

View File

@ -149,8 +149,6 @@ deps = -r{toxinidir}/doc/requirements.txt
commands =
rm -rf doc/build
sphinx-build -W -b html doc/source doc/build/html
# Test the redirects. This must run after the main docs build
whereto doc/build/html/.htaccess doc/test/redirect-tests.txt
{[testenv:api-ref]commands}
[testenv:api-ref]