Merge "Fix warnings in the document generation"

This commit is contained in:
Zuul 2019-03-25 14:28:14 +00:00 committed by Gerrit Code Review
commit ee6f94fecc
5 changed files with 51 additions and 53 deletions

View File

@@ -97,10 +97,10 @@ Resource claims
Let's address the resource claims aspect first. An effort has begun to support
NUMA resource providers in placement [3]_ and to standardize CPU resource
-tracking [8]_. However, placement can only track inventories and allocations of
+tracking [4]_. However, placement can only track inventories and allocations of
quantities of resources. It does not track which specific resources are used.
Specificity is needed for NUMA live migration. Consider an instance that uses
-4 dedicated CPUs in a future where the standard CPU resource tracking spec [8]_
+4 dedicated CPUs in a future where the standard CPU resource tracking spec [4]_
has been implemented. During live migration, the scheduler claims those 4 CPUs
in placement on the destination. However, we need to prevent other instances
from using those specific CPUs. Therefore, in addition to claiming quantities
@@ -394,7 +394,7 @@ Primary assignee:
Work Items
----------
-* Fail live migration of instances with NUMA topology [9]_ until this spec is
+* Fail live migration of instances with NUMA topology [5]_ until this spec is
fully implemented.
* Add NUMA Nova objects
* Add claim context to live migration
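The first work item in the list above can be sketched as a pre-check that simply refuses the operation. This is a hedged, illustrative sketch: the exception class and function name are assumptions for illustration, not the actual nova code.

```python
# Hedged sketch of the "fail live migration" work item above.
# MigrationPreCheckError and the function name are assumptions, not
# taken from the nova tree.
class MigrationPreCheckError(Exception):
    pass


def check_can_live_migrate(instance_numa_topology):
    """Reject live migration while the instance has a NUMA topology."""
    if instance_numa_topology is not None:
        raise MigrationPreCheckError(
            "Live migration of instances with a NUMA topology is not "
            "supported until this spec is fully implemented")
```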
@@ -411,13 +411,13 @@ Testing
=======
The libvirt/qemu driver used in the gate does not currently support NUMA
-features (though work is in progress [4]_). Therefore, testing NUMA aware
+features (though work is in progress [6]_). Therefore, testing NUMA aware
live migration in the upstream gate would require nested virt. In addition, the
only assertable outcome of a NUMA live migration test (if it ever becomes
possible) would be that the live migration succeeded. Examining the instance
XML to assert things about its NUMA affinity or CPU pin mapping is explicitly
out of tempest's scope. For these reasons, NUMA aware live migration is best
-tested in third party CI [5]_ or other downstream test scenarios [6]_.
+tested in third party CI [7]_ or other downstream test scenarios [8]_.
Documentation Impact
====================
@@ -432,12 +432,13 @@ References
.. [1] https://bugs.launchpad.net/nova/+bug/1496135
.. [2] https://bugs.launchpad.net/nova/+bug/1607996
.. [3] https://review.openstack.org/#/c/552924/
-.. [4] https://review.openstack.org/#/c/533077/
-.. [5] https://github.com/openstack/intel-nfv-ci-tests
-.. [6] https://review.rdoproject.org/r/gitweb?p=openstack/whitebox-tempest-plugin.git
-.. [7] https://review.openstack.org/#/c/244489/
-.. [8] https://review.openstack.org/#/c/555081/
-.. [9] https://review.openstack.org/#/c/611088/
+.. [4] https://review.openstack.org/#/c/555081/
+.. [5] https://review.openstack.org/#/c/611088/
+.. [6] https://review.openstack.org/#/c/533077/
+.. [7] https://github.com/openstack/intel-nfv-ci-tests
+.. [8] https://review.rdoproject.org/r/gitweb?p=openstack/whitebox-tempest-plugin.git
+.. [9] https://review.openstack.org/#/c/244489/
History
=======

View File

@@ -8,7 +8,7 @@
Show server numa topology
=========================
-Add NUMA into new sub-resource``GET /servers/{server_id}/topology`` API.
+Add NUMA into new sub-resource ``GET /servers/{server_id}/topology`` API.
https://blueprints.launchpad.net/nova/+spec/show-server-numa-topology
@@ -92,9 +92,7 @@ REST API impact
API ``GET /servers/{server_id}/topology`` will show NUMA information with
a new microversion.
-The returned information for NUMA topology:
-.. code-block:: json
+The returned information for NUMA topology::
{
# overall policy: TOPOLOGY % 'index
@@ -132,31 +130,33 @@ Security impact
* Add new ``topology`` policy, admin only by default:
-TOPOLOGY = 'os_compute_api:servers:topology:%s'
-.. code-block:: python
-    server_topology_policies = [
-        policy.DocumentedRuleDefault(
-            BASE_POLICY_NAME,
-            base.RULE_ADMIN_API,
-            "Show the topology data for a server",
-            [
-                {
-                    'method': 'GET',
-                    'path': '/servers/{server_id}/topology'
-                }
-            ]),
-        policy.DocumentedRuleDefault(
-            # control host numa node and cpu pin information
-            TOPOLOGY % 'index:host_info',
-            base.RULE_ADMIN_API,
-            "List all servers with detailed information",
-            [
-                {
-                    'method': 'GET',
-                    'path': '/servers/{server_id}/topology'
-                }
-            ]),
-    ]
+    TOPOLOGY = 'os_compute_api:servers:topology:%s'
+    server_topology_policies = [
+        policy.DocumentedRuleDefault(
+            BASE_POLICY_NAME,
+            base.RULE_ADMIN_API,
+            "Show the topology data for a server",
+            [
+                {
+                    'method': 'GET',
+                    'path': '/servers/{server_id}/topology'
+                }
+            ]),
+        policy.DocumentedRuleDefault(
+            # control host numa node and cpu pin information
+            TOPOLOGY % 'index:host_info',
+            base.RULE_ADMIN_API,
+            "List all servers with detailed information",
+            [
+                {
+                    'method': 'GET',
+                    'path': '/servers/{server_id}/topology'
+                }
+            ]),
+    ]
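The parameterized policy name in the rules above is plain Python string formatting. A small sketch for context: the ``TOPOLOGY`` template is quoted from the spec, while the helper function is invented for illustration.

```python
# The policy-name template quoted in the spec.
TOPOLOGY = 'os_compute_api:servers:topology:%s'


def policy_name(action):
    # Expand the %s placeholder for a concrete action.
    # (Helper is illustrative, not part of the spec.)
    return TOPOLOGY % action


print(policy_name('index:host_info'))
# → os_compute_api:servers:topology:index:host_info
```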
Notifications impact

View File

@@ -51,7 +51,7 @@ Examples of validations to be added [1]_:
* Validate the realtime mask.
* Validate the number of serial ports.
* Validate the cpu topology constraints.
-* Validate the ``quota:*``settings (that are not virt driver specific) in the
+* Validate the ``quota:*`` settings (that are not virt driver specific) in the
flavor.
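The validations listed above can be sketched as small checks on a flavor's extra specs. This is a hedged sketch for one item, the serial-port count: the ``hw:serial_port_count`` key exists in nova, but the validator itself is invented and is not the proposed implementation.

```python
# Illustrative only: validate the number of serial ports requested in a
# flavor's extra specs. The hw:serial_port_count key is a real nova
# extra spec; this validator is a sketch, not the actual nova code.
def validate_serial_port_count(extra_specs):
    value = extra_specs.get('hw:serial_port_count', '1')
    if not value.isdigit() or int(value) < 1:
        raise ValueError(
            'hw:serial_port_count must be a positive integer, got %r'
            % value)
    return int(value)
```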
Alternatives

View File

@@ -17,7 +17,7 @@ Problem description
Currently you had to loop over all groups to find the group the server
belongs to. This spec tries to address this by proposing showing the server
-group information in API `GET /servers/{server_id}`.
+group information in API ``GET /servers/{server_id}``.
Use Cases
---------
@@ -41,11 +41,11 @@ needs another DB query.
Alternatives
------------
-* One alternative is support the server groups filter by server UUID. Like
-  "GET /os-server-groups?server=<UUID>".
+* One alternative is support the server groups filter by server UUID. Like
+  ``GET /os-server-groups?server=<UUID>``.
* Another alternative to support the server group query is following API:
-  "GET /servers/{server_id}/server_groups".
+  ``GET /servers/{server_id}/server_groups``.
Data model impact
-----------------
@@ -57,11 +57,9 @@ REST API impact
---------------
-Allows the `GET /servers/{server_id}` API to show server group's UUID.
-"PUT /servers/{server_id}" and REBUILD API "POST /servers/{server_id}/action"
-also response same information.
-.. highlight:: json
+Allows the ``GET /servers/{server_id}`` API to show server group's UUID.
+``PUT /servers/{server_id}`` and REBUILD API
+``POST /servers/{server_id}/action`` also response same information.
The returned information for server group::
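A hedged sketch of the response fragment this introduces: the server body carries the UUID of the server group the instance belongs to. All field values and the exact key layout here are assumptions for illustration, not the spec's literal example.

```python
# Assumed shape only: a server body extended with its server group
# UUID(s). UUID values are hypothetical placeholders.
response = {
    "server": {
        "id": "11111111-2222-3333-4444-555555555555",            # hypothetical
        "server_groups": ["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"],  # hypothetical
    }
}
```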
@@ -94,7 +92,7 @@ Performance Impact
------------------
* Need another DB query retrieve the server group UUID. To reduce the
-  perfermance impact for batch API call, "GET /servers/detail" won't
+  perfermance impact for batch API call, ``GET /servers/detail`` won't
return server group information.
Other deployer impact
@@ -147,7 +145,7 @@ Documentation Impact
References
==========
-* Stein PTG discussion:https://etherpad.openstack.org/p/nova-ptg-stein
+* Stein PTG discussion: https://etherpad.openstack.org/p/nova-ptg-stein
History
@@ -158,7 +156,6 @@ History
* - Release Name
- Version
* - Stein
- First Version

View File

@@ -14,7 +14,7 @@ whitelist_externals = find
commands = {posargs}
[testenv:docs]
-commands = sphinx-build -b html doc/source doc/build/html
+commands = sphinx-build -W -b html doc/source doc/build/html
[testenv:pep8]
deps =