Fix formatting errors and warnings

Fix errors and warnings when running 'tox -e docs'.

TrivialFix
Change-Id: I0caa170ccfba54231de3c8337f87450ff7fe98cc
Co-Authored-By: Stephen Finucane <sfinucan@redhat.com>
Author: Takashi NATSUME
Date: 2016-12-06 14:22:56 +09:00
Committed-by: Stephen Finucane
Parent: f164026b64
Commit: f0aaee6b32

17 changed files with 72 additions and 58 deletions


@@ -60,20 +60,20 @@ Request:
POST
-JSON format:
+JSON format::

-{
+  {
    "os-getConsoleOutput": {
-        "lines": 2,
-        "offset": 10
-    }
-}
+      "lines": 2,
+      "offset": 10
+    }
+  }

-Response:
+Response::

-{
+  {
    "output": "ANOTHER\nLAST LINE"
-}
+  }
Security impact
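
For reference, the change above applies the reStructuredText literal-block rule: a line ending in a double colon (`::`) opens a literal block, and the block itself must be indented and set off by blank lines. A minimal sketch of the pattern (illustrative, not copied from the spec):

  Request body::

    {
      "lines": 2,
      "offset": 10
    }

With a single colon the braces are parsed as ordinary body text, which is what produced the Sphinx warnings fixed here.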


@@ -84,14 +84,15 @@ for cases which fail for trivial reasons like lack of CPU/RAM/disk. We propose
that this option default to ['CoreFilter', 'RamFilter', 'DiskFilter',
'ComputeFilter']. This option would apply regardless of which way the
"log only on failure" config option is set, for several reasons:
-* When logging only on NoValidHost we would still need to do work to store
-the logs for the success path, so this would give a way to reduce the
-overhead and also reduce the amount of logging on a scheduler failure.
-* If the operator decides they don't need logging from particular filters
-then it likely doesn't matter whether we only log on failure.
-* If an operator does want to change the suppressed filters when changing
-the "sched_detailed_log_only_on_failure" config option, then they have
-the option of doing so.
+
+* When logging only on NoValidHost we would still need to do work to store
+  the logs for the success path, so this would give a way to reduce the
+  overhead and also reduce the amount of logging on a scheduler failure.
+* If the operator decides they don't need logging from particular filters
+  then it likely doesn't matter whether we only log on failure.
+* If an operator does want to change the suppressed filters when changing
+  the "sched_detailed_log_only_on_failure" config option, then they have
+  the option of doing so.
Alternatives
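
The pattern here: docutils requires a blank line before a bullet list, and wrapped lines inside an item must be indented to align with the item's text, otherwise it emits errors such as "Bullet list ends without a blank line" or "Unexpected indentation". A correctly formed list looks like:

  This option exists for several reasons:

  * First reason, whose continuation line
    is indented two spaces to align with the text.
  * Second reason.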


@@ -119,6 +119,7 @@ In case of success following workflow will happen:
* post_live_migration_at_destination - non-blocking rpc cast from conductor to
  destination compute

In case of failure:
+
* rollback_live_migration_at_source - non-blocking rpc cast from conductor to


@@ -55,7 +55,7 @@ Methods related to keypairs that are currently in the database API will be
moved to the ``KeyPair`` object.
Migration to the API database will follow the existing pattern established
-by the merged flavor migration series. [1]_
+by the merged flavor migration series.
The metadata service currently reads the ``key_pairs`` table directly. We
would like to prevent this once the table has been moved to the API database.
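
Context for the removal of "[1]_": in reStructuredText this syntax is a citation reference, which must resolve to a citation target somewhere in the document or Sphinx reports an unknown target. A resolvable pair would look like this (hypothetical target shown for illustration):

  ...follow the pattern established by the flavor migration series. [1]_

  .. [1] Description of, or URL for, the flavor migration series.

Dropping the dangling reference, as done here, is the other way to silence the warning.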


@@ -176,5 +176,7 @@ History
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
+   * - Newton
+     - Introduced
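
The list-table fixes in this change follow the directive's structural requirements: options such as :header-rows: and the rows themselves must be indented under the directive, a blank line must separate the options from the rows, and with :header-rows: 1 the table needs at least one body row beyond the header or the directive errors out. A complete minimal revisions table:

  .. list-table:: Revisions
     :header-rows: 1

     * - Release Name
       - Description
     * - Newton
       - Introduced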


@@ -300,5 +300,7 @@ History
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
+   * - Newton
+     - Introduced


@@ -552,7 +552,7 @@ The returned HTTP response code will be one of the following:
will succeed but the `inventories` key will have an empty dict as its value.
`POST /resource_providers/{uuid}/inventories`
-********************************************
+*********************************************
Create a new inventory for the resource provider identified by `{uuid}`.
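
The underline change fixes the docutils rule that a section title's underline must be at least as long as the title text; here a 44-character underline sat below a 45-character heading, triggering a "Title underline too short" warning. Correct form:

  `POST /resource_providers/{uuid}/inventories`
  *********************************************
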
@@ -1067,13 +1067,13 @@ functionality:
* `openstack resource-provider update $UUID --name="New name"`
* `openstack resource-provider list inventory $UUID`
* `openstack resource-provider set inventory $UUID \
-    --resource-class=DISK_GB \
-    --total=1024 \
-    --reserved=450 \
-    --min-unit=1 \
-    --max-unit=1 \
-    --step-size=1 \
-    --allocation-ratio=1.0`
+  --resource-class=DISK_GB \
+  --total=1024 \
+  --reserved=450 \
+  --min-unit=1 \
+  --max-unit=1 \
+  --step-size=1 \
+  --allocation-ratio=1.0`
* `openstack resource-provider delete inventory $UUID \
  --resource-class=DISK_GB`
* `openstack resource-provider add aggregate $UUID $AGG_UUID`
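
The re-indentation above addresses how docutils parses wrapped bullet items: a continuation line indented deeper than the line that starts the item's paragraph produces an "Unexpected indentation" error, while aligning it with the text after the bullet keeps it part of the same paragraph:

  * `openstack resource-provider set inventory $UUID \
    --resource-class=DISK_GB \
    --total=1024`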


@@ -209,4 +209,4 @@ History
     - Introduced but no changes merged.
   * - Newton
     - Re-proposed.
-     - Completely re-written to use a hash ring.
+       Completely re-written to use a hash ring.
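
In a list-table, each row must contain exactly one "-" entry per column; the stray leading "-" removed above made docutils see a third cell in a two-column table. A sentence that belongs to an existing cell is written as an aligned continuation line instead:

  * - Newton
    - Re-proposed.
      Completely re-written to use a hash ring.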


@@ -191,5 +191,5 @@ History
   * - Mitaka
     - Introduced
   * - Newton
-     - Re-proposed
-     - Removed portgroups support
+     - Re-proposed.
+       Removed portgroups support.


@@ -44,9 +44,9 @@ bridge qbr for each VIF and make qbr be connected to integration bridge
(e.g. br-int) in compute node. So the connection in compute node will looks
like:
-VIF-1 -> LinuxBridge(qbr-1) ->
-                               VM OvsBridge(br-int) -> OvsBridge(br-eth)
-VIF-2 -> LinuxBridge(qbr-2) ->
+| VIF-1 -> LinuxBridge(qbr-1) ->
+|                                VM OvsBridge(br-int) -> OvsBridge(br-eth)
+| VIF-2 -> LinuxBridge(qbr-2) ->
So, with the new added Linux bridge qbr, at neutron side, it can detect these
bridges qbr-XXX automatically and apply security group rules on each of the
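
Prefixing each line with "|" turns the diagram into a reStructuredText line block, which preserves the line breaks and internal spacing that a normal paragraph would re-flow. The general form:

  | first line, kept exactly as written
  |     spacing inside a line is preserved too
  | third line
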
@@ -98,12 +98,16 @@ Other deployer impact
This implementation is to support neutron security group function with XenSerer
just like other hypervisor does. The main deployment changes if you want to use
this function are:
+
1. Deploy neutron in OpenStack environment
-2. Change nova.conf, below configuration items should be specified
+2. Change nova.conf, below configuration items should be specified::
+
     [DEFAULT]
     use_neutron = True
     firewall_driver = nova.virt.firewall.NoopFirewallDriver
+
-3. Change neutron config file ml2_conf.ini
+3. Change neutron config file ml2_conf.ini::
+
     [securitygroup]
     firewall_driver = \
     neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
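
Inside an enumerated list item the same literal-block rule applies: after the "::" the configuration lines must be indented further than the item's text and separated from it by a blank line in order to render verbatim. The shape of item 2, abbreviated:

  2. Change nova.conf, below configuration items should be specified::

       [DEFAULT]
       use_neutron = True
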
@@ -142,15 +146,15 @@ Testing
=======
* Scenario test will be done manually or automatically with tempest.
-When it is implemented, we can deploy an environment using neutron VLAN
-network, enable neutron security group and set the correct firewall_driver
-in neutron's ml2_conf.ini file in compute node.
+  When it is implemented, we can deploy an environment using neutron VLAN
+  network, enable neutron security group and set the correct firewall_driver
+  in neutron's ml2_conf.ini file in compute node.

* XenServer Neutron CI will also be updated to test security groups though
-existing tempest tests. When the code patchset is ready, we will change some
-configurations as mentioned above and start full tempest to check the function
-and make sure there is no negative impact. The test report will be accessible
-publicly.
+  existing tempest tests. When the code patchset is ready, we will change some
+  configurations as mentioned above and start full tempest to check the
+  function and make sure there is no negative impact. The test report will be
+  accessible publicly.
Documentation Impact
====================


@@ -121,7 +121,7 @@ file called vendor_data.json.
For DynamicJSON, the results of calls to the microservice will be placed into
a new JSON file in the metadata, called vendor_data2.json. This file contains
-a a dictionary which is a series of dictionaries. For example:
+a dictionary which is a series of dictionaries. For example::

  {
    "static": {
@@ -139,7 +139,7 @@ flag. This flag is composed of entries of the form:
name@http://example.com/foo
The name element for this microservice becomes the a new key for the
-vendor_data2.json file, like this:
+vendor_data2.json file, like this::

  {
    "name": {