Merge "Fix the ocata config-reference URLs" into stable/pike

This commit is contained in:
Zuul 2018-02-05 14:22:40 +00:00 committed by Gerrit Code Review
commit 5b965cd71b
13 changed files with 50 additions and 55 deletions


@@ -1189,7 +1189,7 @@ addresses:
in a state set in the ``hide_server_address_states`` configuration option.
By default, servers in ``building`` state hide their address information.
See ``nova.conf`` `configuration options
-<https://docs.openstack.org/ocata/config-reference/compute/config-options.html>`_
+<https://docs.openstack.org/nova/latest/configuration/config.html>`_
for more information.
in: body
required: true
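For orientation, setting this option in ``nova.conf`` might look like the following sketch; the ``[api]`` section name is an assumption for this branch (earlier releases kept the option in ``[DEFAULT]``):

.. code-block:: ini

   [api]
   # Assumption: the option lives in the [api] group on this branch;
   # older releases used [DEFAULT]. "building" matches the documented default.
   hide_server_address_states = building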


@@ -50,7 +50,7 @@ support across different hypervisors, see the `Feature Support Matrix
You can also orchestrate clouds using multiple hypervisors in different
availability zones. Compute supports the following hypervisors:
-- `Baremetal <https://wiki.openstack.org/wiki/Ironic>`__
+- `Baremetal <https://docs.openstack.org/ironic/latest/>`__
- `Docker <https://www.docker.io>`__
@@ -71,9 +71,9 @@ availability zones. Compute supports the following hypervisors:
- `Xen <http://www.xen.org/support/documentation.html>`__
-For more information about hypervisors, see the `Hypervisors
-<https://docs.openstack.org/ocata/config-reference/compute/hypervisors.html>`__
-section in the OpenStack Configuration Reference.
+For more information about hypervisors, see
+:doc:`/admin/configuration/hypervisors`
+section in the Nova Configuration Reference.
Projects, users, and roles
~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -175,7 +175,7 @@ ephemeral storage, depending on the flavor selected. In this case, the root
file system can be on the persistent volume, and its state is maintained, even
if the instance is shut down. For more information about this type of
configuration, see `Introduction to the Block Storage service
-<https://docs.openstack.org/ocata/config-reference/block-storage/block-storage-overview.html>`_
+<https://docs.openstack.org/cinder/latest/configuration/block-storage/block-storage-overview.html>`_
in the OpenStack Configuration Reference.
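As a hedged illustration of this configuration, an instance can be booted with its root file system on a Block Storage volume; the flavor, volume, and network names below are placeholders:

.. code-block:: console

   $ openstack server create --flavor m1.small --volume myvolume \
       --network mynetwork myserver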
.. note::


@@ -56,6 +56,8 @@ see the following distribution-specific documentation:
data/sec_vt_installation_kvm.html>`_ from the SUSE Linux Enterprise Server
``Virtualization Guide``.
+.. _enable-kvm:
+
Enable KVM
~~~~~~~~~~
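Before enabling KVM it is worth confirming that the kernel modules load; a minimal check, assuming Intel hardware (use ``kvm_amd`` on AMD):

.. code-block:: console

   # modprobe kvm_intel
   # lsmod | grep kvm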


@@ -352,10 +352,10 @@ Networking configuration
The Networking service on the Compute node runs
``neutron-openvswitch-agent``, which manages dom0's OVS. You can refer to the
-Networking `openvswitch_agent.ini.sample <https://github.com/openstack/
-openstack-manuals/blob/master/doc/config-reference/source/samples/neutron/
-openvswitch_agent.ini.sample>`_ for details, however there are several specific
-items to look out for.
+Networking `openvswitch_agent.ini sample`__ for details;
+however, there are several specific items to look out for.
+
+__ https://docs.openstack.org/neutron/latest/configuration/samples/openvswitch-agent.html
.. code-block:: ini


@@ -54,8 +54,7 @@ The migration types are:
- **Block live migration**, or simply block migration. The instance has
ephemeral disks that are not shared between the source and destination
hosts. Block migration is incompatible with read-only devices such as
-CD-ROMs and `Configuration Drive (config\_drive)
-<https://docs.openstack.org/user-guide/cli-config-drive.html>`_.
+CD-ROMs and Configuration Drive (config\_drive).
- **Volume-backed live migration**. Instances use volumes rather than
ephemeral disks.
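As a sketch, a block live migration can be requested through python-openstackclient; ``HOST`` and ``VM`` are placeholders:

.. code-block:: console

   $ openstack server migrate --live HOST --block-migration VM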
@@ -316,8 +315,7 @@ memory-intensive instances succeed.
.. but perhaps I am missing something.
The full list of live migration configuration parameters is documented in the
-`OpenStack Configuration Reference Guide
-<https://docs.openstack.org/ocata/config-reference/compute/config-options.html>`_
+:doc:`Nova Configuration Options </configuration/config>`
.. _configuring-migrations-xenserver:


@@ -100,7 +100,8 @@ By default, an instance floats across all NUMA nodes on a host. NUMA awareness
can be enabled implicitly through the use of huge pages or pinned CPUs or
explicitly through the use of flavor extra specs or image metadata. In all
cases, the ``NUMATopologyFilter`` filter must be enabled. Details on this
-filter are provided in `Scheduling`_ configuration guide.
+filter are provided in :doc:`/admin/configuration/schedulers` in the Nova
+configuration guide.
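Enabling the filter is a scheduler-side change in ``nova.conf``; a minimal sketch, assuming the Pike-era ``[filter_scheduler]`` group and an otherwise default filter list:

.. code-block:: ini

   [filter_scheduler]
   # Append NUMATopologyFilter to the filters already in use.
   enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ImagePropertiesFilter,NUMATopologyFilter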
.. caution::
@@ -162,7 +163,7 @@ memory mapping between the two nodes, run:
driver will not spawn instances with such topologies.
For more information about the syntax for ``hw:numa_nodes``, ``hw:numa_cpus.N``
-and ``hw:numa_mem.N``, refer to the `Flavors`_ guide.
+and ``hw:numa_mem.N``, refer to the :doc:`/admin/flavors` guide.
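For example, a flavor describing a two-node topology with explicit CPU and memory placement might be set as follows (``m1.large`` is a placeholder):

.. code-block:: console

   $ openstack flavor set m1.large \
       --property hw:numa_nodes=2 \
       --property hw:numa_cpus.0=0,1 --property hw:numa_mem.0=2048 \
       --property hw:numa_cpus.1=2,3 --property hw:numa_mem.1=2048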
Customizing instance CPU pinning policies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -227,7 +228,7 @@ siblings if available. This is the default, but it can be set explicitly:
--property hw:cpu_thread_policy=prefer
For more information about the syntax for ``hw:cpu_policy`` and
-``hw:cpu_thread_policy``, refer to the `Flavors`_ guide.
+``hw:cpu_thread_policy``, refer to the :doc:`/admin/flavors` guide.
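A dedicated-CPU flavor combining the two properties might look like this sketch:

.. code-block:: console

   $ openstack flavor set m1.large \
       --property hw:cpu_policy=dedicated \
       --property hw:cpu_thread_policy=isolate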
Applications are frequently packaged as images. For applications that require
real-time or near real-time behavior, configure image metadata to ensure
@@ -311,7 +312,7 @@ Similarly, to configure a flavor to use one core and one thread, run:
with ten cores fails.
For more information about the syntax for ``hw:cpu_sockets``, ``hw:cpu_cores``
-and ``hw:cpu_threads``, refer to the `Flavors`_ guide.
+and ``hw:cpu_threads``, refer to the :doc:`/admin/flavors` guide.
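For instance, a two-socket, two-core topology could be requested as follows (a sketch):

.. code-block:: console

   $ openstack flavor set m1.large \
       --property hw:cpu_sockets=2 --property hw:cpu_cores=2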
It is also possible to set upper limits on the number of sockets, cores, and
threads used. Unlike the hard values above, it is not necessary for this exact
@@ -325,8 +326,8 @@ instance topology, run:
$ openstack flavor set m1.large --property=hw:cpu_max_sockets=2
For more information about the syntax for ``hw:cpu_max_sockets``,
-``hw:cpu_max_cores``, and ``hw:cpu_max_threads``, refer to the `Flavors`_
-guide.
+``hw:cpu_max_cores``, and ``hw:cpu_max_threads``, refer to the
+:doc:`/admin/flavors` guide.
Applications are frequently packaged as images. For applications that prefer
certain CPU topologies, configure image metadata to hint that created instances
@@ -359,7 +360,5 @@ For more information about image metadata, refer to the `Image metadata`_
guide.
.. Links
-.. _`Scheduling`: https://docs.openstack.org/ocata/config-reference/compute/schedulers.html
-.. _`Flavors`: https://docs.openstack.org/admin-guide/compute-flavors.html
.. _`Image metadata`: https://docs.openstack.org/image-guide/image-metadata.html
.. _`discussion`: http://lists.openstack.org/pipermail/openstack-dev/2016-March/090367.html


@@ -504,9 +504,8 @@ PCI passthrough
Where:
- ALIAS: (string) The alias which corresponds to a particular PCI device class
-as configured in the nova configuration file (see `nova.conf configuration
-options
-<https://docs.openstack.org/ocata/config-reference/compute/config-options.html>`_).
+as configured in the nova configuration file (see
+:doc:`/configuration/config`).
- COUNT: (integer) The amount of PCI devices of type ALIAS to be assigned to
a guest.
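Combining ALIAS and COUNT, a flavor requesting two devices of alias ``a1`` would be set as in the PCI passthrough guide later in this change:

.. code-block:: console

   # openstack flavor set m1.large --property "pci_passthrough:alias"="a1:2"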


@@ -35,8 +35,8 @@ For more about the logging configuration syntax, including the ``handlers`` and
on logging configuration files.
For an example of the ``logging.conf`` file with various defined handlers, see
-the `OpenStack Configuration Reference
-<https://docs.openstack.org/ocata/config-reference/>`__.
+the `Example Configuration File for nova
+<https://docs.openstack.org/oslo.log/latest/admin/example_nova.html>`__.
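For orientation, a minimal ``logging.conf`` in the standard Python ``logging.config.fileConfig`` format might look like the following sketch (handler and formatter names are illustrative):

.. code-block:: ini

   [loggers]
   keys = root

   [handlers]
   keys = stderr

   [formatters]
   keys = simple

   [logger_root]
   level = INFO
   handlers = stderr

   [handler_stderr]
   class = StreamHandler
   args = (sys.stderr,)
   formatter = simple

   [formatter_simple]
   format = %(asctime)s %(levelname)s %(name)s: %(message)s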
Syslog
~~~~~~


@@ -191,9 +191,7 @@ configuration option:
dnsmasq_config_file=/etc/dnsmasq-nova.conf
For more information about creating a dnsmasq configuration file, see the
-`OpenStack Configuration Reference
-<https://docs.openstack.org/ocata/config-reference/>`__, and `the dnsmasq
-documentation
+:doc:`/configuration/config`, and `the dnsmasq documentation
<http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq.conf.example>`__.
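As a hedged example, ``/etc/dnsmasq-nova.conf`` could carry standard dnsmasq directives such as the following (values are illustrative):

.. code-block:: ini

   # Domain and DNS server handed to instances over DHCP.
   domain=example.org
   dhcp-option=option:dns-server,192.0.2.53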
Dnsmasq also acts as a caching DNS server for instances. You can specify the
@@ -311,6 +309,8 @@ command:
* - use_ipv6 = False
  - (BoolOpt) Use IPv6
+.. _metadata-service:
+
Metadata service
~~~~~~~~~~~~~~~~
@@ -564,7 +564,8 @@ Configure public (floating) IP addresses
This section describes how to configure floating IP addresses with
``nova-network``. For information about doing this with OpenStack Networking,
see `L3-routing-and-NAT
-<https://docs.openstack.org/admin-guide/networking-adv-features.html#l3-routing-and-nat>`_.
+<https://docs.openstack.org/neutron/latest/admin/archives/adv-features.html
+#l3-routing-and-nat>`_.
Private and public IP addresses
-------------------------------
@@ -706,9 +707,9 @@ perform floating IP operations:
# openstack floating ip delete CIDR
For more information about how administrators can associate floating IPs with
-instances, see `Manage IP addresses
-<https://docs.openstack.org/admin-guide/cli-admin-manage-ip-addresses.html>`__
-in the OpenStack Administrator Guide.
+instances, see `ip floating
+<https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/
+ip-floating.html>`__ in the python-openstackclient User Documentation.
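As a quick sketch with python-openstackclient (server name and address are placeholders):

.. code-block:: console

   $ openstack server add floating ip myserver 172.24.4.10
   $ openstack server remove floating ip myserver 172.24.4.10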
Automatically add floating IPs
------------------------------


@@ -60,8 +60,8 @@ Configure nova-api (Controller)
[pci]
alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1" }
-For more information about the syntax of ``alias``, refer to `nova.conf
-configuration options`_.
+For more information about the syntax of ``alias``, refer to
+:doc:`/configuration/config`.
#. Restart the ``nova-api`` service.
@@ -76,7 +76,7 @@ Configure a flavor to request two PCI devices, each with ``vendor_id`` of
# openstack flavor set m1.large --property "pci_passthrough:alias"="a1:2"
For more information about the syntax for ``pci_passthrough:alias``, refer to
-`flavor`_.
+:doc:`/admin/flavors`.
Enable PCI passthrough (Compute)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -106,7 +106,7 @@ Configure PCI devices (Compute)
the pool of PCI devices available for passthrough to VMs.
For more information about the syntax of ``passthrough_whitelist``,
-refer to `nova.conf configuration options`_.
+refer to :doc:`/configuration/config`.
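A minimal whitelist entry matching the alias above might look like this sketch; the vendor and product IDs must match the actual hardware:

.. code-block:: ini

   [pci]
   passthrough_whitelist = { "vendor_id": "8086", "product_id": "154d" }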
#. Specify the PCI alias for the device.
@@ -124,8 +124,7 @@ Configure PCI devices (Compute)
[pci]
alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1" }
-For more information about the syntax of ``alias``, refer to `nova.conf
-configuration options`_.
+For more information about the syntax of ``alias``, refer to :doc:`/configuration/config`.
#. Restart the ``nova-compute`` service.
@@ -141,8 +140,6 @@ available with the specified ``vendor_id`` and ``product_id`` that matches the
# openstack server create --flavor m1.large --image cirros-0.3.5-x86_64-uec --wait test-pci
.. Links
-.. _`Create Virtual Functions`: https://docs.openstack.org/ocata/networking-guide/config-sriov.html#create-virtual-functions-compute
-.. _`Configure nova-scheduler`: https://docs.openstack.org/ocata/networking-guide/config-sriov.html#configure-nova-scheduler-controller
-.. _`nova.conf configuration options`: https://docs.openstack.org/ocata/config-reference/compute/config-options.html
-.. _`flavor`: https://docs.openstack.org/admin-guide/compute-flavors.html
-.. _`Networking Guide`: https://docs.openstack.org/ocata/networking-guide/config-sriov.html
+.. _`Create Virtual Functions`: https://docs.openstack.org/neutron/latest/admin/config-sriov.html#create-virtual-functions-compute
+.. _`Configure nova-scheduler`: https://docs.openstack.org/neutron/latest/admin/config-sriov.html#configure-nova-scheduler-controller
+.. _`Networking Guide`: https://docs.openstack.org/neutron/latest/admin/config-sriov.html


@@ -6,6 +6,8 @@ To provide a remote console or remote desktop access to guest virtual machines,
use VNC or SPICE HTML5 through either the OpenStack dashboard or the command
line. Best practice is to select one or the other to run.
+.. _about-nova-consoleauth:
+
About nova-consoleauth
~~~~~~~~~~~~~~~~~~~~~~


@@ -273,9 +273,8 @@ Solution
On the KVM host, run :command:`cat /proc/cpuinfo`. Make sure the ``vmx`` or
``svm`` flags are set.
-Follow the instructions in the `Enable KVM
-<https://docs.openstack.org/ocata/config-reference/compute/hypervisor-kvm.html#enable-kvm>`__
-section in the OpenStack Configuration Reference to enable hardware
+Follow the instructions in the :ref:`enable-kvm`
+section in the Nova Configuration Reference to enable hardware
virtualization support in your BIOS.
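A quick way to check for the flags mentioned above; empty output means hardware virtualization is unsupported or disabled in the BIOS:

.. code-block:: console

   $ grep -E 'vmx|svm' /proc/cpuinfo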
Failed to attach volume after detaching


@@ -26,9 +26,8 @@ OpenStack Compute consists of the following areas and their components:
``nova-api-metadata`` service
Accepts metadata requests from instances. The ``nova-api-metadata`` service
is generally used when you run in multi-host mode with ``nova-network``
-installations. For details, see `Metadata service
-<https://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service>`__
-in the OpenStack Administrator Guide.
+installations. For details, see :ref:`metadata-service`
+in the Compute Administrator Guide.
``nova-compute`` service
A worker daemon that creates and terminates virtual machine instances through
@@ -57,16 +56,15 @@ OpenStack Compute consists of the following areas and their components:
It eliminates direct access to the cloud database by the
``nova-compute`` service. The ``nova-conductor`` module scales horizontally.
However, do not deploy it on nodes where the ``nova-compute`` service runs.
-For more information, see `Configuration Reference Guide
-<https://docs.openstack.org/ocata/config-reference/compute/config-options.html#nova-conductor>`__.
+For more information, see the ``conductor`` section in the
+:doc:`/configuration/config`.
``nova-consoleauth`` daemon
Authorizes tokens for users that console proxies provide. See
``nova-novncproxy`` and ``nova-xvpvncproxy``. This service must be running
for console proxies to work. You can run proxies of either type against a
single nova-consoleauth service in a cluster configuration. For information,
-see `About nova-consoleauth
-<https://docs.openstack.org/admin-guide/compute-remote-console-access.html#about-nova-consoleauth>`__.
+see :ref:`about-nova-consoleauth`.
``nova-novncproxy`` daemon
Provides a proxy for accessing running instances through a VNC connection.