Following the new PTI for document build

For compliance with the Project Testing Interface [1], as described in [2].

[1]
https://governance.openstack.org/tc/reference/project-testing-interface.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125710.html

The doc8 command is dropped from the docs tox env and now runs in the
pep8 env instead, so doc8 checking itself is unaffected.
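
Under the new PTI, documentation jobs install only the docs requirements
and invoke sphinx-build directly rather than going through
"python setup.py build_sphinx". As a rough sketch of the resulting docs
environment (the authoritative version is the tox.ini hunk below):

    [testenv:docs]
    deps =
        -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
        -r{toxinidir}/requirements.txt
        -r{toxinidir}/doc/requirements.txt
    commands =
        rm -rf doc/build
        sphinx-build -W -b html doc/source doc/build/html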

Related-Bug: #1765348

Depends-On: Icc7fe3a8f9716281de88825e9d5b2fd84de3d00a
Change-Id: Idf9a16111479ccc64004eac9508da575822a3df5
confi-surya 2018-04-06 18:28:51 +09:00 committed by Mark Goddard
parent 5c1f0226d3
commit dbf754655f
28 changed files with 144 additions and 112 deletions

doc/requirements.txt (new file)
View File

@@ -0,0 +1,6 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+openstackdocstheme>=1.18.1 # Apache-2.0
+reno>=2.5.0 # Apache-2.0
+sphinx!=1.6.6,!=1.6.7,>=1.6.2 # BSD
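
These pinned documentation dependencies are consumed by the docs,
deploy-guide, and releasenotes tox environments below. For a one-off local
build outside tox, something like the following should work (a sketch,
assuming a kolla-ansible checkout; tox additionally applies the
upper-constraints file):

    pip install -r doc/requirements.txt
    sphinx-build -W -b html doc/source doc/build/html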

View File

@@ -26,7 +26,7 @@ For the combined option, set the two variables below, while allowing the
 other two to accept their default values. In this configuration all REST
 API requests, internal and external, will flow over the same network.
-.. code-block:: none
+.. code-block:: yaml
 kolla_internal_vip_address: "10.10.10.254"
 network_interface: "eth0"
@@ -37,7 +37,7 @@ For the separate option, set these four variables. In this configuration
 the internal and external REST API requests can flow over separate
 networks.
-.. code-block:: none
+.. code-block:: yaml
 kolla_internal_vip_address: "10.10.10.254"
 network_interface: "eth0"
@@ -57,7 +57,7 @@ in your kolla deployment use the variables:
 - kolla_internal_fqdn
 - kolla_external_fqdn
-.. code-block:: none
+.. code-block:: yaml
 kolla_internal_fqdn: inside.mykolla.example.net
 kolla_external_fqdn: mykolla.example.net
@@ -95,7 +95,7 @@ The configuration variables that control TLS networking are:
 The default for TLS is disabled, to enable TLS networking:
-.. code-block:: none
+.. code-block:: yaml
 kolla_enable_tls_external: "yes"
 kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/mycert.pem"
@@ -176,7 +176,7 @@ OpenStack Service Configuration in Kolla
 An operator can change the location where custom config files are read from by
 editing ``/etc/kolla/globals.yml`` and adding the following line.
-.. code-block:: none
+.. code-block:: yaml
 # The directory to merge custom config files the kolla's config files
 node_custom_config: "/etc/kolla/config"
@@ -253,7 +253,7 @@ If a development environment doesn't have a free IP address available for VIP
 configuration, the host's IP address may be used here by disabling HAProxy by
 adding:
-.. code-block:: none
+.. code-block:: yaml
 enable_haproxy: "no"
@@ -269,7 +269,7 @@ External Elasticsearch/Kibana environment
 It is possible to use an external Elasticsearch/Kibana environment. To do this
 first disable the deployment of the central logging.
-.. code-block:: none
+.. code-block:: yaml
 enable_central_logging: "no"
@@ -285,7 +285,7 @@ It is sometimes required to use a different than default port
 for service(s) in Kolla. It is possible with setting
 ``<service>_port`` in ``globals.yml`` file. For example:
-.. code-block:: none
+.. code-block:: yaml
 database_port: 3307
@@ -301,7 +301,7 @@ By default, Fluentd is used as a syslog server to collect Swift and HAProxy
 logs. When Fluentd is disabled or you want to use an external syslog server,
 You can set syslog parameters in ``globals.yml`` file. For example:
-.. code-block:: none
+.. code-block:: yaml
 syslog_server: "172.29.9.145"
 syslog_udp_port: "514"
@@ -311,7 +311,7 @@ You can set syslog parameters in ``globals.yml`` file. For example:
 You can also set syslog facility names for Swift and HAProxy logs.
 By default, Swift and HAProxy use ``local0`` and ``local1``, respectively.
-.. code-block:: none
+.. code-block:: yaml
 syslog_swift_facility: "local0"
 syslog_haproxy_facility: "local1"

View File

@@ -87,7 +87,7 @@ that Kolla uses throughout that should be followed.
 content:
 .. path ansible/roles/common/templates/cron-logrotate-PROJECT.conf.j2
-.. code-block:: none
+.. code-block:: console
 "/var/log/kolla/PROJECT/*.log"
 {

View File

@@ -26,7 +26,7 @@ To enable dev mode for all supported services, set in
 ``/etc/kolla/globals.yml``:
 .. path /etc/kolla/globals.yml
-.. code-block:: none
+.. code-block:: yaml
 kolla_dev_mode: true
@@ -35,7 +35,7 @@ To enable dev mode for all supported services, set in
 To enable it just for heat, set:
 .. path /etc/kolla/globals.yml
-.. code-block:: none
+.. code-block:: yaml
 heat_dev_mode: true
@@ -70,7 +70,7 @@ make sure it is installed in the container in question:
 Then, set your breakpoint as follows:
-.. code-block:: none
+.. code-block:: python
 from remote_pdb import RemotePdb
 RemotePdb('127.0.0.1', 4444).set_trace()

View File

@@ -91,7 +91,7 @@ resolving the deployment host's hostname to ``127.0.0.1``, for example:
 The following lines are desirable for IPv6 capable hosts:
-.. code-block:: none
+.. code-block:: console
 ::1 ip6-localhost ip6-loopback
 fe00::0 ip6-localnet
@@ -109,12 +109,14 @@ Build a Bifrost Container Image
 This section provides instructions on how to build a container image for
 bifrost using kolla.
-Currently kolla only supports the ``source`` install type for the bifrost image.
+Currently kolla only supports the ``source`` install type for the
+bifrost image.
 #. To generate kolla-build.conf configuration File
-* If required, generate a default configuration file for :command:`kolla-build`:
+* If required, generate a default configuration file for
+  :command:`kolla-build`:
 .. code-block:: console

View File

@@ -95,7 +95,7 @@ In this output, look for the key ``X-Compute-Request-Id``. This is a unique
 identifier that can be used to track the request through the system. An
 example ID looks like this:
-.. code-block:: none
+.. code-block:: console
 X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5

View File

@@ -99,10 +99,10 @@ To prepare the journal external drive execute the following command:
 Configuration
 ~~~~~~~~~~~~~
-Edit the ``[storage]`` group in the inventory which contains the hostname of the
-hosts that have the block devices you have prepped as shown above.
+Edit the ``[storage]`` group in the inventory which contains the hostname
+of the hosts that have the block devices you have prepped as shown above.
-.. code-block:: none
+.. code-block:: ini
 [storage]
 controller
@@ -340,7 +340,7 @@ implement caching.
 Here is the top part of the multinode inventory file used in the example
 environment before adding the 3rd node for Ceph:
-.. code-block:: none
+.. code-block:: ini
 [control]
 # These hostname must be resolvable from your deployment host
@@ -384,7 +384,7 @@ Next, edit the multinode inventory file and make sure the 3 nodes are listed
 under ``[storage]``. In this example I will add kolla3.ducourrier.com to the
 existing inventory file:
-.. code-block:: none
+.. code-block:: ini
 [control]
 # These hostname must be resolvable from your deployment host

View File

@@ -38,7 +38,7 @@ During development, it may be desirable to use file backed block storage. It
 is possible to use a file and mount it as a block device via the loopback
 system.
-.. code-block:: none
+.. code-block:: console
 free_device=$(losetup -f)
 fallocate -l 20G /var/lib/cinder_data.img
@@ -67,7 +67,7 @@ NFS
 To use the ``nfs`` backend, configure ``/etc/exports`` to contain the mount
 where the volumes are to be stored:
-.. code-block:: none
+.. code-block:: console
 /kolla_nfs 192.168.5.0/24(rw,sync,no_root_squash)
@@ -89,7 +89,7 @@ Then start ``nfsd``:
 On the deploy node, create ``/etc/kolla/config/nfs_shares`` with an entry for
 each storage node:
-.. code-block:: none
+.. code-block:: console
 storage01:/kolla_nfs
 storage02:/kolla_nfs

View File

@@ -103,7 +103,7 @@ Ceph) into the same directory, for example:
 .. end
-.. code-block:: none
+.. code-block:: console
 $ cat /etc/kolla/config/glance/ceph.client.glance.keyring

View File

@@ -183,8 +183,9 @@ all you need to do is the following steps:
 .. end
-#. Set the common password for all components within ``/etc/kolla/passwords.yml``.
-   In order to achieve that you could use the following command:
+#. Set the common password for all components within
+   ``/etc/kolla/passwords.yml``. In order to achieve that you
+   could use the following command:
 .. code-block:: console

View File

@@ -116,7 +116,7 @@ be found on `Cloudbase website
 Add the Hyper-V node in ``ansible/inventory`` file:
-.. code-block:: none
+.. code-block:: ini
 [hyperv]
 <HyperV IP>

View File

@@ -18,7 +18,7 @@ Preparation and Deployment
 To allow Docker daemon connect to the etcd, add the following in the
 ``docker.service`` file.
-.. code-block:: none
+.. code-block:: ini
 ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375

View File

@@ -369,7 +369,8 @@ Use the manila migration command, as shown in the following example:
 Checking share migration progress
 ---------------------------------
-Use the :command:`manila migration-get-progress shareID` command to check progress.
+Use the :command:`manila migration-get-progress shareID` command to
+check progress.
 .. code-block:: console

View File

@@ -360,4 +360,4 @@ For more information about how to manage shares, see the
 For more information about how HNAS driver works, see
 `Hitachi NAS Platform File Services Driver for OpenStack
 <https://docs.openstack.org/manila/latest/admin/hitachi_hnas_driver.html>`__.

View File

@@ -4,9 +4,9 @@
 Networking in Kolla
 ===================
-Kolla deploys Neutron by default as OpenStack networking component. This section
-describes configuring and running Neutron extensions like LBaaS, Networking-SFC,
-QoS, and so on.
+Kolla deploys Neutron by default as OpenStack networking component.
+This section describes configuring and running Neutron extensions like
+LBaaS, Networking-SFC, QoS, and so on.
 Enabling Provider Networks
 ==========================
@@ -218,7 +218,7 @@ it is advised to allocate them via the kernel command line instead to prevent
 memory fragmentation. This can be achieved by adding the following to the grub
 config and regenerating your grub file.
-.. code-block:: none
+.. code-block:: console
 default_hugepagesz=2M hugepagesz=2M hugepages=25000
@@ -233,16 +233,17 @@ While it is technically possible to use all 3 only ``uio_pci_generic`` and
 and distributed as part of the dpdk library. While it has some advantages over
 ``uio_pci_generic`` loading the ``igb_uio`` module will taint the kernel and
 possibly invalidate distro support. To successfully deploy ``ovs-dpdk``,
-``vfio_pci`` or ``uio_pci_generic`` kernel module must be present on the platform.
-Most distros include ``vfio_pci`` or ``uio_pci_generic`` as part of the default
-kernel though on some distros you may need to install ``kernel-modules-extra`` or
-the distro equivalent prior to running :command:`kolla-ansible deploy`.
+``vfio_pci`` or ``uio_pci_generic`` kernel module must be present on the
+platform. Most distros include ``vfio_pci`` or ``uio_pci_generic`` as part of
+the default kernel though on some distros you may need to install
+``kernel-modules-extra`` or the distro equivalent prior to running
+:command:`kolla-ansible deploy`.
 Installation
 ------------
-To enable ovs-dpdk, add the following configuration to ``/etc/kolla/globals.yml``
-file:
+To enable ovs-dpdk, add the following configuration to
+``/etc/kolla/globals.yml`` file:
 .. code-block:: yaml
@@ -308,9 +309,10 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
 .. end
-Modify the ``/etc/kolla/config/neutron/ml2_conf.ini`` file and add ``sriovnicswitch``
-to the ``mechanism_drivers``. Also, the provider networks used by SRIOV should be configured.
-Both flat and VLAN are configured with the same physical network name in this example:
+Modify the ``/etc/kolla/config/neutron/ml2_conf.ini`` file and add
+``sriovnicswitch`` to the ``mechanism_drivers``. Also, the provider
+networks used by SRIOV should be configured. Both flat and VLAN are configured
+with the same physical network name in this example:
 .. path /etc/kolla/config/neutron/ml2_conf.ini
 .. code-block:: ini
@@ -331,9 +333,9 @@ Add ``PciPassthroughFilter`` to scheduler_default_filters
 The ``PciPassthroughFilter``, which is required by Nova Scheduler service
 on the Controller, should be added to ``scheduler_default_filters``
-Modify the ``/etc/kolla/config/nova.conf`` file and add ``PciPassthroughFilter``
-to ``scheduler_default_filters``. this filter is required by The Nova Scheduler
-service on the controller node.
+Modify the ``/etc/kolla/config/nova.conf`` file and add
+``PciPassthroughFilter`` to ``scheduler_default_filters``. this filter is
+required by The Nova Scheduler service on the controller node.
 .. path /etc/kolla/config/nova.conf
 .. code-block:: ini
@@ -489,12 +491,12 @@ so in environments that have NICs with multiple ports configured for SRIOV,
 it is impossible to specify a specific NIC port to pull VFs from.
 Modify the file ``/etc/kolla/config/nova.conf``. The Nova Scheduler service
-on the control node requires the ``PciPassthroughFilter`` to be added to the list
-of filters and the Nova Compute service(s) on the compute node(s) need PCI
-device whitelisting. The Nova API service on the control node and the Nova
+on the control node requires the ``PciPassthroughFilter`` to be added to the
+list of filters and the Nova Compute service(s) on the compute node(s) need
+PCI device whitelisting. The Nova API service on the control node and the Nova
 Compute service on the compute node also require the ``alias`` option under the
-``[pci]`` section. The alias can be configured as 'type-VF' to pass VFs or 'type-PF'
-to pass the PF. Type-VF is shown in this example:
+``[pci]`` section. The alias can be configured as 'type-VF' to pass VFs or
+'type-PF' to pass the PF. Type-VF is shown in this example:
 .. path /etc/kolla/config/nova.conf
 .. code-block:: ini
@@ -514,8 +516,8 @@ Run deployment.
 Verification
 ------------
-Create (or use an existing) flavor, and then configure it to request one PCI device
-from the PCI alias:
+Create (or use an existing) flavor, and then configure it to request one PCI
+device from the PCI alias:
 .. code-block:: console
@@ -534,4 +536,5 @@ Start a new instance using the flavor:
 Verify VF devices were created and the instance starts successfully as in
 the Neutron SRIOV case.
 For more information see `OpenStack PCI passthrough documentation <https://docs.openstack.org/nova/pike/admin/pci-passthrough.html>`_.

View File

@@ -5,10 +5,10 @@ Nova Fake Driver
 ================
 One common question from OpenStack operators is that "how does the control
-plane (for example, database, messaging queue, nova-scheduler ) scales?". To answer
-this question, operators setup Rally to drive workload to the OpenStack cloud.
-However, without a large number of nova-compute nodes, it becomes difficult to
-exercise the control performance.
+plane (for example, database, messaging queue, nova-scheduler ) scales?".
+To answer this question, operators setup Rally to drive workload to the
+OpenStack cloud. However, without a large number of nova-compute nodes,
+it becomes difficult to exercise the control performance.
 Given the built-in feature of Docker container, Kolla enables standing up many
 of Compute nodes with nova fake driver on a single host. For example,
@@ -19,9 +19,9 @@ Use nova-fake driver
 ~~~~~~~~~~~~~~~~~~~~
 Nova fake driver can not work with all-in-one deployment. This is because the
-fake ``neutron-openvswitch-agent`` for the fake ``nova-compute`` container conflicts
-with ``neutron-openvswitch-agent`` on the Compute nodes. Therefore, in the
-inventory the network node must be different than the Compute node.
+fake ``neutron-openvswitch-agent`` for the fake ``nova-compute`` container
+conflicts with ``neutron-openvswitch-agent`` on the Compute nodes. Therefore,
+in the inventory the network node must be different than the Compute node.
 By default, Kolla uses libvirt driver on the Compute node. To use nova-fake
 driver, edit the following parameters in ``/etc/kolla/globals.yml`` or in
@@ -35,5 +35,5 @@ the command line options.
 .. end
 Each Compute node will run 5 ``nova-compute`` containers and 5
-``neutron-plugin-agent`` containers. When booting instance, there will be no real
-instances created. But :command:`nova list` shows the fake instances.
+``neutron-plugin-agent`` containers. When booting instance, there will be
+no real instances created. But :command:`nova list` shows the fake instances.

View File

@@ -82,7 +82,7 @@ table** example listed above. Please modify accordingly if your setup is
 different.
 Prepare for Rings generating
 ----------------------------
 To perpare for Swift Rings generating, run the following commands to initialize
 the environment variable and create ``/etc/kolla/config/swift`` directory:
@@ -251,4 +251,4 @@ A very basic smoke test:
 | Bytes | 6684 |
 | Containers | 1 |
 | Objects | 1 |
 +------------+---------------------------------------+

View File

@@ -190,4 +190,4 @@ can be cleaned up executing ``cleanup-tacker`` script.
 $ sh cleanup-tacker
 .. end

View File

@@ -61,9 +61,9 @@ For more information, please see `VMware NSX-V documentation <https://docs.vmwar
 In addition, it is important to modify the firewall rule of vSphere to make
 sure that VNC is accessible from outside VMware environment.
-On every VMware host, edit /etc/vmware/firewall/vnc.xml as below:
+On every VMware host, edit ``/etc/vmware/firewall/vnc.xml`` as below:
-.. code-block:: none
+.. code-block:: xml
 <!-- FirewallRule for VNC Console -->
 <ConfigRoot>
@@ -216,7 +216,7 @@ Options for Neutron NSX-V support:
 .. end
-Then you should start :command:`kolla-ansible` deployment normally as KVM/QEMU deployment.
+Then you should start :command:`kolla-ansible` deployment normally as
+KVM/QEMU deployment.
 VMware NSX-DVS
@@ -293,7 +294,8 @@ Options for Neutron NSX-DVS support:
 .. end
-Then you should start :command:`kolla-ansible` deployment normally as KVM/QEMU deployment.
+Then you should start :command:`kolla-ansible` deployment normally as
+KVM/QEMU deployment.
 For more information on OpenStack vSphere, see
 `VMware vSphere

View File

@@ -17,7 +17,7 @@ configure kuryr refer to :doc:`kuryr-guide`.
 To allow Zun Compute connect to the Docker Daemon, add the following in the
 ``docker.service`` file on each zun-compute node.
-.. code-block:: none
+.. code-block:: ini
 ExecStart= -H tcp://<DOCKER_SERVICE_IP>:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://<DOCKER_SERVICE_IP>:2379 --cluster-advertise=<DOCKER_SERVICE_IP>:2375

View File

@@ -39,7 +39,7 @@ regions. In this example, we consider two regions. The current one,
 formerly knows as RegionOne, that is hided behind
 ``openstack_region_name`` variable, and the RegionTwo:
-.. code-block:: none
+.. code-block:: yaml
 openstack_region_name: "RegionOne"
 multiple_regions_names:
@@ -69,7 +69,7 @@ update the ``/etc/kolla/globals.yml`` configuration file to tell Kolla how
 to reach Keystone. In the following, ``kolla_internal_fqdn_r1`` refers to
 the value of ``kolla_internal_fqdn`` in RegionOne:
-.. code-block:: none
+.. code-block:: yaml
 kolla_internal_fqdn_r1: 10.10.10.254
@@ -142,7 +142,7 @@ directory, a ``ceilometer.conf`` file with below content:
 And link the directory that contains these files into the
 ``/etc/kolla/globals.yml``:
-.. code-block:: none
+.. code-block:: yaml
 node_custom_config: path/to/the/directory/of/global&nova_conf/
@@ -150,7 +150,7 @@ And link the directory that contains these files into the
 Also, change the name of the current region. For instance, RegionTwo:
-.. code-block:: none
+.. code-block:: yaml
 openstack_region_name: "RegionTwo"
@@ -159,7 +159,7 @@ Also, change the name of the current region. For instance, RegionTwo:
 Finally, disable the deployment of Keystone and Horizon that are
 unnecessary in this region and run ``kolla-ansible``:
-.. code-block:: none
+.. code-block:: yaml
 enable_keystone: "no"
 enable_horizon: "no"

View File

@@ -24,9 +24,9 @@ Edit the ``/etc/kolla/globals.yml`` and add the following where 192.168.1.100
 is the IP address of the machine and 5000 is the port where the registry is
 currently running:
-.. code-block:: none
+.. code-block:: yaml
-docker_registry = 192.168.1.100:5000
+docker_registry: 192.168.1.100:5000
 .. end
@@ -185,7 +185,7 @@ controls how ansible interacts with remote hosts.
 information about SSH authentication please reference
 `Ansible documentation <http://docs.ansible.com/ansible/intro_inventory.html>`__.
-.. code-block:: none
+.. code-block:: ini
 # These initial groups are the only groups required to be modified. The
 # additional groups are for more control of the environment.
@@ -208,7 +208,7 @@ For more advanced roles, the operator can edit which services will be
 associated in with each group. Keep in mind that some services have to be
 grouped together and changing these around can break your deployment:
-.. code-block:: none
+.. code-block:: ini
 [kibana:children]
 control

View File

@@ -72,8 +72,8 @@ While there may be some cases where it is possible to upgrade by skipping this
 step (i.e. by upgrading only the ``openstack_release`` version) - generally,
 when looking at a more comprehensive upgrade, the kolla-ansible package itself
 should be upgraded first. This will include reviewing some of the configuration
-and inventory files. On the operator/master node, a backup of the ``/etc/kolla``
-directory may be desirable.
+and inventory files. On the operator/master node, a backup of the
+``/etc/kolla`` directory may be desirable.
 If upgrading from ``5.0.0`` to ``6.0.0``, upgrade the kolla-ansible package:
@@ -83,8 +83,8 @@ If upgrading from ``5.0.0`` to ``6.0.0``, upgrade the kolla-ansible package:
 .. end
-If this is a minor upgrade, and you do not wish to upgrade kolla-ansible itself,
-you may skip this step.
+If this is a minor upgrade, and you do not wish to upgrade kolla-ansible
+itself, you may skip this step.
 The inventory file for the deployment should be updated, as the newer sample
 inventory files may have updated layout or other relevant changes.
@@ -101,15 +101,16 @@ In addition the ``6.0.0`` sample configuration files should be taken from::
 # Ubuntu
 /usr/local/share/kolla-ansible/etc_examples/kolla
-At this stage, files that are still at the ``5.0.0`` version - which need manual
-updating are:
+At this stage, files that are still at the ``5.0.0`` version - which need
+manual updating are:
 - ``/etc/kolla/globals.yml``
 - ``/etc/kolla/passwords.yml``
 For ``globals.yml`` relevant changes should be merged into a copy of the new
 template, and then replace the file in ``/etc/kolla`` with the updated version.
-For ``passwords.yml``, see the ``kolla-mergepwd`` instructions in `Tips and Tricks`.
+For ``passwords.yml``, see the ``kolla-mergepwd`` instructions in
+`Tips and Tricks`.
 For the kolla docker images, the ``openstack_release`` is updated to ``6.0.0``:

View File

@@ -204,8 +204,8 @@ Install Kolla for development
 .. end
 #. Copy the inventory files to the current directory. ``kolla-ansible`` holds
-   inventory files ( ``all-in-one`` and ``multinode``) in the ``ansible/inventory``
-   directory.
+   inventory files ( ``all-in-one`` and ``multinode``) in the
+   ``ansible/inventory`` directory.
 .. code-block:: console
@@ -230,7 +230,7 @@ than one node, edit ``multinode`` inventory:
 #. Edit the first section of ``multinode`` with connection details of your
    environment, for example:
-.. code-block:: none
+.. code-block:: ini
 [control]
 10.0.0.[10:12] ansible_user=ubuntu ansible_password=foobar ansible_become=true

View File

@@ -71,8 +71,8 @@ necessary tasks. In Rocky, all services have this capability, so users do not
 need to add ``ansible_become`` option if connection user has passwordless sudo
 capability.
-Prior to Rocky, ``ansible_user`` (the user which Ansible uses to connect via SSH)
-is default configuration owner and group in target nodes.
+Prior to Rocky, ``ansible_user`` (the user which Ansible uses to connect
+via SSH) is default configuration owner and group in target nodes.
 From Rocky release, Kolla support connection using any user which has
-passwordless sudo capability. For setting custom owner user and group, user can
-set ``config_owner_user`` and ``config_owner_group`` in ``globals.yml``
+passwordless sudo capability. For setting custom owner user and group, user
+can set ``config_owner_user`` and ``config_owner_group`` in ``globals.yml``.

View File

@@ -49,14 +49,6 @@ console_scripts =
 setup-hooks =
     pbr.hooks.setup_hook
-[pbr]
-[build_sphinx]
-all_files = 1
-build-dir = doc/build
-source-dir = doc/source
-warning-is-error = 1
 [build_releasenotes]
 all_files = 1
 build-dir = releasenotes/build

View File

@@ -11,14 +11,12 @@ hacking>=0.10.0,<1.1.0
 openstackdocstheme>=1.18.1 # Apache-2.0
 oslo.log>=3.36.0 # Apache-2.0
 oslotest>=3.2.0 # Apache-2.0
-reno>=2.5.0 # Apache-2.0
 PrettyTable<0.8,>=0.7.1 # BSD
 PyYAML>=3.12 # MIT
 python-ceilometerclient>=2.5.0 # Apache-2.0
 python-neutronclient>=6.7.0 # Apache-2.0
 python-openstackclient>=3.12.0 # Apache-2.0
 pytz>=2013.6 # MIT
-sphinx!=1.6.6,!=1.6.7,>=1.6.2 # BSD
 testrepository>=0.0.18 # Apache-2.0/BSD
 testscenarios>=0.4 # Apache-2.0/BSD
 testtools>=2.2.0 # MIT

tox.ini
View File

@@ -7,8 +7,9 @@ envlist = py35,py27,pep8,pypy
 usedevelop=True
 whitelist_externals = find
                       rm
-install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
-deps = -r{toxinidir}/requirements.txt
+install_command = pip install {opts} {packages}
+deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
+       -r{toxinidir}/requirements.txt
        -r{toxinidir}/test-requirements.txt
 passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
           OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_LOG_CAPTURE OS_TEST_TIMEOUT
@@ -30,12 +31,15 @@ setenv = VIRTUAL_ENV={envdir}
 commands = python setup.py testr --coverage --testr-args='{posargs}'
 [testenv:pep8]
+# sphinx needs to be installed to make doc8 work properly
 deps =
     {[testenv]deps}
+    -r{toxinidir}/doc/requirements.txt
     yamllint
 commands =
     {toxinidir}/tools/run-bashate.sh
     flake8 {posargs}
+    doc8 doc
     python {toxinidir}/tools/validate-all-file.py
     bandit -r ansible kolla_ansible tests tools
     yamllint .
@@ -44,16 +48,30 @@ commands =
 commands = bandit -r ansible kolla_ansible tests tools
 [testenv:venv]
+deps =
+    -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
+    -r{toxinidir}/test-requirements.txt
+    -r{toxinidir}/doc/requirements.txt
 commands = {posargs}
 [testenv:docs]
+deps =
+    -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
+    -r{toxinidir}/requirements.txt
+    -r{toxinidir}/doc/requirements.txt
 commands =
     rm -rf doc/build
-    doc8 doc
-    python setup.py build_sphinx
+    sphinx-build -W -b html doc/source doc/build/html
 [testenv:deploy-guide]
-commands = sphinx-build -a -E -W -d deploy-guide/build/doctrees -b html deploy-guide/source deploy-guide/build/html
+deps =
+    -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
+    -r{toxinidir}/requirements.txt
+    -r{toxinidir}/doc/requirements.txt
+commands =
+    rm -rf deploy-guide/build
+    sphinx-build -a -E -W -d deploy-guide/build/doctrees -b html deploy-guide/source deploy-guide/build/html
 [testenv:setupenv]
 commands =
@@ -61,6 +79,10 @@ commands =
     {toxinidir}/tools/dump_info.sh
 [testenv:releasenotes]
+deps =
+    -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
+    -r{toxinidir}/requirements.txt
+    -r{toxinidir}/doc/requirements.txt
 commands =
     rm -rf releasenotes/build
     sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
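
With these changes applied, the PTI-aligned environments can be exercised
locally, for example (assuming tox is installed):

    tox -e docs          # sphinx-build of doc/source into doc/build/html
    tox -e pep8          # flake8, bandit, yamllint, and now doc8
    tox -e releasenotes  # reno-based release notes build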