Merge "Fix docs formatting error"

Jenkins 2016-10-17 17:24:24 +00:00 committed by Gerrit Code Review
commit 6c1212a25f
9 changed files with 116 additions and 103 deletions

View File

@ -127,22 +127,22 @@ that Kolla uses throughout that should be followed.
All services should include the following tasks:

- ``do_reconfigure.yml`` : Used to push new configuration files to the host
  and restart the service.
- ``pull.yml`` : Used to pre-fetch the image into the Docker image cache on
  hosts, to speed up initial deploys.
- ``upgrade.yml`` : Used for upgrading the service in a rolling fashion. May
  include service specific setup and steps, as not all services can be
  upgraded in the same way.
* Log delivery

  - For OpenStack services the service has to be added to the ``file_match``
    parameter in the ``openstack_logstreamer_input`` section in the
    ``heka-openstack.toml.j2`` template file in ``ansible/roles/comm/templates``
    to deliver log messages to Elasticsearch.

* Logrotation
@ -161,8 +161,8 @@ that Kolla uses throughout that should be followed.
* Documentation

  - For OpenStack services there should be an entry in the list
    ``OpenStack services`` in the ``README.rst`` file.
  - For infrastructure services there should be an entry in the list
    ``Infrastructure components`` in the ``README.rst`` file.
@ -173,16 +173,16 @@ that Kolla uses throughout that should be followed.
Other than the above, most roles follow the following pattern:

- ``Register``: Involves registering the service with Keystone, creating
  endpoints, roles, users, etc.
- ``Config``: Distributes the config files to the nodes to be pulled into
  the container on startup.
- ``Bootstrap``: Creating the database (but not tables), database user for
  the service, permissions, etc.
- ``Bootstrap Service``: Starts a one-shot container on the host to create
  the database tables and other initial run time config.
- ``Start``: Start the service(s).
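
As a rough sketch of the ``Bootstrap Service`` step (the task options and
variable names below are illustrative assumptions, not copied from a real
role), a one-shot container might be started like this:

.. code-block:: yaml

   # Hypothetical bootstrap task: run a one-shot container that creates the
   # service's database tables and exits; "example" names are placeholders.
   - name: Running example bootstrap container
     kolla_docker:
       action: "start_container"
       common_options: "{{ docker_common_options }}"
       detach: False
       environment:
         KOLLA_BOOTSTRAP:
       image: "{{ example_image_full }}"
       name: "bootstrap_example"
       restart_policy: "no"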

View File

@ -144,11 +144,6 @@ OpenStack Service Configuration in Kolla
========================================
.. note:: As of now kolla only supports config overrides for ini based configs.

An operator can change the location where custom config files are read from by
editing ``/etc/kolla/globals.yml`` and adding the following line.
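
For illustration, such a line might look like the sketch below (the concrete
line sits outside this hunk; ``node_custom_config`` is assumed here as the
variable name, and the path is a placeholder):

.. code-block:: yaml

   # Assumed example: point Kolla at a non-default custom-config directory.
   node_custom_config: "/etc/kolla/my-config"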

View File

@ -78,6 +78,8 @@ see bifrost dynamic inventory examples for more details.
e.g. /etc/kolla/config/bifrost/servers.yml e.g. /etc/kolla/config/bifrost/servers.yml
.. code-block:: yaml
   ---
   cloud1:
     uuid: "31303735-3934-4247-3830-333132535336"
@ -149,11 +151,15 @@ manual
Start Bifrost Container
_______________________
::
   docker run -it --net=host -v /dev:/dev -d --privileged --name bifrost_deploy 192.168.1.51:5000/kollaglue/ubuntu-source-bifrost-deploy:3.0.0
copy configs
____________
.. code-block:: console
   docker exec -it bifrost_deploy mkdir /etc/bifrost
   docker cp /etc/kolla/config/bifrost/servers.yml bifrost_deploy:/etc/bifrost/servers.yml
   docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
@ -178,18 +184,23 @@ cd playbooks/
bootstrap and start services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
   ansible-playbook -vvvv -i /bifrost/playbooks/inventory/localhost /bifrost/playbooks/install.yaml -e @/etc/bifrost/bifrost.yml
Check ironic is running
=======================
.. code-block:: console
   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars
running "ironic node-list" should return with no nodes.
e.g. Running "ironic node-list" should return with no nodes, e.g.
.. code-block:: console
   (bifrost-deploy)[root@bifrost bifrost]# ironic node-list
   +------+------+---------------+-------------+--------------------+-------------+
@ -215,6 +226,8 @@ kolla-ansible deploy-servers
manual
------
.. code-block:: console
   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars
@ -227,7 +240,8 @@ cd /bifrost
   export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
   ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" -e network_interface=<provisioning interface> -e @/etc/bifrost/dib.yml
At this point ironic should clean down your nodes and install the default
OS image.
Advanced configuration
======================
@ -247,9 +261,12 @@ Known issues
SSH daemon not running
----------------------
By default sshd is installed in the image but may not be enabled. If you
encounter this issue you will have to access the server physically in
recovery mode to enable the ssh service. If your hardware supports it, this
can be done remotely with ipmitool and serial over LAN, e.g.
.. code-block:: console
   ipmitool -I lanplus -H 192.168.1.30 -U admin -P root sol activate
@ -270,4 +287,3 @@ code
____

https://github.com/openstack/bifrost

View File

@ -3,6 +3,6 @@ Bug triage
==========
The triage of Kolla bugs follows the OpenStack-wide process documented
on `BugTriage <https://wiki.openstack.org/wiki/BugTriage>`_ in the wiki.
Please reference `Bugs <https://wiki.openstack.org/wiki/Bugs>`_ in the
wiki for further details.

View File

@ -91,9 +91,9 @@ Cinder LVM2 backend with iSCSI
As of Newton-1 milestone, Kolla supports LVM2 as cinder backend. It is
accomplished by introducing two new containers ``tgtd`` and ``iscsid``.
``tgtd`` container serves as a bridge between cinder-volume process and a
server hosting Logical Volume Groups (LVG). ``iscsid`` container serves as
a bridge between nova-compute process and the server hosting LVG.

In order to use Cinder's LVM backend, an LVG named ``cinder-volumes`` should
exist on the server and the following parameter must be specified in
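
For illustration, assuming (as in current Kolla) that this parameter lives in
``globals.yml`` and follows the ``enable_*`` naming convention, it might look
like:

.. code-block:: yaml

   # Assumed globals.yml switch for the LVM backend; the variable name is
   # an assumption, not taken from this hunk.
   enable_cinder_backend_lvm: "yes"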

View File

@ -14,20 +14,20 @@ Requirements
Preparation and Deployment
--------------------------
To allow the docker daemon to connect to etcd, add the following to the
``docker.service`` file.
:: ::
   ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
The IP address is the host running the etcd service. ``2375`` is the port that
allows the Docker daemon to be accessed remotely. ``2379`` is the etcd
listening port.
By default etcd and kuryr are disabled in the ``group_vars/all.yml``.
In order to enable them, you need to edit the file ``globals.yml`` and set the
following variables
:: ::
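
   # Assumed values: the concrete lines are cut off by this hunk; the
   # variable names below follow Kolla's enable_* convention.
   enable_etcd: "yes"
   enable_kuryr: "yes"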

View File

@ -14,8 +14,9 @@ Node types and services running on them
A basic Kolla inventory consists of several types of nodes, known in Ansible as
``groups``.
* Controller - This is the cloud controller node. It hosts control services
  like APIs and databases. This group should have an odd number of nodes for
  quorum.
* Network - This is the network node. It will host Neutron agents along with
  haproxy / keepalived. These nodes will have a floating ip defined in
@ -54,13 +55,13 @@ In Kolla operators should configure the following network interfaces:
  communicate to Ceph. This can be heavily utilized so it's recommended to put
  this network on 10Gig networking. Defaults to network_interface.
* cluster_interface - This is another interface used by Ceph. It's used for
  data replication. It can be heavily utilized also, and if it becomes a
  bottleneck it can affect data consistency and performance of the whole
  cluster. Defaults to network_interface.
* tunnel_interface - This interface is used by Neutron for vm-to-vm traffic
  over tunneled networks (like VxLan). Defaults to network_interface.
* Neutron_external_interface - This interface is required by Neutron. Neutron
  will put br-ex on it. It will be used for flat networking as well as tagged
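
As a sketch of how these interfaces might be set (the device names below are
placeholders, and the lowercase ``neutron_external_interface`` spelling is an
assumption), ``globals.yml`` could contain:

.. code-block:: yaml

   # Illustrative interface mapping; eth* names are placeholders.
   network_interface: "eth0"
   cluster_interface: "eth2"
   tunnel_interface: "eth3"
   neutron_external_interface: "eth1"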

View File

@ -286,7 +286,8 @@ On CentOS or RHEL systems, this can be done using:
   yum install ansible
Many DEB based systems do not meet Kolla's Ansible version requirements. It is
recommended to use pip to install Ansible >2.0. Finally Ansible >2.0 may be
installed using:
:: ::