Merge "Fix docs formatting error"

This commit is contained in:
Jenkins 2016-10-17 17:24:24 +00:00 committed by Gerrit Code Review
commit 6c1212a25f
9 changed files with 116 additions and 103 deletions

View File

@ -127,22 +127,22 @@ that Kolla uses throughout that should be followed.
All services should include the following tasks:
- ``do_reconfigure.yml`` : Used to push new configuration files to the host
  and restart the service.
- ``pull.yml`` : Used to pre-fetch the image into the Docker image cache
  on hosts, to speed up initial deploys.
- ``upgrade.yml`` : Used for upgrading the service in a rolling fashion. May
  include service-specific setup and steps as not all services can be
  upgraded in the same way.
* Log delivery
- For OpenStack services the service has to be added to the ``file_match``
  parameter in the ``openstack_logstreamer_input`` section in the
  ``heka-openstack.toml.j2`` template file in
  ``ansible/roles/comm/templates`` to deliver log messages to Elasticsearch.
* Log rotation
@ -161,8 +161,8 @@ that Kolla uses throughout that should be followed.
* Documentation
- For OpenStack services there should be an entry in the list
  ``OpenStack services`` in the ``README.rst`` file.
- For infrastructure services there should be an entry in the list
``Infrastructure components`` in the ``README.rst`` file.
@ -173,16 +173,16 @@ that Kolla uses throughout that should be followed.
Other than the above, most roles follow this pattern (see the sketch after
this list):
- ``Register``: Involves registering the service with Keystone, creating
  endpoints, roles, users, etc.
- ``Config``: Distributes the config files to the nodes to be pulled into
  the container on startup.
- ``Bootstrap``: Creating the database (but not tables), database user for
  the service, permissions, etc.
- ``Bootstrap Service``: Starts a one-shot container on the host to create
  the database tables, and other initial runtime config.
- ``Start``: Start the service(s).
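For instance, the ``pull.yml`` task mentioned earlier is typically a single
``kolla_docker`` invocation per container. A minimal sketch, assuming a
hypothetical ``example`` service (the service and variable names are
assumptions, not taken from an actual role):

.. code-block:: yaml

   ---
   # Hypothetical pull.yml sketch; "example" and example_image_full are
   # illustrative names only.
   - name: Pulling example image
     kolla_docker:
       action: "pull_image"
       common_options: "{{ docker_common_options }}"
       image: "{{ example_image_full }}"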

View File

@ -144,11 +144,6 @@ OpenStack Service Configuration in Kolla
========================================
.. note:: As of now kolla only supports config overrides for ini based configs.
   An operator can change the location where custom config files are read
   from by editing ``/etc/kolla/globals.yml`` and adding the following
   line.
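The line itself is cut off at the hunk boundary; a plausible example,
assuming the ``node_custom_config`` variable is what controls this location:

.. code-block:: yaml

   # Assumed variable name; the actual line is elided by the hunk boundary.
   node_custom_config: "/etc/kolla/config"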

View File

@ -78,27 +78,29 @@ see bifrost dynamic inventory examples for more details.
e.g. /etc/kolla/config/bifrost/servers.yml
.. code-block:: yaml

   ---
   cloud1:
     uuid: "31303735-3934-4247-3830-333132535336"
     driver_info:
       power:
         ipmi_username: "admin"
         ipmi_address: "192.168.1.30"
         ipmi_password: "root"
     nics:
       -
         mac: "1c:c1:de:1c:aa:53"
       -
         mac: "1c:c1:de:1c:aa:52"
     driver: "agent_ipmitool"
     ipv4_address: "192.168.1.10"
     properties:
       cpu_arch: "x86_64"
       ram: "24576"
       disk_size: "120"
       cpus: "16"
     name: "cloud1"
Adjust as appropriate for your deployment.
@ -149,15 +151,19 @@ manual
Start Bifrost Container
_______________________
::

   docker run -it --net=host -v /dev:/dev -d --privileged --name bifrost_deploy 192.168.1.51:5000/kollaglue/ubuntu-source-bifrost-deploy:3.0.0
copy configs
____________
.. code-block:: console

   docker exec -it bifrost_deploy mkdir /etc/bifrost
   docker cp /etc/kolla/config/bifrost/servers.yml bifrost_deploy:/etc/bifrost/servers.yml
   docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
   docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml
bootstrap bifrost
_________________
@ -178,24 +184,29 @@ cd playbooks/
bootstrap and start services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   ansible-playbook -vvvv -i /bifrost/playbooks/inventory/localhost /bifrost/playbooks/install.yaml -e @/etc/bifrost/bifrost.yml
Check ironic is running
=======================
.. code-block:: console

   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars

Running "ironic node-list" should return with no nodes, e.g.

.. code-block:: console

   (bifrost-deploy)[root@bifrost bifrost]# ironic node-list
   +------+------+---------------+-------------+--------------------+-------------+
   | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
   +------+------+---------------+-------------+--------------------+-------------+
   +------+------+---------------+-------------+--------------------+-------------+
Enroll and Deploy Physical Nodes
@ -215,19 +226,22 @@ kolla-ansible deploy-servers
manual
------
.. code-block:: console

   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars
   export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
   ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" -e network_interface=<provisioning interface>

At this point ironic should clean down your nodes and install the default
OS image.

.. code-block:: console

   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars
   export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
   ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" -e network_interface=<provisioning interface> -e @/etc/bifrost/dib.yml
Advanced configuration
======================
@ -247,10 +261,13 @@ Known issues
SSH daemon not running
----------------------
By default sshd is installed in the image but may not be enabled.
If you encounter this issue you will have to access the server physically in
recovery mode to enable the ssh service. If your hardware supports it, this
can be done remotely with ipmitool and serial over LAN, e.g.

.. code-block:: console

   ipmitool -I lanplus -H 192.168.1.30 -U admin -P root sol activate
References
@ -270,4 +287,3 @@ code
____
https://github.com/openstack/bifrost

View File

@ -3,6 +3,6 @@ Bug triage
==========
The triage of Kolla bugs follows the OpenStack-wide process documented
on `BugTriage <https://wiki.openstack.org/wiki/BugTriage>`_ in the wiki.
Please reference `Bugs <https://wiki.openstack.org/wiki/Bugs>`_ in the
wiki for further details.

View File

@ -91,9 +91,9 @@ Cinder LVM2 backend with iSCSI
As of Newton-1 milestone, Kolla supports LVM2 as cinder backend. It is
accomplished by introducing two new containers ``tgtd`` and ``iscsid``.
The ``tgtd`` container serves as a bridge between the cinder-volume process
and a server hosting Logical Volume Groups (LVG). The ``iscsid`` container
serves as a bridge between the nova-compute process and the server hosting
LVG.
In order to use Cinder's LVM backend, an LVG named ``cinder-volumes`` should
exist on the server and the following parameter must be specified in
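A minimal sketch of preparing such a volume group ahead of time, assuming
``/dev/sdb`` is an unused block device on the storage host:

.. code-block:: console

   # /dev/sdb is an assumption; use whatever free device your host has.
   pvcreate /dev/sdb
   vgcreate cinder-volumes /dev/sdb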

View File

@ -14,20 +14,20 @@ Requirements
Preparation and Deployment
--------------------------
To allow the docker daemon to connect to etcd, add the following in the
``docker.service`` file.

::

   ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
The IP address is the host running the etcd service. ``2375`` is the port
that allows the Docker daemon to be accessed remotely. ``2379`` is the etcd
listening port.
By default etcd and kuryr are disabled in the ``group_vars/all.yml``.
In order to enable them, you need to edit the file ``globals.yml`` and set
the following variables:

::
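   # Assumed variable names; the exact lines are elided by the hunk boundary.
   enable_etcd: "yes"
   enable_kuryr: "yes"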

View File

@ -14,8 +14,9 @@ Node types and services running on them
A basic Kolla inventory consists of several types of nodes, known in Ansible as
``groups``.
* Controller - This is the cloud controller node. It hosts control services
  like APIs and databases. This group should have an odd number of nodes for
  quorum.
* Network - This is the network node. It will host Neutron agents along with
haproxy / keepalived. These nodes will have a floating ip defined in
@ -54,13 +55,13 @@ In Kolla operators should configure the following network interfaces:
communicate to Ceph. This can be heavily utilized so it's recommended to put
this network on 10Gig networking. Defaults to network_interface.
* cluster_interface - This is another interface used by Ceph. It's used for
  data replication. It can also be heavily utilized, and if it becomes a
  bottleneck it can affect data consistency and performance of the whole
  cluster. Defaults to network_interface.
* tunnel_interface - This interface is used by Neutron for vm-to-vm traffic
  over tunneled networks (like VxLan). Defaults to network_interface.
* Neutron_external_interface - This interface is required by Neutron. Neutron
will put br-ex on it. It will be used for flat networking as well as tagged
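Taken together, a hedged sketch of how these interfaces might be mapped in
``globals.yml`` (the device names are assumptions for illustration):

.. code-block:: yaml

   # Hypothetical device names; adjust to the NICs on your hosts.
   network_interface: "eth0"
   tunnel_interface: "eth1"
   cluster_interface: "eth2"
   neutron_external_interface: "eth3"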

View File

@ -286,7 +286,8 @@ On CentOS or RHEL systems, this can be done using:
yum install ansible
Many DEB-based systems do not meet Kolla's Ansible version requirements. It is
recommended to use pip to install Ansible >2.0. Finally, Ansible >2.0 may be
installed using:
::
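   # Assumed command; the exact line is elided by the hunk boundary.
   pip install -U ansible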
@ -321,7 +322,7 @@ Install Kolla and its dependencies:
Copy the Kolla configuration files to ``/etc``:
::

   # CentOS 7
   cp -r /usr/share/kolla/etc_examples/kolla /etc/

View File

@ -103,4 +103,4 @@ For more information see the `oslotest documentation
.. rubric:: Footnotes
.. [#f1] See http://docs.openstack.org/infra/system-config/jenkins.html