diff --git a/doc/CONTRIBUTING.rst b/doc/CONTRIBUTING.rst
index ae248028c8..85c11271bd 100644
--- a/doc/CONTRIBUTING.rst
+++ b/doc/CONTRIBUTING.rst
@@ -7,22 +7,22 @@ How To Contribute
Basics
======
-* Our source code is hosted on `OpenStack GitHub`_, but pull requests submitted
- through GitHub will be ignored. Bugs should be filed on launchpad_,
- not GitHub.
+#. Our source code is hosted on `OpenStack GitHub`_, but pull requests submitted
+ through GitHub will be ignored. Bugs should be filed on launchpad_,
+ not GitHub.
-* Please follow OpenStack `Gerrit Workflow`_ to to contribute to Kolla.
+#. Please follow OpenStack `Gerrit Workflow`_ to contribute to Kolla.
-* Note the branch you're proposing changes to. ``master`` is the current focus
- of development. Kolla project has a strict policy of only allowing backports
- in ``stable/branch``, unless when not applicable. A bug in a ``stable/branch``
- will first have to be fixed in ``master``.
+#. Note the branch you're proposing changes to. ``master`` is the current focus
+ of development. Kolla project has a strict policy of only allowing backports
+   in ``stable/branch``, except when not applicable. A bug in a ``stable/branch``
+ will first have to be fixed in ``master``.
-* Please file a launchpad_ blueprint for any significant code change and a bug
- for any significant bug fix or add a TrivialFix tag for simple changes.
- See how to reference a bug or a blueprint in the commit message here_
+#. Please file a launchpad_ blueprint for any significant code change and a bug
+   for any significant bug fix, or add a TrivialFix tag for simple changes.
+   See how to reference a bug or a blueprint in the commit message here_.
-* TrivialFix tags or bugs are not required for documentation changes.
+#. TrivialFix tags or bugs are not required for documentation changes.
.. _OpenStack GitHub: https://github.com/openstack/kolla
.. _Gerrit Workflow: http://docs.openstack.org/infra/manual/developers.html#development-workflow
@@ -32,6 +32,7 @@ Basics
Development Environment
========================
-* Please follow our `quickstart`_ to deploy your environment and test your changes
+#. Please follow our `quickstart`_ to deploy your environment and test your
+ changes.
.. _quickstart: http://docs.openstack.org/developer/kolla/quickstart.html
diff --git a/doc/ceph-guide.rst b/doc/ceph-guide.rst
index 3d2623bf86..1f892e13ba 100644
--- a/doc/ceph-guide.rst
+++ b/doc/ceph-guide.rst
@@ -6,7 +6,7 @@ Ceph in Kolla
The out-of-the-box Ceph deployment requires 3 hosts with at least one block
device on each host that can be dedicated for sole use by Ceph. However, with
-tweaks to the Ceph cluster you can deploy a "healthy" cluster with a single
+tweaks to the Ceph cluster you can deploy a **healthy** cluster with a single
host and a single block device.
Requirements
@@ -21,8 +21,8 @@ Preparation and Deployment
To prepare a disk for use as a
`Ceph OSD `_ you must add a
special partition label to the disk. This partition label is how Kolla detects
-the disks to format and bootstrap. Any disk with a matching partition label will
-be reformatted so use caution.
+the disks to format and bootstrap. Any disk with a matching partition label
+will be reformatted so use caution.
To prepare an OSD as a storage drive, execute the following operations:
@@ -32,7 +32,8 @@ To prepare an OSD as a storage drive, execute the following operations:
# where $DISK is /dev/sdb or something similar
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
-The following shows an example of using parted to configure /dev/sdb for usage with Kolla.
+The following shows an example of using parted to configure ``/dev/sdb`` for
+usage with Kolla.
::
@@ -56,24 +57,25 @@ hosts that have the block devices you have prepped as shown above.
compute1
-Enable Ceph in /etc/kolla/globals.yml:
+Enable Ceph in ``/etc/kolla/globals.yml``:
::
enable_ceph: "yes"
-RadosGW is optional, enable it in /etc/kolla/globals.yml:
+RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:
::
enable_ceph_rgw: "yes"
-RGW requires a healthy cluster in order to be successfully deployed.
-On initial start up, RGW will create several pools.
-The first pool should be in an operational state to proceed with the second one, and so on.
-So, in the case of an all-in-one deployment, it is necessary to change the default number of copies
-for the pools before deployment. Modify the file /etc/kolla/config/ceph.conf and add the contents::
+RGW requires a healthy cluster in order to be successfully deployed. On initial
+start up, RGW will create several pools. The first pool should be in an
+operational state to proceed with the second one, and so on. So, in the case of
+an **all-in-one** deployment, it is necessary to change the default number of
+copies for the pools before deployment. Modify the file
+``/etc/kolla/config/ceph.conf`` and add the contents::
[global]
osd pool default size = 1
@@ -89,9 +91,8 @@ Finally deploy the Ceph-enabled OpenStack:
Using a Cache Tier
==================
-An optional
-`cache tier `_
-can be deployed by formatting at least one cache device and enabling cache
+An optional `cache tier `_
+can be deployed by formatting at least one cache device and enabling cache
tiering in the globals.yml configuration file.
To prepare an OSD as a cache device, execute the following operations:
@@ -102,7 +103,7 @@ To prepare an OSD as a cache device, execute the following operations:
# where $DISK is /dev/sdb or something similar
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1
-Enable the Ceph cache tier in /etc/kolla/globals.yml:
+Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:
::
@@ -123,13 +124,13 @@ Setting up an Erasure Coded Pool
`Erasure code `_
is the new big thing from Ceph. Kolla has the ability to setup your Ceph pools
as erasure coded pools. Due to technical limitations with Ceph, using erasure
-coded pools as OpenStack uses them requires a cache tier. Additionally, you must
-make the choice to use an erasure coded pool or a replicated pool (the default)
-when you initially deploy. You cannot change this without completely removing
-the pool and recreating it.
+coded pools as OpenStack uses them requires a cache tier. Additionally, you
+must make the choice to use an erasure coded pool or a replicated pool
+(the default) when you initially deploy. You cannot change this without
+completely removing the pool and recreating it.
-To enable erasure coded pools add the following options to your
-/etc/kolla/globals.yml configuration file:
+To enable erasure coded pools, add the following options to your
+``/etc/kolla/globals.yml`` configuration file:
::
@@ -157,9 +158,10 @@ indicates a healthy cluster:
68676 kB used, 20390 MB / 20457 MB avail
64 active+clean
-If Ceph is run in an all-in-one deployment or with less than three storage nodes, further
-configuration is required. It is necessary to change the default number of copies for the pool.
-The following example demonstrates how to change the number of copies for the pool to 1:
+If Ceph is run in an **all-in-one** deployment or with fewer than three storage
+nodes, further configuration is required. It is necessary to change the default
+number of copies for the pool. The following example demonstrates how to change
+the number of copies for the pool to 1:
::
@@ -178,7 +180,7 @@ If using a cache tier, these changes must be made as well:
for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done
-The default pool Ceph creates is named 'rbd'. It is safe to remove this pool:
+The default pool Ceph creates is named **rbd**. It is safe to remove this pool:
::
diff --git a/doc/deployment-philosophy.rst b/doc/deployment-philosophy.rst
index 6a08e418f6..87ed2195e8 100644
--- a/doc/deployment-philosophy.rst
+++ b/doc/deployment-philosophy.rst
@@ -9,15 +9,15 @@ Overview
Kolla has an objective to replace the inflexible, painful, resource intensive
deployment process of OpenStack with a flexible, painless, inexpensive
-deployment process. Often to deploy OpenStack at one-hundred node scale that
+deployment process. Often to deploy OpenStack at one-hundred node scale that
a small business may require means building a team of OpenStack professionals
-to maintain and manage the OpenStack deployment. Finding people experienced
+to maintain and manage the OpenStack deployment. Finding people experienced
in OpenStack deployment is very difficult and expensive, resulting in a big
-barrier for OpenStack adoption. Kolla seeks to remedy this set of problems by
+barrier for OpenStack adoption. Kolla seeks to remedy this set of problems by
simplifying the deployment process but enabling flexible deployment models.
-Kolla is a highly opinionated deployment tool out of the box. This permits
-Kolla to be deployable with configuration of three key/value pairs. As an
+Kolla is a highly opinionated deployment tool out of the box. This permits
+Kolla to be deployable with configuration of three key/value pairs. As an
operator's experience with OpenStack grows and the desire to customize
OpenStack services increases, Kolla offers full capability to override every
OpenStack service configuration option in the deployment.
@@ -27,12 +27,12 @@ Why not Template Customization?
The Kolla upstream community does not want to place key/value pairs in the
Ansible playbook configuration options that are not essential to obtaining
-a functional deployment. If the Kolla upstream starts down the path of
+a functional deployment. If the Kolla upstream starts down the path of
templating configuration options, the Ansible configuration could conceivably
grow to hundreds of configuration key/value pairs which is unmanageable.
Further, as new versions of Kolla are released, there would be independent
customization available for different versions creating an unsupportable and
-difficult to document environment. Finally, adding key/value pairs for
+difficult to document environment. Finally, adding key/value pairs for
configuration options creates a situation in which a development and release
cycle is required in order to successfully add a new customization.
Essentially templating in configuration options is not a scalable solution
@@ -47,14 +47,14 @@ of existing deployment tools through a tidy simple design.
During deployment of an OpenStack service, a basic set of default configuration
options are merged with and overridden by custom ini configuration sections.
-Kolla deployment customization is that simple! This does create a situation
+Kolla deployment customization is that simple! This does create a situation
in which the Operator references the upstream documentation if a customization
-is desired in the OpenStack deployment. Fortunately the configuration options
+is desired in the OpenStack deployment. Fortunately the configuration options
documentation is extremely mature and well-formulated.
-As an example, consider running Kolla in a virtual machine. In order to
+As an example, consider running Kolla in a virtual machine. In order to
launch virtual machines from Nova in a virtual environment, it is necessary
-to use the QEMU hypervisor, rather than the KVM hypervisor. To achieve this
+to use the QEMU hypervisor, rather than the KVM hypervisor. To achieve this
result, simply modify the file `/etc/kolla/config/nova/nova-compute.conf` and
add the contents::
@@ -62,14 +62,14 @@ add the contents::
virt_type=qemu
After this change Kolla will use an emulated hypervisor with lower performance.
-Kolla could have templated this commonly modified configuration option. If
+Kolla could have templated this commonly modified configuration option. If
Kolla starts down this path, the Kolla project could end with hundreds of
config options all of which would have to be subjectively evaluated for
inclusion or exclusion in the source tree.
Kolla's approach yields a solution which enables complete customization without
-any upstream maintenance burden. Operators don't have to rely on a subjective
+any upstream maintenance burden. Operators don't have to rely on a subjective
approval process for configuration options nor rely on a
-development/test/release cycle to obtain a desired customization. Instead
+development/test/release cycle to obtain a desired customization. Instead
operators have ultimate freedom to make desired deployment choices immediately
without the approval of a third party.
diff --git a/doc/image-building.rst b/doc/image-building.rst
index f96747ef3f..eb70d8aaaf 100644
--- a/doc/image-building.rst
+++ b/doc/image-building.rst
@@ -15,9 +15,8 @@ The ``kolla-build`` command is responsible for building docker images.
Generating kolla-build.conf
===========================
-Install tox and generate the build configuration. The build
-configuration is designed to hold advanced customizations when building
-containers.
+Install tox and generate the build configuration. The build configuration is
+designed to hold advanced customizations when building containers.
Create kolla-build.conf using the following steps.
::
@@ -25,9 +24,10 @@ Create kolla-build.conf using the following steps.
pip install tox
tox -e genconfig
-The location of the generated configuration file is ``etc/kolla/kolla-build.conf``,
-You can also copy it to ``/etc/kolla``. The default location is one of
-``/etc/kolla/kolla-build.conf`` or ``etc/kolla/kolla-build.conf``.
+The location of the generated configuration file is
+``etc/kolla/kolla-build.conf``. You can also copy it to ``/etc/kolla``. The
+default location is one of ``/etc/kolla/kolla-build.conf`` or
+``etc/kolla/kolla-build.conf``.
Guide
=====
@@ -59,7 +59,7 @@ command line::
kolla-build keystone
In this case, the build script builds all images which name contains the
-'keystone' string along with their dependencies.
+*keystone* string along with their dependencies.
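The matching is described as a substring test over image names; a minimal sketch of that behaviour (the image names below are illustrative, and the real build code is more involved):

```shell
# Substring match over candidate image names, mirroring the
# "name contains" behaviour described above (names are illustrative).
images="keystone keystone-fernet nova-api glance-api"
for img in $images; do
    case "$img" in
        *keystone*) echo "building $img" ;;
    esac
done
```

Running this prints ``building keystone`` and ``building keystone-fernet``; ``nova-api`` and ``glance-api`` are skipped.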
Multiple names may be specified on the command line::
@@ -88,12 +88,11 @@ command::
Build OpenStack from Source
===========================
-When building images, there are two methods of the OpenStack install.
-One is ``binary``. Another is ``source``.
-The ``binary`` means that OpenStack will be installed from apt/yum.
-And the ``source`` means that OpenStack will be installed from source code.
-The default method of the OpenStack install is ``binary``.
-It can be changed to ``source`` using the ``-t`` option::
+When building images, there are two methods of installing OpenStack. One is
+``binary``, the other is ``source``. The ``binary`` method installs OpenStack
+from apt/yum packages, while the ``source`` method installs it from source
+code. The default install method is ``binary``. It can be changed to
+``source`` using the ``-t`` option::
kolla-build -t source
@@ -125,7 +124,7 @@ the best use of the docker cache.
To build RHEL containers, it is necessary to use the -i (include header)
feature to include registration with RHN of the container runtime operating
-system. To obtain a RHN username/password/pool id, contact Red Hat.
+system. To obtain an RHN username/password/pool id, contact Red Hat.
First create a file called rhel-include::
@@ -143,7 +142,7 @@ The build method allows the operator to build containers from custom repos.
The repos are accepted as a list of comma separated values and can be in
the form of .repo, .rpm, or a url. See examples below.
-Update rpm_setup_config in /etc/kolla/kolla-build.conf::
+Update rpm_setup_config in ``/etc/kolla/kolla-build.conf``::
rpm_setup_config = http://trunk.rdoproject.org/centos7/currrent/delorean.repo,http://trunk.rdoproject.org/centos7/delorean-deps.repo
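As described above, the value is a comma separated list whose entries may be a ``.repo`` file, an ``.rpm``, or a URL; splitting it by entry type can be sketched as follows (the entries are illustrative, and this is not Kolla's actual parser):

```shell
# Split the comma-separated rpm_setup_config value and classify each
# entry by suffix, as described above (illustrative entries only).
rpm_setup_config="http://trunk.rdoproject.org/centos7/current/delorean.repo,epel-release.rpm,http://mirror.example.com/extras"
(
    IFS=','
    for entry in $rpm_setup_config; do
        case "$entry" in
            *.repo) echo "repo file: $entry" ;;
            *.rpm)  echo "rpm package: $entry" ;;
            *)      echo "plain url: $entry" ;;
        esac
    done
)
```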
@@ -206,13 +205,12 @@ Known issues
Docker Local Registry
=====================
-It is recommended to set up local registry for Kolla developers
-or deploying multinode. The reason using a local registry is
-deployment performance will operate at local network speeds,
-typically gigabit networking. Beyond performance considerations,
-the Operator would have full control over images that are deployed.
-If there is no local registry, nodes pull images from Docker Hub
-when images are not found in local caches.
+It is recommended to set up a local registry for Kolla developers or for
+*multinode* deployments. The reason for using a local registry is that
+deployment performance will operate at local network speeds, typically gigabit
+networking. Beyond performance considerations, the Operator would have full
+control over images that are deployed. If there is no local registry, nodes
+pull images from Docker Hub when images are not found in local caches.
Setting up Docker Local Registry
--------------------------------
@@ -225,18 +223,17 @@ Running Docker registry is easy. Just use the following command::
Note: ```` points to the folder where Docker registry
will store Docker images on the local host.
-The default port of Docker registry is 5000.
-But the 5000 port is also the port of keystone-api.
-To avoid conflict, use 4000 port as Docker registry port.
+The default port of the Docker registry is 5000, but port 5000 is also used by
+keystone-api. To avoid the conflict, use port 4000 as the Docker registry port.
Now the Docker registry service is running.
Docker Insecure Registry Config
-------------------------------
-For docker to pull images, it is necessary to
-modify the Docker configuration. The guide assumes that
-the IP of the machine running Docker registry is 172.22.2.81.
+For Docker to pull images, it is necessary to modify the Docker configuration.
+The guide assumes that the IP of the machine running Docker registry is
+172.22.2.81.
In Ubuntu, add ``--insecure-registry 172.22.2.81:4000``
to ``DOCKER_OPTS`` in ``/etc/default/docker``.
@@ -255,16 +252,15 @@ Kolla-ansible with Local Registry
To make kolla-ansible pull images from local registry, set
``"docker_registry"`` to ``"172.22.2.81:4000"`` in
-``"/etc/kolla/globals.yml"``. Make sure Docker is allowed to pull
-images from insecure registry. See
-`Docker Insecure Registry Config`_.
+``"/etc/kolla/globals.yml"``. Make sure Docker is allowed to pull images from
+an insecure registry. See `Docker Insecure Registry Config`_.
Building behind a proxy
-----------------------
The build script supports augmenting the Dockerfiles under build via so called
-`header` and `footer` files. Statements in the `header` file are included at
+`header` and `footer` files. Statements in the `header` file are included at
the top of the `base` image, while those in `footer` are included at the bottom
of every Dockerfile in the build.
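Conceptually, the augmentation is plain text insertion: ``header`` content lands near the top of the base Dockerfile and ``footer`` content at the bottom. A rough stand-alone sketch (file contents are illustrative; the real build tool does this internally):

```shell
# Simulate header/footer insertion around a minimal base Dockerfile.
workdir=$(mktemp -d)
cd "$workdir"
printf 'FROM centos\nRUN yum -y update\n' > Dockerfile.base
printf 'ENV http_proxy=http://proxy.example.com:3128\n' > header
printf 'ENV http_proxy=\n' > footer
# Keep the FROM line first, then header, then the rest, then footer.
{ head -n 1 Dockerfile.base; cat header; tail -n +2 Dockerfile.base; cat footer; } > Dockerfile
cat Dockerfile
```

The resulting Dockerfile sets the proxy for the build steps and clears it again at the end, which is the usual reason for using this mechanism when building behind a proxy.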
diff --git a/doc/kibana-guide.rst b/doc/kibana-guide.rst
index 46694acf1b..6419a11bee 100644
--- a/doc/kibana-guide.rst
+++ b/doc/kibana-guide.rst
@@ -24,13 +24,13 @@ pattern. To view, analyse and search logs, at least one index pattern has to
be created. To match indices stored in ElasticSearch, we suggest to use
following configuration:
-- Index contains time-based events - check
-- Use event times to create index names [DEPRECATED] - not checked
-- Index name or pattern - log-*
-- Do not expand index pattern when searching (Not recommended) - not checked
-- Time-field name - Timestamp
+#. Index contains time-based events - check
+#. Use event times to create index names [DEPRECATED] - not checked
+#. Index name or pattern - log-*
+#. Do not expand index pattern when searching (Not recommended) - not checked
+#. Time-field name - Timestamp
-After setting parameters, one can create an index with 'Create' button.
+After setting parameters, one can create an index with the *Create* button.
Note: This step is necessary until the default Kibana dashboard is implemented
in Kolla.
@@ -51,10 +51,10 @@ Visualize data - Visualize tab
==============================
In the visualization tab a wide range of charts is available. If any
-visualization has not been saved yet, after choosing this tab 'Create a new
-visualization' panel is opened. If a visualization has already been saved,
+visualization has not been saved yet, after choosing this tab the *Create a new
+visualization* panel is opened. If a visualization has already been saved,
after choosing this tab, lately modified visualization is opened. In this
-case, one can create a new visualization by choosing 'add visualization'
+case, one can create a new visualization by choosing the *add visualization*
option in the menu on the right. In order to create new visualization, one
of the available options has to be chosen (pie chart, area chart). Each
visualization can be created from a saved or a new search. After choosing
@@ -63,8 +63,8 @@ generated and previewed. In the menu on the left, metrics for a chart can
be chosen. The chart can be generated by pressing a green arrow on the top
of the left-side menu.
-NOTE: After creating a visualization, it can be saved by choosing 'save
-visualization' option in the menu on the right. If it is not saved, it will
+NOTE: After creating a visualization, it can be saved by choosing the *save
+visualization* option in the menu on the right. If it is not saved, it will
be lost after leaving a page or creating another visualization.
Organize visualizations and searches - Dashboard tab
@@ -72,17 +72,17 @@ Organize visualizations and searches - Dashboard tab
In the Dashboard tab all of saved visualizations and searches can be
organized in one Dashboard. To add visualization or search, one can choose
-'add visualization' option in the menu on the right and then choose an item
+the *add visualization* option in the menu on the right and then choose an item
from all saved ones. The order and size of elements can be changed directly
in this place by moving them or resizing. The color of charts can also be
changed by checking a colorful dots on the legend near each visualization.
-NOTE: After creating a dashboard, it can be saved by choosing 'save dashboard'
+NOTE: After creating a dashboard, it can be saved by choosing the *save dashboard*
option in the menu on the right. If it is not saved, it will be lost after
leaving a page or creating another dashboard.
-If a Dashboard has already been saved, it can be opened by choosing 'open
-dashboard' option in the menu on the right.
+If a Dashboard has already been saved, it can be opened by choosing the *open
+dashboard* option in the menu on the right.
Exporting and importing created items - Settings tab
=====================================================
@@ -90,6 +90,6 @@ Exporting and importing created items - Settings tab
Once visualizations, searches or dashboards are created, they can be exported
to a json format by choosing Settings tab and then Objects tab. Each of the
item can be exported separately by selecting it in the menu. All of the items
-can also be exported at once by choosing 'export everything' option.
+can also be exported at once by choosing the *export everything* option.
In the same tab (Settings - Objects) one can also import saved items by
-choosing 'import' option.
+choosing the *import* option.
diff --git a/doc/liberty-deployment-warning.rst b/doc/liberty-deployment-warning.rst
index ebd206c6bf..0a95398884 100644
--- a/doc/liberty-deployment-warning.rst
+++ b/doc/liberty-deployment-warning.rst
@@ -6,34 +6,34 @@ Liberty 1.0.0 Deployment Warning
Warning Overview
================
-Please use Liberty 1.1.0 tag or later when using Kolla. No data loss
-occurs with this version. stable/liberty is also fully functional and
+Please use the Liberty 1.1.0 tag or later when using Kolla. No data loss
+occurs with this version. ``stable/liberty`` is also fully functional and
suffers no data loss.
Data loss with 1.0.0
====================
The Kolla community discovered in the of middle Mitaka development that it
-was possible for data loss to occur if the data container is rebuilt. In
+was possible for data loss to occur if the data container is rebuilt. In
this scenario, Docker pulls a new container, and the new container doesn't
-contain the data from the old container. Kolla stable/liberty and Kolla
-1.0.0 are not to be used at this time, as they result in *critical data loss
-problems*.
+contain the data from the old container. Kolla ``stable/liberty`` and Kolla
+1.0.0 are not to be used at this time, as they result in **critical data loss
+problems**.
Resolution
==========
To rectify this problem, the OpenStack release and infrastructure teams
in coordination with the Kolla team executed the following actions:
-* Deleted the stable/liberty branch (where 1.0.0 was tagged from)
-* Created a tag liberty-early-demise at the end of the broken stable/liberty
+* Deleted the ``stable/liberty`` branch (where 1.0.0 was tagged from)
+* Created a tag ``liberty-early-demise`` at the end of the broken ``stable/liberty``
branch development.
-* Created a new stable/liberty branch based upon stable/mitaka.
-* Corrected stable/liberty to deploy Liberty.
-* Released Kolla 1.1.0 from the newly created stable/liberty branch.
+* Created a new ``stable/liberty`` branch based upon ``stable/mitaka``.
+* Corrected ``stable/liberty`` to deploy Liberty.
+* Released Kolla 1.1.0 from the newly created ``stable/liberty`` branch.
End Result
==========
A fully functional Liberty OpenStack deployment based upon the two years of
-testing that went into the development that went into stable/mitaka.
+testing that went into the development of ``stable/mitaka``.
The docker-engine 1.10.0 or later is required.
diff --git a/doc/manila-guide.rst b/doc/manila-guide.rst
index 1525925a7c..f7c4a75f76 100644
--- a/doc/manila-guide.rst
+++ b/doc/manila-guide.rst
@@ -34,7 +34,7 @@ services are properly working.
Preparation and Deployment
==========================
-Cinder and Ceph are required, enable it in /etc/kolla/globals.yml:
+Cinder and Ceph are required, enable it in ``/etc/kolla/globals.yml``:
.. code-block:: console
@@ -47,13 +47,14 @@ Enable Manila in /etc/kolla/globals.yml:
enable_manila: "yes"
-By default Manila uses instance flavor id 100 for its file systems. For
-Manila to work, either create a new nova flavor with id 100 (using "nova
-flavor-create") or change service_instance_flavor_id to use one of the
-default nova flavor ids.
-Ex: service_instance_flavor_id = 2 to use nova default flavor m1.small.
+By default Manila uses instance flavor id 100 for its file systems. For Manila
+to work, either create a new nova flavor with id 100 (use *nova flavor-create*)
+or change *service_instance_flavor_id* to use one of the default nova flavor
+ids. For example, *service_instance_flavor_id = 2* uses the nova default
+flavor ``m1.small``.
-Create or modify the file /etc/kolla/config/manila.conf and add the contents:
+Create or modify the file ``/etc/kolla/config/manila-share.conf`` and add the
+contents:
.. code-block:: console
@@ -79,11 +80,11 @@ to verify successful launch of each process:
Launch an Instance
==================
-Before being able to create a share, the manila with the generic driver and
-the DHSS mode enabled requires the definition of at least an image,
-a network and a share-network for being used to create a share server.
-For that back end configuration, the share server is an instance where
-NFS/CIFS shares are served.
+Before a share can be created, Manila with the generic driver and DHSS mode
+enabled requires the definition of at least an image, a network and a
+share-network that are used to create a share server. For that back end
+configuration, the share server is an instance where NFS/CIFS shares are
+served.
Determine the configuration of the share server
===============================================
@@ -166,8 +167,8 @@ Create a shared network
| description | None |
+-------------------+--------------------------------------+
-Create a flavor (Required if you not defined manila_instance_flavor_id in
-/etc/kolla/config/manila.conf file)
+Create a flavor (**Required** if you have not defined
+*manila_instance_flavor_id* in the ``/etc/kolla/config/manila-share.conf`` file)
.. code-block:: console
diff --git a/doc/multinode.rst b/doc/multinode.rst
index e7b6c1f874..f2a4abd273 100644
--- a/doc/multinode.rst
+++ b/doc/multinode.rst
@@ -7,31 +7,29 @@ Multinode Deployment of Kolla
Deploy a registry (required for multinode)
==========================================
-A Docker registry is a locally hosted registry that replaces the need
-to pull from the Docker Hub to get images. Kolla can function with
-or without a local registry, however for a multinode deployment a registry
-is required.
+A Docker registry is a locally hosted registry that replaces the need to pull
+from the Docker Hub to get images. Kolla can function with or without a local
+registry, however for a multinode deployment a registry is required.
-The Docker registry prior to version 2.3 has extremely bad performance
-because all container data is pushed for every image rather than taking
-advantage of Docker layering to optimize push operations. For more
-information reference
+The Docker registry prior to version 2.3 has extremely bad performance because
+all container data is pushed for every image rather than taking advantage of
+Docker layering to optimize push operations. For more information, see
`pokey registry `__.
-The Kolla community recommends using registry 2.3 or later. To deploy
-registry 2.3 do the following:
+The Kolla community recommends using registry 2.3 or later. To deploy registry
+2.3 do the following:
::
docker run -d -p 4000:5000 --restart=always --name registry registry:2
-Note: Kolla looks for the Docker registry to use port 4000. (Docker default
-is port 5000)
+Note: Kolla looks for the Docker registry to use port 4000 (the Docker
+default is port 5000).
After starting the registry, it is necessary to instruct Docker that it will
-be communicating with an insecure registry. To enable insecure registry
-communication on CentOS, modify the "/etc/sysconfig/docker" file to contain
+be communicating with an insecure registry. To enable insecure registry
+communication on CentOS, modify the ``/etc/sysconfig/docker`` file to contain
the following where 192.168.1.100 is the IP address of the machine where the
registry is currently running:
@@ -40,18 +38,17 @@ registry is currently running:
# CentOS
other_args="--insecure-registry 192.168.1.100:4000"
-For Ubuntu, edit /etc/default/docker and add:
+For Ubuntu, edit ``/etc/default/docker`` and add:
::
# Ubuntu
DOCKER_OPTS="--insecure-registry 192.168.1.100:4000"
-Docker Inc's packaged version of docker-engine for CentOS is defective and
-does not read the other_args configuration options from
-"/etc/sysconfig/docker". To rectify this problem, ensure the
-following lines appear in the drop-in unit file at
-"/etc/systemd/system/docker.service.d/kolla.conf":
+Docker Inc's packaged version of docker-engine for CentOS is defective and does
+not read the ``other_args`` configuration options from ``/etc/sysconfig/docker``.
+To rectify this problem, ensure the following lines appear in the drop-in unit
+file at ``/etc/systemd/system/docker.service.d/kolla.conf``:
::
@@ -78,9 +75,9 @@ Edit the Inventory File
=======================
The ansible inventory file contains all the information needed to determine
-what services will land on which hosts. Edit the inventory file in the kolla
-directory ansible/inventory/multinode or if kolla was installed with pip, it
-can be found in /usr/share/kolla.
+what services will land on which hosts. Edit the inventory file in the kolla
+directory ``ansible/inventory/multinode`` or, if kolla was installed with pip,
+it can be found in ``/usr/share/kolla``.
Add the ip addresses or hostnames to a group and the services associated with
that group will land on that host:
@@ -95,9 +92,9 @@ that group will land on that host:
192.168.122.24
-For more advanced roles, the operator can edit which services will be associated
-in with each group. Keep in mind that some services have to be grouped together
-and changing these around can break your deployment:
+For more advanced roles, the operator can edit which services will be
+associated with each group. Keep in mind that some services have to be
+grouped together and changing these around can break your deployment:
::
diff --git a/doc/nova-fake-driver.rst b/doc/nova-fake-driver.rst
index d5ed86efff..7b4208e118 100644
--- a/doc/nova-fake-driver.rst
+++ b/doc/nova-fake-driver.rst
@@ -4,9 +4,9 @@
Nova Fake Driver
================
-One common question from OpenStack operators is that "how does the control plane
-(e.g., database, messaging queue, nova-scheduler ) scales?". To answer this
-question, operators setup Rally to drive workload to the OpenStack cloud.
+One common question from OpenStack operators is "how does the control
+plane (e.g., database, messaging queue, nova-scheduler) scale?". To answer
+this question, operators set up Rally to drive workload to the OpenStack cloud.
However, without a large number of nova-compute nodes, it becomes difficult to
exercise the control performance.
@@ -20,11 +20,11 @@ Use nova-fake driver
Nova fake driver can not work with all-in-one deployment. This is because the
fake neutron-openvswitch-agent for the fake nova-compute container conflicts
-with neutron-openvswitch-agent on the compute nodes. Therefore, in the inventory
-the network node must be different than the compute node.
+with neutron-openvswitch-agent on the compute nodes. Therefore, in the
+inventory the network node must be different from the compute node.
By default, Kolla uses libvirt driver on the compute node. To use nova-fake
-driver, edit the following parameters in ansible/group_vars or in the
+driver, edit the following parameters in ``ansible/group_vars`` or in the
command line options.
::
@@ -33,5 +33,5 @@ command line options.
num_nova_fake_per_node: 5
Each compute nodes will run 5 nova-compute containers and 5
-neutron-plugin-agent containers. When booting instance, there will be no
-real instances created. But "nova list" shows the fake instances.
+neutron-plugin-agent containers. When booting instances, there will be no real
+instances created, but ``nova list`` shows the fake instances.
diff --git a/doc/operating-kolla.rst b/doc/operating-kolla.rst
index 98addd99dc..85affda004 100644
--- a/doc/operating-kolla.rst
+++ b/doc/operating-kolla.rst
@@ -24,16 +24,16 @@ Tips and Tricks
===============
Kolla ships with several utilities intended to facilitate ease of operation.
-``tools/cleanup-containers`` can be used to remove deployed containers from
-the system. This can be useful when you want to do a new clean deployment. It
-will preserve the registry and the locally built images in the registry,
-but will remove all running Kolla containers from the local Docker daemon.
-It also removes the named volumes.
+``tools/cleanup-containers`` can be used to remove deployed containers from the
+system. This can be useful when you want to do a new clean deployment. It will
+preserve the registry and the locally built images in the registry, but will
+remove all running Kolla containers from the local Docker daemon. It also
+removes the named volumes.
``tools/cleanup-host`` can be used to remove remnants of network changes
triggered on the Docker host when the neutron-agents containers are launched.
-This can be useful when you want to do a new clean deployment, particularly
-one changing the network topology.
+This can be useful when you want to do a new clean deployment, particularly one
+changing the network topology.
-``tools/cleanup-images`` can be used to remove all Docker images built by
-Kolla from the local Docker cache.
+``tools/cleanup-images`` can be used to remove all Docker images built by Kolla
+from the local Docker cache.
diff --git a/doc/quickstart.rst b/doc/quickstart.rst
index ebdeedd963..b67538b5b5 100644
--- a/doc/quickstart.rst
+++ b/doc/quickstart.rst
@@ -19,21 +19,21 @@ Recommended Environment
=======================
If developing or evaluating Kolla, the community strongly recommends using bare
-metal or a virtual machine. Follow the instructions in this document to get
+metal or a virtual machine. Follow the instructions in this document to get
started with deploying OpenStack on bare metal or a virtual machine with Kolla.
There are other deployment environments referenced below in `Additional Environments`_.
Install Dependencies
====================
-Kolla is tested on CentOS, Oracle Linux, RHEL and Ubuntu as both container
-OS platforms and bare metal deployment targets.
+Kolla is tested on CentOS, Oracle Linux, RHEL and Ubuntu as both container OS
+platforms and bare metal deployment targets.
Fedora: Kolla will not run on Fedora 22 and later as a bare metal deployment
target. These distributions compress kernel modules with the .xz compressed
format. The guestfs system in the CentOS family of containers cannot read
-these images because a dependent package supermin in CentOS needs to be
-updated to add .xz compressed format support.
+these images because a dependent package supermin in CentOS needs to be updated
+to add .xz compressed format support.
Ubuntu: For Ubuntu based systems where Docker is used it is recommended to use
the latest available LTS kernel. The latest LTS kernel available is the wily
@@ -45,7 +45,7 @@ and OverlayFS. In order to update kernel in Ubuntu 14.04 LTS to 4.2, run:
apt-get -y install linux-image-generic-lts-wily
-.. NOTE:: Install is *very* sensitive about version of components. Please
+.. NOTE:: Install is *very* sensitive to the versions of components. Please
review carefully because default Operating System repos are likely out of
date.
@@ -89,10 +89,10 @@ command:
docker --version
-When running with systemd, setup docker-engine with the appropriate
-information in the Docker daemon to launch with. This means setting up the
-following information in the ``docker.service`` file. If you do not set the
-MountFlags option correctly then ``kolla-ansible`` will fail to deploy the
+When running with systemd, set up docker-engine with the appropriate
+information for the Docker daemon to launch with. This means setting up the
+following information in the ``docker.service`` file. If you do not set the
+``MountFlags`` option correctly then ``kolla-ansible`` will fail to deploy the
``neutron-dhcp-agent`` container and throws APIError/HTTPError. After adding
the drop-in unit file as follows, reload and restart the docker service:
@@ -138,15 +138,15 @@ Or using ``pip`` to install a latest version:
pip install -U docker-py
-OpenStack, RabbitMQ, and Ceph require all hosts to have matching times to ensure
-proper message delivery. In the case of Ceph, it will complain if the hosts
-differ by more than 0.05 seconds. Some OpenStack services have timers as low as
-2 seconds by default. For these reasons it is highly recommended to setup an NTP
-service of some kind. While ``ntpd`` will achieve more accurate time for the
-deployment if the NTP servers are running in the local deployment environment,
-`chrony `_ is more accurate when syncing the time
-across a WAN connection. When running Ceph it is recommended to setup ``ntpd`` to
-sync time locally due to the tight time constraints.
+OpenStack, RabbitMQ, and Ceph require all hosts to have matching times to
+ensure proper message delivery. In the case of Ceph, it will complain if the
+hosts differ by more than 0.05 seconds. Some OpenStack services have timers as
+low as 2 seconds by default. For these reasons it is highly recommended to
+set up an NTP service of some kind. While ``ntpd`` will achieve more accurate
+time for the deployment if the NTP servers are running in the local deployment
+environment, `chrony `_ is more accurate when
+syncing the time across a WAN connection. When running Ceph it is recommended
+to set up ``ntpd`` to sync time locally due to the tight time constraints.
To install, start, and enable ntp on CentOS execute the following:
@@ -163,9 +163,9 @@ To install and start on Debian based systems execute the following:
apt-get install ntp
-Libvirt is started by default on many operating systems. Please disable ``libvirt``
-on any machines that will be deployment targets. Only one copy of libvirt may
-be running at a time.
+Libvirt is started by default on many operating systems. Please disable
+``libvirt`` on any machines that will be deployment targets. Only one copy of
+libvirt may be running at a time.
::
@@ -182,24 +182,24 @@ On Ubuntu, apparmor will sometimes prevent libvirt from working.
::
/usr/sbin/libvirtd: error while loading shared libraries: libvirt-admin.so.0: cannot open shared object file: Permission denied
-If you are seeing the libvirt container fail with the error above, disable
-the libvirt profile.
+If you are seeing the libvirt container fail with the error above, disable the
+libvirt profile.
::
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
-Kolla deploys OpenStack using
-`Ansible `__. Install Ansible from distribution
-packaging if the distro packaging has recommended version available.
+Kolla deploys OpenStack using `Ansible `__. Install
+Ansible from distribution packaging if the distro packaging has a recommended
+version available.
Some implemented distro versions of Ansible are too old to use distro
-packaging. Currently, CentOS and RHEL package Ansible 1.9.4 which is
-suitable for use with Kolla. As Ansible 2.0 is also available, version 1.9
-must be specified. Note that you will need to enable access
-to the EPEL repository to install via yum -- to do so, take a look at
-Fedora's EPEL `docs `__ and
+packaging. Currently, CentOS and RHEL package Ansible 1.9.4 which is suitable
+for use with Kolla. As Ansible 2.0 is also available, version 1.9 must be
+specified. Note that you will need to enable access to the EPEL repository to
+install via yum -- to do so, take a look at Fedora's EPEL
+`docs `__ and
`FAQ `__.
On CentOS or RHEL systems, this can be done using:
@@ -208,16 +208,16 @@ On CentOS or RHEL systems, this can be done using:
yum -y install ansible1.9
-Many DEB based systems do not meet Kolla's Ansible version requirements.
-It is recommended to use pip to install Ansible 1.9.4.
-Finally Ansible 1.9.4 may be installed using:
+Many DEB based systems do not meet Kolla's Ansible version requirements. It is
+recommended to use pip to install Ansible 1.9.4. Finally Ansible 1.9.4 may be
+installed using:
::
pip install -U ansible==1.9.4
-If DEB based systems include a version of Ansible that meets Kolla's
-version requirements it can be installed by:
+If DEB based systems include a version of Ansible that meets Kolla's version
+requirements it can be installed by:
::
@@ -277,31 +277,31 @@ To install the clients use:
Local Registry
==============
-A local registry is not required for an all-in-one installation. Check out the
-:doc:`multinode` for more information on using a local registry. Otherwise, the
-`Docker Hub Image Registry`_ contains all images from each of Kolla's major releases.
-The latest release tag is 2.0.0 for Mitaka.
+A local registry is not required for an ``all-in-one`` installation. Check out
+the :doc:`multinode` for more information on using a local registry. Otherwise,
+the `Docker Hub Image Registry`_ contains all images from each of Kolla's major
+releases. The latest release tag is 2.0.0 for Mitaka.
Additional Environments
=======================
-Two virtualized development environment options are available for Kolla.
-These options permit the development of Kolla without disrupting the host
-operating system.
+Two virtualized development environment options are available for Kolla. These
+options permit the development of Kolla without disrupting the host operating
+system.
If developing Kolla on an OpenStack cloud environment that supports Heat,
follow the :doc:`heat-dev-env`.
-If developing Kolla on a system that provides VirtualBox or Libvirt in
-addition to Vagrant, use the Vagrant virtual environment documented in
+If developing Kolla on a system that provides VirtualBox or Libvirt in addition
+to Vagrant, use the Vagrant virtual environment documented in
:doc:`vagrant-dev-env`.
-Currently the Heat development environment is entirely non-functional.
-The Kolla core reviewers have debated removing it from the repository
-but have resisted to provide an opportunity for contributors to make Heat
-usable for Kolla development. THe Kolla core reviewers believe Heat
-would offer a great way to develop Kolla in addition to Vagrant,
-bare metal, or a manually setup virtual machine.
+Currently the Heat development environment is entirely non-functional. The
+Kolla core reviewers have debated removing it from the repository but have
+held off to give contributors an opportunity to make Heat usable for Kolla
+development. The Kolla core reviewers believe Heat would offer a great way
+to develop Kolla in addition to Vagrant, bare metal, or a manually set up
+virtual machine.
For more information refer to
`_bug 1562334 `__.
@@ -340,8 +340,8 @@ Note ``--base`` and ``--type`` can be added to the above ``kolla-build``
command if different distributions or types are desired.
It is also possible to build individual containers. As an example, if the
-glance containers failed to build, all glance related containers can be
-rebuilt as follows:
+glance containers failed to build, all glance related containers can be rebuilt
+as follows:
::
@@ -361,13 +361,13 @@ instruction in :doc:`image-building`.
Deploying Kolla
===============
-The Kolla community provides two example methods of Kolla
-deploy: *all-in-one* and *multinode*. The "all-in-one" deploy is similar
-to `devstack `__ deploy which
-installs all OpenStack services on a single host. In the "multinode" deploy,
+The Kolla community provides two example methods of Kolla deploy: *all-in-one*
+and *multinode*. The *all-in-one* deploy is similar to
+`devstack `__ deploy which
+installs all OpenStack services on a single host. In the *multinode* deploy,
OpenStack services can be run on specific hosts. This documentation only
-describes deploying *all-in-one* method as most simple one. To setup multinode
-see the :doc:`multinode`.
+describes the *all-in-one* deployment method, as it is the simplest. To set up
+*multinode*, see the :doc:`multinode`.
Each method is represented as an Ansible inventory file. More information on
the Ansible inventory file can be found in the Ansible `inventory introduction
@@ -377,15 +377,15 @@ All variables for the environment can be specified in the files:
``/etc/kolla/globals.yml`` and ``/etc/kolla/passwords.yml``.
Generate passwords for ``/etc/kolla/passwords.yml`` using the provided
-``kolla-genpwd`` tool. The tool will populate all empty fields in the
+``kolla-genpwd`` tool. The tool will populate all empty fields in the
``/etc/kolla/passwords.yml`` file using randomly generated values to secure the
-deployment. Optionally, the passwords may be populate in the file by hand.
+deployment. Optionally, the passwords may be populated in the file by hand.
::
kolla-genpwd
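Conceptually, the behaviour of ``kolla-genpwd`` can be sketched as follows. This
is a hypothetical illustration, not the tool's actual implementation; the file
paths and the hex-token format are assumptions:

```shell
# Hypothetical sketch of what kolla-genpwd does: fill every empty field
# with a random token while leaving hand-set values untouched.
cat > /tmp/passwords.yml <<'EOF'
database_password:
keystone_admin_password: preset
EOF
awk -F': *' 'NF == 2 && $2 == "" {
                 cmd = "openssl rand -hex 16"   # random 32-char token
                 cmd | getline tok
                 close(cmd)
                 print $1 ": " tok
                 next
             }
             { print }' /tmp/passwords.yml > /tmp/passwords.out.yml
```

The real tool operates on ``/etc/kolla/passwords.yml`` in place.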
-Start by editing /etc/kolla/globals.yml. Check and edit, if needed, these
+Start by editing ``/etc/kolla/globals.yml``. Check and edit, if needed, these
parameters: ``kolla_base_distro``, ``kolla_install_type``. These parameters
should match what you used in the ``kolla-build`` command line. The default for
``kolla_base_distro`` is ``centos`` and for ``kolla_install_type`` is ``binary``.
@@ -399,23 +399,22 @@ sure ``globals.yml`` has the following entries:
Please specify an unused IP address in the network to act as a VIP for
-``kolla_internal_vip_address``. The VIP will be used with keepalived and
-added to the ``api_interface`` as specified in the ``globals.yml`` ::
+``kolla_internal_vip_address``. The VIP will be used with keepalived and added
+to the ``api_interface`` as specified in the ``globals.yml`` ::
kolla_internal_vip_address: "10.10.10.254"
The ``network_interface`` variable is the interface to which Kolla binds API
-services. For example, when starting up Mariadb it will bind to the
-IP on the interface list in the ``network_interface`` variable. ::
+services. For example, when starting up MariaDB it will bind to the IP on the
+interface listed in the ``network_interface`` variable. ::
network_interface: "eth0"
-The ``neutron_external_interface`` variable is the interface that will
-be used for the external bridge in Neutron. Without this bridge the deployment
-instance traffic will be unable to access the rest of the Internet. In
-the case of a single interface on a machine, a veth pair may be used where
-one end of the veth pair is listed here and the other end is in a bridge on
-the system. ::
+The ``neutron_external_interface`` variable is the interface that will be used
+for the external bridge in Neutron. Without this bridge the deployment instance
+traffic will be unable to access the rest of the Internet. In the case of a
+single interface on a machine, a veth pair may be used where one end of the
+veth pair is listed here and the other end is in a bridge on the system. ::
neutron_external_interface: "eth1"
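For the single-interface case just described, the veth arrangement might be set
up as in the following sketch (the interface and bridge names ``veth-ext``,
``veth-br``, and ``br-ex`` are assumptions; run as root and adapt to your
host):

```shell
# Hypothetical sketch: create a veth pair, put one end in a host bridge,
# and hand the other end to Kolla. Names here are illustrative only.
ip link add veth-br type veth peer name veth-ext
ip link add name br-ex type bridge
ip link set veth-br master br-ex
ip link set dev veth-br up
ip link set dev veth-ext up
ip link set dev br-ex up
# In /etc/kolla/globals.yml:
#   neutron_external_interface: "veth-ext"
```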
@@ -514,22 +513,21 @@ environment with a glance image and neutron networks:
Failures
========
-Nearly always when Kolla fails, it is caused by a CTRL-C during the
-deployment process or a problem in the ``globals.yml`` configuration.
+Nearly always when Kolla fails, it is caused by a CTRL-C during the deployment
+process or a problem in the ``globals.yml`` configuration.
-To correct the problem where Operators have a misconfigured
-environment, the Kolla developers have added a precheck feature which
-ensures the deployment targets are in a state where Kolla may deploy
-to them. To run the prechecks, execute:
+To help Operators correct a misconfigured environment, the Kolla developers
+have added a precheck feature which ensures the deployment targets are in a
+state where Kolla may deploy to them. To run the prechecks, execute:
::
kolla-ansible prechecks
-If a failure during deployment occurs it nearly always occurs during
-evaluation of the software. Once the Operator learns the few
-configuration options required, it is highly unlikely they will experience
-a failure in deployment.
+If a failure during deployment occurs, it nearly always occurs during
+evaluation of the software. Once the Operator learns the few configuration
+options required, it is highly unlikely they will experience a failure in
+deployment.
Deployment may be run as many times as desired, but if a failure in a
bootstrap task occurs, a further deploy action will not correct the problem.
@@ -545,14 +543,14 @@ On each node where OpenStack is deployed run:
tools/cleanup-containers
tools/cleanup-host
-The Operator will have to copy via scp or some other means the cleanup
-scripts to the various nodes where the failed containers are located.
+The Operator will have to copy the cleanup scripts, via scp or some other
+means, to the various nodes where the failed containers are located.
Any time the tags of a release change, it is possible that the container
-implementation from older versions won't match the Ansible playbooks in
-a new version. If running multinode from a registry, each node's Docker
-image cache must be refreshed with the latest images before a new deployment
-can occur. To refresh the docker cache from the local Docker registry:
+implementation from older versions won't match the Ansible playbooks in a new
+version. If running multinode from a registry, each node's Docker image cache
+must be refreshed with the latest images before a new deployment can occur. To
+refresh the docker cache from the local Docker registry:
::
@@ -578,7 +576,7 @@ The logs can be examined by executing:
docker exec -it heka bash
The logs from all services in all containers may be read from
-/var/log/kolla/SERVICE_NAME
+``/var/log/kolla/SERVICE_NAME``
If the stdout logs are needed, please run:
diff --git a/doc/security.rst b/doc/security.rst
index f61e241b0b..9acd601296 100644
--- a/doc/security.rst
+++ b/doc/security.rst
@@ -6,9 +6,9 @@ Kolla Security
Non Root containers
===================
-The OpenStack services, with a few exceptions, run as non root inside of Kolla's
-containers. Kolla uses the Docker provided USER flag to set the appropriate
-user for each serivce.
+The OpenStack services, with a few exceptions, run as non-root inside of
+Kolla's containers. Kolla uses the Docker-provided USER flag to set the
+appropriate user for each service.
SELinux
=======
@@ -31,14 +31,15 @@ address volumes directly by name removing the need for so called **data
containers** all together.
Another solution to the persistent data issue is to use a host bind mount which
-involves making, for sake of example, host directory ``var/lib/mysql`` available
-inside the container at ``var/lib/mysql``. This absolutely solves the problem of
-persistent data, but it introduces another security issue, permissions. With
-this host bind mount solution the data in ``var/lib/mysql`` will be owned by the
-mysql user in the container. Unfortunately, that mysql user in the container
-could have any UID/GID and thats who will own the data outside the container
-introducing a potential security risk. Additionally, this method dirties the
-host and requires host permissions to the directories to bind mount.
+involves making, for the sake of example, host directory ``var/lib/mysql``
+available inside the container at ``var/lib/mysql``. This absolutely solves the
+problem of persistent data, but it introduces another security issue:
+permissions. With this host bind mount solution the data in ``var/lib/mysql``
+will be owned by the mysql user in the container. Unfortunately, that mysql
+user in the container could have any UID/GID, and that's who will own the data
+outside the container, introducing a potential security risk. Additionally,
+this method dirties the host and requires host permissions to the directories
+to bind mount.
The solution Kolla chose is named volumes.
diff --git a/doc/swift-guide.rst b/doc/swift-guide.rst
index 6531fc88df..c241d2f3f8 100644
--- a/doc/swift-guide.rst
+++ b/doc/swift-guide.rst
@@ -6,11 +6,12 @@ Swift in Kolla
Overview
========
-Kolla can deploy a full working Swift setup in either a AIO or multi node setup.
+Kolla can deploy a full working Swift setup in either an **all-in-one** or
+**multinode** setup.
Prerequisites
=============
-Before running Swift we need to generate "rings", which are binary compressed
+Before running Swift we need to generate **rings**, which are binary compressed
files that at a high level let the various Swift services know where data is in
the cluster. We hope to automate this process in a future release.
@@ -19,9 +20,9 @@ Disks with a partition table (recommended)
Swift also expects block devices to be available for storage. To prepare a disk
for use as Swift storage device, a special partition name and filesystem label
-need to be added. So that Kolla can detect those disks and mount for services.
+need to be added so that Kolla can detect those disks and mount them for
+services.
-Follow the example below to add 3 disks for an AIO demo setup.
+Follow the example below to add 3 disks for an **all-in-one** demo setup.
::
@@ -50,13 +51,13 @@ For evaluation, loopback devices can be used in lieu of real disks:
Disks without a partition table
===============================
-Kolla also supports unpartitioned disk (filesystem on /dev/sdc instead of
-/dev/sdc1) detection purely based on filesystem label. This is generally not a
-recommended practice but can be helpful for Kolla to take over Swift deployment
-already using disk like this.
+Kolla also supports unpartitioned disk (filesystem on ``/dev/sdc`` instead of
+``/dev/sdc1``) detection purely based on filesystem label. This is generally
+not a recommended practice but can be helpful when Kolla takes over a Swift
+deployment already using disks like this.
Given hard disks with labels swd1, swd2, swd3, use the following settings in
-ansible/roles/swift/defaults/main.yml
+``ansible/roles/swift/defaults/main.yml``.
::
@@ -66,9 +67,9 @@ ansible/roles/swift/defaults/main.yml
Rings
=====
-Run following commands locally to generate Rings for AIO demo setup. The
-commands work with "disks with partition table" example listed above. Please
-modify accordingly if your setup is different.
+Run the following commands locally to generate rings for an **all-in-one**
+demo setup. The commands work with the **disks with partition table** example
+listed above. Please modify accordingly if your setup is different.
::
@@ -122,22 +123,23 @@ modify accordingly if your setup is different.
/etc/kolla/config/swift/${ring}.builder rebalance;
done
-Similar commands can be used for multinode, you will just need to run the 'add' step for each IP
-in the cluster.
+Similar commands can be used for **multinode**; you will just need to run the
+**add** step for each IP in the cluster.
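For example, the per-IP *add* loop might look like the following sketch (the
IP addresses, port, device names, and weight are assumptions; adapt them to
your cluster and ring layout):

```shell
# Hypothetical multinode sketch: repeat the ring "add" step once per
# storage node, for each Swift disk on that node.
for ip in 192.168.122.24 192.168.122.25 192.168.122.26; do
    for disk in d0 d1 d2; do
        swift-ring-builder /etc/kolla/config/swift/object.builder \
            add "r1z1-${ip}:6000/${disk}" 1
    done
done
```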
For more info, see
http://docs.openstack.org/kilo/install-guide/install/apt/content/swift-initial-rings.html
Deploying
=========
-Enable Swift in /etc/kolla/globals.yml:
+Enable Swift in ``/etc/kolla/globals.yml``:
::
enable_swift : "yes"
-Once the rings are in place, deploying Swift is the same as any other Kolla Ansible service. Below
-is the minimal command to bring up Swift AIO, and it's dependencies:
+Once the rings are in place, deploying Swift is the same as any other Kolla
+Ansible service. Below is the minimal command to bring up Swift **all-in-one**
+and its dependencies:
::
diff --git a/doc/vagrant-dev-env.rst b/doc/vagrant-dev-env.rst
index b2933ca46d..53726e4c33 100644
--- a/doc/vagrant-dev-env.rst
+++ b/doc/vagrant-dev-env.rst
@@ -4,8 +4,8 @@
Development Environment with Vagrant
====================================
-This guide describes how to use `Vagrant `__ to
-assist in developing for Kolla.
+This guide describes how to use `Vagrant `__ to assist in
+developing for Kolla.
Vagrant is a tool to assist in scripted creation of virtual machines. Vagrant
takes care of setting up CentOS-based VMs for Kolla development, each with
@@ -14,26 +14,26 @@ proper hardware like memory amount and number of network interfaces.
Getting Started
===============
-The Vagrant script implements All-in-One (AIO) or multi-node deployments. AIO
-is the default.
+The Vagrant script implements **all-in-one** or **multi-node** deployments.
+**all-in-one** is the default.
-In the case of multi-node deployment, the Vagrant setup builds a cluster with
-the following nodes by default:
+In the case of **multi-node** deployment, the Vagrant setup builds a cluster
+with the following nodes by default:
-- 3 control nodes
-- 1 compute node
-- 1 storage node (Note: ceph requires at least 3 storage nodes)
-- 1 network node
-- 1 operator node
+* 3 control nodes
+* 1 compute node
+* 1 storage node (Note: ceph requires at least 3 storage nodes)
+* 1 network node
+* 1 operator node
The cluster node count can be changed by editing the Vagrantfile.
Kolla runs from the operator node to deploy OpenStack.
-All nodes are connected with each other on the secondary NIC. The
-primary NIC is behind a NAT interface for connecting with the Internet.
-The third NIC is connected without IP configuration to a public bridge
-interface. This may be used for Neutron/Nova to connect to instances.
+All nodes are connected with each other on the secondary NIC. The primary NIC
+is behind a NAT interface for connecting with the Internet. The third NIC is
+connected without IP configuration to a public bridge interface. This may be
+used for Neutron/Nova to connect to instances.
Start by downloading and installing the Vagrant package for the distro of
choice. Various downloads can be found at the `Vagrant downloads
@@ -45,12 +45,12 @@ On Fedora 22 it is as easy as::
On Ubuntu 14.04 it is as easy as::
- sudo apt-get -y install vagrant ruby-dev ruby-libvirt python-libvirt libvirt-dev nfs-kernel-server
+ sudo apt-get install vagrant ruby-dev ruby-libvirt python-libvirt libvirt-dev nfs-kernel-server
**Note:** Many distros ship outdated versions of Vagrant by default. When in
doubt, always install the latest from the downloads page above.
-Next install the hostmanager plugin so all hosts are recorded in /etc/hosts
+Next install the hostmanager plugin so all hosts are recorded in ``/etc/hosts``
(inside each vm)::
vagrant plugin install vagrant-hostmanager
@@ -85,7 +85,7 @@ Find a location in the system's home directory and checkout the Kolla repo::
git clone https://github.com/openstack/kolla.git
-Developers can now tweak the Vagrantfile or bring up the default AIO
+Developers can now tweak the Vagrantfile or bring up the default **all-in-one**
CentOS 7-based environment::
cd kolla/dev/vagrant && vagrant up
@@ -97,16 +97,16 @@ Vagrant Up
==========
Once Vagrant has completed deploying all nodes, the next step is to launch
-Kolla. First, connect with the *operator* node::
+Kolla. First, connect with the **operator** node::
vagrant ssh operator
-To speed things up, there is a local registry running on the operator. All
+To speed things up, there is a local registry running on the operator. All
nodes are configured so they can use this insecure repo to pull from, and use
it as a mirror. Ansible may use this registry to pull images from.
All nodes have a local folder shared between the group and the hypervisor, and
-a folder shared between *all* nodes and the hypervisor. This mapping is lost
+a folder shared between **all** nodes and the hypervisor. This mapping is lost
after reboots, so make sure to use the command ``vagrant reload `` when
reboots are required. Having this shared folder provides a method to supply
a different docker binary to the cluster. The shared folder is also used to
@@ -116,18 +116,18 @@ like ``vagrant destroy``.
Building images
---------------
-Once logged on the *operator* VM call the ``kolla-build`` utility::
+Once logged on the **operator** VM call the ``kolla-build`` utility::
kolla-build
``kolla-build`` accept arguments as documented in :doc:`image-building`. It
-builds Docker images and pushes them to the local registry if the *push*
+builds Docker images and pushes them to the local registry if the **push**
option is enabled (in Vagrant this is the default behaviour).
Deploying OpenStack with Kolla
------------------------------
-Deploy AIO with::
+Deploy **all-in-one** with::
sudo kolla-ansible deploy