Merge "Update Documentation"

Jenkins 2016-05-31 11:08:52 +00:00 committed by Gerrit Code Review
commit 63e7f63af9
14 changed files with 310 additions and 312 deletions

View File

@ -7,22 +7,22 @@ How To Contribute
Basics
======

#. Our source code is hosted on `OpenStack GitHub`_, but pull requests submitted
   through GitHub will be ignored. Bugs should be filed on launchpad_,
   not GitHub.
#. Please follow the OpenStack `Gerrit Workflow`_ to contribute to Kolla.
#. Note the branch you're proposing changes to. ``master`` is the current focus
   of development. The Kolla project has a strict policy of only allowing backports
   in ``stable/branch``, unless not applicable. A bug in a ``stable/branch``
   will first have to be fixed in ``master``.
#. Please file a launchpad_ blueprint for any significant code change and a bug
   for any significant bug fix, or add a TrivialFix tag for simple changes.
   See how to reference a bug or a blueprint in the commit message here_.
#. TrivialFix tags or bugs are not required for documentation changes.

.. _OpenStack GitHub: https://github.com/openstack/kolla
.. _Gerrit Workflow: http://docs.openstack.org/infra/manual/developers.html#development-workflow
@ -32,6 +32,7 @@ Basics
Development Environment
========================

#. Please follow our `quickstart`_ to deploy your environment and test your
   changes.

.. _quickstart: http://docs.openstack.org/developer/kolla/quickstart.html

View File

@ -6,7 +6,7 @@ Ceph in Kolla
The out-of-the-box Ceph deployment requires 3 hosts with at least one block
device on each host that can be dedicated for sole use by Ceph. However, with
tweaks to the Ceph cluster you can deploy a **healthy** cluster with a single
host and a single block device.

Requirements
@ -21,8 +21,8 @@ Preparation and Deployment
To prepare a disk for use as a
`Ceph OSD <http://docs.ceph.com/docs/master/man/8/ceph-osd/>`_ you must add a
special partition label to the disk. This partition label is how Kolla detects
the disks to format and bootstrap. Any disk with a matching partition label
will be reformatted so use caution.

To prepare an OSD as a storage drive, execute the following operations:
@ -32,7 +32,8 @@ To prepare an OSD as a storage drive, execute the following operations:
    # where $DISK is /dev/sdb or something similar
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

The following shows an example of using parted to configure ``/dev/sdb`` for
usage with Kolla.

::
@ -56,24 +57,25 @@ hosts that have the block devices you have prepped as shown above.
    compute1

Enable Ceph in ``/etc/kolla/globals.yml``:

::

    enable_ceph: "yes"

RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:

::

    enable_ceph_rgw: "yes"

RGW requires a healthy cluster in order to be successfully deployed. On initial
start up, RGW will create several pools. The first pool should be in an
operational state to proceed with the second one, and so on. So, in the case of
an **all-in-one** deployment, it is necessary to change the default number of
copies for the pools before deployment. Modify the file ``/etc/kolla/config/ceph.conf``
and add the contents::

    [global]
    osd pool default size = 1
@ -89,9 +91,8 @@ Finally deploy the Ceph-enabled OpenStack:
Using a Cache Tier
==================

An optional `cache tier <http://docs.ceph.com/docs/hammer/rados/operations/cache-tiering/>`_
can be deployed by formatting at least one cache device and enabling cache
tiering in the globals.yml configuration file.

To prepare an OSD as a cache device, execute the following operations:
@ -102,7 +103,7 @@ To prepare an OSD as a cache device, execute the following operations:
    # where $DISK is /dev/sdb or something similar
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1

Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:

::
@ -123,13 +124,13 @@ Setting up an Erasure Coded Pool
`Erasure code <http://docs.ceph.com/docs/hammer/rados/operations/erasure-code/>`_
is the new big thing from Ceph. Kolla has the ability to setup your Ceph pools
as erasure coded pools. Due to technical limitations with Ceph, using erasure
coded pools as OpenStack uses them requires a cache tier. Additionally, you
must make the choice to use an erasure coded pool or a replicated pool
(the default) when you initially deploy. You cannot change this without
completely removing the pool and recreating it.

To enable erasure coded pools add the following options to your ``/etc/kolla/globals.yml``
configuration file:

::
@ -157,9 +158,10 @@ indicates a healthy cluster:
    68676 kB used, 20390 MB / 20457 MB avail
    64 active+clean

If Ceph is run in an **all-in-one** deployment or with less than three storage
nodes, further configuration is required. It is necessary to change the default
number of copies for the pool. The following example demonstrates how to change
the number of copies for the pool to 1:

::
@ -178,7 +180,7 @@ If using a cache tier, these changes must be made as well:
    for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done

The default pool Ceph creates is named **rbd**. It is safe to remove this pool:

::

View File

@ -15,9 +15,8 @@ The ``kolla-build`` command is responsible for building docker images.
Generating kolla-build.conf
===========================

Install tox and generate the build configuration. The build configuration is
designed to hold advanced customizations when building containers.

Create kolla-build.conf using the following steps.

::
@ -25,9 +24,10 @@ Create kolla-build.conf using the following steps.
    pip install tox
    tox -e genconfig
The location of the generated configuration file is
``etc/kolla/kolla-build.conf``. You can also copy it to ``/etc/kolla``. The
default location is one of ``/etc/kolla/kolla-build.conf`` or
``etc/kolla/kolla-build.conf``.
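For example, to put it in the system-wide location (a sketch; run from the
kolla source tree):

::

    cp etc/kolla/kolla-build.conf /etc/kolla/kolla-build.conf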
Guide
=====
@ -59,7 +59,7 @@ command line::
    kolla-build keystone

In this case, the build script builds all images whose name contains the
*keystone* string along with their dependencies.

Multiple names may be specified on the command line::
@ -88,12 +88,11 @@ command::
Build OpenStack from Source
===========================

When building images, there are two methods of installing OpenStack: one is
``binary`` and the other is ``source``. The ``binary`` method installs
OpenStack from apt/yum packages, and the ``source`` method installs it from
source code. The default method is ``binary``. It can be changed to ``source``
using the ``-t`` option::

    kolla-build -t source
@ -143,7 +142,7 @@ The build method allows the operator to build containers from custom repos.
The repos are accepted as a list of comma separated values and can be in
the form of .repo, .rpm, or a url. See examples below.

Update rpm_setup_config in ``/etc/kolla/kolla-build.conf``::

    rpm_setup_config = http://trunk.rdoproject.org/centos7/currrent/delorean.repo,http://trunk.rdoproject.org/centos7/delorean-deps.repo
@ -206,13 +205,12 @@ Known issues
Docker Local Registry
=====================

It is recommended to set up a local registry for Kolla development or when
deploying *multinode*. The reason for using a local registry is that deployment
performance will operate at local network speeds, typically gigabit networking.
Beyond performance considerations, the Operator would have full control over
images that are deployed. If there is no local registry, nodes pull images from
Docker Hub when images are not found in local caches.
Setting up Docker Local Registry
--------------------------------
@ -225,18 +223,17 @@ Running Docker registry is easy. Just use the following command::
Note: ``<local_data_path>`` points to the folder where Docker registry
will store Docker images on the local host.

The default port of the Docker registry is 5000, but port 5000 is also used by
keystone-api. To avoid a conflict, use port 4000 as the Docker registry port.
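For example, a registry listening on host port 4000 can be started with the
same command used in the multinode guide:

::

    docker run -d -p 4000:5000 --restart=always --name registry registry:2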
Now the Docker registry service is running.
Docker Insecure Registry Config
-------------------------------

For docker to pull images, it is necessary to modify the Docker configuration.
The guide assumes that the IP of the machine running Docker registry is
172.22.2.81.

In Ubuntu, add ``--insecure-registry 172.22.2.81:4000``
to ``DOCKER_OPTS`` in ``/etc/default/docker``.
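With that IP, ``/etc/default/docker`` would then contain a line like:

::

    # Ubuntu
    DOCKER_OPTS="--insecure-registry 172.22.2.81:4000"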
@ -255,9 +252,8 @@ Kolla-ansible with Local Registry
To make kolla-ansible pull images from the local registry, set
``"docker_registry"`` to ``"172.22.2.81:4000"`` in
``"/etc/kolla/globals.yml"``. Make sure Docker is allowed to pull images from
the insecure registry. See `Docker Insecure Registry Config`_.
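For example, the relevant ``globals.yml`` entry would look like:

::

    docker_registry: "172.22.2.81:4000"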
Building behind a proxy

View File

@ -24,13 +24,13 @@ pattern. To view, analyse and search logs, at least one index pattern has to
be created. To match indices stored in ElasticSearch, we suggest using the
following configuration:

#. Index contains time-based events - check
#. Use event times to create index names [DEPRECATED] - not checked
#. Index name or pattern - log-*
#. Do not expand index pattern when searching (Not recommended) - not checked
#. Time-field name - Timestamp

After setting parameters, one can create an index with the *Create* button.

Note: This step is necessary until the default Kibana dashboard is implemented
in Kolla.
@ -51,10 +51,10 @@ Visualize data - Visualize tab
==============================

In the visualization tab a wide range of charts is available. If any
visualization has not been saved yet, after choosing this tab the *Create a new
visualization* panel is opened. If a visualization has already been saved,
after choosing this tab, the most recently modified visualization is opened. In
this case, one can create a new visualization by choosing the *add visualization*
option in the menu on the right. In order to create a new visualization, one
of the available options has to be chosen (pie chart, area chart). Each
visualization can be created from a saved or a new search. After choosing
@ -63,8 +63,8 @@ generated and previewed. In the menu on the left, metrics for a chart can
be chosen. The chart can be generated by pressing a green arrow on the top
of the left-side menu.

NOTE: After creating a visualization, it can be saved by choosing the *save
visualization* option in the menu on the right. If it is not saved, it will
be lost after leaving the page or creating another visualization.
Organize visualizations and searches - Dashboard tab
@ -72,17 +72,17 @@ Organize visualizations and searches - Dashboard tab
In the Dashboard tab all saved visualizations and searches can be
organized in one Dashboard. To add a visualization or search, one can choose
the *add visualization* option in the menu on the right and then choose an item
from all saved ones. The order and size of elements can be changed directly
in this place by moving them or resizing. The color of charts can also be
changed by clicking the colored dots on the legend near each visualization.

NOTE: After creating a dashboard, it can be saved by choosing the *save dashboard*
option in the menu on the right. If it is not saved, it will be lost after
leaving the page or creating another dashboard.

If a Dashboard has already been saved, it can be opened by choosing the *open
dashboard* option in the menu on the right.
Exporting and importing created items - Settings tab
=====================================================
@ -90,6 +90,6 @@ Exporting and importing created items - Settings tab
Once visualizations, searches or dashboards are created, they can be exported
to a json format by choosing the Settings tab and then the Objects tab. Each
item can be exported separately by selecting it in the menu. All of the items
can also be exported at once by choosing the *export everything* option.

In the same tab (Settings - Objects) one can also import saved items by
choosing the *import* option.

View File

@ -7,7 +7,7 @@ Liberty 1.0.0 Deployment Warning
Warning Overview
================

Please use Liberty 1.1.0 tag or later when using Kolla. No data loss
occurs with this version. ``stable/liberty`` is also fully functional and
suffers no data loss.

Data loss with 1.0.0
@ -15,25 +15,25 @@ Data loss with 1.0.0
The Kolla community discovered in the middle of Mitaka development that it
was possible for data loss to occur if the data container is rebuilt. In
this scenario, Docker pulls a new container, and the new container doesn't
contain the data from the old container. Kolla ``stable/liberty`` and Kolla
1.0.0 are not to be used at this time, as they result in **critical data loss
problems**.
Resolution
==========

To rectify this problem, the OpenStack release and infrastructure teams
in coordination with the Kolla team executed the following actions:

* Deleted the ``stable/liberty`` branch (where 1.0.0 was tagged from)
* Created a tag liberty-early-demise at the end of the broken ``stable/liberty``
  branch development.
* Created a new ``stable/liberty`` branch based upon ``stable/mitaka``.
* Corrected ``stable/liberty`` to deploy Liberty.
* Released Kolla 1.1.0 from the newly created ``stable/liberty`` branch.
End Result
==========

A fully functional Liberty OpenStack deployment based upon the two years of
testing and development that went into ``stable/mitaka``.

docker-engine 1.10.0 or later is required.

View File

@ -34,7 +34,7 @@ services are properly working.
Preparation and Deployment
==========================

Cinder and Ceph are required, enable them in ``/etc/kolla/globals.yml``:

.. code-block:: console
@ -47,13 +47,14 @@ Enable Manila in /etc/kolla/globals.yml:
    enable_manila: "yes"
By default Manila uses instance flavor id 100 for its file systems. For Manila
to work, either create a new nova flavor with id 100 (use *nova flavor-create*)
or change *service_instance_flavor_id* to use one of the default nova flavor
ids.

Ex: *service_instance_flavor_id = 2* to use nova default flavor ``m1.small``.
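For example, a flavor with id 100 could be created along these lines (a
sketch; the flavor name and sizing shown here are only illustrative):

.. code-block:: console

    # nova flavor-create <name> <id> <ram-MB> <disk-GB> <vcpus>
    nova flavor-create manila-service-flavor 100 128 0 1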
Create or modify the file ``/etc/kolla/config/manila-share.conf`` and add the
contents:

.. code-block:: console
@ -79,11 +80,11 @@ to verify successful launch of each process:
Launch an Instance
==================

Before being able to create a share, Manila with the generic driver and the
DHSS mode enabled requires at least an image, a network and a share-network to
be defined for creating a share server. For that back end configuration, the
share server is an instance where NFS/CIFS shares are served.
Determine the configuration of the share server
===============================================
@ -166,8 +167,8 @@ Create a shared network
    | description       | None                                 |
    +-------------------+--------------------------------------+

Create a flavor (**Required** if you did not define *manila_instance_flavor_id*
in the ``/etc/kolla/config/manila-share.conf`` file)

.. code-block:: console

View File

@ -7,31 +7,29 @@ Multinode Deployment of Kolla
Deploy a registry (required for multinode)
==========================================

A Docker registry is a locally hosted registry that replaces the need to pull
from the Docker Hub to get images. Kolla can function with or without a local
registry, however for a multinode deployment a registry is required.
The Docker registry prior to version 2.3 has extremely bad performance because
all container data is pushed for every image rather than taking advantage of
Docker layering to optimize push operations. For more information reference
`pokey registry <https://github.com/docker/docker/issues/14018>`__.

The Kolla community recommends using registry 2.3 or later. To deploy registry
2.3 do the following:
::

    docker run -d -p 4000:5000 --restart=always --name registry registry:2

Note: Kolla looks for the Docker registry to use port 4000. (Docker default is
port 5000)
After starting the registry, it is necessary to instruct Docker that it will
be communicating with an insecure registry. To enable insecure registry
communication on CentOS, modify the ``/etc/sysconfig/docker`` file to contain
the following where 192.168.1.100 is the IP address of the machine where the
registry is currently running:
@ -40,18 +38,17 @@ registry is currently running:
    # CentOS
    other_args="--insecure-registry 192.168.1.100:4000"

For Ubuntu, edit ``/etc/default/docker`` and add:

::

    # Ubuntu
    DOCKER_OPTS="--insecure-registry 192.168.1.100:4000"
Docker Inc's packaged version of docker-engine for CentOS is defective and does
not read the other_args configuration options from ``/etc/sysconfig/docker``.
To rectify this problem, ensure the following lines appear in the drop-in unit
file at ``/etc/systemd/system/docker.service.d/kolla.conf``:

::
@ -79,8 +76,8 @@ Edit the Inventory File
The ansible inventory file contains all the information needed to determine
what services will land on which hosts. Edit the inventory file in the kolla
directory ``ansible/inventory/multinode`` or if kolla was installed with pip,
it can be found in ``/usr/share/kolla``.

Add the ip addresses or hostnames to a group and the services associated with
that group will land on that host:
@ -95,9 +92,9 @@ that group will land on that host:
    192.168.122.24

For more advanced roles, the operator can edit which services will be
associated with each group. Keep in mind that some services have to be
grouped together and changing these around can break your deployment:

::

View File

@ -4,9 +4,9 @@
Nova Fake Driver
================

One common question from OpenStack operators is "how does the control plane
(e.g., database, messaging queue, nova-scheduler) scale?". To answer this
question, operators set up Rally to drive workload to the OpenStack cloud.
However, without a large number of nova-compute nodes, it becomes difficult to
exercise the control plane's performance.
@ -20,11 +20,11 @@ Use nova-fake driver
The nova fake driver cannot work with an all-in-one deployment. This is because
the fake neutron-openvswitch-agent for the fake nova-compute container
conflicts with neutron-openvswitch-agent on the compute nodes. Therefore, in
the inventory the network node must be different than the compute node.

By default, Kolla uses the libvirt driver on the compute node. To use the
nova-fake driver, edit the following parameters in ``ansible/group_vars`` or on
the command line.

::
@ -33,5 +33,5 @@ command line options.
    num_nova_fake_per_node: 5

Each compute node will run 5 nova-compute containers and 5
neutron-plugin-agent containers. When booting an instance, no real instance is
created, but *nova list* shows the fake instances.

View File

@ -24,16 +24,16 @@ Tips and Tricks
===============

Kolla ships with several utilities intended to facilitate ease of operation.

``tools/cleanup-containers`` can be used to remove deployed containers from the
system. This can be useful when you want to do a new clean deployment. It will
preserve the registry and the locally built images in the registry, but will
remove all running Kolla containers from the local Docker daemon. It also
removes the named volumes.
``tools/cleanup-host`` can be used to remove remnants of network changes
triggered on the Docker host when the neutron-agents containers are launched.
This can be useful when you want to do a new clean deployment, particularly one
changing the network topology.

``tools/cleanup-images`` can be used to remove all Docker images built by Kolla
from the local Docker cache.
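A typical cleanup of a deployment host therefore looks something like the
following (a sketch; run from the kolla source tree):

::

    # remove deployed containers and their named volumes
    tools/cleanup-containers

    # undo the network changes made for the neutron agents
    tools/cleanup-host

    # optionally, also drop the locally built images from the Docker cache
    tools/cleanup-images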

View File

@ -26,14 +26,14 @@ There are other deployment environments referenced below in `Additional Environm
Install Dependencies
====================

Kolla is tested on CentOS, Oracle Linux, RHEL and Ubuntu as both container OS
platforms and bare metal deployment targets.
Fedora: Kolla will not run on Fedora 22 and later as a bare metal deployment
target. These distributions compress kernel modules with the .xz compressed
format. The guestfs system in the CentOS family of containers cannot read
these images because a dependent package supermin in CentOS needs to be updated
to add .xz compressed format support.
Ubuntu: For Ubuntu based systems where Docker is used it is recommended to use
the latest available LTS kernel. The latest LTS kernel available is the wily
@ -89,10 +89,10 @@ command:
    docker --version
When running with systemd, set up docker-engine with the appropriate information
in the Docker daemon to launch with. This means setting up the following
information in the ``docker.service`` file. If you do not set the MountFlags
option correctly then ``kolla-ansible`` will fail to deploy the
``neutron-dhcp-agent`` container and throw an APIError/HTTPError. After adding
the drop-in unit file as follows, reload and restart the docker service:
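A minimal drop-in sketch (assuming shared mount propagation is the MountFlags
setting referred to above):

::

    [Service]
    MountFlags=shared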
@ -138,15 +138,15 @@ Or using ``pip`` to install a latest version:
    pip install -U docker-py
OpenStack, RabbitMQ, and Ceph require all hosts to have matching times to
ensure proper message delivery. In the case of Ceph, it will complain if the
hosts differ by more than 0.05 seconds. Some OpenStack services have timers as
low as 2 seconds by default. For these reasons it is highly recommended to
setup an NTP service of some kind. While ``ntpd`` will achieve more accurate
time for the deployment if the NTP servers are running in the local deployment
environment, `chrony <http://chrony.tuxfamily.org>`_ is more accurate when
syncing the time across a WAN connection. When running Ceph it is recommended
to setup ``ntpd`` to sync time locally due to the tight time constraints.
To install, start, and enable ntp on CentOS execute the following:
@ -163,9 +163,9 @@ To install and start on Debian based systems execute the following:
    apt-get install ntp
Libvirt is started by default on many operating systems. Please disable
``libvirt`` on any machines that will be deployment targets. Only one copy of
libvirt may be running at a time.

::
@ -182,24 +182,24 @@ On Ubuntu, apparmor will sometimes prevent libvirt from working.
::

    /usr/sbin/libvirtd: error while loading shared libraries: libvirt-admin.so.0: cannot open shared object file: Permission denied

If you are seeing the libvirt container fail with the error above, disable the
libvirt profile.

::

    sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
Kolla deploys OpenStack using `Ansible <http://www.ansible.com>`__. Install
Ansible from distribution packaging if the distro packaging has the recommended
version available.

Some distro-packaged versions of Ansible are too old to use. Currently, CentOS
and RHEL package Ansible 1.9.4, which is suitable for use with Kolla. As
Ansible 2.0 is also available, version 1.9 must be specified. Note that you
will need to enable access to the EPEL repository to install via yum -- to do
so, take a look at Fedora's EPEL
`docs <https://fedoraproject.org/wiki/EPEL>`__ and
`FAQ <https://fedoraproject.org/wiki/EPEL/FAQ>`__.
On CentOS or RHEL systems, this can be done using:
@ -208,16 +208,16 @@ On CentOS or RHEL systems, this can be done using:
    yum -y install ansible1.9
Many DEB based systems do not meet Kolla's Ansible version requirements. It is
recommended to use pip to install Ansible 1.9.4. Finally Ansible 1.9.4 may be
installed using:

::

    pip install -U ansible==1.9.4

If DEB based systems include a version of Ansible that meets Kolla's version
requirements it can be installed by:

::
@ -277,31 +277,31 @@ To install the clients use:
Local Registry
==============

A local registry is not required for an ``all-in-one`` installation. Check out
the :doc:`multinode` for more information on using a local registry. Otherwise,
the `Docker Hub Image Registry`_ contains all images from each of Kolla's major
releases. The latest release tag is 2.0.0 for Mitaka.
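For example, pulling a single Mitaka image from the Docker Hub registry might
look like the following (a sketch; the image name assumes Kolla's
``<namespace>/<base>-<type>-<service>`` naming and the 2.0.0 tag mentioned
above):

::

    docker pull kolla/centos-binary-keystone:2.0.0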
Additional Environments
=======================

Two virtualized development environment options are available for Kolla. These
options permit the development of Kolla without disrupting the host operating
system.
If developing Kolla on an OpenStack cloud environment that supports Heat,
follow the :doc:`heat-dev-env`.

If developing Kolla on a system that provides VirtualBox or Libvirt in addition
to Vagrant, use the Vagrant virtual environment documented in
:doc:`vagrant-dev-env`.
Currently the Heat development environment is entirely non-functional. The
Kolla core reviewers have debated removing it from the repository but have
resisted in order to provide an opportunity for contributors to make Heat
usable for Kolla development. The Kolla core reviewers believe Heat would offer
a great way to develop Kolla in addition to Vagrant, bare metal, or a manually
set up virtual machine.

For more information refer to
`bug 1562334 <https://bugs.launchpad.net/kolla/+bug/1562334>`__.
@ -340,8 +340,8 @@ Note ``--base`` and ``--type`` can be added to the above ``kolla-build``
command if different distributions or types are desired.

It is also possible to build individual containers. As an example, if the
glance containers failed to build, all glance related containers can be rebuilt
as follows:

::
@ -361,13 +361,13 @@ instruction in :doc:`image-building`.
Deploying Kolla
===============

The Kolla community provides two example methods of deploying Kolla:
*all-in-one* and *multinode*. The *all-in-one* deploy is similar to a
`devstack <http://docs.openstack.org/developer/devstack/>`__ deploy, which
installs all OpenStack services on a single host. In the *multinode* deploy,
OpenStack services can be run on specific hosts. This documentation only
describes the *all-in-one* method, as it is the simplest one. To set up
*multinode*, see the :doc:`multinode`.
Each method is represented as an Ansible inventory file. More information on
the Ansible inventory file can be found in the Ansible `inventory introduction
@ -385,7 +385,7 @@ deployment. Optionally, the passwords may be populate in the file by hand.
    kolla-genpwd
Start by editing ``/etc/kolla/globals.yml``. Check and edit, if needed, these
parameters: ``kolla_base_distro``, ``kolla_install_type``. These parameters
should match what you used in the ``kolla-build`` command line. The default for
``kolla_base_distro`` is ``centos`` and for ``kolla_install_type`` is ``binary``.
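For example, to match the ``kolla-build`` defaults, ``globals.yml`` would
contain:

::

    kolla_base_distro: "centos"
    kolla_install_type: "binary"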
@ -399,23 +399,22 @@ sure ``globals.yml`` has the following entries:
Please specify an unused IP address in the network to act as a VIP for
``kolla_internal_vip_address``. The VIP will be used with keepalived and added
to the ``api_interface`` as specified in the ``globals.yml`` ::

    kolla_internal_vip_address: "10.10.10.254"
The ``network_interface`` variable is the interface to which Kolla binds API
services. For example, when starting up Mariadb it will bind to the IP on the
interface list in the ``network_interface`` variable. ::

    network_interface: "eth0"
The ``neutron_external_interface`` variable is the interface that will be used
for the external bridge in Neutron. Without this bridge the deployment instance
traffic will be unable to access the rest of the Internet. In the case of a
single interface on a machine, a veth pair may be used where one end of the
veth pair is listed here and the other end is in a bridge on the system. ::

    neutron_external_interface: "eth1"
@ -514,22 +513,21 @@ environment with a glance image and neutron networks:
Failures
========

Nearly always when Kolla fails, it is caused by a CTRL-C during the deployment
process or a problem in the ``globals.yml`` configuration.

To correct the problem where Operators have a misconfigured environment, the
Kolla developers have added a precheck feature which ensures the deployment
targets are in a state where Kolla may deploy to them. To run the prechecks,
execute:

::

    kolla-ansible prechecks
If a failure during deployment occurs it nearly always occurs during evaluation
of the software. Once the Operator learns the few configuration options
required, it is highly unlikely they will experience a failure in deployment.

Deployment may be run as many times as desired, but if a failure in a
bootstrap task occurs, a further deploy action will not correct the problem.
@ -545,14 +543,14 @@ On each node where OpenStack is deployed run:
    tools/cleanup-containers
    tools/cleanup-host

The Operator will have to copy via scp or some other means the cleanup scripts
to the various nodes where the failed containers are located.
Any time the tags of a release change, it is possible that the container
implementation from older versions won't match the Ansible playbooks in a new
version. If running multinode from a registry, each node's Docker image cache
must be refreshed with the latest images before a new deployment can occur. To
refresh the docker cache from the local Docker registry:

::
@ -578,7 +576,7 @@ The logs can be examined by executing:
    docker exec -it heka bash

The logs from all services in all containers may be read from
``/var/log/kolla/SERVICE_NAME``

If the stdout logs are needed, please run:

View File

@ -6,9 +6,9 @@ Kolla Security
Non Root containers
===================

The OpenStack services, with a few exceptions, run as non root inside of
Kolla's containers. Kolla uses the Docker provided USER flag to set the
appropriate user for each service.
SELinux
=======
@ -31,14 +31,15 @@ address volumes directly by name removing the need for so called **data
containers** altogether.
Another solution to the persistent data issue is to use a host bind mount which
involves making, for the sake of example, host directory ``var/lib/mysql``
available inside the container at ``var/lib/mysql``. This absolutely solves the
problem of persistent data, but it introduces another security issue,
permissions. With this host bind mount solution the data in ``var/lib/mysql``
will be owned by the mysql user in the container. Unfortunately, that mysql
user in the container could have any UID/GID and that's who will own the data
outside the container, introducing a potential security risk. Additionally,
this method dirties the host and requires host permissions to the directories
to bind mount.
The solution Kolla chose is named volumes.
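The difference can be illustrated with plain Docker commands (a generic
sketch, not taken from Kolla's playbooks):

::

    # named volume: Docker manages the storage location and its ownership
    docker run -v mariadb:/var/lib/mysql mariadb

    # host bind mount: the container's mysql UID/GID ends up owning the host
    # directory, which is the permissions issue described above
    docker run -v /var/lib/mysql:/var/lib/mysql mariadb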

View File

@ -6,11 +6,12 @@ Swift in Kolla
Overview
========

Kolla can deploy a full working Swift setup in either an **all-in-one** or
**multinode** setup.
Prerequisites
=============

Before running Swift we need to generate **rings**, which are binary compressed
files that at a high level let the various Swift services know where data is in
the cluster. We hope to automate this process in a future release.
@ -21,7 +22,7 @@ Swift also expects block devices to be available for storage. To prepare a disk
for use as a Swift storage device, a special partition name and filesystem
label need to be added so that Kolla can detect those disks and mount them for
services.

Follow the example below to add 3 disks for an **all-in-one** demo setup.

::
@ -50,13 +51,13 @@ For evaluation, loopback devices can be used in lieu of real disks:
Disks without a partition table Disks without a partition table
=============================== ===============================
Kolla also supports unpartitioned disks (filesystem on /dev/sdc instead of Kolla also supports unpartitioned disks (filesystem on ``/dev/sdc`` instead of
/dev/sdc1) detection purely based on filesystem label. This is generally not a ``/dev/sdc1``) detection purely based on filesystem label. This is generally
recommended practice, but it can be helpful when Kolla takes over a Swift not a recommended practice, but it can be helpful when Kolla takes over a Swift
deployment that already uses disks like this. deployment that already uses disks like this.
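For instance, a filesystem label could be written directly onto a whole,
unpartitioned device with something along these lines (device name and label
are illustrative):

::

    # Illustrative only: labels the whole disk, no partition table involved
    mkfs.xfs -f -L swd1 /dev/sdc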
Given hard disks with labels ``swd1``, ``swd2``, ``swd3``, use the following settings in Given hard disks with labels ``swd1``, ``swd2``, ``swd3``, use the following settings in
ansible/roles/swift/defaults/main.yml ``ansible/roles/swift/defaults/main.yml``.
:: ::
@ -66,9 +67,9 @@ ansible/roles/swift/defaults/main.yml
Rings Rings
===== =====
Run the following commands locally to generate Rings for the AIO demo setup. The Run the following commands locally to generate Rings for the **all-in-one** demo
commands work with the "disks with partition table" example listed above. Please setup. The commands work with the **disks with partition table** example listed
modify accordingly if your setup is different. above. Please modify accordingly if your setup is different.
:: ::
@ -122,22 +123,23 @@ modify accordingly if your setup is different.
/etc/kolla/config/swift/${ring}.builder rebalance; /etc/kolla/config/swift/${ring}.builder rebalance;
done done
Similar commands can be used for multinode; you will just need to run the 'add' step for each IP Similar commands can be used for **multinode**; you will just need to run the
in the cluster. **add** step for each IP in the cluster.
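As a rough sketch of that **add** step for one extra storage node, it would
look something like the following; region, zone, IP, port, device and weight
are placeholders to adapt per node:

::

    # Placeholder values; repeat for each storage node IP and device
    swift-ring-builder /etc/kolla/config/swift/object.builder \
        add r1z1-192.168.0.11:6000/d0 1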
For more info, see For more info, see
http://docs.openstack.org/kilo/install-guide/install/apt/content/swift-initial-rings.html http://docs.openstack.org/kilo/install-guide/install/apt/content/swift-initial-rings.html
Deploying Deploying
========= =========
Enable Swift in /etc/kolla/globals.yml: Enable Swift in ``/etc/kolla/globals.yml``:
:: ::
enable_swift : "yes" enable_swift : "yes"
Once the rings are in place, deploying Swift is the same as any other Kolla Ansible service. Below Once the rings are in place, deploying Swift is the same as any other Kolla
is the minimal command to bring up Swift AIO and its dependencies: Ansible service. Below is the minimal command to bring up Swift **all-in-one**
and its dependencies:
:: ::

View File

@ -4,8 +4,8 @@
Development Environment with Vagrant Development Environment with Vagrant
==================================== ====================================
This guide describes how to use `Vagrant <http://vagrantup.com>`__ to This guide describes how to use `Vagrant <http://vagrantup.com>`__ to assist in
assist in developing for Kolla. developing for Kolla.
Vagrant is a tool to assist in scripted creation of virtual machines. Vagrant Vagrant is a tool to assist in scripted creation of virtual machines. Vagrant
takes care of setting up CentOS-based VMs for Kolla development, each with takes care of setting up CentOS-based VMs for Kolla development, each with
@ -14,26 +14,26 @@ proper hardware like memory amount and number of network interfaces.
Getting Started Getting Started
=============== ===============
The Vagrant script implements All-in-One (AIO) or multi-node deployments. AIO The Vagrant script implements **all-in-one** or **multi-node** deployments.
is the default. **all-in-one** is the default.
In the case of multi-node deployment, the Vagrant setup builds a cluster with In the case of **multi-node** deployment, the Vagrant setup builds a cluster
the following nodes by default: with the following nodes by default:
- 3 control nodes * 3 control nodes
- 1 compute node * 1 compute node
- 1 storage node (Note: Ceph requires at least 3 storage nodes) * 1 storage node (Note: Ceph requires at least 3 storage nodes)
- 1 network node * 1 network node
- 1 operator node * 1 operator node
The cluster node count can be changed by editing the Vagrantfile. The cluster node count can be changed by editing the Vagrantfile.
Kolla runs from the operator node to deploy OpenStack. Kolla runs from the operator node to deploy OpenStack.
All nodes are connected with each other on the secondary NIC. The All nodes are connected with each other on the secondary NIC. The primary NIC
primary NIC is behind a NAT interface for connecting with the Internet. is behind a NAT interface for connecting with the Internet. The third NIC is
The third NIC is connected without IP configuration to a public bridge connected without IP configuration to a public bridge interface. This may be
interface. This may be used for Neutron/Nova to connect to instances. used for Neutron/Nova to connect to instances.
Start by downloading and installing the Vagrant package for the distro of Start by downloading and installing the Vagrant package for the distro of
choice. Various downloads can be found at the `Vagrant downloads choice. Various downloads can be found at the `Vagrant downloads
@ -45,12 +45,12 @@ On Fedora 22 it is as easy as::
On Ubuntu 14.04 it is as easy as:: On Ubuntu 14.04 it is as easy as::
sudo apt-get -y install vagrant ruby-dev ruby-libvirt python-libvirt libvirt-dev nfs-kernel-server sudo apt-get install vagrant ruby-dev ruby-libvirt python-libvirt libvirt-dev nfs-kernel-server
**Note:** Many distros ship outdated versions of Vagrant by default. When in **Note:** Many distros ship outdated versions of Vagrant by default. When in
doubt, always install the latest from the downloads page above. doubt, always install the latest from the downloads page above.
Next install the hostmanager plugin so all hosts are recorded in /etc/hosts Next install the hostmanager plugin so all hosts are recorded in ``/etc/hosts``
(inside each vm):: (inside each vm)::
vagrant plugin install vagrant-hostmanager vagrant plugin install vagrant-hostmanager
@ -85,7 +85,7 @@ Find a location in the system's home directory and checkout the Kolla repo::
git clone https://github.com/openstack/kolla.git git clone https://github.com/openstack/kolla.git
Developers can now tweak the Vagrantfile or bring up the default AIO Developers can now tweak the Vagrantfile or bring up the default **all-in-one**
CentOS 7-based environment:: CentOS 7-based environment::
cd kolla/dev/vagrant && vagrant up cd kolla/dev/vagrant && vagrant up
@ -97,7 +97,7 @@ Vagrant Up
========== ==========
Once Vagrant has completed deploying all nodes, the next step is to launch Once Vagrant has completed deploying all nodes, the next step is to launch
Kolla. First, connect with the *operator* node:: Kolla. First, connect with the **operator** node::
vagrant ssh operator vagrant ssh operator
@ -106,7 +106,7 @@ nodes are configured so they can use this insecure repo to pull from, and use
it as a mirror. Ansible may use this registry to pull images from. it as a mirror. Ansible may use this registry to pull images from.
All nodes have a local folder shared between the group and the hypervisor, and All nodes have a local folder shared between the group and the hypervisor, and
a folder shared between *all* nodes and the hypervisor. This mapping is lost a folder shared between **all** nodes and the hypervisor. This mapping is lost
after reboots, so make sure to use the command ``vagrant reload <node>`` when after reboots, so make sure to use the command ``vagrant reload <node>`` when
reboots are required. Having this shared folder provides a method to supply reboots are required. Having this shared folder provides a method to supply
a different docker binary to the cluster. The shared folder is also used to a different docker binary to the cluster. The shared folder is also used to
@ -116,18 +116,18 @@ like ``vagrant destroy``.
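For example, to restore the shared folder mappings on the **operator** node
after it has rebooted::

    vagrant reload operator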
Building images Building images
--------------- ---------------
Once logged in to the *operator* VM, call the ``kolla-build`` utility:: Once logged in to the **operator** VM, call the ``kolla-build`` utility::
kolla-build kolla-build
``kolla-build`` accepts arguments as documented in :doc:`image-building`. It ``kolla-build`` accepts arguments as documented in :doc:`image-building`. It
builds Docker images and pushes them to the local registry if the *push* builds Docker images and pushes them to the local registry if the **push**
option is enabled (in Vagrant this is the default behaviour). option is enabled (in Vagrant this is the default behaviour).
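For instance, one might restrict the build to particular images from a chosen
base distribution; the options and image names below are shown only as an
illustration, and :doc:`image-building` remains the authoritative reference:

::

    # Illustrative invocation; adjust base, install type and image filters as needed
    kolla-build --base centos --type source keystone nova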
Deploying OpenStack with Kolla Deploying OpenStack with Kolla
------------------------------ ------------------------------
Deploy AIO with:: Deploy **all-in-one** with::
sudo kolla-ansible deploy sudo kolla-ansible deploy