Update Kolla-Ansible documents

Kolla-Ansible related documents existed in both the Kolla and Kolla-Ansible
repositories, and a lot of them were updated in the Kolla repository. This
ports the changes made to these documents in the Kolla repo to the
Kolla-Ansible repo so that the duplicated documents can be removed
(refer to patch https://review.openstack.org/#/c/425749/).

Change-Id: I7c53626ce551189acdb0dcbabe9369b81eed3347
This commit is contained in:
Sayantani Goswami 2017-01-27 18:52:04 +00:00 committed by sayantani
parent 717c80aef5
commit 5b35e3898f
14 changed files with 832 additions and 987 deletions

View File

@ -62,8 +62,8 @@ RabbitMQ Hostname Resolution
============================
RabbitMQ doesn't work with IP addresses, hence the IP address of api_interface
should be resolvable by hostnames to make sure that all RabbitMQ Cluster hosts
can resolve each other's hostnames beforehand.
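In practice this means each host's ``api_interface`` address must map back to
its hostname on every cluster member, e.g. via ``/etc/hosts`` entries like the
following (hypothetical addresses and hostnames)::

  192.168.1.10 ctrl01
  192.168.1.11 ctrl02
  192.168.1.12 ctrl03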
TLS Configuration
=================
@ -226,5 +226,5 @@ For example:
  database_port: 3307
As the ``<service>_port`` value is saved in different services' configuration,
it is advised to make the above change before deploying.

View File

@ -6,77 +6,92 @@ Bifrost Guide
Prep host
=========
Clone kolla
-----------

::

  git clone https://github.com/openstack/kolla
  cd kolla

Set up kolla dependencies as described in :doc:`quickstart`.

Fix hosts file
--------------
Docker bind mounts ``/etc/hosts`` into the container from a volume.
This prevents atomic renames which will prevent ansible from fixing
the ``/etc/hosts`` file automatically.

To enable bifrost to be bootstrapped correctly, add the deployment
host's hostname to the 127.0.0.1 line, for example:
::

  ubuntu@bifrost:/repo/kolla$ cat /etc/hosts
  127.0.0.1 bifrost localhost

  # The following lines are desirable for IPv6 capable hosts
  ::1 ip6-localhost ip6-loopback
  fe00::0 ip6-localnet
  ff00::0 ip6-mcastprefix
  ff02::1 ip6-allnodes
  ff02::2 ip6-allrouters
  ff02::3 ip6-allhosts
  192.168.100.15 bifrost
Enable source build type
========================

Via config file
---------------
::

  tox -e genconfig

Modify ``kolla-build.conf`` as follows.
Set ``install_type`` to ``source``:

::

  install_type = source
Command line
------------
Alternatively, if you do not wish to use ``kolla-build.conf``,
you can enable a source build by appending ``-t source`` to
your ``kolla-build`` or ``tools/build.py`` command.

Build container
===============
Development
-----------
::

  tools/build.py bifrost-deploy
Production
----------
::

  kolla-build bifrost-deploy
Prepare bifrost configs
=======================
Create servers.yml
------------------

The ``servers.yml`` file describes your physical nodes and lists their IPMI
credentials. See the bifrost dynamic inventory examples for more details.

For example ``/etc/kolla/config/bifrost/servers.yml``:
.. code-block:: yaml
@ -104,13 +119,15 @@ e.g. /etc/kolla/config/bifrost/servers.yml
Adjust as appropriate for your deployment.
Create bifrost.yml
------------------
By default kolla mostly uses bifrost's default playbook values.
Parameters passed to the bifrost install playbook can be overridden by
creating a ``bifrost.yml`` file in the kolla custom config directory or in a
bifrost sub directory.

For example ``/etc/kolla/config/bifrost/bifrost.yml``:

::

  mysql_service_name: mysql
  ansible_python_interpreter: /var/lib/kolla/venv/bin/python
@ -125,37 +142,44 @@ Create Disk Image Builder Config
--------------------------------
By default kolla mostly uses bifrost's default playbook values when
building the baremetal OS image. The baremetal OS image can be customised
by creating a ``dib.yml`` file in the kolla custom config directory or in a
bifrost sub directory.

For example ``/etc/kolla/config/bifrost/dib.yml``:

::

  dib_os_element: ubuntu
Deploy Bifrost
==============
Ansible
-------
Development
___________
::

  tools/kolla-ansible deploy-bifrost
Production
__________
::

  kolla-ansible deploy-bifrost

Manual
------
Start Bifrost Container
_______________________
::
  docker run -it --net=host -v /dev:/dev -d --privileged --name bifrost_deploy kolla/ubuntu-source-bifrost-deploy:3.0.1
Copy configs
____________
.. code-block:: console
@ -165,25 +189,32 @@ ____________
   docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
   docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml
Bootstrap bifrost
_________________
::

  docker exec -it bifrost_deploy bash

Generate ssh key
~~~~~~~~~~~~~~~~
::

  ssh-keygen

Source env variables
~~~~~~~~~~~~~~~~~~~~
::

  cd /bifrost
  . env-vars
  . /opt/stack/ansible/hacking/env-setup
  cd playbooks/
Bootstrap and start services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
@ -198,7 +229,7 @@ Check ironic is running
   cd /bifrost
   . env-vars
Running ``ironic node-list`` should return with no nodes, for example:
.. code-block:: console
@ -212,19 +243,25 @@ Running "ironic node-list" should return with no nodes, e.g.
Enroll and Deploy Physical Nodes
================================
Ansible
-------
Development
___________
::

  tools/kolla-ansible deploy-servers
Production
__________
::

  kolla-ansible deploy-servers
Manual
------
.. code-block:: console
@ -252,18 +289,18 @@ TODO
Bring your own ssh key
----------------------
To use your own ssh key, after you have generated the ``passwords.yml`` file
update the private and public keys under ``bifrost_ssh_key``.
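A minimal sketch of the expected layout (the ``bifrost_ssh_key`` structure is
assumed from the generated ``passwords.yml``; check your generated file)::

  # hypothetical layout - verify against your generated passwords.yml
  bifrost_ssh_key:
    private_key: |
      -----BEGIN RSA PRIVATE KEY-----
      <your private key material>
      -----END RSA PRIVATE KEY-----
    public_key: ssh-rsa AAAA... user@host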
Known issues
============
SSH daemon not running
----------------------
By default sshd is installed in the image but may not be enabled.
If you encounter this issue you will have to access the server physically in
recovery mode to enable the ssh service. If your hardware supports it, this
can be done remotely with ipmitool and serial over LAN. For example:
.. code-block:: console
@ -273,17 +310,9 @@ can be done remotely with ipmitool and serial over lan. e.g.
References
==========
Bifrost
-------
Docs: http://docs.openstack.org/developer/bifrost/

Troubleshooting: http://docs.openstack.org/developer/bifrost/troubleshooting.html

Code: https://github.com/openstack/bifrost

View File

@ -86,16 +86,16 @@ A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
If the disk stays in the available state, something went wrong during the
iSCSI mounting of the volume to the guest VM.
Cinder LVM2 back end with iSCSI
===============================
As of the Newton-1 milestone, Kolla supports LVM2 as a cinder back end. It is
accomplished by introducing two new containers ``tgtd`` and ``iscsid``.
The ``tgtd`` container serves as a bridge between the cinder-volume process and
a server hosting Logical Volume Groups (LVG). The ``iscsid`` container serves
as a bridge between the nova-compute process and the server hosting LVG.
In order to use Cinder's LVM back end, an LVG named ``cinder-volumes`` should
exist on the server and the following parameter must be specified in
``globals.yml`` ::
@ -127,8 +127,8 @@ targeted for nova compute role.
mount -t configfs /etc/rc.local /sys/kernel/config
Cinder back end with external iSCSI storage
===========================================
In order to use an external storage system (like one from EMC or NetApp)
the following parameter must be specified in ``globals.yml`` ::

View File

@ -47,7 +47,7 @@ Glance
Configuring Glance for Ceph includes three steps:
1) Configure RBD back end in glance-api.conf
2) Create Ceph configuration file in /etc/ceph/ceph.conf
3) Create Ceph keyring file in /etc/ceph/ceph.client.<username>.keyring
@ -166,7 +166,7 @@ Put ceph.conf and keyring file into ``/etc/kolla/config/nova``:
$ ls /etc/kolla/config/nova
ceph.client.nova.keyring ceph.conf
Configure nova-compute to use Ceph as the ephemeral back end by creating
``/etc/kolla/config/nova/nova-compute.conf`` and adding the following
contents:

View File

@ -1,462 +0,0 @@
.. _image-building:
=========================
Building Container Images
=========================
The ``kolla-build`` command is responsible for building Docker images.
.. note::
   When developing Kolla it can be useful to build images using files located
   in a local copy of Kolla. Use the ``tools/build.py`` script instead of the
   ``kolla-build`` command in all below instructions.
Generating kolla-build.conf
===========================
Install tox and generate the build configuration. The build configuration is
designed to hold advanced customizations when building containers.
Create kolla-build.conf using the following steps.
::
  pip install tox
  tox -e genconfig
The location of the generated configuration file is
``etc/kolla/kolla-build.conf``; it can also be copied to ``/etc/kolla``. The
default location is one of ``/etc/kolla/kolla-build.conf`` or
``etc/kolla/kolla-build.conf``.
Guide
=====
In general, images are built like this::
  kolla-build
By default, the above command would build all images based on CentOS image.
The operator can change the base distro with the ``-b`` option::
  kolla-build -b ubuntu
The following distros are available for building images:
- centos
- oraclelinux
- ubuntu
.. warning::
   Fedora images are deprecated since Newton and will be removed
   in the future.
To push the image after building, add ``--push``::
  kolla-build --push
It is possible to build only a subset of images by specifying them on the
command line::
  kolla-build keystone
In this case, the build script builds all images whose name contains the
``keystone`` string along with their dependencies.
Multiple names may be specified on the command line::
  kolla-build keystone nova
The set of images built can be defined as a profile in the ``profiles`` section
of ``kolla-build.conf``. Later, a profile can be specified by the ``--profile``
CLI argument or the ``profile`` option in ``kolla-build.conf``. Kolla provides some
pre-defined profiles:
- ``infra`` infrastructure-related images
- ``main`` core OpenStack images
- ``aux`` auxiliary images such as trove, magnum, ironic
- ``default`` minimal set of images for a working deploy
For example, since Magnum requires Heat, the following profile can be added to
the ``profiles`` section in ``kolla-build.conf`` ::
  magnum = magnum,heat
These images can be built using the command line ::
  kolla-build --profile magnum
Or put the following line in the ``DEFAULT`` section in ``kolla-build.conf`` ::
  profile = magnum
``kolla-build`` uses ``kolla`` as the default Docker namespace. This is
controlled with the ``-n`` command line option. To push images to a Docker Hub
repository named ``mykollarepo``::
  kolla-build -n mykollarepo --push
To push images to a local registry, use ``--registry`` flag::
  kolla-build --registry 172.22.2.81:4000 --push
To trigger the build script to pull images from a local registry, the Docker
configuration needs to be modified. See `Docker Insecure Registry Config`_.
The build configuration can be customized using a config file, the default
location being one of ``/etc/kolla/kolla-build.conf`` or
``etc/kolla/kolla-build.conf``. This file can be generated using the following
command::
  tox -e genconfig
Build OpenStack from source
===========================
When building images, there are two methods of installing OpenStack: ``binary``
and ``source``. ``binary`` means that OpenStack will be installed from apt/yum
packages, while ``source`` means that OpenStack will be installed from source
code. The default method of the OpenStack install is ``binary``. It can be
changed to ``source`` using the ``-t`` option::
  kolla-build -t source
The locations of OpenStack source code are written in
``etc/kolla/kolla-build.conf``.
Now the source type supports ``url``, ``git``, and ``local``. The location of
the ``local`` source type can point to either a directory containing the source
code or to a tarball of the source. The ``local`` source type permits making
the best use of the Docker cache.
``etc/kolla/kolla-build.conf`` looks like::
  [glance-base]
  type = url
  location = http://tarballs.openstack.org/glance/glance-master.tar.gz

  [keystone]
  type = git
  location = https://git.openstack.org/openstack/keystone
  reference = stable/mitaka

  [heat-base]
  type = local
  location = /home/kolla/src/heat

  [ironic-base]
  type = local
  location = /tmp/ironic.tar.gz
To build RHEL containers, it is necessary to use the ``-i`` (include header)
feature to include registration with RHN of the container runtime operating
system. To obtain a RHN username/password/pool id, contact Red Hat.
First create a file called ``rhel-include``:
::
  RUN subscription-manager register --user=<user-name> --password=<password> \
      && subscription-manager attach --pool <pool-id>
Then build RHEL containers::
  kolla-build -b rhel -i ./rhel-include
Dockerfile Customisation
========================
As of the Newton release, the ``kolla-build`` tool provides a Jinja2 based
mechanism which allows operators to customise the Dockerfiles used to generate
Kolla images.
This offers a lot of flexibility on how images are built, e.g. installing extra
packages as part of the build, tweaking settings, installing plugins, and
numerous other capabilities. Some of these examples are described in more
detail below.
Generic Customisation
---------------------
Anywhere the line ``{% block ... %}`` appears may be modified. The Kolla
community has added blocks throughout the Dockerfiles where we think they will
be useful; however, operators are free to submit more if the ones provided are
inadequate.
The following is an example of how an operator would modify the setup steps
within the Horizon Dockerfile.
First, create a file to contain the customisations, e.g.
``template-overrides.j2``. In this, place the following::

  {% extends parent_template %}

  # Horizon
  {% block horizon_redhat_binary_setup %}
  RUN useradd --user-group myuser
  {% endblock %}
Then rebuild the horizon image, passing the ``--template-override`` argument::
  kolla-build --template-override template-overrides.j2 horizon
.. note::
   The above example will replace all contents from the original block. Hence
   in many cases one may want to copy the original contents of the block
   before making changes.
More specific functionality such as removing/appending entries is available
for packages, described in the next section.
Package Customisation
---------------------
Packages installed as part of a container build can be overridden, appended to,
and deleted. Taking the Horizon example, the following packages are installed
as part of a binary install type build:
* ``openstack-dashboard``
* ``httpd``
* ``mod_wsgi``
* ``gettext``
To add a package to this list, say, ``iproute``, first create a file, e.g.
``template-overrides.j2``. In this, place the following::

  {% extends parent_template %}

  # Horizon
  {% set horizon_packages_append = ['iproute'] %}
Then rebuild the horizon image, passing the ``--template-override`` argument::

  kolla-build --template-override template-overrides.j2 horizon
Alternatively ``template_override`` can be set in ``kolla-build.conf``.
The ``append`` suffix in the above example carries special significance. It
indicates the operation taken on the package list. The following is a complete
list of operations available:
override
    Replace the default packages with a custom list.
append
    Add a package to the default list.
remove
    Remove a package from the default list.
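For instance, a hypothetical override that drops ``gettext`` from the default
list uses the same file pattern with the ``remove`` operation described above::

  {% extends parent_template %}

  # Horizon: remove a package from the default binary install list
  {% set horizon_packages_remove = ['gettext'] %}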
Using a different base image
----------------------------
The base image can be specified with the ``--base-image`` argument. For
example::

  kolla-build --base-image registry.access.redhat.com/rhel7/rhel --base rhel
Plugin Functionality
--------------------
The Dockerfile customisation mechanism is also useful for adding/installing
plugins to services. An example of this is Neutron's third party L2 drivers_.
The bottom of each Dockerfile contains two blocks, ``image_name_footer``, and
``footer``. The ``image_name_footer`` is intended for image specific
modifications, while the ``footer`` can be used to apply a common set of
modifications to every Dockerfile.
For example, to add the ``networking-cisco`` plugin to the ``neutron_server``
image, add the following to the ``template-override`` file::
  {% extends parent_template %}

  {% block neutron_server_footer %}
  RUN git clone https://git.openstack.org/openstack/networking-cisco \
      && pip --no-cache-dir install networking-cisco
  {% endblock %}
Astute readers may notice there is one problem with this, however. Assuming
nothing else in the Dockerfile changes for a period of time, the above ``RUN``
statement will be cached by Docker, meaning new commits added to the Git
repository may be missed on subsequent builds. To solve this the Kolla build
tool also supports cloning additional repositories at build time, which will be
automatically made available to the build, within an archive named
``plugins-archive``.
.. note::
   The following is available for source build types only.
To use this, add a section to ``/etc/kolla/kolla-build.conf`` in the following
format::
  [<image>-plugin-<plugin-name>]
Where ``<image>`` is the image that the plugin should be installed into, and
``<plugin-name>`` is the chosen plugin identifier.
Continuing with the above example, add the following to
``/etc/kolla/kolla-build.conf``::
  [neutron-server-plugin-networking-cisco]
  type = git
  location = https://git.openstack.org/openstack/networking-cisco
  reference = master
The build will clone the repository, resulting in the following archive
structure::
  plugins-archive.tar
  |__ plugins
     |__networking-cisco
The template now becomes::

  {% block neutron_server_footer %}
  ADD plugins-archive /
  RUN pip --no-cache-dir install /plugins/*
  {% endblock %}
Custom Repos
------------
Red Hat
-------
The build method allows the operator to build containers from custom repos.
The repos are accepted as a list of comma separated values and can be in the
form of ``.repo``, ``.rpm``, or a url. See examples below.
Update ``rpm_setup_config`` in ``/etc/kolla/kolla-build.conf``::
  rpm_setup_config = http://trunk.rdoproject.org/centos7/currrent/delorean.repo,http://trunk.rdoproject.org/centos7/delorean-deps.repo
If specifying a ``.repo`` file, each ``.repo`` file will need to exist in the
same directory as the base Dockerfile (``kolla/docker/base``)::
  rpm_setup_config = epel.repo,delorean.repo,delorean-deps.repo
Ubuntu
------
For Debian based images, additional apt sources may be added to the build as
follows::
  apt_sources_list = custom.list
Known issues
============
#. Can't build base image because Docker fails to install systemd or httpd.
   There are some issues between Docker and AUFS. The simple workaround to
   avoid the issue is to add ``-s devicemapper`` or ``-s btrfs`` to
   ``DOCKER_OPTS``. Get more information about `the issue from the Docker bug
   tracker <https://github.com/docker/docker/issues/6980>`_ and `how to
   configure Docker with a BTRFS back end <https://docs.docker.com/engine/userguide/storagedriver/btrfs-driver/#prerequisites>`_.
#. Mirrors are unreliable.
   Some of the mirrors Kolla uses can be unreliable. As a result occasionally
   some containers fail to build. To rectify build problems, the build tool
   will automatically attempt three retries of a build operation if the first
   one fails. The retry count is modified with the ``--retries`` option.
Docker Local Registry
=====================
It is recommended to set up a local registry for Kolla developers or when
deploying *multinode*. The reason for using a local registry is that
deployment performance will operate at local network speeds, typically gigabit
networking. Beyond performance considerations, the Operator would have full
control over images that are deployed. If there is no local registry, nodes
pull images from Docker Hub when images are not found in local caches.
Setting up Docker Local Registry
--------------------------------
Running Docker registry is easy. Just use the following command::
  docker run -d -p 4000:5000 --restart=always --name registry \
    -v <local_data_path>:/var/lib/registry registry
.. note:: ``<local_data_path>`` points to the folder where Docker registry
   will store Docker images on the local host.
The default port of the Docker registry is 5000, but port 5000 is also used by
keystone-api. To avoid a conflict, use port 4000 as the Docker registry port.
Now the Docker registry service is running.
Docker Insecure Registry Config
-------------------------------
For Docker to pull images, it is necessary to modify the Docker configuration.
The guide assumes that the IP of the machine running Docker registry is
172.22.2.81.
In Ubuntu, add ``--insecure-registry 172.22.2.81:4000``
to ``DOCKER_OPTS`` in ``/etc/default/docker``.
In CentOS, uncomment ``INSECURE_REGISTRY`` and set ``INSECURE_REGISTRY``
to ``--insecure-registry 172.22.2.81:4000`` in ``/etc/sysconfig/docker``.
And restart the Docker service.
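For reference, the resulting configuration entries would look like the
following (using the example registry address above)::

  # Ubuntu: /etc/default/docker
  DOCKER_OPTS="--insecure-registry 172.22.2.81:4000"

  # CentOS: /etc/sysconfig/docker
  INSECURE_REGISTRY="--insecure-registry 172.22.2.81:4000"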
To build and push images to local registry, use the following command::
  kolla-build --registry 172.22.2.81:4000 --push
Kolla-ansible with Local Registry
---------------------------------
To make kolla-ansible pull images from local registry, set
``"docker_registry"`` to ``"172.22.2.81:4000"`` in
``"/etc/kolla/globals.yml"``. Make sure Docker is allowed to pull images from
insecure registry. See `Docker Insecure Registry Config`_.
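The corresponding ``/etc/kolla/globals.yml`` entry would be::

  docker_registry: "172.22.2.81:4000"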
Building behind a proxy
-----------------------
The build script supports augmenting the Dockerfiles under build via so called
`header` and `footer` files. Statements in the `header` file are included at
the top of the `base` image, while those in `footer` are included at the bottom
of every Dockerfile in the build.
A common use case for this is to insert http_proxy settings into the images to
fetch packages during build, and then unset them at the end to avoid having
them carry through to the environment of the final images. Note however, it's
not possible to drop the info completely using this method; it will still be
visible in the layers of the image.
To use this feature, create a file called ``.header``, with the following
content for example::
  ENV http_proxy=https://evil.corp.proxy:80
  ENV https_proxy=https://evil.corp.proxy:80
Then create another file called ``.footer``, with the following content::
  ENV http_proxy=""
  ENV https_proxy=""
Finally, pass them to the build script using the ``-i`` and ``-I`` flags::
  kolla-build -i .header -I .footer
Besides these configuration options, the script will automatically read these
environment variables. If the host system proxy parameters match the ones
going to be used, no other input parameters will be needed. These are the
variables that will be picked up from the user env::
  HTTP_PROXY, http_proxy, HTTPS_PROXY, https_proxy, FTP_PROXY,
  ftp_proxy, NO_PROXY, no_proxy
Also these variables can be overridden using ``--build-args``, which takes
precedence.
.. _drivers: https://wiki.openstack.org/wiki/Neutron#Plugins

View File

@ -51,10 +51,10 @@ Kolla Overview
production-architecture-guide
quickstart
multinode
image-building
advanced-configuration
operating-kolla
security
troubleshooting
Kolla Services
==============

View File

@ -4,33 +4,36 @@
Kibana in Kolla
===============
An OpenStack deployment generates vast amounts of log data. In order to
successfully monitor this and use it to diagnose problems, the standard "ssh
and grep" solution quickly becomes unmanageable.

Kolla can deploy Kibana as part of the E*K stack in order to allow operators to
search and visualise logs in a centralised manner.

Preparation and deployment
==========================

Modify the configuration file ``/etc/kolla/globals.yml`` and change
the following:

::

  enable_central_logging: "yes"

After successful deployment, Kibana can be accessed using a browser on
``<kolla_external_vip_address>:5601``.

The default username is ``kibana``, the password can be located under
``<kibana_password>`` in ``/etc/kolla/passwords.yml``.

When Kibana is opened for the first time, it requires creating a default index
pattern. To view, analyse and search logs, at least one index pattern has to be
created. To match indices stored in ElasticSearch, we suggest setting the
"Index name or pattern" field to ``log-*``. The rest of the fields can be left
as is.

After setting parameters, create an index by clicking the ``Create`` button.
.. note:: This step is necessary until the default Kibana dashboard is
   implemented in Kolla.
@ -38,15 +41,66 @@ After setting parameters, one can create an index with *Create* button.
Search logs - Discover tab
==========================
Operators can create and store searches based on various fields from logs, for
example, "show all logs marked with ERROR on nova-compute".
To do this, click the ``Discover`` tab. Fields from the logs can be filtered by
hovering over entries from the left hand side, and clicking ``add`` or
``remove``. Add the following fields:
* Hostname
* Payload
* severity_label
* programname
This yields an easy to read list of all log events from each node in the
deployment within the last 15 minutes. A "tail like" functionality can be
achieved by clicking the clock icon in the top right hand corner of the screen,
and selecting ``Auto-refresh``.
Logs can also be filtered down further. To use the above example, type
``programname:nova-compute`` in the search bar. Click the drop-down arrow from
one of the results, then the small magnifying glass icon from beside the
programname field. This should now show a list of all events from nova-compute
services across the cluster.
The current search can also be saved by clicking the ``Save Search`` icon
available from the menu on the right hand side.
Example: using Kibana to diagnose a common failure
--------------------------------------------------
The following example demonstrates how Kibana can be used to diagnose a common
OpenStack problem, where an instance fails to launch with the error 'No valid
host was found'.
First, re-run the server creation with ``--debug``:
::
  openstack --debug server create --image cirros --flavor m1.tiny \
    --key-name mykey --nic net-id=00af016f-dffe-4e3c-a9b8-ec52ccd8ea65 \
    demo1
In this output, look for the key ``X-Compute-Request-Id``. This is a unique
identifier that can be used to track the request through the system. An
example ID looks like this:
::
  X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5
Taking the value of ``X-Compute-Request-Id``, enter the value into the Kibana
search bar, minus the leading ``req-``. Assuming some basic filters have been
added as shown in the previous section, Kibana should now show the path this
request made through the OpenStack deployment, starting at a ``nova-api`` on
a control node, through the ``nova-scheduler``, ``nova-conductor``, and finally
``nova-compute``. Inspecting the ``Payload`` of the entries marked ``ERROR``
should quickly lead to the source of the problem.
While some knowledge of how Nova works is still required in this instance, it
can still be seen how Kibana helps in tracing this data, particularly in a
large scale deployment scenario.
Visualize data - Visualize tab
==============================

View File

@ -41,7 +41,7 @@ Cinder and Ceph are required, enable it in ``/etc/kolla/globals.yml``:
enable_cinder: "yes"
enable_ceph: "yes"
Enable Manila and generic back end in ``/etc/kolla/globals.yml``:
.. code-block:: console
@ -83,7 +83,7 @@ Launch an Instance
Before being able to create a share, manila with the generic driver and the
DHSS mode enabled requires the definition of at least an image, a network and a
share-network used to create a share server. For that back end
configuration, the share server is an instance where NFS/CIFS shares are
served.
@ -285,6 +285,72 @@ Mount the NFS share in the instance using the export location of the share:
# mount -v 10.254.0.3:/shares/share-422dc546-8f37-472b-ac3c-d23fe410d1b6 ~/test_folder
Share Migration
===============
As administrator, you can migrate a share with its data from one location to
another in a manner that is transparent to users and workloads. You can use
manila client commands to complete a share migration.
For share migration, it is necessary to modify ``manila.conf`` and set an IP
in the same provider network for ``data_node_access_ip``.
Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
.. code-block:: console
   [DEFAULT]
   data_node_access_ip = 10.10.10.199
.. note::
   Share migration requires more than one back end to be configured.
`Configure multiple back ends
<http://docs.openstack.org/developer/kolla/manila-hnas-guide.html#configure-multiple-back-ends>`__.
Use the manila migration command, as shown in the following example:
.. code-block:: console
   manila migration-start --preserve-metadata True|False \
     --writable True|False --force_host_assisted_migration True|False \
     --new_share_type share_type --new_share_network share_network \
     shareID destinationHost
- ``--force-host-copy``: Forces the generic host-based migration mechanism and
  bypasses any driver optimizations.
- ``destinationHost``: Is in this format ``host#pool`` which includes
  destination host and pool.
- ``--writable`` and ``--preserve-metadata``: Are only used for driver-assisted
  migration.
- ``--new_share_network``: Only if the driver supports share networks.
- ``--new_share_type``: Choose a share type compatible with destinationHost.
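Putting this together, a host-assisted migration of the ``demo-share1`` share
used below could be invoked as follows (the destination host is hypothetical)::

  manila migration-start --preserve-metadata False --writable False \
    --force_host_assisted_migration True \
    demo-share1 control01@generic#GENERIC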
Checking share migration progress
---------------------------------
Use the ``manila migration-get-progress shareID`` command to check progress.
.. code-block:: console
   manila migration-get-progress demo-share1

   +----------------+-----------------------+
   | Property       | Value                 |
   +----------------+-----------------------+
   | task_state     | data_copying_starting |
   | total_progress | 0                     |
   +----------------+-----------------------+

   manila migration-get-progress demo-share1

   +----------------+-------------------------+
   | Property       | Value                   |
   +----------------+-------------------------+
   | task_state     | data_copying_completing |
   | total_progress | 100                     |
   +----------------+-------------------------+
Use the ``manila migration-complete shareID`` command to complete the share
migration process.
For more information about how to manage shares, see the
`OpenStack User Guide

View File

@ -24,8 +24,8 @@ Requirements
- SSC CLI.
Supported shared file systems and operations
--------------------------------------------
The driver supports CIFS and NFS shares.
The following operations are supported:
@ -72,7 +72,8 @@ Preparation and Deployment
Configuration on Kolla deployment
---------------------------------
Enable Shared File Systems service and HNAS driver in
``/etc/kolla/globals.yml``
.. code-block:: console
@ -99,7 +100,7 @@ In ``/etc/kolla/globals.yml`` set:
HNAS back end configuration
---------------------------
In ``/etc/kolla/globals.yml`` uncomment and set:
@ -264,6 +265,61 @@ Verify Operation
| metadata | {} |
+-----------------------------+-----------------------------------------------------------------+
Configure multiple back ends
============================
An administrator can configure an instance of Manila to provision shares from
one or more back ends. Each back end leverages an instance of a vendor-specific
implementation of the Manila driver API.
The name of the back end is declared as a configuration option
``share_backend_name`` within a particular configuration stanza that contains
the related configuration options for that back end.

So, in the case of a multiple back end deployment, it is necessary to change
the default share back ends before deployment.
Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
.. code-block:: console
   [DEFAULT]
   enabled_share_backends = generic,hnas1,hnas2
Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:
.. code-block:: console
   [generic]
   share_driver = manila.share.drivers.generic.GenericShareDriver
   interface_driver = manila.network.linux.interface.OVSInterfaceDriver
   driver_handles_share_servers = True
   service_instance_password = manila
   service_instance_user = manila
   service_image_name = manila-service-image
   share_backend_name = GENERIC

   [hnas1]
   share_backend_name = HNAS1
   share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
   driver_handles_share_servers = False
   hitachi_hnas_ip = <hnas_ip>
   hitachi_hnas_user = <user>
   hitachi_hnas_password = <password>
   hitachi_hnas_evs_id = <evs_id>
   hitachi_hnas_evs_ip = <evs_ip>
   hitachi_hnas_file_system_name = FS-Manila1

   [hnas2]
   share_backend_name = HNAS2
   share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
   driver_handles_share_servers = False
   hitachi_hnas_ip = <hnas_ip>
   hitachi_hnas_user = <user>
   hitachi_hnas_password = <password>
   hitachi_hnas_evs_id = <evs_id>
   hitachi_hnas_evs_ip = <evs_ip>
   hitachi_hnas_file_system_name = FS-Manila2
For more information about how to manage shares, see the
`OpenStack User Guide

View File

@ -4,28 +4,40 @@
Multinode Deployment of Kolla
=============================
.. _deploy_a_registry:

Deploy a registry
=================
A Docker registry is a locally hosted registry that replaces the need to pull
from the Docker Hub to get images. Kolla can function with or without a local
registry, however for a multinode deployment some type of registry is mandatory.
Only one registry must be deployed, although HA features exist for registry
services.
The Docker registry prior to version 2.3 has extremely bad performance because
all container data is pushed for every image rather than taking advantage of
Docker layering to optimize push operations. For more information reference
`pokey registry <https://github.com/docker/docker/issues/14018>`__.
The Kolla community recommends using registry 2.3 or later. To deploy registry
with version 2.3 or later, do the following:

::

  tools/start-registry

The ``start-registry`` script configures a docker registry that proxies Kolla
images from Docker Hub, and can also be used with custom built images (see
:doc:`image-building`).

.. _configure_docker_all_nodes:

Configure Docker on all nodes
=============================

.. note:: As the subtitle for this section implies, these steps should be
   applied to all nodes, not just the deployment node.
After starting the registry, it is necessary to instruct Docker that it will
be communicating with an insecure registry. To enable insecure registry
@ -36,7 +48,7 @@ registry is currently running:
::
  # CentOS
  INSECURE_REGISTRY="--insecure-registry 192.168.1.100:5000"
For Ubuntu, check whether it's using upstart or systemd.
@ -50,7 +62,7 @@ Edit ``/etc/default/docker`` and add:
::
  # Ubuntu
  DOCKER_OPTS="--insecure-registry 192.168.1.100:5000"
If Ubuntu is using systemd, additional settings need to be configured.
Copy Docker's systemd unit file to ``/etc/systemd/system/`` directory:
@ -94,9 +106,9 @@ Edit the Inventory File
=======================
The ansible inventory file contains all the information needed to determine
what services will land on which hosts. Edit the inventory file in the kolla
directory ``ansible/inventory/multinode``. If kolla was installed with pip,
the inventory file can be found in ``/usr/share/kolla``.
Add the IP addresses or hostnames to a group and the services associated with
that group will land on that host:
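For instance, a minimal sketch (hypothetical addresses; group names follow the
shipped multinode inventory)::

  [control]
  192.168.122.24
  192.168.122.25

  [compute]
  192.168.122.26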

View File

@ -88,6 +88,7 @@ For the source code, please refer to the following link:
https://github.com/openstack/networking-sfc
Neutron VPNaaS (VPN-as-a-Service)
=================================
@ -149,3 +150,4 @@ the OpenStack wiki:
https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall
https://wiki.openstack.org/wiki/Neutron/VPNaaS

View File

@ -4,96 +4,90 @@
Quick Start
===========
This guide provides step by step instructions to deploy OpenStack using Kolla
and Kolla-Ansible on bare metal servers or virtual machines.
Host machine requirements
=========================
The host machine must satisfy the following minimum requirements:

- 2 network interfaces
- 8GB main memory
- 40GB disk space
.. note::

   Root access to the deployment host machine is required.

Recommended environment
=======================
This guide recommends using a bare metal server or a virtual machine. Follow
the instructions in this document to get started with deploying OpenStack on
bare metal or a virtual machine with Kolla.
Prerequisites
=============

Verify the state of network interfaces. If using a VM spawned on
OpenStack as the host machine, the state of the second interface will be DOWN
on booting the VM.

::

  ip addr show

Bring up the second network interface if it is down.

::

  ip link set ens4 up

Verify if the second interface has an IP address.

::

  ip addr show

Install dependencies
====================
Kolla builds images which are used by Kolla-Ansible to deploy OpenStack. The
deployment is tested on CentOS, Oracle Linux and Ubuntu as both container OS
platforms and bare metal deployment targets.
Fedora: Kolla will not run on Fedora 22 and later as a bare metal deployment
target. These distributions compress kernel modules with the .xz compressed
format. The guestfs system in the CentOS family of containers cannot read
these images because a dependent package supermin in CentOS needs to be updated
to add .xz compressed format support.
Ubuntu: For Ubuntu based systems where Docker is used it is recommended to use
the latest available LTS kernel. While all kernels should work for Docker, some
older kernels may have issues with some of the different Docker back ends such
as AUFS and OverlayFS. In order to update the kernel in Ubuntu 14.04 LTS to
4.2, run:
::
  apt-get install linux-image-generic-lts-wily
.. WARNING::
   Operators performing an evaluation or deployment should use a stable
   branch. Operators performing development (or developers) should use
   master.
.. note:: Install is *very* sensitive about version of components. Please
   review carefully because default Operating System repos are likely out of
   date.
Dependencies for the stable/mitaka branch are:
===================== =========== =========== =========================
Component             Min Version Max Version Comment
===================== =========== =========== =========================
Ansible               1.9.4       <2.0.0      On deployment host
Docker                1.10.0      none        On target nodes
Docker Python         1.6.0       none        On target nodes
Python Jinja2         2.6.0       none        On deployment host
===================== =========== =========== =========================
Dependencies for the stable/newton branch and later (including master branch)
are:

===================== =========== =========== =========================
Component             Min Version Max Version Comment
@ -104,32 +98,78 @@ Docker Python 1.6.0 none On target nodes
Python Jinja2         2.8.0       none        On deployment host
===================== =========== =========== =========================
Make sure the ``pip`` package manager is installed and upgraded to the latest
before proceeding:
::

  # CentOS
  yum install epel-release
  yum install python-pip

  # Ubuntu
  apt-get update
  apt-get install python-pip

  pip install -U pip
Install dependencies needed to build the code with ``pip`` package manager.
::

  # CentOS
  yum install python-devel libffi-devel gcc openssl-devel

  # Ubuntu
  apt-get install python-dev libffi-dev gcc libssl-dev
Kolla deploys OpenStack using `Ansible <http://www.ansible.com>`__. Install
Ansible from distribution packaging if the distro packaging has the
recommended version available.
Some implemented distro versions of Ansible are too old to use distro
packaging. Currently, CentOS and RHEL package Ansible >2.0 which is suitable
for use with Kolla. Note that you will need to enable access to the EPEL
repository to install via yum -- to do so, take a look at Fedora's EPEL `docs
<https://fedoraproject.org/wiki/EPEL>`__ and `FAQ
<https://fedoraproject.org/wiki/EPEL/FAQ>`__.
On CentOS or RHEL systems, this can be done using:
::
  yum install ansible
Many DEB based systems do not meet Kolla's Ansible version requirements. It is
recommended to use pip to install Ansible >2.0. Finally Ansible >2.0 may be
installed using:
::
  pip install -U ansible
.. note:: It is recommended to use virtualenv to install non-system packages.
If DEB based systems include a version of Ansible that meets Kolla's version
requirements it can be installed by:
::
  apt-get install ansible
.. WARNING::
   Kolla uses PBR in its implementation. PBR provides version information
   to Kolla about the package in use. This information is later used when
   building images to specify the Docker tag used in the image built. When
   installing the Kolla package via pip, PBR will always use the PBR version
   information. When obtaining a copy of the software via git, PBR will use
   the git version information, but **ONLY** if Kolla has not been pip
   installed via the pip package manager. This is why there is an operator
   workflow and a developer workflow.
The following dependencies can be installed by bootstrapping the host machine
as described in the `Automatic host bootstrap`_ section. For manual
installation, follow the instructions below:
Since Docker is required to build images as well as be present on all deployed
targets, the Kolla community recommends installing the official Docker, Inc.
@ -176,28 +216,6 @@ Restart Docker by executing the following commands:
  systemctl daemon-reload
  systemctl restart docker
On the target hosts you also need an updated version of the Docker python
libraries:
@ -208,7 +226,7 @@ libraries:
  yum install python-docker-py
Or using ``pip`` to install the latest version:
::
@ -258,180 +276,226 @@ On Ubuntu, apparmor will sometimes prevent libvirt from working.
::
  /usr/sbin/libvirtd: error while loading shared libraries:
  libvirt-admin.so.0: cannot open shared object file: Permission denied
If you are seeing the libvirt container fail with the error above, disable the
libvirt profile.
::
  sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
.. note::

   On Ubuntu 16.04, please uninstall lxd and lxc packages. (An issue exists
   with cgroup mounts, mounts exponentially increasing when restarting
   container).
Additional steps for upstart and other non-systemd distros
==========================================================

For Ubuntu 14.04 which uses upstart and other non-systemd distros, run the
following:

::

  mount --make-shared /run
  mount --make-shared /var/lib/nova/mnt

If ``/var/lib/nova/mnt`` is not present, the following workaround can be used:

::

  mkdir -p /var/lib/nova/mnt /var/lib/nova/mnt1
  mount --bind /var/lib/nova/mnt1 /var/lib/nova/mnt
  mount --make-shared /var/lib/nova/mnt

For mounting ``/run`` and ``/var/lib/nova/mnt`` as shared upon startup, edit
``/etc/rc.local`` to add the following:

::

  mount --make-shared /run
  mount --make-shared /var/lib/nova/mnt
.. note::

   If CentOS/Fedora/OracleLinux container images are built on an Ubuntu host,
   the back-end storage driver must not be AUFS (see the known issues in
   :doc:`image-building`).

Install Kolla for deployment or evaluation
==========================================

Install Kolla and its dependencies using pip.

::

  pip install kolla
Copy the configuration files ``globals.yml`` and ``passwords.yml`` to the
``/etc`` directory.

::

  # CentOS
  cp -r /usr/share/kolla/etc_examples/kolla /etc/kolla/

  # Ubuntu
  cp -r /usr/local/share/kolla/etc_examples/kolla /etc/kolla/
The inventory files (all-in-one and multinode) are located in
``/usr/local/share/kolla/ansible/inventory``. Copy the inventory files to the
current directory.

::

  # CentOS
  cp /usr/share/kolla/ansible/inventory/* .

  # Ubuntu
  cp /usr/local/share/kolla/ansible/inventory/* .
Install Kolla for development
=============================

Clone the Kolla and Kolla-Ansible repositories from git.

::

  git clone https://github.com/openstack/kolla
  git clone https://github.com/openstack/kolla-ansible

Kolla-ansible holds the configuration files (``globals.yml`` and
``passwords.yml``) in ``etc/kolla``. Copy the configuration files to the
``/etc`` directory.

::

  cp -r kolla-ansible/etc/kolla /etc/kolla/
Kolla-ansible holds the inventory files (all-in-one and multinode) in
``ansible/inventory``. Copy the inventory files to the current directory.

::

  cp kolla-ansible/ansible/inventory/* .
Local Registry
==============
A local registry is recommended but not required for an ``all-in-one``
installation when developing for master. Since no master images are available
on Docker Hub, the Docker cache may be used for all-in-one deployments. When
deploying multinode, a registry is strongly recommended to serve as a single
source of images. Reference the :doc:`multinode` for more information on using
a local Docker registry. Otherwise, the Docker Hub Image Registry contains all
images from each of Kolla's major releases. The latest release tag is 3.0.2 for
Newton.
Automatic host bootstrap
========================

Edit the ``/etc/kolla/globals.yml`` file to configure interfaces.

::

  network_interface: "ens3"
  neutron_external_interface: "ens4"

Generate passwords. This will populate all empty fields in the
``/etc/kolla/passwords.yml`` file using randomly generated values to secure the
deployment. Optionally, the passwords may be populated in the file by hand.

::

  kolla-genpwd

To quickly prepare hosts, the playbook ``bootstrap-servers`` can be used. This
is an Ansible playbook which works on Ubuntu 14.04, 16.04 and CentOS 7 hosts to
install and prepare the cluster for OpenStack installation.

::

  kolla-ansible -i <<inventory file>> bootstrap-servers
Build container images
======================

When running with systemd, edit the file
``/etc/systemd/system/docker.service.d/kolla.conf`` to include the MTU size to
be used for Docker containers.

::

    [Service]
    MountFlags=shared
    ExecStart=
    ExecStart=/usr/bin/docker daemon \
      -H fd:// \
      --mtu 1400

.. note::

   The MTU size should be less than or equal to the MTU size allowed on the
   network interfaces of the host machine. If the MTU size allowed on the
   network interfaces of the host machine is 1500 then this step can be
   skipped. This step is relevant for building containers. Actual OpenStack
   services won't be affected.

.. note::

   Verify that the ``MountFlags`` parameter is configured as ``shared``. If
   the ``MountFlags`` option is not set correctly then kolla-ansible will fail
   to deploy the neutron-dhcp-agent container and will throw an
   APIError/HTTPError.

Restart Docker and ensure that Docker is running.

::

    systemctl daemon-reload
    systemctl restart docker
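As a quick sanity check, the host MTU and the effective Docker unit (including
the ``kolla.conf`` drop-in) can be inspected. This is a sketch; interface
names will differ per host:

::

    # Show host interfaces and their MTU; the Docker MTU must not exceed these.
    ip link show

    # Print the effective docker unit file, including any drop-ins.
    systemctl cat docker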
The Kolla community builds and pushes tested images for each tagged release of
Kolla. Pull the required images with appropriate tags.

::

    kolla-ansible pull

View the images.

::

    docker images

Developers running from master are required to build container images as
the Docker Hub does not contain built images for the master branch.
Reference the :doc:`image-building` for more advanced build configuration.

Before running the instructions below, ensure the Docker daemon is running
or the build process will fail. To build images using default parameters run:

::

    kolla-build
By default ``kolla-build`` will build all containers using CentOS as the base
image and binary installation as the base installation method. To change this
behavior, please use the following parameters with ``kolla-build``:

::

    --base [ubuntu|centos|oraclelinux]
    --type [binary|source]

If pushing to a local registry (recommended) use the flags:

::

    kolla-build --registry registry_ip_address:registry_ip_port --push

.. note::

   ``--base`` and ``--type`` can be added to the above ``kolla-build``
   command if different distributions or types are desired.
It is also possible to build individual container images. As an example, if the
glance images failed to build, all glance related images can be rebuilt as
follows:

::

    kolla-build glance

In order to see all available parameters, run:

::

    kolla-build -h

For more information about building Kolla container images, check the detailed
instructions in :doc:`image-building`.
View the images.

::

    docker images

.. warning::

   Mixing of OpenStack releases with Kolla releases (for example, updating
   ``kolla-build.conf`` to build Mitaka Keystone to be deployed with Newton
   Kolla) is not recommended and will likely cause issues.
.. _deploying-kolla:

Deploying Kolla
===============

Kolla-Ansible is used to deploy containers by using images built by Kolla.
There are two methods of deployment: *all-in-one* and *multinode*. The
*all-in-one* deployment is similar to a `devstack
<http://docs.openstack.org/developer/devstack/>`__ deploy which installs all
OpenStack services on a single host. In the *multinode* deployment, OpenStack
services can be run on specific hosts. This documentation describes deploying
an *all-in-one* setup. To set up *multinode*, see the :doc:`multinode`.
Each method is represented as an Ansible inventory file. More information on
the Ansible inventory file can be found in the Ansible `inventory introduction
<http://docs.ansible.com/ansible/intro_inventory.html>`__.
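For illustration only, a multinode inventory maps hosts to service groups; a
minimal sketch with placeholder hostnames might look like:

::

    [control]
    control01

    [network]
    network01

    [compute]
    compute01

    [storage]
    storage01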
Generate passwords. This will populate all empty fields in the
``/etc/kolla/passwords.yml`` file using randomly generated values to secure the
deployment. Optionally, the passwords may be populated in the file by hand.

::

    kolla-genpwd
Start by editing ``/etc/kolla/globals.yml``. Check and edit, if needed, these
parameters: ``kolla_base_distro``, ``kolla_install_type``. The default for
``kolla_base_distro`` is ``centos`` and for ``kolla_install_type`` is
``binary``. If you want to use Ubuntu with the source type, then you should
make sure ``globals.yml`` has the following entries:

::

    kolla_base_distro: "ubuntu"
    kolla_install_type: "source"
Please specify an unused IP address in the network to act as a VIP for
``kolla_internal_vip_address``. The VIP will be used with keepalived and added
to the ``api_interface`` as specified in ``globals.yml``.

::

    kolla_internal_vip_address: "192.168.137.79"

.. note::

   The ``kolla_internal_vip_address`` must be unique and should belong to the
   same network as the first network interface.

.. note::

   The ``kolla_base_distro`` and ``kolla_install_type`` should be the same as
   the ``base`` and ``install_type`` used on the ``kolla-build`` command line.
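Before deploying, it is worth confirming that the chosen VIP
(``kolla_internal_vip_address``) is not already in use. A simple check,
substituting your own address, is:

::

    # No replies should be received if the VIP is free.
    ping -c 3 192.168.137.79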
The ``network_interface`` variable is the interface to which Kolla binds API
services. For example, when starting MariaDB, it will bind to the IP on the
interface listed in the ``network_interface`` variable.

::

    network_interface: "ens3"
The ``neutron_external_interface`` variable is the interface that will be used
for the external bridge in Neutron. Without this bridge the deployment instance
traffic will be unable to access the rest of the Internet. In the case of a
single interface on a machine, a veth pair may be used where one end of the
veth pair is listed here and the other end is in a bridge on the system.

::

    neutron_external_interface: "ens4"
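For the single-interface case, the following sketch creates a veth pair and
attaches one end to an existing bridge; ``br-ex`` and the veth names are
assumptions for illustration only:

::

    ip link add veth0 type veth peer name veth1
    ip link set veth0 up
    ip link set veth1 up
    brctl addif br-ex veth1

``veth0`` would then be listed as the ``neutron_external_interface``.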
If using a local Docker registry, set the ``docker_registry`` information where
the local registry is operating on IP address 192.168.1.100 and the port 4000.

::

    docker_registry: "192.168.1.100:4000"
In case of deployment using a **nested** environment (e.g. using VirtualBox
VMs or KVM VMs), verify whether your compute node supports hardware
acceleration for virtual machines by executing the following command on the
*compute node*.

::

    egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of **zero**, your compute node does not support
hardware acceleration and you **must** configure libvirt to use **QEMU**
instead of KVM. Create the file ``/etc/kolla/config/nova/nova-compute.conf``
and add the content shown below.

::

    mkdir /etc/kolla/config/nova
    cat << EOF > /etc/kolla/config/nova/nova-compute.conf
    [libvirt]
    virt_type=qemu
    EOF

For *all-in-one* deployments, the following commands can be run. These will
set up all of the containers on the localhost. These commands will be
wrapped in the kolla-script in the future.

.. note:: Even for all-in-one installs it is possible to use the Docker
   registry for deployment, although not strictly required.
First, validate that the deployment targets are in a state where Kolla may
deploy to them. Provide the correct path to the inventory file in the
following commands.

::

    kolla-ansible prechecks -i /path/to/all-in-one

Deploy OpenStack.

::

    kolla-ansible deploy -i /path/to/all-in-one

List the running containers.

::

    docker ps -a

Generate the ``admin-openrc.sh`` file. The file will be created in the
``/etc/kolla/`` directory. Alternatively, view ``tools/openrc-example`` for an
example of an openrc that may be used with the environment.

::

    kolla-ansible post-deploy

To test your deployment, run the following commands to initialize the network
with a glance image and neutron networks.

::

    source /etc/kolla/admin-openrc.sh

    # CentOS
    cd /usr/share/kolla
    ./init-runonce

    # Ubuntu
    cd /usr/local/share/kolla
    ./init-runonce
.. note::

   A bare metal system with Ceph takes about 18 minutes to deploy, while a
   virtual machine deployment takes about 25 minutes. These are estimates;
   different hardware results in variance with deployment times.
After successful deployment of OpenStack, the Horizon dashboard will be
available by entering the IP address or hostname from ``kolla_external_fqdn``
or ``kolla_internal_fqdn``. If these variables were not set during deploy they
default to ``kolla_internal_vip_address``.
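To log in to Horizon as the ``admin`` user, the generated password can be read
from the passwords file; a convenience sketch assuming the default key name:

::

    grep ^keystone_admin_password /etc/kolla/passwords.yml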
.. _Docker Hub Image Registry: https://hub.docker.com/u/kolla/
doc/troubleshooting.rst
.. _troubleshooting:

=====================
Troubleshooting Guide
=====================
Failures
========

If Kolla fails, it is often caused by a CTRL-C during the deployment
process or a problem in the ``globals.yml`` configuration.

To correct the problem where Operators have a misconfigured environment, the
Kolla community has added a precheck feature which ensures the deployment
targets are in a state where Kolla may deploy to them. To run the prechecks,
execute:

Production
----------

::

    kolla-ansible prechecks

Development
-----------

::

    ./tools/kolla-ansible prechecks
If a failure during deployment occurs it nearly always occurs during evaluation
of the software. Once the Operator learns the few configuration options
required, it is highly unlikely they will experience a failure in deployment.

Deployment may be run as many times as desired, but if a failure in a
bootstrap task occurs, a further deploy action will not correct the problem.
In this scenario, Kolla's behavior is undefined.

The fastest way to recover from a deployment failure is to remove the failed
deployment:

Production
----------

::

    kolla-ansible destroy -i <<inventory-file>>

Development
-----------

::

    ./tools/kolla-ansible destroy -i <<inventory-file>>
Any time the tags of a release change, it is possible that the container
implementation from older versions won't match the Ansible playbooks in a new
version. If running multinode from a registry, each node's Docker image cache
must be refreshed with the latest images before a new deployment can occur. To
refresh the Docker cache from the local Docker registry:

Production
----------

::

    kolla-ansible pull

Development
-----------

::

    ./tools/kolla-ansible pull
Debugging Kolla
===============

The status of containers after deployment can be determined on the deployment
targets by executing:

::

    docker ps -a

If any of the containers exited, this indicates a bug in the container. Please
seek help by filing a `launchpad bug`_ or contacting the developers via IRC.

The logs can be examined by executing:

::

    docker exec -it heka bash

The logs from all services in all containers may be read from
``/var/log/kolla/SERVICE_NAME``.
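For example, to list the per-service log directories without opening an
interactive shell (assuming the default ``heka`` logging container is
running):

::

    docker exec heka ls /var/log/kolla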
If the stdout logs are needed, please run:

::

    docker logs <container-name>

Note that most of the containers don't log to stdout so the above command will
provide no information.

To learn more about Docker command line operation please refer to the `Docker
documentation <https://docs.docker.com/reference/commandline/cli/>`__.
When ``enable_central_logging`` is enabled, to view the logs in a web browser
using Kibana, go to:

::

    http://<kolla_internal_vip_address>:<kibana_server_port>
    or http://<kolla_external_vip_address>:<kibana_server_port>

and authenticate using ``<kibana_user>`` and ``<kibana_password>``.

The values ``<kolla_internal_vip_address>``, ``<kolla_external_vip_address>``,
``<kibana_server_port>`` and ``<kibana_user>`` can be found in
``<kolla_install_path>/kolla/ansible/group_vars/all.yml`` or, if the default
values are overridden, in ``/etc/kolla/globals.yml``. The value of
``<kibana_password>`` can be found in ``/etc/kolla/passwords.yml``.

.. note:: When you log in to the Kibana web interface for the first time, you
   are prompted to create an index. Please create an index using the name
   ``log-*``. This step is necessary until the default Kibana dashboard is
   implemented in Kolla.
.. _launchpad bug: https://bugs.launchpad.net/kolla/+filebug

Install required dependencies as follows:
On CentOS 7::

    sudo yum install vagrant ruby-devel libvirt-devel libvirt-python zlib-devel libpng-devel gcc git

On Fedora 22 or later::

    sudo dnf install vagrant ruby-devel libvirt-devel libvirt-python zlib-devel libpng-devel gcc git

On Ubuntu 14.04 or later::

    sudo apt-get install vagrant ruby-dev ruby-libvirt python-libvirt libvirt-dev nfs-kernel-server zlib-dev libpng-dev gcc git

.. note:: Many distros ship outdated versions of Vagrant by default. When in
   doubt, always install the latest from the downloads page above.
On Fedora 22::

    sudo systemctl start libvirtd
    sudo systemctl enable libvirtd

Find a location in the system's home directory and checkout the Kolla repo::

    git clone https://git.openstack.org/openstack/kolla

Developers can now tweak the Vagrantfile or bring up the default **all-in-one**
CentOS 7-based environment::

    cd kolla/contrib/dev/vagrant && vagrant up

The command ``vagrant status`` provides a quick overview of the VMs composing
the environment.