Merge "Remove all kolla-ansible related docs"

This commit is contained in:
Jenkins 2017-02-05 02:27:47 +00:00 committed by Gerrit Code Review
commit 061c3f8371
24 changed files with 5 additions and 3935 deletions

View File

@ -27,14 +27,6 @@ Basics
.. _launchpad: https://bugs.launchpad.net/kolla
.. _here: https://wiki.openstack.org/wiki/GitCommitMessages
Development Environment
=======================
Please follow our `quickstart`_ to deploy your environment and test your
changes.
.. _quickstart: http://docs.openstack.org/developer/kolla/quickstart.html
Please use the existing sandbox repository, available at
https://git.openstack.org/cgit/openstack-dev/sandbox, for learning, understanding
and testing the `Gerrit Workflow`_.
@ -98,93 +90,3 @@ is as follows::
``{{ include_footer }}`` is legacy and should not be included in new services; it
is superseded by ``{% block footer %}{% endblock %}``.
Orchestration
-------------
As of the Newton release there are two main orchestration methods in existence
for Kolla: Ansible and Kubernetes. Ansible is the more mature of the two and is
generally regarded as the reference implementation.
When adding a role for a new service in Ansible, there are a couple of patterns
that Kolla uses throughout which should be followed.
* The sample inventories
Entries should be added for the service in each of
``ansible/inventory/multinode`` and ``ansible/inventory/all-in-one``; see the
inventory sketch at the end of this section.
* The playbook
The main playbook that ties all roles together is in ``ansible/site.yml``,
this should be updated with appropriate roles, tags, and conditions. Ensure
also that supporting hosts such as haproxy are updated when necessary.
* The common role
A ``common`` role exists which sets up logging, ``kolla-toolbox`` and other
supporting components. It should be included by every service within the
``meta/main.yml`` of your role, as shown in the sketch after this list.
* Common tasks
All services should include the following tasks:
- ``reconfigure.yml`` : Used to push new configuration files to the host
and restart the service.
- ``pull.yml`` : Used to pre-fetch the image into the Docker image cache
on hosts, to speed up initial deploys.
- ``upgrade.yml`` : Used for upgrading the service in a rolling fashion. May
include service-specific setup and steps, as not all services can be
upgraded in the same way.
* Log delivery
- For OpenStack services the service has to be added to the ``file_match``
parameter in the ``openstack_logstreamer_input`` section in the
``heka-openstack.toml.j2`` template file in
``ansible/roles/common/templates`` to deliver log messages to Elasticsearch.
* Log rotation
- For OpenStack services there should be a ``cron-logrotate-PROJECT.conf.j2``
template file in ``ansible/roles/common/templates`` with the following
content:

  .. code::

     "/var/log/kolla/PROJECT/*.log"
     {
     }
- For OpenStack services there should be an entry in the ``services`` list
in the ``cron.json.j2`` template file in ``ansible/roles/common/templates``.
* Documentation
- For OpenStack services there should be an entry in the list
``OpenStack services`` in the ``README.rst`` file.
- For infrastructure services there should be an entry in the list
``Infrastructure components`` in the ``README.rst`` file.
* Syntax
- All YAML data files should start with three dashes (``---``).
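As a minimal sketch of the ``common`` role inclusion described above, the
``meta/main.yml`` of a new role typically declares the dependency along these
lines (exact parameters vary between services, so treat this as illustrative
rather than canonical):

.. code-block:: yaml

   ---
   dependencies:
     - { role: common }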
Other than the above, most roles follow the following pattern:
- ``Register``: Involves registering the service with Keystone, creating
endpoints, roles, users, etc.
- ``Config``: Distributes the config files to the nodes to be pulled into
the container on startup.
- ``Bootstrap``: Creating the database (but not tables), database user for
the service, permissions, etc.
- ``Bootstrap Service``: Starts a one-shot container on the host to create
the database tables and other initial run-time config.
- ``Start``: Start the service(s).
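As for the sample inventory entries mentioned earlier, a new service is
usually added as a group that maps onto one of the existing top-level groups;
the service name below is hypothetical:

.. code-block:: ini

   [myservice:children]
   control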

View File

@ -1,223 +0,0 @@
.. _advanced-configuration:
======================
Advanced Configuration
======================
Endpoint Network Configuration
==============================
When an OpenStack cloud is deployed, each service's REST API is presented
as a series of endpoints. These endpoints are the admin URL, the internal
URL, and the external URL.
Kolla offers two options for assigning these endpoints to network addresses:
- Combined - Where all three endpoints share the same IP address
- Separate - Where the external URL is assigned to an IP address that is
different than the IP address shared by the internal and admin URLs
The configuration parameters related to these options are:
- kolla_internal_vip_address
- network_interface
- kolla_external_vip_address
- kolla_external_vip_interface
For the combined option, set the two variables below, while allowing the
other two to accept their default values. In this configuration all REST
API requests, internal and external, will flow over the same network. ::
kolla_internal_vip_address: "10.10.10.254"
network_interface: "eth0"
For the separate option, set these four variables. In this configuration
the internal and external REST API requests can flow over separate
networks. ::
kolla_internal_vip_address: "10.10.10.254"
network_interface: "eth0"
kolla_external_vip_address: "10.10.20.254"
kolla_external_vip_interface: "eth1"
Fully Qualified Domain Name Configuration
=========================================
When addressing a server on the internet, it is more common to use
a name, like www.example.net, instead of an address like 10.10.10.254.
If you prefer to use names to address the endpoints in your kolla
deployment, use the variables:
- kolla_internal_fqdn
- kolla_external_fqdn
::
kolla_internal_fqdn: inside.mykolla.example.net
kolla_external_fqdn: mykolla.example.net
Provision must be made outside of kolla for these names to map to the
configured IP addresses. Using a DNS server or the /etc/hosts file are
two ways to create this mapping.
TLS Configuration
=================
An additional endpoint configuration option is to enable or disable
TLS protection for the external VIP. TLS allows a client to authenticate
the OpenStack service endpoint and allows for encryption of the requests
and responses.
.. note:: The kolla_internal_vip_address and kolla_external_vip_address must
be different to enable TLS on the external network.
The configuration variables that control TLS networking are:
- kolla_enable_tls_external
- kolla_external_fqdn_cert
The default for TLS is disabled; to enable TLS networking:
::
kolla_enable_tls_external: "yes"
kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/mycert.pem"
.. note:: TLS authentication is based on certificates that have been
signed by trusted Certificate Authorities. Examples of commercial
CAs are Comodo, Symantec, GoDaddy, and GlobalSign. Letsencrypt.org
is a CA that will provide trusted certificates at no charge. Many
companies' IT departments will provide certificates within that
company's domain. If using a trusted CA is not possible for your
situation, you can use `OpenSSL`_ to create your own or see the section
below about kolla-generated self-signed certificates.
Two certificate files are required to use TLS securely with authentication.
These two files will be provided by your Certificate Authority. These
two files are the server certificate with private key and the CA certificate
with any intermediate certificates. The server certificate needs to be
installed with the kolla deployment and is configured with the
``kolla_external_fqdn_cert`` parameter. If the server certificate provided
is not already trusted by the client, then the CA certificate file will
need to be distributed to the client.
When using TLS to connect to a public endpoint, an OpenStack client will
have settings similar to this:
::
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo-password
export OS_AUTH_URL=https://mykolla.example.net:5000
# os_cacert is optional for trusted certificates
export OS_CACERT=/etc/pki/mykolla-cacert.crt
export OS_IDENTITY_API_VERSION=3
.. _OpenSSL: https://www.openssl.org/
Self-Signed Certificates
========================
.. note:: Self-signed certificates should never be used in production.
It is not always practical to get a certificate signed by a well-known
trusted CA, for example in a development or internal test kolla deployment. In
these cases it can be useful to have a self-signed certificate to use.
For convenience, the kolla-ansible command will generate the necessary
certificate files based on the information in the ``globals.yml``
configuration file:
::
kolla-ansible certificates
The files haproxy.pem and haproxy-ca.pem will be generated and stored
in the ``/etc/kolla/certificates/`` directory.
OpenStack Service Configuration in Kolla
========================================
.. note:: As of now kolla only supports config overrides for INI-based configs.
An operator can change the location where custom config files are read from by
editing ``/etc/kolla/globals.yml`` and adding the following line.
::
# The directory to merge custom config files into kolla's config files
node_custom_config: "/etc/kolla/config"
Kolla allows the operator to override configuration of services. Kolla will
look for a file in ``/etc/kolla/config/<< service name >>/<< config file >>``.
This can be done per-project, per-service or per-service-on-specified-host.
For example, to override ``scheduler_max_attempts`` in the nova scheduler, the
operator needs to create ``/etc/kolla/config/nova/nova-scheduler.conf`` with content:
::
[DEFAULT]
scheduler_max_attempts = 100
If the operator wants to configure the compute node RAM allocation ratio
on host ``myhost``, the operator needs to create the file
``/etc/kolla/config/nova/myhost/nova.conf`` with content:
::
[DEFAULT]
ram_allocation_ratio = 5.0
The operator can make these changes after the services have already been
deployed by using the following command:
::
kolla-ansible reconfigure
IP Address Constrained Environments
===================================
If a development environment doesn't have a free IP address available for VIP
configuration, the host's IP address may be used instead by disabling HAProxy,
adding:
::
enable_haproxy: "no"
Note this method is not recommended and generally not tested by the
Kolla community, but it is included since sometimes a free IP is not available
in a testing environment.
External Elasticsearch/Kibana environment
=========================================
It is possible to use an external Elasticsearch/Kibana environment. To do this,
first disable the deployment of the central logging:
::
enable_central_logging: "no"
Now you can use the parameter ``elasticsearch_address`` to configure the
address of the external Elasticsearch environment.
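Putting both settings together, a sketch of the relevant
``/etc/kolla/globals.yml`` fragment; the address below is only a placeholder
for your own Elasticsearch endpoint:

::

    enable_central_logging: "no"
    elasticsearch_address: "10.10.10.100"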
Non-default <service> port
==========================
It is sometimes required to use a port other than the default
for a service in Kolla. This is possible by setting ``<service>_port``
in the ``globals.yml`` file.
For example:
::
database_port: 3307
As the ``<service>_port`` value is saved in different services' configuration,
it is advised to make the above change before deploying.

View File

@ -1,318 +0,0 @@
=============
Bifrost Guide
=============
Prep host
=========
Clone kolla
-----------
::
git clone https://github.com/openstack/kolla
cd kolla
Set up the kolla dependencies as described in the :doc:`quickstart`.
Fix hosts file
--------------
Docker bind mounts ``/etc/hosts`` into the container from a volume.
This prevents atomic renames, which in turn prevents ansible from fixing
the ``/etc/hosts`` file automatically.
To enable bifrost to be bootstrapped correctly, add the deployment
host's hostname to the 127.0.0.1 line, for example:
::
ubuntu@bifrost:/repo/kolla$ cat /etc/hosts
127.0.0.1 bifrost localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
192.168.100.15 bifrost
Enable source build type
========================
Via config file
---------------
::
tox -e genconfig
Modify ``kolla-build.conf`` as follows, setting ``install_type`` to ``source``:
::
install_type = source
Command line
------------
Alternatively, if you do not wish to use ``kolla-build.conf``,
you can enable a source build by appending ``-t source`` to
your ``kolla-build`` or ``tools/build.py`` command.
Build container
===============
Development
-----------
::
tools/build.py bifrost-deploy
Production
----------
::
kolla-build bifrost-deploy
Prepare bifrost configs
=======================
Create servers.yml
------------------
The ``servers.yml`` file describes your physical nodes and lists their IPMI
credentials. See the bifrost dynamic inventory examples for more details.
For example ``/etc/kolla/config/bifrost/servers.yml``
.. code-block:: yaml
---
cloud1:
uuid: "31303735-3934-4247-3830-333132535336"
driver_info:
power:
ipmi_username: "admin"
ipmi_address: "192.168.1.30"
ipmi_password: "root"
nics:
-
mac: "1c:c1:de:1c:aa:53"
-
mac: "1c:c1:de:1c:aa:52"
driver: "agent_ipmitool"
ipv4_address: "192.168.1.10"
properties:
cpu_arch: "x86_64"
ram: "24576"
disk_size: "120"
cpus: "16"
name: "cloud1"
Adjust as appropriate for your deployment.
Create bifrost.yml
------------------
By default kolla mostly uses bifrost's default playbook values.
Parameters passed to the bifrost install playbook can be overridden by
creating a ``bifrost.yml`` file in the kolla custom config directory or in a
``bifrost`` sub-directory.
For example ``/etc/kolla/config/bifrost/bifrost.yml``
::
mysql_service_name: mysql
ansible_python_interpreter: /var/lib/kolla/venv/bin/python
network_interface: < add your network interface here >
# uncomment below if needed
# dhcp_pool_start: 192.168.2.200
# dhcp_pool_end: 192.168.2.250
# dhcp_lease_time: 12h
# dhcp_static_mask: 255.255.255.0
Create Disk Image Builder Config
--------------------------------
By default kolla mostly uses bifrost's default playbook values when
building the baremetal OS image. The baremetal OS image can be customised
by creating a ``dib.yml`` file in the kolla custom config directory or in a
``bifrost`` sub-directory.
For example ``/etc/kolla/config/bifrost/dib.yml``
::
dib_os_element: ubuntu
Deploy Bifrost
=========================
Ansible
-------
Development
___________
::
tools/kolla-ansible deploy-bifrost
Production
__________
::
kolla-ansible deploy-bifrost
Manual
------
Start Bifrost Container
_______________________
::
docker run -it --net=host -v /dev:/dev -d --privileged --name bifrost_deploy kolla/ubuntu-source-bifrost-deploy:3.0.1
Copy configs
____________
.. code-block:: console
docker exec -it bifrost_deploy mkdir /etc/bifrost
docker cp /etc/kolla/config/bifrost/servers.yml bifrost_deploy:/etc/bifrost/servers.yml
docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml
Bootstrap bifrost
_________________
::
docker exec -it bifrost_deploy bash
Generate ssh key
~~~~~~~~~~~~~~~~
::
ssh-keygen
Source env variables
~~~~~~~~~~~~~~~~~~~~
::
cd /bifrost
. env-vars
. /opt/stack/ansible/hacking/env-setup
cd playbooks/
Bootstrap and start services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
ansible-playbook -vvvv -i /bifrost/playbooks/inventory/localhost /bifrost/playbooks/install.yaml -e @/etc/bifrost/bifrost.yml
Check ironic is running
=======================
.. code-block:: console
docker exec -it bifrost_deploy bash
cd /bifrost
. env-vars
Running "ironic node-list" should return with no nodes, for example
.. code-block:: console
(bifrost-deploy)[root@bifrost bifrost]# ironic node-list
+------+------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+------+------+---------------+-------------+--------------------+-------------+
+------+------+---------------+-------------+--------------------+-------------+
Enroll and Deploy Physical Nodes
================================
Ansible
-------
Development
___________
::
tools/kolla-ansible deploy-servers
Production
__________
::
kolla-ansible deploy-servers
Manual
------
.. code-block:: console
docker exec -it bifrost_deploy bash
cd /bifrost
. env-vars
export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" -e network_interface=<provisioning interface>
docker exec -it bifrost_deploy bash
cd /bifrost
. env-vars
export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" -e network_interface=<provisioning interface> -e @/etc/bifrost/dib.yml
At this point ironic should clean down your nodes and install the default
OS image.
Advanced configuration
======================
Bring your own image
--------------------
TODO
Bring your own ssh key
----------------------
To use your own SSH key, after you have generated the ``passwords.yml`` file
update the private and public keys under ``bifrost_ssh_key``.
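A rough sketch of that section of ``/etc/kolla/passwords.yml``; the key
material is placeholder text and the exact layout may differ between releases:

.. code-block:: yaml

   bifrost_ssh_key:
     private_key: |
       -----BEGIN RSA PRIVATE KEY-----
       <your private key material>
       -----END RSA PRIVATE KEY-----
     public_key: ssh-rsa AAAA...your public key... user@host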
Known issues
============
SSH daemon not running
----------------------
By default sshd is installed in the image but may not be enabled.
If you encounter this issue, you will have to access the server physically in
recovery mode to enable the SSH service. If your hardware supports it, this
can be done remotely with ipmitool and Serial over LAN. For example:
.. code-block:: console
ipmitool -I lanplus -H 192.168.1.30 -U admin -P root sol activate
References
==========
Docs: http://docs.openstack.org/developer/bifrost/
Troubleshooting: http://docs.openstack.org/developer/bifrost/troubleshooting.html
Code: https://github.com/openstack/bifrost

View File

@ -1,253 +0,0 @@
.. _ceph-guide:
=============
Ceph in Kolla
=============
The out-of-the-box Ceph deployment requires 3 hosts with at least one block
device on each host that can be dedicated for sole use by Ceph. However, with
tweaks to the Ceph cluster you can deploy a **healthy** cluster with a single
host and a single block device.
Requirements
============
* A minimum of 3 hosts for a vanilla deploy
* A minimum of 1 block device per host
Preparation
===========
To prepare a disk for use as a
`Ceph OSD <http://docs.ceph.com/docs/master/man/8/ceph-osd/>`_ you must add a
special partition label to the disk. This partition label is how Kolla detects
the disks to format and bootstrap. Any disk with a matching partition label
will be reformatted, so use caution.
To prepare an OSD as a storage drive, execute the following operations:
::
# <WARNING ALL DATA ON $DISK will be LOST!>
# where $DISK is /dev/sdb or something similar
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
The following shows an example of using parted to configure ``/dev/sdb`` for
usage with Kolla.
::
parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
parted /dev/sdb print
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 1049kB 10.7GB 10.7GB KOLLA_CEPH_OSD_BOOTSTRAP
Using an external journal drive
-------------------------------
The steps documented above create a 5 GB journal partition
and a data partition with the remaining storage capacity on the same tagged
drive.
It is a common practice to place the journal of an OSD on a separate
journal drive. This section documents how to use an external journal drive.
Prepare the storage drive in the same way as documented above:
::
# <WARNING ALL DATA ON $DISK will be LOST!>
# where $DISK is /dev/sdb or something similar
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1
To prepare the journal external drive execute the following command:
::
# <WARNING ALL DATA ON $DISK will be LOST!>
# where $DISK is /dev/sdc or something similar
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1
.. note::
Use different suffixes (``_42``, ``_FOO``, ``_FOO42``, ..) to use different external
journal drives for different storage drives. One external journal drive can only
be used for one storage drive.
.. note::
The partition labels ``KOLLA_CEPH_OSD_BOOTSTRAP`` and ``KOLLA_CEPH_OSD_BOOTSTRAP_J``
are not working when using external journal drives. It is required to use
suffixes (``_42``, ``_FOO``, ``_FOO42``, ..). If you want to setup only one
storage drive with one external journal drive it is also necessary to use a suffix.
Configuration
=============
Edit the ``[storage]`` group in the inventory so that it contains the hostnames
of the hosts that have the block devices you have prepared as shown above.
::
[storage]
controller
compute1
Enable Ceph in ``/etc/kolla/globals.yml``:
::
enable_ceph: "yes"
RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:
::
enable_ceph_rgw: "yes"
RGW requires a healthy cluster in order to be successfully deployed. On initial
start up, RGW will create several pools. The first pool should be in an
operational state to proceed with the second one, and so on. So, in the case of
an **all-in-one** deployment, it is necessary to change the default number of
copies for the pools before deployment. Modify the file
``/etc/kolla/config/ceph.conf`` and add the contents::
[global]
osd pool default size = 1
osd pool default min size = 1
Deployment
==========
Finally deploy the Ceph-enabled OpenStack:
::
kolla-ansible deploy -i path/to/inventory
Using a Cache Tier
==================
An optional `cache tier <http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/>`_
can be deployed by formatting at least one cache device and enabling cache
tiering in the ``globals.yml`` configuration file.
To prepare an OSD as a cache device, execute the following operations:
::
# <WARNING ALL DATA ON $DISK will be LOST!>
# where $DISK is /dev/sdb or something similar
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1
Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:
::
enable_ceph: "yes"
ceph_enable_cache: "yes"
# Valid options are [ forward, none, writeback ]
ceph_cache_mode: "writeback"
After this run the playbooks as you normally would. For example:
::
kolla-ansible deploy -i path/to/inventory
Setting up an Erasure Coded Pool
================================
`Erasure code <http://docs.ceph.com/docs/jewel/rados/operations/erasure-code/>`_
is the new big thing from Ceph. Kolla has the ability to set up your Ceph pools
as erasure coded pools. Due to technical limitations with Ceph, using erasure
coded pools as OpenStack uses them requires a cache tier. Additionally, you
must make the choice to use an erasure coded pool or a replicated pool
(the default) when you initially deploy. You cannot change this without
completely removing the pool and recreating it.
To enable erasure coded pools add the following options to your
``/etc/kolla/globals.yml`` configuration file:
::
# A requirement for using the erasure-coded pools is you must setup a cache tier
# Valid options are [ erasure, replicated ]
ceph_pool_type: "erasure"
# Optionally, you can change the profile
#ceph_erasure_profile: "k=4 m=2 ruleset-failure-domain=host"
Managing Ceph
=============
Check the Ceph status for more diagnostic information. The sample output below
indicates a healthy cluster:
::
docker exec ceph_mon ceph -s
cluster 5fba2fbc-551d-11e5-a8ce-01ef4c5cf93c
health HEALTH_OK
monmap e1: 1 mons at {controller=10.0.0.128:6789/0}
election epoch 2, quorum 0 controller
osdmap e18: 2 osds: 2 up, 2 in
pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
68676 kB used, 20390 MB / 20457 MB avail
64 active+clean
If Ceph is run in an **all-in-one** deployment or with less than three storage
nodes, further configuration is required. It is necessary to change the default
number of copies for the pool. The following example demonstrates how to change
the number of copies for the pool to 1:
::
docker exec ceph_mon ceph osd pool set rbd size 1
All the pools must be modified if Glance, Nova, and Cinder have been deployed.
An example of modifying the pools to have 2 copies:
::
for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p} size 2; done
If using a cache tier, these changes must be made as well:
::
for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done
The default pool Ceph creates is named **rbd**. It is safe to remove this pool:
::
docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
Troubleshooting
===============
Deploy fails with 'Fetching Ceph keyrings ... No JSON object could be decoded'
------------------------------------------------------------------------------
If an initial deploy of Ceph fails, perhaps due to improper configuration or
similar, the cluster will be partially formed and will need to be reset for a
successful deploy.
In order to do this, the operator should remove the ``ceph_mon_config`` volume
from each Ceph monitor node:
::
ansible \
-i ansible/inventory/multinode \
-a 'docker volume rm ceph_mon_config' \
ceph-mon

View File

@ -1,138 +0,0 @@
.. _cinder-guide:
===============
Cinder in Kolla
===============
Overview
========
Currently Kolla can deploy the following cinder services:
- cinder-api
- cinder-scheduler
- cinder-backup
- cinder-volume
The cinder implementation defaults to using LVM storage. The default
implementation requires that a volume group be set up. This can either be
a real physical volume or a loopback-mounted file for development.
.. note ::
The Cinder community has closed a bug as WontFix which makes it
impossible for LVM to be used at all in a multi-controller setup.
The only option for multi-controller storage to work correctly at
present is via a Ceph deployment. If community members disagree
with this decision, please report the specific use case in the
Cinder bug tracker here:
`_bug 1571211 <https://launchpad.net/bugs/1571211>`__.
Create a Volume Group
=====================
Use ``pvcreate`` and ``vgcreate`` to create the volume group. For example
with the devices ``/dev/sdb`` and ``/dev/sdc``:
::
<WARNING ALL DATA ON /dev/sdb and /dev/sdc will be LOST!>
pvcreate /dev/sdb /dev/sdc
vgcreate cinder-volumes /dev/sdb /dev/sdc
During development, it may be desirable to use file-backed block storage. It
is possible to use a file and mount it as a block device via the loopback
system. ::
mknod /dev/loop2 b 7 2
dd if=/dev/zero of=/var/lib/cinder_data.img bs=1G count=20
losetup /dev/loop2 /var/lib/cinder_data.img
pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2
Validation
==========
Create a volume as follows:
::
$ openstack volume create --size 1 steak_volume
<bunch of stuff printed>
Verify it is available. If it says "error" here, something went wrong during
LVM creation of the volume. ::
$ openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 0069c17e-8a60-445a-b7f0-383a8b89f87e | steak_volume | available | 1 | |
+--------------------------------------+--------------+-----------+------+-------------+
Attach the volume to a server using:
::
openstack server add volume steak_server 0069c17e-8a60-445a-b7f0-383a8b89f87e
Check the console log to verify that the disk was added:
::
openstack console log show steak_server
A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
If the disk stays in the available state, something went wrong during the
iSCSI mounting of the volume to the guest VM.
Cinder LVM2 back end with iSCSI
===============================
As of the Newton-1 milestone, Kolla supports LVM2 as a cinder back end. This is
accomplished by introducing two new containers, ``tgtd`` and ``iscsid``. The
``tgtd`` container serves as a bridge between the cinder-volume process and the
server hosting the Logical Volume Groups (LVG). The ``iscsid`` container serves
as a bridge between the nova-compute process and the server hosting the LVG.
In order to use Cinder's LVM back end, an LVG named ``cinder-volumes`` should
exist on the server, and the following parameter must be specified in
``globals.yml`` ::
enable_cinder_backend_lvm: "yes"
For Ubuntu and LVM2/iSCSI
~~~~~~~~~~~~~~~~~~~~~~~~~
The ``iscsid`` process uses configfs, which is normally mounted at
``/sys/kernel/config``, to store discovered target information. On CentOS/RHEL
type systems this special file system gets mounted automatically, which is
not the case on Debian/Ubuntu. Since the ``iscsid`` container runs on every nova
compute node, the following steps must be completed on every Ubuntu server
targeted for the nova compute role.
- Add the configfs module to ``/etc/modules``
- Rebuild the initramfs using the ``update-initramfs -u`` command
- Stop the ``open-iscsi`` system service due to its conflicts
with the iscsid container.
For Ubuntu 14.04 (upstart): ``service open-iscsi stop``
Ubuntu 16.04 (systemd):
``systemctl stop open-iscsi; systemctl stop iscsid``
- Make sure configfs gets mounted during the server boot process. There are
multiple ways to accomplish this; one example is to add the mount command to
``/etc/rc.local``:
::
mount -t configfs configfs /sys/kernel/config
Cinder back end with external iSCSI storage
===========================================
In order to use an external storage system (such as one from EMC or NetApp),
the following parameter must be specified in ``globals.yml`` ::
enable_cinder_backend_iscsi: "yes"
``enable_cinder_backend_lvm`` should also be set to ``"no"`` in this case; see
the combined snippet below.
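Putting the two settings from this section together, the relevant
``globals.yml`` fragment looks roughly like this:

::

    enable_cinder_backend_iscsi: "yes"
    enable_cinder_backend_lvm: "no"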

View File

@ -1,76 +0,0 @@
.. _deployment-philosophy:
=============================
Kolla's Deployment Philosophy
=============================
Overview
========
Kolla has an objective to replace the inflexible, painful, resource-intensive
deployment process of OpenStack with a flexible, painless, inexpensive
deployment process. Deploying OpenStack at the 100+ node scale that small
businesses may require often means building a team of OpenStack professionals
to maintain and manage the OpenStack deployment. Finding people experienced in
OpenStack deployment is very difficult and expensive, resulting in a big
barrier for OpenStack adoption. Kolla seeks to remedy this set of problems by
simplifying the deployment process while enabling flexible deployment models.
Kolla is a highly opinionated deployment tool out of the box. This permits
Kolla to be deployable with the simple configuration of three key/value pairs.
As an operator's experience with OpenStack grows and the desire to customize
OpenStack services increases, Kolla offers full capability to override every
OpenStack service configuration option in the deployment.
Why not Template Customization?
===============================
The Kolla upstream community does not want to place key/value pairs in the
Ansible playbook configuration options that are not essential to obtaining
a functional deployment. If the Kolla upstream starts down the path of
templating configuration options, the Ansible configuration could conceivably
grow to hundreds of configuration key/value pairs which is unmanageable.
Further, as new versions of Kolla are released, there would be independent
customization available for different versions creating an unsupportable and
difficult to document environment. Finally, adding key/value pairs for
configuration options creates a situation in which development and release
cycles are required in order to successfully add new customizations.
Essentially templating in configuration options is not a scalable solution
and would result in an inability of the project to execute its mission.
Kolla's Solution to Customization
=================================
Rather than deal with the customization madness of templating configuration
options in Kolla's Ansible playbooks, Kolla eliminates all the inefficiencies
of existing deployment tools through a simple, tidy design: custom
configuration sections.
During deployment of an OpenStack service, a basic set of default configuration
options are merged with and overridden by custom ini configuration sections.
Kolla deployment customization is that simple! This does create a situation
in which the Operator must reference the upstream documentation if a
customization is desired in the OpenStack deployment. Fortunately the
configuration options documentation is extremely mature and well-formulated.
As an example, consider running Kolla in a virtual machine. In order to
launch virtual machines from Nova in a virtual environment, it is necessary
to use the QEMU hypervisor, rather than the KVM hypervisor. To achieve this
result, simply modify the file ``/etc/kolla/config/nova/nova-compute.conf`` and
add the contents::
[libvirt]
virt_type=qemu
After this change Kolla will use an emulated hypervisor with lower performance.
Kolla could have templated this commonly modified configuration option. If
Kolla starts down this path, the Kolla project could end up with hundreds of
config options, all of which would have to be subjectively evaluated for
inclusion or exclusion in the source tree.
Kolla's approach yields a solution which enables complete customization without
any upstream maintenance burden. Operators don't have to rely on a subjective
approval process for configuration options nor rely on a
development/test/release cycle to obtain a desired customization. Instead
operators have ultimate freedom to make desired deployment choices immediately
without the approval of a third party.

View File

@ -1,181 +0,0 @@
.. _external-ceph-guide:
=============
External Ceph
=============
Sometimes it is necessary to connect OpenStack services to an existing Ceph
cluster instead of deploying it with Kolla. This can be achieved with only a
few configuration steps in Kolla.
Requirements
============
* An existing installation of Ceph
* Existing Ceph storage pools
* Existing credentials in Ceph for OpenStack services to connect to Ceph
(Glance, Cinder, Nova)
Enabling External Ceph
======================
Using external Ceph with Kolla means not deploying Ceph via Kolla. Therefore,
disable Ceph deployment in ``/etc/kolla/globals.yml``:
::
enable_ceph: "no"
There are flags indicating whether individual services should use Ceph, which
default to the value of ``enable_ceph``. These flags now need to be enabled in
order to activate the external Ceph integration. This can be done individually
per service in ``/etc/kolla/globals.yml``:
::
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
The combination of ``enable_ceph: "no"`` and ``<service>_backend_ceph: "yes"``
triggers the activation of the external Ceph mechanism in Kolla.
Configuring External Ceph
=========================
Glance
------
Configuring Glance for Ceph includes three steps:
1) Configure RBD back end in glance-api.conf
2) Create Ceph configuration file in /etc/ceph/ceph.conf
3) Create Ceph keyring file in /etc/ceph/ceph.client.<username>.keyring
Step 1 is done by using Kolla's INI merge mechanism: Create a file in
``/etc/kolla/config/glance/glance-api.conf`` with the following contents:
::
[DEFAULT]
show_image_direct_url = True
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
[image_format]
container_formats = bare
disk_formats = raw
Now put ceph.conf and the keyring file (name depends on the username created in
Ceph) into the same directory, for example:
/etc/kolla/config/glance/ceph.conf
::
[global]
fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
mon_initial_members = ceph-0
mon_host = 192.168.0.56
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
/etc/kolla/config/glance/ceph.client.glance.keyring
::
[client.glance]
key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
Kolla will pick up all files named ceph.* in this directory and copy them to
the /etc/ceph/ directory of the container.
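For the example above, the Glance config directory would then contain
something like the following; the keyring file name follows the Ceph user you
created, so treat this listing as illustrative:

::

    $ ls /etc/kolla/config/glance
    ceph.client.glance.keyring  ceph.conf  glance-api.conf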
Cinder
------
Configuring external Ceph for Cinder works very similarly to
Glance. The required Cinder configuration goes into
/etc/kolla/config/cinder/cinder-volume.conf:
::
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
Next, place the ceph.conf file into
/etc/kolla/config/cinder/ceph.conf:
::
[global]
fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
mon_initial_members = ceph-0
mon_host = 192.168.0.56
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Separate configuration options can be configured for
cinder-volume and cinder-backup by adding ceph.conf files to
/etc/kolla/config/cinder/cinder-volume and
/etc/kolla/config/cinder/cinder-backup respectively. They
will be merged with /etc/kolla/config/cinder/ceph.conf.
Ceph keyrings are deployed per service and placed into
cinder-volume and cinder-backup directories:
::
root@deploy:/etc/kolla/config# cat
cinder/cinder-backup/ceph.client.cinder.keyring
[client.cinder]
key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
root@deploy:/etc/kolla/config# cat
cinder/cinder-volume/ceph.client.cinder.keyring
[client.cinder]
key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
It is important that the files are named ceph.client*.
Nova
------
In ``/etc/kolla/globals.yml`` set
::
nova_backend_ceph: "yes"
Put ceph.conf and keyring file into ``/etc/kolla/config/nova``:
::
$ ls /etc/kolla/config/nova
ceph.client.nova.keyring ceph.conf
Configure nova-compute to use Ceph as the ephemeral back end by creating
``/etc/kolla/config/nova/nova-compute.conf`` and adding the following
contents:
::
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
.. note:: ``rbd_user`` might vary depending on your environment.

View File

@ -18,17 +18,14 @@
Welcome to Kolla's documentation!
=================================
Kolla's Mission
===============
Kolla provides Docker containers and Ansible playbooks to meet Kolla's mission.
Kolla's mission is to provide production-ready containers and deployment tools
for operating OpenStack clouds.
Kolla is highly opinionated out of the box, but allows for complete
customization. This permits operators with minimal experience to deploy
OpenStack quickly and, as experience grows, to modify the OpenStack
configuration to suit the operator's exact requirements.
This documentation is for the Kolla container images. The following subprojects
are available to help deploy Kolla:
* `kolla-ansible <http://docs.openstack.org/developer/kolla-ansible/>`_
* `kolla-kubernetes <http://docs.openstack.org/developer/kolla-kubernetes>`_
Site Notes
==========
@ -47,33 +44,7 @@ Kolla Overview
.. toctree::
:maxdepth: 1
deployment-philosophy
production-architecture-guide
quickstart
multinode
image-building
advanced-configuration
operating-kolla
security
troubleshooting
Kolla Services
==============
.. toctree::
:maxdepth: 1
ceph-guide
external-ceph-guide
cinder-guide
ironic-guide
manila-guide
manila-hnas-guide
swift-guide
kibana-guide
bifrost
networking-guide
kuryr-guide
Developer Docs
==============
@ -82,6 +53,5 @@ Developer Docs
:maxdepth: 1
CONTRIBUTING
vagrant-dev-env
running-tests
bug-triage

View File

@ -1,44 +0,0 @@
.. _ironic-guide:
===============
Ironic in Kolla
===============
Overview
========
Currently Kolla can deploy the following Ironic services:
- ironic-api
- ironic-conductor
- ironic-inspector
It also deploys the required PXE service, as ironic-pxe.
Current status
==============
The Ironic implementation is "tech preview", so currently instances can only be
deployed on baremetal. Further work will be done to allow scheduling for both
virtualized and baremetal deployments.
Post-deployment configuration
=============================
The configuration is based on the upstream documentation_.
Remember that enabling Ironic reconfigures nova compute (driver and
scheduler) as well as changing neutron network settings. Further neutron setup
is required as outlined below.
Create the flat network to launch the instances:
::
neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \
--provider:network_type flat --provider:physical_network physnet1
neutron subnet-create sharednet1 $NETWORK_CIDR --name $SUBNET_NAME \
--ip-version=4 --gateway=$GATEWAY_IP --allocation-pool \
start=$START_IP,end=$END_IP --enable-dhcp
The ID of the network created above is then used to set ``cleaning_network_uuid``
in the ``[neutron]`` section of ironic.conf, as sketched below.
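A hedged sketch of that override, using Kolla's config override mechanism;
the exact override path and the UUID are assumptions to adapt to your
deployment:

::

    # e.g. /etc/kolla/config/ironic.conf
    [neutron]
    cleaning_network_uuid = <UUID of sharednet1>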
.. _documentation: http://docs.openstack.org/developer/ironic/deploy/install-guide.html

View File

@ -1,150 +0,0 @@
.. _kibana-guide:
===============
Kibana in Kolla
===============
An OpenStack deployment generates vast amounts of log data. In order to
successfully monitor this and use it to diagnose problems, the standard "ssh
and grep" solution quickly becomes unmanageable.
Kolla can deploy Kibana as part of the E*K stack in order to allow operators to
search and visualise logs in a centralised manner.
Preparation and deployment
==========================
Modify the configuration file ``/etc/kolla/globals.yml`` and change
the following:
::
enable_central_logging: "yes"
After successful deployment, Kibana can be accessed using a browser on
``<kolla_external_vip_address>:5601``.
The default username is ``kibana``; the password can be located under
``kibana_password`` in ``/etc/kolla/passwords.yml``, for example as shown below.
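A quick way to look the password up on the deployment host; this relies only
on standard shell tooling, nothing Kolla-specific:

::

    grep ^kibana_password /etc/kolla/passwords.yml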
When Kibana is opened for the first time, it requires creating a default index
pattern. To view, analyse and search logs, at least one index pattern has to be
created. To match indices stored in ElasticSearch, we suggest setting the
"Index name or pattern" field to ``log-*``. The rest of the fields can be left
as is.
After setting parameters, create an index by clicking the ``Create`` button.
.. note:: This step is necessary until the default Kibana dashboard is implemented
in Kolla.
Search logs - Discover tab
==========================
Operators can create and store searches based on various fields from logs, for
example, "show all logs marked with ERROR on nova-compute".
To do this, click the ``Discover`` tab. Fields from the logs can be filtered by
hovering over entries from the left hand side, and clicking ``add`` or
``remove``. Add the following fields:
* Hostname
* Payload
* severity_label
* programname
This yields an easy to read list of all log events from each node in the
deployment within the last 15 minutes. A "tail like" functionality can be
achieved by clicking the clock icon in the top right hand corner of the screen,
and selecting ``Auto-refresh``.
Logs can also be filtered down further. To use the above example, type
``programname:nova-compute`` in the search bar. Click the drop-down arrow from
one of the results, then the small magnifying glass icon from beside the
programname field. This should now show a list of all events from nova-compute
services across the cluster.
The current search can also be saved by clicking the ``Save Search`` icon
available from the menu on the right hand side.
Example: using Kibana to diagnose a common failure
--------------------------------------------------
The following example demonstrates how Kibana can be used to diagnose a common
OpenStack problem, where an instance fails to launch with the error 'No valid
host was found'.
First, re-run the server creation with ``--debug``:
::
openstack --debug server create --image cirros --flavor m1.tiny \
--key-name mykey --nic net-id=00af016f-dffe-4e3c-a9b8-ec52ccd8ea65 \
demo1
In this output, look for the key ``X-Compute-Request-Id``. This is a unique
identifier that can be used to track the request through the system. An
example ID looks like this:
::
X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5
Taking the value of ``X-Compute-Request-Id``, enter the value into the Kibana
search bar, minus the leading ``req-``. Assuming some basic filters have been
added as shown in the previous section, Kibana should now show the path this
request made through the OpenStack deployment, starting at a ``nova-api`` on
a control node, through the ``nova-scheduler``, ``nova-conductor``, and finally
``nova-compute``. Inspecting the ``Payload`` of the entries marked ``ERROR``
should quickly lead to the source of the problem.
While some knowledge is still required of how Nova works in this instance, it
can still be seen how Kibana helps in tracing this data, particularly in a
large scale deployment scenario.
Visualize data - Visualize tab
==============================
In the Visualize tab a wide range of charts is available. If no
visualization has been saved yet, choosing this tab opens the *Create a new
visualization* panel. If a visualization has already been saved, choosing this
tab opens the most recently modified visualization. In that case, a new
visualization can be created by choosing the *add visualization* option in the
menu on the right. To create a new visualization, one of the available chart
types has to be chosen (for example, a pie chart or an area chart). Each
visualization can be created from a saved or a new search. After choosing any
kind of search, a design panel is opened. In this panel, a chart can be
generated and previewed. In the menu on the left, metrics for the chart can be
chosen, and the chart can be generated by pressing the green arrow at the top
of the left-side menu.
.. note:: After creating a visualization, it can be saved by choosing the *save
visualization* option in the menu on the right. If it is not saved, it
will be lost after leaving the page or creating another visualization.
Organize visualizations and searches - Dashboard tab
====================================================
In the Dashboard tab all saved visualizations and searches can be
organized into one dashboard. To add a visualization or search, choose the
*add visualization* option in the menu on the right and then choose an item
from the saved ones. The order and size of elements can be changed directly
in place by moving or resizing them. The color of charts can also be
changed by clicking the colored dots in the legend near each visualization.
.. note:: After creating a dashboard, it can be saved by choosing the *save dashboard*
option in the menu on the right. If it is not saved, it will be lost after
leaving the page or creating another dashboard.
If a dashboard has already been saved, it can be opened by choosing the *open
dashboard* option in the menu on the right.
Exporting and importing created items - Settings tab
====================================================
Once visualizations, searches or dashboards are created, they can be exported
to JSON format by choosing the Settings tab and then the Objects tab. Each
item can be exported separately by selecting it in the menu. All of the items
can also be exported at once by choosing the *export everything* option.
In the same tab (Settings - Objects) saved items can also be imported by
choosing the *import* option.

Binary file not shown.


View File

@ -1,63 +0,0 @@
Kuryr in Kolla
==============
"Kuryr is a Docker network plugin that uses Neutron to provide networking
services to Docker containers. It provides containerized images for the common
Neutron plugins" [1]. Kuryr requires at least Keystone and neutron. Kolla makes
kuryr deployment faster and accessible.
Requirements
------------
* A minimum of 3 hosts for a vanilla deploy
Preparation and Deployment
--------------------------
To allow the Docker daemon to connect to etcd, add the following to the
docker.service file (the daemon binary path in ``ExecStart`` is omitted here):
::
ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
The IP address is that of the host running the etcd service. ``2375`` is the
port that allows the Docker daemon to be accessed remotely, and ``2379`` is the
etcd listening port.
By default etcd and kuryr are disabled in ``group_vars/all.yml``.
In order to enable them, edit the file ``globals.yml`` and set the
following variables:
::
enable_etcd: "yes"
enable_kuryr: "yes"
Deploy the OpenStack cloud and the kuryr network plugin:
::
kolla-ansible deploy
Create a Virtual Network
--------------------------------
::
docker network create -d kuryr --ipam-driver=kuryr --subnet=10.1.0.0/24 --gateway=10.1.0.1 docker-net1
To list the created network:
::
docker network ls
The created network is also available from OpenStack CLI:
::
openstack network list
[1] https://github.com/openstack/kuryr

View File

@ -1,357 +0,0 @@
.. _manila-guide:
===============
Manila in Kolla
===============
Overview
========
Currently, Kolla can deploy the following manila services:
* manila-api
* manila-scheduler
* manila-share
The OpenStack Shared File Systems service (Manila) provides file storage to a
virtual machine. The Shared File Systems service provides an infrastructure
for managing and provisioning of file shares. The service also enables
management of share types as well as share snapshots if a driver supports
them.
Important
=========
For simplicity, this guide describes configuring the Shared File Systems
service to use the ``generic`` back end with the driver handles share
server mode (DHSS) enabled. This back end uses the Compute (nova),
Networking (neutron) and Block storage (cinder) services.
The Networking service configuration requires networks that can be
attached to a public router in order to create share networks.
Before you proceed, ensure that Compute, Networking and Block storage
services are properly working.
Preparation and Deployment
==========================
Cinder and Ceph are required; enable them in ``/etc/kolla/globals.yml``:
.. code-block:: console
enable_cinder: "yes"
enable_ceph: "yes"
Enable Manila and generic back end in ``/etc/kolla/globals.yml``:
.. code-block:: console
enable_manila: "yes"
enable_manila_backend_generic: "yes"
By default Manila uses instance flavor id 100 for its file systems. For Manila
to work, either create a new nova flavor with id 100 (use *nova flavor-create*)
or change *service_instance_flavor_id* to use one of the default nova flavor
ids.
For example, set *service_instance_flavor_id = 2* to use the default nova
flavor ``m1.small``.
Create or modify the file ``/etc/kolla/config/manila-share.conf`` and add the
contents:
.. code-block:: console
[generic]
service_instance_flavor_id = 2
Verify Operation
================
Verify operation of the Shared File Systems service. List service components
to verify successful launch of each process:
.. code-block:: console
# manila service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| manila-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None |
| manila-share | share1@generic | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
Launch an Instance
==================
Before a share can be created, manila with the generic driver and the
DHSS mode enabled requires the definition of at least an image, a network and a
share network to be used to create a share server. For that back end
configuration, the share server is an instance where NFS/CIFS shares are
served.
Determine the configuration of the share server
===============================================
Create a default share type before running the manila-share service:
.. code-block:: console
# manila type-create default_share_type True
+--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
| ID | Name | Visibility | is_default | required_extra_specs | optional_extra_specs |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
| 8a35da28-0f74-490d-afff-23664ecd4f01 | default_share_type | public | - | driver_handles_share_servers : True | snapshot_support : True |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
Upload a manila share server image to the Image service:
.. code-block:: console
# wget http://tarballs.openstack.org/manila-image-elements/images/manila-service-image-master.qcow2
# glance image-create --name "manila-service-image" \
--file manila-service-image-master.qcow2 \
--disk-format qcow2 --container-format bare \
--visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 48a08e746cf0986e2bc32040a9183445 |
| container_format | bare |
| created_at | 2016-01-26T19:52:24Z |
| disk_format | qcow2 |
| id | 1fc7f29e-8fe6-44ef-9c3c-15217e83997c |
| min_disk | 0 |
| min_ram | 0 |
| name | manila-service-image |
| owner | e2c965830ecc4162a002bf16ddc91ab7 |
| protected | False |
| size | 306577408 |
| status | active |
| tags | [] |
| updated_at | 2016-01-26T19:52:28Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------+
List available networks to get the id and subnets of the private network:
.. code-block:: console
# neutron net-list
+--------------------------------------+---------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+----------------------------------------------------+
| 0e62efcd-8cee-46c7-b163-d8df05c3c5ad | public | 5cc70da8-4ee7-4565-be53-b9c011fca011 10.3.31.0/24 |
| 7c6f9b37-76b4-463e-98d8-27e5686ed083 | private | 3482f524-8bff-4871-80d4-5774c2730728 172.16.1.0/24 |
+--------------------------------------+---------+----------------------------------------------------+
Create a share network:
.. code-block:: console
# manila share-network-create --name demo-share-network1 \
--neutron-net-id PRIVATE_NETWORK_ID \
--neutron-subnet-id PRIVATE_NETWORK_SUBNET_ID
+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
| name | demo-share-network1 |
| segmentation_id | None |
| created_at | 2016-01-26T20:03:41.877838 |
| neutron_subnet_id | 3482f524-8bff-4871-80d4-5774c2730728 |
| updated_at | None |
| network_type | None |
| neutron_net_id | 7c6f9b37-76b4-463e-98d8-27e5686ed083 |
| ip_version | None |
| nova_net_id | None |
| cidr | None |
| project_id | e2c965830ecc4162a002bf16ddc91ab7 |
| id | 58b2f0e6-5509-4830-af9c-97f525a31b14 |
| description | None |
+-------------------+--------------------------------------+
Create a flavor (**required** if you have not defined *service_instance_flavor_id*
in the ``/etc/kolla/config/manila-share.conf`` file):
.. code-block:: console
# nova flavor-create manila-service-flavor 100 128 0 1
Create a share
==============
Create a NFS share using the share network:
.. code-block:: console
# manila create NFS 1 --name demo-share1 --share-network demo-share-network1
+-----------------------------+--------------------------------------+
| Property | Value |
+-----------------------------+--------------------------------------+
| status | None |
| share_type_name | None |
| description | None |
| availability_zone | None |
| share_network_id | None |
| export_locations | [] |
| host | None |
| snapshot_id | None |
| is_public | False |
| task_state | None |
| snapshot_support | True |
| id | 016ca18f-bdd5-48e1-88c0-782e4c1aa28c |
| size | 1 |
| name | demo-share1 |
| share_type | None |
| created_at | 2016-01-26T20:08:50.502877 |
| export_location | None |
| share_proto | NFS |
| consistency_group_id | None |
| source_cgsnapshot_member_id | None |
| project_id | 48e8c35b2ac6495d86d4be61658975e7 |
| metadata | {} |
+-----------------------------+--------------------------------------+
After some time, the share status should change from ``creating``
to ``available``:
.. code-block:: console
# manila list
+--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
| ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
| e1e06b14-ba17-48d4-9e0b-ca4d59823166 | demo-share1 | 1 | NFS | available | False | default_share_type | share1@generic#GENERIC | nova |
+--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
Configure user access to the new share before attempting to mount it via the
network:
.. code-block:: console
# manila access-allow demo-share1 ip INSTANCE_PRIVATE_NETWORK_IP
Mount the share from an instance
================================
Get the export location of the share:
.. code-block:: console
# manila show demo-share1
+-----------------------------+----------------------------------------------------------------------+
| Property | Value |
+-----------------------------+----------------------------------------------------------------------+
| status | available |
| share_type_name | default_share_type |
| description | None |
| availability_zone | nova |
| share_network_id | fa07a8c3-598d-47b5-8ae2-120248ec837f |
| export_locations | |
| | path = 10.254.0.3:/shares/share-422dc546-8f37-472b-ac3c-d23fe410d1b6 |
| | preferred = False |
| | is_admin_only = False |
| | id = 5894734d-8d9a-49e4-b53e-7154c9ce0882 |
| | share_instance_id = 422dc546-8f37-472b-ac3c-d23fe410d1b6 |
| share_server_id | 4782feef-61c8-4ffb-8d95-69fbcc380a52 |
| host | share1@generic#GENERIC |
| access_rules_status | active |
| snapshot_id | None |
| is_public | False |
| task_state | None |
| snapshot_support | True |
| id | e1e06b14-ba17-48d4-9e0b-ca4d59823166 |
| size | 1 |
| name | demo-share1 |
| share_type | 6e1e803f-1c37-4660-a65a-c1f2b54b6e17 |
| has_replicas | False |
| replication_type | None |
| created_at | 2016-03-15T18:59:12.000000 |
| share_proto | NFS |
| consistency_group_id | None |
| source_cgsnapshot_member_id | None |
| project_id | 9dc02df0f2494286ba0252b3c81c01d0 |
| metadata | {} |
+-----------------------------+----------------------------------------------------------------------+
Create a folder where the mount will be placed:
.. code-block:: console
# mkdir ~/test_folder
Mount the NFS share in the instance using the export location of the share:
.. code-block:: console
# mount -v 10.254.0.3:/shares/share-422dc546-8f37-472b-ac3c-d23fe410d1b6 ~/test_folder
Share Migration
===============
As administrator, you can migrate a share with its data from one location to
another in a manner that is transparent to users and workloads. You can use
manila client commands to complete a share migration.
For share migration, it is necessary to modify ``manila.conf`` and set an IP address in the
same provider network for ``data_node_access_ip``.
Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
.. code-block:: console
[DEFAULT]
data_node_access_ip = 10.10.10.199
.. note::
Share migration requires more than one back end to be configured. See
`Configure multiple back ends
<http://docs.openstack.org/developer/kolla/manila-hnas-guide.html#configure-multiple-back-ends>`__.
Use the ``manila migration-start`` command, as shown in the following example:
.. code-block:: console
manila migration-start --preserve-metadata True|False \
--writable True|False --force_host_assisted_migration True|False \
--new_share_type share_type --new_share_network share_network \
shareID destinationHost
- ``--force_host_assisted_migration``: Forces the generic host-assisted migration
mechanism and bypasses any driver optimizations (see the example below).
- ``destinationHost``: Takes the form ``host#pool``, which includes the
destination host and pool.
- ``--writable`` and ``--preserve-metadata``: Apply only to driver-assisted migration.
- ``--new_share_network``: Use only if the driver supports share networks.
- ``--new_share_type``: Choose a share type compatible with ``destinationHost``.
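As an illustration only, a host-assisted migration of the share created earlier
could look like the following; ``share2@generic#GENERIC`` is a hypothetical
``host#pool`` destination and must be replaced with a real back end from your
deployment:
.. code-block:: console
manila migration-start --preserve-metadata False --writable False \
--force_host_assisted_migration True \
demo-share1 share2@generic#GENERIC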
Checking share migration progress
---------------------------------
Use the ``manila migration-get-progress shareID`` command to check progress.
.. code-block:: console
manila migration-get-progress demo-share1
+----------------+-----------------------+
| Property | Value |
+----------------+-----------------------+
| task_state | data_copying_starting |
| total_progress | 0 |
+----------------+-----------------------+
manila migration-get-progress demo-share1
+----------------+-------------------------+
| Property | Value |
+----------------+-------------------------+
| task_state | data_copying_completing |
| total_progress | 100 |
+----------------+-------------------------+
Use the ``manila migration-complete shareID`` command to complete the share
migration process.
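For example, once the migration of ``demo-share1`` has finished copying data:
.. code-block:: console
manila migration-complete demo-share1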
For more information about how to manage shares, see the
`OpenStack User Guide
<http://docs.openstack.org/user-guide/index.html>`__.
View File
@@ -1,330 +0,0 @@
.. _manila-hnas-guide:
========================================================
Hitachi NAS Platform File Services Driver for OpenStack
========================================================
Overview
========
The Hitachi NAS Platform File Services Driver for OpenStack
provides NFS Shared File Systems to OpenStack.
Requirements
------------
- Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080, and 4100.
- HNAS/SMU software version is 12.2 or higher.
- HNAS configuration and management utilities to create a storage pool (span)
and an EVS.
- GUI (SMU).
- SSC CLI.
Supported shared file systems and operations
---------------------------------------------
The driver supports CIFS and NFS shares.
The following operations are supported:
- Create a share.
- Delete a share.
- Allow share access.
- Deny share access.
- Create a snapshot.
- Delete a snapshot.
- Create a share from a snapshot.
- Extend a share.
- Shrink a share.
- Manage a share.
- Unmanage a share.
Preparation and Deployment
==========================
.. note::
The manila-share node only requires the HNAS EVS data interface if you
plan to use share migration.
.. important ::
It is mandatory that HNAS management interface is reachable from the
Shared File System node through the admin network, while the selected
EVS data interface is reachable from OpenStack Cloud, such as through
Neutron flat networking.
Configuration on Kolla deployment
---------------------------------
Enable Shared File Systems service and HNAS driver in
``/etc/kolla/globals.yml``
.. code-block:: console
enable_manila: "yes"
enable_manila_backend_hnas: "yes"
Configure the OpenStack networking so it can reach HNAS Management
interface and HNAS EVS Data interface.
To configure two physical networks, physnet1 and physnet2, with
ports eth1 and eth2 associated respectively:
In ``/etc/kolla/globals.yml`` set:
.. code-block:: console
neutron_bridge_name: "br-ex,br-ex2"
neutron_external_interface: "eth1,eth2"
.. note::
eth1: Neutron external interface.
eth2: HNAS EVS data interface.
HNAS back end configuration
---------------------------
In ``/etc/kolla/globals.yml`` uncomment and set:
.. code-block:: console
hnas_ip: "172.24.44.15"
hnas_user: "supervisor"
hnas_password: "supervisor"
hnas_evs_id: "1"
hnas_evs_ip: "10.0.1.20"
hnas_file_system_name: "FS-Manila"
Configuration on HNAS
---------------------
Create the data HNAS network in Kolla OpenStack:
List the available tenants:
.. code-block:: console
$ openstack project list
Create a network for the given tenant (service), providing the tenant ID,
a name for the network, the name of the physical network over which the
virtual network is implemented, and the type of the physical mechanism by
which the virtual network is implemented:
.. code-block:: console
$ neutron net-create --tenant-id <SERVICE_ID> hnas_network \
--provider:physical_network=physnet2 --provider:network_type=flat
*Optional* - List available networks:
.. code-block:: console
$ neutron net-list
Create a subnet for the same tenant (service), providing the gateway IP of this
subnet, a name for the subnet, the network ID created before, and the CIDR of
the subnet:
.. code-block:: console
$ neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
--name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
*Optional* - List available subnets:
.. code-block:: console
$ neutron subnet-list
Add the subnet interface to a router, providing the router ID and subnet
ID created before:
.. code-block:: console
$ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
Create a file system on HNAS. See the `Hitachi HNAS reference <http://www.hds.com/assets/pdf/hus-file-module-file-services-administration-guide.pdf>`_.
.. important ::
Make sure that the filesystem is not created as a replication target.
Refer to the official HNAS administration guide.
Prepare the HNAS EVS network.
Create a route in HNAS to the tenant network:
.. code-block:: console
$ console-context --evs <EVS_ID_IN_USE> route-net-add --gateway <FLAT_NETWORK_GATEWAY> \
<TENANT_PRIVATE_NETWORK>
.. important ::
Make sure multi-tenancy is enabled and routes are configured per EVS.
.. code-block:: console
$ console-context --evs 3 route-net-add --gateway 192.168.1.1 \
10.0.0.0/24
Create a share
==============
Create a default share type before running manila-share service:
.. code-block:: console
$ manila type-create default_share_hitachi False
+--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+
| ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs |
+--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+
| 3e54c8a2-1e50-455e-89a0-96bb52876c35 | default_share_hitachi | public | - | driver_handles_share_servers : False | snapshot_support : True |
+--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+
Create a NFS share using the HNAS back end:
.. code-block:: console
manila create NFS 1 \
--name mysharehnas \
--description "My Manila share" \
--share-type default_share_hitachi
Verify the operation:
.. code-block:: console
$ manila list
+--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+
| ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+
| 721c0a6d-eea6-41af-8c10-72cd98985203 | mysharehnas | 1 | NFS | available | False | default_share_hitachi | control@hnas1#HNAS1 | nova |
+--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+
.. code-block:: console
$ manila show mysharehnas
+-----------------------------+-----------------------------------------------------------------+
| Property | Value |
+-----------------------------+-----------------------------------------------------------------+
| status | available |
| share_type_name | default_share_hitachi |
| description | My Manila share |
| availability_zone | nova |
| share_network_id | None |
| export_locations | |
| | path = 172.24.53.1:/shares/45ed6670-688b-4cf0-bfe7-34956648fb84 |
| | preferred = False |
| | is_admin_only = False |
| | id = e81e716f-f1bd-47b2-8a56-2c2f9e33a98e |
| | share_instance_id = 45ed6670-688b-4cf0-bfe7-34956648fb84 |
| share_server_id | None |
| host | control@hnas1#HNAS1 |
| access_rules_status | active |
| snapshot_id | None |
| is_public | False |
| task_state | None |
| snapshot_support | True |
| id | 721c0a6d-eea6-41af-8c10-72cd98985203 |
| size | 1 |
| user_id | ba7f6d543713488786b4b8cb093e7873 |
| name | mysharehnas |
| share_type | 3e54c8a2-1e50-455e-89a0-96bb52876c35 |
| has_replicas | False |
| replication_type | None |
| created_at | 2016-10-14T14:50:47.000000 |
| share_proto | NFS |
| consistency_group_id | None |
| source_cgsnapshot_member_id | None |
| project_id | c3810d8bcc3346d0bdc8100b09abbbf1 |
| metadata | {} |
+-----------------------------+-----------------------------------------------------------------+
Configure multiple back ends
============================
An administrator can configure an instance of Manila to provision shares from
one or more back ends. Each back end leverages an instance of a vendor-specific
implementation of the Manila driver API.
The name of the back end is declared as the configuration option
``share_backend_name`` within a particular configuration stanza that contains the
related configuration options for that back end.
So, in the case of a multiple back end deployment, it is necessary to change
the default share back ends before deployment.
Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
.. code-block:: console
[DEFAULT]
enabled_share_backends = generic,hnas1,hnas2
Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:
.. code-block:: console
[generic]
share_driver = manila.share.drivers.generic.GenericShareDriver
interface_driver = manila.network.linux.interface.OVSInterfaceDriver
driver_handles_share_servers = True
service_instance_password = manila
service_instance_user = manila
service_image_name = manila-service-image
share_backend_name = GENERIC
[hnas1]
share_backend_name = HNAS1
share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
driver_handles_share_servers = False
hitachi_hnas_ip = <hnas_ip>
hitachi_hnas_user = <user>
hitachi_hnas_password = <password>
hitachi_hnas_evs_id = <evs_id>
hitachi_hnas_evs_ip = <evs_ip>
hitachi_hnas_file_system_name = FS-Manila1
[hnas2]
share_backend_name = HNAS2
share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
driver_handles_share_servers = False
hitachi_hnas_ip = <hnas_ip>
hitachi_hnas_user = <user>
hitachi_hnas_password = <password>
hitachi_hnas_evs_id = <evs_id>
hitachi_hnas_evs_ip = <evs_ip>
hitachi_hnas_file_system_name = FS-Manila2
For more information about how to manage shares, see the
`OpenStack User Guide
<http://docs.openstack.org/user-guide/index.html>`__.
For more information about how the HNAS driver works, see
`Hitachi NAS Platform File Services Driver for OpenStack
<http://docs.openstack.org/developer/manila/devref/hitachi_hnas_driver.html>`__.
View File
@@ -1,157 +0,0 @@
.. _multinode:
=============================
Multinode Deployment of Kolla
=============================
.. _deploy_a_registry:
Deploy a registry
=================
A Docker registry is a locally hosted registry that replaces the need to pull
from the Docker Hub to get images. Kolla can function with or without a local
registry; however, for a multinode deployment some type of registry is mandatory.
Only a single registry needs to be deployed, although HA features exist for registry
services.
The Docker registry prior to version 2.3 has extremely bad performance because
all container data is pushed for every image rather than taking advantage of
Docker layering to optimize push operations. For more information reference
`pokey registry <https://github.com/docker/docker/issues/14018>`__.
The Kolla community recommends using registry 2.3 or later. To deploy registry
with version 2.3 or later, do the following:
::
tools/start-registry
.. _configure_docker_all_nodes:
Configure Docker on all nodes
=============================
.. note:: As the subtitle for this section implies, these steps should be
applied to all nodes, not just the deployment node.
The ``start-registry`` script configures a docker registry that proxies Kolla
images from Docker Hub, and can also be used with custom built images (see
:doc:`image-building`).
After starting the registry, it is necessary to instruct Docker that it will
be communicating with an insecure registry. To enable insecure registry
communication on CentOS, modify the ``/etc/sysconfig/docker`` file to contain
the following where 192.168.1.100 is the IP address of the machine where the
registry is currently running:
::
# CentOS
INSECURE_REGISTRY="--insecure-registry 192.168.1.100:5000"
For Ubuntu, check whether it is using upstart or systemd.
::
# stat /proc/1/exe
File: '/proc/1/exe' -> '/lib/systemd/systemd'
Edit ``/etc/default/docker`` and add:
::
# Ubuntu
DOCKER_OPTS="--insecure-registry 192.168.1.100:5000"
If Ubuntu is using systemd, additional settings need to be configured.
Copy Docker's systemd unit file to ``/etc/systemd/system/`` directory:
::
cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
Next, modify ``/etc/systemd/system/docker.service``: add an ``EnvironmentFile``
variable and add ``$DOCKER_OPTS`` to the end of ``ExecStart`` in the ``[Service]``
section:
::
# CentOS
[Service]
MountFlags=shared
EnvironmentFile=/etc/sysconfig/docker
ExecStart=/usr/bin/docker daemon $INSECURE_REGISTRY
# Ubuntu
[Service]
MountFlags=shared
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS
Restart Docker by executing the following commands:
::
# CentOS or Ubuntu with systemd
systemctl daemon-reload
systemctl restart docker
# Ubuntu with upstart or sysvinit
sudo service docker restart
.. _edit-inventory:
Edit the Inventory File
=======================
The ansible inventory file contains all the information needed to determine
what services will land on which hosts. Edit the inventory file in the kolla
directory ``ansible/inventory/multinode``. If kolla was installed with pip,
the inventory file can be found in ``/usr/share/kolla``.
Add the ip addresses or hostnames to a group and the services associated with
that group will land on that host:
::
# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostnames must be resolvable from your deployment host
control01
192.168.122.24
For more advanced roles, the operator can edit which services will be
associated with each group. Keep in mind that some services have to be
grouped together and changing these around can break your deployment:
::
[kibana:children]
control
[elasticsearch:children]
control
[haproxy:children]
network
Deploying Kolla
===============
First, check that the deployment targets are in a state where Kolla may deploy
to them:
::
kolla-ansible prechecks -i <path/to/multinode/inventory/file>
For additional environment setup see the :ref:`deploying-kolla`.
Run the deployment:
::
kolla-ansible deploy -i <path/to/multinode/inventory/file>
View File
@@ -1,89 +0,0 @@
.. _networking-guide:
============================
Enabling Neutron Extensions
============================
Overview
========
Kolla deploys Neutron by default as the OpenStack networking component. This guide
describes configuring and running Neutron extensions like LBaaS,
Networking-SFC, QoS, etc.
Networking-SFC
==============
Preparation and deployment
--------------------------
Modify the configuration file ``/etc/kolla/globals.yml`` and change
the following:
::
neutron_plugin_agent: "sfc"
Networking-SFC is an additional Neutron plugin. For SFC to work, this plugin
has to be installed in ``neutron-server`` container as well. Modify the
configuration file ``/etc/kolla/kolla-build.conf`` and add the following
contents:
::
[neutron-server-plugin-networking-sfc]
type = git
location = https://github.com/openstack/networking-sfc.git
reference = mitaka
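After updating ``kolla-build.conf``, rebuild the ``neutron-server`` image so that
the plugin is baked into it. As a sketch, assuming images are built locally with
kolla-build:
::
kolla-build neutron-server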
Verification
------------
Verify the build and deploy operation of the Networking-SFC container. A successful
deployment will bring up an SFC container in the list of running containers.
Run the following command to log in to the ``neutron-server`` container:
::
docker exec -it neutron_server bash
Neutron should provide the following CLI extensions.
::
#neutron help|grep port
port-chain-create [port_chain] Create a Port Chain.
port-chain-delete [port_chain] Delete a given Port Chain.
port-chain-list [port_chain] List Port Chains that belong
to a given tenant.
port-chain-show [port_chain] Show information of a
given Port Chain.
port-chain-update [port_chain] Update Port Chain's
information.
port-pair-create [port_pair] Create a Port Pair.
port-pair-delete [port_pair] Delete a given Port Pair.
port-pair-group-create [port_pair_group] Create a Port Pair
Group.
port-pair-group-delete [port_pair_group] Delete a given
Port Pair Group.
port-pair-group-list [port_pair_group] List Port Pair Groups
that belongs to a given tenant.
port-pair-group-show [port_pair_group] Show information of a
given Port Pair Group.
port-pair-group-update [port_pair_group] Update Port Pair
Group's information.
port-pair-list [port_pair] List Port Pairs that belongs
to a given tenant.
port-pair-show [port_pair] Show information of a given
Port Pair.
port-pair-update [port_pair] Update Port Pair's
information.
For setting up a testbed environment and creating a port chain, please refer
to the following link:
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
For the source code, please refer to the following link:
https://github.com/openstack/networking-sfc
View File
@@ -1,37 +0,0 @@
.. nova-fake-driver:
================
Nova Fake Driver
================
One common question from OpenStack operators is: "How does the control
plane (e.g., database, messaging queue, nova-scheduler) scale?" To answer
this question, operators set up Rally to drive workload to the OpenStack cloud.
However, without a large number of nova-compute nodes, it becomes difficult to
exercise the control plane performance.
Given the lightweight nature of Docker containers, Kolla makes it possible to
stand up many nova-compute nodes with the nova fake driver on a single host. For
example, we can create 100 nova-compute containers on a real host to simulate a
100-hypervisor workload against nova-conductor and the messaging queue.
Use nova-fake driver
====================
The nova fake driver cannot work with an all-in-one deployment, because the
fake neutron-openvswitch-agent for the fake nova-compute container conflicts
with the neutron-openvswitch-agent on the compute nodes. Therefore, in the
inventory the network node must be different from the compute node.
By default, Kolla uses the libvirt driver on the compute node. To use the nova
fake driver, edit the following parameters in ``ansible/group_vars`` or in the
command line options.
::
enable_nova_fake: "yes"
num_nova_fake_per_node: 5
Each compute node will run 5 nova-compute containers and 5
neutron-plugin-agent containers. When booting an instance, no real
instance is created, but *nova list* shows the fake instances.
View File
@@ -1,83 +0,0 @@
.. _operating-kolla:
===============
Operating Kolla
===============
Upgrading
=========
Kolla's strategy for upgrades is to never make a mess and to follow consistent
patterns during deployment such that upgrades from one environment to the next
are simple to automate.
Kolla implements a one command operation for upgrading an existing deployment
consisting of a set of containers and configuration data to a new deployment.
Kolla uses the ``x.y.z`` semver nomenclature for naming versions. Kolla's
Liberty version is ``1.0.0`` and the Mitaka version is ``2.0.0``. The Kolla
community commits to release z-stream updates every 45 days that resolve
defects in the stable version in use and publish those images to the Docker Hub
registry. To prevent confusion, the Kolla community recommends using an alpha
identifier ``x.y.z.a`` where ``a`` represents any customization done on the
part of the operator. For example, if an operator intends to modify one of the
Docker files or the repos from the originals and build custom images for the
Liberty version, the operator should start with version 1.0.0.0 and increase
alpha for each release. Alpha tag usage is at discretion of the operator. The
alpha identifier could be a number as recommended or a string of the operator's
choosing.
If the alpha identifier is not used, Kolla will deploy or upgrade using the
version number information contained in the release. To customize the
version number uncomment openstack_version in globals.yml and specify
the version number desired.
For example, to deploy a custom built Liberty version built with the
``kolla-build --tag 1.0.0.0`` operation, change globals.yml::
openstack_version: 1.0.0.0
Then run the command to deploy::
kolla-ansible deploy
If using Liberty and a custom alpha number of 0, and upgrading to 1, change
globals.yml::
openstack_version: 1.0.0.1
Then run the command to upgrade::
kolla-ansible upgrade
.. note:: Varying degrees of success have been reported with upgrading
the libvirt container with a running virtual machine in it. The libvirt
upgrade still needs a bit more validation, but the Kolla community feels
confident this mechanism can be used with the correct Docker graph driver.
.. note:: The Kolla community recommends the btrfs or aufs graph drivers for
storing data as sometimes the LVM graph driver loses track of its reference
counting and results in an unremovable container.
.. note:: Because of system technical limitations, upgrade of a libvirt
container when using software emulation (``virt_driver=qemu`` in nova.conf)
does not work at all. This is acceptable because KVM is the recommended
virtualization driver to use with Nova.
Tips and Tricks
===============
Kolla ships with several utilities intended to facilitate ease of operation.
``tools/cleanup-containers`` can be used to remove deployed containers from the
system. This can be useful when you want to do a new clean deployment. It will
preserve the registry and the locally built images in the registry, but will
remove all running Kolla containers from the local Docker daemon. It also
removes the named volumes.
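For example, to run it from a checkout of the Kolla source tree:
::
tools/cleanup-containers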
``tools/cleanup-host`` can be used to remove remnants of network changes
triggered on the Docker host when the neutron-agents containers are launched.
This can be useful when you want to do a new clean deployment, particularly one
changing the network topology.
``tools/cleanup-images`` can be used to remove all Docker images built by Kolla
from the local Docker cache.
View File
@@ -1,101 +0,0 @@
.. architecture-guide:
=============================
Production architecture guide
=============================
This guide will help with configuring Kolla to suit production needs. It is
meant to answer some questions regarding basic configuration options that Kolla
requires. This document also contains other useful pointers.
Node types and services running on them
=======================================
A basic Kolla inventory consists of several types of nodes, known in Ansible as
``groups`` (a sample inventory sketch follows the list below).
* Controller - This is the cloud controller node. It hosts control services
like APIs and databases. This group should have an odd number of nodes for
quorum.
* Network - This is the network node. It will host Neutron agents along with
haproxy / keepalived. These nodes will have a floating IP defined in
``kolla_internal_vip_address``.
* Compute - These are servers for compute services. This is where guest VMs
live.
* Storage - Storage servers, for cinder-volume, LVM or ceph-osd.
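For illustration, a minimal multinode inventory mapping hosts to these groups
could look like the sketch below; the hostnames are placeholders and the real
inventory file contains many more groups:
::
[control]
control01
control02
control03
[network]
network01
[compute]
compute01
compute02
[storage]
storage01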
Network configuration
=====================
.. _interface-configuration:
Interface configuration
***********************
In Kolla, operators should configure the following network interfaces (a sample
``globals.yml`` snippet follows this list):
* network_interface - While it is not used on its own, this provides the
required default for other interfaces below.
* api_interface - This interface is used for the management network. The
management network is the network OpenStack services use to communicate with
each other and with the databases. There are known security risks here, so it's
recommended to make this network internal, not accessible from outside.
Defaults to network_interface.
* kolla_external_vip_interface - This is the public-facing interface. It's
used when you want HAProxy public endpoints to be exposed in a different
network than the internal ones. It is mandatory to set this option when
``kolla_enable_tls_external`` is set to yes. Defaults to network_interface.
* storage_interface - This is the interface that is used by virtual machines to
communicate to Ceph. This can be heavily utilized so it's recommended to put
this network on 10Gig networking. Defaults to network_interface.
* cluster_interface - This is another interface used by Ceph. It's used for
data replication. It can be heavily utilized also and if it becomes a
bottleneck, it can affect data consistency and the performance of the whole cluster.
Defaults to network_interface.
* tunnel_interface - This interface is used by Neutron for vm-to-vm traffic
over tunneled networks (like VxLan). Defaults to network_interface.
* neutron_external_interface - This interface is required by Neutron. Neutron
will put br-ex on it. It will be used for flat networking as well as tagged
VLAN networks. It has to be set separately.
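For illustration, a possible ``globals.yml`` snippet is shown below; the interface
names are examples and depend on the host, and ``api_interface`` is omitted so it
defaults to ``network_interface``:
::
network_interface: "eth0"
kolla_external_vip_interface: "eth1"
storage_interface: "eth2"
cluster_interface: "eth2"
tunnel_interface: "eth0"
neutron_external_interface: "eth1"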
Docker configuration
====================
Because Docker is a core dependency of Kolla, proper configuration of Docker can
change the experience of Kolla significantly. The following section highlights
several Docker configuration details relevant to Kolla operators.
Storage driver
**************
In certain distributions the Docker storage driver defaults to devicemapper, which
can severely degrade the performance of builds and deploys. We suggest using btrfs
or aufs as the driver. More details on which storage driver to use can be found in the
`Docker documentation <https://docs.docker.com/engine/userguide/storagedriver/selectadriver/>`_.
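As a sketch only, assuming a systemd drop-in unit like the one described in the
quickstart guide, the storage driver can be selected with the Docker daemon's
``--storage-driver`` flag:
::
[Service]
MountFlags=shared
ExecStart=
ExecStart=/usr/bin/docker daemon --storage-driver btrfs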
Volumes
*******
Kolla puts nearly all of its persistent data in Docker volumes. These volumes are
created in the Docker working directory, which defaults to
::
/var/lib/docker
We recommend ensuring that this directory has enough space and is placed on a
fast disk, as it will affect the performance of builds and deploys as well as
database commits and RabbitMQ.
This becomes especially relevant when ``enable_central_logging`` and
``openstack_logging_debug`` are both set to true, as a fully loaded 130 node
cluster produced 30-50GB of logs daily.
View File
@@ -1,673 +0,0 @@
.. quickstart:
===========
Quick Start
===========
This guide provides step by step instructions to deploy OpenStack using Kolla
and Kolla-Ansible on bare metal servers or virtual machines.
Host machine requirements
=========================
The host machine must satisfy the following minimum requirements:
- 2 network interfaces
- 8GB main memory
- 40GB disk space
.. note::
Root access to the deployment host machine is required.
Recommended environment
=======================
This guide recommends using a bare metal server or a virtual machine. Follow
the instructions in this document to get started with deploying OpenStack on
bare metal or a virtual machine with Kolla.
If developing Kolla on a system that provides VirtualBox or Libvirt in addition
to Vagrant, use the Vagrant virtual environment documented in
:doc:`vagrant-dev-env`.
Prerequisites
=============
Verify the state of network interfaces. If using a VM spawned on
OpenStack as the host machine, the state of the second interface will be DOWN
on booting the VM.
::
ip addr show
Bring up the second network interface if it is down.
::
ip link set ens4 up
Verify if the second interface has an IP address.
::
ip addr show
Install dependencies
====================
Kolla builds images which are used by Kolla-Ansible to deploy OpenStack. The
deployment is tested on CentOS, Oracle Linux and Ubuntu as both container OS
platforms and bare metal deployment targets.
Ubuntu: For Ubuntu based systems where Docker is used it is recommended to use
the latest available LTS kernel. While all kernels should work for Docker, some
older kernels may have issues with some of the different Docker back ends such
as AUFS and OverlayFS. In order to update the kernel in Ubuntu 14.04 LTS to 4.2,
run:
::
apt-get install linux-image-generic-lts-wily
.. note:: The installation is *very* sensitive to component versions. Please
review carefully because default operating system repos are likely out of
date.
Dependencies for the stable/mitaka branch are:
===================== =========== =========== =========================
Component Min Version Max Version Comment
===================== =========== =========== =========================
Ansible 1.9.4 <2.0.0 On deployment host
Docker 1.10.0 none On target nodes
Docker Python 1.6.0 none On target nodes
Python Jinja2 2.6.0 none On deployment host
===================== =========== =========== =========================
Dependencies for the stable/newton branch and later (including master branch)
are:
===================== =========== =========== =========================
Component Min Version Max Version Comment
===================== =========== =========== =========================
Ansible 2.0.0 none On deployment host
Docker 1.10.0 none On target nodes
Docker Python 1.6.0 none On target nodes
Python Jinja2 2.8.0 none On deployment host
===================== =========== =========== =========================
Make sure the ``pip`` package manager is installed and upgraded to the latest
before proceeding:
::
#CentOS
yum install epel-release
yum install python-pip
pip install -U pip
#Ubuntu
apt-get update
apt-get install python-pip
pip install -U pip
Install dependencies needed to build the code with ``pip`` package manager.
::
#CentOS
yum install python-devel libffi-devel gcc openssl-devel
#Ubuntu
apt-get install python-dev libffi-dev gcc libssl-dev
Kolla deploys OpenStack using `Ansible <http://www.ansible.com>`__. Install
Ansible from distribution packaging if the distro packaging has recommended
version available.
Some distributions ship versions of Ansible that are too old to use distro
packaging. Currently, CentOS and RHEL package Ansible >2.0, which is suitable
for use with Kolla. Note that you will need to enable access to the EPEL
repository to install via yum -- to do so, take a look at Fedora's EPEL `docs
<https://fedoraproject.org/wiki/EPEL>`__ and `FAQ
<https://fedoraproject.org/wiki/EPEL/FAQ>`__.
On CentOS or RHEL systems, this can be done using:
::
yum install ansible
Many DEB based systems do not meet Kolla's Ansible version requirements. It is
recommended to use pip to install Ansible >2.0. Finally Ansible >2.0 may be
installed using:
::
pip install -U ansible
.. note:: It is recommended to use virtualenv to install non-system packages.
If DEB based systems include a version of Ansible that meets Kolla's version
requirements it can be installed by:
::
apt-get install ansible
.. WARNING::
Kolla uses PBR in its implementation. PBR provides version information
to Kolla about the package in use. This information is later used when
building images to specify the Docker tag used in the image built. When
installing the Kolla package via pip, PBR will always use the PBR version
information. When obtaining a copy of the software via git, PBR will use
the git version information, but **ONLY** if Kolla has not been pip
installed via the pip package manager. This is why there is an operator
workflow and a developer workflow.
The following dependencies can be installed by bootstrapping the host machine
as described in the `Automatic host bootstrap`_ section. For manual
installation, follow the instructions below:
Since Docker is required to build images as well as be present on all deployed
targets, the Kolla community recommends installing the official Docker, Inc.
packaged version of Docker for maximum stability and compatibility with the
following command:
::
curl -sSL https://get.docker.io | bash
This command will install the most recent stable version of Docker, but please
note that Kolla releases are not in sync with Docker in any way, so some things
could stop working with a new version. The latest release of Kolla is tested to
work with docker-engine >= 1.10.0. To check your Docker version, run this
command:
::
docker --version
When running with systemd, set up docker-engine with the appropriate information
for the Docker daemon to launch with. This means setting up the following
information in the ``docker.service`` file. If you do not set the MountFlags
option correctly then ``kolla-ansible`` will fail to deploy the
``neutron-dhcp-agent`` container and throw an APIError/HTTPError. After adding
the drop-in unit file as follows, reload and restart the Docker service:
::
# Create the drop-in unit directory for docker.service
mkdir -p /etc/systemd/system/docker.service.d
# Create the drop-in unit file
tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
MountFlags=shared
EOF
Restart Docker by executing the following commands:
::
# Run these commands to reload the daemon
systemctl daemon-reload
systemctl restart docker
On the target hosts you also need an updated version of the Docker python
libraries:
.. note:: The old docker-python is obsoleted by python-docker-py.
::
yum install python-docker-py
Or using ``pip`` to install the latest version:
::
pip install -U docker-py
OpenStack, RabbitMQ, and Ceph require all hosts to have matching times to
ensure proper message delivery. In the case of Ceph, it will complain if the
hosts differ by more than 0.05 seconds. Some OpenStack services have timers as
low as 2 seconds by default. For these reasons it is highly recommended to
setup an NTP service of some kind. While ``ntpd`` will achieve more accurate
time for the deployment if the NTP servers are running in the local deployment
environment, `chrony <http://chrony.tuxfamily.org>`_ is more accurate when
syncing the time across a WAN connection. When running Ceph it is recommended
to setup ``ntpd`` to sync time locally due to the tight time constraints.
To install, start, and enable ntp on CentOS execute the following:
::
# CentOS 7
yum install ntp
systemctl enable ntpd.service
systemctl start ntpd.service
To install and start on Debian based systems execute the following:
::
apt-get install ntp
Libvirt is started by default on many operating systems. Please disable
``libvirt`` on any machines that will be deployment targets. Only one copy of
libvirt may be running at a time.
::
# CentOS 7
systemctl stop libvirtd.service
systemctl disable libvirtd.service
# Ubuntu
service libvirt-bin stop
update-rc.d libvirt-bin disable
On Ubuntu, apparmor will sometimes prevent libvirt from working.
::
/usr/sbin/libvirtd: error while loading shared libraries:
libvirt-admin.so.0: cannot open shared object file: Permission denied
If you are seeing the libvirt container fail with the error above, disable the
libvirt profile.
::
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
.. note::
On Ubuntu 16.04, please uninstall lxd and lxc packages. (An issue exists
with cgroup mounts, mounts exponentially increasing when restarting
container).
Additional steps for upstart and other non-systemd distros
==========================================================
For Ubuntu 14.04 which uses upstart and other non-systemd distros, run the
following.
::
mount --make-shared /run
mount --make-shared /var/lib/nova/mnt
If /var/lib/nova/mnt is not present, the following workaround can be used.
::
mkdir -p /var/lib/nova/mnt /var/lib/nova/mnt1
mount --bind /var/lib/nova/mnt1 /var/lib/nova/mnt
mount --make-shared /var/lib/nova/mnt
To mount /run and /var/lib/nova/mnt as shared upon startup, edit
/etc/rc.local to add the following.
::
mount --make-shared /run
mount --make-shared /var/lib/nova/mnt
.. note::
If CentOS/Fedora/OracleLinux container images are built on an Ubuntu host,
the back-end storage driver must not be AUFS (see the known issues in
:doc:`image-building`).
Install Kolla for deployment or evaluation
==========================================
Install Kolla and its dependencies using pip.
::
pip install kolla
Copy the configuration files globals.yml and passwords.yml to /etc directory.
::
#CentOS
cp -r /usr/share/kolla/etc_examples/kolla /etc/kolla/
#Ubuntu
cp -r /usr/local/share/kolla/etc_examples/kolla /etc/kolla/
The inventory files (all-in-one and multinode) are located in
/usr/local/share/kolla/ansible/inventory. Copy the configuration files to the
current directory.
::
#CentOS
cp /usr/share/kolla/ansible/inventory/* .
#Ubuntu
cp /usr/local/share/kolla/ansible/inventory/* .
Install Kolla for development
=============================
Clone the Kolla and Kolla-Ansible repositories from git.
::
git clone https://github.com/openstack/kolla
git clone https://github.com/openstack/kolla-ansible
Kolla-ansible holds configuration files (globals.yml and passwords.yml) in
etc/kolla. Copy the configuration files to /etc directory.
::
cp -r kolla-ansible/etc/kolla /etc/kolla/
Kolla-ansible holds the inventory files (all-in-one and multinode) in
ansible/inventory. Copy the configuration files to the current directory.
::
cp kolla-ansible/ansible/inventory/* .
Local Registry
==============
A local registry is recommended but not required for an ``all-in-one``
installation when developing for master. Since no master images are available
on docker hub, the docker cache may be used for all-in-one deployments. When
deploying multinode, a registry is strongly recommended to serve as a single
source of images. Reference the :doc:`multinode` for more information on using
a local Docker registry. Otherwise, the Docker Hub Image Registry contains all
images from each of Kolla's major releases. The latest release tag is 3.0.2 for
Newton.
Automatic host bootstrap
========================
Edit the ``/etc/kolla/globals.yml`` file to configure interfaces.
::
network_interface: "ens3"
neutron_external_interface: "ens4"
Generate passwords. This will populate all empty fields in the
``/etc/kolla/passwords.yml`` file using randomly generated values to secure the
deployment. Optionally, the passwords may be populated in the file by hand.
::
kolla-genpwd
To quickly prepare hosts, the ``bootstrap-servers`` playbook can be used. This is an
Ansible playbook which works on Ubuntu 14.04, 16.04 and CentOS 7 hosts to
install and prepare the cluster for OpenStack installation.
::
kolla-ansible -i <<inventory file>> bootstrap-servers
Build container images
======================
When running with systemd, edit the file
``/etc/systemd/system/docker.service.d/kolla.conf``
to include the MTU size to be used for Docker containers.
::
[Service]
MountFlags=shared
ExecStart=
ExecStart=/usr/bin/docker daemon \
-H fd:// \
--mtu 1400
.. note::
The MTU size should be less than or equal to the MTU size allowed on the
network interfaces of the host machine. If the MTU size allowed on the
network interfaces of the host machine is 1500 then this step can be
skipped. This step is relevant for building containers. Actual OpenStack
services won't be affected.
.. note::
Verify that the MountFlags parameter is configured as shared. If you do not
set the MountFlags option correctly then kolla-ansible will fail to deploy the
neutron-dhcp-agent container and throw an APIError/HTTPError.
Restart Docker and ensure that Docker is running.
::
systemctl daemon-reload
systemctl restart docker
The Kolla community builds and pushes tested images for each tagged release of
Kolla. Pull required images with appropriate tags.
::
kolla-ansible pull
View the images.
::
docker images
Developers running from master are required to build container images as
the Docker Hub does not contain built images for the master branch.
Reference the :doc:`image-building` for more advanced build configuration.
To build images using default parameters run:
::
kolla-build
By default kolla-build will build all containers using CentOS as the base image
and binary installation as the base installation method. To change this behavior,
please use the following parameters with kolla-build:
::
--base [ubuntu|centos|oraclelinux]
--type [binary|source]
.. note::
--base and --type can be added to the above kolla-build command if
different distributions or types are desired.
It is also possible to build individual container images. As an example, if the
glance images failed to build, all glance related images can be rebuilt as
follows:
::
kolla-build glance
In order to see all available parameters, run:
::
kolla-build -h
View the images.
::
docker images
.. WARNING::
Mixing of OpenStack releases with Kolla releases (example, updating
kolla-build.conf to build Mitaka Keystone to be deployed with Newton Kolla) is
not recommended and will likely cause issues.
Deploy Kolla
============
Kolla-Ansible is used to deploy containers by using images built by Kolla.
There are two methods of deployment: *all-in-one* and *multinode*. The
*all-in-one* deployment is similar to `devstack
<http://docs.openstack.org/developer/devstack/>`__ deploy which installs all
OpenStack services on a single host. In the *multinode* deployment, OpenStack
services can be run on specific hosts. This documentation describes deploying
an *all-in-one* setup. To setup *multinode* see the :doc:`multinode`.
Each method is represented as an Ansible inventory file. More information on
the Ansible inventory file can be found in the Ansible `inventory introduction
<https://docs.ansible.com/intro_inventory.html>`__.
All variables for the environment can be specified in the files:
``/etc/kolla/globals.yml`` and ``/etc/kolla/passwords.yml``.
Generate passwords for ``/etc/kolla/passwords.yml`` using the provided
``kolla-genpwd`` tool. The tool will populate all empty fields in the
``/etc/kolla/passwords.yml`` file using randomly generated values to secure the
deployment. Optionally, the passwords may be populated in the file by hand.
::
kolla-genpwd
Start by editing ``/etc/kolla/globals.yml``. Check and edit, if needed, these
parameters: ``kolla_base_distro``, ``kolla_install_type``. The default for
``kolla_base_distro`` is ``centos`` and for ``kolla_install_type`` is
``binary``. If you want to use ubuntu with source type, then you should make
sure globals.yml has the following entries:
::
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
Please specify an unused IP address in the network to act as a VIP for
``kolla_internal_vip_address``. The VIP will be used with keepalived and added
to the ``api_interface`` as specified in the ``globals.yml``
::
kolla_internal_vip_address: "192.168.137.79"
.. note::
The kolla_internal_vip_address must be unique and should belong to the same
network to which the first network interface belongs.
.. note::
The kolla_base_distro and kolla_install_type should be the same as the base and
install type used on the kolla-build command line.
The ``network_interface`` variable is the interface to which Kolla binds API
services. For example, when starting MariaDB, it will bind to the IP on the
interface listed in the ``network_interface`` variable.
::
network_interface: "ens3"
The ``neutron_external_interface`` variable is the interface that will be used
for the external bridge in Neutron. Without this bridge the deployment instance
traffic will be unable to access the rest of the Internet.
::
neutron_external_interface: "ens4"
In case of deployment in a **nested** environment (e.g. using VirtualBox
VMs or KVM VMs), verify whether your compute node supports hardware acceleration
for virtual machines by executing the following command on the *compute node*.
::
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of **zero**, your compute node does not support
hardware acceleration and you **must** configure libvirt to use **QEMU**
instead of KVM. Create a file /etc/kolla/config/nova/nova-compute.conf and add
the content shown below.
::
mkdir /etc/kolla/config/nova
cat << EOF > /etc/kolla/config/nova/nova-compute.conf
[libvirt]
virt_type=qemu
EOF
For *all-in-one* deployments, the following commands can be run. These will
set up all of the containers on the localhost. These commands will be
wrapped in the kolla-script in the future.
.. note:: Even for all-in-one installs it is possible to use the Docker
registry for deployment, although not strictly required.
First, validate that the deployment targets are in a state where Kolla may
deploy to them. Provide the correct path to inventory file in the following
commands.
::
kolla-ansible prechecks -i /path/to/all-in-one
Deploy OpenStack.
::
kolla-ansible deploy -i /path/to/all-in-one
List the running containers.
::
docker ps -a
Generate the ``admin-openrc.sh`` file. The file will be created in
``/etc/kolla/`` directory.
::
kolla-ansible post-deploy
To test your deployment, run the following commands to initialize the network
with a glance image and neutron networks.
::
source /etc/kolla/admin-openrc.sh
#centOS
cd /usr/share/kolla
./init-runonce
#ubuntu
cd /usr/local/share/kolla
./init-runonce
.. note::
Deployment times vary depending on the hardware used.
After successful deployment of OpenStack, the Horizon dashboard will be
available by entering IP address or hostname from ``kolla_external_fqdn``, or
``kolla_internal_fqdn``. If these variables were not set during deploy they
default to ``kolla_internal_vip_address``.
.. _Docker Hub Image Registry: https://hub.docker.com/u/kolla/
.. _launchpad bug: https://bugs.launchpad.net/kolla/+filebug
View File
@@ -1,56 +0,0 @@
.. _security:
==============
Kolla Security
==============
Non Root containers
===================
The OpenStack services, with a few exceptions, run as non root inside of
Kolla's containers. Kolla uses the Docker provided USER flag to set the
appropriate user for each service.
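For illustration only (not taken from an actual Kolla Dockerfile), the Docker
``USER`` instruction looks like this in a Dockerfile:
::
# processes started by this image run as the glance user instead of root
USER glance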
SELinux
=======
The state of SELinux in Kolla is a work in progress. The short answer is you
must disable it until SELinux policies are written for the Docker containers.
To understand why Kolla needs to set certain selinux policies for services that
you wouldn't expect to need them (rabbitmq, mariadb, glance, etc.) we must take
a step back and talk about Docker.
Docker has not had the concept of persistent containerized data until recently.
This means when a container is run the data it creates is destroyed when the
container goes away, which is obviously no good in the case of upgrades.
It was suggested data containers could solve this issue by only holding data if
they were never recreated, leading to a scary state where you could lose access
to your data if the wrong command was executed. The real answer to this problem
came in Docker 1.9 with the introduction of named volumes. You could now
address volumes directly by name removing the need for so called **data
containers** all together.
Another solution to the persistent data issue is to use a host bind mount, which
involves making, for the sake of example, the host directory ``/var/lib/mysql``
available inside the container at ``/var/lib/mysql``. This absolutely solves the
problem of persistent data, but it introduces another security issue,
permissions. With this host bind mount solution the data in ``/var/lib/mysql``
will be owned by the mysql user in the container. Unfortunately, that mysql
user in the container could have any UID/GID and that is who will own the data
outside the container, introducing a potential security risk. Additionally, this
method dirties the host and requires host permissions to the directories to
bind mount.
The solution Kolla chose is named volumes.
Why does this matter in the case of SELinux? Kolla does not run the process it
is launching as root in most cases. So glance-api is run as the glance user,
and mariadb is run as the mysql user, and so on. When mounting a named volume
in the location that the persistent data will be stored it will be owned by the
root user and group. The mysql user has no permissions to write to this folder
now. What Kolla does is allow a select few commands to be run with sudo as the
mysql user. This allows the mysql user to chown a specific, explicit directory
and store its data in a named volume without the security risk and other
downsides of host bind mounts. The downside to this is that SELinux blocks those
sudo commands, and it will do so until we make explicit policies to allow those
operations.
View File
@@ -1,180 +0,0 @@
.. _swift-guide:
==============
Swift in Kolla
==============
Overview
========
Kolla can deploy a full working Swift setup in either an **all-in-one** or
**multinode** setup.
Prerequisites
=============
Before running Swift we need to generate **rings**, which are binary compressed
files that at a high level let the various Swift services know where data is in
the cluster. We hope to automate this process in a future release.
Disks with a partition table (recommended)
==========================================
Swift also expects block devices to be available for storage. To prepare a disk
for use as a Swift storage device, a special partition name and filesystem label
need to be added so that Kolla can detect those disks and mount them for services.
Follow the example below to add 3 disks for an **all-in-one** demo setup.
::
# <WARNING ALL DATA ON DISK will be LOST!>
index=0
for d in sdc sdd sde; do
parted /dev/${d} -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1
sudo mkfs.xfs -f -L d${index} /dev/${d}1
(( index++ ))
done
For evaluation, loopback devices can be used in lieu of real disks:
::
index=0
for d in sdc sdd sde; do
free_device=$(losetup -f)
fallocate -l 1G /tmp/$d
losetup $free_device /tmp/$d
parted $free_device -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1
sudo mkfs.xfs -f -L d${index} ${free_device}p1
(( index++ ))
done
Disks without a partition table
===============================
Kolla also supports unpartitioned disk (filesystem on ``/dev/sdc`` instead of
``/dev/sdc1``) detection purely based on the filesystem label. This is generally
not a recommended practice but can be helpful when Kolla takes over a Swift
deployment that already uses disks like this.
Given hard disks with labels swd1, swd2, swd3, use the following settings in
``ansible/roles/swift/defaults/main.yml``.
::
swift_devices_match_mode: "prefix"
swift_devices_name: "swd"
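For example, an unpartitioned disk can be labeled to match the ``swd`` prefix
above as follows; the device name is only an example and **all data on it will
be lost**:
::
sudo mkfs.xfs -f -L swd1 /dev/sdc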
Rings
=====
Run the following commands locally to generate rings for the **all-in-one** demo setup.
The commands work with the **disks with a partition table** example listed above.
Please modify accordingly if your setup is different.
::
export KOLLA_INTERNAL_ADDRESS=1.2.3.4
export KOLLA_BASE_DISTRO=centos
export KOLLA_INSTALL_TYPE=binary
# Object ring
docker run \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
kolla/${KOLLA_BASE_DISTRO}-${KOLLA_INSTALL_TYPE}-swift-base \
swift-ring-builder /etc/kolla/config/swift/object.builder create 10 3 1
for i in {0..2}; do
docker run \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
kolla/${KOLLA_BASE_DISTRO}-${KOLLA_INSTALL_TYPE}-swift-base swift-ring-builder \
/etc/kolla/config/swift/object.builder add r1z1-${KOLLA_INTERNAL_ADDRESS}:6000/d${i} 1;
done
# Account ring
docker run \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
kolla/${KOLLA_BASE_DISTRO}-${KOLLA_INSTALL_TYPE}-swift-base \
swift-ring-builder /etc/kolla/config/swift/account.builder create 10 3 1
for i in {0..2}; do
docker run \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
kolla/${KOLLA_BASE_DISTRO}-${KOLLA_INSTALL_TYPE}-swift-base swift-ring-builder \
/etc/kolla/config/swift/account.builder add r1z1-${KOLLA_INTERNAL_ADDRESS}:6001/d${i} 1;
done
# Container ring
docker run \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
kolla/${KOLLA_BASE_DISTRO}-${KOLLA_INSTALL_TYPE}-swift-base \
swift-ring-builder /etc/kolla/config/swift/container.builder create 10 3 1
for i in {0..2}; do
docker run \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
kolla/${KOLLA_BASE_DISTRO}-${KOLLA_INSTALL_TYPE}-swift-base swift-ring-builder \
/etc/kolla/config/swift/container.builder add r1z1-${KOLLA_INTERNAL_ADDRESS}:6002/d${i} 1;
done
for ring in object account container; do
docker run \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
kolla/${KOLLA_BASE_DISTRO}-${KOLLA_INSTALL_TYPE}-swift-base swift-ring-builder \
/etc/kolla/config/swift/${ring}.builder rebalance;
done
Similar commands can be used for **multinode**; you will just need to run the
**add** step for each IP in the cluster.
For more info, see
http://docs.openstack.org/kilo/install-guide/install/apt/content/swift-initial-rings.html
Deploying
=========
Enable Swift in ``/etc/kolla/globals.yml``:
::
enable_swift : "yes"
Once the rings are in place, deploying Swift is the same as any other Kolla
Ansible service. Below is the minimal command to bring up Swift **all-in-one**
and its dependencies:
::
ansible-playbook \
-i ansible/inventory/all-in-one \
-e @/etc/kolla/globals.yml \
-e @etc/kolla/passwords.yml \
ansible/site.yml \
--tags=rabbitmq,mariadb,keystone,swift
Validation
==========
A very basic smoke test:
::
$ swift stat
Account: AUTH_4c19d363b9cf432a80e34f06b1fa5749
Containers: 1
Objects: 0
Bytes: 0
Containers in policy "policy-0": 1
Objects in policy "policy-0": 0
Bytes in policy "policy-0": 0
X-Account-Project-Domain-Id: default
X-Timestamp: 1440168098.28319
X-Trans-Id: txf5a62b7d7fc541f087703-0055d73be7
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
$ swift upload mycontainer README.rst
README.rst
$ swift list
mycontainer
$ swift download mycontainer README.rst
README.rst [auth 0.248s, headers 0.939s, total 0.939s, 0.006 MB/s]
View File
@@ -1,132 +0,0 @@
.. troubleshooting:
=====================
Troubleshooting Guide
=====================
Failures
========
If Kolla fails, often it is caused by a CTRL-C during the deployment
process or a problem in the ``globals.yml`` configuration.
To catch a misconfigured environment early, the Kolla community has added a
precheck feature which ensures the deployment targets are in a state where
Kolla may deploy to them. To run the prechecks, execute:
Production
==========
::
kolla-ansible prechecks
Development
===========
::
./tools/kolla-ansible prechecks
If a failure occurs during deployment, it nearly always happens while the
software is being evaluated. Once the Operator learns the few configuration
options required, it is highly unlikely they will experience a failure in deployment.
Deployment may be run as many times as desired, but if a failure in a
bootstrap task occurs, a further deploy action will not correct the problem.
In this scenario, Kolla's behavior is undefined.
The fastest way to recover from a deployment failure is to remove the failed
deployment:
Production
==========
::
kolla-ansible destroy -i <inventory-file>
Development
===========
::
./tools/kolla-ansible destroy -i <inventory-file>
Any time the tags of a release change, it is possible that the container
implementation from older versions won't match the Ansible playbooks in a new
version. If running multinode from a registry, each node's Docker image cache
must be refreshed with the latest images before a new deployment can occur. To
refresh the docker cache from the local Docker registry:
Production
==========
::
kolla-ansible pull
Development
===========
::
./tools/kolla-ansible pull
Debugging Kolla
===============
The status of containers after deployment can be determined on the deployment
targets by executing:
::
docker ps -a
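To narrow the listing down to containers that have already exited, the standard
Docker status filter can be used::

    docker ps -a --filter "status=exited"
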
If any of the containers exited, this indicates a bug in the container. Please
seek help by filing a `launchpad bug`_ or contacting the developers via IRC.
The logs can be examined by executing:
::
docker exec -it heka bash
The logs from all services in all containers may be read from
``/var/log/kolla/SERVICE_NAME``
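For example, to follow a single service's logs live inside the log container
(``keystone`` is used here purely as an illustration)::

    docker exec -it heka bash -c 'tail -f /var/log/kolla/keystone/*.log'
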
If the stdout logs are needed, please run:
::
docker logs <container-name>
Note that most of the containers don't log to stdout so the above command will
provide no information.
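For the containers that do log to stdout, the output can be limited and
followed with the usual Docker flags::

    docker logs --tail 100 -f <container-name>
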
To learn more about Docker command line operation please refer to `Docker
documentation <https://docs.docker.com/reference/commandline/cli/>`__.
When ``enable_central_logging`` is enabled, to view the logs in a web browser
using Kibana, go to:
::
http://<kolla_internal_vip_address>:<kibana_server_port>
or http://<kolla_external_vip_address>:<kibana_server_port>
and authenticate using ``<kibana_user>`` and ``<kibana_password>``.
The values ``<kolla_internal_vip_address>``, ``<kolla_external_vip_address>``,
``<kibana_server_port>`` and ``<kibana_user>`` can be found in
``<kolla_install_path>/kolla/ansible/group_vars/all.yml`` or, if the default
values are overridden, in ``/etc/kolla/globals.yml``. The value of
``<kibana_password>`` can be found in ``/etc/kolla/passwords.yml``.
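For example, to look the password up on the deployment host (the key name is
assumed to match the variable name above)::

    grep kibana_password /etc/kolla/passwords.yml
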
.. note:: When you log in to Kibana web interface for the first time, you are
prompted to create an index. Please create an index using the name ``log-*``.
This step is necessary until the default Kibana dashboard is implemented in
Kolla.
.. _launchpad bug: https://bugs.launchpad.net/kolla/+filebug
@ -1,161 +0,0 @@
.. _vagrant-dev-env:
====================================
Development Environment with Vagrant
====================================
This guide describes how to use `Vagrant <http://vagrantup.com>`__ to assist in
developing for Kolla.
Vagrant is a tool to assist in scripted creation of virtual machines. Vagrant
takes care of setting up CentOS-based VMs for Kolla development, each with an
appropriate amount of memory and the right number of network interfaces.
Getting Started
===============
The Vagrant script implements **all-in-one** or **multi-node** deployments.
**all-in-one** is the default.
In the case of **multi-node** deployment, the Vagrant setup builds a cluster
with the following nodes by default:
* 3 control nodes
* 1 compute node
* 1 storage node (Note: Ceph requires at least 3 storage nodes)
* 1 network node
* 1 operator node
The cluster node count can be changed by editing the Vagrantfile.
Kolla runs from the operator node to deploy OpenStack.
All nodes are connected to each other on the secondary NIC. The primary NIC
is behind a NAT interface for connecting to the Internet. The third NIC is
connected without IP configuration to a public bridge interface. This may be
used by Neutron/Nova to connect to instances.
Start by downloading and installing the Vagrant package for the distro of
choice. Various downloads can be found on the `Vagrant downloads
<https://www.vagrantup.com/downloads.html>`__ page.
Install required dependencies as follows:
On CentOS 7::
sudo yum install vagrant ruby-devel libvirt-devel libvirt-python zlib-devel libpng-devel gcc git
On Fedora 22 or later::
sudo dnf install vagrant ruby-devel libvirt-devel libvirt-python zlib-devel libpng-devel gcc git
On Ubuntu 14.04 or later::
sudo apt-get install vagrant ruby-dev ruby-libvirt python-libvirt libvirt-dev nfs-kernel-server zlib-dev libpng-dev gcc git
.. note:: Many distros ship outdated versions of Vagrant by default. When in
doubt, always install the latest from the downloads page above.
Next, install the hostmanager plugin so all hosts are recorded in ``/etc/hosts``
(inside each VM)::
vagrant plugin install vagrant-hostmanager vagrant-vbguest
Vagrant supports a wide range of virtualization technologies. This
documentation describes libvirt. To install vagrant-libvirt plugin::
vagrant plugin install --plugin-version ">= 0.0.31" vagrant-libvirt
Some Linux distributions offer vagrant-libvirt packages, but the version they
provide tends to be too old to run Kolla. A version of >= 0.0.31 is required.
Set up NFS to permit file sharing between host and VMs. Contrary to the rsync
method, NFS allows two-way synchronization and offers much better performance
than VirtualBox shared folders. On Fedora 22::
sudo systemctl start nfs-server
firewall-cmd --permanent --add-port=2049/udp
firewall-cmd --permanent --add-port=2049/tcp
firewall-cmd --permanent --add-port=111/udp
firewall-cmd --permanent --add-port=111/tcp
sudo systemctl restart firewalld
Ensure your system has libvirt and associated software installed and set up
correctly. On Fedora 22::
sudo dnf install @virtualization
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
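To confirm libvirt is ready before provisioning, listing the defined domains
should succeed, even if the list is empty::

    sudo virsh list --all
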
Find a location in the system's home directory and check out the Kolla repo::
git clone https://git.openstack.org/openstack/kolla
Developers can now tweak the Vagrantfile or bring up the default **all-in-one**
CentOS 7-based environment::
cd kolla/contrib/dev/vagrant && vagrant up
The command ``vagrant status`` provides a quick overview of the VMs composing
the environment.
Vagrant Up
==========
Once Vagrant has completed deploying all nodes, the next step is to launch
Kolla. First, connect with the **operator** node::
vagrant ssh operator
To speed things up, there is a local registry running on the operator node.
All nodes are configured to pull from this insecure registry and to use it as
a mirror; Ansible may pull images from it during deployment.
All nodes have a local folder shared between the group and the hypervisor, and
a folder shared between **all** nodes and the hypervisor. This mapping is lost
after reboots, so make sure to use the command ``vagrant reload <node>`` when
reboots are required. Having this shared folder provides a method to supply
a different Docker binary to the cluster. The shared folder is also used to
store the docker-registry files, so they are safe from destructive operations
like ``vagrant destroy``.
Building images
---------------
Once logged in to the **operator** VM, call the ``kolla-build`` utility::
kolla-build
``kolla-build`` accepts arguments as documented in :doc:`image-building`. It
builds Docker images and pushes them to the local registry if the **push**
option is enabled (in Vagrant this is the default behaviour).
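For example, to build only a subset of images for the default CentOS binary
target (the image name is given purely as an illustration; see
:doc:`image-building` for the full option list)::

    kolla-build --base centos --type binary keystone
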
Deploying OpenStack with Kolla
------------------------------
Deploy **all-in-one** with::
sudo kolla-ansible deploy
Deploy **multinode**:
On CentOS 7::
sudo kolla-ansible deploy -i /usr/share/kolla/ansible/inventory/multinode
On Ubuntu 14.04 or later::
sudo kolla-ansible deploy -i /usr/local/share/kolla/ansible/inventory/multinode
Validate OpenStack is operational::
kolla-ansible post-deploy
. /etc/kolla/admin-openrc.sh
openstack user list
Or navigate to http://172.28.128.254/ with a web browser.
Further Reading
===============
All Vagrant documentation can be found at
`docs.vagrantup.com <http://docs.vagrantup.com>`__.