Run pandoc to convert the documentation

This converts all MD-formatted docs that were renamed to RST (to
preserve git history) into actual RST documentation.  Some minor
edits were made, but in general the purpose of this patch is to
*only* convert the documentation, not rework it.
I do plan on reworking the documentation in further patch sets.

All links were tested and a test rendering is available:

    http://github.com/sdake/kolla

Change-Id: I3df430b14df1ede15407c7f4ba7afcbdc6f9d757
This commit is contained in:
Steven Dake 2015-08-20 23:06:10 -07:00
parent bbcf22cc12
commit 6e3127d043
9 changed files with 573 additions and 411 deletions


@ -1,107 +1,110 @@
Kolla Overview
==============
The Kolla project is a member of the OpenStack [Big Tent Governance][].
The Kolla project is a member of the OpenStack `Big Tent
Governance <http://governance.openstack.org/reference/projects/index.html>`__.
Kolla's mission statement is:
::
Kolla provides production-ready containers and deployment tools for
operating OpenStack clouds.
Kolla provides [Docker][] containers and [Ansible][] playbooks to meet Kolla's
mission. Kolla is highly opinionated out of the box, but allows for complete
customization. This permits operators with little experience to deploy
OpenStack quickly and as experience grows modify the OpenStack configuration
to suit the operator's exact requirements.
[Big Tent Governance]: http://governance.openstack.org/reference/projects/index.html
[Docker]: http://docker.com/
[Ansible]: http://ansible.com/
Kolla provides `Docker <http://docker.com/>`__ containers and
`Ansible <http://ansible.com/>`__ playbooks to meet Kolla's mission.
Kolla is highly opinionated out of the box, but allows for complete
customization. This permits operators with little experience to deploy
OpenStack quickly and as experience grows modify the OpenStack
configuration to suit the operator's exact requirements.
Getting Started
===============
Please get started by reading the [Developer Quickstart][] followed by the
[Ansible Deployment Guide][].
[Developer Quickstart]: https://github.com/stackforge/kolla/blob/master/docs/dev-quickstart.md
[Ansible Deployment guide]: https://github.com/stackforge/kolla/blob/master/docs/ansible-deployment.md]
Please get started by reading the `Developer
Quickstart <https://github.com/stackforge/kolla/blob/master/docs/dev-quickstart.md>`__
followed by the `Ansible Deployment
Guide <https://github.com/stackforge/kolla/blob/master/docs/ansible-deployment.md>`__.
Docker Images
-------------
The [Docker images][] are built by the Kolla project maintainers. A detailed
process for contributing to the images can be found in the
[image building guide][]. Images reside in the Docker Hub [Kollaglue repo][].
The `Docker images <https://docs.docker.com/userguide/dockerimages/>`__
are built by the Kolla project maintainers. A detailed process for
contributing to the images can be found in the `image building
guide <https://github.com/stackforge/kolla/blob/master/docs/image-building.md>`__.
Images reside in the Docker Hub `Kollaglue
repo <https://registry.hub.docker.com/repos/kollaglue/>`__.
[image building guide]: https://github.com/stackforge/kolla/blob/master/docs/image-building.md
[Docker images]: https://docs.docker.com/userguide/dockerimages/
[Kollaglue repo]: https://registry.hub.docker.com/repos/kollaglue/
The Kolla developers build images in the kollaglue namespace for the
following services for every tagged release and implement Ansible
deployment for them:
The Kolla developers build images in the kollaglue namespace for the following
services for every tagged release and implement Ansible deployment for them:
- Ceilometer
- Cinder
- Glance
- Haproxy
- Heat
- Horizon
- Keepalived
- Keystone
- Mariadb + galera
- Mongodb
- Neutron (linuxbridge or neutron)
- Nova
- Openvswitch
- Rabbitmq
* Ceilometer
* Cinder
* Glance
* Haproxy
* Heat
* Horizon
* Keepalived
* Keystone
* Mariadb + galera
* Mongodb
* Neutron (linuxbridge or neutron)
* Nova
* Openvswitch
* Rabbitmq
::
$ sudo docker search kollaglue
```
$ sudo docker search kollaglue
```
A list of the upstream-built Docker images will be shown.
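Any of the listed images can then be pulled directly; for example, using
the keystone image name referenced in the image building guide (shown
here as an illustration):

::

    $ sudo docker pull kollaglue/centos-rdo-keystone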
Directories
===========
* ansible - Contains Anible playbooks to deploy Kolla in Docker containers.
* compose - Contains the docker-compose files serving as a compose reference.
Note compose support is removed from Kolla. These are for community members
which want to use Kolla container content without Ansible.
* demos - Contains a few demos to use with Kolla.
* devenv - Contains an OpenStack-Heat based development environment.
* docker - Contains a normal Dockerfile based set of artifacts for building
docker. This is planned for removal when docker_templates is completed.
* docs - Contains documentation.
* etc - Contains a reference etc directory structure which requires
configuration of a small number of configuration variables to achieve a
working All-in-One (AIO) deployment.
* docker_templates - Contains jinja2 templates for the docker build system.
* tools - Contains tools for interacting with Kolla.
* specs - Contains the Kolla communities key arguments about architectural
shifts in the code base.
* tests - Contains functional testing tools.
* vagrant - Contains a vagrant VirtualBox-based development environment.
- ansible - Contains Ansible playbooks to deploy Kolla in Docker
containers.
- compose - Contains the docker-compose files serving as a compose
reference. Note compose support is removed from Kolla. These are for
community members who want to use Kolla container content without
Ansible.
- demos - Contains a few demos to use with Kolla.
- devenv - Contains an OpenStack-Heat based development environment.
- docker - Contains a normal Dockerfile based set of artifacts for
building docker. This is planned for removal when docker\_templates
is completed.
- docs - Contains documentation.
- etc - Contains a reference etc directory structure which requires
configuration of a small number of configuration variables to achieve
a working All-in-One (AIO) deployment.
- docker\_templates - Contains jinja2 templates for the docker build
system.
- tools - Contains tools for interacting with Kolla.
- specs - Contains the Kolla community's key arguments about
architectural shifts in the code base.
- tests - Contains functional testing tools.
- vagrant - Contains a vagrant VirtualBox-based development
environment.
Getting Involved
================
Need a feature? Find a bug? Let us know! Contributions are much appreciated
and should follow the standard [Gerrit workflow][].
Need a feature? Find a bug? Let us know! Contributions are much
appreciated and should follow the standard `Gerrit
workflow <https://wiki.openstack.org/wiki/Gerrit_Workflow>`__.
- We communicate using the #kolla irc channel.
- File bugs, blueprints, track releases, etc on [Launchpad][].
- Attend weekly [meetings][].
- Contribute [code][]
[Gerrit workflow]: https://wiki.openstack.org/wiki/Gerrit_Workflow
[Launchpad]: https://launchpad.net/kolla
[meetings]: https://wiki.openstack.org/wiki/Meetings/Kolla
[code]: https://github.com/stackforge/kolla
- We communicate using the #kolla irc channel.
- File bugs, blueprints, track releases, etc on
`Launchpad <https://launchpad.net/kolla>`__.
- Attend weekly
`meetings <https://wiki.openstack.org/wiki/Meetings/Kolla>`__.
- Contribute `code <https://github.com/stackforge/kolla>`__
Contributors
============
Check out who's [contributing code][] and [contributing reviews][].
[contributing code]: http://stackalytics.com/?module=kolla-group&metric=commits
[contributing reviews]: http://stackalytics.com/?module=kolla-group&metric=marks
Check out who's `contributing
code <http://stackalytics.com/?module=kolla-group&metric=commits>`__ and
`contributing
reviews <http://stackalytics.com/?module=kolla-group&metric=marks>`__.


@ -1,29 +1,8 @@
Docker compose
==============
These scripts and docker compose files can be used to stand up a simple
installation of openstack. Running the 'tools/genenv' script creates an
'openstack.env' suitable for running on a single host system as well as an
'openrc' to allow access to the installation.
All compose support in Kolla has been completely removed as of liberty-3.
Once you have run that you can either manually start the containers using the
'docker-compose' command or try the 'tools/kolla-compose start' script which tries to
start them all in a reasonable order, waiting at key points for services to
become available. Once stood up you can issue the typical openstack commands
to use the installation. If using nova networking use:
```
# source openrc
# tools/init-runonce
# nova boot --flavor m1.medium --key_name mykey --image puffy_clouds instance_name
# ssh cirros@<ip>
```
Else if using neutron networking use:
```
# source openrc
# tools/init-runonce
# nova boot --flavor m1.medium --key_name mykey --image puffy_clouds instance_name --nic net-id:<net id>
# ssh cirros@<ip>
```
The files in this directory are only for reference by the TripleO project.
As they stand today, they likely don't work. There is a blueprint to port
them to support the CONFIG_EXTERNAL config strategy.


@ -1,13 +1,15 @@
A Kolla Demo using Heat
=======================
By default, the launch script will spawn 3 Nova instances on a
Neutron network created from the tools/init-runonce script. Edit
the VM_COUNT parameter in the launch script if you would like to
spawn a different amount of Nova instances. Edit the IMAGE_FLAVOR
if you would like to launch images using a flavor other than
m1.tiny.
By default, the launch script will spawn 3 Nova instances on a Neutron
network created from the tools/init-runonce script. Edit the VM\_COUNT
parameter in the launch script if you would like to spawn a different
number of Nova instances. Edit the IMAGE\_FLAVOR if you would like to
launch instances using a flavor other than m1.tiny.
Then run the script:
::
$ ./launch


@ -1,115 +1,197 @@
## Reliable, Scalable Redis on Kubernetes
Reliable, Scalable Redis on Kubernetes
--------------------------------------
The following document describes the deployment of a reliable, multi-node Redis on Kubernetes. It deploys a master with replicated slaves, as well as replicated redis sentinels which are use for health checking and failover.
The following document describes the deployment of a reliable,
multi-node Redis on Kubernetes. It deploys a master with replicated
slaves, as well as replicated redis sentinels which are used for health
checking and failover.
### Prerequisites
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides) for installation instructions for your platform.
Prerequisites
~~~~~~~~~~~~~
### A note for the impatient
This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.
This example assumes that you have a Kubernetes cluster installed and
running, and that you have installed the ``kubectl`` command line tool
somewhere in your path. Please see the `getting
started <https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides>`__
for installation instructions for your platform.
### Turning up an initial master/sentinel pod.
is a [_Pod_](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
A note for the impatient
~~~~~~~~~~~~~~~~~~~~~~~~
We will used the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```.
This is a somewhat long tutorial. If you want to jump straight to the
"do it now" commands, please see the `tl; dr <#tl-dr>`__ at the end.
Here is the config for the initial master and sentinel pod: [redis-master.yaml](redis-master.yaml)
Turning up an initial master/sentinel pod.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first thing we will create is a
`*Pod* <https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md>`__.
A Pod is one or more containers that *must* be scheduled onto the same
host. All containers in a pod share a network namespace, and may
optionally share mounted volumes.
We will use the shared network namespace to bootstrap our Redis
cluster. In particular, the very first sentinel needs to know how to
find the master (subsequent sentinels just ask the first sentinel).
Because all containers in a Pod share a network namespace, the sentinel
can simply look at ``$(hostname -i):6379``.
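Because the sentinel and the redis master share a network namespace, you
can check this from inside the pod with something like the following (a
sketch, assuming ``redis-cli`` is available in the container):

.. code:: sh

    redis-cli -h $(hostname -i) -p 6379 ping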
Here is the config for the initial master and sentinel pod:
`redis-master.yaml <redis-master.yaml>`__
Create this master as follows:
```sh
kubectl create -f examples/redis/v1beta3/redis-master.yaml
```
### Turning up a sentinel service
In Kubernetes a _Service_ describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.
.. code:: sh
In Redis, we will use a Kubernetes Service to provide a discoverable endpoints for the Redis sentinels in the cluster. From the sentinels Redis clients can find the master, and then the slaves and other relevant info for the cluster. This enables new members to join the cluster when failures occur.
kubectl create -f examples/redis/v1beta3/redis-master.yaml
Here is the definition of the sentinel service:[redis-sentinel-service.yaml](redis-sentinel-service.yaml)
Turning up a sentinel service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In Kubernetes a *Service* describes a set of Pods that perform the same
task. For example, the set of nodes in a Cassandra cluster, or even the
single node we created above. An important use for a Service is to
create a load balancer which distributes traffic across members of the
set. But a *Service* can also be used as a standing query which makes a
dynamically changing set of Pods (or the single Pod we've already
created) available via the Kubernetes API.
In Redis, we will use a Kubernetes Service to provide a discoverable
endpoint for the Redis sentinels in the cluster. From the sentinels,
Redis clients can find the master, and then the slaves and other
relevant info for the cluster. This enables new members to join the
cluster when failures occur.
Here is the definition of the sentinel service:
`redis-sentinel-service.yaml <redis-sentinel-service.yaml>`__
Create this service:
```sh
kubectl create -f examples/redis/v1beta3/redis-sentinel-service.yaml
```
### Turning up replicated redis servers
So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g. a machine dying) our Redis service goes away with it.
.. code:: sh
In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of it's set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with it's desired state.
kubectl create -f examples/redis/v1beta3/redis-sentinel-service.yaml
Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server.
[redis-controller.yaml](redis-controller.yaml)
Turning up replicated redis servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The bulk of this controller config is actually identical to the redis-master pod definition above. It forms the template or "cookie cutter" that defines what it means to be a member of this set.
So far, what we have done is pretty manual, and not very fault-tolerant.
If the ``redis-master`` pod that we previously created is destroyed for
some reason (e.g. a machine dying) our Redis service goes away with it.
In Kubernetes a *Replication Controller* is responsible for replicating
sets of identical pods. Like a *Service* it has a selector query which
identifies the members of its set. Unlike a *Service* it also has a
desired number of replicas, and it will create or delete *Pods* to
ensure that the number of *Pods* matches up with its desired state.
Replication Controllers will "adopt" existing pods that match their
selector query, so let's create a Replication Controller with a single
replica to adopt our existing Redis server.
`redis-controller.yaml <redis-controller.yaml>`__
The bulk of this controller config is actually identical to the
redis-master pod definition above. It forms the template or "cookie
cutter" that defines what it means to be a member of this set.
Create this controller:
```sh
kubectl create -f examples/redis/v1beta3/redis-controller.yaml
```
.. code:: sh
We'll do the same thing for the sentinel. Here is the controller config:[redis-sentinel-controller.yaml](redis-sentinel-controller.yaml)
kubectl create -f examples/redis/v1beta3/redis-controller.yaml
We'll do the same thing for the sentinel. Here is the controller config:
`redis-sentinel-controller.yaml <redis-sentinel-controller.yaml>`__
We create it as follows:
```sh
kubectl create -f examples/redis/v1beta3/redis-sentinel-controller.yaml
```
### Resize our replicated pods
Initially creating those pods didn't actually do anything, since we only asked for one sentinel and one redis server, and they already existed, nothing changed. Now we will add more replicas:
.. code:: sh
```sh
kubectl resize rc redis --replicas=3
```
kubectl create -f examples/redis/v1beta3/redis-sentinel-controller.yaml
```sh
kubectl resize rc redis-sentinel --replicas=3
```
Resize our replicated pods
~~~~~~~~~~~~~~~~~~~~~~~~~~
This will create two additional replicas of the redis server and two additional replicas of the redis sentinel.
Initially, creating those controllers didn't actually do anything: we
only asked for one sentinel and one redis server, and since they already
existed, nothing changed. Now we will add more replicas:
Unlike our original redis-master pod, these pods exist independently, and they use the ```redis-sentinel-service``` that we defined above to discover and join the cluster.
.. code:: sh
### Delete our manual pod
The final step in the cluster turn up is to delete the original redis-master pod that we created manually. While it was useful for bootstrapping discovery in the cluster, we really don't want the lifespan of our sentinel to be tied to the lifespan of one of our redis servers, and now that we have a successful, replicated redis sentinel service up and running, the binding is unnecessary.
kubectl resize rc redis --replicas=3
.. code:: sh
kubectl resize rc redis-sentinel --replicas=3
This will create two additional replicas of the redis server and two
additional replicas of the redis sentinel.
Unlike our original redis-master pod, these pods exist independently,
and they use the ``redis-sentinel-service`` that we defined above to
discover and join the cluster.
Delete our manual pod
~~~~~~~~~~~~~~~~~~~~~
The final step in the cluster turn up is to delete the original
redis-master pod that we created manually. While it was useful for
bootstrapping discovery in the cluster, we really don't want the
lifespan of our sentinel to be tied to the lifespan of one of our redis
servers, and now that we have a successful, replicated redis sentinel
service up and running, the binding is unnecessary.
Delete the master as follows:
```sh
kubectl delete pods redis-master
```
Now let's take a close look at what happens after this pod is deleted. There are three things that happen:
.. code:: sh
1. The redis replication controller notices that its desired state is 3 replicas, but there are currently only 2 replicas, and so it creates a new redis server to bring the replica count back up to 3
2. The redis-sentinel replication controller likewise notices the missing sentinel, and also creates a new sentinel.
3. The redis sentinels themselves, realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and chose one of the existing redis server replicas to be the new master.
kubectl delete pods redis-master
### Conclusion
At this point we now have a reliable, scalable Redis installation. By resizing the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.
Now let's take a close look at what happens after this pod is deleted.
There are three things that happen:
### tl; dr
For those of you who are impatient, here is the summary of commands we ran in this tutorial
1. The redis replication controller notices that its desired state is 3
replicas, but there are currently only 2 replicas, and so it creates
a new redis server to bring the replica count back up to 3
2. The redis-sentinel replication controller likewise notices the
missing sentinel, and also creates a new sentinel.
3. The redis sentinels themselves realize that the master has
disappeared from the cluster, and begin the election procedure for
selecting a new master. They perform this election and selection, and
choose one of the existing redis server replicas to be the new master.
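You can observe the recovery by listing the pods a few times while the
controllers converge (assuming ``kubectl`` is configured for your
cluster):

.. code:: sh

    kubectl get pods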
```sh
# Create a bootstrap master
kubectl create -f examples/redis/v1beta3/redis-master.yaml
Conclusion
~~~~~~~~~~
# Create a service to track the sentinels
kubectl create -f examples/redis/v1beta3/redis-sentinel-service.yaml
At this point we now have a reliable, scalable Redis installation. By
resizing the replication controller for redis servers, we can increase
or decrease the number of read-slaves in our cluster. Likewise, if
failures occur, the redis-sentinels will perform master election and
select a new master.
# Create a replication controller for redis servers
kubectl create -f examples/redis/v1beta3/redis-controller.yaml
tl; dr
~~~~~~
# Create a replication controller for redis sentinels
kubectl create -f examples/redis/v1beta3/redis-sentinel-controller.yaml
For those of you who are impatient, here is a summary of the commands we
ran in this tutorial:
# Resize both replication controllers
kubectl resize rc redis --replicas=3
kubectl resize rc redis-sentinel --replicas=3
.. code:: sh
# Delete the original master pod
kubectl delete pods redis-master
```
# Create a bootstrap master
kubectl create -f examples/redis/v1beta3/redis-master.yaml
# Create a service to track the sentinels
kubectl create -f examples/redis/v1beta3/redis-sentinel-service.yaml
# Create a replication controller for redis servers
kubectl create -f examples/redis/v1beta3/redis-controller.yaml
# Create a replication controller for redis sentinels
kubectl create -f examples/redis/v1beta3/redis-sentinel-controller.yaml
# Resize both replication controllers
kubectl resize rc redis --replicas=3
kubectl resize rc redis-sentinel --replicas=3
# Delete the original master pod
kubectl delete pods redis-master


@ -1,50 +1,55 @@
A Kolla Cluster with Heat
=========================
These [Heat][] templates will deploy an *N*-node [Kolla][] cluster,
where *N* is the value of the `number_of_nodes` parameter you
specify when creating the stack.
These `Heat <https://wiki.openstack.org/wiki/Heat>`__ templates will
deploy an *N*-node `Kolla <https://launchpad.net/kolla>`__ cluster,
where *N* is the value of the ``number_of_nodes`` parameter you specify
when creating the stack.
Kolla has recently undergone a considerable design change. The details
of the design change is addressed in this [spec][]. As part of the
design change, containers share pid and networking namespaces with
the Docker host. Therefore, containers no longer connect to a docker0
bridge and have separate networking from the host. As a result, Kolla
of the design change are addressed in this
`spec <https://review.openstack.org/#/c/153798/>`__. As part of the
design change, containers share pid and networking namespaces with the
Docker host. Therefore, containers no longer connect to a docker0 bridge
and have separate networking from the host. As a result, Kolla
networking has a configuration similar to:
![Image](https://raw.githubusercontent.com/stackforge/kolla/master/devenv/kollanet.png)
.. figure:: https://raw.githubusercontent.com/stackforge/kolla/master/devenv/kollanet.png
:alt: Image
Sharing pid and networking namespaces is detailed in the
[super privileged containers][] concept.
Image
Sharing pid and networking namespaces is detailed in the `super
privileged
containers <http://sdake.io/2015/01/28/an-atomic-upgrade-process-for-openstack-compute-nodes/>`__
concept.
The Kolla cluster is based on Fedora 21, requires the minimum Docker version of 1.7.0
[binary][].
The Kolla cluster is based on Fedora 21 and requires Docker version
1.7.0 or later
(`binary <https://docs.docker.com/installation/binaries/>`__).
These templates are designed to work with the Icehouse or Juno
versions of Heat. If using Icehouse Heat, this [patch][] is
required to correct a bug with template validation when using the
"Fn::Join" function).
[heat]: https://wiki.openstack.org/wiki/Heat
[kolla]: https://launchpad.net/kolla
[binary]: https://docs.docker.com/installation/binaries/
[copr]: https://copr.fedoraproject.org/
[spec]: https://review.openstack.org/#/c/153798/
[super privileged containers]: http://sdake.io/2015/01/28/an-atomic-upgrade-process-for-openstack-compute-nodes/
[patch]: https://review.openstack.org/#/c/121139/
These templates are designed to work with the Icehouse or Juno versions
of Heat. If using Icehouse Heat, this
`patch <https://review.openstack.org/#/c/121139/>`__ is required to
correct a bug with template validation when using the "Fn::Join"
function.
Create the Glance Image
=======================
After cloning the project, run the get-image.sh script from the project's
devenv directory:
After cloning the project, run the get-image.sh script from the
project's devenv directory:
::
$ ./get-image.sh
The script will create a Fedora 21 image with the required modifications.
The script will create a Fedora 21 image with the required
modifications.
Add the image to your Glance image store:
::
$ glance image-create --name "fedora-21-x86_64" \
--file /var/lib/libvirt/images/fedora-21-x86_64 \
--disk-format qcow2 --container-format bare \
@ -57,6 +62,8 @@ Copy local.yaml.example to local.yaml and edit the contents to match
your deployment environment. Here is an example of a customized
local.yaml:
::
parameters:
ssh_key_name: admin-key
external_network_id: 028d70dd-67b8-4901-8bdd-0c62b06cce2d
@ -64,106 +71,125 @@ local.yaml:
container_external_subnet_id: 575770dd-6828-1101-34dd-0c62b06fjf8s
dns_nameserver: 192.168.200.1
The external_network_id is used by Heat to automatically assign
The external\_network\_id is used by Heat to automatically assign
floating IPs to your Kolla nodes. You can then access your Kolla nodes
directly using the floating IP. The network ID is derived from the
`neutron net-list` command.
``neutron net-list`` command.
The container_external_network_id is used by the nova-network container
within the Kolla node as the FLAT_INTERFACE. The FLAT_INTERFACE tells Nova what
device to use (i.e. eth1) to pass network traffic between Nova instances
across Kolla nodes. This network should be seperate from the external_network_id
above and is derived from the 'neutron net-list' command.
The container\_external\_network\_id is used by the nova-network
container within the Kolla node as the FLAT\_INTERFACE. The
FLAT\_INTERFACE tells Nova what device to use (e.g. eth1) to pass
network traffic between Nova instances across Kolla nodes. This network
should be separate from the external\_network\_id above and is derived
from the 'neutron net-list' command.
The container_external_subnet_id: is the subnet equivalent to
container_external_network_id
The container\_external\_subnet\_id is the subnet equivalent of
container\_external\_network\_id.
Review the parameters section of kollacluster.yaml for a full list of
configuration options. **Note:** You must provide values for:
- `ssh_key_name`
- `external_network_id`
- `container_external_network_id`
- `container_external_subnet_id`
- ``ssh_key_name``
- ``external_network_id``
- ``container_external_network_id``
- ``container_external_subnet_id``
And then create the stack, referencing that environment file:
::
$ heat stack-create -f kollacluster.yaml -e local.yaml kolla-cluster
Access the Kolla Nodes
======================
You can get the ip address of the Kolla nodes using the `heat
output-show` command:
You can get the IP address of the Kolla nodes using the
``heat output-show`` command:
::
$ heat output-show kolla-cluster kolla_node_external_ip
"192.168.200.86"
You can ssh into that server as the `fedora` user:
You can ssh into that server as the ``fedora`` user:
::
$ ssh fedora@192.168.200.86
Once logged into your Kolla node, setup your environment.
The basic starting environment will be created using `docker-compose`.
This environment will start up the openstack services listed in the
compose directory.
Once logged into your Kolla node, set up your environment. The basic
starting environment will be created using ``docker-compose``. This
environment will start up the OpenStack services listed in the compose
directory.
To start, set up your environment variables.
::
$ cd kolla
$ ./tools/genenv
The `genenv` script will create a compose/openstack.env file
and an openrc file in your current directory. The openstack.env
file contains all of your initialized environment variables, which
you can edit for a different setup.
The ``genenv`` script will create a compose/openstack.env file and an
openrc file in your current directory. The openstack.env file contains
all of your initialized environment variables, which you can edit for a
different setup.
Next, run the start script.
::
$ ./tools/kolla-compose start
The `start` script is responsible for starting the containers
using `docker-compose -f <osp-service-container> up -d`.
The ``start`` script is responsible for starting the containers using
``docker-compose -f <osp-service-container> up -d``.
If you want to start a container set by hand, use this template:
::
$ docker-compose -f glance-api-registry.yml up -d
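Once the containers are up, you can load the generated credentials into
your shell and use the regular OpenStack clients (this is the same
openrc file created by the ``genenv`` script above):

::

    $ source openrc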
Debugging
==========
=========
All Docker commands should be run from the directory of the Docker binaray,
by default this is `/`.
All Docker commands should be run from the directory of the Docker
binary; by default this is ``/``.
A few commands for debugging the system.
```
$ sudo ./docker images
```
Lists all images that have been pulled from the upstream kollaglue repository
thus far. This can be run on the node during the `./start` operation to
check on the download progress.
::
```
$ sudo ./docker ps -a
```
This will show all processes that docker has started. Removing the `-a` will
show only active processes. This can be run on the node during the `./start`
operation to check that the containers are orchestrated.
$ sudo ./docker images
```
$ sudo ./docker logs <containerid>
```
```
$ curl http://<NODE_IP>:3306
```
You can use curl to test connectivity to a container. This example demonstrates
the Mariadb service is running on the node. Output should appear as follows
Lists all images that have been pulled from the upstream kollaglue
repository thus far. This can be run on the node during the ``./start``
operation to check on the download progress.
```
$ curl http://10.0.0.4:3306
Trying 10.0.0.4...
Connected to 10.0.0.4.
Escape character is '^]'.
::
$ sudo ./docker ps -a
This will show all processes that docker has started. Removing the
``-a`` will show only active processes. This can be run on the node
during the ``./start`` operation to check that the containers are
orchestrated.
::
$ sudo ./docker logs <containerid>
::
$ curl http://<NODE_IP>:3306
You can use curl to test connectivity to a container. This example
demonstrates that the Mariadb service is running on the node. Output
should appear as follows:
::
$ curl http://10.0.0.4:3306
Trying 10.0.0.4...
Connected to 10.0.0.4.
Escape character is '^]'.
```


@ -1,95 +1,111 @@
Kolla with Ansible!
============================
Kolla supports deploying Openstack using [Ansible][].
[Ansible]: https://docs.ansible.com
===================
Kolla supports deploying OpenStack using
`Ansible <https://docs.ansible.com>`__.
Getting Started
---------------
To run the Ansible playbooks, an inventory file which tracks all of the
available nodes in the environment must be speficied. With this inventory file
Ansible will log into each node via ssh (configurable) and run tasks. Ansible
does not require password-less logins via ssh, however it is highly recommended
to setup ssh-keys.
available nodes in the environment must be specified. With this
inventory file Ansible will log into each node via ssh (configurable)
and run tasks. Ansible does not require password-less logins via ssh,
however it is highly recommended to set up ssh keys.
Two sample inventory files are provided, *all-in-one*, and *multinode*. The
"all-in-one" inventory defaults to use the Ansible "local" connection type,
which removes the need to setup ssh keys in order to get started quickly.
Two sample inventory files are provided, *all-in-one* and *multinode*.
The "all-in-one" inventory defaults to using the Ansible "local"
connection type, which removes the need to set up ssh keys in order to
get started quickly.
More information on the Ansible inventory file can be found [here][].
[here]: https://docs.ansible.com/intro_inventory.html
More information on the Ansible inventory file can be found in the Ansible
`inventory introduction <https://docs.ansible.com/intro_inventory.html>`__.
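Before deploying, a quick way to confirm that Ansible can reach every
host in your chosen inventory is an ad-hoc ping (a sketch; adjust the
inventory path to match your checkout):

::

    ansible -i ansible/inventory/all-in-one all -m ping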
Prerequisites
-------------
On the deployment host you must have Ansible>=1.8.4 installed. That is the only
requirement for deploying. To build the images locally you must also have the
Python library docker-py>=1.2.0 installed.
On the deployment host you must have Ansible>=1.8.4 installed. That is
the only requirement for deploying. To build the images locally you must
also have the Python library docker-py>=1.2.0 installed.
On the target nodes you must have docker>=1.6.0 and docker-py>=1.2.0 installed.
On the target nodes you must have docker>=1.6.0 and docker-py>=1.2.0
installed.
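On a pip-based deployment host, one way to satisfy these version
requirements is shown below (a sketch; your distribution may package
Ansible and docker-py differently):

::

    sudo pip install "ansible>=1.8.4" "docker-py>=1.2.0"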
Deploying
---------
Add the etc/kolla directory to /etc/kolla on the deployment host. Inside of
this directory are two files and a minimum number of parameters which are
listed below.
Add the etc/kolla directory to /etc/kolla on the deployment host. Inside
of this directory are two files and the minimum set of parameters, which
are listed below.
All variables for the environment can be specified in the files:
"/etc/kolla/globals.yml" and "/etc/kolla/passwords.yml"
The kolla_*_address variables can both be the same. Please specify an unused IP
address in your network to act as a VIP for kolla_internal_address. The VIP will
be used with keepalived and added to your "api_interface" as specified in the
globals.yml
The kolla\_\*\_address variables can both be the same. Please specify
an unused IP address in your network to act as a VIP for
kolla\_internal\_address. The VIP will be used with keepalived and
added to your "api\_interface" as specified in globals.yml.
::
kolla_external_address: "openstack.example.com"
kolla_internal_address: "10.10.10.254"
The "network_interface" variable is the interface that we bind all our services
to. For example, when starting up Mariadb it will bind to the IP on the
interface list in the "network_interface" variable.
The "network\_interface" variable is the interface that we bind all our
services to. For example, when starting up Mariadb it will bind to the
IP on the interface list in the "network\_interface" variable.
::
network_interface: "eth0"
The "neutron_external_interface" variable is the interface that will be used for
your external bridge in Neutron. Without this bridge your instance traffic will
be unable to access the rest of the Internet. In the case of a single interface
on a machine, you may use a veth pair where one end of the veth pair is listed
here and the other end is in a bridge on your system.
The "neutron\_external\_interface" variable is the interface that will
be used for your external bridge in Neutron. Without this bridge your
instance traffic will be unable to access the rest of the Internet. In
the case of a single interface on a machine, you may use a veth pair
where one end of the veth pair is listed here and the other end is in a
bridge on your system.
::
neutron_external_interface: "eth1"
The docker_pull_policy specifies whether Docker should always pull images from
the repository it is configured for, or only in the case where the image isn't
present locally. If you are building your own images locally without pushing
them to the Docker Registry, or a local registry, you must set this value to
"missing" or when you run the playbooks docker will attempt to fetch the latest
image upstream.
The docker\_pull\_policy specifies whether Docker should always pull
images from the repository it is configured for, or only in the case
where the image isn't present locally. If you are building your own
images locally without pushing them to the Docker Registry, or a local
registry, you must set this value to "missing"; otherwise, when you run
the playbooks, Docker will attempt to fetch the latest image upstream.
::
docker_pull_policy: "always"
For All-In-One deploys, the following commands can be run. These will setup all
of the containers on the localhost. These commands will be wrapped in the
kolla-script in the future.
For All-In-One deploys, the following commands can be run. These will
set up all of the containers on the localhost. These commands will be
wrapped in the kolla-script in the future.
::
cd ./kolla/ansible
ansible-playbook -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml site.yml
To run the playbooks for only a particular service, Ansible tags can be used.
Multiple tags may be specified, and order is still determined by the playbooks.
To run the playbooks for only a particular service, Ansible tags can be
used. Multiple tags may be specified, and order is still determined by
the playbooks.
::
cd ./kolla/ansible
ansible-playbook -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml site.yml --tags rabbitmq
ansible-playbook -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml site.yml --tags rabbitmq,mariadb
Finally, you can view ./kolla/tools/openrc-example for an example of an openrc
you can use with your environment. If you wish you may also run the following
command to initiate your environment with an glance image and neutron networks.
Finally, you can view ./kolla/tools/openrc-example for an example of an
openrc you can use with your environment. If you wish you may also run
the following command to initiate your environment with a Glance image
and Neutron networks.
::
cd ./kolla/tools
./init-runonce
@ -97,6 +113,5 @@ command to initiate your environment with an glance image and neutron networks.
Further Reading
---------------
Ansible playbook documentation can be found [here][].
[here]: http://docs.ansible.com/playbooks.html
Ansible playbook documentation can be found in the Ansible
`playbook documentation <http://docs.ansible.com/playbooks.html>`__.


@ -1,76 +1,92 @@
# Developer Environment
Developer Environment
=====================
If you are developing Kolla on an existing OpenStack cloud that supports
Heat, then follow the Heat template [README][]. Another option available
on systems with VirutalBox is the use of [Vagrant][].
Heat, then follow the Heat template
`README <https://github.com/stackforge/kolla/blob/master/devenv/README.md>`__.
Another option available on systems with VirtualBox is the use of
`Vagrant <https://github.com/stackforge/kolla/blob/master/docs/vagrant.md>`__.
The best experience is available with bare metal deployment by following
the instructions below to manually create your Kolla deployment.
[README]: https://github.com/stackforge/kolla/blob/master/devenv/README.md
[Vagrant]: https://github.com/stackforge/kolla/blob/master/docs/vagrant.md
Installing Dependencies
-----------------------
## Installing Dependencies
NB: Kolla will not run on Fedora 22 or later. Fedora 22 compresses kernel
modules with the .xz compressed format. The guestfs system cannot read
these images because a dependent package supermin in CentOS needs to be
updated to add .xz compressed format support.
NB: Kolla will not run on Fedora 22 or later. Fedora 22 compresses
kernel modules with the .xz compressed format. The guestfs system cannot
read these images because a dependent package, supermin, in CentOS needs
to be updated to add .xz compressed format support.
To install Kolla dependencies, use:
::
git clone http://github.com/stackforge/kolla
cd kolla
sudo pip install -r requirements.txt
In order to run Kolla, it is mandatory to run a version of `docker` that is
1.7.0 or later.
In order to run Kolla, it is mandatory to run a version of ``docker``
that is 1.7.0 or later.
For most systems you can install the latest stable version of Docker with the
following command:
For most systems you can install the latest stable version of Docker
with the following command:
::
curl -sSL https://get.docker.io | bash
For Ubuntu based systems, do not use AUFS when starting Docker daemon unless
you are running the Utopic (3.19) kernel. AUFS requires CONFIG_AUFS_XATTR=y
set when building the kernel. On Ubuntu, versions prior to 3.19 did not set that
flag. If you are unable to upgrade your kernel, you should use a different
storage backend such as btrfs.
For Ubuntu based systems, do not use AUFS when starting Docker daemon
unless you are running the Utopic (3.19) kernel. AUFS requires
CONFIG\_AUFS\_XATTR=y set when building the kernel. On Ubuntu, versions
prior to 3.19 did not set that flag. If you are unable to upgrade your
kernel, you should use a different storage backend such as btrfs.
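As a sketch of how a different storage backend can be selected on an
upstart-based Ubuntu system (systemd-based installs configure the daemon
differently), you could set the storage driver in /etc/default/docker
and then restart the docker service:

::

    # /etc/default/docker (assumption: upstart-based Ubuntu packaging)
    DOCKER_OPTS="--storage-driver=btrfs"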
Next, install the OpenStack python clients if they are not installed:
::
sudo pip install -U python-openstackclient
Finally stop libvirt on the host machine. Only one copy of libvirt may be
running at a time.
Finally stop libvirt on the host machine. Only one copy of libvirt may
be running at a time.
::
service libvirtd stop
The basic starting environment will be created using `ansible`.
This environment will start up the OpenStack services listed in the
inventory file.
The basic starting environment will be created using ``ansible``. This
environment will start up the OpenStack services listed in the inventory
file.
## Starting Kolla
Starting Kolla
--------------
Configure Ansible by reading the Kolla Ansible configuration documentation
[DEPLOY][].
[DEPLOY]: https://github.com/stackforge/kolla/blob/master/docs/ansible-deployment.md
Configure Ansible by reading the Kolla
`Ansible configuration <https://github.com/stackforge/kolla/blob/master/docs/ansible-deployment.md>`__ documentation.
Next, run the start command:
::
$ sudo ./tools/kolla-ansible deploy
A bare metal system takes three minutes to deploy AIO. A virtual machine
takes five minutes to deploy AIO. These are estimates; your hardware may
A bare metal system takes three minutes to deploy AIO. A virtual machine
takes five minutes to deploy AIO. These are estimates; your hardware may
be faster or slower but should be near these results.
## Debugging Kolla
Debugging Kolla
---------------
You can determine a container's status by executing:
::
$ sudo docker ps -a
If any of the containers exited you can check the logs by executing:
::
$ sudo docker logs <container-name>


@ -1,77 +1,101 @@
# Image building
Image building
==============
The `tools/build-docker-image` script in this repository is
responsible for building docker images. It is symlinked as `./build`
The ``tools/build-docker-image`` script in this repository is
responsible for building docker images. It is symlinked as ``./build``
inside each Docker image directory.
When creating new image directories, you can run the
`tools/update-build-links` scripts to install the `build` symlink
``tools/update-build-links`` script to install the ``build`` symlink
(this script will install the symlink anywhere it finds a file named
`Dockerfile`).
``Dockerfile``).
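For example, after adding a new image directory you would typically
re-run the script from the top of the repository:

::

    $ tools/update-build-links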
## Workflow
Workflow
--------
In general, you will build images like this:
::
$ cd docker/keystone
$ ./build
By default, the above command would build
`kollaglue/centos-rdo-keystone:CID`, where `CID` is the current short
commit ID. That is, given:
``kollaglue/centos-rdo-keystone:CID``, where ``CID`` is the current
short commit ID. That is, given:
::
$ git rev-parse HEAD
76a16029006a2f5d3b79f1198d81acb6653110e9
The above command would generate
`kollaglue/centos-rdo-keystone:76a1602`. This tagging is meant to
``kollaglue/centos-rdo-keystone:76a1602``. This tagging is meant to
prevent developers from stepping on each other or on release images
during the development process.
To push the image after building, add `--push`:
To push the image after building, add ``--push``:
::
$ ./build --push
To use these images, you must specify the tag in your `docker run`
To use these images, you must specify the tag in your ``docker run``
commands:
::
$ docker run kollaglue/centos-rdo-keystone:76a1602
## Building releases
Building releases
-----------------
To build into the `latest` tag, add `--release`:
To build into the ``latest`` tag, add ``--release``:
::
$ ./build --release
Or to build and push:
::
$ ./build --push --release
## Build all images at once
Build all images at once
------------------------
The `build-all-docker-images` script in the tools directory is a wrapper for
the `build-docker-image` that builds all images, as the name suggests, in the
correct order. It responds to the same options as `build-docker-image` with the
additional `--from` and `--to` options that allows building only images that
have changed between the specified git revisions.
The ``build-all-docker-images`` script in the tools directory is a
wrapper for ``build-docker-image`` that builds all images, as the
name suggests, in the correct order. It responds to the same options as
``build-docker-image``, with the additional ``--from`` and ``--to``
options, which allow building only images that have changed between the
specified git revisions.
For example, to build all images contained in docker directory and push new release:
For example, to build all images contained in the docker directory and
push a new release:
::
$ tools/build-all-docker-images --release --push
To build only images modified in test-branch along with their children:
::
$ tools/build-all-docker-images --from master --to test-branch
## Configuration
Configuration
-------------
The `build-docker-image` script will look for a file named `.buildconf`
in the image directory and in the top level of the repository. You
can use this to set defaults, such as:
The ``build-docker-image`` script will look for a file named
``.buildconf`` in the image directory and in the top level of the
repository. You can use this to set defaults, such as:
::
NAMESPACE=larsks
PREFIX=fedora-rdo-
This setting would cause images to be tagged into the `larsks/`
This setting would cause images to be tagged into the ``larsks/``
namespace and use Fedora as the base image instead of the default CentOS.
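With the example ``.buildconf`` above, a keystone build would produce a
tag along these lines (an illustration based on the naming convention
described earlier, not output from an actual run):

::

    $ cd docker/keystone
    $ ./build
    # results in an image tagged larsks/fedora-rdo-keystone:<short-commit-id>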


@ -1,81 +1,96 @@
Vagrant up!
============================
===========
This guide describes how to use [Vagrant][] to assist in developing for Kolla.
Vagrant is a tool to assist in scripted creation of virtual machines, it will
take care of setting up a CentOS-based cluster of virtual machines, each with
proper hardware like memory amount and number of network interfaces.
[Vagrant]: http://vagrantup.com
This guide describes how to use `Vagrant <http://vagrantup.com>`__ to
assist in developing for Kolla.
Vagrant is a tool that assists in the scripted creation of virtual
machines. It will take care of setting up a CentOS-based cluster of
virtual machines, each with proper hardware such as memory amount and
number of network interfaces.
Getting Started
---------------
The vagrant setup will build a cluster with the following nodes:
- 3 control nodes
- 1 compute node
- 1 operator node
- 3 control nodes
- 1 compute node
- 1 operator node
Kolla runs from the operator node to deploy OpenStack on the other nodes.
Kolla runs from the operator node to deploy OpenStack on the other
nodes.
All nodes are connected with each other on the secondary nic, the primary nic
is behind a NAT interface for connecting with the internet. A third nic is
connected without IP configuration to a public bridge interface. This may be
used for Neutron/Nova to connect to instances.
All nodes are connected with each other on the secondary nic; the
primary nic is behind a NAT interface for connecting with the internet.
A third nic is connected without IP configuration to a public bridge
interface. This may be used for Neutron/Nova to connect to instances.
Start with downloading and installing the Vagrant package for your distro of
choice. Various downloads can be found [here][]. After we will install the
hostmanager plugin so all hosts are recorded in /etc/hosts (inside each vm):
Start by downloading and installing the Vagrant package for your
distro of choice. Various downloads can be found
on the `Vagrant downloads <https://www.vagrantup.com/downloads.html>`__ page.
Afterwards, install the hostmanager plugin so that all hosts are recorded
in /etc/hosts (inside each vm):
::
vagrant plugin install vagrant-hostmanager
Vagrant supports a wide range of virtualization technologies, of which we will
use VirtualBox for now.
Vagrant supports a wide range of virtualization technologies, of which
we will use VirtualBox for now.
Find some place in your homedir and checkout the Kolla repo
Find some place in your homedir and check out the Kolla repo:
::
git clone https://github.com/stackforge/kolla.git ~/dev/kolla
You can now tweak the Vagrantfile or start a CentOS7-based cluster right away:
You can now tweak the Vagrantfile or start a CentOS7-based cluster right
away:
::
cd ~/dev/kolla/vagrant && vagrant up
The command `vagrant up` will build your cluster, `vagrant status` will give
you a quick overview once done.
[here]: https://www.vagrantup.com/downloads.html
The command ``vagrant up`` will build your cluster, ``vagrant status``
will give you a quick overview once done.
Vagrant Up
---------
----------
Once vagrant has completed deploying all nodes, we can focus on launching Kolla.
First, connect with the _operator_ node:
Once vagrant has completed deploying all nodes, we can focus on
launching Kolla. First, connect with the *operator* node:
::
vagrant ssh operator
Once connected you can run a simple Ansible-style ping to verify if the cluster is operable:
Once connected you can run a simple Ansible-style ping to verify that
the cluster is operable:
::
ansible -i kolla/ansible/inventory/multinode all -m ping -e ansible_ssh_user=root
Congratulations, your cluster is usable and you can start deploying OpenStack using Ansible!
Congratulations, your cluster is usable and you can start deploying
OpenStack using Ansible!
To speed things up, there is a local registry running on the operator. All nodes are configured
so they can use this insecure repo to pull from, and they will use it as mirror. Ansible may
use this registry to pull images from.
To speed things up, there is a local registry running on the operator.
All nodes are configured so they can use this insecure repo to pull
from, and they will use it as a mirror. Ansible may use this registry to
pull images from.
All nodes have a local folder shared between the group and the hypervisor, and a folder shared
between _all_ nodes and the hypervisor. This mapping is lost after reboots, so make sure you use
the command `vagrant reload <node>` when reboots are required. Having this shared folder you
have a method to supply a different docker binary to the cluster. The shared folder is also
used to store the docker-registry files, so they are save from destructive operations like
`vagrant destroy`.
All nodes have a local folder shared between the group and the
hypervisor, and a folder shared between *all* nodes and the hypervisor.
This mapping is lost after reboots, so make sure you use the command
``vagrant reload <node>`` when reboots are required. This shared
folder gives you a method to supply a different docker binary to the
cluster. The shared folder is also used to store the docker-registry
files, so they are safe from destructive operations like
``vagrant destroy``.
Further Reading
---------------
All Vagrant documentation can be found on their [website][].
[website]: http://docs.vagrantup.com
All Vagrant documentation can be found at
`docs.vagrantup.com <http://docs.vagrantup.com>`__.