Deployment of containerised OpenStack to bare metal using kolla and bifrost
Scott Solkhon 6496cfc0ba Support for Ceph and Swift storage networks, and improvements to Swift
In a deployment that has either Ceph or Swift deployed, it can be useful to separate the storage network traffic.
This change adds support for dedicated storage networks for both Ceph and Swift. By default, the storage hosts are
attached to the following networks:

* Overcloud admin network
* Internal network
* Storage network
* Storage management network

This change adds four additional networks, which can be used to separate the storage network traffic as follows:

* Ceph storage network (ceph_storage_net_name) is used to carry Ceph storage
  data traffic. Defaults to the storage network (storage_net_name).
* Ceph storage management network (ceph_storage_mgmt_net_name) is used to carry
  storage management traffic. Defaults to the storage management network
  (storage_mgmt_net_name).
* Swift storage network (swift_storage_net_name) is used to carry Swift storage data
  traffic. Defaults to the storage network (storage_net_name).
* Swift storage replication network (swift_storage_replication_net_name) is used to
  carry storage replication traffic. Defaults to the storage management network
  (storage_mgmt_net_name).
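As a sketch, the dedicated networks could be named in Kayobe's network configuration. The variable names below are those listed above; the file path, network names, CIDR, and VLAN values are illustrative assumptions, not values from this change:

```yaml
# etc/kayobe/networks.yml (illustrative path and example values)

# Name the dedicated storage networks. If these variables are unset, they
# fall back to storage_net_name / storage_mgmt_net_name respectively.
ceph_storage_net_name: ceph_storage
ceph_storage_mgmt_net_name: ceph_storage_mgmt
swift_storage_net_name: swift_storage
swift_storage_replication_net_name: swift_storage_replication

# Example attributes for one of the networks, following Kayobe's
# <network>_<attribute> convention. CIDR and VLAN are made up.
ceph_storage_cidr: 10.0.10.0/24
ceph_storage_vlan: 110
```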

This change also includes several improvements to Swift device management and ring generation.

The device management and ring generation are now separate, with device management occurring during
'kayobe overcloud host configure', and ring generation during a new command, 'kayobe overcloud swift rings generate'.
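With this split, an operator runs the two stages separately. The commands are the ones named above:

```console
# Prepare and label the Swift block devices on the storage hosts.
$ kayobe overcloud host configure

# Generate the Swift rings on a single host (new command in this change).
$ kayobe overcloud swift rings generate
```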

For the device management, we now use standard Ansible modules rather than commands for device preparation.
File system labels can be configured for each device individually.
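Per-device configuration might look something like the following sketch. The swift_block_devices variable and its fields are assumptions based on the description above, not confirmed syntax from the Kayobe reference documentation:

```yaml
# Illustrative sketch only: the variable and field names below are assumed,
# not confirmed against the Kayobe documentation.
swift_block_devices:
  - device: /dev/sdb        # block device to prepare for Swift
    fs_label: swift_d0      # per-device file system label
  - device: /dev/sdc
    fs_label: swift_d1
```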

For ring generation, all commands are run on a single host, by default a host in the Swift storage group.
A Python script runs in one of the kolla Swift containers, consuming an autogenerated YAML configuration file that defines
the layout of the rings.

Change-Id: Iedc7535532d706f02d710de69b422abf2f6fe54c
2019-04-24 12:40:20 +00:00

Kayobe

Kayobe enables deployment of containerised OpenStack to bare metal.

Containers offer a compelling solution for isolating OpenStack services, but running the control plane on an orchestrator such as Kubernetes or Docker Swarm adds significant complexity and operational overheads.

The hosts in an OpenStack control plane must somehow be provisioned, but deploying a secondary OpenStack cloud to do this seems like overkill.

Kayobe stands on the shoulders of giants:

  • OpenStack bifrost discovers and provisions the cloud
  • OpenStack kolla builds container images for OpenStack services
  • OpenStack kolla-ansible delivers painless deployment and upgrade of containerised OpenStack services

To this solid base, kayobe adds:

  • Configuration of cloud host OS & flexible networking
  • Management of physical network devices
  • A friendly openstack-like CLI

All this and more, automated from top to bottom using Ansible.

Features

  • Heavily automated using Ansible
  • kayobe Command Line Interface (CLI) for cloud operators
  • Deployment of a seed VM used to manage the OpenStack control plane
  • Configuration of physical network infrastructure
  • Discovery, introspection and provisioning of control plane hardware using OpenStack bifrost
  • Deployment of an OpenStack control plane using OpenStack kolla-ansible
  • Discovery, introspection and provisioning of bare metal compute hosts using OpenStack ironic and ironic inspector
  • Virtualised compute using OpenStack nova
  • Containerised workloads on bare metal using OpenStack magnum
  • Big data on bare metal using OpenStack sahara

In the near future we aim to add support for the following: