Kolla-Kubernetes multi-node persistence for MariaDB

This patch uses the Mariadb service to show how we may support the
configuration and use of kubernetes persistent volumes across storage
providers, including Ceph, AWS, GCE, and host development mounts on
the all-in-one Hyperkube.

Problem Statement: Currently, bootstrapping and running kolla
containers that require persistence fail on multi-node kubernetes
clusters, because a bootstrap job using a hostPath mount may run on a
different node than the one on which the service pod is later
scheduled.  Thus, in order to support multi-node deployments of
kolla-kubernetes, we must support persistent volumes that replication
controllers can bind to, so that storage follows pods as they move
from one host to another.

See ./doc/source/multi-node.rst for the multi-node deployment guide.

Change-Id: I32968cd4faed5413066af28c25ca32b6e79743ac
David C Wang 2016-06-28 20:08:52 +00:00
parent b597c89d4d
commit d85e5054ea
10 changed files with 388 additions and 4 deletions

bootstrap/mariadb/mariadb-disk.sh.j2 (new file)

@@ -0,0 +1,39 @@
{%- set resourceName = kolla_kubernetes.cli.args.service_name %}
{%- set size = '10' %}
{%- if storage_provider == "host" %}
{# Host storage provider uses storage on the local filesystem #}
{%- if kolla_kubernetes.cli.args.action == "create" %}
sudo mkdir -p /var/lib/kolla/volumes/{{ resourceName }}
{%- elif kolla_kubernetes.cli.args.action == "delete" %}
sudo rm -rf /var/lib/kolla/volumes/{{ resourceName }}
{%- else %}
{{ raise('Unknown action') }}
{%- endif %}
{%- elif storage_provider == "ceph" %}
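{# Ceph storage provider creates or deletes an RBD image via ssh to the first monitor #}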
{%- if kolla_kubernetes.cli.args.action == "create" %}
ssh {{ storage_ceph.ssh_user -}} @ {{- storage_ceph.monitors[0] }} rbd create {{ storage_ceph.pool -}}/{{- resourceName }} --size "{{ size }}G" --image-feature layering
{%- elif kolla_kubernetes.cli.args.action == "delete" %}
ssh {{ storage_ceph.ssh_user -}} @ {{- storage_ceph.monitors[0] }} rbd rm {{ storage_ceph.pool -}}/{{- resourceName }}
{%- else %}
{{ raise('Unknown action') }}
{%- endif %}
{%- elif storage_provider == "gce" %}
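{# GCE storage provider creates or deletes a compute disk via the gcloud CLI #}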
{%- set type = 'pd-standard' %}
{%- if kolla_kubernetes.cli.args.action == "create" %}
gcloud compute disks create "{{ resourceName }}" --size "{{ size }}" --type "{{ type }}"
{%- elif kolla_kubernetes.cli.args.action == "delete" %}
gcloud compute disks delete "{{ resourceName }}" -q
{%- else %}
{{ raise('Unknown action') }}
{%- endif %}
{%- elif storage_provider == "aws" %}
echo "# NO-OP for AWS, which supports Experimental Persistent Volume Provisioning"
echo "# https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/experimental/persistent-volume-provisioning/README.md"
{%- else %}
{{ raise('Unknown storage_provider: check kolla-kubernetes.yml:storage_provider') }}
{%- endif %}
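For illustration, with the sample Ceph settings from
kolla-kubernetes.yml below (ssh_user root, pool rbd, first monitor
x.x.x.x), the ceph create branch of this template renders to a single
remote command, e.g. for the mariadb service:

    ssh root@x.x.x.x rbd create rbd/mariadb --size "10G" --image-feature layering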

@@ -1,3 +1,4 @@
{%- set resourceName = kolla_kubernetes.cli.args.service_name %}
apiVersion: batch/v1
kind: Job
metadata:
@@ -32,7 +33,7 @@ spec:
        configMap:
          name: mariadb-configmap
      - name: mariadb-persistent-storage
        hostPath:
          path: /var/lib/mysql
        persistentVolumeClaim:
          claimName: {{ resourceName }}
      - name: kolla-logs
        emptyDir: {}

bootstrap/mariadb/mariadb-pv.yml.j2 (new file)

@@ -0,0 +1,45 @@
{%- set resourceName = kolla_kubernetes.cli.args.service_name %}
{%- set size = '10Gi' %}
{%- if storage_provider in ["host", "ceph", "gce"] -%}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ resourceName }}
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: {{ size }}
{%- if storage_provider == "host" %}
  hostPath:
    path: /var/lib/kolla/volumes/{{ resourceName }}
{%- elif storage_provider == "gce" %}
  gcePersistentDisk:
    pdName: {{ resourceName }}
    fsType: ext4
{%- elif storage_provider == "ceph" %}
  rbd:
    monitors:
{%- for k in storage_ceph.monitors %}
      - "{{ k }}:6789"
{%- endfor %}
    pool: {{ storage_ceph.pool }}
    image: {{ resourceName }}
    user: {{ storage_ceph.user }}
    keyring: {{ storage_ceph.keyring }}
    secretRef:
      name: {{ storage_ceph.secretName }}
    fsType: ext4
    readOnly: false
{%- endif %}
{%- elif storage_provider == "aws" %}
# NO-OP for AWS, which supports Experimental Persistent Volume Provisioning
# https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/experimental/persistent-volume-provisioning/README.md
{%- else %}
{{ raise('Unknown storage_provider: check kolla-kubernetes.yml:storage_provider') }}
{%- endif %}

bootstrap/mariadb/mariadb-pvc.yml.j2 (new file)

@@ -0,0 +1,17 @@
{%- set resourceName = kolla_kubernetes.cli.args.service_name %}
{%- set size = '10Gi' %}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ resourceName }}
{%- if storage_provider in ["aws"] %}
  annotations:
    volume.alpha.kubernetes.io/storage-class: experimental_can_be_anything_in_kubernetes_1_2
{%- endif %}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ size }}
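After the rendered PV and PVC are created, kubernetes binds the claim
to the volume; a quick check (a sketch, run against your own cluster):

    kubectl get pv,pvc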

@@ -15,6 +15,7 @@ Contents:
   installation
   kubernetes-all-in-one
   quickstart
   multi-node
   readme
   usage
   labels

doc/source/multi-node.rst (new file)

@@ -0,0 +1,253 @@
.. _multi-node:

=================================
Kolla Kubernetes Multi-Node Guide
=================================

This guide documents how to deploy kolla-kubernetes within a
multi-node Kubernetes cluster.  It will guide you through all of the
dependencies required to deploy Horizon, the OpenStack Admin Web
Interface.  It works for kubernetes clusters supporting various
storage providers including GCE compute disks, AWS EBS, Ceph RBD, and
even local host mounts if you are developing multi-node on an
all-in-one system.

This is an advanced guide.  Before attempting to deploy on a
multi-node cluster, please follow the :doc:`quickstart` and ensure
that you have successfully deployed kolla-kubernetes on a single host
using the :doc:`kubernetes-all-in-one`.  This multi-node guide relies
on quite a few system dependencies that are addressed by the
:doc:`quickstart`.

Following this guide will result in a minimal kolla-kubernetes
multi-node deployment consisting of:

- 1 mariadb instance
- 1 memcached instance
- 3 keystone instances
- 3 horizon instances

The end result will be a working Horizon admin interface and its
dependencies, deployed with all of the self-healing and auto-wiring
benefits that a kubernetes cluster has to offer.  You should be able
to destroy kubernetes nodes at will, and the system should self-heal
and maintain state as pods migrate from destroyed nodes to new nodes.
You may also destroy *all* kubernetes nodes, then bring some back, and
the system should again self-heal.  Because we are using network
volumes, mariadb state is maintained since its network volume follows
the pod as it is rescheduled from one node to the next, as the sketch
below demonstrates.
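
A minimal drill to exercise this behavior by hand; the node name is a
placeholder for one of your own nodes:

::

    # Simulate a node failure; <node-name> is any node from "kubectl get nodes"
    kubectl delete node <node-name>

    # Watch pods get rescheduled onto surviving nodes; the mariadb pod
    # should re-attach its network volume on the new node
    kubectl get pods -o wide -w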
Prerequisites
=============

Follow the :doc:`quickstart`, configure your system, and do a
"Development Install" of kolla and kolla-kubernetes.  This is
absolutely required.

Configure Kolla
===============

For multi-node deployments, a docker registry is required, since the
kubernetes nodes will not be able to find the kolla images that your
development machine has built.  Thus, we must configure kolla to name
the images correctly, so that we may easily push the images to the
right docker registry.

Add your docker registry settings to the kolla configuration file
``./etc/kolla/globals.yml``.

::

    # Edit kolla config ./etc/kolla/globals.yml
    docker_registry: "<registry_url>"         # e.g. "gcr.io"
    docker_namespace: "<registry_namespace>"  # e.g. "annular-reef-123"
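
Together with the image name and tag, these settings form the full
image reference that will be built and pushed; for example, with the
illustrative values above:

::

    # <registry_url>/<registry_namespace>/<image>:<tag>
    gcr.io/annular-reef-123/centos-binary-mariadb:3.0.0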

Generate the kolla configurations, build the kolla images, and push
the kolla images to your docker registry.

::

    # Generate the kolla configurations
    pushd kolla
    sudo ./tools/generate_passwords.py  # (Optional: will overwrite)
    sudo ./tools/kolla-ansible genconfig
    popd

Build Kolla Images and Push to Docker Registry
==============================================

::

    # Set env variables to make subsequent commands cut-and-pasteable
    export DOCKER_REGISTRY="<registry_url>"
    export DOCKER_NAMESPACE="<registry_namespace>"
    export DOCKER_TAG="3.0.0"
    export KOLLA_CONTAINERS="mariadb memcached kolla-toolbox keystone horizon"

    # Build the kolla containers
    kolla-build $KOLLA_CONTAINERS --registry $DOCKER_REGISTRY --namespace $DOCKER_NAMESPACE

    # Authenticate with your docker registry
    # This may not be necessary if you are using a cloud provider
    docker login

    # Push the newly-built kolla containers to your docker registry
    # For GKE, change the command below to be "gcloud docker push"
    for i in $KOLLA_CONTAINERS; do
        docker push "$DOCKER_REGISTRY/$DOCKER_NAMESPACE/centos-binary-$i:$DOCKER_TAG"
    done
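
Once the pushes finish, a quick sanity check (a sketch, assuming the
environment variables set above) shows the registry-qualified images:

::

    # List the locally-built kolla images with their registry-qualified names
    docker images | grep "$DOCKER_NAMESPACE"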

Configure Kolla-Kubernetes
==========================

Modify the kolla-kubernetes configuration file
``./etc/kolla-kubernetes/kolla-kubernetes.yml`` to set the number of
instance replicas.  In addition, set the storage_provider settings to
match your environment.

::

    # Edit kolla-kubernetes config ./etc/kolla-kubernetes/kolla-kubernetes.yml

    ########################
    # Kubernetes Cluster
    ########################
    keystone_replicas: "3"
    horizon_replicas: "3"

    ########################
    # Persistent Storage
    ########################
    storage_provider: "host"  # host, ceph, gce, aws
    storage_ceph:
      keyring: /etc/ceph/ceph.client.admin.keyring
      monitors:
        - x.x.x.x
        - y.y.y.y
      pool: rbd
      secretName: pkt-ceph-secret
      ssh_user: root
      user: admin
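
For the ceph provider, the mariadb PersistentVolume also references a
kubernetes secret named by ``secretName``.  A minimal sketch for
creating that secret, assuming the admin keyring is readable on the
first monitor and using the sample values above:

::

    # Fetch the ceph admin key and store it as the rbd secret that the
    # PersistentVolume's secretRef will look up
    CEPH_KEY=$(ssh root@x.x.x.x ceph auth get-key client.admin | base64)
    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: pkt-ceph-secret
    type: "kubernetes.io/rbd"
    data:
      key: $CEPH_KEY
    EOF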

Known Issues
============

#1. On GCE, the mariadb pod is unable to mount the network drive that
was previously mounted by the mariadb-bootstrap job until the
mariadb-bootstrap job is deleted.  The same should also occur for AWS
and Ceph.

#2. When running Kubernetes version < 1.3, Ceph RBD volumes will
auto-detach when Kubernetes nodes disappear, causing problems when a
pod migrates to a new node and cannot mount the required volume.
Details are found in this `kubernetes pull
request <https://github.com/kubernetes/kubernetes/pull/26351>`_.

Create all Kolla-Kubernetes Resources
=====================================

Execute the following commands to create the kolla-kubernetes
multi-node cluster.  There are two unique perspectives: that of an
operator, and that of a workflow engine.  The workflow engine drives
the same CLI subcommands that are accessible to operators.

However, since the workflow engine does not yet exist, the shortcut
workflow commands as defined in the quickstart are still supported.

All of the commands below are cut-and-pasteable.

Operator Create Resources
-------------------------

::

    kolla-kubernetes bootstrap mariadb
    sleep 30  # wait for mariadb bootstrap to finish
    kolla-kubernetes resource delete bootstrap mariadb  # workaround known issue #1
    kolla-kubernetes run mariadb
    kolla-kubernetes run memcached
    sleep 30  # wait for mariadb and memcached to start up
    kolla-kubernetes bootstrap keystone
    sleep 30  # wait for keystone to bootstrap in mariadb
    kolla-kubernetes run keystone
    sleep 30  # wait for keystone to start up
    kolla-kubernetes run horizon
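
Rather than relying on fixed sleeps, you can watch the pods come up
between steps (a sketch; pod names will vary by deployment):

::

    # Watch pods transition to Running; Ctrl-C to stop watching
    kubectl get pods -w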

Workflow Engine Create Resources
--------------------------------

A future Ansible Workflow Engine would discretely call the individual
bits of logic.

::

    kolla-kubernetes resource create disk mariadb
    kolla-kubernetes resource create pv mariadb
    kolla-kubernetes resource create pvc mariadb
    kolla-kubernetes resource create svc mariadb
    kolla-kubernetes resource create configmap mariadb
    kolla-kubernetes resource create bootstrap mariadb
    sleep 30  # wait for mariadb bootstrap to finish
    kolla-kubernetes resource delete bootstrap mariadb  # workaround known issue #1
    kolla-kubernetes resource create pod mariadb
    kolla-kubernetes resource create svc memcached
    kolla-kubernetes resource create configmap memcached
    kolla-kubernetes resource create pod memcached
    kolla-kubernetes resource create svc keystone
    kolla-kubernetes resource create configmap keystone
    sleep 30  # wait for mariadb and memcached to start up
    kolla-kubernetes resource create bootstrap keystone
    sleep 30  # wait for keystone to bootstrap in mariadb
    kolla-kubernetes resource create pod keystone
    kolla-kubernetes resource create svc horizon
    kolla-kubernetes resource create configmap horizon
    sleep 30  # wait for keystone to start up
    kolla-kubernetes resource create pod horizon

Delete all Kolla-Kubernetes Resources
=====================================

Deleting all resources is simply a matter of executing the creation
steps in reverse; a check to confirm the teardown follows the
listings below.

Operator Delete Resources
-------------------------

::

    kolla-kubernetes kill horizon
    kolla-kubernetes kill keystone
    kolla-kubernetes kill memcached
    kolla-kubernetes kill mariadb

Workflow Engine Delete Resources
--------------------------------

::

    kolla-kubernetes resource delete pod horizon
    kolla-kubernetes resource delete configmap horizon
    kolla-kubernetes resource delete svc horizon
    kolla-kubernetes resource delete pod keystone
    kolla-kubernetes resource delete bootstrap keystone
    kolla-kubernetes resource delete configmap keystone
    kolla-kubernetes resource delete svc keystone
    kolla-kubernetes resource delete pod memcached
    kolla-kubernetes resource delete configmap memcached
    kolla-kubernetes resource delete svc memcached
    kolla-kubernetes resource delete pod mariadb
    kolla-kubernetes resource delete bootstrap mariadb
    kolla-kubernetes resource delete configmap mariadb
    kolla-kubernetes resource delete svc mariadb
    kolla-kubernetes resource delete pvc mariadb
    kolla-kubernetes resource delete pv mariadb
    kolla-kubernetes resource delete disk mariadb
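
To confirm that the teardown completed (a sketch; both listings should
come back empty of kolla resources):

::

    # No kolla pods or persistent volume claims should remain
    kubectl get pods
    kubectl get pvc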

etc/kolla-kubernetes/kolla-kubernetes.yml

@@ -25,3 +25,17 @@ glance_registry_replicas: "1"
dns_replicas: "1"
#dns_server_ip: ""
dns_domain_name: "openstack.kolla"
########################
# Persistent Storage
########################
storage_provider: "host" # host, ceph, gce, aws
storage_ceph:
  keyring: /etc/ceph/ceph.client.admin.keyring
  monitors:
    - x.x.x.x
    - y.y.y.y
  pool: rbd
  secretName: pkt-ceph-secret
  ssh_user: root
  user: admin

@@ -22,8 +22,11 @@ kolla-kubernetes:
  - name: mariadb
    resources:
      disk:
        - bootstrap/mariadb/mariadb-disk.sh.j2
      pv:
        - bootstrap/mariadb/mariadb-pv.yml.j2
      pvc:
        - bootstrap/mariadb/mariadb-pvc.yml.j2
      svc:
        - services/mariadb/mariadb-service.yml.j2
      bootstrap:

@@ -121,9 +121,19 @@ class JinjaUtils(object):
        name = 'jvars'
        j2env = jinja2.Environment(
            loader=jinja2.DictLoader({name: template_str}))

        # Do not print type for bools "!!bool" on output
        j2env.filters['bool'] = type_utils.str_to_bool

        # Add a "raise" keyword for raising exceptions from within jinja
        def jinja_raise(message):
            raise Exception(message)
        j2env.globals['raise'] = jinja_raise

        # Add a keyword for accessing KubeUtils from within jinja
        j2env.globals['KubeUtils'] = KubeUtils

        # Render the template
        rendered_template = j2env.get_template(name).render(dict_)
        return rendered_template + "\n"

@@ -1,3 +1,4 @@
{%- set resourceName = kolla_kubernetes.cli.args.service_name %}
apiVersion: v1
kind: ReplicationController
spec:
@@ -32,8 +33,8 @@ spec:
        configMap:
          name: mariadb-configmap
      - name: mariadb-persistent-storage
        hostPath:
          path: /var/lib/mysql
        persistentVolumeClaim:
          claimName: {{ resourceName }}
      - name: kolla-logs
        emptyDir: {}
metadata: