From 584380f8ee3925545a39e6ec8dda5ece3aa930f2 Mon Sep 17 00:00:00 2001 From: Jaycen Grant Date: Thu, 11 Aug 2016 14:03:13 -0700 Subject: [PATCH] Rename Bay to Cluster in docs This is a continuation of patch 353726 and includes all of the doc changes for replacing the term bay with cluster and BayModel with ClusterTemplate. Change-Id: Ia7efaed157971ad7631ddffb9c1400f3516720f0 Implements: blueprint rename-bay-to-cluster --- ...xample.rst => cluster-type-definition.rst} | 45 +- doc/source/dev/functional-test.rst | 2 +- doc/source/dev/kubernetes-load-balancer.rst | 32 +- doc/source/dev/manual-devstack.rst | 4 +- doc/source/dev/quickstart.rst | 298 +++---- doc/source/heat-templates.rst | 17 +- doc/source/index.rst | 13 +- doc/source/install-guide-from-source.rst | 5 +- doc/source/magnum-proxy.rst | 16 +- doc/source/troubleshooting-guide.rst | 79 +- doc/source/userguide.rst | 760 +++++++++--------- 11 files changed, 658 insertions(+), 613 deletions(-) rename doc/source/dev/{bay-template-example.rst => cluster-type-definition.rst} (77%) diff --git a/doc/source/dev/bay-template-example.rst b/doc/source/dev/cluster-type-definition.rst similarity index 77% rename from doc/source/dev/bay-template-example.rst rename to doc/source/dev/cluster-type-definition.rst index 3e42e36082..27ef52ab4c 100644 --- a/doc/source/dev/bay-template-example.rst +++ b/doc/source/dev/cluster-type-definition.rst @@ -1,30 +1,29 @@ -==================== -Example Bay Template -==================== +======================= +Cluster Type Definition +======================= -This project is an example to demonstrate the necessary pieces of a Bay -template. There are three key pieces to a bay template: +There are three key pieces to a Cluster Type Definition: -1. Heat template - The Heat template that Magnum will use to generate a Bay. +1. Heat Stack template - The HOT file that Magnum will use to generate a + cluster using a Heat Stack. 2. Template definition - Magnum's interface for interacting with the Heat template. -3. Definition Entry Point - Used to advertise the available template - definitions. +3. Definition Entry Point - Used to advertise the available Cluster Types. -The Heat Template ------------------ +The Heat Stack Template +----------------------- -The heat template is where most of the real work happens. The result of the -Heat template should be a full Container Orchestration Environment. +The Heat Stack Template is where most of the real work happens. The result of +the Heat Stack Template should be a full Container Orchestration Environment. The Template Definition ----------------------- Template definitions are a mapping of Magnum object attributes and Heat -template parameters, along with Magnum consumable template outputs. Each -definition also denotes which Bay Types it can provide. Bay Types are how -Magnum determines which of the enabled Template Definitions it will use for a -given Bay. +template parameters, along with Magnum consumable template outputs. A +Cluster Type Definition indicates which Cluster Types it can provide. +Cluster Types are how Magnum determines which of the enabled Cluster +Type Definitions it will use for a given cluster. The Definition Entry Point -------------------------- @@ -35,15 +34,15 @@ Each Template Definition should have an Entry Point in the Definition as `example_template = example_template:ExampleTemplate` in the `magnum.template_definitions` group. 
-Installing Bay Templates ------------------------- +Installing Cluster Templates +---------------------------- -Because Bay Templates are basically Python projects, they can be worked with -like any other Python project. They can be cloned from version control and -installed or uploaded to a package index and installed via utilities such as -pip. +Because Cluster Type Definitions are basically Python projects, they can be +worked with like any other Python project. They can be cloned from version +control and installed or uploaded to a package index and installed via +utilities such as pip. -Enabling a template is as simple as adding it's Entry Point to the +Enabling a Cluster Type is as simple as adding it's Entry Point to the `enabled_definitions` config option in magnum.conf.:: # Setup python environment and install Magnum diff --git a/doc/source/dev/functional-test.rst b/doc/source/dev/functional-test.rst index a07be30f6f..a7580e11d7 100644 --- a/doc/source/dev/functional-test.rst +++ b/doc/source/dev/functional-test.rst @@ -39,7 +39,7 @@ If you're using devstack, you can copy and modify the devstack configuration:: source /opt/stack/devstack/openrc demo demo iniset functional_creds.conf auth password $OS_PASSWORD -Set the DNS name server to be used in your bay nodes (e.g. 8.8.8.8):: +Set the DNS name server to be used by your cluster nodes (e.g. 8.8.8.8):: # update DNS name server source /opt/stack/devstack/openrc demo demo diff --git a/doc/source/dev/kubernetes-load-balancer.rst b/doc/source/dev/kubernetes-load-balancer.rst index 4fe47fabf1..ead5afb46b 100644 --- a/doc/source/dev/kubernetes-load-balancer.rst +++ b/doc/source/dev/kubernetes-load-balancer.rst @@ -44,7 +44,7 @@ required. All the services will be created normally; services that specify the load balancer will also be created successfully, but a load balancer will not be created. -To enable the load balancer, log into each master node of your bay and +To enable the load balancer, log into each master node of your cluster and perform the following steps: 1. Configure kube-apiserver:: @@ -72,7 +72,7 @@ perform the following steps: sudo vi /etc/sysconfig/kube_openstack_config The username and tenant-name entries have been filled in with the - Keystone values of the user who created the bay. Enter the password + Keystone values of the user who created the cluster. Enter the password of this user on the entry for password:: password=ChangeMe @@ -88,9 +88,9 @@ This only needs to be done once. The steps can be reversed to disable the load balancer feature. Before deleting the Kubernetes cluster, make sure to delete all the services that created load balancers. Because the Neutron objects created by Kubernetes are not managed by Heat, they will not be -deleted by Heat and this will cause the bay-delete operation to fail. If this -occurs, delete the neutron objects manually (lb-pool, lb-vip, lb-member, -lb-healthmonitor) and then run bay-delete again. +deleted by Heat and this will cause the cluster-delete operation to fail. If +this occurs, delete the neutron objects manually (lb-pool, lb-vip, lb-member, +lb-healthmonitor) and then run cluster-delete again. Steps for the users =================== @@ -137,9 +137,9 @@ Create a file (e.g nginx-service.yaml) describing a service for the nginx pod:: app: nginx type: LoadBalancer -Assuming that a Kubernetes bay named k8sbayv1 has been created, deploy the pod -and service by the commands. 
Please refer to the quickstart guide on how to -connect to Kubernetes running on the launched bay.:: +Assuming that a Kubernetes cluster named k8sclusterv1 has been created, deploy +the pod and service by the commands. Please refer to the quickstart guide on +how to connect to Kubernetes running on the launched cluster.:: kubectl create -f nginx.yaml @@ -160,7 +160,7 @@ Alternatively, associating a floating IP can be done on the command line by allocating a floating IP, finding the port of the VIP, and associating the floating IP to the port. The commands shown below are for illustration purpose and assume -that there is only one service with load balancer running in the bay and +that there is only one service with load balancer running in the cluster and no other load balancers exist except for those created for the cluster. First create a floating IP on the public network:: @@ -232,13 +232,13 @@ with Neutron in this sequence: These Neutron objects can be verified as follows. For the load balancer pool:: neutron lb-pool-list - +--------------------------------------+----------------------------------------------+----------+-------------+----------+----------------+--------+ - | id | name | provider | lb_method | protocol | admin_state_up | status | - +--------------------------------------+----------------------------------------------+----------+-------------+----------+----------------+--------+ - | 241357b3-2a8f-442e-b534-bde7cd6ba7e4 | a1f03e40f634011e59c9efa163eae8ab | haproxy | ROUND_ROBIN | TCP | True | ACTIVE | - | 82b39251-1455-4eb6-a81e-802b54c2df29 | k8sbayv1-iypacicrskib-api_pool-fydshw7uvr7h | haproxy | ROUND_ROBIN | HTTP | True | ACTIVE | - | e59ea983-c6e8-4cec-975d-89ade6b59e50 | k8sbayv1-iypacicrskib-etcd_pool-qbpo43ew2m3x | haproxy | ROUND_ROBIN | HTTP | True | ACTIVE | - +--------------------------------------+----------------------------------------------+----------+-------------+----------+----------------+--------+ + +--------------------------------------+--------------------------------------------------+----------+-------------+----------+----------------+--------+ + | id | name | provider | lb_method | protocol | admin_state_up | status | + +--------------------------------------+--------------------------------------------------+----------+-------------+----------+----------------+--------+ + | 241357b3-2a8f-442e-b534-bde7cd6ba7e4 | a1f03e40f634011e59c9efa163eae8ab | haproxy | ROUND_ROBIN | TCP | True | ACTIVE | + | 82b39251-1455-4eb6-a81e-802b54c2df29 | k8sclusterv1-iypacicrskib-api_pool-fydshw7uvr7h | haproxy | ROUND_ROBIN | HTTP | True | ACTIVE | + | e59ea983-c6e8-4cec-975d-89ade6b59e50 | k8sclusterv1-iypacicrskib-etcd_pool-qbpo43ew2m3x | haproxy | ROUND_ROBIN | HTTP | True | ACTIVE | + +--------------------------------------+--------------------------------------------------+----------+-------------+----------+----------------+--------+ Note that 2 load balancers already exist to implement high availability for the cluster (api and ectd). The new load balancer for the Kubernetes service uses diff --git a/doc/source/dev/manual-devstack.rst b/doc/source/dev/manual-devstack.rst index 9d4f69dcbe..3dc5ac74fc 100644 --- a/doc/source/dev/manual-devstack.rst +++ b/doc/source/dev/manual-devstack.rst @@ -85,7 +85,7 @@ add the following line to your `local.conf` file:: enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer Create a local.sh to automatically make necessary networking changes during -the devstack deployment process. 
This will allow bays spawned by magnum to +the devstack deployment process. This will allow clusters spawned by magnum to access the internet through PUBLIC_INTERFACE:: cat > local.sh << 'END_LOCAL_SH' @@ -142,7 +142,7 @@ Create a domain and domain admin for trust:: --user $TRUSTEE_DOMAIN_ADMIN_ID --domain $TRUSTEE_DOMAIN_ID \ admin -Create a keypair for use with the baymodel:: +Create a keypair for use with the ClusterTemplate:: test -f ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa nova keypair-add --pub-key ~/.ssh/id_rsa.pub testkey diff --git a/doc/source/dev/quickstart.rst b/doc/source/dev/quickstart.rst index 01afdcbc6a..7d4d7e6dc2 100644 --- a/doc/source/dev/quickstart.rst +++ b/doc/source/dev/quickstart.rst @@ -78,8 +78,7 @@ To run unit test coverage and check percentage of code covered:: tox -e cover -To discover and interact with templates, please refer to -``_ + Exercising the Services Using Devstack ====================================== @@ -136,8 +135,8 @@ magnum will periodically send metrics to ceilometer:: enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer END -If you want to deploy Docker Registry 2.0 in your bay, you should enable swift -in devstack:: +If you want to deploy Docker Registry 2.0 in your cluster, you should enable +swift in devstack:: cat >> /opt/stack/devstack/local.conf << END enable_service s-proxy @@ -193,7 +192,8 @@ To list the available commands and resources for magnum, use:: magnum help -To list out the health of the internal services, namely conductor, of magnum, use:: +To list out the health of the internal services, namely conductor, of magnum, +use:: magnum service-list @@ -203,21 +203,21 @@ To list out the health of the internal services, namely conductor, of magnum, us | 1 | oxy-dev.hq1-0a5a3c02.hq1.abcde.com | magnum-conductor | up | +----+------------------------------------+------------------+-------+ -Create a keypair for use with the baymodel:: +Create a keypair for use with the ClusterTemplate:: test -f ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa nova keypair-add --pub-key ~/.ssh/id_rsa.pub testkey -Building a Kubernetes Bay - Based on Fedora Atomic -================================================== +Building a Kubernetes Cluster - Based on Fedora Atomic +====================================================== -Create a baymodel. This is similar in nature to a flavor and describes -to magnum how to construct the bay. The baymodel specifies a Fedora Atomic -image so the bays which use this baymodel will be based on Fedora Atomic. -The COE (Container Orchestration Engine) and keypair need to be specified -as well:: +Create a ClusterTemplate. This is similar in nature to a flavor and describes +to magnum how to construct the cluster. The ClusterTemplate specifies a Fedora +Atomic image so the clusters which use this ClusterTemplate will be based on +Fedora Atomic. The COE (Container Orchestration Engine) and keypair need to +be specified as well:: - magnum baymodel-create --name k8sbaymodel \ + magnum cluster-template-create --name k8s-cluster-template \ --image-id fedora-atomic-latest \ --keypair-id testkey \ --external-network-id public \ @@ -227,39 +227,43 @@ as well:: --network-driver flannel \ --coe kubernetes -Create a bay. Use the baymodel name as a template for bay creation. -This bay will result in one master kubernetes node and one minion node:: +Create a cluster. Use the ClusterTemplate name as a template for cluster +creation. 
This cluster will result in one master kubernetes node and one minion +node:: - magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1 + magnum cluster-create --name k8s-cluster \ + --cluster-template k8s-cluster-template \ + --node-count 1 -Bays will have an initial status of CREATE_IN_PROGRESS. Magnum will update -the status to CREATE_COMPLETE when it is done creating the bay. Do not create -containers, pods, services, or replication controllers before magnum finishes -creating the bay. They will likely not be created, and may cause magnum to -become confused. +Clusters will have an initial status of CREATE_IN_PROGRESS. Magnum will update +the status to CREATE_COMPLETE when it is done creating the cluster. Do not +create containers, pods, services, or replication controllers before magnum +finishes creating the cluster. They will likely not be created, and may cause +magnum to become confused. -The existing bays can be listed as follows:: +The existing clusters can be listed as follows:: - magnum bay-list + magnum cluster-list - +--------------------------------------+---------+------------+-----------------+ - | uuid | name | node_count | status | - +--------------------------------------+---------+------------+-----------------+ - | 9dccb1e6-02dc-4e2b-b897-10656c5339ce | k8sbay | 1 | CREATE_COMPLETE | - +--------------------------------------+---------+------------+-----------------+ + +--------------------------------------+-------------+------------+-----------------+ + | uuid | name | node_count | status | + +--------------------------------------+-------------+------------+-----------------+ + | 9dccb1e6-02dc-4e2b-b897-10656c5339ce | k8s-cluster | 1 | CREATE_COMPLETE | + +--------------------------------------+-------------+------------+-----------------+ -More detailed information for a given bay is obtained via:: +More detailed information for a given cluster is obtained via:: - magnum bay-show k8sbay + magnum cluster-show k8s-cluster -After a bay is created, you can dynamically add/remove node(s) to/from the bay -by updating the node_count attribute. For example, to add one more node:: +After a cluster is created, you can dynamically add/remove node(s) to/from the +cluster by updating the node_count attribute. For example, to add one more +node:: - magnum bay-update k8sbay replace node_count=2 + magnum cluster-update k8s-cluster replace node_count=2 -Bays in the process of updating will have a status of UPDATE_IN_PROGRESS. +Clusters in the process of updating will have a status of UPDATE_IN_PROGRESS. Magnum will update the status to UPDATE_COMPLETE when it is done updating -the bay. +the cluster. **NOTE:** Reducing node_count will remove all the existing pods on the nodes that are deleted. If you choose to reduce the node_count, magnum will first @@ -271,27 +275,28 @@ node_count so any removed pods can be automatically recovered on your remaining nodes. 
Heat can be used to see detailed information on the status of a stack or
-specific bay:
+specific cluster:
 
-To check the list of all bay stacks::
+To check the list of all cluster stacks::
 
     openstack stack list
 
-To check an individual bay's stack::
+To check an individual cluster's stack::
 
     openstack stack show 
 
-Monitoring bay status in detail (e.g., creating, updating)::
+Monitoring cluster status in detail (e.g., creating, updating)::
 
-    BAY_HEAT_NAME=$(openstack stack list | awk "/\sk8sbay-/{print \$4}")
-    echo ${BAY_HEAT_NAME}
-    openstack stack resource list ${BAY_HEAT_NAME}
+    CLUSTER_HEAT_NAME=$(openstack stack list | \
+                        awk "/\sk8s-cluster-/{print \$4}")
+    echo ${CLUSTER_HEAT_NAME}
+    openstack stack resource list ${CLUSTER_HEAT_NAME}
 
-Building a Kubernetes Bay - Based on CoreOS
-===========================================
+Building a Kubernetes Cluster - Based on CoreOS
+===============================================
 
-You can create a Kubernetes bay based on CoreOS as an alternative to Atomic.
-First, download the official CoreOS image::
+You can create a Kubernetes cluster based on CoreOS as an alternative to
+Atomic. First, download the official CoreOS image::
 
     wget http://beta.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
     bunzip2 coreos_production_openstack_image.img.bz2
@@ -305,10 +310,10 @@ Upload the image to glance::
         --os-distro=coreos \
         --file=coreos_production_openstack_image.img
 
-Create a CoreOS Kubernetes baymodel, which is similar to the Atomic Kubernetes
-baymodel, except for pointing to a different image::
+Create a CoreOS Kubernetes ClusterTemplate, which is similar to the Atomic
+Kubernetes ClusterTemplate, except for pointing to a different image::
 
-    magnum baymodel-create --name k8sbaymodel-coreos \
+    magnum cluster-template-create --name k8s-cluster-template-coreos \
                            --image-id CoreOS \
                            --keypair-id testkey \
                            --external-network-id public \
@@ -317,18 +322,18 @@ baymodel, except for pointing to a different image::
                            --network-driver flannel \
                            --coe kubernetes
 
-Create a CoreOS Kubernetes bay. Use the CoreOS baymodel as a template for bay
-creation::
+Create a CoreOS Kubernetes cluster. Use the CoreOS ClusterTemplate as a
+template for cluster creation::
 
-    magnum bay-create --name k8sbay \
-                      --baymodel k8sbaymodel-coreos \
+    magnum cluster-create --name k8s-cluster \
+                          --cluster-template k8s-cluster-template-coreos \
                           --node-count 2
 
-Using Kubernetes Bay
-====================
+Using a Kubernetes Cluster
+==========================
 
 **NOTE:** For the following examples, only one minion node is required in the
-k8s bay created previously.
+k8s cluster created previously.
 
 Kubernetes provides a number of examples you can use to check that things are
 working. You may need to clone kubernetes using::
 
@@ -372,22 +377,22 @@ the CSR.::
 
 Now that you have your client CSR, you can use the Magnum CLI to send it off
 to Magnum to get it signed and also download the signing cert.::
 
-    magnum ca-sign --bay k8sbay --csr client.csr > client.crt
-    magnum ca-show --bay k8sbay > ca.crt
+    magnum ca-sign --cluster k8s-cluster --csr client.csr > client.crt
+    magnum ca-show --cluster k8s-cluster > ca.crt
 
 Here's how to set up the replicated redis example.
Now we create a pod for the redis-master:: - KUBERNETES_URL=$(magnum bay-show k8sbay | + KUBERNETES_URL=$(magnum cluster-show k8s-cluster | awk '/ api_address /{print $4}') # Set kubectl to use the correct certs - kubectl config set-cluster k8sbay --server=${KUBERNETES_URL} \ + kubectl config set-cluster k8s-cluster --server=${KUBERNETES_URL} \ --certificate-authority=$(pwd)/ca.crt kubectl config set-credentials client --certificate-authority=$(pwd)/ca.crt \ --client-key=$(pwd)/client.key --client-certificate=$(pwd)/client.crt - kubectl config set-context k8sbay --cluster=k8sbay --user=client - kubectl config use-context k8sbay + kubectl config set-context k8s-cluster --cluster=k8s-cluster --user=client + kubectl config use-context k8s-cluster # Test the cert and connection works kubectl version @@ -410,37 +415,38 @@ redis slaves and sentinels:: kubectl create -f ./redis-sentinel-controller.yaml Full lifecycle and introspection operations for each object are supported. -For example, magnum bay-create, magnum baymodel-delete. +For example, magnum cluster-create, magnum cluster-template-delete. Now there are four redis instances (one master and three slaves) running -across the bay, replicating data between one another. +across the cluster, replicating data between one another. -Run the bay-show command to get the IP of the bay host on which the +Run the cluster-show command to get the IP of the cluster host on which the redis-master is running:: - magnum bay-show k8sbay + magnum cluster-show k8s-cluster +--------------------+------------------------------------------------------------+ | Property | Value | +--------------------+------------------------------------------------------------+ | status | CREATE_COMPLETE | | uuid | cff82cd0-189c-4ede-a9cb-2c0af6997709 | + | stack_id | 7947844a-8e18-4c79-b591-ecf0f6067641 | | status_reason | Stack CREATE completed successfully | | created_at | 2016-05-26T17:45:57+00:00 | | updated_at | 2016-05-26T17:50:02+00:00 | - | bay_create_timeout | 60 | + | create_timeout | 60 | | api_address | https://172.24.4.4:6443 | - | baymodel_id | e73298e7-e621-4d42-b35b-7a1952b97158 | + | cluster_template_id| e73298e7-e621-4d42-b35b-7a1952b97158 | | master_addresses | ['172.24.4.6'] | | node_count | 1 | | node_addresses | ['172.24.4.5'] | | master_count | 1 | | discovery_url | https://discovery.etcd.io/4caaa65f297d4d49ef0a085a7aecf8e0 | - | name | k8sbay | + | name | k8s-cluster | +--------------------+------------------------------------------------------------+ -The output here indicates the redis-master is running on the bay host with IP -address 172.24.4.5. To access the redis master:: +The output here indicates the redis-master is running on the cluster host with +IP address 172.24.4.5. To access the redis master:: ssh fedora@172.24.4.5 REDIS_ID=$(sudo docker ps | grep redis:v1 | grep k8s_master | awk '{print $1}') @@ -474,19 +480,19 @@ Additional useful commands from a given minion:: kubectl get svc # Get services kubectl get nodes # Get nodes -After you finish using the bay, you want to delete it. A bay can be deleted as -follows:: +After you finish using the cluster, you want to delete it. A cluster can be +deleted as follows:: - magnum bay-delete k8sbay + magnum cluster-delete k8s-cluster -Building and Using a Swarm Bay -============================== +Building and Using a Swarm Cluster +================================== -Create a baymodel. 
It is very similar to the Kubernetes baymodel, except for -the absence of some Kubernetes-specific arguments and the use of 'swarm' -as the COE:: +Create a ClusterTemplate. It is very similar to the Kubernetes ClusterTemplate, +except for the absence of some Kubernetes-specific arguments and the use of +'swarm' as the COE:: - magnum baymodel-create --name swarmbaymodel \ + magnum cluster-template-create --name swarm-cluster-template \ --image-id fedora-atomic-latest \ --keypair-id testkey \ --external-network-id public \ @@ -501,31 +507,40 @@ as the COE:: http://docs.openstack.org/developer/magnum/magnum-proxy.html -Finally, create the bay. Use the baymodel 'swarmbaymodel' as a template for -bay creation. This bay will result in one swarm manager node and two extra -agent nodes:: +Finally, create the cluster. Use the ClusterTemplate 'swarm-cluster-template' +as a template for cluster creation. This cluster will result in one swarm +manager node and two extra agent nodes:: - magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 2 + magnum cluster-create --name swarm-cluster \ + --cluster-template swarm-cluster-template \ + --node-count 2 -Now that we have a swarm bay we can start interacting with it:: +Now that we have a swarm cluster we can start interacting with it:: - magnum bay-show swarmbay + magnum cluster-show swarm-cluster - +---------------+------------------------------------------+ - | Property | Value | - +---------------+------------------------------------------+ - | status | CREATE_COMPLETE | - | uuid | eda91c1e-6103-45d4-ab09-3f316310fa8e | - | created_at | 2015-04-20T19:05:27+00:00 | - | updated_at | 2015-04-20T19:06:08+00:00 | - | baymodel_id | a93ee8bd-fec9-4ea7-ac65-c66c1dba60af | - | node_count | 2 | - | discovery_url | | - | name | swarmbay | - +---------------+------------------------------------------+ + +--------------------+------------------------------------------------------------+ + | Property | Value | + +--------------------+------------------------------------------------------------+ + | status | CREATE_COMPLETE | + | uuid | eda91c1e-6103-45d4-ab09-3f316310fa8e | + | stack_id | 7947844a-8e18-4c79-b591-ecf0f6067641 | + | status_reason | Stack CREATE completed successfully | + | created_at | 2015-04-20T19:05:27+00:00 | + | updated_at | 2015-04-20T19:06:08+00:00 | + | create_timeout | 60 | + | api_address | https://172.24.4.4:6443 | + | cluster_template_id| e73298e7-e621-4d42-b35b-7a1952b97158 | + | master_addresses | ['172.24.4.6'] | + | node_count | 2 | + | node_addresses | ['172.24.4.5'] | + | master_count | 1 | + | discovery_url | https://discovery.etcd.io/4caaa65f297d4d49ef0a085a7aecf8e0 | + | name | swarm-cluster | + +--------------------+------------------------------------------------------------+ -We now need to setup the docker CLI to use the swarm bay we have created with -the appropriate credentials. +We now need to setup the docker CLI to use the swarm cluster we have created +with the appropriate credentials. Create a dir to store certs and cd into it. 
The `DOCKER_CERT_PATH` env variable is consumed by docker which expects ca.pem, key.pem and cert.pem to be in that @@ -562,8 +577,8 @@ Run the openssl 'req' command to generate the CSR.:: Now that you have your client CSR use the Magnum CLI to get it signed and also download the signing cert.:: - magnum ca-sign --bay swarmbay --csr client.csr > cert.pem - magnum ca-show --bay swarmbay > ca.pem + magnum ca-sign --cluster swarm-cluster --csr client.csr > cert.pem + magnum ca-show --cluster swarm-cluster > ca.pem Set the CLI to use TLS . This env var is consumed by docker.:: @@ -572,10 +587,10 @@ Set the CLI to use TLS . This env var is consumed by docker.:: Set the correct host to use which is the public ip address of swarm API server endpoint. This env var is consumed by docker.:: - export DOCKER_HOST=$(magnum bay-show swarmbay | awk '/ api_address /{print substr($4,9)}') + export DOCKER_HOST=$(magnum cluster-show swarm-cluster | awk '/ api_address /{print substr($4,9)}') -Next we will create a container in this swarm bay. This container will ping the -address 8.8.8.8 four times:: +Next we will create a container in this swarm cluster. This container will ping +the address 8.8.8.8 four times:: docker run --rm -it cirros:latest ping -c 4 8.8.8.8 @@ -591,10 +606,10 @@ You should see a similar output to:: 4 packets transmitted, 4 packets received, 0% packet loss round-trip min/avg/max = 25.226/25.340/25.513 ms -Building and Using a Mesos Bay -============================== +Building and Using a Mesos Cluster +================================== -Provisioning a mesos bay requires a Ubuntu-based image with some packages +Provisioning a mesos cluster requires a Ubuntu-based image with some packages pre-installed. To build and upload such image, please refer to ``_ @@ -605,46 +620,51 @@ Alternatively, you can download and upload a pre-built image:: --disk-format=qcow2 --container-format=bare \ --os-distro=ubuntu --file=ubuntu-14.04.3-mesos-0.25.0.qcow2 -Then, create a baymodel by using 'mesos' as the COE, with the rest of arguments -similar to the Kubernetes baymodel:: +Then, create a ClusterTemplate by using 'mesos' as the COE, with the rest of +arguments similar to the Kubernetes ClusterTemplate:: - magnum baymodel-create --name mesosbaymodel --image-id ubuntu-mesos \ + magnum cluster-template-create --name mesos-cluster-template --image-id ubuntu-mesos \ --keypair-id testkey \ --external-network-id public \ --dns-nameserver 8.8.8.8 \ --flavor-id m1.small \ --coe mesos -Finally, create the bay. Use the baymodel 'mesosbaymodel' as a template for -bay creation. This bay will result in one mesos master node and two mesos -slave nodes:: +Finally, create the cluster. Use the ClusterTemplate 'mesos-cluster-template' +as a template for cluster creation. This cluster will result in one mesos +master node and two mesos slave nodes:: - magnum bay-create --name mesosbay --baymodel mesosbaymodel --node-count 2 + magnum cluster-create --name mesos-cluster \ + --cluster-template mesos-cluster-template \ + --node-count 2 -Now that we have a mesos bay we can start interacting with it. First we need -to make sure the bay's status is 'CREATE_COMPLETE':: +Now that we have a mesos cluster we can start interacting with it. 
First we +need to make sure the cluster's status is 'CREATE_COMPLETE':: - $ magnum bay-show mesosbay - +--------------------+--------------------------------------+ - | Property | Value | - +--------------------+--------------------------------------+ - | status | CREATE_COMPLETE | - | uuid | ff727f0d-72ca-4e2b-9fef-5ec853d74fdf | - | status_reason | Stack CREATE completed successfully | - | created_at | 2015-06-09T20:21:43+00:00 | - | updated_at | 2015-06-09T20:28:18+00:00 | - | bay_create_timeout | 60 | - | api_address | 172.24.4.115 | - | baymodel_id | 92dbda62-32d4-4435-88fc-8f42d514b347 | - | node_count | 2 | - | node_addresses | [u'172.24.4.116', u'172.24.4.117'] | - | master_count | 1 | - | discovery_url | None | - | name | mesosbay | - +--------------------+--------------------------------------+ + $ magnum cluster-show mesos-cluster -Next we will create a container in this bay by using the REST API of Marathon. -This container will ping the address 8.8.8.8:: + +--------------------+------------------------------------------------------------+ + | Property | Value | + +--------------------+------------------------------------------------------------+ + | status | CREATE_COMPLETE | + | uuid | ff727f0d-72ca-4e2b-9fef-5ec853d74fdf | + | stack_id | 7947844a-8e18-4c79-b591-ecf0f6067641 | + | status_reason | Stack CREATE completed successfully | + | created_at | 2015-06-09T20:21:43+00:00 | + | updated_at | 2015-06-09T20:28:18+00:00 | + | create_timeout | 60 | + | api_address | https://172.24.4.115:6443 | + | cluster_template_id| 92dbda62-32d4-4435-88fc-8f42d514b347 | + | master_addresses | ['172.24.4.115'] | + | node_count | 2 | + | node_addresses | ['172.24.4.116', '172.24.4.117'] | + | master_count | 1 | + | discovery_url | None | + | name | mesos-cluster | + +--------------------+------------------------------------------------------------+ + +Next we will create a container in this cluster by using the REST API of +Marathon. This container will ping the address 8.8.8.8:: $ cat > mesos.json << END { @@ -662,7 +682,7 @@ This container will ping the address 8.8.8.8:: "cmd": "ping 8.8.8.8" } END - $ MASTER_IP=$(magnum bay-show mesosbay | awk '/ api_address /{print $4}') + $ MASTER_IP=$(magnum cluster-show mesos-cluster | awk '/ api_address /{print $4}') $ curl -X POST -H "Content-Type: application/json" \ http://${MASTER_IP}:8080/v2/apps -d@mesos.json diff --git a/doc/source/heat-templates.rst b/doc/source/heat-templates.rst index 5d0933f346..74d18e6e98 100644 --- a/doc/source/heat-templates.rst +++ b/doc/source/heat-templates.rst @@ -1,11 +1,14 @@ -=========================== - Heat Template Definitions -=========================== +==================== +Heat Stack Templates +==================== -Heat Templates are what Magnum uses to generate a Bay. These various template -definitions provide a mapping of Magnum object attributes to Heat templates -parameters, along with Magnum consumable template outputs. The result of a -Heat template should be a full Container Orchestration Environment. +Heat Stack Templates are what Magnum passes to Heat to generate a cluster. For +each ClusterTemplate resource in Magnum, a Heat stack is created to arrange all +of the cloud resources needed to support the container orchestration +environment. These Heat stack templates provide a mapping of Magnum object +attributes to Heat template parameters, along with Magnum consumable stack +outputs. Magnum passes the Heat Stack Template to the Heat service to create a +Heat stack. 
The result is a full Container Orchestration Environment. .. list-plugins:: magnum.template_definitions :detailed: diff --git a/doc/source/index.rst b/doc/source/index.rst index 8b99fec9fc..d385d62915 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -32,9 +32,9 @@ Architecture There are several different types of objects in the magnum system: -* **Bay:** A collection of node objects where work is scheduled -* **BayModel:** An object stores template information about the bay which is - used to create new bays consistently +* **Cluster:** A collection of node objects where work is scheduled +* **ClusterTemplate:** An object stores template information about the cluster + which is used to create new clusters consistently * **Pod:** A collection of containers running on one physical or virtual machine * **Service:** An abstraction which defines a logical set of pods and a policy @@ -51,7 +51,7 @@ scalability to the conductor as well. The magnum-conductor process runs on a controller machine and connects to a Kubernetes or Docker REST API endpoint. The Kubernetes and Docker REST API -endpoints are managed by the bay object. +endpoints are managed by the cluster object. When service or pod objects are created, Kubernetes may be directly contacted via the Kubernetes REST API. When container objects are acted upon, the @@ -60,8 +60,7 @@ Docker REST API may be directly contacted. Features ======== -* Abstractions for bays, containers, nodes, pods, replication controllers, and - services +* Abstractions for Clusters * Integration with Kubernetes, Swarm, Mesos for backend container technology * Integration with Keystone for multi-tenant security * Integration with Neutron for Kubernetes multi-tenancy network security @@ -75,7 +74,7 @@ Developer Info dev/quickstart dev/manual-devstack - dev/bay-template-example.rst + dev/cluster-type-definition.rst dev/kubernetes-load-balancer.rst dev/functional-test.rst dev/reno.rst diff --git a/doc/source/install-guide-from-source.rst b/doc/source/install-guide-from-source.rst index 321e75ed1b..5abdd0f4b7 100644 --- a/doc/source/install-guide-from-source.rst +++ b/doc/source/install-guide-from-source.rst @@ -30,7 +30,7 @@ magnum related metrics. See `OpenStack Install Guides .. important:: - Magnum creates VM clusters on the Compute service (nova), called bays. These + Magnum creates groupings of Nova compute instances, called clusters. These VMs must have basic Internet connectivity and must be able to reach magnum's API server. Make sure that Compute and Network services are configured accordingly. @@ -176,8 +176,7 @@ service, you must create a database, service credentials, and API endpoints. +--------------+----------------------------------+ #. Magnum requires additional information in the Identity service to - manage COE clusters (bays). To add this information, complete these - steps: + manage clusters. To add this information, complete these steps: * Create the ``magnum`` domain that contains projects and users: diff --git a/doc/source/magnum-proxy.rst b/doc/source/magnum-proxy.rst index d2d046ec35..e3e0610ee7 100644 --- a/doc/source/magnum-proxy.rst +++ b/doc/source/magnum-proxy.rst @@ -7,9 +7,9 @@ for using services like docker, kubernetes and mesos. Use these steps when your firewall will not allow you to use those services without a proxy. -**NOTE:** This feature has only been tested with the supported bay type -and associated image: Kubernetes and Swarm bay using the Fedora Atomic -image, and Mesos bay using the Ubuntu image. 
+**NOTE:** This feature has only been tested with the supported cluster type +and associated image: Kubernetes and Swarm use the Fedora Atomic +image, and Mesos uses the Ubuntu image. Proxy Parameters to define before use ===================================== @@ -37,10 +37,10 @@ and ip addresses. Bad example: 192.168.0.0/28. Steps to configure proxies. ============================== -You can specify all three proxy parameters while creating baymodel of any -coe type. All of proxy parameters are optional. +You can specify all three proxy parameters while creating ClusterTemplate of +any coe type. All of proxy parameters are optional. - magnum baymodel-create --name k8sbaymodel \ + magnum cluster-template-create --name k8s-cluster-template \ --image-id fedora-atomic-latest \ --keypair-id testkey \ --external-network-id public \ @@ -50,7 +50,7 @@ coe type. All of proxy parameters are optional. --http-proxy \ --https-proxy \ --no-proxy <172.24.4.4,172.24.4.9,172.24.4.8> - magnum baymodel-create --name swarmbaymodel \ + magnum cluster-template-create --name swarm-cluster-template \ --image-id fedora-atomic-latest \ --keypair-id testkey \ --external-network-id public \ @@ -60,7 +60,7 @@ coe type. All of proxy parameters are optional. --http-proxy \ --https-proxy \ --no-proxy <172.24.4.4,172.24.4.9,172.24.4.8> - magnum baymodel-create --name mesosbaymodel \ + magnum cluster-template-create --name mesos-cluster-template \ --image-id ubuntu-mesos \ --keypair-id testkey \ --external-network-id public \ diff --git a/doc/source/troubleshooting-guide.rst b/doc/source/troubleshooting-guide.rst index ecc1e2a81e..15dae8a823 100644 --- a/doc/source/troubleshooting-guide.rst +++ b/doc/source/troubleshooting-guide.rst @@ -16,26 +16,26 @@ debugging unit tests and gate tests. Failure symptoms ================ -My bay-create takes a really long time - If you are using devstack on a small VM, bay-create will take a long +My cluster-create takes a really long time + If you are using devstack on a small VM, cluster-create will take a long time and may eventually fail because of insufficient resources. Another possible reason is that a process on one of the nodes is hung and heat is still waiting on the signal. In this case, it will eventually fail with a timeout, but since heat has a long default timeout, you can look at the `heat stacks`_ and check the WaitConditionHandle resources. -My bay-create fails with error: "Failed to create trustee XXX in domain XXX" - Check the `trustee for bay`_ +My cluster-create fails with error: "Failed to create trustee XXX in domain XXX" + Check the `trustee for cluster`_ -Kubernetes bay-create fails +Kubernetes cluster-create fails Check the `heat stacks`_, log into the master nodes and check the `Kubernetes services`_ and `etcd service`_. -Swarm bay-create fails +Swarm cluster-create fails Check the `heat stacks`_, log into the master nodes and check the `Swarm services`_ and `etcd service`_. -Mesos bay-create fails +Mesos cluster-create fails Check the `heat stacks`_, log into the master nodes and check the `Mesos services`_. @@ -43,20 +43,20 @@ I get the error "Timed out waiting for a reply" when deploying a pod Verify the `Kubernetes services`_ and `etcd service`_ are running on the master nodes. 
-I deploy pods on Kubernetes bay but the status stays "Pending" +I deploy pods on Kubernetes cluster but the status stays "Pending" The pod status is "Pending" while the Docker image is being downloaded, so if the status does not change for a long time, log into the minion node and check for `Cluster internet access`_. -I deploy pods and services on Kubernetes bay but the app is not working +I deploy pods and services on Kubernetes cluster but the app is not working The pods and services are running and the status looks correct, but if the app is performing communication between pods through services, verify `Kubernetes networking`_. -Swarm bay is created successfully but I cannot deploy containers +Swarm cluster is created successfully but I cannot deploy containers Check the `Swarm services`_ and `etcd service`_ on the master nodes. -Mesos bay is created successfully but I cannot deploy containers on Marathon +Mesos cluster is created successfully but I cannot deploy containers on Marathon Check the `Mesos services`_ on the master node. I get a "Protocol violation" error when deploying a container @@ -64,7 +64,7 @@ I get a "Protocol violation" error when deploying a container kube-apiserver is running to accept the request. Check `TLS`_ and `Barbican service`_. -My bay-create fails with a resource error on docker_volume +My cluster-create fails with a resource error on docker_volume Check for available volume space on Cinder and the `request volume size`_ in the heat template. Run "nova volume-list" to check the volume status. @@ -78,17 +78,17 @@ Heat stacks ----------- *To be filled in* -A bay is deployed by a set of heat stacks: one top level stack and several -nested stack. The stack names are prefixed with the bay name and the nested -stack names contain descriptive internal names like *kube_masters*, +A cluster is deployed by a set of heat stacks: one top level stack and several +nested stack. The stack names are prefixed with the cluster name and the +nested stack names contain descriptive internal names like *kube_masters*, *kube_minions*. -To list the status of all the stacks for a bay: +To list the status of all the stacks for a cluster: - heat stack-list -n | grep *bay-name* + heat stack-list -n | grep *cluster-name* -If the bay has failed, then one or more of the heat stacks would have failed. -From the stack list above, look for the stacks that failed, then +If the cluster has failed, then one or more of the heat stacks would have +failed. From the stack list above, look for the stacks that failed, then look for the particular resource(s) that failed in the failed stack by: heat resource-list *failed-stack-name* | grep "FAILED" @@ -108,14 +108,15 @@ services`_, `Swarm services`_ or `Mesos services`_. If the failure is in other scripts, look for them as `Heat software resource scripts`_. -Trustee for bay ---------------- -When a user creates a bay, Magnum will dynamically create a service account -for the creating bay. The service account will be used by the bay to access -the OpenStack services (i.e. Neutron, Swift, etc.). A trust relationship -will be created between the user who created the bay (the "trustor") and the -service account created for the bay (the "trustee"). For details, please refer -`_. +Trustee for cluster +------------------- +When a user creates a cluster, Magnum will dynamically create a service account +for the cluster. The service account will be used by the cluster to +access the OpenStack services (i.e. Neutron, Swift, etc.). 
A trust relationship +will be created between the user who created the cluster (the "trustor") and +the service account created for the cluster (the "trustee"). For details, +please refer +`_. If Magnum fails to create the trustee, check the magnum config file (usually in /etc/magnum/magnum.conf). Make sure 'trustee_*' and 'auth_uri' are set and @@ -192,7 +193,7 @@ The nodes for Kubernetes, Swarm and Mesos are connected to a private Neutron network, so to provide access to the external internet, a router connects the private network to a public network. With devstack, the default public network is "public", but this can be replaced by the -parameter "external-network-id" in the bay model. The "public" network +parameter "external-network-id" in the ClusterTemplate. The "public" network with devstack is actually not a real external network, so it is in turn routed to the network interface of the host for devstack. This is configured in the file local.conf with the variable PUBLIC_INTERFACE, @@ -215,8 +216,8 @@ Check the following: - Is PUBLIC_INTERFACE in devstack/local.conf the correct network interface? Does this interface have a route to the external internet? -- If "external-network-id" is specified in the bay model, does this network - have a route to the external internet? +- If "external-network-id" is specified in the ClusterTemplate, does this + network have a route to the external internet? - Is your devstack environment behind a firewall? This can be the case for some enterprises or countries. In this case, consider using a `proxy server `_. @@ -241,9 +242,9 @@ If the name lookup fails, check the following: - Is the DNS entry correct in the subnet? Try "neutron subnet-show " for the private subnet and check dns_nameservers. The IP should be either the default public DNS 8.8.8.8 or the value - specified by "dns-nameserver" in the bay model. + specified by "dns-nameserver" in the ClusterTemplate. - If you are using your own DNS server by specifying "dns-nameserver" - in the bay model, is it reachable and working? + in the ClusterTemplate, is it reachable and working? - More help on `DNS troubleshooting `_. @@ -264,12 +265,12 @@ the key:value may not be replicated correctly. In this case, use the following steps to verify the inter-pods networking and pinpoint problems. Since the steps are specific to the network drivers, refer to the -particular driver being used for the bay. +particular driver being used for the cluster. Using Flannel as network driver ............................... -Flannel is the default network driver for Kubernetes bays. Flannel is +Flannel is the default network driver for Kubernetes clusters. Flannel is an overlay network that runs on top of the neutron network. It works by encapsulating the messages between pods and forwarding them to the correct node that hosts the target pod. @@ -515,15 +516,15 @@ Running Flannel When deploying a COE, Flannel is available as a network driver for certain COE type. Magnum currently supports Flannel for a Kubernetes -or Swarm bay. +or Swarm cluster. -Flannel provides a flat network space for the containers in the bay: +Flannel provides a flat network space for the containers in the cluster: they are allocated IP in this network space and they will have connectivity to each other. Therefore, if Flannel fails, some containers will not -be able to access services from other containers in the bay. This can be +be able to access services from other containers in the cluster. 
This can be confirmed by running *ping* or *curl* from one container to another. -The Flannel daemon is run as a systemd service on each node of the bay. +The Flannel daemon is run as a systemd service on each node of the cluster. To check Flannel, run on each node:: sudo service flanneld status @@ -572,7 +573,7 @@ Check the following: } where the values for the parameters must match the corresponding - parameters from the bay model. + parameters from the ClusterTemplate. Magnum also loads this configuration into etcd, therefore, verify the configuration in etcd by running *etcdctl* on the master nodes:: diff --git a/doc/source/userguide.rst b/doc/source/userguide.rst index 87414892a9..41529c81e3 100644 --- a/doc/source/userguide.rst +++ b/doc/source/userguide.rst @@ -20,7 +20,7 @@ Contents #. `Overview`_ #. `Python Client`_ #. `Horizon Interface`_ -#. `Bay Drivers`_ +#. `Cluster Drivers`_ #. `Choosing a COE`_ #. `Native clients`_ #. `Kubernetes`_ @@ -38,22 +38,22 @@ Contents Terminology =========== -Bay - A bay is the construct in which Magnum launches container orchestration - engines. After a bay has been created the user is able to add containers to - it either directly, or in the case of the Kubernetes container orchestration - engine within pods - a logical construct specific to that implementation. A - bay is created based on a baymodel. +Cluster (previously Bay) + A cluster is the construct in which Magnum launches container orchestration + engines. After a cluster has been created the user is able to add containers + to it either directly, or in the case of the Kubernetes container + orchestration engine within pods - a logical construct specific to that + implementation. A cluster is created based on a ClusterTemplate. -Baymodel - A baymodel in Magnum is roughly equivalent to a flavor in Nova. It acts as a - template that defines options such as the container orchestration engine, - keypair and image for use when Magnum is creating bays using the given - baymodel. +ClusterTemplate (previously BayModel) + A ClusterTemplate in Magnum is roughly equivalent to a flavor in Nova. It + acts as a template that defines options such as the container orchestration + engine, keypair and image for use when Magnum is creating clusters using + the given ClusterTemplate. Container Orchestration Engine (COE) A container orchestration engine manages the lifecycle of one or more - containers, logically represented in Magnum as a bay. Magnum supports a + containers, logically represented in Magnum as a cluster. Magnum supports a number of container orchestration engines, each with their own pros and cons, including Docker Swarm, Kubernetes, and Mesos. @@ -64,32 +64,33 @@ Overview Magnum rationale, concept, compelling features -======== -BayModel -======== +=============== +ClusterTemplate +=============== -A baymodel is a collection of parameters to describe how a bay can be -constructed. Some parameters are relevant to the infrastructure of -the bay, while others are for the particular COE. In a typical -workflow, a user would create a baymodel, then create one or more bays -using the baymodel. A cloud provider can also define a number of -baymodels and provide them to the users. A baymodel cannot be updated -or deleted if a bay using this baymodel still exists. +A ClusterTemplate (previously known as BayModel) is a collection of parameters +to describe how a cluster can be constructed. Some parameters are relevant to +the infrastructure of the cluster, while others are for the particular COE. 
In +a typical workflow, a user would create a ClusterTemplate, then create one or +more clusters using the ClusterTemplate. A cloud provider can also define a +number of ClusterTemplates and provide them to the users. A ClusterTemplate +cannot be updated or deleted if a cluster using this ClusterTemplate still +exists. -The definition and usage of the parameters of a baymodel are as follows. +The definition and usage of the parameters of a ClusterTemplate are as follows. They are loosely grouped as: mandatory, infrastructure, COE specific. --coe \ Specify the Container Orchestration Engine to use. Supported COE's include 'kubernetes', 'swarm', 'mesos'. If your environment - has additional bay drivers installed, refer to the bay driver + has additional cluster drivers installed, refer to the cluster driver documentation for the new COE names. This is a mandatory parameter and there is no default value. --image-id \ The name or UUID of the base image in Glance to boot the servers for - the bay. The image must have the attribute 'os-distro' defined - as appropriate for the bay driver. For the currently supported + the cluster. The image must have the attribute 'os-distro' defined + as appropriate for the cluster driver. For the currently supported images, the os-distro names are: ========== ===================== @@ -103,47 +104,48 @@ They are loosely grouped as: mandatory, infrastructure, COE specific. This is a mandatory parameter and there is no default value. --keypair-id \ - The name or UUID of the SSH keypair to configure in the bay servers + The name or UUID of the SSH keypair to configure in the cluster servers for ssh access. You will need the key to be able to ssh to the - servers in the bay. The login name is specific to the bay + servers in the cluster. The login name is specific to the cluster driver. This is a mandatory parameter and there is no default value. --external-network-id \ The name or network ID of a Neutron network to provide connectivity - to the external internet for the bay. This network must be an + to the external internet for the cluster. This network must be an external network, i.e. its attribute 'router:external' must be - 'True'. The servers in the bay will be connected to a private + 'True'. The servers in the cluster will be connected to a private network and Magnum will create a router between this private network and the external network. This will allow the servers to download images, access discovery service, etc, and the containers to install packages, etc. In the opposite direction, floating IP's will be allocated from the external network to provide access from the external internet to servers and the container services hosted in - the bay. This is a mandatory parameter and there is no default + the cluster. This is a mandatory parameter and there is no default value. --name \ - Name of the baymodel to create. The name does not have to be - unique. If multiple baymodels have the same name, you will need to - use the UUID to select the baymodel when creating a bay or updating, - deleting a baymodel. If a name is not specified, a random name will - be generated using a string and a number, for example "pi-13-model". + Name of the ClusterTemplate to create. The name does not have to be + unique. If multiple ClusterTemplates have the same name, you will need to + use the UUID to select the ClusterTemplate when creating a cluster or + updating, deleting a ClusterTemplate. 
If a name is not specified, a random + name will be generated using a string and a number, for example + "pi-13-model". --public - Access to a baymodel is normally limited to the admin, owner or users + Access to a ClusterTemplate is normally limited to the admin, owner or users within the same tenant as the owners. Setting this flag - makes the baymodel public and accessible by other users. The default is - not public. + makes the ClusterTemplate public and accessible by other users. The default + is not public. --server-type \ - The servers in the bay can be VM or baremetal. This parameter selects - the type of server to create for the bay. The default is 'vm' and + The servers in the cluster can be VM or baremetal. This parameter selects + the type of server to create for the cluster. The default is 'vm' and currently this is the only supported server type. --network-driver \ The name of a network driver for providing the networks for the containers. Note that this is different and separate from the Neutron - network for the bay. The operation and networking model are specific + network for the cluster. The operation and networking model are specific to the particular driver; refer to the `Networking`_ section for more details. Supported network drivers and the default driver are: @@ -169,8 +171,8 @@ This is a mandatory parameter and there is no default value. ============= ============= =========== --dns-nameserver \ - The DNS nameserver for the servers and containers in the bay to use. - This is configured in the private Neutron network for the bay. The + The DNS nameserver for the servers and containers in the cluster to use. + This is configured in the private Neutron network for the cluster. The default is '8.8.8.8'. --flavor-id \ @@ -215,15 +217,15 @@ This is a mandatory parameter and there is no default value. --labels \ Arbitrary labels in the form of key=value pairs. The accepted keys - and valid values are defined in the bay drivers. They are used as a - way to pass additional parameters that are specific to a bay driver. + and valid values are defined in the cluster drivers. They are used as a + way to pass additional parameters that are specific to a cluster driver. Refer to the subsection on labels for a list of the supported key/value pairs and their usage. --tls-disabled Transport Layer Security (TLS) is normally enabled to secure the - bay. In some cases, users may want to disable TLS in the bay, for - instance during development or to troubleshoot certain problems. + cluster. In some cases, users may want to disable TLS in the cluster, + for instance during development or to troubleshoot certain problems. Specifying this parameter will disable TLS so that users can access the COE endpoints without a certificate. The default is TLS enabled. @@ -232,7 +234,7 @@ This is a mandatory parameter and there is no default value. Docker images by default are pulled from the public Docker registry, but in some cases, users may want to use a private registry. This option provides an alternative registry based on the Registry V2: - Magnum will create a local registry in the bay backed by swift to + Magnum will create a local registry in the cluster backed by swift to host the images. Refer to `Docker Registry 2.0 `_ for more details. The default is to use the public registry. @@ -252,28 +254,29 @@ Labels *To be filled in* -=== -Bay -=== +======= +Cluster +======= -A bay is an instance of the baymodel of a COE. 
Magnum deploys a bay -by referring to the attributes defined in the particular baymodel as -well as a few additional parameters for the bay. Magnum deploys the -orchestration templates provided by the bay driver to create and -configure all the necessary infrastructure. When ready, the bay is a -fully operational COE that can host containers. +A cluster (previously known as bay) is an instance of the ClusterTemplate +of a COE. Magnum deploys a cluster by referring to the attributes +defined in the particular ClusterTemplate as well as a few additional +parameters for the cluster. Magnum deploys the orchestration templates +provided by the cluster driver to create and configure all the necessary +infrastructure. When ready, the cluster is a fully operational COE that +can host containers. Infrastructure -------------- -The infrastructure of the bay consists of the resources provided by +The infrastructure of the cluster consists of the resources provided by the various OpenStack services. Existing infrastructure, including -infrastructure external to OpenStack, can also be used by the bay, +infrastructure external to OpenStack, can also be used by the cluster, such as DNS, public network, public discovery service, Docker registry. The actual resources created depends on the COE type and the options -specified; therefore you need to refer to the bay driver documentation +specified; therefore you need to refer to the cluster driver documentation of the COE for specific details. For instance, the option -'--master-lb-enabled' in the baymodel will cause a load balancer pool +'--master-lb-enabled' in the ClusterTemplate will cause a load balancer pool along with the health monitor and floating IP to be created. It is important to distinguish resources in the IaaS level from resources in the PaaS level. For instance, the infrastructure networking in @@ -283,7 +286,7 @@ in Kubernetes or Swarm PaaS. Typical infrastructure includes the following. Servers - The servers host the containers in the bay and these servers can be + The servers host the containers in the cluster and these servers can be VM or bare metal. VM's are provided by Nova. Since multiple VM's are hosted on a physical server, the VM's provide the isolation needed for containers between different tenants running on the same @@ -293,12 +296,12 @@ Servers Identity Keystone provides the authentication and authorization for managing - the bay infrastructure. + the cluster infrastructure. Network Networking among the servers is provided by Neutron. Since COE currently are not multi-tenant, isolation for multi-tenancy on the - networking level is done by using a private network for each bay. + networking level is done by using a private network for each cluster. As a result, containers belonging to one tenant will not be accessible to containers or servers of another tenant. Other networking resources may also be used, such as load balancer and @@ -311,24 +314,24 @@ Storage Security Barbican provides the storage of secrets such as certificates used - in the bay Transport Layer Security (TLS). + for Transport Layer Security (TLS) within the cluster. Life cycle ---------- -The set of life cycle operations on the bay is one of the key value -that Magnum provides, enabling bays to be managed painlessly on +The set of life cycle operations on the cluster is one of the key value +that Magnum provides, enabling clusters to be managed painlessly on OpenStack. 
The current operations are the
basic CRUD operations, but more advanced operations are under
discussion in the community and will be implemented as needed.

-**NOTE** The OpenStack resources created for a bay are fully
-accessible to the bay owner. Care should be taken when modifying or
+**NOTE** The OpenStack resources created for a cluster are fully
+accessible to the cluster owner. Care should be taken when modifying or
 reusing these resources to avoid impacting Magnum operations in
 unexpected manners. For instance, if you launch your own Nova
-instance on the bay private network, Magnum would not be aware of this
-instance. Therefore, the bay-delete operation will fail because
+instance on the cluster private network, Magnum would not be aware of this
+instance. Therefore, the cluster-delete operation will fail because
 Magnum would not delete the extra Nova instance and the private
 Neutron network cannot be removed while a Nova instance is still
 attached.

@@ -339,48 +342,56 @@ Heat. For more help on Heat stack troubleshooting, refer to the
 `_.

+
 Create
 ++++++

-The 'bay-create' command deploys a bay, for example::
+**NOTE** The 'bay-' prefixed commands (bay-create, bay-update, and so on) are
+deprecated but still supported in the current release. They will be removed
+in a future version. When using the 'bay' versions of the commands, the
+parameters keep the term bay in place of cluster. For example, 'bay-create'
+takes --baymodel instead of --cluster-template.

-    magnum bay-create --name mybay \
-                      --baymodel mymodel \
+The 'cluster-create' command deploys a cluster, for example::
+
+    magnum cluster-create --name mycluster \
+                          --cluster-template mytemplate \
                       --node-count 8 \
                       --master-count 3

-The 'bay-create' operation is asynchronous; therefore you can initiate
-another 'bay-create' operation while the current bay is being created.
-If the bay fails to be created, the infrastructure created so far may
+The 'cluster-create' operation is asynchronous; therefore you can initiate
+another 'cluster-create' operation while the current cluster is being created.
+If the cluster fails to be created, the infrastructure created so far may
 be retained or deleted depending on the particular orchestration
-engine. As a common practice, a failed bay is retained during
+engine. As a common practice, a failed cluster is retained during
 development for troubleshooting, but they are automatically deleted in
-production. The current bay drivers use Heat templates and the
-resources of a failed 'bay-create' are retained.
+production. The current cluster drivers use Heat templates and the
+resources of a failed 'cluster-create' are retained.

-The definition and usage of the parameters for 'bay-create' are as
+The definition and usage of the parameters for 'cluster-create' are as
 follows:

---baymodel \
-  The ID or name of the baymodel to use. This is a mandatory
-  parameter. Once a baymodel is used to create a bay, it cannot
-  be deleted or modified until all bays that use the baymodel have
+--cluster-template \
+  The ID or name of the ClusterTemplate to use. This is a mandatory
+  parameter. Once a ClusterTemplate is used to create a cluster, it cannot
+  be deleted or modified until all clusters that use the ClusterTemplate have
   been deleted.

 --name \
-  Name of the bay to create. If a name is not specified, a random
+  Name of the cluster to create. If a name is not specified, a random
   name will be generated using a string and a number, for example
-  "gamma-7-bay".
+  "gamma-7-cluster".
--node-count \ - The number of servers that will serve as node in the bay. + The number of servers that will serve as node in the cluster. The default is 1. --master-count \ - The number of servers that will serve as master for the bay. The - default is 1. Set to more than 1 master to enable High + The number of servers that will serve as master for the cluster. + The default is 1. Set to more than 1 master to enable High Availability. If the option '--master-lb-enabled' is specified in - the baymodel, the master servers will be placed in a load balancer + the ClusterTemplate, the master servers will be placed in a load balancer pool. --discovery-url \ @@ -393,63 +404,63 @@ follows: https://discovery.etcd.io - In this case, Magnum will generate a unique url here for each bay + In this case, Magnum will generate a unique url here for each cluster and store the info for the servers. --timeout \ - The timeout for bay creation in minutes. The value expected is a + The timeout for cluster creation in minutes. The value expected is a positive integer and the default is 60 minutes. If the timeout is - reached during bay-create, the operation will be aborted and the bay - status will be set to 'CREATE_FAILED'. + reached during cluster-create, the operation will be aborted and the + cluster status will be set to 'CREATE_FAILED'. List ++++ -The 'bay-list' command lists all the bays that belong to the tenant, +The 'cluster-list' command lists all the clusters that belong to the tenant, for example:: - magnum bay-list + magnum cluster-list Show ++++ -The 'bay-show' command prints all the details of a bay, for +The 'cluster-show' command prints all the details of a cluster, for example:: - magnum bay-show mybay + magnum cluster-show mycluster The properties include those not specified by users that have been assigned default values and properties from new resources that -have been created for the bay. +have been created for the cluster. Update ++++++ -A bay can be modified using the 'bay-update' command, for example:: +A cluster can be modified using the 'cluster-update' command, for example:: - magnum bay-update mybay replace node_count=8 + magnum cluster-update mycluster replace node_count=8 The parameters are positional and their definition and usage are as follows. -\ - This is the first parameter, specifying the UUID or name of the bay +\ + This is the first parameter, specifying the UUID or name of the cluster to update. \ This is the second parameter, specifying the desired change to be - made to the bay attributes. The allowed changes are 'add', + made to the cluster attributes. The allowed changes are 'add', 'replace' and 'remove'. \ This is the third parameter, specifying the targeted attributes in - the bay as a list separated by blank space. To add or replace an + the cluster as a list separated by blank space. To add or replace an attribute, you need to specify the value for the attribute. To remove an attribute, you only need to specify the name of the attribute. Currently the only attribute that can be replaced or removed is 'node_count'. The attributes 'name', 'master_count' and 'discovery_url' cannot be replaced or delete. The table below - summarizes the possible change to a bay. + summarizes the possible change to a cluster. +---------------+-----+------------------+-----------------------+ | Attribute | add | replace | remove | @@ -463,22 +474,22 @@ follows. 
| discovery_url | no | no | no | +---------------+-----+------------------+-----------------------+ -The 'bay-update' operation cannot be initiated when another operation +The 'cluster-update' operation cannot be initiated when another operation is in progress. -**NOTE:** The attribute names in bay-update are slightly different -from the corresponding names in the bay-create command: the dash '-' +**NOTE:** The attribute names in cluster-update are slightly different +from the corresponding names in the cluster-create command: the dash '-' is replaced by an underscore '_'. For instance, 'node-count' in -bay-create is 'node_count' in bay-update. +cluster-create is 'node_count' in cluster-update. Scale +++++ -Scaling a bay means adding servers to or removing servers from the bay. -Currently, this is done through the 'bay-update' operation by modifying +Scaling a cluster means adding servers to or removing servers from the cluster. +Currently, this is done through the 'cluster-update' operation by modifying the node-count attribute, for example:: - magnum bay-update mybay replace node_count=2 + magnum cluster-update mycluster replace node_count=2 When some nodes are removed, Magnum will attempt to find nodes with no containers to remove. If some nodes with containers must be removed, @@ -487,21 +498,21 @@ Magnum will log a warning message. Delete ++++++ -The 'bay-delete' operation removes the bay by deleting all resources +The 'cluster-delete' operation removes the cluster by deleting all resources such as servers, network, storage; for example:: - magnum bay-delete mybay + magnum cluster-delete mycluster -The only parameter for the bay-delete command is the ID or name of the -bay to delete. Multiple bays can be specified, separated by a blank +The only parameter for the cluster-delete command is the ID or name of the +cluster to delete. Multiple clusters can be specified, separated by a blank space. If the operation fails, there may be some remaining resources that have not been deleted yet. In this case, you can troubleshoot through Heat. If the templates are deleted manually in Heat, you can delete -the bay in Magnum to clean up the bay from Magnum database. +the cluster in Magnum to clean up the cluster from Magnum database. -The 'bay-delete' operation can be initiated when another operation is +The 'cluster-delete' operation can be initiated when another operation is still in progress. @@ -558,15 +569,15 @@ Horizon Interface ================= *To be filled in with screenshots* -=========== -Bay Drivers -=========== +=============== +Cluster Drivers +=============== -A bay driver is a collection of python code, heat templates, scripts, +A cluster driver is a collection of python code, heat templates, scripts, images, and documents for a particular COE on a particular -distro. Magnum presents the concept of baymodels and bays. The -implementation for a particular bay type is provided by the bay driver. -In other words, the bay driver provisions and manages the infrastructure +distro. Magnum presents the concept of ClusterTemplates and clusters. The +implementation for a particular cluster type is provided by the cluster driver. +In other words, the cluster driver provisions and manages the infrastructure for the COE. 
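+To give a rough picture before the details, the sketch below shows one
+possible layout of a cluster driver tree. The directory and file names here
+are only illustrative; the required and optional components are described
+later in this section::
+
+    mycoe_mydistro/        # hypothetical driver directory
+        driver.py          # controller operations for the COE
+        template_def.py    # maps ClusterTemplate parameters to Heat inputs
+        templates/         # Heat templates for the cluster lifecycle
+        api.py             # optional: interface with the COE
+        monitor.py         # optional: monitor cluster resource utilization
+        scale.py           # optional: add or remove nodes
+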
Magnum includes default drivers for the following COE and distro pairs: @@ -582,8 +593,8 @@ COE and distro pairs: | Mesos | Ubuntu | +------------+---------------+ -Magnum is designed to accommodate new bay drivers to support custom -COE's and this section describes how a new bay driver can be +Magnum is designed to accommodate new cluster drivers to support custom +COE's and this section describes how a new cluster driver can be constructed and enabled in Magnum. @@ -608,16 +619,17 @@ The minimum required components are: driver.py Python code that implements the controller operations for the particular COE. The driver must implement: - Currently supported: ``bay_create``, ``bay_update``, ``bay_delete``. + Currently supported: + ``cluster_create``, ``cluster_update``, ``cluster_delete``. templates A directory of orchestration templates for managing the lifecycle - of bays, including creation, configuration, update, and deletion. + of clusters, including creation, configuration, update, and deletion. Currently only Heat templates are supported, but in the future other orchestration mechanism such as Ansible may be supported. template_def.py - Python code that maps the parameters from the baymodel to the + Python code that maps the parameters from the ClusterTemplate to the input parameters for the orchestration and invokes the orchestration in the templates directory. @@ -637,18 +649,18 @@ api.py Python code to interface with the COE. monitor.py - Python code to monitor the resource utilization of the bay. + Python code to monitor the resource utilization of the cluster. scale.py - Python code to scale the bay by adding or removing nodes. + Python code to scale the cluster by adding or removing nodes. -Sample bay driver ------------------ +Sample cluster driver +--------------------- -To help developers in creating new COE drivers, a minimal bay driver -is provided as an example. The 'docker' bay driver will simply deploy +To help developers in creating new COE drivers, a minimal cluster driver +is provided as an example. The 'docker' cluster driver will simply deploy a single VM running Ubuntu with the latest Docker version installed. It is not a true cluster, but the simplicity will help to illustrate the key concepts. @@ -657,8 +669,8 @@ the key concepts. -Installing a bay driver ------------------------ +Installing a cluster driver +--------------------------- *To be filled in* @@ -668,19 +680,19 @@ Choosing a COE Magnum supports a variety of COE options, and allows more to be added over time as they gain popularity. As an operator, you may choose to support the full variety of options, or you may want to offer a subset of the available choices. -Given multiple choices, your users can run one or more bays, and each may use -a different COE. For example, I might have multiple bays that use Kubernetes, -and just one bay that uses Swarm. All of these bays can run concurrently, even -though they use different COE software. +Given multiple choices, your users can run one or more clusters, and each may +use a different COE. For example, I might have multiple clusters that use +Kubernetes, and just one cluster that uses Swarm. All of these clusters can +run concurrently, even though they use different COE software. Choosing which COE to use depends on what tools you want to use to manage your containers once you start your app. If you want to use the Docker tools, you -may want to use the Swarm bay type. Swarm will spread your containers across -the various nodes in your bay automatically. 
It does not monitor the health of -your containers, so it can't restart them for you if they stop. It will not -automatically scale your app for you (as of Swarm version 1.2.2). You may view -this as a plus. If you prefer to manage your application yourself, you might -prefer swarm over the other COE options. +may want to use the Swarm cluster type. Swarm will spread your containers +across the various nodes in your cluster automatically. It does not monitor +the health of your containers, so it can't restart them for you if they stop. +It will not automatically scale your app for you (as of Swarm version 1.2.2). +You may view this as a plus. If you prefer to manage your application yourself, +you might prefer swarm over the other COE options. Kubernetes (as of v1.2) is more sophisticated than Swarm (as of v1.2.2). It offers an attractive YAML file description of a pod, which is a grouping of @@ -699,13 +711,14 @@ including Marathon, Aurora, Chronos, Hadoop, and `a number of others. The Apache Mesos framework design can be used to run alternate COE software directly on Mesos. Although this approach is not widely used yet, it may soon be possible to run Mesos with Kubernetes and Swarm as frameworks, allowing -you to share the resources of a bay between multiple different COEs. Until -this option matures, we encourage Magnum users to create multiple bays, and -use the COE in each bay that best fits the anticipated workload. +you to share the resources of a cluster between multiple different COEs. Until +this option matures, we encourage Magnum users to create multiple clusters, and +use the COE in each cluster that best fits the anticipated workload. Finding the right COE for your workload is up to you, but Magnum offers you a choice to select among the prevailing leading options. Once you decide, see -the next sections for examples of how to create a bay with your desired COE. +the next sections for examples of how to create a cluster with your desired +COE. ============== Native clients @@ -722,7 +735,7 @@ Pod When using the Kubernetes container orchestration engine, a pod is the smallest deployable unit that can be created and managed. A pod is a co-located group of application containers that run with a shared context. - When using Magnum, pods are created and managed within bays. Refer to the + When using Magnum, pods are created and managed within clusters. Refer to the `pods section `_ in the `Kubernetes User Guide`_ for more information. @@ -748,49 +761,49 @@ Service .. 
_Kubernetes User Guide: http://kubernetes.io/v1.0/docs/user-guide/ -When Magnum deploys a Kubernetes bay, it uses parameters defined in the -baymodel and specified on the bay-create command, for example:: +When Magnum deploys a Kubernetes cluster, it uses parameters defined in the +ClusterTemplate and specified on the cluster-create command, for example:: - magnum baymodel-create --name k8sbaymodel \ - --image-id fedora-atomic-latest \ - --keypair-id testkey \ - --external-network-id public \ - --dns-nameserver 8.8.8.8 \ - --flavor-id m1.small \ - --docker-volume-size 5 \ - --network-driver flannel \ - --coe kubernetes + magnum cluster-template-create --name k8s-cluster-template \ + --image-id fedora-atomic-latest \ + --keypair-id testkey \ + --external-network-id public \ + --dns-nameserver 8.8.8.8 \ + --flavor-id m1.small \ + --docker-volume-size 5 \ + --network-driver flannel \ + --coe kubernetes - magnum bay-create --name k8sbay \ - --baymodel k8sbaymodel \ - --master-count 3 \ - --node-count 8 + magnum cluster-create --name k8s-cluster \ + --cluster-template k8s-cluster-template \ + --master-count 3 \ + --node-count 8 -Refer to the `Baymodel`_ and `Bay`_ sections for the full list of parameters. -Following are further details relevant to a Kubernetes bay: +Refer to the `ClusterTemplate`_ and `Cluster`_ sections for the full list of +parameters. Following are further details relevant to a Kubernetes cluster: Number of masters (master-count) - Specified in the bay-create command to indicate how many servers will - run as master in the bay. Having more than one will provide high + Specified in the cluster-create command to indicate how many servers will + run as master in the cluster. Having more than one will provide high availability. The masters will be in a load balancer pool and the virtual IP address (VIP) of the load balancer will serve as the Kubernetes API endpoint. For external access, a floating IP associated with this VIP is available and this is the endpoint - shown for Kubernetes in the 'bay-show' command. + shown for Kubernetes in the 'cluster-show' command. Number of nodes (node-count) - Specified in the bay-create command to indicate how many servers will - run as node in the bay to host the users' pods. The nodes are registered + Specified in the cluster-create command to indicate how many servers will + run as node in the cluster to host the users' pods. The nodes are registered in Kubernetes using the Nova instance name. Network driver (network-driver) - Specified in the baymodel to select the network driver. + Specified in the ClusterTemplate to select the network driver. The supported and default network driver is 'flannel', an overlay network providing a flat network for all pods. Refer to the `Networking`_ section for more details. Volume driver (volume-driver) - Specified in the baymodel to select the volume driver. The supported + Specified in the ClusterTemplate to select the volume driver. The supported volume driver is 'cinder', allowing Cinder volumes to be mounted in containers for use as persistent storage. Data written to these volumes will persist after the container exits and can be accessed again from other @@ -798,19 +811,19 @@ Volume driver (volume-driver) will be deleted. Refer to the `Storage`_ section for more details. Storage driver (docker-storage-driver) - Specified in the baymodel to select the Docker storage driver. The + Specified in the ClusterTemplate to select the Docker storage driver. 
The supported storage drivers are 'devicemapper' and 'overlay', with 'devicemapper' being the default. You may get better performance with the overlay driver depending on your use patterns, with the requirement that SELinux must be disabled inside the containers, although it still runs - in enforcing mode on the bay servers. Magnum will create a Cinder volume + in enforcing mode on the cluster servers. Magnum will create a Cinder volume for each node, mount it on the node and configure it as a logical volume named 'docker'. The Docker daemon will run the selected device driver to manage this logical volume and host the container writable layer there. Refer to the `Storage`_ section for more details. Image (image-id) - Specified in the baymodel to indicate the image to boot the servers. + Specified in the ClusterTemplate to indicate the image to boot the servers. The image binary is loaded in Glance with the attribute 'os_distro = fedora-atomic'. Current supported images are Fedora Atomic (download from `Fedora @@ -822,7 +835,7 @@ TLS (tls-disabled) Transport Layer Security is enabled by default, so you need a key and signed certificate to access the Kubernetes API and CLI. Magnum handles its own key and certificate when interfacing with the - Kubernetes bay. In development mode, TLS can be disabled. Refer to + Kubernetes cluster. In development mode, TLS can be disabled. Refer to the 'Transport Layer Security'_ section for more details. What runs on the servers @@ -836,12 +849,12 @@ What runs on the servers Log into the servers You can log into the master servers using the login 'fedora' and the - keypair specified in the baymodel. + keypair specified in the ClusterTemplate. External load balancer for services ----------------------------------- -All Kubernetes pods and services created in the bay are assigned IP +All Kubernetes pods and services created in the cluster are assigned IP addresses on a private container network so they can access each other and the external internet. However, these IP addresses are not accessible from an external network. @@ -870,43 +883,44 @@ for more details. Swarm ===== -A Swarm bay is a pool of servers running Docker daemon that is +A Swarm cluster is a pool of servers running Docker daemon that is managed as a single Docker host. One or more Swarm managers accepts the standard Docker API and manage this pool of servers. -Magnum deploys a Swarm bay using parameters defined in -the baymodel and specified on the 'bay-create' command, for example:: +Magnum deploys a Swarm cluster using parameters defined in +the ClusterTemplate and specified on the 'cluster-create' command, for +example:: - magnum baymodel-create --name swarmbaymodel \ - --image-id fedora-atomic-latest \ - --keypair-id testkey \ - --external-network-id public \ - --dns-nameserver 8.8.8.8 \ - --flavor-id m1.small \ - --docker-volume-size 5 \ - --coe swarm + magnum cluster-template-create --name swarm-cluster-template \ + --image-id fedora-atomic-latest \ + --keypair-id testkey \ + --external-network-id public \ + --dns-nameserver 8.8.8.8 \ + --flavor-id m1.small \ + --docker-volume-size 5 \ + --coe swarm - magnum bay-create --name swarmbay \ - --baymodel swarmbaymodel \ + magnum cluster-create --name swarm-cluster \ + --cluster-template swarm-cluster-template \ --master-count 3 \ --node-count 8 -Refer to the `Baymodel`_ and `Bay`_ sections for the full list of parameters. 
-Following are further details relevant to Swarm: +Refer to the `ClusterTemplate`_ and `Cluster`_ sections for the full list of +parameters. Following are further details relevant to Swarm: What runs on the servers - There are two types of servers in the Swarm bay: managers and nodes. + There are two types of servers in the Swarm cluster: managers and nodes. The Docker daemon runs on all servers. On the servers for manager, the Swarm manager is run as a Docker container on port 2376 and this is initiated by the systemd service swarm-manager. Etcd is also run - on the manager servers for discovery of the node servers in the bay. + on the manager servers for discovery of the node servers in the cluster. On the servers for node, the Swarm agent is run as a Docker container on port 2375 and this is initiated by the systemd service swarm-agent. On start up, the agents will register themselves in etcd and the managers will discover the new node to manage. Number of managers (master-count) - Specified in the bay-create command to indicate how many servers will - run as managers in the bay. Having more than one will provide high + Specified in the cluster-create command to indicate how many servers will + run as managers in the cluster. Having more than one will provide high availability. The managers will be in a load balancer pool and the load balancer virtual IP address (VIP) will serve as the Swarm API endpoint. A floating IP associated with the load balancer VIP will @@ -917,14 +931,14 @@ Number of managers (master-count) and schedule the containers there. Number of nodes (node-count) - Specified in the bay-create command to indicate how many servers will - run as nodes in the bay to host your Docker containers. These servers + Specified in the cluster-create command to indicate how many servers will + run as nodes in the cluster to host your Docker containers. These servers will register themselves in etcd for discovery by the managers, and interact with the managers. Docker daemon is run locally to host containers from users. Network driver (network-driver) - Specified in the baymodel to select the network driver. The supported + Specified in the ClusterTemplate to select the network driver. The supported drivers are 'docker' and 'flannel', with 'docker' as the default. With the 'docker' driver, containers are connected to the 'docker0' bridge on each node and are assigned local IP address. With the @@ -933,7 +947,7 @@ Network driver (network-driver) section for more details. Volume driver (volume-driver) - Specified in the baymodel to select the volume driver to provide + Specified in the ClusterTemplate to select the volume driver to provide persistent storage for containers. The supported volume driver is 'rexray'. The default is no volume driver. When 'rexray' or other volume driver is deployed, you can use the Docker 'volume' command to @@ -942,12 +956,12 @@ Volume driver (volume-driver) Refer to the `Storage`_ section for more details. Storage driver (docker-storage-driver) - Specified in the baymodel to select the Docker storage driver. The + Specified in the ClusterTemplate to select the Docker storage driver. The supported storage driver are 'devicemapper' and 'overlay', with 'devicemapper' being the default. You may get better performance with the 'overlay' driver depending on your use patterns, with the requirement that SELinux must be disabled inside the containers, although it still runs - in enforcing mode on the bay servers. 
Magnum will create a Cinder volume + in enforcing mode on the cluster servers. Magnum will create a Cinder volume for each node and attach it as a device. Then depending on the driver, additional configuration is performed to make the volume available to the particular driver. For instance, 'devicemapper' uses LVM; therefore @@ -955,7 +969,7 @@ Storage driver (docker-storage-driver) device. Refer to the `Storage`_ section for more details. Image (image-id) - Specified in the baymodel to indicate the image to boot the servers + Specified in the ClusterTemplate to indicate the image to boot the servers for the Swarm manager and node. The image binary is loaded in Glance with the attribute 'os_distro = fedora-atomic'. @@ -967,32 +981,32 @@ TLS (tls-disabled) access by both the users and Magnum. You will need a key and a signed certificate to access the Swarm API and CLI. Magnum handles its own key and certificate when interfacing with the - Swarm bay. In development mode, TLS can be disabled. Refer to + Swarm cluster. In development mode, TLS can be disabled. Refer to the 'Transport Layer Security'_ section for details on how to create your key and have Magnum sign your certificate. Log into the servers You can log into the manager and node servers with the account 'fedora' and - the keypair specified in the baymodel. + the keypair specified in the ClusterTemplate. ===== Mesos ===== -A Mesos bay consists of a pool of servers running as Mesos agents, +A Mesos cluster consists of a pool of servers running as Mesos agents, managed by a set of servers running as Mesos masters. Mesos manages the resources from the agents but does not itself deploy containers. -Instead, one of more Mesos frameworks running on the Mesos bay would +Instead, one of more Mesos frameworks running on the Mesos cluster would accept user requests on their own endpoint, using their particular API. These frameworks would then negotiate the resources with Mesos and the containers are deployed on the servers where the resources are offered. -Magnum deploys a Mesos bay using parameters defined in the baymodel -and specified on the 'bay-create' command, for example:: +Magnum deploys a Mesos cluster using parameters defined in the ClusterTemplate +and specified on the 'cluster-create' command, for example:: - magnum baymodel-create --name mesosbaymodel \ + magnum cluster-template-create --name mesos-cluster-template \ --image-id ubuntu-mesos \ --keypair-id testkey \ --external-network-id public \ @@ -1000,16 +1014,16 @@ and specified on the 'bay-create' command, for example:: --flavor-id m1.small \ --coe mesos - magnum bay-create --name mesosbay \ - --baymodel mesosbaymodel \ + magnum cluster-create --name mesos-cluster \ + --cluster-template mesos-cluster-template \ --master-count 3 \ --node-count 8 -Refer to the `Baymodel`_ and `Bay`_ sections for the full list of +Refer to the `ClusterTemplate`_ and `Cluster`_ sections for the full list of parameters. Following are further details relevant to Mesos: What runs on the servers - There are two types of servers in the Mesos bay: masters and agents. + There are two types of servers in the Mesos cluster: masters and agents. The Docker daemon runs on all servers. On the servers for master, the Mesos master is run as a process on port 5050 and this is initiated by the upstart service 'mesos-master'. Zookeeper is also @@ -1023,8 +1037,8 @@ What runs on the servers 'mesos-agent'. 
Number of master (master-count) - Specified in the bay-create command to indicate how many servers - will run as masters in the bay. Having more than one will provide + Specified in the cluster-create command to indicate how many servers + will run as masters in the cluster. Having more than one will provide high availability. If the load balancer option is specified, the masters will be in a load balancer pool and the load balancer virtual IP address (VIP) will serve as the Mesos API endpoint. A @@ -1032,21 +1046,21 @@ Number of master (master-count) external Mesos API endpoint. Number of agents (node-count) - Specified in the bay-create command to indicate how many servers - will run as Mesos agent in the bay. Docker daemon is run locally to + Specified in the cluster-create command to indicate how many servers + will run as Mesos agent in the cluster. Docker daemon is run locally to host containers from users. The agents report their available resources to the master and accept request from the master to deploy tasks from the frameworks. In this case, the tasks will be to run Docker containers. Network driver (network-driver) - Specified in the baymodel to select the network driver. Currently + Specified in the ClusterTemplate to select the network driver. Currently 'docker' is the only supported driver: containers are connected to the 'docker0' bridge on each node and are assigned local IP address. Refer to the `Networking`_ section for more details. Volume driver (volume-driver) - Specified in the baymodel to select the volume driver to provide + Specified in the ClusterTemplate to select the volume driver to provide persistent storage for containers. The supported volume driver is 'rexray'. The default is no volume driver. When 'rexray' or other volume driver is deployed, you can use the Docker 'volume' command to @@ -1059,7 +1073,7 @@ Storage driver (docker-storage-driver) Image (image-id) - Specified in the baymodel to indicate the image to boot the servers + Specified in the ClusterTemplate to indicate the image to boot the servers for the Mesos master and agent. The image binary is loaded in Glance with the attribute 'os_distro = ubuntu'. You can download the `ready-built image @@ -1072,13 +1086,13 @@ TLS (tls-disabled) Log into the servers You can log into the manager and node servers with the account - 'ubuntu' and the keypair specified in the baymodel. + 'ubuntu' and the keypair specified in the ClusterTemplate. Building Mesos image -------------------- -The boot image for Mesos bay is an Ubuntu 14.04 base image with the +The boot image for Mesos cluster is an Ubuntu 14.04 base image with the following middleware pre-installed: - ``docker`` @@ -1086,10 +1100,10 @@ following middleware pre-installed: - ``mesos`` - ``marathon`` -The bay driver provides two ways to create this image, as follows. +The cluster driver provides two ways to create this image, as follows. Diskimage-builder -++++++++++++++++++ ++++++++++++++++++ To run the `diskimage-builder `__ tool @@ -1120,8 +1134,8 @@ Dockerfile To build the image as above but within a Docker container, use the provided `Dockerfile -`__. The -output image will be saved as '/tmp/ubuntu-mesos.qcow2'. +`__. +The output image will be saved as '/tmp/ubuntu-mesos.qcow2'. Following are the typical steps to run a Docker container to build the image:: $ git clone https://git.openstack.org/openstack/magnum @@ -1137,7 +1151,7 @@ Using Marathon Marathon is a Mesos framework for long running applications. 
Docker containers can be deployed via Marathon's REST API. To get the -endpoint for Marathon, run the bay-show command and look for the +endpoint for Marathon, run the cluster-show command and look for the property 'api_address'. Marathon's endpoint is port 8080 on this IP address, so the web console can be accessed at:: @@ -1163,7 +1177,7 @@ For example, you can 'post' a JSON app description to "cmd": "while sleep 10; do date -u +%T; done" } END - $ API_ADDRESS=$(magnum bay-show mesosbay | awk '/ api_address /{print $4}') + $ API_ADDRESS=$(magnum cluster-show mesoscluster | awk '/ api_address /{print $4}') $ curl -X POST -H "Content-Type: application/json" \ http://${API_ADDRESS}:8080/v2/apps -d@app.json @@ -1172,32 +1186,32 @@ For example, you can 'post' a JSON app description to Transport Layer Security ======================== -Magnum uses TLS to secure communication between a bay's services and +Magnum uses TLS to secure communication between a cluster's services and the outside world. TLS is a complex subject, and many guides on it exist already. This guide will not attempt to fully describe TLS, but instead will only cover the necessary steps to get a client set up to -talk to a Bay with TLS. A more in-depth guide on TLS can be found in +talk to a cluster with TLS. A more in-depth guide on TLS can be found in the `OpenSSL Cookbook `_ by Ivan Ristić. -TLS is employed at 3 points in a bay: +TLS is employed at 3 points in a cluster: -1. By Magnum to communicate with the bay API endpoint +1. By Magnum to communicate with the cluster API endpoint -2. By the bay worker nodes to communicate with the master nodes +2. By the cluster worker nodes to communicate with the master nodes 3. By the end-user when they use the native client libraries to - interact with the Bay. This applies to both a CLI or a program - that uses a client for the particular bay. Each client needs a - valid certificate to authenticate and communicate with a Bay. + interact with the cluster. This applies to both a CLI or a program + that uses a client for the particular cluster. Each client needs a + valid certificate to authenticate and communicate with a cluster. The first two cases are implemented internally by Magnum and are not exposed to the users, while the last case involves the users and is described in more details below. -Deploying a secure bay ----------------------- +Deploying a secure cluster +-------------------------- Current TLS support is summarized below: @@ -1211,27 +1225,27 @@ Current TLS support is summarized below: | Mesos | no | +------------+-------------+ -For bay type with TLS support, e.g. Kubernetes and Swarm, TLS is +For cluster type with TLS support, e.g. Kubernetes and Swarm, TLS is enabled by default. To disable TLS in Magnum, you can specify the -parameter '--tls-disabled' in the baymodel. Please note it is not +parameter '--tls-disabled' in the ClusterTemplate. Please note it is not recommended to disable TLS due to security reasons. In the following example, Kubernetes is used to illustrate a secure -bay, but the steps are similar for other bay types that have TLS +cluster, but the steps are similar for other cluster types that have TLS support. 

-First, create a baymodel; by default TLS is enabled in
+First, create a ClusterTemplate; by default TLS is enabled in
 Magnum, therefore it does not need to be specified via a parameter::

-    magnum baymodel-create --name secure-kubernetes \
-                           --keypair-id default \
-                           --external-network-id public \
-                           --image-id fedora-atomic-latest \
-                           --dns-nameserver 8.8.8.8 \
-                           --flavor-id m1.small \
-                           --docker-volume-size 3 \
-                           --coe kubernetes \
-                           --network-driver flannel
+    magnum cluster-template-create --name secure-kubernetes \
+                           --keypair-id default \
+                           --external-network-id public \
+                           --image-id fedora-atomic-latest \
+                           --dns-nameserver 8.8.8.8 \
+                           --flavor-id m1.small \
+                           --docker-volume-size 3 \
+                           --coe kubernetes \
+                           --network-driver flannel

     +-----------------------+--------------------------------------+
     | Property              | Value                                |
@@ -1266,11 +1280,12 @@ Magnum, therefore it does not need to be specified via a parameter::
     +-----------------------+--------------------------------------+


-Now create a bay. Use the baymodel name as a template for bay creation::
+Now create a cluster. Use the ClusterTemplate name as a template for cluster
+creation::

-    magnum bay-create --name secure-k8sbay \
-                      --baymodel secure-kubernetes \
-                      --node-count 1
+    magnum cluster-create --name secure-k8s-cluster \
+                          --cluster-template secure-kubernetes \
+                          --node-count 1

     +--------------------+------------------------------------------------------------+
     | Property           | Value                                                      |
@@ -1281,22 +1296,22 @@ Now create a bay. Use the baymodel name as a template for bay creation::
     | status_reason      | None                                                       |
     | created_at         | 2016-07-25T23:14:06+00:00                                  |
     | updated_at         | None                                                       |
-    | bay_create_timeout | 0                                                          |
+    | create_timeout     | 0                                                          |
     | api_address        | None                                                       |
-    | baymodel_id        | 5519b24a-621c-413c-832f-c30424528b31                       |
+    | cluster_template_id| 5519b24a-621c-413c-832f-c30424528b31                       |
     | master_addresses   | None                                                       |
     | node_count         | 1                                                          |
     | node_addresses     | None                                                       |
     | master_count       | 1                                                          |
     | discovery_url      | https://discovery.etcd.io/ba52a8178e7364d43a323ee4387cf28e |
-    | name               | secure-k8sbay                                              |
+    | name               | secure-k8s-cluster                                         |
     +--------------------+------------------------------------------------------------+


-Now run bay-show command to get the details of the bay and verify that the
-api_address is 'https'::
+Now run the cluster-show command to get the details of the cluster and verify
+that the api_address is 'https'::

-    magnum bay-show secure-k8sbay
+    magnum cluster-show secure-k8s-cluster
     +--------------------+------------------------------------------------------------+
     | Property           | Value                                                      |
     +--------------------+------------------------------------------------------------+
@@ -1306,15 +1321,15 @@ api_address is 'https'::
     | status_reason      | Stack CREATE completed successfully                        |
     | created_at         | 2016-07-25T23:14:06+00:00                                  |
     | updated_at         | 2016-07-25T23:14:10+00:00                                  |
-    | bay_create_timeout | 60                                                         |
+    | create_timeout     | 60                                                         |
     | api_address        | https://192.168.19.86:6443                                 |
-    | baymodel_id        | da2825a0-6d09-4208-b39e-b2db666f1118                       |
+    | cluster_template_id| da2825a0-6d09-4208-b39e-b2db666f1118                       |
     | master_addresses   | ['192.168.19.87']                                          |
     | node_count         | 1                                                          |
     | node_addresses     | ['192.168.19.88']                                          |
     | master_count       | 1                                                          |
     | discovery_url      | https://discovery.etcd.io/3b7fb09733429d16679484673ba3bfd5 |
-    | name               | secure-k8sbay                                              |
+    | name               | secure-k8s-cluster                                         |
     +--------------------+------------------------------------------------------------+

 You can see the api_address contains https in the URL, showing that
 the Kubernetes services are configured securely with SSL certificates
and now any communication to kube-apiserver will be over https. -Interfacing with a secure bay ------------------------------ +Interfacing with a secure cluster +--------------------------------- -To communicate with the API endpoint of a secure bay, you will need so +To communicate with the API endpoint of a secure cluster, you will need so supply 3 SSL artifacts: 1. Your client key @@ -1338,16 +1353,16 @@ There are two ways to obtain these 3 artifacts. Automated +++++++++ -Magnum provides the command 'bay-config' to help the user in setting +Magnum provides the command 'cluster-config' to help the user in setting up the environment and artifacts for TLS, for example:: - magnum bay-config swarmbay --dir mybayconfig + magnum cluster-config swarm-cluster --dir myclusterconfig This will display the necessary environment variables, which you can add to your environment:: export DOCKER_HOST=tcp://172.24.4.5:2376 - export DOCKER_CERT_PATH=mybayconfig + export DOCKER_CERT_PATH=myclusterconfig export DOCKER_TLS_VERIFY=True And the artifacts are placed in the directory specified:: @@ -1387,7 +1402,7 @@ Signed Certificate To authenticate your key, you need to have it signed by a CA. First generate the Certificate Signing Request (CSR). The CSR will be used by Magnum to generate a signed certificate that you will use to - communicate with the Bay. To generate a CSR, openssl requires a + communicate with the cluster. To generate a CSR, openssl requires a config file that specifies a few values. Using the example template below, you can fill in the 'CN' value with your name and save it as client.conf:: @@ -1414,23 +1429,23 @@ Signed Certificate Now that you have your client CSR, you can use the Magnum CLI to send it off to Magnum to get it signed:: - magnum ca-sign --bay secure-k8sbay --csr client.csr > cert.pem + magnum ca-sign --cluster secure-k8s-cluster --csr client.csr > cert.pem Certificate Authority The final artifact you need to retrieve is the CA certificate for - the bay. This is used by your native client to ensure you are only + the cluster. This is used by your native client to ensure you are only communicating with hosts that Magnum set up:: - magnum ca-show --bay secure-k8sbay > ca.pem + magnum ca-show --cluster secure-k8s-cluster > ca.pem User Examples ------------- Here are some examples for using the CLI on a secure Kubernetes and -Swarm bay. You can perform all the TLS set up automatically by:: +Swarm cluster. You can perform all the TLS set up automatically by:: - eval $(magnum bay-config ) + eval $(magnum cluster-config ) Or you can perform the manual steps as described above and specify the TLS options on the CLI. The SSL artifacts are assumed to be @@ -1438,17 +1453,18 @@ saved in local files as follows:: - key.pem: your SSL key - cert.pem: signed certificate -- ca.pem: certificate for bay CA +- ca.pem: certificate for cluster CA + For Kubernetes, you need to get 'kubectl', a kubernetes CLI tool, to -communicate with the bay:: +communicate with the cluster:: wget https://github.com/kubernetes/kubernetes/releases/download/v1.2.0/kubernetes.tar.gz tar -xzvf kubernetes.tar.gz sudo cp -a kubernetes/platforms/linux/amd64/kubectl /usr/bin/kubectl Now let's run some 'kubectl' commands to check the secure communication. 
-If you used 'bay-config', then you can simply run the 'kubectl' command
+If you used 'cluster-config', then you can simply run the 'kubectl' command
 without having to specify the TLS options since they have been defined
 in the environment::

@@ -1458,7 +1474,7 @@ in the environment::

 You can specify the TLS options manually as follows::

-    KUBERNETES_URL=$(magnum bay-show secure-k8sbay |
+    KUBERNETES_URL=$(magnum cluster-show secure-k8s-cluster |
                      awk '/ api_address /{print $4}')
     kubectl version --certificate-authority=ca.pem \
                     --client-key=key.pem \
@@ -1479,12 +1495,12 @@ You can specify the TLS options manually as follows::

 Beside using the environment variables, you can also configure 'kubectl'
 to remember the TLS options::

-    kubectl config set-cluster secure-k8sbay --server=${KUBERNETES_URL} \
+    kubectl config set-cluster secure-k8s-cluster --server=${KUBERNETES_URL} \
         --certificate-authority=${PWD}/ca.pem
     kubectl config set-credentials client --certificate-authority=${PWD}/ca.pem \
         --client-key=${PWD}/key.pem --client-certificate=${PWD}/cert.pem
-    kubectl config set-context secure-k8sbay --cluster=secure-k8sbay --user=client
-    kubectl config use-context secure-k8sbay
+    kubectl config set-context secure-k8s-cluster --cluster=secure-k8s-cluster --user=client
+    kubectl config use-context secure-k8s-cluster

 Then you can use 'kubectl' commands without the certificates::

@@ -1506,7 +1522,7 @@ without installing a certificate in your browser::

 You can then open http://localhost:8001/ui in your browser.

-The examples for Docker are similar. With 'bay-config' set up,
+The examples for Docker are similar. With 'cluster-config' set up,
 you can just run docker commands without TLS options. To specify the
 TLS options manually::

@@ -1520,8 +1536,8 @@ TLS options manually::
 Storing the certificates
 ------------------------

-Magnum generates and maintains a certificate for each bay so that it
-can also communicate securely with the bay. As a result, it is
+Magnum generates and maintains a certificate for each cluster so that it
+can also communicate securely with the cluster. As a result, it is
 necessary to store the certificates in a secure manner. Magnum
 provides the following methods for storing the certificates and this
 is configured in /etc/magnum/magnum.conf in the section [certificates]
@@ -1592,27 +1608,27 @@ As a result, the implementation for the networking models is evolving
 and new models are likely to be introduced in the future.

 For the Neutron infrastructure, the following configuration can
-be set in the baymodel:
+be set in the ClusterTemplate:

 external-network-id
-  The external Neutron network ID to connect to this bay. This
+  The external Neutron network ID to connect to this cluster. This
   is used to connect the cluster to the external internet, allowing
-  the nodes in the bay to access external URL for discovery, image
+  the nodes in the cluster to access external URL for discovery, image
   download, etc. If not specified, the default value is "public"
   and this is valid for a typical devstack.

 fixed-network
-  The Neutron network to use as the private network for the bay nodes.
+  The Neutron network to use as the private network for the cluster nodes.
   If not specified, a new Neutron private network will be created.

 dns-nameserver
-  The DNS nameserver to use for this bay. This is an IP address for
+  The DNS nameserver to use for this cluster. This is an IP address for
   the server and it is used to configure the Neutron subnet of the
   cluster (dns_nameservers).
If not specified, the default DNS is 8.8.8.8, the publicly available DNS. http-proxy, https-proxy, no-proxy - The proxy for the nodes in the bay, to be used when the cluster is + The proxy for the nodes in the cluster, to be used when the cluster is behind a firewall and containers cannot access URL's on the external internet directly. For the parameter http-proxy and https-proxy, the value to provide is a URL and it will be set in the environment @@ -1622,7 +1638,7 @@ http-proxy, https-proxy, no-proxy environment variable NO_PROXY in the nodes. For the networking model to the container, the following configuration -can be set in the baymodel: +can be set in the ClusterTemplate: network-driver The network driver name for instantiating container networks. @@ -1641,7 +1657,7 @@ network-driver Particular network driver may require its own set of parameters for configuration, and these parameters are specified through the labels -in the baymodel. Labels are arbitrary key=value pairs. +in the ClusterTemplate. Labels are arbitrary key=value pairs. When Flannel is specified as the network driver, the following optional labels can be added: @@ -1667,7 +1683,7 @@ flannel_backend messages, but it requires all the nodes to be on the same L2 network. The private Neutron network that Magnum creates does meet this requirement; therefore if the parameter *fixed_network* - is not specified in the baymodel, *host-gw* is the best choice for + is not specified in the ClusterTemplate, *host-gw* is the best choice for the Flannel backend. @@ -1684,13 +1700,14 @@ Performance tuning for periodic task ------------------------------------ Magnum's periodic task performs a `stack-get` operation on the Heat stack -underlying each of its bays. If you have a large amount of bays this can create -considerable load on the Heat API. To reduce that load you can configure Magnum -to perform one global `stack-list` per periodic task instead instead of one per -bay. This is disabled by default, both from the Heat and Magnum side since it -causes a security issue, though: any user in any tenant holding the `admin` -role can perform a global `stack-list` operation if Heat is configured to allow -it for Magnum. If you want to enable it nonetheless, proceed as follows: +underlying each of its clusters. If you have a large amount of clusters this +can create considerable load on the Heat API. To reduce that load you can +configure Magnum to perform one global `stack-list` per periodic task instead +of one per cluster. This is disabled by default, both from the Heat and Magnum +side since it causes a security issue, though: any user in any tenant holding +the `admin` role can perform a global `stack-list` operation if Heat is +configured to allow it for Magnum. If you want to enable it nonetheless, +proceed as follows: 1. Set `periodic_global_stack_list` in magnum.conf to `True` (`False` by default). @@ -1729,12 +1746,12 @@ container is also deleted. To manage this space in a flexible manner independent of the Nova instance flavor, Magnum creates a separate Cinder block volume for each -node in the bay, mounts it to the node and configures it to be used as +node in the cluster, mounts it to the node and configures it to be used as ephemeral storage. Users can specify the size of the Cinder volume with -the baymodel attribute 'docker-volume-size'. The default size is 5GB. -Currently the block size is fixed at bay creation time, but future +the ClusterTemplate attribute 'docker-volume-size'. The default size is 5GB. 
+Currently the block size is fixed at cluster creation time, but future lifecycle operations may allow modifying the block size during the -life of the bay. +life of the cluster. To use the Cinder block storage, there is a number of Docker storage drivers available. Only 'devicemapper' is supported as the @@ -1744,7 +1761,7 @@ for the storage drivers that should be considered. For instance, 'OperlayFS' may offer better performance, but it may not support the filesystem metadata needed to use SELinux, which is required to support strong isolation between containers running in the same -bay. Using the 'devicemapper' driver does allow the use of SELinux. +cluster. Using the 'devicemapper' driver does allow the use of SELinux. Persistent storage @@ -1774,7 +1791,7 @@ to Cinder to unmount the volume's filesystem, making it available to be mounted on other nodes. Magnum supports these features to use Cinder as persistent storage -using the baymodel attribute 'volume-driver' and the support matrix +using the ClusterTemplate attribute 'volume-driver' and the support matrix for the COE types is summarized as follows: +--------+-------------+-------------+-------------+ @@ -1797,30 +1814,32 @@ currently meets this requirement. **NOTE:** The following steps are a temporary workaround, and Magnum's development team is working on a long term solution to automate these steps. -1. Create the baymodel. +1. Create the ClusterTemplate. Specify 'cinder' as the volume-driver for Kubernetes:: - magnum baymodel-create --name k8sbaymodel \ - --image-id fedora-23-atomic-7 \ - --keypair-id testkey \ - --external-network-id public \ - --dns-nameserver 8.8.8.8 \ - --flavor-id m1.small \ - --docker-volume-size 5 \ - --network-driver flannel \ - --coe kubernetes \ - --volume-driver cinder + magnum cluster-template-create --name k8s-cluster-template \ + --image-id fedora-23-atomic-7 \ + --keypair-id testkey \ + --external-network-id public \ + --dns-nameserver 8.8.8.8 \ + --flavor-id m1.small \ + --docker-volume-size 5 \ + --network-driver flannel \ + --coe kubernetes \ + --volume-driver cinder -2. Create the bay:: +2. Create the cluster:: - magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1 + magnum cluster-create --name k8s-cluster \ + --cluster-template k8s-cluster-template \ + --node-count 1 3. Configure kubelet. To allow Kubernetes to interface with Cinder, log into each minion - node of your bay and perform step 4 through 6:: + node of your cluster and perform step 4 through 6:: sudo vi /etc/kubernetes/kubelet @@ -1838,7 +1857,7 @@ development team is working on a long term solution to automate these steps. sudo vi /etc/kubernetes/kube_openstack_config The username, tenant-name and region entries have been filled in with the - Keystone values of the user who created the bay. Enter the password + Keystone values of the user who created the cluster. Enter the password of this user on the entry for password:: password=ChangeMe @@ -1868,10 +1887,10 @@ Following is an example illustrating how Cinder is used in a pod. ID=$(cinder create --display-name=test-repo 1 | awk -F'|' '$2~/^[[:space:]]*id/ {print $3}') - The command will generate the volume with a ID. The volume ID will be specified in - Step 2. + The command will generate the volume with a ID. The volume ID will be + specified in Step 2. -2. Create a pod in this bay and mount this cinder volume to the pod. +2. Create a pod in this cluster and mount this cinder volume to the pod. 
 
   Create a file (e.g nginx-cinder.yaml) describing the pod::
 
      cat > nginx-cinder.yaml << END
@@ -1926,7 +1945,7 @@ Using Cinder in Swarm
 Using Cinder in Mesos
 +++++++++++++++++++++
 
-1. Create the baymodel.
+1. Create the ClusterTemplate.
 
    Specify 'rexray' as the volume-driver for Mesos. As an option, you
   can specify in a label the attributes 'rexray_preempt' to enable
@@ -1934,24 +1953,26 @@ Using Cinder in Mesos
   hosts are using the volume. If this is set to false, the driver will
   ensure data safety by locking the volume::
 
-      magnum baymodel-create --name mesosbaymodel \
-                             --image-id ubuntu-mesos \
-                             --keypair-id testkey \
-                             --external-network-id public \
-                             --dns-nameserver 8.8.8.8 \
-                             --master-flavor-id m1.magnum \
-                             --docker-volume-size 4 \
-                             --tls-disabled \
-                             --flavor-id m1.magnum \
-                             --coe mesos \
-                             --volume-driver rexray \
-                             --labels rexray-preempt=true
+      magnum cluster-template-create --name mesos-cluster-template \
+                                     --image-id ubuntu-mesos \
+                                     --keypair-id testkey \
+                                     --external-network-id public \
+                                     --dns-nameserver 8.8.8.8 \
+                                     --master-flavor-id m1.magnum \
+                                     --docker-volume-size 4 \
+                                     --tls-disabled \
+                                     --flavor-id m1.magnum \
+                                     --coe mesos \
+                                     --volume-driver rexray \
+                                     --labels rexray-preempt=true
 
-2. Create the Mesos bay::
+2. Create the Mesos cluster::
 
-      magnum bay-create --name mesosbay --baymodel mesosbaymodel --node-count 1
+      magnum cluster-create --name mesos-cluster \
+                            --cluster-template mesos-cluster-template \
+                            --node-count 1
 
-3. Create the cinder volume and configure this bay::
+3. Create the cinder volume and configure this cluster::
 
      cinder create --display-name=redisdata 1
 
@@ -1979,10 +2000,10 @@ Using Cinder in Mesos
      }
      END
 
-**NOTE:** When the Mesos bay is created using this baymodel, the Mesos bay
-will be configured so that a filesystem on an existing cinder volume can
-be mounted in a container by configuring the parameters to mount the cinder
-volume in the json file ::
+**NOTE:** When the Mesos cluster is created using this ClusterTemplate, the
+Mesos cluster will be configured so that a filesystem on an existing cinder
+volume can be mounted in a container by configuring the parameters to mount
+the cinder volume in the json file ::
 
      "parameters": [
        { "key": "volume-driver", "value": "rexray" },
@@ -1991,7 +2012,7 @@ volume in the json file ::
 
 4. Create the container using Marathon REST API ::
 
-     MASTER_IP=$(magnum bay-show mesosbay | awk '/ api_address /{print $4}')
+     MASTER_IP=$(magnum cluster-show mesos-cluster | awk '/ api_address /{print $4}')
 
     curl -X POST -H "Content-Type: application/json" \
       http://${MASTER_IP}:8080/v2/apps -d@mesos.json
@@ -2013,7 +2034,7 @@ The image is tightly coupled with the following in Magnum:
 
 1. Heat templates to orchestrate the configuration.
 
-2. Template definition to map baymodel parameters to Heat
+2. Template definition to map ClusterTemplate parameters to Heat
   template parameters.
 
 3. Set of scripts to configure software.
@@ -2118,8 +2139,8 @@ The login for this image is *core*.
 
 Kubernetes on Ironic
 --------------------
 
-This image is built manually using diskimagebuilder. The scripts and instructions
-are included in `Magnum code repo
+This image is built manually using diskimagebuilder. The scripts and
+instructions are included in `Magnum code repo
 `_. Currently Ironic is not fully supported yet, therefore more details
 will be provided when this driver has been fully tested.
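Returning to the Marathon example in step 4 above: once the application JSON
has been posted, the deployment can be checked through the same REST endpoint.
A minimal sketch, assuming the cluster name ``mesos-cluster`` used earlier::

    MASTER_IP=$(magnum cluster-show mesos-cluster | awk '/ api_address /{print $4}')
    # GET /v2/apps lists the applications Marathon is managing; the app
    # created from mesos.json should appear here once it has been accepted.
    curl http://${MASTER_IP}:8080/v2/apps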
@@ -2252,18 +2273,21 @@ Supported Events
 ----------------
 
 The following table displays the corresponding relationship between resource
-types and operations.
+types and operations. The bay type is deprecated and will be removed in a
+future version. Cluster is the new equivalent term.
 
 +---------------+----------------------------+-------------------------+
 | resource type | supported operations       | typeURI                 |
 +===============+============================+=========================+
-| bay           | create, update, delete     | service/magnum/bay      |
+| bay           | create, update, delete     | service/magnum/bay      |
++---------------+----------------------------+-------------------------+
+| cluster       | create, update, delete     | service/magnum/cluster  |
 +---------------+----------------------------+-------------------------+
 
-Example Notification - Bay Create
----------------------------------
+Example Notification - Cluster Create
+-------------------------------------
 
-The following is an example of a notification that is sent when a bay is
+The following is an example of a notification that is sent when a cluster is
 created. This example can be applied for any ``create``, ``update`` or
 ``delete`` event that is seen in the table above. The ```` and
 ``typeURI`` fields will be change.
@@ -2271,7 +2295,7 @@ created. This example can be applied for any ``create``, ``update`` or
 .. code-block:: javascript
 
    {
-     "event_type": "magnum.bay.created",
+     "event_type": "magnum.cluster.created",
      "message_id": "0156ee79-b35f-4cef-ac37-d4a85f231c69",
      "payload": {
        "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
@@ -2282,11 +2306,11 @@ created. This example can be applied for any ``create``, ``update`` or
        "project_id": "3d4a50a9-2b59-438b-bf19-c231f9c7625a"
      },
      "target": {
-       "typeURI": "service/magnum/bay",
+       "typeURI": "service/magnum/cluster",
        "id": "openstack:1c2fc591-facb-4479-a327-520dade1ea15"
      },
      "observer": {
-       "typeURI": "service/magnum/bay",
+       "typeURI": "service/magnum/cluster",
        "id": "openstack:3d4a50a9-2b59-438b-bf19-c231f9c7625a"
      },
      "eventType": "activity",