Rename Bay to Cluster in docs

This is a continuation of patch 353726 and includes all of the doc changes
for replacing the term bay with cluster and BayModel with ClusterTemplate.

Change-Id: Ia7efaed157971ad7631ddffb9c1400f3516720f0
Implements: blueprint rename-bay-to-cluster
commit 584380f8ee (parent 52224c5c09)

=======================
Cluster Type Definition
=======================

There are three key pieces to a Cluster Type Definition:

1. Heat Stack template - The HOT file that Magnum will use to generate a
   cluster using a Heat Stack.
2. Template definition - Magnum's interface for interacting with the Heat
   template.
3. Definition Entry Point - Used to advertise the available Cluster Types.

The Heat Stack Template
-----------------------

The Heat Stack Template is where most of the real work happens. The result of
the Heat Stack Template should be a full Container Orchestration Environment.
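
To make the shape of such a template concrete, here is a minimal, hypothetical HOT skeleton written out with a heredoc (illustrative only; a real driver template defines servers, networks, wait conditions and many more parameters and outputs than shown here):

```shell
# Hypothetical, heavily trimmed Heat Stack Template skeleton -- illustrative
# only, not a working Magnum driver template.
cat > example-cluster.yaml << 'EOF'
heat_template_version: 2014-10-16

description: Skeleton of a cluster Heat Stack Template

parameters:
  node_count:
    type: number
    default: 1

outputs:
  api_address:
    description: An output Magnum would consume
    value: "https://example.invalid:6443"
EOF
```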

The Template Definition
-----------------------

Template definitions are a mapping of Magnum object attributes and Heat
template parameters, along with Magnum consumable template outputs. A
Cluster Type Definition indicates which Cluster Types it can provide.
Cluster Types are how Magnum determines which of the enabled Cluster
Type Definitions it will use for a given cluster.

The Definition Entry Point
--------------------------

Each Template Definition should have an Entry Point registered as, for
example, `example_template = example_template:ExampleTemplate` in the
`magnum.template_definitions` group.
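
A sketch of how that Entry Point would be declared in a template project's packaging metadata (assuming a standard setuptools/pbr layout; the file written here is a minimal stand-in, not a complete setup.cfg):

```shell
# Minimal stand-in setup.cfg showing only the entry point registration
# (a real project would also carry [metadata] and other sections).
cat > setup.cfg << 'EOF'
[entry_points]
magnum.template_definitions =
    example_template = example_template:ExampleTemplate
EOF
```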

Installing Cluster Templates
----------------------------

Because Cluster Type Definitions are basically Python projects, they can be
worked with like any other Python project. They can be cloned from version
control and installed or uploaded to a package index and installed via
utilities such as pip.

Enabling a Cluster Type is as simple as adding its Entry Point to the
`enabled_definitions` config option in magnum.conf::

    # Setup python environment and install Magnum

If you're using devstack, you can copy and modify the devstack configuration::

    source /opt/stack/devstack/openrc demo demo
    iniset functional_creds.conf auth password $OS_PASSWORD

Set the DNS name server to be used by your cluster nodes (e.g. 8.8.8.8)::

    # update DNS name server
    source /opt/stack/devstack/openrc demo demo

required. All the services will be created normally; services that specify the
load balancer will also be created successfully, but a load balancer will not
be created.

To enable the load balancer, log into each master node of your cluster and
perform the following steps:

1. Configure kube-apiserver::

    sudo vi /etc/sysconfig/kube_openstack_config

The username and tenant-name entries have been filled in with the
Keystone values of the user who created the cluster. Enter the password
of this user on the entry for password::

    password=ChangeMe

This only needs to be done once. The steps can be reversed to disable the
load balancer feature. Before deleting the Kubernetes cluster, make sure to
delete all the services that created load balancers. Because the Neutron
objects created by Kubernetes are not managed by Heat, they will not be
deleted by Heat and this will cause the cluster-delete operation to fail. If
this occurs, delete the neutron objects manually (lb-pool, lb-vip, lb-member,
lb-healthmonitor) and then run cluster-delete again.

Steps for the users
===================

Create a file (e.g. nginx-service.yaml) describing a service for the nginx pod::

        app: nginx
      type: LoadBalancer

Assuming that a Kubernetes cluster named k8sclusterv1 has been created, deploy
the pod and service with the following commands. Please refer to the quickstart
guide on how to connect to Kubernetes running on the launched cluster::

    kubectl create -f nginx.yaml

Alternatively, associating a floating IP can be done on the command line by
allocating a floating IP, finding the port of the VIP, and associating the
floating IP to the port.
The commands shown below are for illustration purposes and assume
that there is only one service with load balancer running in the cluster and
no other load balancers exist except for those created for the cluster.

First create a floating IP on the public network::

with Neutron in this sequence:

These Neutron objects can be verified as follows. For the load balancer pool::

    neutron lb-pool-list
    +--------------------------------------+--------------------------------------------------+----------+-------------+----------+----------------+--------+
    | id                                   | name                                             | provider | lb_method   | protocol | admin_state_up | status |
    +--------------------------------------+--------------------------------------------------+----------+-------------+----------+----------------+--------+
    | 241357b3-2a8f-442e-b534-bde7cd6ba7e4 | a1f03e40f634011e59c9efa163eae8ab                 | haproxy  | ROUND_ROBIN | TCP      | True           | ACTIVE |
    | 82b39251-1455-4eb6-a81e-802b54c2df29 | k8sclusterv1-iypacicrskib-api_pool-fydshw7uvr7h  | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |
    | e59ea983-c6e8-4cec-975d-89ade6b59e50 | k8sclusterv1-iypacicrskib-etcd_pool-qbpo43ew2m3x | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |
    +--------------------------------------+--------------------------------------------------+----------+-------------+----------+----------------+--------+

Note that 2 load balancers already exist to implement high availability for the
cluster (api and etcd). The new load balancer for the Kubernetes service uses

add the following line to your `local.conf` file::

    enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer

Create a local.sh to automatically make necessary networking changes during
the devstack deployment process. This will allow clusters spawned by magnum to
access the internet through PUBLIC_INTERFACE::

    cat > local.sh << 'END_LOCAL_SH'

Create a domain and domain admin for trust::

        --user $TRUSTEE_DOMAIN_ADMIN_ID --domain $TRUSTEE_DOMAIN_ID \
        admin

Create a keypair for use with the ClusterTemplate::

    test -f ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    nova keypair-add --pub-key ~/.ssh/id_rsa.pub testkey

To run unit test coverage and check percentage of code covered::

    tox -e cover

To discover and interact with templates, please refer to
`<http://docs.openstack.org/developer/magnum/dev/bay-template-example.html>`_

Exercising the Services Using Devstack
======================================

magnum will periodically send metrics to ceilometer::

    enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
    END

If you want to deploy Docker Registry 2.0 in your cluster, you should enable
swift in devstack::

    cat >> /opt/stack/devstack/local.conf << END
    enable_service s-proxy

To list the available commands and resources for magnum, use::

    magnum help

To list out the health of the internal services, namely conductor, of magnum,
use::

    magnum service-list

    | 1  | oxy-dev.hq1-0a5a3c02.hq1.abcde.com | magnum-conductor | up    |
    +----+------------------------------------+------------------+-------+

Create a keypair for use with the ClusterTemplate::

    test -f ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    nova keypair-add --pub-key ~/.ssh/id_rsa.pub testkey

Building a Kubernetes Cluster - Based on Fedora Atomic
======================================================

Create a ClusterTemplate. This is similar in nature to a flavor and describes
to magnum how to construct the cluster. The ClusterTemplate specifies a Fedora
Atomic image so the clusters which use this ClusterTemplate will be based on
Fedora Atomic. The COE (Container Orchestration Engine) and keypair need to
be specified as well::

    magnum cluster-template-create --name k8s-cluster-template \
                                   --image-id fedora-atomic-latest \
                                   --keypair-id testkey \
                                   --external-network-id public \
                                   --network-driver flannel \
                                   --coe kubernetes

Create a cluster. Use the ClusterTemplate name as a template for cluster
creation. This cluster will result in one master kubernetes node and one minion
node::

    magnum cluster-create --name k8s-cluster \
                          --cluster-template k8s-cluster-template \
                          --node-count 1

Clusters will have an initial status of CREATE_IN_PROGRESS. Magnum will update
the status to CREATE_COMPLETE when it is done creating the cluster. Do not
create containers, pods, services, or replication controllers before magnum
finishes creating the cluster. They will likely not be created, and may cause
magnum to become confused.
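
A minimal sketch of gating on that status from a script (the `cluster_ready` helper below is hypothetical; a real script would feed it the status field reported by `magnum cluster-show`):

```shell
# Hypothetical helper: treat only the terminal "COMPLETE" states as safe
# for creating containers, pods, services or replication controllers.
cluster_ready() {
    case "$1" in
        CREATE_COMPLETE|UPDATE_COMPLETE) return 0 ;;
        *) return 1 ;;
    esac
}

cluster_ready CREATE_COMPLETE    && echo "safe to create workloads"
cluster_ready CREATE_IN_PROGRESS || echo "cluster still provisioning"
```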

The existing clusters can be listed as follows::

    magnum cluster-list

    +--------------------------------------+-------------+------------+-----------------+
    | uuid                                 | name        | node_count | status          |
    +--------------------------------------+-------------+------------+-----------------+
    | 9dccb1e6-02dc-4e2b-b897-10656c5339ce | k8s-cluster | 1          | CREATE_COMPLETE |
    +--------------------------------------+-------------+------------+-----------------+

More detailed information for a given cluster is obtained via::

    magnum cluster-show k8s-cluster

After a cluster is created, you can dynamically add/remove node(s) to/from the
cluster by updating the node_count attribute. For example, to add one more
node::

    magnum cluster-update k8s-cluster replace node_count=2

Clusters in the process of updating will have a status of UPDATE_IN_PROGRESS.
Magnum will update the status to UPDATE_COMPLETE when it is done updating
the cluster.

**NOTE:** Reducing node_count will remove all the existing pods on the nodes
that are deleted. If you choose to reduce the node_count, magnum will first

node_count so any removed pods can be automatically recovered on your
remaining nodes.

Heat can be used to see detailed information on the status of a stack or
specific cluster:

To check the list of all cluster stacks::

    openstack stack list

To check an individual cluster's stack::

    openstack stack show <stack-name or stack_id>

Monitoring cluster status in detail (e.g., creating, updating)::

    CLUSTER_HEAT_NAME=$(openstack stack list | \
                        awk "/\sk8s-cluster-/{print \$4}")
    echo ${CLUSTER_HEAT_NAME}
    openstack stack resource list ${CLUSTER_HEAT_NAME}

Building a Kubernetes Cluster - Based on CoreOS
===============================================

You can create a Kubernetes cluster based on CoreOS as an alternative to
Atomic. First, download the official CoreOS image::

    wget http://beta.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
    bunzip2 coreos_production_openstack_image.img.bz2

Upload the image to glance::

        --os-distro=coreos \
        --file=coreos_production_openstack_image.img

Create a CoreOS Kubernetes ClusterTemplate, which is similar to the Atomic
Kubernetes ClusterTemplate, except for pointing to a different image::

    magnum cluster-template-create --name k8s-cluster-template-coreos \
                                   --image-id CoreOS \
                                   --keypair-id testkey \
                                   --external-network-id public \
                                   --network-driver flannel \
                                   --coe kubernetes

Create a CoreOS Kubernetes cluster. Use the CoreOS ClusterTemplate as a
template for cluster creation::

    magnum cluster-create --name k8s-cluster \
                          --cluster-template k8s-cluster-template-coreos \
                          --node-count 2

Using a Kubernetes Cluster
==========================

**NOTE:** For the following examples, only one minion node is required in the
k8s cluster created previously.

Kubernetes provides a number of examples you can use to check that things are
working. You may need to clone kubernetes using::

Now that you have your client CSR, you can use the Magnum CLI to send it off
to Magnum to get it signed and also download the signing cert::

    magnum ca-sign --cluster k8s-cluster --csr client.csr > client.crt
    magnum ca-show --cluster k8s-cluster > ca.crt

Here's how to set up the replicated redis example. Now we create a pod for the
redis-master::

    KUBERNETES_URL=$(magnum cluster-show k8s-cluster |
                     awk '/ api_address /{print $4}')

    # Set kubectl to use the correct certs
    kubectl config set-cluster k8s-cluster --server=${KUBERNETES_URL} \
        --certificate-authority=$(pwd)/ca.crt
    kubectl config set-credentials client --certificate-authority=$(pwd)/ca.crt \
        --client-key=$(pwd)/client.key --client-certificate=$(pwd)/client.crt
    kubectl config set-context k8s-cluster --cluster=k8s-cluster --user=client
    kubectl config use-context k8s-cluster

    # Test the cert and connection works
    kubectl version
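
The awk filter used to build KUBERNETES_URL simply picks the value column out of the matching `cluster-show` row; it can be sanity-checked offline against a captured line of that output (the sample row below mirrors the table shown later in this guide):

```shell
# awk splits the row on whitespace: $1='|', $2='api_address', $3='|',
# $4=the URL -- so '{print $4}' yields the endpoint itself.
sample_row='| api_address        | https://172.24.4.4:6443        |'
KUBERNETES_URL=$(echo "$sample_row" | awk '/ api_address /{print $4}')
echo "$KUBERNETES_URL"   # https://172.24.4.4:6443
```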

redis slaves and sentinels::

    kubectl create -f ./redis-sentinel-controller.yaml

Full lifecycle and introspection operations for each object are supported.
For example, magnum cluster-create, magnum cluster-template-delete.

Now there are four redis instances (one master and three slaves) running
across the cluster, replicating data between one another.

Run the cluster-show command to get the IP of the cluster host on which the
redis-master is running::

    magnum cluster-show k8s-cluster

    +--------------------+------------------------------------------------------------+
    | Property           | Value                                                      |
    +--------------------+------------------------------------------------------------+
    | status             | CREATE_COMPLETE                                            |
    | uuid               | cff82cd0-189c-4ede-a9cb-2c0af6997709                       |
    | stack_id           | 7947844a-8e18-4c79-b591-ecf0f6067641                       |
    | status_reason      | Stack CREATE completed successfully                        |
    | created_at         | 2016-05-26T17:45:57+00:00                                  |
    | updated_at         | 2016-05-26T17:50:02+00:00                                  |
    | create_timeout     | 60                                                         |
    | api_address        | https://172.24.4.4:6443                                    |
    | cluster_template_id| e73298e7-e621-4d42-b35b-7a1952b97158                       |
    | master_addresses   | ['172.24.4.6']                                             |
    | node_count         | 1                                                          |
    | node_addresses     | ['172.24.4.5']                                             |
    | master_count       | 1                                                          |
    | discovery_url      | https://discovery.etcd.io/4caaa65f297d4d49ef0a085a7aecf8e0 |
    | name               | k8s-cluster                                                |
    +--------------------+------------------------------------------------------------+

The output here indicates the redis-master is running on the cluster host with
IP address 172.24.4.5. To access the redis master::

    ssh fedora@172.24.4.5
    REDIS_ID=$(sudo docker ps | grep redis:v1 | grep k8s_master | awk '{print $1}')

Additional useful commands from a given minion::

    kubectl get svc      # Get services
    kubectl get nodes    # Get nodes

After you finish using the cluster, you want to delete it. A cluster can be
deleted as follows::

    magnum cluster-delete k8s-cluster

Building and Using a Swarm Cluster
==================================

Create a ClusterTemplate. It is very similar to the Kubernetes ClusterTemplate,
except for the absence of some Kubernetes-specific arguments and the use of
'swarm' as the COE::

    magnum cluster-template-create --name swarm-cluster-template \
                                   --image-id fedora-atomic-latest \
                                   --keypair-id testkey \
                                   --external-network-id public \

    http://docs.openstack.org/developer/magnum/magnum-proxy.html

Finally, create the cluster. Use the ClusterTemplate 'swarm-cluster-template'
as a template for cluster creation. This cluster will result in one swarm
manager node and two extra agent nodes::

    magnum cluster-create --name swarm-cluster \
                          --cluster-template swarm-cluster-template \
                          --node-count 2

Now that we have a swarm cluster we can start interacting with it::

    magnum cluster-show swarm-cluster

    +--------------------+------------------------------------------------------------+
    | Property           | Value                                                      |
    +--------------------+------------------------------------------------------------+
    | status             | CREATE_COMPLETE                                            |
    | uuid               | eda91c1e-6103-45d4-ab09-3f316310fa8e                       |
    | stack_id           | 7947844a-8e18-4c79-b591-ecf0f6067641                       |
    | status_reason      | Stack CREATE completed successfully                        |
    | created_at         | 2015-04-20T19:05:27+00:00                                  |
    | updated_at         | 2015-04-20T19:06:08+00:00                                  |
    | create_timeout     | 60                                                         |
    | api_address        | https://172.24.4.4:6443                                    |
    | cluster_template_id| e73298e7-e621-4d42-b35b-7a1952b97158                       |
    | master_addresses   | ['172.24.4.6']                                             |
    | node_count         | 2                                                          |
    | node_addresses     | ['172.24.4.5']                                             |
    | master_count       | 1                                                          |
    | discovery_url      | https://discovery.etcd.io/4caaa65f297d4d49ef0a085a7aecf8e0 |
    | name               | swarm-cluster                                              |
    +--------------------+------------------------------------------------------------+

We now need to set up the docker CLI to use the swarm cluster we have created
with the appropriate credentials.

Create a dir to store certs and cd into it. The `DOCKER_CERT_PATH` env variable
is consumed by docker which expects ca.pem, key.pem and cert.pem to be in that
|
|||
Now that you have your client CSR use the Magnum CLI to get it signed and also
|
||||
download the signing cert.::
|
||||
|
||||
magnum ca-sign --bay swarmbay --csr client.csr > cert.pem
|
||||
magnum ca-show --bay swarmbay > ca.pem
|
||||
magnum ca-sign --cluster swarm-cluster --csr client.csr > cert.pem
|
||||
magnum ca-show --cluster swarm-cluster > ca.pem
|
||||
|
||||
Set the CLI to use TLS . This env var is consumed by docker.::
|
||||
|
||||
|

Set the correct host to use, which is the public IP address of the swarm API
server endpoint. This env var is consumed by docker::

    export DOCKER_HOST=$(magnum cluster-show swarm-cluster | awk '/ api_address /{print substr($4,9)}')
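
Here `substr($4,9)` drops the first eight characters of the URL field, i.e. the `https://` scheme, leaving just `host:port` as docker expects; checked against a captured sample row:

```shell
# $4 is 'https://172.24.4.4:6443'; characters 1-8 are 'https://', so
# substr($4,9) starts at character 9 and returns 'host:port' only.
sample_row='| api_address        | https://172.24.4.4:6443        |'
DOCKER_HOST_VALUE=$(echo "$sample_row" | awk '/ api_address /{print substr($4,9)}')
echo "$DOCKER_HOST_VALUE"   # 172.24.4.4:6443
```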

Next we will create a container in this swarm cluster. This container will ping
the address 8.8.8.8 four times::

    docker run --rm -it cirros:latest ping -c 4 8.8.8.8

You should see a similar output to::

    4 packets transmitted, 4 packets received, 0% packet loss
    round-trip min/avg/max = 25.226/25.340/25.513 ms

Building and Using a Mesos Cluster
==================================

Provisioning a mesos cluster requires an Ubuntu-based image with some packages
pre-installed. To build and upload such an image, please refer to
`<http://docs.openstack.org/developer/magnum/dev/mesos.html>`_
@ -605,46 +620,51 @@ Alternatively, you can download and upload a pre-built image::
|
|||
--disk-format=qcow2 --container-format=bare \
|
||||
--os-distro=ubuntu --file=ubuntu-14.04.3-mesos-0.25.0.qcow2
|
||||
|
||||
Then, create a baymodel by using 'mesos' as the COE, with the rest of arguments
|
||||
similar to the Kubernetes baymodel::
|
||||
Then, create a ClusterTemplate by using 'mesos' as the COE, with the rest of
|
||||
arguments similar to the Kubernetes ClusterTemplate::
|
||||
|
||||
magnum baymodel-create --name mesosbaymodel --image-id ubuntu-mesos \
|
||||
magnum cluster-template-create --name mesos-cluster-template --image-id ubuntu-mesos \
|
||||
--keypair-id testkey \
|
||||
--external-network-id public \
|
||||
--dns-nameserver 8.8.8.8 \
|
||||
--flavor-id m1.small \
|
||||
--coe mesos
|
||||
|
||||
Finally, create the bay. Use the baymodel 'mesosbaymodel' as a template for
|
||||
bay creation. This bay will result in one mesos master node and two mesos
|
||||
slave nodes::
|
||||
Finally, create the cluster. Use the ClusterTemplate 'mesos-cluster-template'
|
||||
as a template for cluster creation. This cluster will result in one mesos
|
||||
master node and two mesos slave nodes::
|
||||
|
||||
magnum bay-create --name mesosbay --baymodel mesosbaymodel --node-count 2
|
||||
magnum cluster-create --name mesos-cluster \
|
||||
--cluster-template mesos-cluster-template \
|
||||
--node-count 2
|
||||

Now that we have a mesos bay we can start interacting with it. First we need
to make sure the bay's status is 'CREATE_COMPLETE'::
Now that we have a mesos cluster we can start interacting with it. First we
need to make sure the cluster's status is 'CREATE_COMPLETE'::

    $ magnum bay-show mesosbay

    +--------------------+--------------------------------------+
    | Property           | Value                                |
    +--------------------+--------------------------------------+
    | status             | CREATE_COMPLETE                      |
    | uuid               | ff727f0d-72ca-4e2b-9fef-5ec853d74fdf |
    | status_reason      | Stack CREATE completed successfully  |
    | created_at         | 2015-06-09T20:21:43+00:00            |
    | updated_at         | 2015-06-09T20:28:18+00:00            |
    | bay_create_timeout | 60                                   |
    | api_address        | 172.24.4.115                         |
    | baymodel_id        | 92dbda62-32d4-4435-88fc-8f42d514b347 |
    | node_count         | 2                                    |
    | node_addresses     | [u'172.24.4.116', u'172.24.4.117']   |
    | master_count       | 1                                    |
    | discovery_url      | None                                 |
    | name               | mesosbay                             |
    +--------------------+--------------------------------------+

    $ magnum cluster-show mesos-cluster

    +---------------------+--------------------------------------+
    | Property            | Value                                |
    +---------------------+--------------------------------------+
    | status              | CREATE_COMPLETE                      |
    | uuid                | ff727f0d-72ca-4e2b-9fef-5ec853d74fdf |
    | stack_id            | 7947844a-8e18-4c79-b591-ecf0f6067641 |
    | status_reason       | Stack CREATE completed successfully  |
    | created_at          | 2015-06-09T20:21:43+00:00            |
    | updated_at          | 2015-06-09T20:28:18+00:00            |
    | create_timeout      | 60                                   |
    | api_address         | https://172.24.4.115:6443            |
    | cluster_template_id | 92dbda62-32d4-4435-88fc-8f42d514b347 |
    | master_addresses    | ['172.24.4.115']                     |
    | node_count          | 2                                    |
    | node_addresses      | ['172.24.4.116', '172.24.4.117']     |
    | master_count        | 1                                    |
    | discovery_url       | None                                 |
    | name                | mesos-cluster                        |
    +---------------------+--------------------------------------+

Next we will create a container in this bay by using the REST API of Marathon.
This container will ping the address 8.8.8.8::
Next we will create a container in this cluster by using the REST API of
Marathon. This container will ping the address 8.8.8.8::

    $ cat > mesos.json << END
    {
@ -662,7 +682,7 @@ This container will ping the address 8.8.8.8::
      "cmd": "ping 8.8.8.8"
    }
    END
    $ MASTER_IP=$(magnum bay-show mesosbay | awk '/ api_address /{print $4}')
    $ MASTER_IP=$(magnum cluster-show mesos-cluster | awk '/ api_address /{print $4}')
    $ curl -X POST -H "Content-Type: application/json" \
          http://${MASTER_IP}:8080/v2/apps -d@mesos.json
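The POST above needs a live Marathon endpoint, but the JSON handling and the awk extraction can be sanity-checked offline. A self-contained sketch (the app definition body and the file names are made up, since the full mesos.json is elided in this diff):

```shell
# Illustrative: 1) verify a heredoc-written app definition parses as JSON;
# 2) show how the awk filter pulls api_address out of a saved table.
cat > mesos.json << 'END'
{
  "id": "ping-test",
  "instances": 1,
  "cpus": 0.5,
  "mem": 512,
  "cmd": "ping 8.8.8.8"
}
END
python3 -m json.tool < mesos.json > /dev/null && echo "mesos.json parses"

# Saved copy of the relevant `magnum cluster-show` row (value made up).
cat > cluster-show.txt << 'END'
| api_address        | 172.24.4.115                         |
END
# awk matches the row and prints the 4th whitespace-separated field.
MASTER_IP=$(awk '/ api_address /{print $4}' cluster-show.txt)
echo "Marathon endpoint would be http://${MASTER_IP}:8080/v2/apps"
```

Catching a malformed heredoc locally is cheaper than debugging a rejected POST against Marathon.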
@ -1,11 +1,14 @@
===========================
 Heat Template Definitions
===========================
====================
Heat Stack Templates
====================

Heat Templates are what Magnum uses to generate a Bay. These various template
definitions provide a mapping of Magnum object attributes to Heat template
parameters, along with Magnum consumable template outputs. The result of a
Heat template should be a full Container Orchestration Environment.
Heat Stack Templates are what Magnum passes to Heat to generate a cluster. For
each ClusterTemplate resource in Magnum, a Heat stack is created to arrange all
of the cloud resources needed to support the container orchestration
environment. These Heat stack templates provide a mapping of Magnum object
attributes to Heat template parameters, along with Magnum consumable stack
outputs. The result is a full Container Orchestration Environment.

.. list-plugins:: magnum.template_definitions
   :detailed:
@ -32,9 +32,9 @@ Architecture

There are several different types of objects in the magnum system:

* **Bay:** A collection of node objects where work is scheduled
* **BayModel:** An object stores template information about the bay which is
  used to create new bays consistently
* **Cluster:** A collection of node objects where work is scheduled
* **ClusterTemplate:** An object that stores template information about the
  cluster, which is used to create new clusters consistently
* **Pod:** A collection of containers running on one physical or virtual
  machine
* **Service:** An abstraction which defines a logical set of pods and a policy
@ -51,7 +51,7 @@ scalability to the conductor as well.

The magnum-conductor process runs on a controller machine and connects to a
Kubernetes or Docker REST API endpoint. The Kubernetes and Docker REST API
endpoints are managed by the bay object.
endpoints are managed by the cluster object.

When service or pod objects are created, Kubernetes may be directly contacted
via the Kubernetes REST API. When container objects are acted upon, the
@ -60,8 +60,7 @@ Docker REST API may be directly contacted.

Features
========

* Abstractions for bays, containers, nodes, pods, replication controllers, and
  services
* Abstractions for Clusters
* Integration with Kubernetes, Swarm, Mesos for backend container technology
* Integration with Keystone for multi-tenant security
* Integration with Neutron for Kubernetes multi-tenancy network security
@ -75,7 +74,7 @@ Developer Info

   dev/quickstart
   dev/manual-devstack
   dev/bay-template-example.rst
   dev/cluster-type-definition.rst
   dev/kubernetes-load-balancer.rst
   dev/functional-test.rst
   dev/reno.rst
@ -30,7 +30,7 @@ magnum related metrics. See `OpenStack Install Guides

.. important::

   Magnum creates VM clusters on the Compute service (nova), called bays. These
   Magnum creates groupings of Nova compute instances, called clusters. These
   VMs must have basic Internet connectivity and must be able to reach magnum's
   API server. Make sure that Compute and Network services are configured
   accordingly.
@ -176,8 +176,7 @@ service, you must create a database, service credentials, and API endpoints.

      +--------------+----------------------------------+

#. Magnum requires additional information in the Identity service to
   manage COE clusters (bays). To add this information, complete these
   steps:
   manage clusters. To add this information, complete these steps:

   * Create the ``magnum`` domain that contains projects and users:
@ -7,9 +7,9 @@ for using services like docker, kubernetes and mesos. Use these steps
when your firewall will not allow you to use those services without a
proxy.

**NOTE:** This feature has only been tested with the supported bay type
and associated image: Kubernetes and Swarm bay using the Fedora Atomic
image, and Mesos bay using the Ubuntu image.
**NOTE:** This feature has only been tested with the supported cluster types
and associated images: Kubernetes and Swarm clusters use the Fedora Atomic
image, and Mesos clusters use the Ubuntu image.

Proxy Parameters to define before use
=====================================
@ -37,10 +37,10 @@ and ip addresses. Bad example: 192.168.0.0/28.

Steps to configure proxies.
==============================

You can specify all three proxy parameters while creating baymodel of any
coe type. All of proxy parameters are optional.
You can specify all three proxy parameters while creating a ClusterTemplate of
any coe type. All proxy parameters are optional.

    magnum baymodel-create --name k8sbaymodel \
    magnum cluster-template-create --name k8s-cluster-template \
        --image-id fedora-atomic-latest \
        --keypair-id testkey \
        --external-network-id public \
@ -50,7 +50,7 @@ coe type. All of proxy parameters are optional.
        --http-proxy <http://abc-proxy.com:8080> \
        --https-proxy <https://abc-proxy.com:8080> \
        --no-proxy <172.24.4.4,172.24.4.9,172.24.4.8>
    magnum baymodel-create --name swarmbaymodel \
    magnum cluster-template-create --name swarm-cluster-template \
        --image-id fedora-atomic-latest \
        --keypair-id testkey \
        --external-network-id public \
@ -60,7 +60,7 @@ coe type. All of proxy parameters are optional.
        --http-proxy <http://abc-proxy.com:8080> \
        --https-proxy <https://abc-proxy.com:8080> \
        --no-proxy <172.24.4.4,172.24.4.9,172.24.4.8>
    magnum baymodel-create --name mesosbaymodel \
    magnum cluster-template-create --name mesos-cluster-template \
        --image-id ubuntu-mesos \
        --keypair-id testkey \
        --external-network-id public \
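The proxy values handed to these create commands mirror the conventional proxy environment variables. A runnable sketch of the expected `no-proxy` format, with made-up hosts and ports (as noted earlier, a CIDR range such as 192.168.0.0/28 is not a valid value):

```shell
# Illustrative only: the proxy parameters follow the conventional proxy
# environment variable format. Hosts, ports and IPs are made up.
HTTP_PROXY="http://abc-proxy.com:8080"
HTTPS_PROXY="https://abc-proxy.com:8080"
# no-proxy must be a comma-separated list of addresses, not a CIDR range.
NO_PROXY="172.24.4.4,172.24.4.9,172.24.4.8"
echo "$NO_PROXY" | tr ',' '\n'   # one address per line
```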
@ -16,26 +16,26 @@ debugging unit tests and gate tests.

Failure symptoms
================

My bay-create takes a really long time
  If you are using devstack on a small VM, bay-create will take a long
My cluster-create takes a really long time
  If you are using devstack on a small VM, cluster-create will take a long
  time and may eventually fail because of insufficient resources.
  Another possible reason is that a process on one of the nodes is hung
  and heat is still waiting on the signal. In this case, it will eventually
  fail with a timeout, but since heat has a long default timeout, you can
  look at the `heat stacks`_ and check the WaitConditionHandle resources.

My bay-create fails with error: "Failed to create trustee XXX in domain XXX"
  Check the `trustee for bay`_
My cluster-create fails with error: "Failed to create trustee XXX in domain XXX"
  Check the `trustee for cluster`_

Kubernetes bay-create fails
Kubernetes cluster-create fails
  Check the `heat stacks`_, log into the master nodes and check the
  `Kubernetes services`_ and `etcd service`_.

Swarm bay-create fails
Swarm cluster-create fails
  Check the `heat stacks`_, log into the master nodes and check the `Swarm
  services`_ and `etcd service`_.

Mesos bay-create fails
Mesos cluster-create fails
  Check the `heat stacks`_, log into the master nodes and check the `Mesos
  services`_.
@ -43,20 +43,20 @@ I get the error "Timed out waiting for a reply" when deploying a pod
  Verify the `Kubernetes services`_ and `etcd service`_ are running on the
  master nodes.

I deploy pods on Kubernetes bay but the status stays "Pending"
I deploy pods on a Kubernetes cluster but the status stays "Pending"
  The pod status is "Pending" while the Docker image is being downloaded,
  so if the status does not change for a long time, log into the minion
  node and check for `Cluster internet access`_.

I deploy pods and services on Kubernetes bay but the app is not working
I deploy pods and services on a Kubernetes cluster but the app is not working
  The pods and services are running and the status looks correct, but
  if the app is performing communication between pods through services,
  verify `Kubernetes networking`_.

Swarm bay is created successfully but I cannot deploy containers
Swarm cluster is created successfully but I cannot deploy containers
  Check the `Swarm services`_ and `etcd service`_ on the master nodes.

Mesos bay is created successfully but I cannot deploy containers on Marathon
Mesos cluster is created successfully but I cannot deploy containers on Marathon
  Check the `Mesos services`_ on the master node.

I get a "Protocol violation" error when deploying a container
@ -64,7 +64,7 @@ I get a "Protocol violation" error when deploying a container
  kube-apiserver is running to accept the request.
  Check `TLS`_ and `Barbican service`_.

My bay-create fails with a resource error on docker_volume
My cluster-create fails with a resource error on docker_volume
  Check for available volume space on Cinder and the `request volume
  size`_ in the heat template.
  Run "nova volume-list" to check the volume status.
@ -78,17 +78,17 @@ Heat stacks
-----------
*To be filled in*

A bay is deployed by a set of heat stacks: one top level stack and several
nested stacks. The stack names are prefixed with the bay name and the nested
stack names contain descriptive internal names like *kube_masters*,
A cluster is deployed by a set of heat stacks: one top level stack and several
nested stacks. The stack names are prefixed with the cluster name and the
nested stack names contain descriptive internal names like *kube_masters*,
*kube_minions*.

To list the status of all the stacks for a bay:
To list the status of all the stacks for a cluster:

    heat stack-list -n | grep *bay-name*
    heat stack-list -n | grep *cluster-name*

If the bay has failed, then one or more of the heat stacks would have failed.
From the stack list above, look for the stacks that failed, then
If the cluster has failed, then one or more of the heat stacks would have
failed. From the stack list above, look for the stacks that failed, then
look for the particular resource(s) that failed in the failed stack by:

    heat resource-list *failed-stack-name* | grep "FAILED"
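The grep filter above can be tried against a saved copy of the command output. A self-contained sketch (stack resource names and statuses are made up for illustration):

```shell
# Illustrative: filter a saved `heat resource-list` output for failed
# resources. Only rows whose status contains FAILED survive the filter.
cat > resource-list.txt << 'END'
| kube_masters | OS::Heat::ResourceGroup | CREATE_FAILED   |
| network      | OS::Neutron::Net        | CREATE_COMPLETE |
END
grep "FAILED" resource-list.txt
```

Here only the *kube_masters* row is printed; the COMPLETE row is filtered out, which is exactly how the failed nested stack is located.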
@ -108,14 +108,15 @@ services`_, `Swarm services`_ or `Mesos services`_. If the failure is in
other scripts, look for them as `Heat software resource scripts`_.


Trustee for bay
---------------
When a user creates a bay, Magnum will dynamically create a service account
for the creating bay. The service account will be used by the bay to access
the OpenStack services (i.e. Neutron, Swift, etc.). A trust relationship
will be created between the user who created the bay (the "trustor") and the
service account created for the bay (the "trustee"). For details, please refer
to
`<http://git.openstack.org/cgit/openstack/magnum/tree/specs/create-trustee-user-for-each-bay.rst>`_.
Trustee for cluster
-------------------
When a user creates a cluster, Magnum will dynamically create a service account
for the cluster. The service account will be used by the cluster to
access the OpenStack services (i.e. Neutron, Swift, etc.). A trust relationship
will be created between the user who created the cluster (the "trustor") and
the service account created for the cluster (the "trustee"). For details,
please refer to
`<http://git.openstack.org/cgit/openstack/magnum/tree/specs/create-trustee-user-for-each-cluster.rst>`_.

If Magnum fails to create the trustee, check the magnum config file (usually
in /etc/magnum/magnum.conf). Make sure 'trustee_*' and 'auth_uri' are set and
@ -192,7 +193,7 @@ The nodes for Kubernetes, Swarm and Mesos are connected to a private
Neutron network, so to provide access to the external internet, a router
connects the private network to a public network. With devstack, the
default public network is "public", but this can be replaced by the
parameter "external-network-id" in the bay model. The "public" network
parameter "external-network-id" in the ClusterTemplate. The "public" network
with devstack is actually not a real external network, so it is in turn
routed to the network interface of the host for devstack. This is
configured in the file local.conf with the variable PUBLIC_INTERFACE,
@ -215,8 +216,8 @@ Check the following:

- Is PUBLIC_INTERFACE in devstack/local.conf the correct network
  interface? Does this interface have a route to the external internet?
- If "external-network-id" is specified in the bay model, does this network
  have a route to the external internet?
- If "external-network-id" is specified in the ClusterTemplate, does this
  network have a route to the external internet?
- Is your devstack environment behind a firewall? This can be the case for some
  enterprises or countries. In this case, consider using a `proxy server
  <https://github.com/openstack/magnum/blob/master/doc/source/magnum-proxy.rst>`_.
@ -241,9 +242,9 @@ If the name lookup fails, check the following:

- Is the DNS entry correct in the subnet? Try "neutron subnet-show
  <subnet-id>" for the private subnet and check dns_nameservers.
  The IP should be either the default public DNS 8.8.8.8 or the value
  specified by "dns-nameserver" in the bay model.
  specified by "dns-nameserver" in the ClusterTemplate.
- If you are using your own DNS server by specifying "dns-nameserver"
  in the bay model, is it reachable and working?
  in the ClusterTemplate, is it reachable and working?
- More help on `DNS troubleshooting <http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#debugging_dns_issues>`_.
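Checking dns_nameservers can be scripted the same way as the api_address extraction shown earlier. A self-contained sketch against a saved copy of the table (subnet values are made up):

```shell
# Illustrative: pull dns_nameservers out of a saved `neutron subnet-show`
# table and compare it with the expected DNS server.
cat > subnet-show.txt << 'END'
| dns_nameservers   | 8.8.8.8          |
END
dns=$(awk '/ dns_nameservers /{print $4}' subnet-show.txt)
echo "$dns"   # prints 8.8.8.8
```

If the printed value is neither 8.8.8.8 nor the "dns-nameserver" given in the ClusterTemplate, the subnet was created with the wrong DNS entry.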
@ -264,12 +265,12 @@ the key:value may not be replicated correctly. In this case, use the
following steps to verify the inter-pods networking and pinpoint problems.

Since the steps are specific to the network drivers, refer to the
particular driver being used for the bay.
particular driver being used for the cluster.

Using Flannel as network driver
...............................

Flannel is the default network driver for Kubernetes bays. Flannel is
Flannel is the default network driver for Kubernetes clusters. Flannel is
an overlay network that runs on top of the neutron network. It works by
encapsulating the messages between pods and forwarding them to the
correct node that hosts the target pod.
@ -515,15 +516,15 @@ Running Flannel
When deploying a COE, Flannel is available as a network driver for
certain COE types. Magnum currently supports Flannel for a Kubernetes
or Swarm bay.
or Swarm cluster.

Flannel provides a flat network space for the containers in the bay:
Flannel provides a flat network space for the containers in the cluster:
they are allocated IPs in this network space and they will have connectivity
to each other. Therefore, if Flannel fails, some containers will not
be able to access services from other containers in the bay. This can be
be able to access services from other containers in the cluster. This can be
confirmed by running *ping* or *curl* from one container to another.

The Flannel daemon is run as a systemd service on each node of the bay.
The Flannel daemon is run as a systemd service on each node of the cluster.
To check Flannel, run on each node::

    sudo service flanneld status
@ -572,7 +573,7 @@ Check the following:
    }

where the values for the parameters must match the corresponding
parameters from the bay model.
parameters from the ClusterTemplate.

Magnum also loads this configuration into etcd, therefore, verify
the configuration in etcd by running *etcdctl* on the master nodes::
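The chunk is truncated here; as a runnable sketch of that comparison, using a saved copy of the Flannel configuration (the file name and values are made up; in a real cluster the configuration would come from *etcdctl*):

```shell
# Illustrative: compare a Flannel config (as it would be stored in etcd)
# against the network CIDR expected from the ClusterTemplate.
cat > flannel-config.json << 'END'
{"Network": "10.100.0.0/16", "Subnetlen": 24, "Backend": {"Type": "udp"}}
END
expected="10.100.0.0/16"
actual=$(python3 -c "import json; print(json.load(open('flannel-config.json'))['Network'])")
[ "$actual" = "$expected" ] && echo "flannel network matches ClusterTemplate"
```

A mismatch here means Flannel was configured with different parameters than the ClusterTemplate requested, which is worth investigating before debugging pod connectivity.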