magnum/doc/source/userguide.rst

Magnum User Guide

This guide is intended for users who use Magnum to deploy and manage clusters of hosts for a Container Orchestration Engine. It describes the infrastructure that Magnum creates and how to work with it.

Sections 1-3 describe Magnum itself, including an overview, the CLI and the Horizon interface. Sections 4-8 describe the Container Orchestration Engines (COEs) supported, along with a guide on how to select the one that best meets your needs. Sections 9-14 describe the low level OpenStack infrastructure that is created and managed by Magnum to support the Container Orchestration Engines.

Contents

  1. Overview
  2. Python Client
  3. Horizon Interface
  4. Choosing COE
  5. Native clients
  6. Kubernetes
  7. Swarm
  8. Mesos
  9. Transport Layer Security
  10. Networking
  11. High Availability
  12. Scaling
  13. Storage
  14. Image Management

Terminology

Bay

A bay is the construct in which Magnum launches container orchestration engines. After a bay has been created, the user is able to add containers to it either directly, or, in the case of the Kubernetes container orchestration engine, within pods - a logical construct specific to that implementation. A bay is created based on a baymodel.

Baymodel

A baymodel in Magnum is roughly equivalent to a flavor in Nova. It acts as a template that defines options such as the container orchestration engine, keypair and image for use when Magnum is creating bays using the given baymodel.

Container Orchestration Engine (COE)

A container orchestration engine manages the lifecycle of one or more containers, logically represented in Magnum as a bay. Magnum supports a number of container orchestration engines, each with their own pros and cons, including Docker Swarm, Kubernetes, and Mesos.

Pod

When using the Kubernetes container orchestration engine, a pod is the smallest deployable unit that can be created and managed. A pod is a co-located group of application containers that run with a shared context. When using Magnum, pods are created and managed within bays. Refer to the pods section in the Kubernetes User Guide for more information.

Replication controller

A replication controller is used to ensure that at any given time a certain number of replicas of a pod are running. Pods are automatically created and deleted by the replication controller as necessary based on a template to ensure that the defined number of replicas exist. Refer to the replication controller section in the Kubernetes User Guide for more information.

Service

A service is an additional layer of abstraction provided by the Kubernetes container orchestration engine which defines a logical set of pods and a policy for accessing them. This is useful because pods may be created and deleted by a replication controller at any time; other pods needing to discover them can do so reliably via the service abstraction. Refer to the services section in the Kubernetes User Guide for more information.

Overview

To be filled in

Magnum rationale, concept, compelling features

BayModel

To be filled in

Bay

To be filled in

Python Client

Installation

Follow the instructions in the OpenStack Installation Guide to enable the repositories for your distribution:

Install using distribution packages for RHEL/CentOS/Fedora:

$ sudo yum install python-magnumclient

Install using distribution packages for Ubuntu/Debian:

$ sudo apt-get install python-magnumclient

Install using distribution packages for OpenSuSE and SuSE Enterprise Linux:

$ sudo zypper install python-magnumclient

Verifying installation

Execute the magnum command with the --version argument to confirm that the client is installed and in the system path:

$ magnum --version
1.1.0

Note that the version returned may differ from the above; 1.1.0 was the latest available version at the time of writing.

Using the command-line client

Refer to the OpenStack Command-Line Interface Reference for a full list of the commands supported by the magnum command-line client.
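
As an illustration, a minimal sketch of a workflow for creating a Kubernetes bay is shown below. The baymodel name, image, keypair and flavor are placeholders for values in your environment, and the exact options may vary between client versions; refer to the command-line reference above for the authoritative list.

$ magnum baymodel-create --name k8sbaymodel \
                         --image-id fedora-21-atomic-5 \
                         --keypair-id testkey \
                         --external-network-id public \
                         --dns-nameserver 8.8.8.8 \
                         --flavor-id m1.small \
                         --coe kubernetes

$ magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1

$ magnum bay-list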

Horizon Interface

To be filled in with screenshots

Choosing COE

To be filled in

Buyer's guide with example use case, can use this as model:

http://www.openstack.org/software/project-navigator

Native clients

To be filled in

Kubernetes

To be filled in

Swarm

To be filled in

Mesos

To be filled in

Transport Layer Security

To be filled in

Native Client Configuration guide for each COE

Networking

There are two components that make up the networking in a cluster.

  1. The Neutron infrastructure for the cluster: this includes the private network, subnet, ports, routers, load balancers, etc.
  2. The networking model presented to the containers: this is what the containers see in communicating with each other and with the external world. Typically this consists of a driver deployed on each node.

The two components are deployed and managed separately. The Neutron infrastructure is the integration with OpenStack; therefore, it is stable and more or less similar across different COE types. The networking model, on the other hand, is specific to the COE type and is still under active development in the various COE communities, for example, Docker libnetwork and Kubernetes Container Networking. As a result, the implementation for the networking models is evolving and new models are likely to be introduced in the future.

For the Neutron infrastructure, the following configuration can be set in the baymodel:

external-network-id

The external Neutron network ID to connect to this bay. This is used to connect the cluster to the external internet, allowing the nodes in the bay to access external URLs for discovery, image download, etc. If not specified, the default value is "public", which is valid for a typical devstack.

fixed-network

The Neutron network to use as the private network for the bay nodes. If not specified, a new Neutron private network will be created.

dns-nameserver

The DNS nameserver to use for this bay. This is an IP address for the server and it is used to configure the Neutron subnet of the cluster (dns_nameservers). If not specified, the default DNS is 8.8.8.8, the publicly available DNS.

http-proxy, https-proxy, no-proxy

The proxy for the nodes in the bay, to be used when the cluster is behind a firewall and containers cannot access URLs on the external internet directly. For the parameters http-proxy and https-proxy, the value to provide is a URL, which will be set in the environment variables HTTP_PROXY and HTTPS_PROXY respectively on the nodes. For the parameter no-proxy, the value to provide is an IP address or a comma-separated list of IP addresses; this value will be set in the environment variable NO_PROXY on the nodes.
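
For example, these Neutron settings could be supplied when creating the baymodel, roughly as in the sketch below. The network names, proxy URL and no-proxy addresses are placeholders for values in your environment.

$ magnum baymodel-create --name k8sbaymodel \
                         --image-id fedora-21-atomic-5 \
                         --keypair-id testkey \
                         --flavor-id m1.small \
                         --coe kubernetes \
                         --external-network-id public \
                         --fixed-network private \
                         --dns-nameserver 8.8.8.8 \
                         --http-proxy http://10.0.0.10:3128 \
                         --https-proxy http://10.0.0.10:3128 \
                         --no-proxy 10.0.0.1,10.0.0.2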

For the networking model presented to the containers, the following configuration can be set in the baymodel:

network-driver

The network driver name for instantiating container networks. Currently, the following network drivers are supported:

Driver     Kubernetes     Swarm        Mesos
Flannel    supported      supported    unsupported
Docker     unsupported    supported    supported

If not specified, the default driver is Flannel for Kubernetes, and Docker for Swarm and Mesos.

A particular network driver may require its own set of parameters for configuration, and these parameters are specified through labels in the baymodel. Labels are arbitrary key=value pairs.

When Flannel is specified as the network driver, the following optional labels can be added:

flannel_network_cidr

IPv4 network in CIDR format to use for the entire Flannel network. If not specified, the default is 10.100.0.0/16.

flannel_network_subnetlen

The size of the subnet allocated to each host. If not specified, the default is 24.

flannel_backend

The type of backend for Flannel. Possible values are udp, vxlan and host-gw. If not specified, the default is udp. Selecting the best backend depends on your networking. The udp backend is the most widely supported since it places few requirements on the network, but it typically offers the lowest performance. The vxlan backend performs better, but requires vxlan support in the kernel, so the image used to provision the nodes needs to include this support. The host-gw backend offers the best performance since it does not actually encapsulate messages, but it requires all the nodes to be on the same L2 network. The private Neutron network that Magnum creates does meet this requirement; therefore, if the parameter fixed_network is not specified in the baymodel, host-gw is the best choice for the Flannel backend.
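
As a sketch, selecting Flannel as the network driver and tuning it through labels at baymodel creation could look like the following. The label values shown are only examples, and other required options such as the image, keypair and flavor are omitted for brevity.

$ magnum baymodel-create --name k8sbaymodel \
                         --coe kubernetes \
                         --network-driver flannel \
                         --labels flannel_network_cidr=10.100.0.0/16,flannel_network_subnetlen=24,flannel_backend=vxlan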

High Availability

To be filled in

Scaling

To be filled in

Include Autoscaling

Storage

To be filled in

Image Management

When a COE is deployed, an image from Glance is used to boot the nodes in the cluster, and then the software is configured and started on the nodes to bring up the full cluster. An image is based on a particular distro such as Fedora or Ubuntu, and is prebuilt with the software specific to the COE, such as Kubernetes, Swarm or Mesos. The image is tightly coupled with the following in Magnum:

  1. Heat templates to orchestrate the configuration.
  2. Template definition to map baymodel parameters to Heat template parameters.
  3. Set of scripts to configure software.

Collectively, they constitute the driver for a particular COE and a particular distro; therefore, developing a new image needs to be done in conjunction with developing these other components. Images can be built by various methods such as diskimagebuilder, or in some cases a distro image can be used directly. A number of drivers and their associated images are supported in Magnum as reference implementations. In this section, we focus mainly on the supported images.

All images must include support for cloud-init and the Heat software configuration utilities (see the sketch after this list):

  • os-collect-config
  • os-refresh-config
  • os-apply-config
  • heat-config
  • heat-config-script
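
For images built with diskimagebuilder, these requirements are typically met by including the corresponding elements. The following is a minimal sketch, assuming local checkouts of tripleo-image-elements and heat-templates provide the element paths; the output image name is arbitrary.

$ export ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements
$ disk-image-create fedora vm \
      os-collect-config os-refresh-config os-apply-config \
      heat-config heat-config-script \
      -o fedora-magnum-image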

Additional software is described as follows.

Kubernetes on Fedora Atomic

This image is built manually following the instructions provided in this Atomic guide. The Fedora site hosts the current image fedora-21-atomic-5.qcow2. This image has the following OS/software:

OS/software    version
Fedora         21
Docker         1.8.1
Kubernetes     1.0.4
etcd           2.0.10
Flannel        0.5.0

The following components are managed as systemd services:

  • kube-apiserver
  • kubelet
  • etcd
  • flannel (if specified as network driver)
  • docker

The following components are managed as Docker containers:

  • kube-controller-manager
  • kube-scheduler
  • kube-proxy

The login for this image is minion.
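
To verify the cluster software on a node, one can log in as this user and inspect the services and containers directly; the node IP below is a placeholder.

$ ssh minion@<node-ip>
$ sudo systemctl status kubelet docker etcd
$ sudo docker ps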

Kubernetes on CoreOS

CoreOS publishes a stock image that is used to deploy Kubernetes. This image has the following OS/software:

OS/software    version
CoreOS         4.3.6
Docker         1.9.1
Kubernetes     1.0.6
etcd           2.2.3
Flannel        0.5.5

The following components are managed as systemd services:

  • kubelet
  • flannel (if specified as network driver)
  • docker
  • etcd

The following components are managed as Docker containers:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
  • kube-proxy

The login for this image is core.

Kubernetes on Ironic

This image is built manually using diskimagebuilder. The scripts and instructions are included in the Magnum code repo. Currently, Ironic is not yet fully supported; more details will be provided when this driver has been fully tested.

Swarm on Fedora Atomic

This image is the same as the image for Kubernetes on Fedora Atomic and is built manually following the instructions provided in this Atomic guide. The Fedora site hosts the current image fedora-21-atomic-5.qcow2. This image has the following OS/software:

OS/software    version
Fedora         21
Docker         1.8.1
Kubernetes     1.0.4
etcd           2.0.10
Flannel        0.5.0

The login for this image is fedora.

Mesos on Ubuntu

This image is built manually using diskimagebuilder. The instructions are provided in this Mesos guide. The Fedora site hosts the current image ubuntu-14.04.3-mesos-0.25.0.qcow2. This image has the following OS/software:

OS/software    version
Ubuntu         14.04
Docker         1.8.1
Mesos          0.25.0
Marathon