Drop mesos documentation

Remove all mention of mesos in documentation prior to removing it in
code.

Story: 2009873
Task: 44581

Change-Id: Ib3bc3ee578bd5e3fd8124ebd370a36ec2fd735c2
Jake Yip 2022-04-27 20:22:13 +10:00
parent 61c7f7b34b
commit 9ad849db7c
8 changed files with 35 additions and 569 deletions

View File

@@ -3,13 +3,12 @@ Using Proxies in magnum if running under firewall
 =================================================
 
 If you are running magnum behind a firewall then you may need a proxy
-for using services like docker, kubernetes and mesos. Use these steps
+for using services like docker and kubernetes. Use these steps
 when your firewall will not allow you to use those services without a
 proxy.
 
 **NOTE:** This feature has only been tested with the supported cluster type
-and associated image: Kubernetes and Swarm use the Fedora Atomic
-image, and Mesos uses the Ubuntu image.
+and associated image.
 
 Proxy Parameters to define before use
 =====================================
@@ -66,15 +65,3 @@ any coe type. All of proxy parameters are optional.
      --https-proxy <https://abc-proxy.com:8080> \
      --no-proxy <172.24.4.4,172.24.4.9,172.24.4.8>
 
-.. code-block:: console
-
-   $ openstack coe cluster template create mesos-cluster-template \
-     --image ubuntu-mesos \
-     --keypair testkey \
-     --external-network public \
-     --dns-nameserver 8.8.8.8 \
-     --flavor m1.small \
-     --coe mesos \
-     --http-proxy <http://abc-proxy.com:8080> \
-     --https-proxy <https://abc-proxy.com:8080> \
-     --no-proxy <172.24.4.4,172.24.4.9,172.24.4.8>
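For context, the surviving example that the two kept context lines above belong to would look roughly like the sketch below; the template name and image are illustrative assumptions, not text from this change::

   $ openstack coe cluster template create k8s-cluster-template \
     --image fedora-coreos-latest \
     --keypair testkey \
     --external-network public \
     --dns-nameserver 8.8.8.8 \
     --flavor m1.small \
     --coe kubernetes \
     --http-proxy <http://abc-proxy.com:8080> \
     --https-proxy <https://abc-proxy.com:8080> \
     --no-proxy <172.24.4.4,172.24.4.9,172.24.4.8>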

View File

@@ -36,10 +36,6 @@ Swarm cluster-create fails
    Check the `heat stacks`_, log into the master nodes and check the `Swarm
    services`_ and `etcd service`_.
 
-Mesos cluster-create fails
-   Check the `heat stacks`_, log into the master nodes and check the `Mesos
-   services`_.
-
 I get the error "Timed out waiting for a reply" when deploying a pod
    Verify the `Kubernetes services`_ and `etcd service`_ are running on the
    master nodes.
@@ -57,9 +53,6 @@ I deploy pods and services on Kubernetes cluster but the app is not working
 Swarm cluster is created successfully but I cannot deploy containers
    Check the `Swarm services`_ and `etcd service`_ on the master nodes.
 
-Mesos cluster is created successfully but I cannot deploy containers on Marathon
-   Check the `Mesos services`_ on the master node.
-
 I get a "Protocol violation" error when deploying a container
    For Kubernetes, check the `Kubernetes services`_ to verify that
    kube-apiserver is running to accept the request.
@@ -104,7 +97,7 @@ in some cases it may only say "Unknown".
 
 If the failed resource is OS::Heat::WaitConditionHandle, this indicates that
 one of the services that are being started on the node is hung. Log into the
 node where the failure occurred and check the respective `Kubernetes
-services`_, `Swarm services`_ or `Mesos services`_. If the failure is in
+services`_ or `Swarm services`_. If the failure is in
 other scripts, look for them as `Heat software resource scripts`_.
 
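To locate the failed resource in the first place, drilling into the nested Heat stacks from the console is usually enough; a sketch, where the stack and resource names are illustrative assumptions::

   $ openstack stack list
   $ openstack stack resource list --nested-depth 2 k8s-cluster-abc123 | grep -i failed
   $ openstack stack resource show k8s-cluster-abc123 kube_masters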
@@ -212,7 +205,7 @@ Barbican service
 Cluster internet access
 -----------------------
 
-The nodes for Kubernetes, Swarm and Mesos are connected to a private
+The nodes for Kubernetes and Swarm are connected to a private
 Neutron network, so to provide access to the external internet, a router
 connects the private network to a public network. With devstack, the
 default public network is "public", but this can be replaced by the
@@ -681,10 +674,6 @@ Swarm services
 (How to check on a swarm cluster: see membership information, view master,
 agent containers)
 
-Mesos services
---------------
-
-*To be filled in*
 
 Barbican issues
 ---------------

View File

@@ -710,95 +710,6 @@ You should see a similar output to::
 
     4 packets transmitted, 4 packets received, 0% packet loss
     round-trip min/avg/max = 25.226/25.340/25.513 ms
-Building and Using a Mesos Cluster
-==================================
-
-Provisioning a mesos cluster requires a Ubuntu-based image with some packages
-pre-installed. To build and upload such image, please refer to
-:ref:`building_mesos_image`.
-
-Alternatively, you can download and upload a pre-built image::
-
-    wget https://fedorapeople.org/groups/magnum/ubuntu-mesos-latest.qcow2
-    openstack image create ubuntu-mesos --public \
-        --disk-format=qcow2 --container-format=bare \
-        --property os_distro=ubuntu --file=ubuntu-mesos-latest.qcow2
-
-Then, create a ClusterTemplate by using 'mesos' as the COE, with the rest of
-arguments similar to the Kubernetes ClusterTemplate::
-
-    openstack coe cluster template create mesos-cluster-template \
-        --image ubuntu-mesos \
-        --keypair testkey \
-        --external-network public \
-        --dns-nameserver 8.8.8.8 \
-        --flavor m1.small \
-        --coe mesos
-
-Finally, create the cluster. Use the ClusterTemplate 'mesos-cluster-template'
-as a template for cluster creation. This cluster will result in one mesos
-master node and two mesos slave nodes::
-
-    openstack coe cluster create mesos-cluster \
-        --cluster-template mesos-cluster-template \
-        --node-count 2
-
-Now that we have a mesos cluster we can start interacting with it. First we
-need to make sure the cluster's status is 'CREATE_COMPLETE'::
-
-    $ openstack coe cluster show mesos-cluster
-    +---------------------+------------------------------------------+
-    | Property            | Value                                    |
-    +---------------------+------------------------------------------+
-    | status              | CREATE_COMPLETE                          |
-    | uuid                | ff727f0d-72ca-4e2b-9fef-5ec853d74fdf     |
-    | stack_id            | 7947844a-8e18-4c79-b591-ecf0f6067641     |
-    | status_reason       | Stack CREATE completed successfully      |
-    | created_at          | 2015-06-09T20:21:43+00:00                |
-    | updated_at          | 2015-06-09T20:28:18+00:00                |
-    | create_timeout      | 60                                       |
-    | api_address         | https://172.24.4.115:6443                |
-    | coe_version         | -                                        |
-    | cluster_template_id | 92dbda62-32d4-4435-88fc-8f42d514b347     |
-    | master_addresses    | ['172.24.4.115']                         |
-    | node_count          | 2                                        |
-    | node_addresses      | ['172.24.4.116', '172.24.4.117']         |
-    | master_count        | 1                                        |
-    | container_version   | 1.9.1                                    |
-    | discovery_url       | None                                     |
-    | name                | mesos-cluster                            |
-    +---------------------+------------------------------------------+
-
-Next we will create a container in this cluster by using the REST API of
-Marathon. This container will ping the address 8.8.8.8::
-
-    $ cat > mesos.json << END
-    {
-      "container": {
-        "type": "DOCKER",
-        "docker": {
-          "image": "cirros"
-        }
-      },
-      "id": "ubuntu",
-      "instances": 1,
-      "cpus": 0.5,
-      "mem": 512,
-      "uris": [],
-      "cmd": "ping 8.8.8.8"
-    }
-    END
-    $ MASTER_IP=$(openstack coe cluster show mesos-cluster | awk '/ api_address /{print $4}')
-    $ curl -X POST -H "Content-Type: application/json" \
-        http://${MASTER_IP}:8080/v2/apps -d@mesos.json
-
-To check application and task status::
-
-    $ curl http://${MASTER_IP}:8080/v2/apps
-    $ curl http://${MASTER_IP}:8080/v2/tasks
-
-You can access the Mesos web page at \http://<master>:5050/ and Marathon web
-console at \http://<master>:8080/.
 
 Building Developer Documentation
 ================================

View File

@@ -48,7 +48,7 @@ Features
 ========
 
 * Abstractions for Clusters
-* Integration with Kubernetes, Swarm, Mesos for backend container technology
+* Integration with Kubernetes and Swarm for backend container technology
 * Integration with Keystone for multi-tenant security
 * Integration with Neutron for Kubernetes multi-tenancy network security
 * Integration with Cinder to provide volume service for containers

View File

@@ -16,5 +16,5 @@ following components:
 
 ``magnum-conductor`` service
   Runs on a controller machine and connects to heat to orchestrate a
-  cluster. Additionally, it connects to a Docker Swarm, Kubernetes
-  or Mesos REST API endpoint.
+  cluster. Additionally, it connects to a Docker Swarm or Kubernetes
+  API endpoint.

View File

@@ -14,8 +14,8 @@ Magnum Installation Guide
 
 The Container Infrastructure Management service codenamed (magnum) is an
 OpenStack API service developed by the OpenStack Containers Team making
-container orchestration engines (COE) such as Docker Swarm, Kubernetes
-and Mesos available as first class resources in OpenStack. Magnum uses
+container orchestration engines (COE) such as Docker Swarm and Kubernetes
+available as first class resources in OpenStack. Magnum uses
 Heat to orchestrate an OS image which contains Docker and Kubernetes and
 runs that image in either virtual machines or bare metal in a cluster
 configuration.

View File

@@ -26,7 +26,7 @@ Magnum Terminology
   A container orchestration engine manages the lifecycle of one or more
   containers, logically represented in Magnum as a cluster. Magnum supports a
   number of container orchestration engines, each with their own pros and cons,
-  including Docker Swarm, Kubernetes, and Mesos.
+  including Docker Swarm and Kubernetes.
 
 Labels
   Labels is a general method to specify supplemental parameters that are

View File

@@ -23,7 +23,6 @@ created and managed by Magnum to support the COE's.
 #. `Native Clients`_
 #. `Kubernetes`_
 #. `Swarm`_
-#. `Mesos`_
 #. `Transport Layer Security`_
 #. `Networking`_
 #. `High Availability`_
@@ -43,8 +42,8 @@ Overview
 ========
 
 Magnum is an OpenStack API service developed by the OpenStack Containers Team
-making container orchestration engines (COE) such as Docker Swarm, Kubernetes
-and Apache Mesos available as first class resources in OpenStack.
+making container orchestration engines (COE) such as Docker Swarm and
+Kubernetes available as first class resources in OpenStack.
 
 Magnum uses Heat to orchestrate an OS image which contains Docker and COE
 and runs that image in either virtual machines or bare metal in a cluster
@@ -59,7 +58,7 @@ Following are few salient features of Magnum:
 
 - Standard API based complete life-cycle management for Container Clusters
 - Multi-tenancy for container clusters
-- Choice of COE: Kubernetes, Swarm, Mesos, DC/OS
+- Choice of COE: Kubernetes, Swarm
 - Choice of container cluster deployment model: VM or Bare-metal
 - Keystone-based multi-tenant security and auth management
 - Neutron based multi-tenant network control and isolation
@@ -94,7 +93,7 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
 
 --coe \<coe\>
   Specify the Container Orchestration Engine to use. Supported
-  COE's include 'kubernetes', 'swarm', 'mesos'. If your environment
+  COE's include 'kubernetes', 'swarm'. If your environment
   has additional cluster drivers installed, refer to the cluster driver
   documentation for the new COE names. This is a mandatory parameter
   and there is no default value.
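A minimal template creation that exercises this mandatory parameter might look like the sketch below; the template name, image and flavor are illustrative assumptions::

   $ openstack coe cluster template create k8s-template \
       --image fedora-coreos-latest \
       --keypair testkey \
       --external-network public \
       --flavor m1.small \
       --coe kubernetes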
@@ -110,7 +109,6 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
   ========== =====================
   Kubernetes fedora-coreos
   Swarm      fedora-atomic
-  Mesos      ubuntu
   ========== =====================
 
   This is a mandatory parameter and there is no default value. Note that the
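For the image to be accepted, it must also carry the matching 'os_distro' attribute in Glance; a sketch of the upload, where the image name and file are illustrative assumptions::

   $ openstack image create fedora-coreos-latest \
       --disk-format=qcow2 --container-format=bare \
       --property os_distro=fedora-coreos \
       --file=fedora-coreos.qcow2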
@@ -161,7 +159,6 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
   =========== ================= ========
   Kubernetes  flannel, calico   flannel
   Swarm       docker, flannel   flannel
-  Mesos       docker            docker
   =========== ================= ========
 
   Note that the network driver name is case sensitive.
@@ -176,7 +173,6 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
   ============= ============= ===========
   Kubernetes    cinder        No Driver
   Swarm         rexray        No Driver
-  Mesos         rexray        No Driver
   ============= ============= ===========
 
   Note that the volume driver name is case sensitive.
@@ -290,25 +286,6 @@ the table are linked to more details elsewhere in the user guide.
 | `rexray_preempt`_                     | - true             | false         |
 |                                       | - false            |               |
 +---------------------------------------+--------------------+---------------+
-| `mesos_slave_isolation`_              | - filesystem/posix | ""            |
-|                                       | - filesystem/linux |               |
-|                                       | - filesystem/shared|               |
-|                                       | - posix/cpu        |               |
-|                                       | - posix/mem        |               |
-|                                       | - posix/disk       |               |
-|                                       | - cgroups/cpu      |               |
-|                                       | - cgroups/mem      |               |
-|                                       | - docker/runtime   |               |
-|                                       | - namespaces/pid   |               |
-+---------------------------------------+--------------------+---------------+
-| `mesos_slave_image_providers`_        | - appc             | ""            |
-|                                       | - docker           |               |
-|                                       | - appc,docker      |               |
-+---------------------------------------+--------------------+---------------+
-| `mesos_slave_work_dir`_               | (directory name)   | ""            |
-+---------------------------------------+--------------------+---------------+
-| `mesos_slave_executor_env_variables`_ | (file name)        | ""            |
-+---------------------------------------+--------------------+---------------+
 | `heapster_enabled`_                   | - true             | false         |
 |                                       | - false            |               |
 +---------------------------------------+--------------------+---------------+
@@ -883,8 +860,6 @@ COE and distro pairs:
 +------------+---------------+
 | Swarm      | Fedora Atomic |
 +------------+---------------+
-| Mesos      | Ubuntu        |
-+------------+---------------+
 
 Magnum is designed to accommodate new cluster drivers to support custom
 COE's and this section describes how a new cluster driver can be
@@ -1006,18 +981,6 @@ that allow for sophisticated software deployments, including canary deploys
 and blue/green deploys. Kubernetes is very popular, especially for web
 applications.
 
-Apache Mesos is a COE that has been around longer than Kubernetes or Swarm. It
-allows for a variety of different frameworks to be used along with it,
-including Marathon, Aurora, Chronos, Hadoop, and `a number of others.
-<http://mesos.apache.org/documentation/latest/frameworks/>`_
-
-The Apache Mesos framework design can be used to run alternate COE software
-directly on Mesos. Although this approach is not widely used yet, it may soon
-be possible to run Mesos with Kubernetes and Swarm as frameworks, allowing
-you to share the resources of a cluster between multiple different COEs. Until
-this option matures, we encourage Magnum users to create multiple clusters, and
-use the COE in each cluster that best fits the anticipated workload.
-
 Finding the right COE for your workload is up to you, but Magnum offers you a
 choice to select among the prevailing leading options. Once you decide, see
 the next sections for examples of how to create a cluster with your desired
@@ -1033,7 +996,7 @@ clusters. In the typical case, there are two clients to consider:
 
 COE level
   This is the orchestration or management level such as Kubernetes,
-  Swarm, Mesos and its frameworks.
+  Swarm and its frameworks.
 
 Container level
   This is the low level container operation. Currently it is
@@ -1066,9 +1029,6 @@ such as 'docker-compose', etc. Specific version of the binaries can
 be obtained from the `Docker Engine installation
 <https://docs.docker.com/engine/installation/binaries/>`_.
 
-Mesos cluster uses the Marathon framework and details on the Marathon
-UI can be found in the section `Using Marathon`_.
-
 Depending on the client requirement, you may need to use a version of
 the client that matches the version in the cluster. To determine the
 version of the COE and container, use the command 'cluster-show' and
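For example, assuming a cluster named 'k8s-cluster' (the name is illustrative), the two version fields can be read directly from the cluster-show output::

   $ openstack coe cluster show k8s-cluster -f value -c coe_version -c container_version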
@@ -1894,106 +1854,6 @@ _`swarm_strategy`
   - binpack
   - random
 
-Mesos
-=====
-
-A Mesos cluster consists of a pool of servers running as Mesos slaves,
-managed by a set of servers running as Mesos masters. Mesos manages
-the resources from the slaves but does not itself deploy containers.
-Instead, one or more Mesos frameworks running on the Mesos cluster would
-accept user requests on their own endpoint, using their particular
-API. These frameworks would then negotiate the resources with Mesos
-and the containers are deployed on the servers where the resources are
-offered.
-
-Magnum deploys a Mesos cluster using parameters defined in the ClusterTemplate
-and specified on the 'cluster-create' command, for example::
-
-    openstack coe cluster template create mesos-cluster-template \
-        --image ubuntu-mesos \
-        --keypair testkey \
-        --external-network public \
-        --dns-nameserver 8.8.8.8 \
-        --flavor m1.small \
-        --coe mesos
-
-    openstack coe cluster create mesos-cluster \
-        --cluster-template mesos-cluster-template \
-        --master-count 3 \
-        --node-count 8
-
-Refer to the `ClusterTemplate`_ and `Cluster`_ sections for the full list of
-parameters. Following are further details relevant to Mesos:
-
-What runs on the servers
-  There are two types of servers in the Mesos cluster: masters and slaves.
-  The Docker daemon runs on all servers. On the servers for master,
-  the Mesos master is run as a process on port 5050 and this is
-  initiated by the upstart service 'mesos-master'. Zookeeper is also
-  run on the master servers, initiated by the upstart service
-  'zookeeper'. Zookeeper is used by the master servers for electing
-  the leader among the masters, and by the slave servers and
-  frameworks to determine the current leader. The framework Marathon
-  is run as a process on port 8080 on the master servers, initiated by
-  the upstart service 'marathon'. On the servers for slave, the Mesos
-  slave is run as a process initiated by the upstart service
-  'mesos-slave'.
-
-Number of masters (master-count)
-  Specified in the cluster-create command to indicate how many servers
-  will run as masters in the cluster. Having more than one will provide
-  high availability. If the load balancer option is specified, the
-  masters will be in a load balancer pool and the load balancer
-  virtual IP address (VIP) will serve as the Mesos API endpoint. A
-  floating IP associated with the load balancer VIP will serve as the
-  external Mesos API endpoint.
-
-Number of agents (node-count)
-  Specified in the cluster-create command to indicate how many servers
-  will run as Mesos slaves in the cluster. The Docker daemon is run locally
-  to host containers from users. The slaves report their available
-  resources to the master and accept requests from the master to deploy
-  tasks from the frameworks. In this case, the tasks will be to
-  run Docker containers.
-
-Network driver (network-driver)
-  Specified in the ClusterTemplate to select the network driver. Currently
-  'docker' is the only supported driver: containers are connected to
-  the 'docker0' bridge on each node and are assigned local IP addresses.
-  Refer to the `Networking`_ section for more details.
-
-Volume driver (volume-driver)
-  Specified in the ClusterTemplate to select the volume driver to provide
-  persistent storage for containers. The supported volume driver is
-  'rexray'. The default is no volume driver. When 'rexray' or another
-  volume driver is deployed, you can use the Docker 'volume' command to
-  create, mount, unmount, and delete volumes in containers. Cinder block
-  storage is used as the backend to support this feature.
-  Refer to the `Storage`_ section for more details.
-
-Storage driver (docker-storage-driver)
-  This is currently not supported for Mesos.
-
-Image (image)
-  Specified in the ClusterTemplate to indicate the image to boot the servers
-  for the Mesos master and slave. The image binary is loaded in
-  Glance with the attribute 'os_distro = ubuntu'. You can download
-  the `ready-built image
-  <https://fedorapeople.org/groups/magnum/ubuntu-mesos-latest.qcow2>`_,
-  or you can create the image as described below in the `Building
-  Mesos image`_ section.
-
-TLS (tls-disabled)
-  Transport Layer Security is not yet implemented for Mesos.
-
-Log into the servers
-  You can log into the manager and node servers with the account
-  'ubuntu' and the keypair specified in the ClusterTemplate.
-
-In addition to the common attributes in the baymodel, you can specify
-the following attributes that are specific to Mesos by using the
-labels attribute.
 
 _`rexray_preempt`
   When the volume driver 'rexray' is used, you can mount a data volume
   backed by Cinder to a host to be accessed by a container. In this
@@ -2005,165 +1865,6 @@ _`rexray_preempt`
   safety for locking the volume before remounting. The default value
   is False.
 
-_`mesos_slave_isolation`
-  This label corresponds to the Mesos parameter for slave
-  '--isolation'. The isolators are needed to provide proper isolation
-  according to the runtime configurations specified in the container
-  image. For more details, refer to the `Mesos configuration
-  <http://mesos.apache.org/documentation/latest/configuration/>`_
-  and the `Mesos container image support
-  <http://mesos.apache.org/documentation/latest/container-image/>`_.
-  Valid values for this label are:
-
-  - filesystem/posix
-  - filesystem/linux
-  - filesystem/shared
-  - posix/cpu
-  - posix/mem
-  - posix/disk
-  - cgroups/cpu
-  - cgroups/mem
-  - docker/runtime
-  - namespaces/pid
-
-_`mesos_slave_image_providers`
-  This label corresponds to the Mesos parameter for agent
-  '--image_providers', which tells Mesos containerizer what
-  types of container images are allowed.
-  For more details, refer to the `Mesos configuration
-  <http://mesos.apache.org/documentation/latest/configuration/>`_ and
-  the `Mesos container image support
-  <http://mesos.apache.org/documentation/latest/container-image/>`_.
-  Valid values are:
-
-  - appc
-  - docker
-  - appc,docker
-
-_`mesos_slave_work_dir`
-  This label corresponds to the Mesos parameter '--work_dir' for slave.
-  For more details, refer to the `Mesos configuration
-  <http://mesos.apache.org/documentation/latest/configuration/>`_.
-  Valid value is a directory path to use as the work directory for
-  the framework, for example::
-
-    mesos_slave_work_dir=/tmp/mesos
-
-_`mesos_slave_executor_env_variables`
-  This label corresponds to the Mesos parameter for slave
-  '--executor_environment_variables', which passes additional
-  environment variables to the executor and subsequent tasks.
-  For more details, refer to the `Mesos configuration
-  <http://mesos.apache.org/documentation/latest/configuration/>`_.
-  Valid value is the name of a JSON file, for example::
-
-    mesos_slave_executor_env_variables=/home/ubuntu/test.json
-
-  The JSON file should contain environment variables, for example::
-
-    {
-      "PATH": "/bin:/usr/bin",
-      "LD_LIBRARY_PATH": "/usr/local/lib"
-    }
-
-  By default the executor will inherit the slave's environment
-  variables.
-
-.. _building_mesos_image:
-
-Building Mesos image
---------------------
-
-The boot image for Mesos cluster is an Ubuntu 14.04 base image with the
-following middleware pre-installed:
-
-- ``docker``
-- ``zookeeper``
-- ``mesos``
-- ``marathon``
-
-The cluster driver provides two ways to create this image, as follows.
-
-Diskimage-builder
-+++++++++++++++++
-
-To run the `diskimage-builder
-<https://docs.openstack.org/diskimage-builder/latest>`__ tool
-manually, use the provided `elements
-<https://opendev.org/openstack/magnum/src/branch/master/magnum/drivers/mesos_ubuntu_v1/image/mesos/>`__.
-Following are the typical steps to use the diskimage-builder tool on
-an Ubuntu server::
-
-    $ sudo apt-get update
-    $ sudo apt-get install git qemu-utils python-pip
-    $ sudo pip install diskimage-builder
-
-    $ git clone https://opendev.org/openstack/magnum
-    $ git clone https://opendev.org/openstack/dib-utils.git
-    $ git clone https://opendev.org/openstack/tripleo-image-elements.git
-    $ git clone https://opendev.org/openstack/heat-templates.git
-    $ export PATH="${PWD}/dib-utils/bin:$PATH"
-    $ export ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements:magnum/magnum/drivers/mesos_ubuntu_v1/image/mesos
-    $ export DIB_RELEASE=trusty
-
-    $ disk-image-create ubuntu vm docker mesos \
-        os-collect-config os-refresh-config os-apply-config \
-        heat-config heat-config-script \
-        -o ubuntu-mesos.qcow2
-
-Dockerfile
-++++++++++
-
-To build the image as above but within a Docker container, use the
-provided `Dockerfile
-<https://opendev.org/openstack/magnum/src/branch/master/magnum/drivers/mesos_ubuntu_v1/image/Dockerfile>`__.
-The output image will be saved as '/tmp/ubuntu-mesos.qcow2'.
-Following are the typical steps to run a Docker container to build the image::
-
-    $ git clone https://opendev.org/openstack/magnum
-    $ cd magnum/magnum/drivers/mesos_ubuntu_v1/image
-    $ sudo docker build -t magnum/mesos-builder .
-    $ sudo docker run -v /tmp:/output --rm -ti --privileged magnum/mesos-builder
-    ...
-    Image file /output/ubuntu-mesos.qcow2 created...
-
-Using Marathon
---------------
-
-Marathon is a Mesos framework for long running applications. Docker
-containers can be deployed via Marathon's REST API. To get the
-endpoint for Marathon, run the cluster-show command and look for the
-property 'api_address'. Marathon's endpoint is port 8080 on this IP
-address, so the web console can be accessed at::
-
-    http://<api_address>:8080/
-
-Refer to Marathon documentation for details on running applications.
-For example, you can 'post' a JSON app description to
-``http://<api_address>:8080/apps`` to deploy a Docker container::
-
-    $ cat > app.json << END
-    {
-      "container": {
-        "type": "DOCKER",
-        "docker": {
-          "image": "libmesos/ubuntu"
-        }
-      },
-      "id": "ubuntu",
-      "instances": 1,
-      "cpus": 0.5,
-      "mem": 512,
-      "uris": [],
-      "cmd": "while sleep 10; do date -u +%T; done"
-    }
-    END
-    $ API_ADDRESS=$(openstack coe cluster show mesos-cluster | awk '/ api_address /{print $4}')
-    $ curl -X POST -H "Content-Type: application/json" \
-        http://${API_ADDRESS}:8080/v2/apps -d@app.json
 .. _transport_layer_security:
 
 Transport Layer Security
@@ -2205,8 +1906,6 @@ Current TLS support is summarized below:
 +------------+-------------+
 | Swarm      | yes         |
 +------------+-------------+
-| Mesos      | no          |
-+------------+-------------+
 
 For cluster type with TLS support, e.g. Kubernetes and Swarm, TLS is
 enabled by default. To disable TLS in Magnum, you can specify the
@@ -2657,18 +2356,18 @@ network-driver
   The network driver name for instantiating container networks.
   Currently, the following network drivers are supported:
 
-  +--------+-------------+-------------+-------------+
-  | Driver | Kubernetes  | Swarm       | Mesos       |
-  +========+=============+=============+=============+
-  | Flannel| supported   | supported   | unsupported |
-  +--------+-------------+-------------+-------------+
-  | Docker | unsupported | supported   | supported   |
-  +--------+-------------+-------------+-------------+
-  | Calico | supported   | unsupported | unsupported |
-  +--------+-------------+-------------+-------------+
+  +--------+-------------+-------------+
+  | Driver | Kubernetes  | Swarm       |
+  +========+=============+=============+
+  | Flannel| supported   | supported   |
+  +--------+-------------+-------------+
+  | Docker | unsupported | supported   |
+  +--------+-------------+-------------+
+  | Calico | supported   | unsupported |
+  +--------+-------------+-------------+
 
   If not specified, the default driver is Flannel for Kubernetes, and
-  Docker for Swarm and Mesos.
+  Docker for Swarm.
 
   Particular network driver may require its own set of parameters for
   configuration, and these parameters are specified through the labels
@@ -2848,14 +2547,7 @@ manually set the number of instances of a container. For Swarm
 version 1.12 and later, services can also be scaled manually through
 the command `docker service scale
 <https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/>`_.
-Automatic scaling for Swarm is not yet available. Mesos manages the
-resources and does not support scaling directly; instead, this is
-provided by frameworks running within Mesos. With the Marathon
-framework currently supported in the Mesos cluster, you can use the
-`scale operation
-<https://mesosphere.github.io/marathon/docs/application-basics.html>`_
-on the Marathon UI or through a REST API call to manually set the
-attribute 'instance' for a container.
+Automatic scaling for Swarm is not yet available.
 
 Scaling the cluster nodes involves managing the number of nodes in the
 cluster by adding more nodes or removing nodes. There is no direct
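As a usage sketch of the 'docker service scale' command referenced above, with the service name assumed for illustration::

   $ docker service scale web=5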
@@ -2907,21 +2599,6 @@ Swarm
    the node_count, a node will be chosen by magnum without
    consideration of what containers are running on the selected node.
 
-Mesos
-   Magnum scans the running tasks on Marathon server to determine the
-   nodes on which there is *no* task running (empty nodes). If the
-   number of nodes to be removed is equal or less than the number of
-   these empty nodes, these nodes will be removed from the cluster.
-   If the number of nodes to be removed is larger than the number of
-   empty nodes, a warning message will be sent to the Magnum log and
-   the empty nodes along with additional nodes will be removed from the
-   cluster. The additional nodes are selected randomly and the containers
-   running on them will be deleted without warning. Note that even when
-   only the empty nodes are removed, there is no guarantee that no
-   container will be deleted because there is no locking to ensure that
-   Mesos will not launch new containers on these nodes after Magnum
-   has scanned the tasks.
-
 Currently, scaling containers and scaling cluster nodes are handled
 separately, but in many use cases, there are interactions between the
@@ -3011,7 +2688,7 @@ driver to perform the actual work. Then this volume can be mounted
 when a container is created. A number of third-party volume drivers
 support OpenStack Cinder as the backend, for example Rexray and
 Flocker. Magnum currently supports Rexray as the volume driver for
-Swarm and Mesos. Other drivers are being considered.
+Swarm. Other drivers are being considered.
 
 Kubernetes allows a previously created Cinder block to be mounted to
 a pod and this is done by specifying the block ID in the pod YAML file.
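A sketch of such a pod file using the legacy in-tree Cinder volume plugin; the pod name, image and volume ID are illustrative assumptions::

   $ cat > nginx-cinder.yaml << END
   apiVersion: v1
   kind: Pod
   metadata:
     name: web
   spec:
     containers:
       - name: web
         image: nginx
         volumeMounts:
           - name: html-volume
             mountPath: "/usr/share/nginx/html"
     volumes:
       - name: html-volume
         cinder:
           volumeID: 573e024d-5235-49ce-8332-be1576d323f8
           fsType: ext4
   END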
@@ -3027,13 +2704,13 @@ Magnum supports these features to use Cinder as persistent storage
 using the ClusterTemplate attribute 'volume-driver' and the support matrix
 for the COE types is summarized as follows:
 
-+--------+-------------+-------------+-------------+
-| Driver | Kubernetes  | Swarm       | Mesos       |
-+========+=============+=============+=============+
-| cinder | supported   | unsupported | unsupported |
-+--------+-------------+-------------+-------------+
-| rexray | unsupported | supported   | supported   |
-+--------+-------------+-------------+-------------+
++--------+-------------+-------------+
+| Driver | Kubernetes  | Swarm       |
++========+=============+=============+
+| cinder | supported   | unsupported |
++--------+-------------+-------------+
+| rexray | unsupported | supported   |
++--------+-------------+-------------+
 
 Following are some examples for using Cinder as persistent storage.
@@ -3128,85 +2805,6 @@ Using Cinder in Swarm
 
 *To be filled in*
 
-Using Cinder in Mesos
-+++++++++++++++++++++
-
-1. Create the ClusterTemplate.
-
-   Specify 'rexray' as the volume-driver for Mesos. As an option, you
-   can specify in a label the attribute 'rexray_preempt' to enable
-   any host to take control of a volume regardless of whether other
-   hosts are using the volume. If this is set to false, the driver
-   will ensure data safety by locking the volume::
-
-     openstack coe cluster template create mesos-cluster-template \
-         --image ubuntu-mesos \
-         --keypair testkey \
-         --external-network public \
-         --dns-nameserver 8.8.8.8 \
-         --master-flavor m1.magnum \
-         --docker-volume-size 4 \
-         --tls-disabled \
-         --flavor m1.magnum \
-         --coe mesos \
-         --volume-driver rexray \
-         --labels rexray-preempt=true
-
-2. Create the Mesos cluster::
-
-     openstack coe cluster create mesos-cluster \
-         --cluster-template mesos-cluster-template \
-         --node-count 1
-
-3. Create the cinder volume and configure this cluster::
-
-     cinder create --display-name=redisdata 1
-
-   Create the following file::
-
-     cat > mesos.json << END
-     {
-       "id": "redis",
-       "container": {
-         "docker": {
-           "image": "redis",
-           "network": "BRIDGE",
-           "portMappings": [
-             { "containerPort": 80, "hostPort": 0, "protocol": "tcp"}
-           ],
-           "parameters": [
-             { "key": "volume-driver", "value": "rexray" },
-             { "key": "volume", "value": "redisdata:/data" }
-           ]
-         }
-       },
-       "cpus": 0.2,
-       "mem": 32.0,
-       "instances": 1
-     }
-     END
-
-   **NOTE:** When the Mesos cluster is created using this ClusterTemplate, the
-   Mesos cluster will be configured so that a filesystem on an existing cinder
-   volume can be mounted in a container by configuring the parameters to mount
-   the cinder volume in the JSON file::
-
-     "parameters": [
-       { "key": "volume-driver", "value": "rexray" },
-       { "key": "volume", "value": "redisdata:/data" }
-     ]
-
-4. Create the container using the Marathon REST API::
-
-     MASTER_IP=$(openstack coe cluster show mesos-cluster | awk '/ api_address /{print $4}')
-     curl -X POST -H "Content-Type: application/json" \
-       http://${MASTER_IP}:8080/v2/apps -d@mesos.json
-
-   You can log into the container to check that the mountPath exists, and
-   you can run the command 'cinder list' to verify that your cinder
-   volume status is 'in-use'.
 
 Image Management
 ================
@@ -3214,7 +2812,7 @@ When a COE is deployed, an image from Glance is used to boot the nodes
 in the cluster and then the software will be configured and started on
 the nodes to bring up the full cluster. An image is based on a
 particular distro such as Fedora, Ubuntu, etc, and is prebuilt with
-the software specific to the COE such as Kubernetes, Swarm, Mesos.
+the software specific to the COE such as Kubernetes and Swarm.
 The image is tightly coupled with the following in Magnum:
 
 1. Heat templates to orchestrate the configuration.
@@ -3280,25 +2878,6 @@ This image can be downloaded from the `public Atomic site
 or can be built locally using diskimagebuilder.
 The login for this image is *fedora*.
 
-Mesos on Ubuntu
----------------
-
-This image is built manually using diskimagebuilder. The instructions are
-provided in the section `Diskimage-builder`_.
-The Fedora site hosts the current image `ubuntu-mesos-latest.qcow2
-<https://fedorapeople.org/groups/magnum/ubuntu-mesos-latest.qcow2>`_.
-
-+-------------+-----------+
-| OS/software | version   |
-+=============+===========+
-| Ubuntu      | 14.04     |
-+-------------+-----------+
-| Docker      | 1.8.1     |
-+-------------+-----------+
-| Mesos       | 0.25.0    |
-+-------------+-----------+
-| Marathon    | 0.11.1    |
-+-------------+-----------+
 
 Notification
 ============