Drop mesos driver

The mesos COE has not been maintained for quite some
time and hasn't received much attention from the community
in general. As discussed on the mailing list [1], we
are dropping it for now.

In this patch, we start by removing the mesos driver
and its test cases. This part of the code has no impact
on other drivers. We can then clean up the mesos references
that affect the API.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026230.html

Conflicts:
	lower-constraints.txt
	tox.ini

Change-Id: Ied76095f1f1c57c6af93d1a6094baa6c7cc31c9b
guilhermesteinmuller 2021-12-09 10:27:10 -03:00 committed by Jake Yip
parent 49804076b1
commit d3d28594b3
74 changed files with 4 additions and 5275 deletions


@ -1,103 +0,0 @@
How to build a centos image which contains DC/OS 1.8.x
======================================================
Here is the advanced DC/OS 1.8 installation guide.
See [Advanced DC/OS Installation Guide]
(https://dcos.io/docs/1.8/administration/installing/custom/advanced/)
See [Install Docker on CentOS]
(https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/)
See [Adding agent nodes]
(https://dcos.io/docs/1.8/administration/installing/custom/add-a-node/)
Create a centos image using DIB following the steps outlined in DC/OS installation guide.
1. Install and configure docker in chroot.
2. Install system requirements in chroot.
3. Download `dcos_generate_config.sh` outside chroot.
This file will be used to run `dcos_generate_config.sh --genconf` to generate
config files on the node during magnum cluster creation.
4. Some configuration changes are required for DC/OS, i.e., disabling firewalld
and adding a group named nogroup.
See the comments in the script file, and the short sketch below.
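For illustration only, a minimal sketch of those changes on a systemd-based CentOS 7 host
(the element scripts removed later in this change are the authoritative version):
    sudo systemctl stop firewalld
    sudo systemctl disable firewalld
    sudo groupadd nogroup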
Use the centos image to build a DC/OS cluster.
Command:
`magnum cluster-template-create`
`magnum cluster-create`
After all the instances with the centos image are created:
1. Pass parameters to config.yaml with magnum cluster template properties.
2. Run `dcos_generate_config.sh --genconf` to generate config files.
3. Run `dcos_install.sh master` on the master nodes and `dcos_install.sh slave` on the slave nodes (see the sketch below).
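For illustration, a minimal sketch of steps 2 and 3 as they run on a node, using the
genconf layout from the configure script removed later in this change:
    cd /opt/dcos
    sudo bash dcos_generate_config.sh --genconf
    sudo bash genconf/serve/dcos_install.sh master   # or: slave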
If we want to scale the DC/OS cluster:
Command:
`magnum cluster-update`
The steps are the same as for cluster creation:
1. Create new instances, generate config files on them and install.
2. Or delete agent nodes where no containers are running.
How to use magnum dcos coe
===============================================
We are assuming that magnum has been installed and the magnum path is `/opt/stack/magnum`.
1. Copy the dcos magnum coe source code
$ cp -r /opt/stack/magnum/contrib/drivers/dcos_centos_v1 /opt/stack/magnum/magnum/drivers/
$ cp /opt/stack/magnum/contrib/drivers/common/dcos_* /opt/stack/magnum/magnum/drivers/common/
$ cd /opt/stack/magnum
$ sudo python setup.py install
2. Add driver in setup.cfg
dcos_centos_v1 = magnum.drivers.dcos_centos_v1.driver:Driver
3. Restart your magnum services.
4. Prepare centos image with elements dcos and docker installed
See how to build a centos image in /opt/stack/magnum/magnum/drivers/dcos_centos_v1/image/README.md
5. Create glance image
$ openstack image create centos-7-dcos.qcow2 \
--public \
--disk-format=qcow2 \
--container-format=bare \
--property os_distro=centos \
--file=centos-7-dcos.qcow2
6. Create magnum cluster template
Configure DC/OS cluster with --labels
See https://dcos.io/docs/1.8/administration/installing/custom/configuration-parameters/
$ magnum cluster-template-create --name dcos-cluster-template \
--image-id centos-7-dcos.qcow2 \
--keypair-id testkey \
--external-network-id public \
--dns-nameserver 8.8.8.8 \
--flavor-id m1.medium \
--labels oauth_enabled=false \
--coe dcos
Here is an example that specifies the overlay network in DC/OS;
'dcos_overlay_network' should be in JSON string format.
$ magnum cluster-template-create --name dcos-cluster-template \
--image-id centos-7-dcos.qcow2 \
--keypair-id testkey \
--external-network-id public \
--dns-nameserver 8.8.8.8 \
--flavor-id m1.medium \
--labels oauth_enabled=false \
--labels dcos_overlay_enable='true' \
--labels dcos_overlay_config_attempts='6' \
--labels dcos_overlay_mtu='9001' \
--labels dcos_overlay_network='{"vtep_subnet": "44.128.0.0/20",\
"vtep_mac_oui": "70:B3:D5:00:00:00","overlays":\
[{"name": "dcos","subnet": "9.0.0.0/8","prefix": 26}]}' \
--coe dcos
7. Create magnum cluster
$ magnum cluster-create --name dcos-cluster --cluster-template dcos-cluster-template --node-count 1
8. You need to wait a while after the magnum cluster creation completes for the
DC/OS web interface to become accessible, for example:
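A simple, hypothetical readiness check against the master address:
    until curl -sf -o /dev/null http://<master-address>/; do
        echo "waiting for the DC/OS web interface..."
        sleep 30
    done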


@ -1,36 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.drivers.dcos_centos_v1 import monitor
from magnum.drivers.dcos_centos_v1.scale_manager import DcosScaleManager
from magnum.drivers.dcos_centos_v1 import template_def
from magnum.drivers.heat import driver
class Driver(driver.HeatDriver):
@property
def provides(self):
return [
{'server_type': 'vm',
'os': 'centos',
'coe': 'dcos'},
]
def get_template_definition(self):
return template_def.DcosCentosVMTemplateDefinition()
def get_monitor(self, context, cluster):
return monitor.DcosMonitor(context, cluster)
def get_scale_manager(self, context, osclient, cluster):
return DcosScaleManager(context, osclient, cluster)


@ -1,86 +0,0 @@
=============
centos-dcos
=============
This directory contains `[diskimage-builder](https://github.com/openstack/diskimage-builder)`
elements to build a centos image which contains DC/OS.
Pre-requisites to run diskimage-builder
---------------------------------------
For diskimage-builder to work, the following packages need to be
present:
* kpartx
* qemu-utils
* curl
* xfsprogs
* yum
* yum-utils
* git
For Debian/Ubuntu systems, use::
apt-get install kpartx qemu-utils curl xfsprogs yum yum-utils git
For CentOS and Fedora < 22, use::
yum install kpartx qemu-utils curl xfsprogs yum yum-utils git
For Fedora >= 22, use::
dnf install kpartx @virtualization curl xfsprogs yum yum-utils git
How to generate Centos image with DC/OS 1.8.x
---------------------------------------------
1. Download and export element path
git clone https://git.openstack.org/openstack/magnum
git clone https://git.openstack.org/openstack/diskimage-builder.git
git clone https://git.openstack.org/openstack/dib-utils.git
git clone https://git.openstack.org/openstack/tripleo-image-elements.git
git clone https://git.openstack.org/openstack/heat-templates.git
export PATH="${PWD}/diskimage-builder/bin:$PATH"
export PATH="${PWD}/dib-utils/bin:$PATH"
export ELEMENTS_PATH=magnum/contrib/drivers/dcos_centos_v1/image
export ELEMENTS_PATH=${ELEMENTS_PATH}:diskimage-builder/elements
export ELEMENTS_PATH=${ELEMENTS_PATH}:tripleo-image-elements/elements:heat-templates/hot/software-config/elements
2. Export environment path of the url to download dcos_generate_config.sh
This default download url is for DC/OS 1.8.4
export DCOS_GENERATE_CONFIG_SRC=https://downloads.dcos.io/dcos/stable/commit/e64024af95b62c632c90b9063ed06296fcf38ea5/dcos_generate_config.sh
Or specify local file path
export DCOS_GENERATE_CONFIG_SRC=`pwd`/dcos_generate_config.sh
3. Set file system type to `xfs`
Only XFS is currently supported for overlay.
See https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/#recommendations
export FS_TYPE=xfs
4. Create image
disk-image-create \
centos7 vm docker dcos selinux-permissive \
os-collect-config os-refresh-config os-apply-config \
heat-config heat-config-script \
-o centos-7-dcos.qcow2
5. (Optional) Create user image for bare metal node
Create with elements dhcp-all-interfaces and devuser
export DIB_DEV_USER_USERNAME=centos
export DIB_DEV_USER_PWDLESS_SUDO=YES
disk-image-create \
centos7 vm docker dcos selinux-permissive dhcp-all-interfaces devuser \
os-collect-config os-refresh-config os-apply-config \
heat-config heat-config-script \
-o centos-7-dcos-bm.qcow2
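As a quick sanity check of the resulting image before uploading it to glance
(qemu-img comes with the qemu-utils package required above), for example:
    qemu-img info centos-7-dcos.qcow2
    qemu-img check -q centos-7-dcos.qcow2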


@ -1,2 +0,0 @@
package-installs
docker


@ -1,5 +0,0 @@
# Specify download url, default DC/OS version 1.8.4
export DCOS_GENERATE_CONFIG_SRC=${DCOS_GENERATE_CONFIG_SRC:-https://downloads.dcos.io/dcos/stable/commit/e64024af95b62c632c90b9063ed06296fcf38ea5/dcos_generate_config.sh}
# or local file path
# export DCOS_GENERATE_CONFIG_SRC=${DCOS_GENERATE_CONFIG_SRC:-${PWD}/dcos_generate_config.sh}


@ -1,23 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# This script is used to download dcos_generate_config.sh outside the chroot.
# This is essential because dcos_generate_config.sh is more than 700M,
# so we download it into the image in advance.
sudo mkdir -p $TMP_MOUNT_PATH/opt/dcos
if [ -f $DCOS_GENERATE_CONFIG_SRC ]; then
# If $DCOS_GENERATE_CONFIG_SRC is a file path, copy the file
sudo cp $DCOS_GENERATE_CONFIG_SRC $TMP_MOUNT_PATH/opt/dcos
else
# If $DCOS_GENERATE_CONFIG_SRC is a url, download it
# Please make sure curl is installed on your host environment
cd $TMP_MOUNT_PATH/opt/dcos
sudo -E curl -O $DCOS_GENERATE_CONFIG_SRC
fi


@ -1,6 +0,0 @@
tar:
xz:
unzip:
curl:
ipset:
ntp:


@ -1,10 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# nogroup will be used on Mesos masters and agents.
sudo groupadd nogroup


@ -1,9 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
sudo systemctl enable ntpd


@ -1 +0,0 @@
package-installs


@ -1,24 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Install the Docker engine, daemon, and service.
#
# The supported versions of Docker are:
# 1.7.x
# 1.8.x
# 1.9.x
# 1.10.x
# 1.11.x
# Docker 1.12.x is NOT supported.
# Docker 1.9.x - 1.11.x is recommended for stability reasons.
# https://github.com/docker/docker/issues/9718
#
# See the DC/OS installation guide for details
# https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/
#
sudo -E yum install -y docker-engine-1.11.2


@ -1,9 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
sudo systemctl enable docker


@ -1,26 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Upgrade CentOS to 7.2
sudo -E yum upgrade --assumeyes --tolerant
sudo -E yum update --assumeyes
# Verify that the kernel is at least 3.10
function version_gt() { test "$(echo "$@" | tr " " "\n" | sort -V | head -n 1)" != "$1"; }
kernel_version=`uname -r | cut --bytes=1-4`
expect_version=3.10
if version_gt $expect_version $kernel_version; then
echo "Error: kernel version at least $expect_version, current version $kernel_version"
exit 1
fi
# Enable OverlayFS
sudo tee /etc/modules-load.d/overlay.conf <<-'EOF'
overlay
EOF
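A quick check that the module is available (the modules-load.d entry above only takes
effect on the next boot; modprobe loads it immediately), as an illustrative example:
    sudo modprobe overlay
    lsmod | grep overlay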


@ -1,33 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Configure yum to use the Docker yum repo
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
# Configure systemd to run the Docker Daemon with OverlayFS
# Manage Docker on CentOS with systemd.
# systemd handles starting Docker on boot and restarting it when it crashes.
#
# Docker 1.11.x will be installed, so the issue with Docker 1.12.x on CentOS 7
# won't happen.
# https://github.com/docker/docker/issues/22847
# https://github.com/docker/docker/issues/25098
#
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<- 'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --storage-driver=overlay -H fd://
EOF
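To verify the override once the image boots, something like the following should
report the overlay storage driver (illustrative only):
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    docker info | grep -i 'storage driver'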


@ -1,25 +0,0 @@
#!/bin/bash
# This script installs all needed dependencies to generate
# images using diskimage-builder. Please note it has only been
# tested on Ubuntu Xenial.
set -eux
set -o pipefail
sudo apt update || true
sudo apt install -y \
git \
qemu-utils \
python-dev \
python-yaml \
python-six \
uuid-runtime \
curl \
sudo \
kpartx \
parted \
wget \
xfsprogs \
yum \
yum-utils


@ -1,35 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2016 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
# check that image is valid
qemu-img check -q $1
# validate estimated size
FILESIZE=$(stat -c%s "$1")
MIN_SIZE=1231028224 # 1.15GB
MAX_SIZE=1335885824 # 1.25GB
if [ $FILESIZE -lt $MIN_SIZE ] ; then
echo "Error: generated image size is lower than expected."
exit 1
fi
if [ $FILESIZE -gt $MAX_SIZE ] ; then
echo "Error: generated image size is higher than expected."
exit 1
fi
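Example usage, assuming the script is saved as validate_image.sh (a hypothetical
name; it takes the image path as its only argument):
    bash validate_image.sh centos-7-dcos.qcow2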


@ -1,74 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_serialization import jsonutils
from magnum.common import urlfetch
from magnum.conductor import monitors
class DcosMonitor(monitors.MonitorBase):
def __init__(self, context, cluster):
super(DcosMonitor, self).__init__(context, cluster)
self.data = {}
@property
def metrics_spec(self):
return {
'memory_util': {
'unit': '%',
'func': 'compute_memory_util',
},
'cpu_util': {
'unit': '%',
'func': 'compute_cpu_util',
},
}
# See https://github.com/dcos/adminrouter#ports-summary
# Use http://<mesos-master>/mesos/ instead of http://<mesos-master>:5050
def _build_url(self, url, protocol='http', server_name='mesos', path='/'):
return protocol + '://' + url + '/' + server_name + path
def _is_leader(self, state):
return state['leader'] == state['pid']
def pull_data(self):
self.data['mem_total'] = 0
self.data['mem_used'] = 0
self.data['cpu_total'] = 0
self.data['cpu_used'] = 0
for master_addr in self.cluster.master_addresses:
mesos_master_url = self._build_url(master_addr,
server_name='mesos',
path='/state')
master = jsonutils.loads(urlfetch.get(mesos_master_url))
if self._is_leader(master):
for slave in master['slaves']:
self.data['mem_total'] += slave['resources']['mem']
self.data['mem_used'] += slave['used_resources']['mem']
self.data['cpu_total'] += slave['resources']['cpus']
self.data['cpu_used'] += slave['used_resources']['cpus']
break
def compute_memory_util(self):
if self.data['mem_total'] == 0 or self.data['mem_used'] == 0:
return 0
else:
return self.data['mem_used'] * 100 / self.data['mem_total']
def compute_cpu_util(self):
if self.data['cpu_total'] == 0 or self.data['cpu_used'] == 0:
return 0
else:
return self.data['cpu_used'] * 100 / self.data['cpu_total']
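For illustration, the state endpoint this monitor polls through Admin Router can
also be inspected by hand (the master address and the use of jq are assumptions):
    curl -s http://<master-address>/mesos/state | jq '.slaves[].resources'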


@ -1,29 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.conductor.scale_manager import ScaleManager
from marathon import MarathonClient
class DcosScaleManager(ScaleManager):
def __init__(self, context, osclient, cluster):
super(DcosScaleManager, self).__init__(context, osclient, cluster)
def _get_hosts_with_container(self, context, cluster):
marathon_client = MarathonClient(
'http://' + cluster.api_address + '/marathon/')
hosts = set()
for task in marathon_client.list_tasks():
hosts.add(task.host)
return hosts
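For illustration, the Marathon endpoint behind this scale manager can be queried
directly (the cluster API address is an assumption):
    curl -s http://<api-address>/marathon/v2/tasks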


@ -1,28 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from magnum.drivers.heat import dcos_centos_template_def as dctd
class DcosCentosVMTemplateDefinition(dctd.DcosCentosTemplateDefinition):
"""DC/OS template for Centos VM."""
@property
def driver_module_path(self):
return __name__[:__name__.rindex('.')]
@property
def template_path(self):
return os.path.join(os.path.dirname(os.path.realpath(__file__)),
'templates/dcoscluster.yaml')


@ -1,679 +0,0 @@
heat_template_version: 2014-10-16
description: >
This template will boot a DC/OS cluster with one or more masters
(as specified by number_of_masters, default is 1) and one or more slaves
(as specified by the number_of_slaves parameter, which
defaults to 1).
parameters:
cluster_name:
type: string
description: human readable name for the DC/OS cluster
default: my-cluster
number_of_masters:
type: number
description: how many DC/OS masters to spawn initially
default: 1
# In DC/OS, there are two types of slave nodes, public and private.
# Public slave nodes have external access and private slave nodes don't.
# Magnum only supports one type of slave node, and we decided not to modify the
# cluster template properties, so slave nodes are created as private agents.
number_of_slaves:
type: number
description: how many DC/OS agents or slaves to spawn initially
default: 1
master_flavor:
type: string
default: m1.medium
description: flavor to use when booting the master servers
slave_flavor:
type: string
default: m1.medium
description: flavor to use when booting the slave servers
server_image:
type: string
default: centos-dcos
description: glance image used to boot the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
ssh_public_key:
type: string
description: The public ssh key to add in all nodes
default: ""
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
default: public
fixed_network:
type: string
description: uuid/name of an existing network to use to provision machines
default: ""
fixed_subnet:
type: string
description: uuid/name of an existing subnet to use to provision machines
default: ""
fixed_subnet_cidr:
type: string
description: network range for fixed ip network
default: 10.0.0.0/24
dns_nameserver:
type: comma_delimited_list
description: address of a dns nameserver reachable in your environment
http_proxy:
type: string
description: http proxy address for docker
default: ""
https_proxy:
type: string
description: https proxy address for docker
default: ""
no_proxy:
type: string
description: no proxies for docker
default: ""
######################################################################
#
# Rexray Configuration
#
trustee_domain_id:
type: string
description: domain id of the trustee
default: ""
trustee_user_id:
type: string
description: user id of the trustee
default: ""
trustee_username:
type: string
description: username of the trustee
default: ""
trustee_password:
type: string
description: password of the trustee
default: ""
hidden: true
trust_id:
type: string
description: id of the trust which is used by the trustee
default: ""
hidden: true
######################################################################
#
# Rexray Configuration
#
volume_driver:
type: string
description: volume driver to use for container storage
default: ""
username:
type: string
description: user name
tenant_name:
type: string
description: >
tenant_name is used to isolate access to cloud resources
domain_name:
type: string
description: >
domain is to define the administrative boundaries for management
of Keystone entities
region_name:
type: string
description: a logically separate section of the cluster
rexray_preempt:
type: string
description: >
enables any host to take control of a volume irrespective of whether
other hosts are using the volume
default: "false"
auth_url:
type: string
description: url for keystone
slaves_to_remove:
type: comma_delimited_list
description: >
List of slaves to be removed when doing an update. Individual slave may
be referenced several ways: (1) The resource name (e.g.['1', '3']),
(2) The private IP address ['10.0.0.4', '10.0.0.6']. Note: the list should
be empty when doing a create.
default: []
wait_condition_timeout:
type: number
description: >
timeout for the Wait Conditions
default: 6000
password:
type: string
description: >
user password, not set in current implementation, only used to
fill in for DC/OS config file
default:
password
hidden: true
######################################################################
#
# DC/OS parameters
#
# cluster_name
exhibitor_storage_backend:
type: string
default: "static"
exhibitor_zk_hosts:
type: string
default: ""
exhibitor_zk_path:
type: string
default: ""
aws_access_key_id:
type: string
default: ""
aws_region:
type: string
default: ""
aws_secret_access_key:
type: string
default: ""
exhibitor_explicit_keys:
type: string
default: ""
s3_bucket:
type: string
default: ""
s3_prefix:
type: string
default: ""
exhibitor_azure_account_name:
type: string
default: ""
exhibitor_azure_account_key:
type: string
default: ""
exhibitor_azure_prefix:
type: string
default: ""
# master_discovery default set to "static"
# If --master-lb-enabled is specified,
# master_discovery will be set to "master_http_loadbalancer"
master_discovery:
type: string
default: "static"
# master_list
# exhibitor_address
# num_masters
####################################################
# Networking
dcos_overlay_enable:
type: string
default: ""
constraints:
- allowed_values:
- "true"
- "false"
- ""
dcos_overlay_config_attempts:
type: string
default: ""
dcos_overlay_mtu:
type: string
default: ""
dcos_overlay_network:
type: string
default: ""
dns_search:
type: string
description: >
This parameter specifies a space-separated list of domains that
are tried when an unqualified domain is entered
default: ""
# resolvers
# use_proxy
####################################################
# Performance and Tuning
check_time:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
docker_remove_delay:
type: number
default: 1
gc_delay:
type: number
default: 2
log_directory:
type: string
default: "/genconf/logs"
process_timeout:
type: number
default: 120
####################################################
# Security And Authentication
oauth_enabled:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
telemetry_enabled:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
resources:
######################################################################
#
# network resources. allocate a network and router for our server.
#
network:
type: ../../common/templates/network.yaml
properties:
existing_network: {get_param: fixed_network}
existing_subnet: {get_param: fixed_subnet}
private_network_cidr: {get_param: fixed_subnet_cidr}
dns_nameserver: {get_param: dns_nameserver}
external_network: {get_param: external_network}
api_lb:
type: lb.yaml
properties:
fixed_subnet: {get_attr: [network, fixed_subnet]}
external_network: {get_param: external_network}
######################################################################
#
# security groups. we need to permit network traffic of various
# sorts.
#
secgroup:
type: secgroup.yaml
######################################################################
#
# resources that expose the IPs of either the dcos master or a given
# LBaaS pool depending on whether LBaaS is enabled for the cluster.
#
api_address_lb_switch:
type: Magnum::ApiGatewaySwitcher
properties:
pool_public_ip: {get_attr: [api_lb, floating_address]}
pool_private_ip: {get_attr: [api_lb, address]}
master_public_ip: {get_attr: [dcos_masters, resource.0.dcos_master_external_ip]}
master_private_ip: {get_attr: [dcos_masters, resource.0.dcos_master_ip]}
######################################################################
#
# Master SoftwareConfig.
#
write_params_master:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/write-heat-params.sh}
inputs:
- name: HTTP_PROXY
type: String
- name: HTTPS_PROXY
type: String
- name: NO_PROXY
type: String
- name: AUTH_URL
type: String
- name: USERNAME
type: String
- name: PASSWORD
type: String
- name: TENANT_NAME
type: String
- name: VOLUME_DRIVER
type: String
- name: REGION_NAME
type: String
- name: DOMAIN_NAME
type: String
- name: REXRAY_PREEMPT
type: String
- name: CLUSTER_NAME
type: String
- name: EXHIBITOR_STORAGE_BACKEND
type: String
- name: EXHIBITOR_ZK_HOSTS
type: String
- name: EXHIBITOR_ZK_PATH
type: String
- name: AWS_ACCESS_KEY_ID
type: String
- name: AWS_REGION
type: String
- name: AWS_SECRET_ACCESS_KEY
type: String
- name: EXHIBITOR_EXPLICIT_KEYS
type: String
- name: S3_BUCKET
type: String
- name: S3_PREFIX
type: String
- name: EXHIBITOR_AZURE_ACCOUNT_NAME
type: String
- name: EXHIBITOR_AZURE_ACCOUNT_KEY
type: String
- name: EXHIBITOR_AZURE_PREFIX
type: String
- name: MASTER_DISCOVERY
type: String
- name: MASTER_LIST
type: String
- name: EXHIBITOR_ADDRESS
type: String
- name: NUM_MASTERS
type: String
- name: DCOS_OVERLAY_ENABLE
type: String
- name: DCOS_OVERLAY_CONFIG_ATTEMPTS
type: String
- name: DCOS_OVERLAY_MTU
type: String
- name: DCOS_OVERLAY_NETWORK
type: String
- name: DNS_SEARCH
type: String
- name: RESOLVERS
type: String
- name: CHECK_TIME
type: String
- name: DOCKER_REMOVE_DELAY
type: String
- name: GC_DELAY
type: String
- name: LOG_DIRECTORY
type: String
- name: PROCESS_TIMEOUT
type: String
- name: OAUTH_ENABLED
type: String
- name: TELEMETRY_ENABLED
type: String
- name: ROLES
type: String
######################################################################
#
# DC/OS configuration SoftwareConfig.
# Configuration files are rendered and injected into the instance.
#
dcos_config:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/configure-dcos.sh}
######################################################################
#
# Master SoftwareDeployment.
#
write_params_master_deployment:
type: OS::Heat::SoftwareDeploymentGroup
properties:
config: {get_resource: write_params_master}
servers: {get_attr: [dcos_masters, attributes, dcos_server_id]}
input_values:
HTTP_PROXY: {get_param: http_proxy}
HTTPS_PROXY: {get_param: https_proxy}
NO_PROXY: {get_param: no_proxy}
AUTH_URL: {get_param: auth_url}
USERNAME: {get_param: username}
PASSWORD: {get_param: password}
TENANT_NAME: {get_param: tenant_name}
VOLUME_DRIVER: {get_param: volume_driver}
REGION_NAME: {get_param: region_name}
DOMAIN_NAME: {get_param: domain_name}
REXRAY_PREEMPT: {get_param: rexray_preempt}
CLUSTER_NAME: {get_param: cluster_name}
EXHIBITOR_STORAGE_BACKEND: {get_param: exhibitor_storage_backend}
EXHIBITOR_ZK_HOSTS: {get_param: exhibitor_zk_hosts}
EXHIBITOR_ZK_PATH: {get_param: exhibitor_zk_path}
AWS_ACCESS_KEY_ID: {get_param: aws_access_key_id}
AWS_REGION: {get_param: aws_region}
AWS_SECRET_ACCESS_KEY: {get_param: aws_secret_access_key}
EXHIBITOR_EXPLICIT_KEYS: {get_param: exhibitor_explicit_keys}
S3_BUCKET: {get_param: s3_bucket}
S3_PREFIX: {get_param: s3_prefix}
EXHIBITOR_AZURE_ACCOUNT_NAME: {get_param: exhibitor_azure_account_name}
EXHIBITOR_AZURE_ACCOUNT_KEY: {get_param: exhibitor_azure_account_key}
EXHIBITOR_AZURE_PREFIX: {get_param: exhibitor_azure_prefix}
MASTER_DISCOVERY: {get_param: master_discovery}
MASTER_LIST: {list_join: [' ', {get_attr: [dcos_masters, dcos_master_ip]}]}
EXHIBITOR_ADDRESS: {get_attr: [api_lb, address]}
NUM_MASTERS: {get_param: number_of_masters}
DCOS_OVERLAY_ENABLE: {get_param: dcos_overlay_enable}
DCOS_OVERLAY_CONFIG_ATTEMPTS: {get_param: dcos_overlay_config_attempts}
DCOS_OVERLAY_MTU: {get_param: dcos_overlay_mtu}
DCOS_OVERLAY_NETWORK: {get_param: dcos_overlay_network}
DNS_SEARCH: {get_param: dns_search}
RESOLVERS: {get_param: dns_nameserver}
CHECK_TIME: {get_param: check_time}
DOCKER_REMOVE_DELAY: {get_param: docker_remove_delay}
GC_DELAY: {get_param: gc_delay}
LOG_DIRECTORY: {get_param: log_directory}
PROCESS_TIMEOUT: {get_param: process_timeout}
OAUTH_ENABLED: {get_param: oauth_enabled}
TELEMETRY_ENABLED: {get_param: telemetry_enabled}
ROLES: master
dcos_config_deployment:
type: OS::Heat::SoftwareDeploymentGroup
depends_on:
- write_params_master_deployment
properties:
config: {get_resource: dcos_config}
servers: {get_attr: [dcos_masters, attributes, dcos_server_id]}
######################################################################
#
# DC/OS masters. This is a resource group that will create
# <number_of_masters> masters.
#
dcos_masters:
type: OS::Heat::ResourceGroup
depends_on:
- network
properties:
count: {get_param: number_of_masters}
resource_def:
type: dcosmaster.yaml
properties:
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: server_image}
master_flavor: {get_param: master_flavor}
external_network: {get_param: external_network}
fixed_network: {get_attr: [network, fixed_network]}
fixed_subnet: {get_attr: [network, fixed_subnet]}
secgroup_base_id: {get_attr: [secgroup, secgroup_base_id]}
secgroup_dcos_id: {get_attr: [secgroup, secgroup_dcos_id]}
api_pool_80_id: {get_attr: [api_lb, pool_80_id]}
api_pool_443_id: {get_attr: [api_lb, pool_443_id]}
api_pool_8080_id: {get_attr: [api_lb, pool_8080_id]}
api_pool_5050_id: {get_attr: [api_lb, pool_5050_id]}
api_pool_2181_id: {get_attr: [api_lb, pool_2181_id]}
api_pool_8181_id: {get_attr: [api_lb, pool_8181_id]}
######################################################################
#
# DC/OS slaves. This is a resource group that will initially
# create <number_of_slaves> public or private slaves,
# and needs to be manually scaled.
#
dcos_slaves:
type: OS::Heat::ResourceGroup
depends_on:
- network
properties:
count: {get_param: number_of_slaves}
removal_policies: [{resource_list: {get_param: slaves_to_remove}}]
resource_def:
type: dcosslave.yaml
properties:
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: server_image}
slave_flavor: {get_param: slave_flavor}
fixed_network: {get_attr: [network, fixed_network]}
fixed_subnet: {get_attr: [network, fixed_subnet]}
external_network: {get_param: external_network}
wait_condition_timeout: {get_param: wait_condition_timeout}
secgroup_base_id: {get_attr: [secgroup, secgroup_base_id]}
# DC/OS params
auth_url: {get_param: auth_url}
username: {get_param: username}
password: {get_param: password}
tenant_name: {get_param: tenant_name}
volume_driver: {get_param: volume_driver}
region_name: {get_param: region_name}
domain_name: {get_param: domain_name}
rexray_preempt: {get_param: rexray_preempt}
http_proxy: {get_param: http_proxy}
https_proxy: {get_param: https_proxy}
no_proxy: {get_param: no_proxy}
cluster_name: {get_param: cluster_name}
exhibitor_storage_backend: {get_param: exhibitor_storage_backend}
exhibitor_zk_hosts: {get_param: exhibitor_zk_hosts}
exhibitor_zk_path: {get_param: exhibitor_zk_path}
aws_access_key_id: {get_param: aws_access_key_id}
aws_region: {get_param: aws_region}
aws_secret_access_key: {get_param: aws_secret_access_key}
exhibitor_explicit_keys: {get_param: exhibitor_explicit_keys}
s3_bucket: {get_param: s3_bucket}
s3_prefix: {get_param: s3_prefix}
exhibitor_azure_account_name: {get_param: exhibitor_azure_account_name}
exhibitor_azure_account_key: {get_param: exhibitor_azure_account_key}
exhibitor_azure_prefix: {get_param: exhibitor_azure_prefix}
master_discovery: {get_param: master_discovery}
master_list: {list_join: [' ', {get_attr: [dcos_masters, dcos_master_ip]}]}
exhibitor_address: {get_attr: [api_lb, address]}
num_masters: {get_param: number_of_masters}
dcos_overlay_enable: {get_param: dcos_overlay_enable}
dcos_overlay_config_attempts: {get_param: dcos_overlay_config_attempts}
dcos_overlay_mtu: {get_param: dcos_overlay_mtu}
dcos_overlay_network: {get_param: dcos_overlay_network}
dns_search: {get_param: dns_search}
resolvers: {get_param: dns_nameserver}
check_time: {get_param: check_time}
docker_remove_delay: {get_param: docker_remove_delay}
gc_delay: {get_param: gc_delay}
log_directory: {get_param: log_directory}
process_timeout: {get_param: process_timeout}
oauth_enabled: {get_param: oauth_enabled}
telemetry_enabled: {get_param: telemetry_enabled}
outputs:
api_address:
value: {get_attr: [api_address_lb_switch, public_ip]}
description: >
This is the API endpoint of the DC/OS master. Use this to access
the DC/OS API from outside the cluster.
dcos_master_private:
value: {get_attr: [dcos_masters, dcos_master_ip]}
description: >
This is a list of the "private" addresses of all the DC/OS masters.
dcos_master:
value: {get_attr: [dcos_masters, dcos_master_external_ip]}
description: >
This is the "public" ip address of the DC/OS master server. Use this address to
log in to the DC/OS master via ssh or to access the DC/OS API
from outside the cluster.
dcos_slaves_private:
value: {get_attr: [dcos_slaves, dcos_slave_ip]}
description: >
This is a list of the "private" addresses of all the DC/OS slaves.
dcos_slaves:
value: {get_attr: [dcos_slaves, dcos_slave_external_ip]}
description: >
This is a list of the "public" addresses of all the DC/OS slaves.
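For illustration, once the stack is up these outputs can be read back with the
Heat CLI plugin (the stack name placeholder is an assumption):
    $ openstack stack output list <stack-name>
    $ openstack stack output show <stack-name> api_address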


@ -1,161 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines a single DC/OS master. This stack is
included by a ResourceGroup resource in the parent template
(dcoscluster.yaml).
parameters:
server_image:
type: string
description: glance image used to boot the server
master_flavor:
type: string
description: flavor to use when booting the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
fixed_network:
type: string
description: Network from which to allocate fixed addresses.
fixed_subnet:
type: string
description: Subnet from which to allocate fixed addresses.
secgroup_base_id:
type: string
description: ID of the security group for base.
secgroup_dcos_id:
type: string
description: ID of the security group for DC/OS master.
api_pool_80_id:
type: string
description: ID of the load balancer pool of Http.
api_pool_443_id:
type: string
description: ID of the load balancer pool of Https.
api_pool_8080_id:
type: string
description: ID of the load balancer pool of Marathon.
api_pool_5050_id:
type: string
description: ID of the load balancer pool of Mesos master.
api_pool_2181_id:
type: string
description: ID of the load balancer pool of Zookeeper.
api_pool_8181_id:
type: string
description: ID of the load balancer pool of Exhibitor.
resources:
######################################################################
#
# DC/OS master server.
#
dcos_master:
type: OS::Nova::Server
properties:
image: {get_param: server_image}
flavor: {get_param: master_flavor}
key_name: {get_param: ssh_key_name}
user_data_format: SOFTWARE_CONFIG
networks:
- port: {get_resource: dcos_master_eth0}
dcos_master_eth0:
type: OS::Neutron::Port
properties:
network: {get_param: fixed_network}
security_groups:
- {get_param: secgroup_base_id}
- {get_param: secgroup_dcos_id}
fixed_ips:
- subnet: {get_param: fixed_subnet}
replacement_policy: AUTO
dcos_master_floating:
type: Magnum::Optional::DcosMaster::Neutron::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_resource: dcos_master_eth0}
api_pool_80_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_80_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 80
api_pool_443_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_443_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 443
api_pool_8080_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_8080_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 8080
api_pool_5050_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_5050_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 5050
api_pool_2181_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_2181_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 2181
api_pool_8181_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_8181_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 8181
outputs:
dcos_master_ip:
value: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
description: >
This is the "private" address of the DC/OS master node.
dcos_master_external_ip:
value: {get_attr: [dcos_master_floating, floating_ip_address]}
description: >
This is the "public" address of the DC/OS master node.
dcos_server_id:
value: {get_resource: dcos_master}
description: >
This is the logical id of the DC/OS master node.


@ -1,338 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines a single DC/OS slave. This stack is
included by a ResourceGroup resource in the parent template
(dcoscluster.yaml).
parameters:
server_image:
type: string
description: glance image used to boot the server
slave_flavor:
type: string
description: flavor to use when booting the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
wait_condition_timeout:
type: number
description: >
timeout for the Wait Conditions
http_proxy:
type: string
description: http proxy address for docker
https_proxy:
type: string
description: https proxy address for docker
no_proxy:
type: string
description: no proxies for docker
auth_url:
type: string
description: >
url for DC/OS to authenticate before sending request
username:
type: string
description: user name
password:
type: string
description: >
user password, not set in current implementation, only used to
fill in for the DC/OS config file
hidden: true
tenant_name:
type: string
description: >
tenant_name is used to isolate access to Compute resources
volume_driver:
type: string
description: volume driver to use for container storage
region_name:
type: string
description: A logically separate section of the cluster
domain_name:
type: string
description: >
domain is to define the administrative boundaries for management
of Keystone entities
fixed_network:
type: string
description: Network from which to allocate fixed addresses.
fixed_subnet:
type: string
description: Subnet from which to allocate fixed addresses.
secgroup_base_id:
type: string
description: ID of the security group for base.
rexray_preempt:
type: string
description: >
enables any host to take control of a volume irrespective of whether
other hosts are using the volume
######################################################################
#
# DC/OS parameters
#
cluster_name:
type: string
description: human readable name for the DC/OS cluster
default: my-cluster
exhibitor_storage_backend:
type: string
exhibitor_zk_hosts:
type: string
exhibitor_zk_path:
type: string
aws_access_key_id:
type: string
aws_region:
type: string
aws_secret_access_key:
type: string
exhibitor_explicit_keys:
type: string
s3_bucket:
type: string
s3_prefix:
type: string
exhibitor_azure_account_name:
type: string
exhibitor_azure_account_key:
type: string
exhibitor_azure_prefix:
type: string
master_discovery:
type: string
master_list:
type: string
exhibitor_address:
type: string
default: 127.0.0.1
num_masters:
type: number
dcos_overlay_enable:
type: string
dcos_overlay_config_attempts:
type: string
dcos_overlay_mtu:
type: string
dcos_overlay_network:
type: string
dns_search:
type: string
resolvers:
type: string
check_time:
type: string
docker_remove_delay:
type: number
gc_delay:
type: number
log_directory:
type: string
process_timeout:
type: number
oauth_enabled:
type: string
telemetry_enabled:
type: string
resources:
slave_wait_handle:
type: OS::Heat::WaitConditionHandle
slave_wait_condition:
type: OS::Heat::WaitCondition
depends_on: dcos_slave
properties:
handle: {get_resource: slave_wait_handle}
timeout: {get_param: wait_condition_timeout}
secgroup_all_open:
type: OS::Neutron::SecurityGroup
properties:
rules:
- protocol: icmp
- protocol: tcp
- protocol: udp
######################################################################
#
# software configs. these are components that are combined into
# a multipart MIME user-data archive.
#
write_heat_params:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config:
str_replace:
template: {get_file: fragments/write-heat-params.sh}
params:
"$HTTP_PROXY": {get_param: http_proxy}
"$HTTPS_PROXY": {get_param: https_proxy}
"$NO_PROXY": {get_param: no_proxy}
"$AUTH_URL": {get_param: auth_url}
"$USERNAME": {get_param: username}
"$PASSWORD": {get_param: password}
"$TENANT_NAME": {get_param: tenant_name}
"$VOLUME_DRIVER": {get_param: volume_driver}
"$REGION_NAME": {get_param: region_name}
"$DOMAIN_NAME": {get_param: domain_name}
"$REXRAY_PREEMPT": {get_param: rexray_preempt}
"$CLUSTER_NAME": {get_param: cluster_name}
"$EXHIBITOR_STORAGE_BACKEND": {get_param: exhibitor_storage_backend}
"$EXHIBITOR_ZK_HOSTS": {get_param: exhibitor_zk_hosts}
"$EXHIBITOR_ZK_PATH": {get_param: exhibitor_zk_path}
"$AWS_ACCESS_KEY_ID": {get_param: aws_access_key_id}
"$AWS_REGION": {get_param: aws_region}
"$AWS_SECRET_ACCESS_KEY": {get_param: aws_secret_access_key}
"$EXHIBITOR_EXPLICIT_KEYS": {get_param: exhibitor_explicit_keys}
"$S3_BUCKET": {get_param: s3_bucket}
"$S3_PREFIX": {get_param: s3_prefix}
"$EXHIBITOR_AZURE_ACCOUNT_NAME": {get_param: exhibitor_azure_account_name}
"$EXHIBITOR_AZURE_ACCOUNT_KEY": {get_param: exhibitor_azure_account_key}
"$EXHIBITOR_AZURE_PREFIX": {get_param: exhibitor_azure_prefix}
"$MASTER_DISCOVERY": {get_param: master_discovery}
"$MASTER_LIST": {get_param: master_list}
"$EXHIBITOR_ADDRESS": {get_param: exhibitor_address}
"$NUM_MASTERS": {get_param: num_masters}
"$DCOS_OVERLAY_ENABLE": {get_param: dcos_overlay_enable}
"$DCOS_OVERLAY_CONFIG_ATTEMPTS": {get_param: dcos_overlay_config_attempts}
"$DCOS_OVERLAY_MTU": {get_param: dcos_overlay_mtu}
"$DCOS_OVERLAY_NETWORK": {get_param: dcos_overlay_network}
"$DNS_SEARCH": {get_param: dns_search}
"$RESOLVERS": {get_param: resolvers}
"$CHECK_TIME": {get_param: check_time}
"$DOCKER_REMOVE_DELAY": {get_param: docker_remove_delay}
"$GC_DELAY": {get_param: gc_delay}
"$LOG_DIRECTORY": {get_param: log_directory}
"$PROCESS_TIMEOUT": {get_param: process_timeout}
"$OAUTH_ENABLED": {get_param: oauth_enabled}
"$TELEMETRY_ENABLED": {get_param: telemetry_enabled}
"$ROLES": slave
dcos_config:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config: {get_file: fragments/configure-dcos.sh}
slave_wc_notify:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config:
str_replace:
template: |
#!/bin/bash -v
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: {get_attr: [slave_wait_handle, curl_cli]}
dcos_slave_init:
type: OS::Heat::MultipartMime
properties:
parts:
- config: {get_resource: write_heat_params}
- config: {get_resource: dcos_config}
- config: {get_resource: slave_wc_notify}
######################################################################
#
# a single DC/OS slave.
#
dcos_slave:
type: OS::Nova::Server
properties:
image: {get_param: server_image}
flavor: {get_param: slave_flavor}
key_name: {get_param: ssh_key_name}
user_data_format: RAW
user_data: {get_resource: dcos_slave_init}
networks:
- port: {get_resource: dcos_slave_eth0}
dcos_slave_eth0:
type: OS::Neutron::Port
properties:
network: {get_param: fixed_network}
security_groups:
- get_resource: secgroup_all_open
- get_param: secgroup_base_id
fixed_ips:
- subnet: {get_param: fixed_subnet}
dcos_slave_floating:
type: Magnum::Optional::DcosSlave::Neutron::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_resource: dcos_slave_eth0}
outputs:
dcos_slave_ip:
value: {get_attr: [dcos_slave_eth0, fixed_ips, 0, ip_address]}
description: >
This is the "private" address of the DC/OS slave node.
dcos_slave_external_ip:
value: {get_attr: [dcos_slave_floating, floating_ip_address]}
description: >
This is the "public" address of the DC/OS slave node.


@ -1,187 +0,0 @@
#!/bin/bash
. /etc/sysconfig/heat-params
GENCONF_SCRIPT_DIR=/opt/dcos
sudo mkdir -p $GENCONF_SCRIPT_DIR/genconf
sudo chown -R centos $GENCONF_SCRIPT_DIR/genconf
# Configure ip-detect
cat > $GENCONF_SCRIPT_DIR/genconf/ip-detect <<EOF
#!/usr/bin/env bash
set -o nounset -o errexit
export PATH=/usr/sbin:/usr/bin:\$PATH
echo \$(ip addr show eth0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -1)
EOF
# Configure config.yaml
CONFIG_YAML_FILE=$GENCONF_SCRIPT_DIR/genconf/config.yaml
####################################################
# Cluster Setup
# bootstrap_url is not configurable
echo "bootstrap_url: file://$GENCONF_SCRIPT_DIR/genconf/serve" > $CONFIG_YAML_FILE
# cluster_name
echo "cluster_name: $CLUSTER_NAME" >> $CONFIG_YAML_FILE
# exhibitor_storage_backend
if [ "static" == "$EXHIBITOR_STORAGE_BACKEND" ]; then
echo "exhibitor_storage_backend: static" >> $CONFIG_YAML_FILE
elif [ "zookeeper" == "$EXHIBITOR_STORAGE_BACKEND" ]; then
echo "exhibitor_storage_backend: zookeeper" >> $CONFIG_YAML_FILE
echo "exhibitor_zk_hosts: $EXHIBITOR_ZK_HOSTS" >> $CONFIG_YAML_FILE
echo "exhibitor_zk_path: $EXHIBITOR_ZK_PATH" >> $CONFIG_YAML_FILE
elif [ "aws_s3" == "$EXHIBITOR_STORAGE_BACKEND" ]; then
echo "exhibitor_storage_backend: aws_s3" >> $CONFIG_YAML_FILE
echo "aws_access_key_id: $AWS_ACCESS_KEY_ID" >> $CONFIG_YAML_FILE
echo "aws_region: $AWS_REGIION" >> $CONFIG_YAML_FILE
echo "aws_secret_access_key: $AWS_SECRET_ACCESS_KEY" >> $CONFIG_YAML_FILE
echo "exhibitor_explicit_keys: $EXHIBITOR_EXPLICIT_KEYS" >> $CONFIG_YAML_FILE
echo "s3_bucket: $S3_BUCKET" >> $CONFIG_YAML_FILE
echo "s3_prefix: $S3_PREFIX" >> $CONFIG_YAML_FILE
elif [ "azure" == "$EXHIBITOR_STORAGE_BACKEND" ]; then
echo "exhibitor_storage_backend: azure" >> $CONFIG_YAML_FILE
echo "exhibitor_azure_account_name: $EXHIBITOR_AZURE_ACCOUNT_NAME" >> $CONFIG_YAML_FILE
echo "exhibitor_azure_account_key: $EXHIBITOR_AZURE_ACCOUNT_KEY" >> $CONFIG_YAML_FILE
echo "exhibitor_azure_prefix: $EXHIBITOR_AZURE_PREFIX" >> $CONFIG_YAML_FILE
fi
# master_discovery
if [ "static" == "$MASTER_DISCOVERY" ]; then
echo "master_discovery: static" >> $CONFIG_YAML_FILE
echo "master_list:" >> $CONFIG_YAML_FILE
for ip in $MASTER_LIST; do
echo "- ${ip}" >> $CONFIG_YAML_FILE
done
elif [ "master_http_loadbalancer" == "$MASTER_DISCOVERY" ]; then
echo "master_discovery: master_http_loadbalancer" >> $CONFIG_YAML_FILE
echo "exhibitor_address: $EXHIBITOR_ADDRESS" >> $CONFIG_YAML_FILE
echo "num_masters: $NUM_MASTERS" >> $CONFIG_YAML_FILE
echo "master_list:" >> $CONFIG_YAML_FILE
for ip in $MASTER_LIST; do
echo "- ${ip}" >> $CONFIG_YAML_FILE
done
fi
####################################################
# Networking
# dcos_overlay_enable
if [ "false" == "$DCOS_OVERLAY_ENABLE" ]; then
echo "dcos_overlay_enable: false" >> $CONFIG_YAML_FILE
elif [ "true" == "$DCOS_OVERLAY_ENABLE" ]; then
echo "dcos_overlay_enable: true" >> $CONFIG_YAML_FILE
echo "dcos_overlay_config_attempts: $DCOS_OVERLAY_CONFIG_ATTEMPTS" >> $CONFIG_YAML_FILE
echo "dcos_overlay_mtu: $DCOS_OVERLAY_MTU" >> $CONFIG_YAML_FILE
echo "dcos_overlay_network:" >> $CONFIG_YAML_FILE
echo "$DCOS_OVERLAY_NETWORK" >> $CONFIG_YAML_FILE
fi
# dns_search
if [ -n "$DNS_SEARCH" ]; then
echo "dns_search: $DNS_SEARCH" >> $CONFIG_YAML_FILE
fi
# resolvers
echo "resolvers:" >> $CONFIG_YAML_FILE
for ip in $RESOLVERS; do
echo "- ${ip}" >> $CONFIG_YAML_FILE
done
# use_proxy
if [ -n "$HTTP_PROXY" ] && [ -n "$HTTPS_PROXY" ]; then
echo "use_proxy: true" >> $CONFIG_YAML_FILE
echo "http_proxy: $HTTP_PROXY" >> $CONFIG_YAML_FILE
echo "https_proxy: $HTTPS_PROXY" >> $CONFIG_YAML_FILE
if [ -n "$NO_PROXY" ]; then
echo "no_proxy:" >> $CONFIG_YAML_FILE
for ip in $NO_PROXY; do
echo "- ${ip}" >> $CONFIG_YAML_FILE
done
fi
fi
####################################################
# Performance and Tuning
# check_time
if [ "false" == "$CHECK_TIME" ]; then
echo "check_time: false" >> $CONFIG_YAML_FILE
fi
# docker_remove_delay
if [ "1" != "$DOCKER_REMOVE_DELAY" ]; then
echo "docker_remove_delay: $DOCKER_REMOVE_DELAY" >> $CONFIG_YAML_FILE
fi
# gc_delay
if [ "2" != "$GC_DELAY" ]; then
echo "gc_delay: $GC_DELAY" >> $CONFIG_YAML_FILE
fi
# log_directory
if [ "/genconf/logs" != "$LOG_DIRECTORY" ]; then
echo "log_directory: $LOG_DIRECTORY" >> $CONFIG_YAML_FILE
fi
# process_timeout
if [ "120" != "$PROCESS_TIMEOUT" ]; then
echo "process_timeout: $PROCESS_TIMEOUT" >> $CONFIG_YAML_FILE
fi
####################################################
# Security And Authentication
# oauth_enabled
if [ "false" == "$OAUTH_ENABLED" ]; then
echo "oauth_enabled: false" >> $CONFIG_YAML_FILE
fi
# telemetry_enabled
if [ "false" == "$TELEMETRY_ENABLED" ]; then
echo "telemetry_enabled: false" >> $CONFIG_YAML_FILE
fi
####################################################
# Rexray Configuration
# NOTE: This feature is considered experimental: use it at your own risk.
# We might add, change, or delete any functionality as described in this document.
# See https://dcos.io/docs/1.8/usage/storage/external-storage/
if [ "$VOLUME_DRIVER" == "rexray" ]; then
if [ "${AUTH_URL##*/}" == "v3" ]; then
extra_configs="domainName: $DOMAIN_NAME"
else
extra_configs=""
fi
echo "rexray_config:" >> $CONFIG_YAML_FILE
echo " rexray:" >> $CONFIG_YAML_FILE
echo " modules:" >> $CONFIG_YAML_FILE
echo " default-admin:" >> $CONFIG_YAML_FILE
echo " host: tcp://127.0.0.1:61003" >> $CONFIG_YAML_FILE
echo " storageDrivers:" >> $CONFIG_YAML_FILE
echo " - openstack" >> $CONFIG_YAML_FILE
echo " volume:" >> $CONFIG_YAML_FILE
echo " mount:" >> $CONFIG_YAML_FILE
echo " preempt: $REXRAY_PREEMPT" >> $CONFIG_YAML_FILE
echo " openstack:" >> $CONFIG_YAML_FILE
echo " authUrl: $AUTH_URL" >> $CONFIG_YAML_FILE
echo " username: $USERNAME" >> $CONFIG_YAML_FILE
echo " password: $PASSWORD" >> $CONFIG_YAML_FILE
echo " tenantName: $TENANT_NAME" >> $CONFIG_YAML_FILE
echo " regionName: $REGION_NAME" >> $CONFIG_YAML_FILE
echo " availabilityZoneName: nova" >> $CONFIG_YAML_FILE
echo " $extra_configs" >> $CONFIG_YAML_FILE
fi
cd $GENCONF_SCRIPT_DIR
sudo bash $GENCONF_SCRIPT_DIR/dcos_generate_config.sh --genconf
cd $GENCONF_SCRIPT_DIR/genconf/serve
sudo bash $GENCONF_SCRIPT_DIR/genconf/serve/dcos_install.sh --no-block-dcos-setup $ROLES
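For illustration, with static master discovery and default tuning values the generated
config.yaml might look roughly like this (the addresses below are assumptions):
    cat $GENCONF_SCRIPT_DIR/genconf/config.yaml
    # bootstrap_url: file:///opt/dcos/genconf/serve
    # cluster_name: my-cluster
    # exhibitor_storage_backend: static
    # master_discovery: static
    # master_list:
    # - 10.0.0.5
    # resolvers:
    # - 8.8.8.8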


@ -1,56 +0,0 @@
#!/bin/sh
mkdir -p /etc/sysconfig
cat > /etc/sysconfig/heat-params <<EOF
HTTP_PROXY="$HTTP_PROXY"
HTTPS_PROXY="$HTTPS_PROXY"
NO_PROXY="$NO_PROXY"
AUTH_URL="$AUTH_URL"
USERNAME="$USERNAME"
PASSWORD="$PASSWORD"
TENANT_NAME="$TENANT_NAME"
VOLUME_DRIVER="$VOLUME_DRIVER"
REGION_NAME="$REGION_NAME"
DOMAIN_NAME="$DOMAIN_NAME"
REXRAY_PREEMPT="$REXRAY_PREEMPT"
CLUSTER_NAME="$CLUSTER_NAME"
EXHIBITOR_STORAGE_BACKEND="$EXHIBITOR_STORAGE_BACKEND"
EXHIBITOR_ZK_HOSTS="$EXHIBITOR_ZK_HOSTS"
EXHIBITOR_ZK_PATH="$EXHIBITOR_ZK_PATH"
AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID"
AWS_REGION="$AWS_REGION"
AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY"
EXHIBITOR_EXPLICIT_KEYS="$EXHIBITOR_EXPLICIT_KEYS"
S3_BUCKET="$S3_BUCKET"
S3_PREFIX="$S3_PREFIX"
EXHIBITOR_AZURE_ACCOUNT_NAME="$EXHIBITOR_AZURE_ACCOUNT_NAME"
EXHIBITOR_AZURE_ACCOUNT_KEY="$EXHIBITOR_AZURE_ACCOUNT_KEY"
EXHIBITOR_AZURE_PREFIX="$EXHIBITOR_AZURE_PREFIX"
MASTER_DISCOVERY="$MASTER_DISCOVERY"
MASTER_LIST="$MASTER_LIST"
EXHIBITOR_ADDRESS="$EXHIBITOR_ADDRESS"
NUM_MASTERS="$NUM_MASTERS"
DCOS_OVERLAY_ENABLE="$DCOS_OVERLAY_ENABLE"
DCOS_OVERLAY_CONFIG_ATTEMPTS="$DCOS_OVERLAY_CONFIG_ATTEMPTS"
DCOS_OVERLAY_MTU="$DCOS_OVERLAY_MTU"
DCOS_OVERLAY_NETWORK="$DCOS_OVERLAY_NETWORK"
DNS_SEARCH="$DNS_SEARCH"
RESOLVERS="$RESOLVERS"
CHECK_TIME="$CHECK_TIME"
DOCKER_REMOVE_DELAY="$DOCKER_REMOVE_DELAY"
GC_DELAY="$GC_DELAY"
LOG_DIRECTORY="$LOG_DIRECTORY"
PROCESS_TIMEOUT="$PROCESS_TIMEOUT"
OAUTH_ENABLED="$OAUTH_ENABLED"
TELEMETRY_ENABLED="$TELEMETRY_ENABLED"
ROLES="$ROLES"
EOF
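Other fragments consume this file by sourcing it, as the configure-dcos script above
does; for example:
    . /etc/sysconfig/heat-params
    echo "Configuring DC/OS cluster $CLUSTER_NAME with role: $ROLES"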


@ -1,201 +0,0 @@
heat_template_version: 2014-10-16
parameters:
fixed_subnet:
type: string
external_network:
type: string
resources:
# Admin Router is a customized Nginx that proxies all of the internal
# services on port 80 and 443 (if https is configured)
# See https://dcos.io/docs/1.8/administration/installing/custom/configuration-parameters/#-a-name-master-a-master_discovery
# If parameter is specified to master_http_loadbalancer, the
# load balancer must accept traffic on ports 8080, 5050, 80, and 443,
# and forward it to the same ports on the master
#
# Opening ports 2181 and 8181 is not mentioned in the DC/OS documentation.
# When a cluster is created with a load balancer, slave nodes connect to some
# services on the master nodes through the load balancer IP; if those ports
# are not open, this will fail.
loadbalancer:
type: Magnum::Optional::Neutron::LBaaS::LoadBalancer
properties:
vip_subnet: {get_param: fixed_subnet}
listener_80:
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: HTTP
protocol_port: 80
pool_80:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_80}
protocol: HTTP
monitor_80:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_80 }
listener_443:
depends_on: monitor_80
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: HTTPS
protocol_port: 443
pool_443:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_443}
protocol: HTTPS
monitor_443:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_443 }
listener_8080:
depends_on: monitor_443
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: TCP
protocol_port: 8080
pool_8080:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_8080}
protocol: TCP
monitor_8080:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_8080 }
listener_5050:
depends_on: monitor_8080
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: TCP
protocol_port: 5050
pool_5050:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_5050}
protocol: TCP
monitor_5050:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_5050 }
listener_2181:
depends_on: monitor_5050
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: TCP
protocol_port: 2181
pool_2181:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_2181}
protocol: TCP
monitor_2181:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_2181 }
listener_8181:
depends_on: monitor_2181
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: TCP
protocol_port: 8181
pool_8181:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_8181}
protocol: TCP
monitor_8181:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_8181 }
floating:
type: Magnum::Optional::Neutron::LBaaS::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_attr: [loadbalancer, vip_port_id]}
outputs:
pool_80_id:
value: {get_resource: pool_80}
pool_443_id:
value: {get_resource: pool_443}
pool_8080_id:
value: {get_resource: pool_8080}
pool_5050_id:
value: {get_resource: pool_5050}
pool_2181_id:
value: {get_resource: pool_2181}
pool_8181_id:
value: {get_resource: pool_8181}
address:
value: {get_attr: [loadbalancer, vip_address]}
floating_address:
value: {get_attr: [floating, floating_ip_address]}
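The comment at the top of this template lists the ports the load balancer has to forward when master_discovery is master_http_loadbalancer. A minimal reachability sketch against the load balancer VIP follows; the address, timeout and helper name are illustrative and not part of the driver.

    import socket

    # Ports the DC/OS load balancer is expected to forward to the masters:
    # Admin Router HTTP/HTTPS, Marathon, Mesos master, ZooKeeper, Exhibitor.
    LB_PORTS = (80, 443, 8080, 5050, 2181, 8181)

    def check_lb_ports(vip_address, ports=LB_PORTS, timeout=3):
        """Return a dict mapping port -> True/False for TCP reachability."""
        results = {}
        for port in ports:
            try:
                with socket.create_connection((vip_address, port), timeout=timeout):
                    results[port] = True
            except OSError:
                results[port] = False
        return results

    # '203.0.113.10' is a placeholder for the load balancer's floating IP.
    for port, ok in sorted(check_lb_ports('203.0.113.10').items()):
        print('port %-5d %s' % (port, 'open' if ok else 'unreachable'))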

View File

@ -1,115 +0,0 @@
heat_template_version: 2014-10-16
parameters:
resources:
######################################################################
#
# security groups. we need to permit network traffic of various
# sorts.
# The following is a list of ports used by internal DC/OS components,
# and their corresponding systemd unit.
# https://dcos.io/docs/1.8/administration/installing/ports/
#
# The VIP features, added in DC/OS 1.8, require that ports 32768 - 65535
# are open between all agent and master nodes for both TCP and UDP.
# https://dcos.io/docs/1.8/administration/upgrading/
#
secgroup_base:
type: OS::Neutron::SecurityGroup
properties:
rules:
- protocol: icmp
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
remote_mode: remote_group_id
- protocol: udp
remote_mode: remote_group_id
# All nodes
- protocol: tcp
port_range_min: 32768
port_range_max: 65535
# Master nodes
- protocol: tcp
port_range_min: 53
port_range_max: 53
- protocol: tcp
port_range_min: 1050
port_range_max: 1050
- protocol: tcp
port_range_min: 1801
port_range_max: 1801
- protocol: tcp
port_range_min: 7070
port_range_max: 7070
# dcos-oauth
- protocol: tcp
port_range_min: 8101
port_range_max: 8101
- protocol: tcp
port_range_min: 8123
port_range_max: 8123
- protocol: tcp
port_range_min: 9000
port_range_max: 9000
- protocol: tcp
port_range_min: 9942
port_range_max: 9942
- protocol: tcp
port_range_min: 9990
port_range_max: 9990
- protocol: tcp
port_range_min: 15055
port_range_max: 15055
- protocol: udp
port_range_min: 53
port_range_max: 53
- protocol: udp
port_range_min: 32768
port_range_max: 65535
secgroup_dcos:
type: OS::Neutron::SecurityGroup
properties:
rules:
# Admin Router is a customized Nginx that proxies all of the internal
# services on port 80 and 443 (if https is configured)
# See https://github.com/dcos/adminrouter
# If master_discovery is set to master_http_loadbalancer, the
# load balancer must accept traffic on ports 8080, 5050, 80, and 443,
# and forward it to the same ports on the master.
# Admin Router http
- protocol: tcp
port_range_min: 80
port_range_max: 80
# Admin Router https
- protocol: tcp
port_range_min: 443
port_range_max: 443
# Marathon
- protocol: tcp
port_range_min: 8080
port_range_max: 8080
# Mesos master
- protocol: tcp
port_range_min: 5050
port_range_max: 5050
# Exhibitor
- protocol: tcp
port_range_min: 8181
port_range_max: 8181
# Zookeeper
- protocol: tcp
port_range_min: 2181
port_range_max: 2181
outputs:
secgroup_base_id:
value: {get_resource: secgroup_base}
secgroup_dcos_id:
value: {get_resource: secgroup_dcos}
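The same rules can also be reproduced outside of Heat when troubleshooting connectivity. A rough sketch with openstacksdk, covering only the SSH rule and the 32768-65535 VIP range called out in the comment above; the cloud name and group name are illustrative.

    import openstack

    # 'devstack' is a placeholder entry in clouds.yaml.
    conn = openstack.connect(cloud='devstack')

    sg = conn.network.create_security_group(
        name='dcos-secgroup-base-sketch',
        description='Subset of the DC/OS base rules (sketch)')

    # SSH, plus the TCP/UDP 32768-65535 range required between all agent
    # and master nodes for the DC/OS 1.8 VIP features.
    rules = [
        {'protocol': 'tcp', 'port_range_min': 22, 'port_range_max': 22},
        {'protocol': 'tcp', 'port_range_min': 32768, 'port_range_max': 65535},
        {'protocol': 'udp', 'port_range_min': 32768, 'port_range_max': 65535},
    ]
    for rule in rules:
        conn.network.create_security_group_rule(
            security_group_id=sg.id,
            direction='ingress',
            ethertype='IPv4',
            **rule)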

View File

@ -1,15 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
version = '1.0.0'
driver = 'dcos_centos_v1'
container_version = '1.11.2'

View File

@ -1,163 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils
from magnum.drivers.heat import template_def
LOG = logging.getLogger(__name__)
class ServerAddressOutputMapping(template_def.OutputMapping):
public_ip_output_key = None
private_ip_output_key = None
def __init__(self, dummy_arg, cluster_attr=None):
self.cluster_attr = cluster_attr
self.heat_output = self.public_ip_output_key
def set_output(self, stack, cluster_template, cluster):
if not cluster.floating_ip_enabled:
self.heat_output = self.private_ip_output_key
LOG.debug("Using heat_output: %s", self.heat_output)
super(ServerAddressOutputMapping,
self).set_output(stack, cluster_template, cluster)
class MasterAddressOutputMapping(ServerAddressOutputMapping):
public_ip_output_key = 'dcos_master'
private_ip_output_key = 'dcos_master_private'
class NodeAddressOutputMapping(ServerAddressOutputMapping):
public_ip_output_key = 'dcos_slaves'
private_ip_output_key = 'dcos_slaves_private'
class DcosCentosTemplateDefinition(template_def.BaseTemplateDefinition):
"""DC/OS template for Centos."""
def __init__(self):
super(DcosCentosTemplateDefinition, self).__init__()
self.add_parameter('external_network',
cluster_template_attr='external_network_id',
required=True)
self.add_parameter('number_of_slaves',
cluster_attr='node_count')
self.add_parameter('master_flavor',
cluster_template_attr='master_flavor_id')
self.add_parameter('slave_flavor',
cluster_template_attr='flavor_id')
self.add_parameter('cluster_name',
cluster_attr='name')
self.add_parameter('volume_driver',
cluster_template_attr='volume_driver')
self.add_output('api_address',
cluster_attr='api_address')
self.add_output('dcos_master_private',
cluster_attr=None)
self.add_output('dcos_slaves_private',
cluster_attr=None)
self.add_output('dcos_slaves',
cluster_attr='node_addresses',
mapping_type=NodeAddressOutputMapping)
self.add_output('dcos_master',
cluster_attr='master_addresses',
mapping_type=MasterAddressOutputMapping)
def get_params(self, context, cluster_template, cluster, **kwargs):
extra_params = kwargs.pop('extra_params', {})
# HACK(apmelton) - This uses the user's bearer token, ideally
# it should be replaced with an actual trust token with only
# access to do what the template needs it to do.
osc = self.get_osc(context)
extra_params['auth_url'] = context.auth_url
extra_params['username'] = context.user_name
extra_params['tenant_name'] = context.tenant
extra_params['domain_name'] = context.domain_name
extra_params['region_name'] = osc.cinder_region_name()
# Mesos-related label parameters are removed
# because they are not options in the DC/OS configuration
label_list = ['rexray_preempt',
'exhibitor_storage_backend',
'exhibitor_zk_hosts',
'exhibitor_zk_path',
'aws_access_key_id',
'aws_region',
'aws_secret_access_key',
'exhibitor_explicit_keys',
's3_bucket',
's3_prefix',
'exhibitor_azure_account_name',
'exhibitor_azure_account_key',
'exhibitor_azure_prefix',
'dcos_overlay_enable',
'dcos_overlay_config_attempts',
'dcos_overlay_mtu',
'dcos_overlay_network',
'dns_search',
'check_time',
'docker_remove_delay',
'gc_delay',
'log_directory',
'process_timeout',
'oauth_enabled',
'telemetry_enabled']
for label in label_list:
extra_params[label] = cluster.labels.get(label)
# By default, master_discovery is set to 'static'
# If --master-lb-enabled is specified,
# master_discovery will be set to 'master_http_loadbalancer'
if cluster.master_lb_enabled:
extra_params['master_discovery'] = 'master_http_loadbalancer'
if 'true' == extra_params['dcos_overlay_enable']:
overlay_obj = jsonutils.loads(extra_params['dcos_overlay_network'])
extra_params['dcos_overlay_network'] = ''' vtep_subnet: %s
vtep_mac_oui: %s
overlays:''' % (overlay_obj['vtep_subnet'],
overlay_obj['vtep_mac_oui'])
for item in overlay_obj['overlays']:
extra_params['dcos_overlay_network'] += '''
- name: %s
subnet: %s
prefix: %s''' % (item['name'],
item['subnet'],
item['prefix'])
scale_mgr = kwargs.pop('scale_manager', None)
if scale_mgr:
hosts = self.get_output('dcos_slaves_private')
extra_params['slaves_to_remove'] = (
scale_mgr.get_removal_nodes(hosts))
return super(DcosCentosTemplateDefinition,
self).get_params(context, cluster_template, cluster,
extra_params=extra_params,
**kwargs)
def get_env_files(self, cluster_template, cluster):
env_files = []
template_def.add_priv_net_env_file(env_files, cluster)
template_def.add_lb_env_file(env_files, cluster)
template_def.add_fip_env_file(env_files, cluster)
return env_files
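To make the dcos_overlay_network handling above concrete, here is a standalone sketch of the same JSON-label-to-YAML-fragment conversion. The label value is illustrative, the stdlib json module stands in for oslo_serialization.jsonutils, and the exact leading-space indentation used by the driver is not reproduced here.

    import json  # stands in for oslo_serialization.jsonutils

    # Illustrative value of the dcos_overlay_network label (a JSON string).
    label = json.dumps({
        'vtep_subnet': '44.128.0.0/20',
        'vtep_mac_oui': '70:B3:D5:00:00:00',
        'overlays': [{'name': 'dcos', 'subnet': '9.0.0.0/8', 'prefix': 26}],
    })

    overlay_obj = json.loads(label)
    fragment = ' vtep_subnet: %s\n vtep_mac_oui: %s\n overlays:' % (
        overlay_obj['vtep_subnet'], overlay_obj['vtep_mac_oui'])
    for item in overlay_obj['overlays']:
        fragment += '\n   - name: %s\n     subnet: %s\n     prefix: %s' % (
            item['name'], item['subnet'], item['prefix'])

    print(fragment)
    # Prints a small YAML fragment (vtep_subnet, vtep_mac_oui and one
    # overlay entry) ready to be interpolated into the DC/OS config.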

View File

@ -9,12 +9,6 @@ resource_registry:
# kubeminion.yaml
"Magnum::Optional::KubeMinion::Neutron::FloatingIP": "OS::Heat::None"
# dcosmaster.yaml
"Magnum::Optional::DcosMaster::Neutron::FloatingIP": "OS::Heat::None"
# dcosslave.yaml
"Magnum::Optional::DcosSlave::Neutron::FloatingIP": "OS::Heat::None"
# swarmmaster.yaml
"Magnum::Optional::SwarmMaster::Neutron::FloatingIP": "OS::Heat::None"

View File

@ -7,12 +7,6 @@ resource_registry:
# kubeminion.yaml
"Magnum::Optional::KubeMinion::Neutron::FloatingIP": "OS::Neutron::FloatingIP"
# dcosmaster.yaml
"Magnum::Optional::DcosMaster::Neutron::FloatingIP": "OS::Neutron::FloatingIP"
# dcosslave.yaml
"Magnum::Optional::DcosSlave::Neutron::FloatingIP": "OS::Neutron::FloatingIP"
# swarmmaster.yaml
"Magnum::Optional::SwarmMaster::Neutron::FloatingIP": "OS::Neutron::FloatingIP"

View File

@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -1,43 +0,0 @@
# Copyright 2016 Rackspace Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.drivers.heat import driver
from magnum.drivers.mesos_ubuntu_v1 import monitor
from magnum.drivers.mesos_ubuntu_v1.scale_manager import MesosScaleManager
from magnum.drivers.mesos_ubuntu_v1 import template_def
class Driver(driver.HeatDriver):
@property
def provides(self):
return [
{'server_type': 'vm',
'os': 'ubuntu',
'coe': 'mesos'},
]
def get_template_definition(self):
return template_def.UbuntuMesosTemplateDefinition()
def get_monitor(self, context, cluster):
return monitor.MesosMonitor(context, cluster)
def get_scale_manager(self, context, osclient, cluster):
return MesosScaleManager(context, osclient, cluster)
def upgrade_cluster(self, context, cluster, cluster_template,
max_batch_size, nodegroup, scale_manager=None,
rollback=False):
raise NotImplementedError("Must implement 'upgrade_cluster'")

View File

@ -1,18 +0,0 @@
FROM ubuntu:trusty
RUN \
apt-get -yqq update && \
apt-get -yqq install git qemu-utils python-dev python-pip python-yaml python-six uuid-runtime curl sudo kpartx parted wget && \
pip install diskimage-builder && \
mkdir /output
WORKDIR /build
ENV PATH="dib-utils/bin:$PATH" ELEMENTS_PATH="$(python -c 'import os, diskimage_builder, pkg_resources;print(os.path.abspath(pkg_resources.resource_filename(diskimage_builder.__name__, "elements")))'):tripleo-image-elements/elements:heat-templates/hot/software-config/elements:magnum/magnum/drivers/mesos_ubuntu_v1/image" DIB_RELEASE=trusty
RUN git clone https://git.openstack.org/openstack/magnum
RUN git clone https://git.openstack.org/openstack/dib-utils.git
RUN git clone https://git.openstack.org/openstack/tripleo-image-elements.git
RUN git clone https://git.openstack.org/openstack/heat-templates.git
CMD disk-image-create ubuntu vm docker mesos os-collect-config os-refresh-config os-apply-config heat-config heat-config-script -o /output/ubuntu-mesos.qcow2

View File

@ -1,4 +0,0 @@
Mesos elements
==============
See [Building an image](http://docs.openstack.org/developer/magnum/userguide.html#building-mesos-image) for instructions.

View File

@ -1 +0,0 @@
package-installs

View File

@ -1,4 +0,0 @@
#!/bin/bash
service docker stop
[ -f /etc/init/docker.conf ] && echo "manual" > /etc/init/docker.override

View File

@ -1,17 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D
DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
RELEASE=$(lsb_release -ics | tail -1 | tr '[:upper:]' '[:lower:]')
# Add the repository
echo "deb http://apt.dockerproject.org/repo ${DISTRO}-${RELEASE} main" | \
sudo tee /etc/apt/sources.list.d/docker.list

View File

@ -1,22 +0,0 @@
#!/bin/bash
# This script installs all needed dependencies to generate
# images using diskimage-builder. Please note it has only been
# tested on Ubuntu Trusty.
set -eux
set -o pipefail
sudo apt-get update || true
sudo apt-get install -y \
git \
qemu-utils \
python-dev \
python-yaml \
python-six \
uuid-runtime \
curl \
sudo \
kpartx \
parted \
wget

View File

@ -1 +0,0 @@
package-installs

View File

@ -1,3 +0,0 @@
zookeeperd:
mesos:
marathon:

View File

@ -1,6 +0,0 @@
#!/bin/bash
for service in zookeeper mesos-slave mesos-master marathon; do
service $service stop
[ -f /etc/init/$service.conf ] && echo "manual" > /etc/init/$service.override
done

View File

@ -1,20 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
CODENAME=$(lsb_release -cs)
# Add the repository
echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" | \
sudo tee /etc/apt/sources.list.d/mesosphere.list
# Install Java 8 requirements for marathon
sudo add-apt-repository -y ppa:openjdk-r/ppa
sudo apt-get -y update
sudo apt-get -y install openjdk-8-jre-headless

View File

@ -1,27 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
# check that image is valid
qemu-img check -q $1
# validate estimated size
FILESIZE=$(stat -c%s "$1")
MIN_SIZE=681574400 # 650MB
if [ $FILESIZE -lt $MIN_SIZE ] ; then
echo "Error: generated image size is lower than expected."
exit 1
fi

View File

@ -1,71 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_serialization import jsonutils
from magnum.common import urlfetch
from magnum.conductor import monitors
class MesosMonitor(monitors.MonitorBase):
def __init__(self, context, cluster):
super(MesosMonitor, self).__init__(context, cluster)
self.data = {}
@property
def metrics_spec(self):
return {
'memory_util': {
'unit': '%',
'func': 'compute_memory_util',
},
'cpu_util': {
'unit': '%',
'func': 'compute_cpu_util',
},
}
def _build_url(self, url, protocol='http', port='80', path='/'):
return protocol + '://' + url + ':' + port + path
def _is_leader(self, state):
return state['leader'] == state['pid']
def pull_data(self):
self.data['mem_total'] = 0
self.data['mem_used'] = 0
self.data['cpu_total'] = 0
self.data['cpu_used'] = 0
for master_addr in self.cluster.master_addresses:
mesos_master_url = self._build_url(master_addr, port='5050',
path='/state')
master = jsonutils.loads(urlfetch.get(mesos_master_url))
if self._is_leader(master):
for slave in master['slaves']:
self.data['mem_total'] += slave['resources']['mem']
self.data['mem_used'] += slave['used_resources']['mem']
self.data['cpu_total'] += slave['resources']['cpus']
self.data['cpu_used'] += slave['used_resources']['cpus']
break
def compute_memory_util(self):
if self.data['mem_total'] == 0 or self.data['mem_used'] == 0:
return 0
else:
return self.data['mem_used'] * 100 / self.data['mem_total']
def compute_cpu_util(self):
if self.data['cpu_total'] == 0 or self.data['cpu_used'] == 0:
return 0
else:
return self.data['cpu_used'] * 100 / self.data['cpu_total']
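A worked example of the utilization math above, with illustrative aggregate numbers instead of a live /state response.

    # Aggregates as pull_data() would compute them from the leading master's
    # /state output ('resources' vs 'used_resources', summed over slaves).
    data = {'mem_total': 4096.0, 'mem_used': 1024.0,
            'cpu_total': 8.0, 'cpu_used': 2.0}

    def utilization(used, total):
        # Mirrors compute_memory_util/compute_cpu_util: report 0 when the
        # cluster has no resources registered or nothing in use.
        if total == 0 or used == 0:
            return 0
        return used * 100 / total

    print(utilization(data['mem_used'], data['mem_total']))  # -> 25.0
    print(utilization(data['cpu_used'], data['cpu_total']))  # -> 25.0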

View File

@ -1,39 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from marathon import MarathonClient
from magnum.conductor.scale_manager import ScaleManager
class MesosScaleManager(ScaleManager):
"""When scaling a mesos cluster, MesosScaleManager will inspect the
nodes and find out those with containers on them. Thus we can
ask Heat to delete the nodes without containers. Note that this
is a best effort basis -- Magnum doesn't have any synchronization
with Marathon, so while Magnum is checking for the containers to
choose nodes to remove, new containers can be deployed on the
nodes to be removed.
"""
def __init__(self, context, osclient, cluster):
super(MesosScaleManager, self).__init__(context, osclient, cluster)
def _get_hosts_with_container(self, context, cluster):
marathon_client = MarathonClient(
'http://' + cluster.api_address + ':8080')
hosts = set()
for task in marathon_client.list_tasks():
hosts.add(task.host)
return hosts
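The node selection described in the docstring reduces to a set difference: slaves reported by Marathon as running tasks are kept, the rest become removal candidates. A minimal sketch with illustrative addresses; a real run gets the task list from MarathonClient as above.

    # Private IPs of all Mesos slaves in the cluster (illustrative).
    all_slaves = {'10.0.0.4', '10.0.0.5', '10.0.0.6', '10.0.0.7'}

    # Hosts running at least one Marathon task, i.e. what
    # _get_hosts_with_container() returns.
    hosts_with_containers = {'10.0.0.4', '10.0.0.6'}

    # Prefer removing slaves without containers; this stays best effort,
    # since new tasks may land on these hosts while the update runs.
    removal_candidates = sorted(all_slaves - hosts_with_containers)
    print(removal_candidates)  # -> ['10.0.0.5', '10.0.0.7']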

View File

@ -1,142 +0,0 @@
# Copyright 2016 Rackspace Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import magnum.conf
from magnum.drivers.heat import template_def
CONF = magnum.conf.CONF
class UbuntuMesosTemplateDefinition(template_def.BaseTemplateDefinition):
"""Mesos template for Ubuntu VM."""
def __init__(self):
super(UbuntuMesosTemplateDefinition, self).__init__()
self.add_parameter('external_network',
cluster_template_attr='external_network_id',
required=True)
self.add_parameter('fixed_network',
cluster_template_attr='fixed_network')
self.add_parameter('fixed_subnet',
cluster_template_attr='fixed_subnet')
self.add_parameter('cluster_name',
cluster_attr='name')
self.add_parameter('volume_driver',
cluster_template_attr='volume_driver')
self.add_output('api_address',
cluster_attr='api_address')
self.add_output('mesos_master_private',
cluster_attr=None)
self.add_output('mesos_slaves_private',
cluster_attr=None)
def get_nodegroup_param_maps(self, master_params=None, worker_params=None):
master_params = master_params or dict()
worker_params = worker_params or dict()
master_params.update({
'master_flavor': 'flavor_id',
'master_image': 'image_id',
})
worker_params.update({
'number_of_slaves': 'node_count',
'slave_flavor': 'flavor_id',
'slave_image': 'image_id',
})
return super(
UbuntuMesosTemplateDefinition, self).get_nodegroup_param_maps(
master_params=master_params, worker_params=worker_params)
def update_outputs(self, stack, cluster_template, cluster,
nodegroups=None):
nodegroups = nodegroups or [cluster.default_ng_worker,
cluster.default_ng_master]
for nodegroup in nodegroups:
if nodegroup.role == 'master':
self.add_output(
'mesos_master', nodegroup_attr='node_addresses',
nodegroup_uuid=nodegroup.uuid,
mapping_type=template_def.NodeGroupOutputMapping)
else:
self.add_output(
'mesos_slaves', nodegroup_attr='node_addresses',
nodegroup_uuid=nodegroup.uuid,
mapping_type=template_def.NodeGroupOutputMapping)
self.add_output(
'number_of_slaves', nodegroup_attr='node_count',
nodegroup_uuid=nodegroup.uuid, is_stack_param=True,
mapping_type=template_def.NodeGroupOutputMapping)
super(UbuntuMesosTemplateDefinition,
self).update_outputs(stack, cluster_template, cluster,
nodegroups=nodegroups)
def get_params(self, context, cluster_template, cluster, **kwargs):
extra_params = kwargs.pop('extra_params', {})
# HACK(apmelton) - This uses the user's bearer token, ideally
# it should be replaced with an actual trust token with only
# access to do what the template needs it to do.
osc = self.get_osc(context)
extra_params['auth_url'] = context.auth_url
extra_params['username'] = context.user_name
extra_params['tenant_name'] = context.project_id
extra_params['domain_name'] = context.domain_name
extra_params['region_name'] = osc.cinder_region_name()
extra_params['nodes_affinity_policy'] = \
CONF.cluster.nodes_affinity_policy
label_list = ['rexray_preempt', 'mesos_slave_isolation',
'mesos_slave_image_providers',
'mesos_slave_work_dir',
'mesos_slave_executor_env_variables']
labels = self._get_relevant_labels(cluster, kwargs)
for label in label_list:
extra_params[label] = labels.get(label)
return super(UbuntuMesosTemplateDefinition,
self).get_params(context, cluster_template, cluster,
extra_params=extra_params,
**kwargs)
def get_scale_params(self, context, cluster, node_count,
scale_manager=None, nodes_to_remove=None):
scale_params = dict()
if nodes_to_remove:
scale_params['slaves_to_remove'] = nodes_to_remove
if scale_manager:
hosts = self.get_output('mesos_slaves_private')
scale_params['slaves_to_remove'] = (
scale_manager.get_removal_nodes(hosts))
scale_params['number_of_slaves'] = node_count
return scale_params
def get_env_files(self, cluster_template, cluster, nodegroup=None):
env_files = []
template_def.add_priv_net_env_file(env_files, cluster_template,
cluster)
template_def.add_lb_env_file(env_files, cluster)
return env_files
@property
def driver_module_path(self):
return __name__[:__name__.rindex('.')]
@property
def template_path(self):
return os.path.join(os.path.dirname(os.path.realpath(__file__)),
'templates/mesoscluster.yaml')

View File

@ -1,27 +0,0 @@
#!/bin/sh
CACERTS=$(cat <<-EOF
@@CACERTS_CONTENT@@
EOF
)
CA_FILE=/usr/local/share/ca-certificates/magnum-external.crt
if [ -n "$CACERTS" ]; then
touch $CA_FILE
echo "$CACERTS" | tee -a $CA_FILE
chmod 0644 $CA_FILE
chown root:root $CA_FILE
update-ca-certificates
# Legacy versions of requests shipped with os-collect-config can have their own CA cert database
for REQUESTS_LOCATION in \
/opt/stack/venvs/os-collect-config/lib/python2.7/site-packages/requests \
/usr/local/lib/python2.7/dist-packages/requests; do
if [ -f "${REQUESTS_LOCATION}/cacert.pem" ]; then
echo "$CACERTS" | tee -a "${REQUESTS_LOCATION}/cacert.pem"
fi
done
if [ -f /etc/init/os-collect-config.conf ]; then
service os-collect-config restart
fi
fi

View File

@ -1,38 +0,0 @@
#!/bin/sh
. /etc/sysconfig/heat-params
DOCKER_PROXY_CONF=/etc/default/docker
BASH_RC=/etc/bash.bashrc
if [ -n "$HTTP_PROXY" ]; then
echo "export http_proxy=$HTTP_PROXY" >> $DOCKER_PROXY_CONF
if [ -f "$BASH_RC" ]; then
echo "export http_proxy=$HTTP_PROXY" >> $BASH_RC
else
echo "File $BASH_RC does not exist, not setting http_proxy"
fi
fi
if [ -n "$HTTPS_PROXY" ]; then
echo "export https_proxy=$HTTPS_PROXY" >> $DOCKER_PROXY_CONF
if [ -f $BASH_RC ]; then
echo "export https_proxy=$HTTPS_PROXY" >> $BASH_RC
else
echo "File $BASH_RC does not exist, not setting https_proxy"
fi
fi
if [ -n "$HTTP_PROXY" -o -n $HTTPS_PROXY ]; then
service docker restart
fi
if [ -f "$BASH_RC" ]; then
if [ -n "$NO_PROXY" ]; then
echo "export no_proxy=$NO_PROXY" >> $BASH_RC
fi
else
echo "File $BASH_RC does not exist, not setting no_proxy"
fi

View File

@ -1,72 +0,0 @@
#!/bin/bash
. /etc/sysconfig/heat-params
echo "Configuring mesos (master)"
myip=$(ip addr show eth0 |
awk '$1 == "inet" {print $2}' | cut -f1 -d/)
# Fix /etc/hosts
sed -i "s/127.0.1.1/$myip/" /etc/hosts
######################################################################
#
# Configure ZooKeeper
#
# List all ZooKeeper nodes
id=1
for master_ip in $MESOS_MASTERS_IPS; do
echo "server.$((id++))=${master_ip}:2888:3888" >> /etc/zookeeper/conf/zoo.cfg
done
# Set a ID for this node
id=1
for master_ip in $MESOS_MASTERS_IPS; do
if [ "$master_ip" = "$myip" ]; then
break
fi
id=$((id+1))
done
echo "$id" > /etc/zookeeper/conf/myid
######################################################################
#
# Configure Mesos
#
# Set the ZooKeeper URL
zk="zk://"
for master_ip in $MESOS_MASTERS_IPS; do
zk="${zk}${master_ip}:2181,"
done
# Remove trailing ',' (format: zk://host1:port1,...,hostN:portN/path)
zk=${zk::-1}
echo "${zk}/mesos" > /etc/mesos/zk
# The IP address to listen on
echo "$myip" > /etc/mesos-master/ip
# The size of the quorum of replicas
echo "$QUORUM" > /etc/mesos-master/quorum
# The hostname advertised in ZooKeeper
echo "$myip" > /etc/mesos-master/hostname
# The cluster name
echo "$CLUSTER_NAME" > /etc/mesos-master/cluster
######################################################################
#
# Configure Marathon
#
mkdir -p /etc/marathon/conf
# Set the ZooKeeper URL
echo "${zk}/mesos" > /etc/marathon/conf/master
echo "${zk}/marathon" > /etc/marathon/conf/zk
# Set the hostname advertised in ZooKeeper
echo "$myip" > /etc/marathon/conf/hostname

View File

@ -1,53 +0,0 @@
#!/bin/bash
. /etc/sysconfig/heat-params
echo "Configuring mesos (slave)"
myip=$(ip addr show eth0 |
awk '$1 == "inet" {print $2}' | cut -f1 -d/)
zk=""
for master_ip in $MESOS_MASTERS_IPS; do
zk="${zk}${master_ip}:2181,"
done
# Remove last ','
zk=${zk::-1}
# Zookeeper URL. This specifies how to connect to a quorum of masters
# Format: zk://host1:port1,...,hostN:portN/path
echo "zk://${zk}/mesos" > /etc/mesos/zk
# The hostname the slave should report
echo "$myip" > /etc/mesos-slave/hostname
# The IP address to listen on
echo "$myip" > /etc/mesos-slave/ip
# List of containerizer implementations
echo "docker,mesos" > /etc/mesos-slave/containerizers
# Amount of time to wait for an executor to register
cat > /etc/mesos-slave/executor_registration_timeout <<EOF
$EXECUTOR_REGISTRATION_TIMEOUT
EOF
if [ -n "$ISOLATION" ]; then
echo "$ISOLATION" > /etc/mesos-slave/isolation
fi
if [ -n "$WORK_DIR" ]; then
echo "$WORK_DIR" > /etc/mesos-slave/work_dir
fi
if [ -n "$IMAGE_PROVIDERS" ]; then
if [ -n "$ISOLATION" ]; then
echo "$IMAGE_PROVIDERS" > /etc/mesos-slave/image_providers
else
echo "isolation doesn't exist, not setting image_providers"
fi
fi
if [ -n "$EXECUTOR_ENVIRONMENT_VARIABLES" ]; then
echo "$EXECUTOR_ENVIRONMENT_VARIABLES" > /etc/executor_environment_variables
echo "file:///etc/executor_environment_variables" > /etc/mesos-slave/executor_environment_variables
fi

View File

@ -1,8 +0,0 @@
#!/bin/sh
# Start master services
for service in zookeeper mesos-master marathon; do
echo "starting service $service"
service $service start
rm -f /etc/init/$service.override
done

View File

@ -1,8 +0,0 @@
#!/bin/sh
# Start slave services
for service in docker mesos-slave; do
echo "starting service $service"
service $service start
rm -f /etc/init/$service.override
done

View File

@ -1,42 +0,0 @@
#!/bin/sh
. /etc/sysconfig/heat-params
# Judge whether to install the rexray driver
if [ "$VOLUME_DRIVER" != "rexray" ]; then
exit 0
fi
# NOTE(yatin): "openstack" storageDriver is not supported in latest version
# of rexray. So use stable version 0.3.3. Once it is supported by rexray:
# http://rexray.readthedocs.io/en/stable/, we can revert this commit.
curl -sSL https://dl.bintray.com/emccode/rexray/install | bash -s -- stable 0.3.3
CLOUD_CONFIG=/etc/rexray/config.yml
CLOUD=/etc/rexray
if [ ! -d ${CLOUD} ]; then
mkdir -p $CLOUD
fi
if [ "${AUTH_URL##*/}" = "v3" ]; then
extra_configs="domainName: $DOMAIN_NAME"
fi
cat > $CLOUD_CONFIG <<EOF
rexray:
storageDrivers:
- openstack
volume:
mount:
preempt: $REXRAY_PREEMPT
openstack:
authUrl: $AUTH_URL
username: $USERNAME
password: $PASSWORD
tenantName: $TENANT_NAME
regionName: $REGION_NAME
availabilityZoneName: nova
$extra_configs
EOF
service rexray start
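The only conditional piece of the rexray configuration above is the domainName entry, which should be added only for Keystone v3 auth URLs. A small sketch of that check; the URLs and helper name are illustrative.

    def extra_configs(auth_url, domain_name):
        # Mirror the intent of the AUTH_URL check above: only Keystone v3
        # endpoints need a domainName entry in the rexray config.
        if auth_url.rstrip('/').rsplit('/', 1)[-1] == 'v3':
            return 'domainName: %s' % domain_name
        return ''

    print(extra_configs('http://10.0.0.2:5000/v3', 'default'))    # -> domainName: default
    print(extra_configs('http://10.0.0.2:5000/v2.0', 'default'))  # -> (empty string)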

View File

@ -1,11 +0,0 @@
#!/bin/sh
mkdir -p /etc/sysconfig
cat > /etc/sysconfig/heat-params <<EOF
MESOS_MASTERS_IPS="$MESOS_MASTERS_IPS"
CLUSTER_NAME="$CLUSTER_NAME"
QUORUM="$((($NUMBER_OF_MASTERS+1)/2))"
HTTP_PROXY="$HTTP_PROXY"
HTTPS_PROXY="$HTTPS_PROXY"
NO_PROXY="$NO_PROXY"
EOF
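The QUORUM value written above is floor((n + 1) / 2), i.e. a strict majority of the masters, which is what ZooKeeper and the Mesos replicated log require. For example:

    def quorum(number_of_masters):
        # Same integer arithmetic as QUORUM="$((($NUMBER_OF_MASTERS+1)/2))".
        return (number_of_masters + 1) // 2

    for n in (1, 3, 5):
        print(n, quorum(n))  # prints: 1 1, 3 2, 5 3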

View File

@ -1,24 +0,0 @@
#cloud-config
merge_how: dict(recurse_array)+list(append)
write_files:
- path: /etc/sysconfig/heat-params
owner: "root:root"
permissions: "0600"
content: |
MESOS_MASTERS_IPS="$MESOS_MASTERS_IPS"
EXECUTOR_REGISTRATION_TIMEOUT="$EXECUTOR_REGISTRATION_TIMEOUT"
HTTP_PROXY="$HTTP_PROXY"
HTTPS_PROXY="$HTTPS_PROXY"
NO_PROXY="$NO_PROXY"
AUTH_URL="$AUTH_URL"
USERNAME="$USERNAME"
PASSWORD="$PASSWORD"
TENANT_NAME="$TENANT_NAME"
VOLUME_DRIVER="$VOLUME_DRIVER"
REGION_NAME="$REGION_NAME"
DOMAIN_NAME="$DOMAIN_NAME"
REXRAY_PREEMPT="$REXRAY_PREEMPT"
ISOLATION="$ISOLATION"
WORK_DIR="$WORK_DIR"
IMAGE_PROVIDERS="$IMAGE_PROVIDERS"
EXECUTOR_ENVIRONMENT_VARIABLES="$EXECUTOR_ENVIRONMENT_VARIABLES"

View File

@ -1,207 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines software configs for Mesos slave.
parameters:
executor_registration_timeout:
type: string
description: >
Amount of time to wait for an executor to register with the slave before
considering it hung and shutting it down
http_proxy:
type: string
description: http proxy address for docker
https_proxy:
type: string
description: https proxy address for docker
no_proxy:
type: string
description: no proxies for docker
auth_url:
type: string
description: >
url for mesos to authenticate before sending request
username:
type: string
description: user name
password:
type: string
description: >
user password, not set in the current implementation, only used to
fill in for the Mesos config file
hidden: true
tenant_name:
type: string
description: >
tenant_name is used to isolate access to Compute resources
volume_driver:
type: string
description: volume driver to use for container storage
region_name:
type: string
description: A logically separate section of the cluster
domain_name:
type: string
description: >
domain is to define the administrative boundaries for management
of Keystone entities
rexray_preempt:
type: string
description: >
enables any host to take control of a volume irrespective of whether
other hosts are using the volume
verify_ca:
type: boolean
description: whether or not to validate certificate authority
mesos_slave_isolation:
type: string
description: >
Isolation mechanisms to use, e.g., `posix/cpu,posix/mem`, or
`cgroups/cpu,cgroups/mem`, or network/port_mapping (configure with flag:
`--with-network-isolator` to enable), or `cgroups/devices/gpus/nvidia`
for nvidia specific gpu isolation (configure with flag:
`--enable-nvidia-gpu-support` to enable), or `external`, or load an alternate isolator
module using the `--modules` flag. Note that this flag is only relevant
for the Mesos Containerizer.
mesos_slave_work_dir:
type: string
description: directory path to place framework work directories
mesos_slave_image_providers:
type: string
description: >
Comma separated list of supported image providers e.g.,
APPC,DOCKER
mesos_slave_executor_env_variables:
type: string
description: >
JSON object representing the environment variables that should be passed
to the executor, and thus subsequently to the task(s). By default the
executor will inherit the slave's environment variables.
mesos_masters_ips:
type: string
description: IP addresses of the Mesos master servers.
mesos_slave_wc_curl_cli:
type: string
description: Wait condition notify command for slave.
openstack_ca:
type: string
description: The OpenStack CA certificate to install on the node.
resources:
######################################################################
#
# software configs. these are components that are combined into
# a multipart MIME user-data archive.
#
write_heat_params:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config:
str_replace:
template: {get_file: fragments/write-heat-params.yaml}
params:
"$MESOS_MASTERS_IPS": {get_param: mesos_masters_ips}
"$EXECUTOR_REGISTRATION_TIMEOUT": {get_param: executor_registration_timeout}
"$HTTP_PROXY": {get_param: http_proxy}
"$HTTPS_PROXY": {get_param: https_proxy}
"$NO_PROXY": {get_param: no_proxy}
"$AUTH_URL": {get_param: auth_url}
"$USERNAME": {get_param: username}
"$PASSWORD": {get_param: password}
"$TENANT_NAME": {get_param: tenant_name}
"$VOLUME_DRIVER": {get_param: volume_driver}
"$REGION_NAME": {get_param: region_name}
"$DOMAIN_NAME": {get_param: domain_name}
"$REXRAY_PREEMPT": {get_param: rexray_preempt}
"$ISOLATION": {get_param: mesos_slave_isolation}
"$WORK_DIR": {get_param: mesos_slave_work_dir}
"$IMAGE_PROVIDERS": {get_param: mesos_slave_image_providers}
"$EXECUTOR_ENVIRONMENT_VARIABLES": {get_param: mesos_slave_executor_env_variables}
add_ext_ca_certs:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config:
str_replace:
template: {get_file: fragments/add-ext-ca-certs.sh}
params:
"@@CACERTS_CONTENT@@": {get_param: openstack_ca}
configure_mesos_slave:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config: {get_file: fragments/configure-mesos-slave.sh}
start_services:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config: {get_file: fragments/start-services-slave.sh}
slave_wc_notify:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config:
str_replace:
template: |
#!/bin/bash -v
wc_notify $VERIFY_CA --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: {get_param: mesos_slave_wc_curl_cli}
"$VERIFY_CA": {get_param: verify_ca}
add_proxy:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config: {get_file: fragments/add-proxy.sh}
volume_service:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config: {get_file: fragments/volume-service.sh}
mesos_slave_init:
type: OS::Heat::MultipartMime
properties:
parts:
- config: {get_resource: add_ext_ca_certs}
- config: {get_resource: write_heat_params}
- config: {get_resource: configure_mesos_slave}
- config: {get_resource: add_proxy}
- config: {get_resource: volume_service}
- config: {get_resource: start_services}
- config: {get_resource: slave_wc_notify}
outputs:
mesos_init:
value: {get_resource: mesos_slave_init}
description: ID of the multipart mime.
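The fragments above are combined into a single multipart MIME user-data blob by OS::Heat::MultipartMime. A rough sketch of the equivalent assembly with the Python standard library, using placeholder fragment bodies in the same order as the template; Heat performs this server-side.

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    # Placeholder stand-ins for the real fragment files, in template order.
    fragments = [
        ('add-ext-ca-certs.sh', '#!/bin/sh\n# install extra CA certificates\n'),
        ('write-heat-params.yaml', '#cloud-config\nwrite_files: []\n'),
        ('configure-mesos-slave.sh', '#!/bin/bash\n# configure the mesos slave\n'),
        ('add-proxy.sh', '#!/bin/sh\n# docker proxy settings\n'),
        ('volume-service.sh', '#!/bin/sh\n# optional rexray setup\n'),
        ('start-services-slave.sh', '#!/bin/sh\n# start docker and mesos-slave\n'),
        ('wc-notify.sh', '#!/bin/bash -v\n# signal the wait condition\n'),
    ]

    userdata = MIMEMultipart()
    for name, body in fragments:
        subtype = 'x-shellscript' if body.startswith('#!') else 'cloud-config'
        part = MIMEText(body, subtype)
        part.add_header('Content-Disposition', 'attachment', filename=name)
        userdata.attach(part)

    # cloud-init consumes this multipart/mixed document as user data.
    print(userdata.as_string()[:120])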

View File

@ -1,543 +0,0 @@
heat_template_version: 2014-10-16
description: >
This template will boot a Mesos cluster with one or more masters
(as specified by number_of_masters, default is 1) and one or more slaves
(as specified by the number_of_slaves parameter, which
defaults to 1).
parameters:
is_cluster_stack:
type: boolean
default: false
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
default: ""
ssh_public_key:
type: string
description: The public ssh key to add in all nodes
default: ""
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
default: public
fixed_network:
type: string
description: uuid/name of an existing network to use to provision machines
default: ""
fixed_subnet:
type: string
description: uuid/name of an existing subnet to use to provision machines
default: ""
master_image:
type: string
default: ubuntu-mesos
description: glance image used to boot the server
slave_image:
type: string
default: ubuntu-mesos
description: glance image used to boot the server
master_flavor:
type: string
default: m1.small
description: flavor to use when booting the master server
slave_flavor:
type: string
default: m1.small
description: flavor to use when booting the slave server
dns_nameserver:
type: comma_delimited_list
description: address of a dns nameserver reachable in your environment
default: 8.8.8.8
number_of_slaves:
type: number
description: how many mesos slaves to spawn initially
default: 1
fixed_subnet_cidr:
type: string
description: network range for fixed ip network
default: 10.0.0.0/24
wait_condition_timeout:
type: number
description: >
timeout for the Wait Conditions
default: 6000
cluster_name:
type: string
description: human readable name for the mesos cluster
default: my-cluster
executor_registration_timeout:
type: string
description: >
Amount of time to wait for an executor to register with the slave before
considering it hung and shutting it down
default: 5mins
number_of_masters:
type: number
description: how many mesos masters to spawn initially
default: 1
http_proxy:
type: string
description: http proxy address for docker
default: ""
https_proxy:
type: string
description: https proxy address for docker
default: ""
no_proxy:
type: string
description: no proxies for docker
default: ""
trustee_domain_id:
type: string
description: domain id of the trustee
default: ""
trustee_user_id:
type: string
description: user id of the trustee
default: ""
trustee_username:
type: string
description: username of the trustee
default: ""
trustee_password:
type: string
description: password of the trustee
default: ""
hidden: true
trust_id:
type: string
description: id of the trust which is used by the trustee
default: ""
hidden: true
region_name:
type: string
description: a logically separate section of the cluster
username:
type: string
description: user name
password:
type: string
description: >
user password, not set in the current implementation, only used to
fill in for the Mesos config file
default:
password
hidden: true
tenant_name:
type: string
description: >
tenant_name is used to isolate access to Compute resources
volume_driver:
type: string
description: volume driver to use for container storage
default: ""
domain_name:
type: string
description: >
domain is to define the administrative boundaries for management
of Keystone entities
rexray_preempt:
type: string
description: >
enables any host to take control of a volume irrespective of whether
other hosts are using the volume
default: "false"
auth_url:
type: string
description: url for keystone
mesos_slave_isolation:
type: string
description: >
Isolation mechanisms to use, e.g., `posix/cpu,posix/mem`, or
`cgroups/cpu,cgroups/mem`, or network/port_mapping (configure with flag:
`--with-network-isolator` to enable), or `cgroups/devices/gpus/nvidia`
for nvidia specific gpu isolation (configure with flag:
`--enable-nvidia-gpu-support` to enable), or `external`, or load an alternate isolator
module using the `--modules` flag. Note that this flag is only relevant
for the Mesos Containerizer.
default: ""
mesos_slave_work_dir:
type: string
description: directory path to place framework work directories
default: ""
mesos_slave_image_providers:
type: string
description: >
Comma separated list of supported image providers e.g.,
APPC,DOCKER
default: ""
mesos_slave_executor_env_variables:
type: string
description: >
JSON object representing the environment variables that should be passed
to the executor, and thus subsequently to the task(s). By default the
executor will inherit the slave's environment variables.
default: ""
slaves_to_remove:
type: comma_delimited_list
description: >
List of slaves to be removed when doing an update. An individual slave may
be referenced in several ways: (1) the resource name (e.g. ['1', '3']),
(2) the private IP address (e.g. ['10.0.0.4', '10.0.0.6']). Note: the list
should be empty when doing a create.
default: []
verify_ca:
type: boolean
description: whether or not to validate certificate authority
openstack_ca:
type: string
hidden: true
description: The OpenStack CA certificate to install on the node.
nodes_affinity_policy:
type: string
description: >
affinity policy for nodes server group
constraints:
- allowed_values: ["affinity", "anti-affinity", "soft-affinity",
"soft-anti-affinity"]
resources:
######################################################################
#
# network resources. allocate a network and router for our server.
#
network:
type: ../../common/templates/network.yaml
properties:
existing_network: {get_param: fixed_network}
existing_subnet: {get_param: fixed_subnet}
private_network_cidr: {get_param: fixed_subnet_cidr}
dns_nameserver: {get_param: dns_nameserver}
external_network: {get_param: external_network}
api_lb:
type: ../../common/templates/lb_api.yaml
properties:
fixed_subnet: {get_attr: [network, fixed_subnet]}
external_network: {get_param: external_network}
protocol: HTTP
port: 8080
######################################################################
#
# security groups. we need to permit network traffic of various
# sorts.
#
secgroup_master:
type: OS::Neutron::SecurityGroup
properties:
rules:
- protocol: icmp
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
remote_mode: remote_group_id
- protocol: tcp
port_range_min: 5050
port_range_max: 5050
- protocol: tcp
port_range_min: 8080
port_range_max: 8080
secgroup_slave_all_open:
type: OS::Neutron::SecurityGroup
properties:
rules:
- protocol: icmp
- protocol: tcp
- protocol: udp
######################################################################
#
# Master SoftwareConfig.
#
write_params_master:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/write-heat-params-master.sh}
inputs:
- name: MESOS_MASTERS_IPS
type: String
- name: CLUSTER_NAME
type: String
- name: QUORUM
type: String
- name: HTTP_PROXY
type: String
- name: HTTPS_PROXY
type: String
- name: NO_PROXY
type: String
configure_master:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/configure-mesos-master.sh}
add_proxy_master:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/add-proxy.sh}
start_services_master:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/start-services-master.sh}
######################################################################
#
# Master SoftwareDeployment.
#
write_params_master_deployment:
type: OS::Heat::SoftwareDeploymentGroup
properties:
config: {get_resource: write_params_master}
servers: {get_attr: [mesos_masters, attributes, mesos_server_id]}
input_values:
MESOS_MASTERS_IPS: {list_join: [' ', {get_attr: [mesos_masters, mesos_master_ip]}]}
CLUSTER_NAME: {get_param: cluster_name}
NUMBER_OF_MASTERS: {get_param: number_of_masters}
HTTP_PROXY: {get_param: http_proxy}
HTTPS_PROXY: {get_param: https_proxy}
NO_PROXY: {get_param: no_proxy}
configure_master_deployment:
type: OS::Heat::SoftwareDeploymentGroup
depends_on:
- write_params_master_deployment
properties:
config: {get_resource: configure_master}
servers: {get_attr: [mesos_masters, attributes, mesos_server_id]}
add_proxy_master_deployment:
type: OS::Heat::SoftwareDeploymentGroup
depends_on:
- configure_master_deployment
properties:
config: {get_resource: add_proxy_master}
servers: {get_attr: [mesos_masters, attributes, mesos_server_id]}
start_services_master_deployment:
type: OS::Heat::SoftwareDeploymentGroup
depends_on:
- add_proxy_master_deployment
properties:
config: {get_resource: start_services_master}
servers: {get_attr: [mesos_masters, attributes, mesos_server_id]}
######################################################################
#
# resources that expose the IPs of either the mesos master or a given
# LBaaS pool depending on whether LBaaS is enabled for the bay.
#
api_address_lb_switch:
type: Magnum::ApiGatewaySwitcher
properties:
pool_public_ip: {get_attr: [api_lb, floating_address]}
pool_private_ip: {get_attr: [api_lb, address]}
master_public_ip: {get_attr: [mesos_masters, resource.0.mesos_master_external_ip]}
master_private_ip: {get_attr: [mesos_masters, resource.0.mesos_master_ip]}
######################################################################
#
# resources that expose one server group for each master and worker nodes
# separately.
#
master_nodes_server_group:
type: OS::Nova::ServerGroup
properties:
policies: [{get_param: nodes_affinity_policy}]
worker_nodes_server_group:
type: OS::Nova::ServerGroup
properties:
policies: [{get_param: nodes_affinity_policy}]
######################################################################
#
# Mesos masters. This is a resource group that will create
# <number_of_masters> masters.
#
mesos_masters:
type: OS::Heat::ResourceGroup
depends_on:
- network
properties:
count: {get_param: number_of_masters}
resource_def:
type: mesosmaster.yaml
properties:
name:
list_join:
- '-'
- [{ get_param: 'OS::stack_name' }, 'master', '%index%']
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: master_image}
master_flavor: {get_param: master_flavor}
external_network: {get_param: external_network}
fixed_network: {get_attr: [network, fixed_network]}
fixed_subnet: {get_attr: [network, fixed_subnet]}
secgroup_mesos_id: {get_resource: secgroup_master}
api_pool_id: {get_attr: [api_lb, pool_id]}
openstack_ca: {get_param: openstack_ca}
nodes_server_group_id: {get_resource: master_nodes_server_group}
######################################################################
#
# Mesos slaves. This is a resource group that will initially
# create <number_of_slaves> slaves, and needs to be manually scaled.
#
mesos_slaves:
type: OS::Heat::ResourceGroup
depends_on:
- network
properties:
count: {get_param: number_of_slaves}
removal_policies: [{resource_list: {get_param: slaves_to_remove}}]
resource_def:
type: mesosslave.yaml
properties:
name:
list_join:
- '-'
- [{ get_param: 'OS::stack_name' }, 'slave', '%index%']
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: slave_image}
slave_flavor: {get_param: slave_flavor}
fixed_network: {get_attr: [network, fixed_network]}
fixed_subnet: {get_attr: [network, fixed_subnet]}
external_network: {get_param: external_network}
secgroup_slave_all_open_id: {get_resource: secgroup_slave_all_open}
mesos_slave_software_configs: {get_attr: [mesos_slave_software_configs, mesos_init]}
nodes_server_group_id: {get_resource: worker_nodes_server_group}
######################################################################
#
# Wait condition handler for Mesos slaves.
#
slave_wait_handle:
type: OS::Heat::WaitConditionHandle
slave_wait_condition:
type: OS::Heat::WaitCondition
properties:
count: {get_param: number_of_slaves}
handle: {get_resource: slave_wait_handle}
timeout: {get_param: wait_condition_timeout}
######################################################################
#
# Software configs for Mesos slaves.
#
mesos_slave_software_configs:
type: mesos_slave_software_configs.yaml
properties:
mesos_masters_ips: {list_join: [' ', {get_attr: [mesos_masters, mesos_master_ip]}]}
executor_registration_timeout: {get_param: executor_registration_timeout}
http_proxy: {get_param: http_proxy}
https_proxy: {get_param: https_proxy}
no_proxy: {get_param: no_proxy}
auth_url: {get_param: auth_url}
username: {get_param: username}
password: {get_param: password}
tenant_name: {get_param: tenant_name}
volume_driver: {get_param: volume_driver}
region_name: {get_param: region_name}
domain_name: {get_param: domain_name}
rexray_preempt: {get_param: rexray_preempt}
mesos_slave_isolation: {get_param: mesos_slave_isolation}
mesos_slave_work_dir: {get_param: mesos_slave_work_dir}
mesos_slave_image_providers: {get_param: mesos_slave_image_providers}
mesos_slave_executor_env_variables: {get_param: mesos_slave_executor_env_variables}
mesos_slave_wc_curl_cli: {get_attr: [slave_wait_handle, curl_cli]}
verify_ca: {get_param: verify_ca}
openstack_ca: {get_param: openstack_ca}
outputs:
api_address:
value: {get_attr: [api_address_lb_switch, public_ip]}
description: >
This is the API endpoint of the Mesos master. Use this to access
the Mesos API from outside the cluster.
mesos_master_private:
value: {get_attr: [mesos_masters, mesos_master_ip]}
description: >
This is a list of the "private" addresses of all the Mesos masters.
mesos_master:
value: {get_attr: [mesos_masters, mesos_master_external_ip]}
description: >
This is the "public" ip address of the Mesos master server. Use this address to
log in to the Mesos master via ssh or to access the Mesos API
from outside the cluster.
mesos_slaves_private:
value: {get_attr: [mesos_slaves, mesos_slave_ip]}
description: >
This is a list of the "private" addresses of all the Mesos slaves.
mesos_slaves:
value: {get_attr: [mesos_slaves, mesos_slave_external_ip]}
description: >
This is a list of the "public" addresses of all the Mesos slaves.

@ -1,131 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines a single Mesos master. This stack is
included by a ResourceGroup resource in the parent template
(mesoscluster.yaml).
parameters:
name:
type: string
description: server name
server_image:
type: string
description: glance image used to boot the server
master_flavor:
type: string
description: flavor to use when booting the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
fixed_network:
type: string
description: Network from which to allocate fixed addresses.
fixed_subnet:
type: string
description: Subnet from which to allocate fixed addresses.
secgroup_mesos_id:
type: string
description: ID of the security group for mesos master.
api_pool_id:
type: string
description: ID of the load balancer pool of Marathon.
openstack_ca:
type: string
hidden: true
description: The OpenStack CA certificate to install on the node.
nodes_server_group_id:
type: string
description: ID of the server group for the cluster nodes.
resources:
add_ext_ca_certs:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template: {get_file: fragments/add-ext-ca-certs.sh}
params:
"@@CACERTS_CONTENT@@": {get_param: openstack_ca}
mesos_master_init:
type: OS::Heat::MultipartMime
properties:
parts:
- config: {get_resource: add_ext_ca_certs}
######################################################################
#
# Mesos master server.
#
# do NOT use "_" (underscore) in the Nova server name
# it creates a mismatch between the generated Nova name and its hostname
# which can lead to weird problems
mesos-master:
type: OS::Nova::Server
properties:
name: {get_param: name}
image: {get_param: server_image}
flavor: {get_param: master_flavor}
key_name: {get_param: ssh_key_name}
user_data_format: SOFTWARE_CONFIG
user_data: {get_resource: mesos_master_init}
networks:
- port: {get_resource: mesos_master_eth0}
scheduler_hints: { group: { get_param: nodes_server_group_id }}
mesos_master_eth0:
type: OS::Neutron::Port
properties:
network: {get_param: fixed_network}
security_groups:
- {get_param: secgroup_mesos_id}
fixed_ips:
- subnet: {get_param: fixed_subnet}
replacement_policy: AUTO
mesos_master_floating:
type: OS::Neutron::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_resource: mesos_master_eth0}
api_pool_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_id}
address: {get_attr: [mesos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 8080
outputs:
mesos_master_ip:
value: {get_attr: [mesos_master_eth0, fixed_ips, 0, ip_address]}
description: >
This is the "private" address of the Mesos master node.
mesos_master_external_ip:
value: {get_attr: [mesos_master_floating, floating_ip_address]}
description: >
This is the "public" address of the Mesos master node.
mesos_server_id:
value: {get_resource: mesos-master}
description: >
This is the logical id of the Mesos master node.
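
Note that the parent template builds each master's name with list_join over
'-' (stack name, 'master', index), which is why the comment above warns
against underscores in the Nova server name. A standalone sketch of that
naming convention; the stack name is illustrative:

    # Illustrative equivalent of the list_join used by the parent ResourceGroup.
    def master_server_name(stack_name, index):
        return '-'.join([stack_name, 'master', str(index)])

    assert master_server_name('mycluster', 0) == 'mycluster-master-0'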

@ -1,98 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines a single Mesos slave. This stack is
included by a ResourceGroup resource in the parent template
(mesoscluster.yaml).
parameters:
name:
type: string
description: server name
server_image:
type: string
description: glance image used to boot the server
slave_flavor:
type: string
description: flavor to use when booting the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
fixed_network:
type: string
description: Network from which to allocate fixed addresses.
fixed_subnet:
type: string
description: Subnet from which to allocate fixed addresses.
secgroup_slave_all_open_id:
type: string
description: ID of the security group for slave.
mesos_slave_software_configs:
type: string
description: ID of the multipart mime.
nodes_server_group_id:
type: string
description: ID of the server group for the cluster nodes.
resources:
######################################################################
#
# a single Mesos slave.
#
# do NOT use "_" (underscore) in the Nova server name
# it creates a mismatch between the generated Nova name and its hostname
# which can lead to weird problems
mesos-slave:
type: OS::Nova::Server
properties:
name: {get_param: name}
image: {get_param: server_image}
flavor: {get_param: slave_flavor}
key_name: {get_param: ssh_key_name}
user_data_format: RAW
user_data: {get_param: mesos_slave_software_configs}
networks:
- port: {get_resource: mesos_slave_eth0}
scheduler_hints: { group: { get_param: nodes_server_group_id }}
mesos_slave_eth0:
type: OS::Neutron::Port
properties:
network: {get_param: fixed_network}
security_groups:
- get_param: secgroup_slave_all_open_id
fixed_ips:
- subnet: {get_param: fixed_subnet}
replacement_policy: AUTO
mesos_slave_floating:
type: OS::Neutron::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_resource: mesos_slave_eth0}
outputs:
mesos_slave_ip:
value: {get_attr: [mesos_slave_eth0, fixed_ips, 0, ip_address]}
description: >
This is the "private" address of the Mesos slave node.
mesos_slave_external_ip:
value: {get_attr: [mesos_slave_floating, floating_ip_address]}
description: >
This is the "public" address of the Mesos slave node.

@ -1,17 +0,0 @@
# Copyright 2016 - Rackspace Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
version = '1.0.0'
driver = 'mesos_ubuntu_v1'
container_version = '1.9.1'

@ -38,7 +38,6 @@ def gen_coe_dep_network_driver(coe):
'kubernetes': ['flannel', None],
'swarm': ['docker', 'flannel', None],
'swarm-mode': ['docker', None],
'mesos': ['docker', None],
}
driver_types = allowed_driver_types[coe]
return driver_types[random.randrange(0, len(driver_types))]
@ -49,7 +48,6 @@ def gen_coe_dep_volume_driver(coe):
'kubernetes': ['cinder', None],
'swarm': ['rexray', None],
'swarm-mode': ['rexray', None],
'mesos': ['rexray', None],
}
driver_types = allowed_driver_types[coe]
return driver_types[random.randrange(0, len(driver_types))]

@ -1,25 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.tests.functional.python_client_base import ClusterTest
class TestClusterResource(ClusterTest):
coe = 'mesos'
cluster_template_kwargs = {
"tls_disabled": True,
"network_driver": 'docker',
"volume_driver": 'rexray'
}
def test_cluster_create_and_delete(self):
pass

@ -1,545 +0,0 @@
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from unittest.mock import patch
from magnum.drivers.heat import driver as heat_driver
from magnum.drivers.mesos_ubuntu_v1 import driver as mesos_dr
from magnum import objects
from magnum.objects.fields import ClusterStatus as cluster_status
from magnum.tests import base
class TestClusterConductorWithMesos(base.TestCase):
def setUp(self):
super(TestClusterConductorWithMesos, self).setUp()
self.cluster_template_dict = {
'image_id': 'image_id',
'flavor_id': 'flavor_id',
'master_flavor_id': 'master_flavor_id',
'keypair_id': 'keypair_id',
'dns_nameserver': 'dns_nameserver',
'external_network_id': 'external_network_id',
'cluster_distro': 'ubuntu',
'coe': 'mesos',
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'registry_enabled': False,
'server_type': 'vm',
'volume_driver': 'volume_driver',
'labels': {'rexray_preempt': 'False',
'mesos_slave_isolation':
'docker/runtime,filesystem/linux',
'mesos_slave_image_providers': 'docker',
'mesos_slave_executor_env_variables': '{}',
'mesos_slave_work_dir': '/tmp/mesos/slave'
},
'master_lb_enabled': False,
'fixed_network': 'fixed_network',
'fixed_subnet': 'fixed_subnet',
}
self.cluster_dict = {
'id': 1,
'uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
'cluster_template_id': 'xx-xx-xx-xx',
'keypair': 'keypair_id',
'master_flavor_id': 'master_flavor_id',
'flavor_id': 'flavor_id',
'name': 'cluster1',
'stack_id': 'xx-xx-xx-xx',
'api_address': '172.17.2.3',
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
'labels': {'rexray_preempt': 'False',
'mesos_slave_isolation':
'docker/runtime,filesystem/linux',
'mesos_slave_image_providers': 'docker',
'mesos_slave_executor_env_variables': '{}',
'mesos_slave_work_dir': '/tmp/mesos/slave'
},
'fixed_network': '',
'fixed_subnet': '',
'floating_ip_enabled': False,
'master_lb_enabled': False,
}
self.worker_ng_dict = {
'uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a53',
'name': 'worker_ng',
'cluster_id': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
'project_id': 'project_id',
'docker_volume_size': 20,
'labels': self.cluster_dict['labels'],
'flavor_id': 'flavor_id',
'image_id': 'image_id',
'node_addresses': ['172.17.2.4'],
'node_count': 1,
'role': 'worker',
'max_nodes': 5,
'min_nodes': 1,
'is_default': True
}
self.master_ng_dict = {
'uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a54',
'name': 'master_ng',
'cluster_id': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
'project_id': 'project_id',
'docker_volume_size': 20,
'labels': self.cluster_dict['labels'],
'flavor_id': 'master_flavor_id',
'image_id': 'image_id',
'node_addresses': ['172.17.2.18'],
'node_count': 1,
'role': 'master',
'max_nodes': 5,
'min_nodes': 1,
'is_default': True
}
self.context.user_name = 'mesos_user'
self.context.project_id = 'admin'
self.context.domain_name = 'domainname'
osc_patcher = mock.patch('magnum.common.clients.OpenStackClients')
self.mock_osc_class = osc_patcher.start()
self.addCleanup(osc_patcher.stop)
self.mock_osc = mock.MagicMock()
self.mock_osc.cinder_region_name.return_value = 'RegionOne'
mock_keypair = mock.MagicMock()
mock_keypair.public_key = 'ssh-rsa AAAAB3Nz'
self.mock_nova = mock.MagicMock()
self.mock_nova.keypairs.get.return_value = mock_keypair
self.mock_osc.nova.return_value = self.mock_nova
self.mock_keystone = mock.MagicMock()
self.mock_keystone.trustee_domain_id = 'trustee_domain_id'
self.mock_osc.keystone.return_value = self.mock_keystone
self.mock_osc_class.return_value = self.mock_osc
self.mock_osc.url_for.return_value = 'http://192.168.10.10:5000/v3'
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
@patch('magnum.objects.NodeGroup.list')
@patch('magnum.drivers.common.driver.Driver.get_driver')
def test_extract_template_definition_all_values(
self,
mock_driver,
mock_objects_nodegroup_list,
mock_objects_cluster_template_get_by_uuid):
cluster_template = objects.ClusterTemplate(
self.context, **self.cluster_template_dict)
mock_objects_cluster_template_get_by_uuid.return_value = \
cluster_template
cluster = objects.Cluster(self.context, **self.cluster_dict)
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
mock_driver.return_value = mesos_dr.Driver()
(template_path,
definition,
env_files) = mock_driver()._extract_template_definition(self.context,
cluster)
expected = {
'ssh_key_name': 'keypair_id',
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
'external_network': 'external_network_id',
'fixed_network': 'fixed_network',
'fixed_subnet': 'fixed_subnet',
'dns_nameserver': 'dns_nameserver',
'master_image': 'image_id',
'slave_image': 'image_id',
'master_flavor': 'master_flavor_id',
'slave_flavor': 'flavor_id',
'number_of_slaves': 1,
'number_of_masters': 1,
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'cluster_name': 'cluster1',
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': '',
'volume_driver': 'volume_driver',
'auth_url': 'http://192.168.10.10:5000/v3',
'region_name': self.mock_osc.cinder_region_name.return_value,
'username': 'mesos_user',
'tenant_name': 'admin',
'domain_name': 'domainname',
'rexray_preempt': 'False',
'mesos_slave_executor_env_variables': '{}',
'mesos_slave_isolation': 'docker/runtime,filesystem/linux',
'mesos_slave_work_dir': '/tmp/mesos/slave',
'mesos_slave_image_providers': 'docker',
'verify_ca': True,
'openstack_ca': '',
'nodes_affinity_policy': 'soft-anti-affinity',
}
self.assertEqual(expected, definition)
self.assertEqual(
['../../common/templates/environments/no_private_network.yaml',
'../../common/templates/environments/no_master_lb.yaml'],
env_files)
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
@patch('magnum.objects.NodeGroup.list')
@patch('magnum.drivers.common.driver.Driver.get_driver')
def test_extract_template_definition_only_required(
self,
mock_driver,
mock_objects_nodegroup_list,
mock_objects_cluster_template_get_by_uuid):
not_required = ['image_id', 'master_flavor_id', 'flavor_id',
'dns_nameserver', 'fixed_network', 'http_proxy',
'https_proxy', 'no_proxy', 'volume_driver',
'fixed_subnet']
for key in not_required:
self.cluster_template_dict[key] = None
cluster_template = objects.ClusterTemplate(
self.context, **self.cluster_template_dict)
mock_objects_cluster_template_get_by_uuid.return_value = \
cluster_template
cluster = objects.Cluster(self.context, **self.cluster_dict)
del self.worker_ng_dict['image_id']
del self.master_ng_dict['image_id']
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
mock_driver.return_value = mesos_dr.Driver()
(template_path,
definition,
env_files) = mock_driver()._extract_template_definition(self.context,
cluster)
expected = {
'ssh_key_name': 'keypair_id',
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
'external_network': 'external_network_id',
'number_of_slaves': 1,
'number_of_masters': 1,
'cluster_name': 'cluster1',
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': '',
'auth_url': 'http://192.168.10.10:5000/v3',
'region_name': self.mock_osc.cinder_region_name.return_value,
'username': 'mesos_user',
'tenant_name': 'admin',
'domain_name': 'domainname',
'rexray_preempt': 'False',
'mesos_slave_isolation': 'docker/runtime,filesystem/linux',
'mesos_slave_executor_env_variables': '{}',
'mesos_slave_work_dir': '/tmp/mesos/slave',
'mesos_slave_image_providers': 'docker',
'master_flavor': 'master_flavor_id',
'verify_ca': True,
'slave_flavor': 'flavor_id',
'openstack_ca': '',
'nodes_affinity_policy': 'soft-anti-affinity',
}
self.assertEqual(expected, definition)
self.assertEqual(
['../../common/templates/environments/with_private_network.yaml',
'../../common/templates/environments/no_master_lb.yaml'],
env_files)
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
@patch('magnum.objects.NodeGroup.list')
@patch('magnum.drivers.common.driver.Driver.get_driver')
@patch('magnum.common.keystone.KeystoneClientV3')
def test_extract_template_definition_with_lb_neutron(
self,
mock_kc,
mock_driver,
mock_objects_nodegroup_list,
mock_objects_cluster_template_get_by_uuid):
self.cluster_template_dict['master_lb_enabled'] = True
cluster_template = objects.ClusterTemplate(
self.context, **self.cluster_template_dict)
mock_objects_cluster_template_get_by_uuid.return_value = \
cluster_template
self.cluster_dict["master_lb_enabled"] = True
cluster = objects.Cluster(self.context, **self.cluster_dict)
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
mock_driver.return_value = mesos_dr.Driver()
mock_kc.return_value.client.services.list.return_value = []
(template_path,
definition,
env_files) = mock_driver()._extract_template_definition(self.context,
cluster)
expected = {
'ssh_key_name': 'keypair_id',
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
'external_network': 'external_network_id',
'fixed_network': 'fixed_network',
'fixed_subnet': 'fixed_subnet',
'dns_nameserver': 'dns_nameserver',
'master_image': 'image_id',
'slave_image': 'image_id',
'master_flavor': 'master_flavor_id',
'slave_flavor': 'flavor_id',
'number_of_slaves': 1,
'number_of_masters': 1,
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'cluster_name': 'cluster1',
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': '',
'volume_driver': 'volume_driver',
'auth_url': 'http://192.168.10.10:5000/v3',
'region_name': self.mock_osc.cinder_region_name.return_value,
'username': 'mesos_user',
'tenant_name': 'admin',
'domain_name': 'domainname',
'rexray_preempt': 'False',
'mesos_slave_executor_env_variables': '{}',
'mesos_slave_isolation': 'docker/runtime,filesystem/linux',
'mesos_slave_work_dir': '/tmp/mesos/slave',
'mesos_slave_image_providers': 'docker',
'verify_ca': True,
'openstack_ca': '',
'nodes_affinity_policy': 'soft-anti-affinity',
}
self.assertEqual(expected, definition)
self.assertEqual(
['../../common/templates/environments/no_private_network.yaml',
'../../common/templates/environments/with_master_lb.yaml'],
env_files)
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
@patch('magnum.objects.NodeGroup.list')
@patch('magnum.drivers.common.driver.Driver.get_driver')
@patch('magnum.common.keystone.KeystoneClientV3')
def test_extract_template_definition_with_lb_octavia(
self,
mock_kc,
mock_driver,
mock_objects_nodegroup_list,
mock_objects_cluster_template_get_by_uuid):
self.cluster_template_dict['master_lb_enabled'] = True
cluster_template = objects.ClusterTemplate(
self.context, **self.cluster_template_dict)
mock_objects_cluster_template_get_by_uuid.return_value = \
cluster_template
self.cluster_dict["master_lb_enabled"] = True
cluster = objects.Cluster(self.context, **self.cluster_dict)
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
mock_driver.return_value = mesos_dr.Driver()
class Service(object):
def __init__(self):
self.enabled = True
mock_kc.return_value.client.services.list.return_value = [Service()]
(template_path,
definition,
env_files) = mock_driver()._extract_template_definition(self.context,
cluster)
expected = {
'ssh_key_name': 'keypair_id',
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
'external_network': 'external_network_id',
'fixed_network': 'fixed_network',
'fixed_subnet': 'fixed_subnet',
'dns_nameserver': 'dns_nameserver',
'master_image': 'image_id',
'slave_image': 'image_id',
'master_flavor': 'master_flavor_id',
'slave_flavor': 'flavor_id',
'number_of_slaves': 1,
'number_of_masters': 1,
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'cluster_name': 'cluster1',
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': '',
'volume_driver': 'volume_driver',
'auth_url': 'http://192.168.10.10:5000/v3',
'region_name': self.mock_osc.cinder_region_name.return_value,
'username': 'mesos_user',
'tenant_name': 'admin',
'domain_name': 'domainname',
'rexray_preempt': 'False',
'mesos_slave_executor_env_variables': '{}',
'mesos_slave_isolation': 'docker/runtime,filesystem/linux',
'mesos_slave_work_dir': '/tmp/mesos/slave',
'mesos_slave_image_providers': 'docker',
'verify_ca': True,
'openstack_ca': '',
'nodes_affinity_policy': 'soft-anti-affinity',
}
self.assertEqual(expected, definition)
self.assertEqual(
['../../common/templates/environments/no_private_network.yaml',
'../../common/templates/environments/with_master_lb_octavia.yaml'
],
env_files)
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
@patch('magnum.objects.NodeGroup.list')
@patch('magnum.drivers.common.driver.Driver.get_driver')
@patch('magnum.common.keystone.KeystoneClientV3')
def test_extract_template_definition_multi_master(
self,
mock_kc,
mock_driver,
mock_objects_nodegroup_list,
mock_objects_cluster_template_get_by_uuid):
self.cluster_template_dict['master_lb_enabled'] = True
self.master_ng_dict['node_count'] = 2
cluster_template = objects.ClusterTemplate(
self.context, **self.cluster_template_dict)
mock_objects_cluster_template_get_by_uuid.return_value = \
cluster_template
self.cluster_dict["master_lb_enabled"] = True
cluster = objects.Cluster(self.context, **self.cluster_dict)
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
mock_driver.return_value = mesos_dr.Driver()
mock_kc.return_value.client.services.list.return_value = []
(template_path,
definition,
env_files) = mock_driver()._extract_template_definition(self.context,
cluster)
expected = {
'ssh_key_name': 'keypair_id',
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
'external_network': 'external_network_id',
'fixed_network': 'fixed_network',
'fixed_subnet': 'fixed_subnet',
'dns_nameserver': 'dns_nameserver',
'master_image': 'image_id',
'slave_image': 'image_id',
'master_flavor': 'master_flavor_id',
'slave_flavor': 'flavor_id',
'number_of_slaves': 1,
'number_of_masters': 2,
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'cluster_name': 'cluster1',
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': '',
'volume_driver': 'volume_driver',
'auth_url': 'http://192.168.10.10:5000/v3',
'region_name': self.mock_osc.cinder_region_name.return_value,
'username': 'mesos_user',
'tenant_name': 'admin',
'domain_name': 'domainname',
'rexray_preempt': 'False',
'mesos_slave_executor_env_variables': '{}',
'mesos_slave_isolation': 'docker/runtime,filesystem/linux',
'mesos_slave_work_dir': '/tmp/mesos/slave',
'mesos_slave_image_providers': 'docker',
'verify_ca': True,
'openstack_ca': '',
'nodes_affinity_policy': 'soft-anti-affinity',
}
self.assertEqual(expected, definition)
self.assertEqual(
['../../common/templates/environments/no_private_network.yaml',
'../../common/templates/environments/with_master_lb.yaml'],
env_files)
@patch('magnum.conductor.utils.retrieve_cluster_template')
@patch('magnum.conf.CONF')
@patch('magnum.common.clients.OpenStackClients')
@patch('magnum.drivers.common.driver.Driver.get_driver')
def setup_poll_test(self, mock_driver, mock_openstack_client,
mock_conf, mock_retrieve_cluster_template):
mock_conf.cluster_heat.max_attempts = 10
worker_ng = mock.MagicMock(
uuid='5d12f6fd-a196-4bf0-ae4c-1f639a523a53',
role='worker',
node_count=1,
)
master_ng = mock.MagicMock(
uuid='5d12f6fd-a196-4bf0-ae4c-1f639a523a54',
role='master',
node_count=1,
)
cluster = mock.MagicMock(nodegroups=[worker_ng, master_ng],
default_ng_worker=worker_ng,
default_ng_master=master_ng)
mock_heat_stack = mock.MagicMock()
mock_heat_client = mock.MagicMock()
mock_heat_client.stacks.get.return_value = mock_heat_stack
mock_openstack_client.heat.return_value = mock_heat_client
mock_driver.return_value = mesos_dr.Driver()
cluster_template = objects.ClusterTemplate(
self.context, **self.cluster_template_dict)
mock_retrieve_cluster_template.return_value = cluster_template
poller = heat_driver.HeatPoller(mock_openstack_client,
mock.MagicMock(), cluster,
mesos_dr.Driver())
poller.template_def.add_nodegroup_params(cluster)
poller.get_version_info = mock.MagicMock()
return (mock_heat_stack, cluster, poller)
def test_poll_node_count(self):
mock_heat_stack, cluster, poller = self.setup_poll_test()
mock_heat_stack.parameters = {
'number_of_slaves': 1,
'number_of_masters': 1
}
mock_heat_stack.stack_status = cluster_status.CREATE_IN_PROGRESS
poller.poll_and_check()
self.assertEqual(1, cluster.default_ng_worker.node_count)
def test_poll_node_count_by_update(self):
mock_heat_stack, cluster, poller = self.setup_poll_test()
mock_heat_stack.parameters = {
'number_of_slaves': 2,
'number_of_masters': 1
}
mock_heat_stack.stack_status = cluster_status.UPDATE_COMPLETE
poller.poll_and_check()
self.assertEqual(2, cluster.default_ng_worker.node_count)
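
The two poll tests above reduce to a single rule: after each stack poll, the
default worker node group's node_count must mirror the stack's
number_of_slaves parameter. A conceptual sketch of that synchronization
(names are illustrative, not Magnum's exact API):

    # Illustrative only: mirror the Heat parameter into the worker node group.
    from types import SimpleNamespace

    def sync_worker_count(stack_parameters, worker_nodegroup):
        worker_nodegroup.node_count = int(stack_parameters['number_of_slaves'])
        return worker_nodegroup.node_count

    worker_ng = SimpleNamespace(node_count=1)
    assert sync_worker_count({'number_of_slaves': 2}, worker_ng) == 2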

@ -16,12 +16,10 @@
import tempfile
from unittest import mock
from oslo_serialization import jsonutils
from requests_mock.contrib import fixture
from magnum.common import exception
from magnum.drivers.common import k8s_monitor
from magnum.drivers.mesos_ubuntu_v1 import monitor as mesos_monitor
from magnum.drivers.swarm_fedora_atomic_v1 import monitor as swarm_monitor
from magnum.drivers.swarm_fedora_atomic_v2 import monitor as swarm_v2_monitor
from magnum import objects
@ -65,8 +63,6 @@ class MonitorsTestCase(base.TestCase):
self.v2_monitor = swarm_v2_monitor.SwarmMonitor(self.context,
self.cluster)
self.k8s_monitor = k8s_monitor.K8sMonitor(self.context, self.cluster)
self.mesos_monitor = mesos_monitor.MesosMonitor(self.context,
self.cluster)
p = mock.patch('magnum.drivers.swarm_fedora_atomic_v1.monitor.'
'SwarmMonitor.metrics_spec',
new_callable=mock.PropertyMock)
@ -351,124 +347,6 @@ class MonitorsTestCase(base.TestCase):
cpu_util = self.k8s_monitor.compute_cpu_util()
self.assertEqual(0, cpu_util)
def _test_mesos_monitor_pull_data(
self, mock_url_get, state_json, expected_mem_total,
expected_mem_used, expected_cpu_total, expected_cpu_used):
state_json = jsonutils.dumps(state_json)
mock_url_get.return_value = state_json
self.mesos_monitor.pull_data()
self.assertEqual(self.mesos_monitor.data['mem_total'],
expected_mem_total)
self.assertEqual(self.mesos_monitor.data['mem_used'],
expected_mem_used)
self.assertEqual(self.mesos_monitor.data['cpu_total'],
expected_cpu_total)
self.assertEqual(self.mesos_monitor.data['cpu_used'],
expected_cpu_used)
@mock.patch('magnum.objects.NodeGroup.list')
@mock.patch('magnum.common.urlfetch.get')
def test_mesos_monitor_pull_data_success(self, mock_url_get,
mock_ng_list):
mock_ng_list.return_value = self.nodegroups
state_json = {
'leader': 'master@10.0.0.6:5050',
'pid': 'master@10.0.0.6:5050',
'slaves': [{
'resources': {
'mem': 100,
'cpus': 1,
},
'used_resources': {
'mem': 50,
'cpus': 0.2,
}
}]
}
self._test_mesos_monitor_pull_data(mock_url_get, state_json,
100, 50, 1, 0.2)
@mock.patch('magnum.objects.NodeGroup.list')
@mock.patch('magnum.common.urlfetch.get')
def test_mesos_monitor_pull_data_success_not_leader(self, mock_url_get,
mock_ng_list):
mock_ng_list.return_value = self.nodegroups
state_json = {
'leader': 'master@10.0.0.6:5050',
'pid': 'master@1.1.1.1:5050',
'slaves': []
}
self._test_mesos_monitor_pull_data(mock_url_get, state_json,
0, 0, 0, 0)
@mock.patch('magnum.objects.NodeGroup.list')
@mock.patch('magnum.common.urlfetch.get')
def test_mesos_monitor_pull_data_success_no_master(self, mock_url_get,
mock_ng_list):
mock_ng_list.return_value = []
self._test_mesos_monitor_pull_data(mock_url_get, {}, 0, 0, 0, 0)
def test_mesos_monitor_get_metric_names(self):
mesos_metric_spec = ('magnum.drivers.mesos_ubuntu_v1.monitor.'
'MesosMonitor.metrics_spec')
with mock.patch(mesos_metric_spec,
new_callable=mock.PropertyMock) as mock_mesos_metric:
mock_mesos_metric.return_value = self.test_metrics_spec
names = self.mesos_monitor.get_metric_names()
self.assertEqual(sorted(['metric1', 'metric2']), sorted(names))
def test_mesos_monitor_get_metric_unit(self):
mesos_metric_spec = ('magnum.drivers.mesos_ubuntu_v1.monitor.'
'MesosMonitor.metrics_spec')
with mock.patch(mesos_metric_spec,
new_callable=mock.PropertyMock) as mock_mesos_metric:
mock_mesos_metric.return_value = self.test_metrics_spec
unit = self.mesos_monitor.get_metric_unit('metric1')
self.assertEqual('metric1_unit', unit)
def test_mesos_monitor_compute_memory_util(self):
test_data = {
'mem_total': 100,
'mem_used': 50
}
self.mesos_monitor.data = test_data
mem_util = self.mesos_monitor.compute_memory_util()
self.assertEqual(50, mem_util)
test_data = {
'mem_total': 0,
'pods': 0,
}
self.mesos_monitor.data = test_data
mem_util = self.mesos_monitor.compute_memory_util()
self.assertEqual(0, mem_util)
test_data = {
'mem_total': 100,
'mem_used': 0,
'pods': 0,
}
self.mesos_monitor.data = test_data
mem_util = self.mesos_monitor.compute_memory_util()
self.assertEqual(0, mem_util)
def test_mesos_monitor_compute_cpu_util(self):
test_data = {
'cpu_total': 1,
'cpu_used': 0.2,
}
self.mesos_monitor.data = test_data
cpu_util = self.mesos_monitor.compute_cpu_util()
self.assertEqual(20, cpu_util)
test_data = {
'cpu_total': 100,
'cpu_used': 0,
}
self.mesos_monitor.data = test_data
cpu_util = self.mesos_monitor.compute_cpu_util()
self.assertEqual(0, cpu_util)
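
The utilization figures these monitor tests expect are plain ratios over the
resources reported by the leading master's state JSON: memory utilization is
mem_used / mem_total * 100, CPU utilization is cpu_used / cpu_total * 100,
and 0 is returned when nothing is reported. A small sketch of that
arithmetic, matching the expected values above (50% memory, 20% CPU):

    # Illustrative only: the percentage math the MesosMonitor tests assert.
    def utilization(used, total):
        return used * 100 / total if total else 0

    assert utilization(50, 100) == 50        # memory: 50 of 100 MB used
    assert round(utilization(0.2, 1)) == 20  # cpu: 0.2 of 1 core used
    assert utilization(0, 0) == 0            # no leader / no slaves reported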
@mock.patch('magnum.conductor.k8s_api.create_client_files')
def test_k8s_monitor_health_healthy(self, mock_create_client_files):
mock_create_client_files.return_value = (

@ -20,7 +20,6 @@ from requests_mock.contrib import fixture
from magnum.common import exception
from magnum.conductor import scale_manager
from magnum.drivers.common.k8s_scale_manager import K8sScaleManager
from magnum.drivers.mesos_ubuntu_v1.scale_manager import MesosScaleManager
from magnum.tests import base
@ -225,24 +224,3 @@ class TestK8sScaleManager(base.TestCase):
hosts = mgr._get_hosts_with_container(
mock.MagicMock(), mock_cluster)
self.assertEqual(hosts, {'node1', 'node2'})
class TestMesosScaleManager(base.TestCase):
@mock.patch('magnum.objects.Cluster.get_by_uuid')
@mock.patch('marathon.MarathonClient')
@mock.patch('marathon.MarathonClient.list_tasks')
def test_get_hosts_with_container(self, mock_list_tasks,
mock_client, mock_get):
task_1 = mock.MagicMock()
task_1.host = 'node1'
task_2 = mock.MagicMock()
task_2.host = 'node2'
tasks = [task_1, task_2]
mock_list_tasks.return_value = tasks
mgr = MesosScaleManager(
mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
hosts = mgr._get_hosts_with_container(
mock.MagicMock(), mock.MagicMock())
self.assertEqual(hosts, {'node1', 'node2'})
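
TestMesosScaleManager above pins down what the scale manager used the (now
dropped) marathon client for: list the running Marathon tasks and collect the
hosts that still carry containers, so those slaves are protected when scaling
down. A hedged sketch of that idea, not Magnum's exact code; the URL is
illustrative:

    # Illustrative only; relies on the 'marathon' package removed from
    # requirements.txt further below.
    from marathon import MarathonClient

    def hosts_with_containers(marathon_url='http://10.0.0.6:8080'):
        client = MarathonClient(marathon_url)
        return {task.host for task in client.list_tasks()}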

@ -28,8 +28,6 @@ from magnum.drivers.k8s_fedora_atomic_v1 import driver as k8sa_dr
from magnum.drivers.k8s_fedora_atomic_v1 import template_def as k8sa_tdef
from magnum.drivers.k8s_fedora_ironic_v1 import driver as k8s_i_dr
from magnum.drivers.k8s_fedora_ironic_v1 import template_def as k8si_tdef
from magnum.drivers.mesos_ubuntu_v1 import driver as mesos_dr
from magnum.drivers.mesos_ubuntu_v1 import template_def as mesos_tdef
from magnum.drivers.swarm_fedora_atomic_v1 import driver as swarm_dr
from magnum.drivers.swarm_fedora_atomic_v1 import template_def as swarm_tdef
from magnum.drivers.swarm_fedora_atomic_v2 import driver as swarm_v2_dr
@ -110,17 +108,6 @@ class TemplateDefinitionTestCase(base.TestCase):
self.assertIsInstance(definition,
swarm_v2_tdef.AtomicSwarmTemplateDefinition)
@mock.patch('magnum.drivers.common.driver.Driver.get_driver')
def test_get_vm_ubuntu_mesos_definition(self, mock_driver):
mock_driver.return_value = mesos_dr.Driver()
cluster_driver = driver.Driver.get_driver('vm',
'ubuntu',
'mesos')
definition = cluster_driver.get_template_definition()
self.assertIsInstance(definition,
mesos_tdef.UbuntuMesosTemplateDefinition)
def test_get_driver_not_supported(self):
self.assertRaises(exception.ClusterTypeNotSupported,
driver.Driver.get_driver,
@ -2116,145 +2103,3 @@ class AtomicSwarmTemplateDefinitionTestCase(base.TestCase):
self.assertEqual(expected_api_address, self.mock_cluster.api_address)
self.assertEqual(expected_node_addresses,
self.worker_ng.node_addresses)
class UbuntuMesosTemplateDefinitionTestCase(base.TestCase):
def setUp(self):
super(UbuntuMesosTemplateDefinitionTestCase, self).setUp()
self.master_ng = mock.MagicMock(uuid='master_ng', role='master')
self.worker_ng = mock.MagicMock(uuid='worker_ng', role='worker')
self.nodegroups = [self.master_ng, self.worker_ng]
self.mock_cluster = mock.MagicMock(nodegroups=self.nodegroups,
default_ng_worker=self.worker_ng,
default_ng_master=self.master_ng)
@mock.patch('magnum.common.clients.OpenStackClients')
@mock.patch('magnum.drivers.heat.template_def.BaseTemplateDefinition'
'.get_params')
def test_mesos_get_params(self,
mock_get_params,
mock_osc_class):
mock_context = mock.MagicMock()
mock_context.auth_url = 'http://192.168.10.10:5000/v3'
mock_context.user_name = 'mesos_user'
mock_context.project_id = 'admin'
mock_context.domain_name = 'domainname'
mock_cluster_template = mock.MagicMock()
mock_cluster_template.tls_disabled = False
mock_cluster = mock.MagicMock()
mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
del mock_cluster.stack_id
rexray_preempt = mock_cluster.labels.get('rexray_preempt')
mesos_slave_isolation = mock_cluster.labels.get(
'mesos_slave_isolation')
mesos_slave_work_dir = mock_cluster.labels.get(
'mesos_slave_work_dir')
mesos_slave_image_providers = mock_cluster.labels.get(
'image_providers')
mesos_slave_executor_env_variables = mock_cluster.labels.get(
'mesos_slave_executor_env_variables')
mock_osc = mock.MagicMock()
mock_osc.cinder_region_name.return_value = 'RegionOne'
mock_osc_class.return_value = mock_osc
mesos_def = mesos_tdef.UbuntuMesosTemplateDefinition()
CONF.set_override('nodes_affinity_policy',
'anti-affinity',
group='cluster')
mesos_def.get_params(mock_context, mock_cluster_template, mock_cluster)
expected_kwargs = {'extra_params': {
'region_name': mock_osc.cinder_region_name.return_value,
'nodes_affinity_policy': 'anti-affinity',
'auth_url': 'http://192.168.10.10:5000/v3',
'username': 'mesos_user',
'tenant_name': 'admin',
'domain_name': 'domainname',
'rexray_preempt': rexray_preempt,
'mesos_slave_isolation': mesos_slave_isolation,
'mesos_slave_work_dir': mesos_slave_work_dir,
'mesos_slave_executor_env_variables':
mesos_slave_executor_env_variables,
'mesos_slave_image_providers': mesos_slave_image_providers}}
mock_get_params.assert_called_once_with(mock_context,
mock_cluster_template,
mock_cluster,
**expected_kwargs)
@mock.patch('magnum.common.clients.OpenStackClients')
@mock.patch('magnum.drivers.heat.template_def.TemplateDefinition'
'.get_output')
def test_mesos_get_scale_params(self, mock_get_output,
mock_osc_class):
mock_context = mock.MagicMock()
mock_cluster = mock.MagicMock()
mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
removal_nodes = ['node1', 'node2']
node_count = 7
mock_scale_manager = mock.MagicMock()
mock_scale_manager.get_removal_nodes.return_value = removal_nodes
mesos_def = mesos_tdef.UbuntuMesosTemplateDefinition()
scale_params = mesos_def.get_scale_params(
mock_context,
mock_cluster,
node_count,
mock_scale_manager)
expected_scale_params = {'slaves_to_remove': ['node1', 'node2'],
'number_of_slaves': 7}
self.assertEqual(scale_params, expected_scale_params)
def test_mesos_get_heat_param(self):
mesos_def = mesos_tdef.UbuntuMesosTemplateDefinition()
mesos_def.add_nodegroup_params(self.mock_cluster)
heat_param = mesos_def.get_heat_param(nodegroup_attr='node_count',
nodegroup_uuid='worker_ng')
self.assertEqual('number_of_slaves', heat_param)
heat_param = mesos_def.get_heat_param(nodegroup_attr='node_count',
nodegroup_uuid='master_ng')
self.assertEqual('number_of_masters', heat_param)
def test_update_outputs(self):
mesos_def = mesos_tdef.UbuntuMesosTemplateDefinition()
expected_api_address = 'updated_address'
expected_node_addresses = ['ex_slave', 'address']
expected_master_addresses = ['ex_master', 'address']
outputs = [
{"output_value": expected_api_address,
"description": "No description given",
"output_key": "api_address"},
{"output_value": ['any', 'output'],
"description": "No description given",
"output_key": "mesos_master_private"},
{"output_value": expected_master_addresses,
"description": "No description given",
"output_key": "mesos_master"},
{"output_value": ['any', 'output'],
"description": "No description given",
"output_key": "mesos_slaves_private"},
{"output_value": expected_node_addresses,
"description": "No description given",
"output_key": "mesos_slaves"},
]
mock_stack = mock.MagicMock()
mock_stack.to_dict.return_value = {'outputs': outputs}
mock_cluster_template = mock.MagicMock()
mesos_def.update_outputs(mock_stack, mock_cluster_template,
self.mock_cluster)
self.assertEqual(expected_api_address, self.mock_cluster.api_address)
self.assertEqual(expected_node_addresses,
self.mock_cluster.default_ng_worker.node_addresses)
self.assertEqual(expected_master_addresses,
self.mock_cluster.default_ng_master.node_addresses)
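
test_mesos_get_scale_params above also documents how a scale-down was
expressed: the removal candidates and the new slave count become the
slaves_to_remove and number_of_slaves parameters consumed by the
mesos_slaves ResourceGroup (via its removal_policies) in mesoscluster.yaml.
A minimal sketch of that mapping, illustrative rather than Magnum's code:

    # Illustrative only: the scale-parameter shape asserted above.
    def build_scale_params(node_count, nodes_to_remove):
        return {
            'slaves_to_remove': list(nodes_to_remove),
            'number_of_slaves': node_count,
        }

    assert build_scale_params(7, ['node1', 'node2']) == {
        'slaves_to_remove': ['node1', 'node2'],
        'number_of_slaves': 7,
    }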

@ -0,0 +1,4 @@
---
deprecations:
- |
The Mesos driver (``mesos_ubuntu_v1``) has been removed. Mesos is no longer a supported COE in Magnum.

@ -22,7 +22,6 @@ iso8601>=0.1.11 # MIT
jsonpatch!=1.20,>=1.16 # BSD
keystoneauth1>=3.14.0 # Apache-2.0
keystonemiddleware>=9.0.0 # Apache-2.0
marathon!=0.9.1,>=0.8.6 # MIT
netaddr>=0.7.18 # BSD
oslo.concurrency>=4.1.0 # Apache-2.0
oslo.config>=8.1.0 # Apache-2.0

@ -54,7 +54,6 @@ magnum.drivers =
k8s_coreos_v1 = magnum.drivers.k8s_coreos_v1.driver:Driver
swarm_fedora_atomic_v1 = magnum.drivers.swarm_fedora_atomic_v1.driver:Driver
swarm_fedora_atomic_v2 = magnum.drivers.swarm_fedora_atomic_v2.driver:Driver
mesos_ubuntu_v1 = magnum.drivers.mesos_ubuntu_v1.driver:Driver
k8s_fedora_ironic_v1 = magnum.drivers.k8s_fedora_ironic_v1.driver:Driver
magnum.database.migration_backend =
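
The magnum.drivers entries above are ordinary setuptools entry points, so
deleting the mesos_ubuntu_v1 line is what makes the driver unresolvable by
name. A hedged sketch of entry-point loading with stevedore (illustrative;
Magnum's own loader may differ in detail):

    # Illustrative only: resolve a cluster driver published under 'magnum.drivers'.
    from stevedore import driver as stevedore_driver

    def load_cluster_driver(name):
        mgr = stevedore_driver.DriverManager(
            namespace='magnum.drivers',  # entry-point group from setup.cfg
            name=name,                   # e.g. 'k8s_fedora_atomic_v1'
            invoke_on_load=True,         # instantiate the Driver class
        )
        return mgr.driver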

tox.ini
@ -82,17 +82,6 @@ commands =
find . -type f -name "*.py[c|o]" -delete
stestr run {posargs}
[testenv:functional-mesos]
sitepackages = True
setenv = {[testenv]setenv}
OS_TEST_PATH=./magnum/tests/functional/mesos
OS_TEST_TIMEOUT=7200
deps =
{[testenv]deps}
commands =
find . -type f -name "*.py[c|o]" -delete
stestr run {posargs}
[testenv:pep8]
commands =
doc8 -e .rst specs/ doc/source/ contrib/ CONTRIBUTING.rst HACKING.rst README.rst