Drop mesos driver

The mesos COE driver has not been maintained for quite some
time and has not received much attention from the community
in general. As discussed on the mailing list [1], we
are dropping it for now.

In this patch, we start by removing the mesos driver
and its test cases. This part of the code has no impact
on other drivers. Then we can clean up the mesos references
that affect the API.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026230.html

Conflicts:
	lower-constraints.txt
	tox.ini

Change-Id: Ied76095f1f1c57c6af93d1a6094baa6c7cc31c9b
This commit is contained in:
guilhermesteinmuller 2021-12-09 10:27:10 -03:00 committed by Jake Yip
parent 49804076b1
commit d3d28594b3
74 changed files with 4 additions and 5275 deletions

@@ -1,103 +0,0 @@
How to build a CentOS image which contains DC/OS 1.8.x
======================================================
Here is the advanced DC/OS 1.8 installation guide.
See [Advanced DC/OS Installation Guide]
(https://dcos.io/docs/1.8/administration/installing/custom/advanced/)
See [Install Docker on CentOS]
(https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/)
See [Adding agent nodes]
(https://dcos.io/docs/1.8/administration/installing/custom/add-a-node/)
Create a centos image using DIB following the steps outlined in DC/OS installation guide.
1. Install and configure docker in chroot.
2. Install system requirements in chroot.
3. Download `dcos_generate_config.sh` outside chroot.
This file will be used to run `dcos_generate_config.sh --genconf` to generate
config files on the node during magnum cluster creation.
4. Some configuration changes are required for DC/OS, i.e. disabling firewalld
and adding a group named nogroup.
See the comments in the script file.
Use the centos image to build a DC/OS cluster.
Command:
`magnum cluster-template-create`
`magnum cluster-create`
After all the instances with the CentOS image are created:
1. Pass parameters to config.yaml with magnum cluster template properties.
2. Run `dcos_generate_config.sh --genconf` to generate config files.
3. Run `dcos_install.sh master` on master node and `dcos_install.sh slave` on slave node.
To scale the DC/OS cluster:
Command:
`magnum cluster-update`
The same steps as cluster creation.
1. Create new instances, generate config files on them and install.
2. Or delete those agent nodes where containers are not running.
How to use magnum dcos coe
===============================================
We are assuming that magnum has been installed and the magnum path is `/opt/stack/magnum`.
1. Copy dcos magnum coe source code
$ cp -r /opt/stack/magnum/contrib/drivers/dcos_centos_v1 /opt/stack/magnum/magnum/drivers/
$ cp /opt/stack/magnum/contrib/drivers/common/dcos_* /opt/stack/magnum/magnum/drivers/common/
$ cd /opt/stack/magnum
$ sudo python setup.py install
2. Add driver in setup.cfg
dcos_centos_v1 = magnum.drivers.dcos_centos_v1.driver:Driver
3. Restart your magnum services.
4. Prepare centos image with elements dcos and docker installed
See how to build a centos image in /opt/stack/magnum/magnum/drivers/dcos_centos_v1/image/README.md
5. Create glance image
$ openstack image create centos-7-dcos.qcow2 \
--public \
--disk-format=qcow2 \
--container-format=bare \
--property os_distro=centos \
--file=centos-7-dcos.qcow2
6. Create magnum cluster template
Configure DC/OS cluster with --labels
See https://dcos.io/docs/1.8/administration/installing/custom/configuration-parameters/
$ magnum cluster-template-create --name dcos-cluster-template \
--image-id centos-7-dcos.qcow2 \
--keypair-id testkey \
--external-network-id public \
--dns-nameserver 8.8.8.8 \
--flavor-id m1.medium \
--labels oauth_enabled=false \
--coe dcos
Here is an example of specifying the overlay network in DC/OS;
'dcos_overlay_network' should be a JSON-formatted string.
$ magnum cluster-template-create --name dcos-cluster-template \
--image-id centos-7-dcos.qcow2 \
--keypair-id testkey \
--external-network-id public \
--dns-nameserver 8.8.8.8 \
--flavor-id m1.medium \
--labels oauth_enabled=false \
--labels dcos_overlay_enable='true' \
--labels dcos_overlay_config_attempts='6' \
--labels dcos_overlay_mtu='9001' \
--labels dcos_overlay_network='{"vtep_subnet": "44.128.0.0/20",\
"vtep_mac_oui": "70:B3:D5:00:00:00","overlays":\
[{"name": "dcos","subnet": "9.0.0.0/8","prefix": 26}]}' \
--coe dcos
7. Create magnum cluster
$ magnum cluster-create --name dcos-cluster --cluster-template dcos-cluster-template --node-count 1
8. You may need to wait a while after magnum cluster creation completes for
the DC/OS web interface to become accessible.
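The `dcos_overlay_network` label above must be a valid JSON string. A minimal Python sketch (using the same illustrative values as the example above) that builds and validates such a label before passing it on the magnum command line:

```python
import json

# Build the overlay network definition as a Python dict, then serialize
# it to the JSON string expected by the dcos_overlay_network label.
overlay = {
    "vtep_subnet": "44.128.0.0/20",
    "vtep_mac_oui": "70:B3:D5:00:00:00",
    "overlays": [{"name": "dcos", "subnet": "9.0.0.0/8", "prefix": 26}],
}
label_value = json.dumps(overlay)

# Round-trip to confirm the label is valid JSON before using it.
assert json.loads(label_value)["overlays"][0]["name"] == "dcos"
print(label_value)
```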

@@ -1,36 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.drivers.dcos_centos_v1 import monitor
from magnum.drivers.dcos_centos_v1.scale_manager import DcosScaleManager
from magnum.drivers.dcos_centos_v1 import template_def
from magnum.drivers.heat import driver
class Driver(driver.HeatDriver):
@property
def provides(self):
return [
{'server_type': 'vm',
'os': 'centos',
'coe': 'dcos'},
]
def get_template_definition(self):
return template_def.DcosCentosVMTemplateDefinition()
def get_monitor(self, context, cluster):
return monitor.DcosMonitor(context, cluster)
def get_scale_manager(self, context, osclient, cluster):
return DcosScaleManager(context, osclient, cluster)
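The `provides` property is what lets magnum map a cluster template's (server_type, os, coe) triple to this driver. A minimal sketch of that matching logic (not magnum's actual lookup code; `FakeDcosDriver` is a stand-in for the class above):

```python
# Hypothetical lookup: return the first driver whose `provides` list
# contains an entry matching the requested triple.
def find_driver(drivers, server_type, os, coe):
    for drv in drivers:
        for spec in drv.provides:
            if (spec['server_type'], spec['os'], spec['coe']) == \
                    (server_type, os, coe):
                return drv
    return None

class FakeDcosDriver:
    # Mirrors the `provides` entry declared by the dcos driver above.
    provides = [{'server_type': 'vm', 'os': 'centos', 'coe': 'dcos'}]

assert find_driver([FakeDcosDriver()], 'vm', 'centos', 'dcos') is not None
assert find_driver([FakeDcosDriver()], 'vm', 'ubuntu', 'dcos') is None
```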

@@ -1,86 +0,0 @@
=============
centos-dcos
=============
This directory contains [diskimage-builder](https://github.com/openstack/diskimage-builder)
elements to build a CentOS image which contains DC/OS.
Pre-requisites to run diskimage-builder
---------------------------------------
For diskimage-builder to work, the following packages need to
be present:
* kpartx
* qemu-utils
* curl
* xfsprogs
* yum
* yum-utils
* git
For Debian/Ubuntu systems, use::
apt-get install kpartx qemu-utils curl xfsprogs yum yum-utils git
For CentOS and Fedora < 22, use::
yum install kpartx qemu-utils curl xfsprogs yum yum-utils git
For Fedora >= 22, use::
dnf install kpartx @virtualization curl xfsprogs yum yum-utils git
How to generate Centos image with DC/OS 1.8.x
---------------------------------------------
1. Download and export element path
git clone https://git.openstack.org/openstack/magnum
git clone https://git.openstack.org/openstack/diskimage-builder.git
git clone https://git.openstack.org/openstack/dib-utils.git
git clone https://git.openstack.org/openstack/tripleo-image-elements.git
git clone https://git.openstack.org/openstack/heat-templates.git
export PATH="${PWD}/diskimage-builder/bin:$PATH"
export PATH="${PWD}/dib-utils/bin:$PATH"
export ELEMENTS_PATH=magnum/contrib/drivers/dcos_centos_v1/image
export ELEMENTS_PATH=${ELEMENTS_PATH}:diskimage-builder/elements
export ELEMENTS_PATH=${ELEMENTS_PATH}:tripleo-image-elements/elements:heat-templates/hot/software-config/elements
2. Export the environment variable with the URL to download dcos_generate_config.sh
The default download URL is for DC/OS 1.8.4
export DCOS_GENERATE_CONFIG_SRC=https://downloads.dcos.io/dcos/stable/commit/e64024af95b62c632c90b9063ed06296fcf38ea5/dcos_generate_config.sh
Or specify local file path
export DCOS_GENERATE_CONFIG_SRC=`pwd`/dcos_generate_config.sh
3. Set file system type to `xfs`
Only XFS is currently supported for overlay.
See https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/#recommendations
export FS_TYPE=xfs
4. Create image
disk-image-create \
centos7 vm docker dcos selinux-permissive \
os-collect-config os-refresh-config os-apply-config \
heat-config heat-config-script \
-o centos-7-dcos.qcow2
5. (Optional) Create user image for bare metal node
Create with elements dhcp-all-interfaces and devuser
export DIB_DEV_USER_USERNAME=centos
export DIB_DEV_USER_PWDLESS_SUDO=YES
disk-image-create \
centos7 vm docker dcos selinux-permissive dhcp-all-interfaces devuser \
os-collect-config os-refresh-config os-apply-config \
heat-config heat-config-script \
-o centos-7-dcos-bm.qcow2

@@ -1,2 +0,0 @@
package-installs
docker

@@ -1,5 +0,0 @@
# Specify download url, default DC/OS version 1.8.4
export DCOS_GENERATE_CONFIG_SRC=${DCOS_GENERATE_CONFIG_SRC:-https://downloads.dcos.io/dcos/stable/commit/e64024af95b62c632c90b9063ed06296fcf38ea5/dcos_generate_config.sh}
# or local file path
# export DCOS_GENERATE_CONFIG_SRC=${DCOS_GENERATE_CONFIG_SRC:-${PWD}/dcos_generate_config.sh}

@@ -1,23 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# This script is used to download dcos_generate_config.sh outside the chroot.
# This is essential because dcos_generate_config.sh is more than 700M,
# so we download it into the image in advance.
sudo mkdir -p $TMP_MOUNT_PATH/opt/dcos
if [ -f $DCOS_GENERATE_CONFIG_SRC ]; then
# If $DCOS_GENERATE_CONFIG_SRC is a file path, copy the file
sudo cp $DCOS_GENERATE_CONFIG_SRC $TMP_MOUNT_PATH/opt/dcos
else
# If $DCOS_GENERATE_CONFIG_SRC is a url, download it
# Please make sure curl is installed on your host environment
cd $TMP_MOUNT_PATH/opt/dcos
sudo -E curl -O $DCOS_GENERATE_CONFIG_SRC
fi

@@ -1,6 +0,0 @@
tar:
xz:
unzip:
curl:
ipset:
ntp:

@@ -1,10 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# nogroup will be used on Mesos masters and agents.
sudo groupadd nogroup

@@ -1,9 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
sudo systemctl enable ntpd

@@ -1 +0,0 @@
package-installs

@@ -1,24 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Install the Docker engine, daemon, and service.
#
# The supported versions of Docker are:
# 1.7.x
# 1.8.x
# 1.9.x
# 1.10.x
# 1.11.x
# Docker 1.12.x is NOT supported.
# Docker 1.9.x - 1.11.x is recommended for stability reasons.
# https://github.com/docker/docker/issues/9718
#
# See the DC/OS installation guide for details
# https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/
#
sudo -E yum install -y docker-engine-1.11.2

@@ -1,9 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
sudo systemctl enable docker

@@ -1,26 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Upgrade CentOS to 7.2
sudo -E yum upgrade --assumeyes --tolerant
sudo -E yum update --assumeyes
# Verify that the kernel is at least 3.10
function version_gt() { test "$(echo "$@" | tr " " "\n" | sort -V | head -n 1)" != "$1"; }
kernel_version=`uname -r | cut --bytes=1-4`
expect_version=3.10
if version_gt $expect_version $kernel_version; then
echo "Error: kernel version must be at least $expect_version, current version is $kernel_version"
exit 1
fi
# Enable OverlayFS
sudo tee /etc/modules-load.d/overlay.conf <<-'EOF'
overlay
EOF
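The `version_gt` shell function above leans on `sort -V` to compare dotted version strings. A small Python sketch of the same check (comparing version components numerically, which matches `sort -V` for these inputs):

```python
# True if version a is strictly greater than version b, comparing
# each dotted component as an integer.
def version_gt(a, b):
    return tuple(int(x) for x in a.split('.')) > \
           tuple(int(x) for x in b.split('.'))

# The script fails when the expected version exceeds the running kernel.
assert version_gt('3.10', '3.9')        # 3.10 > 3.9 numerically
assert not version_gt('3.10', '3.10')   # equal versions are not "greater"
assert not version_gt('3.10', '4.4')    # a newer kernel passes the check
```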

@@ -1,33 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Configure yum to use the Docker yum repo
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
# Configure systemd to run the Docker Daemon with OverlayFS
# Manage Docker on CentOS with systemd.
# systemd handles starting Docker on boot and restarting it when it crashes.
#
# Docker 1.11.x will be installed, so issue for Docker 1.12.x on Centos7
# won't happen.
# https://github.com/docker/docker/issues/22847
# https://github.com/docker/docker/issues/25098
#
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<- 'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --storage-driver=overlay -H fd://
EOF

@@ -1,25 +0,0 @@
#!/bin/bash
# This script installs all needed dependencies to generate
# images using diskimage-builder. Please note that it has only
# been tested on Ubuntu Xenial.
set -eux
set -o pipefail
sudo apt update || true
sudo apt install -y \
git \
qemu-utils \
python-dev \
python-yaml \
python-six \
uuid-runtime \
curl \
sudo \
kpartx \
parted \
wget \
xfsprogs \
yum \
yum-utils

@@ -1,35 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2016 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
# check that image is valid
qemu-img check -q $1
# validate estimated size
FILESIZE=$(stat -c%s "$1")
MIN_SIZE=1231028224 # 1.15GB
MAX_SIZE=1335885824 # 1.25GB
if [ $FILESIZE -lt $MIN_SIZE ] ; then
echo "Error: generated image size is lower than expected."
exit 1
fi
if [ $FILESIZE -gt $MAX_SIZE ] ; then
echo "Error: generated image size is higher than expected."
exit 1
fi

@@ -1,74 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_serialization import jsonutils
from magnum.common import urlfetch
from magnum.conductor import monitors
class DcosMonitor(monitors.MonitorBase):
def __init__(self, context, cluster):
super(DcosMonitor, self).__init__(context, cluster)
self.data = {}
@property
def metrics_spec(self):
return {
'memory_util': {
'unit': '%',
'func': 'compute_memory_util',
},
'cpu_util': {
'unit': '%',
'func': 'compute_cpu_util',
},
}
# See https://github.com/dcos/adminrouter#ports-summary
# Use http://<mesos-master>/mesos/ instead of http://<mesos-master>:5050
def _build_url(self, url, protocol='http', server_name='mesos', path='/'):
return protocol + '://' + url + '/' + server_name + path
def _is_leader(self, state):
return state['leader'] == state['pid']
def pull_data(self):
self.data['mem_total'] = 0
self.data['mem_used'] = 0
self.data['cpu_total'] = 0
self.data['cpu_used'] = 0
for master_addr in self.cluster.master_addresses:
mesos_master_url = self._build_url(master_addr,
server_name='mesos',
path='/state')
master = jsonutils.loads(urlfetch.get(mesos_master_url))
if self._is_leader(master):
for slave in master['slaves']:
self.data['mem_total'] += slave['resources']['mem']
self.data['mem_used'] += slave['used_resources']['mem']
self.data['cpu_total'] += slave['resources']['cpus']
self.data['cpu_used'] += slave['used_resources']['cpus']
break
def compute_memory_util(self):
if self.data['mem_total'] == 0 or self.data['mem_used'] == 0:
return 0
else:
return self.data['mem_used'] * 100 / self.data['mem_total']
def compute_cpu_util(self):
if self.data['cpu_total'] == 0 or self.data['cpu_used'] == 0:
return 0
else:
return self.data['cpu_used'] * 100 / self.data['cpu_total']
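The aggregation `pull_data` performs over the leader's `/state` payload can be shown self-contained. A sketch with illustrative sample data (the `state` dict below is made up; the real payload comes from the Mesos master):

```python
# Sample of the fields DcosMonitor reads from the leader's /state payload.
state = {
    'leader': 'master@10.0.0.5:5050',
    'pid': 'master@10.0.0.5:5050',
    'slaves': [
        {'resources': {'mem': 4096.0, 'cpus': 4.0},
         'used_resources': {'mem': 1024.0, 'cpus': 1.0}},
        {'resources': {'mem': 4096.0, 'cpus': 4.0},
         'used_resources': {'mem': 3072.0, 'cpus': 2.0}},
    ],
}

# Only the leading master's view is aggregated.
is_leader = state['leader'] == state['pid']
mem_total = sum(s['resources']['mem'] for s in state['slaves'])
mem_used = sum(s['used_resources']['mem'] for s in state['slaves'])
memory_util = mem_used * 100 / mem_total if mem_total else 0

assert is_leader
assert memory_util == 50.0  # (1024 + 3072) / 8192 * 100
```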

@@ -1,29 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.conductor.scale_manager import ScaleManager
from marathon import MarathonClient
class DcosScaleManager(ScaleManager):
def __init__(self, context, osclient, cluster):
super(DcosScaleManager, self).__init__(context, osclient, cluster)
def _get_hosts_with_container(self, context, cluster):
marathon_client = MarathonClient(
'http://' + cluster.api_address + '/marathon/')
hosts = set()
for task in marathon_client.list_tasks():
hosts.add(task.host)
return hosts
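The host-collection step above can be sketched without a live Marathon endpoint. A minimal stand-in (the `Task` class here imitates the task objects returned by `MarathonClient.list_tasks()`):

```python
# Stub for a Marathon task object; the real one carries more fields.
class Task:
    def __init__(self, host):
        self.host = host

def hosts_with_containers(tasks):
    # Collect the distinct set of hosts still running containers,
    # mirroring the loop in _get_hosts_with_container above.
    return {t.host for t in tasks}

tasks = [Task('10.0.0.4'), Task('10.0.0.6'), Task('10.0.0.4')]
assert hosts_with_containers(tasks) == {'10.0.0.4', '10.0.0.6'}
```

The scale manager uses this set so that nodes still hosting containers are not chosen for removal when scaling down.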

@@ -1,28 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from magnum.drivers.heat import dcos_centos_template_def as dctd
class DcosCentosVMTemplateDefinition(dctd.DcosCentosTemplateDefinition):
"""DC/OS template for Centos VM."""
@property
def driver_module_path(self):
return __name__[:__name__.rindex('.')]
@property
def template_path(self):
return os.path.join(os.path.dirname(os.path.realpath(__file__)),
'templates/dcoscluster.yaml')

@@ -1,679 +0,0 @@
heat_template_version: 2014-10-16
description: >
This template will boot a DC/OS cluster with one or more masters
(as specified by number_of_masters, default is 1) and one or more slaves
(as specified by the number_of_slaves parameter, which
defaults to 1).
parameters:
cluster_name:
type: string
description: human readable name for the DC/OS cluster
default: my-cluster
number_of_masters:
type: number
description: how many DC/OS masters to spawn initially
default: 1
# In DC/OS, there are two types of slave nodes, public and private.
# Public slave nodes have external access and private slave nodes don't.
# Magnum only supports one type of slave node, and we decided not to modify
# the cluster template properties, so slave nodes are created as private agents.
number_of_slaves:
type: number
description: how many DC/OS agents or slaves to spawn initially
default: 1
master_flavor:
type: string
default: m1.medium
description: flavor to use when booting the master servers
slave_flavor:
type: string
default: m1.medium
description: flavor to use when booting the slave servers
server_image:
type: string
default: centos-dcos
description: glance image used to boot the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
ssh_public_key:
type: string
description: The public ssh key to add in all nodes
default: ""
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
default: public
fixed_network:
type: string
description: uuid/name of an existing network to use to provision machines
default: ""
fixed_subnet:
type: string
description: uuid/name of an existing subnet to use to provision machines
default: ""
fixed_subnet_cidr:
type: string
description: network range for fixed ip network
default: 10.0.0.0/24
dns_nameserver:
type: comma_delimited_list
description: address of a dns nameserver reachable in your environment
http_proxy:
type: string
description: http proxy address for docker
default: ""
https_proxy:
type: string
description: https proxy address for docker
default: ""
no_proxy:
type: string
description: no proxies for docker
default: ""
######################################################################
#
# Rexray Configuration
#
trustee_domain_id:
type: string
description: domain id of the trustee
default: ""
trustee_user_id:
type: string
description: user id of the trustee
default: ""
trustee_username:
type: string
description: username of the trustee
default: ""
trustee_password:
type: string
description: password of the trustee
default: ""
hidden: true
trust_id:
type: string
description: id of the trust which is used by the trustee
default: ""
hidden: true
######################################################################
#
# Rexray Configuration
#
volume_driver:
type: string
description: volume driver to use for container storage
default: ""
username:
type: string
description: user name
tenant_name:
type: string
description: >
tenant_name is used to isolate access to cloud resources
domain_name:
type: string
description: >
domain is to define the administrative boundaries for management
of Keystone entities
region_name:
type: string
description: a logically separate section of the cluster
rexray_preempt:
type: string
description: >
enables any host to take control of a volume irrespective of whether
other hosts are using the volume
default: "false"
auth_url:
type: string
description: url for keystone
slaves_to_remove:
type: comma_delimited_list
description: >
List of slaves to be removed when doing an update. An individual slave may
be referenced several ways: (1) the resource name (e.g. ['1', '3']),
(2) the private IP address ['10.0.0.4', '10.0.0.6']. Note: the list should
be empty when doing a create.
default: []
wait_condition_timeout:
type: number
description: >
timeout for the Wait Conditions
default: 6000
password:
type: string
description: >
user password, not set in current implementation, only used to
fill in for DC/OS config file
default:
password
hidden: true
######################################################################
#
# DC/OS parameters
#
# cluster_name
exhibitor_storage_backend:
type: string
default: "static"
exhibitor_zk_hosts:
type: string
default: ""
exhibitor_zk_path:
type: string
default: ""
aws_access_key_id:
type: string
default: ""
aws_region:
type: string
default: ""
aws_secret_access_key:
type: string
default: ""
exhibitor_explicit_keys:
type: string
default: ""
s3_bucket:
type: string
default: ""
s3_prefix:
type: string
default: ""
exhibitor_azure_account_name:
type: string
default: ""
exhibitor_azure_account_key:
type: string
default: ""
exhibitor_azure_prefix:
type: string
default: ""
# master_discovery default set to "static"
# If --master-lb-enabled is specified,
# master_discovery will be set to "master_http_loadbalancer"
master_discovery:
type: string
default: "static"
# master_list
# exhibitor_address
# num_masters
####################################################
# Networking
dcos_overlay_enable:
type: string
default: ""
constraints:
- allowed_values:
- "true"
- "false"
- ""
dcos_overlay_config_attempts:
type: string
default: ""
dcos_overlay_mtu:
type: string
default: ""
dcos_overlay_network:
type: string
default: ""
dns_search:
type: string
description: >
This parameter specifies a space-separated list of domains that
are tried when an unqualified domain is entered
default: ""
# resolvers
# use_proxy
####################################################
# Performance and Tuning
check_time:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
docker_remove_delay:
type: number
default: 1
gc_delay:
type: number
default: 2
log_directory:
type: string
default: "/genconf/logs"
process_timeout:
type: number
default: 120
####################################################
# Security And Authentication
oauth_enabled:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
telemetry_enabled:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
resources:
######################################################################
#
# network resources. allocate a network and router for our server.
#
network:
type: ../../common/templates/network.yaml
properties:
existing_network: {get_param: fixed_network}
existing_subnet: {get_param: fixed_subnet}
private_network_cidr: {get_param: fixed_subnet_cidr}
dns_nameserver: {get_param: dns_nameserver}
external_network: {get_param: external_network}
api_lb:
type: lb.yaml
properties:
fixed_subnet: {get_attr: [network, fixed_subnet]}
external_network: {get_param: external_network}
######################################################################
#
# security groups. we need to permit network traffic of various
# sorts.
#
secgroup:
type: secgroup.yaml
######################################################################
#
# resources that expose the IPs of either the dcos master or a given
# LBaaS pool depending on whether LBaaS is enabled for the cluster.
#
api_address_lb_switch:
type: Magnum::ApiGatewaySwitcher
properties:
pool_public_ip: {get_attr: [api_lb, floating_address]}
pool_private_ip: {get_attr: [api_lb, address]}
master_public_ip: {get_attr: [dcos_masters, resource.0.dcos_master_external_ip]}
master_private_ip: {get_attr: [dcos_masters, resource.0.dcos_master_ip]}
######################################################################
#
# Master SoftwareConfig.
#
write_params_master:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/write-heat-params.sh}
inputs:
- name: HTTP_PROXY
type: String
- name: HTTPS_PROXY
type: String
- name: NO_PROXY
type: String
- name: AUTH_URL
type: String
- name: USERNAME
type: String
- name: PASSWORD
type: String
- name: TENANT_NAME
type: String
- name: VOLUME_DRIVER
type: String
- name: REGION_NAME
type: String
- name: DOMAIN_NAME
type: String
- name: REXRAY_PREEMPT
type: String
- name: CLUSTER_NAME
type: String
- name: EXHIBITOR_STORAGE_BACKEND
type: String
- name: EXHIBITOR_ZK_HOSTS
type: String
- name: EXHIBITOR_ZK_PATH
type: String
- name: AWS_ACCESS_KEY_ID
type: String
- name: AWS_REGION
type: String
- name: AWS_SECRET_ACCESS_KEY
type: String
- name: EXHIBITOR_EXPLICIT_KEYS
type: String
- name: S3_BUCKET
type: String
- name: S3_PREFIX
type: String
- name: EXHIBITOR_AZURE_ACCOUNT_NAME
type: String
- name: EXHIBITOR_AZURE_ACCOUNT_KEY
type: String
- name: EXHIBITOR_AZURE_PREFIX
type: String
- name: MASTER_DISCOVERY
type: String
- name: MASTER_LIST
type: String
- name: EXHIBITOR_ADDRESS
type: String
- name: NUM_MASTERS
type: String
- name: DCOS_OVERLAY_ENABLE
type: String
- name: DCOS_OVERLAY_CONFIG_ATTEMPTS
type: String
- name: DCOS_OVERLAY_MTU
type: String
- name: DCOS_OVERLAY_NETWORK
type: String
- name: DNS_SEARCH
type: String
- name: RESOLVERS
type: String
- name: CHECK_TIME
type: String
- name: DOCKER_REMOVE_DELAY
type: String
- name: GC_DELAY
type: String
- name: LOG_DIRECTORY
type: String
- name: PROCESS_TIMEOUT
type: String
- name: OAUTH_ENABLED
type: String
- name: TELEMETRY_ENABLED
type: String
- name: ROLES
type: String
######################################################################
#
# DC/OS configuration SoftwareConfig.
# Configuration files are rendered and injected into the instance.
#
dcos_config:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/configure-dcos.sh}
######################################################################
#
# Master SoftwareDeployment.
#
write_params_master_deployment:
type: OS::Heat::SoftwareDeploymentGroup
properties:
config: {get_resource: write_params_master}
servers: {get_attr: [dcos_masters, attributes, dcos_server_id]}
input_values:
HTTP_PROXY: {get_param: http_proxy}
HTTPS_PROXY: {get_param: https_proxy}
NO_PROXY: {get_param: no_proxy}
AUTH_URL: {get_param: auth_url}
USERNAME: {get_param: username}
PASSWORD: {get_param: password}
TENANT_NAME: {get_param: tenant_name}
VOLUME_DRIVER: {get_param: volume_driver}
REGION_NAME: {get_param: region_name}
DOMAIN_NAME: {get_param: domain_name}
REXRAY_PREEMPT: {get_param: rexray_preempt}
CLUSTER_NAME: {get_param: cluster_name}
EXHIBITOR_STORAGE_BACKEND: {get_param: exhibitor_storage_backend}
EXHIBITOR_ZK_HOSTS: {get_param: exhibitor_zk_hosts}
EXHIBITOR_ZK_PATH: {get_param: exhibitor_zk_path}
AWS_ACCESS_KEY_ID: {get_param: aws_access_key_id}
AWS_REGION: {get_param: aws_region}
AWS_SECRET_ACCESS_KEY: {get_param: aws_secret_access_key}
EXHIBITOR_EXPLICIT_KEYS: {get_param: exhibitor_explicit_keys}
S3_BUCKET: {get_param: s3_bucket}
S3_PREFIX: {get_param: s3_prefix}
EXHIBITOR_AZURE_ACCOUNT_NAME: {get_param: exhibitor_azure_account_name}
EXHIBITOR_AZURE_ACCOUNT_KEY: {get_param: exhibitor_azure_account_key}
EXHIBITOR_AZURE_PREFIX: {get_param: exhibitor_azure_prefix}
MASTER_DISCOVERY: {get_param: master_discovery}
MASTER_LIST: {list_join: [' ', {get_attr: [dcos_masters, dcos_master_ip]}]}
EXHIBITOR_ADDRESS: {get_attr: [api_lb, address]}
NUM_MASTERS: {get_param: number_of_masters}
DCOS_OVERLAY_ENABLE: {get_param: dcos_overlay_enable}
DCOS_OVERLAY_CONFIG_ATTEMPTS: {get_param: dcos_overlay_config_attempts}
DCOS_OVERLAY_MTU: {get_param: dcos_overlay_mtu}
DCOS_OVERLAY_NETWORK: {get_param: dcos_overlay_network}
DNS_SEARCH: {get_param: dns_search}
RESOLVERS: {get_param: dns_nameserver}
CHECK_TIME: {get_param: check_time}
DOCKER_REMOVE_DELAY: {get_param: docker_remove_delay}
GC_DELAY: {get_param: gc_delay}
LOG_DIRECTORY: {get_param: log_directory}
PROCESS_TIMEOUT: {get_param: process_timeout}
OAUTH_ENABLED: {get_param: oauth_enabled}
TELEMETRY_ENABLED: {get_param: telemetry_enabled}
ROLES: master
dcos_config_deployment:
type: OS::Heat::SoftwareDeploymentGroup
depends_on:
- write_params_master_deployment
properties:
config: {get_resource: dcos_config}
servers: {get_attr: [dcos_masters, attributes, dcos_server_id]}
######################################################################
#
# DC/OS masters. This is a resource group that will create
# <number_of_masters> masters.
#
dcos_masters:
type: OS::Heat::ResourceGroup
depends_on:
- network
properties:
count: {get_param: number_of_masters}
resource_def:
type: dcosmaster.yaml
properties:
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: server_image}
master_flavor: {get_param: master_flavor}
external_network: {get_param: external_network}
fixed_network: {get_attr: [network, fixed_network]}
fixed_subnet: {get_attr: [network, fixed_subnet]}
secgroup_base_id: {get_attr: [secgroup, secgroup_base_id]}
secgroup_dcos_id: {get_attr: [secgroup, secgroup_dcos_id]}
api_pool_80_id: {get_attr: [api_lb, pool_80_id]}
api_pool_443_id: {get_attr: [api_lb, pool_443_id]}
api_pool_8080_id: {get_attr: [api_lb, pool_8080_id]}
api_pool_5050_id: {get_attr: [api_lb, pool_5050_id]}
api_pool_2181_id: {get_attr: [api_lb, pool_2181_id]}
api_pool_8181_id: {get_attr: [api_lb, pool_8181_id]}
######################################################################
#
# DC/OS slaves. This is a resource group that will initially
# create <number_of_slaves> public or private slaves,
# and needs to be manually scaled.
#
dcos_slaves:
type: OS::Heat::ResourceGroup
depends_on:
- network
properties:
count: {get_param: number_of_slaves}
removal_policies: [{resource_list: {get_param: slaves_to_remove}}]
resource_def:
type: dcosslave.yaml
properties:
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: server_image}
slave_flavor: {get_param: slave_flavor}
fixed_network: {get_attr: [network, fixed_network]}
fixed_subnet: {get_attr: [network, fixed_subnet]}
external_network: {get_param: external_network}
wait_condition_timeout: {get_param: wait_condition_timeout}
secgroup_base_id: {get_attr: [secgroup, secgroup_base_id]}
# DC/OS params
auth_url: {get_param: auth_url}
username: {get_param: username}
password: {get_param: password}
tenant_name: {get_param: tenant_name}
volume_driver: {get_param: volume_driver}
region_name: {get_param: region_name}
domain_name: {get_param: domain_name}
rexray_preempt: {get_param: rexray_preempt}
http_proxy: {get_param: http_proxy}
https_proxy: {get_param: https_proxy}
no_proxy: {get_param: no_proxy}
cluster_name: {get_param: cluster_name}
exhibitor_storage_backend: {get_param: exhibitor_storage_backend}
exhibitor_zk_hosts: {get_param: exhibitor_zk_hosts}
exhibitor_zk_path: {get_param: exhibitor_zk_path}
aws_access_key_id: {get_param: aws_access_key_id}
aws_region: {get_param: aws_region}
aws_secret_access_key: {get_param: aws_secret_access_key}
exhibitor_explicit_keys: {get_param: exhibitor_explicit_keys}
s3_bucket: {get_param: s3_bucket}
s3_prefix: {get_param: s3_prefix}
exhibitor_azure_account_name: {get_param: exhibitor_azure_account_name}
exhibitor_azure_account_key: {get_param: exhibitor_azure_account_key}
exhibitor_azure_prefix: {get_param: exhibitor_azure_prefix}
master_discovery: {get_param: master_discovery}
master_list: {list_join: [' ', {get_attr: [dcos_masters, dcos_master_ip]}]}
exhibitor_address: {get_attr: [api_lb, address]}
num_masters: {get_param: number_of_masters}
dcos_overlay_enable: {get_param: dcos_overlay_enable}
dcos_overlay_config_attempts: {get_param: dcos_overlay_config_attempts}
dcos_overlay_mtu: {get_param: dcos_overlay_mtu}
dcos_overlay_network: {get_param: dcos_overlay_network}
dns_search: {get_param: dns_search}
resolvers: {get_param: dns_nameserver}
check_time: {get_param: check_time}
docker_remove_delay: {get_param: docker_remove_delay}
gc_delay: {get_param: gc_delay}
log_directory: {get_param: log_directory}
process_timeout: {get_param: process_timeout}
oauth_enabled: {get_param: oauth_enabled}
telemetry_enabled: {get_param: telemetry_enabled}
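The `removal_policies` entry above controls which group members are deleted first when `number_of_slaves` shrinks: any member named in `slaves_to_remove` is removed before Heat falls back to trimming from the end of the group. A rough model of that behaviour (member names and counts are illustrative):

```python
# Rough model of ResourceGroup removal_policies: members named in
# "slaves_to_remove" are deleted first when the count is reduced.
current_members = ["0", "1", "2", "3"]  # existing slave indices
slaves_to_remove = ["1", "3"]           # the removal_policies resource_list
new_count = 2                           # new number_of_slaves

# Drop the explicitly listed members first...
survivors = [m for m in current_members if m not in slaves_to_remove]
# ...then, if still over the new count, trim from the end.
survivors = survivors[:new_count]
print(survivors)
```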
outputs:
api_address:
value: {get_attr: [api_address_lb_switch, public_ip]}
description: >
This is the API endpoint of the DC/OS master. Use this to access
the DC/OS API from outside the cluster.
dcos_master_private:
value: {get_attr: [dcos_masters, dcos_master_ip]}
description: >
This is a list of the "private" addresses of all the DC/OS masters.
dcos_master:
value: {get_attr: [dcos_masters, dcos_master_external_ip]}
description: >
This is the "public" IP address of the DC/OS master server. Use this
address to log in to the DC/OS master via ssh or to access the DC/OS
API from outside the cluster.
dcos_slaves_private:
value: {get_attr: [dcos_slaves, dcos_slave_ip]}
description: >
This is a list of the "private" addresses of all the DC/OS slaves.
dcos_slaves:
value: {get_attr: [dcos_slaves, dcos_slave_external_ip]}
description: >
This is a list of the "public" addresses of all the DC/OS slaves.
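Once the stack reaches CREATE_COMPLETE, the outputs declared above are surfaced by the Heat API as a list of key/value entries. A small sketch of consuming them (the values shown are made up):

```python
# Sketch: stack outputs as returned by the Heat API, a list of dicts
# keyed by output_key / output_value (sample values only).
stack_outputs = [
    {"output_key": "api_address", "output_value": "203.0.113.10"},
    {"output_key": "dcos_master_private", "output_value": ["10.0.0.5"]},
]

# Index them by key for convenient lookup.
outputs = {o["output_key"]: o["output_value"] for o in stack_outputs}
print(outputs["api_address"])
```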


@@ -1,161 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines a single DC/OS master. This stack is
included by a ResourceGroup resource in the parent template
(dcoscluster.yaml).
parameters:
server_image:
type: string
description: glance image used to boot the server
master_flavor:
type: string
description: flavor to use when booting the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
fixed_network:
type: string
description: Network from which to allocate fixed addresses.
fixed_subnet:
type: string
description: Subnet from which to allocate fixed addresses.
secgroup_base_id:
type: string
description: ID of the security group for base.
secgroup_dcos_id:
type: string
description: ID of the security group for DC/OS master.
api_pool_80_id:
type: string
description: ID of the load balancer pool of Http.
api_pool_443_id:
type: string
description: ID of the load balancer pool of Https.
api_pool_8080_id:
type: string
description: ID of the load balancer pool of Marathon.
api_pool_5050_id:
type: string
description: ID of the load balancer pool of Mesos master.
api_pool_2181_id:
type: string
description: ID of the load balancer pool of Zookeeper.
api_pool_8181_id:
type: string
description: ID of the load balancer pool of Exhibitor.
resources:
######################################################################
#
# DC/OS master server.
#
dcos_master:
type: OS::Nova::Server
properties:
image: {get_param: server_image}
flavor: {get_param: master_flavor}
key_name: {get_param: ssh_key_name}
user_data_format: SOFTWARE_CONFIG
networks:
- port: {get_resource: dcos_master_eth0}
dcos_master_eth0:
type: OS::Neutron::Port
properties:
network: {get_param: fixed_network}
security_groups:
- {get_param: secgroup_base_id}
- {get_param: secgroup_dcos_id}
fixed_ips:
- subnet: {get_param: fixed_subnet}
replacement_policy: AUTO
dcos_master_floating:
type: Magnum::Optional::DcosMaster::Neutron::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_resource: dcos_master_eth0}
api_pool_80_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_80_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 80
api_pool_443_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_443_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 443
api_pool_8080_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_8080_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 8080
api_pool_5050_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_5050_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 5050
api_pool_2181_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_2181_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 2181
api_pool_8181_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_8181_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 8181
outputs:
dcos_master_ip:
value: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
description: >
This is the "private" address of the DC/OS master node.
dcos_master_external_ip:
value: {get_attr: [dcos_master_floating, floating_ip_address]}
description: >
This is the "public" address of the DC/OS master node.
dcos_server_id:
value: {get_resource: dcos_master}
description: >
This is the logical id of the DC/OS master node.
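Several of the pool members and outputs above use the path form of `get_attr`, e.g. `{get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}`, which walks step by step into the port's nested attribute data. In plain Python terms (the port data below is a made-up example of the shape Neutron returns):

```python
# Sketch of the path-style lookup
#   get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]
# which indexes into nested attribute data one step at a time.
port_attrs = {
    "fixed_ips": [
        {"subnet_id": "example-subnet-id", "ip_address": "10.0.0.5"},
    ],
}

def attr_path(data, *path):
    """Follow a mixed key/index path into nested dicts and lists."""
    for step in path:
        data = data[step]
    return data

ip = attr_path(port_attrs, "fixed_ips", 0, "ip_address")
print(ip)
```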


@@ -1,338 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines a single DC/OS slave. This stack is
included by a ResourceGroup resource in the parent template
(dcoscluster.yaml).
parameters:
server_image:
type: string
description: glance image used to boot the server
slave_flavor:
type: string
description: flavor to use when booting the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
wait_condition_timeout:
type: number
description: >
timeout for the Wait Conditions
http_proxy:
type: string
description: http proxy address for docker
https_proxy:
type: string
description: https proxy address for docker
no_proxy:
type: string
description: no proxies for docker
auth_url:
type: string
description: >
url for DC/OS to authenticate before sending request
username:
type: string
description: user name
password:
type: string
description: >
user password; not set in the current implementation, only used to
fill in the Kubernetes config file
hidden: true
tenant_name:
type: string
description: >
tenant_name is used to isolate access to Compute resources
volume_driver:
type: string
description: volume driver to use for container storage
region_name:
type: string
description: A logically separate section of the cluster
domain_name:
type: string
description: >
domain is to define the administrative boundaries for management
of Keystone entities
fixed_network:
type: string
description: Network from which to allocate fixed addresses.
fixed_subnet:
type: string
description: Subnet from which to allocate fixed addresses.
secgroup_base_id:
type: string
description: ID of the security group for base.