Drop Swarm support
Label validator function has been left behind, although it's not checking for anything right now - might be useful in future. Change-Id: I74c744dc957d73aef7556aff00837611dadbada7
This commit is contained in:
parent
ab88ef3a5c
commit
bc79012f46
@ -96,9 +96,8 @@ coe:
|
||||
required: true
|
||||
description: |
|
||||
Specify the Container Orchestration Engine to use. Supported COEs
|
||||
include ``kubernetes``, ``swarm``. If your environment has
|
||||
additional cluster drivers installed, refer to the cluster driver
|
||||
documentation for the new COE names.
|
||||
include ``kubernetes``. If your environment has additional cluster drivers
|
||||
installed, refer to the cluster driver documentation for the new COE names.
|
||||
coe_version:
|
||||
type: string
|
||||
in: body
|
||||
|
@ -52,16 +52,3 @@ any coe type. All of proxy parameters are optional.
|
||||
--https-proxy <https://abc-proxy.com:8080> \
|
||||
--no-proxy <172.24.4.4,172.24.4.9,172.24.4.8>
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack coe cluster template create swarm-cluster-template \
|
||||
--image fedora-atomic-latest \
|
||||
--keypair testkey \
|
||||
--external-network public \
|
||||
--dns-nameserver 8.8.8.8 \
|
||||
--flavor m1.small \
|
||||
--coe swarm \
|
||||
--http-proxy <http://abc-proxy.com:8080> \
|
||||
--https-proxy <https://abc-proxy.com:8080> \
|
||||
--no-proxy <172.24.4.4,172.24.4.9,172.24.4.8>
|
||||
|
||||
|
@ -196,7 +196,7 @@ Barbican service
|
||||
|
||||
Cluster internet access
|
||||
-----------------------
|
||||
The nodes for Kubernetes and Swarm are connected to a private
|
||||
The nodes for Kubernetes are connected to a private
|
||||
Neutron network, so to provide access to the external internet, a router
|
||||
connects the private network to a public network. With devstack, the
|
||||
default public network is "public", but this can be replaced by the
|
||||
@ -523,8 +523,7 @@ Running Flannel
|
||||
---------------
|
||||
|
||||
When deploying a COE, Flannel is available as a network driver for
|
||||
certain COE type. Magnum currently supports Flannel for a Kubernetes
|
||||
or Swarm cluster.
|
||||
certain COE type. Magnum currently supports Flannel for a Kubernetes cluster.
|
||||
|
||||
Flannel provides a flat network space for the containers in the cluster:
|
||||
they are allocated IP in this network space and they will have connectivity
|
||||
@ -757,13 +756,13 @@ Simulating gate tests
|
||||
export KEEP_LOCALRC=1
|
||||
function gate_hook {
|
||||
cd /opt/stack/new/magnum/
|
||||
./magnum/tests/contrib/gate_hook.sh api # change this to swarm to run swarm functional tests or k8s to run kubernetes functional tests
|
||||
./magnum/tests/contrib/gate_hook.sh api # change this to k8s to run kubernetes functional tests
|
||||
}
|
||||
export -f gate_hook
|
||||
function post_test_hook {
|
||||
. $BASE/new/devstack/accrc/admin/admin
|
||||
cd /opt/stack/new/magnum/
|
||||
./magnum/tests/contrib/post_test_hook.sh api # change this to swarm to run swarm functional tests or k8s to run kubernetes functional tests
|
||||
./magnum/tests/contrib/post_test_hook.sh api # change this to k8s to run kubernetes functional tests
|
||||
}
|
||||
export -f post_test_hook
|
||||
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
|
||||
|
@ -129,4 +129,4 @@ To run a specific test or group of tests, specify the test path as a positional
|
||||
To avoid creating multiple clusters simultaneously, you can execute the tests
|
||||
with concurrency 1::
|
||||
|
||||
tox -e functional-swarm -- --concurrency 1
|
||||
tox -e functional-k8s -- --concurrency 1
|
||||
|
@ -48,7 +48,7 @@ Features
|
||||
========
|
||||
|
||||
* Abstractions for Clusters
|
||||
* Integration with Kubernetes, Swarm for backend container technology
|
||||
* Integration with Kubernetes for backend container technology
|
||||
* Integration with Keystone for multi-tenant security
|
||||
* Integration with Neutron for Kubernetes multi-tenancy network security
|
||||
* Integration with Cinder to provide volume service for containers
|
||||
|
@ -16,5 +16,4 @@ following components:
|
||||
|
||||
``magnum-conductor`` service
|
||||
Runs on a controller machine and connects to heat to orchestrate a
|
||||
cluster. Additionally, it connects to a Docker Swarm or Kubernetes
|
||||
API endpoint.
|
||||
cluster. Additionally, it connects to a Kubernetes API endpoint.
|
||||
|
@ -14,7 +14,7 @@ Magnum Installation Guide
|
||||
|
||||
The Container Infrastructure Management service codenamed (magnum) is an
|
||||
OpenStack API service developed by the OpenStack Containers Team making
|
||||
container orchestration engines (COE) such as Docker Swarm and Kubernetes
|
||||
container orchestration engines (COE) such as Kubernetes
|
||||
available as first class resources in OpenStack. Magnum uses
|
||||
Heat to orchestrate an OS image which contains Docker and Kubernetes and
|
||||
runs that image in either virtual machines or bare metal in a cluster
|
||||
|
@ -12,11 +12,11 @@ Compute service, Networking service, Block Storage service and Orchestration
|
||||
service. See `OpenStack Install Guides <https://docs.openstack.org/
|
||||
#install-guides>`__.
|
||||
|
||||
To provide access to Docker Swarm or Kubernetes using the native clients
|
||||
(docker or kubectl, respectively) magnum uses TLS certificates. To store the
|
||||
certificates, it is recommended to use the `Key Manager service, code-named
|
||||
barbican <https://docs.openstack.org/project-install-guide/key-manager/
|
||||
draft/>`__, or you can save them in magnum's database.
|
||||
To provide access to Kubernetes using the native client (kubectl) magnum uses
|
||||
TLS certificates. To store the certificates, it is recommended to use the
|
||||
`Key Manager service, code-named barbican
|
||||
<https://docs.openstack.org/project-install-guide/key-manager/draft/>`__,
|
||||
or you can save them in magnum's database.
|
||||
|
||||
Optionally, you can install the following components:
|
||||
|
||||
|
@ -26,7 +26,7 @@ Magnum Terminology
|
||||
A container orchestration engine manages the lifecycle of one or more
|
||||
containers, logically represented in Magnum as a cluster. Magnum supports a
|
||||
number of container orchestration engines, each with their own pros and cons,
|
||||
including Docker Swarm and Kubernetes.
|
||||
including Kubernetes.
|
||||
|
||||
Labels
|
||||
Labels is a general method to specify supplemental parameters that are
|
||||
|
@ -39,7 +39,7 @@ Overview
|
||||
========
|
||||
|
||||
Magnum is an OpenStack API service developed by the OpenStack Containers Team
|
||||
making container orchestration engines (COE) such as Docker Swarm and
|
||||
making container orchestration engines (COE) such as
|
||||
Kubernetes available as first class resources in OpenStack.
|
||||
|
||||
Magnum uses Heat to orchestrate an OS image which contains Docker and COE
|
||||
@ -55,7 +55,7 @@ Following are few salient features of Magnum:
|
||||
|
||||
- Standard API based complete life-cycle management for Container Clusters
|
||||
- Multi-tenancy for container clusters
|
||||
- Choice of COE: Kubernetes, Swarm
|
||||
- Choice of COE: Kubernetes
|
||||
- Choice of container cluster deployment model: VM or Bare-metal
|
||||
- Keystone-based multi-tenant security and auth management
|
||||
- Neutron based multi-tenant network control and isolation
|
||||
@ -152,7 +152,6 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
|
||||
COE Network-Driver Default
|
||||
=========== ================= ========
|
||||
Kubernetes flannel, calico flannel
|
||||
Swarm docker, flannel flannel
|
||||
=========== ================= ========
|
||||
|
||||
Note that the network driver name is case sensitive.
|
||||
@ -166,7 +165,6 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
|
||||
COE Volume-Driver Default
|
||||
============= ============= ===========
|
||||
Kubernetes cinder No Driver
|
||||
Swarm rexray No Driver
|
||||
============= ============= ===========
|
||||
|
||||
Note that the volume driver name is case sensitive.
|
||||
@ -501,7 +499,7 @@ along with the health monitor and floating IP to be created. It is
|
||||
important to distinguish resources in the IaaS level from resources in
|
||||
the PaaS level. For instance, the infrastructure networking in
|
||||
OpenStack IaaS is different and separate from the container networking
|
||||
in Kubernetes or Swarm PaaS.
|
||||
in Kubernetes PaaS.
|
||||
|
||||
Typical infrastructure includes the following.
|
||||
|
||||
@ -844,8 +842,6 @@ COE and distro pairs:
|
||||
+============+===============+
|
||||
| Kubernetes | Fedora CoreOS |
|
||||
+------------+---------------+
|
||||
| Swarm | Fedora Atomic |
|
||||
+------------+---------------+
|
||||
|
||||
Magnum is designed to accommodate new cluster drivers to support custom
|
||||
COE's and this section describes how a new cluster driver can be
|
||||
@ -891,7 +887,7 @@ version.py
|
||||
Tracks the latest version of the driver in this directory.
|
||||
This is defined by a ``version`` attribute and is represented in the
|
||||
form of ``1.0.0``. It should also include a ``Driver`` attribute with
|
||||
descriptive name such as ``fedora_swarm_atomic``.
|
||||
descriptive name such as ``k8s_fedora_coreos``.
|
||||
|
||||
|
||||
The remaining components are optional:
|
||||
@ -936,24 +932,10 @@ Heat Stack Templates
|
||||
Choosing a COE
|
||||
==============
|
||||
|
||||
Magnum supports a variety of COE options, and allows more to be added over time
|
||||
as they gain popularity. As an operator, you may choose to support the full
|
||||
variety of options, or you may want to offer a subset of the available choices.
|
||||
Given multiple choices, your users can run one or more clusters, and each may
|
||||
use a different COE. For example, I might have multiple clusters that use
|
||||
Kubernetes, and just one cluster that uses Swarm. All of these clusters can
|
||||
run concurrently, even though they use different COE software.
|
||||
|
||||
Choosing which COE to use depends on what tools you want to use to manage your
|
||||
containers once you start your app. If you want to use the Docker tools, you
|
||||
may want to use the Swarm cluster type. Swarm will spread your containers
|
||||
across the various nodes in your cluster automatically. It does not monitor
|
||||
the health of your containers, so it can't restart them for you if they stop.
|
||||
It will not automatically scale your app for you (as of Swarm version 1.2.2).
|
||||
You may view this as a plus. If you prefer to manage your application yourself,
|
||||
you might prefer swarm over the other COE options.
|
||||
containers once you start your app.
|
||||
|
||||
Kubernetes (as of v1.2) is more sophisticated than Swarm (as of v1.2.2). It
|
||||
Kubernetes
|
||||
offers an attractive YAML file description of a pod, which is a grouping of
|
||||
containers that run together as part of a distributed application. This file
|
||||
format allows you to model your application deployment using a declarative
|
||||
@ -976,8 +958,8 @@ native client for the particular cluster type to interface with the
|
||||
clusters. In the typical case, there are two clients to consider:
|
||||
|
||||
COE level
|
||||
This is the orchestration or management level such as Kubernetes,
|
||||
Swarm and its frameworks.
|
||||
This is the orchestration or management level such as Kubernetes
|
||||
its frameworks.
|
||||
|
||||
Container level
|
||||
This is the low level container operation. Currently it is
|
||||
@ -1005,11 +987,6 @@ Kubernetes Dashboard running; it can be accessed using::
|
||||
|
||||
The browser can be accessed at http://localhost:8001/ui
|
||||
|
||||
For Swarm, the main CLI is 'docker', along with associated tools
|
||||
such as 'docker-compose', etc. Specific version of the binaries can
|
||||
be obtained from the `Docker Engine installation
|
||||
<https://docs.docker.com/engine/installation/binaries/>`_.
|
||||
|
||||
Depending on the client requirement, you may need to use a version of
|
||||
the client that matches the version in the cluster. To determine the
|
||||
version of the COE and container, use the command 'cluster-show' and
|
||||
@ -1833,10 +1810,8 @@ Current TLS support is summarized below:
|
||||
+============+=============+
|
||||
| Kubernetes | yes |
|
||||
+------------+-------------+
|
||||
| Swarm | yes |
|
||||
+------------+-------------+
|
||||
|
||||
For cluster type with TLS support, e.g. Kubernetes and Swarm, TLS is
|
||||
For cluster type with TLS support, e.g. Kubernetes, TLS is
|
||||
enabled by default. To disable TLS in Magnum, you can specify the
|
||||
parameter '--tls-disabled' in the ClusterTemplate. Please note it is not
|
||||
recommended to disable TLS due to security reasons.
|
||||
@ -1971,7 +1946,7 @@ Automated
|
||||
Magnum provides the command 'cluster-config' to help the user in setting
|
||||
up the environment and artifacts for TLS, for example::
|
||||
|
||||
openstack coe cluster config swarm-cluster --dir myclusterconfig
|
||||
openstack coe cluster config kubernetes-cluster --dir myclusterconfig
|
||||
|
||||
This will display the necessary environment variables, which you
|
||||
can add to your environment::
|
||||
@ -2084,8 +2059,8 @@ Rotate Certificate
|
||||
User Examples
|
||||
-------------
|
||||
|
||||
Here are some examples for using the CLI on a secure Kubernetes and
|
||||
Swarm cluster. You can perform all the TLS set up automatically by::
|
||||
Here are some examples for using the CLI on a secure Kubernetes cluster.
|
||||
You can perform all the TLS set up automatically by::
|
||||
|
||||
eval $(openstack coe cluster config <cluster-name>)
|
||||
|
||||
@ -2285,18 +2260,17 @@ network-driver
|
||||
The network driver name for instantiating container networks.
|
||||
Currently, the following network drivers are supported:
|
||||
|
||||
+--------+-------------+-------------+
|
||||
| Driver | Kubernetes | Swarm |
|
||||
+========+=============+=============+
|
||||
| Flannel| supported | supported |
|
||||
+--------+-------------+-------------+
|
||||
| Docker | unsupported | supported |
|
||||
+--------+-------------+-------------+
|
||||
| Calico | supported | unsupported |
|
||||
+--------+-------------+-------------+
|
||||
+--------+-------------+
|
||||
| Driver | Kubernetes |
|
||||
+========+=============+
|
||||
| Flannel| supported |
|
||||
+--------+-------------+
|
||||
| Docker | unsupported |
|
||||
+--------+-------------+
|
||||
| Calico | supported |
|
||||
+--------+-------------+
|
||||
|
||||
If not specified, the default driver is Flannel for Kubernetes, and
|
||||
Docker for Swarm.
|
||||
If not specified, the default driver is Flannel for Kubernetes.
|
||||
|
||||
Particular network driver may require its own set of parameters for
|
||||
configuration, and these parameters are specified through the labels
|
||||
@ -2515,11 +2489,6 @@ Kubernetes
|
||||
ensure that Kubernetes will not launch new pods on these nodes after
|
||||
Magnum has scanned the pods.
|
||||
|
||||
Swarm
|
||||
No node selection heuristic is currently supported. If you decrease
|
||||
the node_count, a node will be chosen by magnum without
|
||||
consideration of what containers are running on the selected node.
|
||||
|
||||
|
||||
Currently, scaling containers and scaling cluster nodes are handled
|
||||
separately, but in many use cases, there are interactions between the
|
||||
@ -2603,14 +2572,6 @@ so that it can be accessed later. To persist the data, a Cinder
|
||||
volume with a filesystem on it can be mounted on a host and be made
|
||||
available to the container, then be unmounted when the container exits.
|
||||
|
||||
Docker provides the 'volume' feature for this purpose: the user
|
||||
invokes the 'volume create' command, specifying a particular volume
|
||||
driver to perform the actual work. Then this volume can be mounted
|
||||
when a container is created. A number of third-party volume drivers
|
||||
support OpenStack Cinder as the backend, for example Rexray and
|
||||
Flocker. Magnum currently supports Rexray as the volume driver for
|
||||
Swarm. Other drivers are being considered.
|
||||
|
||||
Kubernetes allows a previously created Cinder block to be mounted to
|
||||
a pod and this is done by specifying the block ID in the pod YAML file.
|
||||
When the pod is scheduled on a node, Kubernetes will interface with
|
||||
@ -2625,13 +2586,11 @@ Magnum supports these features to use Cinder as persistent storage
|
||||
using the ClusterTemplate attribute 'volume-driver' and the support matrix
|
||||
for the COE types is summarized as follows:
|
||||
|
||||
+--------+-------------+-------------+
|
||||
| Driver | Kubernetes | Swarm |
|
||||
+========+=============+=============+
|
||||
| cinder | supported | unsupported |
|
||||
+--------+-------------+-------------+
|
||||
| rexray | unsupported | supported |
|
||||
+--------+-------------+-------------+
|
||||
+--------+-------------+
|
||||
| Driver | Kubernetes |
|
||||
+========+=============+
|
||||
| cinder | supported |
|
||||
+--------+-------------+
|
||||
|
||||
Following are some examples for using Cinder as persistent storage.
|
||||
|
||||
@ -2721,11 +2680,6 @@ and on an OpenStack client you can run the command 'cinder list' to verify
|
||||
that the cinder volume status is 'in-use'.
|
||||
|
||||
|
||||
Using Cinder in Swarm
|
||||
+++++++++++++++++++++
|
||||
*To be filled in*
|
||||
|
||||
|
||||
Image Management
|
||||
================
|
||||
|
||||
@ -2733,7 +2687,7 @@ When a COE is deployed, an image from Glance is used to boot the nodes
|
||||
in the cluster and then the software will be configured and started on
|
||||
the nodes to bring up the full cluster. An image is based on a
|
||||
particular distro such as Fedora, Ubuntu, etc, and is prebuilt with
|
||||
the software specific to the COE such as Kubernetes and Swarm.
|
||||
the software specific to the COE such as Kubernetes.
|
||||
The image is tightly coupled with the following in Magnum:
|
||||
|
||||
1. Heat templates to orchestrate the configuration.
|
||||
|
@ -27,7 +27,6 @@ SUPPORTED_ISOLATION = ['filesystem/posix', 'filesystem/linux',
|
||||
'cgroups/mem', 'docker/runtime',
|
||||
'namespaces/pid']
|
||||
SUPPORTED_IMAGE_PROVIDERS = ['docker', 'appc']
|
||||
SUPPORTED_SWARM_STRATEGY = ['spread', 'binpack', 'random']
|
||||
|
||||
|
||||
def validate_image(cli, image):
|
||||
@ -148,21 +147,6 @@ def validate_labels(labels):
|
||||
validate_method(labels)
|
||||
|
||||
|
||||
def validate_labels_strategy(labels):
|
||||
"""Validate swarm_strategy"""
|
||||
swarm_strategy = list(labels.get('swarm_strategy', "").split())
|
||||
unsupported_strategy = set(swarm_strategy) - set(
|
||||
SUPPORTED_SWARM_STRATEGY)
|
||||
if (len(unsupported_strategy) > 0):
|
||||
raise exception.InvalidParameterValue(_(
|
||||
'property "labels/swarm_strategy" with value '
|
||||
'"%(strategy)s" is not supported, supported values are: '
|
||||
'%(supported_strategies)s') % {
|
||||
'strategy': ' '.join(list(unsupported_strategy)),
|
||||
'supported_strategies': ', '.join(
|
||||
SUPPORTED_SWARM_STRATEGY + ['unspecified'])})
|
||||
|
||||
|
||||
def validate_os_resources(context, cluster_template, cluster=None):
|
||||
"""Validate ClusterTemplate's OpenStack Resources"""
|
||||
|
||||
@ -227,4 +211,4 @@ validators = {'image_id': validate_image,
|
||||
'fixed_subnet': validate_fixed_subnet,
|
||||
'labels': validate_labels}
|
||||
|
||||
labels_validators = {'swarm_strategy': validate_labels_strategy}
|
||||
labels_validators = {}
|
||||
|
@ -133,7 +133,7 @@ class Cluster(base.APIBase):
|
||||
|
||||
coe_version = wsme.wsattr(wtypes.text, readonly=True)
|
||||
"""Version of the COE software currently running in this cluster.
|
||||
Example: swarm version or kubernetes version."""
|
||||
Example: kubernetes version."""
|
||||
|
||||
container_version = wsme.wsattr(wtypes.text, readonly=True)
|
||||
"""Version of the container software. Example: docker version."""
|
||||
|
@ -281,11 +281,6 @@ class ClusterTemplatesController(base.Controller):
|
||||
"The fedora ironic driver is deprecated. "
|
||||
"The driver will be removed in a future Magnum version.")
|
||||
|
||||
_docker_swarm_deprecation_note = (
|
||||
"The swarm coe is deprecated as the fedora_atomic distro is EOL. "
|
||||
"Please migrate to using the kubernetes coe. "
|
||||
"The swarm coe will be removed in a future Magnum version.")
|
||||
|
||||
def _generate_name_for_cluster_template(self, context):
|
||||
"""Generate a random name like: zeta-22-model."""
|
||||
|
||||
@ -459,12 +454,6 @@ class ClusterTemplatesController(base.Controller):
|
||||
DeprecationWarning)
|
||||
LOG.warning(self._fedora_ironic_deprecation_note)
|
||||
|
||||
if (cluster_template_dict['coe'] == 'swarm' or
|
||||
cluster_template_dict['coe'] == 'swarm-mode'):
|
||||
warnings.warn(self._docker_swarm_deprecation_note,
|
||||
DeprecationWarning)
|
||||
LOG.warning(self._docker_swarm_deprecation_note)
|
||||
|
||||
# NOTE(yuywz): We will generate a random human-readable name for
|
||||
# cluster_template if the name is not specified by user.
|
||||
arg_name = cluster_template_dict.get('name')
|
||||
|
@ -254,8 +254,6 @@ class Validator(object):
|
||||
def get_coe_validator(cls, coe):
|
||||
if coe == 'kubernetes':
|
||||
return K8sValidator()
|
||||
elif coe == 'swarm' or coe == 'swarm-mode':
|
||||
return SwarmValidator()
|
||||
else:
|
||||
raise exception.InvalidParameterValue(
|
||||
_('Requested COE type %s is not supported.') % coe)
|
||||
@ -329,15 +327,3 @@ class K8sValidator(Validator):
|
||||
CONF.cluster_template.kubernetes_default_network_driver)
|
||||
|
||||
supported_volume_driver = ['cinder']
|
||||
|
||||
|
||||
class SwarmValidator(Validator):
|
||||
|
||||
supported_network_drivers = ['docker', 'flannel']
|
||||
supported_server_types = ['vm', 'bm']
|
||||
allowed_network_drivers = (CONF.cluster_template.
|
||||
swarm_allowed_network_drivers)
|
||||
default_network_driver = (CONF.cluster_template.
|
||||
swarm_default_network_driver)
|
||||
|
||||
supported_volume_driver = ['rexray']
|
||||
|
@ -333,10 +333,6 @@ class UnsupportedK8sQuantityFormat(Invalid):
|
||||
message = _("Unsupported quantity format for k8s cluster.")
|
||||
|
||||
|
||||
class UnsupportedDockerQuantityFormat(Invalid):
|
||||
message = _("Unsupported quantity format for Swarm cluster.")
|
||||
|
||||
|
||||
class FlavorNotFound(ResourceNotFound):
|
||||
"""The code here changed to 400 according to the latest document."""
|
||||
message = _("Unable to find flavor %(flavor)s.")
|
||||
|
@ -233,35 +233,6 @@ def get_k8s_quantity(quantity):
|
||||
raise exception.UnsupportedK8sQuantityFormat()
|
||||
|
||||
|
||||
def get_docker_quantity(quantity):
|
||||
"""This function is used to get swarm Memory quantity.
|
||||
|
||||
Memory format must be in the format of:
|
||||
|
||||
<unsignedNumber><suffix>
|
||||
suffix = b | k | m | g
|
||||
|
||||
eg: 100m = 104857600
|
||||
:raises: exception.UnsupportedDockerQuantityFormat if the quantity string
|
||||
is a unsupported value
|
||||
"""
|
||||
matched_unsigned_number = re.search(r"(^\d+)", quantity)
|
||||
|
||||
if matched_unsigned_number is None:
|
||||
raise exception.UnsupportedDockerQuantityFormat()
|
||||
else:
|
||||
unsigned_number = matched_unsigned_number.group(0)
|
||||
|
||||
suffix = quantity.replace(unsigned_number, '', 1)
|
||||
if suffix == '':
|
||||
return int(quantity)
|
||||
|
||||
if re.search(r"^(b|k|m|g)$", suffix):
|
||||
return int(unsigned_number) * DOCKER_MEMORY_UNITS[suffix]
|
||||
|
||||
raise exception.UnsupportedDockerQuantityFormat()
|
||||
|
||||
|
||||
def generate_password(length, symbolgroups=None):
|
||||
"""Generate a random password from the supplied symbol groups.
|
||||
|
||||
|
@ -30,19 +30,6 @@ cluster_template_opts = [
|
||||
help=_("Default network driver for kubernetes "
|
||||
"cluster-templates."),
|
||||
),
|
||||
cfg.ListOpt('swarm_allowed_network_drivers',
|
||||
default=['all'],
|
||||
help=_("Allowed network drivers for docker swarm "
|
||||
"cluster-templates. Use 'all' keyword to allow all "
|
||||
"drivers supported for swarm cluster-templates. "
|
||||
"Supported network drivers include docker and flannel."
|
||||
),
|
||||
),
|
||||
cfg.StrOpt('swarm_default_network_driver',
|
||||
default='docker',
|
||||
help=_("Default network driver for docker swarm "
|
||||
"cluster-templates."),
|
||||
deprecated_group='baymodel'),
|
||||
]
|
||||
|
||||
|
||||
|
@ -8,9 +8,3 @@ resource_registry:
|
||||
|
||||
# kubeminion.yaml
|
||||
"Magnum::Optional::KubeMinion::Neutron::FloatingIP": "OS::Heat::None"
|
||||
|
||||
# swarmmaster.yaml
|
||||
"Magnum::Optional::SwarmMaster::Neutron::FloatingIP": "OS::Heat::None"
|
||||
|
||||
# swarmnode.yaml
|
||||
"Magnum::Optional::SwarmNode::Neutron::FloatingIP": "OS::Heat::None"
|
||||
|
@ -6,9 +6,3 @@ resource_registry:
|
||||
|
||||
# kubeminion.yaml
|
||||
"Magnum::Optional::KubeMinion::Neutron::FloatingIP": "OS::Neutron::FloatingIP"
|
||||
|
||||
# swarmmaster.yaml
|
||||
"Magnum::Optional::SwarmMaster::Neutron::FloatingIP": "OS::Neutron::FloatingIP"
|
||||
|
||||
# swarmnode.yaml
|
||||
"Magnum::Optional::SwarmNode::Neutron::FloatingIP": "OS::Neutron::FloatingIP"
|
||||
|
@ -1,18 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
opts="-H fd:// -H tcp://0.0.0.0:2375 "
|
||||
|
||||
if [ "$TLS_DISABLED" = 'False' ]; then
|
||||
opts=$opts"--tlsverify --tlscacert=/etc/docker/ca.crt "
|
||||
opts=$opts"--tlskey=/etc/docker/server.key "
|
||||
opts=$opts"--tlscert=/etc/docker/server.crt "
|
||||
fi
|
||||
|
||||
sed -i '/^OPTIONS=/ s#\(OPTIONS='"'"'\)#\1'"$opts"'#' /etc/sysconfig/docker
|
||||
|
||||
# NOTE(tobias-urdin): The live restore option is only for standalone daemons.
|
||||
# If its specified the swarm init will fail so we remove it here.
|
||||
# See: https://docs.docker.com/config/containers/live-restore
|
||||
sed -i 's/\ --live-restore//g' /etc/sysconfig/docker
|
@ -1,67 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
DOCKER_HTTP_PROXY_CONF=/etc/systemd/system/docker.service.d/http_proxy.conf
|
||||
|
||||
DOCKER_HTTPS_PROXY_CONF=/etc/systemd/system/docker.service.d/https_proxy.conf
|
||||
|
||||
DOCKER_NO_PROXY_CONF=/etc/systemd/system/docker.service.d/no_proxy.conf
|
||||
|
||||
DOCKER_RESTART=0
|
||||
|
||||
BASH_RC=/etc/bashrc
|
||||
|
||||
mkdir -p /etc/systemd/system/docker.service.d
|
||||
|
||||
if [ -n "$HTTP_PROXY" ]; then
|
||||
cat <<EOF | sed "s/^ *//" > $DOCKER_HTTP_PROXY_CONF
|
||||
[Service]
|
||||
Environment=HTTP_PROXY=$HTTP_PROXY
|
||||
EOF
|
||||
|
||||
DOCKER_RESTART=1
|
||||
|
||||
if [ -f "$BASH_RC" ]; then
|
||||
echo "declare -x http_proxy=$HTTP_PROXY" >> $BASH_RC
|
||||
else
|
||||
echo "File $BASH_RC does not exist, not setting http_proxy"
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -n "$HTTPS_PROXY" ]; then
|
||||
cat <<EOF | sed "s/^ *//" > $DOCKER_HTTPS_PROXY_CONF
|
||||
[Service]
|
||||
Environment=HTTPS_PROXY=$HTTPS_PROXY
|
||||
EOF
|
||||
|
||||
DOCKER_RESTART=1
|
||||
|
||||
if [ -f "$BASH_RC" ]; then
|
||||
echo "declare -x https_proxy=$HTTPS_PROXY" >> $BASH_RC
|
||||
else
|
||||
echo "File $BASH_RC does not exist, not setting https_proxy"
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -n "$HTTP_PROXY" -o -n "$HTTPS_PROXY" ]; then
|
||||
if [ -n "$NO_PROXY" ]; then
|
||||
cat <<EOF | sed "s/^ *//" > $DOCKER_NO_PROXY_CONF
|
||||
[Service]
|
||||
Environment=NO_PROXY=$NO_PROXY
|
||||
EOF
|
||||
|
||||
DOCKER_RESTART=1
|
||||
|
||||
if [ -f "$BASH_RC" ]; then
|
||||
echo "declare -x no_proxy=$NO_PROXY" >> $BASH_RC
|
||||
else
|
||||
echo "File $BASH_RC does not exist, not setting no_proxy"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ "$DOCKER_RESTART" -eq 1 ]; then
|
||||
systemctl daemon-reload
|
||||
systemctl --no-block restart docker.service
|
||||
fi
|
@ -1,20 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
echo "notifying heat"
|
||||
|
||||
if [ "$VERIFY_CA" == "True" ]; then
|
||||
VERIFY_CA=""
|
||||
else
|
||||
VERIFY_CA="-k"
|
||||
fi
|
||||
|
||||
STATUS="SUCCESS"
|
||||
REASON="Setup complete"
|
||||
DATA="OK"
|
||||
UUID=`uuidgen`
|
||||
|
||||
data=$(echo '{"status": "'${STATUS}'", "reason": "'$REASON'", "data": "'${DATA}'", "id": "'$UUID'"}')
|
||||
|
||||
sh -c "${WAIT_CURL} ${VERIFY_CA} --data-binary '${data}'"
|
@ -1,39 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
myip="$SWARM_NODE_IP"
|
||||
cert_dir="/etc/docker"
|
||||
protocol="https"
|
||||
|
||||
if [ "$TLS_DISABLED" = "True" ]; then
|
||||
protocol="http"
|
||||
fi
|
||||
|
||||
cat > /etc/etcd/etcd.conf <<EOF
|
||||
ETCD_NAME="$myip"
|
||||
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
|
||||
ETCD_LISTEN_CLIENT_URLS="$protocol://$myip:2379,http://127.0.0.1:2379"
|
||||
ETCD_LISTEN_PEER_URLS="$protocol://$myip:2380"
|
||||
|
||||
ETCD_ADVERTISE_CLIENT_URLS="$protocol://$myip:2379,http://127.0.0.1:2379"
|
||||
ETCD_INITIAL_ADVERTISE_PEER_URLS="$protocol://$myip:2380"
|
||||
ETCD_DISCOVERY="$ETCD_DISCOVERY_URL"
|
||||
EOF
|
||||
|
||||
if [ "$TLS_DISABLED" = "False" ]; then
|
||||
|
||||
cat >> /etc/etcd/etcd.conf <<EOF
|
||||
ETCD_CA_FILE=$cert_dir/ca.crt
|
||||
ETCD_CERT_FILE=$cert_dir/server.crt
|
||||
ETCD_KEY_FILE=$cert_dir/server.key
|
||||
ETCD_PEER_CA_FILE=$cert_dir/ca.crt
|
||||
ETCD_PEER_CERT_FILE=$cert_dir/server.crt
|
||||
ETCD_PEER_KEY_FILE=$cert_dir/server.key
|
||||
EOF
|
||||
|
||||
fi
|
||||
|
||||
if [ -n "$HTTP_PROXY" ]; then
|
||||
echo "ETCD_DISCOVERY_PROXY=$HTTP_PROXY" >> /etc/etcd/etcd.conf
|
||||
fi
|
@ -1,12 +0,0 @@
|
||||
#cloud-boothook
|
||||
#!/bin/sh
|
||||
|
||||
# files in /usr/local/bin should be labeled bin_t
|
||||
# however on Atomic /usr/local is a symlink to /var/usrlocal
|
||||
# so the default Fedora policy doesn't work
|
||||
echo '/var/usrlocal/(.*/)?bin(/.*)? system_u:object_r:bin_t:s0' > /etc/selinux/targeted/contexts/files/file_contexts.local
|
||||
restorecon -R /usr/local/bin
|
||||
|
||||
# disable selinux until cloud-init is over
|
||||
# enabled again in enable-services.sh
|
||||
setenforce 0
|
@ -1,15 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
set -x
|
||||
|
||||
systemctl stop docker
|
||||
|
||||
echo "starting services"
|
||||
systemctl daemon-reload
|
||||
for service in $NODE_SERVICES; do
|
||||
echo "activating service $service"
|
||||
systemctl enable $service
|
||||
systemctl --no-block start $service
|
||||
done
|
||||
|
||||
setenforce 1
|
@ -1,200 +0,0 @@
|
||||
#!/usr/bin/python
|
||||
|
||||
# Copyright 2015 Rackspace, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import json
|
||||
import os
|
||||
import subprocess
|
||||
import sys
|
||||
|
||||
import requests
|
||||
|
||||
|
||||
HEAT_PARAMS_PATH = '/etc/sysconfig/heat-params'
|
||||
PUBLIC_IP_URL = 'http://169.254.169.254/latest/meta-data/public-ipv4'
|
||||
CERT_DIR = '/etc/docker'
|
||||
CERT_CONF_DIR = '%s/conf' % CERT_DIR
|
||||
CA_CERT_PATH = '%s/ca.crt' % CERT_DIR
|
||||
SERVER_CONF_PATH = '%s/server.conf' % CERT_CONF_DIR
|
||||
SERVER_KEY_PATH = '%s/server.key' % CERT_DIR
|
||||
SERVER_CSR_PATH = '%s/server.csr' % CERT_DIR
|
||||
SERVER_CERT_PATH = '%s/server.crt' % CERT_DIR
|
||||
|
||||
CSR_CONFIG_TEMPLATE = """
|
||||
[req]
|
||||
distinguished_name = req_distinguished_name
|
||||
req_extensions = req_ext
|
||||
x509_extensions = req_ext
|
||||
prompt = no
|
||||
copy_extensions = copyall
|
||||
[req_distinguished_name]
|
||||
CN = swarm.invalid
|
||||
[req_ext]
|
||||
subjectAltName = %(subject_alt_names)s
|
||||
extendedKeyUsage = clientAuth,serverAuth
|
||||
"""
|
||||
|
||||
|
||||
def _parse_config_value(value):
|
||||
parsed_value = value
|
||||
if parsed_value[-1] == '\n':
|
||||
parsed_value = parsed_value[:-1]
|
||||
return parsed_value[1:-1]
|
||||
|
||||
|
||||
def load_config():
|
||||
config = dict()
|
||||
with open(HEAT_PARAMS_PATH, 'r') as fp:
|
||||
for line in fp.readlines():
|
||||
key, value = line.split('=', 1)
|
||||
config[key] = _parse_config_value(value)
|
||||
return config
|
||||
|
||||
|
||||
def create_dirs():
|
||||
os.makedirs(CERT_CONF_DIR)
|
||||
|
||||
|
||||
def _get_public_ip():
|
||||
return requests.get(PUBLIC_IP_URL, timeout=60).text
|
||||
|
||||
|
||||
def _build_subject_alt_names(config):
|
||||
ips = {
|
||||
config['SWARM_NODE_IP'],
|
||||
config['SWARM_API_IP'],
|
||||
'127.0.0.1',
|
||||
}
|
||||
# NOTE(mgoddard): If floating IP is disabled, these can be empty.
|
||||
public_ip = _get_public_ip()
|
||||
if public_ip:
|
||||
ips.add(public_ip)
|
||||
api_ip = config['API_IP_ADDRESS']
|
||||
if api_ip:
|
||||
ips.add(api_ip)
|
||||
subject_alt_names = ['IP:%s' % ip for ip in ips]
|
||||
return ','.join(subject_alt_names)
|
||||
|
||||
|
||||
def write_ca_cert(config, verify_ca):
|
||||
cluster_cert_url = '%s/certificates/%s' % (config['MAGNUM_URL'],
|
||||
config['CLUSTER_UUID'])
|
||||
headers = {'X-Auth-Token': config['USER_TOKEN'],
|
||||
'OpenStack-API-Version': 'container-infra latest'}
|
||||
ca_cert_resp = requests.get(cluster_cert_url,
|
||||
headers=headers,
|
||||
verify=verify_ca, timeout=60)
|
||||
|
||||
with open(CA_CERT_PATH, 'w') as fp:
|
||||
fp.write(ca_cert_resp.json()['pem'])
|
||||
|
||||
|
||||
def write_server_key():
|
||||
subprocess.check_call(
|
||||
['openssl', 'genrsa',
|
||||
'-out', SERVER_KEY_PATH,
|
||||
'4096'])
|
||||
|
||||
|
||||
def _write_csr_config(config):
|
||||
with open(SERVER_CONF_PATH, 'w') as fp:
|
||||
params = {
|
||||
'subject_alt_names': _build_subject_alt_names(config)
|
||||
}
|
||||
fp.write(CSR_CONFIG_TEMPLATE % params)
|
||||
|
||||
|
||||
def create_server_csr(config):
|
||||
_write_csr_config(config)
|
||||
subprocess.check_call(
|
||||
['openssl', 'req', '-new',
|
||||
'-days', '1000',
|
||||
'-key', SERVER_KEY_PATH,
|
||||
'-out', SERVER_CSR_PATH,
|
||||
'-reqexts', 'req_ext',
|
||||
'-extensions', 'req_ext',
|
||||
'-config', SERVER_CONF_PATH])
|
||||
|
||||
with open(SERVER_CSR_PATH, 'r') as fp:
|
||||
return {'cluster_uuid': config['CLUSTER_UUID'], 'csr': fp.read()}
|
||||
|
||||
|
||||
def write_server_cert(config, csr_req, verify_ca):
|
||||
cert_url = '%s/certificates' % config['MAGNUM_URL']
|
||||
headers = {
|
||||
'Content-Type': 'application/json',
|
||||
'X-Auth-Token': config['USER_TOKEN'],
|
||||
'OpenStack-API-Version': 'container-infra latest'
|
||||
}
|
||||
csr_resp = requests.post(cert_url,
|
||||
data=json.dumps(csr_req),
|
||||
headers=headers,
|
||||
verify=verify_ca, timeout=60)
|
||||
|
||||
with open(SERVER_CERT_PATH, 'w') as fp:
|
||||
fp.write(csr_resp.json()['pem'])
|
||||
|
||||
|
||||
def get_user_token(config, verify_ca):
|
||||
creds_str = '''
|
||||
{
|
||||
"auth": {
|
||||
"identity": {
|
||||
"methods": [
|
||||
"password"
|
||||
],
|
||||
"password": {
|
||||
"user": {
|
||||
"id": "%(trustee_user_id)s",
|
||||
"password": "%(trustee_password)s"
|
||||
}
|
||||
}
|
||||
},
|
||||
"scope": {
|
||||
"OS-TRUST:trust": {
|
||||
"id": "$(trust_id)s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
'''
|
||||
params = {
|
||||
'trustee_user_id': config['TRUSTEE_USER_ID'],
|
||||
'trustee_password': config['TRUSTEE_PASSWORD'],
|
||||
'trust_id': config['TRUST_ID'],
|
||||
}
|
||||
creds = creds_str % params
|
||||
headers = {'Content-Type': 'application/json'}
|
||||
url = config['AUTH_URL'] + '/auth/tokens'
|
||||
r = requests.post(url, headers=headers, data=creds, verify=verify_ca,
|
||||
timeout=60)
|
||||
config['USER_TOKEN'] = r.headers['X-Subject-Token']
|
||||
return config
|
||||
|
||||
|
||||
def main():
|
||||
config = load_config()
|
||||
if config['TLS_DISABLED'] == 'False':
|
||||
verify_ca = True if config['VERIFY_CA'] == 'True' else False
|
||||
create_dirs()
|
||||
config = get_user_token(config, verify_ca)
|
||||
write_ca_cert(config, verify_ca)
|
||||
write_server_key()
|
||||
csr_req = create_server_csr(config)
|
||||
write_server_cert(config, csr_req, verify_ca)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
sys.exit(main())
|
@ -1,85 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
echo "Configuring ${NETWORK_DRIVER} network ..."
|
||||
|
||||
if [ "$NETWORK_DRIVER" != "flannel" ]; then
|
||||
exit 0
|
||||
fi
|
||||
|
||||
FLANNELD_CONFIG=/etc/sysconfig/flanneld
|
||||
FLANNEL_CONFIG_BIN=/usr/local/bin/flannel-config
|
||||
FLANNEL_CONFIG_SERVICE=/etc/systemd/system/flannel-config.service
|
||||
FLANNEL_JSON=/etc/sysconfig/flannel-network.json
|
||||
CERT_DIR=/etc/docker
|
||||
PROTOCOL=https
|
||||
FLANNEL_OPTIONS="-etcd-cafile $CERT_DIR/ca.crt \
|
||||
-etcd-certfile $CERT_DIR/server.crt \
|
||||
-etcd-keyfile $CERT_DIR/server.key"
|
||||
ETCD_CURL_OPTIONS="--cacert $CERT_DIR/ca.crt \
|
||||
--cert $CERT_DIR/server.crt --key $CERT_DIR/server.key"
|
||||
|
||||
if [ "$TLS_DISABLED" = "True" ]; then
|
||||
PROTOCOL=http
|
||||
FLANNEL_OPTIONS=""
|
||||
ETCD_CURL_OPTIONS=""
|
||||
fi
|
||||
|
||||
sed -i '
|
||||
/^FLANNEL_ETCD=/ s|=.*|="'"$PROTOCOL"'://'"$ETCD_SERVER_IP"':2379"|
|
||||
' $FLANNELD_CONFIG
|
||||
|
||||
sed -i '/FLANNEL_OPTIONS/'d $FLANNELD_CONFIG
|
||||
|
||||
cat >> $FLANNELD_CONFIG <<EOF
|
||||
FLANNEL_OPTIONS="$FLANNEL_OPTIONS"
|
||||
EOF
|
||||
|
||||
. $FLANNELD_CONFIG
|
||||
|
||||
echo "creating $FLANNEL_CONFIG_BIN"
|
||||
cat > $FLANNEL_CONFIG_BIN <<EOF
|
||||
#!/bin/sh
|
||||
|
||||
if ! [ -f "$FLANNEL_JSON" ]; then
|
||||
echo "ERROR: missing network configuration file" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! [ "$FLANNEL_ETCD_ENDPOINTS" ] && [ "$FLANNEL_ETCD_PREFIX" ]; then
|
||||
echo "ERROR: missing required configuration" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "creating flanneld config in etcd"
|
||||
while ! curl -sf -L $ETCD_CURL_OPTIONS \
|
||||
$FLANNEL_ETCD/v2/keys${FLANNEL_ETCD_PREFIX}/config \
|
||||
-X PUT --data-urlencode value@${FLANNEL_JSON}; do
|
||||
echo "waiting for etcd"
|
||||
sleep 1
|
||||
done
|
||||
EOF
|
||||
|
||||
cat > $FLANNEL_CONFIG_SERVICE <<EOF
|
||||
[Unit]
|
||||
After=etcd.service
|
||||
Requires=etcd.service
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
EnvironmentFile=/etc/sysconfig/flanneld
|
||||
ExecStart=$FLANNEL_CONFIG_BIN
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
|
||||
chown root:root $FLANNEL_CONFIG_BIN
|
||||
chmod 0755 $FLANNEL_CONFIG_BIN
|
||||
|
||||
chown root:root $FLANNEL_CONFIG_SERVICE
|
||||
chmod 0644 $FLANNEL_CONFIG_SERVICE
|
||||
|
||||
systemctl enable flannel-config
|
||||
systemctl start --no-block flannel-config
|
@ -1,140 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
CERT_DIR=/etc/docker
|
||||
PROTOCOL=https
|
||||
FLANNEL_OPTIONS="-etcd-cafile $CERT_DIR/ca.crt \
|
||||
-etcd-certfile $CERT_DIR/server.crt \
|
||||
-etcd-keyfile $CERT_DIR/server.key"
|
||||
DOCKER_NETWORK_OPTIONS="--cluster-store etcd://$ETCD_SERVER_IP:2379 \
|
||||
--cluster-store-opt kv.cacertfile=$CERT_DIR/ca.crt \
|
||||
--cluster-store-opt kv.certfile=$CERT_DIR/server.crt \
|
||||
--cluster-store-opt kv.keyfile=$CERT_DIR/server.key \
|
||||
--cluster-advertise $SWARM_NODE_IP:9379"
|
||||
|
||||
if [ "$TLS_DISABLED" = "True" ]; then
|
||||
PROTOCOL=http
|
||||
FLANNEL_OPTIONS=""
|
||||
DOCKER_NETWORK_OPTIONS="--cluster-store etcd://$ETCD_SERVER_IP:2379 \
|
||||
--cluster-advertise $SWARM_NODE_IP:9379"
|
||||
fi
|
||||
|
||||
echo "Configuring ${NETWORK_DRIVER} network service ..."
|
||||
|
||||
if [ "$NETWORK_DRIVER" == "docker" ]; then
|
||||
sed -i "/^DOCKER_NETWORK_OPTIONS=/ s#=.*#='$DOCKER_NETWORK_OPTIONS'#" \
|
||||
/etc/sysconfig/docker-network
|
||||
fi
|
||||
|
||||
if [ "$NETWORK_DRIVER" != "flannel" ]; then
|
||||
exit 0
|
||||
fi
|
||||
|
||||
SYSTEMD_UNITS_DIR=/etc/systemd/system/
|
||||
FLANNELD_CONFIG=/etc/sysconfig/flanneld
|
||||
FLANNEL_DOCKER_BRIDGE_BIN=/usr/local/bin/flannel-docker-bridge
|
||||
FLANNEL_DOCKER_BRIDGE_SERVICE=/etc/systemd/system/flannel-docker-bridge.service
|
||||
FLANNEL_IPTABLES_FORWARD_ACCEPT_SERVICE=flannel-iptables-forward-accept.service
|
||||
DOCKER_FLANNEL_CONF=/etc/systemd/system/docker.service.d/flannel.conf
|
||||
FLANNEL_DOCKER_BRIDGE_CONF=/etc/systemd/system/flanneld.service.d/flannel-docker-bridge.conf
|
||||
|
||||
mkdir -p /etc/systemd/system/docker.service.d
|
||||
mkdir -p /etc/systemd/system/flanneld.service.d
|
||||
|
||||
sed -i '
|
||||
/^FLANNEL_ETCD=/ s|=.*|="'"$PROTOCOL"'://'"$ETCD_SERVER_IP"':2379"|
|
||||
' $FLANNELD_CONFIG
|
||||
|
||||
sed -i '/FLANNEL_OPTIONS/'d $FLANNELD_CONFIG
|
||||
|
||||
cat >> $FLANNELD_CONFIG <<EOF
|
||||
FLANNEL_OPTIONS="$FLANNEL_OPTIONS"
|
||||
EOF
|
||||
|
||||
cat >> $FLANNEL_DOCKER_BRIDGE_BIN <<EOF
|
||||
#!/bin/sh
|
||||
|
||||
if ! [ "\$FLANNEL_SUBNET" ] && [ "\$FLANNEL_MTU" ] ; then
|
||||
echo "ERROR: missing required environment variables." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# NOTE(mnaser): Since Docker 1.13, it does not set the default forwarding
|
||||
# policy to ACCEPT which will cause CNI networking to fail.
|
||||
iptables -P FORWARD ACCEPT
|
||||
|
||||
mkdir -p /run/flannel/
|
||||
cat > /run/flannel/docker <<EOF
|
||||
DOCKER_NETWORK_OPTIONS="--bip=\$FLANNEL_SUBNET --mtu=\$FLANNEL_MTU"
|
||||
EOF
|
||||
|
||||
chown root:root $FLANNEL_DOCKER_BRIDGE_BIN
|
||||
chmod 0755 $FLANNEL_DOCKER_BRIDGE_BIN
|
||||
|
||||
cat >> $FLANNEL_DOCKER_BRIDGE_SERVICE <<EOF
|
||||
[Unit]
|
||||
After=flanneld.service
|
||||
Before=docker.service
|
||||
Requires=flanneld.service
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
EnvironmentFile=/run/flannel/subnet.env
|
||||
ExecStart=/usr/local/bin/flannel-docker-bridge
|
||||
|
||||
[Install]
|
||||
WantedBy=docker.service
|
||||
EOF
|
||||
|
||||
chown root:root $FLANNEL_DOCKER_BRIDGE_SERVICE
|
||||
chmod 0644 $FLANNEL_DOCKER_BRIDGE_SERVICE
|
||||
|
||||
cat >> $DOCKER_FLANNEL_CONF <<EOF
|
||||
[Unit]
|
||||
Requires=flannel-docker-bridge.service
|
||||
After=flannel-docker-bridge.service
|
||||
|
||||
[Service]
|
||||
EnvironmentFile=/run/flannel/docker
|
||||
EOF
|
||||
|
||||
chown root:root $DOCKER_FLANNEL_CONF
|
||||
chmod 0644 $DOCKER_FLANNEL_CONF
|
||||
|
||||
cat >> $FLANNEL_DOCKER_BRIDGE_CONF <<EOF
|
||||
[Unit]
|
||||
Requires=flannel-docker-bridge.service
|
||||
Before=flannel-docker-bridge.service
|
||||
|
||||
[Install]
|
||||
Also=flannel-docker-bridge.service
|
||||
EOF
|
||||
|
||||
chown root:root $FLANNEL_DOCKER_BRIDGE_CONF
|
||||
chmod 0644 $FLANNEL_DOCKER_BRIDGE_CONF
|
||||
|
||||
# Workaround for https://github.com/coreos/flannel/issues/799
|
||||
# Not solved upstream properly yet.
|
||||
cat >> "${SYSTEMD_UNITS_DIR}${FLANNEL_IPTABLES_FORWARD_ACCEPT_SERVICE}" <<EOF
|
||||
[Unit]
|
||||
After=flanneld.service docker.service kubelet.service kube-proxy.service
|
||||
Requires=flanneld.service
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT
|
||||
ExecStartPost=/usr/sbin/iptables -S
|
||||
|
||||
[Install]
|
||||
WantedBy=flanneld.service
|
||||
EOF
|
||||
|
||||
chown root:root "${SYSTEMD_UNITS_DIR}${FLANNEL_IPTABLES_FORWARD_ACCEPT_SERVICE}"
|
||||
chmod 0644 "${SYSTEMD_UNITS_DIR}${FLANNEL_IPTABLES_FORWARD_ACCEPT_SERVICE}"
|
||||
systemctl daemon-reload
|
||||
systemctl enable "${FLANNEL_IPTABLES_FORWARD_ACCEPT_SERVICE}"
|
||||
|
||||
echo "activating service flanneld"
|
||||
systemctl enable flanneld
|
||||
systemctl --no-block start flanneld
|
@ -1,4 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
echo "removing docker key"
|
||||
rm -f /etc/docker/key.json
|
@ -1,79 +0,0 @@
|
||||
#!/bin/sh
|
||||
# Add rexray volume driver support for Swarm
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
set -e
|
||||
set -x
|
||||
|
||||
# if no voulume driver is selected don't do any configuration
|
||||
if [ -z "$VOLUME_DRIVER" ]; then
|
||||
exit 0
|
||||
fi
|
||||
|
||||
mkdir -p /etc/rexray
|
||||
mkdir -p /var/log/rexray
|
||||
mkdir -p /var/run/rexray
|
||||
mkdir -p /var/lib/rexray
|
||||
|
||||
REXRAY_CONFIG=/etc/rexray/config.yml
|
||||
|
||||
# Add rexray configuration
|
||||
cat > $REXRAY_CONFIG <<EOF
|
||||
libstorage:
|
||||
logging:
|
||||
level: info
|
||||
service: openstack
|
||||
integration:
|
||||
volume:
|
||||
operations:
|
||||
mount:
|
||||
preempt: $REXRAY_PREEMPT
|
||||
openstack:
|
||||
authUrl: $AUTH_URL
|
||||
userID: $TRUSTEE_USER_ID
|
||||
password: $TRUSTEE_PASSWORD
|
||||
trustID: $TRUST_ID
|
||||
EOF
|
||||
|
||||
if [ ! -f /usr/bin/rexray ]; then
|
||||
# If rexray is not installed, run it in a docker container
|
||||
|
||||
cat > /etc/systemd/system/rexray.service <<EOF
|
||||
[Unit]
|
||||
Description=Rexray container
|
||||
Requires=docker.service
|
||||
After=docker.service
|
||||
|
||||
[Service]
|
||||
RemainAfterExit=yes
|
||||
ExecStartPre=-/usr/bin/docker rm -f rexray
|
||||
ExecStartPre=-/usr/bin/docker pull openstackmagnum/rexray:alpine
|
||||
ExecStartPre=-/usr/bin/rm -f /var/run/rexray/rexray.pid
|
||||
ExecStart=/usr/bin/docker run -d --name=rexray --privileged \\
|
||||
--pid host \\
|
||||
--net host \\
|
||||
-p 7979:7979 \\
|
||||
-v /run/docker/plugins:/run/docker/plugins \\
|
||||
-v /var/lib/rexray:/var/lib/rexray:Z \\
|
||||
-v /var/lib/libstorage:/var/lib/libstorage:rshared \\
|
||||
-v /var/log/rexray:/var/log/rexray \\
|
||||
-v /var/run/rexray:/var/run/rexray \\
|
||||
-v /var/lib/docker:/var/lib/docker:rshared \\
|
||||
-v /var/run/docker:/var/run/docker \\
|
||||
-v /dev:/dev \\
|
||||
-v /etc/rexray/config.yml:/etc/rexray/config.yml \\
|
||||
openstackmagnum/rexray:alpine
|
||||
ExecStop=/usr/bin/docker stop rexray
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
chown root:root /etc/systemd/system/rexray.service
|
||||
chmod 644 /etc/systemd/system/rexray.service
|
||||
|
||||
systemctl daemon-reload
|
||||
fi
|
||||
|
||||
echo "starting rexray..."
|
||||
systemctl enable rexray
|
||||
systemctl --no-block start rexray
|
@ -1,15 +0,0 @@
|
||||
#cloud-config
|
||||
merge_how: dict(recurse_array)+list(append)
|
||||
write_files:
|
||||
- path: /etc/systemd/system/$SERVICE-failure.service
|
||||
owner: "root:root"
|
||||
permissions: "0644"
|
||||
content: |
|
||||
[Unit]
|
||||
Description=$SERVICE Failure Notifier
|
||||
|
||||
[Service]
|
||||
Type=simple
|
||||
TimeoutStartSec=0
|
||||
ExecStart=/usr/bin/$WAIT_CURL $VERIFY_CA \
|
||||
--data-binary '{"status": "FAILURE", "reason": "$SERVICE service failed to start.", "data": "Failure"}'
|
@ -1,21 +0,0 @@
|
||||
#cloud-config
|
||||
merge_how: dict(recurse_array)+list(append)
|
||||
write_files:
|
||||
- path: /etc/systemd/system/docker.socket
|
||||
owner: "root:root"
|
||||
permissions: "0644"
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Docker Socket for the API
|
||||
PartOf=docker.service
|
||||
After=docker-storage-setup.service
|
||||
Before=docker.service
|
||||
|
||||
[Socket]
|
||||
ListenStream=/var/run/docker.sock
|
||||
SocketMode=0660
|
||||
SocketUser=root
|
||||
SocketGroup=root
|
||||
|
||||
[Install]
|
||||
WantedBy=sockets.target
|
@ -1,34 +0,0 @@
|
||||
#cloud-config
|
||||
merge_how: dict(recurse_array)+list(append)
|
||||
write_files:
|
||||
- path: /etc/sysconfig/heat-params
|
||||
owner: "root:root"
|
||||
permissions: "0600"
|
||||
content: |
|
||||
WAIT_CURL="$WAIT_CURL"
|
||||
ETCD_DISCOVERY_URL="$ETCD_DISCOVERY_URL"
|
||||
DOCKER_VOLUME="$DOCKER_VOLUME"
|
||||
DOCKER_VOLUME_SIZE="$DOCKER_VOLUME_SIZE"
|
||||
DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
|
||||
HTTP_PROXY="$HTTP_PROXY"
|
||||
HTTPS_PROXY="$HTTPS_PROXY"
|
||||
NO_PROXY="$NO_PROXY"
|
||||
SWARM_API_IP="$SWARM_API_IP"
|
||||
SWARM_NODE_IP="$SWARM_NODE_IP"
|
||||
CLUSTER_UUID="$CLUSTER_UUID"
|
||||
MAGNUM_URL="$MAGNUM_URL"
|
||||
TLS_DISABLED="$TLS_DISABLED"
|
||||
VERIFY_CA="$VERIFY_CA"
|
||||
NETWORK_DRIVER="$NETWORK_DRIVER"
|
||||
FLANNEL_NETWORK_CIDR="$FLANNEL_NETWORK_CIDR"
|
||||
FLANNEL_NETWORK_SUBNETLEN="$FLANNEL_NETWORK_SUBNETLEN"
|
||||
FLANNEL_BACKEND="$FLANNEL_BACKEND"
|
||||
ETCD_SERVER_IP="$ETCD_SERVER_IP"
|
||||
API_IP_ADDRESS="$API_IP_ADDRESS"
|
||||
SWARM_VERSION="$SWARM_VERSION"
|
||||
TRUSTEE_USER_ID="$TRUSTEE_USER_ID"
|
||||
TRUSTEE_PASSWORD="$TRUSTEE_PASSWORD"
|
||||
TRUST_ID="$TRUST_ID"
|
||||
AUTH_URL="$AUTH_URL"
|
||||
VOLUME_DRIVER="$VOLUME_DRIVER"
|
||||
REXRAY_PREEMPT="$REXRAY_PREEMPT"
|
@ -1,38 +0,0 @@
|
||||
#cloud-config
|
||||
merge_how: dict(recurse_array)+list(append)
|
||||
write_files:
|
||||
- path: /etc/sysconfig/heat-params
|
||||
owner: "root:root"
|
||||
permissions: "0600"
|
||||
content: |
|
||||
WAIT_CURL="$WAIT_CURL"
|
||||
DOCKER_VOLUME="$DOCKER_VOLUME"
|
||||
DOCKER_VOLUME_SIZE="$DOCKER_VOLUME_SIZE"
|
||||
DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
|
||||
HTTP_PROXY="$HTTP_PROXY"
|
||||
HTTPS_PROXY="$HTTPS_PROXY"
|
||||
NO_PROXY="$NO_PROXY"
|
||||
SWARM_API_IP="$SWARM_API_IP"
|
||||
SWARM_NODE_IP="$SWARM_NODE_IP"
|
||||
CLUSTER_UUID="$CLUSTER_UUID"
|
||||
MAGNUM_URL="$MAGNUM_URL"
|
||||
TLS_DISABLED="$TLS_DISABLED"
|
||||
VERIFY_CA="$VERIFY_CA"
|
||||
NETWORK_DRIVER="$NETWORK_DRIVER"
|
||||
ETCD_SERVER_IP="$ETCD_SERVER_IP"
|
||||
API_IP_ADDRESS="$API_IP_ADDRESS"
|
||||
SWARM_VERSION="$SWARM_VERSION"
|
||||
TRUSTEE_DOMAIN_ID="$TRUSTEE_DOMAIN_ID"
|
||||
TRUSTEE_USER_ID="$TRUSTEE_USER_ID"
|
||||
TRUSTEE_USERNAME="$TRUSTEE_USERNAME"
|
||||
TRUSTEE_PASSWORD="$TRUSTEE_PASSWORD"
|
||||
TRUST_ID="$TRUST_ID"
|
||||
AUTH_URL="$AUTH_URL"
|
||||
REGISTRY_ENABLED="$REGISTRY_ENABLED"
|
||||
REGISTRY_PORT="$REGISTRY_PORT"
|
||||
SWIFT_REGION="$SWIFT_REGION"
|
||||
REGISTRY_CONTAINER="$REGISTRY_CONTAINER"
|
||||
REGISTRY_INSECURE="$REGISTRY_INSECURE"
|
||||
REGISTRY_CHUNKSIZE="$REGISTRY_CHUNKSIZE"
|
||||
VOLUME_DRIVER="$VOLUME_DRIVER"
|
||||
REXRAY_PREEMPT="$REXRAY_PREEMPT"
|
@ -1,22 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
if [ "$NETWORK_DRIVER" != "flannel" ]; then
|
||||
exit 0
|
||||
fi
|
||||
|
||||
FLANNEL_JSON=/etc/sysconfig/flannel-network.json
|
||||
|
||||
# Generate a flannel configuration that we will
|
||||
# store into etcd using curl.
|
||||
cat > $FLANNEL_JSON <<EOF
|
||||
{
|
||||
"Network": "$FLANNEL_NETWORK_CIDR",
|
||||
"Subnetlen": $FLANNEL_NETWORK_SUBNETLEN,
|
||||
"Backend": {
|
||||
"Type": "$FLANNEL_BACKEND"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
@ -1,90 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
myip="$SWARM_NODE_IP"
|
||||
|
||||
if [ "$VERIFY_CA" == "True" ]; then
|
||||
VERIFY_CA=""
|
||||
else
|
||||
VERIFY_CA="-k"
|
||||
fi
|
||||
|
||||
CONF_FILE=/etc/systemd/system/swarm-agent.service
|
||||
CERT_DIR=/etc/docker
|
||||
PROTOCOL=https
|
||||
ETCDCTL_OPTIONS="--ca-file $CERT_DIR/ca.crt \
|
||||
--cert-file $CERT_DIR/server.crt \
|
||||
--key-file $CERT_DIR/server.key"
|
||||
|
||||
if [ $TLS_DISABLED = 'True' ]; then
|
||||
PROTOCOL=http
|
||||
ETCDCTL_OPTIONS=""
|
||||
fi
|
||||
|
||||
cat > $CONF_FILE << EOF
|
||||
[Unit]
|
||||
Description=Swarm Agent
|
||||
After=docker.service
|
||||
Requires=docker.service
|
||||
OnFailure=swarm-agent-failure.service
|
||||
|
||||
[Service]
|
||||
TimeoutStartSec=0
|
||||
ExecStartPre=-/usr/bin/docker kill swarm-agent
|
||||
ExecStartPre=-/usr/bin/docker rm swarm-agent
|
||||
ExecStartPre=-/usr/bin/docker pull swarm:$SWARM_VERSION
|
||||
ExecStart=/usr/bin/docker run -e http_proxy=$HTTP_PROXY \\
|
||||
-e https_proxy=$HTTPS_PROXY \\
|
||||
-e no_proxy=$NO_PROXY \\
|
||||
-v $CERT_DIR:$CERT_DIR:Z \\
|
||||
--name swarm-agent \\
|
||||
swarm:$SWARM_VERSION \\
|
||||
join \\
|
||||
--addr $myip:2375 \\
|
||||
EOF
|
||||
|
||||
if [ $TLS_DISABLED = 'False' ]; then
|
||||
|
||||
cat >> /etc/systemd/system/swarm-agent.service << END_TLS
|
||||
--discovery-opt kv.cacertfile=$CERT_DIR/ca.crt \\
|
||||
--discovery-opt kv.certfile=$CERT_DIR/server.crt \\
|
||||
--discovery-opt kv.keyfile=$CERT_DIR/server.key \\
|
||||
END_TLS
|
||||
|
||||
fi
|
||||
|
||||
cat >> /etc/systemd/system/swarm-agent.service << END_SERVICE_BOTTOM
|
||||
etcd://$ETCD_SERVER_IP:2379/v2/keys/swarm/
|
||||
Restart=always
|
||||
ExecStop=/usr/bin/docker stop swarm-agent
|
||||
ExecStartPost=/usr/local/bin/notify-heat
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
END_SERVICE_BOTTOM
|
||||
|
||||
chown root:root $CONF_FILE
|
||||
chmod 644 $CONF_FILE
|
||||
|
||||
SCRIPT=/usr/local/bin/notify-heat
|
||||
|
||||
UUID=`uuidgen`
|
||||
cat > $SCRIPT << EOF
|
||||
#!/bin/sh
|
||||
until etcdctl \
|
||||
--peers $PROTOCOL://$ETCD_SERVER_IP:2379 \
|
||||
$ETCDCTL_OPTIONS --timeout 1s \
|
||||
--total-timeout 5s \
|
||||
ls /v2/keys/swarm/docker/swarm/nodes/$myip:2375
|
||||
do
|
||||
echo "Waiting for swarm agent registration..."
|
||||
sleep 5
|
||||
done
|
||||
|
||||
${WAIT_CURL} {$VERIFY_CA} \
|
||||
--data-binary '{"status": "SUCCESS", "reason": "Swarm agent ready", "data": "OK", "id": "${UUID}"}'
|
||||
EOF
|
||||
|
||||
chown root:root $SCRIPT
|
||||
chmod 755 $SCRIPT
|
@ -1,63 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
CERT_DIR=/etc/docker
|
||||
|
||||
if [ "$VERIFY_CA" == "True" ]; then
|
||||
VERIFY_CA=""
|
||||
else
|
||||
VERIFY_CA="-k"
|
||||
fi
|
||||
|
||||
cat > /etc/systemd/system/swarm-manager.service << END_SERVICE_TOP
|
||||
[Unit]
|
||||
Description=Swarm Manager
|
||||
After=docker.service etcd.service
|
||||
Requires=docker.service etcd.service
|
||||
OnFailure=swarm-manager-failure.service
|
||||
|
||||
[Service]
|
||||
TimeoutStartSec=0
|
||||
ExecStartPre=-/usr/bin/docker kill swarm-manager
|
||||
ExecStartPre=-/usr/bin/docker rm swarm-manager
|
||||
ExecStartPre=-/usr/bin/docker pull swarm:$SWARM_VERSION
|
||||
ExecStart=/usr/bin/docker run --name swarm-manager \\
|
||||
-v $CERT_DIR:$CERT_DIR:Z \\
|
||||
-p 2376:2375 \\
|
||||
-e http_proxy=$HTTP_PROXY \\
|
||||
-e https_proxy=$HTTPS_PROXY \\
|
||||
-e no_proxy=$NO_PROXY \\
|
||||
swarm:$SWARM_VERSION \\
|
||||
manage -H tcp://0.0.0.0:2375 \\
|
||||
--strategy $SWARM_STRATEGY \\
|
||||
--replication \\
|
||||
--advertise $NODE_IP:2376 \\
|
||||
END_SERVICE_TOP
|
||||
|
||||
if [ $TLS_DISABLED = 'False' ]; then
|
||||
|
||||
cat >> /etc/systemd/system/swarm-manager.service << END_TLS
|
||||
--tlsverify \\
|
||||
--tlscacert=$CERT_DIR/ca.crt \\
|
||||
--tlskey=$CERT_DIR/server.key \\
|
||||
--tlscert=$CERT_DIR/server.crt \\
|
||||
--discovery-opt kv.cacertfile=$CERT_DIR/ca.crt \\
|
||||
--discovery-opt kv.certfile=$CERT_DIR/server.crt \\
|
||||
--discovery-opt kv.keyfile=$CERT_DIR/server.key \\
|
||||
END_TLS
|
||||
|
||||
fi
|
||||
|
||||
UUID=`uuidgen`
|
||||
cat >> /etc/systemd/system/swarm-manager.service << END_SERVICE_BOTTOM
|
||||
etcd://$ETCD_SERVER_IP:2379/v2/keys/swarm/
|
||||
ExecStop=/usr/bin/docker stop swarm-manager
|
||||
Restart=always
|
||||
ExecStartPost=/usr/bin/$WAIT_CURL $VERIFY_CA \\
|
||||
--data-binary '{"status": "SUCCESS", "reason": "Setup complete", "data": "OK", "id": "$UUID"}'
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
END_SERVICE_BOTTOM
|
||||
|
||||
chown root:root /etc/systemd/system/swarm-manager.service
|
||||
chmod 644 /etc/systemd/system/swarm-manager.service
|
@ -1,174 +0,0 @@
|
||||
# Copyright 2016 Rackspace Inc. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
from magnum.drivers.heat import template_def
|
||||
from oslo_config import cfg
|
||||
|
||||
CONF = cfg.CONF
|
||||
DOCKER_PORT = '2376'
|
||||
|
||||
|
||||
class SwarmApiAddressOutputMapping(template_def.OutputMapping):
|
||||
|
||||
def set_output(self, stack, cluster_template, cluster):
|
||||
if self.cluster_attr is None:
|
||||
return
|
||||
|
||||
output_value = self.get_output_value(stack, cluster)
|
||||
if output_value is not None:
|
||||
# Note(rocha): protocol should always be tcp as the docker
|
||||
# command client does not handle https (see bug #1604812).
|
||||
params = {
|
||||
'protocol': 'tcp',
|
||||
'address': output_value,
|
||||
'port': DOCKER_PORT,
|
||||
}
|
||||
value = "%(protocol)s://%(address)s:%(port)s" % params
|
||||
setattr(cluster, self.cluster_attr, value)
|
||||
|
||||
|
||||
class SwarmFedoraTemplateDefinition(template_def.BaseTemplateDefinition):
|
||||
"""Docker swarm template for a Fedora Atomic VM."""
|
||||
|
||||
def __init__(self):
|
||||
super(SwarmFedoraTemplateDefinition, self).__init__()
|
||||
self.add_parameter('cluster_uuid',
|
||||
cluster_attr='uuid',
|
||||
param_type=str)
|
||||
self.add_parameter('volume_driver',
|
||||
cluster_template_attr='volume_driver')
|
||||
self.add_parameter('external_network',
|
||||
cluster_template_attr='external_network_id',
|
||||
required=True)
|
||||
self.add_parameter('fixed_network',
|
||||
cluster_template_attr='fixed_network')
|
||||
self.add_parameter('fixed_subnet',
|
||||
cluster_template_attr='fixed_subnet')
|
||||
self.add_parameter('network_driver',
|
||||
cluster_template_attr='network_driver')
|
||||
self.add_parameter('tls_disabled',
|
||||
cluster_template_attr='tls_disabled',
|
||||
required=True)
|
||||
self.add_parameter('registry_enabled',
|
||||
cluster_template_attr='registry_enabled')
|
||||
self.add_parameter('docker_storage_driver',
|
||||
cluster_template_attr='docker_storage_driver')
|
||||
self.add_parameter('swarm_version',
|
||||
cluster_attr='coe_version')
|
||||
|
||||
self.add_output('api_address',
|
||||
cluster_attr='api_address',
|
||||
mapping_type=SwarmApiAddressOutputMapping)
|
||||
self.add_output('swarm_master_private',
|
||||
cluster_attr=None)
|
||||
self.add_output('swarm_nodes_private',
|
||||
cluster_attr=None)
|
||||
self.add_output('discovery_url',
|
||||
cluster_attr='discovery_url')
|
||||
|
||||
def get_nodegroup_param_maps(self, master_params=None, worker_params=None):
|
||||
master_params = master_params or dict()
|
||||
worker_params = worker_params or dict()
|
||||
master_params.update({
|
||||
'master_flavor': 'flavor_id',
|
||||
'master_image': 'image_id',
|
||||
'docker_volume_size': 'docker_volume_size'
|
||||
})
|
||||
worker_params.update({
|
||||
'number_of_nodes': 'node_count',
|
||||
'node_flavor': 'flavor_id',
|
||||
'node_image': 'image_id',
|
||||
'docker_volume_size': 'docker_volume_size'
|
||||
})
|
||||
return super(
|
||||
SwarmFedoraTemplateDefinition, self).get_nodegroup_param_maps(
|
||||
master_params=master_params, worker_params=worker_params)
|
||||
|
||||
def update_outputs(self, stack, cluster_template, cluster,
|
||||
nodegroups=None):
|
||||
nodegroups = nodegroups or [cluster.default_ng_worker,
|
||||
cluster.default_ng_master]
|
||||
|
||||
for nodegroup in nodegroups:
|
||||
if nodegroup.role == 'master':
|
||||
self.add_output(
|
||||
'swarm_masters', nodegroup_attr='node_addresses',
|
||||
nodegroup_uuid=nodegroup.uuid,
|
||||
mapping_type=template_def.NodeGroupOutputMapping)
|
||||
else:
|
||||
self.add_output(
|
||||
'swarm_nodes', nodegroup_attr='node_addresses',
|
||||
nodegroup_uuid=nodegroup.uuid,
|
||||
mapping_type=template_def.NodeGroupOutputMapping)
|
||||
self.add_output(
|
||||
'number_of_nodes', nodegroup_attr='node_count',
|
||||
nodegroup_uuid=nodegroup.uuid, is_stack_param=True,
|
||||
mapping_type=template_def.NodeGroupOutputMapping)
|
||||
super(SwarmFedoraTemplateDefinition,
|
||||
self).update_outputs(stack, cluster_template, cluster,
|
||||
nodegroups=nodegroups)
|
||||
|
||||
def get_params(self, context, cluster_template, cluster, **kwargs):
|
||||
extra_params = kwargs.pop('extra_params', {})
|
||||
extra_params['discovery_url'] = self.get_discovery_url(cluster)
|
||||
# HACK(apmelton) - This uses the user's bearer token, ideally
|
||||
# it should be replaced with an actual trust token with only
|
||||
# access to do what the template needs it to do.
|
||||
osc = self.get_osc(context)
|
||||
extra_params['magnum_url'] = osc.magnum_url()
|
||||
|
||||
label_list = ['flannel_network_cidr', 'flannel_backend',
|
||||
'flannel_network_subnetlen', 'rexray_preempt',
|
||||
'swarm_strategy']
|
||||
|
||||
extra_params['auth_url'] = context.auth_url
|
||||
extra_params['nodes_affinity_policy'] = \
|
||||
CONF.cluster.nodes_affinity_policy
|
||||
|
||||
# set docker_volume_type
|
||||
# use the configuration default if None provided
|
||||
docker_volume_type = cluster.labels.get(
|
||||
'docker_volume_type', CONF.cinder.default_docker_volume_type)
|
||||
extra_params['docker_volume_type'] = docker_volume_type
|
||||
|
||||
labels = self._get_relevant_labels(cluster, kwargs)
|
||||
|
||||
for label in label_list:
|
||||
extra_params[label] = labels.get(label)
|
||||
|
||||
if cluster_template.registry_enabled:
|
||||
extra_params['swift_region'] = CONF.docker_registry.swift_region
|
||||
extra_params['registry_container'] = (
|
||||
CONF.docker_registry.swift_registry_container)
|
||||
|
||||
return super(SwarmFedoraTemplateDefinition,
|
||||
self).get_params(context, cluster_template, cluster,
|
||||
extra_params=extra_params,
|
||||
**kwargs)
|
||||
|
||||
def get_env_files(self, cluster_template, cluster, nodegroup=None):
|
||||
env_files = []
|
||||
|
||||
template_def.add_priv_net_env_file(env_files, cluster_template,
|
||||
cluster)
|
||||
template_def.add_volume_env_file(env_files, cluster,
|
||||
nodegroup=nodegroup)
|
||||
template_def.add_lb_env_file(env_files, cluster)
|
||||
|
||||
return env_files
|
||||
|
||||
def get_scale_params(self, context, cluster, node_count,
|
||||
scale_manager=None, nodes_to_remove=None):
|
||||
scale_params = dict()
|
||||
scale_params['number_of_nodes'] = node_count
|
||||
return scale_params
|
@ -1,210 +0,0 @@
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
from magnum.drivers.heat import template_def
|
||||
from oslo_config import cfg
|
||||
from oslo_log import log as logging
|
||||
|
||||
CONF = cfg.CONF
|
||||
DOCKER_PORT = '2375'
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class SwarmModeApiAddressOutputMapping(template_def.OutputMapping):
|
||||
|
||||
def set_output(self, stack, cluster_template, cluster):
|
||||
if self.cluster_attr is None:
|
||||
return
|
||||
|
||||
output_value = self.get_output_value(stack, cluster)
|
||||
if output_value is not None:
|
||||
# Note(rocha): protocol should always be tcp as the docker
|
||||
# command client does not handle https (see bug #1604812).
|
||||
params = {
|
||||
'protocol': 'tcp',
|
||||
'address': output_value,
|
||||
'port': DOCKER_PORT,
|
||||
}
|
||||
value = "%(protocol)s://%(address)s:%(port)s" % params
|
||||
setattr(cluster, self.cluster_attr, value)
|
||||
|
||||
|
||||
class ServerAddressOutputMapping(template_def.NodeGroupOutputMapping):
|
||||
public_ip_output_key = None
|
||||
private_ip_output_key = None
|
||||
|
||||
def __init__(self, dummy_arg, nodegroup_attr=None, nodegroup_uuid=None):
|
||||
self.heat_output = self.public_ip_output_key
|
||||
self.nodegroup_attr = nodegroup_attr
|
||||
self.nodegroup_uuid = nodegroup_uuid
|
||||
self.is_stack_param = False
|
||||
|
||||
|
||||
class MasterAddressOutputMapping(ServerAddressOutputMapping):
|
||||
public_ip_output_key = ['swarm_primary_master',
|
||||
'swarm_secondary_masters']
|
||||
private_ip_output_key = ['swarm_primary_master_private',
|
||||
'swarm_secondary_masters_private']
|
||||
|
||||
def set_output(self, stack, cluster_template, cluster):
|
||||
if not cluster.floating_ip_enabled:
|
||||
self.heat_output = self.private_ip_output_key
|
||||
|
||||
LOG.debug("Using heat_output: %s", self.heat_output)
|
||||
_master_addresses = []
|
||||
for output in stack.to_dict().get('outputs', []):
|
||||
if output['output_key'] in self.heat_output:
|
||||
_master_addresses += output['output_value']
|
||||
|
||||
for ng in cluster.nodegroups:
|
||||
if ng.uuid == self.nodegroup_uuid:
|
||||
setattr(ng, self.nodegroup_attr, _master_addresses)
|
||||
|
||||
|
||||
class NodeAddressOutputMapping(ServerAddressOutputMapping):
|
||||
public_ip_output_key = 'swarm_nodes'
|
||||
private_ip_output_key = 'swarm_nodes_private'
|
||||
|
||||
def set_output(self, stack, cluster_template, cluster):
|
||||
if not cluster.floating_ip_enabled:
|
||||
self.heat_output = self.private_ip_output_key
|
||||
|
||||
LOG.debug("Using heat_output: %s", self.heat_output)
|
||||
super(NodeAddressOutputMapping,
|
||||
self).set_output(stack, cluster_template, cluster)
|
||||
|
||||
|
||||
class SwarmModeTemplateDefinition(template_def.BaseTemplateDefinition):
|
||||
"""Docker swarm mode template."""
|
||||
|
||||
def __init__(self):
|
||||
super(SwarmModeTemplateDefinition, self).__init__()
|
||||
self.add_parameter('cluster_uuid',
|
||||
cluster_attr='uuid',
|
||||
param_type=str)
|
||||
self.add_parameter('volume_driver',
|
||||
cluster_template_attr='volume_driver')
|
||||
self.add_parameter('external_network',
|
||||
cluster_template_attr='external_network_id',
|
||||
required=True)
|
||||
self.add_parameter('fixed_network',
|
||||
cluster_template_attr='fixed_network')
|
||||
self.add_parameter('fixed_subnet',
|
||||
cluster_template_attr='fixed_subnet')
|
||||
self.add_parameter('tls_disabled',
|
||||
cluster_template_attr='tls_disabled',
|
||||
required=True)
|
||||
self.add_parameter('docker_storage_driver',
|
||||
cluster_template_attr='docker_storage_driver')
|
||||
|
||||
self.add_output('api_address',
|
||||
cluster_attr='api_address',
|
||||
mapping_type=SwarmModeApiAddressOutputMapping)
|
||||
|
||||
def get_params(self, context, cluster_template, cluster, **kwargs):
|
||||
extra_params = kwargs.pop('extra_params', {})
|
||||
# HACK(apmelton) - This uses the user's bearer token, ideally
|
||||
# it should be replaced with an actual trust token with only
|
||||
# access to do what the template needs it to do.
|
||||
osc = self.get_osc(context)
|
||||
        # NOTE: Version discovery can sometimes fail when Magnum cannot talk
        # to Keystone via the magnum_client.endpoint_type intended for
        # cluster instances, either because that endpoint is not reachable
        # from the controller or because CA certs are missing for a
        # TLS-enabled interface. The returned url may then not be suffixed
        # with /v1, so append the suffix here so that instances can still
        # talk to Magnum.
|
||||
magnum_url = osc.magnum_url()
|
||||
extra_params['magnum_url'] = magnum_url + ('' if
|
||||
magnum_url.endswith('/v1')
|
||||
else '/v1')
|
||||
|
||||
label_list = ['rexray_preempt', 'availability_zone']
|
||||
|
||||
extra_params['auth_url'] = context.auth_url
|
||||
extra_params['nodes_affinity_policy'] = \
|
||||
CONF.cluster.nodes_affinity_policy
|
||||
|
||||
labels = self._get_relevant_labels(cluster, kwargs)
|
||||
|
||||
for label in label_list:
|
||||
extra_params[label] = labels.get(label)
|
||||
|
||||
# set docker_volume_type
|
||||
# use the configuration default if None provided
|
||||
docker_volume_type = cluster.labels.get(
|
||||
'docker_volume_type', CONF.cinder.default_docker_volume_type)
|
||||
extra_params['docker_volume_type'] = docker_volume_type
|
||||
|
||||
return super(SwarmModeTemplateDefinition,
|
||||
self).get_params(context, cluster_template, cluster,
|
||||
extra_params=extra_params,
|
||||
**kwargs)
|
||||
|
||||
def get_nodegroup_param_maps(self, master_params=None, worker_params=None):
|
||||
master_params = master_params or dict()
|
||||
worker_params = worker_params or dict()
|
||||
master_params.update({
|
||||
'master_flavor': 'flavor_id',
|
||||
'master_image': 'image_id',
|
||||
'docker_volume_size': 'docker_volume_size'
|
||||
})
|
||||
worker_params.update({
|
||||
'number_of_nodes': 'node_count',
|
||||
'node_flavor': 'flavor_id',
|
||||
'node_image': 'image_id',
|
||||
'docker_volume_size': 'docker_volume_size'
|
||||
})
|
||||
return super(
|
||||
SwarmModeTemplateDefinition, self).get_nodegroup_param_maps(
|
||||
master_params=master_params, worker_params=worker_params)
|
||||
|
||||
def update_outputs(self, stack, cluster_template, cluster,
|
||||
nodegroups=None):
|
||||
nodegroups = nodegroups or [cluster.default_ng_worker,
|
||||
cluster.default_ng_master]
|
||||
for nodegroup in nodegroups:
|
||||
if nodegroup.role == 'master':
|
||||
self.add_output('swarm_masters',
|
||||
nodegroup_attr='node_addresses',
|
||||
nodegroup_uuid=nodegroup.uuid,
|
||||
mapping_type=MasterAddressOutputMapping)
|
||||
else:
|
||||
self.add_output('swarm_nodes',
|
||||
nodegroup_attr='node_addresses',
|
||||
nodegroup_uuid=nodegroup.uuid,
|
||||
mapping_type=NodeAddressOutputMapping)
|
||||
self.add_output(
|
||||
'number_of_nodes', nodegroup_attr='node_count',
|
||||
nodegroup_uuid=nodegroup.uuid, is_stack_param=True,
|
||||
mapping_type=template_def.NodeGroupOutputMapping)
|
||||
super(SwarmModeTemplateDefinition,
|
||||
self).update_outputs(stack, cluster_template, cluster,
|
||||
nodegroups=nodegroups)
|
||||
|
||||
def get_env_files(self, cluster_template, cluster, nodegroup=None):
|
||||
env_files = []
|
||||
|
||||
template_def.add_priv_net_env_file(env_files, cluster_template,
|
||||
cluster)
|
||||
template_def.add_volume_env_file(env_files, cluster,
|
||||
nodegroup=nodegroup)
|
||||
template_def.add_lb_env_file(env_files, cluster)
|
||||
template_def.add_fip_env_file(env_files, cluster)
|
||||
|
||||
return env_files
|
||||
|
||||
def get_scale_params(self, context, cluster, node_count,
|
||||
scale_manager=None, nodes_to_remove=None):
|
||||
scale_params = dict()
|
||||
scale_params['number_of_nodes'] = node_count
|
||||
return scale_params
|
@ -1,39 +0,0 @@
|
||||
# Copyright 2016 Rackspace Inc. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
from magnum.drivers.heat import driver
|
||||
from magnum.drivers.swarm_fedora_atomic_v1 import monitor
|
||||
from magnum.drivers.swarm_fedora_atomic_v1 import template_def
|
||||
|
||||
|
||||
class Driver(driver.HeatDriver):
|
||||
|
||||
@property
|
||||
def provides(self):
|
||||
return [
|
||||
{'server_type': 'vm',
|
||||
'os': 'fedora-atomic',
|
||||
'coe': 'swarm'},
|
||||
]
|
||||
|
||||
def get_template_definition(self):
|
||||
return template_def.AtomicSwarmTemplateDefinition()
|
||||
|
||||
def get_monitor(self, context, cluster):
|
||||
return monitor.SwarmMonitor(context, cluster)
|
||||
|
||||
def upgrade_cluster(self, context, cluster, cluster_template,
|
||||
max_batch_size, nodegroup, scale_manager=None,
|
||||
rollback=False):
|
||||
raise NotImplementedError("Must implement 'upgrade_cluster'")
|
@ -1,18 +0,0 @@
|
||||
FROM fedora:23
|
||||
MAINTAINER Ton Ngo "ton@us.ibm.com"
|
||||
WORKDIR /
|
||||
RUN dnf -y install openvswitch \
|
||||
openstack-neutron-ml2 \
|
||||
openstack-neutron-openvswitch \
|
||||
bridge-utils \
|
||||
git \
|
||||
&& dnf clean all
|
||||
RUN cd /opt \
|
||||
&& git clone https://git.openstack.org/openstack/neutron \
|
||||
&& cp neutron/etc/policy.yaml /etc/neutron/. \
|
||||
&& rm -rf neutron \
|
||||
&& dnf -y remove git
|
||||
VOLUME /var/run/openvswitch
|
||||
ADD run_openvswitch_neutron.sh /usr/bin/run_openvswitch_neutron.sh
|
||||
|
||||
CMD ["/usr/bin/run_openvswitch_neutron.sh"]
|
@ -1,68 +0,0 @@
|
||||
===================
|
||||
Neutron Openvswitch
|
||||
===================
|
||||
|
||||
This Dockerfile creates a Docker image based on Fedora 23 that runs
|
||||
Openvswitch and the Neutron L2 agent for Openvswitch. This container
|
||||
image is used by Magnum when a Swarm cluster is deployed with the
|
||||
attribute::
|
||||
|
||||
--network-driver=kuryr
|
||||
|
||||
Magnum deploys this container on each Swarm node along with the
|
||||
Kuryr container to support Docker advanced networking based on
|
||||
the `Container Networking Model
|
||||
<https://github.com/docker/libnetwork/blob/master/docs/design.md>`_.
|
||||
|
||||
To build the image, run this command in the same directory as the
|
||||
Dockerfile::
|
||||
|
||||
docker build -t openstackmagnum/fedora23-neutron-ovs:testing .
|
||||
|
||||
This image is available on Docker Hub as::
|
||||
|
||||
openstackmagnum/fedora23-neutron-ovs:testing
|
||||
|
||||
To update the image with a new build::
|
||||
|
||||
docker push openstackmagnum/fedora23-neutron-ovs:testing
|
||||
|
||||
The 'testing' tag may be replaced with 'latest' or another tag as
needed.
|
||||
|
||||
This image is intended to run on the Fedora Atomic public image which
|
||||
by default does not have these packages installed. The common
|
||||
practice for Atomic OS is to run new packages in containers rather
|
||||
than installing them in the OS.
|
||||
|
||||
For the Neutron agent, you will need to provide 3 files at these
|
||||
locations:
|
||||
|
||||
- /etc/neutron/neutron.conf
|
||||
- /etc/neutron/policy.yaml
|
||||
- /etc/neutron/plugins/ml2/ml2_conf.ini
|
||||
|
||||
These files are typically installed in the same locations on the
|
||||
Neutron controller node. The policy.yaml file is copied into the
|
||||
Docker image because it is fairly static and does not require
|
||||
customization for the cluster. If it is changed in the Neutron master
|
||||
repo, you just need to rebuild the Docker image to update the file.
|
||||
Magnum will create the other 2 files on each cluster node in the
|
||||
directory /etc/kuryr and map them to the proper directories in
|
||||
the container using the Docker -v option.
|
||||
|
||||
Since Openvswitch needs to operate on the host network namespace,
the Docker container will need the --net=host option.
|
||||
The /var/run/openvswitch directory is also mapped to the cluster node
|
||||
so that the Kuryr container can talk to openvswitch.
|
||||
To run the image from Fedora Atomic::
|
||||
|
||||
docker run --net=host \
|
||||
--cap-add=NET_ADMIN \
|
||||
--privileged=true \
|
||||
-v /var/run/openvswitch:/var/run/openvswitch \
|
||||
-v /lib/modules:/lib/modules:ro \
|
||||
-v /etc/kuryr/neutron.conf:/etc/neutron/neutron.conf \
|
||||
-v /etc/kuryr/ml2_conf.ini:/etc/neutron/plugins/ml2/ml2_conf.ini \
|
||||
--name openvswitch-agent \
|
||||
openstackmagnum/fedora23-neutron-ovs:testing
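To verify that the agent came up (a hypothetical check, not part of the
original instructions), inspect the container logs and the Openvswitch
state from inside the container::

    docker logs openvswitch-agent
    docker exec openvswitch-agent ovs-vsctl show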
|
@ -1,4 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
/usr/share/openvswitch/scripts/ovs-ctl start --system-id=random
|
||||
/usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/openvswitch-agent.log
|
@ -1,109 +0,0 @@
|
||||
# Copyright 2015 Huawei Technologies Co.,LTD.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from oslo_log import log
|
||||
|
||||
from magnum.common import docker_utils
|
||||
from magnum.conductor import monitors
|
||||
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
|
||||
class SwarmMonitor(monitors.MonitorBase):
|
||||
|
||||
def __init__(self, context, cluster):
|
||||
super(SwarmMonitor, self).__init__(context, cluster)
|
||||
self.data = {}
|
||||
self.data['nodes'] = []
|
||||
self.data['containers'] = []
|
||||
|
||||
@property
|
||||
def metrics_spec(self):
|
||||
return {
|
||||
'memory_util': {
|
||||
'unit': '%',
|
||||
'func': 'compute_memory_util',
|
||||
},
|
||||
}
|
||||
|
||||
def pull_data(self):
|
||||
with docker_utils.docker_for_cluster(self.context,
|
||||
self.cluster) as docker:
|
||||
system_info = docker.info()
|
||||
self.data['nodes'] = self._parse_node_info(system_info)
|
||||
|
||||
# pull data from each container
|
||||
containers = []
|
||||
for container in docker.containers(all=True):
|
||||
try:
|
||||
container = docker.inspect_container(container['Id'])
|
||||
except Exception as e:
|
||||
LOG.warning("Ignore error [%(e)s] when inspecting "
|
||||
"container %(container_id)s.",
|
||||
{'e': e, 'container_id': container['Id']},
|
||||
exc_info=True)
|
||||
containers.append(container)
|
||||
self.data['containers'] = containers
|
||||
|
||||
def compute_memory_util(self):
|
||||
mem_total = 0
|
||||
for node in self.data['nodes']:
|
||||
mem_total += node['MemTotal']
|
||||
mem_reserved = 0
|
||||
for container in self.data['containers']:
|
||||
mem_reserved += container['HostConfig']['Memory']
|
||||
|
||||
if mem_total == 0:
|
||||
return 0
|
||||
else:
|
||||
return mem_reserved * 100 / mem_total
|
||||
|
||||
def _parse_node_info(self, system_info):
|
||||
"""Parse system_info to retrieve memory size of each node.
|
||||
|
||||
:param system_info: The output returned by docker.info(). Example:
|
||||
{
|
||||
u'Debug': False,
|
||||
u'NEventsListener': 0,
|
||||
u'DriverStatus': [
|
||||
[u'\x08Strategy', u'spread'],
|
||||
[u'\x08Filters', u'...'],
|
||||
[u'\x08Nodes', u'2'],
|
||||
[u'node1', u'10.0.0.4:2375'],
|
||||
[u' \u2514 Containers', u'1'],
|
||||
[u' \u2514 Reserved CPUs', u'0 / 1'],
|
||||
[u' \u2514 Reserved Memory', u'0 B / 2.052 GiB'],
|
||||
[u'node2', u'10.0.0.3:2375'],
|
||||
[u' \u2514 Containers', u'2'],
|
||||
[u' \u2514 Reserved CPUs', u'0 / 1'],
|
||||
[u' \u2514 Reserved Memory', u'0 B / 2.052 GiB']
|
||||
],
|
||||
u'Containers': 3
|
||||
}
|
||||
        :return: Memory size of each node. Example:
|
||||
[{'MemTotal': 2203318222.848},
|
||||
{'MemTotal': 2203318222.848}]
|
||||
"""
|
||||
nodes = []
|
||||
for info in system_info['DriverStatus']:
|
||||
key = info[0]
|
||||
value = info[1]
|
||||
if key == u' \u2514 Reserved Memory':
|
||||
memory = value # Example: '0 B / 2.052 GiB'
|
||||
memory = memory.split('/')[1].strip() # Example: '2.052 GiB'
|
||||
memory = memory.split(' ')[0] # Example: '2.052'
|
||||
memory = float(memory) * 1024 * 1024 * 1024
|
||||
nodes.append({'MemTotal': memory})
|
||||
return nodes
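# Hypothetical illustration (not part of the original monitor.py): the
# memory_util metric above is just reserved container memory expressed as a
# percentage of the total memory reported for all nodes, e.g.:
#
#   nodes = [{'MemTotal': 2203318222.848}, {'MemTotal': 2203318222.848}]
#   containers = [{'HostConfig': {'Memory': 536870912}}]
#   mem_total = sum(n['MemTotal'] for n in nodes)
#   mem_reserved = sum(c['HostConfig']['Memory'] for c in containers)
#   mem_reserved * 100 / mem_total   # ~12.2 (percent reserved)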
|
@ -1,29 +0,0 @@
|
||||
# Copyright 2016 Rackspace Inc. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
import os
|
||||
|
||||
from magnum.drivers.heat import swarm_fedora_template_def as sftd
|
||||
|
||||
|
||||
class AtomicSwarmTemplateDefinition(sftd.SwarmFedoraTemplateDefinition):
|
||||
"""Docker swarm template for a Fedora Atomic VM."""
|
||||
|
||||
@property
|
||||
def driver_module_path(self):
|
||||
return __name__[:__name__.rindex('.')]
|
||||
|
||||
@property
|
||||
def template_path(self):
|
||||
return os.path.join(os.path.dirname(os.path.realpath(__file__)),
|
||||
'templates/cluster.yaml')
|
@ -1,202 +0,0 @@
|
||||
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
@ -1,107 +0,0 @@
|
||||
A Docker swarm cluster with Heat
|
||||
==============================
|
||||
|
||||
These [Heat][] templates will deploy an *N*-node [swarm][] cluster,
|
||||
where *N* is the value of the `number_of_nodes` parameter you
|
||||
specify when creating the stack.
|
||||
|
||||
[heat]: https://wiki.openstack.org/wiki/Heat
|
||||
[swarm]: https://github.com/docker/swarm/
|
||||
|
||||
## Requirements
|
||||
|
||||
### OpenStack
|
||||
|
||||
These templates will work with the Juno version of Heat.
|
||||
|
||||
### Guest image
|
||||
|
||||
These templates will work with either CentOS Atomic Host or Fedora 21
|
||||
Atomic.
|
||||
|
||||
## Creating the stack
|
||||
|
||||
First, you must create a swarm token, which is used to uniquely identify
|
||||
the cluster to the global discovery service. This can be done by issuing
|
||||
a create call to the swarm CLI. Alternatively, if you have access to
|
||||
Docker you can use the dockerswarm/swarm image.
|
||||
|
||||
$ swarm create
|
||||
afeb445bcb2f573aeb8ff3a199785f45
|
||||
|
||||
$ docker run dockerswarm/swarm create
|
||||
d8cdfe5128af6e1075b34aa06ff1cc2c
|
||||
|
||||
Create an environment file `local.yaml` with parameters specific to
your environment:
|
||||
|
||||
parameters:
|
||||
ssh_key_name: testkey
|
||||
external_network: 028d70dd-67b8-4901-8bdd-0c62b06cce2d
|
||||
dns_nameserver: 192.168.200.1
|
||||
server_image: fedora-atomic-latest
|
||||
discovery_url: token://d8cdfe5128af6e1075b34aa06ff1cc2c
|
||||
|
||||
And then create the stack, referencing that environment file:
|
||||
|
||||
heat stack-create -f swarm.yaml -e local.yaml my-swarm-cluster
|
||||
|
||||
You must provide values for:
|
||||
|
||||
- `ssh_key_name`
|
||||
- `external_network`
|
||||
- `server_image`
|
||||
- `discovery_url`
|
||||
|
||||
## Interacting with Swarm
|
||||
|
||||
The Docker CLI interacts with the cluster through the swarm master
|
||||
listening on port 2376.
|
||||
|
||||
You can get the ip address of the swarm master using the `heat
|
||||
output-show` command:
|
||||
|
||||
$ heat output-show my-swarm-cluster swarm_master
|
||||
"192.168.200.86"
|
||||
|
||||
Provide the Docker CLI with the address for the swarm master.
|
||||
|
||||
$ docker -H tcp://192.168.200.86:2376 info
|
||||
Containers: 4
|
||||
Nodes: 3
|
||||
swarm-master: 10.0.0.1:2375
|
||||
swarm-node1: 10.0.0.2:2375
|
||||
swarm-node2: 10.0.0.3:2375
|
||||
|
||||
## Testing
|
||||
|
||||
You can test the swarm cluster with the Docker CLI by running a container.
|
||||
In the example below, a container is spawned in the cluster to ping 8.8.8.8.
|
||||
|
||||
$ docker -H tcp://192.168.200.86:2376 run -i cirros /bin/ping -c 4 8.8.8.8
|
||||
PING 8.8.8.8 (8.8.8.8): 56 data bytes
|
||||
64 bytes from 8.8.8.8: seq=0 ttl=127 time=40.749 ms
|
||||
64 bytes from 8.8.8.8: seq=1 ttl=127 time=46.264 ms
|
||||
64 bytes from 8.8.8.8: seq=2 ttl=127 time=42.808 ms
|
||||
64 bytes from 8.8.8.8: seq=3 ttl=127 time=42.270 ms
|
||||
|
||||
--- 8.8.8.8 ping statistics ---
|
||||
4 packets transmitted, 4 packets received, 0% packet loss
|
||||
round-trip min/avg/max = 40.749/43.022/46.264 ms
|
||||
|
||||
## License
|
||||
|
||||
Copyright 2014 Lars Kellogg-Stedman <lars@redhat.com>
|
||||
Copyright 2015 Rackspace Hosting
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use these files except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
@ -1,531 +0,0 @@
|
||||
heat_template_version: 2014-10-16
|
||||
|
||||
description: >
|
||||
This template will boot a Docker swarm cluster. A swarm cluster is made up
|
||||
of several master nodes, and N agent nodes. Every node in the cluster,
|
||||
including the master, is running a Docker daemon and a swarm agent
|
||||
  advertising it to the cluster. The master is running an additional swarm
|
||||
master container listening on port 2376. By default, the cluster is made
|
||||
up of one master node and one agent node.
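# Hypothetical usage note (not part of the original template): the cluster
# shape is driven by the number_of_masters and number_of_nodes parameters
# declared below, which a deployer can override in an environment file, e.g.:
#
#   parameters:
#     number_of_masters: 2
#     number_of_nodes: 3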
|
||||
|
||||
parameters:
|
||||
|
||||
#
|
||||
# REQUIRED PARAMETERS
|
||||
#
|
||||
is_cluster_stack:
|
||||
type: boolean
|
||||
default: false
|
||||
|
||||
ssh_key_name:
|
||||
type: string
|
||||
description: name of ssh key to be provisioned on our server
|
||||
default: ""
|
||||
|
||||
ssh_public_key:
|
||||
type: string
|
||||
description: The public ssh key to add in all nodes
|
||||
default: ""
|
||||
|
||||
external_network:
|
||||
type: string
|
||||
description: uuid/name of a network to use for floating ip addresses
|
||||
|
||||
fixed_network:
|
||||
type: string
|
||||
description: uuid/name of an existing network to use to provision machines
|
||||
default: ""
|
||||
|
||||
fixed_subnet:
|
||||
type: string
|
||||
description: uuid/name of an existing subnet to use to provision machines
|
||||
default: ""
|
||||
|
||||
discovery_url:
|
||||
type: string
|
||||
description: url provided for node discovery
|
||||
|
||||
cluster_uuid:
|
||||
type: string
|
||||
description: identifier for the cluster this template is generating
|
||||
|
||||
magnum_url:
|
||||
type: string
|
||||
description: endpoint to retrieve TLS certs from
|
||||
|
||||
master_image:
|
||||
type: string
|
||||
description: glance image used to boot the server
|
||||
|
||||
node_image:
|
||||
type: string
|
||||
description: glance image used to boot the server
|
||||
#
|
||||
# OPTIONAL PARAMETERS
|
||||
#
|
||||
master_flavor:
|
||||
type: string
|
||||
default: m1.small
|
||||
description: flavor to use when booting the swarm master
|
||||
|
||||
node_flavor:
|
||||
type: string
|
||||
default: m1.small
|
||||
description: flavor to use when booting the swarm node
|
||||
|
||||
dns_nameserver:
|
||||
type: comma_delimited_list
|
||||
description: address of a dns nameserver reachable in your environment
|
||||
default: 8.8.8.8
|
||||
|
||||
http_proxy:
|
||||
type: string
|
||||
description: http proxy address for docker
|
||||
default: ""
|
||||
|
||||
https_proxy:
|
||||
type: string
|
||||
description: https proxy address for docker
|
||||
default: ""
|
||||
|
||||
no_proxy:
|
||||
type: string
|
||||
description: no proxies for docker
|
||||
default: ""
|
||||
|
||||
number_of_masters:
|
||||
type: number
|
||||
description: how many swarm masters to spawn
|
||||
default: 1
|
||||
|
||||
number_of_nodes:
|
||||
type: number
|
||||
description: how many swarm nodes to spawn
|
||||
default: 1
|
||||
|
||||
fixed_subnet_cidr:
|
||||
type: string
|
||||
description: network range for fixed ip network
|
||||
default: "10.0.0.0/24"
|
||||
|
||||
tls_disabled:
|
||||
type: boolean
|
||||
description: whether or not to enable TLS
|
||||
default: False
|
||||
|
||||
verify_ca:
|
||||
type: boolean
|
||||
description: whether or not to validate certificate authority
|
||||
|
||||
network_driver:
|
||||
type: string
|
||||
description: network driver to use for instantiating container networks
|
||||
default: None
|
||||
|
||||
flannel_network_cidr:
|
||||
type: string
|
||||
description: network range for flannel overlay network
|
||||
default: 10.100.0.0/16
|
||||
|
||||
flannel_network_subnetlen:
|
||||
type: number
|
||||
description: size of subnet assigned to each master
|
||||
default: 24
|
||||
|
||||
flannel_backend:
|
||||
type: string
|
||||
description: >
|
||||
specify the backend for flannel, default udp backend
|
||||
default: "udp"
|
||||
constraints:
|
||||
- allowed_values: ["udp", "vxlan", "host-gw"]
|
||||
|
||||
docker_volume_size:
|
||||
type: number
|
||||
description: >
|
||||
size of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
default: 0
|
||||
|
||||
docker_volume_type:
|
||||
type: string
|
||||
description: >
|
||||
type of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_storage_driver:
|
||||
type: string
|
||||
description: docker storage driver name
|
||||
default: "devicemapper"
|
||||
|
||||
loadbalancing_protocol:
|
||||
type: string
|
||||
description: >
|
||||
      The protocol which is used for load balancing. If you want to change
      the tls_disabled option to 'True', please change this to "HTTP".
|
||||
default: TCP
|
||||
constraints:
|
||||
- allowed_values: ["TCP", "HTTP"]
|
||||
|
||||
swarm_port:
|
||||
type: number
|
||||
description: >
|
||||
      The port which is used by the swarm manager to provide the swarm service.
|
||||
default: 2376
|
||||
|
||||
swarm_version:
|
||||
type: string
|
||||
description: version of swarm used for swarm cluster
|
||||
default: 1.2.5
|
||||
|
||||
swarm_strategy:
|
||||
type: string
|
||||
description: >
|
||||
schedule strategy to be used by swarm manager
|
||||
default: "spread"
|
||||
|
||||
trustee_domain_id:
|
||||
type: string
|
||||
description: domain id of the trustee
|
||||
default: ""
|
||||
|
||||
trustee_user_id:
|
||||
type: string
|
||||
description: user id of the trustee
|
||||
default: ""
|
||||
|
||||
trustee_username:
|
||||
type: string
|
||||
description: username of the trustee
|
||||
default: ""
|
||||
|
||||
trustee_password:
|
||||
type: string
|
||||
description: password of the trustee
|
||||
default: ""
|
||||
hidden: true
|
||||
|
||||
trust_id:
|
||||
type: string
|
||||
description: id of the trust which is used by the trustee
|
||||
default: ""
|
||||
hidden: true
|
||||
|
||||
auth_url:
|
||||
type: string
|
||||
description: url for keystone
|
||||
|
||||
registry_enabled:
|
||||
type: boolean
|
||||
description: >
|
||||
Indicates whether the docker registry is enabled.
|
||||
default: false
|
||||
|
||||
registry_port:
|
||||
type: number
|
||||
description: port of registry service
|
||||
default: 5000
|
||||
|
||||
swift_region:
|
||||
type: string
|
||||
description: region of swift service
|
||||
default: ""
|
||||
|
||||
registry_container:
|
||||
type: string
|
||||
description: >
|
||||
name of swift container which docker registry stores images in
|
||||
default: "container"
|
||||
|
||||
registry_insecure:
|
||||
type: boolean
|
||||
description: >
|
||||
indicates whether to skip TLS verification between registry and backend storage
|
||||
default: true
|
||||
|
||||
registry_chunksize:
|
||||
type: number
|
||||
description: >
|
||||
      size of the data segments for the swift dynamic large objects
|
||||
default: 5242880
|
||||
|
||||
volume_driver:
|
||||
type: string
|
||||
description: volume driver to use for container storage
|
||||
default: ""
|
||||
constraints:
|
||||
- allowed_values: ["","rexray"]
|
||||
|
||||
rexray_preempt:
|
||||
type: string
|
||||
description: >
|
||||
enables any host to take control of a volume irrespective of whether
|
||||
other hosts are using the volume
|
||||
default: "false"
|
||||
|
||||
openstack_ca:
|
||||
type: string
|
||||
hidden: true
|
||||
description: The OpenStack CA certificate to install on the node.
|
||||
|
||||
nodes_affinity_policy:
|
||||
type: string
|
||||
description: >
|
||||
affinity policy for nodes server group
|
||||
constraints:
|
||||
- allowed_values: ["affinity", "anti-affinity", "soft-affinity",
|
||||
"soft-anti-affinity"]
|
||||
|
||||
resources:
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# network resources. allocate a network and router for our server.
|
||||
# it would also be possible to take advantage of existing network
|
||||
# resources (and have the deployer provide network and subnet ids,
|
||||
# etc, as parameters), but I wanted to minmize the amount of
|
||||
# configuration necessary to make this go.
|
||||
|
||||
network:
|
||||
type: ../../common/templates/network.yaml
|
||||
properties:
|
||||
existing_network: {get_param: fixed_network}
|
||||
existing_subnet: {get_param: fixed_subnet}
|
||||
private_network_cidr: {get_param: fixed_subnet_cidr}
|
||||
dns_nameserver: {get_param: dns_nameserver}
|
||||
external_network: {get_param: external_network}
|
||||
|
||||
api_lb:
|
||||
type: ../../common/templates/lb_api.yaml
|
||||
properties:
|
||||
fixed_subnet: {get_attr: [network, fixed_subnet]}
|
||||
external_network: {get_param: external_network}
|
||||
protocol: {get_param: loadbalancing_protocol}
|
||||
port: {get_param: swarm_port}
|
||||
|
||||
etcd_lb:
|
||||
type: ../../common/templates/lb_etcd.yaml
|
||||
properties:
|
||||
fixed_subnet: {get_attr: [network, fixed_subnet]}
|
||||
protocol: {get_param: loadbalancing_protocol}
|
||||
port: 2379
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# security groups. we need to permit network traffic of various
|
||||
# sorts.
|
||||
#
|
||||
|
||||
secgroup_swarm_manager:
|
||||
type: "OS::Neutron::SecurityGroup"
|
||||
properties:
|
||||
rules:
|
||||
- protocol: icmp
|
||||
- protocol: tcp
|
||||
port_range_min: 22
|
||||
port_range_max: 22
|
||||
- protocol: tcp
|
||||
port_range_min: 2376
|
||||
port_range_max: 2376
|
||||
- protocol: tcp
|
||||
remote_ip_prefix: {get_param: fixed_subnet_cidr}
|
||||
port_range_min: 1
|
||||
port_range_max: 65535
|
||||
- protocol: udp
|
||||
port_range_min: 53
|
||||
port_range_max: 53
|
||||
|
||||
secgroup_swarm_node:
|
||||
type: "OS::Neutron::SecurityGroup"
|
||||
properties:
|
||||
rules:
|
||||
- protocol: icmp
|
||||
- protocol: tcp
|
||||
- protocol: udp
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# resources that expose the IPs of either the swarm master or a given
|
||||
# LBaaS pool depending on whether LBaaS is enabled for the cluster.
|
||||
#
|
||||
|
||||
api_address_lb_switch:
|
||||
type: Magnum::ApiGatewaySwitcher
|
||||
properties:
|
||||
pool_public_ip: {get_attr: [api_lb, floating_address]}
|
||||
pool_private_ip: {get_attr: [api_lb, address]}
|
||||
master_public_ip: {get_attr: [swarm_masters, resource.0.swarm_master_external_ip]}
|
||||
master_private_ip: {get_attr: [swarm_masters, resource.0.swarm_master_ip]}
|
||||
|
||||
etcd_address_lb_switch:
|
||||
type: Magnum::ApiGatewaySwitcher
|
||||
properties:
|
||||
pool_private_ip: {get_attr: [etcd_lb, address]}
|
||||
master_private_ip: {get_attr: [swarm_masters, resource.0.swarm_master_ip]}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# resources that expose one server group for each master and worker nodes
|
||||
# separately.
|
||||
#
|
||||
|
||||
master_nodes_server_group:
|
||||
type: OS::Nova::ServerGroup
|
||||
properties:
|
||||
policies: [{get_param: nodes_affinity_policy}]
|
||||
|
||||
worker_nodes_server_group:
|
||||
type: OS::Nova::ServerGroup
|
||||
properties:
|
||||
policies: [{get_param: nodes_affinity_policy}]
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# Swarm manager is responsible for the entire cluster and manages the
|
||||
# resources of multiple Docker hosts at scale.
|
||||
  # It supports high availability by creating a primary manager and multiple
|
||||
# replica instances.
|
||||
|
||||
swarm_masters:
|
||||
type: "OS::Heat::ResourceGroup"
|
||||
depends_on:
|
||||
- network
|
||||
properties:
|
||||
count: {get_param: number_of_masters}
|
||||
resource_def:
|
||||
type: swarmmaster.yaml
|
||||
properties:
|
||||
name:
|
||||
list_join:
|
||||
- '-'
|
||||
- [{ get_param: 'OS::stack_name' }, 'master', '%index%']
|
||||
ssh_key_name: {get_param: ssh_key_name}
|
||||
server_image: {get_param: master_image}
|
||||
server_flavor: {get_param: master_flavor}
|
||||
docker_volume_size: {get_param: docker_volume_size}
|
||||
docker_volume_type: {get_param: docker_volume_type}
|
||||
docker_storage_driver: {get_param: docker_storage_driver}
|
||||
fixed_network_id: {get_attr: [network, fixed_network]}
|
||||
fixed_subnet_id: {get_attr: [network, fixed_subnet]}
|
||||
external_network: {get_param: external_network}
|
||||
discovery_url: {get_param: discovery_url}
|
||||
http_proxy: {get_param: http_proxy}
|
||||
https_proxy: {get_param: https_proxy}
|
||||
no_proxy: {get_param: no_proxy}
|
||||
swarm_api_ip: {get_attr: [api_lb, address]}
|
||||
cluster_uuid: {get_param: cluster_uuid}
|
||||
magnum_url: {get_param: magnum_url}
|
||||
tls_disabled: {get_param: tls_disabled}
|
||||
verify_ca: {get_param: verify_ca}
|
||||
secgroup_swarm_master_id: {get_resource: secgroup_swarm_manager}
|
||||
network_driver: {get_param: network_driver}
|
||||
flannel_network_cidr: {get_param: flannel_network_cidr}
|
||||
flannel_network_subnetlen: {get_param: flannel_network_subnetlen}
|
||||
flannel_backend: {get_param: flannel_backend}
|
||||
swarm_port: {get_param: swarm_port}
|
||||
api_pool_id: {get_attr: [api_lb, pool_id]}
|
||||
etcd_pool_id: {get_attr: [etcd_lb, pool_id]}
|
||||
etcd_server_ip: {get_attr: [etcd_lb, address]}
|
||||
api_ip_address: {get_attr: [api_lb, floating_address]}
|
||||
swarm_version: {get_param: swarm_version}
|
||||
swarm_strategy: {get_param: swarm_strategy}
|
||||
trustee_user_id: {get_param: trustee_user_id}
|
||||
trustee_password: {get_param: trustee_password}
|
||||
trust_id: {get_param: trust_id}
|
||||
auth_url: {get_param: auth_url}
|
||||
volume_driver: {get_param: volume_driver}
|
||||
rexray_preempt: {get_param: rexray_preempt}
|
||||
openstack_ca: {get_param: openstack_ca}
|
||||
nodes_server_group_id: {get_resource: master_nodes_server_group}
|
||||
|
||||
swarm_nodes:
|
||||
type: "OS::Heat::ResourceGroup"
|
||||
depends_on:
|
||||
- network
|
||||
properties:
|
||||
count: {get_param: number_of_nodes}
|
||||
resource_def:
|
||||
type: swarmnode.yaml
|
||||
properties:
|
||||
name:
|
||||
list_join:
|
||||
- '-'
|
||||
- [{ get_param: 'OS::stack_name' }, 'node', '%index%']
|
||||
ssh_key_name: {get_param: ssh_key_name}
|
||||
server_image: {get_param: node_image}
|
||||
server_flavor: {get_param: node_flavor}
|
||||
docker_volume_size: {get_param: docker_volume_size}
|
||||
docker_volume_type: {get_param: docker_volume_type}
|
||||
docker_storage_driver: {get_param: docker_storage_driver}
|
||||
fixed_network_id: {get_attr: [network, fixed_network]}
|
||||
fixed_subnet_id: {get_attr: [network, fixed_subnet]}
|
||||
external_network: {get_param: external_network}
|
||||
http_proxy: {get_param: http_proxy}
|
||||
https_proxy: {get_param: https_proxy}
|
||||
no_proxy: {get_param: no_proxy}
|
||||
swarm_api_ip: {get_attr: [api_address_lb_switch, private_ip]}
|
||||
cluster_uuid: {get_param: cluster_uuid}
|
||||
magnum_url: {get_param: magnum_url}
|
||||
tls_disabled: {get_param: tls_disabled}
|
||||
verify_ca: {get_param: verify_ca}
|
||||
secgroup_swarm_node_id: {get_resource: secgroup_swarm_node}
|
||||
flannel_network_cidr: {get_param: flannel_network_cidr}
|
||||
network_driver: {get_param: network_driver}
|
||||
etcd_server_ip: {get_attr: [etcd_address_lb_switch, private_ip]}
|
||||
api_ip_address: {get_attr: [api_address_lb_switch, public_ip]}
|
||||
swarm_version: {get_param: swarm_version}
|
||||
trustee_domain_id: {get_param: trustee_domain_id}
|
||||
trustee_user_id: {get_param: trustee_user_id}
|
||||
trustee_username: {get_param: trustee_username}
|
||||
trustee_password: {get_param: trustee_password}
|
||||
trust_id: {get_param: trust_id}
|
||||
auth_url: {get_param: auth_url}
|
||||
registry_enabled: {get_param: registry_enabled}
|
||||
registry_port: {get_param: registry_port}
|
||||
swift_region: {get_param: swift_region}
|
||||
registry_container: {get_param: registry_container}
|
||||
registry_insecure: {get_param: registry_insecure}
|
||||
registry_chunksize: {get_param: registry_chunksize}
|
||||
volume_driver: {get_param: volume_driver}
|
||||
rexray_preempt: {get_param: rexray_preempt}
|
||||
openstack_ca: {get_param: openstack_ca}
|
||||
nodes_server_group_id: {get_resource: worker_nodes_server_group}
|
||||
|
||||
outputs:

  api_address:
    value:
      str_replace:
        template: api_ip_address
        params:
          api_ip_address: {get_attr: [api_address_lb_switch, public_ip]}
    description: >
      This is the API endpoint of the Swarm masters. Use this to access
      the Swarm API server from outside the cluster.

  swarm_masters_private:
    value: {get_attr: [swarm_masters, swarm_master_ip]}
    description: >
      This is a list of the "private" addresses of all the Swarm masters.

  swarm_masters:
    value: {get_attr: [swarm_masters, swarm_master_external_ip]}
    description: >
      This is a list of "public" ip addresses of all Swarm masters.
      Use these addresses to log into the Swarm masters via ssh.

  swarm_nodes_private:
    value: {get_attr: [swarm_nodes, swarm_node_ip]}
    description: >
      This is a list of the "private" addresses of all the Swarm nodes.

  swarm_nodes:
    value: {get_attr: [swarm_nodes, swarm_node_external_ip]}
    description: >
      This is a list of the "public" addresses of all the Swarm nodes. Use
      these addresses to, e.g., log into the nodes.

  discovery_url:
    value: {get_param: discovery_url}
    description: >
      This is the discovery url for the Swarm cluster.
@ -1,519 +0,0 @@
|
||||
heat_template_version: 2014-10-16
|
||||
|
||||
description: >
|
||||
This is a nested stack that defines a swarm master node. A swarm master node is
|
||||
running a Docker daemon and a swarm manager container listening on port 2376.
|
||||
|
||||
parameters:
|
||||
|
||||
name:
|
||||
type: string
|
||||
description: server name
|
||||
|
||||
ssh_key_name:
|
||||
type: string
|
||||
description: name of ssh key to be provisioned on our server
|
||||
|
||||
docker_volume_size:
|
||||
type: number
|
||||
description: >
|
||||
size of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_volume_type:
|
||||
type: string
|
||||
description: >
|
||||
type of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_storage_driver:
|
||||
type: string
|
||||
description: docker storage driver name
|
||||
|
||||
external_network:
|
||||
type: string
|
||||
description: uuid/name of a network to use for floating ip addresses
|
||||
|
||||
discovery_url:
|
||||
type: string
|
||||
description: url provided for node discovery
|
||||
|
||||
cluster_uuid:
|
||||
type: string
|
||||
description: identifier for the cluster this template is generating
|
||||
|
||||
magnum_url:
|
||||
type: string
|
||||
description: endpoint to retrieve TLS certs from
|
||||
|
||||
fixed_network_id:
|
||||
type: string
|
||||
description: Network from which to allocate fixed addresses.
|
||||
|
||||
fixed_subnet_id:
|
||||
type: string
|
||||
description: Subnet from which to allocate fixed addresses.
|
||||
|
||||
swarm_api_ip:
|
||||
type: string
|
||||
description: swarm master's api server ip address
|
||||
default: ""
|
||||
|
||||
api_ip_address:
|
||||
type: string
|
||||
description: swarm master's api server public ip address
|
||||
default: ""
|
||||
|
||||
server_image:
|
||||
type: string
|
||||
description: glance image used to boot the server
|
||||
|
||||
server_flavor:
|
||||
type: string
|
||||
description: flavor to use when booting the server
|
||||
|
||||
http_proxy:
|
||||
type: string
|
||||
description: http proxy address for docker
|
||||
|
||||
https_proxy:
|
||||
type: string
|
||||
description: https proxy address for docker
|
||||
|
||||
no_proxy:
|
||||
type: string
|
||||
description: no proxies for docker
|
||||
|
||||
tls_disabled:
|
||||
type: boolean
|
||||
description: whether or not to enable TLS
|
||||
|
||||
verify_ca:
|
||||
type: boolean
|
||||
description: whether or not to validate certificate authority
|
||||
|
||||
network_driver:
|
||||
type: string
|
||||
description: network driver to use for instantiating container networks
|
||||
|
||||
flannel_network_cidr:
|
||||
type: string
|
||||
description: network range for flannel overlay network
|
||||
|
||||
flannel_network_subnetlen:
|
||||
type: number
|
||||
description: size of subnet assigned to each master
|
||||
|
||||
flannel_backend:
|
||||
type: string
|
||||
description: >
|
||||
specify the backend for flannel, default udp backend
|
||||
constraints:
|
||||
- allowed_values: ["udp", "vxlan", "host-gw"]
|
||||
|
||||
swarm_version:
|
||||
type: string
|
||||
description: version of swarm used for swarm cluster
|
||||
|
||||
swarm_strategy:
|
||||
type: string
|
||||
description: >
|
||||
schedule strategy to be used by swarm manager
|
||||
constraints:
|
||||
- allowed_values: ["spread", "binpack", "random"]
|
||||
|
||||
secgroup_swarm_master_id:
|
||||
type: string
|
||||
description: ID of the security group for swarm master.
|
||||
|
||||
swarm_port:
|
||||
type: number
|
||||
description: >
|
||||
The port which is used by the swarm manager to provide the swarm service.
|
||||
|
||||
api_pool_id:
|
||||
type: string
|
||||
description: ID of the load balancer pool of swarm master server.
|
||||
|
||||
etcd_pool_id:
|
||||
type: string
|
||||
description: ID of the load balancer pool of etcd server.
|
||||
|
||||
etcd_server_ip:
|
||||
type: string
|
||||
description: ip address of the load balancer pool of etcd server.
|
||||
default: ""
|
||||
|
||||
trustee_user_id:
|
||||
type: string
|
||||
description: user id of the trustee
|
||||
|
||||
trustee_password:
|
||||
type: string
|
||||
description: password of the trustee
|
||||
hidden: true
|
||||
|
||||
trust_id:
|
||||
type: string
|
||||
description: id of the trust which is used by the trustee
|
||||
hidden: true
|
||||
|
||||
auth_url:
|
||||
type: string
|
||||
description: url for keystone
|
||||
|
||||
volume_driver:
|
||||
type: string
|
||||
description: volume driver to use for container storage
|
||||
default: ""
|
||||
|
||||
rexray_preempt:
|
||||
type: string
|
||||
description: >
|
||||
enables any host to take control of a volume irrespective of whether
|
||||
other hosts are using the volume
|
||||
default: "false"
|
||||
|
||||
openstack_ca:
|
||||
type: string
|
||||
description: The OpenStack CA certificate to install on the node.
|
||||
|
||||
nodes_server_group_id:
|
||||
type: string
|
||||
description: ID of the server group for kubernetes cluster nodes.
|
||||
|
||||
resources:
|
||||
|
||||
master_wait_handle:
|
||||
type: "OS::Heat::WaitConditionHandle"
|
||||
|
||||
master_wait_condition:
|
||||
type: "OS::Heat::WaitCondition"
|
||||
depends_on: swarm-master
|
||||
properties:
|
||||
handle: {get_resource: master_wait_handle}
|
||||
timeout: 6000
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# resource that exposes the IPs of either the Swarm master or the API
|
||||
# LBaaS pool depending on whether LBaaS is enabled for the cluster.
|
||||
#
|
||||
|
||||
api_address_switch:
|
||||
type: Magnum::ApiGatewaySwitcher
|
||||
properties:
|
||||
pool_public_ip: {get_param: api_ip_address}
|
||||
pool_private_ip: {get_param: swarm_api_ip}
|
||||
master_public_ip: {get_attr: [swarm_master_floating, floating_ip_address]}
|
||||
master_private_ip: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
|
||||
etcd_address_switch:
|
||||
type: Magnum::ApiGatewaySwitcher
|
||||
properties:
|
||||
pool_private_ip: {get_param: etcd_server_ip}
|
||||
master_private_ip: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# software configs. these are components that are combined into
|
||||
# a multipart MIME user-data archive.
|
||||
#
|
||||
no_proxy_extended:
|
||||
type: OS::Heat::Value
|
||||
properties:
|
||||
type: string
|
||||
value:
|
||||
list_join:
|
||||
- ','
|
||||
- - {get_attr: [api_address_switch, private_ip]}
|
||||
- {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
- {get_attr: [etcd_address_switch, private_ip]}
|
||||
- {get_attr: [api_address_switch, public_ip]}
|
||||
- {get_param: no_proxy}
|
||||
|
||||
write_heat_params:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/write-heat-params-master.yaml}
|
||||
params:
|
||||
"$WAIT_CURL": {get_attr: [master_wait_handle, curl_cli]}
|
||||
"$DOCKER_VOLUME": {get_resource: docker_volume}
|
||||
"$DOCKER_VOLUME_SIZE": {get_param: docker_volume_size}
|
||||
"$DOCKER_STORAGE_DRIVER": {get_param: docker_storage_driver}
|
||||
"$ETCD_DISCOVERY_URL": {get_param: discovery_url}
|
||||
"$HTTP_PROXY": {get_param: http_proxy}
|
||||
"$HTTPS_PROXY": {get_param: https_proxy}
|
||||
"$NO_PROXY": {get_attr: [no_proxy_extended, value]}
|
||||
"$SWARM_API_IP": {get_attr: [api_address_switch, private_ip]}
|
||||
"$SWARM_NODE_IP": {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
"$CLUSTER_UUID": {get_param: cluster_uuid}
|
||||
"$MAGNUM_URL": {get_param: magnum_url}
|
||||
"$TLS_DISABLED": {get_param: tls_disabled}
|
||||
"$VERIFY_CA": {get_param: verify_ca}
|
||||
"$NETWORK_DRIVER": {get_param: network_driver}
|
||||
"$FLANNEL_NETWORK_CIDR": {get_param: flannel_network_cidr}
|
||||
"$FLANNEL_NETWORK_SUBNETLEN": {get_param: flannel_network_subnetlen}
|
||||
"$FLANNEL_BACKEND": {get_param: flannel_backend}
|
||||
"$ETCD_SERVER_IP": {get_attr: [etcd_address_switch, private_ip]}
|
||||
"$API_IP_ADDRESS": {get_attr: [api_address_switch, public_ip]}
|
||||
"$SWARM_VERSION": {get_param: swarm_version}
|
||||
"$TRUSTEE_USER_ID": {get_param: trustee_user_id}
|
||||
"$TRUSTEE_PASSWORD": {get_param: trustee_password}
|
||||
"$TRUST_ID": {get_param: trust_id}
|
||||
"$AUTH_URL": {get_param: auth_url}
|
||||
"$VOLUME_DRIVER": {get_param: volume_driver}
|
||||
"$REXRAY_PREEMPT": {get_param: rexray_preempt}
|
||||
|
||||
install_openstack_ca:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
params:
|
||||
$OPENSTACK_CA: {get_param: openstack_ca}
|
||||
template: {get_file: ../../common/templates/fragments/atomic-install-openstack-ca.sh}
|
||||
|
||||
write_network_config:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/write-network-config.sh}
|
||||
|
||||
network_config_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/network-config-service.sh}
|
||||
|
||||
network_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/network-service.sh}
|
||||
|
||||
configure_etcd:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/configure-etcd.sh}
|
||||
|
||||
remove_docker_key:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/remove-docker-key.sh}
|
||||
|
||||
configure_docker_storage:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
params:
|
||||
$configure_docker_storage_driver: {get_file: ../../common/templates/fragments/configure_docker_storage_driver_atomic.sh}
|
||||
template: {get_file: ../../common/templates/fragments/configure-docker-storage.sh}
|
||||
|
||||
make_cert:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/make-cert.py}
|
||||
|
||||
add_docker_daemon_options:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/add-docker-daemon-options.sh}
|
||||
|
||||
write_swarm_manager_failure_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/write-cluster-failure-service.yaml}
|
||||
params:
|
||||
"$SERVICE": swarm-manager
|
||||
"$WAIT_CURL": {get_attr: [master_wait_handle, curl_cli]}
|
||||
"$VERIFY_CA": {get_param: verify_ca}
|
||||
|
||||
write_docker_socket:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/write-docker-socket.yaml}
|
||||
|
||||
write_swarm_master_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/write-swarm-master-service.sh}
|
||||
params:
|
||||
"$ETCD_SERVER_IP": {get_attr: [etcd_address_switch, private_ip]}
|
||||
"$NODE_IP": {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
"$WAIT_CURL": {get_attr: [master_wait_handle, curl_cli]}
|
||||
"$HTTP_PROXY": {get_param: http_proxy}
|
||||
"$HTTPS_PROXY": {get_param: https_proxy}
|
||||
"$NO_PROXY": {get_attr: [no_proxy_extended, value]}
|
||||
"$TLS_DISABLED": {get_param: tls_disabled}
|
||||
"$VERIFY_CA": {get_param: verify_ca}
|
||||
"$SWARM_VERSION": {get_param: swarm_version}
|
||||
"$SWARM_STRATEGY": {get_param: swarm_strategy}
|
||||
|
||||
enable_services:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/enable-services.sh}
|
||||
params:
|
||||
"$NODE_SERVICES": "etcd docker.socket docker swarm-manager"
|
||||
|
||||
cfn_signal:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/cfn-signal.sh}
|
||||
|
||||
configure_selinux:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/configure-selinux.sh}
|
||||
|
||||
add_proxy:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/add-proxy.sh}
|
||||
|
||||
volume_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/volume-service.sh}
|
||||
|
||||
swarm_master_init:
|
||||
type: "OS::Heat::MultipartMime"
|
||||
properties:
|
||||
parts:
|
||||
- config: {get_resource: install_openstack_ca}
|
||||
- config: {get_resource: configure_selinux}
|
||||
- config: {get_resource: remove_docker_key}
|
||||
- config: {get_resource: write_heat_params}
|
||||
- config: {get_resource: make_cert}
|
||||
- config: {get_resource: configure_etcd}
|
||||
- config: {get_resource: write_network_config}
|
||||
- config: {get_resource: network_config_service}
|
||||
- config: {get_resource: network_service}
|
||||
- config: {get_resource: configure_docker_storage}
|
||||
- config: {get_resource: write_swarm_manager_failure_service}
|
||||
- config: {get_resource: add_docker_daemon_options}
|
||||
- config: {get_resource: write_docker_socket}
|
||||
- config: {get_resource: write_swarm_master_service}
|
||||
- config: {get_resource: add_proxy}
|
||||
- config: {get_resource: enable_services}
|
||||
- config: {get_resource: cfn_signal}
|
||||
- config: {get_resource: volume_service}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# Swarm_manager is a special node running the swarm manage daemon along
|
||||
# side the swarm agent.
|
||||
#
|
||||
|
||||
# do NOT use "_" (underscore) in the Nova server name
|
||||
# it creates a mismatch between the generated Nova name and its hostname
|
||||
# which can lead to weird problems
|
||||
swarm-master:
|
||||
type: "OS::Nova::Server"
|
||||
properties:
|
||||
name: {get_param: name}
|
||||
image:
|
||||
get_param: server_image
|
||||
flavor:
|
||||
get_param: server_flavor
|
||||
key_name:
|
||||
get_param: ssh_key_name
|
||||
user_data_format: RAW
|
||||
user_data: {get_resource: swarm_master_init}
|
||||
networks:
|
||||
- port:
|
||||
get_resource: swarm_master_eth0
|
||||
scheduler_hints: { group: { get_param: nodes_server_group_id }}
|
||||
|
||||
swarm_master_eth0:
|
||||
type: "OS::Neutron::Port"
|
||||
properties:
|
||||
network_id:
|
||||
get_param: fixed_network_id
|
||||
security_groups:
|
||||
- {get_param: secgroup_swarm_master_id}
|
||||
fixed_ips:
|
||||
- subnet_id:
|
||||
get_param: fixed_subnet_id
|
||||
allowed_address_pairs:
|
||||
- ip_address: {get_param: flannel_network_cidr}
|
||||
|
||||
swarm_master_floating:
|
||||
type: "OS::Neutron::FloatingIP"
|
||||
properties:
|
||||
floating_network:
|
||||
get_param: external_network
|
||||
port_id:
|
||||
get_resource: swarm_master_eth0
|
||||
|
||||
api_pool_member:
|
||||
type: Magnum::Optional::Neutron::LBaaS::PoolMember
|
||||
properties:
|
||||
pool: {get_param: api_pool_id}
|
||||
address: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
subnet: { get_param: fixed_subnet_id }
|
||||
protocol_port: {get_param: swarm_port}
|
||||
|
||||
etcd_pool_member:
|
||||
type: Magnum::Optional::Neutron::LBaaS::PoolMember
|
||||
properties:
|
||||
pool: {get_param: etcd_pool_id}
|
||||
address: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
subnet: { get_param: fixed_subnet_id }
|
||||
protocol_port: 2379
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# docker storage. This allocates a cinder volume and attaches it
|
||||
# to the node.
|
||||
#
|
||||
|
||||
docker_volume:
|
||||
type: Magnum::Optional::Cinder::Volume
|
||||
properties:
|
||||
size: {get_param: docker_volume_size}
|
||||
volume_type: {get_param: docker_volume_type}
|
||||
|
||||
docker_volume_attach:
|
||||
type: Magnum::Optional::Cinder::VolumeAttachment
|
||||
properties:
|
||||
instance_uuid: {get_resource: swarm-master}
|
||||
volume_id: {get_resource: docker_volume}
|
||||
mountpoint: /dev/vdb
|
||||
|
||||
outputs:
|
||||
|
||||
swarm_master_ip:
|
||||
value: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
description: >
|
||||
This is the "private" addresses of all the Swarm master.
|
||||
|
||||
swarm_master_external_ip:
|
||||
value: {get_attr: [swarm_master_floating, floating_ip_address]}
|
||||
description: >
|
||||
This is the "public" ip addresses of Swarm master.
|
@ -1,459 +0,0 @@
|
||||
heat_template_version: 2014-10-16
|
||||
|
||||
description: >
|
||||
This is a nested stack that defines a single swarm node,
|
||||
based on a vanilla Fedora 20 cloud image. This stack is included by
|
||||
a ResourceGroup resource in the parent template (swarmcluster.yaml).
|
||||
|
||||
parameters:
|
||||
|
||||
name:
|
||||
type: string
|
||||
description: server name
|
||||
|
||||
server_image:
|
||||
type: string
|
||||
description: glance image used to boot the server
|
||||
|
||||
server_flavor:
|
||||
type: string
|
||||
description: flavor to use when booting the server
|
||||
|
||||
ssh_key_name:
|
||||
type: string
|
||||
description: name of ssh key to be provisioned on our server
|
||||
|
||||
docker_volume_size:
|
||||
type: number
|
||||
description: >
|
||||
size of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_volume_type:
|
||||
type: string
|
||||
description: >
|
||||
type of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_storage_driver:
|
||||
type: string
|
||||
description: docker storage driver name
|
||||
|
||||
external_network:
|
||||
type: string
|
||||
description: uuid/name of a network to use for floating ip addresses
|
||||
|
||||
fixed_network_id:
|
||||
type: string
|
||||
description: Network from which to allocate fixed addresses.
|
||||
|
||||
fixed_subnet_id:
|
||||
type: string
|
||||
description: Subnet from which to allocate fixed addresses.
|
||||
|
||||
network_driver:
|
||||
type: string
|
||||
description: network driver to use for instantiating container networks
|
||||
|
||||
flannel_network_cidr:
|
||||
type: string
|
||||
description: network range for flannel overlay network
|
||||
|
||||
http_proxy:
|
||||
type: string
|
||||
description: http proxy address for docker
|
||||
|
||||
https_proxy:
|
||||
type: string
|
||||
description: https proxy address for docker
|
||||
|
||||
no_proxy:
|
||||
type: string
|
||||
description: no proxies for docker
|
||||
|
||||
swarm_api_ip:
|
||||
type: string
|
||||
description: swarm master's api server ip address
|
||||
|
||||
api_ip_address:
|
||||
type: string
|
||||
description: swarm master's api server public ip address
|
||||
|
||||
cluster_uuid:
|
||||
type: string
|
||||
description: identifier for the cluster this template is generating
|
||||
|
||||
magnum_url:
|
||||
type: string
|
||||
description: endpoint to retrieve TLS certs from
|
||||
|
||||
tls_disabled:
|
||||
type: boolean
|
||||
description: whether or not to disable TLS
|
||||
|
||||
verify_ca:
|
||||
type: boolean
|
||||
description: whether or not to validate certificate authority
|
||||
|
||||
swarm_version:
|
||||
type: string
|
||||
description: version of swarm used for swarm cluster
|
||||
|
||||
secgroup_swarm_node_id:
|
||||
type: string
|
||||
description: ID of the security group for swarm node.
|
||||
|
||||
etcd_server_ip:
|
||||
type: string
|
||||
description: ip address of the load balancer pool of etcd server.
|
||||
|
||||
trustee_domain_id:
|
||||
type: string
|
||||
description: domain id of the trustee
|
||||
|
||||
trustee_user_id:
|
||||
type: string
|
||||
description: user id of the trustee
|
||||
|
||||
trustee_username:
|
||||
type: string
|
||||
description: username of the trustee
|
||||
|
||||
trustee_password:
|
||||
type: string
|
||||
description: password of the trustee
|
||||
hidden: true
|
||||
|
||||
trust_id:
|
||||
type: string
|
||||
description: id of the trust which is used by the trustee
|
||||
hidden: true
|
||||
|
||||
auth_url:
|
||||
type: string
|
||||
description: url for keystone
|
||||
|
||||
registry_enabled:
|
||||
type: boolean
|
||||
description: >
|
||||
Indicates whether the docker registry is enabled.
|
||||
|
||||
registry_port:
|
||||
type: number
|
||||
description: port of registry service
|
||||
|
||||
swift_region:
|
||||
type: string
|
||||
description: region of swift service
|
||||
|
||||
registry_container:
|
||||
type: string
|
||||
description: >
|
||||
name of swift container which docker registry stores images in
|
||||
|
||||
registry_insecure:
|
||||
type: boolean
|
||||
description: >
|
||||
indicates whether to skip TLS verification between registry and backend storage
|
||||
|
||||
registry_chunksize:
|
||||
type: number
|
||||
description: >
|
||||
size of the data segments for the swift dynamic large objects
|
||||
|
||||
volume_driver:
|
||||
type: string
|
||||
description: volume driver to use for container storage
|
||||
default: ""
|
||||
|
||||
rexray_preempt:
|
||||
type: string
|
||||
description: >
|
||||
enables any host to take control of a volume irrespective of whether
|
||||
other hosts are using the volume
|
||||
default: "false"
|
||||
|
||||
openstack_ca:
|
||||
type: string
|
||||
description: The OpenStack CA certificate to install on the node.
|
||||
|
||||
nodes_server_group_id:
|
||||
type: string
|
||||
description: ID of the server group for kubernetes cluster nodes.
|
||||
|
||||
resources:
|
||||
|
||||
node_wait_handle:
|
||||
type: "OS::Heat::WaitConditionHandle"
|
||||
|
||||
node_wait_condition:
|
||||
type: "OS::Heat::WaitCondition"
|
||||
depends_on: swarm-node
|
||||
properties:
|
||||
handle: {get_resource: node_wait_handle}
|
||||
timeout: 6000
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# software configs. these are components that are combined into
|
||||
# a multipart MIME user-data archive.
|
||||
no_proxy_extended:
|
||||
type: OS::Heat::Value
|
||||
properties:
|
||||
type: string
|
||||
value:
|
||||
list_join:
|
||||
- ','
|
||||
- - {get_param: swarm_api_ip}
|
||||
- {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}
|
||||
- {get_param: etcd_server_ip}
|
||||
- {get_param: api_ip_address}
|
||||
- {get_param: no_proxy}
|
||||
|
||||
write_heat_params:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/write-heat-params-node.yaml}
|
||||
params:
|
||||
"$WAIT_CURL": {get_attr: [node_wait_handle, curl_cli]}
|
||||
"$DOCKER_VOLUME": {get_resource: docker_volume}
|
||||
"$DOCKER_VOLUME_SIZE": {get_param: docker_volume_size}
|
||||
"$DOCKER_STORAGE_DRIVER": {get_param: docker_storage_driver}
|
||||
"$HTTP_PROXY": {get_param: http_proxy}
|
||||
"$HTTPS_PROXY": {get_param: https_proxy}
|
||||
"$NO_PROXY": {get_attr: [no_proxy_extended, value]}
|
||||
"$SWARM_API_IP": {get_param: swarm_api_ip}
|
||||
"$SWARM_NODE_IP": {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}
|
||||
"$CLUSTER_UUID": {get_param: cluster_uuid}
|
||||
"$MAGNUM_URL": {get_param: magnum_url}
|
||||
"$TLS_DISABLED": {get_param: tls_disabled}
|
||||
"$VERIFY_CA": {get_param: verify_ca}
|
||||
"$NETWORK_DRIVER": {get_param: network_driver}
|
||||
"$ETCD_SERVER_IP": {get_param: etcd_server_ip}
|
||||
"$API_IP_ADDRESS": {get_param: api_ip_address}
|
||||
"$SWARM_VERSION": {get_param: swarm_version}
|
||||
"$TRUSTEE_DOMAIN_ID": {get_param: trustee_domain_id}
|
||||
"$TRUSTEE_USER_ID": {get_param: trustee_user_id}
|
||||
"$TRUSTEE_USERNAME": {get_param: trustee_username}
|
||||
"$TRUSTEE_PASSWORD": {get_param: trustee_password}
|
||||
"$TRUST_ID": {get_param: trust_id}
|
||||
"$AUTH_URL": {get_param: auth_url}
|
||||
"$REGISTRY_ENABLED": {get_param: registry_enabled}
|
||||
"$REGISTRY_PORT": {get_param: registry_port}
|
||||
"$SWIFT_REGION": {get_param: swift_region}
|
||||
"$REGISTRY_CONTAINER": {get_param: registry_container}
|
||||
"$REGISTRY_INSECURE": {get_param: registry_insecure}
|
||||
"$REGISTRY_CHUNKSIZE": {get_param: registry_chunksize}
|
||||
"$VOLUME_DRIVER": {get_param: volume_driver}
|
||||
"$REXRAY_PREEMPT": {get_param: rexray_preempt}
|
||||
|
||||
install_openstack_ca:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
params:
|
||||
$OPENSTACK_CA: {get_param: openstack_ca}
|
||||
template: {get_file: ../../common/templates/fragments/atomic-install-openstack-ca.sh}
|
||||
|
||||
remove_docker_key:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/remove-docker-key.sh}
|
||||
|
||||
make_cert:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/make-cert.py}
|
||||
|
||||
configure_docker_storage:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
params:
|
||||
$configure_docker_storage_driver: {get_file: ../../common/templates/fragments/configure_docker_storage_driver_atomic.sh}
|
||||
template: {get_file: ../../common/templates/fragments/configure-docker-storage.sh}
|
||||
|
||||
configure_docker_registry:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/fragments/configure-docker-registry.sh}
|
||||
|
||||
add_docker_daemon_options:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/add-docker-daemon-options.sh}
|
||||
|
||||
write_docker_socket:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/write-docker-socket.yaml}
|
||||
|
||||
network_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/network-service.sh}
|
||||
|
||||
write_swarm_agent_failure_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/write-cluster-failure-service.yaml}
|
||||
params:
|
||||
"$SERVICE": swarm-agent
|
||||
"$WAIT_CURL": {get_attr: [node_wait_handle, curl_cli]}
|
||||
"$VERIFY_CA": {get_param: verify_ca}
|
||||
|
||||
write_swarm_agent_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/write-swarm-agent-service.sh}
|
||||
|
||||
enable_docker_registry:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/fragments/enable-docker-registry.sh}
|
||||
|
||||
enable_services:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/enable-services.sh}
|
||||
params:
|
||||
"$NODE_SERVICES": "docker.socket docker swarm-agent"
|
||||
|
||||
cfn_signal:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/cfn-signal.sh}
|
||||
|
||||
configure_selinux:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/configure-selinux.sh}
|
||||
|
||||
add_proxy:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/add-proxy.sh}
|
||||
|
||||
volume_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/volume-service.sh}
|
||||
|
||||
swarm_node_init:
|
||||
type: "OS::Heat::MultipartMime"
|
||||
properties:
|
||||
parts:
|
||||
- config: {get_resource: install_openstack_ca}
|
||||
- config: {get_resource: configure_selinux}
|
||||
- config: {get_resource: remove_docker_key}
|
||||
- config: {get_resource: write_heat_params}
|
||||
- config: {get_resource: make_cert}
|
||||
- config: {get_resource: network_service}
|
||||
- config: {get_resource: configure_docker_storage}
|
||||
- config: {get_resource: configure_docker_registry}
|
||||
- config: {get_resource: write_swarm_agent_failure_service}
|
||||
- config: {get_resource: write_swarm_agent_service}
|
||||
- config: {get_resource: add_docker_daemon_options}
|
||||
- config: {get_resource: write_docker_socket}
|
||||
- config: {get_resource: add_proxy}
|
||||
- config: {get_resource: enable_docker_registry}
|
||||
- config: {get_resource: enable_services}
|
||||
- config: {get_resource: cfn_signal}
|
||||
- config: {get_resource: volume_service}
|
||||
|
||||
# do NOT use "_" (underscore) in the Nova server name
|
||||
# it creates a mismatch between the generated Nova name and its hostname
|
||||
# which can lead to weird problems
|
||||
swarm-node:
|
||||
type: "OS::Nova::Server"
|
||||
properties:
|
||||
name: {get_param: name}
|
||||
image:
|
||||
get_param: server_image
|
||||
flavor:
|
||||
get_param: server_flavor
|
||||
key_name:
|
||||
get_param: ssh_key_name
|
||||
user_data_format: RAW
|
||||
user_data: {get_resource: swarm_node_init}
|
||||
networks:
|
||||
- port:
|
||||
get_resource: swarm_node_eth0
|
||||
scheduler_hints: { group: { get_param: nodes_server_group_id }}
|
||||
|
||||
swarm_node_eth0:
|
||||
type: "OS::Neutron::Port"
|
||||
properties:
|
||||
network_id:
|
||||
get_param: fixed_network_id
|
||||
security_groups:
|
||||
- {get_param: secgroup_swarm_node_id}
|
||||
fixed_ips:
|
||||
- subnet_id:
|
||||
get_param: fixed_subnet_id
|
||||
allowed_address_pairs:
|
||||
- ip_address: {get_param: flannel_network_cidr}
|
||||
|
||||
swarm_node_floating:
|
||||
type: "OS::Neutron::FloatingIP"
|
||||
properties:
|
||||
floating_network:
|
||||
get_param: external_network
|
||||
port_id:
|
||||
get_resource: swarm_node_eth0
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# docker storage. This allocates a cinder volume and attaches it
|
||||
# to the node.
|
||||
#
|
||||
|
||||
docker_volume:
|
||||
type: Magnum::Optional::Cinder::Volume
|
||||
properties:
|
||||
size: {get_param: docker_volume_size}
|
||||
volume_type: {get_param: docker_volume_type}
|
||||
|
||||
docker_volume_attach:
|
||||
type: Magnum::Optional::Cinder::VolumeAttachment
|
||||
properties:
|
||||
instance_uuid: {get_resource: swarm-node}
|
||||
volume_id: {get_resource: docker_volume}
|
||||
mountpoint: /dev/vdb
|
||||
|
||||
outputs:
|
||||
|
||||
swarm_node_ip:
|
||||
value: {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}
|
||||
description: >
|
||||
This is the "private" address of the Swarm node.
|
||||
|
||||
swarm_node_external_ip:
|
||||
value: {get_attr: [swarm_node_floating, floating_ip_address]}
|
||||
description: >
|
||||
This is the "public" address of the Swarm node.
|
@ -1,17 +0,0 @@
|
||||
# Copyright 2016 - Rackspace Hosting
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
version = '1.0.0'
|
||||
driver = 'swarm_fedora_atomic_v1'
|
||||
container_version = '1.12.6'
|
@ -1,39 +0,0 @@
|
||||
# Copyright 2016 Rackspace Inc. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
from magnum.drivers.heat import driver
|
||||
from magnum.drivers.swarm_fedora_atomic_v2 import monitor
|
||||
from magnum.drivers.swarm_fedora_atomic_v2 import template_def
|
||||
|
||||
|
||||
class Driver(driver.HeatDriver):
|
||||
|
||||
@property
|
||||
def provides(self):
|
||||
return [
|
||||
{'server_type': 'vm',
|
||||
'os': 'fedora-atomic',
|
||||
'coe': 'swarm-mode'},
|
||||
]
|
||||
|
||||
def get_template_definition(self):
|
||||
return template_def.AtomicSwarmTemplateDefinition()
|
||||
|
||||
def get_monitor(self, context, cluster):
|
||||
return monitor.SwarmMonitor(context, cluster)
|
||||
|
||||
def upgrade_cluster(self, context, cluster, cluster_template,
|
||||
max_batch_size, nodegroup, scale_manager=None,
|
||||
rollback=False):
|
||||
raise NotImplementedError("Must implement 'upgrade_cluster'")
|
@ -1,107 +0,0 @@
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from oslo_log import log
|
||||
|
||||
from magnum.common import docker_utils
|
||||
from magnum.conductor import monitors
|
||||
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
|
||||
class SwarmMonitor(monitors.MonitorBase):
|
||||
|
||||
def __init__(self, context, cluster):
|
||||
super(SwarmMonitor, self).__init__(context, cluster)
|
||||
self.data = {}
|
||||
self.data['nodes'] = []
|
||||
self.data['containers'] = []
|
||||
|
||||
@property
|
||||
def metrics_spec(self):
|
||||
return {
|
||||
'memory_util': {
|
||||
'unit': '%',
|
||||
'func': 'compute_memory_util',
|
||||
},
|
||||
}
|
||||
|
||||
def pull_data(self):
|
||||
with docker_utils.docker_for_cluster(self.context,
|
||||
self.cluster) as docker:
|
||||
system_info = docker.info()
|
||||
self.data['nodes'] = self._parse_node_info(system_info)
|
||||
|
||||
# pull data from each container
|
||||
containers = []
|
||||
for container in docker.containers(all=True):
|
||||
try:
|
||||
container = docker.inspect_container(container['Id'])
|
||||
except Exception as e:
|
||||
LOG.warning("Ignore error [%(e)s] when inspecting "
|
||||
"container %(container_id)s.",
|
||||
{'e': e, 'container_id': container['Id']},
|
||||
exc_info=True)
|
||||
containers.append(container)
|
||||
self.data['containers'] = containers
|
||||
|
||||
def compute_memory_util(self):
|
||||
mem_total = 0
|
||||
for node in self.data['nodes']:
|
||||
mem_total += node['MemTotal']
|
||||
mem_reserved = 0
|
||||
for container in self.data['containers']:
|
||||
mem_reserved += container['HostConfig']['Memory']
|
||||
|
||||
if mem_total == 0:
|
||||
return 0
|
||||
else:
|
||||
return mem_reserved * 100 / mem_total
|
||||
|
||||
def _parse_node_info(self, system_info):
|
||||
"""Parse system_info to retrieve memory size of each node.
|
||||
|
||||
:param system_info: The output returned by docker.info(). Example:
|
||||
{
|
||||
u'Debug': False,
|
||||
u'NEventsListener': 0,
|
||||
u'DriverStatus': [
|
||||
[u'\x08Strategy', u'spread'],
|
||||
[u'\x08Filters', u'...'],
|
||||
[u'\x08Nodes', u'2'],
|
||||
[u'node1', u'10.0.0.4:2375'],
|
||||
[u' \u2514 Containers', u'1'],
|
||||
[u' \u2514 Reserved CPUs', u'0 / 1'],
|
||||
[u' \u2514 Reserved Memory', u'0 B / 2.052 GiB'],
|
||||
[u'node2', u'10.0.0.3:2375'],
|
||||
[u' \u2514 Containers', u'2'],
|
||||
[u' \u2514 Reserved CPUs', u'0 / 1'],
|
||||
[u' \u2514 Reserved Memory', u'0 B / 2.052 GiB']
|
||||
],
|
||||
u'Containers': 3
|
||||
}
|
||||
:return: Memory size of each node. Example:
|
||||
[{'MemTotal': 2203318222.848},
|
||||
{'MemTotal': 2203318222.848}]
|
||||
"""
|
||||
nodes = []
|
||||
for info in system_info['DriverStatus']:
|
||||
key = info[0]
|
||||
value = info[1]
|
||||
if key == u' \u2514 Reserved Memory':
|
||||
memory = value # Example: '0 B / 2.052 GiB'
|
||||
memory = memory.split('/')[1].strip() # Example: '2.052 GiB'
|
||||
memory = memory.split(' ')[0] # Example: '2.052'
|
||||
memory = float(memory) * 1024 * 1024 * 1024
|
||||
nodes.append({'MemTotal': memory})
|
||||
return nodes
|
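For reference, a minimal, self-contained sketch of the memory accounting the deleted monitor performed. The ``parse_reserved_memory`` helper and the sample data are illustrative only, reusing the ``DriverStatus`` example from the docstring above; they are not part of the original tree.

.. code-block:: python

    # Illustrative only: mirrors the deleted SwarmMonitor logic, which reads the
    # "Reserved Memory" rows of docker.info()['DriverStatus'] and reports the
    # reserved share of total memory as the 'memory_util' metric.
    def parse_reserved_memory(driver_status):
        """Return each node's total memory in bytes."""
        totals = []
        for key, value in driver_status:
            if key == u' \u2514 Reserved Memory':
                # value looks like '0 B / 2.052 GiB'; keep the figure after '/'
                total = value.split('/')[1].strip().split(' ')[0]
                totals.append(float(total) * 1024 * 1024 * 1024)
        return totals

    sample_status = [
        [u'node1', u'10.0.0.4:2375'],
        [u' \u2514 Reserved Memory', u'0 B / 2.052 GiB'],
        [u'node2', u'10.0.0.3:2375'],
        [u' \u2514 Reserved Memory', u'0 B / 2.052 GiB'],
    ]

    mem_total = sum(parse_reserved_memory(sample_status))
    mem_reserved = 0  # no container reservations in this sample
    print(mem_reserved * 100 / mem_total if mem_total else 0)  # -> 0.0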
@ -1,39 +0,0 @@
|
||||
# Copyright 2016 Rackspace Inc. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
import os
|
||||
|
||||
from magnum.drivers.heat import swarm_mode_template_def as sftd
|
||||
|
||||
|
||||
class AtomicSwarmTemplateDefinition(sftd.SwarmModeTemplateDefinition):
|
||||
"""Docker swarm template for a Fedora Atomic VM."""
|
||||
|
||||
@property
|
||||
def driver_module_path(self):
|
||||
return __name__[:__name__.rindex('.')]
|
||||
|
||||
@property
|
||||
def template_path(self):
|
||||
return os.path.join(os.path.dirname(os.path.realpath(__file__)),
|
||||
'templates/swarmcluster.yaml')
|
||||
|
||||
def get_params(self, context, cluster_template, cluster, **kwargs):
|
||||
ep = kwargs.pop('extra_params', {})
|
||||
|
||||
ep['number_of_secondary_masters'] = cluster.master_count - 1
|
||||
|
||||
return super(AtomicSwarmTemplateDefinition,
|
||||
self).get_params(context, cluster_template, cluster,
|
||||
extra_params=ep,
|
||||
**kwargs)
|
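A small hypothetical sketch of the master split that ``get_params`` above encodes: the cluster template boots exactly one primary master, so the extra parameter handed to Heat is the remaining count. The ``Cluster`` class here is a stand-in, not Magnum's real object.

.. code-block:: python

    # Hypothetical values for illustration; 'Cluster' stands in for a Magnum
    # Cluster object that carries master_count.
    class Cluster(object):
        master_count = 3

    extra_params = {'number_of_secondary_masters': Cluster.master_count - 1}
    # swarmcluster.yaml boots 1 primary master plus this many secondary masters.
    print(extra_params)  # {'number_of_secondary_masters': 2}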
@ -1,29 +0,0 @@
|
||||
#cloud-config
|
||||
merge_how: dict(recurse_array)+list(append)
|
||||
write_files:
|
||||
- path: /etc/sysconfig/heat-params
|
||||
owner: "root:root"
|
||||
permissions: "0600"
|
||||
content: |
|
||||
IS_PRIMARY_MASTER="$IS_PRIMARY_MASTER"
|
||||
WAIT_CURL="$WAIT_CURL"
|
||||
DOCKER_VOLUME="$DOCKER_VOLUME"
|
||||
DOCKER_VOLUME_SIZE="$DOCKER_VOLUME_SIZE"
|
||||
DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
|
||||
HTTP_PROXY="$HTTP_PROXY"
|
||||
HTTPS_PROXY="$HTTPS_PROXY"
|
||||
NO_PROXY="$NO_PROXY"
|
||||
PRIMARY_MASTER_IP="$PRIMARY_MASTER_IP"
|
||||
SWARM_API_IP="$SWARM_API_IP"
|
||||
SWARM_NODE_IP="$SWARM_NODE_IP"
|
||||
CLUSTER_UUID="$CLUSTER_UUID"
|
||||
MAGNUM_URL="$MAGNUM_URL"
|
||||
TLS_DISABLED="$TLS_DISABLED"
|
||||
API_IP_ADDRESS="$API_IP_ADDRESS"
|
||||
TRUSTEE_USER_ID="$TRUSTEE_USER_ID"
|
||||
TRUSTEE_PASSWORD="$TRUSTEE_PASSWORD"
|
||||
TRUST_ID="$TRUST_ID"
|
||||
AUTH_URL="$AUTH_URL"
|
||||
VOLUME_DRIVER="$VOLUME_DRIVER"
|
||||
REXRAY_PREEMPT="$REXRAY_PREEMPT"
|
||||
VERIFY_CA="$VERIFY_CA"
|
@ -1,84 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
set -x
|
||||
|
||||
if [ "$VERIFY_CA" == "True" ]; then
|
||||
VERIFY_CA=""
|
||||
else
|
||||
VERIFY_CA="-k"
|
||||
fi
|
||||
|
||||
if [ "${IS_PRIMARY_MASTER}" = "True" ]; then
|
||||
cat > /usr/local/bin/magnum-start-swarm-manager << START_SWARM_BIN
|
||||
#!/bin/bash -xe
|
||||
|
||||
docker swarm init --advertise-addr "${SWARM_NODE_IP}"
|
||||
if [[ \$? -eq 0 ]]; then
|
||||
status="SUCCESS"
|
||||
msg="Swarm init was successful."
|
||||
else
|
||||
status="FAILURE"
|
||||
msg="Failed to init swarm."
|
||||
fi
|
||||
sh -c "${WAIT_CURL} ${VERIFY_CA} --data-binary '{\"status\": \"\$status\", \"reason\": \"\$msg\"}'"
|
||||
START_SWARM_BIN
|
||||
else
|
||||
if [ "${TLS_DISABLED}" = 'False' ]; then
|
||||
tls="--tlsverify"
|
||||
tls=$tls" --tlscacert=/etc/docker/ca.crt"
|
||||
tls=$tls" --tlskey=/etc/docker/server.key"
|
||||
tls=$tls" --tlscert=/etc/docker/server.crt"
|
||||
fi
|
||||
|
||||
cat > /usr/local/bin/magnum-start-swarm-manager << START_SWARM_BIN
|
||||
#!/bin/bash -xe
|
||||
i=0
|
||||
until token=\$(docker $tls -H $PRIMARY_MASTER_IP swarm join-token --quiet manager)
|
||||
do
|
||||
((i++))
|
||||
[ \$i -lt 5 ] || break;
|
||||
sleep 5
|
||||
done
|
||||
|
||||
if [[ -z \$token ]] ; then
|
||||
sh -c "${WAIT_CURL} ${VERIFY_CA} --data-binary '{\"status\": \"FAILURE\", \"reason\": \"Failed to retrieve swarm join token.\"}'"
|
||||
fi
|
||||
|
||||
i=0
|
||||
until docker swarm join --token \$token $PRIMARY_MASTER_IP:2377
|
||||
do
|
||||
((i++))
|
||||
[ \$i -lt 5 ] || break;
|
||||
sleep 5
|
||||
done
|
||||
if [[ \$i -ge 5 ]] ; then
|
||||
sh -c "${WAIT_CURL} ${VERIFY_CA} --data-binary '{\"status\": \"FAILURE\", \"reason\": \"Manager failed to join swarm.\"}'"
|
||||
else
|
||||
sh -c "${WAIT_CURL} ${VERIFY_CA} --data-binary '{\"status\": \"SUCCESS\", \"reason\": \"Manager joined swarm.\"}'"
|
||||
fi
|
||||
START_SWARM_BIN
|
||||
fi
|
||||
chmod +x /usr/local/bin/magnum-start-swarm-manager
|
||||
|
||||
cat > /etc/systemd/system/swarm-manager.service << END_SERVICE
|
||||
[Unit]
|
||||
Description=Swarm Manager
|
||||
After=docker.service
|
||||
Requires=docker.service
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/usr/local/bin/magnum-start-swarm-manager
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
END_SERVICE
|
||||
|
||||
chown root:root /etc/systemd/system/swarm-manager.service
|
||||
chmod 644 /etc/systemd/system/swarm-manager.service
|
||||
|
||||
systemctl daemon-reload
|
||||
systemctl start --no-block swarm-manager
|
||||
|
@ -1,68 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
. /etc/sysconfig/heat-params
|
||||
|
||||
set -x
|
||||
|
||||
if [ "$VERIFY_CA" == "True" ]; then
|
||||
VERIFY_CA=""
|
||||
else
|
||||
VERIFY_CA="-k"
|
||||
fi
|
||||
|
||||
if [ "${TLS_DISABLED}" = 'False' ]; then
|
||||
tls="--tlsverify"
|
||||
tls=$tls" --tlscacert=/etc/docker/ca.crt"
|
||||
tls=$tls" --tlskey=/etc/docker/server.key"
|
||||
tls=$tls" --tlscert=/etc/docker/server.crt"
|
||||
fi
|
||||
cat > /usr/local/bin/magnum-start-swarm-worker << START_SWARM_BIN
|
||||
#!/bin/bash -ex
|
||||
|
||||
i=0
|
||||
until token=\$(/usr/bin/docker $tls -H $SWARM_API_IP swarm join-token --quiet worker)
|
||||
do
|
||||
((i++))
|
||||
[ \$i -lt 5 ] || break;
|
||||
sleep 5
|
||||
done
|
||||
|
||||
if [[ -z \$token ]] ; then
|
||||
sh -c "${WAIT_CURL} ${VERIFY_CA} --data-binary '{\"status\": \"FAILURE\", \"reason\": \"Failed to retrieve swarm join token.\"}'"
|
||||
fi
|
||||
|
||||
i=0
|
||||
until docker swarm join --token \$token $SWARM_API_IP:2377
|
||||
do
|
||||
((i++))
|
||||
[ \$i -lt 5 ] || break;
|
||||
sleep 5
|
||||
done
|
||||
if [[ \$i -ge 5 ]] ; then
|
||||
sh -c "${WAIT_CURL} ${VERIFY_CA} --data-binary '{\"status\": \"FAILURE\", \"reason\": \"Node failed to join swarm.\"}'"
|
||||
else
|
||||
sh -c "${WAIT_CURL} ${VERIFY_CA} --data-binary '{\"status\": \"SUCCESS\", \"reason\": \"Node joined swarm.\"}'"
|
||||
fi
|
||||
START_SWARM_BIN
|
||||
|
||||
chmod +x /usr/local/bin/magnum-start-swarm-worker
|
||||
|
||||
cat > /etc/systemd/system/swarm-worker.service << END_SERVICE
|
||||
[Unit]
|
||||
Description=Swarm Worker
|
||||
After=docker.service
|
||||
Requires=docker.service
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/usr/local/bin/magnum-start-swarm-worker
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
END_SERVICE
|
||||
|
||||
chown root:root /etc/systemd/system/swarm-worker.service
|
||||
chmod 644 /etc/systemd/system/swarm-worker.service
|
||||
|
||||
systemctl daemon-reload
|
||||
systemctl start --no-block swarm-worker
|
@ -1,501 +0,0 @@
|
||||
heat_template_version: 2014-10-16
|
||||
|
||||
description: >
|
||||
This template will boot a Docker Swarm-Mode cluster. A swarm cluster
|
||||
is made up of several master nodes and N worker nodes. Every node in
|
||||
the cluster, including the master, is running a Docker daemon and
|
||||
joins the swarm as a manager or as a worker. The managers are
|
||||
listening on port 2375. By default, the cluster is made up of one
|
||||
master node and one worker node.
|
||||
|
||||
parameters:
|
||||
|
||||
#
|
||||
# REQUIRED PARAMETERS
|
||||
#
|
||||
is_cluster_stack:
|
||||
type: boolean
|
||||
default: false
|
||||
|
||||
ssh_key_name:
|
||||
type: string
|
||||
description: name of ssh key to be provisioned on our server
|
||||
default: ""
|
||||
|
||||
ssh_public_key:
|
||||
type: string
|
||||
description: The public ssh key to add in all nodes
|
||||
default: ""
|
||||
|
||||
external_network:
|
||||
type: string
|
||||
description: uuid/name of a network to use for floating ip addresses
|
||||
|
||||
fixed_network:
|
||||
type: string
|
||||
description: uuid/name of an existing network to use to provision machines
|
||||
default: ""
|
||||
|
||||
fixed_subnet:
|
||||
type: string
|
||||
description: uuid/name of an existing subnet to use to provision machines
|
||||
default: ""
|
||||
|
||||
cluster_uuid:
|
||||
type: string
|
||||
description: identifier for the cluster this template is generating
|
||||
|
||||
magnum_url:
|
||||
type: string
|
||||
description: endpoint to retrieve TLS certs from
|
||||
|
||||
master_image:
|
||||
type: string
|
||||
description: glance image used to boot the server
|
||||
|
||||
node_image:
|
||||
type: string
|
||||
description: glance image used to boot the server
|
||||
#
|
||||
# OPTIONAL PARAMETERS
|
||||
#
|
||||
master_flavor:
|
||||
type: string
|
||||
description: flavor to use when booting the swarm master
|
||||
default: m1.small
|
||||
|
||||
node_flavor:
|
||||
type: string
|
||||
description: flavor to use when booting the swarm node
|
||||
|
||||
dns_nameserver:
|
||||
type: comma_delimited_list
|
||||
description: address of a dns nameserver reachable in your environment
|
||||
default: 8.8.8.8
|
||||
|
||||
http_proxy:
|
||||
type: string
|
||||
description: http proxy address for docker
|
||||
default: ""
|
||||
|
||||
https_proxy:
|
||||
type: string
|
||||
description: https proxy address for docker
|
||||
default: ""
|
||||
|
||||
no_proxy:
|
||||
type: string
|
||||
description: no proxies for docker
|
||||
default: ""
|
||||
|
||||
number_of_masters:
|
||||
type: number
|
||||
description: how many swarm masters to spawn
|
||||
default: 1
|
||||
|
||||
number_of_nodes:
|
||||
type: number
|
||||
description: how many swarm nodes to spawn
|
||||
default: 1
|
||||
|
||||
number_of_secondary_masters:
|
||||
type: number
|
||||
description: how many secondary masters to spawn
|
||||
|
||||
fixed_subnet_cidr:
|
||||
type: string
|
||||
description: network range for fixed ip network
|
||||
default: "10.0.0.0/24"
|
||||
|
||||
tls_disabled:
|
||||
type: boolean
|
||||
description: whether or not to enable TLS
|
||||
default: False
|
||||
|
||||
docker_volume_size:
|
||||
type: number
|
||||
description: >
|
||||
size of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
default: 0
|
||||
|
||||
docker_volume_type:
|
||||
type: string
|
||||
description: >
|
||||
type of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_storage_driver:
|
||||
type: string
|
||||
description: docker storage driver name
|
||||
default: "devicemapper"
|
||||
|
||||
loadbalancing_protocol:
|
||||
type: string
|
||||
description: >
|
||||
The protocol which is used for load balancing. If you want to change
|
||||
tls_disabled option to 'True', please change this to "HTTP".
|
||||
default: TCP
|
||||
constraints:
|
||||
- allowed_values: ["TCP", "HTTP"]
|
||||
|
||||
swarm_port:
|
||||
type: number
|
||||
description: >
|
||||
The port which is used by the swarm manager to provide the swarm service.
|
||||
default: 2375
|
||||
|
||||
trustee_domain_id:
|
||||
type: string
|
||||
description: domain id of the trustee
|
||||
default: ""
|
||||
|
||||
trustee_user_id:
|
||||
type: string
|
||||
description: user id of the trustee
|
||||
default: ""
|
||||
|
||||
trustee_username:
|
||||
type: string
|
||||
description: username of the trustee
|
||||
default: ""
|
||||
|
||||
trustee_password:
|
||||
type: string
|
||||
description: password of the trustee
|
||||
default: ""
|
||||
hidden: true
|
||||
|
||||
trust_id:
|
||||
type: string
|
||||
description: id of the trust which is used by the trustee
|
||||
default: ""
|
||||
hidden: true
|
||||
|
||||
auth_url:
|
||||
type: string
|
||||
description: url for keystone
|
||||
|
||||
volume_driver:
|
||||
type: string
|
||||
description: volume driver to use for container storage
|
||||
default: ""
|
||||
constraints:
|
||||
- allowed_values: ["","rexray"]
|
||||
|
||||
rexray_preempt:
|
||||
type: string
|
||||
description: >
|
||||
enables any host to take control of a volume irrespective of whether
|
||||
other hosts are using the volume
|
||||
default: "false"
|
||||
|
||||
verify_ca:
|
||||
type: boolean
|
||||
description: whether or not to validate certificate authority
|
||||
|
||||
openstack_ca:
|
||||
type: string
|
||||
hidden: true
|
||||
description: The OpenStack CA certificate to install on the node.
|
||||
|
||||
nodes_affinity_policy:
|
||||
type: string
|
||||
description: >
|
||||
affinity policy for nodes server group
|
||||
constraints:
|
||||
- allowed_values: ["affinity", "anti-affinity", "soft-affinity",
|
||||
"soft-anti-affinity"]
|
||||
|
||||
availability_zone:
|
||||
type: string
|
||||
description: >
|
||||
availability zone for master and nodes
|
||||
default: ""
|
||||
|
||||
resources:
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# network resources. allocate a network and router for our server.
|
||||
# it would also be possible to take advantage of existing network
|
||||
# resources (and have the deployer provide network and subnet ids,
|
||||
# etc, as parameters), but I wanted to minimize the amount of
|
||||
# configuration necessary to make this go.
|
||||
|
||||
network:
|
||||
type: ../../common/templates/network.yaml
|
||||
properties:
|
||||
existing_network: {get_param: fixed_network}
|
||||
existing_subnet: {get_param: fixed_subnet}
|
||||
private_network_cidr: {get_param: fixed_subnet_cidr}
|
||||
dns_nameserver: {get_param: dns_nameserver}
|
||||
external_network: {get_param: external_network}
|
||||
|
||||
api_lb:
|
||||
type: ../../common/templates/lb_api.yaml
|
||||
properties:
|
||||
fixed_subnet: {get_attr: [network, fixed_subnet]}
|
||||
external_network: {get_param: external_network}
|
||||
protocol: {get_param: loadbalancing_protocol}
|
||||
port: {get_param: swarm_port}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# security groups. we need to permit network traffic of various
|
||||
# sorts.
|
||||
#
|
||||
|
||||
secgroup_swarm_manager:
|
||||
type: "OS::Neutron::SecurityGroup"
|
||||
properties:
|
||||
rules:
|
||||
- protocol: icmp
|
||||
- protocol: tcp
|
||||
port_range_min: 22
|
||||
port_range_max: 22
|
||||
- protocol: tcp
|
||||
port_range_min: 2375
|
||||
port_range_max: 2375
|
||||
- protocol: tcp
|
||||
port_range_min: 2377
|
||||
port_range_max: 2377
|
||||
- protocol: tcp
|
||||
remote_ip_prefix: {get_param: fixed_subnet_cidr}
|
||||
port_range_min: 1
|
||||
port_range_max: 65535
|
||||
- protocol: udp
|
||||
port_range_min: 53
|
||||
port_range_max: 53
|
||||
|
||||
secgroup_swarm_node:
|
||||
type: "OS::Neutron::SecurityGroup"
|
||||
properties:
|
||||
rules:
|
||||
- protocol: icmp
|
||||
- protocol: tcp
|
||||
- protocol: udp
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# resources that expose the IPs of either the swarm master or a given
|
||||
# LBaaS pool depending on whether LBaaS is enabled for the cluster.
|
||||
#
|
||||
|
||||
api_address_lb_switch:
|
||||
type: Magnum::ApiGatewaySwitcher
|
||||
properties:
|
||||
pool_public_ip: {get_attr: [api_lb, floating_address]}
|
||||
pool_private_ip: {get_attr: [api_lb, address]}
|
||||
master_public_ip: {get_attr: [swarm_primary_master, resource.0.swarm_master_external_ip]}
|
||||
master_private_ip: {get_attr: [swarm_primary_master, resource.0.swarm_master_ip]}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# resources that expose the IPs of either floating ip or a given
|
||||
# fixed ip depending on whether FloatingIP is enabled for the cluster.
|
||||
#
|
||||
|
||||
api_address_floating_switch:
|
||||
type: Magnum::FloatingIPAddressSwitcher
|
||||
properties:
|
||||
public_ip: {get_attr: [api_address_lb_switch, public_ip]}
|
||||
private_ip: {get_attr: [api_address_lb_switch, private_ip]}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# resources that expose one server group for each master and worker nodes
|
||||
# separately.
|
||||
#
|
||||
|
||||
master_nodes_server_group:
|
||||
type: OS::Nova::ServerGroup
|
||||
properties:
|
||||
policies: [{get_param: nodes_affinity_policy}]
|
||||
|
||||
worker_nodes_server_group:
|
||||
type: OS::Nova::ServerGroup
|
||||
properties:
|
||||
policies: [{get_param: nodes_affinity_policy}]
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# Swarm manager is responsible for the entire cluster and manages the
|
||||
# resources of multiple Docker hosts at scale.
|
||||
# It supports high availability by creating a primary manager and multiple
|
||||
# replica instances.
|
||||
|
||||
swarm_primary_master:
|
||||
type: "OS::Heat::ResourceGroup"
|
||||
depends_on:
|
||||
- network
|
||||
properties:
|
||||
count: 1
|
||||
resource_def:
|
||||
type: swarmmaster.yaml
|
||||
properties:
|
||||
name:
|
||||
list_join:
|
||||
- '-'
|
||||
- [{ get_param: 'OS::stack_name' }, 'primary-master', '%index%']
|
||||
is_primary_master: True
|
||||
ssh_key_name: {get_param: ssh_key_name}
|
||||
server_image: {get_param: master_image}
|
||||
server_flavor: {get_param: master_flavor}
|
||||
docker_volume_size: {get_param: docker_volume_size}
|
||||
docker_volume_type: {get_param: docker_volume_type}
|
||||
docker_storage_driver: {get_param: docker_storage_driver}
|
||||
fixed_network_id: {get_attr: [network, fixed_network]}
|
||||
fixed_subnet_id: {get_attr: [network, fixed_subnet]}
|
||||
external_network: {get_param: external_network}
|
||||
http_proxy: {get_param: http_proxy}
|
||||
https_proxy: {get_param: https_proxy}
|
||||
no_proxy: {get_param: no_proxy}
|
||||
swarm_api_ip: {get_attr: [api_lb, address]}
|
||||
cluster_uuid: {get_param: cluster_uuid}
|
||||
magnum_url: {get_param: magnum_url}
|
||||
tls_disabled: {get_param: tls_disabled}
|
||||
secgroup_swarm_master_id: {get_resource: secgroup_swarm_manager}
|
||||
swarm_port: {get_param: swarm_port}
|
||||
api_pool_id: {get_attr: [api_lb, pool_id]}
|
||||
api_ip_address: {get_attr: [api_lb, floating_address]}
|
||||
trustee_user_id: {get_param: trustee_user_id}
|
||||
trustee_password: {get_param: trustee_password}
|
||||
trust_id: {get_param: trust_id}
|
||||
auth_url: {get_param: auth_url}
|
||||
volume_driver: {get_param: volume_driver}
|
||||
rexray_preempt: {get_param: rexray_preempt}
|
||||
verify_ca: {get_param: verify_ca}
|
||||
openstack_ca: {get_param: openstack_ca}
|
||||
nodes_server_group_id: {get_resource: master_nodes_server_group}
|
||||
availability_zone: {get_param: availability_zone}
|
||||
|
||||
swarm_secondary_masters:
|
||||
type: "OS::Heat::ResourceGroup"
|
||||
depends_on:
|
||||
- network
|
||||
- swarm_primary_master
|
||||
properties:
|
||||
count: {get_param: number_of_secondary_masters}
|
||||
resource_def:
|
||||
type: swarmmaster.yaml
|
||||
properties:
|
||||
name:
|
||||
list_join:
|
||||
- '-'
|
||||
- [{ get_param: 'OS::stack_name' }, 'secondary-master', '%index%']
|
||||
ssh_key_name: {get_param: ssh_key_name}
|
||||
server_image: {get_param: master_image}
|
||||
server_flavor: {get_param: master_flavor}
|
||||
docker_volume_size: {get_param: docker_volume_size}
|
||||
docker_volume_type: {get_param: docker_volume_type}
|
||||
docker_storage_driver: {get_param: docker_storage_driver}
|
||||
fixed_network_id: {get_attr: [network, fixed_network]}
|
||||
fixed_subnet_id: {get_attr: [network, fixed_subnet]}
|
||||
external_network: {get_param: external_network}
|
||||
http_proxy: {get_param: http_proxy}
|
||||
https_proxy: {get_param: https_proxy}
|
||||
no_proxy: {get_param: no_proxy}
|
||||
swarm_api_ip: {get_attr: [api_address_lb_switch, private_ip]}
|
||||
cluster_uuid: {get_param: cluster_uuid}
|
||||
magnum_url: {get_param: magnum_url}
|
||||
tls_disabled: {get_param: tls_disabled}
|
||||
secgroup_swarm_master_id: {get_resource: secgroup_swarm_manager}
|
||||
swarm_port: {get_param: swarm_port}
|
||||
api_pool_id: {get_attr: [api_lb, pool_id]}
|
||||
api_ip_address: {get_attr: [api_lb, floating_address]}
|
||||
trustee_user_id: {get_param: trustee_user_id}
|
||||
trustee_password: {get_param: trustee_password}
|
||||
trust_id: {get_param: trust_id}
|
||||
auth_url: {get_param: auth_url}
|
||||
volume_driver: {get_param: volume_driver}
|
||||
rexray_preempt: {get_param: rexray_preempt}
|
||||
verify_ca: {get_param: verify_ca}
|
||||
openstack_ca: {get_param: openstack_ca}
|
||||
nodes_server_group_id: {get_resource: master_nodes_server_group}
|
||||
availability_zone: {get_param: availability_zone}
|
||||
|
||||
swarm_nodes:
|
||||
type: "OS::Heat::ResourceGroup"
|
||||
depends_on:
|
||||
- network
|
||||
- swarm_primary_master
|
||||
properties:
|
||||
count: {get_param: number_of_nodes}
|
||||
resource_def:
|
||||
type: swarmnode.yaml
|
||||
properties:
|
||||
name:
|
||||
list_join:
|
||||
- '-'
|
||||
- [{ get_param: 'OS::stack_name' }, 'node', '%index%']
|
||||
ssh_key_name: {get_param: ssh_key_name}
|
||||
server_image: {get_param: node_image}
|
||||
server_flavor: {get_param: node_flavor}
|
||||
docker_volume_size: {get_param: docker_volume_size}
|
||||
docker_volume_type: {get_param: docker_volume_type}
|
||||
docker_storage_driver: {get_param: docker_storage_driver}
|
||||
fixed_network_id: {get_attr: [network, fixed_network]}
|
||||
fixed_subnet_id: {get_attr: [network, fixed_subnet]}
|
||||
external_network: {get_param: external_network}
|
||||
http_proxy: {get_param: http_proxy}
|
||||
https_proxy: {get_param: https_proxy}
|
||||
no_proxy: {get_param: no_proxy}
|
||||
swarm_api_ip: {get_attr: [api_address_lb_switch, private_ip]}
|
||||
cluster_uuid: {get_param: cluster_uuid}
|
||||
magnum_url: {get_param: magnum_url}
|
||||
tls_disabled: {get_param: tls_disabled}
|
||||
secgroup_swarm_node_id: {get_resource: secgroup_swarm_node}
|
||||
api_ip_address: {get_attr: [api_address_lb_switch, public_ip]}
|
||||
trustee_domain_id: {get_param: trustee_domain_id}
|
||||
trustee_user_id: {get_param: trustee_user_id}
|
||||
trustee_username: {get_param: trustee_username}
|
||||
trustee_password: {get_param: trustee_password}
|
||||
trust_id: {get_param: trust_id}
|
||||
auth_url: {get_param: auth_url}
|
||||
volume_driver: {get_param: volume_driver}
|
||||
rexray_preempt: {get_param: rexray_preempt}
|
||||
verify_ca: {get_param: verify_ca}
|
||||
openstack_ca: {get_param: openstack_ca}
|
||||
nodes_server_group_id: {get_resource: worker_nodes_server_group}
|
||||
availability_zone: {get_param: availability_zone}
|
||||
|
||||
outputs:
|
||||
|
||||
api_address:
|
||||
value:
|
||||
str_replace:
|
||||
template: api_ip_address
|
||||
params:
|
||||
api_ip_address: {get_attr: [api_address_floating_switch, ip_address]}
|
||||
description: >
|
||||
This is the API endpoint of the Swarm masters. Use this to access
|
||||
the Swarm API server from outside the cluster.
|
||||
|
||||
swarm_primary_master_private:
|
||||
value: {get_attr: [swarm_primary_master, swarm_master_ip]}
|
||||
description: >
|
||||
This is a list of the "private" addresses of all the Swarm masters.
|
||||
|
||||
swarm_primary_master:
|
||||
value: {get_attr: [swarm_primary_master, swarm_master_external_ip]}
|
||||
description: >
|
||||
This is a list of "public" ip addresses of all Swarm masters.
|
||||
Use these addresses to log into the Swarm masters via ssh.
|
||||
|
||||
swarm_secondary_masters:
|
||||
value: {get_attr: [swarm_secondary_masters, swarm_master_external_ip]}
|
||||
description: >
|
||||
This is a list of "public" ip addresses of all Swarm masters.
|
||||
Use these addresses to log into the Swarm masters via ssh.
|
||||
|
||||
swarm_nodes_private:
|
||||
value: {get_attr: [swarm_nodes, swarm_node_ip]}
|
||||
description: >
|
||||
This is a list of the "private" addresses of all the Swarm nodes.
|
||||
|
||||
swarm_nodes:
|
||||
value: {get_attr: [swarm_nodes, swarm_node_external_ip]}
|
||||
description: >
|
||||
This is a list of the "public" addresses of all the Swarm nodes. Use
|
||||
these addresses to, e.g., log into the nodes.
|
@ -1,393 +0,0 @@
|
||||
heat_template_version: 2014-10-16
|
||||
|
||||
description: >
|
||||
This is a nested stack that defines a swarm master node. A swarm master node is
|
||||
running a Docker daemon and joins the swarm as a manager. The Docker daemon
|
||||
listens on port 2375.
|
||||
|
||||
parameters:
|
||||
|
||||
name:
|
||||
type: string
|
||||
description: server name
|
||||
|
||||
ssh_key_name:
|
||||
type: string
|
||||
description: name of ssh key to be provisioned on our server
|
||||
|
||||
docker_volume_size:
|
||||
type: number
|
||||
description: >
|
||||
size of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_volume_type:
|
||||
type: string
|
||||
description: >
|
||||
type of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_storage_driver:
|
||||
type: string
|
||||
description: docker storage driver name
|
||||
|
||||
external_network:
|
||||
type: string
|
||||
description: uuid/name of a network to use for floating ip addresses
|
||||
|
||||
cluster_uuid:
|
||||
type: string
|
||||
description: identifier for the cluster this template is generating
|
||||
|
||||
magnum_url:
|
||||
type: string
|
||||
description: endpoint to retrieve TLS certs from
|
||||
|
||||
fixed_network_id:
|
||||
type: string
|
||||
description: Network from which to allocate fixed addresses.
|
||||
|
||||
fixed_subnet_id:
|
||||
type: string
|
||||
description: Subnet from which to allocate fixed addresses.
|
||||
|
||||
swarm_api_ip:
|
||||
type: string
|
||||
description: swarm master's api server ip address
|
||||
default: ""
|
||||
|
||||
api_ip_address:
|
||||
type: string
|
||||
description: swarm master's api server public ip address
|
||||
default: ""
|
||||
|
||||
server_image:
|
||||
type: string
|
||||
description: glance image used to boot the server
|
||||
|
||||
server_flavor:
|
||||
type: string
|
||||
description: flavor to use when booting the server
|
||||
|
||||
http_proxy:
|
||||
type: string
|
||||
description: http proxy address for docker
|
||||
|
||||
https_proxy:
|
||||
type: string
|
||||
description: https proxy address for docker
|
||||
|
||||
no_proxy:
|
||||
type: string
|
||||
description: no proxies for docker
|
||||
|
||||
tls_disabled:
|
||||
type: boolean
|
||||
description: whether or not to disable TLS
|
||||
|
||||
secgroup_swarm_master_id:
|
||||
type: string
|
||||
description: ID of the security group for swarm master.
|
||||
|
||||
swarm_port:
|
||||
type: number
|
||||
description: >
|
||||
The port used by the swarm manager to provide the swarm service.
|
||||
|
||||
api_pool_id:
|
||||
type: string
|
||||
description: ID of the load balancer pool of swarm master server.
|
||||
|
||||
trustee_user_id:
|
||||
type: string
|
||||
description: user id of the trustee
|
||||
|
||||
trustee_password:
|
||||
type: string
|
||||
description: password of the trustee
|
||||
hidden: true
|
||||
|
||||
trust_id:
|
||||
type: string
|
||||
description: id of the trust which is used by the trustee
|
||||
hidden: true
|
||||
|
||||
auth_url:
|
||||
type: string
|
||||
description: url for keystone
|
||||
|
||||
volume_driver:
|
||||
type: string
|
||||
description: volume driver to use for container storage
|
||||
default: ""
|
||||
|
||||
rexray_preempt:
|
||||
type: string
|
||||
description: >
|
||||
enables any host to take control of a volume irrespective of whether
|
||||
other hosts are using the volume
|
||||
default: "false"
|
||||
|
||||
is_primary_master:
|
||||
type: boolean
|
||||
description: whether this master is primary or not
|
||||
default: False
|
||||
|
||||
verify_ca:
|
||||
type: boolean
|
||||
description: whether or not to validate certificate authority
|
||||
|
||||
openstack_ca:
|
||||
type: string
|
||||
description: The OpenStack CA certificate to install on the node.
|
||||
nodes_server_group_id:
|
||||
type: string
|
||||
description: ID of the server group for kubernetes cluster nodes.
|
||||
|
||||
availability_zone:
|
||||
type: string
|
||||
description: >
|
||||
availability zone for master and nodes
|
||||
default: ""
|
||||
|
||||
resources:
|
||||
|
||||
master_wait_handle:
|
||||
type: "OS::Heat::WaitConditionHandle"
|
||||
|
||||
master_wait_condition:
|
||||
type: "OS::Heat::WaitCondition"
|
||||
depends_on: swarm-master
|
||||
properties:
|
||||
handle: {get_resource: master_wait_handle}
|
||||
timeout: 6000
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# resource that exposes the IPs of either the Swarm master or the API
|
||||
# LBaaS pool depending on whether LBaaS is enabled for the cluster.
|
||||
#
|
||||
|
||||
api_address_switch:
|
||||
type: Magnum::ApiGatewaySwitcher
|
||||
properties:
|
||||
pool_public_ip: {get_param: api_ip_address}
|
||||
pool_private_ip: {get_param: swarm_api_ip}
|
||||
master_public_ip: {get_attr: [swarm_master_floating, floating_ip_address]}
|
||||
master_private_ip: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# software configs. these are components that are combined into
|
||||
# a multipart MIME user-data archive.
|
||||
#
|
||||
write_heat_params:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: fragments/write-heat-params-master.yaml}
|
||||
params:
|
||||
"$IS_PRIMARY_MASTER": {get_param: is_primary_master}
|
||||
"$WAIT_CURL": {get_attr: [master_wait_handle, curl_cli]}
|
||||
"$DOCKER_VOLUME": {get_resource: docker_volume}
|
||||
"$DOCKER_VOLUME_SIZE": {get_param: docker_volume_size}
|
||||
"$DOCKER_STORAGE_DRIVER": {get_param: docker_storage_driver}
|
||||
"$HTTP_PROXY": {get_param: http_proxy}
|
||||
"$HTTPS_PROXY": {get_param: https_proxy}
|
||||
"$NO_PROXY": {get_param: no_proxy}
|
||||
"$PRIMARY_MASTER_IP": {get_param: swarm_api_ip}
|
||||
"$SWARM_API_IP": {get_attr: [api_address_switch, private_ip]}
|
||||
"$SWARM_NODE_IP": {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
"$CLUSTER_UUID": {get_param: cluster_uuid}
|
||||
"$MAGNUM_URL": {get_param: magnum_url}
|
||||
"$TLS_DISABLED": {get_param: tls_disabled}
|
||||
"$API_IP_ADDRESS": {get_attr: [api_address_switch, public_ip]}
|
||||
"$TRUSTEE_USER_ID": {get_param: trustee_user_id}
|
||||
"$TRUSTEE_PASSWORD": {get_param: trustee_password}
|
||||
"$TRUST_ID": {get_param: trust_id}
|
||||
"$AUTH_URL": {get_param: auth_url}
|
||||
"$VOLUME_DRIVER": {get_param: volume_driver}
|
||||
"$REXRAY_PREEMPT": {get_param: rexray_preempt}
|
||||
"$VERIFY_CA": {get_param: verify_ca}
|
||||
|
||||
install_openstack_ca:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
params:
|
||||
$OPENSTACK_CA: {get_param: openstack_ca}
|
||||
template: {get_file: ../../common/templates/fragments/atomic-install-openstack-ca.sh}
|
||||
|
||||
remove_docker_key:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/remove-docker-key.sh}
|
||||
|
||||
configure_docker_storage:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
params:
|
||||
$configure_docker_storage_driver: {get_file: ../../common/templates/fragments/configure_docker_storage_driver_atomic.sh}
|
||||
template: {get_file: ../../common/templates/fragments/configure-docker-storage.sh}
|
||||
|
||||
make_cert:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/make-cert.py}
|
||||
|
||||
add_docker_daemon_options:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/add-docker-daemon-options.sh}
|
||||
|
||||
write_docker_socket:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/write-docker-socket.yaml}
|
||||
|
||||
write_swarm_master_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: fragments/write-swarm-master-service.sh}
|
||||
|
||||
enable_services:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/enable-services.sh}
|
||||
params:
|
||||
"$NODE_SERVICES": "docker.socket docker"
|
||||
|
||||
configure_selinux:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/configure-selinux.sh}
|
||||
|
||||
add_proxy:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/add-proxy.sh}
|
||||
|
||||
volume_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/volume-service.sh}
|
||||
|
||||
swarm_master_init:
|
||||
type: "OS::Heat::MultipartMime"
|
||||
properties:
|
||||
parts:
|
||||
- config: {get_resource: install_openstack_ca}
|
||||
- config: {get_resource: configure_selinux}
|
||||
- config: {get_resource: remove_docker_key}
|
||||
- config: {get_resource: write_heat_params}
|
||||
- config: {get_resource: make_cert}
|
||||
- config: {get_resource: configure_docker_storage}
|
||||
- config: {get_resource: add_docker_daemon_options}
|
||||
- config: {get_resource: write_docker_socket}
|
||||
- config: {get_resource: add_proxy}
|
||||
- config: {get_resource: enable_services}
|
||||
- config: {get_resource: write_swarm_master_service}
|
||||
- config: {get_resource: volume_service}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# Swarm_manager is a special node running the swarm manager daemon alongside
|
||||
# the swarm worker.
|
||||
#
|
||||
|
||||
# do NOT use "_" (underscore) in the Nova server name
|
||||
# it creates a mismatch between the generated Nova name and its hostname
|
||||
# which can lead to weird problems
|
||||
swarm-master:
|
||||
type: "OS::Nova::Server"
|
||||
properties:
|
||||
name: {get_param: name}
|
||||
image:
|
||||
get_param: server_image
|
||||
flavor:
|
||||
get_param: server_flavor
|
||||
key_name:
|
||||
get_param: ssh_key_name
|
||||
user_data_format: RAW
|
||||
user_data: {get_resource: swarm_master_init}
|
||||
networks:
|
||||
- port:
|
||||
get_resource: swarm_master_eth0
|
||||
scheduler_hints: { group: { get_param: nodes_server_group_id }}
|
||||
availability_zone: {get_param: availability_zone}
|
||||
|
||||
swarm_master_eth0:
|
||||
type: "OS::Neutron::Port"
|
||||
properties:
|
||||
network_id:
|
||||
get_param: fixed_network_id
|
||||
security_groups:
|
||||
- {get_param: secgroup_swarm_master_id}
|
||||
fixed_ips:
|
||||
- subnet_id:
|
||||
get_param: fixed_subnet_id
|
||||
|
||||
swarm_master_floating:
|
||||
type: "Magnum::Optional::SwarmMaster::Neutron::FloatingIP"
|
||||
properties:
|
||||
floating_network:
|
||||
get_param: external_network
|
||||
port_id:
|
||||
get_resource: swarm_master_eth0
|
||||
|
||||
api_pool_member:
|
||||
type: Magnum::Optional::Neutron::LBaaS::PoolMember
|
||||
properties:
|
||||
pool: {get_param: api_pool_id}
|
||||
address: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
subnet: { get_param: fixed_subnet_id }
|
||||
protocol_port: {get_param: swarm_port}
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# docker storage. This allocates a cinder volume and attaches it
|
||||
# to the node.
|
||||
#
|
||||
|
||||
docker_volume:
|
||||
type: Magnum::Optional::Cinder::Volume
|
||||
properties:
|
||||
size: {get_param: docker_volume_size}
|
||||
volume_type: {get_param: docker_volume_type}
|
||||
|
||||
docker_volume_attach:
|
||||
type: Magnum::Optional::Cinder::VolumeAttachment
|
||||
properties:
|
||||
instance_uuid: {get_resource: swarm-master}
|
||||
volume_id: {get_resource: docker_volume}
|
||||
mountpoint: /dev/vdb
|
||||
|
||||
outputs:
|
||||
|
||||
swarm_master_ip:
|
||||
value: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
|
||||
description: >
|
||||
This is the "private" addresses of all the Swarm master.
|
||||
|
||||
swarm_master_external_ip:
|
||||
value: {get_attr: [swarm_master_floating, floating_ip_address]}
|
||||
description: >
|
||||
This is the "public" ip addresses of Swarm master.
|
@ -1,357 +0,0 @@
|
||||
heat_template_version: 2014-10-16
|
||||
|
||||
description: >
|
||||
This is a nested stack that defines a single swarm worker node, based on a
|
||||
vanilla Fedora Atomic image. This stack is included by a ResourceGroup
|
||||
resource in the parent template (swarmcluster.yaml).
|
||||
|
||||
parameters:
|
||||
|
||||
name:
|
||||
type: string
|
||||
description: server name
|
||||
|
||||
server_image:
|
||||
type: string
|
||||
description: glance image used to boot the server
|
||||
|
||||
server_flavor:
|
||||
type: string
|
||||
description: flavor to use when booting the server
|
||||
|
||||
ssh_key_name:
|
||||
type: string
|
||||
description: name of ssh key to be provisioned on our server
|
||||
|
||||
docker_volume_size:
|
||||
type: number
|
||||
description: >
|
||||
size of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_volume_type:
|
||||
type: string
|
||||
description: >
|
||||
type of a cinder volume to allocate to docker for container/image
|
||||
storage
|
||||
|
||||
docker_storage_driver:
|
||||
type: string
|
||||
description: docker storage driver name
|
||||
|
||||
external_network:
|
||||
type: string
|
||||
description: uuid/name of a network to use for floating ip addresses
|
||||
|
||||
fixed_network_id:
|
||||
type: string
|
||||
description: Network from which to allocate fixed addresses.
|
||||
|
||||
fixed_subnet_id:
|
||||
type: string
|
||||
description: Subnet from which to allocate fixed addresses.
|
||||
|
||||
http_proxy:
|
||||
type: string
|
||||
description: http proxy address for docker
|
||||
|
||||
https_proxy:
|
||||
type: string
|
||||
description: https proxy address for docker
|
||||
|
||||
no_proxy:
|
||||
type: string
|
||||
description: no proxies for docker
|
||||
|
||||
swarm_api_ip:
|
||||
type: string
|
||||
description: swarm master's api server ip address
|
||||
|
||||
api_ip_address:
|
||||
type: string
|
||||
description: swarm master's api server public ip address
|
||||
|
||||
cluster_uuid:
|
||||
type: string
|
||||
description: identifier for the cluster this template is generating
|
||||
|
||||
magnum_url:
|
||||
type: string
|
||||
description: endpoint to retrieve TLS certs from
|
||||
|
||||
tls_disabled:
|
||||
type: boolean
|
||||
description: whether or not to disable TLS
|
||||
|
||||
secgroup_swarm_node_id:
|
||||
type: string
|
||||
description: ID of the security group for swarm node.
|
||||
|
||||
trustee_domain_id:
|
||||
type: string
|
||||
description: domain id of the trustee
|
||||
|
||||
trustee_user_id:
|
||||
type: string
|
||||
description: user id of the trustee
|
||||
|
||||
trustee_username:
|
||||
type: string
|
||||
description: username of the trustee
|
||||
|
||||
trustee_password:
|
||||
type: string
|
||||
description: password of the trustee
|
||||
hidden: true
|
||||
|
||||
trust_id:
|
||||
type: string
|
||||
description: id of the trust which is used by the trustee
|
||||
hidden: true
|
||||
|
||||
auth_url:
|
||||
type: string
|
||||
description: url for keystone
|
||||
|
||||
volume_driver:
|
||||
type: string
|
||||
description: volume driver to use for container storage
|
||||
default: ""
|
||||
|
||||
rexray_preempt:
|
||||
type: string
|
||||
description: >
|
||||
enables any host to take control of a volume irrespective of whether
|
||||
other hosts are using the volume
|
||||
default: "false"
|
||||
|
||||
verify_ca:
|
||||
type: boolean
|
||||
description: whether or not to validate certificate authority
|
||||
|
||||
openstack_ca:
|
||||
type: string
|
||||
description: The OpenStack CA certificate to install on the node.
|
||||
|
||||
nodes_server_group_id:
|
||||
type: string
|
||||
description: ID of the server group for kubernetes cluster nodes.
|
||||
|
||||
availability_zone:
|
||||
type: string
|
||||
description: >
|
||||
availability zone for master and nodes
|
||||
default: ""
|
||||
|
||||
resources:
|
||||
|
||||
node_wait_handle:
|
||||
type: "OS::Heat::WaitConditionHandle"
|
||||
|
||||
node_wait_condition:
|
||||
type: "OS::Heat::WaitCondition"
|
||||
depends_on: swarm-node
|
||||
properties:
|
||||
handle: {get_resource: node_wait_handle}
|
||||
timeout: 6000
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# software configs. these are components that are combined into
|
||||
# a multipart MIME user-data archive.
|
||||
write_heat_params:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/write-heat-params-node.yaml}
|
||||
params:
|
||||
"$WAIT_CURL": {get_attr: [node_wait_handle, curl_cli]}
|
||||
"$DOCKER_VOLUME": {get_resource: docker_volume}
|
||||
"$DOCKER_VOLUME_SIZE": {get_param: docker_volume_size}
|
||||
"$DOCKER_STORAGE_DRIVER": {get_param: docker_storage_driver}
|
||||
"$HTTP_PROXY": {get_param: http_proxy}
|
||||
"$HTTPS_PROXY": {get_param: https_proxy}
|
||||
"$NO_PROXY": {get_param: no_proxy}
|
||||
"$SWARM_API_IP": {get_param: swarm_api_ip}
|
||||
"$SWARM_NODE_IP": {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}
|
||||
"$CLUSTER_UUID": {get_param: cluster_uuid}
|
||||
"$MAGNUM_URL": {get_param: magnum_url}
|
||||
"$TLS_DISABLED": {get_param: tls_disabled}
|
||||
"$API_IP_ADDRESS": {get_param: api_ip_address}
|
||||
"$TRUSTEE_DOMAIN_ID": {get_param: trustee_domain_id}
|
||||
"$TRUSTEE_USER_ID": {get_param: trustee_user_id}
|
||||
"$TRUSTEE_USERNAME": {get_param: trustee_username}
|
||||
"$TRUSTEE_PASSWORD": {get_param: trustee_password}
|
||||
"$TRUST_ID": {get_param: trust_id}
|
||||
"$AUTH_URL": {get_param: auth_url}
|
||||
"$VOLUME_DRIVER": {get_param: volume_driver}
|
||||
"$REXRAY_PREEMPT": {get_param: rexray_preempt}
|
||||
"$VERIFY_CA": {get_param: verify_ca}
|
||||
|
||||
install_openstack_ca:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
params:
|
||||
$OPENSTACK_CA: {get_param: openstack_ca}
|
||||
template: {get_file: ../../common/templates/fragments/atomic-install-openstack-ca.sh}
|
||||
|
||||
remove_docker_key:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/remove-docker-key.sh}
|
||||
|
||||
make_cert:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/make-cert.py}
|
||||
|
||||
configure_docker_storage:
|
||||
type: OS::Heat::SoftwareConfig
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
params:
|
||||
$configure_docker_storage_driver: {get_file: ../../common/templates/fragments/configure_docker_storage_driver_atomic.sh}
|
||||
template: {get_file: ../../common/templates/fragments/configure-docker-storage.sh}
|
||||
|
||||
add_docker_daemon_options:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/add-docker-daemon-options.sh}
|
||||
|
||||
write_docker_socket:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/write-docker-socket.yaml}
|
||||
|
||||
write_swarm_worker_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: fragments/write-swarm-worker-service.sh}
|
||||
|
||||
enable_services:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config:
|
||||
str_replace:
|
||||
template: {get_file: ../../common/templates/swarm/fragments/enable-services.sh}
|
||||
params:
|
||||
"$NODE_SERVICES": "docker.socket docker"
|
||||
|
||||
configure_selinux:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/configure-selinux.sh}
|
||||
|
||||
add_proxy:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/add-proxy.sh}
|
||||
|
||||
volume_service:
|
||||
type: "OS::Heat::SoftwareConfig"
|
||||
properties:
|
||||
group: ungrouped
|
||||
config: {get_file: ../../common/templates/swarm/fragments/volume-service.sh}
|
||||
|
||||
swarm_node_init:
|
||||
type: "OS::Heat::MultipartMime"
|
||||
properties:
|
||||
parts:
|
||||
- config: {get_resource: install_openstack_ca}
|
||||
- config: {get_resource: configure_selinux}
|
||||
- config: {get_resource: remove_docker_key}
|
||||
- config: {get_resource: write_heat_params}
|
||||
- config: {get_resource: make_cert}
|
||||
- config: {get_resource: configure_docker_storage}
|
||||
- config: {get_resource: add_docker_daemon_options}
|
||||
- config: {get_resource: write_docker_socket}
|
||||
- config: {get_resource: add_proxy}
|
||||
- config: {get_resource: enable_services}
|
||||
- config: {get_resource: write_swarm_worker_service}
|
||||
- config: {get_resource: volume_service}
|
||||
|
||||
# do NOT use "_" (underscore) in the Nova server name
|
||||
# it creates a mismatch between the generated Nova name and its hostname
|
||||
# which can lead to weird problems
|
||||
swarm-node:
|
||||
type: "OS::Nova::Server"
|
||||
properties:
|
||||
name: {get_param: name}
|
||||
image:
|
||||
get_param: server_image
|
||||
flavor:
|
||||
get_param: server_flavor
|
||||
key_name:
|
||||
get_param: ssh_key_name
|
||||
user_data_format: RAW
|
||||
user_data: {get_resource: swarm_node_init}
|
||||
networks:
|
||||
- port:
|
||||
get_resource: swarm_node_eth0
|
||||
scheduler_hints: { group: { get_param: nodes_server_group_id }}
|
||||
availability_zone: {get_param: availability_zone}
|
||||
|
||||
swarm_node_eth0:
|
||||
type: "OS::Neutron::Port"
|
||||
properties:
|
||||
network_id:
|
||||
get_param: fixed_network_id
|
||||
security_groups:
|
||||
- {get_param: secgroup_swarm_node_id}
|
||||
fixed_ips:
|
||||
- subnet_id:
|
||||
get_param: fixed_subnet_id
|
||||
|
||||
swarm_node_floating:
|
||||
type: "Magnum::Optional::SwarmNode::Neutron::FloatingIP"
|
||||
properties:
|
||||
floating_network:
|
||||
get_param: external_network
|
||||
port_id:
|
||||
get_resource: swarm_node_eth0
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# docker storage. This allocates a cinder volume and attaches it
|
||||
# to the node.
|
||||
#
|
||||
|
||||
docker_volume:
|
||||
type: Magnum::Optional::Cinder::Volume
|
||||
properties:
|
||||
size: {get_param: docker_volume_size}
|
||||
volume_type: {get_param: docker_volume_type}
|
||||
|
||||
docker_volume_attach:
|
||||
type: Magnum::Optional::Cinder::VolumeAttachment
|
||||
properties:
|
||||
instance_uuid: {get_resource: swarm-node}
|
||||
volume_id: {get_resource: docker_volume}
|
||||
mountpoint: /dev/vdb
|
||||
|
||||
outputs:
|
||||
|
||||
swarm_node_ip:
|
||||
value: {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}
|
||||
description: >
|
||||
This is the "private" address of the Swarm node.
|
||||
|
||||
swarm_node_external_ip:
|
||||
value: {get_attr: [swarm_node_floating, floating_ip_address]}
|
||||
description: >
|
||||
This is the "public" address of the Swarm node.
|
@ -1,17 +0,0 @@
|
||||
# Copyright 2016 - Rackspace Hosting
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
version = '2.0.0'
|
||||
driver = 'swarm_fedora_atomic_v2'
|
||||
container_version = '1.12.6'
|
@ -99,9 +99,9 @@ class ContainerStatus(fields.Enum):
|
||||
|
||||
class ClusterType(fields.Enum):
|
||||
ALL = (
|
||||
KUBERNETES, SWARM, SWARM_MODE,
|
||||
KUBERNETES,
|
||||
) = (
|
||||
'kubernetes', 'swarm', 'swarm-mode',
|
||||
'kubernetes',
|
||||
)
|
||||
|
||||
def __init__(self):
|
||||
|
@ -100,53 +100,6 @@ if [[ "$COE" == "kubernetes" ]]; then
|
||||
remote_exec $SSH_USER "sudo journalctl -u kube-enable-monitoring --no-pager" kube-enable-monitoring.service.log
|
||||
remote_exec $SSH_USER "sudo atomic containers list" atomic-containers-list.log
|
||||
remote_exec $SSH_USER "sudo atomic images list" atomic-images-list.log
|
||||
elif [[ "$COE" == "swarm" || "$COE" == "swarm-mode" ]]; then
|
||||
SSH_USER=fedora
|
||||
remote_exec $SSH_USER "sudo systemctl --full list-units --no-pager" systemctl_list_units.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u cloud-config --no-pager" cloud-config.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u cloud-final --no-pager" cloud-final.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u cloud-init-local --no-pager" cloud-init-local.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u cloud-init --no-pager" cloud-init.log
|
||||
remote_exec $SSH_USER "sudo cat /var/log/cloud-init-output.log" cloud-init-output.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u etcd --no-pager" etcd.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u swarm-manager --no-pager" swarm-manager.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u swarm-agent --no-pager" swarm-agent.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u swarm-worker --no-pager" swarm-worker.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u docker-storage-setup --no-pager" docker-storage-setup.log
|
||||
remote_exec $SSH_USER "sudo systemctl status docker-storage-setup -l" docker-storage-setup.service.status.log
|
||||
remote_exec $SSH_USER "sudo systemctl show docker-storage-setup --no-pager" docker-storage-setup.service.show.log
|
||||
remote_exec $SSH_USER "sudo cat /etc/sysconfig/docker-storage-setup 2>/dev/null" docker-storage-setup.sysconfig.env.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u docker --no-pager" docker.log
|
||||
remote_exec $SSH_USER "sudo journalctl -u docker-containerd --no-pager" docker-containerd.log
|
||||
remote_exec $SSH_USER "sudo systemctl status docker.socket -l" docker.socket.status.log
|
||||
remote_exec $SSH_USER "sudo systemctl show docker.socket --no-pager" docker.socket.show.log
|
||||
remote_exec $SSH_USER "sudo systemctl status docker -l" docker.service.status.log
|
||||
remote_exec $SSH_USER "sudo systemctl show docker --no-pager" docker.service.show.log
|
||||
remote_exec $SSH_USER "sudo cat /etc/sysconfig/docker" docker.sysconfig.env.log
|
||||
remote_exec $SSH_USER "sudo cat /etc/sysconfig/docker-storage" docker-storage.sysconfig.env.log
|
||||
remote_exec $SSH_USER "sudo cat /etc/sysconfig/docker-network" docker-network.sysconfig.env.log
|
||||
remote_exec $SSH_USER "sudo timeout 60s docker ps --all=true --no-trunc=true" docker-containers.log
|
||||
remote_exec $SSH_USER "sudo tar zcvf - /var/lib/docker/containers 2>/dev/null" docker-container-configs.tar.gz
|
||||
remote_exec $SSH_USER "sudo journalctl -u flanneld --no-pager" flanneld.log
|
||||
remote_exec $SSH_USER "sudo ip a" ipa.log
|
||||
remote_exec $SSH_USER "sudo netstat -an" netstat.log
|
||||
remote_exec $SSH_USER "sudo df -h" dfh.log
|
||||
remote_exec $SSH_USER "sudo cat /etc/sysconfig/heat-params" heat-params
|
||||
remote_exec $SSH_USER "sudo cat /etc/etcd/etcd.conf" etcd.conf
|
||||
remote_exec $SSH_USER "sudo ls -lR /etc/docker" docker-certs
|
||||
remote_exec $SSH_USER "sudo cat /etc/sysconfig/flanneld" flanneld.sysconfig
|
||||
remote_exec $SSH_USER "sudo cat /etc/sysconfig/flannel-network.json" flannel-network.json.sysconfig
|
||||
remote_exec $SSH_USER "sudo cat /usr/local/bin/flannel-docker-bridge" bin-flannel-docker-bridge
|
||||
remote_exec $SSH_USER "sudo cat /etc/systemd/system/docker.service.d/flannel.conf" docker-flannel.conf
|
||||
remote_exec $SSH_USER "sudo cat /etc/systemd/system/flanneld.service.d/flannel-docker-bridge.conf" flannel-docker-bridge.conf
|
||||
remote_exec $SSH_USER "sudo cat /etc/systemd/system/flannel-docker-bridge.service" flannel-docker-bridge.service
|
||||
remote_exec $SSH_USER "sudo cat /etc/systemd/system/swarm-manager.service" swarm-manager.service
|
||||
remote_exec $SSH_USER "sudo cat /etc/systemd/system/swarm-manager-failure.service" swarm-manager-failure.service
|
||||
remote_exec $SSH_USER "sudo cat /etc/systemd/system/swarm-agent.service" swarm-agent.service
|
||||
remote_exec $SSH_USER "sudo cat /etc/systemd/system/swarm-agent-failure.service" swarm-agent-failure.service
|
||||
remote_exec $SSH_USER "sudo cat /etc/systemd/system/swarm-worker.service" swarm-worker.service
|
||||
remote_exec $SSH_USER "sudo cat /usr/local/bin/magnum-start-swarm-manager" bin-magnum-start-swarm-manager
|
||||
remote_exec $SSH_USER "sudo cat /usr/local/bin/magnum-start-swarm-worker" bin-magnum-start-swarm-worker
|
||||
else
|
||||
echo "ERROR: Unknown COE '${COE}'"
|
||||
EXIT_CODE=1
|
||||
|
@ -155,7 +155,7 @@ export MAGNUM_DIR="$BASE/new/magnum"
|
||||
sudo chown -R $USER:stack $MAGNUM_DIR
|
||||
|
||||
# Run functional tests
|
||||
# Currently we support functional-api, functional-k8s, will support swarm.
|
||||
# Currently we support functional-api, functional-k8s.
|
||||
|
||||
echo "Running magnum functional test suite for $1"
|
||||
|
||||
|
@ -32,8 +32,6 @@ def random_int(min_int=1, max_int=100):
|
||||
def gen_coe_dep_network_driver(coe):
|
||||
allowed_driver_types = {
|
||||
'kubernetes': ['flannel', None],
|
||||
'swarm': ['docker', 'flannel', None],
|
||||
'swarm-mode': ['docker', None],
|
||||
}
|
||||
driver_types = allowed_driver_types[coe]
|
||||
return driver_types[random.randrange(0, len(driver_types))]
|
||||
@ -42,8 +40,6 @@ def gen_coe_dep_network_driver(coe):
|
||||
def gen_coe_dep_volume_driver(coe):
|
||||
allowed_driver_types = {
|
||||
'kubernetes': ['cinder', None],
|
||||
'swarm': ['rexray', None],
|
||||
'swarm-mode': ['rexray', None],
|
||||
}
|
||||
driver_types = allowed_driver_types[coe]
|
||||
return driver_types[random.randrange(0, len(driver_types))]
|
||||
@ -109,7 +105,7 @@ def cluster_template_data(**kwargs):
|
||||
|
||||
data = {
|
||||
"name": data_utils.rand_name('cluster'),
|
||||
"coe": "swarm-mode",
|
||||
"coe": "kubernetes",
|
||||
"tls_disabled": False,
|
||||
"network_driver": None,
|
||||
"volume_driver": None,
|
||||
@ -251,26 +247,6 @@ def cluster_template_valid_data_with_specific_coe(coe):
|
||||
image_id=config.Config.image_id, coe=coe)
|
||||
|
||||
|
||||
def valid_swarm_mode_cluster_template(is_public=False):
|
||||
"""Generates a valid swarm-mode cluster_template with valid data
|
||||
|
||||
:returns: ClusterTemplateEntity with generated data
|
||||
"""
|
||||
master_flavor_id = config.Config.master_flavor_id
|
||||
return cluster_template_data(image_id=config.Config.image_id,
|
||||
flavor_id=config.Config.flavor_id,
|
||||
public=is_public,
|
||||
dns_nameserver=config.Config.dns_nameserver,
|
||||
master_flavor_id=master_flavor_id,
|
||||
coe="swarm-mode",
|
||||
cluster_distro=None,
|
||||
external_network_id=config.Config.nic_id,
|
||||
http_proxy=None, https_proxy=None,
|
||||
no_proxy=None, network_driver=None,
|
||||
volume_driver=None, labels={},
|
||||
tls_disabled=False)
|
||||
|
||||
|
||||
def cluster_data(name=data_utils.rand_name('cluster'),
|
||||
cluster_template_id=data_utils.rand_uuid(),
|
||||
node_count=random_int(1, 5), discovery_url=gen_random_ip(),
|
||||
|
@ -376,12 +376,6 @@ extendedKeyUsage = clientAuth
|
||||
output_keys = []
|
||||
if self.cluster_template.coe == "kubernetes":
|
||||
output_keys = ["kube_masters", "kube_minions"]
|
||||
elif self.cluster_template.coe == "swarm":
|
||||
output_keys = ["swarm_masters", "swarm_nodes"]
|
||||
elif self.cluster_template.coe == "swarm-mode":
|
||||
output_keys = ["swarm_primary_master",
|
||||
"swarm_secondary_masters",
|
||||
"swarm_nodes"]
|
||||
|
||||
for output in stack_outputs:
|
||||
for key in output_keys:
|
||||
|
@ -1,152 +0,0 @@
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import time
|
||||
|
||||
from docker import errors
|
||||
from requests import exceptions as req_exceptions
|
||||
|
||||
from magnum.common import docker_utils
|
||||
import magnum.conf
|
||||
from magnum.tests.functional.python_client_base import ClusterTest
|
||||
|
||||
|
||||
CONF = magnum.conf.CONF
|
||||
|
||||
|
||||
class TestSwarmAPIs(ClusterTest):
|
||||
"""This class will cover swarm cluster basic functional testing.
|
||||
|
||||
Will test all kinds of container actions with tls_disabled=False mode.
|
||||
"""
|
||||
|
||||
coe = "swarm"
|
||||
cluster_template_kwargs = {
|
||||
"tls_disabled": False,
|
||||
"network_driver": None,
|
||||
"volume_driver": None,
|
||||
"labels": {}
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def setUpClass(cls):
|
||||
super(TestSwarmAPIs, cls).setUpClass()
|
||||
cls.cluster_is_ready = None
|
||||
|
||||
def setUp(self):
|
||||
super(TestSwarmAPIs, self).setUp()
|
||||
if self.cluster_is_ready is True:
|
||||
return
|
||||
# Note(eliqiao): In our test cases, the docker client or magnum client will
# try to connect to the swarm service running on the master node. The
# endpoint is cluster.api_address (listen port included), but the service
# is not ready right after the cluster is created, so sleep for an
# acceptable time to wait for the service to start.
# This is required; without it any api call will fail with
# 'ConnectionError: [Errno 111] Connection refused'.
|
||||
msg = ("If you see this error in the functional test, it means "
|
||||
"the docker service took too long to come up. This may not "
|
||||
"be an actual error, so an option is to rerun the "
|
||||
"functional test.")
|
||||
if self.cluster_is_ready is False:
|
||||
# In this case there is no need to run the tests below on the gate; raise a
# meaningful exception message to indicate that CA setup failed after
# cluster creation, and a `recheck` is the better option.
# We don't need to test since the cluster is not ready.
|
||||
raise Exception(msg)
|
||||
|
||||
url = self.cs.clusters.get(self.cluster.uuid).api_address
|
||||
|
||||
# Note(eliqiao): docker_utils.CONF.docker.default_timeout is 10,
|
||||
# but testing showed this default option does not work on the gate; it
# causes container creation to fail due to a timeout.
# Further debugging showed that the image must be pulled the first time a
# container is created, so set the timeout to 180s.
|
||||
|
||||
docker_api_time_out = 180
|
||||
self.docker_client = docker_utils.DockerHTTPClient(
|
||||
url,
|
||||
CONF.docker.docker_remote_api_version,
|
||||
docker_api_time_out,
|
||||
client_key=self.key_file,
|
||||
client_cert=self.cert_file,
|
||||
ca_cert=self.ca_file)
|
||||
|
||||
self.docker_client_non_tls = docker_utils.DockerHTTPClient(
|
||||
url,
|
||||
CONF.docker.docker_remote_api_version,
|
||||
docker_api_time_out)
|
||||
|
||||
def _container_operation(self, func, *args, **kwargs):
|
||||
# NOTE(hongbin): Swarm cluster occasionally aborts the connection,
|
||||
# so we re-try the operation several times here. In long-term, we
|
||||
# need to investigate the cause of this issue. See bug #1583337.
|
||||
for i in range(150):
|
||||
try:
|
||||
self.LOG.info("Calling function %s", func.__name__)
|
||||
return func(*args, **kwargs)
|
||||
except req_exceptions.ConnectionError:
|
||||
self.LOG.info("Connection aborted on calling Swarm API. "
|
||||
"Will retry in 2 seconds.")
|
||||
except errors.APIError as e:
|
||||
if e.response.status_code != 500:
|
||||
raise
|
||||
self.LOG.info("Internal Server Error: %s", e)
|
||||
time.sleep(2)
|
||||
|
||||
raise Exception("Cannot connect to Swarm API.")
|
||||
|
||||
def _create_container(self, **kwargs):
|
||||
image = kwargs.get('image', 'docker.io/cirros')
|
||||
command = kwargs.get('command', 'ping -c 1000 8.8.8.8')
|
||||
return self._container_operation(self.docker_client.create_container,
|
||||
image=image, command=command)
|
||||
|
||||
def test_start_stop_container_from_api(self):
|
||||
# Leverage docker client to create a container on the cluster we
|
||||
# created, and try to start and stop it then delete it.
|
||||
|
||||
resp = self._create_container(image="docker.io/cirros",
|
||||
command="ping -c 1000 8.8.8.8")
|
||||
|
||||
resp = self._container_operation(self.docker_client.containers,
|
||||
all=True)
|
||||
container_id = resp[0].get('Id')
|
||||
self._container_operation(self.docker_client.start,
|
||||
container=container_id)
|
||||
|
||||
resp = self._container_operation(self.docker_client.containers)
|
||||
self.assertEqual(1, len(resp))
|
||||
resp = self._container_operation(self.docker_client.inspect_container,
|
||||
container=container_id)
|
||||
self.assertTrue(resp['State']['Running'])
|
||||
|
||||
self._container_operation(self.docker_client.stop,
|
||||
container=container_id)
|
||||
resp = self._container_operation(self.docker_client.inspect_container,
|
||||
container=container_id)
|
||||
self.assertEqual(False, resp['State']['Running'])
|
||||
|
||||
self._container_operation(self.docker_client.remove_container,
|
||||
container=container_id)
|
||||
resp = self._container_operation(self.docker_client.containers)
|
||||
self.assertEqual([], resp)
|
||||
|
||||
def test_access_with_non_tls_client(self):
|
||||
"""Try to contact master's docker using the TCP protocol.
|
||||
|
||||
TCP returns ConnectionError whereas HTTPS returns SSLError. The
|
||||
default protocol we use in magnum is TCP, which works fine with docker
python SDK docker>=2.0.0.
|
||||
"""
|
||||
self.assertRaises(req_exceptions.ConnectionError,
|
||||
self.docker_client_non_tls.containers)
|
@ -1,125 +0,0 @@
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import docker
|
||||
import requests
|
||||
import time
|
||||
|
||||
import magnum.conf
|
||||
from magnum.tests.functional.python_client_base import ClusterTest
|
||||
|
||||
|
||||
CONF = magnum.conf.CONF
|
||||
|
||||
|
||||
class TestSwarmModeAPIs(ClusterTest):
|
||||
"""This class will cover swarm cluster basic functional testing.
|
||||
|
||||
Will test all kinds of container actions with tls_disabled=False mode.
|
||||
"""
|
||||
|
||||
coe = "swarm-mode"
|
||||
cluster_template_kwargs = {
|
||||
"tls_disabled": False,
|
||||
"network_driver": None,
|
||||
"volume_driver": None,
|
||||
"labels": {}
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def setUpClass(cls):
|
||||
super(TestSwarmModeAPIs, cls).setUpClass()
|
||||
cls.cluster_is_ready = None
|
||||
|
||||
def setUp(self):
|
||||
super(TestSwarmModeAPIs, self).setUp()
|
||||
if self.cluster_is_ready is True:
|
||||
return
|
||||
# Note(eliqiao): In our test cases, the docker client or magnum client will
# try to connect to the swarm service running on the master node. The
# endpoint is cluster.api_address (listen port included), but the service
# is not ready right after the cluster is created, so sleep for an
# acceptable time to wait for the service to start.
# This is required; without it any api call will fail with
# 'ConnectionError: [Errno 111] Connection refused'.
|
||||
msg = ("If you see this error in the functional test, it means "
|
||||
"the docker service took too long to come up. This may not "
|
||||
"be an actual error, so an option is to rerun the "
|
||||
"functional test.")
|
||||
if self.cluster_is_ready is False:
|
||||
# In this case there is no need to run the tests below on the gate; raise a
# meaningful exception message to indicate that CA setup failed after
# cluster creation, and a `recheck` is the better option.
# We don't need to test since the cluster is not ready.
|
||||
raise Exception(msg)
|
||||
|
||||
url = self.cs.clusters.get(self.cluster.uuid).api_address
|
||||
|
||||
# Note(eliqiao): docker_utils.CONF.docker.default_timeout is 10,
|
||||
# but testing showed this default option does not work on the gate; it
# causes container creation to fail due to a timeout.
# Further debugging showed that the image must be pulled the first time a
# container is created, so set the timeout to 180s.
|
||||
|
||||
docker_api_time_out = 180
|
||||
tls_config = docker.tls.TLSConfig(
|
||||
client_cert=(self.cert_file, self.key_file),
|
||||
verify=self.ca_file
|
||||
)
|
||||
|
||||
self.docker_client = docker.DockerClient(
|
||||
base_url=url,
|
||||
tls=tls_config,
|
||||
version='auto',
|
||||
timeout=docker_api_time_out)
|
||||
|
||||
self.docker_client_non_tls = docker.DockerClient(
|
||||
base_url=url,
|
||||
version='1.21',
|
||||
timeout=docker_api_time_out)
|
||||
|
||||
def test_create_remove_service(self):
|
||||
# Create and remove a service using docker python SDK.
|
||||
# Wait 15 mins until reach running and 5 mins until the service
|
||||
# is removed.
|
||||
|
||||
# Create an nginx service based on alpine linux
|
||||
service = self.docker_client.services.create(
|
||||
name='nginx',
|
||||
image='nginx:mainline-alpine')
|
||||
# wait for 15 mins to be running
|
||||
for i in range(90):
|
||||
if service.tasks()[0]['Status']['State'] == "running":
|
||||
break
|
||||
time.sleep(10)
|
||||
# Verify that it is running
|
||||
self.assertEqual('running', service.tasks()[0]['Status']['State'])
|
||||
# Remove the service and wait for 5 mins until it is removed
|
||||
service.remove()
|
||||
for i in range(30):
|
||||
if self.docker_client.services.list() == []:
|
||||
break
|
||||
time.sleep(10)
|
||||
# Verify that it is deleted
|
||||
self.assertEqual([], self.docker_client.services.list())
|
||||
|
||||
def test_access_with_non_tls_client(self):
|
||||
"""Try to contact master's docker using the tcp protocol.
|
||||
|
||||
tcp returns ConnectionError whereas https returns SSLError. The
|
||||
default protocol we use in magnum is tcp, which works fine with docker
python SDK docker>=2.0.0.
|
||||
"""
|
||||
try:
|
||||
self.docker_client_non_tls.info()
|
||||
except requests.exceptions.ConnectionError:
|
||||
pass
|
@ -262,10 +262,10 @@ class TestPatch(api_base.FunctionalTest):
|
||||
master_flavor_id='m1.magnum',
|
||||
external_network_id='public',
|
||||
keypair_id='test',
|
||||
volume_driver='rexray',
|
||||
volume_driver='cinder',
|
||||
public=False,
|
||||
docker_volume_size=20,
|
||||
coe='swarm',
|
||||
coe='kubernetes',
|
||||
labels={'key1': 'val1', 'key2': 'val2'},
|
||||
hidden=False
|
||||
)
|
||||
@ -800,6 +800,7 @@ class TestPost(api_base.FunctionalTest):
|
||||
cluster_template_dict,
|
||||
cluster_template_config_dict,
|
||||
expect_errors,
|
||||
expect_default_driver,
|
||||
mock_image_data):
|
||||
mock_image_data.return_value = {'name': 'mock_name',
|
||||
'os_distro': 'fedora-atomic'}
|
||||
@ -816,10 +817,10 @@ class TestPost(api_base.FunctionalTest):
|
||||
if expect_errors:
|
||||
self.assertEqual(400, response.status_int)
|
||||
else:
|
||||
expected_driver = bdict.get('network_driver')
|
||||
if not expected_driver:
|
||||
expected_driver = (
|
||||
cfg.CONF.cluster_template.swarm_default_network_driver)
|
||||
if expect_default_driver:
|
||||
expected_driver = 'flannel'
|
||||
else:
|
||||
expected_driver = bdict.get('network_driver')
|
||||
self.assertEqual(expected_driver,
|
||||
response.json['network_driver'])
|
||||
self.assertEqual(bdict['image_id'],
|
||||
@ -833,19 +834,23 @@ class TestPost(api_base.FunctionalTest):
|
||||
'network_driver': 'flannel'}
|
||||
config_dict = {} # Default config
|
||||
expect_errors_flag = False
|
||||
expect_default_driver_flag = False
|
||||
self._test_create_cluster_template_network_driver_attr(
|
||||
cluster_template_dict,
|
||||
config_dict,
|
||||
expect_errors_flag)
|
||||
expect_errors_flag,
|
||||
expect_default_driver_flag)
|
||||
|
||||
def test_create_cluster_template_with_no_network_driver(self):
|
||||
cluster_template_dict = {}
|
||||
config_dict = {}
|
||||
expect_errors_flag = False
|
||||
expect_default_driver_flag = True
|
||||
self._test_create_cluster_template_network_driver_attr(
|
||||
cluster_template_dict,
|
||||
config_dict,
|
||||
expect_errors_flag)
|
||||
expect_errors_flag,
|
||||
expect_default_driver_flag)
|
||||
|
||||
def test_create_cluster_template_with_network_driver_non_def_config(self):
|
||||
cluster_template_dict = {'coe': 'kubernetes',
|
||||
@ -853,10 +858,12 @@ class TestPost(api_base.FunctionalTest):
|
||||
config_dict = {
|
||||
'kubernetes_allowed_network_drivers': ['flannel', 'foo']}
|
||||
expect_errors_flag = False
|
||||
expect_default_driver_flag = False
|
||||
self._test_create_cluster_template_network_driver_attr(
|
||||
cluster_template_dict,
|
||||
config_dict,
|
||||
expect_errors_flag)
|
||||
expect_errors_flag,
|
||||
expect_default_driver_flag)
|
||||
|
||||
def test_create_cluster_template_with_invalid_network_driver(self):
|
||||
cluster_template_dict = {'coe': 'kubernetes',
|
||||
@ -864,10 +871,12 @@ class TestPost(api_base.FunctionalTest):
|
||||
config_dict = {
|
||||
'kubernetes_allowed_network_drivers': ['flannel', 'good_driver']}
|
||||
expect_errors_flag = True
|
||||
expect_default_driver_flag = False
|
||||
self._test_create_cluster_template_network_driver_attr(
|
||||
cluster_template_dict,
|
||||
config_dict,
|
||||
expect_errors_flag)
|
||||
expect_errors_flag,
|
||||
expect_default_driver_flag)
|
||||
|
||||
@mock.patch('magnum.api.attr_validator.validate_image')
|
||||
def test_create_cluster_template_with_volume_driver(self,
|
||||
@ -876,8 +885,8 @@ class TestPost(api_base.FunctionalTest):
|
||||
self.dbapi, 'create_cluster_template',
|
||||
wraps=self.dbapi.create_cluster_template) as cc_mock:
|
||||
mock_image_data.return_value = {'name': 'mock_name',
|
||||
'os_distro': 'fedora-atomic'}
|
||||
bdict = apiutils.cluster_template_post_data(volume_driver='rexray')
|
||||
'os_distro': 'fedora-coreos'}
|
||||
bdict = apiutils.cluster_template_post_data(volume_driver='cinder')
|
||||
response = self.post_json('/clustertemplates', bdict)
|
||||
self.assertEqual(bdict['volume_driver'],
|
||||
response.json['volume_driver'])
|
||||
@ -1138,9 +1147,9 @@ class TestPost(api_base.FunctionalTest):
|
||||
|
||||
def test_create_cluster_with_disabled_driver(self):
|
||||
cfg.CONF.set_override('disabled_drivers',
|
||||
['swarm_fedora_atomic_v1'],
|
||||
['kubernetes'],
|
||||
group='drivers')
|
||||
bdict = apiutils.cluster_template_post_data(coe="swarm")
|
||||
bdict = apiutils.cluster_template_post_data(coe="kubernetes")
|
||||
self.assertRaises(AppError, self.post_json, '/clustertemplates',
|
||||
bdict)
|
||||
|
||||
|
@ -151,12 +151,3 @@ class TestApiUtils(base.FunctionalTest):
|
||||
        self.assertEqual(
            "The attribute /node_count has existed, please use "
            "'replace' operation instead.", exc.faultstring)

    def test_validate_docker_memory(self):
        utils.validate_docker_memory('512m')
        utils.validate_docker_memory('512g')
        self.assertRaises(wsme.exc.ClientSideError,
                          utils.validate_docker_memory, "512gg")
        # Docker require that Minimum memory limit >= 4M
        self.assertRaises(wsme.exc.ClientSideError,
                          utils.validate_docker_memory, "3m")

@ -203,20 +203,6 @@ class TestAttrValidator(base.BaseTestCase):
        fake_labels = {}
        attr_validator.validate_labels(fake_labels)

    def test_validate_labels_strategy_valid(self):
        fake_labels = {'swarm_strategy': 'spread'}
        attr_validator.validate_labels_strategy(fake_labels)

    def test_validate_labels_strategy_missing(self):
        fake_labels = {'strategy': 'spread'}
        attr_validator.validate_labels_strategy(fake_labels)

    def test_validate_labels_strategy_invalid(self):
        fake_labels = {'swarm_strategy': 'invalid'}
        self.assertRaises(exception.InvalidParameterValue,
                          attr_validator.validate_labels_strategy,
                          fake_labels)

    @mock.patch('magnum.api.utils.get_openstack_resource')
    def test_validate_image_with_valid_image_by_name(self, mock_os_res):
        mock_image = {'name': 'fedora-21-atomic-5',
@ -310,16 +296,6 @@ class TestAttrValidator(base.BaseTestCase):
                          attr_validator.validate_os_resources,
                          mock_context, mock_cluster_template)

    @mock.patch('magnum.common.clients.OpenStackClients')
    @mock.patch('magnum.api.attr_validator.validate_labels')
    def test_validate_os_resources_with_label(self, mock_validate_labels,
                                              mock_os_cli):
        mock_cluster_template = {'labels': {'swarm_strategy': 'abc'}}
        mock_context = mock.MagicMock()
        self.assertRaises(exception.InvalidParameterValue,
                          attr_validator.validate_os_resources, mock_context,
                          mock_cluster_template)

    @mock.patch('magnum.common.clients.OpenStackClients')
    @mock.patch('magnum.api.attr_validator.validators')
    def test_validate_os_resources_without_validator(self, mock_validators,
@ -43,18 +43,6 @@ class UtilsTestCase(base.TestCase):
        self.assertRaises(exception.UnsupportedK8sQuantityFormat,
                          utils.get_k8s_quantity, '1E1E')

    def test_get_docker_quantity(self):
        self.assertEqual(512, utils.get_docker_quantity('512'))
        self.assertEqual(512, utils.get_docker_quantity('512b'))
        self.assertEqual(512 * 1024, utils.get_docker_quantity('512k'))
        self.assertEqual(512 * 1024 * 1024, utils.get_docker_quantity('512m'))
        self.assertEqual(512 * 1024 * 1024 * 1024,
                         utils.get_docker_quantity('512g'))
        self.assertRaises(exception.UnsupportedDockerQuantityFormat,
                          utils.get_docker_quantity, '512bb')
        self.assertRaises(exception.UnsupportedDockerQuantityFormat,
                          utils.get_docker_quantity, '512B')

    def test_get_openstasck_ca(self):
        # openstack_ca_file is empty
        self.assertEqual('', utils.get_openstack_ca())

|
@ -1,712 +0,0 @@
|
||||
# Copyright 2015 Hewlett-Packard Development Company, L.P.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
from unittest import mock
|
||||
from unittest.mock import patch
|
||||
|
||||
import magnum.conf
|
||||
from magnum.drivers.heat import driver as heat_driver
|
||||
from magnum.drivers.swarm_fedora_atomic_v1 import driver as swarm_dr
|
||||
from magnum import objects
|
||||
from magnum.objects.fields import ClusterStatus as cluster_status
|
||||
from magnum.tests import base
|
||||
|
||||
CONF = magnum.conf.CONF
|
||||
|
||||
|
||||
class TestClusterConductorWithSwarm(base.TestCase):
|
||||
def setUp(self):
|
||||
super(TestClusterConductorWithSwarm, self).setUp()
|
||||
self.cluster_template_dict = {
|
||||
'image_id': 'image_id',
|
||||
'flavor_id': 'flavor_id',
|
||||
'master_flavor_id': 'master_flavor_id',
|
||||
'keypair_id': 'keypair_id',
|
||||
'dns_nameserver': 'dns_nameserver',
|
||||
'docker_volume_size': 20,
|
||||
'docker_storage_driver': 'devicemapper',
|
||||
'external_network_id': 'external_network_id',
|
||||
'fixed_network': 'fixed_network',
|
||||
'fixed_subnet': 'fixed_subnet',
|
||||
'cluster_distro': 'fedora-atomic',
|
||||
'coe': 'swarm',
|
||||
'http_proxy': 'http_proxy',
|
||||
'https_proxy': 'https_proxy',
|
||||
'no_proxy': 'no_proxy',
|
||||
'tls_disabled': False,
|
||||
'registry_enabled': False,
|
||||
'server_type': 'vm',
|
||||
'network_driver': 'network_driver',
|
||||
'labels': {'docker_volume_type': 'lvmdriver-1',
|
||||
'flannel_network_cidr': '10.101.0.0/16',
|
||||
'flannel_network_subnetlen': '26',
|
||||
'flannel_backend': 'vxlan',
|
||||
'rexray_preempt': 'False',
|
||||
'swarm_strategy': 'spread',
|
||||
'availability_zone': 'az_1'},
|
||||
'master_lb_enabled': False,
|
||||
'volume_driver': 'rexray'
|
||||
}
|
||||
self.cluster_dict = {
|
||||
'id': 1,
|
||||
'uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'cluster_template_id': 'xx-xx-xx-xx',
|
||||
'keypair': 'keypair_id',
|
||||
'flavor_id': 'flavor_id',
|
||||
'docker_volume_size': 20,
|
||||
'master_flavor_id': 'master_flavor_id',
|
||||
'name': 'cluster1',
|
||||
'stack_id': 'xx-xx-xx-xx',
|
||||
'api_address': '172.17.2.3',
|
||||
'discovery_url': 'https://discovery.test.io/123456789',
|
||||
'trustee_username': 'fake_trustee',
|
||||
'trustee_password': 'fake_trustee_password',
|
||||
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
|
||||
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
|
||||
'labels': {'docker_volume_type': 'lvmdriver-1',
|
||||
'flannel_network_cidr': '10.101.0.0/16',
|
||||
'flannel_network_subnetlen': '26',
|
||||
'flannel_backend': 'vxlan',
|
||||
'rexray_preempt': 'False',
|
||||
'swarm_strategy': 'spread',
|
||||
'availability_zone': 'az_1'},
|
||||
'coe_version': 'fake-version',
|
||||
'fixed_network': '',
|
||||
'fixed_subnet': '',
|
||||
'floating_ip_enabled': False,
|
||||
'master_lb_enabled': False,
|
||||
}
|
||||
self.worker_ng_dict = {
|
||||
'uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a53',
|
||||
'name': 'worker_ng',
|
||||
'cluster_id': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'project_id': 'project_id',
|
||||
'docker_volume_size': 20,
|
||||
'labels': self.cluster_dict['labels'],
|
||||
'flavor_id': 'flavor_id',
|
||||
'image_id': 'image_id',
|
||||
'node_addresses': ['172.17.2.4'],
|
||||
'node_count': 1,
|
||||
'role': 'worker',
|
||||
'max_nodes': 5,
|
||||
'min_nodes': 1,
|
||||
'is_default': True
|
||||
}
|
||||
self.master_ng_dict = {
|
||||
'uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a54',
|
||||
'name': 'master_ng',
|
||||
'cluster_id': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'project_id': 'project_id',
|
||||
'docker_volume_size': 20,
|
||||
'labels': self.cluster_dict['labels'],
|
||||
'flavor_id': 'master_flavor_id',
|
||||
'image_id': 'image_id',
|
||||
'node_addresses': ['172.17.2.18'],
|
||||
'node_count': 1,
|
||||
'role': 'master',
|
||||
'max_nodes': 5,
|
||||
'min_nodes': 1,
|
||||
'is_default': True
|
||||
}
|
||||
|
||||
# We need this due to volume_driver=rexray
|
||||
CONF.set_override('cluster_user_trust',
|
||||
True,
|
||||
group='trust')
|
||||
|
||||
osc_patcher = mock.patch('magnum.common.clients.OpenStackClients')
|
||||
self.mock_osc_class = osc_patcher.start()
|
||||
self.addCleanup(osc_patcher.stop)
|
||||
self.mock_osc = mock.MagicMock()
|
||||
self.mock_osc.magnum_url.return_value = 'http://127.0.0.1:9511/v1'
|
||||
self.mock_osc.url_for.return_value = 'http://192.168.10.10:5000/v3'
|
||||
|
||||
mock_keypair = mock.MagicMock()
|
||||
mock_keypair.public_key = 'ssh-rsa AAAAB3Nz'
|
||||
self.mock_nova = mock.MagicMock()
|
||||
self.mock_nova.keypairs.get.return_value = mock_keypair
|
||||
self.mock_osc.nova.return_value = self.mock_nova
|
||||
|
||||
self.mock_keystone = mock.MagicMock()
|
||||
self.mock_keystone.trustee_domain_id = 'trustee_domain_id'
|
||||
self.mock_osc.keystone.return_value = self.mock_keystone
|
||||
self.mock_osc_class.return_value = self.mock_osc
|
||||
|
||||
@patch('requests.get')
|
||||
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
|
||||
@patch('magnum.objects.NodeGroup.list')
|
||||
@patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
def test_extract_template_definition_all_values(
|
||||
self,
|
||||
mock_driver,
|
||||
mock_objects_nodegroup_list,
|
||||
mock_objects_cluster_template_get_by_uuid,
|
||||
mock_get):
|
||||
cluster_template = objects.ClusterTemplate(
|
||||
self.context, **self.cluster_template_dict)
|
||||
mock_objects_cluster_template_get_by_uuid.return_value = \
|
||||
cluster_template
|
||||
expected_result = str('{"action":"get","node":{"key":"test","value":'
|
||||
'"1","modifiedIndex":10,"createdIndex":10}}')
|
||||
mock_resp = mock.MagicMock()
|
||||
mock_resp.text = expected_result
|
||||
mock_get.return_value = mock_resp
|
||||
mock_driver.return_value = swarm_dr.Driver()
|
||||
cluster = objects.Cluster(self.context, **self.cluster_dict)
|
||||
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
|
||||
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
|
||||
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
|
||||
|
||||
(template_path,
|
||||
definition,
|
||||
env_files) = mock_driver()._extract_template_definition(self.context,
|
||||
cluster)
|
||||
|
||||
expected = {
|
||||
'ssh_key_name': 'keypair_id',
|
||||
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
|
||||
'external_network': 'external_network_id',
|
||||
'fixed_network': 'fixed_network',
|
||||
'fixed_subnet': 'fixed_subnet',
|
||||
'dns_nameserver': 'dns_nameserver',
|
||||
'master_image': 'image_id',
|
||||
'node_image': 'image_id',
|
||||
'master_flavor': 'master_flavor_id',
|
||||
'node_flavor': 'flavor_id',
|
||||
'number_of_masters': 1,
|
||||
'number_of_nodes': 1,
|
||||
'docker_volume_size': 20,
|
||||
'docker_storage_driver': 'devicemapper',
|
||||
'discovery_url': 'https://discovery.test.io/123456789',
|
||||
'http_proxy': 'http_proxy',
|
||||
'https_proxy': 'https_proxy',
|
||||
'no_proxy': 'no_proxy',
|
||||
'cluster_uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'magnum_url': self.mock_osc.magnum_url.return_value,
|
||||
'tls_disabled': False,
|
||||
'registry_enabled': False,
|
||||
'network_driver': 'network_driver',
|
||||
'flannel_network_cidr': '10.101.0.0/16',
|
||||
'flannel_network_subnetlen': '26',
|
||||
'flannel_backend': 'vxlan',
|
||||
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
|
||||
'trustee_username': 'fake_trustee',
|
||||
'trustee_password': 'fake_trustee_password',
|
||||
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
|
||||
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
|
||||
'auth_url': 'http://192.168.10.10:5000/v3',
|
||||
'swarm_version': 'fake-version',
|
||||
'swarm_strategy': u'spread',
|
||||
'volume_driver': 'rexray',
|
||||
'rexray_preempt': 'False',
|
||||
'docker_volume_type': 'lvmdriver-1',
|
||||
'verify_ca': True,
|
||||
'openstack_ca': '',
|
||||
'nodes_affinity_policy': 'soft-anti-affinity',
|
||||
}
|
||||
self.assertEqual(expected, definition)
|
||||
self.assertEqual(
|
||||
['../../common/templates/environments/no_private_network.yaml',
|
||||
'../../common/templates/environments/with_volume.yaml',
|
||||
'../../common/templates/environments/no_master_lb.yaml'],
|
||||
env_files)
|
||||
|
||||
@patch('requests.get')
|
||||
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
|
||||
@patch('magnum.objects.NodeGroup.list')
|
||||
@patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
def test_extract_template_definition_with_registry(
|
||||
self,
|
||||
mock_driver,
|
||||
mock_objects_nodegroup_list,
|
||||
mock_objects_cluster_template_get_by_uuid,
|
||||
mock_get):
|
||||
self.cluster_template_dict['registry_enabled'] = True
|
||||
cluster_template = objects.ClusterTemplate(
|
||||
self.context, **self.cluster_template_dict)
|
||||
mock_objects_cluster_template_get_by_uuid.return_value = \
|
||||
cluster_template
|
||||
expected_result = str('{"action":"get","node":{"key":"test","value":'
|
||||
'"1","modifiedIndex":10,"createdIndex":10}}')
|
||||
mock_resp = mock.MagicMock()
|
||||
mock_resp.text = expected_result
|
||||
mock_get.return_value = mock_resp
|
||||
mock_driver.return_value = swarm_dr.Driver()
|
||||
cluster = objects.Cluster(self.context, **self.cluster_dict)
|
||||
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
|
||||
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
|
||||
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
|
||||
|
||||
CONF.set_override('swift_region',
|
||||
'RegionOne',
|
||||
group='docker_registry')
|
||||
|
||||
(template_path,
|
||||
definition,
|
||||
env_files) = mock_driver()._extract_template_definition(self.context,
|
||||
cluster)
|
||||
|
||||
expected = {
|
||||
'ssh_key_name': 'keypair_id',
|
||||
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
|
||||
'external_network': 'external_network_id',
|
||||
'fixed_network': 'fixed_network',
|
||||
'fixed_subnet': 'fixed_subnet',
|
||||
'dns_nameserver': 'dns_nameserver',
|
||||
'master_image': 'image_id',
|
||||
'node_image': 'image_id',
|
||||
'master_flavor': 'master_flavor_id',
|
||||
'node_flavor': 'flavor_id',
|
||||
'number_of_masters': 1,
|
||||
'number_of_nodes': 1,
|
||||
'docker_volume_size': 20,
|
||||
'discovery_url': 'https://discovery.test.io/123456789',
|
||||
'http_proxy': 'http_proxy',
|
||||
'https_proxy': 'https_proxy',
|
||||
'no_proxy': 'no_proxy',
|
||||
'cluster_uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'magnum_url': self.mock_osc.magnum_url.return_value,
|
||||
'tls_disabled': False,
|
||||
'registry_enabled': True,
|
||||
'registry_container': 'docker_registry',
|
||||
'swift_region': 'RegionOne',
|
||||
'network_driver': 'network_driver',
|
||||
'flannel_network_cidr': '10.101.0.0/16',
|
||||
'flannel_network_subnetlen': '26',
|
||||
'flannel_backend': 'vxlan',
|
||||
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
|
||||
'trustee_username': 'fake_trustee',
|
||||
'trustee_password': 'fake_trustee_password',
|
||||
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
|
||||
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
|
||||
'auth_url': 'http://192.168.10.10:5000/v3',
|
||||
'docker_storage_driver': 'devicemapper',
|
||||
'swarm_version': 'fake-version',
|
||||
'swarm_strategy': u'spread',
|
||||
'volume_driver': 'rexray',
|
||||
'rexray_preempt': 'False',
|
||||
'docker_volume_type': 'lvmdriver-1',
|
||||
'verify_ca': True,
|
||||
'openstack_ca': '',
|
||||
'nodes_affinity_policy': 'soft-anti-affinity',
|
||||
}
|
||||
self.assertEqual(expected, definition)
|
||||
self.assertEqual(
|
||||
['../../common/templates/environments/no_private_network.yaml',
|
||||
'../../common/templates/environments/with_volume.yaml',
|
||||
'../../common/templates/environments/no_master_lb.yaml'],
|
||||
env_files)
|
||||
|
||||
@patch('requests.get')
|
||||
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
|
||||
@patch('magnum.objects.NodeGroup.list')
|
||||
@patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
def test_extract_template_definition_only_required(
|
||||
self,
|
||||
mock_driver,
|
||||
mock_objects_nodegroup_list,
|
||||
mock_objects_cluster_template_get_by_uuid,
|
||||
mock_get):
|
||||
|
||||
not_required = ['image_id', 'flavor_id', 'dns_nameserver',
|
||||
'docker_volume_size', 'fixed_network', 'http_proxy',
|
||||
'https_proxy', 'no_proxy', 'network_driver',
|
||||
'master_flavor_id', 'docker_storage_driver',
|
||||
'volume_driver', 'rexray_preempt', 'fixed_subnet',
|
||||
'docker_volume_type', 'availablity_zone']
|
||||
for key in not_required:
|
||||
self.cluster_template_dict[key] = None
|
||||
self.cluster_dict['discovery_url'] = 'https://discovery.etcd.io/test'
|
||||
|
||||
cluster_template = objects.ClusterTemplate(
|
||||
self.context, **self.cluster_template_dict)
|
||||
mock_objects_cluster_template_get_by_uuid.return_value = \
|
||||
cluster_template
|
||||
expected_result = str('{"action":"get","node":{"key":"test","value":'
|
||||
'"1","modifiedIndex":10,"createdIndex":10}}')
|
||||
mock_resp = mock.MagicMock()
|
||||
mock_resp.text = expected_result
|
||||
mock_get.return_value = mock_resp
|
||||
mock_driver.return_value = swarm_dr.Driver()
|
||||
cluster = objects.Cluster(self.context, **self.cluster_dict)
|
||||
del self.worker_ng_dict['image_id']
|
||||
del self.master_ng_dict['image_id']
|
||||
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
|
||||
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
|
||||
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
|
||||
|
||||
(template_path,
|
||||
definition,
|
||||
env_files) = mock_driver()._extract_template_definition(self.context,
|
||||
cluster)
|
||||
|
||||
expected = {
|
||||
'ssh_key_name': 'keypair_id',
|
||||
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
|
||||
'external_network': 'external_network_id',
|
||||
'number_of_masters': 1,
|
||||
'number_of_nodes': 1,
|
||||
'discovery_url': 'https://discovery.etcd.io/test',
|
||||
'cluster_uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'magnum_url': self.mock_osc.magnum_url.return_value,
|
||||
'tls_disabled': False,
|
||||
'registry_enabled': False,
|
||||
'flannel_network_cidr': u'10.101.0.0/16',
|
||||
'flannel_network_subnetlen': u'26',
|
||||
'flannel_backend': u'vxlan',
|
||||
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
|
||||
'trustee_username': 'fake_trustee',
|
||||
'trustee_password': 'fake_trustee_password',
|
||||
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
|
||||
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
|
||||
'auth_url': 'http://192.168.10.10:5000/v3',
|
||||
'swarm_version': 'fake-version',
|
||||
'swarm_strategy': u'spread',
|
||||
'rexray_preempt': 'False',
|
||||
'docker_volume_type': 'lvmdriver-1',
|
||||
'docker_volume_size': 20,
|
||||
'master_flavor': 'master_flavor_id',
|
||||
'verify_ca': True,
|
||||
'node_flavor': 'flavor_id',
|
||||
'openstack_ca': '',
|
||||
'nodes_affinity_policy': 'soft-anti-affinity',
|
||||
}
|
||||
self.assertEqual(expected, definition)
|
||||
self.assertEqual(
|
||||
['../../common/templates/environments/with_private_network.yaml',
|
||||
'../../common/templates/environments/with_volume.yaml',
|
||||
'../../common/templates/environments/no_master_lb.yaml'],
|
||||
env_files)
|
||||
|
||||
@patch('requests.get')
|
||||
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
|
||||
@patch('magnum.objects.NodeGroup.list')
|
||||
@patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
@patch('magnum.common.keystone.KeystoneClientV3')
|
||||
def test_extract_template_definition_with_lb_neutron(
|
||||
self,
|
||||
mock_kc,
|
||||
mock_driver,
|
||||
mock_objects_nodegroup_list,
|
||||
mock_objects_cluster_template_get_by_uuid,
|
||||
mock_get):
|
||||
self.cluster_template_dict['master_lb_enabled'] = True
|
||||
cluster_template = objects.ClusterTemplate(
|
||||
self.context, **self.cluster_template_dict)
|
||||
mock_objects_cluster_template_get_by_uuid.return_value = \
|
||||
cluster_template
|
||||
expected_result = str('{"action":"get","node":{"key":"test","value":'
|
||||
'"1","modifiedIndex":10,"createdIndex":10}}')
|
||||
mock_resp = mock.MagicMock()
|
||||
mock_resp.text = expected_result
|
||||
mock_get.return_value = mock_resp
|
||||
mock_driver.return_value = swarm_dr.Driver()
|
||||
self.cluster_dict["master_lb_enabled"] = True
|
||||
cluster = objects.Cluster(self.context, **self.cluster_dict)
|
||||
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
|
||||
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
|
||||
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
|
||||
|
||||
mock_kc.return_value.client.services.list.return_value = []
|
||||
|
||||
(template_path,
|
||||
definition,
|
||||
env_files) = mock_driver()._extract_template_definition(self.context,
|
||||
cluster)
|
||||
|
||||
expected = {
|
||||
'ssh_key_name': 'keypair_id',
|
||||
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
|
||||
'external_network': 'external_network_id',
|
||||
'fixed_network': 'fixed_network',
|
||||
'fixed_subnet': 'fixed_subnet',
|
||||
'dns_nameserver': 'dns_nameserver',
|
||||
'master_image': 'image_id',
|
||||
'node_image': 'image_id',
|
||||
'master_flavor': 'master_flavor_id',
|
||||
'node_flavor': 'flavor_id',
|
||||
'number_of_masters': 1,
|
||||
'number_of_nodes': 1,
|
||||
'docker_volume_size': 20,
|
||||
'docker_storage_driver': 'devicemapper',
|
||||
'discovery_url': 'https://discovery.test.io/123456789',
|
||||
'http_proxy': 'http_proxy',
|
||||
'https_proxy': 'https_proxy',
|
||||
'no_proxy': 'no_proxy',
|
||||
'cluster_uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'magnum_url': self.mock_osc.magnum_url.return_value,
|
||||
'tls_disabled': False,
|
||||
'registry_enabled': False,
|
||||
'network_driver': 'network_driver',
|
||||
'flannel_network_cidr': '10.101.0.0/16',
|
||||
'flannel_network_subnetlen': '26',
|
||||
'flannel_backend': 'vxlan',
|
||||
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
|
||||
'trustee_username': 'fake_trustee',
|
||||
'trustee_password': 'fake_trustee_password',
|
||||
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
|
||||
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
|
||||
'auth_url': 'http://192.168.10.10:5000/v3',
|
||||
'swarm_version': 'fake-version',
|
||||
'swarm_strategy': u'spread',
|
||||
'volume_driver': 'rexray',
|
||||
'rexray_preempt': 'False',
|
||||
'docker_volume_type': 'lvmdriver-1',
|
||||
'verify_ca': True,
|
||||
'openstack_ca': '',
|
||||
'nodes_affinity_policy': 'soft-anti-affinity',
|
||||
}
|
||||
self.assertEqual(expected, definition)
|
||||
self.assertEqual(
|
||||
['../../common/templates/environments/no_private_network.yaml',
|
||||
'../../common/templates/environments/with_volume.yaml',
|
||||
'../../common/templates/environments/with_master_lb.yaml'],
|
||||
env_files)
|
||||
|
||||
@patch('requests.get')
|
||||
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
|
||||
@patch('magnum.objects.NodeGroup.list')
|
||||
@patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
@patch('magnum.common.keystone.KeystoneClientV3')
|
||||
def test_extract_template_definition_with_lb_octavia(
|
||||
self,
|
||||
mock_kc,
|
||||
mock_driver,
|
||||
mock_objects_nodegroup_list,
|
||||
mock_objects_cluster_template_get_by_uuid,
|
||||
mock_get):
|
||||
self.cluster_template_dict['master_lb_enabled'] = True
|
||||
cluster_template = objects.ClusterTemplate(
|
||||
self.context, **self.cluster_template_dict)
|
||||
mock_objects_cluster_template_get_by_uuid.return_value = \
|
||||
cluster_template
|
||||
expected_result = str('{"action":"get","node":{"key":"test","value":'
|
||||
'"1","modifiedIndex":10,"createdIndex":10}}')
|
||||
mock_resp = mock.MagicMock()
|
||||
mock_resp.text = expected_result
|
||||
mock_get.return_value = mock_resp
|
||||
mock_driver.return_value = swarm_dr.Driver()
|
||||
self.cluster_dict["master_lb_enabled"] = True
|
||||
cluster = objects.Cluster(self.context, **self.cluster_dict)
|
||||
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
|
||||
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
|
||||
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
|
||||
|
||||
class Service(object):
|
||||
def __init__(self):
|
||||
self.enabled = True
|
||||
|
||||
mock_kc.return_value.client.services.list.return_value = [Service()]
|
||||
|
||||
(template_path,
|
||||
definition,
|
||||
env_files) = mock_driver()._extract_template_definition(self.context,
|
||||
cluster)
|
||||
|
||||
expected = {
|
||||
'ssh_key_name': 'keypair_id',
|
||||
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
|
||||
'external_network': 'external_network_id',
|
||||
'fixed_network': 'fixed_network',
|
||||
'fixed_subnet': 'fixed_subnet',
|
||||
'dns_nameserver': 'dns_nameserver',
|
||||
'master_image': 'image_id',
|
||||
'node_image': 'image_id',
|
||||
'master_flavor': 'master_flavor_id',
|
||||
'node_flavor': 'flavor_id',
|
||||
'number_of_masters': 1,
|
||||
'number_of_nodes': 1,
|
||||
'docker_volume_size': 20,
|
||||
'docker_storage_driver': 'devicemapper',
|
||||
'discovery_url': 'https://discovery.test.io/123456789',
|
||||
'http_proxy': 'http_proxy',
|
||||
'https_proxy': 'https_proxy',
|
||||
'no_proxy': 'no_proxy',
|
||||
'cluster_uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'magnum_url': self.mock_osc.magnum_url.return_value,
|
||||
'tls_disabled': False,
|
||||
'registry_enabled': False,
|
||||
'network_driver': 'network_driver',
|
||||
'flannel_network_cidr': '10.101.0.0/16',
|
||||
'flannel_network_subnetlen': '26',
|
||||
'flannel_backend': 'vxlan',
|
||||
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
|
||||
'trustee_username': 'fake_trustee',
|
||||
'trustee_password': 'fake_trustee_password',
|
||||
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
|
||||
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
|
||||
'auth_url': 'http://192.168.10.10:5000/v3',
|
||||
'swarm_version': 'fake-version',
|
||||
'swarm_strategy': u'spread',
|
||||
'volume_driver': 'rexray',
|
||||
'rexray_preempt': 'False',
|
||||
'docker_volume_type': 'lvmdriver-1',
|
||||
'verify_ca': True,
|
||||
'openstack_ca': '',
|
||||
'nodes_affinity_policy': 'soft-anti-affinity',
|
||||
}
|
||||
self.assertEqual(expected, definition)
|
||||
self.assertEqual(
|
||||
['../../common/templates/environments/no_private_network.yaml',
|
||||
'../../common/templates/environments/with_volume.yaml',
|
||||
'../../common/templates/environments/with_master_lb_octavia.yaml'
|
||||
],
|
||||
env_files)
|
||||
|
||||
@patch('requests.get')
|
||||
@patch('magnum.objects.ClusterTemplate.get_by_uuid')
|
||||
@patch('magnum.objects.NodeGroup.list')
|
||||
@patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
@patch('magnum.common.keystone.KeystoneClientV3')
|
||||
def test_extract_template_definition_multi_master(
|
||||
self,
|
||||
mock_kc,
|
||||
mock_driver,
|
||||
mock_objects_nodegroup_list,
|
||||
mock_objects_cluster_template_get_by_uuid,
|
||||
mock_get):
|
||||
self.cluster_template_dict['master_lb_enabled'] = True
|
||||
self.master_ng_dict['node_count'] = 2
|
||||
cluster_template = objects.ClusterTemplate(
|
||||
self.context, **self.cluster_template_dict)
|
||||
mock_objects_cluster_template_get_by_uuid.return_value = \
|
||||
cluster_template
|
||||
expected_result = str('{"action":"get","node":{"key":"test","value":'
|
||||
'"2","modifiedIndex":10,"createdIndex":10}}')
|
||||
mock_resp = mock.MagicMock()
|
||||
mock_resp.text = expected_result
|
||||
mock_get.return_value = mock_resp
|
||||
mock_driver.return_value = swarm_dr.Driver()
|
||||
self.cluster_dict["master_lb_enabled"] = True
|
||||
cluster = objects.Cluster(self.context, **self.cluster_dict)
|
||||
worker_ng = objects.NodeGroup(self.context, **self.worker_ng_dict)
|
||||
master_ng = objects.NodeGroup(self.context, **self.master_ng_dict)
|
||||
mock_objects_nodegroup_list.return_value = [master_ng, worker_ng]
|
||||
|
||||
mock_kc.return_value.client.services.list.return_value = []
|
||||
|
||||
(template_path,
|
||||
definition,
|
||||
env_files) = mock_driver()._extract_template_definition(self.context,
|
||||
cluster)
|
||||
|
||||
expected = {
|
||||
'ssh_key_name': 'keypair_id',
|
||||
'ssh_public_key': 'ssh-rsa AAAAB3Nz',
|
||||
'external_network': 'external_network_id',
|
||||
'fixed_network': 'fixed_network',
|
||||
'fixed_subnet': 'fixed_subnet',
|
||||
'dns_nameserver': 'dns_nameserver',
|
||||
'master_image': 'image_id',
|
||||
'node_image': 'image_id',
|
||||
'master_flavor': 'master_flavor_id',
|
||||
'node_flavor': 'flavor_id',
|
||||
'number_of_masters': 2,
|
||||
'number_of_nodes': 1,
|
||||
'docker_volume_size': 20,
|
||||
'docker_storage_driver': 'devicemapper',
|
||||
'discovery_url': 'https://discovery.test.io/123456789',
|
||||
'http_proxy': 'http_proxy',
|
||||
'https_proxy': 'https_proxy',
|
||||
'no_proxy': 'no_proxy',
|
||||
'cluster_uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
|
||||
'magnum_url': self.mock_osc.magnum_url.return_value,
|
||||
'tls_disabled': False,
|
||||
'registry_enabled': False,
|
||||
'network_driver': 'network_driver',
|
||||
'flannel_network_cidr': '10.101.0.0/16',
|
||||
'flannel_network_subnetlen': '26',
|
||||
'flannel_backend': 'vxlan',
|
||||
'trustee_domain_id': self.mock_keystone.trustee_domain_id,
|
||||
'trustee_username': 'fake_trustee',
|
||||
'trustee_password': 'fake_trustee_password',
|
||||
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
|
||||
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
|
||||
'auth_url': 'http://192.168.10.10:5000/v3',
|
||||
'swarm_version': 'fake-version',
|
||||
'swarm_strategy': u'spread',
|
||||
'volume_driver': 'rexray',
|
||||
'rexray_preempt': 'False',
|
||||
'docker_volume_type': 'lvmdriver-1',
|
||||
'verify_ca': True,
|
||||
'openstack_ca': '',
|
||||
'nodes_affinity_policy': 'soft-anti-affinity',
|
||||
}
|
||||
self.assertEqual(expected, definition)
|
||||
self.assertEqual(
|
||||
['../../common/templates/environments/no_private_network.yaml',
|
||||
'../../common/templates/environments/with_volume.yaml',
|
||||
'../../common/templates/environments/with_master_lb.yaml'],
|
||||
env_files)
|
||||
|
||||
@patch('magnum.conductor.utils.retrieve_cluster_template')
|
||||
@patch('magnum.conf.CONF')
|
||||
@patch('magnum.common.clients.OpenStackClients')
|
||||
@patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
def setup_poll_test(self, mock_driver, mock_openstack_client, mock_conf,
|
||||
mock_retrieve_cluster_template):
|
||||
mock_conf.cluster_heat.max_attempts = 10
|
||||
|
||||
worker_ng = mock.MagicMock(
|
||||
uuid='5d12f6fd-a196-4bf0-ae4c-1f639a523a53',
|
||||
role='worker',
|
||||
node_count=1,
|
||||
)
|
||||
master_ng = mock.MagicMock(
|
||||
uuid='5d12f6fd-a196-4bf0-ae4c-1f639a523a54',
|
||||
role='master',
|
||||
node_count=1,
|
||||
)
|
||||
cluster = mock.MagicMock(nodegroups=[worker_ng, master_ng],
|
||||
default_ng_worker=worker_ng,
|
||||
default_ng_master=master_ng)
|
||||
mock_heat_stack = mock.MagicMock()
|
||||
mock_heat_client = mock.MagicMock()
|
||||
mock_heat_client.stacks.get.return_value = mock_heat_stack
|
||||
mock_openstack_client.heat.return_value = mock_heat_client
|
||||
cluster_template = objects.ClusterTemplate(
|
||||
self.context, **self.cluster_template_dict)
|
||||
mock_retrieve_cluster_template.return_value = \
|
||||
cluster_template
|
||||
mock_driver.return_value = swarm_dr.Driver()
|
||||
poller = heat_driver.HeatPoller(mock_openstack_client,
|
||||
mock.MagicMock(), cluster,
|
||||
swarm_dr.Driver())
|
||||
poller.template_def.add_nodegroup_params(cluster)
|
||||
poller.get_version_info = mock.MagicMock()
|
||||
return (mock_heat_stack, cluster, poller)
|
||||
|
||||
def test_poll_node_count(self):
|
||||
mock_heat_stack, cluster, poller = self.setup_poll_test()
|
||||
|
||||
mock_heat_stack.parameters = {
|
||||
'number_of_nodes': 1,
|
||||
'number_of_masters': 1
|
||||
}
|
||||
mock_heat_stack.stack_status = cluster_status.CREATE_IN_PROGRESS
|
||||
poller.poll_and_check()
|
||||
|
||||
self.assertEqual(1, cluster.default_ng_worker.node_count)
|
||||
|
||||
def test_poll_node_count_by_update(self):
|
||||
mock_heat_stack, cluster, poller = self.setup_poll_test()
|
||||
|
||||
mock_heat_stack.parameters = {
|
||||
'number_of_nodes': 2,
|
||||
'number_of_masters': 1
|
||||
}
|
||||
mock_heat_stack.stack_status = cluster_status.UPDATE_COMPLETE
|
||||
poller.poll_and_check()
|
||||
|
||||
self.assertEqual(2, cluster.default_ng_worker.node_count)
|
@ -20,8 +20,6 @@ from requests_mock.contrib import fixture
|
||||
|
||||
from magnum.common import exception
from magnum.drivers.common import k8s_monitor
from magnum.drivers.swarm_fedora_atomic_v1 import monitor as swarm_monitor
from magnum.drivers.swarm_fedora_atomic_v2 import monitor as swarm_v2_monitor
from magnum import objects
from magnum.objects import fields as m_fields
from magnum.tests import base
@ -59,174 +57,7 @@ class MonitorsTestCase(base.TestCase):
|
||||
objects.NodeGroup(self.context, **nodegroups['master']),
|
||||
objects.NodeGroup(self.context, **nodegroups['worker'])
|
||||
]
|
||||
self.monitor = swarm_monitor.SwarmMonitor(self.context, self.cluster)
|
||||
self.v2_monitor = swarm_v2_monitor.SwarmMonitor(self.context,
|
||||
self.cluster)
|
||||
self.k8s_monitor = k8s_monitor.K8sMonitor(self.context, self.cluster)
|
||||
p = mock.patch('magnum.drivers.swarm_fedora_atomic_v1.monitor.'
|
||||
'SwarmMonitor.metrics_spec',
|
||||
new_callable=mock.PropertyMock)
|
||||
self.mock_metrics_spec = p.start()
|
||||
self.mock_metrics_spec.return_value = self.test_metrics_spec
|
||||
self.addCleanup(p.stop)
|
||||
|
||||
p2 = mock.patch('magnum.drivers.swarm_fedora_atomic_v2.monitor.'
|
||||
'SwarmMonitor.metrics_spec',
|
||||
new_callable=mock.PropertyMock)
|
||||
self.mock_metrics_spec_v2 = p2.start()
|
||||
self.mock_metrics_spec_v2.return_value = self.test_metrics_spec
|
||||
self.addCleanup(p2.stop)
|
||||
|
||||
@mock.patch('magnum.common.docker_utils.docker_for_cluster')
|
||||
def test_swarm_monitor_pull_data_success(self, mock_docker_cluster):
|
||||
mock_docker = mock.MagicMock()
|
||||
mock_docker.info.return_value = {'DriverStatus': [[
|
||||
u' \u2514 Reserved Memory', u'0 B / 1 GiB']]}
|
||||
mock_docker.containers.return_value = [mock.MagicMock()]
|
||||
mock_docker.inspect_container.return_value = 'test_container'
|
||||
mock_docker_cluster.return_value.__enter__.return_value = mock_docker
|
||||
|
||||
self.monitor.pull_data()
|
||||
|
||||
self.assertEqual([{'MemTotal': 1073741824.0}],
|
||||
self.monitor.data['nodes'])
|
||||
self.assertEqual(['test_container'], self.monitor.data['containers'])
|
||||
|
||||
@mock.patch('magnum.common.docker_utils.docker_for_cluster')
|
||||
def test_swarm_v2_monitor_pull_data_success(self, mock_docker_cluster):
|
||||
mock_docker = mock.MagicMock()
|
||||
mock_docker.info.return_value = {'DriverStatus': [[
|
||||
u' \u2514 Reserved Memory', u'0 B / 1 GiB']]}
|
||||
mock_docker.containers.return_value = [mock.MagicMock()]
|
||||
mock_docker.inspect_container.return_value = 'test_container'
|
||||
mock_docker_cluster.return_value.__enter__.return_value = mock_docker
|
||||
|
||||
self.v2_monitor.pull_data()
|
||||
|
||||
self.assertEqual([{'MemTotal': 1073741824.0}],
|
||||
self.v2_monitor.data['nodes'])
|
||||
self.assertEqual(['test_container'],
|
||||
self.v2_monitor.data['containers'])
|
||||
|
||||
@mock.patch('magnum.common.docker_utils.docker_for_cluster')
|
||||
def test_swarm_monitor_pull_data_raise(self, mock_docker_cluster):
|
||||
mock_container = mock.MagicMock()
|
||||
mock_docker = mock.MagicMock()
|
||||
mock_docker.info.return_value = {'DriverStatus': [[
|
||||
u' \u2514 Reserved Memory', u'0 B / 1 GiB']]}
|
||||
mock_docker.containers.return_value = [mock_container]
|
||||
mock_docker.inspect_container.side_effect = Exception("inspect error")
|
||||
mock_docker_cluster.return_value.__enter__.return_value = mock_docker
|
||||
|
||||
self.monitor.pull_data()
|
||||
|
||||
self.assertEqual([{'MemTotal': 1073741824.0}],
|
||||
self.monitor.data['nodes'])
|
||||
self.assertEqual([mock_container], self.monitor.data['containers'])
|
||||
|
||||
@mock.patch('magnum.common.docker_utils.docker_for_cluster')
|
||||
def test_swarm_v2_monitor_pull_data_raise(self, mock_docker_cluster):
|
||||
mock_container = mock.MagicMock()
|
||||
mock_docker = mock.MagicMock()
|
||||
mock_docker.info.return_value = {'DriverStatus': [[
|
||||
u' \u2514 Reserved Memory', u'0 B / 1 GiB']]}
|
||||
mock_docker.containers.return_value = [mock_container]
|
||||
mock_docker.inspect_container.side_effect = Exception("inspect error")
|
||||
mock_docker_cluster.return_value.__enter__.return_value = mock_docker
|
||||
|
||||
self.v2_monitor.pull_data()
|
||||
|
||||
self.assertEqual([{'MemTotal': 1073741824.0}],
|
||||
self.v2_monitor.data['nodes'])
|
||||
self.assertEqual([mock_container], self.v2_monitor.data['containers'])
|
||||
|
||||
def test_swarm_monitor_get_metric_names(self):
|
||||
names = self.monitor.get_metric_names()
|
||||
self.assertEqual(sorted(['metric1', 'metric2']), sorted(names))
|
||||
|
||||
def test_swarm_v2_monitor_get_metric_names(self):
|
||||
names = self.v2_monitor.get_metric_names()
|
||||
self.assertEqual(sorted(['metric1', 'metric2']), sorted(names))
|
||||
|
||||
def test_swarm_monitor_get_metric_unit(self):
|
||||
unit = self.monitor.get_metric_unit('metric1')
|
||||
self.assertEqual('metric1_unit', unit)
|
||||
|
||||
def test_swarm_v2_monitor_get_metric_unit(self):
|
||||
unit = self.v2_monitor.get_metric_unit('metric1')
|
||||
self.assertEqual('metric1_unit', unit)
|
||||
|
||||
def test_swarm_monitor_compute_metric_value(self):
|
||||
mock_func = mock.MagicMock()
|
||||
mock_func.return_value = 'metric1_value'
|
||||
self.monitor.metric1_func = mock_func
|
||||
value = self.monitor.compute_metric_value('metric1')
|
||||
self.assertEqual('metric1_value', value)
|
||||
|
||||
def test_swarm_v2_monitor_compute_metric_value(self):
|
||||
mock_func = mock.MagicMock()
|
||||
mock_func.return_value = 'metric1_value'
|
||||
self.v2_monitor.metric1_func = mock_func
|
||||
value = self.v2_monitor.compute_metric_value('metric1')
|
||||
self.assertEqual('metric1_value', value)
|
||||
|
||||
def test_swarm_monitor_compute_memory_util(self):
|
||||
test_data = {
|
||||
'nodes': [
|
||||
{
|
||||
'Name': 'node',
|
||||
'MemTotal': 20,
|
||||
},
|
||||
],
|
||||
'containers': [
|
||||
{
|
||||
'Name': 'container',
|
||||
'HostConfig': {
|
||||
'Memory': 10,
|
||||
},
|
||||
},
|
||||
],
|
||||
}
|
||||
self.monitor.data = test_data
|
||||
mem_util = self.monitor.compute_memory_util()
|
||||
self.assertEqual(50, mem_util)
|
||||
|
||||
test_data = {
|
||||
'nodes': [],
|
||||
'containers': [],
|
||||
}
|
||||
self.monitor.data = test_data
|
||||
mem_util = self.monitor.compute_memory_util()
|
||||
self.assertEqual(0, mem_util)
|
||||
|
||||
def test_swarm_v2_monitor_compute_memory_util(self):
|
||||
test_data = {
|
||||
'nodes': [
|
||||
{
|
||||
'Name': 'node',
|
||||
'MemTotal': 20,
|
||||
},
|
||||
],
|
||||
'containers': [
|
||||
{
|
||||
'Name': 'container',
|
||||
'HostConfig': {
|
||||
'Memory': 10,
|
||||
},
|
||||
},
|
||||
],
|
||||
}
|
||||
self.v2_monitor.data = test_data
|
||||
mem_util = self.v2_monitor.compute_memory_util()
|
||||
self.assertEqual(50, mem_util)
|
||||
|
||||
test_data = {
|
||||
'nodes': [],
|
||||
'containers': [],
|
||||
}
|
||||
self.v2_monitor.data = test_data
|
||||
mem_util = self.v2_monitor.compute_memory_util()
|
||||
self.assertEqual(0, mem_util)
|
||||
|
||||
@mock.patch('magnum.conductor.k8s_api.create_client_files')
|
||||
def test_k8s_monitor_pull_data_success(self, mock_create_client_files):
|
||||
|
@ -42,7 +42,7 @@ def get_test_cluster_template(**kw):
|
||||
'docker_storage_driver': kw.get('docker_storage_driver',
|
||||
'devicemapper'),
|
||||
'cluster_distro': kw.get('cluster_distro', 'fedora-atomic'),
|
||||
'coe': kw.get('coe', 'swarm'),
|
||||
'coe': kw.get('coe', 'kubernetes'),
|
||||
'created_at': kw.get('created_at'),
|
||||
'updated_at': kw.get('updated_at'),
|
||||
'labels': kw.get('labels', {'key1': 'val1', 'key2': 'val2'}),
|
||||
|
@ -28,10 +28,6 @@ from magnum.drivers.k8s_fedora_atomic_v1 import driver as k8sa_dr
|
||||
from magnum.drivers.k8s_fedora_atomic_v1 import template_def as k8sa_tdef
|
||||
from magnum.drivers.k8s_fedora_ironic_v1 import driver as k8s_i_dr
|
||||
from magnum.drivers.k8s_fedora_ironic_v1 import template_def as k8si_tdef
|
||||
from magnum.drivers.swarm_fedora_atomic_v1 import driver as swarm_dr
|
||||
from magnum.drivers.swarm_fedora_atomic_v1 import template_def as swarm_tdef
|
||||
from magnum.drivers.swarm_fedora_atomic_v2 import driver as swarm_v2_dr
|
||||
from magnum.drivers.swarm_fedora_atomic_v2 import template_def as swarm_v2_tdef
|
||||
from magnum.tests import base
|
||||
|
||||
from requests import exceptions as req_exceptions
|
||||
@ -86,28 +82,6 @@ class TemplateDefinitionTestCase(base.TestCase):
|
||||
self.assertIsInstance(definition,
|
||||
k8s_coreos_tdef.CoreOSK8sTemplateDefinition)
|
||||
|
||||
@mock.patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
def test_get_vm_atomic_swarm_definition(self, mock_driver):
|
||||
mock_driver.return_value = swarm_dr.Driver()
|
||||
cluster_driver = driver.Driver.get_driver('vm',
|
||||
'fedora-atomic',
|
||||
'swarm')
|
||||
definition = cluster_driver.get_template_definition()
|
||||
|
||||
self.assertIsInstance(definition,
|
||||
swarm_tdef.AtomicSwarmTemplateDefinition)
|
||||
|
||||
@mock.patch('magnum.drivers.common.driver.Driver.get_driver')
|
||||
def test_get_vm_atomic_swarm_v2_definition(self, mock_driver):
|
||||
mock_driver.return_value = swarm_v2_dr.Driver()
|
||||
cluster_driver = driver.Driver.get_driver('vm',
|
||||
'fedora-atomic',
|
||||
'swarm-mode')
|
||||
definition = cluster_driver.get_template_definition()
|
||||
|
||||
self.assertIsInstance(definition,
|
||||
swarm_v2_tdef.AtomicSwarmTemplateDefinition)
|
||||
|
||||
def test_get_driver_not_supported(self):
|
||||
self.assertRaises(exception.ClusterTypeNotSupported,
|
||||
driver.Driver.get_driver,
|
||||
@ -1456,19 +1430,6 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
|
||||
template_definition = k8sa_tdef.AtomicK8sTemplateDefinition()
|
||||
self._test_update_outputs_api_address(template_definition, params)
|
||||
|
||||
def test_update_swarm_outputs_api_address(self):
|
||||
address = 'updated_address'
|
||||
protocol = 'tcp'
|
||||
port = '2376'
|
||||
params = {
|
||||
'protocol': protocol,
|
||||
'address': address,
|
||||
'port': port,
|
||||
}
|
||||
|
||||
template_definition = swarm_tdef.AtomicSwarmTemplateDefinition()
|
||||
self._test_update_outputs_api_address(template_definition, params)
|
||||
|
||||
def test_update_k8s_outputs_if_cluster_template_is_secure(self):
|
||||
address = 'updated_address'
|
||||
protocol = 'https'
|
||||
@ -1482,20 +1443,6 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
|
||||
self._test_update_outputs_api_address(template_definition, params,
|
||||
tls=False)
|
||||
|
||||
def test_update_swarm_outputs_if_cluster_template_is_secure(self):
|
||||
address = 'updated_address'
|
||||
protocol = 'tcp'
|
||||
port = '2376'
|
||||
params = {
|
||||
'protocol': protocol,
|
||||
'address': address,
|
||||
'port': port,
|
||||
}
|
||||
|
||||
template_definition = swarm_tdef.AtomicSwarmTemplateDefinition()
|
||||
self._test_update_outputs_api_address(template_definition, params,
|
||||
tls=False)
|
||||
|
||||
def _test_update_outputs_none_api_address(self, template_definition,
|
||||
params, tls=True):
|
||||
|
||||
@ -1528,17 +1475,6 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
|
||||
template_definition = k8sa_tdef.AtomicK8sTemplateDefinition()
|
||||
self._test_update_outputs_none_api_address(template_definition, params)
|
||||
|
||||
def test_update_swarm_outputs_none_api_address(self):
|
||||
protocol = 'tcp'
|
||||
port = '2376'
|
||||
params = {
|
||||
'protocol': protocol,
|
||||
'address': None,
|
||||
'port': port,
|
||||
}
|
||||
template_definition = swarm_tdef.AtomicSwarmTemplateDefinition()
|
||||
self._test_update_outputs_none_api_address(template_definition, params)
|
||||
|
||||
def test_update_outputs_master_address(self):
|
||||
self._test_update_outputs_server_address(
|
||||
public_ip_output_key='kube_masters',
|
||||
@ -1719,389 +1655,3 @@ class FedoraK8sIronicTemplateDefinitionTestCase(base.TestCase):
|
||||
ex,
|
||||
n_exception.ServiceUnavailable,
|
||||
)
|
||||
|
||||
|
||||
class AtomicSwarmModeTemplateDefinitionTestCase(base.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
super(AtomicSwarmModeTemplateDefinitionTestCase, self).setUp()
|
||||
self.master_ng = mock.MagicMock(uuid='master_ng', role='master')
|
||||
self.worker_ng = mock.MagicMock(uuid='worker_ng', role='worker')
|
||||
self.nodegroups = [self.master_ng, self.worker_ng]
|
||||
self.mock_cluster = mock.MagicMock(nodegroups=self.nodegroups,
|
||||
default_ng_worker=self.worker_ng,
|
||||
default_ng_master=self.master_ng)
|
||||
|
||||
def get_definition(self):
|
||||
return swarm_v2_dr.Driver().get_template_definition()
|
||||
|
||||
def _test_update_outputs_server_address(
|
||||
self,
|
||||
floating_ip_enabled=True,
|
||||
public_ip_output_key='swarm_nodes',
|
||||
private_ip_output_key='swarm_nodes_private',
|
||||
cluster_attr=None,
|
||||
nodegroup_attr=None,
|
||||
is_master=False
|
||||
):
|
||||
|
||||
definition = self.get_definition()
|
||||
|
||||
expected_address = expected_public_address = ['public']
|
||||
expected_private_address = ['private']
|
||||
if not floating_ip_enabled:
|
||||
expected_address = expected_private_address
|
||||
|
||||
outputs = [
|
||||
{"output_value": expected_public_address,
|
||||
"description": "No description given",
|
||||
"output_key": public_ip_output_key},
|
||||
{"output_value": expected_private_address,
|
||||
"description": "No description given",
|
||||
"output_key": private_ip_output_key},
|
||||
]
|
||||
mock_stack = mock.MagicMock()
|
||||
mock_stack.to_dict.return_value = {'outputs': outputs}
|
||||
mock_cluster_template = mock.MagicMock()
|
||||
mock_cluster_template.floating_ip_enabled = floating_ip_enabled
|
||||
self.mock_cluster.floating_ip_enabled = floating_ip_enabled
|
||||
|
||||
definition.update_outputs(mock_stack, mock_cluster_template,
|
||||
self.mock_cluster)
|
||||
|
||||
actual = None
|
||||
if cluster_attr:
|
||||
actual = getattr(self.mock_cluster, cluster_attr)
|
||||
elif is_master:
|
||||
actual = getattr(self.master_ng, nodegroup_attr)
|
||||
else:
|
||||
actual = getattr(self.worker_ng, nodegroup_attr)
|
||||
self.assertEqual(expected_address, actual)
|
||||
|
||||
@mock.patch('magnum.common.clients.OpenStackClients')
|
||||
@mock.patch('magnum.drivers.swarm_fedora_atomic_v2.template_def'
|
||||
'.AtomicSwarmTemplateDefinition.get_discovery_url')
|
||||
@mock.patch('magnum.drivers.heat.template_def.BaseTemplateDefinition'
|
||||
'.get_params')
|
||||
@mock.patch('magnum.drivers.heat.template_def.TemplateDefinition'
|
||||
'.get_output')
|
||||
def test_swarm_get_params(self, mock_get_output, mock_get_params,
|
||||
mock_get_discovery_url, mock_osc_class):
|
||||
mock_context = mock.MagicMock()
|
||||
mock_context.auth_token = 'AUTH_TOKEN'
|
||||
mock_cluster_template = mock.MagicMock()
|
||||
mock_cluster_template.tls_disabled = False
|
||||
mock_cluster_template.registry_enabled = False
|
||||
mock_cluster = mock.MagicMock()
|
||||
mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
|
||||
del mock_cluster.stack_id
|
||||
mock_osc = mock.MagicMock()
|
||||
mock_osc.magnum_url.return_value = 'http://127.0.0.1:9511/v1'
|
||||
mock_osc_class.return_value = mock_osc
|
||||
|
||||
discovery_url = 'fake_discovery_url'
|
||||
mock_get_discovery_url.return_value = discovery_url
|
||||
|
||||
mock_context.auth_url = 'http://192.168.10.10:5000/v3'
|
||||
mock_context.user_name = 'fake_user'
|
||||
mock_context.tenant = 'fake_tenant'
|
||||
|
||||
docker_volume_type = mock_cluster.labels.get(
|
||||
'docker_volume_type')
|
||||
rexray_preempt = mock_cluster.labels.get('rexray_preempt')
|
||||
availability_zone = mock_cluster.labels.get(
|
||||
'availability_zone')
|
||||
|
||||
number_of_secondary_masters = mock_cluster.master_count - 1
|
||||
|
||||
swarm_def = swarm_v2_tdef.AtomicSwarmTemplateDefinition()
|
||||
|
||||
swarm_def.get_params(mock_context, mock_cluster_template, mock_cluster)
|
||||
|
||||
expected_kwargs = {'extra_params': {
|
||||
'magnum_url': mock_osc.magnum_url.return_value,
|
||||
'auth_url': 'http://192.168.10.10:5000/v3',
|
||||
'rexray_preempt': rexray_preempt,
|
||||
'docker_volume_type': docker_volume_type,
|
||||
'number_of_secondary_masters': number_of_secondary_masters,
|
||||
'availability_zone': availability_zone,
|
||||
'nodes_affinity_policy': 'soft-anti-affinity'}}
|
||||
mock_get_params.assert_called_once_with(mock_context,
|
||||
mock_cluster_template,
|
||||
mock_cluster,
|
||||
**expected_kwargs)
|
||||
|
||||
def test_swarm_get_heat_param(self):
|
||||
swarm_def = swarm_v2_tdef.AtomicSwarmTemplateDefinition()
|
||||
|
||||
swarm_def.add_nodegroup_params(self.mock_cluster)
|
||||
heat_param = swarm_def.get_heat_param(nodegroup_attr='node_count',
|
||||
nodegroup_uuid='worker_ng')
|
||||
self.assertEqual('number_of_nodes', heat_param)
|
||||
heat_param = swarm_def.get_heat_param(cluster_attr='uuid')
|
||||
self.assertEqual('cluster_uuid', heat_param)
|
||||
|
||||
def test_swarm_get_scale_params(self):
|
||||
mock_context = mock.MagicMock()
|
||||
swarm_def = swarm_v2_tdef.AtomicSwarmTemplateDefinition()
|
||||
self.assertEqual(
|
||||
swarm_def.get_scale_params(mock_context, self.mock_cluster, 7),
|
||||
{'number_of_nodes': 7})
|
||||
|
||||
def test_update_outputs(self):
|
||||
swarm_def = swarm_v2_tdef.AtomicSwarmTemplateDefinition()
|
||||
|
||||
expected_api_address = 'updated_address'
|
||||
expected_node_addresses = ['ex_minion', 'address']
|
||||
|
||||
outputs = [
|
||||
{"output_value": expected_api_address,
|
||||
"description": "No description given",
|
||||
"output_key": "api_address"},
|
||||
{"output_value": ['any', 'output'],
|
||||
"description": "No description given",
|
||||
"output_key": "swarm_master_private"},
|
||||
{"output_value": ['any', 'output'],
|
||||
"description": "No description given",
|
||||
"output_key": "swarm_master"},
|
||||
{"output_value": ['any', 'output'],
|
||||
"description": "No description given",
|
||||
"output_key": "swarm_nodes_private"},
|
||||
{"output_value": expected_node_addresses,
|
||||
"description": "No description given",
|
||||
"output_key": "swarm_nodes"},
|
||||
]
|
||||
mock_stack = mock.MagicMock()
|
||||
mock_stack.to_dict.return_value = {'outputs': outputs}
|
||||
mock_cluster_template = mock.MagicMock()
|
||||
|
||||
swarm_def.update_outputs(mock_stack, mock_cluster_template,
|
||||
self.mock_cluster)
|
||||
expected_api_address = "tcp://%s:2375" % expected_api_address
|
||||
self.assertEqual(expected_api_address, self.mock_cluster.api_address)
|
||||
self.assertEqual(expected_node_addresses,
|
||||
self.mock_cluster.default_ng_worker.node_addresses)
|
||||
|
||||
def test_update_outputs_master_address(self):
|
||||
self._test_update_outputs_server_address(
|
||||
public_ip_output_key='swarm_primary_master',
|
||||
private_ip_output_key='swarm_primary_master_private',
|
||||
nodegroup_attr='node_addresses',
|
||||
is_master=True
|
||||
)
|
||||
|
||||
def test_update_outputs_node_address(self):
|
||||
self._test_update_outputs_server_address(
|
||||
public_ip_output_key='swarm_nodes',
|
||||
private_ip_output_key='swarm_nodes_private',
|
||||
nodegroup_attr='node_addresses',
|
||||
is_master=False
|
||||
)
|
||||
|
||||
def test_update_outputs_master_address_fip_disabled(self):
|
||||
self._test_update_outputs_server_address(
|
||||
floating_ip_enabled=False,
|
||||
public_ip_output_key='swarm_primary_master',
|
||||
private_ip_output_key='swarm_primary_master_private',
|
||||
nodegroup_attr='node_addresses',
|
||||
is_master=True
|
||||
)
|
||||
|
||||
def test_update_outputs_node_address_fip_disabled(self):
|
||||
self._test_update_outputs_server_address(
|
||||
floating_ip_enabled=False,
|
||||
public_ip_output_key='swarm_nodes',
|
||||
private_ip_output_key='swarm_nodes_private',
|
||||
nodegroup_attr='node_addresses',
|
||||
is_master=False
|
||||
)
|
||||
|
||||
|
||||
class AtomicSwarmTemplateDefinitionTestCase(base.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
super(AtomicSwarmTemplateDefinitionTestCase, self).setUp()
|
||||
self.master_ng = mock.MagicMock(uuid='master_ng', role='master')
|
||||
self.worker_ng = mock.MagicMock(uuid='worker_ng', role='worker')
|
||||
self.nodegroups = [self.master_ng, self.worker_ng]
|
||||
self.mock_cluster = mock.MagicMock(nodegroups=self.nodegroups,
|
||||
default_ng_worker=self.worker_ng,
|
||||
default_ng_master=self.master_ng)
|
||||
|
||||
@mock.patch('magnum.common.clients.OpenStackClients')
|
||||
@mock.patch('magnum.drivers.swarm_fedora_atomic_v1.template_def'
|
||||
'.AtomicSwarmTemplateDefinition.get_discovery_url')
|
||||
@mock.patch('magnum.drivers.heat.template_def.BaseTemplateDefinition'
|
||||
'.get_params')
|
||||
@mock.patch('magnum.drivers.heat.template_def.TemplateDefinition'
|
||||
'.get_output')
|
||||
def test_swarm_get_params(self, mock_get_output, mock_get_params,
|
||||
mock_get_discovery_url, mock_osc_class):
|
||||
mock_context = mock.MagicMock()
|
||||
mock_context.auth_token = 'AUTH_TOKEN'
|
||||
mock_cluster_template = mock.MagicMock()
|
||||
mock_cluster_template.tls_disabled = False
|
||||
mock_cluster_template.registry_enabled = False
|
||||
mock_cluster = mock.MagicMock()
|
||||
        mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
        del mock_cluster.stack_id
        mock_osc = mock.MagicMock()
        mock_osc.magnum_url.return_value = 'http://127.0.0.1:9511/v1'
        mock_osc_class.return_value = mock_osc

        mock_get_discovery_url.return_value = 'fake_discovery_url'

        mock_context.auth_url = 'http://192.168.10.10:5000/v3'
        mock_context.user_name = 'fake_user'
        mock_context.tenant = 'fake_tenant'

        docker_volume_type = mock_cluster.labels.get(
            'docker_volume_type')
        flannel_cidr = mock_cluster.labels.get('flannel_network_cidr')
        flannel_subnet = mock_cluster.labels.get(
            'flannel_network_subnetlen')
        flannel_backend = mock_cluster.labels.get('flannel_backend')
        rexray_preempt = mock_cluster.labels.get('rexray_preempt')
        swarm_strategy = mock_cluster.labels.get('swarm_strategy')

        swarm_def = swarm_tdef.AtomicSwarmTemplateDefinition()

        swarm_def.get_params(mock_context, mock_cluster_template, mock_cluster)

        expected_kwargs = {'extra_params': {
            'discovery_url': 'fake_discovery_url',
            'magnum_url': mock_osc.magnum_url.return_value,
            'flannel_network_cidr': flannel_cidr,
            'flannel_backend': flannel_backend,
            'flannel_network_subnetlen': flannel_subnet,
            'auth_url': 'http://192.168.10.10:5000/v3',
            'rexray_preempt': rexray_preempt,
            'swarm_strategy': swarm_strategy,
            'docker_volume_type': docker_volume_type,
            'nodes_affinity_policy': 'soft-anti-affinity'}}
        mock_get_params.assert_called_once_with(mock_context,
                                                mock_cluster_template,
                                                mock_cluster,
                                                **expected_kwargs)

    @mock.patch('requests.get')
    def test_swarm_validate_discovery_url(self, mock_get):
        expected_result = str('{"action":"get","node":{"key":"test","value":'
                              '"1","modifiedIndex":10,"createdIndex":10}}')
        mock_resp = mock.MagicMock()
        mock_resp.text = expected_result
        mock_get.return_value = mock_resp

        k8s_def = k8sa_tdef.AtomicK8sTemplateDefinition()
        k8s_def.validate_discovery_url('http://etcd/test', 1)

    @mock.patch('requests.get')
    def test_swarm_validate_discovery_url_fail(self, mock_get):
        mock_get.side_effect = req_exceptions.RequestException()

        k8s_def = k8sa_tdef.AtomicK8sTemplateDefinition()
        self.assertRaises(exception.GetClusterSizeFailed,
                          k8s_def.validate_discovery_url,
                          'http://etcd/test', 1)

    @mock.patch('requests.get')
    def test_swarm_validate_discovery_url_invalid(self, mock_get):
        mock_resp = mock.MagicMock()
        mock_resp.text = str('{"action":"get"}')
        mock_get.return_value = mock_resp

        k8s_def = k8sa_tdef.AtomicK8sTemplateDefinition()
        self.assertRaises(exception.InvalidClusterDiscoveryURL,
                          k8s_def.validate_discovery_url,
                          'http://etcd/test', 1)

    @mock.patch('requests.get')
    def test_swarm_validate_discovery_url_unexpect_size(self, mock_get):
        expected_result = str('{"action":"get","node":{"key":"test","value":'
                              '"1","modifiedIndex":10,"createdIndex":10}}')
        mock_resp = mock.MagicMock()
        mock_resp.text = expected_result
        mock_get.return_value = mock_resp

        k8s_def = k8sa_tdef.AtomicK8sTemplateDefinition()
        self.assertRaises(exception.InvalidClusterSize,
                          k8s_def.validate_discovery_url,
                          'http://etcd/test', 5)

    @mock.patch('requests.get')
    def test_swarm_get_discovery_url(self, mock_get):
        CONF.set_override('etcd_discovery_service_endpoint_format',
                          'http://etcd/test?size=%(size)d',
                          group='cluster')
        expected_discovery_url = 'http://etcd/token'
        mock_resp = mock.MagicMock()
        mock_resp.text = expected_discovery_url
        mock_resp.status_code = 200
        mock_get.return_value = mock_resp
        mock_cluster = mock.MagicMock()
        mock_cluster.discovery_url = None

        swarm_def = swarm_tdef.AtomicSwarmTemplateDefinition()
        discovery_url = swarm_def.get_discovery_url(mock_cluster)

        mock_get.assert_called_once_with('http://etcd/test?size=1',
                                         timeout=60)
        self.assertEqual(mock_cluster.discovery_url, expected_discovery_url)
        self.assertEqual(discovery_url, expected_discovery_url)

    @mock.patch('requests.get')
    def test_swarm_get_discovery_url_not_found(self, mock_get):
        mock_resp = mock.MagicMock()
        mock_resp.text = ''
        mock_resp.status_code = 200
        mock_get.return_value = mock_resp

        fake_cluster = mock.MagicMock()
        fake_cluster.discovery_url = None

        self.assertRaises(
            exception.InvalidDiscoveryURL,
            k8sa_tdef.AtomicK8sTemplateDefinition().get_discovery_url,
            fake_cluster)

    def test_swarm_get_heat_param(self):
        swarm_def = swarm_tdef.AtomicSwarmTemplateDefinition()

        swarm_def.add_nodegroup_params(self.mock_cluster)
        heat_param = swarm_def.get_heat_param(nodegroup_attr='node_count',
                                              nodegroup_uuid='worker_ng')
        self.assertEqual('number_of_nodes', heat_param)

    def test_update_outputs(self):
        swarm_def = swarm_tdef.AtomicSwarmTemplateDefinition()

        expected_api_address = 'updated_address'
        expected_node_addresses = ['ex_minion', 'address']

        outputs = [
            {"output_value": expected_api_address,
             "description": "No description given",
             "output_key": "api_address"},
            {"output_value": ['any', 'output'],
             "description": "No description given",
             "output_key": "swarm_master_private"},
            {"output_value": ['any', 'output'],
             "description": "No description given",
             "output_key": "swarm_master"},
            {"output_value": ['any', 'output'],
             "description": "No description given",
             "output_key": "swarm_nodes_private"},
            {"output_value": expected_node_addresses,
             "description": "No description given",
             "output_key": "swarm_nodes"},
        ]
        mock_stack = mock.MagicMock()
        mock_stack.to_dict.return_value = {'outputs': outputs}
        mock_cluster_template = mock.MagicMock()

        swarm_def.update_outputs(mock_stack, mock_cluster_template,
                                 self.mock_cluster)
        expected_api_address = "tcp://%s:2376" % expected_api_address
        self.assertEqual(expected_api_address, self.mock_cluster.api_address)
        self.assertEqual(expected_node_addresses,
                         self.worker_ng.node_addresses)
@ -94,8 +94,7 @@ class TestClusterType(test_fields.TestField):
    def setUp(self):
        super(TestClusterType, self).setUp()
        self.field = fields.ClusterTypeField()
        self.coerce_good_values = [('kubernetes', 'kubernetes'),
                                   ('swarm', 'swarm'), ]
        self.coerce_good_values = [('kubernetes', 'kubernetes')]
        self.coerce_bad_values = ['invalid']

        self.to_primitive_values = self.coerce_good_values[0:1]
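The updated ``setUp`` above leaves ``kubernetes`` as the only value ``ClusterTypeField`` will coerce. A minimal sketch of the resulting behaviour, assuming the field keeps the usual oslo.versionedobjects enum semantics (out-of-enum values raise ``ValueError``); the snippet below is illustrative only and is not part of this change:

.. code-block:: python

   from magnum.objects import fields

   field = fields.ClusterTypeField()

   # 'kubernetes' still coerces to its canonical value.
   assert field.coerce(None, 'coe', 'kubernetes') == 'kubernetes'

   # 'swarm' is no longer in the enum, so coercion now fails.
   try:
       field.coerce(None, 'coe', 'swarm')
   except ValueError:
       print("swarm is no longer a valid COE")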
@ -356,7 +356,7 @@ class TestObject(test_base.TestCase, _TestObject):
# https://docs.openstack.org/magnum/latest/contributor/objects.html
object_data = {
    'Cluster': '1.23-dfaf9ecb65a5fcab4f6c36497a8bc866',
    'ClusterTemplate': '1.20-ea3b06c5fdbf4a3fba0db9865cd2ba4c',
    'ClusterTemplate': '1.20-a9334881d1dc6e077faec68214fa9d1d',
    'Certificate': '1.2-64f24db0e10ad4cbd72aea21d2075a80',
    'MyObj': '1.0-34c4b1aadefd177b13f9a2f894cc23cd',
    'X509KeyPair': '1.2-d81950af36c59a71365e33ce539d24f9',
@ -84,7 +84,7 @@ def create_test_cluster(context, **kw):
    """
    cluster = get_test_cluster(context, **kw)
    create_test_cluster_template(context, uuid=cluster['cluster_template_id'],
                                 coe=kw.get('coe', 'swarm'),
                                 coe=kw.get('coe', 'kubernetes'),
                                 tls_disabled=kw.get('tls_disabled'))
    kw.update({'cluster_id': cluster['uuid']})
    db_utils.create_nodegroups_for_cluster(**kw)
@ -0,0 +1,4 @@
---
upgrade:
  - |
    Dropped the swarm drivers; Docker Swarm is no longer supported in Magnum.
@ -52,8 +52,6 @@ magnum.drivers =
    k8s_fedora_atomic_v1 = magnum.drivers.k8s_fedora_atomic_v1.driver:Driver
    k8s_fedora_coreos_v1 = magnum.drivers.k8s_fedora_coreos_v1.driver:Driver
    k8s_coreos_v1 = magnum.drivers.k8s_coreos_v1.driver:Driver
    swarm_fedora_atomic_v1 = magnum.drivers.swarm_fedora_atomic_v1.driver:Driver
    swarm_fedora_atomic_v2 = magnum.drivers.swarm_fedora_atomic_v2.driver:Driver
    k8s_fedora_ironic_v1 = magnum.drivers.k8s_fedora_ironic_v1.driver:Driver

magnum.database.migration_backend =
tox.ini
@ -59,28 +59,6 @@ commands =
    find . -type f -name "*.py[c|o]" -delete
    stestr run {posargs}

[testenv:functional-swarm]
sitepackages = True
setenv = {[testenv]setenv}
         OS_TEST_PATH=./magnum/tests/functional/swarm
         OS_TEST_TIMEOUT=7200
deps =
    {[testenv]deps}
commands =
    find . -type f -name "*.py[c|o]" -delete
    stestr run {posargs}

[testenv:functional-swarm-mode]
sitepackages = True
setenv = {[testenv]setenv}
         OS_TEST_PATH=./magnum/tests/functional/swarm_mode
         OS_TEST_TIMEOUT=7200
deps =
    {[testenv]deps}
commands =
    find . -type f -name "*.py[c|o]" -delete
    stestr run {posargs}

[testenv:pep8]
commands =
    doc8 -e .rst specs/ doc/source/ contrib/ CONTRIBUTING.rst HACKING.rst README.rst