Retire the kolla-mesos repository

The core reviewer team voted by majority to retire this project.

The full thread of the retirement can be found here:

http://lists.openstack.org/pipermail/openstack-dev/2016-April/093180.html

Change-Id: I1599167bb948a186547cf9d270d37cb2d2cec4fd
Depends-On: Ief32a399d0ff57332864c65d5380269051e24b6b
Steven Dake 2016-04-30 16:35:15 -05:00
parent a0ffda04ae
commit 3652f95164
236 changed files with 12 additions and 14087 deletions

@@ -1,17 +0,0 @@
If you would like to contribute to the development of OpenStack, you must
follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow
section of this documentation to learn how changes to OpenStack should be
submitted for review via the Gerrit tool:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/kolla-mesos

@@ -1,4 +0,0 @@
kolla-mesos Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,19 +1,16 @@
===============================
kolla-mesos
===============================
This project is no longer maintained.
Mesos Deployment for kolla
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
Please fill here a long description which must be at least 3 lines wrapped on
80 cols, so that distribution package maintainers can use it in their packages.
Note that this is a hard requirement.
For an alternative, consider the new development taking place at
http://github.com/openstack/kolla related to Kubernetes. While not
a Mesos implementation, Kubernetes still behaves as an underlay
and is where most of the community interest currently lies.
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/kolla-mesos
* Source: http://git.openstack.org/cgit/openstack/kolla-mesos
* Bugs: http://bugs.launchpad.net/kolla
For any further questions, please email
openstack-dev@lists.openstack.org with the tag line [kolla] or join
#openstack-kolla on Freenode.
Features
--------
* TODO
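The retirement notice above suggests `git checkout HEAD^1` to recover the tree as it existed before the end-of-life commit. A sketch of that recovery flow against a throwaway local repository (the real repository URL is in the links above; every path and commit message here is a stand-in):

```python
# Demonstrates the "git checkout HEAD^1" recovery step from the README
# in a throwaway local repo (contents and messages are stand-ins).
import os
import subprocess
import tempfile

def run(*cmd, cwd):
    # Helper: run a git command quietly, failing loudly on error.
    subprocess.run(cmd, cwd=cwd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

repo = tempfile.mkdtemp()
run("git", "init", cwd=repo)
run("git", "config", "user.email", "demo@example.com", cwd=repo)
run("git", "config", "user.name", "demo", cwd=repo)

# First commit: the "full" repository contents.
open(os.path.join(repo, "README"), "w").write("full tree\n")
run("git", "add", ".", cwd=repo)
run("git", "commit", "-m", "content", cwd=repo)

# Second commit: the retirement commit that empties the tree.
open(os.path.join(repo, "README"), "w").write("This project is no longer maintained.\n")
run("git", "add", ".", cwd=repo)
run("git", "commit", "-m", "retire", cwd=repo)

# The command from the README: check out the previous commit.
run("git", "checkout", "HEAD^1", cwd=repo)
print(open(os.path.join(repo, "README")).read())  # → full tree
```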

@@ -1,26 +0,0 @@
---
node_config_directory: "/etc/kolla"
container_config_directory: "/var/lib/kolla/config_files"
api_interface: "{{ network_interface }}"
docker_registry_email:
docker_registry:
docker_namespace: "kollaglue"
docker_registry_username:
docker_restart_policy: "always"
docker_common_options:
auth_email: "{{ docker_registry_email }}"
auth_password: "{{ docker_registry_password }}"
auth_registry: "{{ docker_registry }}"
auth_username: "{{ docker_registry_username }}"
environment:
KOLLA_CONFIG_STRATEGY: "{{ config_strategy }}"
restart_policy: "{{ docker_restart_policy }}"
mesos_docker_remove_delay: "5mins"
mesos_domain: "mesos"
mesos_resolvers: '"8.8.8.8","8.8.4.4"'
marathon_framework: "marathon"

@@ -1,32 +0,0 @@
# NOTE(nihilifer): Please don't use "ansible_connection=local" here!
# Mesos slave has to be registered with its hostname accessible in the network
# in order to get Mesos UI working outside. In order to do that, the
# "inventory_hostname" variable must contain this hostname.
[master]
operator
[controller]
operator
[compute]
operator
[mesos-dns:children]
master
[zookeeper:children]
master
[mesos-master:children]
master
[marathon:children]
master
[chronos:children]
master
[mesos-slave:children]
controller
compute

@@ -1,31 +0,0 @@
[master]
master01
master02
master03
[controller]
controller01
controller02
controller03
[compute]
compute01
[mesos-dns:children]
master
[zookeeper:children]
master
[mesos-master:children]
master
[marathon:children]
master
[chronos:children]
master
[mesos-slave:children]
controller
compute

@@ -1,570 +0,0 @@
#!/usr/bin/python
# Copyright 2015 Sam Yaple
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DOCUMENTATION = '''
---
module: kolla_docker
short_description: Module for controlling Docker
description:
- A module for controlling Docker as used by Kolla.
options:
common_options:
description:
- A dict containing common params such as login info
required: False
type: dict
default: dict()
action:
description:
- The action the module should take
required: True
type: str
choices:
- create_volume
- pull_image
- remove_container
- remove_volume
- start_container
api_version:
description:
- The version of the api for docker-py to use when contacting docker
required: False
type: str
default: auto
auth_email:
description:
- The email address used to authenticate
required: False
type: str
auth_password:
description:
- The password used to authenticate
required: False
type: str
auth_registry:
description:
- The registry to authenticate to
required: False
type: str
auth_username:
description:
- The username used to authenticate
required: False
type: str
detach:
description:
- Detach from the container after it is created
required: False
default: True
type: bool
name:
description:
- Name of the container or volume to manage
required: False
type: str
environment:
description:
- The environment to set for the container
required: False
type: dict
image:
description:
- Name of the docker image
required: False
type: str
pid_mode:
description:
- Set docker pid namespace
required: False
type: str
default: None
choices:
- host
privileged:
description:
- Set the container to privileged
required: False
default: False
type: bool
remove_on_exit:
description:
- When not detaching from container, remove on successful exit
required: False
default: True
type: bool
restart_policy:
description:
- Determine what docker does when the container exits
required: False
type: str
choices:
- never
- on-failure
- always
restart_retries:
description:
- How many times to attempt a restart if restart_policy is set
type: int
default: 10
volumes:
description:
- Set volumes for docker to use
required: False
type: list
volumes_from:
description:
- Name or id of container(s) to use volumes from
required: False
type: list
author: Sam Yaple
'''
EXAMPLES = '''
- hosts: kolla_docker
tasks:
- name: Start container
kolla_docker:
image: ubuntu
name: test_container
action: start_container
- name: Remove container
kolla_docker:
name: test_container
action: remove_container
- name: Pull image without starting container
kolla_docker:
action: pull_image
image: private-registry.example.com:5000/ubuntu
- name: Create named volume
kolla_docker:
action: create_volume
name: name_of_volume
- name: Remove named volume
kolla_docker:
action: remove_volume
name: name_of_volume
'''
import json
import os

import docker
class DockerWorker(object):
def __init__(self, module):
self.module = module
self.params = self.module.params
self.changed = False
# TLS not fully implemented
# tls_config = self.generate_tls()
options = {
'version': self.params.get('api_version')
}
self.dc = docker.Client(**options)
def generate_tls(self):
tls = {'verify': self.params.get('tls_verify')}
tls_cert = self.params.get('tls_cert')
tls_key = self.params.get('tls_key')
tls_cacert = self.params.get('tls_cacert')
if tls['verify']:
if tls_cert:
self.check_file(tls_cert)
self.check_file(tls_key)
tls['client_cert'] = (tls_cert, tls_key)
if tls_cacert:
self.check_file(tls_cacert)
tls['verify'] = tls_cacert
return docker.tls.TLSConfig(**tls)
def check_file(self, path):
if not os.path.isfile(path):
self.module.fail_json(
failed=True,
msg='There is no file at "{}"'.format(path)
)
if not os.access(path, os.R_OK):
self.module.fail_json(
failed=True,
msg='Permission denied for file at "{}"'.format(path)
)
def check_image(self):
find_image = ':'.join(self.parse_image())
for image in self.dc.images():
for image_name in image['RepoTags']:
if image_name == find_image:
return image
def check_volume(self):
for vol in self.dc.volumes()['Volumes']:
if vol['Name'] == self.params.get('name'):
return vol
def check_container(self):
find_name = '/{}'.format(self.params.get('name'))
for cont in self.dc.containers(all=True):
if find_name in cont['Names']:
return cont
def check_container_differs(self):
container = self.check_container()
if not container:
return True
container_info = self.dc.inspect_container(self.params.get('name'))
return (
self.compare_image(container_info) or
self.compare_privileged(container_info) or
self.compare_pid_mode(container_info) or
self.compare_volumes(container_info) or
self.compare_volumes_from(container_info) or
self.compare_environment(container_info)
)
def compare_pid_mode(self, container_info):
new_pid_mode = self.params.get('pid_mode')
current_pid_mode = container_info['HostConfig'].get('PidMode')
if not current_pid_mode:
current_pid_mode = None
if new_pid_mode != current_pid_mode:
return True
def compare_privileged(self, container_info):
new_privileged = self.params.get('privileged')
current_privileged = container_info['HostConfig']['Privileged']
if new_privileged != current_privileged:
return True
def compare_image(self, container_info):
new_image = self.check_image()
current_image = container_info['Image']
if new_image['Id'] != current_image:
return True
def compare_volumes_from(self, container_info):
new_vols_from = self.params.get('volumes_from')
current_vols_from = container_info['HostConfig'].get('VolumesFrom')
if not new_vols_from:
new_vols_from = list()
if not current_vols_from:
current_vols_from = list()
if set(current_vols_from).symmetric_difference(set(new_vols_from)):
return True
def compare_volumes(self, container_info):
volumes, binds = self.generate_volumes()
current_vols = container_info['Config'].get('Volumes')
current_binds = container_info['HostConfig'].get('Binds')
if not volumes:
volumes = list()
if not current_vols:
current_vols = list()
if not current_binds:
current_binds = list()
if set(volumes).symmetric_difference(set(current_vols)):
return True
new_binds = list()
if binds:
for k, v in binds.items():
new_binds.append("{}:{}:{}".format(k, v['bind'], v['mode']))
if set(new_binds).symmetric_difference(set(current_binds)):
return True
def compare_environment(self, container_info):
if self.params.get('environment'):
current_env = dict()
for kv in container_info['Config'].get('Env', list()):
k, v = kv.split('=', 1)
current_env.update({k: v})
for k, v in self.params.get('environment').items():
if k not in current_env:
return True
if current_env[k] != v:
return True
def parse_image(self):
full_image = self.params.get('image')
if '/' in full_image:
registry, image = full_image.split('/', 1)
else:
image = full_image
if ':' in image:
return full_image.rsplit(':', 1)
else:
return full_image, 'latest'
def pull_image(self):
if self.params.get('auth_username'):
self.dc.login(
username=self.params.get('auth_username'),
password=self.params.get('auth_password'),
registry=self.params.get('auth_registry'),
email=self.params.get('auth_email')
)
image, tag = self.parse_image()
statuses = [
json.loads(line.strip()) for line in self.dc.pull(
repository=image, tag=tag, stream=True
)
]
for status in reversed(statuses):
# NOTE(jeffrey4l): Get the last not empty status with status
# property
if status and status.get('status'):
# NOTE(SamYaple): This allows us to use v1 and v2 docker
# registries. Eventually docker will stop supporting v1
# registries and when that happens we can remove this.
if 'legacy registry' in status.get('status'):
continue
elif "Downloaded newer image for" in status.get('status'):
self.changed = True
return
elif "Image is up to date for" in status.get('status'):
return
else:
self.module.fail_json(
msg="Invalid status returned from pull",
changed=True,
failed=True
)
def remove_container(self):
if self.check_container():
self.changed = True
self.dc.remove_container(
container=self.params.get('name'),
force=True
)
def generate_volumes(self):
volumes = self.params.get('volumes')
if not volumes:
return None, None
vol_list = list()
vol_dict = dict()
for vol in volumes:
if ':' not in vol:
vol_list.append(vol)
continue
split_vol = vol.split(':')
if (len(split_vol) == 2
and ('/' not in split_vol[0] or '/' in split_vol[1])):
split_vol.append('rw')
vol_list.append(split_vol[1])
vol_dict.update({
split_vol[0]: {
'bind': split_vol[1],
'mode': split_vol[2]
}
})
return vol_list, vol_dict
def build_host_config(self, binds):
options = {
'network_mode': 'host',
'pid_mode': self.params.get('pid_mode'),
'privileged': self.params.get('privileged'),
'volumes_from': self.params.get('volumes_from')
}
if self.params.get('restart_policy') in ['on-failure', 'always']:
options['restart_policy'] = {
'Name': self.params.get('restart_policy'),
'MaximumRetryCount': self.params.get('restart_retries')
}
if binds:
options['binds'] = binds
return self.dc.create_host_config(**options)
def build_container_options(self):
volumes, binds = self.generate_volumes()
return {
'detach': self.params.get('detach'),
'environment': self.params.get('environment'),
'host_config': self.build_host_config(binds),
'image': self.params.get('image'),
'name': self.params.get('name'),
'volumes': volumes,
'tty': True
}
def create_container(self):
self.changed = True
options = self.build_container_options()
self.dc.create_container(**options)
def start_container(self):
if not self.check_image():
self.pull_image()
container = self.check_container()
if container and self.check_container_differs():
self.remove_container()
container = self.check_container()
if not container:
self.create_container()
container = self.check_container()
if not container['Status'].startswith('Up '):
self.changed = True
self.dc.start(container=self.params.get('name'))
# We do not want to detach so we wait around for container to exit
if not self.params.get('detach'):
rc = self.dc.wait(self.params.get('name'))
if rc != 0:
self.module.fail_json(
failed=True,
changed=True,
msg="Container exited with non-zero return code"
)
if self.params.get('remove_on_exit'):
self.remove_container()
def create_volume(self):
if not self.check_volume():
self.changed = True
self.dc.create_volume(name=self.params.get('name'), driver='local')
def remove_volume(self):
if self.check_volume():
self.changed = True
try:
self.dc.remove_volume(name=self.params.get('name'))
except docker.errors.APIError as e:
if e.response.status_code == 409:
self.module.fail_json(
failed=True,
msg="Volume named '{}' is currently in-use".format(
self.params.get('name')
)
)
raise
def generate_module():
argument_spec = dict(
common_options=dict(required=False, type='dict', default=dict()),
action=dict(required=True, type='str', choices=['create_volume',
'pull_image',
'remove_container',
'remove_volume',
'start_container']),
api_version=dict(required=False, type='str', default='auto'),
auth_email=dict(required=False, type='str'),
auth_password=dict(required=False, type='str'),
auth_registry=dict(required=False, type='str'),
auth_username=dict(required=False, type='str'),
detach=dict(required=False, type='bool', default=True),
name=dict(required=False, type='str'),
environment=dict(required=False, type='dict'),
image=dict(required=False, type='str'),
pid_mode=dict(required=False, type='str', choices=['host']),
privileged=dict(required=False, type='bool', default=False),
remove_on_exit=dict(required=False, type='bool', default=True),
restart_policy=dict(required=False, type='str', choices=['no',
'never',
'on-failure',
'always']),
restart_retries=dict(required=False, type='int', default=10),
tls_verify=dict(required=False, type='bool', default=False),
tls_cert=dict(required=False, type='str'),
tls_key=dict(required=False, type='str'),
tls_cacert=dict(required=False, type='str'),
volumes=dict(required=False, type='list'),
volumes_from=dict(required=False, type='list')
)
required_together = [
['tls_cert', 'tls_key']
]
return AnsibleModule(
argument_spec=argument_spec,
required_together=required_together
)
def generate_nested_module():
module = generate_module()
# We unnest the common dict and then update it with the other options
new_args = module.params.get('common_options')
new_args.update(module._load_params()[0])
module.params = new_args
# Override ARGS to ensure new args are used
global MODULE_ARGS
global MODULE_COMPLEX_ARGS
MODULE_ARGS = ''
MODULE_COMPLEX_ARGS = json.dumps(module.params)
# Reprocess the args now that the common dict has been unnested
return generate_module()
def main():
module = generate_nested_module()
# TODO(SamYaple): Replace with required_if when Ansible 2.0 lands
if (module.params.get('action') in ['pull_image', 'start_container']
and not module.params.get('image')):
module.fail_json(
msg="missing required arguments: image",
failed=True
)
# TODO(SamYaple): Replace with required_if when Ansible 2.0 lands
if (module.params.get('action') != 'pull_image'
and not module.params.get('name')):
module.fail_json(
msg="missing required arguments: name",
failed=True
)
try:
dw = DockerWorker(module)
getattr(dw, module.params.get('action'))()
module.exit_json(changed=dw.changed)
except Exception as e:
module.exit_json(failed=True, changed=True, msg=repr(e))
# import module snippets
from ansible.module_utils.basic import * # noqa
if __name__ == '__main__':
main()
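The `parse_image` method above splits an image reference into a registry-qualified name and a tag, defaulting the tag to `latest`. A standalone sketch of the same logic (a minimal re-implementation for illustration; the image names are made up):

```python
def parse_image(full_image):
    # Mirrors DockerWorker.parse_image above: peel off any registry prefix
    # so a port colon in the registry isn't mistaken for a tag separator.
    if '/' in full_image:
        registry, image = full_image.split('/', 1)
    else:
        image = full_image
    if ':' in image:
        # Tag present: split the full reference on the last colon.
        return tuple(full_image.rsplit(':', 1))
    return full_image, 'latest'

print(parse_image("private-registry.example.com:5000/ubuntu"))
# → ('private-registry.example.com:5000/ubuntu', 'latest')
print(parse_image("kollaglue/centos-binary-chronos:2.0.0"))
# → ('kollaglue/centos-binary-chronos', '2.0.0')
```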

@@ -1,4 +0,0 @@
---
chronos_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-chronos"
chronos_tag: "{{ openstack_release }}"
chronos_image_full: "{{ chronos_image }}:{{ chronos_tag }}"
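The `chronos_image` template above concatenates an optional registry (with a trailing slash only when set), a namespace, and a `base-installtype-service` triple, then appends the release tag. A minimal Python sketch of the same string assembly (all argument values are assumptions):

```python
def image_full(registry, namespace, base, install_type, service, tag):
    # Mirror of the Jinja expression: registry prefix only when non-empty.
    prefix = registry + "/" if registry else ""
    return f"{prefix}{namespace}/{base}-{install_type}-{service}:{tag}"

print(image_full("", "kollaglue", "centos", "binary", "chronos", "2.0.0"))
# → kollaglue/centos-binary-chronos:2.0.0
```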

@@ -1,3 +0,0 @@
---
- include: start.yml
when: inventory_hostname in groups['chronos']

@@ -1,11 +0,0 @@
---
- name: Starting Chronos container
kolla_docker:
action: "start_container"
common_options: "{{ docker_common_options }}"
environment:
CHRONOS_HTTP_PORT: "4400"
CHRONOS_MASTER: "zk://{% for host in groups['zookeeper'] %}{{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:2181{% if not loop.last %},{% endif %}{% endfor %}/mesos"
CHRONOS_ZK_HOSTS: "zk://{% for host in groups['zookeeper'] %}{{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:2181{% if not loop.last %},{% endif %}{% endfor %}"
image: "{{ chronos_image_full }}"
name: "chronos"
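The `CHRONOS_MASTER` and `CHRONOS_ZK_HOSTS` values above are built by a Jinja loop that joins each ZooKeeper host's `ip:2181` pair with commas under a `zk://` scheme. A sketch of the same join in Python (the addresses are made up):

```python
def zk_url(addresses, path=""):
    # Join "ip:2181" pairs with commas, as the Jinja loop above does,
    # optionally appending a chroot path such as "/mesos".
    return "zk://" + ",".join(f"{a}:2181" for a in addresses) + path

print(zk_url(["10.0.0.1", "10.0.0.2", "10.0.0.3"], "/mesos"))
# → zk://10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181/mesos
```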

@@ -1,4 +0,0 @@
---
marathon_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-marathon"
marathon_tag: "{{ openstack_release }}"
marathon_image_full: "{{ marathon_image }}:{{ marathon_tag }}"

@@ -1,3 +0,0 @@
---
- include: start.yml
when: inventory_hostname in groups['zookeeper']

@@ -1,15 +0,0 @@
---
- name: Starting Marathon container
kolla_docker:
action: "start_container"
common_options: "{{ docker_common_options }}"
environment:
MARATHON_HOSTNAME: "{{ inventory_hostname }}"
MARATHON_HTTPS_ADDRESS: "{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}"
MARATHON_HTTP_ADDRESS: "{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}"
MARATHON_MASTER: "zk://{% for host in groups['zookeeper'] %}{{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:2181{% if not loop.last %},{% endif %}{% endfor %}/mesos"
MARATHON_ZK: "zk://{% for host in groups['zookeeper'] %}{{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:2181{% if not loop.last %},{% endif %}{% endfor %}/marathon"
MARATHON_MESOS_USER: "root"
MARATHON_FRAMEWORK_NAME: "{{ marathon_framework }}"
image: "{{ marathon_image_full }}"
name: "marathon"

@@ -1,4 +0,0 @@
---
mesos_dns_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-mesos-dns"
mesos_dns_tag: "{{ openstack_release }}"
mesos_dns_image_full: "{{ mesos_dns_image }}:{{ mesos_dns_tag }}"

@@ -1,16 +0,0 @@
---
- name: Ensuring config directory exists
file:
path: "{{ node_config_directory }}/mesos-dns"
state: "directory"
recurse: yes
- name: Copying config.json
template:
src: "mesos-dns.json.j2"
dest: "{{ node_config_directory }}/mesos-dns/config.json"
- name: Copying Mesos DNS configuration files
template:
src: "mesos-dns.conf.j2"
dest: "{{ node_config_directory }}/mesos-dns/mesos-dns.conf"

@@ -1,6 +0,0 @@
---
- include: config.yml
when: inventory_hostname in groups['mesos-dns']
- include: start.yml
when: inventory_hostname in groups['mesos-dns']

@@ -1,9 +0,0 @@
---
- name: Starting Mesos DNS container
kolla_docker:
action: "start_container"
common_options: "{{ docker_common_options }}"
image: "{{ mesos_dns_image_full }}"
name: "mesos-dns"
volumes:
- "{{ node_config_directory }}/mesos-dns/:{{ container_config_directory }}/:ro"

@@ -1,11 +0,0 @@
{
"zk": "zk://{% for host in groups['zookeeper'] %}{{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:2181{% if not loop.last %},{% endif %}{% endfor %}/mesos",
"refreshSeconds": 60,
"ttl": 60,
"domain": "{{ mesos_domain }}",
"port": 53,
"resolvers": [{{ mesos_resolvers }}],
"timeout": 5,
"httpon": true,
"IPSources": ["netinfo","mesos","host"]
}

@@ -1,11 +0,0 @@
{
"command": "mesos-dns -config /usr/local/etc/mesos-dns.conf",
"config_files": [
{
"source": "{{ container_config_directory }}/mesos-dns.conf",
"dest": "/usr/local/etc/mesos-dns.conf",
"owner": "root",
"perm": "0600"
}
]
}

@@ -1,4 +0,0 @@
---
mesos_master_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-mesos-master"
mesos_master_tag: "{{ openstack_release }}"
mesos_master_image_full: "{{ mesos_master_image }}:{{ mesos_master_tag }}"

@@ -1,3 +0,0 @@
---
- include: start.yml
when: inventory_hostname in groups['mesos-master']

@@ -1,18 +0,0 @@
---
- name: Starting Mesos master container
kolla_docker:
action: "start_container"
common_options: "{{ docker_common_options }}"
environment:
MESOS_HOSTNAME: "{{ inventory_hostname }}"
MESOS_IP: "{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}"
MESOS_ZK: "zk://{% for host in groups['zookeeper'] %}{{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:2181{% if not loop.last %},{% endif %}{% endfor %}/mesos"
MESOS_PORT: "5050"
MESOS_LOG_DIR: "/var/log/mesos"
MESOS_QUORUM: "3"
MESOS_REGISTRY: "in_memory"
MESOS_WORK_DIR: "/var/lib/mesos"
image: "{{ mesos_master_image_full }}"
name: "mesos_master"
volumes:
- /var/lib/mesos:/var/lib/mesos

@@ -1,4 +0,0 @@
---
mesos_slave_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-mesos-slave"
mesos_slave_tag: "{{ openstack_release }}"
mesos_slave_image_full: "{{ mesos_slave_image }}:{{ mesos_slave_tag }}"

@@ -1,3 +0,0 @@
---
- include: start.yml
when: inventory_hostname in groups['mesos-slave']

@@ -1,21 +0,0 @@
---
- name: Starting Mesos slave container
kolla_docker:
action: "start_container"
common_options: "{{ docker_common_options }}"
environment:
MESOS_HOSTNAME: "{{ inventory_hostname }}"
MESOS_IP: "{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}"
MESOS_MASTER: "zk://{% for host in groups['zookeeper'] %}{{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:2181{% if not loop.last %},{% endif %}{% endfor %}/mesos"
MESOS_SWITCH_USER: "false"
MESOS_LOG_DIR: "/var/log/mesos"
MESOS_LOGGING_LEVEL: "INFO"
MESOS_DOCKER_REMOVE_DELAY: "{{ mesos_docker_remove_delay }}"
MESOS_ATTRIBUTES: "openstack_role:{% if 'controller' in group_names %}controller{% elif 'compute' in group_names %}compute{% endif %}"
MESOS_SYSTEMD_ENABLE_SUPPORT: "false"
image: "{{ mesos_slave_image_full }}"
name: "mesos_slave"
privileged: True
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup
- /var/run/docker.sock:/var/run/docker.sock

@@ -1,4 +0,0 @@
---
zookeeper_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-zookeeper"
zookeeper_tag: "{{ openstack_release }}"
zookeeper_image_full: "{{ zookeeper_image }}:{{ zookeeper_tag }}"

@@ -1,19 +0,0 @@
---
- name: Ensuring config directory exists
file:
path: "{{ node_config_directory }}/zookeeper"
state: "directory"
recurse: yes
- name: Copying config.json
template:
src: "zookeeper.json.j2"
dest: "{{ node_config_directory }}/zookeeper/config.json"
- name: Copying ZooKeeper configuration files
template:
src: "{{ item }}.j2"
dest: "{{ node_config_directory }}/zookeeper/{{ item }}"
with_items:
- "zoo.cfg"
- "myid"

@@ -1,6 +0,0 @@
---
- include: config.yml
when: inventory_hostname in groups['zookeeper']
- include: start.yml
when: inventory_hostname in groups['zookeeper']

@@ -1,10 +0,0 @@
---
- name: Starting ZooKeeper container
kolla_docker:
action: "start_container"
common_options: "{{ docker_common_options }}"
image: "{{ zookeeper_image_full }}"
name: "zookeeper"
volumes:
- "{{ node_config_directory }}/zookeeper/:{{ container_config_directory }}/:ro"
- "zookeeper_data:/var/lib/zookeeper"

@@ -1 +0,0 @@
{{ groups['zookeeper'].index(inventory_hostname) + 1 }}

@@ -1,8 +0,0 @@
tickTime=3000
initLimit=10
syncLimit=5
clientPort=2181
{% for host in groups['zookeeper'] %}
server.{{ loop.index }}={{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:2888:3888
{% endfor %}
dataDir=/var/lib/zookeeper

@@ -1,18 +0,0 @@
{% set zk_path = '/usr/share/zookeeper/bin' if kolla_base_distro in ['ubuntu', 'debian'] else '/opt/mesosphere/zookeeper/bin' %}
{
"command": "{{ zk_path }}/zkServer.sh start-foreground",
"config_files": [
{
"source": "{{ container_config_directory }}/zoo.cfg",
"dest": "/etc/zookeeper/conf/zoo.cfg",
"owner": "zookeeper",
"perm": "0600"
},
{
"source": "{{ container_config_directory }}/myid",
"dest": "/var/lib/zookeeper/myid",
"owner": "zookeeper",
"perm": "0600"
}
]
}

@@ -1,36 +0,0 @@
---
- hosts:
- zookeeper
roles:
- { role: zookeeper,
tags: zookeeper }
- hosts:
- mesos-master
roles:
- { role: mesos-master,
tags: mesos-master }
- hosts:
- marathon
roles:
- { role: marathon,
tags: marathon }
- hosts:
- chronos
roles:
- { role: chronos,
tags: chronos }
- hosts:
- mesos-slave
roles:
- { role: mesos-slave,
tags: mesos-slave }
- hosts:
- mesos-dns
roles:
- { role: mesos-dns,
tags: mesos-dns }

@@ -1,2 +0,0 @@
[python: **.py]

@@ -1,222 +0,0 @@
---
# The options in this file can be overridden in 'globals.yml'
# The "temp" files created before the merge must stay persistent, because
# Ansible registers a "change" whenever it has to recreate them. Keeping
# them persistent preserves idempotency.
node_templates_directory: "/usr/share/kolla/templates"
container_config_directory: "/var/lib/kolla/config_files"
# The directory to store the config files on the destination node
node_config_directory: "/etc/kolla"
###################
# Kolla options
###################
# Valid options are [ COPY_ONCE, COPY_ALWAYS ]
config_strategy: "COPY_ONCE"
# Valid options are [ centos, fedora, oraclelinux, ubuntu ]
kolla_base_distro: "centos"
# Valid options are [ binary, source ]
kolla_install_type: "binary"
# Value set in the public_url endpoint in Keystone
kolla_external_address: "{{ kolla_internal_address }}"
####################
# Database options
####################
database_address: "mariadb-mariadb-infra-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
database_user: "root"
####################
# Docker options
####################
docker_registry:
docker_namespace: "kollaglue"
docker_registry_username:
docker_insecure_registry: "False"
# Valid options are [ missing, always ]
docker_pull_policy: "always"
# Valid options are [ no, on-failure, always ]
docker_restart_policy: "always"
# '0' means unlimited retries
docker_restart_policy_retry: "10"
# Kolla Ansible image name
kolla_toolbox_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-kolla-toolbox"
kolla_toolbox_tag: "{{ openstack_release }}"
ansible_task_cmd: "/usr/bin/ansible localhost"
# loglevel for the container init script
init_log_level: "info"
####################
# Networking options
####################
api_interface: "{{ network_interface }}"
storage_interface: "{{ network_interface }}"
tunnel_interface: "{{ network_interface }}"
# Valid options are [ openvswitch, linuxbridge ]
neutron_plugin_agent: "openvswitch"
# The default ports used by each service.
mariadb_port: "3306"
mariadb_wsrep_port: "4567"
mariadb_ist_port: "4568"
mariadb_sst_port: "4444"
rabbitmq_port: "5672"
rabbitmq_management_port: "15672"
rabbitmq_cluster_port: "25672"
rabbitmq_epmd_port: "4369"
haproxy_stats_port: "1984"
keystone_public_port: "5000"
keystone_admin_port: "35357"
glance_api_port: "9292"
glance_registry_port: "9191"
nova_api_port: "8774"
nova_api_ec2_port: "8773"
nova_metadata_port: "8775"
nova_novncproxy_port: "6080"
nova_spicehtml5proxy_port: "6082"
neutron_server_port: "9696"
cinder_api_port: "8776"
memcached_port: "11211"
swift_proxy_server_port: "8080"
swift_object_server_port: "6000"
swift_account_server_port: "6001"
swift_container_server_port: "6002"
heat_api_port: "8004"
heat_api_cfn_port: "8000"
murano_api_port: "8082"
ironic_api_port: "6385"
####################
# Openstack options
####################
openstack_release: "2.0.0"
openstack_logging_verbose: "True"
openstack_logging_debug: "False"
openstack_use_syslog: "False"
openstack_use_stderr: "True"
openstack_region_name: "RegionOne"
# Valid options are [ novnc, spice ]
nova_console: "novnc"
####################
# Constraints
####################
controller_constraints: '[["hostname", "UNIQUE"], ["openstack_role", "CLUSTER", "controller"]]'
compute_constraints: '[["hostname", "UNIQUE"], ["openstack_role", "CLUSTER", "compute"]]'
controller_compute_constraints: '[["hostname", "UNIQUE"], ["openstack_role", "LIKE", "(controller|compute)"]]'
storage_constraints: '[["hostname", "UNIQUE"], ["openstack_role", "CLUSTER", "storage"]]'
####################
# Mesos-dns hosts
####################
keystone_auth_host: "keystone-api-keystone-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
neutron_server_host: "neutron-server-neutron-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
glance_api_host: "glance-api-glance-api-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
cinder_api_host: "cinder-api-cinder-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
nova_api_host: "nova-api-nova-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
####################
# OpenStack auth
####################
# Openstack authentication string. You should only need to override these if you
# are changing the admin tenant/project or user.
openstack_auth_url: "http://{{ keystone_auth_host }}:{{ keystone_admin_port }}"
openstack_username: "admin"
openstack_password: "{{ keystone_admin_password }}"
openstack_project_name: "admin"
####################
# OpenStack services
####################
# Core services are required for Kolla to be operational.
enable_mariadb: "yes"
enable_keystone: "yes"
enable_rabbitmq: "yes"
enable_glance: "yes"
enable_nova: "yes"
enable_neutron: "yes"
# Additional optional OpenStack services are specified here
enable_cinder: "no"
enable_horizon: "no"
enable_memcached: "no"
enable_haproxy: "no"
enable_ceph: "no"
enable_heat: "no"
enable_swift: "no"
enable_murano: "no"
enable_ironic: "no"
ironic_keystone_user: "ironic"
####################
# RabbitMQ options
####################
rabbitmq_user: "openstack"
####################
# HAProxy options
####################
haproxy_user: "openstack"
#################################
# Cinder - Block Storage options
#################################
cinder_volume_driver: "{{ 'ceph' if enable_ceph | bool else 'lvm' }}"
###################
# Ceph options
###################
# Ceph can be set up with a cache tier to improve performance. To use the
# cache you must provide separate disks from those used for the OSDs
ceph_enable_cache: "no"
# Valid options are [ forward, none, writeback ]
ceph_cache_mode: "writeback"
# Using the erasure-coded pools requires that you set up a cache tier
# Valid options are [ erasure, replicated ]
ceph_pool_type: "replicated"
ceph_cinder_pool_name: "volumes"
ceph_cinder_backup_pool_name: "backups"
ceph_glance_pool_name: "images"
ceph_nova_pool_name: "vms"
ceph_erasure_profile: "k=4 m=2 ruleset-failure-domain=host"
ceph_rule: "default host {{ 'indep' if ceph_pool_type == 'erasure' else 'firstn' }}"
ceph_cache_rule: "cache host firstn"

@@ -1,86 +0,0 @@
---
project_name: "cinder"
####################
# Ceph
####################
ceph_cinder_pool_type: "{{ ceph_pool_type }}"
ceph_cinder_cache_mode: "{{ ceph_cache_mode }}"
ceph_cinder_backup_pool_type: "{{ ceph_pool_type }}"
ceph_cinder_backup_cache_mode: "{{ ceph_cache_mode }}"
# Due to Ansible issues with include, you cannot override these variables.
# Please override the variables they reference instead.
cinder_pool_name: "{{ ceph_cinder_pool_name }}"
cinder_pool_type: "{{ ceph_cinder_pool_type }}"
cinder_cache_mode: "{{ ceph_cinder_cache_mode }}"
cinder_backup_pool_name: "{{ ceph_cinder_backup_pool_name }}"
cinder_backup_pool_type: "{{ ceph_cinder_backup_pool_type }}"
cinder_backup_cache_mode: "{{ ceph_cinder_backup_cache_mode }}"
####################
# Database
####################
cinder_database_name: "cinder"
cinder_database_user: "cinder"
cinder_database_address: "{{ database_address }}"
####################
# Docker
####################
cinder_volume_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-cinder-volume"
cinder_volume_tag: "{{ openstack_release }}"
cinder_volume_image_full: "{{ cinder_volume_image }}:{{ cinder_volume_tag }}"
cinder_scheduler_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-cinder-scheduler"
cinder_scheduler_tag: "{{ openstack_release }}"
cinder_scheduler_image_full: "{{ cinder_scheduler_image }}:{{ cinder_scheduler_tag }}"
cinder_backup_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-cinder-backup"
cinder_backup_tag: "{{ openstack_release }}"
cinder_backup_image_full: "{{ cinder_backup_image }}:{{ cinder_backup_tag }}"
cinder_api_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-cinder-api"
cinder_api_tag: "{{ openstack_release }}"
cinder_api_image_full: "{{ cinder_api_image }}:{{ cinder_api_tag }}"
cinder_data_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-data"
cinder_data_image_tag: "{{ openstack_release }}"
cinder_data_image_full: "{{ cinder_data_image }}:{{ cinder_data_image_tag }}"
####################
# Openstack
####################
cinder_public_endpoint: "{{ cinder_api_host }}:{{ cinder_api_port }}"
cinder_admin_endpoint: "{{ cinder_api_host }}:{{ cinder_api_port }}"
cinder_internal_endpoint: "{{ cinder_api_host }}:{{ cinder_api_port }}"
cinder_logging_verbose: "{{ openstack_logging_verbose }}"
cinder_logging_debug: "{{ openstack_logging_debug }}"
cinder_keystone_user: "cinder"
openstack_cinder_auth: "{'auth_url':'{{ openstack_auth_url }}','username':'{{ openstack_username }}','password':'{{ openstack_password }}','project_name':'{{ openstack_project_name }}'}"
####################
# Resources
####################
# cinder-api
cinder_api_mem: "{{ cinder_api_mem|default('128') }}"
cinder_api_cpus: "{{ cinder_api_cpus|default('0.3') }}"
# cinder-backup
cinder_backup_mem: "{{ cinder_backup_mem|default('128') }}"
cinder_backup_cpus: "{{ cinder_backup_cpus|default('0.3') }}"
# cinder-init
cinder_init_mem: "{{ cinder_init_mem|default('512') }}"
cinder_init_cpus: "{{ cinder_init_cpus|default('0.3') }}"
# cinder-scheduler
cinder_scheduler_mem: "{{ cinder_scheduler_mem|default('128') }}"
cinder_scheduler_cpus: "{{ cinder_scheduler_cpus|default('0.3') }}"
# cinder-volume
cinder_volume_mem: "{{ cinder_volume_mem|default('128') }}"
cinder_volume_cpus: "{{ cinder_volume_cpus|default('0.3') }}"

@@ -1,8 +0,0 @@
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = {{ ceph_cinder_backup_pool_name }}
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

@@ -1,14 +0,0 @@
[DEFAULT]
default_volume_type = rbd-1
enabled_backends = rbd-1
[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = {{ ceph_cinder_pool_name }}
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = {{ rbd_secret_uuid }}

@@ -1,9 +0,0 @@
[DEFAULT]
default_volume_type = lvmdriver-1
enabled_backends = lvmdriver-1
[lvmdriver-1]
lvm_type = default
volume_group = cinder-volumes
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-1

@@ -1,42 +0,0 @@
[DEFAULT]
debug = {{ cinder_logging_debug }}
use_syslog = {{ openstack_use_syslog }}
use_stderr = {{ openstack_use_stderr }}
enable_v1_api = false
volume_name_template = %s
glance_api_servers = http://{{ glance_api_host }}.{{ mesos_dns_domain }}:{{ glance_api_port }}
glance_api_version = 2
os_region_name = {{ openstack_region_name }}
osapi_volume_listen = {{ get_ip_address(api_interface) }}
osapi_volume_listen_port = {{ cinder_api_port }}
api_paste_config = /etc/cinder/api-paste.ini
nova_catalog_info = compute:nova:internalURL
auth_strategy = keystone
[database]
connection = mysql+pymysql://{{ cinder_database_user }}:{{ cinder_database_password }}@{{ cinder_database_address }}/{{ cinder_database_name }}
[keystone_authtoken]
auth_uri = http://{{ keystone_auth_host }}:{{ keystone_public_port }}
auth_url = http://{{ keystone_auth_host }}:{{ keystone_admin_port }}
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = {{ cinder_keystone_user }}
password = {{ cinder_keystone_password }}
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_rabbit]
rabbit_userid = {{ rabbitmq_user }}
rabbit_password = {{ rabbitmq_password }}
rabbit_ha_queues = true
rabbit_hosts = {{ list_ips_by_service('infra/rabbitmq/rabbitmq', rabbitmq_port) }}

@@ -1,9 +0,0 @@
#!/bin/bash
{% set apache_cmd = 'apache2' if kolla_base_distro in ['ubuntu', 'debian'] else 'httpd' %}
if [[ "${KOLLA_BASE_DISTRO}" == "ubuntu" || \
"${KOLLA_BASE_DISTRO}" == "debian" ]]; then
# Loading Apache2 ENV variables
source /etc/apache2/envvars
fi
/usr/sbin/{{ apache_cmd }} -DFOREGROUND

@@ -1,3 +0,0 @@
kolla_mesos_start.py:
source: kolla_mesos/container_scripts/start.py
dest: /usr/local/bin/kolla_mesos_start

@@ -1,11 +0,0 @@
{
"command": "kolla_mesos_start",
"config_files": [
{
"source": "{{ container_config_directory }}/kolla/common/kolla_mesos_start.py",
"dest": "/usr/local/bin/kolla_mesos_start",
"owner": "root",
"perm": "0755"
}
]
}

@@ -1,63 +0,0 @@
---
####################
# Ceph
####################
ceph_glance_pool_type: "{{ ceph_pool_type }}"
ceph_glance_cache_mode: "{{ ceph_cache_mode }}"
# Due to Ansible issues with include, you cannot override these variables.
# Please override the variables they reference instead.
glance_pool_name: "{{ ceph_glance_pool_name }}"
glance_pool_type: "{{ ceph_glance_pool_type }}"
glance_cache_mode: "{{ ceph_glance_cache_mode }}"
####################
# Database
####################
glance_database_name: "glance"
glance_database_user: "glance"
glance_database_address: "{{ database_address }}"
####################
# Docker
####################
glance_registry_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-glance-registry"
glance_registry_tag: "{{ openstack_release }}"
glance_registry_image_full: "{{ glance_registry_image }}:{{ glance_registry_tag }}"
glance_api_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-glance-api"
glance_api_tag: "{{ openstack_release }}"
glance_api_image_full: "{{ glance_api_image }}:{{ glance_api_tag }}"
####################
# Openstack
####################
glance_admin_endpoint: "http://{{ glance_api_host }}:{{ glance_api_port }}"
glance_internal_endpoint: "http://{{ glance_api_host }}:{{ glance_api_port }}"
glance_public_endpoint: "http://{{ glance_api_host }}:{{ glance_api_port }}"
glance_logging_verbose: "{{ openstack_logging_verbose }}"
glance_logging_debug: "{{ openstack_logging_debug }}"
glance_keystone_user: "glance"
openstack_glance_auth: "{'auth_url':'{{ openstack_auth_url }}','username':'{{ openstack_username }}','password':'{{ openstack_password }}','project_name':'{{ openstack_project_name }}','domain_name':'default'}"
glance_registry_host: "glance-registry-glance-registry-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
####################
# Resources
####################
# glance-api
glance_api_mem: "{{ glance_api_mem|default('128') }}"
glance_api_cpus: "{{ glance_api_cpus|default('0.3') }}"
# glance-init
glance_init_mem: "{{ glance_init_mem|default('512') }}"
glance_init_cpus: "{{ glance_init_cpus|default('0.3') }}"
# glance-registry
glance_registry_mem: "{{ glance_registry_mem|default('128') }}"
glance_registry_cpus: "{{ glance_registry_cpus|default('0.3') }}"

@@ -1,29 +0,0 @@
[DEFAULT]
debug = {{ glance_logging_debug }}
use_syslog = {{ openstack_use_syslog }}
use_stderr = {{ openstack_use_stderr }}
bind_host = {{ get_ip_address(api_interface) }}
bind_port = {{ glance_api_port }}
registry_host = {{ glance_registry_host }}
[database]
connection = mysql+pymysql://{{ glance_database_user }}:{{ glance_database_password }}@{{ glance_database_address }}/{{ glance_database_name }}
[keystone_authtoken]
auth_uri = http://{{ keystone_auth_host }}:{{ keystone_public_port }}
auth_url = http://{{ keystone_auth_host }}:{{ keystone_admin_port }}
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = {{ glance_keystone_user }}
password = {{ glance_keystone_password }}
[paste_deploy]
flavor = keystone
[oslo_messaging_notifications]
driver = noop

@@ -1,9 +0,0 @@
[DEFAULT]
show_image_direct_url = True
[glance_store]
default_store = rbd
stores = rbd
rbd_store_user = glance
rbd_store_pool = {{ ceph_glance_pool_name }}
rbd_store_chunk_size = 8

@@ -1,3 +0,0 @@
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

@@ -1,27 +0,0 @@
[DEFAULT]
debug = {{ glance_logging_debug }}
use_syslog = {{ openstack_use_syslog }}
use_stderr = {{ openstack_use_stderr }}
bind_host = {{ get_ip_address(api_interface) }}
bind_port = {{ glance_registry_port }}
[database]
connection = mysql+pymysql://{{ glance_database_user }}:{{ glance_database_password }}@{{ glance_database_address }}/{{ glance_database_name }}
[keystone_authtoken]
auth_uri = http://{{ keystone_auth_host }}:{{ keystone_public_port }}
auth_url = http://{{ keystone_auth_host }}:{{ keystone_admin_port }}
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = {{ glance_keystone_user }}
password = {{ glance_keystone_password }}
[paste_deploy]
flavor = keystone
[oslo_messaging_notifications]
driver = noop

@@ -1,15 +0,0 @@
---
project_name: "horizon"
####################
# Docker
####################
horizon_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-horizon"
horizon_tag: "{{ openstack_release }}"
####################
# Resources
####################
horizon_mem: "{{ horizon_mem|default('128') }}"
horizon_cpus: "{{ horizon_cpus|default('0.3') }}"

@@ -1,24 +0,0 @@
{% set apache_dir = 'apache2' if kolla_base_distro in ['ubuntu', 'debian'] else 'httpd' %}
{% set python_path = '/usr/lib/python2.7/site-packages' if kolla_install_type == 'binary' else '/var/lib/kolla/venv/lib/python2.7/site-packages' %}
Listen {{ get_ip_address(api_interface) }}:80
<VirtualHost *:80>
LogLevel warn
ErrorLog /var/log/{{ apache_dir }}/horizon.log
CustomLog /var/log/{{ apache_dir }}/horizon-access.log combined
WSGIScriptReloading On
WSGIDaemonProcess horizon-http processes=5 threads=1 user=horizon group=horizon display-name=%{GROUP} python-path={{ python_path }}
WSGIProcessGroup horizon-http
WSGIScriptAlias / {{ python_path }}/openstack_dashboard/wsgi/django.wsgi
WSGIPassAuthorization On
<Location "/">
Require all granted
</Location>
Alias /static {{ python_path }}/static
<Location "/static">
SetHandler None
</Location>
</VirtualHost>

@@ -1,649 +0,0 @@
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
DEBUG = False
TEMPLATE_DEBUG = DEBUG
COMPRESS_OFFLINE = True
# WEBROOT is the location relative to Webserver root
# should end with a slash.
WEBROOT = '/'
# LOGIN_URL = WEBROOT + 'auth/login/'
# LOGOUT_URL = WEBROOT + 'auth/logout/'
#
# LOGIN_REDIRECT_URL can be used as an alternative for
# HORIZON_CONFIG.user_home, if user_home is not set.
# Do not set it to '/home/', as this will cause circular redirect loop
# LOGIN_REDIRECT_URL = WEBROOT
# Required for Django 1.5.
# If horizon is running in production (DEBUG is False), set this
# with the list of host/domain names that the application can serve.
# For more information see:
# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
ALLOWED_HOSTS = ['*']
# Set SSL proxy settings:
# For Django 1.4+ pass this header from the proxy after terminating the SSL,
# and don't forget to strip it from the client's request.
# For more information see:
# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
# https://docs.djangoproject.com/en/1.5/ref/settings/#secure-proxy-ssl-header
#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
#CSRF_COOKIE_SECURE = True
#SESSION_COOKIE_SECURE = True
# Overrides for OpenStack API versions. Use this setting to force the
# OpenStack dashboard to use a specific API version for a given service API.
# Versions specified here should be integers or floats, not strings.
# NOTE: The version should be formatted as it appears in the URL for the
# service API. For example, The identity service APIs have inconsistent
# use of the decimal point, so valid options would be 2.0 or 3.
#OPENSTACK_API_VERSIONS = {
# "data-processing": 1.1,
# "identity": 3,
# "volume": 2,
#}
OPENSTACK_API_VERSIONS = {
"identity": 3,
}
# Set this to True if running on multi-domain model. When this is enabled, it
# will require user to enter the Domain name in addition to username for login.
#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False
# Overrides the default domain used when running on single-domain model
# with Keystone V3. All entities will be created in the default domain.
#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
# Set Console type:
# valid options are "AUTO"(default), "VNC", "SPICE", "RDP", "SERIAL" or None
# Set to None explicitly if you want to deactivate the console.
#CONSOLE_TYPE = "AUTO"
# Default OpenStack Dashboard configuration.
HORIZON_CONFIG = {
'user_home': 'openstack_dashboard.views.get_user_home',
'ajax_queue_limit': 10,
'auto_fade_alerts': {
'delay': 3000,
'fade_duration': 1500,
'types': ['alert-success', 'alert-info']
},
'help_url': "http://docs.openstack.org",
'exceptions': {'recoverable': exceptions.RECOVERABLE,
'not_found': exceptions.NOT_FOUND,
'unauthorized': exceptions.UNAUTHORIZED},
'modal_backdrop': 'static',
'angular_modules': [],
'js_files': [],
'js_spec_files': [],
}
# Specify a regular expression to validate user passwords.
#HORIZON_CONFIG["password_validator"] = {
# "regex": '.*',
# "help_text": _("Your password does not meet the requirements."),
#}
# Disable simplified floating IP address management for deployments with
# multiple floating IP pools or complex network requirements.
#HORIZON_CONFIG["simple_ip_management"] = False
# Turn off browser autocompletion for forms including the login form and
# the database creation workflow if so desired.
#HORIZON_CONFIG["password_autocomplete"] = "off"
# Setting this to True will disable the reveal button for password fields,
# including on the login form.
#HORIZON_CONFIG["disable_password_reveal"] = False
LOCAL_PATH = '/tmp'
# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
# default secret key that is unique on this machine, i.e. regardless of the
# number of Python WSGI workers (if used behind Apache+mod_wsgi). However,
# there may be situations where you would want to set this explicitly, e.g.
# when multiple dashboard instances are distributed on different machines
# (usually behind a load-balancer). Either you have to make sure that a session
# gets all requests routed to the same dashboard instance or you set the same
# SECRET_KEY for all of them.
SECRET_KEY='{{ horizon_secret_key }}'
# Memcached session engine
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
# We recommend you use memcached for development; otherwise after every reload
# of the Django development server, you will have to log in again. To use
# memcached set CACHES to something like
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '{{ list_ips_by_service('infra/memcached/memcached', memcached_port) }}'
}
}
# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
# Configure these for your outgoing email host
#EMAIL_HOST = 'smtp.my-company.com'
#EMAIL_PORT = 25
#EMAIL_HOST_USER = 'djangomail'
#EMAIL_HOST_PASSWORD = 'top-secret!'
# For multiple regions uncomment this configuration, and add (endpoint, title).
#AVAILABLE_REGIONS = [
# ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
# ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
#]
OPENSTACK_HOST = "{{ keystone_auth_host }}"
OPENSTACK_KEYSTONE_URL = "http://%s:{{ keystone_public_port }}/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
# Enables keystone web single-sign-on if set to True.
#WEBSSO_ENABLED = False
# Determines which authentication choice to show as default.
#WEBSSO_INITIAL_CHOICE = "credentials"
# The list of authentication mechanisms
# which include keystone federation protocols.
# Current supported protocol IDs are 'saml2' and 'oidc'
# which represent SAML 2.0, OpenID Connect respectively.
# Do not remove the mandatory credentials mechanism.
#WEBSSO_CHOICES = (
# ("credentials", _("Keystone Credentials")),
# ("oidc", _("OpenID Connect")),
# ("saml2", _("Security Assertion Markup Language")))
# Disable SSL certificate checks (useful for self-signed certificates):
#OPENSTACK_SSL_NO_VERIFY = True
# The CA certificate to use to verify SSL connections
#OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'
# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
'name': 'native',
'can_edit_user': True,
'can_edit_group': True,
'can_edit_project': True,
'can_edit_domain': True,
'can_edit_role': True,
}
# Setting this to True, will add a new "Retrieve Password" action on instance,
# allowing Admin session password retrieval/decryption.
#OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False
# The Launch Instance user experience has been significantly enhanced.
# You can choose whether to enable the new launch instance experience,
# the legacy experience, or both. The legacy experience will be removed
# in a future release, but is available as a temporary backup setting to ensure
# compatibility with existing deployments. Further development will not be
# done on the legacy experience. Please report any problems with the new
# experience via the Launchpad tracking system.
#
# Toggle LAUNCH_INSTANCE_LEGACY_ENABLED and LAUNCH_INSTANCE_NG_ENABLED to
# determine the experience to enable. Set them both to true to enable
# both.
#LAUNCH_INSTANCE_LEGACY_ENABLED = True
#LAUNCH_INSTANCE_NG_ENABLED = False
# The Xen Hypervisor has the ability to set the mount point for volumes
# attached to instances (other Hypervisors currently do not). Setting
# can_set_mount_point to True will add the option to set the mount point
# from the UI.
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': False,
'can_set_password': False,
}
# The OPENSTACK_CINDER_FEATURES settings can be used to enable optional
# services provided by cinder that is not exposed by its extension API.
OPENSTACK_CINDER_FEATURES = {
'enable_backup': False,
}
# The OPENSTACK_NEUTRON_NETWORK settings can be used to enable optional
# services provided by neutron. Options currently available are load
# balancer service, security groups, quotas, VPN service.
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': True,
'enable_firewall': True,
'enable_vpn': True,
'enable_fip_topology_check': True,
# The profile_support option is used to detect if an external router can be
# configured via the dashboard. When using specific plugins the
# profile_support can be turned on if needed.
'profile_support': None,
#'profile_support': 'cisco',
# Set which provider network types are supported. Only the network types
# in this list will be available to choose from when creating a network.
# Network types include local, flat, vlan, gre, and vxlan.
'supported_provider_types': ['*'],
# Set which VNIC types are supported for port binding. Only the VNIC
# types in this list will be available to choose from when creating a
# port.
# VNIC types include 'normal', 'macvtap' and 'direct'.
'supported_vnic_types': ['*']
}
# The OPENSTACK_IMAGE_BACKEND settings can be used to customize features
# in the OpenStack Dashboard related to the Image service, such as the list
# of supported image formats.
#OPENSTACK_IMAGE_BACKEND = {
# 'image_formats': [
# ('', _('Select format')),
# ('aki', _('AKI - Amazon Kernel Image')),
# ('ami', _('AMI - Amazon Machine Image')),
# ('ari', _('ARI - Amazon Ramdisk Image')),
# ('docker', _('Docker')),
# ('iso', _('ISO - Optical Disk Image')),
# ('ova', _('OVA - Open Virtual Appliance')),
# ('qcow2', _('QCOW2 - QEMU Emulator')),
# ('raw', _('Raw')),
# ('vdi', _('VDI - Virtual Disk Image')),
# ('vhd', _('VHD - Virtual Hard Disk')),
# ('vmdk', _('VMDK - Virtual Machine Disk')),
# ]
#}
# The IMAGE_CUSTOM_PROPERTY_TITLES setting is used to customize the titles for
# image custom property attributes that appear on image detail pages.
IMAGE_CUSTOM_PROPERTY_TITLES = {
"architecture": _("Architecture"),
"kernel_id": _("Kernel ID"),
"ramdisk_id": _("Ramdisk ID"),
"image_state": _("Euca2ools state"),
"project_id": _("Project ID"),
"image_type": _("Image Type"),
}
# The IMAGE_RESERVED_CUSTOM_PROPERTIES setting is used to specify which image
# custom properties should not be displayed in the Image Custom Properties
# table.
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'publicURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"
# SECONDARY_ENDPOINT_TYPE specifies the fallback endpoint type to use in the
# case that OPENSTACK_ENDPOINT_TYPE is not present in the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is None. This
# value should differ from OPENSTACK_ENDPOINT_TYPE if used.
#SECONDARY_ENDPOINT_TYPE = "publicURL"
# The number of objects (Swift containers/objects or images) to display
# on a single page before providing a paging element (a "more" link)
# to paginate results.
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
# The size of chunk in bytes for downloading objects from Swift
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
# Specify a maximum number of items to display in a dropdown.
DROPDOWN_MAX_ITEMS = 30
# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "UTC"
# When launching an instance, the menu of available flavors is
# sorted by RAM usage, ascending. If you would like a different sort order,
# you can provide another flavor attribute as sorting key. Alternatively, you
# can provide a custom callback method to use for sorting. You can also provide
# a flag for reverse sort. For more info, see
# http://docs.python.org/2/library/functions.html#sorted
#CREATE_INSTANCE_FLAVOR_SORT = {
# 'key': 'name',
# # or
# 'key': my_awesome_callback_method,
# 'reverse': False,
#}
# Set this to True to display an 'Admin Password' field on the Change Password
# form to verify that it is indeed the admin logged-in who wants to change
# the password.
# ENFORCE_PASSWORD_CHECK = False
# Modules that provide /auth routes that can be used to handle different types
# of user authentication. Add auth plugins that require extra route handling to
# this list.
#AUTHENTICATION_URLS = [
# 'openstack_auth.urls',
#]
# The Horizon Policy Enforcement engine uses these values to load per service
# policy rule files. The content of these files should match the files the
# OpenStack services are using to determine role based access control in the
# target installation.
# Path to directory containing policy.json files
POLICY_FILES_PATH = '/etc/openstack-dashboard'
# Map of local copy of service policy files
#POLICY_FILES = {
# 'identity': 'keystone_policy.json',
# 'compute': 'nova_policy.json',
# 'volume': 'cinder_policy.json',
# 'image': 'glance_policy.json',
# 'orchestration': 'heat_policy.json',
# 'network': 'neutron_policy.json',
# 'telemetry': 'ceilometer_policy.json',
#}
# Trove user and database extension support. By default support for
# creating users and databases on database instances is turned on.
# To disable these extensions set the permission here to something
# unusable such as ["!"].
# TROVE_ADD_USER_PERMS = []
# TROVE_ADD_DATABASE_PERMS = []
# Change this path to the appropriate static directory containing
# two files: _variables.scss and _styles.scss
#CUSTOM_THEME_PATH = 'static/themes/default'
LOGGING = {
'version': 1,
# When set to True this will disable all logging except
# for loggers specified in this configuration dictionary. Note that
# if nothing is specified here and disable_existing_loggers is True,
# django.db.backends will still log unless it is disabled explicitly.
'disable_existing_loggers': False,
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'django.utils.log.NullHandler',
},
'console': {
# Set the level to "DEBUG" for verbose output logging.
'level': 'INFO',
'class': 'logging.StreamHandler',
},
},
'loggers': {
# Logging from django.db.backends is VERY verbose, send to null
# by default.
'django.db.backends': {
'handlers': ['null'],
'propagate': False,
},
'requests': {
'handlers': ['null'],
'propagate': False,
},
'horizon': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'openstack_dashboard': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'novaclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'cinderclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'glanceclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'neutronclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'heatclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'ceilometerclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'troveclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'swiftclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'openstack_auth': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'nose.plugins.manager': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'django': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'iso8601': {
'handlers': ['null'],
'propagate': False,
},
'scss': {
'handlers': ['null'],
'propagate': False,
},
}
}
# 'direction' should not be specified for all_tcp/udp/icmp.
# It is specified in the form.
SECURITY_GROUP_RULES = {
'all_tcp': {
'name': _('All TCP'),
'ip_protocol': 'tcp',
'from_port': '1',
'to_port': '65535',
},
'all_udp': {
'name': _('All UDP'),
'ip_protocol': 'udp',
'from_port': '1',
'to_port': '65535',
},
'all_icmp': {
'name': _('All ICMP'),
'ip_protocol': 'icmp',
'from_port': '-1',
'to_port': '-1',
},
'ssh': {
'name': 'SSH',
'ip_protocol': 'tcp',
'from_port': '22',
'to_port': '22',
},
'smtp': {
'name': 'SMTP',
'ip_protocol': 'tcp',
'from_port': '25',
'to_port': '25',
},
'dns': {
'name': 'DNS',
'ip_protocol': 'tcp',
'from_port': '53',
'to_port': '53',
},
'http': {
'name': 'HTTP',
'ip_protocol': 'tcp',
'from_port': '80',
'to_port': '80',
},
'pop3': {
'name': 'POP3',
'ip_protocol': 'tcp',
'from_port': '110',
'to_port': '110',
},
'imap': {
'name': 'IMAP',
'ip_protocol': 'tcp',
'from_port': '143',
'to_port': '143',
},
'ldap': {
'name': 'LDAP',
'ip_protocol': 'tcp',
'from_port': '389',
'to_port': '389',
},
'https': {
'name': 'HTTPS',
'ip_protocol': 'tcp',
'from_port': '443',
'to_port': '443',
},
'smtps': {
'name': 'SMTPS',
'ip_protocol': 'tcp',
'from_port': '465',
'to_port': '465',
},
'imaps': {
'name': 'IMAPS',
'ip_protocol': 'tcp',
'from_port': '993',
'to_port': '993',
},
'pop3s': {
'name': 'POP3S',
'ip_protocol': 'tcp',
'from_port': '995',
'to_port': '995',
},
'ms_sql': {
'name': 'MS SQL',
'ip_protocol': 'tcp',
'from_port': '1433',
'to_port': '1433',
},
'mysql': {
'name': 'MYSQL',
'ip_protocol': 'tcp',
'from_port': '3306',
'to_port': '3306',
},
'rdp': {
'name': 'RDP',
'ip_protocol': 'tcp',
'from_port': '3389',
'to_port': '3389',
},
}
# Deprecation Notice:
#
# The setting FLAVOR_EXTRA_KEYS has been deprecated.
# Please load extra spec metadata into the Glance Metadata Definition Catalog.
#
# The sample quota definitions can be found in:
# <glance_source>/etc/metadefs/compute-quota.json
#
# The metadata definition catalog supports CLI and API:
# $glance --os-image-api-version 2 help md-namespace-import
# $glance-manage db_load_metadefs <directory_with_definition_files>
#
# See Metadata Definitions on: http://docs.openstack.org/developer/glance/
# Indicate to the Sahara data processing service whether or not
# automatic floating IP allocation is in effect. If it is not
# in effect, the user will be prompted to choose a floating IP
# pool for use in their cluster. False by default. You would want
# to set this to True if you were running Nova Networking with
# auto_assign_floating_ip = True.
#SAHARA_AUTO_IP_ALLOCATION_ENABLED = False
# The hash algorithm to use for authentication tokens. This must
# match the hash algorithm that the identity server and the
# auth_token middleware are using. Allowed values are the
# algorithms supported by Python's hashlib library.
#OPENSTACK_TOKEN_HASH_ALGORITHM = 'md5'
# AngularJS requires some settings to be made available to
# the client side. Some settings are required by in-tree / built-in horizon
# features. These settings must be added to REST_API_REQUIRED_SETTINGS in the
# form of ['SETTING_1','SETTING_2'], etc.
#
# You may remove settings from this list for security purposes, but do so at
# the risk of breaking a built-in horizon feature. These settings are required
# for horizon to function properly. Only remove them if you know what you
# are doing. These settings may in the future be moved to be defined within
# the enabled panel configuration.
# You should not add settings to this list for out of tree extensions.
# See: https://wiki.openstack.org/wiki/Horizon/RESTAPI
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES']
# Additional settings can be made available to the client side for
# extensibility by specifying them in REST_API_ADDITIONAL_SETTINGS
# !! Please use extreme caution as the settings are transferred via HTTP/S
# and are not encrypted on the browser. This is an experimental API and
# may be deprecated in the future without notice.
#REST_API_ADDITIONAL_SETTINGS = []
# DISALLOW_IFRAME_EMBED can be used to prevent Horizon from being embedded
# within an iframe. Legacy browsers are still vulnerable to a Cross-Frame
# Scripting (XFS) vulnerability, so this option allows extra security hardening
# where iframes are not used in deployment. Default setting is True.
# For more information see:
# http://tinyurl.com/anticlickjack
# DISALLOW_IFRAME_EMBED = True

View File

@ -1,41 +0,0 @@
---
project_name: "keystone"
####################
# Database
####################
keystone_database_name: "keystone"
keystone_database_user: "keystone"
keystone_database_address: "{{ database_address }}"
####################
# Docker
####################
keystone_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-keystone"
keystone_tag: "{{ openstack_release }}"
# keystone_image_full: "{{ keystone_image }}:{{ keystone_tag }}"
####################
# Openstack
####################
keystone_admin_endpoint: "http://{{ keystone_auth_host }}:{{ keystone_admin_port }}/v3"
keystone_internal_endpoint: "http://{{ keystone_auth_host }}:{{ keystone_public_port }}/v3"
keystone_public_endpoint: "http://{{ keystone_auth_host }}:{{ keystone_public_port }}/v3"
keystone_logging_verbose: "{{ openstack_logging_verbose }}"
keystone_logging_debug: "{{ openstack_logging_debug }}"
openstack_keystone_auth: "{'auth_url':'{{ openstack_auth_url }}','username':'{{ openstack_username }}','password':'{{ openstack_password }}','project_name':'{{ openstack_project_name }}'}"
####################
# Resources
####################
# keystone-api
keystone_api_mem: "{{ keystone_api_mem|default('128') }}"
keystone_api_cpus: "{{ keystone_api_cpus|default('0.3') }}"
# keystone-init
keystone_init_mem: "{{ keystone_init_mem|default('512') }}"
keystone_init_cpus: "{{ keystone_init_cpus|default('0.3') }}"

View File

@ -1,7 +0,0 @@
[DEFAULT]
debug = {{ keystone_logging_debug }}
use_syslog = {{ openstack_use_syslog }}
use_stderr = {{ openstack_use_stderr }}
[database]
connection = mysql+pymysql://{{ keystone_database_user }}:{{ keystone_database_password }}@{{ keystone_database_address }}/{{ keystone_database_name }}

View File

@ -1,29 +0,0 @@
{% set apache_dir = 'apache2' if kolla_base_distro in ['ubuntu', 'debian'] else 'httpd' %}
Listen {{ get_ip_address(api_interface) }}:{{ keystone_public_port }}
Listen {{ get_ip_address(api_interface) }}:{{ keystone_admin_port }}
<VirtualHost *:{{ keystone_public_port }}>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /var/www/cgi-bin/keystone/main
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog "|/usr/bin/logger -t keystone-error"
CustomLog "|/usr/bin/logger -t keystone-access" combined
</VirtualHost>
<VirtualHost *:{{ keystone_admin_port }}>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog "|/usr/bin/logger -t keystone-error"
CustomLog "|/usr/bin/logger -t keystone-access" combined
</VirtualHost>

View File

@ -1,26 +0,0 @@
---
project_name: "mariadb"
####################
# Database
####################
database_cluster_name: "openstack"
####################
# Docker
####################
mariadb_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-mariadb"
mariadb_tag: "{{ openstack_release }}"
# mariadb_image_full: "{{ mariadb_image }}:{{ mariadb_tag }}"
#mariadb_data_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-data"
#mariadb_data_tag: "{{ openstack_release }}"
#mariadb_data_image_full: "{{ mariadb_data_image }}:{{ mariadb_data_tag }}"
####################
# Resources
####################
mariadb_mem: "{{ mariadb_mem|default('128') }}"
mariadb_cpus: "{{ mariadb_cpus|default('0.3') }}"

View File

@ -1,29 +0,0 @@
{% set wsrep_driver = '/usr/lib/galera/libgalera_smm.so' if kolla_base_distro == 'ubuntu' else '/usr/lib64/galera/libgalera_smm.so' %}
[mysqld]
bind-address={{ get_ip_address(api_interface) }}
port={{ mariadb_port }}
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
datadir=/var/lib/mysql/
wsrep_cluster_address=gcomm://{% if (list_ips_by_service('infra/mariadb/mariadb') | length) > 1 %}{{ list_ips_by_service('infra/mariadb/mariadb', mariadb_wsrep_port) }}{% endif %}
wsrep_provider_options="gmcast.listen_addr=tcp://{{ get_ip_address(api_interface) }}:{{ mariadb_wsrep_port }};ist.recv_addr={{ get_ip_address(api_interface) }}:{{ mariadb_ist_port }}"
wsrep_node_address={{ get_ip_address(api_interface) }}:{{ mariadb_wsrep_port }}
wsrep_sst_receive_address={{ get_ip_address(api_interface) }}:{{ mariadb_sst_port }}
wsrep_provider={{ wsrep_driver }}
wsrep_cluster_name="{{ database_cluster_name }}"
wsrep_node_name={{ get_hostname() }}
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth={{ database_user }}:{{ database_password }}
wsrep_slave_threads=4
max_connections=1000
[server]
pid-file=/var/lib/mysql/mariadb.pid

View File

@ -1,16 +0,0 @@
---
project_name: "memcached"
####################
# Docker
####################
memcached_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-memcached"
memcached_tag: "{{ openstack_release }}"
memcached_image_full: "{{ memcached_image }}:{{ memcached_tag }}"
####################
# Resources
####################
memcached_mem: "{{ memcached_mem|default('128') }}"
memcached_cpus: "{{ memcached_cpus|default('0.3') }}"

View File

@ -1,95 +0,0 @@
---
project_name: "neutron"
####################
# Database
####################
neutron_database_name: "neutron"
neutron_database_user: "neutron"
neutron_database_address: "{{ database_address }}"
####################
# Docker
####################
neutron_server_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-neutron-server"
neutron_server_tag: "{{ openstack_release }}"
neutron_server_image_full: "{{ neutron_server_image }}:{{ neutron_server_tag }}"
neutron_dhcp_agent_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-neutron-dhcp-agent"
neutron_dhcp_agent_tag: "{{ openstack_release }}"
neutron_dhcp_agent_image_full: "{{ neutron_dhcp_agent_image }}:{{ neutron_dhcp_agent_tag }}"
neutron_l3_agent_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-neutron-l3-agent"
neutron_l3_agent_tag: "{{ openstack_release }}"
neutron_l3_agent_image_full: "{{ neutron_l3_agent_image }}:{{ neutron_l3_agent_tag }}"
neutron_metadata_agent_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-neutron-metadata-agent"
neutron_metadata_agent_tag: "{{ openstack_release }}"
neutron_metadata_agent_image_full: "{{ neutron_metadata_agent_image }}:{{ neutron_metadata_agent_tag }}"
neutron_openvswitch_agent_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-neutron-openvswitch-agent"
neutron_openvswitch_agent_tag: "{{ openstack_release }}"
neutron_openvswitch_agent_image_full: "{{ neutron_openvswitch_agent_image }}:{{ neutron_openvswitch_agent_tag }}"
neutron_linuxbridge_agent_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-neutron-linuxbridge-agent"
neutron_linuxbridge_agent_tag: "{{ openstack_release }}"
neutron_linuxbridge_agent_image_full: "{{ neutron_linuxbridge_agent_image }}:{{ neutron_linuxbridge_agent_tag }}"
openvswitch_vswitchd_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-openvswitch-vswitchd"
openvswitch_vswitchd_tag: "{{ openstack_release }}"
openvswitch_vswitchd_image_full: "{{ openvswitch_vswitchd_image }}:{{ openvswitch_vswitchd_tag }}"
openvswitch_db_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-openvswitch-db-server"
openvswitch_db_tag: "{{ openstack_release }}"
openvswitch_db_image_full: "{{ openvswitch_db_image }}:{{ openvswitch_db_tag }}"
####################
# Openstack
####################
neutron_admin_endpoint: "http://{{ neutron_server_host }}:{{ neutron_server_port }}"
neutron_internal_endpoint: "http://{{ neutron_server_host }}:{{ neutron_server_port }}"
neutron_public_endpoint: "http://{{ neutron_server_host }}:{{ neutron_server_port }}"
neutron_logging_verbose: "{{ openstack_logging_verbose }}"
neutron_logging_debug: "{{ openstack_logging_debug }}"
neutron_keystone_user: "neutron"
neutron_bridge_name: "br-ex"
openstack_neutron_auth: "{'auth_url':'{{ openstack_auth_url }}','username':'{{ openstack_username }}','password':'{{ openstack_password }}','project_name':'{{ openstack_project_name }}'}"
nova_api_host: "nova-api-nova-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
####################
# Resources
####################
# neutron-dhcp-agent
neutron_dhcp_agent_mem: "{{ neutron_dhcp_agent_mem|default('128') }}"
neutron_dhcp_agent_cpus: "{{ neutron_dhcp_agent_cpus|default('0.2') }}"
# neutron-init
neutron_init_mem: "{{ neutron_init_mem|default('512') }}"
neutron_init_cpus: "{{ neutron_init_cpus|default('0.3') }}"
# neutron-l3-agent
neutron_l3_agent_mem: "{{ neutron_l3_agent_mem|default('128') }}"
neutron_l3_agent_cpus: "{{ neutron_l3_agent_cpus|default('0.2') }}"
# neutron-linuxbridge-agent
neutron_linuxbridge_agent_mem: "{{ neutron_linuxbridge_agent_mem|default('128') }}"
neutron_linuxbridge_agent_cpus: "{{ neutron_linuxbridge_agent_cpus|default('0.2') }}"
# neutron-metadata-agent
neutron_metadata_agent_mem: "{{ neutron_metadata_agent_mem|default('128') }}"
neutron_metadata_agent_cpus: "{{ neutron_metadata_agent_cpus|default('0.2') }}"
# neutron-openvswitch-agent
neutron_openvswitch_agent_mem: "{{ neutron_openvswitch_agent_mem|default('128') }}"
neutron_openvswitch_agent_cpus: "{{ neutron_openvswitch_agent_cpus|default('0.2') }}"
# neutron-server
neutron_server_mem: "{{ neutron_server_mem|default('128') }}"
neutron_server_cpus: "{{ neutron_server_cpus|default('0.3') }}"
# openvswitch-db
openvswitch_db_mem: "{{ openvswitch_db_mem|default('128') }}"
openvswitch_db_cpus: "{{ openvswitch_db_cpus|default('0.3') }}"
# openvswitch-vswitchd
openvswitch_vswitchd_mem: "{{ openvswitch_vswitchd_mem|default('128') }}"
openvswitch_vswitchd_cpus: "{{ openvswitch_vswitchd_cpus|default('0.3') }}"

View File

@ -1,3 +0,0 @@
# dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq.conf

View File

@ -1,2 +0,0 @@
dhcp-option-force=26,1450
log-facility=/var/log/neutron/dnsmasq.log

View File

@ -1 +0,0 @@
[fwaas]

View File

@ -1,4 +0,0 @@
# l3_agent.ini
[DEFAULT]
agent_mode = legacy
external_network_bridge =

View File

@ -1,5 +0,0 @@
# metadata_agent.ini
[DEFAULT]
nova_metadata_ip = {{ nova_api_host }}
nova_metadata_port = {{ nova_metadata_port }}
metadata_proxy_shared_secret = {{ metadata_secret }}

View File

@ -1,15 +0,0 @@
# ml2_conf.ini
[ml2]
# Changing type_drivers after bootstrap can lead to database inconsistencies
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
[ml2_type_vlan]
network_vlan_ranges =
[ml2_type_flat]
flat_networks = physnet1
[ml2_type_vxlan]
vni_ranges = 1:1000
vxlan_group = 239.1.1.1

View File

@ -1,12 +0,0 @@
[ml2]
mechanism_drivers = linuxbridge,l2population
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[linux_bridge]
physical_interface_mappings = physnet1:{{ neutron_external_interface }}
[vxlan]
l2_population = true
local_ip = {{ get_ip_address(tunnel_interface) }}

View File

@ -1,14 +0,0 @@
[ml2]
mechanism_drivers = openvswitch,l2population
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true
[ovs]
bridge_mappings = physnet1:{{ neutron_bridge_name }}
local_ip = {{ get_ip_address(tunnel_interface) }}

View File

@ -1,53 +0,0 @@
# neutron.conf
[DEFAULT]
debug = {{ keystone_logging_debug }}
use_syslog = {{ openstack_use_syslog }}
use_stderr = {{ openstack_use_stderr }}
bind_host = {{ get_ip_address(api_interface) }}
bind_port = {{ neutron_server_port }}
#lock_path = /var/lock/neutron
api_paste_config = /usr/share/neutron/api-paste.ini
allow_overlapping_ips = true
core_plugin = ml2
service_plugins = router
[nova]
auth_url = http://{{ keystone_auth_host }}:{{ keystone_admin_port }}
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = {{ openstack_region_name }}
project_name = service
username = nova
password = {{ nova_keystone_password }}
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_rabbit]
rabbit_userid = {{ rabbitmq_user }}
rabbit_password = {{ rabbitmq_password }}
rabbit_ha_queues = true
rabbit_hosts = {{ list_ips_by_service('infra/rabbitmq/rabbitmq', rabbitmq_port) }}
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[database]
connection = mysql+pymysql://{{ neutron_database_user }}:{{ neutron_database_password }}@{{ neutron_database_address }}/{{ neutron_database_name }}
[keystone_authtoken]
auth_uri = http://{{ keystone_auth_host }}:{{ keystone_public_port }}
auth_url = http://{{ keystone_auth_host }}:{{ keystone_admin_port }}
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = {{ neutron_keystone_password }}
[oslo_messaging_notifications]
driver = noop

View File

@ -1,2 +0,0 @@
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

View File

@ -1,2 +0,0 @@
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

View File

@ -1,17 +0,0 @@
#!/bin/bash
# Ensure the given Open vSwitch bridge exists and that the given port is
# attached to it. Prints "changed" if anything was modified.
bridge=$1
port=$2
# ovs-vsctl br-exists returns 2 when the bridge does not exist
ovs-vsctl br-exists "$bridge"; rc=$?
if [[ $rc == 2 ]]; then
changed=changed
ovs-vsctl --no-wait add-br "$bridge"
fi
if [[ ! $(ovs-vsctl list-ports "$bridge") =~ \<$port\> ]]; then
changed=changed
ovs-vsctl --no-wait add-port "$bridge" "$port"
fi
echo $changed

View File

@ -1,113 +0,0 @@
---
project_name: "nova"
####################
# Ceph
####################
ceph_nova_pool_type: "{{ ceph_pool_type }}"
ceph_nova_cache_mode: "{{ ceph_cache_mode }}"
# Due to Ansible issues on include, you cannot override these variables. Please
# override the variables they reference instead.
nova_pool_name: "{{ ceph_nova_pool_name }}"
nova_pool_type: "{{ ceph_nova_pool_type }}"
nova_cache_mode: "{{ ceph_nova_cache_mode }}"
####################
# Database
####################
nova_database_name: "nova"
nova_database_user: "nova"
nova_database_address: "{{ database_address }}"
nova_api_database_name: "nova_api"
nova_api_database_user: "nova_api"
nova_api_database_address: "{{ database_address }}"
####################
# Docker
####################
nova_libvirt_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-libvirt"
nova_libvirt_tag: "{{ openstack_release }}"
nova_libvirt_image_full: "{{ nova_libvirt_image }}:{{ nova_libvirt_tag }}"
nova_conductor_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-conductor"
nova_conductor_tag: "{{ openstack_release }}"
nova_conductor_image_full: "{{ nova_conductor_image }}:{{ nova_conductor_tag }}"
nova_consoleauth_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-consoleauth"
nova_consoleauth_tag: "{{ openstack_release }}"
nova_consoleauth_image_full: "{{ nova_consoleauth_image }}:{{ nova_consoleauth_tag }}"
nova_novncproxy_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-novncproxy"
nova_novncproxy_tag: "{{ openstack_release }}"
nova_novncproxy_image_full: "{{ nova_novncproxy_image }}:{{ nova_novncproxy_tag }}"
nova_spicehtml5proxy_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-spicehtml5proxy"
nova_spicehtml5proxy_tag: "{{ openstack_release }}"
nova_spicehtml5proxy_image_full: "{{ nova_spicehtml5proxy_image }}:{{ nova_spicehtml5proxy_tag }}"
nova_scheduler_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-scheduler"
nova_scheduler_tag: "{{ openstack_release }}"
nova_scheduler_image_full: "{{ nova_scheduler_image }}:{{ nova_scheduler_tag }}"
nova_compute_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-compute"
nova_compute_tag: "{{ openstack_release }}"
nova_compute_image_full: "{{ nova_compute_image }}:{{ nova_compute_tag }}"
nova_api_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-api"
nova_api_tag: "{{ openstack_release }}"
nova_api_image_full: "{{ nova_api_image }}:{{ nova_api_tag }}"
nova_compute_ironic_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-nova-compute-ironic"
nova_compute_ironic_tag: "{{ openstack_release }}"
nova_compute_ironic_image_full: "{{ nova_compute_ironic_image }}:{{ nova_compute_ironic_tag }}"
####################
# Openstack
####################
nova_admin_endpoint: "http://{{ nova_api_host }}:{{ nova_api_port }}/v2/%(tenant_id)s"
nova_internal_endpoint: "http://{{ nova_api_host }}:{{ nova_api_port }}/v2/%(tenant_id)s"
nova_public_endpoint: "http://{{ nova_api_host }}:{{ nova_api_port }}/v2/%(tenant_id)s"
nova_logging_verbose: "{{ openstack_logging_verbose }}"
nova_logging_debug: "{{ openstack_logging_debug }}"
nova_keystone_user: "nova"
openstack_nova_auth: "{'auth_url':'{{ openstack_auth_url }}','username':'{{ openstack_username }}','password':'{{ openstack_password }}','project_name':'{{ openstack_project_name }}'}"
nova_novncproxy_host: "nova-novncproxy-nova-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
nova_spicehtml5proxy_host: "nova-spicehtml5proxy-nova-openstack-{{ deployment_id }}.{{ marathon_framework }}.{{ mesos_dns_domain }}"
####################
# Resources
####################
# nova-api
nova_api_mem: "{{ nova_api_mem|default('128') }}"
nova_api_cpus: "{{ nova_api_cpus|default('0.3') }}"
# nova-compute
nova_compute_mem: "{{ nova_compute_mem|default('1024') }}"
nova_compute_cpus: "{{ nova_compute_cpus|default('2') }}"
# nova-conductor
nova_conductor_mem: "{{ nova_conductor_mem|default('128') }}"
nova_conductor_cpus: "{{ nova_conductor_cpus|default('0.3') }}"
# nova-consoleauth
nova_consoleauth_mem: "{{ nova_consoleauth_mem|default('128') }}"
nova_consoleauth_cpus: "{{ nova_consoleauth_cpus|default('0.3') }}"
# nova-init
nova_init_mem: "{{ nova_init_mem|default('512') }}"
nova_init_cpus: "{{ nova_init_cpus|default('0.3') }}"
# nova-libvirt
nova_libvirt_mem: "{{ nova_libvirt_mem|default('1024') }}"
nova_libvirt_cpus: "{{ nova_libvirt_cpus|default('2') }}"
# nova-novncproxy
nova_novncproxy_mem: "{{ nova_novncproxy_mem|default('128') }}"
nova_novncproxy_cpus: "{{ nova_novncproxy_cpus|default('0.3') }}"
# nova-scheduler
nova_scheduler_mem: "{{ nova_scheduler_mem|default('128') }}"
nova_scheduler_cpus: "{{ nova_scheduler_cpus|default('0.3') }}"
# nova-spicehtml5proxy
nova_spicehtml5proxy_mem: "{{ nova_spicehtml5proxy_mem|default('128') }}"
nova_spicehtml5proxy_cpus: "{{ nova_spicehtml5proxy_cpus|default('0.3') }}"

View File

@ -1,11 +0,0 @@
listen_tcp = 1
auth_tcp = "none"
ca_file = ""
log_level = 2
log_outputs = "2:file:/var/log/libvirt/libvirtd.log"
listen_addr = "{{ get_ip_address(api_interface) }}"
unix_sock_group = "nova"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"


@ -1,147 +0,0 @@
# nova.conf
[DEFAULT]
debug = {{ nova_logging_debug }}
api_paste_config = /etc/nova/api-paste.ini
state_path = /var/lib/nova
osapi_compute_listen = {{ get_ip_address(api_interface) }}
osapi_compute_listen_port = {{ nova_api_port }}
metadata_listen = {{ get_ip_address(api_interface) }}
metadata_listen_port = {{ nova_metadata_port }}
ec2_listen = {{ get_ip_address(api_interface) }}
ec2_listen_port = {{ nova_api_ec2_port }}
notification_driver = noop
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
{% if neutron_plugin_agent == "openvswitch" %}
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
{% elif neutron_plugin_agent == "linuxbridge" %}
linuxnet_interface_driver = nova.network.linux_net.BridgeInterfaceDriver
{% endif %}
allow_resize_to_same_host = true
{% if enable_ironic | bool %}
scheduler_host_manager = nova.scheduler.ironic_host_manager.IronicHostManager
{% endif %}
{% if service_name == "openstack/nova/nova-compute-ironic" %}
compute_driver = nova.virt.ironic.IronicDriver
vnc_enabled = False
ram_allocation_ratio = 1.0
reserved_host_memory_mb = 0
{% elif enable_nova_fake | bool %}
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
host = {{ get_hostname() }}_{{ item }}
compute_driver = fake.FakeDriver
{% else %}
compute_driver = libvirt.LibvirtDriver
{% endif %}
memcached_servers = {{ list_ips_by_service('infra/memcached/memcached', memcached_port) }}
# Though my_ip is not used directly, lots of other variables use $my_ip
my_ip = {{ get_ip_address(api_interface) }}
{% if nova_console == 'novnc' %}
novncproxy_host = {{ get_ip_address(api_interface) }}
novncproxy_port = {{ nova_novncproxy_port }}
[vnc]
vncserver_listen = {{ get_ip_address(api_interface) }}
vncserver_proxyclient_address = {{ get_ip_address(api_interface) }}
{% if service_name == "openstack/nova/nova-compute" %}
novncproxy_base_url = http://{{ nova_novncproxy_host }}:{{ nova_novncproxy_port }}/vnc_auto.html
{% endif %}
{% elif nova_console == 'spice' %}
[vnc]
# We have to turn off vnc to use spice
enabled = false
[spice]
server_listen = {{ get_ip_address(api_interface) }}
server_proxyclient_address = {{ get_ip_address(api_interface) }}
{% if service_name == "openstack/nova/nova-compute" %}
html5proxy_base_url = http://{{ nova_spicehtml5proxy_host }}:{{ nova_spicehtml5proxy_port }}/spice_auto.html
{% endif %}
html5proxy_host = {{ get_ip_address(api_interface) }}
html5proxy_port = {{ nova_spicehtml5proxy_port }}
{% endif %}
{% if service_name == "openstack/nova/nova-compute-ironic" %}
[ironic]
#(TODO) remember to update this once discoverd is replaced by inspector
admin_username = {{ ironic_keystone_user }}
admin_password = {{ ironic_keystone_password }}
admin_url = {{ openstack_auth_url }}
admin_tenant_name = service
api_endpoint = http://{{ kolla_internal_address }}:{{ ironic_api_port }}/v1
{% endif %}
[oslo_messaging_rabbit]
rabbit_userid = {{ rabbitmq_user }}
rabbit_password = {{ rabbitmq_password }}
rabbit_ha_queues = true
rabbit_hosts = {{ list_ips_by_service('infra/rabbitmq/rabbitmq', rabbitmq_port) }}
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[glance]
api_servers = {{ glance_api_host }}:{{ glance_api_port }}
num_retries = {{ list_ips_by_service('openstack/glance/glance-api').split(',') | length }}
[cinder]
catalog_info = volume:cinder:internalURL
[neutron]
url = http://{{ neutron_server_host }}:{{ neutron_server_port }}
auth_strategy = keystone
metadata_proxy_shared_secret = {{ metadata_secret }}
service_metadata_proxy = true
auth_url = http://{{ keystone_auth_host }}:{{ keystone_admin_port }}
auth_plugin = password
project_domain_name = default
user_domain_id = default
project_name = service
username = neutron
password = {{ neutron_keystone_password }}
[database]
connection = mysql+pymysql://{{ nova_database_user }}:{{ nova_database_password }}@{{ nova_database_address }}/{{ nova_database_name }}
[api_database]
connection = mysql+pymysql://{{ nova_api_database_user }}:{{ nova_api_database_password }}@{{ nova_api_database_address }}/{{ nova_api_database_name }}
[keystone_authtoken]
auth_uri = http://{{ keystone_auth_host }}:{{ keystone_public_port }}
auth_url = http://{{ keystone_auth_host }}:{{ keystone_admin_port }}
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = {{ nova_keystone_password }}
[libvirt]
connection_uri = "qemu+tcp://{{ get_ip_address(api_interface) }}/system"
{% if enable_ceph | bool %}
images_type = rbd
images_rbd_pool = {{ ceph_nova_pool_name }}
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = {{ rbd_secret_uuid }}
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap
{% endif %}
[upgrade_levels]
compute = auto


@ -1,24 +0,0 @@
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
if [[ -n "$1" ]]; then
OS_USERNAME=$1
fi
if [[ -n "$2" ]]; then
OS_PROJECT_NAME=$2
fi
export OS_PROJECT_NAME=${OS_PROJECT_NAME:-admin}
export OS_USERNAME=${OS_USERNAME:-admin}
if [ $OS_USERNAME == "admin" ]; then
export OS_PASSWORD={{ keystone_admin_password }}
fi
export OS_AUTH_URL=http://{{ keystone_auth_host }}:{{ keystone_admin_port }}
export OS_REGION_NAME={{ openstack_region_name }}
export NOVA_VERSION=${NOVA_VERSION:-1.1}
export COMPUTE_API_VERSION=${COMPUTE_API_VERSION:-$NOVA_VERSION}
export CINDER_VERSION=${CINDER_VERSION:-2}
export OS_VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-$CINDER_VERSION}


@ -1,27 +0,0 @@
---
project_name: "rabbitmq"
####################
# Docker
####################
rabbitmq_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-rabbitmq"
rabbitmq_tag: "{{ openstack_release }}"
rabbitmq_image_full: "{{ rabbitmq_image }}:{{ rabbitmq_tag }}"
rabbitmq_data_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-data"
rabbitmq_data_tag: "{{ openstack_release }}"
rabbitmq_data_image_full: "{{ rabbitmq_data_image }}:{{ rabbitmq_data_tag }}"
####################
# Message-Broker
####################
rabbitmq_user: "openstack"
rabbitmq_cluster_name: "openstack"
####################
# Resources
####################
rabbitmq_mem: "{{ rabbitmq_mem|default('128') }}"
rabbitmq_cpus: "{{ rabbitmq_cpus|default('0.3') }}"


@ -1,12 +0,0 @@
RABBITMQ_NODENAME=rabbit
RABBITMQ_BOOT_MODULE=rabbit_clusterer
{% if not kolla_base_distro in ['ubuntu', 'debian'] %}
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.5.5/plugins/rabbitmq_clusterer-3.5.x-189b3a81.ez/rabbitmq_clusterer-3.5.x-189b3a81/ebin"
# See bug https://bugs.launchpad.net/ubuntu/+source/erlang/+bug/1374109
export ERL_EPMD_ADDRESS={{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}
{% else %}
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.5.7/plugins/rabbitmq_clusterer-3.5.x-189b3a81.ez/rabbitmq_clusterer-3.5.x-189b3a81/ebin"
{% endif %}
export ERL_EPMD_PORT={{ rabbitmq_epmd_port }}


@ -1,23 +0,0 @@
[
{kernel, [
{inet_dist_use_interface, {% raw %}{{% endraw %}{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] | regex_replace('\.', ',') }}}},
{inet_dist_listen_min, {{ rabbitmq_cluster_port }}},
{inet_dist_listen_max, {{ rabbitmq_cluster_port }}}
]},
{rabbit, [
{tcp_listeners, [
{"{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}", {{ rabbitmq_port }}}
]},
{default_user, <<"{{ rabbitmq_user }}">>},
{default_pass, <<"{{ rabbitmq_password }}">>},
{cluster_partition_handling, autoheal}
]},
{rabbitmq_management, [
{listener, [
{ip, "{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}"},
{port, {{ rabbitmq_management_port }}}
]}
]},
{rabbitmq_clusterer, [{config, "/etc/rabbitmq/rabbitmq_clusterer.config"}]}
].
% EOF


@ -1,9 +0,0 @@
[
{version, 1},
{nodes, [
{% for host in groups['rabbitmq'] %} {'rabbit@{{ hostvars[host]['ansible_hostname'] }}', disc}{% if not loop.last %},{% endif %}
{% endfor %}
]},
{gospel, {node, 'rabbit@{{ hostvars[groups['rabbitmq'][0]]['ansible_hostname'] }}'}}
].


@ -1 +0,0 @@
*.rst


@ -1,105 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT = os.path.abspath(os.path.join(BASE_DIR, "..", ".."))
sys.path.insert(0, ROOT)
sys.path.insert(0, BASE_DIR)
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
'oslosphinx']
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'kolla-mesos'
copyright = u'2013, OpenStack Foundation'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['**/#*', '**~', '**/#*#']
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
primary_domain = 'py'
nitpicky = False
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}
# -- Options for manual page output -------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('man/kolla-mesos', 'kolla-mesos',
u'Shell to access kolla-mesos.',
[u'Kolla-Mesos Developers'], 1),
]


@ -1,206 +0,0 @@
..
Copyright 2014-2015 OpenStack Foundation
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
How to add a new service
========================
Overview
--------
First, let's go through how a deployment works when you run
kolla-mesos-deploy, to better understand the flow of operations.
1. kolla-mesos-deploy iterates over all the projects specified in
the chosen profile. It then finds services and task definitions
in services/<project>/*.yml.j2.
2. Parse the service definition file and write the following to zookeeper:
   - the required templates and files
   - the variables that the above templates need
   - the definition itself
3. Generate the marathon and chronos files and deploy them.
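The zookeeper layout produced in step 2 can be sketched in plain Python. A dict stands in for zookeeper here, and the helper name and exact path layout are illustrative assumptions, not the actual kolla-mesos-deploy implementation:

```python
# Hypothetical sketch of step 2: given a parsed service definition, build
# the zookeeper node layout that kolla-mesos-deploy would write. The
# "/kolla/config/<project>/<service>" prefix follows the convention
# mentioned later in this guide; helper and argument names are made up.

def build_zookeeper_nodes(project, service, definition, files, variables):
    """Map zookeeper paths to the payloads that would be stored there."""
    base = "/kolla/config/{}/{}".format(project, service)
    nodes = {}
    # the required templates and files
    for name, content in files.items():
        nodes["{}/files/{}".format(base, name)] = content
    # the variables that the above templates need
    for key, value in variables.items():
        nodes["{}/variables/{}".format(base, key)] = str(value)
    # the definition itself
    nodes[base] = definition
    return nodes

nodes = build_zookeeper_nodes(
    "cinder", "cinder-api",
    definition="name: openstack/cinder/cinder-api",
    files={"cinder.conf.j2": "[DEFAULT]\n"},
    variables={"cinder_api_port": 8776},
)
```

In a real deployment each entry in ``nodes`` would be written to a live ZooKeeper ensemble rather than kept in memory.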
The config/<project>/defaults/main.yml
--------------------------------------
This file holds the basic variables which will be used when generating the
other files. Of course it can reuse variables from the *config/all.yml* file,
which stores global variables for the whole kolla-mesos project.
We usually store the following information in these files:
* database name, user and address
* Docker image name and tag
* OpenStack credentials and options
An example:
.. code-block:: yaml
project_name: "keystone"
keystone_database_name: "keystone"
keystone_database_user: "keystone"
keystone_database_address: "{{ kolla_internal_address }}"
keystone_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-keystone"
keystone_tag: "{{ openstack_release }}"
keystone_public_address: "{{ kolla_external_address }}"
keystone_admin_address: "{{ kolla_internal_address }}"
keystone_internal_address: "{{ kolla_internal_address }}"
keystone_logging_verbose: "{{ openstack_logging_verbose }}"
keystone_logging_debug: "{{ openstack_logging_debug }}"
config/<project>/templates/*
----------------------------
kolla-mesos uses these files to generate the configuration of OpenStack
services. You can use jinja2 variables here. In general, such a config file
should follow the same practices as a regular config file.
An example::
[DEFAULT]
verbose = {{ keystone_logging_verbose }}
debug = {{ keystone_logging_debug }}
admin_token = {{ keystone_admin_token }}
[database]
connection = mysql://{{ keystone_database_user }}:{{ keystone_database_password }}@{{ keystone_database_address }}/{{ keystone_database_name }}
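Rendering such a template boils down to substituting the variables from the defaults files. A minimal sketch using Jinja2 (the variable values here are made up for illustration):

```python
# Render a fragment of the keystone config template with example values.
import jinja2

template = jinja2.Template(
    "[DEFAULT]\n"
    "verbose = {{ keystone_logging_verbose }}\n"
    "debug = {{ keystone_logging_debug }}\n"
)
rendered = template.render(keystone_logging_verbose=True,
                           keystone_logging_debug=False)
```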
The service definition file
---------------------------
kolla-mesos-deploy uses this file to know which files are placed into
zookeeper from the kolla-mesos repo. Note that the config itself is
copied into zookeeper so that the container can read it too.
kolla_mesos_start.py (within the running container) uses this config to:
1. know where these files are placed within the container.
2. run commands defined in the config
The following is an example of a service.
.. code-block:: yaml
name: openstack/cinder/cinder-api
enabled: {{ enable_cinder | bool }}
container:
# place any marathon/container attribute here
# note the container/docker attributes do not need extra nesting
# they will be placed correctly in container/docker/
privileged: false
image: "{{ cinder_api_image }}:{{ cinder_api_tag }}"
service:
# place any toplevel marathon attribute here
# see: https://mesosphere.github.io/marathon/docs/rest-api.html
constraints: [["attribute", "OPERATOR", "value"]]
cpus: 1.5
mem: 256.0
instances: 3
daemon:
dependencies: [rabbitmq/daemon, cinder-api/db_sync]
command: /usr/bin/cinder-api
commands:
db_sync:
env:
KOLLA_BOOTSTRAP:
command: kolla_extend_start
run_once: True
dependencies: [cinder_ansible_tasks/create_database,
cinder_ansible_tasks/database_user_create]
files:
cinder.conf.j2:
source: /etc/kolla-mesos/config/cinder/cinder-api.conf
dest: /etc/cinder/cinder.conf
owner: cinder
perm: "0600"
The following is an example of a task.
.. code-block:: yaml
name: openstack/cinder/task
enabled: {{ enable_cinder | bool }}
container:
# place any chronos/container attribute here
volumes:
-
containerPath: "/var/log/"
hostPath: "/logs/"
mode: "RW"
image: "{{ kolla_toolbox_image }}:{{ kolla_toolbox_tag }}"
task:
# place any toplevel chronos attribute here
# see: https://mesos.github.io/chronos/docs/api.html
cpus: 1.5
mem: 256.0
retries: 2
commands:
db_sync:
env:
KOLLA_BOOTSTRAP:
command: kolla_extend_start
run_once: True
dependencies: [cinder_ansible_tasks/create_database,
cinder_ansible_tasks/database_user_create]
files:
cinder.conf.j2:
source: /etc/kolla-mesos/config/cinder/cinder-api.conf
dest: /etc/cinder/cinder.conf
owner: cinder
perm: "0600"
Notes on the above config:
1. In the files section, "source" is the source in the kolla-mesos
source tree and "dest" is the destination in the container. The
contents of the file will be placed in zookeeper in the node named:
"/kolla/config/project_a/service_x/a.cnf.j2".
2. kolla_mesos_start.py will render the file before placing it in the
container.
3. In the commands section, commands will be run as soon as their
"dependencies" are fulfilled (exist in zookeeper), except that the
daemon command will be kept until last. Once a command
has completed, kolla_mesos_start.py will create the node in zookeeper.
Commands marked with "run_once" will not run
on more than one node.
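The "run_once" behaviour described in note 3 amounts to an atomic claim: whichever node first creates the command's zookeeper node runs the command, and everyone else skips it. The following is an illustrative sketch only (not the actual kolla_mesos_start implementation), with an in-memory set standing in for zookeeper:

```python
# In-memory stand-in for zookeeper nodes; in reality this would be a
# zookeeper ensemble where node creation is atomic across hosts.
zk_nodes = set()

def try_create(path):
    """Claim a node; return True only for the first creator."""
    if path in zk_nodes:
        return False
    zk_nodes.add(path)
    return True

def run_once(command_path, action):
    # Only the node that wins the claim executes the command.
    if try_create(command_path):
        return action()
    return None

# Three nodes racing to run the same bootstrap command: only one runs it.
results = [run_once("/kolla/status/cinder/db_sync", lambda: "ran")
           for _ in range(3)]
```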
Porting a service from kolla-ansible
------------------------------------
Let's assume that kolla-ansible has the service that you want
supported in kolla-mesos.
Initial copying::
cp ansible/roles/<project>/templates/* ../kolla-mesos/config/<project>/templates/
cp ansible/roles/<project>/tasks/config.yml ../kolla-mesos/config/<project>/<service>_config.yml
# then edit the above to the new format.
cp ansible/roles/<projects>/defaults/main.yml ../kolla-mesos/config/<project>/defaults/main.yml


@ -1,56 +0,0 @@
..
Copyright 2014-2015 OpenStack Foundation
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Kolla-Mesos's Mission
=====================
Kolla-Mesos provides Mesos deployment of Kolla Docker containers to meet Kolla's mission.
Kolla-Mesos is highly opinionated out of the box, but allows for complete
customization. This permits operators with minimal experience to deploy
OpenStack quickly and as experience grows modify the OpenStack configuration to
suit the operator's exact requirements.
Kolla-Mesos Overview
====================
Note: Kolla-Mesos is at a very early stage of development and the
documentation is currently focused on helping developers understand
the project and get started.
Contents:
.. toctree::
:maxdepth: 1
quickstart
howto_add_a_new_service
man/index
Code documentation
==================
.. toctree::
:maxdepth: 1
api/autoindex
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@ -1,8 +0,0 @@
-----------------------
Man pages for utilities
-----------------------
.. toctree::
:maxdepth: 2
kolla-mesos


@ -1,25 +0,0 @@
===========
kolla-mesos
===========
.. program:: kolla-mesos
SYNOPSIS
========
``kolla-mesos [options]``
DESCRIPTION
===========
kolla-mesos runs kolla containers on Mesos (Marathon+Chronos) cluster.
INVENTORY
=========
OPTIONS
=======
FILES
=====
* /etc/kolla-mesos/kolla-mesos.conf


@ -1,167 +0,0 @@
Development Environment with Vagrant
====================================
This guide describes how to use `Vagrant <http://vagrantup.com>`__ to
assist in developing for Kolla-Mesos.
Vagrant is a tool to assist in scripted creation of virtual machines. Vagrant
takes care of setting up CentOS-based VMs for Kolla-Mesos development, each
with appropriate resources such as memory and network interfaces.
Getting Started
---------------
The Vagrant script implements All-in-One (AIO).
Start by downloading and installing the Vagrant package for the distro of
choice. Various downloads can be found at the `Vagrant downloads
<https://www.vagrantup.com/downloads.html>`__.
On Fedora it is as easy as::
sudo dnf install vagrant ruby-devel
**Note:** Many distros ship outdated versions of Vagrant by default. When in
doubt, always install the latest from the downloads page above.
Next install the hostmanager plugin so all hosts are recorded in /etc/hosts
(inside each vm)::
vagrant plugin install vagrant-hostmanager
Vagrant supports a wide range of virtualization technologies. This
documentation describes libvirt.
First, install libvirt (including headers and the Python library) and
NFS.
On Fedora::
sudo dnf install libvirt-devel libvirt-python nfs-utils
On CentOS/RHEL::
sudo yum install libvirt-devel libvirt-python nfs-utils
On Ubuntu::
sudo apt-get install libvirt-dev nfs-common nfs-kernel-server python-libvirt qemu ruby-libvirt
To install vagrant-libvirt plugin::
vagrant plugin install --plugin-version ">= 0.0.31" vagrant-libvirt
Some Linux distributions offer vagrant-libvirt packages, but the version they
provide tends to be too old to run Kolla-Mesos. A version of >= 0.0.31 is required.
Set up NFS to permit file sharing between the host and VMs. Unlike the rsync
method, NFS allows two-way synchronization and offers much better performance
than VirtualBox shared folders. On Fedora 22::
sudo systemctl start nfs-server
firewall-cmd --permanent --add-port=2049/udp
firewall-cmd --permanent --add-port=2049/tcp
firewall-cmd --permanent --add-port=111/udp
firewall-cmd --permanent --add-port=111/tcp
Find a location in the system's home directory and check out the Kolla-Mesos repo::
git clone https://github.com/openstack/kolla-mesos.git
Developers can now tweak the Vagrantfile or bring up the default AIO
Centos7-based environment::
cd kolla-mesos/vagrant
vagrant up
To tweak the Vagrantfile, create a *Vagrantfile.custom* file which overrides
some values of *Vagrantfile*. It's recommended to use *Vagrantfile.custom.example*
as a starting point::
cp Vagrantfile.custom.example Vagrantfile.custom
Setting variables in *Vagrantfile.custom* is mandatory if you want to set up
the multinode environment. In that case, the file should contain this line::
MULTINODE = true
The command ``vagrant status`` provides a quick overview of the VMs composing
the environment.
Vagrant Up
----------
Once Vagrant has completed deploying all nodes, the next step is to
build images using Kolla. First, connect to the *operator* node::
vagrant ssh operator
To speed things up, there is a local registry running on the operator. All
nodes are configured to pull from this insecure registry and to use it as a
mirror. Ansible may use this registry to pull images from.
All nodes have a local folder shared between the group and the hypervisor, and
a folder shared between *all* nodes and the hypervisor. This mapping is lost
after reboots, so make sure to use the command ``vagrant reload <node>`` when
reboots are required. Having this shared folder provides a method to supply
a different docker binary to the cluster. The shared folder is also used to
store the docker-registry files, so they are safe from destructive operations
like ``vagrant destroy``.
Building images
^^^^^^^^^^^^^^^
Log onto the *operator* VM and call the ``kolla-build`` utility. If you're
doing the multinode installation, pushing the built images to the Docker
registry is mandatory, and you can do this by::
sudo kolla-build --push --profile mesos
Otherwise, if you're doing the all-in-one installation and don't want to use
the registry::
sudo kolla-build --profile mesos
``kolla-build`` builds Docker images and pushes them to the local registry if
the *push* option is enabled (in Vagrant this is the default behaviour).
Setting up Mesos cluster
^^^^^^^^^^^^^^^^^^^^^^^^
To set up a Mesos cluster, the ``kolla-mesos-ansible`` utility should be used.
In case of an all-in-one installation, you can call it without any additional
arguments::
sudo kolla-mesos-ansible deploy
When you want to provide a custom inventory, you can use the ``--inventory``
option. For example, to use the default multinode inventory (made for
Vagrant)::
sudo kolla-mesos-ansible -i /usr/share/kolla-mesos/ansible/inventory/multinode deploy
Of course, you can use your custom inventory file for bare metal deployments.
Deploying OpenStack with Kolla-Mesos
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Deploy AIO with::
kolla-mesos-deploy
Validate OpenStack is operational::
source ~/openrc
openstack user list
Or navigate to http://10.10.10.254/ with a web browser.
Further Reading
---------------
All Vagrant documentation can be found at
`docs.vagrantup.com <http://docs.vagrantup.com>`__.


@ -1,68 +0,0 @@
---
# You can use this file to override _any_ variable throughout Kolla-Mesos.
# Additional options can be found in the 'kolla-mesos/config/all.yml' file.
####################
# Kolla options
####################
config_strategy: "COPY_ONCE"
kolla_base_distro: "centos"
kolla_install_type: "binary"
kolla_internal_address: "10.10.10.254"
ansible_task_cmd: "/usr/bin/ansible localhost -vvv"
####################
# Docker options
####################
docker_registry: "operator.local:5000"
docker_namespace: "kollaglue"
####################
# Networking options
####################
network_interface: "eth2"
neutron_external_interface: "eth2"
####################
# Resources options
####################
# If "no", there will be no constraints regarding the OpenStack services,
# and all other options in this section will be ignored.
multinode: "no"
# If defined, then this single slave will be used for "all-in-one"
# deployment and all other options in this section will be ignored.
#mesos_aio_hostname: "slave01.local"
# If "yes", kolla-mesos will auto-detect the Mesos slave nodes which have
# the "openstack_role" attribute and count them to calculate the number
# of OpenStack services to run. If "no", you have to set the options below.
autodetect_resources: "yes"
# Please set the number of controller and compute nodes if autodetection
# is disabled.
controller_nodes: "1"
compute_nodes: "1"
storage_nodes: "1"
####################
# OpenStack options
####################
openstack_release: "2.0.0"
init_log_level: "debug"
database_max_timeout: "60"
# Additional optional OpenStack services
enable_cinder: "no"
enable_horizon: "no"
enable_memcached: "no"
enable_haproxy: "no"
enable_ceph: "no"
enable_heat: "no"
enable_swift: "no"
enable_murano: "no"
enable_ironic: "no"
# If marathon_framework is not set, the script
# will try to autodetect it from Mesos.
# marathon_framework: "marathon"
# Domain configured in mesos-dns
mesos_dns_domain: "mesos"


@ -1,5 +0,0 @@
[DEFAULT]
output_file = etc/kolla-mesos.conf.sample
wrap_width = 79
namespace = kolla_mesos


@ -1,77 +0,0 @@
---
# TODO(SamYaple): This file should have generated values by default. Propose
# Ansible vault for locking down the secrets properly.
####################
# Ceph options
####################
ceph_cluster_fsid: "5fba2fbc-551d-11e5-a8ce-01ef4c5cf93c"
rbd_secret_uuid: "bbc5b4d5-6fca-407d-807d-06a4f4a7bccb"
####################
# Database options
####################
database_password: "password"
####################
# Docker options
####################
docker_registry_password:
####################
# OpenStack options
####################
keystone_admin_password: "password"
keystone_database_password: "password"
glance_database_password: "password"
glance_keystone_password: "password"
nova_database_password: "password"
nova_api_database_password: "password"
nova_keystone_password: "password"
neutron_database_password: "password"
neutron_keystone_password: "password"
metadata_secret: "password"
cinder_database_password: "password"
cinder_keystone_password: "password"
swift_keystone_password: "password"
swift_hash_path_suffix: "kolla"
swift_hash_path_prefix: "kolla"
heat_database_password: "password"
heat_keystone_password: "password"
heat_domain_admin_password: "password"
murano_database_password: "password"
murano_keystone_password: "password"
ironic_database_password: "password"
ironic_keystone_password: "password"
magnum_database_password: "password"
magnum_keystone_password: "password"
mistral_database_password: "password"
mistral_keystone_password: "password"
horizon_secret_key: "password"
####################
# RabbitMQ options
####################
rabbitmq_password: "password"
rabbitmq_cluster_cookie: "password"
####################
# HAProxy options
####################
haproxy_password: "password"


@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo(
'kolla-mesos').version_string()


@ -1,132 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO(nihilifer): Contribute to https://github.com/mesosphere/dcos-cli and
# remove this module when possible.
import json
import operator
from oslo_config import cfg
from oslo_log import log as logging
import requests
from six.moves.urllib import parse
from kolla_mesos.common import retry_utils
from kolla_mesos import exception
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_group('chronos', 'kolla_mesos.config.chronos')
class Client(object):
"""Class for talking to the Chronos server.
:param chronos_url: the base URL for the Chronos server
:type chronos_url: str
:param timeout: timeout for request to the Chronos server
:type timeout: int
"""
def _create_url(self, path):
"""Create URL for the specific Chronos API resource.
:param path: the path to the Chronos API resource
:type path: str
"""
return parse.urljoin(CONF.chronos.host, path)
@retry_utils.retry_if_not_rollback(stop_max_attempt_number=5,
wait_fixed=1000)
def add_job(self, job_resource):
"""Add job to Chronos.
:param job_resource: data about job to run on Chronos
:type job_resource: dict
"""
job_name = job_resource['name']
old_job = self.get_job(job_name)
if old_job is None:
url = self._create_url('scheduler/iso8601')
response = requests.post(url, data=json.dumps(job_resource),
timeout=CONF.chronos.timeout,
headers={'Content-Type':
'application/json'})
if response.status_code not in [200, 204]:
raise exception.ChronosException('Failed to add job')
else:
if CONF.force:
LOG.info('Deployment found and --force flag is used. '
'Destroying previous deployment and re-creating it.')
raise exception.ChronosRollback()
else:
LOG.info('Job %s is already added. If you want to replace it, '
'please use --force flag', job_name)
return old_job
def get_job(self, job_name):
"""Get job from Chronos by name.
:param job_name: id of job to get
:type job_name: str
"""
jobs = self.get_jobs()
return next((job for job in jobs if job['name'] == job_name), None)
def get_jobs(self):
"""Get list of running jobs in Chronos"""
LOG.debug('Requesting list of all Chronos jobs')
url = self._create_url('scheduler/jobs')
response = requests.get(url, timeout=CONF.chronos.timeout)
return response.json()
def remove_job(self, job_name):
"""Remove job from Chronos.
:param job_name: name of job to delete
:type job_name: str
"""
url = self._create_url('scheduler/job/{}'.format(job_name))
response = requests.delete(url, timeout=CONF.chronos.timeout)
if response.status_code not in [200, 204]:
raise exception.ChronosException('Failed to remove job')
def remove_job_tasks(self, job_name):
"""Remove all tasks for a job.
:param job_name: name of job to delete tasks from
:type job_name: str
"""
url = self._create_url('scheduler/task/kill/{}'.format(job_name))
response = requests.delete(url, timeout=CONF.chronos.timeout)
if response.status_code not in [200, 204]:
raise exception.ChronosException('Failed to remove tasks from job')
def remove_all_jobs(self, with_tasks=True):
job_names = list(map(operator.itemgetter('name'), self.get_jobs()))
LOG.debug('Found chronos jobs: %s', job_names)
for job_name in job_names:
if with_tasks:
LOG.info('Removing tasks of chronos job: %s', job_name)
self.remove_job_tasks(job_name)
LOG.info('Removing chronos job: %s', job_name)
self.remove_job(job_name)
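The add-or-keep logic in `add_job` above can be sketched standalone against an in-memory stand-in for the Chronos API (the `FakeChronos` class and the job fields below are illustrative assumptions, not part of the real client):

```python
# Hypothetical in-memory stand-in for Chronos, mirroring the add_job flow:
# create the job when absent, keep the old one unless force is set.
class FakeChronos(object):
    def __init__(self):
        self._jobs = {}

    def get_job(self, name):
        return self._jobs.get(name)

    def add_job(self, job, force=False):
        old = self.get_job(job['name'])
        if old is None:
            self._jobs[job['name']] = job
            return job
        if force:
            # Mirrors the ChronosRollback path: destroy and re-create.
            del self._jobs[job['name']]
            return self.add_job(job)
        return old  # already added; use force to replace


fc = FakeChronos()
first = fc.add_job({'name': 'db-sync', 'command': 'true'})
second = fc.add_job({'name': 'db-sync', 'command': 'false'})
replaced = fc.add_job({'name': 'db-sync', 'command': 'false'}, force=True)
```

Without `--force` the original job survives; with it, the job is dropped and re-added, matching the rollback-and-retry behavior driven by `retry_if_not_rollback`.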


@ -1,144 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
import multiprocessing
import operator
import re
from oslo_config import cfg
from oslo_log import log as logging
import retrying
import six
from kolla_mesos import chronos
from kolla_mesos.common import docker_utils
from kolla_mesos.common import zk_utils
from kolla_mesos import exception
from kolla_mesos import marathon
from kolla_mesos import mesos
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
@retrying.retry(wait_fixed=5000)
def wait_for_mesos_cleanup():
"""Wait until all tasks in Mesos have exited."""
mesos_client = mesos.Client()
tasks = mesos_client.get_tasks()
if len(tasks) > 0:
LOG.info("Mesos is still running some tasks. Waiting for them "
"to exit.")
raise exception.MesosTasksNotCompleted()
@docker_utils.DockerClient()
def remove_container(dc, container_name):
LOG.info("Removing container %s", container_name)
dc.remove_container(container_name)
# NOTE(nihilifer): Although the OpenStack community decided to use
# builtins like "map" and "filter" directly, without aiming for lazy
# generators in Python 2.x, here we always use generators in every version
# of Python, mainly because a Mesos cluster may run a lot of containers
# and we would otherwise do multiple O(n) passes. Doing all of this lazily
# means the lists of containers and volumes are iterated only once.
def get_container_names():
with docker_utils.DockerClient() as dc:
exited_containers = dc.containers(all=True,
filters={'status': 'exited'})
created_containers = dc.containers(all=True,
filters={'status': 'created'})
dead_containers = dc.containers(all=True,
filters={'status': 'dead'})
containers = itertools.chain(exited_containers, created_containers,
dead_containers)
container_name_lists = six.moves.map(operator.itemgetter('Names'),
containers)
container_name_lists = six.moves.filter(lambda name_list:
len(name_list) > 0,
container_name_lists)
container_names = six.moves.map(operator.itemgetter(0),
container_name_lists)
container_names = six.moves.filter(lambda name: re.search(r'/mesos-',
name),
container_names)
return container_names
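The lazy pipeline in `get_container_names` can be exercised standalone with hand-made docker-py style container dicts (the sample data below is invented for illustration):

```python
import itertools

# Sample container listings, shaped like docker-py's containers() output.
exited = [{'Names': ['/mesos-abc']}, {'Names': []}]
created = [{'Names': ['/other']}]
dead = [{'Names': ['/mesos-xyz', '/alias']}]

# Chain the three listings, drop entries with no names, take the first
# name of each, and keep only Mesos-managed containers -- all lazily,
# so the combined listing is walked only once.
containers = itertools.chain(exited, created, dead)
name_lists = filter(lambda names: len(names) > 0,
                    map(lambda c: c['Names'], containers))
mesos_names = [names[0] for names in name_lists if '/mesos-' in names[0]]
```

In Python 3 `map` and `filter` are already lazy; `six.moves` in the original just gives the same semantics on Python 2.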
# NOTE(nihilifer): Mesos doesn't fully support the named volumes which
# we're using. Mesos can run containers with a named volume by passing the
# Docker parameters directly, but it doesn't handle any other actions on
# them. That's why we currently clean up the containers and volumes by
# calling the Docker API directly.
# TODO(nihilifer): Request/develop the feature of cleaning volumes directly
# in Mesos and Marathon.
# TODO(nihilifer): Support multinode cleanup.
def remove_all_containers():
"""Remove all exited containers which were run by Mesos.
This is done in order to successfully remove named volumes.
"""
container_names = get_container_names()
# Remove containers in the pool of workers
pool = multiprocessing.Pool(processes=CONF.workers)
tasks = [pool.apply_async(remove_container, (container_name,))
for container_name in container_names]
# Wait for every task to execute
for task in tasks:
task.get()
@docker_utils.DockerClient()
def remove_all_volumes(dc):
"""Remove all volumes created for containers run by Mesos."""
volumes = dc.volumes()['Volumes']
if volumes is not None:
volume_names = six.moves.map(operator.itemgetter('Name'), volumes)
for volume_name in volume_names:
# TODO(nihilifer): Provide a more intelligent filtering for Mesos
# infra volumes.
if 'zookeeper' not in volume_name:
LOG.info("Removing volume %s", volume_name)
dc.remove_volume(volume_name)
else:
LOG.info("No docker volumes found")
def cleanup():
LOG.info("Starting cleanup...")
marathon_client = marathon.Client()
chronos_client = chronos.Client()
with zk_utils.connection() as zk:
zk_utils.clean(zk)
LOG.info("Starting cleanup of apps")
marathon_client.remove_all_apps()
LOG.info("Starting cleanup of groups")
marathon_client.remove_all_groups()
LOG.info("Starting cleanup of chronos jobs")
chronos_client.remove_all_jobs()
LOG.info("Checking whether all tasks in Mesos are exited")
wait_for_mesos_cleanup()
LOG.info("Starting cleanup of Docker containers")
remove_all_containers()
LOG.info("Starting cleanup of Docker volumes")
remove_all_volumes()


@ -1,66 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Command line interface for the Chronos API.
"""
from cliff import lister
from cliff import show
from oslo_config import cfg
from kolla_mesos import chronos
CONF = cfg.CONF
CONF.import_group('chronos', 'kolla_mesos.config.chronos')
class List(lister.Lister):
"""List Chronos jobs."""
def get_parser(self, prog_name):
parser = super(List, self).get_parser(prog_name)
parser.add_argument('--path',
default='/kolla/%s' % CONF.kolla.deployment_id)
return parser
def take_action(self, parsed_args):
client = chronos.Client()
jobs = client.get_jobs()
return (('Name', 'Mem', 'CPUs', 'Last success', 'Last error',
'Command', 'Schedule',),
((job['name'], job['mem'], job['cpus'],
job['lastSuccess'], job['lastError'], job['command'],
job['schedule'],)
for job in jobs))
class Show(show.ShowOne):
"""Show a Chronos job."""
def get_parser(self, prog_name):
parser = super(Show, self).get_parser(prog_name)
parser.add_argument('job_name')
return parser
def take_action(self, parsed_args):
client = chronos.Client()
job = client.get_job(CONF.action.job_name)
return (('Name', 'Mem', 'CPUs', 'Disk', 'Last success',
'Last error', 'Command', 'Schedule', 'Container',
'Environment',),
(job['name'], job['mem'], job['cpus'], job['disk'],
job['lastSuccess'], job['lastError'], job['command'],
job['schedule'], job['container'],
job['environmentVariables'],))


@ -1,70 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cliff import lister
from oslo_config import cfg
from oslo_log import log
from kolla_mesos import commands
CONF = cfg.CONF
LOG = log.getLogger(__name__)
def format_output(status):
cols = ('Command', 'Status', 'Requirements')
rows = []
for taskname, info in sorted(status.items()):
reg_status = info['register'][1] or 'unknown'
requirements = []
reqts = info['requirements']
for reqt_path, reqt_status in sorted(reqts.items()):
reqt_path = _clean_path(reqt_path)
if not reqt_status:
reqt_status = 'unknown'
requirements.append('%s:%s' % (reqt_path, reqt_status))
requirements = '\n'.join(requirements)
rows.append((taskname, reg_status, requirements))
return cols, rows
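A worked example of the `format_output` logic above, on a hand-made status dict (the task and requirement names are invented; only the dict shape is inferred from the code):

```python
# Invented status entry, shaped like the input format_output expects.
status = {
    'keystone/db_sync': {
        'register': ('/kolla/status/keystone/db_sync', 'done'),
        'requirements': {'/kolla/status/mariadb/running': None},
    },
}

cols = ('Command', 'Status', 'Requirements')
rows = []
for taskname, info in sorted(status.items()):
    reg_status = info['register'][1] or 'unknown'
    reqs = []
    for path, st in sorted(info['requirements'].items()):
        # Shorten '/kolla/status/...' paths, as _clean_path does.
        path = path.split('status/')[1] if 'status/' in path else path
        reqs.append('%s:%s' % (path, st or 'unknown'))
    rows.append((taskname, reg_status, '\n'.join(reqs)))
```

An unmet requirement shows up as `<path>:unknown`, which is what the `List` command renders in its "Requirements" column.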
def _clean_path(path):
if 'status/' in path:
path = path.split('status/')[1]
return path
class List(lister.Lister):
"""List all commands and their statuses for this service."""
def get_parser(self, prog_name):
parser = super(List, self).get_parser(prog_name)
parser.add_argument(
'service',
nargs='?',
help='Information for the deployment will be shown if the service '
'is not specified'
)
return parser
def take_action(self, parsed_args):
if parsed_args.service:
status = commands.get_service_status(
parsed_args.service, CONF.service_dir)
else:
status = commands.get_deployment_status(CONF.service_dir)
return format_output(status)


@ -1,64 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cliff import command
from cliff import lister
from cliff import show
from oslo_config import cfg
from oslo_log import log
from kolla_mesos.common import cli_utils
from kolla_mesos.common import zk_utils
CONF = cfg.CONF
CONF.import_group('kolla', 'kolla_mesos.config.kolla')
LOG = log.getLogger(__name__)
class ConfigList(lister.Lister):
"""List Zookeeper variables."""
def get_parser(self, prog_name):
parser = super(ConfigList, self).get_parser(prog_name)
parser.add_argument('--path',
default='/kolla/%s' % CONF.kolla.deployment_id)
return parser
def take_action(self, parsed_args):
dd = zk_utils.list_all(parsed_args.path)
return (('Path', 'Value'), dd.items())
class ConfigShow(show.ShowOne):
"""Show a Zookeeper variable value."""
def get_parser(self, prog_name):
parser = super(ConfigShow, self).get_parser(prog_name)
parser.add_argument('path')
return parser
def take_action(self, parsed_args):
data = zk_utils.get_one(parsed_args.path)
return cli_utils.dict2columns(data, id_col='Path')
class ConfigSet(command.Command):
"""Set a Zookeeper variable value."""
def get_parser(self, prog_name):
parser = super(ConfigSet, self).get_parser(prog_name)
parser.add_argument('path')
parser.add_argument('value')
return parser
def take_action(self, parsed_args):
zk_utils.set_one(parsed_args.path, parsed_args.value)


@ -1,67 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cliff import command
from cliff import lister
from cliff import show
from oslo_config import cfg
from oslo_log import log
from kolla_mesos import cleanup
from kolla_mesos.common import cli_utils
from kolla_mesos import deployment
from kolla_mesos import service
CONF = cfg.CONF
CONF.import_opt('workers', 'kolla_mesos.config.multiprocessing_cli')
LOG = log.getLogger(__name__)
class Run(command.Command):
"""Run the services in the configured profile."""
def take_action(self, parsed_args):
deployment.run_deployment()
deployment.write_openrc('%s-openrc' % CONF.kolla.deployment_id)
class Kill(command.Command):
"""Kill all the running services."""
def take_action(self, parsed_args):
for serv in service.list_services():
service.kill_service(serv['service'])
class Cleanup(command.Command):
"""Delete all created resources."""
def take_action(self, parsed_args):
cleanup.cleanup()
class Show(show.ShowOne):
"""Show the deployment configuration."""
def take_action(self, parsed_args):
conf_opts = deployment.get_deployment()
return cli_utils.dict2columns(conf_opts, id_col='deployment_id')
class List(lister.Lister):
"""List all existing deployments."""
def take_action(self, parsed_args):
cols = ['Deployment ID']
ids = deployment.list_deployments()
values = [[deployment_id] for deployment_id in ids]
return cols, values


@ -1,148 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cliff import command
from cliff import lister
from cliff import show
from oslo_config import cfg
from oslo_log import log
from kolla_mesos.common import cli_utils
from kolla_mesos import service
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class Run(command.Command):
"""Run a service."""
def get_parser(self, prog_name):
parser = super(Run, self).get_parser(prog_name)
parser.add_argument('service')
return parser
def take_action(self, parsed_args):
service.run_service(parsed_args.service,
CONF.service_dir)
class Kill(command.Command):
"""Kill a service."""
def get_parser(self, prog_name):
parser = super(Kill, self).get_parser(prog_name)
parser.add_argument('service')
return parser
def take_action(self, parsed_args):
service.kill_service(parsed_args.service)
class Show(show.ShowOne):
"""Show the live status of the task or service."""
def get_parser(self, prog_name):
parser = super(Show, self).get_parser(prog_name)
parser.add_argument('service')
return parser
def take_action(self, parsed_args):
data = service.get_service(parsed_args.service)
return cli_utils.dict2columns(data, id_col='service')
class List(lister.Lister):
"""List all deployed services for this deployment_id."""
def take_action(self, parsed_args):
apps = service.list_services()
values = []
cols = ('service', 'type', 'instances', 'tasksUnhealthy',
'tasksHealthy', 'tasksRunning', 'tasksStaged', 'version')
for app in apps:
values.append([app[field] for field in cols])
return (cols, values)
class Scale(command.Command):
"""Scale the service."""
def get_parser(self, prog_name):
parser = super(Scale, self).get_parser(prog_name)
parser.add_argument('service')
parser.add_argument('instances')
parser.add_argument('--force', action='store_true',
default=False)
return parser
def take_action(self, parsed_args):
service.scale_service(parsed_args.service,
parsed_args.instances,
parsed_args.force)
class Log(command.Command):
"""Dump the logs for this task or service."""
def get_parser(self, prog_name):
parser = super(Log, self).get_parser(prog_name)
parser.add_argument('service')
file_name = parser.add_mutually_exclusive_group()
file_name.add_argument(
'--stderr',
action='store_const',
const='stderr',
dest='filename'
)
file_name.add_argument(
'--stdout',
action='store_const',
const='stdout',
dest='filename'
)
return parser
def take_action(self, parsed_args):
self.app.stdout.write(service.get_service_logs(
parsed_args.service, parsed_args.filename))
class Snapshot(command.Command):
"""Snapshot the service configuration and deployment file.
This will produce a tarball that can later be used with
'kolla-mesos update <service> --snapshot <file>'.
"""
def get_parser(self, prog_name):
parser = super(Snapshot, self).get_parser(prog_name)
parser.add_argument('service')
parser.add_argument('output_dir')
return parser
def take_action(self, parsed_args):
service.snapshot_service(parsed_args.service,
parsed_args.output_dir)
class Update(command.Command):
"""Update the service configuration and deployment file."""
def get_parser(self, prog_name):
parser = super(Update, self).get_parser(prog_name)
parser.add_argument('service')
return parser
def take_action(self, parsed_args):
service.update_service(parsed_args.service, CONF.service_dir)


@ -1,52 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cliff import command
from cliff import show
from oslo_log import log
from kolla_mesos import service_definition
LOG = log.getLogger(__name__)
class Inspect(show.ShowOne):
"""Show available parameters and info about a service definition."""
def get_parser(self, prog_name):
parser = super(Inspect, self).get_parser(prog_name)
parser.add_argument('service', help='The service name')
return parser
def take_action(self, parsed_args):
info = service_definition.inspect(parsed_args.service,
self.app.options.service_dir)
columns = []
data = []
for col, val in info.items():
columns.append(col)
data.append(val)
return (columns, data)
class Validate(command.Command):
"""Validate the service definition."""
def get_parser(self, prog_name):
parser = super(Validate, self).get_parser(prog_name)
parser.add_argument('service', help='The service name')
return parser
def take_action(self, parsed_args):
service_definition.validate(parsed_args.service,
self.app.options.service_dir,
variables={})

Some files were not shown because too many files have changed in this diff.