Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that explains where to find
the ongoing work and how to recover the repo if it is needed
again at some future point (as described in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: Ib9526e42d9546db85d5612eda8131e1df0c698b8
Tony Breeds 2017-09-12 15:40:31 -06:00
parent d1a41f74f3
commit 6f4091344c
792 changed files with 14 additions and 83581 deletions


@ -1,9 +0,0 @@
[run]
branch = True
source = magnum
omit = magnum/tests/*
[report]
ignore_errors = True
exclude_lines =
pass

.gitignore

@ -1,66 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg*
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
cover
cover-master
.tox
nosetests.xml
.testrepository
.venv
# Functional test
functional-tests.log
functional_creds.conf
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
.idea
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?
*.DS_Store
# generated config file
etc/magnum/magnum.conf.sample
# Files created by releasenotes build
releasenotes/build


@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/magnum.git


@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>


@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-45} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list


@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/magnum


@ -1,24 +0,0 @@
Magnum Style Commandments
=========================
- Step 1: Read the OpenStack Style Commandments
http://docs.openstack.org/developer/hacking/
- Step 2: Read on
Magnum Specific Commandments
----------------------------
- [M302] Replace assertEqual(A is not None) with the more specific
assertIsNotNone(A).
- [M310] timeutils.utcnow() wrapper must be used instead of direct calls to
datetime.datetime.utcnow() to make it easy to override its return value.
- [M316] Replace assertTrue(isinstance(A, B)) with the more specific
assertIsInstance(A, B).
- [M322] Method's default argument shouldn't be mutable.
- [M336] Must use a dict comprehension instead of a dict constructor
with a sequence of key-value pairs.
- [M338] Use assertIn/NotIn(A, B) rather than assertEqual(A in B, True/False).
- [M339] Don't use xrange()
- [M340] Check for explicit import of the _ function.
- [M352] LOG.warn is deprecated. Enforce use of LOG.warning.
- [M353] String interpolation should be delayed until the logging call (see the sketch after this list).
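
As a rough, non-authoritative sketch of a few of these checks (the test class,
logger, and values below are illustrative only)::

    import logging
    import unittest

    LOG = logging.getLogger(__name__)


    class ExampleTest(unittest.TestCase):
        def test_assertions(self):
            result = {'name': 'demo'}
            # M302: use the dedicated assert instead of assertEqual(A is not None)
            self.assertIsNotNone(result)
            # M316: use assertIsInstance instead of assertTrue(isinstance(A, B))
            self.assertIsInstance(result, dict)
            # M338: use assertIn instead of assertEqual(A in B, True)
            self.assertIn('name', result)


    def scale_warning(node_count):
        # M352/M353: call LOG.warning and let the logging call perform the
        # interpolation instead of formatting the string up front.
        LOG.warning('Unexpected node_count: %s', node_count)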

LICENSE

@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README

@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.


@ -1,24 +0,0 @@
========================
Team and repository tags
========================
.. image:: https://governance.openstack.org/badges/magnum.svg
:target: https://governance.openstack.org/reference/tags/index.html
.. Change things from this point on
======
Magnum
======
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
For more information, please refer to the following resources:
* **Free software:** under the `Apache license <http://www.apache.org/licenses/LICENSE-2.0>`_
* **Documentation:** https://docs.openstack.org/magnum/latest/
* **Source:** http://git.openstack.org/cgit/openstack/magnum
* **Blueprints:** https://blueprints.launchpad.net/magnum
* **Bugs:** http://bugs.launchpad.net/magnum
* **REST Client:** http://git.openstack.org/cgit/openstack/python-magnumclient


@ -1,366 +0,0 @@
.. -*- rst -*-
===================
Manage Baymodels
===================
Lists, creates, shows details for, updates, and deletes baymodels.
Create new baymodel
====================
.. rest_method:: POST /v1/baymodels/
Create new baymodel.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 201
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- labels: labels
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- http_proxy: http_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- name: name
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Request Example
----------------
.. literalinclude:: samples/baymodel-create-req.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- insecure_registry: insecure_registry
- links: links
- http_proxy: http_proxy
- updated_at: updated_at
- floating_ip_enabled: floating_ip_enabled
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- uuid: baymodel_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- labels: labels
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- cluster_distro: cluster_distro
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- apiserver_port: apiserver_port
- name: name
- created_at: created_at
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Response Example
----------------
.. literalinclude:: samples/baymodel-create-resp.json
:language: javascript
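As a rough sketch only (not one of the sample files referenced above), the
create request could be issued from Python with the ``requests`` library; the
endpoint, token, and attribute values are placeholders::

    import requests

    MAGNUM = 'http://controller:9511'            # placeholder Magnum endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}          # placeholder Keystone token

    baymodel = {                                 # illustrative attribute values
        'name': 'k8s-baymodel',
        'coe': 'kubernetes',
        'image_id': 'fedora-atomic-latest',
        'keypair_id': 'testkey',
        'external_network_id': 'public',
        'flavor_id': 'm1.small',
        'network_driver': 'flannel',
        'docker_volume_size': 5,
    }

    resp = requests.post(MAGNUM + '/v1/baymodels/', json=baymodel,
                         headers=HEADERS)
    resp.raise_for_status()                      # a 201 response is expected
    print(resp.json()['uuid'])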
List all baymodels
==================
.. rest_method:: GET /v1/baymodels/
List all available baymodels in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- baymodels: baymodel_list
- insecure_registry: insecure_registry
- links: links
- http_proxy: http_proxy
- updated_at: updated_at
- floating_ip_enabled: floating_ip_enabled
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- uuid: baymodel_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- labels: labels
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- cluster_distro: cluster_distro
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- apiserver_port: apiserver_port
- name: name
- created_at: created_at
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Response Example
----------------
.. literalinclude:: samples/baymodel-get-all-resp.json
:language: javascript
Show details of a baymodel
==========================
.. rest_method:: GET /v1/baymodels/{baymodel_ident}
Get all information of a baymodel in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- baymodel_ident: baymodel_ident
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- baymodels: baymodel_list
- insecure_registry: insecure_registry
- links: links
- http_proxy: http_proxy
- updated_at: updated_at
- floating_ip_enabled: floating_ip_enabled
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- uuid: baymodel_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- labels: labels
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- cluster_distro: cluster_distro
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- apiserver_port: apiserver_port
- name: name
- created_at: created_at
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Response Example
----------------
.. literalinclude:: samples/baymodel-create-resp.json
:language: javascript
Delete a baymodel
==================
.. rest_method:: DELETE /v1/baymodels/{baymodel_ident}
Delete a baymodel.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 204
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
- 409
Request
-------
.. rest_parameters:: parameters.yaml
- baymodel_ident: baymodel_ident
Response
--------
This request does not return anything in the response body.
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
Update information of baymodel
===============================
.. rest_method:: PATCH /v1/baymodels/{baymodel_ident}
Update the attributes of a baymodel using the operations ``add``, ``replace``,
or ``remove``. The attributes to ``add`` and ``replace`` are given in
``key=value`` form, while ``remove`` only needs the keys.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- baymodel_ident: baymodel_ident
- path: path
- value: value
- op: op
Request Example
----------------
.. literalinclude:: samples/baymodel-update-req.json
:language: javascript
Response
--------
Returns the baymodel with its updated attributes.
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- baymodels: baymodel_list
- insecure_registry: insecure_registry
- links: links
- http_proxy: http_proxy
- updated_at: updated_at
- floating_ip_enabled: floating_ip_enabled
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- uuid: baymodel_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- labels: labels
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- cluster_distro: cluster_distro
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- apiserver_port: apiserver_port
- name: name
- created_at: created_at
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Response Example
----------------
.. literalinclude:: samples/baymodel-create-resp.json
:language: javascript


@ -1,259 +0,0 @@
.. -*- rst -*-
============
Manage Bay
============
Lists, creates, shows details for, updates, and deletes bays.
Create new bay
==============
.. rest_method:: POST /v1/bays
Create new bay based on bay model.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 202
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- name: name
- discovery_url: discovery_url
- master_count: master_count
- baymodel_id: baymodel_id
- node_count: node_count
- bay_create_timeout: bay_create_timeout
.. note::
The request for creating a bay is asynchronous as of Newton.
Request Example
----------------
.. literalinclude:: samples/bay-create-req.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- uuid: bay_id
Response Example
----------------
.. literalinclude:: samples/bay-create-resp.json
:language: javascript
List all bays
====================
.. rest_method:: GET /v1/bays/
List all bays in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- bays: bay_list
- status: status
- uuid: bay_id
- links: links
- stack_id: stack_id
- master_count: master_count
- baymodel_id: baymodel_id
- node_count: node_count
- bay_create_timeout: bay_create_timeout
- name: name
Response Example
----------------
.. literalinclude:: samples/bay-get-all-resp.json
:language: javascript
Show details of a bay
=============================
.. rest_method:: GET /v1/bays/{bay_ident}
Get all information of a bay in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- bay_ident: bay_ident
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- status: status
- uuid: bay_id
- links: links
- stack_id: stack_id
- created_at: created_at
- api_address: api_address
- discovery_url: discovery_url
- updated_at: updated_at
- master_count: master_count
- coe_version: coe_version
- baymodel_id: baymodel_id
- master_addresses: master_addresses
- node_count: node_count
- node_addresses: node_addresses
- status_reason: status_reason
- bay_create_timeout: bay_create_timeout
- name: name
Response Example
----------------
.. literalinclude:: samples/bay-get-one-resp.json
:language: javascript
Delete a bay
====================
.. rest_method:: DELETE /v1/bays/{bay_ident}
Delete a bay.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 204
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
- 409
Request
-------
.. rest_parameters:: parameters.yaml
- bay_ident: bay_ident
Response
--------
This request does not return anything in the response body.
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
Update information of bay
=================================
.. rest_method:: PATCH /v1/bays/{bay_ident}
Update the attributes of a bay using the operations ``add``, ``replace``, or
``remove``. The attributes to ``add`` and ``replace`` are given in
``key=value`` form, while ``remove`` only needs the keys.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 202
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- bay_ident: bay_ident
- path: path
- value: value
- op: op
.. note::
The request for updating a bay is asynchronous as of Newton.
Currently only the ``node_count`` attribute is supported for the ``replace``
and ``remove`` operations.
Request Example
----------------
.. literalinclude:: samples/bay-update-req.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- uuid: bay_id
Response Example
----------------
.. literalinclude:: samples/bay-create-resp.json
:language: javascript


@ -1,147 +0,0 @@
.. -*- rst -*-
=====================================
Manage certificates for bay/cluster
=====================================
Generates and shows CA certificates for a bay/cluster.
Show details about the CA certificate for a bay/cluster
=======================================================
.. rest_method:: GET /v1/certificates/{bay_uuid/cluster_uuid}
Show CA certificate details that are associated with the created bay/cluster.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
Request
-------
.. rest_parameters:: parameters.yaml
- bay_uuid: bay_id
.. note::
After Newton, all terms related to bay/baymodel will be renamed to cluster
and cluster template.
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- cluster_uuid: cluster_id
- pem: pem
- bay_uuid: bay_id
- links: links
.. note::
After Newton, all terms related to bay/baymodel will be renamed to cluster
and cluster template.
Response Example
----------------
.. literalinclude:: samples/certificates-ca-show-resp.json
:language: javascript
Generate the CA certificate for a bay/cluster
=============================================
.. rest_method:: POST /v1/certificates/
Sign the client key and generate the CA certificate for a bay/cluster.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 201
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
Request
-------
.. rest_parameters:: parameters.yaml
- bay_uuid: bay_id
- csr: csr
.. note::
After Newton, all terms related to bay/baymodel will be renamed to cluster
and cluster template.
Request Example
----------------
.. literalinclude:: samples/certificates-ca-sign-req.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- pem: pem
- bay_uuid: bay_id
- links: links
- csr: csr
.. note::
After Newton, all terms related to bay/baymodel will be renamed to cluster
and cluster template.
Response Example
----------------
.. literalinclude:: samples/certificates-ca-sign-resp.json
:language: javascript
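A hedged illustration of the signing request from Python; the endpoint, token,
bay UUID, and CSR file name are placeholders::

    import requests

    MAGNUM = 'http://controller:9511'            # placeholder Magnum endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}          # placeholder Keystone token

    with open('client.csr') as f:                # a previously generated PEM CSR
        csr = f.read()

    body = {'bay_uuid': 'REPLACE-WITH-BAY-UUID', 'csr': csr}
    resp = requests.post(MAGNUM + '/v1/certificates/', json=body,
                         headers=HEADERS)
    resp.raise_for_status()                      # a 201 response is expected
    print(resp.json()['pem'])                    # the signed client certificate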
Rotate the CA certificate for a bay/cluster
===========================================
.. rest_method:: PATCH /v1/certificates/{bay_uuid/cluster_uuid}
Rotate the CA certificate for a bay/cluster and invalidate all user
certificates.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 202
.. rest_status_code:: error status.yaml
- 400
Request
-------
.. rest_parameters:: parameters.yaml
- cluster: cluster_id


@ -1,262 +0,0 @@
.. -*- rst -*-
================
Manage Cluster
================
Lists, creates, shows details for, updates, and deletes clusters.
Create new cluster
==================
.. rest_method:: POST /v1/clusters
Create new cluster based on cluster template.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 202
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- name: name
- discovery_url: discovery_url
- master_count: master_count
- cluster_template_id: clustertemplate_id
- node_count: node_count
- create_timeout: create_timeout
- keypair: keypair_id
.. note::
The request for creating a cluster is asynchronous as of Newton.
Request Example
----------------
.. literalinclude:: samples/cluster-create-req.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- uuid: cluster_id
Response Example
----------------
.. literalinclude:: samples/cluster-create-resp.json
:language: javascript
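Because the create call is asynchronous and only returns a UUID, a client
typically polls the cluster afterwards. A minimal sketch, with placeholder
endpoint, token, and template UUID (the terminal status check follows the
``CREATE_FAILED`` naming used elsewhere in this reference)::

    import time

    import requests

    MAGNUM = 'http://controller:9511'            # placeholder Magnum endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}          # placeholder Keystone token

    cluster = {
        'name': 'k8s-cluster',
        'cluster_template_id': 'REPLACE-WITH-TEMPLATE-UUID',
        'node_count': 2,
        'master_count': 1,
        'keypair': 'testkey',
        'create_timeout': 60,
    }
    resp = requests.post(MAGNUM + '/v1/clusters', json=cluster, headers=HEADERS)
    resp.raise_for_status()                      # a 202 response is expected
    uuid = resp.json()['uuid']

    # Poll until the asynchronous creation settles one way or the other.
    while True:
        status = requests.get(MAGNUM + '/v1/clusters/' + uuid,
                              headers=HEADERS).json()['status']
        if status.endswith('_COMPLETE') or status.endswith('_FAILED'):
            print(uuid, status)
            break
        time.sleep(30)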
List all clusters
=================
.. rest_method:: GET /v1/clusters
List all clusters in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- clusters: cluster_list
- status: status
- uuid: cluster_id
- links: links
- stack_id: stack_id
- keypair: keypair_id
- master_count: master_count
- cluster_template_id: clustertemplate_id
- node_count: node_count
- create_timeout: create_timeout
- name: name
Response Example
----------------
.. literalinclude:: samples/cluster-get-all-resp.json
:language: javascript
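A short, non-authoritative sketch of consuming this listing from Python
(endpoint and token are placeholders; the response wraps the list in a
``clusters`` key as documented above)::

    import requests

    MAGNUM = 'http://controller:9511'            # placeholder Magnum endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}          # placeholder Keystone token

    resp = requests.get(MAGNUM + '/v1/clusters', headers=HEADERS)
    resp.raise_for_status()                      # a 200 response is expected
    for cluster in resp.json()['clusters']:
        print(cluster['uuid'], cluster['name'], cluster['status'])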
Show details of a cluster
=========================
.. rest_method:: GET /v1/clusters/{cluster_ident}
Get all information of a cluster in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- cluster_ident: cluster_ident
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- status: status
- uuid: cluster_id
- links: links
- stack_id: stack_id
- created_at: created_at
- api_address: api_address
- discovery_url: discovery_url
- updated_at: updated_at
- master_count: master_count
- coe_version: coe_version
- keypair: keypair_id
- cluster_template_id: clustertemplate_id
- master_addresses: master_addresses
- node_count: node_count
- node_addresses: node_addresses
- status_reason: status_reason
- create_timeout: create_timeout
- name: name
Response Example
----------------
.. literalinclude:: samples/cluster-get-one-resp.json
:language: javascript
Delete a cluster
====================
.. rest_method:: DELETE /v1/clusters/{cluster_ident}
Delete a cluster.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 204
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
- 409
Request
-------
.. rest_parameters:: parameters.yaml
- cluster_ident: cluster_ident
Response
--------
This request does not return anything in the response body.
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
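As a small sketch only (endpoint, token, and cluster name are placeholders),
deleting by name or UUID and checking for the empty 204 response::

    import requests

    MAGNUM = 'http://controller:9511'            # placeholder Magnum endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}          # placeholder Keystone token

    resp = requests.delete(MAGNUM + '/v1/clusters/k8s-cluster', headers=HEADERS)
    print(resp.status_code)                      # 204; no response body is returned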
Update information of cluster
=============================
.. rest_method:: PATCH /v1/clusters/{cluster_ident}
Update the attributes of a cluster using the operations ``add``, ``replace``,
or ``remove``. The attributes to ``add`` and ``replace`` are given in
``key=value`` form, while ``remove`` only needs the keys.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 202
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- cluster_ident: cluster_ident
- path: path
- value: value
- op: op
.. note::
The request for updating a cluster is asynchronous as of Newton.
Currently only the ``node_count`` attribute is supported for the ``replace``
and ``remove`` operations.
Request Example
----------------
.. literalinclude:: samples/cluster-update-req.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- uuid: cluster_id
Response Example
----------------
.. literalinclude:: samples/cluster-create-resp.json
:language: javascript
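A rough sketch of scaling ``node_count`` with such an update, assuming the
body is a list of ``op``/``path``/``value`` documents as the request
parameters above suggest (endpoint, token, and cluster name are placeholders)::

    import requests

    MAGNUM = 'http://controller:9511'            # placeholder Magnum endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}          # placeholder Keystone token

    patch = [{'op': 'replace', 'path': '/node_count', 'value': 3}]
    resp = requests.patch(MAGNUM + '/v1/clusters/k8s-cluster', json=patch,
                          headers=HEADERS)
    resp.raise_for_status()                      # a 202 response is expected
    print(resp.json()['uuid'])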


@ -1,366 +0,0 @@
.. -*- rst -*-
==========================
Manage Cluster Templates
==========================
Lists, creates, shows details for, updates, and deletes Cluster Templates.
Create new cluster template
=====================================
.. rest_method:: POST /v1/clustertemplates
Create new cluster template.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 201
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- labels: labels
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- http_proxy: http_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- name: name
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Request Example
----------------
.. literalinclude:: samples/clustertemplate-create-req.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- insecure_registry: insecure_registry
- links: links
- http_proxy: http_proxy
- updated_at: updated_at
- floating_ip_enabled: floating_ip_enabled
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- uuid: clustertemplate_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- labels: labels
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- cluster_distro: cluster_distro
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- apiserver_port: apiserver_port
- name: name
- created_at: created_at
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Response Example
----------------
.. literalinclude:: samples/clustertemplate-create-resp.json
:language: javascript
List all cluster templates
==========================
.. rest_method:: GET /v1/clustertemplates
List all available cluster templates in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- clustertemplates: clustertemplate_list
- insecure_registry: insecure_registry
- links: links
- http_proxy: http_proxy
- updated_at: updated_at
- floating_ip_enabled: floating_ip_enabled
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- uuid: clustertemplate_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- labels: labels
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- cluster_distro: cluster_distro
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- apiserver_port: apiserver_port
- name: name
- created_at: created_at
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Response Example
----------------
.. literalinclude:: samples/clustertemplate-get-all-resp.json
:language: javascript
Show details of a cluster template
==================================
.. rest_method:: GET /v1/clustertemplates/{clustertemplate_ident}
Get all information of a cluster template in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- clustertemplate_ident: clustertemplate_ident
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- clustertemplates: clustertemplate_list
- insecure_registry: insecure_registry
- links: links
- http_proxy: http_proxy
- updated_at: updated_at
- floating_ip_enabled: floating_ip_enabled
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- uuid: clustertemplate_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- labels: labels
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- cluster_distro: cluster_distro
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- apiserver_port: apiserver_port
- name: name
- created_at: created_at
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Response Example
----------------
.. literalinclude:: samples/clustertemplate-create-resp.json
:language: javascript
Delete a cluster template
=========================
.. rest_method:: DELETE /v1/clustertemplates/{clustertemplate_ident}
Delete a cluster template.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 204
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
- 409
Request
-------
.. rest_parameters:: parameters.yaml
- clustertemplate_ident: clustertemplate_ident
Response
--------
This request does not return anything in the response body.
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
Update information of cluster template
================================================
.. rest_method:: PATCH /v1/clustertemplates/{clustertemplate_ident}
Update the attributes of a cluster template using the operations ``add``,
``replace``, or ``remove``. The attributes to ``add`` and ``replace`` are
given in ``key=value`` form, while ``remove`` only needs the keys.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request
-------
.. rest_parameters:: parameters.yaml
- clustertemplate_ident: clustertemplate_ident
- path: path
- value: value
- op: op
Request Example
----------------
.. literalinclude:: samples/clustertemplate-update-req.json
:language: javascript
Response
--------
Returns the cluster template with its updated attributes.
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- clustertemplates: clustertemplate_list
- insecure_registry: insecure_registry
- links: links
- http_proxy: http_proxy
- updated_at: updated_at
- floating_ip_enabled: floating_ip_enabled
- fixed_subnet: fixed_subnet
- master_flavor_id: master_flavor_id
- uuid: clustertemplate_id
- no_proxy: no_proxy
- https_proxy: https_proxy
- tls_disabled: tls_disabled
- keypair_id: keypair_id
- public: public_type
- labels: labels
- docker_volume_size: docker_volume_size
- server_type: server_type
- external_network_id: external_network_id
- cluster_distro: cluster_distro
- image_id: image_id
- volume_driver: volume_driver
- registry_enabled: registry_enabled
- docker_storage_driver: docker_storage_driver
- apiserver_port: apiserver_port
- name: name
- created_at: created_at
- network_driver: network_driver
- fixed_network: fixed_network
- coe: coe
- flavor_id: flavor_id
- master_lb_enabled: master_lb_enabled
- dns_nameserver: dns_nameserver
Response Example
----------------
.. literalinclude:: samples/clustertemplate-create-resp.json
:language: javascript


@ -1,237 +0,0 @@
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Magnum documentation build configuration file
#
# This file is execfile()d with the current directory set to
# its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import subprocess
import sys
import warnings
extensions = [
'os_api_ref',
]
import openstackdocstheme # noqa
html_theme = 'openstackdocs'
html_theme_path = [openstackdocstheme.get_html_theme_path()]
html_theme_options = {
"sidebar_mode": "toc",
}
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#
# source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Container Infrastructure Management API Reference'
copyright = u'2010-present, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
from magnum.version import version_info
# The full version, including alpha/beta/rc tags.
release = version_info.release_string()
# The short X.Y version.
version = version_info.version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# The reST default role (used for this markup: `text`) to use
# for all documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# Config logABug feature
# source tree
giturl = (
u'https://git.openstack.org/cgit/openstack/magnum/tree/api-ref/source')
# html_context allows us to pass arbitrary values into the html template
html_context = {'bug_tag': 'api-ref',
'giturl': giturl,
'bug_project': 'magnum'}
# -- Options for man page output ----------------------------------------------
# Grouping the document tree for man pages.
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
"-n1"]
try:
html_last_updated_fmt = subprocess.check_output(git_cmd).decode('utf-8')
except Exception:
warnings.warn('Cannot get last updated time from git repository. '
'Not setting "html_last_updated_fmt".')
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_use_modindex = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'magnumdoc'
# -- Options for LaTeX output -------------------------------------------------
# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'Magnum.tex',
u'OpenStack Container Infrastructure Management API Documentation',
u'OpenStack Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
# latex_preamble = ''
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_use_modindex = True


@ -1,18 +0,0 @@
:tocdepth: 2
========================================
Container Infrastructure Management API
========================================
.. rest_expand_all::
.. include:: versions.inc
.. include:: urls.inc
.. include:: bays.inc
.. include:: baymodels.inc
.. include:: clusters.inc
.. include:: clustertemplates.inc
.. include:: certificates.inc
.. include:: mservices.inc
.. include:: stats.inc
.. include:: quotas.inc


@ -1,49 +0,0 @@
.. -*- rst -*-
=====================
Manage Magnum service
=====================
List container infrastructure management services
=======================================================
.. rest_method:: GET /v1/mservices
Enables administrative users to list all Magnum services.
Container infrastructure service information includes the service id, binary,
host, report count, creation time, last updated time, health status, and
the reason for disabling the service.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- mservices: mservices
- binary: binary
- created_at: created_at
- state: state
- report_count: report_count
- updated_at: updated_at
- host: host
- disabled_reason: disabled_reason
- id: id_s
Response Example
----------------
.. literalinclude:: samples/mservice-get-resp.json
:language: javascript


@ -1,597 +0,0 @@
# Header params
request_id:
type: UUID
in: header
required: true
description: |
A unique ID for tracking the service request. The request ID associated
with the request appears in the service logs by default.
# Path params
bay_ident:
type: string
in: path
required: true
description: |
The UUID or name of bays in Magnum.
baymodel_ident:
description: |
The UUID or name of baymodels in Magnum.
in: path
required: true
type: string
cluster_ident:
type: string
in: path
required: true
description: |
The UUID or name of clusters in Magnum.
clustertemplate_ident:
type: string
in: path
required: true
description: |
The UUID or name of cluster templates in Magnum.
project_id:
type: string
in: path
required: true
description: |
Project ID.
# Body params
api_address:
description: |
The endpoint URL of COE API exposed to end-users.
in: body
format: uri
required: true
type: string
apiserver_port:
type: integer
in: body
required: true
description: |
The exposed port of COE API server.
bay_create_timeout:
type: integer
in: body
required: true
description: |
The timeout for bay creation in minutes. The value expected is a
positive integer and the default is 60 minutes. If the timeout is reached
during bay creation process, the operation will be aborted and the
bay status will be set to ``CREATE_FAILED``.
bay_id:
type: UUID
in: body
required: true
description: |
The UUID of the bay.
bay_list:
type: array
in: body
required: true
description: |
The list of all bays in Magnum.
baymodel_id:
type: UUID
in: body
required: true
description: |
The UUID of the baymodel.
baymodel_list:
type: array
in: body
required: true
description: |
The list of all baymodels in Magnum.
binary:
type: string
in: body
required: true
description: |
The name of the binary form of the Magnum service.
cluster_distro:
type: string
in: body
required: true
description: |
Displays the ``os-distro`` attribute, defined as appropriate metadata in the
image for the bay/cluster driver.
cluster_id:
type: UUID
in: body
required: true
description: |
The UUID of the cluster.
cluster_list:
type: array
in: body
required: true
description: |
The list of all clusters in Magnum.
clusters:
type: integer
in: body
required: true
description: |
The number of clusters.
clustertemplate_id:
type: UUID
in: body
required: true
description: |
The UUID of the cluster template.
clustertemplate_list:
type: array
in: body
required: true
description: |
The list of all cluster templates in Magnum.
coe:
type: string
in: body
required: true
description: |
Specify the Container Orchestration Engine to use. Supported COEs
include ``kubernetes``, ``swarm``, ``mesos``. If your environment has
additional bay/cluster drivers installed, refer to the bay/cluster driver
documentation for the new COE names.
coe_version:
type: string
in: body
required: true
description: |
Version info of the chosen COE in the bay/cluster, to help the client pick
the right client version.
create_timeout:
type: integer
in: body
required: true
description: |
The timeout for cluster creation in minutes. The value expected is a
positive integer and the default is 60 minutes. If the timeout is reached
during cluster creation process, the operation will be aborted and the
cluster status will be set to ``CREATE_FAILED``.
created_at:
description: |
The date and time when the resource was created.
The date and time stamp format is `ISO 8601
<https://en.wikipedia.org/wiki/ISO_8601>`_:
::
CCYY-MM-DDThh:mm:ss±hh:mm
For example, ``2015-08-27T09:49:58-05:00``.
The ``±hh:mm`` value, if included, is the time zone as an offset
from UTC.
in: body
required: true
type: string
csr:
description: |
Certificate Signing Request (CSR) for authenticating the client key.
The CSR will be used by Magnum to generate a signed certificate
that the client will use to communicate with the bay/cluster.
in: body
required: true
type: string
description:
description: |
Descriptive text about the Magnum service.
in: body
required: true
type: string
disabled_reason:
description: |
The reason the service is disabled, or ``null`` if the service is enabled or
was disabled without a reason provided.
in: body
required: true
type: string
discovery_url:
description: |
The custom discovery url for node discovery. This is used by the COE to
discover the servers that have been created to host the containers. The
actual discovery mechanism varies with the COE. In some cases, Magnum fills
in the server info in the discovery service. In other cases, if the
``discovery_url`` is not specified, Magnum will use the public discovery
service at:
::
https://discovery.etcd.io
In this case, Magnum will generate a unique url here for each bay and
store the info for the servers.
in: body
format: uri
required: true
type: string
dns_nameserver:
description: |
The DNS nameserver for the servers and containers in the bay/cluster to
use. This is configured in the private Neutron network for the bay/cluster.
The default is ``8.8.8.8``.
in: body
required: true
type: string
docker_storage_driver:
description: |
The name of a driver to manage the storage for the images and the
container's writable layer. The supported drivers are ``devicemapper`` and
``overlay``. The default is ``devicemapper``.
in: body
required: true
type: string
docker_volume_size:
description: |
The size in GB for the local storage on each server for the Docker daemon
to cache the images and host the containers. Cinder volumes provide the
storage. The default is 25 GB. For the ``devicemapper`` storage driver,
the minimum value is 3GB. For the ``overlay`` storage driver, the minimum
value is 1GB.
in: body
required: true
type: integer
external_network_id:
description: |
The name or network ID of a Neutron network to provide connectivity to the
external internet for the bay/cluster. This network must be an external
network, i.e. its attribute ``router:external`` must be ``True``. The
servers in the bay/cluster will be connected to a private network and
Magnum will create a router between this private network and the external
network. This will allow the servers to download images, access discovery
service, etc, and the containers to install packages, etc. In the opposite
direction, floating IPs will be allocated from the external network to
provide access from the external internet to servers and the container
services hosted in the bay/cluster.
in: body
required: true
type: string
fixed_network:
description: |
The name or network ID of a Neutron network to provide connectivity to
the internal network for the bay/cluster.
in: body
required: false
type: string
fixed_subnet:
description: |
The fixed subnet used to allocate network addresses for nodes in the
bay/cluster.
in: body
required: false
type: string
flavor_id:
description: |
The nova flavor ID or name for booting the node servers. The default is
``m1.small``.
in: body
required: true
type: string
floating_ip_enabled:
description: |
Whether or not to use a floating IP from the cloud provider. Some
cloud providers use floating IPs and others use public IPs, so Magnum
provides this option to specify whether a floating IP should be used.
in: body
required: true
type: boolean
host:
description: |
The host for the service.
in: body
required: true
type: string
http_proxy:
description: |
The IP address for a proxy to use when direct http access from the servers
to sites on the external internet is blocked. This may happen in certain
countries or enterprises, and the proxy allows the servers and
containers to access these sites. The format is a URL including a port
number. The default is ``None``.
in: body
required: false
type: string
https_proxy:
description: |
The IP address for a proxy to use when direct https access from the
servers to sites on the external internet is blocked. This may happen in
certain countries or enterprises, and the proxy allows the servers and
containers to access these sites. The format is a URL including a port
number. The default is ``None``.
in: body
required: false
type: string
id_s:
description: |
The ID of the Magnum service.
in: body
required: true
type: string
image_id:
description: |
The name or UUID of the base image in Glance to boot the servers for the
bay/cluster. The image must have the attribute ``os-distro`` defined as
appropriate for the bay/cluster driver.
in: body
required: true
type: string
insecure_registry:
description: |
The URL pointing to the user's own private insecure Docker registry used
to deploy and run Docker containers.
in: body
required: true
type: string
keypair_id:
description: |
The name of the SSH keypair to configure in the bay/cluster servers
for ssh access. Users will need the key to be able to ssh to the servers in
the bay/cluster. The login name is specific to the bay/cluster driver; for
example, with the fedora-atomic image the default login name is ``fedora``.
in: body
required: true
type: string
labels:
description: |
Arbitrary labels in the form of ``key=value`` pairs. The accepted keys and
valid values are defined in the bay/cluster drivers. They are used as a way
to pass additional parameters that are specific to a bay/cluster driver.
in: body
required: false
type: array
links:
description: |
Links to the resources in question.
in: body
required: true
type: array
master_addresses:
description: |
List of the floating IPs of all master nodes.
in: body
required: true
type: array
master_count:
description: |
The number of servers that will serve as master for the bay/cluster. The
default is 1. Set to more than 1 master to enable High Availability. If
the option ``master-lb-enabled`` is specified in the baymodel/cluster
template, the master servers will be placed in a load balancer pool.
in: body
required: true
type: integer
master_flavor_id:
description: |
The flavor of the master node for this baymodel/cluster template.
in: body
required: false
type: string
master_lb_enabled:
description: |
Since multiple masters may exist in a bay/cluster, a Neutron load balancer
is created to provide the API endpoint for the bay/cluster and to direct
requests to the masters. In some cases, such as when the LBaaS service is
not available, this option can be set to ``false`` to create a bay/cluster
without the load balancer. In this case, one of the masters will serve as
the API endpoint. The default is ``true``, i.e. to create the load
balancer for the bay.
in: body
required: true
type: boolean
mservices:
description: |
A list of Magnum services.
in: body
required: true
type: array
name:
description: |
Name of the resource.
in: body
required: true
type: string
network_driver:
description: |
The name of a network driver for providing the networks for the containers.
Note that this is different and separate from the Neutron network for the
bay/cluster. The operation and networking model are specific to the
particular driver.
in: body
required: true
type: string
no_proxy:
description: |
When a proxy server is used, some sites should not go through the proxy
and should be accessed normally. In this case, users can specify these
sites as a comma separated list of IPs. The default is ``None``.
in: body
required: false
type: string
node_addresses:
description: |
List of the floating IPs of all servers that serve as nodes.
in: body
required: true
type: array
node_count:
description: |
The number of servers that will serve as nodes in the bay/cluster. The
default is 1.
in: body
required: true
type: integer
nodes:
description: |
The total number of nodes including master nodes.
in: body
required: true
type: integer
op:
description: |
The operation used to modify a resource's attributes. The supported
operations are ``add``, ``replace`` and ``remove``. For ``remove``, users
only need to provide the ``path`` of the attribute to delete.
in: body
required: true
type: string
path:
description: |
Resource attribute's name.
in: body
required: true
type: string
pem:
description: |
CA certificate for the bay/cluster.
in: body
required: true
type: string
public_type:
description: |
Access to a baymodel/cluster template is normally limited to the admin,
the owner, or users within the same tenant as the owner. Setting this flag
makes the baymodel/cluster template public and accessible by other users.
The default is not public.
in: body
required: true
type: boolean
registry_enabled:
description: |
Docker images by default are pulled from the public Docker registry,
but in some cases, users may want to use a private registry. This option
provides an alternative registry based on the Registry V2: Magnum will
create a local registry in the bay/cluster backed by swift to host the
images. The default is to use the public registry.
in: body
required: false
type: boolean
report_count:
description: |
The total number of reports.
in: body
required: true
type: integer
server_type:
description: |
The servers in the bay/cluster can be ``vm`` or ``baremetal``. This
parameter selects the type of server to create for the bay/cluster.
The default is ``vm``.
in: body
required: true
type: string
stack_id:
description: |
The reference UUID of the orchestration stack from the Heat orchestration
service.
in: body
required: true
type: UUID
state:
description: |
The current state of Magnum services.
in: body
required: true
type: string
status:
description: |
The current state of the bay/cluster.
in: body
required: true
type: string
status_reason:
description: |
The reason for the bay/cluster's current status.
in: body
required: true
type: string
tls_disabled:
description: |
Transport Layer Security (TLS) is normally enabled to secure the
bay/cluster. In some cases, users may want to disable TLS in the
bay/cluster, for instance during development or to troubleshoot certain
problems. Specifying this parameter will disable TLS so that users can
access the COE endpoints without a certificate. The default is TLS enabled.
in: body
required: true
type: boolean
updated_at:
description: |
The date and time when the resource was updated.
The date and time stamp format is `ISO 8601
<https://en.wikipedia.org/wiki/ISO_8601>`_:
::
CCYY-MM-DDThh:mm:ss±hh:mm
For example, ``2015-08-27T09:49:58-05:00``.
The ``±hh:mm`` value, if included, is the time zone as an offset
from UTC. In the previous example, the offset value is ``-05:00``.
If the ``updated_at`` date and time stamp is not set, its value is
``null``.
in: body
required: true
type: string
value:
description: |
Resource attribute's value.
in: body
required: true
type: string
version:
description: |
The version.
in: body
required: true
type: string
version_id:
type: string
in: body
required: true
description: >
A common name for the version in question. Informative only, it
has no real semantic meaning.
version_max:
type: string
in: body
required: true
description: >
If this version of the API supports microversions, the maximum
microversion that is supported. This will be the empty string if
microversions are not supported.
version_min:
type: string
in: body
required: true
description: >
If this version of the API supports microversions, the minimum
microversion that is supported. This will be the empty string if
microversions are not supported.
version_status:
type: string
in: body
required: true
description: |
The status of this API version. This can be one of:
- ``CURRENT``: this is the preferred version of the API to use
- ``SUPPORTED``: this is an older, but still supported version of the API
- ``DEPRECATED``: a deprecated version of the API that is slated for removal
volume_driver:
type: string
in: body
required: true
description: >
The name of a volume driver for managing the persistent storage for
the containers. The functionality supported is specific to the driver.
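The ``op``, ``path`` and ``value`` parameters above describe JSON-patch style update requests. As a rough, non-normative sketch (the endpoint URL, token and cluster UUID below are placeholders, not values defined by this reference), such an update could be issued from Python like this:

    # Hedged sketch only: MAGNUM_URL and TOKEN are placeholders; the patch
    # body mirrors the ``op``/``path``/``value`` parameters documented above.
    import requests

    MAGNUM_URL = "http://my-container-infra.example/v1"   # assumed endpoint
    TOKEN = "<keystone-token>"                             # assumed auth token

    patch = [{"op": "replace", "path": "/node_count", "value": 2}]

    resp = requests.patch(
        MAGNUM_URL + "/clusters/746e779a-751a-456b-a3e9-c883d734946f",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json=patch,
    )
    print(resp.status_code)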

View File

@ -1,151 +0,0 @@
.. -*- rst -*-
=================
Magnum Quota API
=================
Lists, creates, shows details, and updates Quotas.
Set new quota
==================
.. rest_method:: POST /v1/quotas
Create new quota for a project.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 201
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request Example
----------------
.. literalinclude:: samples/quota-create-req.json
:language: javascript
Response Example
----------------
.. literalinclude:: samples/quota-create-resp.json
:language: javascript
List all quotas
================
.. rest_method:: GET /v1/quotas
List all quotas in Magnum.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
Response Example
----------------
.. literalinclude:: samples/quota-get-all-resp.json
:language: javascript
Show details of a quota
=========================
.. rest_method:: GET /v1/quotas/{project_id}/{resource}
Get quota information for the given project_id and resource.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
- 404
Response Example
----------------
.. literalinclude:: samples/quota-get-one-resp.json
:language: javascript
Update a resource quota
=============================
.. rest_method:: PATCH /v1/quotas/{project_id}/{resource}
Update resource quota for the given project id.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 202
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request Example
----------------
.. literalinclude:: samples/quota-update-req.json
:language: javascript
Response Example
----------------
.. literalinclude:: samples/quota-update-resp.json
:language: javascript
Delete a resource quota
============================
.. rest_method:: DELETE /v1/quotas/{project_id}/{resource}
Delete a resource quota for the given project id.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 204
.. rest_status_code:: error status.yaml
- 400
- 401
- 403
- 404
Request Example
----------------
.. literalinclude:: samples/quota-delete-req.json
:language: javascript
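As a non-normative sketch (the service endpoint and token below are placeholders), the quota create and show calls documented above could be driven from Python as follows::

    # Hedged sketch only: MAGNUM_URL and TOKEN are placeholders; the payload
    # mirrors samples/quota-create-req.json.
    import requests

    MAGNUM_URL = "http://my-container-infra.example/v1"
    TOKEN = "<keystone-token>"
    HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}
    project_id = "aa5436ab58144c768ca4e9d2e9f5c3b2"

    # POST /v1/quotas - create a quota for the project (201 on success).
    create = requests.post(
        MAGNUM_URL + "/quotas",
        headers=HEADERS,
        json={"project_id": project_id, "resource": "Cluster", "hard_limit": 10},
    )

    # GET /v1/quotas/{project_id}/{resource} - show it back (200 on success).
    show = requests.get(MAGNUM_URL + "/quotas/" + project_id + "/Cluster",
                        headers=HEADERS)
    print(create.status_code, show.json())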

View File

@ -1,8 +0,0 @@
{
"name":"k8s",
"discovery_url":null,
"master_count":2,
"baymodel_id":"0562d357-8641-4759-8fed-8173f02c9633",
"node_count":2,
"bay_create_timeout":60
}

View File

@ -1,3 +0,0 @@
{
"uuid":"746e779a-751a-456b-a3e9-c883d734946f"
}

View File

@ -1,24 +0,0 @@
{
"bays":[
{
"status":"CREATE_COMPLETE",
"uuid":"746e779a-751a-456b-a3e9-c883d734946f",
"links":[
{
"href":"http://10.164.180.104:9511/v1/bays/746e779a-751a-456b-a3e9-c883d734946f",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/bays/746e779a-751a-456b-a3e9-c883d734946f",
"rel":"bookmark"
}
],
"stack_id":"9c6f1169-7300-4d08-a444-d2be38758719",
"master_count":1,
"baymodel_id":"0562d357-8641-4759-8fed-8173f02c9633",
"node_count":1,
"bay_create_timeout":60,
"name":"k8s"
}
]
}

View File

@ -1,32 +0,0 @@
{
"status":"CREATE_COMPLETE",
"uuid":"746e779a-751a-456b-a3e9-c883d734946f",
"links":[
{
"href":"http://10.164.180.104:9511/v1/bays/746e779a-751a-456b-a3e9-c883d734946f",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/bays/746e779a-751a-456b-a3e9-c883d734946f",
"rel":"bookmark"
}
],
"stack_id":"9c6f1169-7300-4d08-a444-d2be38758719",
"created_at":"2016-08-29T06:51:31+00:00",
"api_address":"https://172.24.4.6:6443",
"discovery_url":"https://discovery.etcd.io/cbeb580da58915809d59ee69348a84f3",
"updated_at":"2016-08-29T06:53:24+00:00",
"master_count":1,
"coe_version": "v1.2.0",
"baymodel_id":"0562d357-8641-4759-8fed-8173f02c9633",
"master_addresses":[
"172.24.4.6"
],
"node_count":1,
"node_addresses":[
"172.24.4.13"
],
"status_reason":"Stack CREATE completed successfully",
"bay_create_timeout":60,
"name":"k8s"
}

View File

@ -1,7 +0,0 @@
[
{
"path":"/node_count",
"value":2,
"op":"replace"
}
]

View File

@ -1,27 +0,0 @@
{
"labels":{
},
"fixed_subnet":null,
"master_flavor_id":null,
"no_proxy":"10.0.0.0/8,172.0.0.0/8,192.0.0.0/8,localhost",
"https_proxy":"http://10.164.177.169:8080",
"tls_disabled":false,
"keypair_id":"kp",
"public":false,
"http_proxy":"http://10.164.177.169:8080",
"docker_volume_size":3,
"server_type":"vm",
"external_network_id":"public",
"image_id":"fedora-atomic-latest",
"volume_driver":"cinder",
"registry_enabled":false,
"docker_storage_driver":"devicemapper",
"name":"k8s-bm2",
"network_driver":"flannel",
"fixed_network":null,
"coe":"kubernetes",
"flavor_id":"m1.small",
"master_lb_enabled":true,
"dns_nameserver":"8.8.8.8"
}

View File

@ -1,44 +0,0 @@
{
"insecure_registry":null,
"links":[
{
"href":"http://10.164.180.104:9511/v1/baymodels/085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/baymodels/085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"rel":"bookmark"
}
],
"http_proxy":"http://10.164.177.169:8080",
"updated_at":null,
"floating_ip_enabled":true,
"fixed_subnet":null,
"master_flavor_id":null,
"uuid":"085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"no_proxy":"10.0.0.0/8,172.0.0.0/8,192.0.0.0/8,localhost",
"https_proxy":"http://10.164.177.169:8080",
"tls_disabled":false,
"keypair_id":"kp",
"public":false,
"labels":{
},
"docker_volume_size":3,
"server_type":"vm",
"external_network_id":"public",
"cluster_distro":"fedora-atomic",
"image_id":"fedora-atomic-latest",
"volume_driver":"cinder",
"registry_enabled":false,
"docker_storage_driver":"devicemapper",
"apiserver_port":null,
"name":"k8s-bm2",
"created_at":"2016-08-29T02:08:08+00:00",
"network_driver":"flannel",
"fixed_network":null,
"coe":"kubernetes",
"flavor_id":"m1.small",
"master_lb_enabled":true,
"dns_nameserver":"8.8.8.8"
}

View File

@ -1,48 +0,0 @@
{
"baymodels":[
{
"insecure_registry":null,
"links":[
{
"href":"http://10.164.180.104:9511/v1/baymodels/085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/baymodels/085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"rel":"bookmark"
}
],
"http_proxy":"http://10.164.177.169:8080",
"updated_at":null,
"floating_ip_enabled":true,
"fixed_subnet":null,
"master_flavor_id":null,
"uuid":"085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"no_proxy":"10.0.0.0/8,172.0.0.0/8,192.0.0.0/8,localhost",
"https_proxy":"http://10.164.177.169:8080",
"tls_disabled":false,
"keypair_id":"kp",
"public":false,
"labels":{
},
"docker_volume_size":3,
"server_type":"vm",
"external_network_id":"public",
"cluster_distro":"fedora-atomic",
"image_id":"fedora-atomic-latest",
"volume_driver":"cinder",
"registry_enabled":false,
"docker_storage_driver":"devicemapper",
"apiserver_port":null,
"name":"k8s-bm2",
"created_at":"2016-08-29T02:08:08+00:00",
"network_driver":"flannel",
"fixed_network":null,
"coe":"kubernetes",
"flavor_id":"m1.small",
"master_lb_enabled":true,
"dns_nameserver":"8.8.8.8"
}
]
}

View File

@ -1,12 +0,0 @@
[
{
"path":"/master_lb_enabled",
"value":"True",
"op":"replace"
},
{
"path":"/registry_enabled",
"value":"True",
"op":"replace"
}
]

View File

@ -1,15 +0,0 @@
{
"cluster_uuid":"0b4b766f-1500-44b3-9804-5a6e12fe6df4",
"pem":"-----BEGIN CERTIFICATE-----\nMIICzDCCAbSgAwIBAgIQOOkVcEN7TNa9E80GoUs4xDANBgkqhkiG9w0BAQsFADAO\n-----END CERTIFICATE-----\n",
"bay_uuid":"0b4b766f-1500-44b3-9804-5a6e12fe6df4",
"links":[
{
"href":"http://10.164.180.104:9511/v1/certificates/0b4b766f-1500-44b3-9804-5a6e12fe6df4",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/certificates/0b4b766f-1500-44b3-9804-5a6e12fe6df4",
"rel":"bookmark"
}
]
}

View File

@ -1,4 +0,0 @@
{
"bay_uuid":"0b4b766f-1500-44b3-9804-5a6e12fe6df4",
"csr":"-----BEGIN CERTIFICATE REQUEST-----\nMIIEfzCCAmcCAQAwFDESMBAGA1UEAxMJWW91ciBOYW1lMIICIjANBgkqhkiG9w0B\n-----END CERTIFICATE REQUEST-----\n"
}

View File

@ -1,15 +0,0 @@
{
"pem":"-----BEGIN CERTIFICATE-----\nMIIDxDCCAqygAwIBAgIRALgUbIjdKUy8lqErJmCxVfkwDQYJKoZIhvcNAQELBQAw\n-----END CERTIFICATE-----\n",
"bay_uuid":"0b4b766f-1500-44b3-9804-5a6e12fe6df4",
"links":[
{
"href":"http://10.164.180.104:9511/v1/certificates/0b4b766f-1500-44b3-9804-5a6e12fe6df4",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/certificates/0b4b766f-1500-44b3-9804-5a6e12fe6df4",
"rel":"bookmark"
}
],
"csr":"-----BEGIN CERTIFICATE REQUEST-----\nMIIEfzCCAmcCAQAwFDESMBAGA1UEAxMJWW91ciBOYW1lMIICIjANBgkqhkiG9w0B\n-----END CERTIFICATE REQUEST-----\n"
}

View File

@ -1,9 +0,0 @@
{
"name":"k8s",
"discovery_url":null,
"master_count":2,
"cluster_template_id":"0562d357-8641-4759-8fed-8173f02c9633",
"node_count":2,
"create_timeout":60,
"keypair":"my_keypair"
}

View File

@ -1,3 +0,0 @@
{
"uuid":"746e779a-751a-456b-a3e9-c883d734946f"
}

View File

@ -1,25 +0,0 @@
{
"clusters":[
{
"status":"CREATE_IN_PROGRESS",
"cluster_template_id":"0562d357-8641-4759-8fed-8173f02c9633",
"uuid":"731387cf-a92b-4c36-981e-3271d63e5597",
"links":[
{
"href":"http://10.164.180.104:9511/v1/bays/731387cf-a92b-4c36-981e-3271d63e5597",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/bays/731387cf-a92b-4c36-981e-3271d63e5597",
"rel":"bookmark"
}
],
"stack_id":"31c1ee6c-081e-4f39-9f0f-f1d87a7defa1",
"keypair":"my_keypair",
"master_count":1,
"create_timeout":60,
"node_count":1,
"name":"k8s"
}
]
}

View File

@ -1,33 +0,0 @@
{
"status":"CREATE_COMPLETE",
"uuid":"746e779a-751a-456b-a3e9-c883d734946f",
"links":[
{
"href":"http://10.164.180.104:9511/v1/clusters/746e779a-751a-456b-a3e9-c883d734946f",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/clusters/746e779a-751a-456b-a3e9-c883d734946f",
"rel":"bookmark"
}
],
"stack_id":"9c6f1169-7300-4d08-a444-d2be38758719",
"created_at":"2016-08-29T06:51:31+00:00",
"api_address":"https://172.24.4.6:6443",
"discovery_url":"https://discovery.etcd.io/cbeb580da58915809d59ee69348a84f3",
"updated_at":"2016-08-29T06:53:24+00:00",
"master_count":1,
"coe_version": "v1.2.0",
"keypair":"my_keypair",
"cluster_template_id":"0562d357-8641-4759-8fed-8173f02c9633",
"master_addresses":[
"172.24.4.6"
],
"node_count":1,
"node_addresses":[
"172.24.4.13"
],
"status_reason":"Stack CREATE completed successfully",
"create_timeout":60,
"name":"k8s"
}

View File

@ -1,7 +0,0 @@
[
{
"path":"/node_count",
"value":2,
"op":"replace"
}
]

View File

@ -1,27 +0,0 @@
{
"labels":{
},
"fixed_subnet":null,
"master_flavor_id":null,
"no_proxy":"10.0.0.0/8,172.0.0.0/8,192.0.0.0/8,localhost",
"https_proxy":"http://10.164.177.169:8080",
"tls_disabled":false,
"keypair_id":"kp",
"public":false,
"http_proxy":"http://10.164.177.169:8080",
"docker_volume_size":3,
"server_type":"vm",
"external_network_id":"public",
"image_id":"fedora-atomic-latest",
"volume_driver":"cinder",
"registry_enabled":false,
"docker_storage_driver":"devicemapper",
"name":"k8s-bm2",
"network_driver":"flannel",
"fixed_network":null,
"coe":"kubernetes",
"flavor_id":"m1.small",
"master_lb_enabled":true,
"dns_nameserver":"8.8.8.8"
}

View File

@ -1,44 +0,0 @@
{
"insecure_registry":null,
"links":[
{
"href":"http://10.164.180.104:9511/v1/clustertemplates/085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/clustertemplates/085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"rel":"bookmark"
}
],
"http_proxy":"http://10.164.177.169:8080",
"updated_at":null,
"floating_ip_enabled":true,
"fixed_subnet":null,
"master_flavor_id":null,
"uuid":"085e1c4d-4f68-4bfd-8462-74b9e14e4f39",
"no_proxy":"10.0.0.0/8,172.0.0.0/8,192.0.0.0/8,localhost",
"https_proxy":"http://10.164.177.169:8080",
"tls_disabled":false,
"keypair_id":"kp",
"public":false,
"labels":{
},
"docker_volume_size":3,
"server_type":"vm",
"external_network_id":"public",
"cluster_distro":"fedora-atomic",
"image_id":"fedora-atomic-latest",
"volume_driver":"cinder",
"registry_enabled":false,
"docker_storage_driver":"devicemapper",
"apiserver_port":null,
"name":"k8s-bm2",
"created_at":"2016-08-29T02:08:08+00:00",
"network_driver":"flannel",
"fixed_network":null,
"coe":"kubernetes",
"flavor_id":"m1.small",
"master_lb_enabled":true,
"dns_nameserver":"8.8.8.8"
}

View File

@ -1,48 +0,0 @@
{
"clustertemplates":[
{
"insecure_registry":null,
"links":[
{
"href":"http://10.164.180.104:9511/v1/clustertemplates/0562d357-8641-4759-8fed-8173f02c9633",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/clustertemplates/0562d357-8641-4759-8fed-8173f02c9633",
"rel":"bookmark"
}
],
"http_proxy":"http://10.164.177.169:8080",
"updated_at":null,
"floating_ip_enabled":true,
"fixed_subnet":null,
"master_flavor_id":null,
"uuid":"0562d357-8641-4759-8fed-8173f02c9633",
"no_proxy":"10.0.0.0/8,172.0.0.0/8,192.0.0.0/8,localhost",
"https_proxy":"http://10.164.177.169:8080",
"tls_disabled":false,
"keypair_id":"kp",
"public":false,
"labels":{
},
"docker_volume_size":3,
"server_type":"vm",
"external_network_id":"public",
"cluster_distro":"fedora-atomic",
"image_id":"fedora-atomic-latest",
"volume_driver":"cinder",
"registry_enabled":false,
"docker_storage_driver":"devicemapper",
"apiserver_port":null,
"name":"k8s-bm",
"created_at":"2016-08-26T09:34:41+00:00",
"network_driver":"flannel",
"fixed_network":null,
"coe":"kubernetes",
"flavor_id":"m1.small",
"master_lb_enabled":false,
"dns_nameserver":"8.8.8.8"
}
]
}

View File

@ -1,12 +0,0 @@
[
{
"path":"/master_lb_enabled",
"value":"True",
"op":"replace"
},
{
"path":"/registry_enabled",
"value":"True",
"op":"replace"
}
]

View File

@ -1,14 +0,0 @@
{
"mservices":[
{
"binary":"magnum-conductor",
"created_at":"2016-08-23T10:52:13+00:00",
"state":"up",
"report_count":2179,
"updated_at":"2016-08-25T01:13:16+00:00",
"host":"magnum-manager",
"disabled_reason":null,
"id":1
}
]
}

View File

@ -1,5 +0,0 @@
{
"project_id": "aa5436ab58144c768ca4e9d2e9f5c3b2",
"resource": "Cluster",
"hard_limit": 10
}

View File

@ -1,8 +0,0 @@
{
"resource": "Cluster",
"created_at": "2017-01-17T17:35:48+00:00",
"updated_at": null,
"hard_limit": 1,
"project_id": "aa5436ab58144c768ca4e9d2e9f5c3b2",
"id": 26
}

View File

@ -1,4 +0,0 @@
{
"project_id": "aa5436ab58144c768ca4e9d2e9f5c3b2",
"resource": "Cluster"
}

View File

@ -1,12 +0,0 @@
{
"quotas": [
{
"resource": "Cluster",
"created_at": "2017-01-17T17:35:49+00:00",
"updated_at": "2017-01-17T17:38:21+00:00",
"hard_limit": 10,
"project_id": "aa5436ab58144c768ca4e9d2e9f5c3b2",
"id": 26
}
]
}

View File

@ -1,8 +0,0 @@
{
"resource": "Cluster",
"created_at": "2017-01-17T17:35:49+00:00",
"updated_at": "2017-01-17T17:38:20+00:00",
"hard_limit": 10,
"project_id": "aa5436ab58144c768ca4e9d2e9f5c3b2",
"id": 26
}

View File

@ -1,5 +0,0 @@
{
"project_id": "aa5436ab58144c768ca4e9d2e9f5c3b2",
"resource": "Cluster",
"hard_limit": 10
}

View File

@ -1,8 +0,0 @@
{
"resource": "Cluster",
"created_at": "2017-01-17T17:35:49+00:00",
"updated_at": "2017-01-17T17:38:20+00:00",
"hard_limit": 10,
"project_id": "aa5436ab58144c768ca4e9d2e9f5c3b2",
"id": 26
}

View File

@ -1,4 +0,0 @@
{
"clusters": 1,
"nodes": 2
}

View File

@ -1,80 +0,0 @@
{
"media_types":[
{
"base":"application/json",
"type":"application/vnd.openstack.magnum.v1+json"
}
],
"links":[
{
"href":"http://10.164.180.104:9511/v1/",
"rel":"self"
},
{
"href":"http://docs.openstack.org/developer/magnum/dev/api-spec-v1.html",
"type":"text/html",
"rel":"describedby"
}
],
"mservices":[
{
"href":"http://10.164.180.104:9511/v1/mservices/",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/mservices/",
"rel":"bookmark"
}
],
"bays":[
{
"href":"http://10.164.180.104:9511/v1/bays/",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/bays/",
"rel":"bookmark"
}
],
"clustertemplates":[
{
"href":"http://10.164.180.104:9511/v1/clustertemplates/",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/clustertemplates/",
"rel":"bookmark"
}
],
"certificates":[
{
"href":"http://10.164.180.104:9511/v1/certificates/",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/certificates/",
"rel":"bookmark"
}
],
"clusters":[
{
"href":"http://10.164.180.104:9511/v1/clusters/",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/clusters/",
"rel":"bookmark"
}
],
"baymodels":[
{
"href":"http://10.164.180.104:9511/v1/baymodels/",
"rel":"self"
},
{
"href":"http://10.164.180.104:9511/baymodels/",
"rel":"bookmark"
}
],
"id":"v1"
}

View File

@ -1,18 +0,0 @@
{
"versions":[
{
"status":"CURRENT",
"min_version":"1.1",
"max_version":"1.4",
"id":"v1",
"links":[
{
"href":"http://10.164.180.104:9511/v1/",
"rel":"self"
}
]
}
],
"name":"OpenStack Magnum API",
"description":"Magnum is an OpenStack project which aims to provide container management."
}

View File

@ -1,82 +0,0 @@
.. -*- rst -*-
=================
Magnum Stats API
=================
An admin user can get stats for the given tenant and also overall system stats.
A non-admin user can get self stats.
Show stats for a tenant
=======================
.. rest_method:: GET /v1/stats?project_id=<project_id>
Get stats based on project id.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
Request
-------
.. rest_parameters:: parameters.yaml
- project_id: project_id
Response
--------
.. rest_parameters:: parameters.yaml
- clusters: clusters
- nodes: nodes
Response Example
----------------
.. literalinclude:: samples/stats-get-resp.json
:language: javascript
Show overall stats
==================
.. rest_method:: GET /v1/stats
Show overall Magnum system stats.
If the requester is a non-admin user, self stats are shown.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 401
- 403
Response
--------
.. rest_parameters:: parameters.yaml
- clusters: clusters
- nodes: nodes
Response Example
----------------
.. literalinclude:: samples/stats-get-resp.json
:language: javascript
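As a non-normative illustration (the endpoint and token are placeholders), the tenant stats query above could be issued from Python as follows::

    # Hedged sketch: the response shape follows samples/stats-get-resp.json,
    # i.e. {"clusters": ..., "nodes": ...}.
    import requests

    MAGNUM_URL = "http://my-container-infra.example/v1"
    TOKEN = "<keystone-token>"

    resp = requests.get(
        MAGNUM_URL + "/stats",
        headers={"X-Auth-Token": TOKEN},
        params={"project_id": "aa5436ab58144c768ca4e9d2e9f5c3b2"},
    )
    stats = resp.json()
    print(stats["clusters"], stats["nodes"])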

View File

@ -1,62 +0,0 @@
#################
# Success Codes #
#################
200:
default: |
Request was successful.
201:
default: |
Resource was created and is ready to use.
202:
default: |
Request was accepted for processing, but the processing has not been
completed. A 'location' header is included in the response which contains
a link to check the progress of the request.
204:
default: |
The server has fulfilled the request by deleting the resource.
300:
default: |
There are multiple choices for resources. The request has to be more
specific to successfully retrieve one of these resources.
302:
default: |
The response is about a redirection hint. The header of the response
usually contains a 'location' value where requesters can check to track
the real location of the resource.
#################
# Error Codes #
#################
400:
default: |
Some content in the request was invalid.
resource_signal: |
The target resource doesn't support receiving a signal.
401:
default: |
User must authenticate before making a request.
403:
default: |
Policy does not allow current user to do this operation.
404:
default: |
The requested resource could not be found.
405:
default: |
Method is not valid for this endpoint.
409:
default: |
This operation conflicted with another operation on this resource.
duplicate_zone: |
There is already a zone with this name.
500:
default: |
Something went wrong inside the service. This usually should not happen.
If it does happen, it means the server has experienced some serious
problems.
503:
default: |
Service is not available. This is mostly caused by service configuration
errors which prevent the service from starting up successfully.

View File

@ -1,31 +0,0 @@
.. -*- rst -*-
=================
Magnum Base URLs
=================
All API calls through the rest of this document require authentication
with the OpenStack Identity service. They also require a ``url`` that
is extracted from the Identity token for the service of type
``container-infra``. This is the root url to which every call below is
appended to build a full path.
Note that when using the OpenStack Identity service API v2, the ``url`` can
be represented via ``adminURL``, ``internalURL`` or ``publicURL`` in the
endpoint catalog. In Identity service API v3, the ``url`` is selected by the
field ``interface``, which can be ``admin``, ``internal`` or ``public``.
For instance, if the ``url`` is
``http://my-container-infra.org/magnum/v1`` then the full API call for
``/clusters`` is ``http://my-container-infra.org/magnum/v1/clusters``.
Depending on the deployment, the container infrastructure management service
url might be http or https, use a custom port or path, and include your
project id. The only way to know the urls for your deployment is by using the
service catalog. The container infrastructure management URL should never be
hard coded in applications, even if they are only expected to work at a
single site. It should always be discovered from the Identity token.
As such, for the rest of this document we will be using the shorthand
where ``GET /clusters`` really means
``GET {your_container_infra_url}/clusters``.
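As a non-normative sketch of that discovery step (all credential values below are placeholders), the ``container-infra`` endpoint can be looked up from the service catalog with ``keystoneauth1`` along these lines::

    # Hedged sketch: credentials and the Keystone URL are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url="http://keystone.example:5000/v3",
        username="demo",
        password="secret",
        project_name="demo",
        user_domain_id="default",
        project_domain_id="default",
    )
    sess = session.Session(auth=auth)

    # Discover the container infrastructure management URL from the catalog
    # instead of hard coding it.
    magnum_url = sess.get_endpoint(service_type="container-infra",
                                   interface="public")
    print(magnum_url)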

View File

@ -1,104 +0,0 @@
.. -*- rst -*-
==============
API Versions
==============
In order to bring new features to users over time, the Magnum API
supports versioning. There are two kinds of versions in Magnum.
- ``major versions``, which have dedicated urls
- ``microversions``, which can be requested through the use of the
  ``OpenStack-API-Version`` header.
Beginning with the Newton release, all API requests support the
``OpenStack-API-Version`` header. This header SHOULD be supplied
with every request; in the absence of this header, each request is treated
as though coming from an older pre-Newton client. This was done to preserve
backwards compatibility as we introduced new features.
The Version APIs work differently from other APIs as they *do not*
require authentication.
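As a rough, non-normative illustration (the service URL and token are placeholders, and the microversion must fall within the advertised range), a request pinned to a specific microversion could look like this::

    # Hedged sketch: "container-infra 1.4" is only an example value; use a
    # microversion between the min_version and max_version reported by the
    # version document described below.
    import requests

    MAGNUM_URL = "http://my-container-infra.example/v1"
    TOKEN = "<keystone-token>"

    resp = requests.get(
        MAGNUM_URL + "/clusters",
        headers={
            "X-Auth-Token": TOKEN,
            "OpenStack-API-Version": "container-infra 1.4",
        },
    )
    print(resp.status_code)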
List API Versions
=======================
.. rest_method:: GET /
This fetches all the information about all known major API versions in
the deployment. Links to more specific information will be provided
for each API version, as well as information about supported min and
max microversions.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 503
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- versions: version
- status: version_status
- min_version: version_min
- max_version: version_max
- id: version_id
- links: links
- name: name
- description: description
Response Example
----------------
.. literalinclude:: samples/versions-get-resp.json
:language: javascript
Show v1 API Version
====================================
.. rest_method:: GET /v1/
Show all the resources within the Magnum v1 API.
Response Codes
--------------
.. rest_status_code:: success status.yaml
- 200
.. rest_status_code:: error status.yaml
- 503
Response
--------
.. rest_parameters:: parameters.yaml
- X-Openstack-Request-Id: request_id
- id: version_id
- links: links
.. note::
The ``media-types`` parameters in the response are
vestigial and provide no useful information. They will probably be
deprecated and removed in the future.
Response Example
----------------
.. literalinclude:: samples/versions-01-get-resp.json
:language: javascript

View File

@ -1,2 +0,0 @@
[python: **.py]

View File

@ -1,103 +0,0 @@
How to build a centos image which contains DC/OS 1.8.x
======================================================
Here is the advanced DC/OS 1.8 installation guide.
See [Advanced DC/OS Installation Guide](https://dcos.io/docs/1.8/administration/installing/custom/advanced/)
See [Install Docker on CentOS](https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/)
See [Adding agent nodes](https://dcos.io/docs/1.8/administration/installing/custom/add-a-node/)
Create a centos image using DIB following the steps outlined in DC/OS installation guide.
1. Install and configure docker in chroot.
2. Install system requirements in chroot.
3. Download `dcos_generate_config.sh` outside chroot.
This file will be used to run `dcos_generate_config.sh --genconf` to generate
config files on the node during magnum cluster creation.
4. Some configuration changes are required for DC/OS, i.e. disabling firewalld
and adding a group named nogroup.
See comments in the script file.
Use the centos image to build a DC/OS cluster.
Command:
`magnum cluster-template-create`
`magnum cluster-create`
After all the instances with the centos image are created:
1. Pass parameters to config.yaml with magnum cluster template properties.
2. Run `dcos_generate_config.sh --genconf` to generate config files.
3. Run `dcos_install.sh master` on master node and `dcos_install.sh slave` on slave node.
If we want to scale the DC/OS cluster.
Command:
`magnum cluster-update`
The same steps as cluster creation.
1. To scale up, create new instances, generate config files on them and install.
2. To scale down, delete those agent nodes where containers are not running.
How to use magnum dcos coe
===============================================
We are assuming that magnum has been installed and the magnum path is `/opt/stack/magnum`.
1. Copy dcos magnum coe source code
$ cp -r /opt/stack/magnum/contrib/drivers/dcos_centos_v1 /opt/stack/magnum/magnum/drivers/
$ cp /opt/stack/magnum/contrib/drivers/common/dcos_* /opt/stack/magnum/magnum/drivers/common/
$ cd /opt/stack/magnum
$ sudo python setup.py install
2. Add driver in setup.cfg
dcos_centos_v1 = magnum.drivers.dcos_centos_v1.driver:Driver
3. Restart your magnum services.
4. Prepare centos image with elements dcos and docker installed
See how to build a centos image in /opt/stack/magnum/magnum/drivers/dcos_centos_v1/image/README.md
5. Create glance image
$ openstack image create centos-7-dcos.qcow2 \
--public \
--disk-format=qcow2 \
--container-format=bare \
--property os_distro=centos \
--file=centos-7-dcos.qcow2
6. Create magnum cluster template
Configure DC/OS cluster with --labels
See https://dcos.io/docs/1.8/administration/installing/custom/configuration-parameters/
$ magnum cluster-template-create --name dcos-cluster-template \
--image-id centos-7-dcos.qcow2 \
--keypair-id testkey \
--external-network-id public \
--dns-nameserver 8.8.8.8 \
--flavor-id m1.medium \
--labels oauth_enabled=false \
--coe dcos
Here is an example to specify the overlay network in DC/OS,
'dcos_overlay_network' should be json string format.
$ magnum cluster-template-create --name dcos-cluster-template \
--image-id centos-7-dcos.qcow2 \
--keypair-id testkey \
--external-network-id public \
--dns-nameserver 8.8.8.8 \
--flavor-id m1.medium \
--labels oauth_enabled=false \
--labels dcos_overlay_enable='true' \
--labels dcos_overlay_config_attempts='6' \
--labels dcos_overlay_mtu='9001' \
--labels dcos_overlay_network='{"vtep_subnet": "44.128.0.0/20",\
"vtep_mac_oui": "70:B3:D5:00:00:00","overlays":\
[{"name": "dcos","subnet": "9.0.0.0/8","prefix": 26}]}' \
--coe dcos
7. Create magnum cluster
$ magnum cluster-create --name dcos-cluster --cluster-template dcos-cluster-template --node-count 1
8. You need to wait for a while after magnum cluster creation completed to make
DC/OS web interface accessible.
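As a small helper sketch (the values simply reuse the example from step 6), the `dcos_overlay_network` label value can be produced with `json.dumps` instead of hand-escaping the JSON string:

    # Sketch only: builds the overlay-network label value shown above.
    import json

    overlay = {
        "vtep_subnet": "44.128.0.0/20",
        "vtep_mac_oui": "70:B3:D5:00:00:00",
        "overlays": [{"name": "dcos", "subnet": "9.0.0.0/8", "prefix": 26}],
    }
    print("--labels dcos_overlay_network='{}'".format(json.dumps(overlay)))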

View File

@ -1,36 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.drivers.dcos_centos_v1 import monitor
from magnum.drivers.dcos_centos_v1.scale_manager import DcosScaleManager
from magnum.drivers.dcos_centos_v1 import template_def
from magnum.drivers.heat import driver
class Driver(driver.HeatDriver):
@property
def provides(self):
return [
{'server_type': 'vm',
'os': 'centos',
'coe': 'dcos'},
]
def get_template_definition(self):
return template_def.DcosCentosVMTemplateDefinition()
def get_monitor(self, context, cluster):
return monitor.DcosMonitor(context, cluster)
def get_scale_manager(self, context, osclient, cluster):
return DcosScaleManager(context, osclient, cluster)

View File

@ -1,86 +0,0 @@
=============
centos-dcos
=============
This directory contains `[diskimage-builder](https://github.com/openstack/diskimage-builder)`
elements to build a centos image which contains dcos.
Pre-requisites to run diskimage-builder
---------------------------------------
For diskimage-builder to work, the following packages need to be
present:
* kpartx
* qemu-utils
* curl
* xfsprogs
* yum
* yum-utils
* git
For Debian/Ubuntu systems, use::
apt-get install kpartx qemu-utils curl xfsprogs yum yum-utils git
For CentOS and Fedora < 22, use::
yum install kpartx qemu-utils curl xfsprogs yum yum-utils git
For Fedora >= 22, use::
dnf install kpartx @virtualization curl xfsprogs yum yum-utils git
How to generate Centos image with DC/OS 1.8.x
---------------------------------------------
1. Download and export element path
git clone https://git.openstack.org/openstack/magnum
git clone https://git.openstack.org/openstack/diskimage-builder.git
git clone https://git.openstack.org/openstack/dib-utils.git
git clone https://git.openstack.org/openstack/tripleo-image-elements.git
git clone https://git.openstack.org/openstack/heat-templates.git
export PATH="${PWD}/diskimage-builder/bin:$PATH"
export PATH="${PWD}/dib-utils/bin:$PATH"
export ELEMENTS_PATH=magnum/contrib/drivers/dcos_centos_v1/image
export ELEMENTS_PATH=${ELEMENTS_PATH}:diskimage-builder/elements
export ELEMENTS_PATH=${ELEMENTS_PATH}:tripleo-image-elements/elements:heat-templates/hot/software-config/elements
2. Export environment path of the url to download dcos_generate_config.sh
This default download url is for DC/OS 1.8.4
export DCOS_GENERATE_CONFIG_SRC=https://downloads.dcos.io/dcos/stable/commit/e64024af95b62c632c90b9063ed06296fcf38ea5/dcos_generate_config.sh
Or specify local file path
export DCOS_GENERATE_CONFIG_SRC=`pwd`/dcos_generate_config.sh
3. Set file system type to `xfs`
Only XFS is currently supported for overlay.
See https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/#recommendations
export FS_TYPE=xfs
4. Create image
disk-image-create \
centos7 vm docker dcos selinux-permissive \
os-collect-config os-refresh-config os-apply-config \
heat-config heat-config-script \
-o centos-7-dcos.qcow2
5. (Optional) Create user image for bare metal node
Create with elements dhcp-all-interfaces and devuser
export DIB_DEV_USER_USERNAME=centos
export DIB_DEV_USER_PWDLESS_SUDO=YES
disk-image-create \
centos7 vm docker dcos selinux-permissive dhcp-all-interfaces devuser \
os-collect-config os-refresh-config os-apply-config \
heat-config heat-config-script \
-o centos-7-dcos-bm.qcow2

View File

@ -1,2 +0,0 @@
package-installs
docker

View File

@ -1,5 +0,0 @@
# Specify download url, default DC/OS version 1.8.4
export DCOS_GENERATE_CONFIG_SRC=${DCOS_GENERATE_CONFIG_SRC:-https://downloads.dcos.io/dcos/stable/commit/e64024af95b62c632c90b9063ed06296fcf38ea5/dcos_generate_config.sh}
# or local file path
# export DCOS_GENERATE_CONFIG_SRC=${DCOS_GENERATE_CONFIG_SRC:-${PWD}/dcos_generate_config.sh}

View File

@ -1,23 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# This script is used to download dcos_generate_config.sh outside the chroot.
# This is essential because dcos_generate_config.sh is more than 700M in size,
# so we download it into the image in advance.
sudo mkdir -p $TMP_MOUNT_PATH/opt/dcos
if [ -f $DCOS_GENERATE_CONFIG_SRC ]; then
# If $DCOS_GENERATE_CONFIG_SRC is a file path, copy the file
sudo cp $DCOS_GENERATE_CONFIG_SRC $TMP_MOUNT_PATH/opt/dcos
else
# If $DCOS_GENERATE_CONFIG_SRC is a url, download it
# Please make sure curl is installed on your host environment
cd $TMP_MOUNT_PATH/opt/dcos
sudo -E curl -O $DCOS_GENERATE_CONFIG_SRC
fi

View File

@ -1,6 +0,0 @@
tar:
xz:
unzip:
curl:
ipset:
ntp:

View File

@ -1,10 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# nogroup will be used on Mesos masters and agents.
sudo groupadd nogroup

View File

@ -1,9 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
sudo systemctl enable ntpd

View File

@ -1 +0,0 @@
package-installs

View File

@ -1,24 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Install the Docker engine, daemon, and service.
#
# The supported versions of Docker are:
# 1.7.x
# 1.8.x
# 1.9.x
# 1.10.x
# 1.11.x
# Docker 1.12.x is NOT supported.
# Docker 1.9.x - 1.11.x is recommended for stability reasons.
# https://github.com/docker/docker/issues/9718
#
# See the DC/OS installation guide for details
# https://dcos.io/docs/1.8/administration/installing/custom/system-requirements/install-docker-centos/
#
sudo -E yum install -y docker-engine-1.11.2

View File

@ -1,9 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
sudo systemctl enable docker

View File

@ -1,26 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Upgrade CentOS to 7.2
sudo -E yum upgrade --assumeyes --tolerant
sudo -E yum update --assumeyes
# Verify that the kernel is at least 3.10
function version_gt() { test "$(echo "$@" | tr " " "\n" | sort -V | head -n 1)" != "$1"; }
kernel_version=`uname -r | cut --bytes=1-4`
expect_version=3.10
if version_gt $expect_version $kernel_version; then
echo "Error: kernel version at least $expect_version, current version $kernel_version"
exit 1
fi
# Enable OverlayFS
sudo tee /etc/modules-load.d/overlay.conf <<-'EOF'
overlay
EOF

View File

@ -1,33 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Configure yum to use the Docker yum repo
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
# Configure systemd to run the Docker Daemon with OverlayFS
# Manage Docker on CentOS with systemd.
# systemd handles starting Docker on boot and restarting it when it crashes.
#
# Docker 1.11.x will be installed, so issue for Docker 1.12.x on Centos7
# won't happen.
# https://github.com/docker/docker/issues/22847
# https://github.com/docker/docker/issues/25098
#
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<- 'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --storage-driver=overlay -H fd://
EOF

View File

@ -1,25 +0,0 @@
#!/bin/bash
# This script installs all needed dependencies to generate
# images using diskimage-builder. Please note it has only been
# tested on Ubuntu Xenial.
set -eux
set -o pipefail
sudo apt update || true
sudo apt install -y \
git \
qemu-utils \
python-dev \
python-yaml \
python-six \
uuid-runtime \
curl \
sudo \
kpartx \
parted \
wget \
xfsprogs \
yum \
yum-utils

View File

@ -1,35 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2016 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
# check that image is valid
qemu-img check -q $1
# validate estimated size
FILESIZE=$(stat -c%s "$1")
MIN_SIZE=1231028224 # 1.15GB
MAX_SIZE=1335885824 # 1.25GB
if [ $FILESIZE -lt $MIN_SIZE ] ; then
echo "Error: generated image size is lower than expected."
exit 1
fi
if [ $FILESIZE -gt $MAX_SIZE ] ; then
echo "Error: generated image size is higher than expected."
exit 1
fi

View File

@ -1,74 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_serialization import jsonutils
from magnum.common import urlfetch
from magnum.conductor import monitors
class DcosMonitor(monitors.MonitorBase):
def __init__(self, context, cluster):
super(DcosMonitor, self).__init__(context, cluster)
self.data = {}
@property
def metrics_spec(self):
return {
'memory_util': {
'unit': '%',
'func': 'compute_memory_util',
},
'cpu_util': {
'unit': '%',
'func': 'compute_cpu_util',
},
}
# See https://github.com/dcos/adminrouter#ports-summary
# Use http://<mesos-master>/mesos/ instead of http://<mesos-master>:5050
def _build_url(self, url, protocol='http', server_name='mesos', path='/'):
return protocol + '://' + url + '/' + server_name + path
def _is_leader(self, state):
return state['leader'] == state['pid']
def pull_data(self):
self.data['mem_total'] = 0
self.data['mem_used'] = 0
self.data['cpu_total'] = 0
self.data['cpu_used'] = 0
for master_addr in self.cluster.master_addresses:
mesos_master_url = self._build_url(master_addr,
server_name='mesos',
path='/state')
master = jsonutils.loads(urlfetch.get(mesos_master_url))
if self._is_leader(master):
for slave in master['slaves']:
self.data['mem_total'] += slave['resources']['mem']
self.data['mem_used'] += slave['used_resources']['mem']
self.data['cpu_total'] += slave['resources']['cpus']
self.data['cpu_used'] += slave['used_resources']['cpus']
break
def compute_memory_util(self):
if self.data['mem_total'] == 0 or self.data['mem_used'] == 0:
return 0
else:
return self.data['mem_used'] * 100 / self.data['mem_total']
def compute_cpu_util(self):
if self.data['cpu_total'] == 0 or self.data['cpu_used'] == 0:
return 0
else:
return self.data['cpu_used'] * 100 / self.data['cpu_total']

View File

@ -1,29 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.conductor.scale_manager import ScaleManager
from marathon import MarathonClient
class DcosScaleManager(ScaleManager):
def __init__(self, context, osclient, cluster):
super(DcosScaleManager, self).__init__(context, osclient, cluster)
def _get_hosts_with_container(self, context, cluster):
marathon_client = MarathonClient(
'http://' + cluster.api_address + '/marathon/')
hosts = set()
for task in marathon_client.list_tasks():
hosts.add(task.host)
return hosts

View File

@ -1,28 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from magnum.drivers.heat import dcos_centos_template_def as dctd
class DcosCentosVMTemplateDefinition(dctd.DcosCentosTemplateDefinition):
"""DC/OS template for Centos VM."""
@property
def driver_module_path(self):
return __name__[:__name__.rindex('.')]
@property
def template_path(self):
return os.path.join(os.path.dirname(os.path.realpath(__file__)),
'templates/dcoscluster.yaml')

View File

@ -1,674 +0,0 @@
heat_template_version: 2014-10-16
description: >
This template will boot a DC/OS cluster with one or more masters
(as specified by number_of_masters, default is 1) and one or more slaves
(as specified by the number_of_slaves parameter, which
defaults to 1).
parameters:
cluster_name:
type: string
description: human readable name for the DC/OS cluster
default: my-cluster
number_of_masters:
type: number
description: how many DC/OS masters to spawn initially
default: 1
# In DC/OS, there are two types of slave nodes, public and private.
# Public slave nodes have external access and private slave nodes don't.
# Magnum only supports one type of slave node and the cluster template
# properties are not modified, so slave nodes are created as private agents.
number_of_slaves:
type: number
description: how many DC/OS agents or slaves to spawn initially
default: 1
master_flavor:
type: string
default: m1.medium
description: flavor to use when booting the master servers
slave_flavor:
type: string
default: m1.medium
description: flavor to use when booting the slave servers
server_image:
type: string
default: centos-dcos
description: glance image used to boot the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
default: public
fixed_network:
type: string
description: uuid/name of an existing network to use to provision machines
default: ""
fixed_subnet:
type: string
description: uuid/name of an existing subnet to use to provision machines
default: ""
fixed_network_cidr:
type: string
description: network range for fixed ip network
default: 10.0.0.0/24
dns_nameserver:
type: string
description: address of a dns nameserver reachable in your environment
http_proxy:
type: string
description: http proxy address for docker
default: ""
https_proxy:
type: string
description: https proxy address for docker
default: ""
no_proxy:
type: string
description: no proxies for docker
default: ""
######################################################################
#
# Rexray Configuration
#
trustee_domain_id:
type: string
description: domain id of the trustee
default: ""
trustee_user_id:
type: string
description: user id of the trustee
default: ""
trustee_username:
type: string
description: username of the trustee
default: ""
trustee_password:
type: string
description: password of the trustee
default: ""
hidden: true
trust_id:
type: string
description: id of the trust which is used by the trustee
default: ""
hidden: true
######################################################################
#
# Rexray Configuration
#
volume_driver:
type: string
description: volume driver to use for container storage
default: ""
username:
type: string
description: user name
tenant_name:
type: string
description: >
tenant_name is used to isolate access to cloud resources
domain_name:
type: string
description: >
domain is to define the administrative boundaries for management
of Keystone entities
region_name:
type: string
description: a logically separate section of the cluster
rexray_preempt:
type: string
description: >
enables any host to take control of a volume irrespective of whether
other hosts are using the volume
default: "false"
auth_url:
type: string
description: url for keystone
slaves_to_remove:
type: comma_delimited_list
description: >
List of slaves to be removed when doing an update. Individual slave may
be referenced several ways: (1) The resource name (e.g.['1', '3']),
(2) The private IP address ['10.0.0.4', '10.0.0.6']. Note: the list should
be empty when doing a create.
default: []
wait_condition_timeout:
type: number
description: >
timeout for the Wait Conditions
default: 6000
password:
type: string
description: >
user password, not set in current implementation, only used to
fill in for DC/OS config file
default:
password
hidden: true
######################################################################
#
# DC/OS parameters
#
# cluster_name
exhibitor_storage_backend:
type: string
default: "static"
exhibitor_zk_hosts:
type: string
default: ""
exhibitor_zk_path:
type: string
default: ""
aws_access_key_id:
type: string
default: ""
aws_region:
type: string
default: ""
aws_secret_access_key:
type: string
default: ""
exhibitor_explicit_keys:
type: string
default: ""
s3_bucket:
type: string
default: ""
s3_prefix:
type: string
default: ""
exhibitor_azure_account_name:
type: string
default: ""
exhibitor_azure_account_key:
type: string
default: ""
exhibitor_azure_prefix:
type: string
default: ""
# master_discovery default set to "static"
# If --master-lb-enabled is specified,
# master_discovery will be set to "master_http_loadbalancer"
master_discovery:
type: string
default: "static"
# master_list
# exhibitor_address
# num_masters
####################################################
# Networking
dcos_overlay_enable:
type: string
default: ""
constraints:
- allowed_values:
- "true"
- "false"
- ""
dcos_overlay_config_attempts:
type: string
default: ""
dcos_overlay_mtu:
type: string
default: ""
dcos_overlay_network:
type: string
default: ""
dns_search:
type: string
description: >
This parameter specifies a space-separated list of domains that
are tried when an unqualified domain is entered
default: ""
# resolvers
# use_proxy
####################################################
# Performance and Tuning
check_time:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
docker_remove_delay:
type: number
default: 1
gc_delay:
type: number
default: 2
log_directory:
type: string
default: "/genconf/logs"
process_timeout:
type: number
default: 120
####################################################
# Security And Authentication
oauth_enabled:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
telemetry_enabled:
type: string
default: "true"
constraints:
- allowed_values:
- "true"
- "false"
resources:
######################################################################
#
# network resources. allocate a network and router for our server.
#
network:
type: ../../common/templates/network.yaml
properties:
existing_network: {get_param: fixed_network}
existing_subnet: {get_param: fixed_subnet}
private_network_cidr: {get_param: fixed_network_cidr}
dns_nameserver: {get_param: dns_nameserver}
external_network: {get_param: external_network}
api_lb:
type: lb.yaml
properties:
fixed_subnet: {get_attr: [network, fixed_subnet]}
external_network: {get_param: external_network}
######################################################################
#
# security groups. we need to permit network traffic of various
# sorts.
#
secgroup:
type: secgroup.yaml
######################################################################
#
# resources that expose the IPs of either the dcos master or a given
# LBaaS pool depending on whether LBaaS is enabled for the cluster.
#
api_address_lb_switch:
type: Magnum::ApiGatewaySwitcher
properties:
pool_public_ip: {get_attr: [api_lb, floating_address]}
pool_private_ip: {get_attr: [api_lb, address]}
master_public_ip: {get_attr: [dcos_masters, resource.0.dcos_master_external_ip]}
master_private_ip: {get_attr: [dcos_masters, resource.0.dcos_master_ip]}
######################################################################
#
# Master SoftwareConfig.
#
write_params_master:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/write-heat-params.sh}
inputs:
- name: HTTP_PROXY
type: String
- name: HTTPS_PROXY
type: String
- name: NO_PROXY
type: String
- name: AUTH_URL
type: String
- name: USERNAME
type: String
- name: PASSWORD
type: String
- name: TENANT_NAME
type: String
- name: VOLUME_DRIVER
type: String
- name: REGION_NAME
type: String
- name: DOMAIN_NAME
type: String
- name: REXRAY_PREEMPT
type: String
- name: CLUSTER_NAME
type: String
- name: EXHIBITOR_STORAGE_BACKEND
type: String
- name: EXHIBITOR_ZK_HOSTS
type: String
- name: EXHIBITOR_ZK_PATH
type: String
- name: AWS_ACCESS_KEY_ID
type: String
- name: AWS_REGION
type: String
- name: AWS_SECRET_ACCESS_KEY
type: String
- name: EXHIBITOR_EXPLICIT_KEYS
type: String
- name: S3_BUCKET
type: String
- name: S3_PREFIX
type: String
- name: EXHIBITOR_AZURE_ACCOUNT_NAME
type: String
- name: EXHIBITOR_AZURE_ACCOUNT_KEY
type: String
- name: EXHIBITOR_AZURE_PREFIX
type: String
- name: MASTER_DISCOVERY
type: String
- name: MASTER_LIST
type: String
- name: EXHIBITOR_ADDRESS
type: String
- name: NUM_MASTERS
type: String
- name: DCOS_OVERLAY_ENABLE
type: String
- name: DCOS_OVERLAY_CONFIG_ATTEMPTS
type: String
- name: DCOS_OVERLAY_MTU
type: String
- name: DCOS_OVERLAY_NETWORK
type: String
- name: DNS_SEARCH
type: String
- name: RESOLVERS
type: String
- name: CHECK_TIME
type: String
- name: DOCKER_REMOVE_DELAY
type: String
- name: GC_DELAY
type: String
- name: LOG_DIRECTORY
type: String
- name: PROCESS_TIMEOUT
type: String
- name: OAUTH_ENABLED
type: String
- name: TELEMETRY_ENABLED
type: String
- name: ROLES
type: String
######################################################################
#
# DC/OS configuration SoftwareConfig.
# Configuration files are rendered and injected into the instance.
#
dcos_config:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: {get_file: fragments/configure-dcos.sh}
######################################################################
#
# Master SoftwareDeployment.
#
write_params_master_deployment:
type: OS::Heat::SoftwareDeploymentGroup
properties:
config: {get_resource: write_params_master}
servers: {get_attr: [dcos_masters, attributes, dcos_server_id]}
input_values:
HTTP_PROXY: {get_param: http_proxy}
HTTPS_PROXY: {get_param: https_proxy}
NO_PROXY: {get_param: no_proxy}
AUTH_URL: {get_param: auth_url}
USERNAME: {get_param: username}
PASSWORD: {get_param: password}
TENANT_NAME: {get_param: tenant_name}
VOLUME_DRIVER: {get_param: volume_driver}
REGION_NAME: {get_param: region_name}
DOMAIN_NAME: {get_param: domain_name}
REXRAY_PREEMPT: {get_param: rexray_preempt}
CLUSTER_NAME: {get_param: cluster_name}
EXHIBITOR_STORAGE_BACKEND: {get_param: exhibitor_storage_backend}
EXHIBITOR_ZK_HOSTS: {get_param: exhibitor_zk_hosts}
EXHIBITOR_ZK_PATH: {get_param: exhibitor_zk_path}
AWS_ACCESS_KEY_ID: {get_param: aws_access_key_id}
AWS_REGION: {get_param: aws_region}
AWS_SECRET_ACCESS_KEY: {get_param: aws_secret_access_key}
EXHIBITOR_EXPLICIT_KEYS: {get_param: exhibitor_explicit_keys}
S3_BUCKET: {get_param: s3_bucket}
S3_PREFIX: {get_param: s3_prefix}
EXHIBITOR_AZURE_ACCOUNT_NAME: {get_param: exhibitor_azure_account_name}
EXHIBITOR_AZURE_ACCOUNT_KEY: {get_param: exhibitor_azure_account_key}
EXHIBITOR_AZURE_PREFIX: {get_param: exhibitor_azure_prefix}
MASTER_DISCOVERY: {get_param: master_discovery}
MASTER_LIST: {list_join: [' ', {get_attr: [dcos_masters, dcos_master_ip]}]}
EXHIBITOR_ADDRESS: {get_attr: [api_lb, address]}
NUM_MASTERS: {get_param: number_of_masters}
DCOS_OVERLAY_ENABLE: {get_param: dcos_overlay_enable}
DCOS_OVERLAY_CONFIG_ATTEMPTS: {get_param: dcos_overlay_config_attempts}
DCOS_OVERLAY_MTU: {get_param: dcos_overlay_mtu}
DCOS_OVERLAY_NETWORK: {get_param: dcos_overlay_network}
DNS_SEARCH: {get_param: dns_search}
RESOLVERS: {get_param: dns_nameserver}
CHECK_TIME: {get_param: check_time}
DOCKER_REMOVE_DELAY: {get_param: docker_remove_delay}
GC_DELAY: {get_param: gc_delay}
LOG_DIRECTORY: {get_param: log_directory}
PROCESS_TIMEOUT: {get_param: process_timeout}
OAUTH_ENABLED: {get_param: oauth_enabled}
TELEMETRY_ENABLED: {get_param: telemetry_enabled}
ROLES: master
dcos_config_deployment:
type: OS::Heat::SoftwareDeploymentGroup
depends_on:
- write_params_master_deployment
properties:
config: {get_resource: dcos_config}
servers: {get_attr: [dcos_masters, attributes, dcos_server_id]}
######################################################################
#
# DC/OS masters. This is a resource group that will create
# <number_of_masters> masters.
#
dcos_masters:
type: OS::Heat::ResourceGroup
depends_on:
- network
properties:
count: {get_param: number_of_masters}
resource_def:
type: dcosmaster.yaml
properties:
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: server_image}
master_flavor: {get_param: master_flavor}
external_network: {get_param: external_network}
fixed_network: {get_attr: [network, fixed_network]}
fixed_subnet: {get_attr: [network, fixed_subnet]}
secgroup_base_id: {get_attr: [secgroup, secgroup_base_id]}
secgroup_dcos_id: {get_attr: [secgroup, secgroup_dcos_id]}
api_pool_80_id: {get_attr: [api_lb, pool_80_id]}
api_pool_443_id: {get_attr: [api_lb, pool_443_id]}
api_pool_8080_id: {get_attr: [api_lb, pool_8080_id]}
api_pool_5050_id: {get_attr: [api_lb, pool_5050_id]}
api_pool_2181_id: {get_attr: [api_lb, pool_2181_id]}
api_pool_8181_id: {get_attr: [api_lb, pool_8181_id]}
######################################################################
#
# DC/OS slaves. This is a resource group that will initially
# create <number_of_slaves> public or private slaves,
# and needs to be manually scaled.
#
dcos_slaves:
type: OS::Heat::ResourceGroup
depends_on:
- network
properties:
count: {get_param: number_of_slaves}
removal_policies: [{resource_list: {get_param: slaves_to_remove}}]
resource_def:
type: dcosslave.yaml
properties:
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: server_image}
slave_flavor: {get_param: slave_flavor}
fixed_network: {get_attr: [network, fixed_network]}
fixed_subnet: {get_attr: [network, fixed_subnet]}
external_network: {get_param: external_network}
wait_condition_timeout: {get_param: wait_condition_timeout}
secgroup_base_id: {get_attr: [secgroup, secgroup_base_id]}
# DC/OS params
auth_url: {get_param: auth_url}
username: {get_param: username}
password: {get_param: password}
tenant_name: {get_param: tenant_name}
volume_driver: {get_param: volume_driver}
region_name: {get_param: region_name}
domain_name: {get_param: domain_name}
rexray_preempt: {get_param: rexray_preempt}
http_proxy: {get_param: http_proxy}
https_proxy: {get_param: https_proxy}
no_proxy: {get_param: no_proxy}
cluster_name: {get_param: cluster_name}
exhibitor_storage_backend: {get_param: exhibitor_storage_backend}
exhibitor_zk_hosts: {get_param: exhibitor_zk_hosts}
exhibitor_zk_path: {get_param: exhibitor_zk_path}
aws_access_key_id: {get_param: aws_access_key_id}
aws_region: {get_param: aws_region}
aws_secret_access_key: {get_param: aws_secret_access_key}
exhibitor_explicit_keys: {get_param: exhibitor_explicit_keys}
s3_bucket: {get_param: s3_bucket}
s3_prefix: {get_param: s3_prefix}
exhibitor_azure_account_name: {get_param: exhibitor_azure_account_name}
exhibitor_azure_account_key: {get_param: exhibitor_azure_account_key}
exhibitor_azure_prefix: {get_param: exhibitor_azure_prefix}
master_discovery: {get_param: master_discovery}
master_list: {list_join: [' ', {get_attr: [dcos_masters, dcos_master_ip]}]}
exhibitor_address: {get_attr: [api_lb, address]}
num_masters: {get_param: number_of_masters}
dcos_overlay_enable: {get_param: dcos_overlay_enable}
dcos_overlay_config_attempts: {get_param: dcos_overlay_config_attempts}
dcos_overlay_mtu: {get_param: dcos_overlay_mtu}
dcos_overlay_network: {get_param: dcos_overlay_network}
dns_search: {get_param: dns_search}
resolvers: {get_param: dns_nameserver}
check_time: {get_param: check_time}
docker_remove_delay: {get_param: docker_remove_delay}
gc_delay: {get_param: gc_delay}
log_directory: {get_param: log_directory}
process_timeout: {get_param: process_timeout}
oauth_enabled: {get_param: oauth_enabled}
telemetry_enabled: {get_param: telemetry_enabled}
outputs:
api_address:
value: {get_attr: [api_address_lb_switch, public_ip]}
description: >
This is the API endpoint of the DC/OS master. Use this to access
the DC/OS API from outside the cluster.
dcos_master_private:
value: {get_attr: [dcos_masters, dcos_master_ip]}
description: >
This is a list of the "private" addresses of all the DC/OS masters.
dcos_master:
value: {get_attr: [dcos_masters, dcos_master_external_ip]}
description: >
This is the "public" ip address of the DC/OS master server. Use this address to
log in to the DC/OS master via ssh or to access the DC/OS API
from outside the cluster.
dcos_slaves_private:
value: {get_attr: [dcos_slaves, dcos_slave_ip]}
description: >
This is a list of the "private" addresses of all the DC/OS slaves.
dcos_slaves:
value: {get_attr: [dcos_slaves, dcos_slave_external_ip]}
description: >
This is a list of the "public" addresses of all the DC/OS slaves.


@ -1,161 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines a single DC/OS master. This stack is
included by a ResourceGroup resource in the parent template
(dcoscluster.yaml).
parameters:
server_image:
type: string
description: glance image used to boot the server
master_flavor:
type: string
description: flavor to use when booting the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
fixed_network:
type: string
description: Network from which to allocate fixed addresses.
fixed_subnet:
type: string
description: Subnet from which to allocate fixed addresses.
secgroup_base_id:
type: string
description: ID of the security group for base.
secgroup_dcos_id:
type: string
description: ID of the security group for DC/OS master.
api_pool_80_id:
type: string
description: ID of the load balancer pool of Http.
api_pool_443_id:
type: string
description: ID of the load balancer pool of Https.
api_pool_8080_id:
type: string
description: ID of the load balancer pool of Marathon.
api_pool_5050_id:
type: string
description: ID of the load balancer pool of Mesos master.
api_pool_2181_id:
type: string
description: ID of the load balancer pool of Zookeeper.
api_pool_8181_id:
type: string
description: ID of the load balancer pool of Exhibitor.
resources:
######################################################################
#
# DC/OS master server.
#
dcos_master:
type: OS::Nova::Server
properties:
image: {get_param: server_image}
flavor: {get_param: master_flavor}
key_name: {get_param: ssh_key_name}
user_data_format: SOFTWARE_CONFIG
networks:
- port: {get_resource: dcos_master_eth0}
dcos_master_eth0:
type: OS::Neutron::Port
properties:
network: {get_param: fixed_network}
security_groups:
- {get_param: secgroup_base_id}
- {get_param: secgroup_dcos_id}
fixed_ips:
- subnet: {get_param: fixed_subnet}
replacement_policy: AUTO
dcos_master_floating:
type: Magnum::Optional::DcosMaster::Neutron::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_resource: dcos_master_eth0}
api_pool_80_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_80_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 80
api_pool_443_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_443_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 443
api_pool_8080_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_8080_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 8080
api_pool_5050_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_5050_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 5050
api_pool_2181_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_2181_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 2181
api_pool_8181_member:
type: Magnum::Optional::Neutron::LBaaS::PoolMember
properties:
pool: {get_param: api_pool_8181_id}
address: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
subnet: { get_param: fixed_subnet }
protocol_port: 8181
outputs:
dcos_master_ip:
value: {get_attr: [dcos_master_eth0, fixed_ips, 0, ip_address]}
description: >
This is the "private" address of the DC/OS master node.
dcos_master_external_ip:
value: {get_attr: [dcos_master_floating, floating_ip_address]}
description: >
This is the "public" address of the DC/OS master node.
dcos_server_id:
value: {get_resource: dcos_master}
description: >
This is the logical id of the DC/OS master node.


@ -1,338 +0,0 @@
heat_template_version: 2014-10-16
description: >
This is a nested stack that defines a single DC/OS slave. This stack is
included by a ResourceGroup resource in the parent template
(dcoscluster.yaml).
parameters:
server_image:
type: string
description: glance image used to boot the server
slave_flavor:
type: string
description: flavor to use when booting the server
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
wait_condition_timeout:
type: number
description: >
timeout for the Wait Conditions
http_proxy:
type: string
description: http proxy address for docker
https_proxy:
type: string
description: https proxy address for docker
no_proxy:
type: string
description: no proxies for docker
auth_url:
type: string
description: >
URL for DC/OS to authenticate against before sending requests
username:
type: string
description: user name
password:
type: string
description: >
user password, not set in current implementation, only used to
fill in for Kubernetes config file
hidden: true
tenant_name:
type: string
description: >
tenant_name is used to isolate access to Compute resources
volume_driver:
type: string
description: volume driver to use for container storage
region_name:
type: string
description: A logically separate section of the cluster
domain_name:
type: string
description: >
the domain defines the administrative boundaries for management
of Keystone entities
fixed_network:
type: string
description: Network from which to allocate fixed addresses.
fixed_subnet:
type: string
description: Subnet from which to allocate fixed addresses.
secgroup_base_id:
type: string
description: ID of the security group for base.
rexray_preempt:
type: string
description: >
enables any host to take control of a volume irrespective of whether
other hosts are using the volume
######################################################################
#
# DC/OS parameters
#
cluster_name:
type: string
description: human readable name for the DC/OS cluster
default: my-cluster
exhibitor_storage_backend:
type: string
exhibitor_zk_hosts:
type: string
exhibitor_zk_path:
type: string
aws_access_key_id:
type: string
aws_region:
type: string
aws_secret_access_key:
type: string
exhibitor_explicit_keys:
type: string
s3_bucket:
type: string
s3_prefix:
type: string
exhibitor_azure_account_name:
type: string
exhibitor_azure_account_key:
type: string
exhibitor_azure_prefix:
type: string
master_discovery:
type: string
master_list:
type: string
exhibitor_address:
type: string
default: 127.0.0.1
num_masters:
type: number
dcos_overlay_enable:
type: string
dcos_overlay_config_attempts:
type: string
dcos_overlay_mtu:
type: string
dcos_overlay_network:
type: string
dns_search:
type: string
resolvers:
type: string
check_time:
type: string
docker_remove_delay:
type: number
gc_delay:
type: number
log_directory:
type: string
process_timeout:
type: number
oauth_enabled:
type: string
telemetry_enabled:
type: string
resources:
slave_wait_handle:
type: OS::Heat::WaitConditionHandle
slave_wait_condition:
type: OS::Heat::WaitCondition
depends_on: dcos_slave
properties:
handle: {get_resource: slave_wait_handle}
timeout: {get_param: wait_condition_timeout}
secgroup_all_open:
type: OS::Neutron::SecurityGroup
properties:
rules:
- protocol: icmp
- protocol: tcp
- protocol: udp
######################################################################
#
# software configs. these are components that are combined into
# a multipart MIME user-data archive.
#
write_heat_params:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config:
str_replace:
template: {get_file: fragments/write-heat-params.sh}
params:
"$HTTP_PROXY": {get_param: http_proxy}
"$HTTPS_PROXY": {get_param: https_proxy}
"$NO_PROXY": {get_param: no_proxy}
"$AUTH_URL": {get_param: auth_url}
"$USERNAME": {get_param: username}
"$PASSWORD": {get_param: password}
"$TENANT_NAME": {get_param: tenant_name}
"$VOLUME_DRIVER": {get_param: volume_driver}
"$REGION_NAME": {get_param: region_name}
"$DOMAIN_NAME": {get_param: domain_name}
"$REXRAY_PREEMPT": {get_param: rexray_preempt}
"$CLUSTER_NAME": {get_param: cluster_name}
"$EXHIBITOR_STORAGE_BACKEND": {get_param: exhibitor_storage_backend}
"$EXHIBITOR_ZK_HOSTS": {get_param: exhibitor_zk_hosts}
"$EXHIBITOR_ZK_PATH": {get_param: exhibitor_zk_path}
"$AWS_ACCESS_KEY_ID": {get_param: aws_access_key_id}
"$AWS_REGION": {get_param: aws_region}
"$AWS_SECRET_ACCESS_KEY": {get_param: aws_secret_access_key}
"$EXHIBITOR_EXPLICIT_KEYS": {get_param: exhibitor_explicit_keys}
"$S3_BUCKET": {get_param: s3_bucket}
"$S3_PREFIX": {get_param: s3_prefix}
"$EXHIBITOR_AZURE_ACCOUNT_NAME": {get_param: exhibitor_azure_account_name}
"$EXHIBITOR_AZURE_ACCOUNT_KEY": {get_param: exhibitor_azure_account_key}
"$EXHIBITOR_AZURE_PREFIX": {get_param: exhibitor_azure_prefix}
"$MASTER_DISCOVERY": {get_param: master_discovery}
"$MASTER_LIST": {get_param: master_list}
"$EXHIBITOR_ADDRESS": {get_param: exhibitor_address}
"$NUM_MASTERS": {get_param: num_masters}
"$DCOS_OVERLAY_ENABLE": {get_param: dcos_overlay_enable}
"$DCOS_OVERLAY_CONFIG_ATTEMPTS": {get_param: dcos_overlay_config_attempts}
"$DCOS_OVERLAY_MTU": {get_param: dcos_overlay_mtu}
"$DCOS_OVERLAY_NETWORK": {get_param: dcos_overlay_network}
"$DNS_SEARCH": {get_param: dns_search}
"$RESOLVERS": {get_param: resolvers}
"$CHECK_TIME": {get_param: check_time}
"$DOCKER_REMOVE_DELAY": {get_param: docker_remove_delay}
"$GC_DELAY": {get_param: gc_delay}
"$LOG_DIRECTORY": {get_param: log_directory}
"$PROCESS_TIMEOUT": {get_param: process_timeout}
"$OAUTH_ENABLED": {get_param: oauth_enabled}
"$TELEMETRY_ENABLED": {get_param: telemetry_enabled}
"$ROLES": slave
dcos_config:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config: {get_file: fragments/configure-dcos.sh}
slave_wc_notify:
type: OS::Heat::SoftwareConfig
properties:
group: ungrouped
config:
str_replace:
template: |
#!/bin/bash -v
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: {get_attr: [slave_wait_handle, curl_cli]}
dcos_slave_init:
type: OS::Heat::MultipartMime
properties:
parts:
- config: {get_resource: write_heat_params}
- config: {get_resource: dcos_config}
- config: {get_resource: slave_wc_notify}
######################################################################
#
# a single DC/OS slave.
#
dcos_slave:
type: OS::Nova::Server
properties:
image: {get_param: server_image}
flavor: {get_param: slave_flavor}
key_name: {get_param: ssh_key_name}
user_data_format: RAW
user_data: {get_resource: dcos_slave_init}
networks:
- port: {get_resource: dcos_slave_eth0}
dcos_slave_eth0:
type: OS::Neutron::Port
properties:
network: {get_param: fixed_network}
security_groups:
- get_resource: secgroup_all_open
- get_param: secgroup_base_id
fixed_ips:
- subnet: {get_param: fixed_subnet}
dcos_slave_floating:
type: Magnum::Optional::DcosSlave::Neutron::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_resource: dcos_slave_eth0}
outputs:
dcos_slave_ip:
value: {get_attr: [dcos_slave_eth0, fixed_ips, 0, ip_address]}
description: >
This is the "private" address of the DC/OS slave node.
dcos_slave_external_ip:
value: {get_attr: [dcos_slave_floating, floating_ip_address]}
description: >
This is the "public" address of the DC/OS slave node.


@ -1,187 +0,0 @@
#!/bin/bash
. /etc/sysconfig/heat-params
GENCONF_SCRIPT_DIR=/opt/dcos
sudo mkdir -p $GENCONF_SCRIPT_DIR/genconf
sudo chown -R centos $GENCONF_SCRIPT_DIR/genconf
# Configure ip-detect
cat > $GENCONF_SCRIPT_DIR/genconf/ip-detect <<EOF
#!/usr/bin/env bash
set -o nounset -o errexit
export PATH=/usr/sbin:/usr/bin:\$PATH
echo \$(ip addr show eth0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -1)
EOF
# Configure config.yaml
CONFIG_YAML_FILE=$GENCONF_SCRIPT_DIR/genconf/config.yaml
####################################################
# Cluster Setup
# bootstrap_url is not configurable
echo "bootstrap_url: file://$GENCONF_SCRIPT_DIR/genconf/serve" > $CONFIG_YAML_FILE
# cluster_name
echo "cluster_name: $CLUSTER_NAME" >> $CONFIG_YAML_FILE
# exhibitor_storage_backend
if [ "static" == "$EXHIBITOR_STORAGE_BACKEND" ]; then
echo "exhibitor_storage_backend: static" >> $CONFIG_YAML_FILE
elif [ "zookeeper" == "$EXHIBITOR_STORAGE_BACKEND" ]; then
echo "exhibitor_storage_backend: zookeeper" >> $CONFIG_YAML_FILE
echo "exhibitor_zk_hosts: $EXHIBITOR_ZK_HOSTS" >> $CONFIG_YAML_FILE
echo "exhibitor_zk_path: $EXHIBITOR_ZK_PATH" >> $CONFIG_YAML_FILE
elif [ "aws_s3" == "$EXHIBITOR_STORAGE_BACKEND" ]; then
echo "exhibitor_storage_backend: aws_s3" >> $CONFIG_YAML_FILE
echo "aws_access_key_id: $AWS_ACCESS_KEY_ID" >> $CONFIG_YAML_FILE
echo "aws_region: $AWS_REGIION" >> $CONFIG_YAML_FILE
echo "aws_secret_access_key: $AWS_SECRET_ACCESS_KEY" >> $CONFIG_YAML_FILE
echo "exhibitor_explicit_keys: $EXHIBITOR_EXPLICIT_KEYS" >> $CONFIG_YAML_FILE
echo "s3_bucket: $S3_BUCKET" >> $CONFIG_YAML_FILE
echo "s3_prefix: $S3_PREFIX" >> $CONFIG_YAML_FILE
elif [ "azure" == "$EXHIBITOR_STORAGE_BACKEND" ]; then
echo "exhibitor_storage_backend: azure" >> $CONFIG_YAML_FILE
echo "exhibitor_azure_account_name: $EXHIBITOR_AZURE_ACCOUNT_NAME" >> $CONFIG_YAML_FILE
echo "exhibitor_azure_account_key: $EXHIBITOR_AZURE_ACCOUNT_KEY" >> $CONFIG_YAML_FILE
echo "exhibitor_azure_prefix: $EXHIBITOR_AZURE_PREFIX" >> $CONFIG_YAML_FILE
fi
# master_discovery
if [ "static" == "$MASTER_DISCOVERY" ]; then
echo "master_discovery: static" >> $CONFIG_YAML_FILE
echo "master_list:" >> $CONFIG_YAML_FILE
for ip in $MASTER_LIST; do
echo "- ${ip}" >> $CONFIG_YAML_FILE
done
elif [ "master_http_loadbalancer" == "$MASTER_DISCOVERY" ]; then
echo "master_discovery: master_http_loadbalancer" >> $CONFIG_YAML_FILE
echo "exhibitor_address: $EXHIBITOR_ADDRESS" >> $CONFIG_YAML_FILE
echo "num_masters: $NUM_MASTERS" >> $CONFIG_YAML_FILE
echo "master_list:" >> $CONFIG_YAML_FILE
for ip in $MASTER_LIST; do
echo "- ${ip}" >> $CONFIG_YAML_FILE
done
fi
####################################################
# Networking
# dcos_overlay_enable
if [ "false" == "$DCOS_OVERLAY_ENABLE" ]; then
echo "dcos_overlay_enable: false" >> $CONFIG_YAML_FILE
elif [ "true" == "$DCOS_OVERLAY_ENABLE" ]; then
echo "dcos_overlay_enable: true" >> $CONFIG_YAML_FILE
echo "dcos_overlay_config_attempts: $DCOS_OVERLAY_CONFIG_ATTEMPTS" >> $CONFIG_YAML_FILE
echo "dcos_overlay_mtu: $DCOS_OVERLAY_MTU" >> $CONFIG_YAML_FILE
echo "dcos_overlay_network:" >> $CONFIG_YAML_FILE
echo "$DCOS_OVERLAY_NETWORK" >> $CONFIG_YAML_FILE
fi
# dns_search
if [ -n "$DNS_SEARCH" ]; then
echo "dns_search: $DNS_SEARCH" >> $CONFIG_YAML_FILE
fi
# resolvers
echo "resolvers:" >> $CONFIG_YAML_FILE
for ip in $RESOLVERS; do
echo "- ${ip}" >> $CONFIG_YAML_FILE
done
# use_proxy
if [ -n "$HTTP_PROXY" ] && [ -n "$HTTPS_PROXY" ]; then
echo "use_proxy: true" >> $CONFIG_YAML_FILE
echo "http_proxy: $HTTP_PROXY" >> $CONFIG_YAML_FILE
echo "https_proxy: $HTTPS_PROXY" >> $CONFIG_YAML_FILE
if [ -n "$NO_PROXY" ]; then
echo "no_proxy:" >> $CONFIG_YAML_FILE
for ip in $NO_PROXY; do
echo "- ${ip}" >> $CONFIG_YAML_FILE
done
fi
fi
####################################################
# Performance and Tuning
# check_time
if [ "false" == "$CHECK_TIME" ]; then
echo "check_time: false" >> $CONFIG_YAML_FILE
fi
# docker_remove_delay
if [ "1" != "$DOCKER_REMOVE_DELAY" ]; then
echo "docker_remove_delay: $DOCKER_REMOVE_DELAY" >> $CONFIG_YAML_FILE
fi
# gc_delay
if [ "2" != "$GC_DELAY" ]; then
echo "gc_delay: $GC_DELAY" >> $CONFIG_YAML_FILE
fi
# log_directory
if [ "/genconf/logs" != "$LOG_DIRECTORY" ]; then
echo "log_directory: $LOG_DIRECTORY" >> $CONFIG_YAML_FILE
fi
# process_timeout
if [ "120" != "$PROCESS_TIMEOUT" ]; then
echo "process_timeout: $PROCESS_TIMEOUT" >> $CONFIG_YAML_FILE
fi
####################################################
# Security And Authentication
# oauth_enabled
if [ "false" == "$OAUTH_ENABLED" ]; then
echo "oauth_enabled: false" >> $CONFIG_YAML_FILE
fi
# telemetry_enabled
if [ "false" == "$TELEMETRY_ENABLED" ]; then
echo "telemetry_enabled: false" >> $CONFIG_YAML_FILE
fi
####################################################
# Rexray Configuration
# NOTE: This feature is considered experimental: use it at your own risk.
# We might add, change, or delete any functionality as described in this document.
# See https://dcos.io/docs/1.8/usage/storage/external-storage/
if [ "$VOLUME_DRIVER" == "rexray" ]; then
if [ "${AUTH_URL##*/}" == "v3" ]; then
extra_configs="domainName: $DOMAIN_NAME"
else
extra_configs=""
fi
echo "rexray_config:" >> $CONFIG_YAML_FILE
echo " rexray:" >> $CONFIG_YAML_FILE
echo " modules:" >> $CONFIG_YAML_FILE
echo " default-admin:" >> $CONFIG_YAML_FILE
echo " host: tcp://127.0.0.1:61003" >> $CONFIG_YAML_FILE
echo " storageDrivers:" >> $CONFIG_YAML_FILE
echo " - openstack" >> $CONFIG_YAML_FILE
echo " volume:" >> $CONFIG_YAML_FILE
echo " mount:" >> $CONFIG_YAML_FILE
echo " preempt: $REXRAY_PREEMPT" >> $CONFIG_YAML_FILE
echo " openstack:" >> $CONFIG_YAML_FILE
echo " authUrl: $AUTH_URL" >> $CONFIG_YAML_FILE
echo " username: $USERNAME" >> $CONFIG_YAML_FILE
echo " password: $PASSWORD" >> $CONFIG_YAML_FILE
echo " tenantName: $TENANT_NAME" >> $CONFIG_YAML_FILE
echo " regionName: $REGION_NAME" >> $CONFIG_YAML_FILE
echo " availabilityZoneName: nova" >> $CONFIG_YAML_FILE
echo " $extra_configs" >> $CONFIG_YAML_FILE
fi
cd $GENCONF_SCRIPT_DIR
sudo bash $GENCONF_SCRIPT_DIR/dcos_generate_config.sh --genconf
cd $GENCONF_SCRIPT_DIR/genconf/serve
sudo bash $GENCONF_SCRIPT_DIR/genconf/serve/dcos_install.sh --no-block-dcos-setup $ROLES


@ -1,56 +0,0 @@
#!/bin/sh
mkdir -p /etc/sysconfig
cat > /etc/sysconfig/heat-params <<EOF
HTTP_PROXY="$HTTP_PROXY"
HTTPS_PROXY="$HTTPS_PROXY"
NO_PROXY="$NO_PROXY"
AUTH_URL="$AUTH_URL"
USERNAME="$USERNAME"
PASSWORD="$PASSWORD"
TENANT_NAME="$TENANT_NAME"
VOLUME_DRIVER="$VOLUME_DRIVER"
REGION_NAME="$REGION_NAME"
DOMAIN_NAME="$DOMAIN_NAME"
REXRAY_PREEMPT="$REXRAY_PREEMPT"
CLUSTER_NAME="$CLUSTER_NAME"
EXHIBITOR_STORAGE_BACKEND="$EXHIBITOR_STORAGE_BACKEND"
EXHIBITOR_ZK_HOSTS="$EXHIBITOR_ZK_HOSTS"
EXHIBITOR_ZK_PATH="$EXHIBITOR_ZK_PATH"
AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID"
AWS_REGION="$AWS_REGION"
AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY"
EXHIBITOR_EXPLICIT_KEYS="$EXHIBITOR_EXPLICIT_KEYS"
S3_BUCKET="$S3_BUCKET"
S3_PREFIX="$S3_PREFIX"
EXHIBITOR_AZURE_ACCOUNT_NAME="$EXHIBITOR_AZURE_ACCOUNT_NAME"
EXHIBITOR_AZURE_ACCOUNT_KEY="$EXHIBITOR_AZURE_ACCOUNT_KEY"
EXHIBITOR_AZURE_PREFIX="$EXHIBITOR_AZURE_PREFIX"
MASTER_DISCOVERY="$MASTER_DISCOVERY"
MASTER_LIST="$MASTER_LIST"
EXHIBITOR_ADDRESS="$EXHIBITOR_ADDRESS"
NUM_MASTERS="$NUM_MASTERS"
DCOS_OVERLAY_ENABLE="$DCOS_OVERLAY_ENABLE"
DCOS_OVERLAY_CONFIG_ATTEMPTS="$DCOS_OVERLAY_CONFIG_ATTEMPTS"
DCOS_OVERLAY_MTU="$DCOS_OVERLAY_MTU"
DCOS_OVERLAY_NETWORK="$DCOS_OVERLAY_NETWORK"
DNS_SEARCH="$DNS_SEARCH"
RESOLVERS="$RESOLVERS"
CHECK_TIME="$CHECK_TIME"
DOCKER_REMOVE_DELAY="$DOCKER_REMOVE_DELAY"
GC_DELAY="$GC_DELAY"
LOG_DIRECTORY="$LOG_DIRECTORY"
PROCESS_TIMEOUT="$PROCESS_TIMEOUT"
OAUTH_ENABLED="$OAUTH_ENABLED"
TELEMETRY_ENABLED="$TELEMETRY_ENABLED"
ROLES="$ROLES"
EOF


@ -1,201 +0,0 @@
heat_template_version: 2014-10-16
parameters:
fixed_subnet:
type: string
external_network:
type: string
resources:
# Admin Router is a customized Nginx that proxies all of the internal
# services on ports 80 and 443 (if HTTPS is configured)
# See https://dcos.io/docs/1.8/administration/installing/custom/configuration-parameters/#-a-name-master-a-master_discovery
# If master_discovery is set to master_http_loadbalancer, the
# load balancer must accept traffic on ports 8080, 5050, 80, and 443,
# and forward it to the same ports on the master
#
# Ports 2181 and 8181 are not mentioned in the DC/OS documentation.
# When a cluster is created with a load balancer, the slave nodes reach
# some services on the master nodes through the load balancer IP, so
# these ports must also be open or those connections will fail.
loadbalancer:
type: Magnum::Optional::Neutron::LBaaS::LoadBalancer
properties:
vip_subnet: {get_param: fixed_subnet}
listener_80:
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: HTTP
protocol_port: 80
pool_80:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_80}
protocol: HTTP
monitor_80:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_80 }
listener_443:
depends_on: monitor_80
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: HTTPS
protocol_port: 443
pool_443:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_443}
protocol: HTTPS
monitor_443:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_443 }
listener_8080:
depends_on: monitor_443
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: TCP
protocol_port: 8080
pool_8080:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_8080}
protocol: TCP
monitor_8080:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_8080 }
listener_5050:
depends_on: monitor_8080
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: TCP
protocol_port: 5050
pool_5050:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_5050}
protocol: TCP
monitor_5050:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_5050 }
listener_2181:
depends_on: monitor_5050
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: TCP
protocol_port: 2181
pool_2181:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_2181}
protocol: TCP
monitor_2181:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_2181 }
listener_8181:
depends_on: monitor_2181
type: Magnum::Optional::Neutron::LBaaS::Listener
properties:
loadbalancer: {get_resource: loadbalancer}
protocol: TCP
protocol_port: 8181
pool_8181:
type: Magnum::Optional::Neutron::LBaaS::Pool
properties:
lb_algorithm: ROUND_ROBIN
listener: {get_resource: listener_8181}
protocol: TCP
monitor_8181:
type: Magnum::Optional::Neutron::LBaaS::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
pool: { get_resource: pool_8181 }
floating:
type: Magnum::Optional::Neutron::LBaaS::FloatingIP
properties:
floating_network: {get_param: external_network}
port_id: {get_attr: [loadbalancer, vip_port_id]}
outputs:
pool_80_id:
value: {get_resource: pool_80}
pool_443_id:
value: {get_resource: pool_443}
pool_8080_id:
value: {get_resource: pool_8080}
pool_5050_id:
value: {get_resource: pool_5050}
pool_2181_id:
value: {get_resource: pool_2181}
pool_8181_id:
value: {get_resource: pool_8181}
address:
value: {get_attr: [loadbalancer, vip_address]}
floating_address:
value: {get_attr: [floating, floating_ip_address]}


@ -1,115 +0,0 @@
heat_template_version: 2014-10-16
parameters:
resources:
######################################################################
#
# security groups. we need to permit network traffic of various
# sorts.
# The following is a list of ports used by internal DC/OS components,
# and their corresponding systemd unit.
# https://dcos.io/docs/1.8/administration/installing/ports/
#
# The VIP features, added in DC/OS 1.8, require that ports 32768 - 65535
# are open between all agent and master nodes for both TCP and UDP.
# https://dcos.io/docs/1.8/administration/upgrading/
#
secgroup_base:
type: OS::Neutron::SecurityGroup
properties:
rules:
- protocol: icmp
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
remote_mode: remote_group_id
- protocol: udp
remote_mode: remote_group_id
# All nodes
- protocol: tcp
port_range_min: 32768
port_range_max: 65535
# Master nodes
- protocol: tcp
port_range_min: 53
port_range_max: 53
- protocol: tcp
port_range_min: 1050
port_range_max: 1050
- protocol: tcp
port_range_min: 1801
port_range_max: 1801
- protocol: tcp
port_range_min: 7070
port_range_max: 7070
# dcos-oauth
- protocol: tcp
port_range_min: 8101
port_range_max: 8101
- protocol: tcp
port_range_min: 8123
port_range_max: 8123
- protocol: tcp
port_range_min: 9000
port_range_max: 9000
- protocol: tcp
port_range_min: 9942
port_range_max: 9942
- protocol: tcp
port_range_min: 9990
port_range_max: 9990
- protocol: tcp
port_range_min: 15055
port_range_max: 15055
- protocol: udp
port_range_min: 53
port_range_max: 53
- protocol: udp
port_range_min: 32768
port_range_max: 65535
secgroup_dcos:
type: OS::Neutron::SecurityGroup
properties:
rules:
# Admin Router is a customized Nginx that proxies all of the internal
# services on ports 80 and 443 (if HTTPS is configured)
# See https://github.com/dcos/adminrouter
# If master_discovery is set to master_http_loadbalancer, the
# load balancer must accept traffic on ports 8080, 5050, 80, and 443,
# and forward it to the same ports on the master
# Admin Router http
- protocol: tcp
port_range_min: 80
port_range_max: 80
# Admin Router https
- protocol: tcp
port_range_min: 443
port_range_max: 443
# Marathon
- protocol: tcp
port_range_min: 8080
port_range_max: 8080
# Mesos master
- protocol: tcp
port_range_min: 5050
port_range_max: 5050
# Exhibitor
- protocol: tcp
port_range_min: 8181
port_range_max: 8181
# Zookeeper
- protocol: tcp
port_range_min: 2181
port_range_max: 2181
outputs:
secgroup_base_id:
value: {get_resource: secgroup_base}
secgroup_dcos_id:
value: {get_resource: secgroup_dcos}


@ -1,15 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
version = '1.0.0'
driver = 'dcos_centos_v1'
container_version = '1.11.2'


@ -1,163 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils
from magnum.drivers.heat import template_def
LOG = logging.getLogger(__name__)
class ServerAddressOutputMapping(template_def.OutputMapping):
public_ip_output_key = None
private_ip_output_key = None
def __init__(self, dummy_arg, cluster_attr=None):
self.cluster_attr = cluster_attr
self.heat_output = self.public_ip_output_key
def set_output(self, stack, cluster_template, cluster):
if not cluster_template.floating_ip_enabled:
self.heat_output = self.private_ip_output_key
LOG.debug("Using heat_output: %s", self.heat_output)
super(ServerAddressOutputMapping,
self).set_output(stack, cluster_template, cluster)
class MasterAddressOutputMapping(ServerAddressOutputMapping):
public_ip_output_key = 'dcos_master'
private_ip_output_key = 'dcos_master_private'
class NodeAddressOutputMapping(ServerAddressOutputMapping):
public_ip_output_key = 'dcos_slaves'
private_ip_output_key = 'dcos_slaves_private'
class DcosCentosTemplateDefinition(template_def.BaseTemplateDefinition):
"""DC/OS template for Centos."""
def __init__(self):
super(DcosCentosTemplateDefinition, self).__init__()
self.add_parameter('external_network',
cluster_template_attr='external_network_id',
required=True)
self.add_parameter('number_of_slaves',
cluster_attr='node_count')
self.add_parameter('master_flavor',
cluster_template_attr='master_flavor_id')
self.add_parameter('slave_flavor',
cluster_template_attr='flavor_id')
self.add_parameter('cluster_name',
cluster_attr='name')
self.add_parameter('volume_driver',
cluster_template_attr='volume_driver')
self.add_output('api_address',
cluster_attr='api_address')
self.add_output('dcos_master_private',
cluster_attr=None)
self.add_output('dcos_slaves_private',
cluster_attr=None)
self.add_output('dcos_slaves',
cluster_attr='node_addresses',
mapping_type=NodeAddressOutputMapping)
self.add_output('dcos_master',
cluster_attr='master_addresses',
mapping_type=MasterAddressOutputMapping)
def get_params(self, context, cluster_template, cluster, **kwargs):
extra_params = kwargs.pop('extra_params', {})
# HACK(apmelton) - This uses the user's bearer token, ideally
# it should be replaced with an actual trust token with only
# access to do what the template needs it to do.
osc = self.get_osc(context)
extra_params['auth_url'] = context.auth_url
extra_params['username'] = context.user_name
extra_params['tenant_name'] = context.tenant
extra_params['domain_name'] = context.domain_name
extra_params['region_name'] = osc.cinder_region_name()
# Mesos-related label parameters are deleted
# because they are not optional in the DC/OS configuration
label_list = ['rexray_preempt',
'exhibitor_storage_backend',
'exhibitor_zk_hosts',
'exhibitor_zk_path',
'aws_access_key_id',
'aws_region',
'aws_secret_access_key',
'exhibitor_explicit_keys',
's3_bucket',
's3_prefix',
'exhibitor_azure_account_name',
'exhibitor_azure_account_key',
'exhibitor_azure_prefix',
'dcos_overlay_enable',
'dcos_overlay_config_attempts',
'dcos_overlay_mtu',
'dcos_overlay_network',
'dns_search',
'check_time',
'docker_remove_delay',
'gc_delay',
'log_directory',
'process_timeout',
'oauth_enabled',
'telemetry_enabled']
for label in label_list:
extra_params[label] = cluster_template.labels.get(label)
# By default, master_discovery is set to 'static'
# If --master-lb-enabled is specified,
# master_discovery will be set to 'master_http_loadbalancer'
if cluster_template.master_lb_enabled:
extra_params['master_discovery'] = 'master_http_loadbalancer'
if 'true' == extra_params['dcos_overlay_enable']:
overlay_obj = jsonutils.loads(extra_params['dcos_overlay_network'])
extra_params['dcos_overlay_network'] = ''' vtep_subnet: %s
vtep_mac_oui: %s
overlays:''' % (overlay_obj['vtep_subnet'],
overlay_obj['vtep_mac_oui'])
for item in overlay_obj['overlays']:
extra_params['dcos_overlay_network'] += '''
- name: %s
subnet: %s
prefix: %s''' % (item['name'],
item['subnet'],
item['prefix'])
scale_mgr = kwargs.pop('scale_manager', None)
if scale_mgr:
hosts = self.get_output('dcos_slaves_private')
extra_params['slaves_to_remove'] = (
scale_mgr.get_removal_nodes(hosts))
return super(DcosCentosTemplateDefinition,
self).get_params(context, cluster_template, cluster,
extra_params=extra_params,
**kwargs)
def get_env_files(self, cluster_template, cluster):
env_files = []
template_def.add_priv_net_env_file(env_files, cluster_template)
template_def.add_lb_env_file(env_files, cluster_template)
template_def.add_fip_env_file(env_files, cluster_template)
return env_files


@ -1,19 +0,0 @@
# Magnum openSUSE K8s driver
This is the openSUSE Kubernetes driver for Magnum, which allows deploying a Kubernetes cluster on openSUSE.
## Installation
### 1. Install the openSUSE K8s driver in Magnum
- To install the driver, from this directory run:
`python ./setup.py install`
### 2. Enable driver in magnum.conf
enabled_definitions = ...,magnum_vm_opensuse_k8s
### 3. Restart Magnum
Both Magnum services have to be restarted: `magnum-api` and `magnum-conductor`.
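For example, on a host where the Magnum services run under systemd, the whole flow might look like the sketch below (the `openstack-magnum-api` and `openstack-magnum-conductor` unit names are an assumption and vary by distribution):
```shell
# Install the driver from this directory
python ./setup.py install

# Enable the driver in magnum.conf (as in step 2 above):
# enabled_definitions = ...,magnum_vm_opensuse_k8s

# Restart both Magnum services so the new template definition is loaded
# (unit names are an assumption; adjust for your distribution)
sudo systemctl restart openstack-magnum-api
sudo systemctl restart openstack-magnum-conductor
```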


@ -1,27 +0,0 @@
# Copyright 2016 Rackspace Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnum.drivers.common import driver
from magnum.drivers.k8s_opensuse_v1 import template_def
class Driver(driver.Driver):
provides = [
{'server_type': 'vm',
'os': 'opensuse',
'coe': 'kubernetes'},
]
def get_template_definition(self):
return template_def.JeOSK8sTemplateDefinition()


@ -1,39 +0,0 @@
Build openSUSE Leap 42.1 image for OpenStack Magnum
===================================================
These instructions describe how to manually build an openSUSE Leap 42.1 image
for OpenStack Magnum with Kubernetes packages.
Link to the image:
http://download.opensuse.org/repositories/Cloud:/Images:/Leap_42.1/images/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s.x86_64.qcow2
## Requirements
Please install openSUSE (https://www.opensuse.org/) on a physical or virtual machine.
## Install packages
Install the `kiwi` package on the openSUSE node where you want to build the image
`zypper install kiwi`
Create the destination directory where the image will be built
`mkdir /tmp/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s`
## Build image
Run the following in the directory containing the `openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s` kiwi template
`kiwi --verbose 3 --logfile terminal --build . --destdir /tmp/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s`
## Get image
After `kiwi` finishes, the image can be found in the `/tmp/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s`
directory under the name `openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s.x86_64-1.1.1.qcow2`.
Full path:
`/tmp/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s.x86_64-1.1.1.qcow2`
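Putting the documented steps together, a minimal end-to-end sketch (same paths as above, run on the openSUSE build host) is:
```shell
# Install the kiwi image builder
zypper install kiwi

# Create the destination directory for the build
mkdir /tmp/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s

# Build the image from the kiwi template in the current directory
kiwi --verbose 3 --logfile terminal --build . \
    --destdir /tmp/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s

# The resulting qcow2 image
ls /tmp/openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s/*.qcow2
```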
Have fun !!!


@ -1,119 +0,0 @@
#!/bin/bash
#================
# FILE : config.sh
#----------------
# PROJECT : openSUSE KIWI Image System
# COPYRIGHT : (c) 2006 SUSE LINUX Products GmbH. All rights reserved
# :
# AUTHOR : Marcus Schaefer <ms@suse.de>
# :
# BELONGS TO : Operating System images
# :
# DESCRIPTION : configuration script for SUSE based
# : operating systems
# :
# :
# STATUS : BETA
#----------------
#======================================
# Functions...
#--------------------------------------
test -f /.kconfig && . /.kconfig
test -f /.profile && . /.profile
mkdir /var/lib/misc/reconfig_system
#======================================
# Greeting...
#--------------------------------------
echo "Configure image: [$name]..."
#======================================
# add missing fonts
#--------------------------------------
CONSOLE_FONT="lat9w-16.psfu"
#======================================
# prepare for setting root pw, timezone
#--------------------------------------
echo ** "reset machine settings"
sed -i 's/^root:[^:]*:/root:*:/' /etc/shadow
rm /etc/machine-id
rm /etc/localtime
rm /var/lib/zypp/AnonymousUniqueId
rm /var/lib/systemd/random-seed
#======================================
# SuSEconfig
#--------------------------------------
echo "** Running suseConfig..."
suseConfig
echo "** Running ldconfig..."
/sbin/ldconfig
#======================================
# Setup baseproduct link
#--------------------------------------
suseSetupProduct
#======================================
# Specify default runlevel
#--------------------------------------
baseSetRunlevel 3
#======================================
# Add missing gpg keys to rpm
#--------------------------------------
suseImportBuildKey
#======================================
# Firewall Configuration
#--------------------------------------
echo '** Configuring firewall...'
chkconfig SuSEfirewall2_init on
chkconfig SuSEfirewall2_setup on
#======================================
# Enable sshd
#--------------------------------------
chkconfig sshd on
#======================================
# Remove doc files
#--------------------------------------
baseStripDocs
#======================================
# remove rpms defined in config.xml in the image type=delete section
#--------------------------------------
baseStripRPM
#======================================
# Sysconfig Update
#--------------------------------------
echo '** Update sysconfig entries...'
baseUpdateSysConfig /etc/sysconfig/SuSEfirewall2 FW_CONFIGURATIONS_EXT sshd
baseUpdateSysConfig /etc/sysconfig/console CONSOLE_FONT "$CONSOLE_FONT"
# baseUpdateSysConfig /etc/sysconfig/snapper SNAPPER_CONFIGS root
if [[ "${kiwi_iname}" != *"OpenStack"* ]]; then
baseUpdateSysConfig /etc/sysconfig/network/dhcp DHCLIENT_SET_HOSTNAME yes
fi
# true
#======================================
# SSL Certificates Configuration
#--------------------------------------
echo '** Rehashing SSL Certificates...'
update-ca-certificates
if [ ! -s /var/log/zypper.log ]; then
> /var/log/zypper.log
fi
# only for debugging
#systemctl enable debug-shell.service
baseCleanMount
exit 0


@ -1,39 +0,0 @@
#!/bin/bash
#================
# FILE : image.sh
#----------------
# PROJECT : openSUSE KIWI Image System
# COPYRIGHT : (c) 2006 SUSE LINUX Products GmbH. All rights reserved
# :
# AUTHOR : Marcus Schaefer <ms@suse.de>
# :
# BELONGS TO : Operating System images
# :
# DESCRIPTION : configuration script for SUSE based
# : operating systems
# :
# :
# STATUS : BETA
#----------------
test -f /.kconfig && . /.kconfig
test -f /.profile && . /.profile
if [[ "${kiwi_iname}" = *"OpenStack"* ]]; then
# disable jeos-firstboot service
# We need to install it because it provides files required in the
# overlay for the image. However, the service itself is something that
# requires interaction on boot, which is not good for OpenStack, and the
# interaction actually doesn't bring any benefit in OpenStack.
systemctl mask jeos-firstboot.service
# enable cloud-init services
suseInsertService cloud-init-local
suseInsertService cloud-init
suseInsertService cloud-config
suseInsertService cloud-final
echo '*** adjusting cloud.cfg for openstack'
sed -i -e '/mount_default_fields/{adatasource_list: [ NoCloud, OpenStack, None ]
}' /etc/cloud/cloud.cfg
fi


@ -1,160 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<image schemaversion="6.1" name="openSUSE-Leap-42.1-JeOS-for-OpenStack-Magnum-K8s">
<description type="system">
<author>SUSE Containers Team</author>
<contact>docker-devel@suse.de</contact>
<specification>Kubernetes openSUSE Leap 42.1 image for OpenStack Magnum</specification>
</description>
<preferences>
<version>1.1.1</version>
<packagemanager>zypper</packagemanager>
<bootsplash-theme>openSUSE</bootsplash-theme>
<bootloader-theme>openSUSE</bootloader-theme>
<rpm-excludedocs>true</rpm-excludedocs>
<!-- temporary filesystem change to ext4, btrfs is just a nightmare on steroids for aarch64 -->
<type
image="vmx"
filesystem="ext4"
boot="vmxboot/suse-leap42.1"
format="qcow2"
vga="normal"
boottimeout="1"
bootloader="grub2"
firmware="uefi"
kernelcmdline="console=tty1 console=ttyS0,115200n8 console=ttyAMA0,115200n8 plymouth.enable=0 net.ifnames=0"
bootpartition="false"
bootkernel="custom"
devicepersistency="by-label"
/>
</preferences>
<repository type="rpm-md">
<source path="obs://Virtualization:containers/openSUSE_Leap_42.1"/>
</repository>
<repository type="rpm-md">
<source path="obs://Virtualization/openSUSE_Leap_42.1"/>
</repository>
<repository type="rpm-md">
<source path="obs://openSUSE:Leap:42.1:Update/standard"/>
</repository>
<repository type="rpm-md">
<source path="obs://openSUSE:Leap:42.1/standard"/>
</repository>
<repository type="rpm-md">
<source path="obs://openSUSE:Leap:42.1:Images/standard"/>
</repository>
<packages type="image">
<!-- jeos server -->
<package name="patterns-openSUSE-minimal_base"/>
<package name="aaa_base-extras"/>
<package name="acl"/>
<package name="curl"/>
<package name="dracut"/>
<package name="fipscheck"/>
<package name="grub2-branding-openSUSE" bootinclude="true"/>
<package name="iputils"/>
<package name="jeos-firstboot"/>
<package name="vim"/>
<package name="which"/>
<package name="gettext-runtime"/>
<package name="shim" arch="x86_64"/>
<package name="grub2"/>
<package name="grub2-x86_64-efi" arch="x86_64"/>
<package name="syslinux" arch="i586,x86_64"/>
<package name="fontconfig"/>
<package name="fonts-config"/>
<package name="haveged"/>
<package name="less" />
<package name="openslp"/>
<package name="tar"/>
<package name="parted"/>
<!-- <package name="SuSEfirewall2"/> not needed for JeOS and OpenStack Cloud-->
<package name="systemd"/>
<package name="systemd-sysvinit"/>
<package name="timezone"/>
<package name="wicked"/>
<package name="iproute2"/>
<package name="openssh"/>
<package name="elfutils"/>
<!-- kernel-default-base doesn't include kernel module tun.ko required by flanneld -->
<!-- <package name="kernel-default-base" bootinclude="true" replaces="kernel-default"/> -->
<package name="kernel-default" bootinclude="true"/>
<package name="python-base"/>
<package name="rsync"/>
<package name="libyui-ncurses-pkg7"/>
<package name="salt-minion"/>
<!-- packages required by file provides, BS can't resolve them -->
<package name="openSUSE-build-key"/>
<package name="pkg-config"/>
<package name="sg3_utils"/>
<package name="ncurses-utils"/>
<package name="krb5"/>
<package name="xfsprogs" />
<!-- cloud specific packages -->
<package name="cloud-init" />
<!-- kubernetes -->
<package name='docker'/>
<package name='etcd'/>
<package name='etcdctl'/>
<package name='flannel'/>
<package name='kubernetes-client'/>
<package name='kubernetes-master'/>
<package name='kubernetes-node'/>
</packages>
<packages type="bootstrap">
<package name="udev"/>
<package name="filesystem"/>
<package name="glibc-locale"/>
<package name="cracklib-dict-small"/>
<package name="ca-certificates"/>
<package name="openSUSE-release"/>
</packages>
<packages type="delete">
<package name="mtools"/>
<package name="initviocons"/>
<package name="cryptsetup"/>
<package name="autoyast2-installation"/>
<package name="bind-utils"/>
<package name="Mesa" />
<package name="Mesa-libGL1"/>
<package name="Mesa-libglapi0"/>
<package name="Mesa-EGL1"/>
<package name="Mesa-libEGL1"/>
<package name="lvm2"/>
<package name="sg3_utils"/>
<package name="libcairo2"/>
<package name="libcxb-dri2-0"/>
<package name="libgbm1"/>
<package name="libgio-2_0-0"/>
<package name="libharfbuzz0"/>
<package name="libpango-1_0-0"/>
<package name="libpixman-1-0"/>
<package name="libply-splash-graphics2"/>
<package name="libX11-6"/>
<package name="libX11-xcb1"/>
<package name="libxcb1"/>
<package name="libX11-data"/>
<package name="libXdamage1"/>
<package name="libXext6"/>
<package name="libXfixes3"/>
<package name="libXft2" />
<package name="libXrender1"/>
<package name="libXxf86vm1"/>
<package name="libpng16-16"/>
<package name="os-prober"/>
<package name="pango-modules"/>
<package name="plymouth"/>
<package name="plymouth-plugin-label"/>
<package name="plymouth-plugin-script"/>
<package name="plymouth-scripts"/>
<package name="plymouth-branding-openSUSE"/>
<package name="fontconfig"/>
<package name="fonts-config"/>
<package name="gnu-unifont-bitmap-fonts"/>
<package name="gio-branding-upstream"/>
<package name="libXau6"/>
<package name="libfreetype6"/>
<package name="shared-mime-info"/>
</packages>
</image>


@ -1,36 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2016 SUSE Linux GmbH
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import setuptools
setuptools.setup(
name="k8s_opensuse_v1",
version="1.0",
packages=['k8s_opensuse_v1'],
package_data={
'k8s_opensuse_v1': ['templates/*', 'templates/fragments/*']
},
author="SUSE Linux GmbH",
author_email="opensuse-cloud@opensuse.org",
description="Magnum openSUSE Kubernetes driver",
license="Apache",
keywords="magnum opensuse driver",
entry_points={
'magnum.template_definitions': [
'k8s_opensuse_v1 = k8s_opensuse_v1:JeOSK8sTemplateDefinition'
]
}
)


@ -1,71 +0,0 @@
# Copyright 2016 Rackspace Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import magnum.conf
from magnum.drivers.common import k8s_template_def
from magnum.drivers.common import template_def
CONF = magnum.conf.CONF
class JeOSK8sTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
"""Kubernetes template for openSUSE/SLES JeOS VM."""
def __init__(self):
super(JeOSK8sTemplateDefinition, self).__init__()
self.add_parameter('docker_volume_size',
cluster_template_attr='docker_volume_size')
self.add_output('kube_minions',
cluster_attr='node_addresses')
self.add_output('kube_masters',
cluster_attr='master_addresses')
def get_params(self, context, cluster_template, cluster, **kwargs):
extra_params = kwargs.pop('extra_params', {})
extra_params['username'] = context.user_name
extra_params['tenant_name'] = context.tenant
return super(JeOSK8sTemplateDefinition,
self).get_params(context, cluster_template, cluster,
extra_params=extra_params,
**kwargs)
def get_env_files(self, cluster_template, cluster):
env_files = []
if cluster_template.master_lb_enabled:
env_files.append(
template_def.COMMON_ENV_PATH + 'with_master_lb.yaml')
else:
env_files.append(
template_def.COMMON_ENV_PATH + 'no_master_lb.yaml')
if cluster_template.floating_ip_enabled:
env_files.append(
template_def.COMMON_ENV_PATH + 'enable_floating_ip.yaml')
else:
env_files.append(
template_def.COMMON_ENV_PATH + 'disable_floating_ip.yaml')
return env_files
@property
def driver_module_path(self):
return __name__[:__name__.rindex('.')]
@property
def template_path(self):
return os.path.join(os.path.dirname(os.path.realpath(__file__)),
'templates/kubecluster.yaml')


@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -1,129 +0,0 @@
A Kubernetes cluster with Heat
==============================
These [Heat][] templates will deploy a [Kubernetes][] cluster that
supports automatic scaling based on CPU load.
[heat]: https://wiki.openstack.org/wiki/Heat
[kubernetes]: https://github.com/GoogleCloudPlatform/kubernetes
The cluster uses [Flannel][] to provide an overlay network connecting
pods deployed on different minions.
[flannel]: https://github.com/coreos/flannel
## Requirements
### Guest image
These templates will work with either openSUSE JeOS or SLES JeOS images
that are prepared for Docker and Kubernetes.
You can enable docker registry v2 by setting the "registry_enabled"
parameter to "true".
## Creating the stack
Creating an environment file `local.yaml` with parameters specific to
your environment:
    parameters:
      ssh_key_name: testkey
      external_network: public
      dns_nameserver: 192.168.200.1
      server_image: openSUSELeap42.1-jeos-k8s
      registry_enabled: true
      registry_username: username
      registry_password: password
      registry_domain: domain
      registry_trust_id: trust_id
      registry_auth_url: auth_url
      registry_region: region
      registry_container: container
And then create the stack, referencing that environment file:
heat stack-create -f kubecluster.yaml -e local.yaml my-kube-cluster
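If the unified OpenStack client and its Heat plugin happen to be installed (an
assumption; these templates only require the `heat` client shown above), an
equivalent invocation is:

    openstack stack create -t kubecluster.yaml -e local.yaml my-kube-cluster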
You must provide values for:
- `ssh_key_name`
- `server_image`
If you enable docker registry v2, you must provide values for:
- `registry_username`
- `registry_password`
- `registry_domain`
- `registry_trust_id`
- `registry_auth_url`
- `registry_region`
- `registry_container`
## Interacting with Kubernetes
You can get the ip address of the Kubernetes master using the `heat
output-show` command:
$ heat output-show my-kube-cluster kube_masters
"192.168.200.86"
You can ssh into that server as the `minion` user:
$ ssh minion@192.168.200.86
And once logged in you can run `kubectl`, etc:
$ kubectl get minions
NAME LABELS STATUS
10.0.0.4 <none> Ready
You can log into your minions using the `minion` user as well. You
can get a list of minion addresses by running:
$ heat output-show my-kube-cluster kube_minions
[
"192.168.200.182"
]
You can get the docker registry v2 address:
$ heat output-show my-kube-cluster registry_address
localhost:5000
## Testing
The templates install an example Pod and Service description into
`/etc/kubernetes/examples`. You can deploy this with the following
commands:
$ kubectl create -f /etc/kubernetes/examples/web.service
$ kubectl create -f /etc/kubernetes/examples/web.pod
This will deploy a minimal webserver and a service. You can use
`kubectl get pods` and `kubectl get services` to see the results of
these commands.
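The resource names created depend on the bundled example files, which are not
reproduced here; assuming the service is named `web`, a quick check from the
master might look like:

    $ kubectl get pods
    $ kubectl get services
    $ kubectl describe service web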
## License
Copyright 2016 SUSE Linux GmbH
Licensed under the Apache License, Version 2.0 (the "License");
you may not use these files except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
## Contributing
Please submit changes for review via the Gerrit system at
https://review.openstack.org/ and file bugs on Launchpad. For more
information, please refer to the following resources:
* **Documentation:** http://docs.openstack.org/developer/magnum
* **Source:** http://git.openstack.org/cgit/openstack/magnum


@ -1,40 +0,0 @@
#!/bin/sh
. /etc/sysconfig/heat-params
DOCKER_PROXY_CONF=/etc/systemd/system/docker.service.d/proxy.conf
BASH_RC=/etc/bashrc
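# Docker reads proxy settings from a systemd drop-in unit, so make sure the
# drop-in directory exists before writing the configuration below.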
mkdir -p /etc/systemd/system/docker.service.d
if [ -n "$HTTP_PROXY" ]; then
cat <<EOF | sed "s/^ *//" > $DOCKER_PROXY_CONF
[Service]
Environment=HTTP_PROXY=$HTTP_PROXY
EOF
systemctl daemon-reload
systemctl --no-block restart docker.service
if [ -f "$BASH_RC" ]; then
echo "declare -x http_proxy=$HTTP_PROXY" >> $BASH_RC
else
echo "File $BASH_RC does not exist, not setting http_proxy"
fi
fi
if [ -n "$HTTPS_PROXY" ]; then
if [ -f "$BASH_RC" ]; then
echo "declare -x https_proxy=$HTTPS_PROXY" >> $BASH_RC
else
echo "File $BASH_RC does not exist, not setting https_proxy"
fi
fi
if [ -n "$NO_PROXY" ]; then
if [ -f "$BASH_RC" ]; then
echo "declare -x no_proxy=$NO_PROXY" >> $BASH_RC
else
echo "File $BASH_RC does not exist, not setting no_proxy"
fi
fi


@ -1,71 +0,0 @@
#!/bin/bash
. /etc/sysconfig/heat-params
echo "stopping docker"
systemctl stop docker
ip link del docker0
if [ "$NETWORK_DRIVER" == "flannel" ]; then
FLANNEL_ENV=/run/flannel/subnet.env
attempts=60
while [[ ! -f $FLANNEL_ENV && $attempts != 0 ]]; do
echo "waiting for file $FLANNEL_ENV"
sleep 1
let attempts--
done
source $FLANNEL_ENV
if ! [ "\$FLANNEL_SUBNET" ] && [ "\$FLANNEL_MTU" ] ; then
echo "ERROR: missing required environment variables." >&2
exit 1
fi
    if grep -q DOCKER_NETWORK_OPTIONS /etc/sysconfig/docker; then
sed -i '
/^DOCKER_NETWORK_OPTIONS=/ s|=.*|="--bip='"$FLANNEL_SUBNET"' --mtu='"$FLANNEL_MTU"'"|
' /etc/sysconfig/docker
else
echo "DOCKER_NETWORK_OPTIONS=\"--bip=$FLANNEL_SUBNET --mtu=$FLANNEL_MTU\"" >> /etc/sysconfig/docker
fi
sed -i '
/^DOCKER_OPTS=/ s/=.*/="--storage-driver=btrfs"/
' /etc/sysconfig/docker
fi
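# The docker storage volume attached by Heat shows up as a virtio disk; its
# by-id name carries only the first 20 characters of the volume ID.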
DOCKER_DEV=/dev/disk/by-id/virtio-${DOCKER_VOLUME:0:20}
attempts=60
while [[ ! -b $DOCKER_DEV && $attempts != 0 ]]; do
echo "waiting for disk $DOCKER_DEV"
sleep 0.5
udevadm trigger
let attempts--
done
if ! [ -b $DOCKER_DEV ]; then
echo "ERROR: device $DOCKER_DEV does not exist" >&2
exit 1
fi
mkfs.btrfs $DOCKER_DEV
mount $DOCKER_DEV /var/lib/docker
# update /etc/fstab with DOCKER_DEV
if ! grep -q /var/lib/docker /etc/fstab; then
grep /var/lib/docker /etc/mtab | head -1 >> /etc/fstab
fi
# make sure we pick up any modified unit files
systemctl daemon-reload
echo "activating docker service"
systemctl enable docker
echo "starting docker service"
systemctl --no-block start docker


@ -1,21 +0,0 @@
#!/bin/sh
. /etc/sysconfig/heat-params
myip="$KUBE_NODE_IP"
sed -i '
/ETCD_NAME=/c ETCD_NAME="'$myip'"
/ETCD_DATA_DIR=/c ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
/ETCD_LISTEN_CLIENT_URLS=/c ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
/ETCD_LISTEN_PEER_URLS=/c ETCD_LISTEN_PEER_URLS="http://'$myip':2380"
/ETCD_ADVERTISE_CLIENT_URLS=/c ETCD_ADVERTISE_CLIENT_URLS="http://'$myip':2379"
/ETCD_INITIAL_ADVERTISE_PEER_URLS=/c ETCD_INITIAL_ADVERTISE_PEER_URLS="http://'$myip':2380"
/ETCD_DISCOVERY=/c ETCD_DISCOVERY="'$ETCD_DISCOVERY_URL'"
' /etc/sysconfig/etcd
echo "activating etcd service"
systemctl enable etcd
echo "starting etcd service"
systemctl --no-block start etcd

Some files were not shown because too many files have changed in this diff.