
Merge "Fix specs reference rst format"

changes/15/306915/1
Jenkins committed 6 years ago via Gerrit Code Review
commit cade36b33f
  1. specs/container-networking-model.rst (50 changed lines)
  2. specs/container-volume-integration-model.rst (64 changed lines)
  3. specs/resource-quotas.rst (12 changed lines)

specs/container-networking-model.rst (50 changed lines)

@@ -29,7 +29,7 @@ Problem Description
The container networking ecosystem is undergoing rapid changes. The
networking tools and techniques used in today's container deployments are
different than twelve months ago and will continue to evolve. For example,
-Flannel[6], Kubernetes preferred networking implementation, was initially
+Flannel [6]_, Kubernetes preferred networking implementation, was initially
released in July of 2014 and was not considered preferred until early 2015.
Furthermore, the various container orchestration engines have not
@@ -98,7 +98,7 @@ Pod
managed within Kubernetes.
Additional Magnum definitions can be found in the Magnum Developer
-documentation[2].
+documentation [2]_.
Use Cases
----------
@@ -170,7 +170,7 @@ As a CP:
Proposed Changes
================
-1. Currently, Magnum supports Flannel[6] as the only multi-host container
+1. Currently, Magnum supports Flannel [6]_ as the only multi-host container
networking implementation. Although Flannel has become widely accepted
for providing networking capabilities to Kubernetes-based container
clusters, other networking tools exist and future tools may develop.
@@ -233,7 +233,7 @@ Proposed Changes
support labels as a mechanism for providing custom metadata. The labels
attribute within Magnum should be extended beyond Kubernetes pods, so a
single mechanism can be used to pass arbitrary metadata throughout the
-entire system. A blueprint[2] has been registered to expand the scope
+entire system. A blueprint [2]_ has been registered to expand the scope
of labels for Magnum. This document intends on adhering to the
expand-labels-scope blueprint.
@@ -252,7 +252,7 @@ Proposed Changes
3. Update python-magnumclient to understand the new Container Networking
Model attributes. The client should also be updated to support passing
-the --labels flag according to the expand-labels-scope blueprint[2].
+the --labels flag according to the expand-labels-scope blueprint [2]_.
4. Update the conductor template definitions to support the new Container
Networking Model attributes.
@@ -260,14 +260,14 @@ Proposed Changes
5. Refactor Heat templates to support the Magnum Container Networking Model.
Currently, Heat templates embed Flannel-specific configuration within
top-level templates. For example, the top-level Kubernetes Heat
-template[8] contains the flannel_network_subnetlen parameter. Network
+template [8]_ contains the flannel_network_subnetlen parameter. Network
driver specific configurations should be removed from all top-level
templates and instead be implemented in one or more template fragments.
As it relates to container networking, top-level templates should only
expose the labels and generalized parameters such as network-driver.
Heat templates, template definitions and definition entry points should
be suited for composition, allowing for a range of supported labels. This
-document intends to follow the refactor-heat-templates blueprint[3] to
+document intends to follow the refactor-heat-templates blueprint [3]_ to
achieve this goal.
6. Update unit and functional tests to support the new attributes of the
@@ -276,10 +276,11 @@ Proposed Changes
7. The spec will not add support for natively managing container networks.
Due to each network driver supporting different API operations, this
document suggests that Magnum not natively manage container networks at
-this time and instead leave this job to native tools. References [4-7]
+this time and instead leave this job to native tools. References [4]_ [5]_
+[6]_ [7]_.
provide additional details to common labels operations.
-8. Since implementing the expand-labels-scope blueprint[2] may take a while,
+8. Since implementing the expand-labels-scope blueprint [2]_ may take a while,
exposing network functionality through baymodel configuration parameters
should be considered as an interim solution.
@@ -299,7 +300,7 @@ Alternatives
abstractions for each supported network driver or creating an
abstraction layer that covers all possible network drivers.
-4. Use the Kuryr project[10] to provide networking to Magnum containers.
+4. Use the Kuryr project [10]_ to provide networking to Magnum containers.
Kuryr currently contains no documentation or code, so this alternative
is highly unlikely if the Magnum community requires a pluggable
container networking implementation in the near future. However, Kuryr
@@ -422,14 +423,11 @@ following blueprints, it's highly recommended that the Magnum Container
Networking Model be developed in concert with the blueprints to maintain
development continuity within the project.
-1. Common Plugin Framework Blueprint:
-https://blueprints.launchpad.net/magnum/+spec/common-plugin-framework
+1. Common Plugin Framework Blueprint [1]_.
-2. Expand the Scope of Labels Blueprint:
-https://blueprints.launchpad.net/magnum/+spec/expand-labels-scope
+2. Expand the Scope of Labels Blueprint [9]_.
-3. Refactor Heat Templates, Definitions and Entry Points Blueprint:
-https://blueprints.launchpad.net/magnum/+spec/refactor-heat-templates
+3. Refactor Heat Templates, Definitions and Entry Points Blueprint [3]_.
Testing
=======
@@ -448,13 +446,13 @@ information on how to use these flags will be included.
References
==========
-[1] https://blueprints.launchpad.net/magnum/+spec/common-plugin-framework
-[2] http://docs.openstack.org/developer/magnum/
-[3] https://blueprints.launchpad.net/magnum/+spec/refactor-heat-templates
-[4] https://github.com/docker/libnetwork/blob/master/docs/design.md
-[5] https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/networking.md
-[6] https://github.com/coreos/flannel
-[7] https://github.com/coreos/rkt/blob/master/Documentation/networking.md
-[8] https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster.yaml
-[9] https://blueprints.launchpad.net/magnum/+spec/expand-labels-scope
-[10] https://github.com/openstack/kuryr
+.. [1] https://blueprints.launchpad.net/magnum/+spec/common-plugin-framework
+.. [2] http://docs.openstack.org/developer/magnum/
+.. [3] https://blueprints.launchpad.net/magnum/+spec/refactor-heat-templates
+.. [4] https://github.com/docker/libnetwork/blob/master/docs/design.md
+.. [5] https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/networking.md
+.. [6] https://github.com/coreos/flannel
+.. [7] https://github.com/coreos/rkt/blob/master/Documentation/networking.md
+.. [8] https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster.yaml
+.. [9] https://blueprints.launchpad.net/magnum/+spec/expand-labels-scope
+.. [10] https://github.com/openstack/kuryr
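
Every hunk in this commit applies the same reStructuredText convention: a bracketed number only becomes a hyperlinked citation when the inline form carries a trailing underscore (``[6]_``) and a matching ``.. [6]`` target is defined; a bare ``[6]`` is rendered as literal text. A minimal sketch of the pattern, using a driver name and URL taken from the hunks above: ::

    Flannel [6]_ is one multi-host networking option.

    References
    ==========

    .. [6] https://github.com/coreos/flannel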

specs/container-volume-integration-model.rst (64 changed lines)

@@ -39,7 +39,7 @@ In this area, the support for container volume is undergoing rapid change
to bring more integration with open source software and third party
storage solutions.
-A clear evidence of this growth is the many plugin volume drivers [1][3]
+A clear evidence of this growth is the many plugin volume drivers [1]_ [4]_
such as NFS, GlusterFS, EBS, etc. They provide different functionality, use
different storage backend and have different requirements. The COE's are
naturally motivated to be flexible and allow as many choices as possible for
@@ -93,7 +93,7 @@ Volume plugin
COE specific code that supports the functionality of a type of volume.
Additional Magnum definitions can be found in the Magnum Developer
-documentation[7].
+documentation[7]_ .
Use Cases
----------
@@ -173,8 +173,8 @@ We propose extending Magnum as follows.
rexray, flocker, nfs, glusterfs, etc..
-Here is an example of creating a Docker Swarm baymodel that uses rexray[5][6]
-as the volume driver: ::
+Here is an example of creating a Docker Swarm baymodel that uses rexray [5]_
+[6]_ as the volume driver: ::
magnum baymodel-create --name swarmbaymodel \
@@ -193,11 +193,11 @@ We propose extending Magnum as follows.
then the REX-Ray volume plugin will be registered in Docker. When a container
is created with rexray as the volume driver, the container will have full
access to the REX-Ray capabilities such as creating, mounting, deleting
-volumes [6]. REX-Ray in turn will interface with Cinder to manage the volumes
-in OpenStack.
+volumes [6]_. REX-Ray in turn will interface with Cinder to manage the
+volumes in OpenStack.
-Here is an example of creating a Kubernetes baymodel that uses Cinder [2][3]
-as the volume driver: ::
+Here is an example of creating a Kubernetes baymodel that uses Cinder [2]_
+[3]_ as the volume driver: ::
magnum baymodel-create --name k8sbaymodel \
--image-id fedora-21-atomic-5 \
@@ -237,7 +237,7 @@ volume driver: ::
When the mesos bay is created using this bay model, the mesos bay will be
configured so that an existing Cinder volume can be mounted in a container
-by configuring the parameters to mount the cinder volume in the json file. ::
+by configuring the parameters to mount the cinder volume in the json file. ::
"parameters": [
{ "key": "volume-driver", "value": "rexray" },
@@ -378,7 +378,7 @@ performance.
An example of the second case is a docker swarm bay with
"--volume-driver rexray" where the rexray driver's storage provider is
OpenStack cinder. The resulting performance for container may vary depending
-on the storage backends. As listed in [8], Cinder supports many storage
+on the storage backends. As listed in [8]_ , Cinder supports many storage
drivers. Besides this, different container volume driver can also cause
performance variance.
@@ -403,11 +403,11 @@ High-Availablity Impact
Kubernetes does support pod high-availability through the replication
controller, however, this doesn't work when a pod with volume attached
-fails. Refer the link [11] for details.
+fails. Refer the link [11]_ for details.
Docker swarm doesn't support the containers reschduling when a node fails, so
volume can not be automatically detached by volume driver. Refer the
-link [12] for details.
+link [12]_ for details.
Mesos supports the application high-availability when a node fails, which
means application would be started on new node, and volumes can be
@@ -484,29 +484,17 @@ configuration flags introduced by this document. Additionally, background
information on how to use these flags will be included.
References
-[1] http://kubernetes.io/v1.1/docs/user-guide/volumes.html
-[2] http://kubernetes.io/v1.1/examples/mysql-cinder-pd/
-[3] https://github.com/kubernetes/kubernetes/tree/master/pkg/volume/cinder
-[3] http://docs.docker.com/engine/extend/plugins/
-[4] https://docs.docker.com/engine/userguide/dockervolumes/
-[5] https://github.com/emccode/rexray
-[6] http://rexray.readthedocs.org/en/stable/user-guide/storage-providers/openstack
-[7] http://docs.openstack.org/developer/magnum/
-[8] http://docs.openstack.org/liberty/config-reference/content/section_volume-drivers.html
-[9] http://docs.openstack.org/admin-guide-cloud/blockstorage_multi_backend.html#
-[10] http://docs.openstack.org/user-guide-admin/dashboard_manage_volumes.html
-[11] https://github.com/kubernetes/kubernetes/issues/14642
-[12] https://github.com/docker/swarm/issues/1488
+==========
+.. [1] http://kubernetes.io/v1.1/docs/user-guide/volumes.html
+.. [2] http://kubernetes.io/v1.1/examples/mysql-cinder-pd/
+.. [3] https://github.com/kubernetes/kubernetes/tree/master/pkg/volume/cinder
+.. [4] http://docs.docker.com/engine/extend/plugins/
+.. [5] https://github.com/emccode/rexray
+.. [6] http://rexray.readthedocs.org/en/stable/user-guide/storage-providers/openstack
+.. [7] http://docs.openstack.org/developer/magnum/
+.. [8] http://docs.openstack.org/liberty/config-reference/content/section_volume-drivers.html
+.. [9] http://docs.openstack.org/admin-guide-cloud/blockstorage_multi_backend.html#
+.. [10] http://docs.openstack.org/user-guide-admin/dashboard_manage_volumes.html
+.. [11] https://github.com/kubernetes/kubernetes/issues/14642
+.. [12] https://github.com/docker/swarm/issues/1488

specs/resource-quotas.rst (12 changed lines)

@@ -68,7 +68,7 @@ Mitaka.
When a project is created and if the Magnum service is running, the default
quota for Magnum resources will be set by the values configured in magnum.conf.
-Other Openstack projects like Nova [3], Cinder [4] follow a similar pattern
+Other Openstack projects like Nova [2]_, Cinder [3]_ follow a similar pattern
and we will also do so and hence won't have a seperate CLI for quota-create.
Later if the user wants to change the Quota of the resource option will be
provided to do so using magnum quota-update. In situation where all of the
@@ -114,7 +114,7 @@ At present there is not quota infrastructure in Magnum.
Adding Quota Management layer at the Orchestration layer, Heat, could be an
alternative. Doing so will give a finer view of resource consumption at the
IaaS layer which can be used while provisioning Magnum resources which
-depend on the IaaS layer [1].
+depend on the IaaS layer [1]_.
Data model impact
-----------------
@@ -247,8 +247,6 @@ None
References
==========
-[1] http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html
-[2] https://github.com/openstack/nova/blob/master/nova/quota.py
-[3] https://github.com/openstack/nova/blob/master/cinder/quota.py
+.. [1] http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html
+.. [2] https://github.com/openstack/nova/blob/master/nova/quota.py
+.. [3] https://github.com/openstack/nova/blob/master/cinder/quota.py
