diff --git a/README.rst b/README.rst index d37e87e1..af6ee368 100644 --- a/README.rst +++ b/README.rst @@ -193,3 +193,10 @@ i.e 10.0.0.0/16, but with different pool name, neutron_pool2: bar 397badb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d786 + +External Resources +------------------ + +The latest and most in-depth documentation is available at: + + diff --git a/doc/images/kuryr_logo.png b/doc/images/kuryr_logo.png deleted file mode 100644 index bca10083..00000000 Binary files a/doc/images/kuryr_logo.png and /dev/null differ diff --git a/doc/source/conf.py b/doc/source/conf.py index 111067cf..e01d54f0 100755 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -38,7 +38,7 @@ source_suffix = '.rst' master_doc = 'index' # General information about the project. -project = u'kuryr' +project = u'kuryr-libnetwork' copyright = u'2013, OpenStack Foundation' # If true, '()' will be appended to :func: etc. cross-reference text. diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst deleted file mode 100644 index 1728a61c..00000000 --- a/doc/source/contributing.rst +++ /dev/null @@ -1,4 +0,0 @@ -============ -Contributing -============ -.. include:: ../../CONTRIBUTING.rst diff --git a/doc/source/devref/goals_and_use_cases.rst b/doc/source/devref/goals_and_use_cases.rst deleted file mode 100644 index 63512026..00000000 --- a/doc/source/devref/goals_and_use_cases.rst +++ /dev/null @@ -1,65 +0,0 @@ -=================== -Goals And Use Cases -=================== - -Kuryr provides networking to Docker containers by leveraging the Neutron APIs -and services. It also provides containerized images for common Neutron plugins - -Kuryr implements a `libnetwork remote driver`_ and maps its calls to OpenStack -`Neutron`_. It works as a translator between libnetwork's -`Container Network Model`_ (CNM) and `Neutron's networking model`_ -and provides container-host or container-vm (nested VM) binding. - -Using Kuryr any Neutron plugin can be used as a libnetwork remote driver -explicitly. Neutron APIs are vendor agnostic and thus all Neutron plugins will -have the capability of providing the networking backend of Docker with a common -lightweight plugging snippet as they have in nova. - -Kuryr takes care of binding the container namespace to the networking -infrastructure by providing a generic layer for `VIF binding`_ depending on the -port type for example Linux bridge port, Open vSwitch port, Midonet port and so -on. - -Kuryr should be the gateway between containers networking API and use cases and -Neutron APIs and services and should bridge the gaps between the two in both -domains. It will map the missing parts in Neutron and drive changes to adjust -it. - -Kuryr should address `Magnum`_ project use cases in terms of containers -networking and serve as a unified interface for Magnum or any other OpenStack -project that needs to leverage containers networking through Neutron API. -In that regard, Kuryr aims at leveraging Neutron plugins that support VM -nested container's use cases and enhancing Neutron APIs to support these cases -(for example `OVN`_). An etherpad regarding `Magnum Kuryr Integration`_ -describes the various use cases Kuryr needs to support. - -Kuryr should provide containerized Neutron plugins for easy deployment and must -be compatible with OpenStack `Kolla`_ project and its deployment tools. The -containerized plugins have the common Kuryr binding layer which binds the -container to the network infrastructure. 
- -Kuryr should leverage Neutron sub-projects and services (in particular LBaaS, -FWaaS, VPNaaS) to provide to support advanced containers networking use cases -and to be consumed by containers orchestration management systems (for example -Kubernetes , or even OpenStack Magnum). - -Kuryr also support pre-allocating of networks, ports and subnets, and binding -them to Docker networks/endpoints upon creation depending on specific labels -that are passed during Docker creation. There is a patch being merged in Docker -to support providing user labels upon network creation. you can look at this -`User labels in docker patch`_. - -References ----------- - -.. _libnetwork remote driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md -.. _Neutron: https://wiki.openstack.org/wiki/Neutron -.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model -.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification -.. _VIF binding: https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism -.. _Magnum: https://wiki.openstack.org/wiki/Magnum -.. _OVN: https://launchpad.net/networking-ovn -.. _Kolla: https://wiki.openstack.org/wiki/Kolla -.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api -.. _User labels in docker patch: https://github.com/docker/libnetwork/pull/222/files#diff-2b9501381623bc063b38733c35a1d254 -.. _Magnum Kuryr Integration: https://etherpad.openstack.org/p/magnum-kuryr diff --git a/doc/source/devref/index.rst b/doc/source/devref/index.rst deleted file mode 100644 index e3af5869..00000000 --- a/doc/source/devref/index.rst +++ /dev/null @@ -1,48 +0,0 @@ -.. - Licensed under the Apache License, Version 2.0 (the "License"); you may - not use this file except in compliance with the License. You may obtain - a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - License for the specific language governing permissions and limitations - under the License. - - Convention for heading levels in Neutron devref: - ======= Heading 0 (reserved for the title in a document) - ------- Heading 1 - ~~~~~~~ Heading 2 - +++++++ Heading 3 - ''''''' Heading 4 - (Avoid deeper levels because they do not render well.) - - -Design and Developer Docs -========================== - -Kuryr goal is to bring containers networking to Neutron core API -and advanced networking services. -This section contains detailed designs / project integration plans and low level -use cases for the various parts inside Kuryr. - - -Programming HowTos and Tutorials --------------------------------- -.. toctree:: - :maxdepth: 4 - - goals_and_use_cases - libnetwork_remote_driver_design - kuryr_mitaka_milestone - k8s_api_watcher_design - - -Indices and tables ------------------- - -* :ref:`genindex` -* :ref:`modindex` -* :ref:`search` diff --git a/doc/source/devref/k8s_api_watcher_design.rst b/doc/source/devref/k8s_api_watcher_design.rst deleted file mode 100644 index cfd16c73..00000000 --- a/doc/source/devref/k8s_api_watcher_design.rst +++ /dev/null @@ -1,1065 +0,0 @@ -.. - This work is licensed under a Creative Commons Attribution 3.0 Unported - License. 
- - http://creativecommons.org/licenses/by/3.0/legalcode - - Convention for heading levels in Neutron devref: - ======= Heading 0 (reserved for the title in a document) - ------- Heading 1 - ~~~~~~~ Heading 2 - +++++++ Heading 3 - ''''''' Heading 4 - (Avoid deeper levels because they do not render well.) - -============================= -Kubernetes API Watcher Design -============================= - -This documentation describes the `Kubernetes API`_ watcher daemon component, -**Raven**, of Kuryr. - -What is Raven -------------- - -Raven is a daemon watches the internal Kubernetes (K8s) state through its API -server and receives the changes with the event notifications. Raven then -translate the state changes of K8s into requests against Neutron API and -constructs the virtual network topology on Neutron. - -Raven must act as the centralized component for the translations due to the -constraints come from the concurrent deployments of the pods on worker nodes. -Unless it's centralized, each plugin on each worker node would make requests -against Neutron API and it would lead the conflicts of the requests due to the -race condition because of the lack of the lock or the serialization mechanisms -for the requests against Neutron API. - -Raven doesn't take care of the bindings between the virtual ports and the -physical interfaces on worker nodes. It is the responsibility of Kuryr CNI_ -plugin for K8s and it shall recognize which Neutron port should be bound to the -physical interface associated with the pod to be deployed. So Raven focuses -only on building the virtual network topology translated from the events of the -internal state changes of K8s through its API server. - -Goal -~~~~ - -Through Raven the changes to K8s API server are translated into the appropriate -requests against Neutron API and we can make sure their logical networking states -are synchronized and consistent when the pods are deployed. - -Translation Mapping -------------------- - -The detailed specification of the translation mappings is described in another -document, :doc:`../specs/mitaka/kuryr_k8s_integration`. In this document we touch what -to be translated briefly. - -The main focus of Raven is the following resources. - -* Namespace -* Pod -* Service (Optional) -* Endpoints (Optional) - -Namespaces are translated into the networking basis, Neutron networks and -subnets for the cluster and the service using the explicitly predefined values -in the configuration file, or implicitly specified by the environment -variables, e.g., ``FLANNEL_NET=172.16.0.0/16`` as specified -`in the deployment phase`_. Raven also creates Neutron routers for connecting -the cluster subnets and the service subnets. - -When each namespace is created, a cluster network that contains a cluster -subnet, a service network that contains a service subnet, and a router that -connects the cluster subnet and the service subnet are created through Neutron -API. The apiserver ensures namespaces are created before pods are created. - -Pods contain the information required for creating Neutron ports. If pods are -associated with the specific namespace, the ports are created and associated -with the subnets for the namespace. - -Although it's optional, Raven can emulate kube-proxy_. This is for the network -controller that leverages isolated datapath from ``docker0`` bridge such as -Open vSwitch datapath. Services contain the information for the emulation. Raven -maps kube-proxy to Neutron load balancers with VIPs. 
In this case Raven also -creates a LBaaS pool member for each Endpoints to be translated coordinating -with the associated service translation. For "externalIPs" type K8s service, -Raven associates a floating IP with a load balancer for enabling the pubilc -accesses. - -================= ================= -Kubernetes Neutron -================= ================= -Namespace Network -(Cluster subnet) (Subnet) -Pod Port -Service LBaaS Pool - LBaaS VIP - (FloatingIP) -Endpoints LBaaS Pool Member -================= ================= - - -.. _k8s-api-behaviour: - -K8s API behaviour ------------------ - -We look at the responses from the pod endpoints as an exmple. - -The following behaviour is based on the 1.2.0 release, which is the latest one -as of March 17th, 2016. - -:: - - $ ./kubectl.sh version - Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"} - Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"} - -Regular requests -~~~~~~~~~~~~~~~~ - -If there's no pod, the K8s API server returns the following JSON response that -has the empty list for the ``"items"`` property. - -:: - - $ curl -X GET -i http://127.0.0.1:8080/api/v1/pods - HTTP/1.1 200 OK - Content-Type: application/json - Date: Tue, 15 Mar 2016 07:15:46 GMT - Content-Length: 145 - - { - "kind": "PodList", - "apiVersion": "v1", - "metadata": { - "selfLink": "/api/v1/pods", - "resourceVersion": "227806" - }, - "items": [] - } - -We deploy a pod as follow. - -:: - - $ ./kubectl.sh run --image=nginx nginx-app --port=80 - replicationcontroller "nginx-app" created - -Then the response from the API server contains the pod information in -``"items"`` property of the JSON response. 
- -:: - - $ curl -X GET -i http://127.0.0.1:8080/api/v1/pods - HTTP/1.1 200 OK - Content-Type: application/json - Date: Tue, 15 Mar 2016 08:18:25 GMT - Transfer-Encoding: chunked - - { - "kind": "PodList", - "apiVersion": "v1", - "metadata": { - "selfLink": "/api/v1/pods", - "resourceVersion": "228211" - }, - "items": [ - { - "metadata": { - "name": "nginx-app-o0kvl", - "generateName": "nginx-app-", - "namespace": "default", - "selfLink": "/api/v1/namespaces/default/pods/nginx-app-o0kvl", - "uid": "090cc0c8-ea84-11e5-8c79-42010af00003", - "resourceVersion": "228094", - "creationTimestamp": "2016-03-15T08:00:51Z", - "labels": { - "run": "nginx-app" - }, - "annotations": { - "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"nginx-app\",\"uid\":\"090bfb57-ea84-11e5-8c79-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"228081\"}}\n" - } - }, - "spec": { - "volumes": [ - { - "name": "default-token-wpfjn", - "secret": { - "secretName": "default-token-wpfjn" - } - } - ], - "containers": [ - { - "name": "nginx-app", - "image": "nginx", - "ports": [ - { - "containerPort": 80, - "protocol": "TCP" - } - ], - "resources": {}, - "volumeMounts": [ - { - "name": "default-token-wpfjn", - "readOnly": true, - "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "Always" - } - ], - "restartPolicy": "Always", - "terminationGracePeriodSeconds": 30, - "dnsPolicy": "ClusterFirst", - "serviceAccountName": "default", - "serviceAccount": "default", - "nodeName": "10.240.0.4", - "securityContext": {} - }, - "status": { - "phase": "Running", - "conditions": [ - { - "type": "Ready", - "status": "True", - "lastProbeTime": null, - "lastTransitionTime": "2016-03-15T08:00:52Z" - } - ], - "hostIP": "10.240.0.4", - "podIP": "172.16.49.2", - "startTime": "2016-03-15T08:00:51Z", - "containerStatuses": [ - { - "name": "nginx-app", - "state": { - "running": { - "startedAt": "2016-03-15T08:00:52Z" - } - }, - "lastState": {}, - "ready": true, - "restartCount": 0, - "image": "nginx", - "imageID": "docker://sha256:af4b3d7d5401624ed3a747dc20f88e2b5e92e0ee9954aab8f1b5724d7edeca5e", - "containerID": "docker://b97168314ad58404dbce7cb94291db7a976d2cb824b39e5864bf4bdaf27af255" - } - ] - } - } - ] - } - -We get the current snapshot of the requested resources with the regular -requests against the K8s API server. - -Requests with ``watch=true`` query string -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -K8s provides the "watch" capability for the endpoints with ``/watch/`` prefix -for the specific resource name, i.e., ``/api/v1/watch/pods``, or ``watch=true`` -query string. - -If there's no pod, we get only the response header and the connection is kept -open. - -:: - - $ curl -X GET -i http://127.0.0.1:8080/api/v1/pods?watch=true - HTTP/1.1 200 OK - Transfer-Encoding: chunked - Date: Tue, 15 Mar 2016 08:00:09 GMT - Content-Type: text/plain; charset=utf-8 - Transfer-Encoding: chunked - - -We create a pod as we did for the case without the ``watch=true`` query string. - -:: - - $ ./kubectl.sh run --image=nginx nginx-app --port=80 - replicationcontroller "nginx-app" created - -Then we observe the JSON data corresponds to the event is given by each line. -The event type is given in ``"type"`` property of the JSON data, i.e., -``"ADDED"``, ``"MODIFIED"`` and ``"DELETED"``. 
- -:: - - $ curl -X GET -i http://127.0.0.1:8080/api/v1/pods?watch=true - HTTP/1.1 200 OK - Transfer-Encoding: chunked - Date: Tue, 15 Mar 2016 08:00:09 GMT - Content-Type: text/plain; charset=utf-8 - Transfer-Encoding: chunked - - {"type":"ADDED","object":{"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-app-o0kvl","generateName":"nginx-app-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/nginx-app-o0kvl","uid":"090cc0c8-ea84-11e5-8c79-42010af00003","resourceVersion":"228082","creationTimestamp":"2016-03-15T08:00:51Z","labels":{"run":"nginx-app"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"nginx-app\",\"uid\":\"090bfb57-ea84-11e5-8c79-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"228081\"}}\n"}},"spec":{"volumes":[{"name":"default-token-wpfjn","secret":{"secretName":"default-token-wpfjn"}}],"containers":[{"name":"nginx-app","image":"nginx","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-wpfjn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","securityContext":{}},"status":{"phase":"Pending"}}} - {"type":"MODIFIED","object":{"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-app-o0kvl","generateName":"nginx-app-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/nginx-app-o0kvl","uid":"090cc0c8-ea84-11e5-8c79-42010af00003","resourceVersion":"228084","creationTimestamp":"2016-03-15T08:00:51Z","labels":{"run":"nginx-app"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"nginx-app\",\"uid\":\"090bfb57-ea84-11e5-8c79-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"228081\"}}\n"}},"spec":{"volumes":[{"name":"default-token-wpfjn","secret":{"secretName":"default-token-wpfjn"}}],"containers":[{"name":"nginx-app","image":"nginx","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-wpfjn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.240.0.4","securityContext":{}},"status":{"phase":"Pending"}}} - 
{"type":"MODIFIED","object":{"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-app-o0kvl","generateName":"nginx-app-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/nginx-app-o0kvl","uid":"090cc0c8-ea84-11e5-8c79-42010af00003","resourceVersion":"228088","creationTimestamp":"2016-03-15T08:00:51Z","labels":{"run":"nginx-app"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"nginx-app\",\"uid\":\"090bfb57-ea84-11e5-8c79-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"228081\"}}\n"}},"spec":{"volumes":[{"name":"default-token-wpfjn","secret":{"secretName":"default-token-wpfjn"}}],"containers":[{"name":"nginx-app","image":"nginx","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-wpfjn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.240.0.4","securityContext":{}},"status":{"phase":"Pending","conditions":[{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2016-03-15T08:00:51Z","reason":"ContainersNotReady","message":"containers with unready status: [nginx-app]"}],"hostIP":"10.240.0.4","startTime":"2016-03-15T08:00:51Z","containerStatuses":[{"name":"nginx-app","state":{"waiting":{"reason":"ContainerCreating","message":"Image: nginx is ready, container is creating"}},"lastState":{},"ready":false,"restartCount":0,"image":"nginx","imageID":""}]}}} - {"type":"MODIFIED","object":{"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-app-o0kvl","generateName":"nginx-app-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/nginx-app-o0kvl","uid":"090cc0c8-ea84-11e5-8c79-42010af00003","resourceVersion":"228094","creationTimestamp":"2016-03-15T08:00:51Z","labels":{"run":"nginx-app"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"nginx-app\",\"uid\":\"090bfb57-ea84-11e5-8c79-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"228081\"}}\n"}},"spec":{"volumes":[{"name":"default-token-wpfjn","secret":{"secretName":"default-token-wpfjn"}}],"containers":[{"name":"nginx-app","image":"nginx","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-wpfjn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.240.0.4","securityContext":{}},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2016-03-15T08:00:52Z"}],"hostIP":"10.240.0.4","podIP":"172.16.49.2","startTime":"2016-03-15T08:00:51Z","containerStatuses":[{"name":"nginx-app","state":{"running":{"startedAt":"2016-03-15T08:00:52Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"nginx","imageID":"docker://sha256:af4b3d7d5401624ed3a747dc20f88e2b5e92e0ee9954aab8f1b5724d7edeca5e","containerID":"docker://b97168314ad58404dbce7
cb94291db7a976d2cb824b39e5864bf4bdaf27af255"}]}}} - -Raven Technical Design Overview -------------------------------- - -Problem Statement -~~~~~~~~~~~~~~~~~ - -To conform to the I/O bound requirement described in :ref:`k8s-api-behaviour`, -multiplexed concurrent network I/O is required. eventlet_ is used for this purpose in -various OpenStack projects, as are other libraries such as -Twisted_, Tornado_ and gevent_. However, it has problems as described in -"`What's wrong with eventlet?`_" on the OpenStack wiki page. - -asyncio and Python 3 by default -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -asyncio_ was introduced as a standard asynchronous I/O library in Python 3.4. -Its event loop and coroutines provide the mechanism to multiplex network I/O -in an asynchronous fashion. Compared with eventlet, we can explicitly mark -I/O operations as asynchronous with ``yield from`` or ``await``, introduced in -Python 3.5. - -Trollius_ is a port of asyncio to Python 2.x. However, the `Trollius documentation`_ -describes a list of problems and even promotes the migration to Python 3 -with asyncio. - -Kuryr is still quite a young project in the OpenStack Neutron big tent. In addition, -since it is a container related project it should be able to run -inside a container, and so should Raven. Therefore we take the path of supporting only -Python 3 and dropping Python 2. - -With asyncio we can achieve the concurrent networking I/O operations required by -watchers that watch multiple endpoints and translate their responses into requests -against Neutron and the K8s API server. - -Watchers -~~~~~~~~ - -A watcher can essentially be represented as a pair of an API endpoint and a function used -for the translation. That is, the pair of what is translated and -how it is translated. The API endpoint URI is associated with the stream of the event -notifications and the translation function maps each event coming from the -apiserver into another form, such as a request against the Neutron API server. - -Watchers can be considered as concerns and reactions. They should be decoupled -from the actual task dispatcher and their consumers. A single watcher or multiple -watchers can be mixed into the single class that leverages them, i.e., Raven, -and even multiple classes that leverage them can have the same concern and the same -reaction. The watchers can be mixed into a single watcher-user entity, -but they should work independently. For instance, ``AliceWatcher`` -does its work and knows nothing about other watchers such as ``BobWatcher``. -They don't work together or depend on one another. - -A minimum watcher can be defined as follows. - -.. code-block:: python - - from kuryr.raven import watchers - - class SomeWatcher(watchers.K8sApiWatcher): - WATCH_ENDPOINT = '/' - - def translate(self, deserialized_json): - pass - -The watcher is defined in a declarative way and ideally doesn't care when it -is called and by whom. However, it needs to recognize the context, such as the -event type, and behave appropriately according to the situation. - -Raven -~~~~~ - -Raven acts as a daemon and it should be able to be started or stopped by -operators. It delegates the actual watch tasks to the watchers and dispatches -them with the single JSON response corresponding to each endpoint on which the -watcher has its concern.
- -Hence, Raven holds one or multiple watchers, opens connections for each -endpoint, makes HTTP requests, gets HTTP responses, parses every event -notification and dispatches the translate methods of the watchers routed based -on their corresponding endpoints. - -To register the watchers with Raven or any class, the ``register_watchers`` decorator -is used. It simply inserts the watchers into the dictionary in the class, -``WATCH_ENDPOINTS_AND_CALLBACKS``, and it's up to the class how to use the -registered watchers. The classes passed to ``register_watchers`` are defined in -the configuration file and you can specify only what you need. - -In the case of Raven, it starts the event loop, opens connections for each -registered watcher and keeps feeding the notified events to the translate -methods of the watchers. - -Raven is a service that implements ``oslo_service.service.Service``. When the ``start`` -method is called, it starts the event loop and delegates the watch tasks. -If a ``SIGINT`` or ``SIGTERM`` signal is sent to Raven, it cancels all watch -tasks, closes connections and stops immediately. Otherwise Raven lets watchers -keep watching the API endpoints until the API server sends EOF. When -``stop`` is called, it likewise cancels all watch tasks, closes connections and stops. - -Ideally, the translate method could be a pure function that doesn't depend on the -user of the watcher. However, the translation involves requests against -Neutron and possibly the K8s API server, and it depends on the Neutron client -that shall be shared among the watchers. Hence, Raven calls the translate -methods of the watchers binding itself to ``self``. That is, Raven can -propagate its contexts to the watchers and in this way watchers can share the -same contexts. However, it is the responsibility of the writer of the watchers to -track which variables are defined in Raven and what they are.
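To make the dispatching described above concrete, here is a minimal sketch (not Raven's actual implementation) of how a chunked ``watch=true`` stream could be consumed with asyncio and each event line handed to a watcher's ``translate`` method. The use of ``aiohttp`` and the simplified ``SomeWatcher`` class are assumptions made for this example only.

.. code-block:: python

    import asyncio
    import json

    import aiohttp


    class SomeWatcher(object):
        """Hypothetical stand-in for a registered watcher."""
        WATCH_ENDPOINT = '/api/v1/pods?watch=true'

        def translate(self, deserialized_json):
            # A real watcher would issue Neutron API requests here.
            print('Received a %s event' % deserialized_json.get('type'))


    async def watch(base_url, watcher):
        # Read the chunked watch stream line by line and feed every
        # deserialized event notification to the watcher.
        async with aiohttp.ClientSession() as session:
            async with session.get(base_url + watcher.WATCH_ENDPOINT) as resp:
                async for line in resp.content:
                    if line.strip():
                        watcher.translate(json.loads(line.decode('utf-8')))


    if __name__ == '__main__':
        loop = asyncio.get_event_loop()
        loop.run_until_complete(watch('http://127.0.0.1:8080', SomeWatcher()))

A real dispatcher would run one such task per registered watcher and cancel them all on ``SIGINT``/``SIGTERM``, as described above.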
- -Appendix A: JSON response from the apiserver for each resource --------------------------------------------------------------- - -Namespace -~~~~~~~~~ - -:: - - /api/v1/namespaces?watch=true - -ADDED -+++++ - -:: - - { - "type": "ADDED", - "object": { - "kind": "Namespace", - "apiVersion": "v1", - "metadata": { - "name": "test", - "selfLink": "/api/v1/namespaces/test", - "uid": "f094ea6b-06c2-11e6-8128-42010af00003", - "resourceVersion": "497821", - "creationTimestamp": "2016-04-20T06:41:41Z" - }, - "spec": { - "finalizers": [ - "kubernetes" - ] - }, - "status": { - "phase": "Active" - } - } - } - -MODIFIED -++++++++ - -:: - - { - "type": "MODIFIED", - "object": { - "kind": "Namespace", - "apiVersion": "v1", - "metadata": { - "name": "test", - "selfLink": "/api/v1/namespaces/test", - "uid": "f094ea6b-06c2-11e6-8128-42010af00003", - "resourceVersion": "519095", - "creationTimestamp": "2016-04-20T06:41:41Z", - "deletionTimestamp": "2016-04-21T08:47:53Z" - }, - "spec": { - "finalizers": [ - "kubernetes" - ] - }, - "status": { - "phase": "Terminating" - } - } - } - -DELETED -+++++++ - -:: - - { - "type": "DELETED", - "object": { - "kind": "Namespace", - "apiVersion": "v1", - "metadata": { - "name": "test", - "selfLink": "/api/v1/namespaces/test", - "uid": "f094ea6b-06c2-11e6-8128-42010af00003", - "resourceVersion": "519099", - "creationTimestamp": "2016-04-20T06:41:41Z", - "deletionTimestamp": "2016-04-21T08:47:53Z" - }, - "spec": {}, - "status": { - "phase": "Terminating" - } - } - } - - -Pod -~~~ - -:: - - /api/v1/pods?watch=true - -ADDED -+++++ - -:: - - { - "type": "ADDED", - "object": { - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "my-nginx-y67ky", - "generateName": "my-nginx-", - "namespace": "default", - "selfLink": "/api/v1/namespaces/default/pods/my-nginx-y67ky", - "uid": "d42b0bb2-dc4e-11e5-8c79-42010af00003", - "resourceVersion": "63355", - "creationTimestamp": "2016-02-26T06:04:42Z", - "labels": { - "run": "my-nginx" - }, - "annotations": { - "kubernetes.io/created-by": { - "kind": "SerializedReference", - "apiVersion": "v1", - "reference": { - "kind": "ReplicationController", - "namespace": "default", - "name": "my-nginx", - "uid": "d42a4ee1-dc4e-11e5-8c79-42010af00003", - "apiVersion": "v1", - "resourceVersion": "63348" - } - } - } - }, - "spec": { - "volumes": [ - { - "name": "default-token-wpfjn", - "secret": { - "secretName": "default-token-wpfjn" - } - } - ], - "containers": [ - { - "name": "my-nginx", - "image": "nginx", - "ports": [ - { - "containerPort": 80, - "protocol": "TCP" - } - ], - "resources": {}, - "volumeMounts": [ - { - "name": "default-token-wpfjn", - "readOnly": true, - "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "Always" - } - ], - "restartPolicy": "Always", - "terminationGracePeriodSeconds": 30, - "dnsPolicy": "ClusterFirst", - "serviceAccountName": "default", - "serviceAccount": "default", - "nodeName": "10.240.0.4", - "securityContext": {} - }, - "status": { - "phase": "Pending", - "conditions": [ - { - "type": "Ready", - "status": "False", - "lastProbeTime": null, - "lastTransitionTime": "2016-02-26T06:04:43Z", - "reason": "ContainersNotReady", - "message": "containers with unready status: [my-nginx]" - } - ], - "hostIP": "10.240.0.4", - "startTime": "2016-02-26T06:04:43Z", - "containerStatuses": [ - { - "name": "my-nginx", - "state": { - "waiting": { - "reason": "ContainerCreating", - "message": "Image: nginx is ready, container is 
creating" - } - }, - "lastState": {}, - "ready": false, - "restartCount": 0, - "image": "nginx", - "imageID": "" - } - ] - } - } - } - -MODIFIED -~~~~~~~~ - -:: - - { - "type": "MODIFIED", - "object": { - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "my-nginx-y67ky", - "generateName": "my-nginx-", - "namespace": "default", - "selfLink": "/api/v1/namespaces/default/pods/my-nginx-y67ky", - "uid": "d42b0bb2-dc4e-11e5-8c79-42010af00003", - "resourceVersion": "63425", - "creationTimestamp": "2016-02-26T06:04:42Z", - "deletionTimestamp": "2016-02-26T06:06:16Z", - "deletionGracePeriodSeconds": 30, - "labels": { - "run": "my-nginx" - }, - "annotations": { - "kubernetes.io/created-by": { - "kind": "SerializedReference", - "apiVersion": "v1", - "reference": { - "kind": "ReplicationController", - "namespace": "default", - "name": "my-nginx", - "uid": "d42a4ee1-dc4e-11e5-8c79-42010af00003", - "apiVersion": "v1", - "resourceVersion": "63348" - } - } - } - }, - "spec": { - "volumes": [ - { - "name": "default-token-wpfjn", - "secret": { - "secretName": "default-token-wpfjn" - } - } - ], - "containers": [ - { - "name": "my-nginx", - "image": "nginx", - "ports": [ - { - "containerPort": 80, - "protocol": "TCP" - } - ], - "resources": {}, - "volumeMounts": [ - { - "name": "default-token-wpfjn", - "readOnly": true, - "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "Always" - } - ], - "restartPolicy": "Always", - "terminationGracePeriodSeconds": 30, - "dnsPolicy": "ClusterFirst", - "serviceAccountName": "default", - "serviceAccount": "default", - "nodeName": "10.240.0.4", - "securityContext": {} - }, - "status": { - "phase": "Pending", - "conditions": [ - { - "type": "Ready", - "status": "False", - "lastProbeTime": null, - "lastTransitionTime": "2016-02-26T06:04:43Z", - "reason": "ContainersNotReady", - "message": "containers with unready status: [my-nginx]" - } - ], - "hostIP": "10.240.0.4", - "startTime": "2016-02-26T06:04:43Z", - "containerStatuses": [ - { - "name": "my-nginx", - "state": { - "waiting": { - "reason": "ContainerCreating", - "message": "Image: nginx is ready, container is creating" - } - }, - "lastState": {}, - "ready": false, - "restartCount": 0, - "image": "nginx", - "imageID": "" - } - ] - } - } - } - -DELETED -+++++++ - -:: - - { - "type": "DELETED", - "object": { - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "my-nginx-y67ky", - "generateName": "my-nginx-", - "namespace": "default", - "selfLink": "/api/v1/namespaces/default/pods/my-nginx-y67ky", - "uid": "d42b0bb2-dc4e-11e5-8c79-42010af00003", - "resourceVersion": "63431", - "creationTimestamp": "2016-02-26T06:04:42Z", - "deletionTimestamp": "2016-02-26T06:05:46Z", - "deletionGracePeriodSeconds": 0, - "labels": { - "run": "my-nginx" - }, - "annotations": { - "kubernetes.io/created-by": { - "kind": "SerializedReference", - "apiVersion": "v1", - "reference": { - "kind": "ReplicationController", - "namespace": "default", - "name": "my-nginx", - "uid": "d42a4ee1-dc4e-11e5-8c79-42010af00003", - "apiVersion": "v1", - "resourceVersion": "63348" - } - } - } - }, - "spec": { - "volumes": [ - { - "name": "default-token-wpfjn", - "secret": { - "secretName": "default-token-wpfjn" - } - } - ], - "containers": [ - { - "name": "my-nginx", - "image": "nginx", - "ports": [ - { - "containerPort": 80, - "protocol": "TCP" - } - ], - "resources": {}, - "volumeMounts": [ - { - "name": "default-token-wpfjn", - "readOnly": true, - 
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "Always" - } - ], - "restartPolicy": "Always", - "terminationGracePeriodSeconds": 30, - "dnsPolicy": "ClusterFirst", - "serviceAccountName": "default", - "serviceAccount": "default", - "nodeName": "10.240.0.4", - "securityContext": {} - }, - "status": { - "phase": "Pending", - "conditions": [ - { - "type": "Ready", - "status": "False", - "lastProbeTime": null, - "lastTransitionTime": "2016-02-26T06:04:43Z", - "reason": "ContainersNotReady", - "message": "containers with unready status: [my-nginx]" - } - ], - "hostIP": "10.240.0.4", - "startTime": "2016-02-26T06:04:43Z", - "containerStatuses": [ - { - "name": "my-nginx", - "state": { - "waiting": { - "reason": "ContainerCreating", - "message": "Image: nginx is ready, container is creating" - } - }, - "lastState": {}, - "ready": false, - "restartCount": 0, - "image": "nginx", - "imageID": "" - } - ] - } - } - } - -Service -~~~~~~~ - -:: - - /api/v1/services?watch=true - -ADDED -+++++ - -:: - - { - "type": "ADDED", - "object": { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "redis-master", - "namespace": "default", - "selfLink": "/api/v1/namespaces/default/services/redis-master", - "uid": "7aecfdac-d54c-11e5-8cc5-42010af00002", - "resourceVersion": "2074", - "creationTimestamp": "2016-02-17T08:00:16Z", - "labels": { - "app": "redis", - "role": "master", - "tier": "backend" - } - }, - "spec": { - "ports": [ - { - "protocol": "TCP", - "port": 6379, - "targetPort": 6379 - } - ], - "selector": { - "app": "redis", - "role": "master", - "tier": "backend" - }, - "clusterIP": "10.0.0.102", - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - } - } - -MODIFIED -++++++++ - -The event could not be observed. 
- -DELETED -+++++++ - -:: - - { - "type": "DELETED", - "object": { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "redis-master", - "namespace": "default", - "selfLink": "/api/v1/namespaces/default/services/redis-master", - "uid": "7aecfdac-d54c-11e5-8cc5-42010af00002", - "resourceVersion": "2806", - "creationTimestamp": "2016-02-17T08:00:16Z", - "labels": { - "app": "redis", - "role": "master", - "tier": "backend" - } - }, - "spec": { - "ports": [ - { - "protocol": "TCP", - "port": 6379, - "targetPort": 6379 - } - ], - "selector": { - "app": "redis", - "role": "master", - "tier": "backend" - }, - "clusterIP": "10.0.0.102", - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - } - } - -Endpoints -~~~~~~~~~ - -:: - - /api/v1/endpoints?watch=true - -ADDED -+++++ - -:: - - { - "type": "ADDED", - "object": { - "apiVersion": "v1", - "kind": "Endpoints", - "subsets": [], - "metadata": { - "creationTimestamp": "2016-06-10T06:26:57Z", - "namespace": "default", - "labels": { - "app": "guestbook", - "tier": "frontend" - }, - "selfLink": "/api/v1/namespaces/default/endpoints/frontend", - "name": "frontend", - "uid": "5542ba6b-2ed4-11e6-8128-42010af00003", - "resourceVersion": "1506396" - } - } - } - -MODIFIED -++++++++ - -:: - - { - "type": "MODIFIED", - "object": { - "apiVersion": "v1", - "kind": "Endpoints", - "subsets": [ - { - "addresses": [ - { - "targetRef": { - "kind": "Pod", - "name": "frontend-ib7ui", - "namespace": "default", - "uid": "554b2924-2ed4-11e6-8128-42010af00003", - "resourceVersion": "1506444" - }, - "ip": "192.168.0.119" - }, - { - "targetRef": { - "kind": "Pod", - "name": "frontend-tt8ok", - "namespace": "default", - "uid": "554b37db-2ed4-11e6-8128-42010af00003", - "resourceVersion": "1506459" - }, - "ip": "192.168.0.120" - }, - { - "targetRef": { - "kind": "Pod", - "name": "frontend-rxsaw", - "namespace": "default", - "uid": "554b43b8-2ed4-11e6-8128-42010af00003", - "resourceVersion": "1506442" - }, - "ip": "192.168.0.121" - } - ], - "ports": [ - { - "port": 80, - "protocol": "TCP" - } - ] - } - ], - "metadata": { - "creationTimestamp": "2016-06-10T06:26:57Z", - "namespace": "default", - "labels": { - "app": "guestbook", - "tier": "frontend" - }, - "selfLink": "/api/v1/namespaces/default/endpoints/frontend", - "name": "frontend", - "uid": "5542ba6b-2ed4-11e6-8128-42010af00003", - "resourceVersion": "1506460" - } - } - } - -DELETED -++++++++ - -The event could not be observed. - -.. _`Kubernetes API`: http://kubernetes.io/docs/api/ -.. _CNI: https://github.com/appc/cni -.. _`in the deployment phase`: https://github.com/kubernetes/kubernetes/search?utf8=%E2%9C%93&q=FLANNEL_NET -.. _kube-proxy: http://kubernetes.io/docs/user-guide/services/#virtual-ips-and-service-proxies -.. _eventlet: http://eventlet.net/ -.. _Twisted: https://twistedmatrix.com/trac/ -.. _Tornado: http://tornadoweb.org/ -.. _gevent: http://www.gevent.org/ -.. _`What's wrong with eventlet?`: https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio#What.27s_wrong_with_eventlet.3F -.. _asyncio: https://www.python.org/dev/peps/pep-3156/ -.. _Trollius: http://trollius.readthedocs.org/ -.. _`Trollius documentation`: http://trollius.readthedocs.org/deprecated.html diff --git a/doc/source/devref/kuryr_mitaka_milestone.rst b/doc/source/devref/kuryr_mitaka_milestone.rst deleted file mode 100644 index 60b08a57..00000000 --- a/doc/source/devref/kuryr_mitaka_milestone.rst +++ /dev/null @@ -1,118 +0,0 @@ -.. 
- This work is licensed under a Creative Commons Attribution 3.0 Unported - License. - - http://creativecommons.org/licenses/by/3.0/legalcode - -===================================== -Kuryr - Milestone for Mitaka -===================================== - -https://launchpad.net/kuryr - - -Kuryr Roles and Responsibilities - First Milestone for Mitaka release ------------------------------------------------------------------------ - -This chapter includes the various use cases that Kuryr aims at solving, -some were briefly described in the introduction chapter. -This list of items will need to be prioritized. - -1) Deploy Kuryr as a libnetwork remote driver (map between libnetwork - API and Neutron API) - -2) Configuration - https://etherpad.openstack.org/p/kuryr-configuration - - Includes authentication to Neutron and Docker (Keystone integration) - -3) VIF Binding - https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding - https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism - -4) Containerized neutron plugins + Kuryr common layer (Kolla) - -5) Nested VM - agent less mode (or with Kuryr shim layer) - -6) Magnum Kuryr Integration - https://blueprints.launchpad.net/kuryr/+spec/containers-in-instances - - Create Kuryr heat resources for Magnum to consume - -7) Missing APIs in Neutron to support docker networking model - - Port-Mapping: - Docker port-mapping will be implemented in services and not networks - (libnetwork). - There is a relationship between the two. - Here are some details: - https://github.com/docker/docker/blob/master/experimental/networking.md - https://github.com/docker/docker/blob/master/api/server/server_experimental_unix.go#L13-L16 - - Here is an example of publishing a service on a particular network and attaching - a container to the service: - docker service publish db1.prod cid=$(docker run -itd -p 8000:8000 ubuntu) - docker service attach $cid db1.prod - - Kuryr will need to interact with the services object of the docker - api to support port-mapping. - We are planning to propose a port forwarding spec in Mitaka that - introduces the API and reference implementation of port forwarding - in Neutron to enable this feature. - - Neutron relevant specs: - VLAN trunk ports - ( https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms) - (Used for nested VM's defining trunk port and sub-ports) - - DNS resolution according to port name - (https://review.openstack.org/#/c/90150/) - (Needed for feature compatibility with Docker services publishing) - -8) Mapping between Neutron identifiers and Docker identifiers - - A new spec in Neutron is being proposed that we can - leverage for this use case: `Adding tags to resources`_ . - Tags are similar in concept to Docker labels. - -9) Testing (CI) - - There should be a testing infrastructure running both unit and functional tests with full - setup of docker + kuryr + neutron. - -10) Packaging and devstack plugin for Kuryr - - -Kuryr Future Scope ------------------- - -1) Kuryr is planned to support other networking backend models defined by Kubernetes - (and not just libnetwork). - -2) In addition to Docker, services are a key component of Kubernetes. 
- In Kubernetes, I create a pod and optionally create/attach a service to a pod: - https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md - - Services could be implemented with LBaaS APIs - - An example project that does this for Kubernetes and Neutron LBaaS: - https://github.com/kubernetes/kubernetes/blob/release-1.0/pkg/cloudprovider/openstack/openstack.go - - -References -========== - -.. _libnetwork remote driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md -.. _Neutron: https://wiki.openstack.org/wiki/Neutron -.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model -.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification -.. _Magnum: https://wiki.openstack.org/wiki/Magnum -.. _OVN: https://launchpad.net/networking-ovn -.. _Kolla: https://wiki.openstack.org/wiki/Kolla -.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api -.. _plugin discovery mechanism: https://github.com/docker/docker/blob/master/docs/extend/plugin_api.md#plugin-discovery -.. _Neutron client: http://docs.openstack.org/developer/python-neutronclient/ -.. _libkv: https://github.com/docker/libkv -.. _VIF binding: https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism -.. _Adding tags to resources: https://review.openstack.org/#/c/216021/ -.. _User labels in docker patch: https://github.com/docker/libnetwork/pull/222/files#diff-2b9501381623bc063b38733c35a1d254 diff --git a/doc/source/devref/libnetwork_remote_driver_design.rst b/doc/source/devref/libnetwork_remote_driver_design.rst deleted file mode 100644 index cb363157..00000000 --- a/doc/source/devref/libnetwork_remote_driver_design.rst +++ /dev/null @@ -1,419 +0,0 @@ -======================================= -Libnetwork Remote Network Driver Design -======================================= - -What is Kuryr -------------- - -Kuryr implements a `libnetwork remote network driver`_ and maps its calls to OpenStack -`Neutron`_. It works as a translator between libnetwork's -`Container Network Model`_ (CNM) and `Neutron's networking model`_. Kuryr also acts as -a `libnetwork IPAM driver`_. - -Goal -~~~~ - -Through Kuryr any Neutron plugin can be used as libnetwork backend with no -additional effort. Neutron APIs are vendor agnostic and thus all Neutron -plugins will have the capability of providing the networking backend of Docker -for a similar small plugging snippet as they have in nova. - -Kuryr also takes care of binding one of a veth pair to a network interface on -the host, e.g., Linux bridge, Open vSwitch datapath and so on. - - -Kuryr Workflow - Host Networking --------------------------------- -Kuryr resides in each host that runs Docker containers and serves `APIs`_ -required for the libnetwork remote network driver. It is planned to use the -`Adding tags to resources`_ new Neutron feature by Kuryr, to map between -Neutron resource Id's and Docker Id's (UUID's) - -1. libnetwork discovers Kuryr via `plugin discovery mechanism`_ *before the - first request is made* - - - During this process libnetwork makes a HTTP POST call on - ``/Plugin.Active`` and examines the driver type, which defaults to - ``"NetworkDriver"`` and ``"IpamDriver"`` - - libnetwork also calls the following two API endpoints - - 1. ``/NetworkDriver.GetCapabilities`` to obtain the capability of Kuryr - which defaults to ``"local"`` - 2. 
``/IpamDriver.GetDefaultAddressSpcaces`` to get the default address - spaces used for the IPAM - -2. libnetwork registers Kuryr as a remote driver - -3. A user makes requests against libnetwork with the network driver specifier for Kuryr - - - i.e., ``--driver=kuryr`` or ``-d kuryr`` **and** ``--ipam-driver=kuryr`` - for the Docker CLI - -4. libnetwork makes API calls against Kuryr - -5. Kuryr receives the requests and calls Neutron APIs with `Neutron client`_ - -6. Kuryr receives the responses from Neutron and compose the responses for - libnetwork - -7. Kuryr returns the responses to libnetwork - -8. libnetwork stores the returned information to its key/value datastore - backend - - - the key/value datastore is abstracted by `libkv`_ - - -Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking ---------------------------------------------------------------------------------- -1. A user creates a network ``foo`` with the subnet information - :: - - $ sudo docker network create --driver=kuryr --ipam-driver=kuryr \ - --subnet 10.0.0.0/16 --gateway 10.0.0.1 --ip-range 10.0.0.0/24 foo - 286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364 - - This makes a HTTP POST call on ``/IpamDriver.RequestPool`` with the following - JSON data. - :: - - { - "AddressSpace": "global_scope", - "Pool": "10.0.0.0/16", - "SubPool": "10.0.0.0/24", - "Options": null - "V6": false - } - - The value of ``SubPool`` comes from the value specified in ``--ip-range`` - option in the command above and value of ``AddressSpace`` will be ``global_scope`` or ``local_scope`` depending on value of ``capability_scope`` configuration option. Kuryr creates a subnetpool, and then returns - the following response. - :: - - { - "PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376", - "Pool": "10.0.0.0/16", - "Data": {} - } - - If the ``--gateway`` was specified like the command above, another HTTP POST - call against ``/IpamDriver.RequestAddress`` follows with the JSON data below. - :: - - { - "Address": "10.0.0.1", - "PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376", - "Options": null, - } - - As the IPAM driver Kuryr allocates a requested IP address and returns the - following response. - :: - - { - "Address": "10.0.0.1/16", - "Data": {} - } - - Finally a HTTP POST call on ``/NetworkDriver.CreateNetwork`` with the - following JSON data. - :: - - { - "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364", - "IPv4Data": [{ - "Pool": "10.0.0.0/16", - "Gateway": "10.0.0.1/16", - "AddressSpace": "" - }], - "IPv6Data": [], - "Options": {"com.docker.network.generic": {}} - } - - The Kuryr remote network driver will then generate a Neutron API request to - create subnet with pool cidr and an underlying Neutron network. When the - Neutron subnet and network has been created, the Kuryr remote network driver - will generate an empty success response to the docker daemon. Kuryr tags the - Neutron network with the NetworkID from docker. - -2. A user launches a container against network ``foo`` - :: - - $ sudo docker run --net=foo -itd --name=container1 busybox - 78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703 - - This makes a HTTP POST call on ``/IpamDriver.RequestAddress`` with the - following JSON data. 
- :: - - { - "Address": "", - "PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376", - "Options": null, - } - - The IPAM driver Kuryr sends a port creation request to neutron and returns the following response with neutron provided ip address. - :: - - { - "Address": "10.0.0.2/16", - "Data": {} - } - - - Then another HTTP POST call on ``/NetworkDriver.CreateEndpoint`` with the - following JSON data is made. - :: - - { - "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364", - "Interface": { - "AddressIPv6": "", - "MacAddress": "", - "Address": "10.0.0.2/16" - }, - "Options": { - "com.docker.network.endpoint.exposedports": [], - "com.docker.network.portmap": [] - }, - "EndpointID": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd" - } - - The Kuryr remote network driver then generates a Neutron API request to - fetch port with the matching fields for interface in the request. Kuryr - then updates this port's name, tagging it with endpoint ID. - - Following steps are taken: - - 1) On the endpoint creation Kuryr examines if there's a Port with CIDR - that corresponds to Address or AddressIPv6 requested. - 2) If there's a Port, Kuryr tries to reuse it without creating a new - Port. Otherwise it creates a new one with the given address. - 3) Kuryr tags the Neutron port with EndpointID. - - When the Neutron port has been updated, the Kuryr remote driver will - generate a response to the docker daemon in following form: - (https://github.com/docker/libnetwork/blob/master/docs/remote.md#create-endpoint) - :: - - { - "Interface": {"MacAddress": "08:22:e0:a8:7d:db"} - } - - - On receiving success response, libnetwork makes a HTTP POST call on ``/NetworkDriver.Join`` with - the following JSON data. - :: - - { - "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364", - "SandboxKey": "/var/run/docker/netns/052b9aa6e9cd", - "Options": null, - "EndpointID": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd" - } - - Kuryr connects the container to the corresponding neutron network by doing - the following steps: - - 1) Generate a veth pair. - 2) Connect one end of the veth pair to the container (which is running in a - namespace that was created by Docker). - 3) Perform a neutron-port-type-dependent VIF-binding to the corresponding - Neutron port using the VIF binding layer and depending on the specific - port type. - - After the VIF-binding is completed, the Kuryr remote network driver - generates a response to the Docker daemon as specified in the libnetwork - documentation for a join request. - (https://github.com/docker/libnetwork/blob/master/docs/remote.md#join) - -3. A user requests information about the network - :: - - $ sudo docker network inspect foo - { - "Name": "foo", - "Id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364", - "Scope": "local", - "Driver": "kuryr", - "IPAM": { - "Driver": "default", - "Config": [{ - "Subnet": "10.0.0.0/16", - "IPRange": "10.0.0.0/24", - "Gateway": "10.0.0.1" - }] - }, - "Containers": { - "78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703": { - "endpoint": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd", - "mac_address": "02:42:c0:a8:7b:cb", - "ipv4_address": "10.0.0.2/16", - "ipv6_address": "" - } - } - } - - -4. 
A user connects one more container to the network - :: - - $ sudo docker network connect foo container2 - d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646 - - $ sudo docker network inspect foo - { - "Name": "foo", - "Id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364", - "Scope": "local", - "Driver": "kuryr", - "IPAM": { - "Driver": "default", - "Config": [{ - "Subnet": "10.0.0.0/16", - "IPRange": "10.0.0.0/24", - "Gateway": "10.0.0.1" - }] - }, - "Containers": { - "78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703": { - "endpoint": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd", - "mac_address": "02:42:c0:a8:7b:cb", - "ipv4_address": "10.0.0.2/16", - "ipv6_address": "" - }, - "d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646": { - "endpoint": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc", - "mac_address": "02:42:c0:a8:7b:cc", - "ipv4_address": "10.0.0.3/16", - "ipv6_address": "" - } - } - } - - -5. A user disconnects a container from the network - :: - - $ CID=d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646 - $ sudo docker network disconnect foo $CID - - This makes a HTTP POST call on ``/NetworkDriver.Leave`` with the following - JSON data. - :: - - { - "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364", - "EndpointID": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc" - } - - Kuryr remote network driver will remove the VIF binding between the - container and the Neutron port, and generate an empty response to the - Docker daemon. - - Then libnetwork makes a HTTP POST call on ``/NetworkDriver.DeleteEndpoint`` with the - following JSON data. - :: - - { - "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364", - "EndpointID": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc" - } - - Kuryr remote network driver generates a Neutron API request to delete the - associated Neutron port, in case the relevant port subnet is empty, Kuryr - also deletes the subnet object using Neutron API and generate an empty - response to the Docker daemon: {} - - Finally libnetwork makes a HTTP POST call on ``/IpamDriver.ReleaseAddress`` - with the following JSON data. - :: - - { - "Address": "10.0.0.3", - "PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376" - } - - Kuryr remote IPAM driver generates a Neutron API request to delete the associated Neutron port. - As the IPAM driver Kuryr deallocates the IP address and returns the following response. - :: - - {} - -7. A user deletes the network - :: - - $ sudo docker network rm foo - - This makes a HTTP POST call against ``/NetworkDriver.DeleteNetwork`` with the - following JSON data. - :: - - { - "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364" - } - - Kuryr remote network driver generates a Neutron API request to delete the - corresponding Neutron network and subnets. When the Neutron network and subnets has been deleted, - the Kuryr remote network driver generate an empty response to the docker - daemon: {} - - Then another HTTP POST call on ``/IpamDriver.ReleasePool`` with the - following JSON data is made. - :: - - { - "PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376" - } - - Kuryr delete the corresponding subnetpool and returns the following response. 
- :: - - {} - -Mapping between the CNM and the Neutron's Networking Model ----------------------------------------------------------- - -Kuryr communicates with Neutron via `Neutron client`_ and bridges between -libnetwork and Neutron by translating their networking models. The following -table depicts the current mapping between libnetwork and Neutron models: - -===================== ====================== -libnetwork Neutron -===================== ====================== -Network Network -Sandbox Subnet, Port and netns -Endpoint Port -===================== ====================== - -libnetwork's Sandbox and Endpoint can be mapped into Neutron's Subnet and Port, -however, Sandbox is invisible from users directly and Endpoint is only the -visible and editable resource entity attachable to containers from users' -perspective. Sandbox manages information exposed by Endpoint behind the scene -automatically. - - -Notes on implementing the libnetwork remote driver API in Kuryr ---------------------------------------------------------------- - -1. DiscoverNew Notification: - Neutron does not use the information related to discovery of new resources such - as new nodes and therefore the implementation of this API method does nothing. - -2. DiscoverDelete Notification: - Neutron does not use the information related to discovery of resources such as - nodes being deleted and therefore the implementation of this API method does - nothing. - -.. _libnetwork remote network driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md -.. _libnetwork IPAM driver: https://github.com/docker/libnetwork/blob/master/docs/ipam.md -.. _Neutron: https://wiki.openstack.org/wiki/Neutron -.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model -.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification -.. _Neutron client: http://docs.openstack.org/developer/python-neutronclient/ -.. _plugin discovery mechanism: https://github.com/docker/docker/blob/master/docs/extend/plugin_api.md#plugin-discovery -.. _Adding tags to resources: https://review.openstack.org/#/c/216021/ -.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api -.. _libkv: https://github.com/docker/libkv -.. _IPAM blueprint: https://blueprints.launchpad.net/kuryr/+spec/ipam -.. _Neutron's API reference: http://developer.openstack.org/api-ref-networking-v2.html#createSubnet diff --git a/doc/source/index.rst b/doc/source/index.rst index 0a00e2b8..29c4288b 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -3,7 +3,7 @@ You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. -Welcome to kuryr's documentation! +Welcome to kuryr-libnetwork's documentation! ======================================================== Contents: @@ -12,26 +12,6 @@ Contents: :maxdepth: 2 readme - installation - usage - contributing - releasenotes - -Design and Developer Docs -========================== - -.. toctree:: - :maxdepth: 1 - - devref/index - -Kuryr Specs -=========== - -.. 
toctree:: - :maxdepth: 2 - - specs/index Indices and tables ================== diff --git a/doc/source/installation.rst b/doc/source/installation.rst deleted file mode 100644 index 836bc636..00000000 --- a/doc/source/installation.rst +++ /dev/null @@ -1,12 +0,0 @@ -============ -Installation -============ - -At the command line:: - - $ pip install kuryr - -Or, if you have virtualenvwrapper installed:: - - $ mkvirtualenv kuryr - $ pip install kuryr diff --git a/doc/source/releasenotes.rst b/doc/source/releasenotes.rst deleted file mode 100644 index c7fc6a15..00000000 --- a/doc/source/releasenotes.rst +++ /dev/null @@ -1,5 +0,0 @@ -=============== - Release Notes -=============== - -.. release-notes:: diff --git a/doc/source/specs/existing-neutron-network.rst b/doc/source/specs/existing-neutron-network.rst deleted file mode 100644 index d0fd8c9a..00000000 --- a/doc/source/specs/existing-neutron-network.rst +++ /dev/null @@ -1,176 +0,0 @@ -.. - This work is licensed under a Creative Commons Attribution 3.0 Unported - License. - - http://creativecommons.org/licenses/by/3.0/legalcode - -====================================== -Reuse of the existing Neutron networks -====================================== - -https://blueprints.launchpad.net/kuryr/+spec/existing-neutron-network - -The current Kuryr implementation assumes the Neutron networks, subnetpools, -subnets and ports are created by Kuryr and their lifecycles are completely -controlled by Kuryr. However, in the case where users need to mix the VM -instances and/or the bare metal nodes with containers, the capability of -reusing existing Neutron networks for implementing Kuryr networks becomes -valuable. - - -Problem Description -------------------- - -The main use case being addressed in this spec is described below: - -* Use of existing Neutron network and subnet resources created independent of - Kuryr - -With the addition of Tags to neutron resources -`Add tags to neutron resources spec`_ -the association between container networks and Neutron networks is -implemented by associating tag(s) to Neutron networks. In particular, -the container network ID is stored in such tags. Currently the -maximum size for tags is 64 bytes. Therefore, we currently use two -tags for each network to store the corresponding Docker ID. - - -Proposed Change ---------------- - -This specification proposes to use the ``Options`` that can be specified by -user during the creation of Docker networks. We propose to use either the -Neutron network uuid or name to identify the Neutron network to use. If the -Neutron network uuid or name is specified but such a network does not exist or -multiple such networks exist in cases where a network name is specified, the -create operation fails. Otherwise, the existing network will be used. -Similarly, if a subnet is not associated with the existing network it will be -created by Kuryr. Otherwise, the existing subnet will be used. - -The specified Neutron network is tagged with a well known string such that it -can be verified whether it already existed at the time of the creation of the -Docker network or not. - - -.. NOTE(banix): If a Neutron network is specified but it is already - associated with an existing Kuryr network we may refuse the request - unless there are use cases which allow the use of a Neutron network - for realizing more than one Docker networks. - - -.. _workflow: - -Proposed Workflow -~~~~~~~~~~~~~~~~~ - -1. 
A user creates a Docker network and binds it to an existing Neutron network - by specifying it's uuid: - :: - - $ sudo docker network create --driver=kuryr --ipam-driver=kuryr \ - --subnet 10.0.0.0/16 --gateway 10.0.0.1 --ip-range 10.0.0.0/24 \ - -o neutron.net.uuid=25495f6a-8eae-43ff-ad7b-77ba57ed0a04 \ - foo - 286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364 - - $ sudo docker network create --driver=kuryr --ipam-driver=kuryr \ - --subnet 10.0.0.0/16 --gateway 10.0.0.1 --ip-range 10.0.0.0/24 \ - -o neutron.net.name=my_network_name \ - foo - 286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364 - - This creates a Docker network with the given name, ``foo`` in this case, by - using the Neutron network with the specified uuid or name. - - If subnet information is specified by ``--subnet``, ``--gateway``, and - ``--ip-range`` as shown in the command above, the corresponding subnetpools - and subnets are created or the exising resources are appropriately reused - based on the provided information such as CIDR. For instance, if the network - with the given UUID in the command exists and that network has the subnet - whose CIDR is the same as what is given by ``--subnet`` and possibly - ``--ip-range``, Kuryr doesn't create a subnet and just leaves the existing - subnets as they are. Kuryr composes the response from the information of - the created or reused subnet. - - It is expected that when Kuryr driver is used, the Kuryr IPAM driver is also - used. - - If the gateway IP address of the reused Neutron subnet doesn't match with - the one given by ``--gateway``, Kuryr returns the IP address set in the - Neutron subnet nevertheless and the command is going to fail because of - Dockers's validation against the response. - -2. A user inspects the created Docker network - :: - - $ sudo docker network inspect foo - { - "Name": "foo", - "Id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364", - "Scope": "global", - "Driver": "kuryr", - "IPAM": { - "Driver": "kuryr", - "Config": [{ - "Subnet": "10.0.0.0/16", - "IPRange": "10.0.0.0/24", - "Gateway": "10.0.0.1" - }] - }, - "Containers": {} - "Options": { - "com.docker.network.generic": { - "neutron.net.uuid": "25495f6a-8eae-43ff-ad7b-77ba57ed0a04" - } - } - } - - A user can see the Neutron ``uuid`` given in the command is stored in the - Docker's storage and can be seen by inspecting the network. - -3. A user launches a container and attaches it to the network - :: - - $ CID=$(sudo docker run --net=foo -itd busybox) - - This process is identical to the existing logic described in `Kuryr devref`_. - libnetwork calls ``/IpamDriver.RequestAddress``, - ``/NetworkDriver.CreateEndpoint`` and then ``/NetworkDriver.Join``. The - appropriate available IP address shall be returned by Neutron through Kuryr - and a port with the IP address is created under the subnet on the network. - -4. A user terminates the container - :: - - $ sudo docker kill ${CID} - - This process is identical to the existing logic described in `Kuryr devref`_ - as well. libnetwork calls ``/IpamDriver.ReleaseAddress``, - ``/NetworkDriver.Leave`` and then ``/NetworkDriver.DeleteEndpoint``. - -5. A user deletes the network - :: - - $ sudo docker network rm foo - - When an existing Neutron network is used to create a Docker network, it is - tagged such that during the delete operation the Neutron network does not - get deleted. 
Currently, if an existing Neutron network is used, the subnets - associated with it (whether pre existing or newly created) are preserved as - well. In the future, we may consider tagging subnets themselves or the - networks (with subnet information) to decide whether a subnet is to be - deleted or not. - - -Challenges ----------- - -None - -References ----------- - -* `Add tags to neutron resources spec`_ - -.. _Add tags to neutron resources spec: http://docs.openstack.org/developer/neutron/devref/tag.html -.. _Kuryr devref: http://docs.openstack.org/developer/kuryr/devref/index.html diff --git a/doc/source/specs/index.rst b/doc/source/specs/index.rst deleted file mode 100644 index d6ebbb9a..00000000 --- a/doc/source/specs/index.rst +++ /dev/null @@ -1,49 +0,0 @@ -.. - Licensed under the Apache License, Version 2.0 (the "License"); you may - not use this file except in compliance with the License. You may obtain - a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - License for the specific language governing permissions and limitations - under the License. - - Convention for heading levels: - ======= Heading 0 (reserved for the title in a document) - ------- Heading 1 - ~~~~~~~ Heading 2 - +++++++ Heading 3 - ''''''' Heading 4 - (Avoid deeper levels because they do not render well.) - - -Kuryr Specs -=========== - -This section contains detailed specification documents for -different features inside Kuryr. - -.. toctree:: - :maxdepth: 1 - - existing-neutron-network - - -Spec Template --------------- -.. toctree:: - :maxdepth: 3 - - skeleton - template - newton/index - -Indices and tables ------------------- - -* :ref:`genindex` -* :ref:`modindex` -* :ref:`search` diff --git a/doc/source/specs/newton/index.rst b/doc/source/specs/newton/index.rst deleted file mode 100644 index f2ffb546..00000000 --- a/doc/source/specs/newton/index.rst +++ /dev/null @@ -1,43 +0,0 @@ -.. - Licensed under the Apache License, Version 2.0 (the "License"); you may - not use this file except in compliance with the License. You may obtain - a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - License for the specific language governing permissions and limitations - under the License. - - Convention for heading levels: - ======= Heading 0 (reserved for the title in a document) - ------- Heading 1 - ~~~~~~~ Heading 2 - +++++++ Heading 3 - ''''''' Heading 4 - (Avoid deeper levels because they do not render well.) - - -Mitaka Specifications -===================== - -This section contains detailed specification documents for -different features inside Kuryr. - - -Spec ----- -.. toctree:: - :maxdepth: 1 - - nested_containers - kuryr_k8s_integration - -Indices and tables ------------------- - -* :ref:`genindex` -* :ref:`modindex` -* :ref:`search` diff --git a/doc/source/specs/newton/kuryr_k8s_integration.rst b/doc/source/specs/newton/kuryr_k8s_integration.rst deleted file mode 100644 index 8d30f53e..00000000 --- a/doc/source/specs/newton/kuryr_k8s_integration.rst +++ /dev/null @@ -1,303 +0,0 @@ -.. 
- This work is licensed under a Creative Commons Attribution 3.0 Unported - License. - - http://creativecommons.org/licenses/by/3.0/legalcode - -============================ -Kuryr Kubernetes Integration -============================ - -https://blueprints.launchpad.net/kuryr/+spec/kuryr-k8s-integration - -This spec proposes how to integrate Kubernetes Bare Metal cluster with Neutron -being used as network provider. - -Kubernetes is a platform for automating deployment, scaling and operations of -application containers across clusters of hosts. There are already a number of -implementations of kubernetes network model, such as Flannel, Weave, Linux -Bridge, OpenvSwitch, Calico as well as other vendor implementations. Neutron -already serves as a common way to support various networking providers via -common API. Therefore, using neutron to provide kubernetes networking will -enable different backend support in a common way. - -This approach provides clear benefit for operators who will have variety of -networking choices that already supported via neutron. - - -Problem Description -=================== -Application developers usually are not networking engineers. They should be -able to express the application intent. Currently, there is no integration -between kubernetes and Neutron. Kuryr should bridge the gap between kubernetes -and neutron by using the application intent to infer the connectivity and -isolation requirements necessary to provision the networking entities in a -consistent way. - -Kubernetes Overview -------------------- - -Kubernetes API abstractions: - -**Namespace** - Serves as logical grouping of partition resources. Names of resources need to - be unique within a namespace, but not across namespaces. - -**Pod** - Contains a group of tightly coupled containers that share single network - namespace. Pod models an application-specific "logical host" in a - containerized environment. It may contain one or more containers which are - relatively tightly coupled. Each pod gets its own IP that is also an IP of - the contained Containers. - -**Deployment/Replication Controller** - Ensures the requested number of pods are running at any time. - -**Service** - Is an abstraction which defines a logical set of pods and a policy by which - to access them. The set of service endpoints, usually pods that implement a - given service is defined by the label selector. The default service type - (ClusterIP) is used to provide consistent application inside the kubernetes - cluster. Service receives a service portal (VIP and port). Service IPs are - only available inside the cluster. - Service can abstract access not only to pods. For example, it can be for - external database cluster, service in another namespace, etc. In such case - service does not have a selector and endpoint are defined as part of the - service. The service can be headless (clusterIP=None). For such Services, - a cluster IP is not allocated. DNS should return multiple addresses for the - Service name, which point directly to the Pods backing the Service. - To receive traffic from the outside, service should be assigned an external - IP address. - For more details on service, please refer to [1]_. - -Kubernetes provides two options for service discovery, environments variables -and DNS. Environment variables are added for each active service when pod is -run on the node. DNS is kubernetes cluster add-on that provides DNS server, -more details on this below. 
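
As a concrete illustration of the two discovery paths, the short Python sketch below shows how an application inside a pod might locate a Service, first through the environment variables injected by the kubelet and then through the DNS add-on; the ``redis-master`` service name and the ``cluster.local`` domain are illustrative assumptions, not requirements of this integration. ::

    import os
    import socket

    def discover_service(name, namespace="default"):
        """Find a Service VIP via env vars first, falling back to DNS."""
        # Variables such as REDIS_MASTER_SERVICE_HOST/_PORT are injected
        # for every Service that existed when the pod was started.
        prefix = name.upper().replace("-", "_")
        host = os.environ.get(prefix + "_SERVICE_HOST")
        port = os.environ.get(prefix + "_SERVICE_PORT")
        if host and port:
            return host, int(port)
        # The DNS add-on (SkyDNS here) serves records for Service names;
        # "cluster.local" is the commonly used default cluster domain.
        fqdn = "%s.%s.svc.cluster.local" % (name, namespace)
        return socket.gethostbyname(fqdn), None

    print(discover_service("redis-master"))
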
- -Kubernetes has two more powerful tools, labels and annotations. Both can be -attached to the API objects. Labels are an arbitrary key/value pairs. Labels -do not provide uniqueness. Labels are queryable and used to organize and to -select subsets of objects. - -Annotations are string keys and values that can be used by external tooling to -store arbitrary metadata. - -More detailed information on k8s API can be found in [2]_ - - -Network Requirements -^^^^^^^^^^^^^^^^^^^^ -k8s imposes some fundamental requirements on the networking implementation: - -* All containers can communicate without NAT. - -* All nodes can communicate with containers without NAT. - -* The IP the containers sees itself is the same IP that others see. - -The kubernetes model is for each pod to have an IP in a flat shared namespace -that allows full communication with physical computers and containers across -the network. The above approach makes it easier than native Docker model to -port applications from VMs to containers. More on kubernetes network model -is here [3]_. - - -Use Cases ---------- -The kubernetes networking should address requirements of several stakeholders: - -* Application developer, the one that runs its application on the k8s cluster - -* Cluster administrator, the one that runs the k8s cluster - -* Network infrastructure administrator, the one that provides the physical - network - -Use Case 1: -^^^^^^^^^^^ -Support current kubernetes network requirements that address application -connectivity needs. This will enable default kubernetes behavior to allow all -traffic from all sources inside or outside the cluster to all pods within the -cluster. This use case does not add multi-tenancy support. - -Use Case 2: -^^^^^^^^^^^ -Application isolation policy support. -This use case is about application isolation policy support as it is defined -by kubernetes community, based on spec [4]_. Network isolation policy will -impose limitations on the connectivity from an optional set of traffic sources -to an optional set of destination TCP/UDP ports. -Regardless of network policy, pods should be accessible by the host on which -they are running to allow local health checks. This use case does not address -multi-tenancy. - -More enhanced use cases can be added in the future, that will allow to add -extra functionality that is supported by neutron. - - -Proposed Change -=============== - - -Model Mapping -------------- - -In order to support kubernetes networking via neutron, we should define how -k8s model maps into neutron model. -With regards to the first use case, to support default kubernetes networking -mode, the mapping can be done in the following way: - -+-----------------+-------------------+---------------------------------------+ -| **k8s entity** | **neutron entity**| **notes** | -+=================+===================+=======================================+ -|namespace | network | | -+-----------------+-------------------+---------------------------------------+ -|cluster subnet | subnet pool | subnet pool for subnets to allocate | -| | | Pod IPs. Current k8s deployment on | -| | | GCE uses subnet per node to leverage | -| | | advanced routing. 
This allocation | -| | | scheme should be supported as well | -+-----------------+-------------------+---------------------------------------+ -|service cluster | subnet | VIP subnet, service VIP will be | -|ip range | | allocated from | -+-----------------+-------------------+---------------------------------------+ -|external subnet | floating ip pool | To allow external access to services,| -| | external network | each service should be assigned with | -| | router | external (floating IP) router is | -| | | required to enable north-south traffic| -+-----------------+-------------------+---------------------------------------+ -|pod | port | A port gets its IP address from the | -| | | cluster subnet pool | -+-----------------+-------------------+---------------------------------------+ -|service | load balancer | each endpoint (pod) is a member in the| -| | | load balancer pool. VIP is allocated | -| | | from the service cluster ip range. | -+-----------------+-------------------+---------------------------------------+ - -k8s Service Implementation -^^^^^^^^^^^^^^^^^^^^^^^^^^ -Kubernetes default **ClusterIP** service type is used to expose service inside -the cluster. If users decide to expose services to external traffic, they will -assign ExternalIP to the services they choose to expose. Kube-proxy should be -an optional part of the deployment, since it may not work with some neutron -backend solutions, i.e. MidoNet or Contrail. Kubernetes service will be mapped -to the neutron Load Balancer, with ClusterIP as the load balancer VIP and -EndPoints (Pods) are members of the load balancer. -Once External IP is assigned, it will create FIP on external network and -associate it with the VIP. - - -Isolation Policy -^^^^^^^^^^^^^^^^ -In order to support second use case, the application isolation policy mode, -requested policy should be translated into security group that reflects the -requested ACLs as the group rules. This security group will be associated with -pods that policy is applied to. Kubernetes namespace can be used as isolation -scope of the contained Pods. For isolated namespace, all incoming connections -to pods in that namespace from any source inside or outside of the Kubernetes -cluster will be denied unless allowed by a policy. -For non-isolated namespace, all incoming connections to pods in that namespace -will be allowed. -The exact translation details are provided in the [5]_. - -As an alternative, and this goes beyond neutron, it seems that more native way -might be to use policy (intent) based API to request the isolation policy. -Group Based Policy can be considered, but this will be left for the later phase. - -Service Discovery ------------------ -Service discovery should be supported via environment variables. -Kubernetes also offers a DNS cluster add-on to support application services name -resolution. It uses SkyDNS with helper container, kube2sky to bridge between -kubernetes to SkyDNS and etcd to maintain services registry. -Kubernetes Service DNS names can be resolved using standard methods inside the -pods (i.e. gethostbyname). DNS server runs as kubernetes service with assigned -static IP from the service cluster ip range. Both DNS server IP and domain are -configured and passed to the kubelet service on each worker node that passes it -to containers. SkyDNS service is deployed in the kube-system namespace. -This integration should enable SkyDNS support as well as it may add support -for external DNS servers. 
Since SkyDNS service will be deployed as any other -k8s service, this should just work. -Other alternatives for DNS, such as integration with OpenStack Designate for -local DNS resolution by port name will be considered for later phases. - - -Integration Decomposition -------------------------- - -The user interacts with the system via the kubectl cli or directly via REST API -calls. Those calls define Kubernetes resources such as RC, Pods and services. -The scheduler sees the requests for Pods and assigns them to a specific worker -nodes. - -On the worker nodes, kubelet daemons see the pods that are being scheduled for -the node and take care of creating the Pods, i.e. deploying the infrastructure -and application containers and ensuring the required connectivity. - -There are two conceptual parts that kuryr needs to support: - -API Watcher -^^^^^^^^^^^ -To watch kubernetes API server for changes in services and pods and later -policies collections. -Upon changes, it should map services/pods into the neutron constructs, -ensuring connectivity. It should use neutron client to invoke neutron API to -maintain networks, ports, load balancers, router interfaces and security groups. -The API Watcher will add allocated port details to the Pod object to make it -available to the kubelet process and eventually to the kuryr CNI driver. - -CNI Driver -^^^^^^^^^^ -To enable CNI plugin on each worker node to setup, teardown and provide status -of the Pod, more accurately of the infrastructure container. Kuryr will provide -CNI Driver that implements [6]_. In order to be able to configure and report an -IP configuration, the Kuryr CNI driver must be able to access IPAM to get IP -details for the Pod. The IP, port UUID, GW and port type details should be -available to the driver via **CNI_ARGS** in addition to the standard content:: - - CNI_ARGS=K8S_POD_NAMESPACE=default;\ - K8S_POD_NAME=nginx-app-722l8;\ - K8S_POD_INFRA_CONTAINER_ID=8ceb00926acf251b34d70065a6158370953ab909b0745f5f4647ee6b9ec5c250\ - PORT_UUID=a28c7404-7495-4557-b7fc-3e293508dbc6,\ - IPV4=10.0.0.15/16,\ - GW=10.0.0.1,\ - PORT_TYPE=midonet - -For more details on kuryr CNI Driver, see [7]_. - -Kube-proxy service that runs on each worker node and implements the service in -native implementation is not required since service is implemented via neutron -load balancer. - - -Community Impact ----------------- - -This spec invites community to collaborate on unified solution to support -kubernetes networking by using neutron as a backend via Kuryr. - - -Implementation -============== - -Assignee(s) ------------ - -TBD - -Work Items ----------- - -TBD - - -References -========== -.. [1] http://kubernetes.io/v1.1/docs/user-guide/services.html -.. [2] http://kubernetes.io/docs/api/ -.. [3] http://kubernetes.io/docs/admin/networking/#kubernetes-model -.. [4] https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug -.. [5] https://review.openstack.org/#/c/290172/ -.. [6] https://github.com/appc/cni/blob/master/SPEC.md -.. [7] https://blueprints.launchpad.net/kuryr/+spec/kuryr-cni-plugin diff --git a/doc/source/specs/newton/nested_containers.rst b/doc/source/specs/newton/nested_containers.rst deleted file mode 100644 index a9f526da..00000000 --- a/doc/source/specs/newton/nested_containers.rst +++ /dev/null @@ -1,527 +0,0 @@ -.. - This work is licensed under a Creative Commons Attribution 3.0 Unported - License. 
- - http://creativecommons.org/licenses/by/3.0/legalcode - -============================================================================ -Networking for Nested Containers in OpenStack / Magnum - Neutron Integration -============================================================================= - -Launchpad blueprint: - -https://blueprints.launchpad.net/kuryr/+spec/containers-in-instances - -This blueprint proposes how to integrate Magnum with Neutron based -networking and how the problem of networking for nested containers -can be solved. - - -Problem Description -=================== - -Magnum (containers-as-a-service for OpenStack) provisions containers -inside Nova instances and those instances use standard Neutron -networking. These containers are referred to as nested containers. -Currently, there is no integration between Magnum resources and -Neutron and the nested containers are served networking outside -of that provided by OpenStack (Neutron) today. - -Definitions ------------ - -COE - Container Orchestration Engine - -Bay - A Magnum resource that includes at least one host to run containers on, - and a COE to manage containers created on hosts within the bay. - -Baymodel - An object that stores template information about the bay which is - used to create new bays consistently. - -Pod - Is the smallest deployable unit that can be created, scheduled, and - managed within Kubernetes. - -deviceowner (in Neutron ports) - device_owner is an attribute which is used internally by Neutron. - It identifies the service which manages the port. For example - router interface, router gateway will have their respective - device owners entries. Similarly, Neutron ports attached to Nova - instances have device_owner as compute. - - -Requirements ------------- - -Following are the requirements of Magnum around networking: - -1. Provide networking capabilities to containers running in Nova - instances. - -2. Magnum uses Heat to orchestrate multi-tenant application container - environments. Heat uses user-data scripts underneath. Therefore, - Kuryr must have the ability to be deployed/orchestrated using Heat - via the scripts. - -3. Current Magnum container networking implementations such as Flannel, - provide networking connectivity to containers that reside across - multiple Nova instances. Kuryr must provide multi-instance container - networking capabilities. The existing networking capabilities like - Flannel that Magnum uses will remain and Kuryr to be introduced - in parallel. Decision on default is for later and default may vary - based on the type of Magnum Bay. Magnum currently supports three - types of Bays: Swarm, Kubernetes, and Mesos. They are - referred to as COEs (Container Orchestration Engine). - -4. Kuryr must provide a simple user experience like "batteries included - but replaceable" philosophy. Magnum must have the ability to deploy - Kuryr without any user intervention, but allow more advanced users - to modify Kuryr's default settings as needed. - -5. If something needs to be installed in the Nova VMs used by Magnum, - it needs to be installed in the VMs in a secure manner. - -6. Communication between Kuryr and other services must be secure. For example, - if there is a Kuryr agent running inside the Nova instances, the - communication between Kuryr components (Kuryr, Kuryr Agent), - Neutron-Kuryr, Magnum-Kuryr should all be secure. - -7. Magnum Bays (Swarm, Kubernetes, etc..) must work the same or - better than they do with existing network providers such as Flannel. - -8. 
Kuryr must scale just as well, if not better, than existing container - networking providers. - - -Use cases ----------- - -* Any container within a nova instance (VM, baremetal, container) - may communicate with any other nova instance (VM, baremetal, container), - or container therein, regardless if the containers are on the same nova - instance, same host, or different hosts within the same Magnum bay. - Such containers shall be able to communicate with any OpenStack cloud - resource in the same Neutron network as the Magnum bay nodes, including - (but not limited to) Load Balancers, Databases, and other Nova instances. - -* Any container should be able to have access to any Neutron resource and - it's capabilities. Neutron resources include DHCP, router, floating IPs etc. - - -Proposed Change -=============== - -The proposal is to leverage the concept of VLAN aware VMs/Trunk Ports [2], -that would be able to discriminate the traffic coming from VM by using -VLAN tags. The trunk port would get attached to a VM and be capable of -receiving both untagged and tagged traffic. Each VLAN would be represented -by a sub port (Neutron ports). A subport must have a network attached. -Each subport will have an additional parameter of VID. VID can be of -different types and VLAN is one of the options. - -Each VM running containers by Magnum would need to have a Kuryr container -agent [3]. Kuryr container agent would be like a CNI/CNM plugin, capable of -assigning IPs to the container interfaces and tagging with VLAN IDs. -Magnum baymodel resource can be passed along information for -network type and kuryr will serve Neutron networking. Based on the baymodel, -Magnum can provision necessary services inside the Nova instance using Heat -templates and the scripts Heat uses. The Kuryr container agent would be -responsible for providing networking to the nested containers by tagging -each container interface with a VLAN ID. Kuryr container agent [3] would be -agnostic of COE type and will have different modes based on the COE. -First implementation would support Swarm and the corresponding container -network model via libnetwork. - -There are two mechanisms in which nested containers will be served networking -via Kuryr: - -1. When user interacts with Magnum APIs to provision containers. -2. Magnum allows end-users to access native COE APIs. It means end-users - can alternatively create containers using docker CLI etc. If the - end-users interact with the native APIs, they should be able to get - the same functionality that is available via Magnum interfaces/orchestration. - COEs use underlying container runtimes tools so this option is also applicable - for non-COE APIs as well. - -For the case, where user interacts with Magnum APIs, Magnum would need to -integrate a 'network' option in the container API to choose Neutron networks -for containers. This option will be applicable for baymodels -running kuryr type networking. For each container launched, Magnum would -pick up a network, and talk to the COE to provision the container(s), Kuryr agent -would be running inside the Nova instance as a driver/plugin to COE networking -model and based on the network UUID/name, Kuryr agent will create a subport on -parent trunk port, where Nova instance is attached to, Kuryr will allocate -a VLAN ID and subport creation be invoked in Neutron and that will allocate the -IP address. 
Based on the information returned, Kuryr agent will assign IP to -the container/pod and assign a VLAN, which would match VLAN in the subport -metadata. Once the sub-port is provisioned, it will have an IP address and a -VLAN ID allocated by Neutron and Kuryr respectively. - -For the case, where native COE APIs are used, user would be required to specify -information about Kuryr driver and Neutron networks when launching containers. -Kuryr agent will take care of providing networking to the containers in exactly -the same fashion as it would when Magnum talks to the COEs. - -Now, all the traffic coming from the containers inside the VMs would be -tagged and backend implementation of how those containers communicate -will follow a generic onboarding mechanism. Neutron supports several plugins -and each plugin uses some backend technology. The plugins would be -responsible for implementing VLAN aware VMs Neutron extension and onboard -the container based on tenant UUID, trunk port ID, VLAN ID, network UUID -and sub-port UUID. Subports will have deviceowner=kuryr. At this -point, a plugin can onboard the container using unique classification per -tenant to the relevant Neutron network and nested container would be -onboarded onto Neutron networks and will be capable of passing packets. -The plugins/onboarding engines would be responsible for tagging the packets -with the correct VLAN ID on their way back to the containers. - - -Integration Components ------------------------ - -Kuryr: - -Kuryr and Kuryr Agent will be responsible for providing the networking -inside the Nova instances. Kuryr is the main service/utility running -on the controller node and capabilities like segmentation ID allocation -will be performed there. Kuryr agent will be like a CNI/CNM plugin, -capable of allocating IPs and VLANs to container interfaces. Kuryr -agent will be a helper running inside the Nova instances that can -communicate with Neutron endpoint and Kuryr server. This will require -availability of credentials inside the Bay that Kuryr can use to -communicate. There is a security impact of storing credentials and -it is discussed in the Security Impact section of this document. - -More details on the Kuryr Agent can be found here [3]. - - -Neutron: - -vlan-aware-vms and notion of trunk port, sub-ports from Neutron will be -used in this design. Neutron will be responsible for all the backend -networking that Kuryr will expose via its mechanisms. - -Magnum: - -Magnum will be responsible for launching containers on specified/pre-provisioned -networks, using Heat to provisioning Kuryr components inside Nova instances and passing -along network information to the COEs, which can invoke their networking part. - -Heat: - -Heat templates use use-data scripts to launch tools for containers that Magnum -relies on. The scripts will be updated to handle Kuryr. We should not expect -to run scripts each time a container is started. More details can be -found here [4]. 
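
To complement the example model shown next, here is a rough sketch of the subport provisioning step described above, using python-neutronclient. The credentials, the ``NEUTRON_NET_UUID`` and ``TRUNK_UUID`` placeholders, the VLAN ID 100 and the availability of the ``trunk_add_subports`` call in the installed client are all assumptions; the real flow lives in the Kuryr agent. ::

    from neutronclient.v2_0 import client as neutron_client

    # Placeholder credentials; a deployment would obtain these securely.
    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://controller:5000/v2.0')

    vlan_id = 100  # assumed to come from the Kuryr VLAN/VID allocation engine

    # The subport is an ordinary Neutron port on the container network,
    # marked with the kuryr device owner discussed earlier.
    subport = neutron.create_port({'port': {
        'network_id': 'NEUTRON_NET_UUID',
        'name': 'S1',
        'device_owner': 'kuryr',
    }})['port']

    # Attach it to the instance's trunk with a VLAN segmentation ID; the
    # method name follows the vlan-aware-vms API and is assumed here.
    neutron.trunk_add_subports('TRUNK_UUID', {'sub_ports': [{
        'port_id': subport['id'],
        'segmentation_type': 'vlan',
        'segmentation_id': vlan_id,
    }]})

    # Neutron allocated the IP the agent will configure inside the container.
    print(subport['fixed_ips'])
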
- -Example of model:: - -+-------------------------------+ +-------------------------------+ -| +---------+ +---------+ | | +---------+ +---------+ | -| | c1 | | c2 | | | | c3 | | c4 | | -| +---------+ +---------+ | | +---------+ +---------+ | -| | | | -| VM1 | | VM2 | -| | | | -| | | | -+---------+------------+--------+ +---------+------------+--------+ - |Trunk Port1 | |Trunk Port2 | - +------------+ +------------+ - /|\ /|\ - / | \ / | \ - / | \ / | \ - +--+ +-++ +--+ +--+ +-++ +--+ - |S1| |S2| |S3| |S4| |S5| |S6| - +-++ +--+ +-++ +--+ +-++ +-++ - | | | | | - | | | +---+ | | - | | +---+N1+ +-+N2+-----------+ - | | | | | | - +-------------+ | | | - | | | | - + ++ x x +-+ + - N3+--------+x x+-----------+N4 - x x - x Router x - x x - x x - - -C1-4 = Magnum containers -N1-4 = Neutron Networks and Subnets -S1,S3,S4,S6 = Subports -S2,S5 = Trunk ports (untagged traffic) - -In the example above, Magnum launches four containers (c1, c2, c3, c4) -spread across two Nova instances. There are four Neutron -networks(N1, N2, N3, N4) in the deployment and all of them are -connected to a router. Both the Nova instances (VM1 and VM2) have one -NIC each and a corresponding trunk port. Each trunk port has three -sub-ports: S1, S2, S3 and S4, S5, S6 for VM1 and VM2 respectively. -The untagged traffic goes to S2 and S5 and tagged to S1, S3, S4 and -S6. On the tagged sub-ports, the tags will be stripped and packets -will be sent to the respective Neutron networks. - -On the way back, the reverse would be applied and each sub-port to VLAN -mapping be checked using something like following and packets will be -tagged: - -+------+----------------------+---------------+ -| Port | Tagged(VID)/untagged | Packets go to | -+------+----------------------+---------------+ -| S1 | 100 | N1 | -| S2 | untagged | N3 | -| S3 | 200 | N1 | -| S4 | 100 | N2 | -| S5 | untagged | N4 | -| S6 | 300 | N2 | -+------+----------------------+---------------+ - -One thing to note over here is S1.vlan == S4.vlan is a valid scenario -since they are part of different trunk ports. It is possible that some -implementations do not use VLAN IDs, the VID can be something -other than VLAN ID. The fields in the sub-port can be treated as key -value pairs and corresponding support can be extended in the Kuryr agent -if there is a need. - -Example of commands: - -:: - - magnum baymodel-create --name \ - --image-id \ - --keypair-id \ - --external-network-id \ - --dns-nameserver \ - --flavor-id \ - --docker-volume-size \ - --coe \ - --network-driver kuryr - -:: - - neutron port-create --name S1 N1 \ - --device-owner kuryr - -:: - - neutron port-create --name S2 N3 - - -:: - - # trunk-create may refer to 0, 1 or more subport(s). - $ neutron trunk-create --port-id PORT \ - [--subport PORT[,SEGMENTATION-TYPE,SEGMENTATION-ID]] \ - [--subport ...] - -Note: All ports referred must exist. - -:: - - # trunk-add-subport adds 1 or more subport(s) - $ neutron trunk-subport-add TRUNK \ - PORT[,SEGMENTATION-TYPE,SEGMENTATION-ID] \ - [PORT,...] - -:: - - magnum container-create --name \ - --image \ - --bay \ - --command \ - --memory \ - --network network_id - - -Magnum changes --------------- - -Magnum will launch containers on Neutron networks. -Magnum will provision the Kuryr Agent inside the Nova instances via Heat templates. - - -Alternatives ------------- - -None - - -Data Model Impact (Magnum) --------------------------- - -This document adds the network_id attribute to the container database -table. 
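
As an illustration only, a minimal alembic sketch of such a change is shown below; the ``container`` table name and the 36-character column size are assumptions and not the final migration. ::

    """Add network_id to container (illustrative sketch)."""

    import sqlalchemy as sa
    from alembic import op

    # Revision identifiers would be generated by alembic.
    revision = 'xxxxxxxxxxxx'
    down_revision = None

    def upgrade():
        # Nullable so that existing rows remain valid after the upgrade.
        op.add_column('container',
                      sa.Column('network_id', sa.String(36), nullable=True))

    def downgrade():
        op.drop_column('container', 'network_id')
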
A migration script will be provided to support the attribute -being added. :: - - +-------------------+-----------------+---------------------------------------------+ - | Attribute | Type | Description | - +===================+=================+=============================================+ - +-------------------+-----------------+---------------------------------------------+ - | network_id | uuid | UUID of a Neutron network | - +-------------------+-----------------+---------------------------------------------+ - - -REST API Impact (Magnum) -------------------------- - -This document adds network_id attribute to the Container -API class. :: - - +-------------------+-----------------+---------------------------------------------+ - | Attribute | Type | Description | - +===================+=================+=============================================+ - +-------------------+-----------------+---------------------------------------------+ - | network_id | uuid | UUID of a Neutron network | - +-------------------+-----------------+---------------------------------------------+ - - -Security Impact ---------------- - -Kuryr Agent running inside Nova instances will communicate with OpenStack APIs. For this to -happen, credentials will have to be stored inside Nova instances hosting Bays. - -This arrangement poses a security threat that credentials might be compromised and there -could be ways malicious containers could get access to credentials or Kuryr Agent. -To mitigate the impact, there are multiple options: - -1. Run Kuryr Agent in two modes: primary and secondary. Only primary mode has access to the - credentials and talks to Neutron and fetches information about available resources - like IPs, VLANs. Secondary mode has no information about credentials and performs operations - based on information coming in the input like IP, VLAN etc. Primary mode can be tied to the - Kubernetes, Mesos master nodes. In this option, containers will be running on nodes other - than the ones that talk to OpenStack APIs. -2. Containerize the Kuryr Agent to offer isolation from other containers. -3. Instead of storing credentials in text files, use some sort of binaries - and make them part of the container running Kuryr Agent. -4. Have an Admin provisioned Nova instance that carries the credentials - and has connectivity to the tenant Bays. The credentials are accessible only to the Kuryr - agent via certain port that is allowed through security group rules and secret key. - In this option, operations like VM snapshot in tenant domains will not lead to stolen credentials. -5. Introduce Keystone authentication mechanism for Kuryr Agent. In case of a compromise, this option - will limit the damage to the scope of permissions/roles the Kuryr Agent will have. -6. Use HTTPS for communication with OpenStack APIs. -7. Introduce a mechanism/tool to detect if a host is compromised and take action to stop any further - damage. - -Notifications Impact --------------------- - -None - -Other End User Impact ---------------------- - -None - -Performance Impact ------------------- - -For containers inside the same VM to communicate with each other, -the packets will have to step outside the VMs and come back in. - - -IPv6 Impact ------------ - -None - -Other Deployer Impact ---------------------- - -None - -Developer Impact ----------------- - -Extended attributes in Magnum container API to be used. - -Introduction of Kuryr Agent. - -Requires the testing framework changes. 
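
To make mitigation options 5 and 6 from the Security Impact section more concrete, the sketch below shows the Kuryr agent authenticating through Keystone over TLS instead of reading plaintext credentials; the environment variable names and the use of python-neutronclient are assumptions for illustration. ::

    import os

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from neutronclient.v2_0 import client as neutron_client

    # Credentials come from the environment (or a secret store) rather than
    # a plaintext file baked into the instance; variable names are examples.
    auth = v3.Password(
        auth_url=os.environ['OS_AUTH_URL'],     # expected to be an https:// URL
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        project_name=os.environ['OS_PROJECT_NAME'],
        user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
        project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))

    # verify=True enforces TLS certificate checking on every API call.
    sess = session.Session(auth=auth, verify=True)

    # A scoped token is fetched on demand; a compromised instance is limited
    # to whatever the (restricted) Kuryr role is allowed to do.
    neutron = neutron_client.Client(session=sess)
    print(neutron.list_networks())
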
- - -Community Impact ----------------- - -The changes bring significant improvement in the container -networking approach by using Neutron as a backend via Kuryr. - - -Implementation -============== - -Assignee(s) ------------ - - Fawad Khaliq (fawadkhaliq) - -Work Items ----------- - -Magnum: -* Extend the Magnum API to support new network attribute. -* Extend the Client API to support new network attribute. -* Extend baymodel objects to support new container - attributes. Provide a database migration script for - adding the attribute. -* Extend unit and functional tests to support new port attribute - in Magnum. - -Heat: -* Update Heat templates to support the Magnum container - port information. - -Kuryr: -* Kuryr container agent. -* Kuryr VLAN/VID allocation engine. -* Extend unit test cases in Kuryr for the agent and VLAN/VID allocation - engine. -* Other tempest tests. -* Other scenario tests. - - -Dependencies -============ - -VLAN aware VMs [2] implementation in Neutron - - -Testing -======= - -Tempest and functional tests will be created. - - -Documentation Impact -==================== - -Documentation will have to updated to take care of the -Magnum container API changes and use the Kuryr network -driver. - -User Documentation ------------------- - -Magnum and Kuryr user guides will be updated. - -Developer Documentation ------------------------ - -The Magnum and Kuryr developer quickstart documents will be -updated to include the nested container use case and the -corresponding details. - - -References -========== - -[1] https://review.openstack.org/#/c/204686/7 -[2] http://specs.openstack.org/openstack/neutron-specs/specs/mitaka/vlan-aware-vms.html -[3] https://blueprints.launchpad.net/kuryr/+spec/kuryr-agent -[4] https://blueprints.launchpad.net/kuryr/+spec/kuryr-magnum-heat-deployment -[5] http://docs.openstack.org/developer/magnum/ diff --git a/doc/source/specs/skeleton.rst b/doc/source/specs/skeleton.rst deleted file mode 100644 index 7ffc364d..00000000 --- a/doc/source/specs/skeleton.rst +++ /dev/null @@ -1,23 +0,0 @@ -.. - This work is licensed under a Creative Commons Attribution 3.0 Unported - License. - - http://creativecommons.org/licenses/by/3.0/legalcode - -========================================== -Title of your RFE -========================================== - - -Problem Description -=================== - - -Proposed Change -=============== - - -References -========== - - diff --git a/doc/source/specs/template.rst b/doc/source/specs/template.rst deleted file mode 100644 index 521e8d7a..00000000 --- a/doc/source/specs/template.rst +++ /dev/null @@ -1,64 +0,0 @@ -.. - This work is licensed under a Creative Commons Attribution 3.0 Unported - License. - - http://creativecommons.org/licenses/by/3.0/legalcode - -==================================== -Example Spec - The title of your RFE -==================================== - -Include the URL of your launchpad RFE: - -https://bugs.launchpad.net/kuryr/+bug/example-id - -Introduction paragraph -- why are we doing this feature? A single paragraph of -prose that **deployers, and developers, and operators** can understand. - -Do you even need to file a spec? Most features can be done by filing an RFE bug -and moving on with life. In most cases, filing an RFE and documenting your -design is sufficient. If the feature seems very large or contentious, then -you may want to consider filing a spec. 
- - -Problem Description -=================== - -A detailed description of the problem: - -* For a new feature this should be use cases. Ensure you are clear about the - actors in each use case: End User vs Deployer - -* For a major reworking of something existing it would describe the - problems in that feature that are being addressed. - -Note that the RFE filed for this feature will have a description already. This -section is not meant to simply duplicate that; you can simply refer to that -description if it is sufficient, and use this space to capture changes to -the description based on bug comments or feedback on the spec. - - -Proposed Change -=============== - -How do you propose to solve this problem? - -This section is optional, and provides an area to discuss your high-level -design at the same time as use cases, if desired. Note that by high-level, -we mean the "view from orbit" rough cut at how things will happen. - -This section should 'scope' the effort from a feature standpoint: how is the -'kuryr end-to-end system' going to look like after this change? What Kuryr -areas do you intend to touch and how do you intend to work on them? The list -below is not meant to be a template to fill in, but rather a jumpstart on the -sorts of areas to consider in your proposed change description. - -You do not need to detail API or data model changes. - - -References -========== - -Please add any useful references here. You are not required to have any -reference. Moreover, this specification should still make sense when your -references are unavailable. diff --git a/doc/source/usage.rst b/doc/source/usage.rst deleted file mode 100644 index c1bc4db4..00000000 --- a/doc/source/usage.rst +++ /dev/null @@ -1,7 +0,0 @@ -======== -Usage -======== - -To use kuryr in a project:: - - import kuryr