Update Kuryr dev docs

Update dev docs following [1]. Fixes include:
1. external references [2]
2. code blocks [3]

[1] http://docs.openstack.org/contributor-guide/index.html
[2] http://docs.openstack.org/contributor-guide/rst-conv/references.html
[3] http://docs.openstack.org/contributor-guide/rst-conv/source-code.html

Change-Id: I4bb57948835a4f352995e39ade5f03c2bfa34957
Closes-Bug: #1511221
Dongcan Ye 2016-07-25 21:40:06 +08:00
parent 4287cf791b
commit ab4591dcbe
5 changed files with 93 additions and 183 deletions

View File

@ -5,9 +5,11 @@ Goals And Use Cases
Kuryr provides networking to Docker containers by leveraging the Neutron APIs
and services. It also provides containerized images for common Neutron plugins.
Kuryr implements a `libnetwork remote driver`_ and maps its calls to OpenStack
`Neutron`_. It works as a translator between libnetwork's
`Container Network Model`_ (CNM) and `Neutron's networking model`_
Kuryr implements a `libnetwork remote driver <https://github.com/docker/libnetwork/blob/master/docs/remote.md>`_
and maps its calls to OpenStack `Neutron <https://wiki.openstack.org/wiki/Neutron>`_.
It works as a translator between libnetwork's
`Container Network Model <https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model>`_ (CNM)
and `Neutron's networking model <https://wiki.openstack.org/wiki/Neutron/APIv2-specification>`_
and provides container-host or container-vm (nested VM) binding.
Using Kuryr any Neutron plugin can be used as a libnetwork remote driver
@ -16,27 +18,29 @@ have the capability of providing the networking backend of Docker with a common
lightweight plugging snippet as they have in nova.
Kuryr takes care of binding the container namespace to the networking
infrastructure by providing a generic layer for `VIF binding`_ depending on the
port type for example Linux bridge port, Open vSwitch port, Midonet port and so
on.
infrastructure by providing a generic layer for `VIF binding <https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism>`_
depending on the port type for example Linux bridge port, Open vSwitch port,
Midonet port and so on.
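As a rough, hedged sketch only (not Kuryr's actual binding code; the interface
names and the Neutron port UUID are made up, and it assumes the pyroute2
library and the ``ovs-vsctl`` tool are available), an Open vSwitch binding
could look along these lines::

  import subprocess

  from pyroute2 import IPDB

  NEUTRON_PORT_ID = '4cbc8b12-1a02-4f60-bf48-4b5c3f6f4d5e'  # hypothetical UUID

  # Create a veth pair; one end stays on the host, the other is meant to be
  # moved into the container's network namespace afterwards.
  ipdb = IPDB()
  ipdb.create(ifname='tap-host0', kind='veth', peer='tap-cont0').commit()

  # Attach the host end to the OVS integration bridge and tag it with the
  # Neutron port id so the L2 agent can wire it up.
  subprocess.check_call([
      'ovs-vsctl', 'add-port', 'br-int', 'tap-host0',
      '--', 'set', 'interface', 'tap-host0',
      'external-ids:iface-id=%s' % NEUTRON_PORT_ID,
  ])
  ipdb.release()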
Kuryr should be the gateway between the containers networking APIs and use
cases and the Neutron APIs and services, and should bridge the gaps between
the two in both domains. It will map the missing parts in Neutron and drive
the changes needed to close the gaps.
Kuryr should address `Magnum`_ project use cases in terms of containers
networking and serve as a unified interface for Magnum or any other OpenStack
project that needs to leverage containers networking through Neutron API.
Kuryr should address `Magnum <https://wiki.openstack.org/wiki/Magnum>`_ project
use cases in terms of containers networking and serve as a unified interface for
Magnum or any other OpenStack project that needs to leverage containers
networking through the Neutron API.
In that regard, Kuryr aims at leveraging Neutron plugins that support VM
nested container's use cases and enhancing Neutron APIs to support these cases
(for example `OVN`_). An etherpad regarding `Magnum Kuryr Integration`_
(for example `OVN <https://launchpad.net/networking-ovn>`_).
An etherpad regarding `Magnum Kuryr Integration <https://etherpad.openstack.org/p/magnum-kuryr>`_
describes the various use cases Kuryr needs to support.
Kuryr should provide containerized Neutron plugins for easy deployment and must
be compatible with OpenStack `Kolla`_ project and its deployment tools. The
containerized plugins have the common Kuryr binding layer which binds the
container to the network infrastructure.
be compatible with the OpenStack `Kolla <https://wiki.openstack.org/wiki/Kolla>`_
project and its deployment tools. The containerized plugins have the common
Kuryr binding layer, which binds the container to the network infrastructure.
Kuryr should leverage Neutron sub-projects and services (in particular LBaaS,
FWaaS, VPNaaS) to support advanced containers networking use cases
@ -47,19 +51,4 @@ Kuryr also support pre-allocating of networks, ports and subnets, and binding
them to Docker networks/endpoints upon creation depending on specific labels
that are passed during Docker creation. There is a patch being merged in Docker
to support providing user labels upon network creation. You can look at this
`User labels in docker patch`_.
References
----------
.. _libnetwork remote driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md
.. _Neutron: https://wiki.openstack.org/wiki/Neutron
.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model
.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification
.. _VIF binding: https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism
.. _Magnum: https://wiki.openstack.org/wiki/Magnum
.. _OVN: https://launchpad.net/networking-ovn
.. _Kolla: https://wiki.openstack.org/wiki/Kolla
.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api
.. _User labels in docker patch: https://github.com/docker/libnetwork/pull/222/files#diff-2b9501381623bc063b38733c35a1d254
.. _Magnum Kuryr Integration: https://etherpad.openstack.org/p/magnum-kuryr
`User labels in docker patch <https://github.com/docker/libnetwork/pull/222/files#diff-2b9501381623bc063b38733c35a1d254>`_.

View File

@ -21,7 +21,7 @@
Design and Developer Docs
==========================
=========================
Kuryr's goal is to bring containers networking to the Neutron core API
and advanced networking services.

View File

@ -16,8 +16,8 @@
Kubernetes API Watcher Design
=============================
This documentation describes the `Kubernetes API`_ watcher daemon component,
**Raven**, of Kuryr.
This documentation describes the `Kubernetes API <http://kubernetes.io/docs/api/>`_
watcher daemon component, **Raven**, of Kuryr.
What is Raven
-------------
@ -35,7 +35,7 @@ race condition because of the lack of the lock or the serialization mechanisms
for the requests against Neutron API.
Raven doesn't take care of the bindings between the virtual ports and the
physical interfaces on worker nodes. It is the responsibility of Kuryr CNI_
physical interfaces on worker nodes. It is the responsibility of the Kuryr `CNI <https://github.com/appc/cni>`_
plugin for K8s and it shall recognize which Neutron port should be bound to the
physical interface associated with the pod to be deployed. So Raven focuses
only on building the virtual network topology translated from the events of the
@ -66,8 +66,8 @@ Namespaces are translated into the networking basis, Neutron networks and
subnets for the cluster and the service using the explicitly predefined values
in the configuration file, or implicitly specified by the environment
variables, e.g., ``FLANNEL_NET=172.16.0.0/16`` as specified
`in the deployment phase`_. Raven also creates Neutron routers for connecting
the cluster subnets and the service subnets.
`in the deployment phase <https://github.com/kubernetes/kubernetes/search?utf8=%E2%9C%93&q=FLANNEL_NET>`_.
Raven also creates Neutron routers for connecting the cluster subnets and the service subnets.
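As a hedged sketch only (the credentials, names and CIDR are illustrative and
this is not Raven's actual code), that translation could map to
python-neutronclient calls along the following lines::

  from neutronclient.v2_0 import client

  # Illustrative credentials; a real deployment would read them from config.
  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://127.0.0.1:5000/v2.0')

  # One cluster network and subnet per namespace; the CIDR could come from
  # e.g. FLANNEL_NET as described above.
  net = neutron.create_network({'network': {'name': 'raven-default-cluster'}})
  subnet = neutron.create_subnet({'subnet': {
      'name': 'raven-default-cluster-subnet',
      'network_id': net['network']['id'],
      'cidr': '172.16.0.0/16',
      'ip_version': 4,
  }})

  # A router connects the cluster subnet with the service subnet.
  router = neutron.create_router({'router': {'name': 'raven-default-router'}})
  neutron.add_interface_router(router['router']['id'],
                               {'subnet_id': subnet['subnet']['id']})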
When each namespace is created, a cluster network that contains a cluster
subnet, a service network that contains a service subnet, and a router that
@ -78,13 +78,13 @@ Pods contain the information required for creating Neutron ports. If pods are
associated with the specific namespace, the ports are created and associated
with the subnets for the namespace.
Although it's optional, Raven can emulate kube-proxy_. This is for the network
controller that leverages isolated datapath from ``docker0`` bridge such as
Open vSwitch datapath. Services contain the information for the emulation. Raven
maps kube-proxy to Neutron load balancers with VIPs. In this case Raven also
creates a LBaaS pool member for each Endpoints to be translated coordinating
with the associated service translation. For "externalIPs" type K8s service,
Raven associates a floating IP with a load balancer for enabling the pubilc
Although it's optional, Raven can emulate `kube-proxy <http://kubernetes.io/docs/user-guide/services/#virtual-ips-and-service-proxies>`_.
This is for the network controller that leverages isolated datapath from ``docker0``
bridge such as Open vSwitch datapath. Services contain the information for the
emulation. Raven maps kube-proxy to Neutron load balancers with VIPs. In this case
Raven also creates an LBaaS pool member for each Endpoints object to be
translated, coordinating with the associated service translation. For
"externalIPs" type K8s services, Raven associates a floating IP with a load
balancer to enable public access.
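As an illustration only (reusing the hypothetical ``neutron`` client from the
sketch above, with ``service_subnet_id`` standing in for the id of the service
subnet, and assuming the Neutron LBaaS v1 API; Raven's real translation may
differ), the emulation could issue calls such as::

  # The service becomes an LBaaS pool plus a VIP on the service subnet.
  pool = neutron.create_pool({'pool': {
      'name': 'raven-nginx-svc',
      'protocol': 'TCP',
      'lb_method': 'ROUND_ROBIN',
      'subnet_id': service_subnet_id,
  }})
  neutron.create_vip({'vip': {
      'name': 'raven-nginx-svc-vip',
      'protocol': 'TCP',
      'protocol_port': 80,
      'address': '10.254.0.10',          # illustrative clusterIP
      'subnet_id': service_subnet_id,
      'pool_id': pool['pool']['id'],
  }})

  # Each address listed in the Endpoints object becomes a pool member.
  neutron.create_member({'member': {
      'pool_id': pool['pool']['id'],
      'address': '172.16.1.3',           # illustrative pod address
      'protocol_port': 80,
  }})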
================= =================
@ -108,9 +108,7 @@ K8s API behaviour
We look at the responses from the pod endpoints as an example.
The following behaviour is based on the 1.2.0 release, which is the latest one
as of March 17th, 2016.
::
as of March 17th, 2016::
$ ./kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
@ -120,9 +118,7 @@ Regular requests
~~~~~~~~~~~~~~~~
If there's no pod, the K8s API server returns the following JSON response that
has the empty list for the ``"items"`` property.
::
has the empty list for the ``"items"`` property::
$ curl -X GET -i http://127.0.0.1:8080/api/v1/pods
HTTP/1.1 200 OK
@ -140,17 +136,13 @@ has the empty list for the ``"items"`` property.
"items": []
}
We deploy a pod as follow.
::
We deploy a pod as follows::
$ ./kubectl.sh run --image=nginx nginx-app --port=80
replicationcontroller "nginx-app" created
Then the response from the API server contains the pod information in
``"items"`` property of the JSON response.
::
``"items"`` property of the JSON response::
$ curl -X GET -i http://127.0.0.1:8080/api/v1/pods
HTTP/1.1 200 OK
@ -266,9 +258,7 @@ for the specific resource name, i.e., ``/api/v1/watch/pods``, or ``watch=true``
query string.
If there's no pod, we get only the response header and the connection is kept
open.
::
open::
$ curl -X GET -i http://127.0.0.1:8080/api/v1/pods?watch=true
HTTP/1.1 200 OK
@ -277,19 +267,14 @@ open.
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
We create a pod as we did for the case without the ``watch=true`` query string.
::
We create a pod as we did for the case without the ``watch=true`` query string::
$ ./kubectl.sh run --image=nginx nginx-app --port=80
replicationcontroller "nginx-app" created
Then we observe that the JSON data corresponding to each event is given on
its own line. The event type is given in the ``"type"`` property of the JSON
data, i.e.,
``"ADDED"``, ``"MODIFIED"`` and ``"DELETED"``.
::
``"ADDED"``, ``"MODIFIED"`` and ``"DELETED"``::
$ curl -X GET -i http://127.0.0.1:8080/api/v1/pods?watch=true
HTTP/1.1 200 OK
@ -310,21 +295,25 @@ Problem Statement
~~~~~~~~~~~~~~~~~
To conform to the I/O bound requirement described in :ref:`k8s-api-behaviour`,
the multiplexed concurrent network I/O is demanded. eventlet_ is used in
various OpenStack projects for this purpose as well as other libraries such as
Twisted_, Tornado_ and gevent_. However, it has problems as described in
"`What's wrong with eventlet?`_" on the OpenStack wiki page.
multiplexed concurrent network I/O is required.
`eventlet <http://eventlet.net/>`_ is used in various OpenStack projects for this
purpose as well as other libraries such as `Twisted <https://twistedmatrix.com/trac/>`_,
`Tornado <http://tornadoweb.org/>`_ and `gevent <http://www.gevent.org/>`_.
However, it has problems as described in
"`What's wrong with eventlet? <https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio#What.27s_wrong_with_eventlet.3F>`_"
on the OpenStack wiki page.
asyncio and Python 3 by default
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
asyncio_ was introduced as a standard asynchronous I/O library in Python 3.4.
Its event loop and coroutines provide the mechanism to multiplex network I/O
in the asynchronous fashion. Compared with eventlet, we can explicitly mark the
I/O operations asynchronous with ``yield from`` or ``await`` introduced in
Python 3.5.
`asyncio <https://www.python.org/dev/peps/pep-3156/>`_ was introduced as a
standard asynchronous I/O library in Python 3.4. Its event loop and coroutines
provide the mechanism to multiplex network I/O in an asynchronous fashion.
Compared with eventlet, we can explicitly mark the I/O operations asynchronous
with ``yield from`` or ``await`` introduced in Python 3.5.
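As a small illustration (not Raven code), two coroutines watching different
resources can be multiplexed on a single event loop, with every suspension
point marked explicitly by ``await``::

  import asyncio

  async def watch(resource):
      # Stand-in for a long-lived, chunked HTTP watch request; each ``await``
      # hands control back to the event loop so other coroutines can run.
      await asyncio.sleep(1)
      return '%s watch finished' % resource

  loop = asyncio.get_event_loop()
  # Both watches share a single thread; the loop interleaves their I/O.
  results = loop.run_until_complete(
      asyncio.gather(watch('pods'), watch('services')))
  print(results)
  loop.close()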
Trollius_ is a port of asyncio to Python 2.x. However `Trollius documentation`_
`Trollius <http://trollius.readthedocs.org/>`_ is a port of asyncio to Python 2.x.
However `Trollius documentation <http://trollius.readthedocs.org/deprecated.html>`_
is describing a list of problems and even promoting the migration to Python 3
with asyncio.
@ -621,7 +610,7 @@ ADDED
}
MODIFIED
~~~~~~~~
++++++++
::
@ -1050,16 +1039,3 @@ DELETED
++++++++
The event could not be observed.
.. _`Kubernetes API`: http://kubernetes.io/docs/api/
.. _CNI: https://github.com/appc/cni
.. _`in the deployment phase`: https://github.com/kubernetes/kubernetes/search?utf8=%E2%9C%93&q=FLANNEL_NET
.. _kube-proxy: http://kubernetes.io/docs/user-guide/services/#virtual-ips-and-service-proxies
.. _eventlet: http://eventlet.net/
.. _Twisted: https://twistedmatrix.com/trac/
.. _Tornado: http://tornadoweb.org/
.. _gevent: http://www.gevent.org/
.. _`What's wrong with eventlet?`: https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio#What.27s_wrong_with_eventlet.3F
.. _asyncio: https://www.python.org/dev/peps/pep-3156/
.. _Trollius: http://trollius.readthedocs.org/
.. _`Trollius documentation`: http://trollius.readthedocs.org/deprecated.html

View File

@ -4,15 +4,15 @@
http://creativecommons.org/licenses/by/3.0/legalcode
=====================================
============================
Kuryr - Milestone for Mitaka
=====================================
============================
https://launchpad.net/kuryr
Kuryr Roles and Responsibilities - First Milestone for Mitaka release
-----------------------------------------------------------------------
---------------------------------------------------------------------
This chapter includes the various use cases that Kuryr aims at solving,
some of which were briefly described in the introduction chapter.
@ -72,7 +72,7 @@ This list of items will need to be prioritized.
8) Mapping between Neutron identifiers and Docker identifiers
A new spec in Neutron is being proposed that we can
leverage for this use case: `Adding tags to resources`_ .
leverage for this use case: `Adding tags to resources <https://review.openstack.org/#/c/216021/>`_.
Tags are similar in concept to Docker labels.
9) Testing (CI)
@ -97,22 +97,3 @@ Kuryr Future Scope
An example project that does this for Kubernetes and Neutron LBaaS:
https://github.com/kubernetes/kubernetes/blob/release-1.0/pkg/cloudprovider/openstack/openstack.go
References
==========
.. _libnetwork remote driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md
.. _Neutron: https://wiki.openstack.org/wiki/Neutron
.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model
.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification
.. _Magnum: https://wiki.openstack.org/wiki/Magnum
.. _OVN: https://launchpad.net/networking-ovn
.. _Kolla: https://wiki.openstack.org/wiki/Kolla
.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api
.. _plugin discovery mechanism: https://github.com/docker/docker/blob/master/docs/extend/plugin_api.md#plugin-discovery
.. _Neutron client: http://docs.openstack.org/developer/python-neutronclient/
.. _libkv: https://github.com/docker/libkv
.. _VIF binding: https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism
.. _Adding tags to resources: https://review.openstack.org/#/c/216021/
.. _User labels in docker patch: https://github.com/docker/libnetwork/pull/222/files#diff-2b9501381623bc063b38733c35a1d254

View File

@ -5,10 +5,10 @@ Libnetwork Remote Network Driver Design
What is Kuryr
-------------
Kuryr implements a `libnetwork remote network driver`_ and maps its calls to OpenStack
`Neutron`_. It works as a translator between libnetwork's
`Container Network Model`_ (CNM) and `Neutron's networking model`_. Kuryr also acts as
a `libnetwork IPAM driver`_.
Kuryr implements a `libnetwork remote network driver <https://github.com/docker/libnetwork/blob/master/docs/remote.md>`_
and maps its calls to OpenStack `Neutron <https://wiki.openstack.org/wiki/Neutron>`_.
It works as a translator between libnetwork's `Container Network Model <https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model>`_ (CNM) and `Neutron's networking model <https://wiki.openstack.org/wiki/Neutron/APIv2-specification>`_.
Kuryr also acts as a `libnetwork IPAM driver <https://github.com/docker/libnetwork/blob/master/docs/ipam.md>`_.
Goal
~~~~
@ -24,13 +24,13 @@ the host, e.g., Linux bridge, Open vSwitch datapath and so on.
Kuryr Workflow - Host Networking
--------------------------------
Kuryr resides in each host that runs Docker containers and serves `APIs`_
Kuryr resides in each host that runs Docker containers and serves `APIs <https://github.com/docker/libnetwork/blob/master/docs/design.md#api>`_
required for the libnetwork remote network driver. It is planned to use the
`Adding tags to resources`_ new Neutron feature by Kuryr, to map between
`Adding tags to resources <https://review.openstack.org/#/c/216021/>`_
new Neutron feature in Kuryr to map between
Neutron resource IDs and Docker IDs (UUIDs).
1. libnetwork discovers Kuryr via `plugin discovery mechanism`_ *before the
first request is made*
1. libnetwork discovers Kuryr via `plugin discovery mechanism <https://github.com/docker/docker/blob/master/docs/extend/plugin_api.md#plugin-discovery>`_ *before the first request is made*
- During this process libnetwork makes a HTTP POST call on
``/Plugin.Active`` and examines the driver type, which defaults to
@ -51,7 +51,7 @@ Neutron resource Id's and Docker Id's (UUID's)
4. libnetwork makes API calls against Kuryr
5. Kuryr receives the requests and calls Neutron APIs with `Neutron client`_
5. Kuryr receives the requests and calls Neutron APIs with `Neutron client <http://docs.openstack.org/developer/python-neutronclient/>`_
6. Kuryr receives the responses from Neutron and composes the responses for
libnetwork
@ -61,21 +61,19 @@ Neutron resource Id's and Docker Id's (UUID's)
8. libnetwork stores the returned information to its key/value datastore
backend
- the key/value datastore is abstracted by `libkv`_
- the key/value datastore is abstracted by `libkv <https://github.com/docker/libkv>`_
Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
---------------------------------------------------------------------------------
1. A user creates a network ``foo`` with the subnet information
::
1. A user creates a network ``foo`` with the subnet information::
$ sudo docker network create --driver=kuryr --ipam-driver=kuryr \
--subnet 10.0.0.0/16 --gateway 10.0.0.1 --ip-range 10.0.0.0/24 foo
286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364
This makes a HTTP POST call on ``/IpamDriver.RequestPool`` with the following
JSON data.
::
JSON data::
{
"AddressSpace": "global_scope",
@ -87,8 +85,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
The value of ``SubPool`` comes from the value specified in the ``--ip-range``
option in the command above and the value of ``AddressSpace`` will be ``global_scope`` or ``local_scope`` depending on the value of the ``capability_scope`` configuration option. Kuryr creates a subnetpool, and then returns
the following response.
::
the following response::
{
"PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376",
@ -97,8 +94,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
}
If the ``--gateway`` was specified like the command above, another HTTP POST
call against ``/IpamDriver.RequestAddress`` follows with the JSON data below.
::
call against ``/IpamDriver.RequestAddress`` follows with the JSON data below::
{
"Address": "10.0.0.1",
@ -107,8 +103,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
}
As the IPAM driver, Kuryr allocates a requested IP address and returns the
following response.
::
following response::
{
"Address": "10.0.0.1/16",
@ -116,8 +111,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
}
Finally a HTTP POST call on ``/NetworkDriver.CreateNetwork`` with the
following JSON data.
::
following JSON data::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
@ -136,15 +130,13 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
will generate an empty success response to the docker daemon. Kuryr tags the
Neutron network with the NetworkID from docker.
2. A user launches a container against network ``foo``
::
2. A user launches a container against network ``foo``::
$ sudo docker run --net=foo -itd --name=container1 busybox
78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703
This makes a HTTP POST call on ``/IpamDriver.RequestAddress`` with the
following JSON data.
::
following JSON data::
{
"Address": "",
@ -152,8 +144,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
"Options": null,
}
The IPAM driver Kuryr sends a port creation request to neutron and returns the following response with neutron provided ip address.
::
The IPAM driver Kuryr sends a port creation request to Neutron and returns the following response with the Neutron-provided IP address::
{
"Address": "10.0.0.2/16",
@ -162,8 +153,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
Then another HTTP POST call on ``/NetworkDriver.CreateEndpoint`` with the
following JSON data is made.
::
following JSON data is made::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
@ -193,8 +183,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
When the Neutron port has been updated, the Kuryr remote driver will
generate a response to the Docker daemon in the following form:
(https://github.com/docker/libnetwork/blob/master/docs/remote.md#create-endpoint)
::
(https://github.com/docker/libnetwork/blob/master/docs/remote.md#create-endpoint)::
{
"Interface": {"MacAddress": "08:22:e0:a8:7d:db"}
@ -202,8 +191,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
On receiving success response, libnetwork makes a HTTP POST call on ``/NetworkDriver.Join`` with
the following JSON data.
::
the following JSON data::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
@ -227,8 +215,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
documentation for a join request.
(https://github.com/docker/libnetwork/blob/master/docs/remote.md#join)
3. A user requests information about the network
::
3. A user requests information about the network::
$ sudo docker network inspect foo
{
@ -255,8 +242,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
}
4. A user connects one more container to the network
::
4. A user connects one more container to the network::
$ sudo docker network connect foo container2
d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646
@ -292,15 +278,13 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
}
5. A user disconnects a container from the network
::
5. A user disconnects a container from the network::
$ CID=d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646
$ sudo docker network disconnect foo $CID
This makes a HTTP POST call on ``/NetworkDriver.Leave`` with the following
JSON data.
::
JSON data::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
@ -312,8 +296,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
Docker daemon.
Then libnetwork makes a HTTP POST call on ``/NetworkDriver.DeleteEndpoint`` with the
following JSON data.
::
following JSON data::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
@ -326,8 +309,7 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
response to the Docker daemon: {}
Finally libnetwork makes a HTTP POST call on ``/IpamDriver.ReleaseAddress``
with the following JSON data.
::
with the following JSON data::
{
"Address": "10.0.0.3",
@ -335,19 +317,16 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
}
Kuryr remote IPAM driver generates a Neutron API request to delete the associated Neutron port.
As the IPAM driver Kuryr deallocates the IP address and returns the following response.
::
As the IPAM driver, Kuryr deallocates the IP address and returns the following response::
{}
7. A user deletes the network
::
7. A user deletes the network::
$ sudo docker network rm foo
This makes a HTTP POST call against ``/NetworkDriver.DeleteNetwork`` with the
following JSON data.
::
following JSON data::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364"
@ -359,24 +338,22 @@ Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
daemon: {}
Then another HTTP POST call on ``/IpamDriver.ReleasePool`` with the
following JSON data is made.
::
following JSON data is made::
{
"PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376"
}
Kuryr delete the corresponding subnetpool and returns the following response.
::
Kuryr deletes the corresponding subnetpool and returns the following response::
{}
Mapping between the CNM and the Neutron's Networking Model
----------------------------------------------------------
Kuryr communicates with Neutron via `Neutron client`_ and bridges between
libnetwork and Neutron by translating their networking models. The following
table depicts the current mapping between libnetwork and Neutron models:
Kuryr communicates with Neutron via `Neutron client <http://docs.openstack.org/developer/python-neutronclient/>`_
and bridges between libnetwork and Neutron by translating their networking models.
The following table depicts the current mapping between libnetwork and Neutron models:
===================== ======================
libnetwork Neutron
@ -404,16 +381,3 @@ Notes on implementing the libnetwork remote driver API in Kuryr
Neutron does not use the information related to discovery of resources such as
nodes being deleted and therefore the implementation of this API method does
nothing.
.. _libnetwork remote network driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md
.. _libnetwork IPAM driver: https://github.com/docker/libnetwork/blob/master/docs/ipam.md
.. _Neutron: https://wiki.openstack.org/wiki/Neutron
.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model
.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification
.. _Neutron client: http://docs.openstack.org/developer/python-neutronclient/
.. _plugin discovery mechanism: https://github.com/docker/docker/blob/master/docs/extend/plugin_api.md#plugin-discovery
.. _Adding tags to resources: https://review.openstack.org/#/c/216021/
.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api
.. _libkv: https://github.com/docker/libkv
.. _IPAM blueprint: https://blueprints.launchpad.net/kuryr/+spec/ipam
.. _Neutron's API reference: http://developer.openstack.org/api-ref-networking-v2.html#createSubnet