Fix text blocks formatting

There are cases where text blocks in reStructuredText files exceed
the 79th column or are formatted in a weird way. This patch fixes
that. A couple of typos were also tidied up.

Change-Id: I78c20cbb45c74e817d60582439acc7b01b577a83
This commit is contained in:
Roman Dobosz 2019-11-12 16:16:18 +01:00
parent e32d796ac7
commit c3c270c752
29 changed files with 375 additions and 352 deletions


@@ -1,20 +1,17 @@
If you would like to contribute to the development of OpenStack, you must
follow the steps in this page:

https://docs.openstack.org/infra/manual/developers.html

If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow section
of this documentation to learn how changes to OpenStack should be submitted for
review via the Gerrit tool:

https://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

https://bugs.launchpad.net/kuryr-kubernetes

If you want to have your code checked for pep8 automatically before committing
changes, you can just do::


@@ -29,4 +29,5 @@ require it or to use different segments and, for example, route between them.
Contribution guidelines
-----------------------
For the process of new feature addition, refer to the `Kuryr Policy
<https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_


@@ -3,41 +3,41 @@ Kuryr Heat Templates
====================
This set of scripts and Heat templates are useful for deploying devstack
scenarios. It handles the creation of an all-in-one devstack nova instance and
its networking needs.

Prerequisites
-------------
Packages to install on the host you run devstack-heat (not on the cloud
server):

* jq
* openstack-cli

If you want to run devstack from the master commit, this application requires a
GitHub token due to the GitHub API rate limiting.

You can generate one without any permissions at
https://github.com/settings/tokens/new.
Then put it in your ``~/.bashrc`` as an environment variable called
``DEVSTACK_HEAT_GH_TOKEN`` like so:

echo "export DEVSTACK_HEAT_GH_TOKEN=my_token" >> ~/.bashrc

After creating the instance, devstack-heat will immediately start creating a
devstack ``stack`` user and using devstack to stack kuryr-kubernetes. When it
is finished, there'll be a file named ``/opt/stack/ready``.

How to run
----------
In order to run it, make sure that you have sourced your OpenStack cloud
provider openrc file and tweaked ``hot/parameters.yml`` to your liking, then
launch with::

    ./devstack-heat stack

@@ -89,5 +89,5 @@ To delete the deployment::
Supported images
~~~~~~~~~~~~~~~~
It should work with the latest CentOS 7 image. It is not tested with the latest
Ubuntu 16.04 cloud image but it will probably work.


@@ -7,7 +7,7 @@ a given amount of subports at the specified pools (i.e., at the VM trunks), as
well as to free the unused ones.

The first step to perform is to enable the pool manager by adding this to
``/etc/kuryr/kuryr.conf``::

    [kubernetes]
    enable_manager = True

@@ -17,7 +17,7 @@ If the environment has been deployed with devstack, the socket file directory
will have been created automatically. However, if that is not the case, you
need to create the directory for the socket file with the right permissions.
If no other path is specified, the default location for the socket file is:
``/run/kuryr/kuryr_manage.sock``

Hence, you need to create that directory and give it read/write access to the
user who is running the kuryr-kubernetes.service, for instance::

@@ -36,7 +36,7 @@ Populate subport pools for nested environment
Once the nested environment is up and running, and the pool manager has been
started, we can populate the pools, i.e., the trunk ports in use by the
overcloud VMs, with subports. From the *undercloud* we just need to make use
of the subports.py tool.

To obtain information about the tool options::
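The directory setup mentioned above might look roughly like the following. This is an illustrative sketch only: the ``stack`` user name is an assumption, so substitute whichever user actually runs kuryr-kubernetes.service.

```shell
# Create the default socket directory (default path from kuryr.conf)
# and hand it to the user running kuryr-kubernetes.service.
# NOTE: 'stack' is a placeholder user name, not prescribed by kuryr.
sudo mkdir -p /run/kuryr
sudo chown stack:stack /run/kuryr
sudo chmod 0750 /run/kuryr
```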


@@ -21,7 +21,7 @@ Purpose
The purpose of this document is to present the main Kuryr-K8s integration
components and capture the design decisions of each component currently taken
by the Kuryr team.

Goal Statement
@@ -30,19 +30,19 @@ Goal Statement
Enable OpenStack Neutron realization of the Kubernetes networking. Start by
supporting network connectivity and expand to support advanced features, such
as Network Policies. In the future, it may be extended to some other
OpenStack services.

Overview
--------
In order to integrate Neutron into Kubernetes networking, 2 components are
introduced: Controller and CNI Driver.

Controller is a supervisor component responsible to maintain translation of
networking relevant Kubernetes model into the OpenStack (i.e. Neutron) model.
This can be considered as a centralized service (supporting HA mode in the
future).

CNI driver is responsible for binding Kubernetes pods on worker nodes into
Neutron ports ensuring requested level of isolation.

Please see below the component view of the integrated system:
@@ -62,13 +62,13 @@ Design Principles
   should rely on existing communication channels, currently added to the pod
   metadata via annotations.
4. CNI Driver should not depend on Neutron. It gets all required details
   from Kubernetes API server (currently through Kubernetes annotations),
   therefore depending on Controller to perform its translation tasks.
5. Allow different neutron backends to bind Kubernetes pods without code
   modification. This means that both Controller and CNI binding mechanism
   should allow loading of the vif management and binding components,
   manifested via configuration. If some vendor requires some extra code, it
   should be handled in one of the stevedore drivers.

Kuryr Controller Design
@@ -86,10 +86,10 @@ Watcher
~~~~~~~
Watcher is a common software component used by both the Controller and the CNI
driver. Watcher connects to Kubernetes API. Watcher's responsibility is to
observe the registered (either on startup or dynamically during its runtime)
endpoints and invoke registered callback handler (pipeline) to pass all events
from registered endpoints.

Event Handler
@@ -125,16 +125,17 @@ ControllerPipeline
ControllerPipeline serves as an event dispatcher of the Watcher for Kuryr-K8s
controller Service. Currently watched endpoints are 'pods', 'services' and
'endpoints'. Kubernetes resource event handlers (Event Consumers) are
registered into the Controller Pipeline. There is a special EventConsumer,
ResourceEventHandler, that provides API for Kubernetes event handling. When a
watched event arrives, it is processed by all Resource Event Handlers
registered for specific Kubernetes object kind. Pipeline retries on resource
event handler invocation in case of the ResourceNotReady exception till it
succeeds or the number of retries (time-based) is reached. Any unrecovered
failure is logged without affecting other Handlers (of the current and other
events). Events of the same group (same Kubernetes object) are handled
sequentially in the order of arrival. Events of different Kubernetes objects
are handled concurrently.

.. image:: ../../images/controller_pipeline.png
   :alt: controller pipeline
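The time-based retry behaviour described above can be sketched roughly as follows. This is illustrative only: the exception name mirrors the text, but the function, timing values, and signature are assumptions, not kuryr's actual implementation.

```python
import time


class ResourceNotReady(Exception):
    """Raised by a handler when a dependency is not yet available."""


def retry_handler(handler, event, timeout=5.0, interval=0.05):
    """Retry a resource event handler on ResourceNotReady until it
    succeeds or the time-based retry budget is exhausted."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return handler(event)
        except ResourceNotReady:
            if time.monotonic() >= deadline:
                raise  # unrecovered failure surfaces to the caller
            time.sleep(interval)


# Example: a handler that only succeeds on its third invocation.
attempts = {"n": 0}

def flaky_handler(event):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ResourceNotReady()
    return "handled %s" % event

print(retry_handler(flaky_handler, "pod-add"))  # handled pod-add
```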
@@ -145,11 +146,13 @@ order arrival. Events of different Kubernetes objects are handled concurrently.
ResourceEventHandler
~~~~~~~~~~~~~~~~~~~~
ResourceEventHandler is a convenience base class for the Kubernetes event
processing. The specific Handler associates itself with specific Kubernetes
object kind (through setting OBJECT_KIND) and is expected to implement at
least one of the methods of the base class to handle at least one of the
ADDED/MODIFIED/DELETED events of the Kubernetes object. For details, see
`k8s-api
<https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds>`_.

Since both ADDED and MODIFIED event types trigger very similar sequence of
actions, Handler has 'on_present' method that is invoked for both event types.
The specific Handler implementation should strive to put all the common ADDED
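A minimal sketch of such a base class is shown below. The names ``OBJECT_KIND`` and ``on_present`` come from the text; the event-dict shape and dispatch details are assumptions for illustration, not kuryr's actual code.

```python
class ResourceEventHandler:
    """Convenience base class: subclasses set OBJECT_KIND and override
    only the event callbacks they care about."""

    OBJECT_KIND = None

    def __call__(self, event):
        obj = event["object"]
        event_type = event["type"]
        # ADDED and MODIFIED trigger nearly identical logic, so both
        # funnel into on_present; DELETED has its own callback.
        if event_type in ("ADDED", "MODIFIED"):
            self.on_present(obj)
        elif event_type == "DELETED":
            self.on_deleted(obj)

    def on_present(self, obj):
        pass

    def on_deleted(self, obj):
        pass


class PodHandler(ResourceEventHandler):
    """Hypothetical handler that just records pod names it has seen."""

    OBJECT_KIND = "Pod"

    def __init__(self):
        self.seen = []

    def on_present(self, obj):
        self.seen.append(obj["metadata"]["name"])


handler = PodHandler()
handler({"type": "ADDED", "object": {"metadata": {"name": "pod-1"}}})
handler({"type": "MODIFIED", "object": {"metadata": {"name": "pod-1"}}})
print(handler.seen)  # ['pod-1', 'pod-1']
```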
@@ -161,6 +164,7 @@ Pluggable Handlers
Starting with the Rocky release, Kuryr-Kubernetes includes a pluggable
interface for the Kuryr controller handlers.

The pluggable handlers framework allows:

- Using externally provided handlers.

@@ -179,8 +183,8 @@ lb Endpoint
lbaasspec Service
================ =========================

For example, to enable only the 'vif' controller handler we should set the
following at kuryr.conf::

    [kubernetes]
    enabled_handlers=vif
@@ -190,19 +194,19 @@ Providers
~~~~~~~~~
Provider (Drivers) are used by ResourceEventHandlers to manage specific aspects
of the Kubernetes resource in the OpenStack domain. For example, creating a
Kubernetes Pod will require a neutron port to be created on a specific network
with the proper security groups applied to it. There will be dedicated Drivers
for Project, Subnet, Port and Security Groups settings in neutron. For
instance, the Handler that processes pod events will use PodVIFDriver,
PodProjectDriver, PodSubnetsDriver and PodSecurityGroupsDriver. The Drivers
model is introduced in order to allow flexibility in the Kubernetes model
mapping to the OpenStack. There can be different drivers that do Neutron
resources management, i.e. create on demand or grab one from the precreated
pool. There can be different drivers for the Project management, i.e. single
Tenant or multiple. Same goes for the other drivers. There are drivers that
handle the Pod based on the project, subnet and security groups specified via
configuration settings during cluster deployment phase.
NeutronPodVifDriver
@@ -250,10 +254,10 @@ Processes communicate between each other using Python's
responsible for extracting VIF annotations from Pod events and putting them
into the shared dictionary. Server is a regular WSGI server that will answer
CNI Driver calls. When a CNI request comes, Server is waiting for VIF object to
appear in the shared dictionary. As annotations are read from kubernetes API
and added to the registry by Watcher thread, Server will eventually get VIF it
needs to connect for a given pod. Then it waits for the VIF to become active
before returning to the CNI Driver.
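The Watcher/Server interaction over the shared registry might be sketched like this. It is a deliberate simplification: the real daemon uses separate processes and a WSGI server, while here plain threads, a dict registry, and a polling loop stand in for them, and all names are assumptions.

```python
import threading
import time

registry = {}                    # pod key -> VIF info, filled by Watcher
registry_lock = threading.Lock()


def watcher():
    """Stands in for the Watcher thread: after 'reading' an annotation
    from the Kubernetes API, it publishes the VIF into the registry."""
    time.sleep(0.05)
    with registry_lock:
        registry["default/pod-1"] = {"vif": "port-uuid", "active": True}


def serve_cni_request(pod_key, timeout=2.0):
    """Stands in for the Server side: wait until an active VIF for the
    pod appears in the registry, then return it to the CNI Driver."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with registry_lock:
            vif = registry.get(pod_key)
        if vif and vif["active"]:
            return vif
        time.sleep(0.01)
    raise TimeoutError("no active VIF for %s" % pod_key)


t = threading.Thread(target=watcher)
t.start()
print(serve_cni_request("default/pod-1")["vif"])  # port-uuid
t.join()
```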
Communication
@@ -293,12 +297,13 @@ deserialized using o.vo's ``obj_from_primitive()`` method.
**Return body:** None.

When running in daemonized mode, CNI Driver will call CNI Daemon over those
APIs to perform its tasks and wait on socket for result.

Kubernetes Documentation
------------------------
The `Kubernetes reference documentation
<https://kubernetes.io/docs/reference/>`_ is a great source for finding more
details about Kubernetes API, CLIs, and tools.


@@ -46,10 +46,10 @@ kubernetes, the Openshift Route matches the functionality of kubernetes Ingress.
Proposed Solution
-----------------
The solution will rely on L7 router, Service/Endpoints handler and L7 router
driver components described at kuryr-kubernetes Ingress integration design,
where a new component - OCP-Route handler, will satisfy requests for Openshift
Route resources.

Controller Handlers impact:
@@ -72,14 +72,13 @@ The following scheme describes OCP-Route controller SW architecture:
Similar to Kubernetes Ingress, each OCP-Route object is translated to a L7
policy in L7 router, and the rules on OCP-Route become L7 (URL) mapping rules
in that L7 policy. The L7 policy is configured to forward the filtered traffic
to LbaaS Pool. The LbaaS pool represents an Endpoints resource, and it's the
Service/Endpoints handler responsibility to attach all its members to this
pool. Since the Endpoints resource is not aware of changes in OCP-Route objects
pointing to it, the OCP-Route handler should trigger this notification; the
notification will be implemented using annotation of the relevant Endpoint
resource.

Use cases examples
@@ -87,8 +86,8 @@ Use cases examples
This section describes in detail the following scenarios:

A. Create OCP-Route, create Service/Endpoints.
B. Create Service/Endpoints, create OCP-Route, delete OCP-Route.

* Create OCP-Route, create Service/Endpoints:


@@ -47,7 +47,7 @@ container creation, and Neutron ports are deleted after container deletion.
But there is still a need to keep the Ports and Port pools details and have
them available in case of Kuryr Controller restart. Since Kuryr is a stateless
service, the details should be kept either as part of Neutron or Kubernetes
data. Due to the performance costs, the K8s option is preferred.

Proposed Solution
@@ -171,13 +171,12 @@ KuryrPorts objects that were annotated with `deleting` label at the
(e.g. ports) in case the controller crashed while deleting the Neutron
(or any other SDN) associated resources.

As for the Ports Pools, right now they reside on memory on the Kuryr-controller
and need to be recovered every time the controller gets restarted. To perform
this recovery we are relying on Neutron Port device-owner information which may
not be completely waterproof in all situations (e.g., if there is another
entity using the same device owner name). Consequently, by storing the
information into K8s CRD objects we have the benefit of:

* Calling K8s API instead of Neutron API
* Being sure the recovered ports into the pools were created by


@@ -31,9 +31,10 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods and
a policy by which to access them. Service is a Kubernetes managed API object.
For Kubernetes-native applications, Kubernetes offers an Endpoints API that is
updated whenever the set of Pods in a Service changes. For detailed information
please refer to `Kubernetes service
<http://kubernetes.io/docs/user-guide/services/>`_. Kubernetes supports services
with the kube-proxy component that runs on each node, `Kube-Proxy
<http://kubernetes.io/docs/admin/kube-proxy/>`_.

Proposed Solution
@@ -43,18 +44,20 @@ Kubernetes service in its essence is a Load Balancer across Pods that fit the
service selection. Kuryr's choice is to support Kubernetes services by using
Neutron LBaaS service. The initial implementation is based on the OpenStack
LBaaSv2 API, so compatible with any LBaaSv2 API provider.

In order to be compatible with Kubernetes networking, Kuryr-Kubernetes makes
sure that services Load Balancers have access to Pods Neutron ports. This may
be affected once Kubernetes Network Policies will be supported. Oslo versioned
objects are used to keep translation details in Kubernetes entities annotation.
This will allow future changes to be backward compatible.

Data Model Translation
~~~~~~~~~~~~~~~~~~~~~~
Kubernetes service is mapped to the LBaaSv2 Load Balancer with associated
Listeners and Pools. Service endpoints are mapped to Load Balancer Pool
members.
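The mapping above can be illustrated with a toy translation function. This is purely illustrative: the field names and structure are assumptions for the sketch, not kuryr's actual data model.

```python
def translate_service(service_name, ports, endpoints):
    """Map a Kubernetes Service to an LBaaSv2-style structure:
    Service -> Load Balancer, each service port -> Listener + Pool,
    Endpoints addresses -> Pool members."""
    return {
        "loadbalancer": service_name,
        "listeners": [
            {
                "protocol_port": p["port"],
                "pool": {
                    "members": [
                        {"address": ep, "protocol_port": p["targetPort"]}
                        for ep in endpoints
                    ]
                },
            }
            for p in ports
        ],
    }


lb = translate_service(
    "default/frontend",
    ports=[{"port": 80, "targetPort": 8080}],
    endpoints=["10.0.0.5", "10.0.0.6"],
)
print(len(lb["listeners"][0]["pool"]["members"]))  # 2
```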
Kuryr Controller Impact
@@ -71,11 +74,10 @@ Two Kubernetes Event Handlers are added to the Controller pipeline.
Endpoints (LoadBalancer) handler. To avoid conflicting annotations, K8s
Service's resourceVersion is used for Service and Endpoints while handling
Services events.
- LoadBalancerHandler manages Kubernetes endpoints events. It manages
  LoadBalancer, LoadBalancerListener, LoadBalancerPool and LoadBalancerPool
  members to reflect and keep in sync with the Kubernetes service. It keeps
  details of Neutron resources by annotating the Kubernetes Endpoints object.

Both Handlers use Project, Subnet and SecurityGroup service drivers to get
details for service mapping.


@@ -92,9 +92,9 @@ Additional Subnets Driver
Since it is possible to request additional subnets for the pod through the pod
annotations, it is necessary to have a new driver. Based on the information
(requested subnets) parsed by the Multi-vif driver, it has to return a
dictionary containing the mapping 'subnet_id' -> 'network' for all requested
subnets in a unified format specified in the PodSubnetsDriver class. Here's how
a Pod Spec with additional subnets requests might look like:

.. code-block:: yaml

@@ -137,11 +137,11 @@ Specific ports support
Specific ports support is enabled by default and will be a part of the drivers
to implement it. It is possible to have manually precreated specific ports in
neutron and specify them in pod annotations as preferably used. This means that
drivers will use specific ports if they are specified in pod annotations,
otherwise they will create new ports by default. It is important that specific
ports can have vnic_type both direct and normal, so it is necessary to provide
processing support for specific ports in both the SRIOV and generic drivers.
Pod annotation with requested specific ports might look like this:

.. code-block:: yaml

@@ -158,10 +158,10 @@ Pod annotation with requested specific ports might look like this:
    "id_of_normal_precreated_port"
]'

The Pod spec above should be interpreted the following way: the Multi-vif
driver parses pod annotations and gets the ids of specific ports. If vnic_type
is "normal" and such ports exist, it calls the generic driver to create vif
objects for these ports. Else if vnic_type is "direct" and such ports exist,
it calls the sriov driver to create vif objects for these ports. If certain
ports are not requested in annotations then the driver doesn't return
additional vifs to the Multi-vif driver.
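The dispatch logic just described might be sketched as follows. All names here are hypothetical stand-ins; the real logic lives in kuryr-kubernetes' driver framework and consults Neutron for port details.

```python
def dispatch_specific_ports(annotated_port_ids, get_vnic_type):
    """Route each precreated port id to the driver bucket matching its
    vnic_type: 'normal' -> generic driver, 'direct' -> sriov driver.

    get_vnic_type is a stand-in for looking up the port in Neutron."""
    buckets = {"generic": [], "sriov": []}
    for port_id in annotated_port_ids:
        vnic_type = get_vnic_type(port_id)
        if vnic_type == "normal":
            buckets["generic"].append(port_id)
        elif vnic_type == "direct":
            buckets["sriov"].append(port_id)
        # Ports of other vnic_types simply yield no additional vifs.
    return buckets


vnic_types = {"port-a": "normal", "port-b": "direct"}
result = dispatch_specific_ports(["port-a", "port-b"], vnic_types.get)
print(result)  # {'generic': ['port-a'], 'sriov': ['port-b']}
```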


@ -21,9 +21,10 @@ If you want to run kuryr CNI without the daemon, build theimage with: ::
$ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .
Alternatively, you can remove ``imagePullPolicy: Never`` from kuryr-controller
Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller
<https://hub.docker.com/r/kuryr/controller/>`_ and `cni
<https://hub.docker.com/r/kuryr/cni/>`_ images from the Docker Hub. Those
definitions will be generated in the next step.
Generating Kuryr resource definitions for Kubernetes
@ -36,7 +37,8 @@ that can be used to Deploy Kuryr on Kubernetes. The script is placed in
$ ./tools/generate_k8s_resource_definitions <output_dir> [<controller_conf_path>] [<cni_conf_path>] [<ca_certificate_path>]
* ``output_dir`` - directory where to put yaml files with definitions.
* ``controller_conf_path`` - path to custom kuryr-controller configuration
  file.
* ``cni_conf_path`` - path to custom kuryr-cni configuration file (defaults to
  ``controller_conf_path``).
* ``ca_certificate_path`` - path to custom CA certificate for OpenStack API. It
@ -49,24 +51,29 @@ that can be used to Deploy Kuryr on Kubernetes. The script is placed in
still be mounted in kuryr-controller ``Deployment`` definition.
If no path to config files is provided, script automatically generates minimal
configuration. However, some of the options should be filled in by the user.
You can do that either by editing the file after the ConfigMap definition is
generated or provide your options as environment variables before running the
script. Below is the list of available variables:
* ``$KURYR_K8S_API_ROOT`` - ``[kubernetes]api_root`` (default:
  https://127.0.0.1:6443)
* ``$KURYR_K8S_AUTH_URL`` - ``[neutron]auth_url`` (default:
  http://127.0.0.1/identity)
* ``$KURYR_K8S_USERNAME`` - ``[neutron]username`` (default: admin)
* ``$KURYR_K8S_PASSWORD`` - ``[neutron]password`` (default: password)
* ``$KURYR_K8S_USER_DOMAIN_NAME`` - ``[neutron]user_domain_name`` (default:
  Default)
* ``$KURYR_K8S_KURYR_PROJECT_ID`` - ``[neutron]kuryr_project_id``
* ``$KURYR_K8S_PROJECT_DOMAIN_NAME`` - ``[neutron]project_domain_name``
  (default: Default)
* ``$KURYR_K8S_PROJECT_ID`` - ``[neutron]k8s_project_id``
* ``$KURYR_K8S_POD_SUBNET_ID`` - ``[neutron_defaults]pod_subnet_id``
* ``$KURYR_K8S_POD_SG`` - ``[neutron_defaults]pod_sg``
* ``$KURYR_K8S_SERVICE_SUBNET_ID`` - ``[neutron_defaults]service_subnet_id``
* ``$KURYR_K8S_WORKER_NODES_SUBNET`` - ``[pod_vif_nested]worker_nodes_subnet``
* ``$KURYR_K8S_BINDING_DRIVER`` - ``[binding]driver`` (default:
  ``kuryr.lib.binding.drivers.vlan``)
* ``$KURYR_K8S_BINDING_IFACE`` - ``[binding]link_iface`` (default: eth0)
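The override logic the script applies to the variables above can be sketched
as follows; the ``resolve`` helper and the trimmed ``DEFAULTS`` table are
illustrative only — the real generator is a shell script.

```python
import os

# Trimmed-down defaults table mirroring the variable list above
# (illustrative only; values not set in the environment fall back here).
DEFAULTS = {
    'KURYR_K8S_API_ROOT': 'https://127.0.0.1:6443',
    'KURYR_K8S_AUTH_URL': 'http://127.0.0.1/identity',
    'KURYR_K8S_USERNAME': 'admin',
    'KURYR_K8S_PASSWORD': 'password',
    'KURYR_K8S_BINDING_IFACE': 'eth0',
}

def resolve(name, environ=None):
    """Return the user-provided value for a variable if it is set in the
    environment, otherwise fall back to the documented default."""
    environ = os.environ if environ is None else environ
    return environ.get(name, DEFAULTS.get(name, ''))

api_root = resolve('KURYR_K8S_API_ROOT', environ={})
username = resolve('KURYR_K8S_USERNAME',
                   environ={'KURYR_K8S_USERNAME': 'kuryr'})
```

Exporting a variable before running the script thus wins over the default,
while unset variables end up in the generated ConfigMap with the values listed
above.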
.. note::
@ -131,8 +138,8 @@ After successful completion:
* kuryr-controller Deployment object, with single replica count, will get
  created in kube-system namespace.
* kuryr-cni gets installed as a daemonset object on all the nodes in
  kube-system namespace.
To see kuryr-controller logs ::

$ kubectl logs <pod-name>

View File

@ -46,8 +46,8 @@ Now edit ``devstack/local.conf`` to set up some initial options:
* If you have multiple network interfaces, you need to set ``HOST_IP`` variable
  to the IP on the interface you want to use as DevStack's primary.
* ``KURYR_K8S_LBAAS_USE_OCTAVIA`` can be set to False if you want a more
  lightweight installation. In that case installation of Glance and Nova will
  be omitted.
* If you already have Docker installed on the machine, you can comment out the
  line starting with ``enable_plugin devstack-plugin-container``.
@ -133,12 +133,12 @@ You can verify that this IP is really assigned to Neutron port: ::
| 3ce7fd13-ad0a-4e92-9b6f-0d38d50b1699 | | fa:16:3e:8e:f4:30 | ip_address='10.0.0.73', subnet_id='ddfbc8e9-68da-48f9-8a05-238ea0607e0d' | ACTIVE |
If those steps were successful, then it looks like your DevStack with
kuryr-kubernetes is working correctly. In case of errors, copy the last ~50
lines of the logs, paste them into `paste.openstack.org
<http://paste.openstack.org>`_ and ask other developers for help on `Kuryr's
IRC channel <chat.freenode.net:6667/openstack-kuryr>`_. More info on how to use
DevStack can be found in `DevStack Documentation
<https://docs.openstack.org/devstack/latest/>`_, especially in section `Using
Systemd in DevStack
<https://docs.openstack.org/devstack/latest/systemd.html>`_, which explains how
to use ``systemctl`` to control services and ``journalctl`` to read their logs.

View File

@ -24,8 +24,8 @@ Rebuilding container images
---------------------------
Instructions on how to manually rebuild both kuryr-controller and kuryr-cni
container images are presented on :doc:`../containerized` page. In case you
want to test any code changes, you need to rebuild the images first.
Changing configuration
@ -39,8 +39,8 @@ associated ConfigMap. On DevStack deployment this can be done using: ::
Then the editor will appear that will let you edit the config map. Make sure to
keep correct indentation when doing changes. Also note that there are two files
present in the ConfigMap: kuryr.conf and kuryr-cni.conf. The first one is
attached to kuryr-controller and the second to kuryr-cni. Make sure to modify
both when doing changes important for both services.
Restarting services

View File

@ -25,7 +25,7 @@ Testing with DevStack
The next points describe how to test OpenStack with Dragonflow using DevStack.
We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs.

Single Node Test Environment

View File

@ -5,7 +5,8 @@ How to try out nested-pods locally (MACVLAN)
Following are the instructions for an all-in-one setup, using the
nested MACVLAN driver rather than VLAN and trunk ports.

1. To install OpenStack services run devstack with
   ``devstack/local.conf.pod-in-vm.undercloud.sample``.
2. Launch a Nova VM with MACVLAN support
.. todo::

View File

@ -2,18 +2,21 @@
How to try out nested-pods locally (VLAN + trunk)
=================================================

Following are the instructions for an all-in-one setup where Kubernetes will
also be running inside the same Nova VM in which Kuryr-controller and Kuryr-cni
will be running. 4GB memory and 2 vCPUs is the minimum resource requirement
for the VM:
1. To install OpenStack services run devstack with
   ``devstack/local.conf.pod-in-vm.undercloud.sample``. Ensure that "trunk"
   service plugin is enabled in ``/etc/neutron/neutron.conf``::
[DEFAULT]
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin
2. Launch a VM with `Neutron trunk port
   <https://wiki.openstack.org/wiki/Neutron/TrunkPort>`_. The next steps can be
   followed: `Boot VM with a Trunk Port`_.

.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html

View File

@ -20,7 +20,7 @@ Testing with DevStack
The next points describe how to test OpenStack with ODL using DevStack.
We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs.

Single Node Test Environment

View File

@ -25,7 +25,7 @@ options needs to be set at the local.conf file:
3. Then, in case you want to set a limit to the maximum number of ports, or
   increase/reduce the default one for the minimum number, as well as to modify
   the way the pools are repopulated, both in time as well as regarding bulk
   operation sizes, the next option can be included and modified accordingly::

View File

@ -191,9 +191,9 @@ Setting it up
Note that it is /113 because the other half of the /112 will be used by the
Octavia LB vrrp ports.

#. Follow the :ref:`k8s_lb_reachable` guide but using IPv6 addresses instead
   for the host Kubernetes API. You should also make sure that the Kubernetes
   API server binds on the IPv6 address of the host.
Troubleshooting

View File

@ -61,9 +61,10 @@ Kubernetes load balancers and their members:
Neutron port to the subnet of each of the members. This way the traffic from
the Service Haproxy to the members will not go through the router again; it
will only have gone through the router to reach the service.
* Layer3: Octavia only creates the VIP port. The traffic from the service VIP
  to the members will go back to the router to reach the pod subnet. It is
  important to note that it will have some performance impact depending on the
  SDN.

To support the L3 mode (both for Octavia and for the deprecated
Neutron-LBaaSv2):

View File

@ -2,10 +2,10 @@
Enable network policy support functionality
===========================================

Enable policy, pod_label and namespace handlers to respond to network policy
events. As this is not done by default you'd have to explicitly add that to
the list of enabled handlers at kuryr.conf (further info on how to do this can
be found at :doc:`./devstack/containerized`)::
[kubernetes]
enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy

View File

@ -11,71 +11,74 @@ To enable OCP-Router functionality we should set the following:
Setting L7 Router
------------------

The L7 Router is the ingress point for the external traffic destined for
services in the K8S/OCP cluster. The next steps are needed for setting the L7
Router:
#. Create LoadBalancer that will run the L7 loadbalancing:

   .. code-block:: console

      $ openstack loadbalancer create --name kuryr-l7-router --vip-subnet-id k8s-service-subnet
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | admin_state_up      | True                                 |
      | created_at          | 2018-06-28T06:34:15                  |
      | description         |                                      |
      | flavor              |                                      |
      | id                  | 99f580e6-d894-442a-bc5f-4d14b41e10d2 |
      | listeners           |                                      |
      | name                | kuryr-l7-router                      |
      | operating_status    | OFFLINE                              |
      | pools               |                                      |
      | project_id          | 24042703aba141b89217e098e495cea1     |
      | provider            | amphora                              |
      | provisioning_status | PENDING_CREATE                       |
      | updated_at          | None                                 |
      | vip_address         | 10.0.0.171                           |
      | vip_network_id      | 65875d24-5a54-43fb-91a7-087e956deb1a |
      | vip_port_id         | 42c6062a-644a-4004-a4a6-5a88bf596196 |
      | vip_qos_policy_id   | None                                 |
      | vip_subnet_id       | 01f21201-65a3-4bc5-a7a8-868ccf4f0edd |
      +---------------------+--------------------------------------+
      $
#. Create floating IP address that should be accessible from external network:

   .. code-block:: console

      $ openstack floating ip create --subnet public-subnet public
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | created_at          | 2018-06-28T06:31:36Z                 |
      | description         |                                      |
      | dns_domain          | None                                 |
      | dns_name            | None                                 |
      | fixed_ip_address    | None                                 |
      | floating_ip_address | 172.24.4.3                           |
      | floating_network_id | 3371c2ba-edb5-45f2-a589-d35080177311 |
      | id                  | c971f6d3-ba63-4318-a9e7-43cbf85437c2 |
      | name                | 172.24.4.3                           |
      | port_details        | None                                 |
      | port_id             | None                                 |
      | project_id          | 24042703aba141b89217e098e495cea1     |
      | qos_policy_id       | None                                 |
      | revision_number     | 0                                    |
      | router_id           | None                                 |
      | status              | DOWN                                 |
      | subnet_id           | 939eeb1f-20b8-4185-a6b1-6477fbe73409 |
      | tags                | []                                   |
      | updated_at          | 2018-06-28T06:31:36Z                 |
      +---------------------+--------------------------------------+
      $
#. Bind the floating IP to LB vip:

   .. code-block:: console

      [stack@gddggd devstack]$ openstack floating ip set --port 42c6062a-644a-4004-a4a6-5a88bf596196 172.24.4.3
Configure Kuryr to support L7 Router and OCP-Route resources
@ -88,9 +91,9 @@ Configure Kuryr to support L7 Router and OCP-Route resources
2. Enable the ocp-route and k8s-endpoint handlers. For that you need to add
   these handlers to the enabled handlers list at kuryr.conf (details on how to
   edit this for containerized deployment can be found at
   :doc:`./devstack/containerized`)::

   [kubernetes]
   enabled_handlers=vif,lb,lbaasspec,ocproute,ingresslb

View File

@ -22,8 +22,8 @@ maximum size can be disabled by setting it to 0::
ports_pool_max = 10
ports_pool_min = 5
In addition, the size of the bulk operation, e.g., the number of ports created
in a bulk request upon pool population, can be modified::

[vif_pool]
ports_pool_batch = 5
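The way ``ports_pool_min`` and ``ports_pool_batch`` interact during
repopulation can be sketched as follows. This is a simplified model of the
behaviour described above, not the actual vif_pool driver logic.

```python
def ports_to_request(available, pool_min, batch):
    """Return how many ports to create in bulk when a pool has fallen
    below its configured minimum (simplified model, not kuryr code)."""
    if available >= pool_min:
        return 0
    missing = pool_min - available
    # requests are rounded up to whole batches of ports_pool_batch
    return -(-missing // batch) * batch

# with ports_pool_min = 5 and ports_pool_batch = 5 as configured above
request = ports_to_request(available=2, pool_min=5, batch=5)
```

With 2 ports left in the pool, 3 are missing, so one whole batch of 5 ports
would be requested in a single bulk call rather than 3 individual creations.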
@ -101,15 +101,15 @@ nodes are Bare Metal while others are running inside VMs, therefore having
different VIF drivers (e.g., neutron and nested-vlan).

This new multi pool driver is the default pool driver used even if a different
vif_pool_driver is set at the config option. However, if the configuration
about the mappings between the different pod vif and pools drivers is not
provided at the vif_pool_mapping config option of the vif_pool configuration
section, only one pool driver will be loaded -- using the standard
pod_vif_driver and vif_pool_driver config options, i.e., using the one
selected at kuryr.conf options.

To enable the option of having different pools depending on the node's pod vif
types, you need to state the type of pool that you want for each pod vif
driver, e.g.:
.. code-block:: ini

View File

@ -9,7 +9,8 @@ be implemented in the following way:
* **Service**: It is translated to a single **LoadBalancer** and as many
  **Listeners** and **Pools** as ports the Kubernetes Service spec defines.
* **ClusterIP**: It is translated to a LoadBalancer's VIP.
* **loadBalancerIP**: Translated to public IP associated with the
  LoadBalancer's VIP.
* **Endpoints**: The Endpoint object is translated to a LoadBalancer's
  **Members**.
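The mapping above can be illustrated with a small sketch that turns a
Kubernetes Service spec into the set of load balancer objects the text
describes — one LoadBalancer per Service, one Listener and one Pool per
exposed port. The dictionary shapes are simplified stand-ins, not the actual
Octavia or kuryr data model.

```python
def translate_service(service):
    """Sketch of the mapping above: one LoadBalancer per Service, plus
    one Listener and one Pool per port in the Service spec."""
    loadbalancer = {'name': service['name'], 'vip': service['clusterIP']}
    listeners, pools = [], []
    for port in service['ports']:
        listeners.append({'loadbalancer': loadbalancer['name'],
                          'protocol': port['protocol'],
                          'port': port['port']})
        pools.append({'listener_port': port['port'],
                      'member_port': port['targetPort']})
    return loadbalancer, listeners, pools

service = {'name': 'frontend', 'clusterIP': '10.2.0.10',
           'ports': [{'protocol': 'TCP', 'port': 80, 'targetPort': 8080},
                     {'protocol': 'TCP', 'port': 443, 'targetPort': 8443}]}
loadbalancer, listeners, pools = translate_service(service)
```

A two-port Service thus yields one LoadBalancer carrying the ClusterIP as its
VIP, and two Listener/Pool pairs, which is exactly the per-port fan-out the
following paragraphs comment on.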
@ -20,16 +21,15 @@ be implemented in the following way:
:width: 100%
:alt: Graphical depiction of the translation explained above
In this diagram you can see how the Kubernetes entities in the top left
corner are implemented in plain Kubernetes networking (top-right) and in
Kuryr's default configuration (bottom).

If you are paying attention and are familiar with the `LBaaS API`_ you probably
noticed that we have separate pools for each exposed port in a service. This is
probably not optimal and we would probably benefit from keeping a single
Neutron pool that lists each of the per port listeners. Since `LBaaS API`_
doesn't support UDP load balancing, service exported UDP ports will be ignored.
When installing you can decide to use the legacy Neutron HAProxy driver for
LBaaSv2 or install and configure OpenStack Octavia, which as of Pike implements
@ -43,8 +43,8 @@ will be offered on each.
Legacy Neutron HAProxy agent
----------------------------

The requirements for running Kuryr with the legacy Neutron HAProxy agent are
the following:
* Keystone
* Neutron
@ -53,9 +53,10 @@ following:
As you can see, the only addition from the minimal OpenStack deployment for
Kuryr is the Neutron lbaasv2 agent.

In order to use Neutron HAProxy as the Neutron LBaaSv2 implementation you
should not only install the neutron-lbaas agent but also place this snippet in
the *[service_providers]* section of neutron.conf in your network controller
node::

NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
@ -90,12 +91,11 @@ network namespace is used by Octavia to reconfigure and monitor the Load
Balancer, which it talks to via HAProxy's control unix domain socket.

Running Kuryr with Octavia means that each Kubernetes service that runs in the
cluster will need at least one Load Balancer VM, i.e., an *Amphora*. To avoid
a single point of failure at Amphora, Octavia should be configured to support
active/standby loadbalancer topology. In addition, it is important to
configure the right Octavia flavor for your deployment and to size the compute
nodes appropriately so that Octavia can operate well.

Another important consideration is where the Amphorae run, i.e., whether the
worker nodes should also be compute nodes so that they run the Amphorae or if
@ -396,13 +396,13 @@ The services and pods subnets should be created.
--service-cluster-ip-range=10.2.0.0/17
As a result of this, Kubernetes will allocate the **10.2.0.1** address to
the Kubernetes API service, i.e., the service used for pods to talk to the
Kubernetes API server. It will be able to allocate service addresses up
until **10.2.127.254**. The rest of the addresses, as stated above, will be
for Octavia load balancer *vrrp* ports. **If this subnetting was not done,
Octavia would allocate *vrrp* ports with the Neutron IPAM from the same
range as Kubernetes service IPAM and we'd end up with conflicts**.
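The subnet arithmetic above can be checked with Python's ``ipaddress`` module:
the /17 given to Kubernetes and the remaining /17 left for Octavia *vrrp*
ports must not overlap. The concrete second-half network shown here follows
from the split described in the text.

```python
import ipaddress

# the /17 handed to kube-apiserver via --service-cluster-ip-range
services = ipaddress.ip_network('10.2.0.0/17')
# the other half of the range, left for Octavia load balancer vrrp ports
vrrp = ipaddress.ip_network('10.2.128.0/17')

first_service_ip = services[1]   # 10.2.0.1, taken by the API service
last_service_ip = services[-2]   # 10.2.127.254, last allocatable address
```

Because the two halves are disjoint, Neutron IPAM handing out vrrp addresses
from the second /17 can never collide with the service IPs Kubernetes assigns
from the first.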
#. Once you have Kubernetes installed and you have the API host reachable from
   the pod subnet, follow the `Making the Pods be able to reach the Kubernetes
@ -453,7 +453,8 @@ The services and pods subnets should be created.
| updated_at          | 2017-10-02T09:22:37Z                 |
+---------------------+--------------------------------------+
and then create k8s service with type=LoadBalancer and
load-balancer-ip=<floating_ip> (e.g.: 172.24.4.13)
In both 'User' and 'Pool' methods, the external IP address could be found
in k8s service status information (under loadbalancer/ingress/ip)
@ -507,9 +508,9 @@ of doing the following:
+---------------------------+--------------------------------------+
Create the subnet. Note that we disable dhcp as Kuryr-Kubernetes pod subnets Create the subnet. Note that we disable dhcp as Kuryr-Kubernetes pod subnets
have no need for them for Pod networking. We also put the gateway on the last have no need for them for Pod networking. We also put the gateway on the
IP of the subnet range so that the beginning of the range can be kept for last IP of the subnet range so that the beginning of the range can be kept
Kubernetes driven service IPAM:: for Kubernetes driven service IPAM::
$ openstack subnet create --network k8s --no-dhcp \
    --gateway 10.0.255.254 \
@ -558,15 +559,15 @@ of doing the following:
--service-cluster-ip-range=10.0.0.0/18
As a result of this, Kubernetes will allocate the **10.0.0.1** address to
the Kubernetes API service, i.e., the service used for pods to talk to the
Kubernetes API server. It will be able to allocate service addresses up
until **10.0.63.255**. The rest of the addresses will be for pods or Octavia
load balancer *vrrp* ports.
#. Once you have Kubernetes installed and you have the API host reachable from
the pod subnet, follow the `Making the Pods be able to reach the Kubernetes
API`_ section
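As a quick sanity check of the split described above, the boundaries of the
``10.0.0.0/18`` service range can be verified with Python's standard library
(a throwaway snippet, not part of Kuryr):

```python
import ipaddress

# The range passed to kube-apiserver via --service-cluster-ip-range.
svc = ipaddress.ip_network("10.0.0.0/18")

print(svc[1])   # 10.0.0.1    -> taken by the Kubernetes API service
print(svc[-1])  # 10.0.63.255 -> last address of the service range
```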
.. _k8s_lb_reachable:


@ -4,10 +4,9 @@
How to configure SR-IOV ports
=============================
Current approach of SR-IOV relies on sriov-device-plugin [2]_. While creating
pods with SR-IOV, sriov-device-plugin should be enabled on all nodes. To use
a SR-IOV port on a baremetal installation, the 3 following steps should be
done:
1. Create OpenStack network and subnet for SR-IOV.
The following steps should be done with admin rights.
@ -27,9 +26,10 @@ Subnet id <UUID of vlan-sriov-net> will be used later in NetworkAttachmentDefini
physical_device_mappings = physnet1:ens4f0
default_physnet_subnets = physnet1:<UUID of vlan-sriov-net>
This mapping is required to find the appropriate PF/VF functions at the
binding phase. physnet1 is just an identifier for subnet <UUID of
vlan-sriov-net>. Such an indirection is necessary to support a many-to-many
relation.
3. Prepare NetworkAttachmentDefinition object.
Apply NetworkAttachmentDefinition with "sriov" driverType inside,
@ -72,25 +72,27 @@ into the pod's yaml.
intel.com/sriov: '2'
In the above example two SR-IOV devices will be attached to the pod. The
first one is described in the sriov-net1 NetworkAttachmentDefinition, the
second one in sriov-net2. They may have different subnetId.
4. Specify resource names

The resource name *intel.com/sriov*, which is used in the above example, is
the default resource name. This name was used in version 1 of the SR-IOV
network device plugin (release-v1 branch), but since the latest version the
device plugin can use any arbitrary name for the resources [3]_. This name
should match the "^\[a-zA-Z0-9\_\]+$" regular expression. To be able to work
with arbitrary resource names, physnet_resource_mappings and
device_plugin_resource_prefix in the [sriov] section of the kuryr-controller
configuration file should be filled in. The default value for
device_plugin_resource_prefix is intel.com, the same as in the SR-IOV network
device plugin. In case the SR-IOV network device plugin was started with a
-resource-prefix value different from intel.com, that value should be set in
device_plugin_resource_prefix, otherwise kuryr-kubernetes will not work with
the resource.
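Under the samsung.com prefix and numa0 resource name assumed in the example
below, the [sriov] section of the kuryr-controller configuration might look
like this (values are illustrative, not defaults):

```ini
[sriov]
# Must match the -resource-prefix the device plugin was started with.
device_plugin_resource_prefix = samsung.com
# Maps a physnet to the device plugin resource name serving it.
physnet_resource_mappings = physnet1:numa0
```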
Assume we have the following SR-IOV network device plugin configuration
(defined by the -config-file option):
.. code-block:: json
@ -107,9 +109,9 @@ Assume we have following SR-IOV network device plugin (defined by -config-file o
}
We defined the numa0 resource name; also assume we started sriovdp with the
-resource-prefix samsung.com value. The PCI address of the ens4f0 interface
is "0000:02:00.0". If we assign 8 VFs to ens4f0 and launch the SR-IOV network
device plugin, we can see the following state of kubernetes:
.. code-block:: bash


@ -23,8 +23,8 @@ look like:
"driverType": "sriov" "driverType": "sriov"
}' }'
Here ``88d0b025-2710-4f02-a348-2829853b45da`` is the id of a precreated
subnet that is expected to be used for SR-IOV ports:
.. code-block:: bash
@ -55,9 +55,9 @@ subnet that is expected to be used for SR-IOV ports:
| updated_at        | 2018-11-21T10:57:34Z                             |
+-------------------+--------------------------------------------------+
1. Create deployment definition <DEFINITION_FILE_NAME> with one SR-IOV
interface (apart from the default one). Deployment definition file might
look like:
.. code-block:: yaml
@ -106,8 +106,8 @@ created before.
nginx-sriov-558db554d7-rvpxs 1/1 Running 0 1m
4. If your image contains ``iputils`` (for example, busybox image), you can
attach to the pod and check that the correct interface has been attached to
the Pod.
.. code-block:: bash
@ -138,16 +138,15 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
inet6 fe80::f816:3eff:fea8:55af/64 scope link
valid_lft forever preferred_lft forever
4.1. Alternatively you can log in to the k8s worker and do the same from the
host system. Use the following command to find out the ID of the running
SR-IOV container:
.. code-block:: bash
$ docker ps
Suppose that the ID of the created container is ``eb4e10f38763``. Use the
following command to get the PID of that container:
.. code-block:: bash
@ -190,7 +189,8 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
In our example the sriov interface has the address 192.168.2.6.
5. Use the neutron CLI to check that the port with the exact address has been
created on neutron:
.. code-block:: bash
@ -238,8 +238,9 @@ with the following command:
| updated_at            | 2018-11-26T09:13:07Z                                                       |
+-----------------------+----------------------------------------------------------------------------+
The port would have the name of the pod, ``compute::kuryr::sriov`` for device
owner and 'direct' vnic_type. Verify that IP and MAC addresses of the port
match the ones on the container. Currently the neutron-sriov-nic-agent does
not properly detect SR-IOV ports assigned to containers. This means that
direct ports in neutron would always remain in *DOWN* state. This doesn't
affect the feature in any way other than cosmetically.


@ -2,9 +2,9 @@
Testing UDP Services
====================
In this example, we will use the `kuryr-udp-demo`_ image. This image
implements a simple UDP server that listens on port 9090, and replies to the
client when a packet is received.
We first create a deployment named demo::
@ -37,7 +37,8 @@ Next, we expose the deployment as a service, setting UDP port to 90::
demo ClusterIP 10.0.0.150 <none> 90/UDP 16s
kubernetes ClusterIP 10.0.0.129 <none> 443/TCP 17m
Now, let's check the OpenStack load balancer created by Kuryr for the
**demo** service::
$ openstack loadbalancer list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
@ -113,14 +114,13 @@ And the load balancer has two members listening on UDP port 9090::
+--------------------------------------+-----------------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
At this point, we have both the kubernetes **demo** service and corresponding
openstack load balancer running, and we are ready to run the client
application.

For the client application we will use the `udp-client`_ python script. The
UDP client script sends a UDP message towards a specific IP and port, and
waits for a response from the server. The way that the client application can
communicate with the server is by leveraging the Kubernetes service
functionality.
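The client side of such an exchange boils down to a few lines of Python. The
sketch below is only illustrative (the function name and defaults are
assumptions); the actual script is the one cloned in the next step:

```python
import socket

def query_udp_service(ip, port, message, timeout=5.0):
    """Send one UDP datagram and wait for the server's reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(message.encode(), (ip, port))
        reply, _ = sock.recvfrom(1024)  # blocks until the server answers
        return reply.decode()
    finally:
        sock.close()
```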
First we clone the client script::
@ -149,8 +149,8 @@ Last step will be to ping the UDP server service::
demo-fbb89f54c-q9fq7: HELLO, I AM ALIVE!!!
Since the `kuryr-udp-demo`_ application concatenates the pod's name to the
replied message, it is plain to see that both of the service's pods are
replying to the requests from the client.
.. _kuryr-udp-demo: https://hub.docker.com/r/yboaron/kuryr-udp-demo/


@ -40,7 +40,7 @@ steps can be followed:
$ openstack floating ip create --port port0 public
Note that subports can be added to the trunk port, and be used inside the VM
with the specific vlan, 102 in the example, by doing::
$ openstack network trunk set --subport port=subport0,segmentation-type=vlan,segmentation-id=102 trunk0


@ -2,8 +2,8 @@
Upgrading kuryr-kubernetes
==========================
Kuryr-Kubernetes supports the standard OpenStack utility for checking that an
upgrade is possible and safe:
.. code-block:: bash
@ -87,5 +87,5 @@ It's possible that some annotations were somehow malformed. That will generate
a warning that should be investigated, but isn't blocking upgrading to T
(it won't make things any worse).
If in any case you need to roll back those changes, there is the
``kuryr-k8s-status upgrade downgrade-annotations`` command as well.


@ -2,9 +2,10 @@
Kuryr-Kubernetes Release Notes Howto
====================================
Release notes are a new feature for documenting new features in OpenStack
projects. Background on the process, tooling, and methodology is documented in
a `mailing list post by Doug Hellmann
<http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html>`_.
For information on how to create release notes, please consult the `Release
Notes documentation <https://docs.openstack.org/reno/latest/>`_.