Fix inconsistency in headline format.

There are several reStructuredText files which don't follow the guide for
documentation contribution. This change also introduces a rule that a blank
line must separate a section body from the following section headline.
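
For reference, a minimal sketch of the headline convention these files are
being aligned to (heading characters taken from the style note quoted in the
design documents below; the heading names themselves are placeholders):

    =========
    Doc Title
    =========

    Heading 1
    ---------

    Section body text; a blank line always separates it from the next headline.

    Heading 2
    ~~~~~~~~~

    Heading 3
    +++++++++

    Heading 4
    '''''''''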

Change-Id: I2dfe9aea36299e3986acb16b1805a4dc0cd952d2
Roman Dobosz 2019-11-12 15:36:47 +01:00
parent facc629005
commit e32d796ac7
51 changed files with 322 additions and 88 deletions

View File

@@ -1,3 +1,4 @@
+===================================
 kuryr-kubernetes Style Commandments
 ===================================

View File

@@ -1,3 +1,4 @@
+========================
 Team and repository tags
 ========================
@@ -6,6 +7,7 @@ Team and repository tags
 .. Change things from this point on
 Project description
 ===================
@@ -26,4 +28,5 @@ require it or to use different segments and, for example, route between them.
 Contribution guidelines
 -----------------------
 For the process of new feature addition, refer to the `Kuryr Policy <https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_

View File

@@ -1,3 +1,4 @@
+====================
 Kuryr Heat Templates
 ====================
@@ -5,8 +6,9 @@ This set of scripts and Heat templates are useful for deploying devstack
 scenarios. It handles the creation of an allinone devstack nova instance and its
 networking needs.
 Prerequisites
-~~~~~~~~~~~~~
+-------------
 Packages to install on the host you run devstack-heat (not on the cloud server):
@@ -29,8 +31,9 @@ After creating the instance, devstack-heat will immediately start creating a
 devstack `stack` user and using devstack to stack kuryr-kubernetes. When it is
 finished, there'll be a file names `/opt/stack/ready`.
 How to run
-~~~~~~~~~~
+----------
 In order to run it, make sure that you have sourced your OpenStack cloud
 provider openrc file and tweaked `hot/parameters.yml` to your liking then launch
@@ -53,8 +56,11 @@ This will create a stack named *gerrit_465657*. Further devstack-heat
 subcommands should be called with the whole name of the stack, i.e.,
 *gerrit_465657*.
 Getting inside the deployment
------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+You can then ssh into the deployment in two ways:
 You can then ssh into the deployment in two ways::
@@ -79,8 +85,9 @@ To delete the deployment::
 ./devstack-heat unstack name_of_my_stack
 Supported images
-----------------
+~~~~~~~~~~~~~~~~
 It should work with the latest centos7 image. It is not tested with the latest
 ubuntu 16.04 cloud image but it will probably work.

View File

@@ -1,14 +1,17 @@
+====================
 Kuryr kubectl plugin
 ====================
 This plugin aims to bring kuryr introspection an interaction to the kubectl and
 oc command line tools.
 Installation
 ------------
 Place the kuryr directory in your ~/.kube/plugins
 Usage
 -----
@@ -16,6 +19,7 @@ The way to use it is via the kubectl/oc plugin facility::
 kubectl plugin kuryr get vif -o wide -l deploymentconfig=demo
 Media
 -----

View File

@@ -1,3 +1,4 @@
+=============================
 Subport pools management tool
 =============================

View File

@@ -1,4 +1,5 @@
 ============
 Contributing
 ============
 .. include:: ../../CONTRIBUTING.rst

View File

@@ -16,9 +16,9 @@
 Kuryr Kubernetes Health Manager Design
 ======================================
 Purpose
 -------
 The purpose of this document is to present the design decision behind
 Kuryr Kubernetes Health Managers.
@@ -26,6 +26,7 @@ The main purpose of the Health Managers is to perform Health verifications that
 assures readiness and liveness to Kuryr Controller and CNI pod, and so improve
 the management that Kubernetes does on Kuryr-Kubernetes pods.
 Overview
 --------
@@ -46,8 +47,10 @@ configurations are properly verified to assure CNI daemon is in a good shape.
 On this way, the CNI Health Manager will check and serve the health state to
 Kubernetes readiness and liveness probes.
 Proposed Solution
 -----------------
 One of the endpoints provided by the Controller Health Manager will check
 whether it is able to watch the Kubernetes API, authenticate with Keystone
 and talk to Neutron, since these are services needed by Kuryr Controller.

View File

@@ -16,9 +16,9 @@
 Active/Passive High Availability
 ================================
 Overview
 --------
 Initially it was assumed that there will only be a single kuryr-controller
 instance in the Kuryr-Kubernetes deployment. While it simplified a lot of
 controller code, it is obviously not a perfect situation. Having redundant
@@ -29,16 +29,20 @@ Now with introduction of possibility to run Kuryr in Pods on Kubernetes cluster
 HA is much easier to be implemented. The purpose of this document is to explain
 how will it work in practice.
 Proposed Solution
 -----------------
 There are two types of HA - Active/Passive and Active/Active. In this document
 we'll focus on the former. A/P basically works as one of the instances being
 the leader (doing all the exclusive tasks) and other instances waiting in
 *standby* mode in case the leader *dies* to take over the leader role. As you
 can see a *leader election* mechanism is required to make this work.
 Leader election
-+++++++++++++++
+~~~~~~~~~~~~~~~
 The idea here is to use leader election mechanism based on Kubernetes
 endpoints. The idea is neatly `explained on Kubernetes blog
 <https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/>`_.
@@ -67,8 +71,10 @@ This adds a new container to the pod. This container will do the
 leader-election and expose the simple JSON API on port 16401 by default. This
 API will be available to kuryr-controller container.
 Kuryr Controller Implementation
-+++++++++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The main issue with having multiple controllers is task division. All of the
 controllers are watching the same endpoints and getting the same notifications,
 but those notifications cannot be processed by multiple controllers at once,
@@ -93,8 +99,10 @@ Please note that this means that in HA mode Watcher will not get started on
 controller startup, but only when periodic task will notice that it is the
 leader.
 Issues
-++++++
+~~~~~~
 There are certain issues related to orphaned OpenStack resources that we may
 hit. Those can happen in two cases:
@@ -116,8 +124,10 @@ The latter of the issues can also be tackled by saving last seen
 ``resourceVersion`` of watched resources list when stopping the Watcher and
 restarting watching from that point.
 Future enhancements
-+++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~
 It would be useful to implement the garbage collector and
 ``resourceVersion``-based protection mechanism described in section above.

View File

@@ -20,6 +20,7 @@
 (Avoid deeper levels because they do not render well.)
+===========================
 Design and Developer Guides
 ===========================
@@ -48,6 +49,7 @@ Design documents
 network_policy
 updating_pod_resources_api
 Indices and tables
 ------------------

View File

@@ -16,22 +16,26 @@
 Kuryr Kubernetes Integration Design
 ===================================
 Purpose
 -------
 The purpose of this document is to present the main Kuryr-K8s integration
 components and capture the design decisions of each component currently taken
 by the kuryr team.
 Goal Statement
 --------------
 Enable OpenStack Neutron realization of the Kubernetes networking. Start by
 supporting network connectivity and expand to support advanced features, such
 as Network Policies. In the future, it may be extended to some other
 openstack services.
 Overview
 --------
 In order to integrate Neutron into kubernetes networking, 2 components are
 introduced: Controller and CNI Driver.
 Controller is a supervisor component responsible to maintain translation of
@@ -47,8 +51,10 @@ Please see below the component view of the integrated system:
 :align: center
 :width: 100%
 Design Principles
 -----------------
 1. Loose coupling between integration components.
 2. Flexible deployment options to support different project, subnet and
 security groups assignment profiles.
@@ -64,8 +70,10 @@ Design Principles
 configuration. If some vendor requires some extra code, it should be handled
 in one of the stevedore drivers.
 Kuryr Controller Design
 -----------------------
 Controller is responsible for watching Kubernetes API endpoints to make sure
 that the corresponding model is maintained in Neutron. Controller updates K8s
 resources endpoints' annotations to keep neutron details required by the CNI
@@ -73,16 +81,20 @@ driver as well as for the model mapping persistency.
 Controller is composed from the following components:
 Watcher
 ~~~~~~~
 Watcher is a common software component used by both the Controller and the CNI
 driver. Watcher connects to Kubernetes API. Watcher's responsibility is to observe the
 registered (either on startup or dynamically during its runtime) endpoints and
 invoke registered callback handler (pipeline) to pass all events from
 registered endpoints.
 Event Handler
 ~~~~~~~~~~~~~
 EventHandler is an interface class for the Kubernetes event handling. There are
 several 'wrapper' event handlers that can be composed to implement Controller
 handling pipeline.
@@ -107,8 +119,10 @@ facility.
 handlers based on event content and handler predicate provided during event
 handler registration.
 ControllerPipeline
 ~~~~~~~~~~~~~~~~~~
 ControllerPipeline serves as an event dispatcher of the Watcher for Kuryr-K8s
 controller Service. Currently watched endpoints are 'pods', 'services' and
 'endpoints'. Kubernetes resource event handlers (Event Consumers) are registered into
@@ -127,8 +141,10 @@ order arrival. Events of different Kubernetes objects are handled concurrently.
 :align: center
 :width: 100%
 ResourceEventHandler
 ~~~~~~~~~~~~~~~~~~~~
 ResourceEventHandler is a convenience base class for the Kubernetes event processing.
 The specific Handler associates itself with specific Kubernetes object kind (through
 setting OBJECT_KIND) and is expected to implement at least one of the methods
@@ -139,8 +155,10 @@ actions, Handler has 'on_present' method that is invoked for both event types.
 The specific Handler implementation should strive to put all the common ADDED
 and MODIFIED event handling logic in this method to avoid code duplication.
 Pluggable Handlers
 ~~~~~~~~~~~~~~~~~~
 Starting with the Rocky release, Kuryr-Kubernetes includes a pluggable
 interface for the Kuryr controller handlers.
 The pluggable handlers framework allows :
@@ -170,6 +188,7 @@ at kuryr.conf::
 Providers
 ~~~~~~~~~
 Provider (Drivers) are used by ResourceEventHandlers to manage specific aspects
 of the Kubernetes resource in the OpenStack domain. For example, creating a Kubernetes Pod
 will require a neutron port to be created on a specific network with the proper
@@ -185,8 +204,10 @@ drivers. There are drivers that handle the Pod based on the project, subnet
 and security groups specified via configuration settings during cluster
 deployment phase.
 NeutronPodVifDriver
 ~~~~~~~~~~~~~~~~~~~
 PodVifDriver subclass should implement request_vif, release_vif and
 activate_vif methods. In case request_vif returns Vif object in down state,
 Controller will invoke activate_vif. Vif 'active' state is required by the
@@ -194,6 +215,7 @@ CNI driver to complete pod handling.
 The NeutronPodVifDriver is the default driver that creates neutron port upon
 Pod addition and deletes port upon Pod removal.
 CNI Driver
 ----------
@@ -208,6 +230,7 @@ supposed to be maintained.
 .. _cni-daemon:
 CNI Daemon
 ----------
@@ -232,6 +255,7 @@ kubernetes API and added to the registry by Watcher thread, Server will
 eventually get VIF it needs to connect for a given pod. Then it waits for the
 VIF to become active before returning to the CNI Driver.
 Communication
 ~~~~~~~~~~~~~
@@ -247,8 +271,10 @@ For reference see updated pod creation flow diagram:
 :align: center
 :width: 100%
 /addNetwork
 +++++++++++
 **Function**: Is equivalent of running ``K8sCNIPlugin.add``.
 **Return code:** 201 Created
@@ -257,8 +283,10 @@
 oslo.versionedobject from ``os_vif`` library. On the other side it can be
 deserialized using o.vo's ``obj_from_primitive()`` method.
 /delNetwork
 +++++++++++
 **Function**: Is equivalent of running ``K8sCNIPlugin.delete``.
 **Return code:** 204 No content
@@ -271,5 +299,6 @@ to perform its tasks and wait on socket for result.
 Kubernetes Documentation
 ------------------------
 The `Kubernetes reference documentation <https://kubernetes.io/docs/reference/>`_
 is a great source for finding more details about Kubernetes API, CLIs, and tools.

View File

@@ -18,11 +18,14 @@ Kuryr Kubernetes Ingress integration design
 Purpose
 -------
 The purpose of this document is to present how Kubernetes Ingress controller
 is supported by the kuryr integration.
 Overview
 --------
 A Kubernetes Ingress [1]_ is used to give services externally-reachable URLs,
 load balance traffic, terminate SSL, offer name based virtual
 hosting, and more.
@@ -32,8 +35,10 @@ security configuration.
 A Kubernetes Ingress Controller [2]_ is an entity that watches the apiserver's
 /ingress resources for updates. Its job is to satisfy requests for Ingresses.
 Proposed Solution
 -----------------
 The suggested solution is based on extension of the kuryr-kubernetes controller
 handlers functionality to support kubernetes Ingress resources.
 This extension should watch kubernetes Ingresses resources, and the
@@ -48,8 +53,10 @@ content in HTTP header, e.g: HOST_NAME).
 Kuryr will use neutron LBaaS L7 policy capability [3]_ to perform
 the L7 routing task.
 SW architecture:
 ----------------
 The following scheme describes the SW modules that provides Ingress controller
 capability in Kuryr Kubernetes context:
@@ -68,8 +75,10 @@ modules:
 Each one of this modules is detailed described below.
 Ingress resource creation
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 The kuryr-kubernetes controller will create the L7 router,
 and both Ingress and Service/Endpoints handlers should update the L7
 rules database of the L7 router.
@@ -82,8 +91,10 @@ ingress controller SW :
 :align: center
 :width: 100%
 The L7 Router
 ~~~~~~~~~~~~~
 In Kuryr context, a L7 router is actually an externally reachable
 loadbalancer with L7 capabilities.
 For achieving external connectivity the L7 router is attached to a floating
@@ -112,8 +123,10 @@ The next diagram illustrates data flow from external user to L7 router:
 :align: center
 :width: 100%
 Ingress Handler
 ~~~~~~~~~~~~~~~
 The Ingress Handler watches the apiserver's for updates to
 the Ingress resources and should satisfy requests for Ingresses.
 Each Ingress being translated to a L7 policy in L7 router, and the rules on
@@ -126,16 +139,20 @@ Since the Service/Endpoints resource is not aware of changes in Ingress objects
 pointing to it, the Ingress handler should trigger this notification,
 the notification will be implemented using annotation.
 Service/Endpoints Handler
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 The Service/Endpoints handler should be **extended** to support the flows
 involving Ingress resources.
 The Service/Endpoints handler should add/delete all its members to/from the
 LBaaS pool mentioned above, in case an Ingress is pointing this
 Service/Endpoints as its destination.
 The L7 router driver
 ~~~~~~~~~~~~~~~~~~~~
 The L7 router, Ingress handler and Service/Endpoints handler will
 call the L7 router driver services to create the L7 routing entities chain.
 The L7 router driver will rely on neutron LBaaS functionality.
@@ -156,8 +173,10 @@ entities is given below:
 - The green components are created/released by Ingress handler.
 - The red components are created/released by Service/Endpoints handler.
 Use cases examples
 ~~~~~~~~~~~~~~~~~~
 This section describe in details the following scenarios:
 A. Create Ingress, create Service/Endpoints.
@@ -243,7 +262,7 @@ This section describe in details the following scenarios:
 References
 ==========
 .. [1] https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
 .. [2] https://github.com/kubernetes/ingress-nginx/blob/master/README.md
 .. [3] https://wiki.openstack.org/wiki/Neutron/LBaaS/l7

View File

@@ -18,11 +18,14 @@ Kuryr Kubernetes Openshift Routes integration design
 Purpose
 -------
 The purpose of this document is to present how Openshift Routes are supported
 by kuryr-kubernetes.
 Overview
 --------
 OpenShift Origin [1]_ is an open source cloud application development and
 hosting platform that automates the provisioning, management and scaling
 of applications.
@@ -39,19 +42,25 @@ incoming connections.
 The Openshift Routes concept introduced before Ingress [3]_ was supported by
 kubernetes, the Openshift Route matches the functionality of kubernetes Ingress.
 Proposed Solution
 -----------------
 The solution will rely on L7 router, Service/Endpoints handler and
 L7 router driver components described at kuryr-kubernetes Ingress integration
 design, where a new component - OCP-Route handler, will satisfy requests for
 Openshift Route resources.
 Controller Handlers impact:
 ---------------------------
 The controller handlers should be extended to support OCP-Route resource.
 The OCP-Route handler
 ~~~~~~~~~~~~~~~~~~~~~
 The OCP-Route handler watches the apiserver's for updates to Openshift
 Route resources.
 The following scheme describes OCP-Route controller SW architecture:
@@ -72,8 +81,10 @@ pointing to it, the OCP-Route handler should trigger this notification,
 the notification will be implemented using annotation of the relevant
 Endpoint resource.
 Use cases examples
 ~~~~~~~~~~~~~~~~~~
 This section describes in details the following scenarios:
 A. Create OCP-Route, create Service/Endpoints.
@@ -151,8 +162,10 @@ This section describes in details the following scenarios:
 * As a result to the OCP-Route handler notification, the Service/Endpoints
 handler will set its internal state to 'no Ingress is pointing' state.
 References
 ==========
 .. [1] https://www.openshift.org/
 .. [2] https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
 .. [3] https://kubernetes.io/docs/concepts/Services-networking/ingress/

View File

@@ -12,24 +12,29 @@
 ''''''' Heading 4
 (Avoid deeper levels because they do not render well.)
-================================
+==============
 Network Policy
-================================
+==============
 Purpose
 --------
 The purpose of this document is to present how Network Policy is supported by
 Kuryr-Kubernetes.
 Overview
 --------
 Kubernetes supports a Network Policy object to express ingress and egress rules
 for pods. Network Policy reacts on labels to qualify multiple pods, and defines
 rules based on differents labeling and/or CIDRs. When combined with a
 networking plugin, those policy objetcs are enforced and respected.
 Proposed Solution
 -----------------
 Kuryr-Kubernetes relies on Neutron security groups and security group rules to
 enforce a Network Policy object, more specifically one security group per policy
 with possibly multiple rules. Each object has a namespace scoped Network Policy
@@ -70,67 +75,85 @@ side effects/actions of when a Network Policy is being enforced.
 expressions, mix of namespace and pod selector, ip block
 * named port
 New handlers and drivers
-++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~
 The Network Policy handler
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++
 This handler is responsible for triggering the Network Policy Spec processing,
 and the creation or removal of security group with appropriate security group
 rules. It also, applies the security group to the pods and services affected
 by the policy.
 The Pod Label handler
-~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++
 This new handler is responsible for triggering the update of a security group
 rule upon pod labels changes, and its enforcement on the pod port and service.
 The Network Policy driver
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++
 Is the main driver. It ensures a Network Policy by processing the Spec
 and creating or updating the Security group with appropriate
 security group rules.
 The Network Policy Security Group driver
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++++++++++++
 It is responsible for creating, deleting, or updating security group rules
 for pods, namespaces or services based on different Network Policies.
 Modified handlers and drivers
-+++++++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The VIF handler
-~~~~~~~~~~~~~~~
++++++++++++++++
 As network policy rules can be defined based on pod labels, this handler
 has been enhanced to trigger a security group rule creation or deletion,
 depending on the type of pod event, if the pod is affected by the network
 policy and if a new security group rule is needed. Also, it triggers the
 translation of the pod rules to the affected service.
 The Namespace handler
-~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++
 Just as the pods labels, namespaces labels can also define a rule in a
 Network Policy. To account for this, the namespace handler has been
 incremented to trigger the creation, deletion or update of a
 security group rule, in case the namespace affects a Network Policy rule,
 and the translation of the rule to the affected service.
 The Namespace Subnet driver
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++
 In case of a namespace event and a Network Policy enforcement based
 on the namespace, this driver creates a subnet to this namespace,
 and restrict the number of security group rules for the Network Policy
 to just one with the subnet CIDR, instead of one for each pod in the namespace.
 The LBaaS driver
-~~~~~~~~~~~~~~~~
+++++++++++++++++
 To restrict the incoming traffic to the backend pods, the LBaaS driver
 has been enhanced to translate pods rules to the listener port, and react
 to Service ports updates. E.g., when the target port is not allowed by the
 policy enforced in the pod, the rule should not be added.
 The VIF Pool driver
-~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++
 The VIF Pool driver is responsible for updating the Security group applied
 to the pods ports. It has been modified to embrace the fact that with Network
 Policies pods' ports changes their security group while being used, meaning the
@@ -141,13 +164,16 @@ and host id. Thus if there is no ports on the pool with the needed
 security group id(s), one of the existing ports in the pool is updated
 to match the requested sg Id.
 Use cases examples
-++++++++++++++++++
+~~~~~~~~~~~~~~~~~~
 This section describes some scenarios with a Network Policy being enforced,
 what Kuryr componenets gets triggered and what resources are created.
 Deny all incoming traffic
-~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++
 By default, Kubernetes clusters do not restrict traffic. Only once a network
 policy is enforced to a namespace, all traffic not explicitly allowed in the
@@ -194,8 +220,9 @@ are assumed to assumed to affect Ingress.
 securityGroupId: 20d9b623-f1e0-449d-95c1-01624cb3e315
 securityGroupName: sg-default-deny
 Allow traffic from pod
-~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++
 The following Network Policy specification has a single rule allowing traffic
 on a single port from the group of pods that have the label ``role=monitoring``.
@@ -264,8 +291,9 @@ restriction was enforced.
 securityGroupId: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
 securityGroupName: sg-allow-monitoring-via-pod-selector
 Allow traffic from namespace
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++
 The following network policy only allows allowing ingress traffic
 from namespace with the label ``purpose=test``:
@@ -339,16 +367,19 @@ egress rule allowing traffic to everywhere.
 that affects ingress traffic is created, and also everytime
 a pod or namespace is created.
 Create network policy flow
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++
 .. image:: ../../images/create_network_policy_flow.svg
 :alt: Network Policy creation flow
 :align: center
 :width: 100%
 Create pod flow
-~~~~~~~~~~~~~~~
++++++++++++++++
 The following diagram only covers the implementation part that affects
 network policy.
@@ -357,8 +388,10 @@ network policy.
 :align: center
 :width: 100%
 Network policy rule definition
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++
 ======================== ======================= ==============================================
 NamespaceSelector podSelector Expected result
 ======================== ======================= ==============================================
@@ -381,8 +414,10 @@ ingress: [] Deny all traffic
 No ingress Blocks all traffic
 ======================== ================================================
 Policy types definition
-~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++
 =============== ===================== ======================= ======================
 PolicyType Spec Ingress/Egress Ingress generated rules Egress generated rules
 =============== ===================== ======================= ======================

View File

@@ -18,6 +18,7 @@ Kuryr Kubernetes Port CRD Usage
 Purpose
 -------
 The purpose of this document is to present Kuryr Kubernetes Port and PortPool
 CRD [1]_ usage, capturing the design decisions currently taken by the Kuryr
 team.
@@ -32,8 +33,10 @@ Having the details in K8s data model should also serve the case where Kuryr is
 used as generic SDN K8s integration framework. This means that Port CRD can be
 not neutron specific.
 Overview
 --------
 Interactions between Kuryr and Neutron may take more time than desired from
 the container management perspective.
@@ -46,8 +49,10 @@ them available in case of Kuryr Controller restart. Since Kuryr is stateless
 service, the details should be kept either as part of Neutron or Kubernetes
 data. Due to the perfromance costs, K8s option is more performant.
 Proposed Solution
 -----------------
 The proposal is to start relying on K8s CRD objects more and more.
 The first action is to create a KuryrPort CRD where the needed information
 about the Neutron Ports will be stored (or any other SDN).
@@ -192,7 +197,8 @@ Note this is similar to the approach already followed by the network per
 namespace subnet driver and it could be similarly applied to other SDN
 resources, such as LoadBalancers.
 References
 ==========
 .. [1] https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources

View File

@@ -16,9 +16,9 @@
 Kuryr Kubernetes Port Manager Design
 ====================================
 Purpose
 -------
 The purpose of this document is to present Kuryr Kubernetes Port Manager,
 capturing the design decision currently taken by the kuryr team.
@@ -28,8 +28,10 @@ the amount of calls to Neutron by ensuring port reusal as well as performing
 bulk actions, e.g., creating/deleting several ports within the same Neutron
 call.
 Overview
 --------
 Interactions between Kuryr and Neutron may take more time than desired from
 the container management perspective.
@@ -47,8 +49,10 @@ consequently remove the waiting time for:
 - Creating ports and waiting for them to become active when booting containers
 - Deleting ports when removing containers
 Proposed Solution
 -----------------
 The Port Manager will be in charge of handling Neutron ports. The main
 difference with the current implementation resides on when and how these
 ports are managed. The idea behind is to minimize the amount of calls to the
@@ -61,6 +65,7 @@ can be added.
 Ports Manager
 ~~~~~~~~~~~~~
 The Port Manager will handle different pool of Neutron ports:
 - Available pools: There will be a pool of ports for each tenant, host (or
@@ -105,8 +110,10 @@ In addition, a Time-To-Live (TTL) could be set to the ports at the pool, so
 that if they are not used during a certain period of time, they are removed --
 if and only if the available_pool size is still larger than the target minimum.
 Recovery of pool ports upon Kuryr-Controller restart
-++++++++++++++++++++++++++++++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 If the Kuryr-Controller is restarted, the pre-created ports will still exist
 on the Neutron side but the Kuryr-controller will be unaware of them, thus
 pre-creating more upon pod allocation requests. To avoid having these existing
@@ -134,8 +141,10 @@ attached to each existing trunk port to find where the filtered ports are
 attached and then obtain all the needed information to re-add them into the
 corresponding pools.
 Kuryr Controller Impact
-+++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~
 A new VIF Pool driver is created to manage the ports pools upon pods creation
 and deletion events. It will ensure that a pool with at least X ports is
 available for each tenant, host or trunk port, and security group, when the
@@ -151,16 +160,20 @@ changes related to the VIF drivers. The VIF drivers (neutron-vif and nested)
 will be extended to support bulk ports creation of Neutron ports and similarly
 for the VIF objects requests.
 Future enhancement
-''''''''''''''''''
+++++++++++++++++++
 The VIFHandler needs to be aware of the new Pool driver, which will load the
 respective VIF driver to be used. In a sense, the Pool Driver will be a proxy
 to the VIF Driver, but also managing the pools. When a mechanism to load and
 set the VIFHandler drivers is in place, this will be reverted so that the
 VIFHandlers becomes unaware of the pool drivers.
 Kuryr CNI Impact
-++++++++++++++++
+~~~~~~~~~~~~~~~~
 For the nested vlan case, the subports at the different pools are already
 attached to the VMs trunk ports, therefore they are already in ACTIVE status.
 However, for the generic case the ports are not really bond to anything (yet),
@@ -175,6 +188,8 @@ OVS agent sees them as 'still connected' and maintains their ACTIVE status.
 This modification must ensure the OVS (br-int) ports where these veth devices
 are connected are not deleted after container deletion by the CNI.
 Future enhancement
-''''''''''''''''''
+++++++++++++++++++
 The CNI modifications will be implemented in a second phase.

View File

@@ -16,15 +16,17 @@
 Kuryr Kubernetes Services Integration Design
 ============================================
 Purpose
 -------
 The purpose of this document is to present how Kubernetes Service is supported
 by the kuryr integration and to capture the design decisions currently taken
 by the kuryr team.
 Overview
 --------
 A Kubernetes Service is an abstraction which defines a logical set of Pods and
 a policy by which to access them. Service is a Kubernetes managed API object.
 For Kubernetes-native applications, Kubernetes offers an Endpoints API that is
@@ -33,8 +35,10 @@ please refer to `Kubernetes service <http://kubernetes.io/docs/user-guide/servic
 Kubernetes supports services with kube-proxy component that runs on each node,
 `Kube-Proxy <http://kubernetes.io/docs/admin/kube-proxy/>`_.
 Proposed Solution
 -----------------
 Kubernetes service in its essence is a Load Balancer across Pods that fit the
 service selection. Kuryr's choice is to support Kubernetes services by using
 Neutron LBaaS service. The initial implementation is based on the OpenStack
@@ -45,13 +49,17 @@ This may be affected once Kubernetes Network Policies will be supported.
 Oslo versioned objects are used to keep translation details in Kubernetes entities
 annotation. This will allow future changes to be backward compatible.
 Data Model Translation
 ~~~~~~~~~~~~~~~~~~~~~~
 Kubernetes service is mapped to the LBaaSv2 Load Balancer with associated
 Listeners and Pools. Service endpoints are mapped to Load Balancer Pool members.
 Kuryr Controller Impact
 ~~~~~~~~~~~~~~~~~~~~~~~
 Two Kubernetes Event Handlers are added to the Controller pipeline.
 - LBaaSSpecHandler manages Kubernetes Service creation and modification events.

View File

@ -16,9 +16,9 @@
HowTo Update PodResources gRPC API HowTo Update PodResources gRPC API
================================== ==================================
Purpose Purpose
------- -------
The purpose of this document is to describe how to update gRPC API files in The purpose of this document is to describe how to update gRPC API files in
kuryr-kubernetes repository in case of upgrading to a new version of Kubernetes kuryr-kubernetes repository in case of upgrading to a new version of Kubernetes
PodResources API. These files are ``api_pb2_grpc.py``, ``api_pb2.py`` and PodResources API. These files are ``api_pb2_grpc.py``, ``api_pb2.py`` and
@ -42,8 +42,10 @@ Kubernetes source tree.
version (this is highly unlikely). In this case ``protobuf`` could fail version (this is highly unlikely). In this case ``protobuf`` could fail
to use our python bindings. to use our python bindings.
Automated update Automated update
---------------- ----------------
``contrib/regenerate_pod_resources_api.sh`` script could be used to re-generate ``contrib/regenerate_pod_resources_api.sh`` script could be used to re-generate
PodResources gRPC API files. By default, this script will download ``v1alpha1`` PodResources gRPC API files. By default, this script will download ``v1alpha1``
version of ``api.proto`` file from the Kubernetes GitHub repo and create version of ``api.proto`` file from the Kubernetes GitHub repo and create
@ -61,6 +63,7 @@ Define ``API_VERSION`` environment variable to use specific version of
$ export API_VERSION=v1alpha1 $ export API_VERSION=v1alpha1
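A typical run, assuming it is executed from the repository root, would then look like::

    $ export API_VERSION=v1alpha1
    $ ./contrib/regenerate_pod_resources_api.sh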
Manual update steps Manual update steps
------------------- -------------------
@ -82,6 +85,7 @@ Don't forget to update the file header that should point to the original
// To regenerate api.proto, api_pb2.py and api_pb2_grpc.py follow instructions // To regenerate api.proto, api_pb2.py and api_pb2_grpc.py follow instructions
// from doc/source/devref/updating_pod_resources_api.rst. // from doc/source/devref/updating_pod_resources_api.rst.
Generating the python bindings Generating the python bindings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -116,4 +120,3 @@ Generating the python bindings
(venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \ (venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \
--python_out=. --grpc_python_out=. \ --python_out=. --grpc_python_out=. \
kuryr_kubernetes/pod_resources/api.proto kuryr_kubernetes/pod_resources/api.proto
View File
@ -18,12 +18,15 @@ VIF-Handler And Vif Drivers Design
Purpose Purpose
------- -------
The purpose of this document is to present an approach for implementing The purpose of this document is to present an approach for implementing
design of interaction between VIF-handler and the drivers it uses in design of interaction between VIF-handler and the drivers it uses in
Kuryr-Kubernetes Controller. Kuryr-Kubernetes Controller.
VIF-Handler VIF-Handler
----------- -----------
VIF-handler is intended to handle VIFs. The main aim of VIF-handler is to get VIF-handler is intended to handle VIFs. The main aim of VIF-handler is to get
the pod object, send it to 1) the VIF-driver for the default network, 2) the pod object, send it to 1) the VIF-driver for the default network, 2)
enabled Multi-VIF drivers for the additional networks, and get VIF objects enabled Multi-VIF drivers for the additional networks, and get VIF objects
@ -31,8 +34,10 @@ from both. After that VIF-handler is able to activate, release or update VIFs.
VIF-handler should stay clean whereas parsing of specific pod information VIF-handler should stay clean whereas parsing of specific pod information
should be done by Multi-VIF drivers. should be done by Multi-VIF drivers.
Multi-VIF driver Multi-VIF driver
~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~
This is a new type of driver which is used to call other VIF-drivers to attach This is a new type of driver which is used to call other VIF-drivers to attach
additional interfaces to Pods. The main aim of this kind of driver is to get additional interfaces to Pods. The main aim of this kind of driver is to get
additional interfaces from the Pods definition, then invoke real VIF-drivers additional interfaces from the Pods definition, then invoke real VIF-drivers
@ -53,8 +58,10 @@ Diagram describing VifHandler - Drivers flow is giver below:
:align: center :align: center
:width: 100% :width: 100%
Config Options Config Options
~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
Add a new config option "multi_vif_drivers" (list) to the config file that shows Add a new config option "multi_vif_drivers" (list) to the config file that shows
what Multi-VIF drivers should be used to specify the additional VIF objects. what Multi-VIF drivers should be used to specify the additional VIF objects.
It is allowed to have one or more multi_vif_drivers enabled, which means that It is allowed to have one or more multi_vif_drivers enabled, which means that
@ -78,8 +85,10 @@ Or like this:
multi_vif_drivers = npwg_multiple_interfaces multi_vif_drivers = npwg_multiple_interfaces
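Several drivers may also be enabled at once as a comma-separated list; the section name and driver aliases below are only an illustrative assumption::

    [kubernetes]
    multi_vif_drivers = sriov,additional_subnets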
Additional Subnets Driver Additional Subnets Driver
~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~
Since it is possible to request additional subnets for the pod through the pod Since it is possible to request additional subnets for the pod through the pod
annotations, a new driver is necessary. Based on the information annotations, a new driver is necessary. Based on the information
(requested subnets) parsed by the Multi-vif driver, it has to return a dictionary containing (requested subnets) parsed by the Multi-vif driver, it has to return a dictionary containing
@ -104,6 +113,7 @@ Here's how a Pod Spec with additional subnets requests might look like:
SRIOV Driver SRIOV Driver
~~~~~~~~~~~~ ~~~~~~~~~~~~
The SRIOV driver gets the pod object from the Multi-vif driver, along with the The SRIOV driver gets the pod object from the Multi-vif driver, along with the
information (sriov requests) it parsed. It should return a list of information (sriov requests) it parsed. It should return a list of
created vif objects. Method request_vif() has a unified interface with created vif objects. Method request_vif() has a unified interface with
@ -123,6 +133,7 @@ Here's how a Pod Spec with sriov requests might look like:
Specific ports support Specific ports support
---------------------- ----------------------
Specific ports support is enabled by default and will be a part of the drivers Specific ports support is enabled by default and will be a part of the drivers
to implement it. It is possible to have manually precreated specific ports in to implement it. It is possible to have manually precreated specific ports in
neutron and specify them in pod annotations as preferably used. This means that neutron and specify them in pod annotations as preferably used. This means that
View File
@ -8,6 +8,7 @@ Welcome to kuryr-kubernetes's documentation!
Contents Contents
-------- --------
.. toctree:: .. toctree::
:maxdepth: 3 :maxdepth: 3
@ -16,6 +17,7 @@ Contents
usage usage
contributing contributing
Developer Docs Developer Docs
-------------- --------------
@ -24,6 +26,7 @@ Developer Docs
devref/index devref/index
Design Specs Design Specs
------------ ------------
@ -37,9 +40,9 @@ Design Specs
specs/rocky/npwg_spec_support specs/rocky/npwg_spec_support
specs/stein/vhostuser specs/stein/vhostuser
Indices and tables Indices and tables
------------------ ------------------
* :ref:`genindex` * :ref:`genindex`
* :ref:`search` * :ref:`search`
View File
@ -1,3 +1,4 @@
================================================
Kuryr installation as a Kubernetes network addon Kuryr installation as a Kubernetes network addon
================================================ ================================================
@ -24,6 +25,7 @@ Deployment and kuryr-cni DaemonSet definitions to use pre-built
`controller <https://hub.docker.com/r/kuryr/controller/>`_ and `cni <https://hub.docker.com/r/kuryr/cni/>`_ `controller <https://hub.docker.com/r/kuryr/controller/>`_ and `cni <https://hub.docker.com/r/kuryr/cni/>`_
images from the Docker Hub. Those definitions will be generated in next step. images from the Docker Hub. Those definitions will be generated in next step.
Generating Kuryr resource definitions for Kubernetes Generating Kuryr resource definitions for Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -113,6 +115,7 @@ This should generate 5 files in your ``<output_dir>``:
If Open vSwitch keeps vhostuser socket files somewhere other than /var/run/openvswitch, the openvswitch If Open vSwitch keeps vhostuser socket files somewhere other than /var/run/openvswitch, the openvswitch
mount point in cni_ds.yaml and the [vhostuser] section in config_map.yml should be adjusted accordingly. mount point in cni_ds.yaml and the [vhostuser] section in config_map.yml should be adjusted accordingly.
Deploying Kuryr resources on Kubernetes Deploying Kuryr resources on Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
View File
@ -1,3 +1,4 @@
=============================
Inspect default Configuration Inspect default Configuration
============================= =============================
View File
@ -1,3 +1,4 @@
===========================
Basic DevStack installation Basic DevStack installation
=========================== ===========================
@ -9,6 +10,7 @@ operating systems. It is also assumed that ``git`` is already installed on the
system. DevStack will make sure to install and configure OpenStack, Kubernetes system. DevStack will make sure to install and configure OpenStack, Kubernetes
and dependencies of both systems. and dependencies of both systems.
Cloning required repositories Cloning required repositories
----------------------------- -----------------------------
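As a minimal sketch, at least the DevStack repository itself has to be cloned (URL current at the time of writing)::

    $ git clone https://opendev.org/openstack/devstack
    $ cd devstack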
@ -139,4 +141,4 @@ be found in `DevStack Documentation
<https://docs.openstack.org/devstack/latest/>`_, especially in section <https://docs.openstack.org/devstack/latest/>`_, especially in section
`Using Systemd in DevStack `Using Systemd in DevStack
<https://docs.openstack.org/devstack/latest/systemd.html>`_, which explains how <https://docs.openstack.org/devstack/latest/systemd.html>`_, which explains how
to use ``systemctl`` to control services and ``journalctl`` to read its logs. to use ``systemctl`` to control services and ``journalctl`` to read its logs.
View File
@ -1,3 +1,4 @@
==========================
Containerized installation Containerized installation
========================== ==========================
@ -5,6 +6,7 @@ It is possible to configure DevStack to install kuryr-controller and kuryr-cni
on Kubernetes as pods. Details can be found on :doc:`../containerized` page, on Kubernetes as pods. Details can be found on :doc:`../containerized` page,
while this page explains the DevStack aspects of running containerized. while this page explains the DevStack aspects of running containerized.
Installation Installation
------------ ------------
@ -17,6 +19,7 @@ line to your ``local.conf``: ::
This will trigger building the kuryr-controller and kuryr-cni containers during This will trigger building the kuryr-controller and kuryr-cni containers during
installation, as well as deploy them on the Kubernetes cluster it installed. installation, as well as deploy them on the Kubernetes cluster it installed.
Rebuilding container images Rebuilding container images
--------------------------- ---------------------------
@ -24,6 +27,7 @@ Instructions on how to manually rebuild both kuryr-controller and kuryr-cni
container images are presented on :doc:`../containerized` page. In case you want container images are presented on :doc:`../containerized` page. In case you want
to test any code changes, you need to rebuild the images first. to test any code changes, you need to rebuild the images first.
Changing configuration Changing configuration
---------------------- ----------------------
@ -38,12 +42,14 @@ present in the ConfigMap: kuryr.conf and kuryr-cni.conf. First one is attached
to kuryr-controller and second to kuryr-cni. Make sure to modify both when doing to kuryr-controller and second to kuryr-cni. Make sure to modify both when doing
changes important for both services. changes important for both services.
Restarting services Restarting services
------------------- -------------------
Once any changes are made to docker images or the configuration, it is crucial Once any changes are made to docker images or the configuration, it is crucial
to restart the pod you've modified. to restart the pod you've modified.
kuryr-controller kuryr-controller
~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~
@ -56,6 +62,7 @@ kill existing pod: ::
Deployment controller will make sure to restart the pod with new configuration. Deployment controller will make sure to restart the pod with new configuration.
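To verify that the replacement pod came up, something along these lines can be used (the ``kube-system`` namespace is an assumption and depends on where Kuryr was deployed)::

    $ kubectl -n kube-system get pods | grep kuryr-controller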
kuryr-cni kuryr-cni
~~~~~~~~~ ~~~~~~~~~
View File
@ -1,6 +1,6 @@
========================================= =======================================
Kuryr Kubernetes Dragonflow Integration Kuryr Kubernetes Dragonflow Integration
========================================= =======================================
Dragonflow is a distributed, modular and extendable SDN controller that Dragonflow is a distributed, modular and extendable SDN controller that
enables to connect cloud network instances (VMs, Containers and Bare Metal enables to connect cloud network instances (VMs, Containers and Bare Metal
@ -21,14 +21,15 @@ networking interface for Dragonflow.
Testing with DevStack Testing with DevStack
===================== ---------------------
The next points describe how to test OpenStack with Dragonflow using DevStack. The next points describe how to test OpenStack with Dragonflow using DevStack.
We will start by describing how to test the baremetal case on a single host, We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs. and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment Single Node Test Environment
---------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Create a test system. 1. Create a test system.
@ -98,7 +99,7 @@ rewritten to your network controller's ip address and sent out on the network:
Inspect default Configuration Inspect default Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++++++++++
In order to check the default configuration, in terms of networks, subnets, In order to check the default configuration, in terms of networks, subnets,
security groups and loadbalancers created upon a successful devstack stacking, security groups and loadbalancers created upon a successful devstack stacking,
@ -108,7 +109,7 @@ you can check the `Inspect default Configuration`_.
Testing Network Connectivity Testing Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++
Once the environment is ready, we can test that network connectivity works Once the environment is ready, we can test that network connectivity works
among pods. To do that check out `Testing Network Connectivity`_. among pods. To do that check out `Testing Network Connectivity`_.
@ -117,7 +118,7 @@ among pods. To do that check out `Testing Network Connectivity`_.
Nested Containers Test Environment (VLAN) Nested Containers Test Environment (VLAN)
----------------------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another deployment option is the nested-vlan where containers are created Another deployment option is the nested-vlan where containers are created
inside OpenStack VMs by using the Trunk ports support. Thus, first we need to inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
@ -129,7 +130,7 @@ the kuryr components.
Undercloud deployment Undercloud deployment
~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++
The steps to deploy the undercloud environment are the same as described above The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment` with the different sample local.conf to for the `Single Node Test Environment` with the different sample local.conf to
@ -165,7 +166,7 @@ steps detailed at `Boot VM with a Trunk Port`_.
Overcloud deployment Overcloud deployment
~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++
Once the VM is up and running, we can start with the overcloud configuration. Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without Dragonflow integration, i.e., the The steps to perform are the same as without Dragonflow integration, i.e., the
@ -182,10 +183,12 @@ same steps as for ML2/OVS:
Testing Nested Network Connectivity Testing Nested Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++++++++++++++++
Similarly to the baremetal testing, we can create a demo deployment at the Similarly to the baremetal testing, we can create a demo deployment at the
overcloud VM, scale it to any number of pods and expose the service to check if overcloud VM, scale it to any number of pods and expose the service to check if
the deployment was successful. To do that check out the deployment was successful. To do that check out
`Testing Nested Network Connectivity`_. `Testing Nested Network Connectivity`_.
.. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html .. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html
View File
@ -20,6 +20,7 @@
(Avoid deeper levels because they do not render well.) (Avoid deeper levels because they do not render well.)
===========================
DevStack based Installation DevStack based Installation
=========================== ===========================
@ -27,6 +28,7 @@ This section describes how you can install and configure kuryr-kubernetes with
DevStack for testing different functionality, such as nested or different DevStack for testing different functionality, such as nested or different
ML2 drivers. ML2 drivers.
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
View File
@ -1,3 +1,4 @@
============================================
How to try out nested-pods locally (MACVLAN) How to try out nested-pods locally (MACVLAN)
============================================ ============================================
View File
@ -1,3 +1,4 @@
=================================================
How to try out nested-pods locally (VLAN + trunk) How to try out nested-pods locally (VLAN + trunk)
================================================= =================================================
View File
@ -16,14 +16,15 @@ deployment. Kuryr acts as the container networking interface for OpenDaylight.
Testing with DevStack Testing with DevStack
===================== ---------------------
The next points describe how to test OpenStack with ODL using DevStack. The next points describe how to test OpenStack with ODL using DevStack.
We will start by describing how to test the baremetal case on a single host, We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs. and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment Single Node Test Environment
---------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Create a test system. 1. Create a test system.
@ -106,7 +107,7 @@ ip address and sent out on the network:
Inspect default Configuration Inspect default Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++++++++++
In order to check the default configuration, in terms of networks, subnets, In order to check the default configuration, in terms of networks, subnets,
security groups and loadbalancers created upon a successful devstack stacking, security groups and loadbalancers created upon a successful devstack stacking,
@ -116,7 +117,7 @@ you can check the `Inspect default Configuration`_.
Testing Network Connectivity Testing Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++
Once the environment is ready, we can test that network connectivity works Once the environment is ready, we can test that network connectivity works
among pods. To do that check out `Testing Network Connectivity`_. among pods. To do that check out `Testing Network Connectivity`_.
@ -125,7 +126,7 @@ among pods. To do that check out `Testing Network Connectivity`_.
Nested Containers Test Environment (VLAN) Nested Containers Test Environment (VLAN)
----------------------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another deployment option is the nested-vlan where containers are created Another deployment option is the nested-vlan where containers are created
inside OpenStack VMs by using the Trunk ports support. Thus, first we need to inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
@ -137,7 +138,7 @@ components.
Undercloud deployment Undercloud deployment
~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++
The steps to deploy the undercloud environment are the same as described above The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment` with the difference of the sample for the `Single Node Test Environment` with the difference of the sample
@ -172,7 +173,7 @@ steps detailed at `Boot VM with a Trunk Port`_.
Overcloud deployment Overcloud deployment
~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++
Once the VM is up and running, we can start with the overcloud configuration. Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without ODL integration, i.e., the The steps to perform are the same as without ODL integration, i.e., the
@ -189,7 +190,8 @@ same steps as for ML2/OVS:
Testing Nested Network Connectivity Testing Nested Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++++++++++++++++
Similarly to the baremetal testing, we can create a demo deployment at the Similarly to the baremetal testing, we can create a demo deployment at the
overcloud VM, scale it to any number of pods and expose the service to check if overcloud VM, scale it to any number of pods and expose the service to check if
the deployment was successful. To do that check out the deployment was successful. To do that check out
View File
@ -13,14 +13,15 @@ nested) containers and VM networking in a OVN-based OpenStack deployment.
Testing with DevStack Testing with DevStack
===================== ---------------------
The next points describe how to test OpenStack with OVN using DevStack. The next points describe how to test OpenStack with OVN using DevStack.
We will start by describing how to test the baremetal case on a single host, We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs. and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment Single Node Test Environment
---------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Create a test system. 1. Create a test system.
@ -105,21 +106,21 @@ ip address and sent out on the network:
Inspect default Configuration Inspect default Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++++++++++
In order to check the default configuration, in terms of networks, subnets, In order to check the default configuration, in terms of networks, subnets,
security groups and loadbalancers created upon a successful devstack stacking, security groups and loadbalancers created upon a successful devstack stacking,
you can check the :doc:`../default_configuration` you can check the :doc:`../default_configuration`
Testing Network Connectivity Testing Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++
Once the environment is ready, we can test that network connectivity works Once the environment is ready, we can test that network connectivity works
among pods. To do that check out :doc:`../testing_connectivity` among pods. To do that check out :doc:`../testing_connectivity`
Nested Containers Test Environment (VLAN) Nested Containers Test Environment (VLAN)
----------------------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another deployment option is the nested-vlan where containers are created Another deployment option is the nested-vlan where containers are created
inside OpenStack VMs by using the Trunk ports support. Thus, first we need to inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
@ -131,7 +132,7 @@ components.
Undercloud deployment Undercloud deployment
~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++
The steps to deploy the undercloud environment are the same as described above The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment` with the difference of the sample for the `Single Node Test Environment` with the difference of the sample
@ -164,7 +165,7 @@ steps detailed at :doc:`../trunk_ports`
Overcloud deployment Overcloud deployment
~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++
Once the VM is up and running, we can start with the overcloud configuration. Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without OVN integration, i.e., the The steps to perform are the same as without OVN integration, i.e., the
@ -178,7 +179,8 @@ same steps as for ML2/OVS:
Testing Nested Network Connectivity Testing Nested Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +++++++++++++++++++++++++++++++++++
Similarly to the baremetal testing, we can create a demo deployment at the Similarly to the baremetal testing, we can create a demo deployment at the
overcloud VM, scale it to any number of pods and expose the service to check if overcloud VM, scale it to any number of pods and expose the service to check if
the deployment was successful. To do that check out the deployment was successful. To do that check out
View File
@ -1,3 +1,4 @@
======================================
How to enable ports pool with devstack How to enable ports pool with devstack
====================================== ======================================
View File
@ -1,3 +1,4 @@
=========================================
Watching Kubernetes api-server over HTTPS Watching Kubernetes api-server over HTTPS
========================================= =========================================
@ -20,4 +21,3 @@ If want to query HTTPS Kubernetes api server with ``--insecure`` mode::
[kubernetes] [kubernetes]
ssl_verify_server_crt = False ssl_verify_server_crt = False
View File
@ -1,3 +1,4 @@
===============
IPv6 networking IPv6 networking
=============== ===============
@ -5,6 +6,7 @@ Kuryr Kubernetes can be used with IPv6 networking. In this guide we'll show how
you can create the Neutron resources and configure Kubernetes and you can create the Neutron resources and configure Kubernetes and
Kuryr-Kubernetes to achieve an IPv6 only Kubernetes cluster. Kuryr-Kubernetes to achieve an IPv6 only Kubernetes cluster.
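As a rough sketch of the Neutron side, an IPv6 pod network and subnet could be created along these lines (names and the ULA range are purely illustrative)::

    $ openstack network create pod-net-v6
    $ openstack subnet create --network pod-net-v6 --ip-version 6 \
          --subnet-range fd10:0:0:1::/64 pod-subnet-v6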
Setting it up Setting it up
------------- -------------
@ -193,6 +195,7 @@ Setting it up
the host Kubernetes API. You should also make sure that the Kubernetes API the host Kubernetes API. You should also make sure that the Kubernetes API
server binds on the IPv6 address of the host. server binds on the IPv6 address of the host.
Troubleshooting Troubleshooting
--------------- ---------------
View File
@ -1,3 +1,4 @@
====================================
Installing kuryr-kubernetes manually Installing kuryr-kubernetes manually
==================================== ====================================
@ -103,6 +104,7 @@ Alternatively you may run it in screen::
$ screen -dm kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d $ screen -dm kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d
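For something more persistent than ``screen``, a minimal systemd unit along these lines could be used (the binary and config paths are assumptions that depend on how kuryr-kubernetes was installed)::

    [Unit]
    Description=Kuryr-Kubernetes Controller
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target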
Configure kuryr-cni Configure kuryr-cni
------------------- -------------------
@ -157,8 +159,9 @@ to work correctly::
deactivate deactivate
sudo pip install 'oslo.privsep>=1.20.0' 'os-vif>=1.5.0' sudo pip install 'oslo.privsep>=1.20.0' 'os-vif>=1.5.0'
Configure Kuryr CNI Daemon Configure Kuryr CNI Daemon
------------------------------------- --------------------------
Kuryr CNI Daemon is a service designed to increase scalability of the Kuryr Kuryr CNI Daemon is a service designed to increase scalability of the Kuryr
operations done on Kubernetes nodes. More information can be found on operations done on Kubernetes nodes. More information can be found on
@ -201,6 +204,7 @@ Alternatively you may run it in screen::
$ screen -dm kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d $ screen -dm kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d
Kuryr CNI Daemon health checks Kuryr CNI Daemon health checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
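Assuming the health server is enabled on its default port (8090 and the endpoint names below are assumptions, check the ``[cni_health_server]`` section of kuryr.conf), the probes can be exercised manually::

    $ curl http://localhost:8090/ready
    $ curl http://localhost:8090/alive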
View File
@ -1,3 +1,4 @@
========================================
Configure Pod with Additional Interfaces Configure Pod with Additional Interfaces
======================================== ========================================
@ -89,6 +90,7 @@ defined in step 1.
You may put a list of networks separated with commas to attach Pods to more networks. You may put a list of networks separated with commas to attach Pods to more networks.
Reference Reference
--------- ---------
View File
@ -1,3 +1,4 @@
=============================================================
Enable network per namespace functionality (handler + driver) Enable network per namespace functionality (handler + driver)
============================================================= =============================================================
@ -92,6 +93,7 @@ to add the namespace handler and state the namespace subnet driver with::
To disable the enforcement, you need to set the following variable: To disable the enforcement, you need to set the following variable:
KURYR_ENFORCE_SG_RULES=False KURYR_ENFORCE_SG_RULES=False
Testing the network per namespace functionality Testing the network per namespace functionality
----------------------------------------------- -----------------------------------------------
View File
@ -1,3 +1,4 @@
===========================================
Enable network policy support functionality Enable network policy support functionality
=========================================== ===========================================
@ -72,6 +73,7 @@ to add the policy, pod_label and namespace handler and drivers with::
To disable the enforcement, you need to set the following variable: To disable the enforcement, you need to set the following variable:
KURYR_ENFORCE_SG_RULES=False KURYR_ENFORCE_SG_RULES=False
Testing the network policy support functionality Testing the network policy support functionality
------------------------------------------------ ------------------------------------------------
View File
@ -1,3 +1,4 @@
===============================
Enable OCP-Router functionality Enable OCP-Router functionality
=============================== ===============================
@ -6,6 +7,7 @@ To enable OCP-Router functionality we should set the following:
- Setting L7 Router. - Setting L7 Router.
- Configure Kuryr to support L7 Router and OCP-Route resources. - Configure Kuryr to support L7 Router and OCP-Route resources.
Setting L7 Router Setting L7 Router
------------------ ------------------
View File
@ -1,3 +1,4 @@
================================
How to enable ports pool support How to enable ports pool support
================================ ================================
@ -138,6 +139,7 @@ the right pod-vif driver set.
Note that if no annotation is set on a node, the default pod_vif_driver is Note that if no annotation is set on a node, the default pod_vif_driver is
used. used.
Populate pools on subnets creation for namespace subnet driver Populate pools on subnets creation for namespace subnet driver
-------------------------------------------------------------- --------------------------------------------------------------
View File
@ -1,3 +1,4 @@
==============================
Kubernetes services networking Kubernetes services networking
============================== ==============================
@ -38,6 +39,7 @@ It is beyond the scope of this document to explain in detail the inner workings
of these two possible Neutron LBaaSv2 backends; thus, only a brief explanation of these two possible Neutron LBaaSv2 backends; thus, only a brief explanation
will be offered on each. will be offered on each.
Legacy Neutron HAProxy agent Legacy Neutron HAProxy agent
---------------------------- ----------------------------
@ -63,6 +65,7 @@ listeners and pools are added. Thus you should take into consideration the
memory requirements that arise from having one HAProxy process per Kubernetes memory requirements that arise from having one HAProxy process per Kubernetes
Service. Service.
Octavia Octavia
------- -------
@ -455,8 +458,10 @@ The services and pods subnets should be created.
In both 'User' and 'Pool' methods, the external IP address could be found In both 'User' and 'Pool' methods, the external IP address could be found
in k8s service status information (under loadbalancer/ingress/ip) in k8s service status information (under loadbalancer/ingress/ip)
Alternative configuration Alternative configuration
~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~
It is actually possible to avoid this routing by performing a deployment change It is actually possible to avoid this routing by performing a deployment change
that was successfully pioneered by the people at EasyStack Inc. which consists that was successfully pioneered by the people at EasyStack Inc. which consists
of doing the following: of doing the following:
@ -563,6 +568,7 @@ of doing the following:
the pod subnet, follow the `Making the Pods be able to reach the Kubernetes API`_ the pod subnet, follow the `Making the Pods be able to reach the Kubernetes API`_
section section
.. _k8s_lb_reachable: .. _k8s_lb_reachable:
Making the Pods be able to reach the Kubernetes API Making the Pods be able to reach the Kubernetes API
@ -685,6 +691,7 @@ Kubernetes service to be accessible to Pods.
| updated_at | 2017-08-10T16:46:55 | | updated_at | 2017-08-10T16:46:55 |
+---------------------------+--------------------------------------+ +---------------------------+--------------------------------------+
.. _services_troubleshooting: .. _services_troubleshooting:
Troubleshooting Troubleshooting

View File
.. _sriov: .. _sriov:
=============================
How to configure SR-IOV ports How to configure SR-IOV ports
============================= =============================
To make neutron ports active, kuryr-k8s makes requests to the neutron API to update To make neutron ports active, kuryr-k8s makes requests to the neutron API to update
ports with binding:profile information. Due to this it is necessary to perform ports with binding:profile information. Due to this it is necessary to perform
these actions with a privileged user with admin rights. these actions with a privileged user with admin rights.
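In practice this means the credentials configured for Kuryr have to belong to an admin user. A sketch of the relevant kuryr.conf section, with standard keystoneauth option names assumed, could look like::

    [neutron]
    auth_type = password
    auth_url = http://127.0.0.1/identity
    username = admin
    password = secret
    project_name = admin
    user_domain_name = Default
    project_domain_name = Default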
Reference Reference
--------- ---------
View File
@ -1,3 +1,4 @@
============================
Testing Network Connectivity Testing Network Connectivity
============================ ============================
View File
@ -1,3 +1,4 @@
===================================
Testing Nested Network Connectivity Testing Nested Network Connectivity
=================================== ===================================
View File
@ -1,3 +1,4 @@
===========================
Testing SRIOV functionality Testing SRIOV functionality
=========================== ===========================
View File
@ -1,3 +1,4 @@
====================
Testing UDP Services Testing UDP Services
==================== ====================
@ -151,5 +152,6 @@ Since the `kuryr-udp-demo`_ application concatenates the pod's name to the
replied message, it is plain to see that both of the service's pods are replied message, it is plain to see that both of the service's pods are
replying to the requests from the client. replying to the requests from the client.
.. _kuryr-udp-demo: https://hub.docker.com/r/yboaron/kuryr-udp-demo/ .. _kuryr-udp-demo: https://hub.docker.com/r/yboaron/kuryr-udp-demo/
.. _udp-client: https://github.com/yboaron/udp-client-script .. _udp-client: https://github.com/yboaron/udp-client-script
View File
@ -1,3 +1,4 @@
=========================
Boot VM with a Trunk Port Boot VM with a Trunk Port
========================= =========================
View File
@ -1,5 +1,6 @@
==========================
Upgrading kuryr-kubernetes Upgrading kuryr-kubernetes
=========================== ==========================
Kuryr-Kubernetes supports the standard OpenStack utility for checking whether an upgrade Kuryr-Kubernetes supports the standard OpenStack utility for checking whether an upgrade
is possible and safe: is possible and safe:
@ -19,6 +20,7 @@ If any issue will be found, the utility will give you explanation and possible
remediations. Also note that *Warning* results aren't blocking an upgrade, but remediations. Also note that *Warning* results aren't blocking an upgrade, but
are worth investigating. are worth investigating.
Stein (0.6.x) to T (0.7.x) upgrade Stein (0.6.x) to T (0.7.x) upgrade
---------------------------------- ----------------------------------
View File
@ -1,6 +1,6 @@
===================================== ====================================
Kuryr-Kubernetes Release Notes Howto Kuryr-Kubernetes Release Notes Howto
===================================== ====================================
Release notes are a new feature for documenting new features in Release notes are a new feature for documenting new features in
OpenStack projects. Background on the process, tooling, and OpenStack projects. Background on the process, tooling, and
View File
@ -1,8 +1,9 @@
========================================================
Welcome to Kuryr-Kubernetes Release Notes documentation! Welcome to Kuryr-Kubernetes Release Notes documentation!
======================================================== ========================================================
Contents Contents
======== --------
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
View File
@ -1,6 +1,6 @@
=================================== ===========================
Queens Series Release Notes Queens Series Release Notes
=================================== ===========================
.. release-notes:: .. release-notes::
:branch: stable/queens :branch: stable/queens
View File
@ -1,6 +1,6 @@
=================================== ==========================
Rocky Series Release Notes Rocky Series Release Notes
=================================== ==========================
.. release-notes:: .. release-notes::
:branch: stable/rocky :branch: stable/rocky
View File
@ -1,6 +1,6 @@
=================================== ==========================
Stein Series Release Notes Stein Series Release Notes
=================================== ==========================
.. release-notes:: .. release-notes::
:branch: stable/stein :branch: stable/stein