Fix directives formatting.

According to reStructuredText guidelines, directives should have the
following formatting:

.. directive_name:: param
   :option:

   body

A directive starts with two leading dots and a space, then the
directive name followed by two colons, and finally an optional
parameter. The line below it holds either options delimited with
colons, or is left empty. After that the body of the directive follows.
The body should start in the same column as the directive name, i.e.
indented by exactly 3 spaces. Most of the directives are formatted
incorrectly, including:

- note
- image
- todo
- toctree
- warning
- figure

This patch fixes that.
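For example, a note whose body is misaligned with the directive name (a
made-up snippet for illustration; the offending indentation varies
between files):

.. note::
  The body here is indented with two spaces, so it does not line up
  with the directive name.

after this patch becomes:

.. note::

   The body is indented with exactly three spaces, lining up with the
   directive name.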

Change-Id: Ic970f72aeb93bcda82f02e57b70370ab79d6855d
Roman Dobosz 2019-11-12 16:40:14 +01:00
parent c3c270c752
commit feff260509
22 changed files with 178 additions and 163 deletions

View File

@@ -3,7 +3,7 @@ Team and repository tags
========================

.. image:: https://governance.openstack.org/tc/badges/kuryr-kubernetes.svg
   :target: https://governance.openstack.org/tc/reference/tags/index.html

.. Change things from this point on

View File

@@ -65,9 +65,10 @@ CNI ADD failure is reached, health of CNI components and existence of memory
leak.

.. note::

   The CNI Health Manager will be started with the check for memory leak
   disabled. In order to enable, set the following option in kuryr.conf to a
   limit value of memory in MiBs.

   [cni_health_server]
   max_memory_usage = -1

View File

@@ -34,20 +34,20 @@ testing infrastructure.

Design documents
----------------

.. toctree::
   :maxdepth: 3

   kuryr_kubernetes_design
   service_support
   port_manager
   vif_handler_drivers_design
   health_manager
   kuryr_kubernetes_ingress_design
   kuryr_kubernetes_ocp_route_design
   high_availability
   kuryr_kubernetes_versions
   port_crd_usage
   network_policy
   updating_pod_resources_api

Indices and tables

View File

@@ -47,9 +47,9 @@ Neutron ports ensuring requested level of isolation.

Please see below the component view of the integrated system:

.. image:: ../../images/kuryr_k8s_components.png
   :alt: integration components
   :align: center
   :width: 100%

Design Principles

@@ -138,9 +138,9 @@ sequentially in the order arrival. Events of different Kubernetes objects are
handled concurrently.

.. image:: ../..//images/controller_pipeline.png
   :alt: controller pipeline
   :align: center
   :width: 100%

ResourceEventHandler

@@ -271,9 +271,9 @@ expected to be JSON).

For reference see updated pod creation flow diagram:

.. image:: ../../images/pod_creation_flow_daemon.png
   :alt: Controller-CNI-daemon interaction
   :align: center
   :width: 100%

/addNetwork
/addNetwork /addNetwork

View File

@@ -61,9 +61,9 @@ The following scheme describes the SW modules that provides Ingress controller
capability in Kuryr Kubernetes context:

.. image:: ../../images/kuryr_k8s_ingress_sw_components.svg
   :alt: Ingress ctrlr SW architecture
   :align: center
   :width: 100%

The Ingress controller functionality will be composed of the following software
modules:

@@ -87,9 +87,9 @@ The next diagram illustrates creation of Ingress resource in kuryr-kubernetes
ingress controller SW :

.. image:: ../../images/kuryr_k8s_ingress_ctrl_flow_diagram.svg
   :alt: Ingress creation flow diagram
   :align: center
   :width: 100%

The L7 Router

@@ -119,9 +119,9 @@ FIP.

The next diagram illustrates data flow from external user to L7 router:

.. image:: ../../images/external_traffic_to_l7_router.svg
   :alt: external traffic to L7 loadbalancer
   :align: center
   :width: 100%

Ingress Handler

@@ -165,9 +165,9 @@ A diagram describing both L7 router and user loadbalancer Neutron LBaaS
entities is given below:

.. image:: ../../images/l7_routing_and_user_lb_neutron_entities.svg
   :alt: L7 routing and user LB Neutron LBaaS entities
   :align: center
   :width: 100%

- The blue components are created/released by the L7 router.
- The green components are created/released by Ingress handler.

View File

@@ -66,9 +66,9 @@ Route resources.

The following scheme describes OCP-Route controller SW architecture:

.. image:: ../../images/kuryr_k8s_ocp_route_ctrl_sw.svg
   :alt: Ingress/OCP-Route controllers SW architecture
   :align: center
   :width: 100%

Similar to Kubernetes Ingress, each OCP-Route object being translated to a L7
policy in L7 router, and the rules on OCP-Route become L7 (URL) mapping rules

View File

@@ -69,11 +69,13 @@ ones, for instance service/pod handlers, have been modified to account for the
side effects/actions of when a Network Policy is being enforced.

.. note::

   Kuryr supports a network policy that contains:

   * Ingress and Egress rules
   * namespace selector and pod selector, defined with match labels or match
     expressions, mix of namespace and pod selector, ip block
   * named port

New handlers and drivers

@@ -363,18 +365,19 @@ egress rule allowing traffic to everywhere.

    securityGroupName: sg-allow-test-via-ns-selector

.. note::

   The Service security groups need to be rechecked when a network policy
   that affects ingress traffic is created, and also everytime
   a pod or namespace is created.

Create network policy flow
++++++++++++++++++++++++++

.. image:: ../../images/create_network_policy_flow.svg
   :alt: Network Policy creation flow
   :align: center
   :width: 100%

Create pod flow

@@ -384,9 +387,9 @@ The following diagram only covers the implementation part that affects
network policy.

.. image:: ../../images/update_network_policy_on_pod_creation.svg
   :alt: Pod creation flow
   :align: center
   :width: 100%

Network policy rule definition
Network policy rule definition Network policy rule definition

View File

@@ -37,7 +37,6 @@ Kubernetes source tree.

#. Kubernetes released new version of PodResources API and the old one is no
   longer supported. In this case, without update, we'll not be able to use
   PodResources service.
#. ``protobuf`` version in ``lower-constraints.txt`` changed to lower
   version (this is highly unlikely). In this case ``protobuf`` could fail
   to use our python bindings.

View File

@@ -54,9 +54,9 @@ additional interfaces, the Multi-VIF driver can just return.

Diagram describing VifHandler - Drivers flow is giver below:

.. image:: ../../images/vif_handler_drivers_design.png
   :alt: vif handler drivers design
   :align: center
   :width: 100%

Config Options
Config Options Config Options

View File

@@ -46,9 +46,10 @@ that can be used to Deploy Kuryr on Kubernetes. The script is placed in
kuryr-controller container. Defaults to no certificate.

.. note::

   Providing no or incorrect ``ca_certificate_path`` will still create the file
   with ``Secret`` definition with empty CA certificate file. This file will
   still be mounted in kuryr-controller ``Deployment`` definition.

If no path to config files is provided, script automatically generates minimal
configuration. However some of the options should be filled by the user. You

@@ -77,20 +78,22 @@ script. Below is the list of available variables:

* ``$KURYR_K8S_BINDING_IFACE`` - ``[binding]link_iface`` (default: eth0)

.. note::

   kuryr-daemon will be started in the CNI container. It is using ``os-vif``
   and ``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo``
   to raise privileges, even though container is priviledged by itself or
   ``sudo`` is missing from container OS (e.g. default CentOS 7). To prevent
   that make sure to set following options in kuryr.conf used for
   kuryr-daemon::

      [vif_plug_ovs_privileged]
      helper_command=privsep-helper
      [vif_plug_linux_bridge_privileged]
      helper_command=privsep-helper

   Those options will prevent oslo.privsep from doing that. If rely on
   aformentioned script to generate config files, those options will be added
   automatically.

In case of using ports pool functionality, we may want to make the
kuryr-controller not ready until the pools are populated with the existing

@@ -114,13 +117,17 @@ This should generate 5 files in your ``<output_dir>``:

* cni_ds.yml

.. note::

   kuryr-cni daemonset mounts /var/run, due to necessity of accessing to
   several sub directories like openvswitch and auxiliary directory for
   vhostuser configuration and socket files. Also when
   neutron-openvswitch-agent works with datapath_type = netdev configuration
   option, kuryr-kubernetes has to move vhostuser socket to auxiliary
   directory, that auxiliary directory should be on the same mount point,
   otherwise connection of this socket will be refused. In case when Open
   vSwitch keeps vhostuser socket files not in /var/run/openvswitch,
   openvswitch mount point in cni_ds.yaml and [vhostuser] section in
   config_map.yml should be changed properly.

Deploying Kuryr resources on Kubernetes
Deploying Kuryr resources on Kubernetes Deploying Kuryr resources on Kubernetes

View File

@@ -36,10 +36,10 @@ directory: ::

.. note::

   ``local.conf.sample`` file is configuring Neutron and Kuryr with standard
   Open vSwitch ML2 networking. In the ``devstack`` directory there are other
   sample configuration files that enable OpenDaylight or Drangonflow
   networking. See other pages in this documentation section to learn more.

Now edit ``devstack/local.conf`` to set up some initial options:

View File

@@ -30,13 +30,13 @@ ML2 drivers.

.. toctree::
   :maxdepth: 1

   basic
   nested-vlan
   nested-macvlan
   odl_support
   ovn_support
   dragonflow_support
   containerized
   ports-pool

View File

@@ -10,7 +10,8 @@ nested MACVLAN driver rather than VLAN and trunk ports.

2. Launch a Nova VM with MACVLAN support

.. todo::

   Add a list of neutron commands, required to launch a such a VM

3. Log into the VM and set up Kubernetes along with Kuryr using devstack:

- Since undercloud Neutron will be used by pods, Neutron services should be

View File

@@ -26,24 +26,24 @@ Installation

This section describes how you can install and configure kuryr-kubernetes

.. toctree::
   :maxdepth: 2

   manual
   https_kubernetes
   ports-pool
   services
   ipv6
   upgrades
   devstack/index
   default_configuration
   trunk_ports
   network_namespace
   network_policy
   testing_connectivity
   testing_nested_connectivity
   containerized
   ocp_route
   multi_vif_with_npwg_spec
   sriov
   testing_udp_services
   testing_sriov_functional

View File

@@ -172,30 +172,31 @@ Kuryr CNI Daemon, should be installed on every Kubernetes node, so following
steps need to be repeated.

.. note::

   You can tweak configuration of some timeouts to match your environment. It's
   crucial for scalability of the whole deployment. In general the timeout to
   serve CNI request from kubelet to Kuryr is 180 seconds. After that time
   kubelet will retry the request. Additionally there are two configuration
   options::

      [cni_daemon]
      vif_annotation_timeout=60
      pyroute2_timeout=10

   ``vif_annotation_timeout`` is time the Kuryr CNI Daemon will wait for Kuryr
   Controller to create a port in Neutron and add information about it to Pod's
   metadata. If either Neutron or Kuryr Controller doesn't keep up with high
   number of requests, it's advised to increase this timeout. Please note that
   increasing it over 180 seconds will not have any effect as the request will
   time out anyway and will be retried (which is safe).

   ``pyroute2_timeout`` is internal timeout of pyroute2 library, that is
   responsible for doing modifications to Linux Kernel networking stack (e.g.
   moving interfaces to Pod's namespaces, adding routes and ports or assigning
   addresses to interfaces). When serving a lot of ADD/DEL CNI requests on a
   regular basis it's advised to increase that timeout. Please note that the
   value denotes *maximum* time to wait for kernel to complete the operations.
   If operation succeeds earlier, request isn't delayed.

Run kuryr-daemon::

View File

@@ -88,10 +88,11 @@ to add the namespace handler and state the namespace subnet driver with::

    KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace

.. note::

   If the loadbalancer maintains the source IP (such as ovn-octavia driver),
   there is no need to enforce sg rules at the load balancer level.
   To disable the enforcement, you need to set the following variable:
   KURYR_ENFORCE_SG_RULES=False

Testing the network per namespace functionality

View File

@@ -26,10 +26,11 @@ After that, enable also the security group drivers for policies::

    pod_security_groups_driver = policy

.. warning::

   The correct behavior for pods that have no network policy applied is to
   allow all ingress and egress traffic. If you want that to be enforced,
   please make sure to create an SG allowing all traffic and add it to
   ``[neutron_defaults]pod_security_groups`` setting in ``kuryr.conf``::

      [neutron_defaults]
      pod_security_groups = ALLOW_ALL_SG_ID

@@ -68,10 +69,11 @@ to add the policy, pod_label and namespace handler and drivers with::

    KURYR_SUBNET_DRIVER=namespace

.. note::

   If the loadbalancer maintains the source IP (such as ovn-octavia driver),
   there is no need to enforce sg rules at the load balancer level. To disable
   the enforcement, you need to set the following variable:
   KURYR_ENFORCE_SG_RULES=False

Testing the network policy support functionality

View File

@@ -126,15 +126,15 @@ the right pod-vif driver set.

.. note::

   Previously, `pools_vif_drivers` configuration option provided similar
   functionality, but is now deprecated and not recommended. It stored a
   mapping from pool_driver => pod_vif_driver instead, disallowing the use of a
   single pool driver as keys for multiple pod_vif_drivers.

   .. code-block:: ini

      [vif_pool]
      pools_vif_drivers=nested:nested-vlan,neutron:neutron-vif

Note that if no annotation is set on a node, the default pod_vif_driver is
used.

View File

@@ -18,12 +18,12 @@ be implemented in the following way:

.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0

.. figure:: ../../images/lbaas_translation.svg
   :width: 100%
   :alt: Graphical depiction of the translation explained above

   In this diagram you can see how the Kubernetes entities in the top left
   corner are implemented in plain Kubernetes networking (top-right) and in
   Kuryr's default configuration (bottom)

If you are paying attention and are familiar with the `LBaaS API`_ you probably
noticed that we have separate pools for each exposed port in a service. This is

View File

@@ -31,12 +31,12 @@ upgrade check`` utility **before upgrading Kuryr-Kubernetes services to T**.

.. note::

   In case of running Kuryr-Kubernetes containerized you can use ``kubectl
   exec`` to run kuryr-k8s-status

.. code-block:: console

   $ kubectl -n kube-system exec -it <controller-pod-name> kuryr-k8s-status upgrade check

.. code-block:: bash

View File

@@ -70,9 +70,9 @@ such as attach volumes to hosts, etc.. Both volume provisioner and FlexVolume
driver will consume OpenStack storage services via Fuxi server.

.. image:: ../../../images/fuxi_k8s_components.png
   :alt: integration components
   :align: center
   :width: 100%

Volume Provisioner
Volume Provisioner Volume Provisioner

View File

@@ -428,9 +428,9 @@ Execution flow diagram

See below the network policy attachment to the pod after pod creation:

.. image:: ../../../images/net-policy.svg
   :alt: Ingress creation flow diagram
   :align: left
   :width: 100%

Possible optimization: