Fix directive formatting.

According to reStructuredText guidelines, directives should have the
following formatting:

.. directive_name:: param
   :option:

   body

A directive starts with two leading dots and a space, followed by the
directive name, two colons and, optionally, a parameter. The line below
it holds either options delimited with colons or an empty line, after
which the directive body follows. The body should be aligned with the
directive name, i.e. indented by exactly 3 spaces. Most of the
directives are formatted incorrectly, including:

- note
- image
- todo
- toctree
- warning
- figure

This patch fixes that.
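
For illustration, a hypothetical before/after of a misindented note
directive (a sketch only, not quoting any particular hunk below):

Before:

.. note::
    Note body indented with four spaces, no blank line above it.

After:

.. note::

   Note body aligned with the directive name, i.e. indented with
   exactly three spaces, preceded by an empty line.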

Change-Id: Ic970f72aeb93bcda82f02e57b70370ab79d6855d
Roman Dobosz 2019-11-12 16:40:14 +01:00
parent c3c270c752
commit feff260509
22 changed files with 178 additions and 163 deletions


@@ -3,7 +3,7 @@ Team and repository tags
========================
.. image:: https://governance.openstack.org/tc/badges/kuryr-kubernetes.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. Change things from this point on


@@ -65,9 +65,10 @@ CNI ADD failure is reached, health of CNI components and existence of memory
leak.
.. note::
The CNI Health Manager will be started with the check for memory leak
disabled. In order to enable, set the following option in kuryr.conf to a
limit value of memory in MiBs.
The CNI Health Manager will be started with the check for memory leak
disabled. In order to enable, set the following option in kuryr.conf to a
limit value of memory in MiBs.
[cni_health_server]
max_memory_usage = -1


@@ -34,20 +34,20 @@ testing infrastructure.
Design documents
----------------
.. toctree::
:maxdepth: 3
:maxdepth: 3
kuryr_kubernetes_design
service_support
port_manager
vif_handler_drivers_design
health_manager
kuryr_kubernetes_ingress_design
kuryr_kubernetes_ocp_route_design
high_availability
kuryr_kubernetes_versions
port_crd_usage
network_policy
updating_pod_resources_api
kuryr_kubernetes_design
service_support
port_manager
vif_handler_drivers_design
health_manager
kuryr_kubernetes_ingress_design
kuryr_kubernetes_ocp_route_design
high_availability
kuryr_kubernetes_versions
port_crd_usage
network_policy
updating_pod_resources_api
Indices and tables


@@ -47,9 +47,9 @@ Neutron ports ensuring requested level of isolation.
Please see below the component view of the integrated system:
.. image:: ../../images/kuryr_k8s_components.png
:alt: integration components
:align: center
:width: 100%
:alt: integration components
:align: center
:width: 100%
Design Principles
@@ -138,9 +138,9 @@ sequentially in the order arrival. Events of different Kubernetes objects are
handled concurrently.
.. image:: ../..//images/controller_pipeline.png
:alt: controller pipeline
:align: center
:width: 100%
:alt: controller pipeline
:align: center
:width: 100%
ResourceEventHandler
@@ -271,9 +271,9 @@ expected to be JSON).
For reference see updated pod creation flow diagram:
.. image:: ../../images/pod_creation_flow_daemon.png
:alt: Controller-CNI-daemon interaction
:align: center
:width: 100%
:alt: Controller-CNI-daemon interaction
:align: center
:width: 100%
/addNetwork


@@ -61,9 +61,9 @@ The following scheme describes the SW modules that provides Ingress controller
capability in Kuryr Kubernetes context:
.. image:: ../../images/kuryr_k8s_ingress_sw_components.svg
:alt: Ingress ctrlr SW architecture
:align: center
:width: 100%
:alt: Ingress ctrlr SW architecture
:align: center
:width: 100%
The Ingress controller functionality will be composed of the following software
modules:
@@ -87,9 +87,9 @@ The next diagram illustrates creation of Ingress resource in kuryr-kubernetes
ingress controller SW :
.. image:: ../../images/kuryr_k8s_ingress_ctrl_flow_diagram.svg
:alt: Ingress creation flow diagram
:align: center
:width: 100%
:alt: Ingress creation flow diagram
:align: center
:width: 100%
The L7 Router
@@ -119,9 +119,9 @@ FIP.
The next diagram illustrates data flow from external user to L7 router:
.. image:: ../../images/external_traffic_to_l7_router.svg
:alt: external traffic to L7 loadbalancer
:align: center
:width: 100%
:alt: external traffic to L7 loadbalancer
:align: center
:width: 100%
Ingress Handler
@@ -165,9 +165,9 @@ A diagram describing both L7 router and user loadbalancer Neutron LBaaS
entities is given below:
.. image:: ../../images/l7_routing_and_user_lb_neutron_entities.svg
:alt: L7 routing and user LB Neutron LBaaS entities
:align: center
:width: 100%
:alt: L7 routing and user LB Neutron LBaaS entities
:align: center
:width: 100%
- The blue components are created/released by the L7 router.
- The green components are created/released by Ingress handler.


@@ -66,9 +66,9 @@ Route resources.
The following scheme describes OCP-Route controller SW architecture:
.. image:: ../../images/kuryr_k8s_ocp_route_ctrl_sw.svg
:alt: Ingress/OCP-Route controllers SW architecture
:align: center
:width: 100%
:alt: Ingress/OCP-Route controllers SW architecture
:align: center
:width: 100%
Similar to Kubernetes Ingress, each OCP-Route object being translated to a L7
policy in L7 router, and the rules on OCP-Route become L7 (URL) mapping rules


@@ -69,11 +69,13 @@ ones, for instance service/pod handlers, have been modified to account for the
side effects/actions of when a Network Policy is being enforced.
.. note::
Kuryr supports a network policy that contains:
* Ingress and Egress rules
* namespace selector and pod selector, defined with match labels or match
expressions, mix of namespace and pod selector, ip block
* named port
Kuryr supports a network policy that contains:
* Ingress and Egress rules
* namespace selector and pod selector, defined with match labels or match
expressions, mix of namespace and pod selector, ip block
* named port
New handlers and drivers
@@ -363,18 +365,19 @@ egress rule allowing traffic to everywhere.
securityGroupName: sg-allow-test-via-ns-selector
.. note::
The Service security groups need to be rechecked when a network policy
that affects ingress traffic is created, and also everytime
a pod or namespace is created.
The Service security groups need to be rechecked when a network policy
that affects ingress traffic is created, and also everytime
a pod or namespace is created.
Create network policy flow
++++++++++++++++++++++++++
.. image:: ../../images/create_network_policy_flow.svg
:alt: Network Policy creation flow
:align: center
:width: 100%
:alt: Network Policy creation flow
:align: center
:width: 100%
Create pod flow
@@ -384,9 +387,9 @@ The following diagram only covers the implementation part that affects
network policy.
.. image:: ../../images/update_network_policy_on_pod_creation.svg
:alt: Pod creation flow
:align: center
:width: 100%
:alt: Pod creation flow
:align: center
:width: 100%
Network policy rule definition


@@ -37,7 +37,6 @@ Kubernetes source tree.
#. Kubernetes released new version of PodResources API and the old one is no
longer supported. In this case, without update, we'll not be able to use
PodResources service.
#. ``protobuf`` version in ``lower-constraints.txt`` changed to lower
version (this is highly unlikely). In this case ``protobuf`` could fail
to use our python bindings.


@@ -54,9 +54,9 @@ additional interfaces, the Multi-VIF driver can just return.
Diagram describing VifHandler - Drivers flow is giver below:
.. image:: ../../images/vif_handler_drivers_design.png
:alt: vif handler drivers design
:align: center
:width: 100%
:alt: vif handler drivers design
:align: center
:width: 100%
Config Options


@@ -46,9 +46,10 @@ that can be used to Deploy Kuryr on Kubernetes. The script is placed in
kuryr-controller container. Defaults to no certificate.
.. note::
Providing no or incorrect ``ca_certificate_path`` will still create the file
with ``Secret`` definition with empty CA certificate file. This file will
still be mounted in kuryr-controller ``Deployment`` definition.
Providing no or incorrect ``ca_certificate_path`` will still create the file
with ``Secret`` definition with empty CA certificate file. This file will
still be mounted in kuryr-controller ``Deployment`` definition.
If no path to config files is provided, script automatically generates minimal
configuration. However some of the options should be filled by the user. You
@@ -77,20 +78,22 @@ script. Below is the list of available variables:
* ``$KURYR_K8S_BINDING_IFACE`` - ``[binding]link_iface`` (default: eth0)
.. note::
kuryr-daemon will be started in the CNI container. It is using ``os-vif`` and
``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo`` to
raise privileges, even though container is priviledged by itself or ``sudo``
is missing from container OS (e.g. default CentOS 7). To prevent that make
sure to set following options in kuryr.conf used for kuryr-daemon::
kuryr-daemon will be started in the CNI container. It is using ``os-vif``
and ``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo``
to raise privileges, even though container is priviledged by itself or
``sudo`` is missing from container OS (e.g. default CentOS 7). To prevent
that make sure to set following options in kuryr.conf used for
kuryr-daemon::
[vif_plug_ovs_privileged]
helper_command=privsep-helper
[vif_plug_linux_bridge_privileged]
helper_command=privsep-helper
Those options will prevent oslo.privsep from doing that. If rely on
aformentioned script to generate config files, those options will be added
automatically.
Those options will prevent oslo.privsep from doing that. If rely on
aformentioned script to generate config files, those options will be added
automatically.
In case of using ports pool functionality, we may want to make the
kuryr-controller not ready until the pools are populated with the existing
@@ -114,13 +117,17 @@ This should generate 5 files in your ``<output_dir>``:
* cni_ds.yml
.. note::
kuryr-cni daemonset mounts /var/run, due to necessity of accessing to several sub directories
like openvswitch and auxiliary directory for vhostuser configuration and socket files. Also when
neutron-openvswitch-agent works with datapath_type = netdev configuration option, kuryr-kubernetes
has to move vhostuser socket to auxiliary directory, that auxiliary directory should be on the same
mount point, otherwise connection of this socket will be refused.
In case when Open vSwitch keeps vhostuser socket files not in /var/run/openvswitch, openvswitch
mount point in cni_ds.yaml and [vhostuser] section in config_map.yml should be changed properly.
kuryr-cni daemonset mounts /var/run, due to necessity of accessing to
several sub directories like openvswitch and auxiliary directory for
vhostuser configuration and socket files. Also when
neutron-openvswitch-agent works with datapath_type = netdev configuration
option, kuryr-kubernetes has to move vhostuser socket to auxiliary
directory, that auxiliary directory should be on the same mount point,
otherwise connection of this socket will be refused. In case when Open
vSwitch keeps vhostuser socket files not in /var/run/openvswitch,
openvswitch mount point in cni_ds.yaml and [vhostuser] section in
config_map.yml should be changed properly.
Deploying Kuryr resources on Kubernetes


@@ -36,10 +36,10 @@ directory: ::
.. note::
``local.conf.sample`` file is configuring Neutron and Kuryr with standard
Open vSwitch ML2 networking. In the ``devstack`` directory there are other
sample configuration files that enable OpenDaylight or Drangonflow networking.
See other pages in this documentation section to learn more.
``local.conf.sample`` file is configuring Neutron and Kuryr with standard
Open vSwitch ML2 networking. In the ``devstack`` directory there are other
sample configuration files that enable OpenDaylight or Drangonflow
networking. See other pages in this documentation section to learn more.
Now edit ``devstack/local.conf`` to set up some initial options:


@@ -30,13 +30,13 @@ ML2 drivers.
.. toctree::
:maxdepth: 1
:maxdepth: 1
basic
nested-vlan
nested-macvlan
odl_support
ovn_support
dragonflow_support
containerized
ports-pool
basic
nested-vlan
nested-macvlan
odl_support
ovn_support
dragonflow_support
containerized
ports-pool


@@ -10,7 +10,8 @@ nested MACVLAN driver rather than VLAN and trunk ports.
2. Launch a Nova VM with MACVLAN support
.. todo::
Add a list of neutron commands, required to launch a such a VM
Add a list of neutron commands, required to launch a such a VM
3. Log into the VM and set up Kubernetes along with Kuryr using devstack:
- Since undercloud Neutron will be used by pods, Neutron services should be


@@ -26,24 +26,24 @@ Installation
This section describes how you can install and configure kuryr-kubernetes
.. toctree::
:maxdepth: 2
:maxdepth: 2
manual
https_kubernetes
ports-pool
services
ipv6
upgrades
devstack/index
default_configuration
trunk_ports
network_namespace
network_policy
testing_connectivity
testing_nested_connectivity
containerized
ocp_route
multi_vif_with_npwg_spec
sriov
testing_udp_services
testing_sriov_functional
manual
https_kubernetes
ports-pool
services
ipv6
upgrades
devstack/index
default_configuration
trunk_ports
network_namespace
network_policy
testing_connectivity
testing_nested_connectivity
containerized
ocp_route
multi_vif_with_npwg_spec
sriov
testing_udp_services
testing_sriov_functional


@@ -172,30 +172,31 @@ Kuryr CNI Daemon, should be installed on every Kubernetes node, so following
steps need to be repeated.
.. note::
You can tweak configuration of some timeouts to match your environment. It's
crucial for scalability of the whole deployment. In general the timeout to
serve CNI request from kubelet to Kuryr is 180 seconds. After that time
kubelet will retry the request. Additionally there are two configuration
options::
You can tweak configuration of some timeouts to match your environment. It's
crucial for scalability of the whole deployment. In general the timeout to
serve CNI request from kubelet to Kuryr is 180 seconds. After that time
kubelet will retry the request. Additionally there are two configuration
options::
[cni_daemon]
vif_annotation_timeout=60
pyroute2_timeout=10
``vif_annotation_timeout`` is time the Kuryr CNI Daemon will wait for Kuryr
Controller to create a port in Neutron and add information about it to Pod's
metadata. If either Neutron or Kuryr Controller doesn't keep up with high
number of requests, it's advised to increase this timeout. Please note that
increasing it over 180 seconds will not have any effect as the request will
time out anyway and will be retried (which is safe).
``vif_annotation_timeout`` is time the Kuryr CNI Daemon will wait for Kuryr
Controller to create a port in Neutron and add information about it to Pod's
metadata. If either Neutron or Kuryr Controller doesn't keep up with high
number of requests, it's advised to increase this timeout. Please note that
increasing it over 180 seconds will not have any effect as the request will
time out anyway and will be retried (which is safe).
``pyroute2_timeout`` is internal timeout of pyroute2 library, that is
responsible for doing modifications to Linux Kernel networking stack (e.g.
moving interfaces to Pod's namespaces, adding routes and ports or assigning
addresses to interfaces). When serving a lot of ADD/DEL CNI requests on a
regular basis it's advised to increase that timeout. Please note that the
value denotes *maximum* time to wait for kernel to complete the operations.
If operation succeeds earlier, request isn't delayed.
``pyroute2_timeout`` is internal timeout of pyroute2 library, that is
responsible for doing modifications to Linux Kernel networking stack (e.g.
moving interfaces to Pod's namespaces, adding routes and ports or assigning
addresses to interfaces). When serving a lot of ADD/DEL CNI requests on a
regular basis it's advised to increase that timeout. Please note that the
value denotes *maximum* time to wait for kernel to complete the operations.
If operation succeeds earlier, request isn't delayed.
Run kuryr-daemon::


@@ -88,10 +88,11 @@ to add the namespace handler and state the namespace subnet driver with::
KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace
.. note::
If the loadbalancer maintains the source IP (such as ovn-octavia driver),
there is no need to enforce sg rules at the load balancer level.
To disable the enforcement, you need to set the following variable:
KURYR_ENFORCE_SG_RULES=False
If the loadbalancer maintains the source IP (such as ovn-octavia driver),
there is no need to enforce sg rules at the load balancer level.
To disable the enforcement, you need to set the following variable:
KURYR_ENFORCE_SG_RULES=False
Testing the network per namespace functionality


@@ -26,10 +26,11 @@ After that, enable also the security group drivers for policies::
pod_security_groups_driver = policy
.. warning::
The correct behavior for pods that have no network policy applied is to allow
all ingress and egress traffic. If you want that to be enforced, please make
sure to create an SG allowing all traffic and add it to
``[neutron_defaults]pod_security_groups`` setting in ``kuryr.conf``::
The correct behavior for pods that have no network policy applied is to
allow all ingress and egress traffic. If you want that to be enforced,
please make sure to create an SG allowing all traffic and add it to
``[neutron_defaults]pod_security_groups`` setting in ``kuryr.conf``::
[neutron_defaults]
pod_security_groups = ALLOW_ALL_SG_ID
@@ -68,10 +69,11 @@ to add the policy, pod_label and namespace handler and drivers with::
KURYR_SUBNET_DRIVER=namespace
.. note::
If the loadbalancer maintains the source IP (such as ovn-octavia driver),
there is no need to enforce sg rules at the load balancer level.
To disable the enforcement, you need to set the following variable:
KURYR_ENFORCE_SG_RULES=False
If the loadbalancer maintains the source IP (such as ovn-octavia driver),
there is no need to enforce sg rules at the load balancer level. To disable
the enforcement, you need to set the following variable:
KURYR_ENFORCE_SG_RULES=False
Testing the network policy support functionality


@@ -126,15 +126,15 @@ the right pod-vif driver set.
.. note::
Previously, `pools_vif_drivers` configuration option provided similar
functionality, but is now deprecated and not recommended.
It stored a mapping from pool_driver => pod_vif_driver instead, disallowing
the use of a single pool driver as keys for multiple pod_vif_drivers.
Previously, `pools_vif_drivers` configuration option provided similar
functionality, but is now deprecated and not recommended. It stored a
mapping from pool_driver => pod_vif_driver instead, disallowing the use of a
single pool driver as keys for multiple pod_vif_drivers.
.. code-block:: ini
.. code-block:: ini
[vif_pool]
pools_vif_drivers=nested:nested-vlan,neutron:neutron-vif
[vif_pool]
pools_vif_drivers=nested:nested-vlan,neutron:neutron-vif
Note that if no annotation is set on a node, the default pod_vif_driver is
used.


@@ -18,12 +18,12 @@ be implemented in the following way:
.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0
.. figure:: ../../images/lbaas_translation.svg
:width: 100%
:alt: Graphical depiction of the translation explained above
:width: 100%
:alt: Graphical depiction of the translation explained above
In this diagram you can see how the Kubernetes entities in the top left
corner are implemented in plain Kubernetes networking (top-right) and in
Kuryr's default configuration (bottom)
In this diagram you can see how the Kubernetes entities in the top left
corner are implemented in plain Kubernetes networking (top-right) and in
Kuryr's default configuration (bottom)
If you are paying attention and are familiar with the `LBaaS API`_ you probably
noticed that we have separate pools for each exposed port in a service. This is


@@ -31,12 +31,12 @@ upgrade check`` utility **before upgrading Kuryr-Kubernetes services to T**.
.. note::
In case of running Kuryr-Kubernetes containerized you can use ``kubectl
exec`` to run kuryr-k8s-status
In case of running Kuryr-Kubernetes containerized you can use ``kubectl
exec`` to run kuryr-k8s-status
.. code-block:: bash
.. code-block:: console
$ kubectl -n kube-system exec -it <controller-pod-name> kuryr-k8s-status upgrade check
$ kubectl -n kube-system exec -it <controller-pod-name> kuryr-k8s-status upgrade check
.. code-block:: bash


@@ -70,9 +70,9 @@ such as attach volumes to hosts, etc.. Both volume provisioner and FlexVolume
driver will consume OpenStack storage services via Fuxi server.
.. image:: ../../../images/fuxi_k8s_components.png
:alt: integration components
:align: center
:width: 100%
:alt: integration components
:align: center
:width: 100%
Volume Provisioner


@@ -428,9 +428,9 @@ Execution flow diagram
See below the network policy attachment to the pod after pod creation:
.. image:: ../../../images/net-policy.svg
:alt: Ingress creation flow diagram
:align: left
:width: 100%
:alt: Ingress creation flow diagram
:align: left
:width: 100%
Possible optimization: