Fix directive formatting.

According to reStructuredText guidelines, directives should have the
following formatting:

.. directive_name:: param
   :option:

   body

A directive starts with two leading dots and a space, followed by the
directive name, two colons and, optionally, a parameter. The line below
contains either options delimited with colons or is left empty, and the
directive body follows after that. The body should start in the same
column as the directive name, i.e. indented by exactly 3 spaces. Most of
the directives are formatted incorrectly, including:

- note
- image
- todo
- toctree
- warning
- figure

This patch fixes that.
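
For illustration, a correctly formatted directive (a made-up sketch, not
taken from any of the patched files) would look like this:

.. figure:: images/example.png
   :alt: Example figure
   :width: 600

   The caption (directive body) is indented by exactly 3 spaces, so it
   starts in the same column as the directive name.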

Change-Id: Ic970f72aeb93bcda82f02e57b70370ab79d6855d
Roman Dobosz 2019-11-12 16:40:14 +01:00
parent c3c270c752
commit feff260509
22 changed files with 178 additions and 163 deletions

@@ -65,6 +65,7 @@ CNI ADD failure is reached, health of CNI components and existence of memory
leak.
.. note::
The CNI Health Manager will be started with the check for memory leak
disabled. In order to enable, set the following option in kuryr.conf to a
limit value of memory in MiBs.

@@ -69,7 +69,9 @@ ones, for instance service/pod handlers, have been modified to account for the
side effects/actions of when a Network Policy is being enforced.
.. note::
Kuryr supports a network policy that contains:
* Ingress and Egress rules
* namespace selector and pod selector, defined with match labels or match
expressions, mix of namespace and pod selector, ip block
@@ -363,6 +365,7 @@ egress rule allowing traffic to everywhere.
securityGroupName: sg-allow-test-via-ns-selector
.. note::
The Service security groups need to be rechecked when a network policy
that affects ingress traffic is created, and also everytime
a pod or namespace is created.

@@ -37,7 +37,6 @@ Kubernetes source tree.
#. Kubernetes released new version of PodResources API and the old one is no
longer supported. In this case, without update, we'll not be able to use
PodResources service.
#. ``protobuf`` version in ``lower-constraints.txt`` changed to lower
version (this is highly unlikely). In this case ``protobuf`` could fail
to use our python bindings.

@@ -46,6 +46,7 @@ that can be used to Deploy Kuryr on Kubernetes. The script is placed in
kuryr-controller container. Defaults to no certificate.
.. note::
Providing no or incorrect ``ca_certificate_path`` will still create the file
with ``Secret`` definition with empty CA certificate file. This file will
still be mounted in kuryr-controller ``Deployment`` definition.
@@ -77,11 +78,13 @@ script. Below is the list of available variables:
* ``$KURYR_K8S_BINDING_IFACE`` - ``[binding]link_iface`` (default: eth0)
.. note::
kuryr-daemon will be started in the CNI container. It is using ``os-vif`` and
``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo`` to
raise privileges, even though container is priviledged by itself or ``sudo``
is missing from container OS (e.g. default CentOS 7). To prevent that make
sure to set following options in kuryr.conf used for kuryr-daemon::
kuryr-daemon will be started in the CNI container. It is using ``os-vif``
and ``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo``
to raise privileges, even though container is priviledged by itself or
``sudo`` is missing from container OS (e.g. default CentOS 7). To prevent
that make sure to set following options in kuryr.conf used for
kuryr-daemon::
[vif_plug_ovs_privileged]
helper_command=privsep-helper
@@ -114,13 +117,17 @@ This should generate 5 files in your ``<output_dir>``:
* cni_ds.yml
.. note::
kuryr-cni daemonset mounts /var/run, due to necessity of accessing to several sub directories
like openvswitch and auxiliary directory for vhostuser configuration and socket files. Also when
neutron-openvswitch-agent works with datapath_type = netdev configuration option, kuryr-kubernetes
has to move vhostuser socket to auxiliary directory, that auxiliary directory should be on the same
mount point, otherwise connection of this socket will be refused.
In case when Open vSwitch keeps vhostuser socket files not in /var/run/openvswitch, openvswitch
mount point in cni_ds.yaml and [vhostuser] section in config_map.yml should be changed properly.
kuryr-cni daemonset mounts /var/run, due to necessity of accessing to
several sub directories like openvswitch and auxiliary directory for
vhostuser configuration and socket files. Also when
neutron-openvswitch-agent works with datapath_type = netdev configuration
option, kuryr-kubernetes has to move vhostuser socket to auxiliary
directory, that auxiliary directory should be on the same mount point,
otherwise connection of this socket will be refused. In case when Open
vSwitch keeps vhostuser socket files not in /var/run/openvswitch,
openvswitch mount point in cni_ds.yaml and [vhostuser] section in
config_map.yml should be changed properly.
Deploying Kuryr resources on Kubernetes

@@ -38,8 +38,8 @@ directory: ::
``local.conf.sample`` file is configuring Neutron and Kuryr with standard
Open vSwitch ML2 networking. In the ``devstack`` directory there are other
sample configuration files that enable OpenDaylight or Drangonflow networking.
See other pages in this documentation section to learn more.
sample configuration files that enable OpenDaylight or Drangonflow
networking. See other pages in this documentation section to learn more.
Now edit ``devstack/local.conf`` to set up some initial options:

@@ -10,6 +10,7 @@ nested MACVLAN driver rather than VLAN and trunk ports.
2. Launch a Nova VM with MACVLAN support
.. todo::
Add a list of neutron commands, required to launch a such a VM
3. Log into the VM and set up Kubernetes along with Kuryr using devstack:

@@ -172,6 +172,7 @@ Kuryr CNI Daemon, should be installed on every Kubernetes node, so following
steps need to be repeated.
.. note::
You can tweak configuration of some timeouts to match your environment. It's
crucial for scalability of the whole deployment. In general the timeout to
serve CNI request from kubelet to Kuryr is 180 seconds. After that time

@@ -88,6 +88,7 @@ to add the namespace handler and state the namespace subnet driver with::
KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace
.. note::
If the loadbalancer maintains the source IP (such as ovn-octavia driver),
there is no need to enforce sg rules at the load balancer level.
To disable the enforcement, you need to set the following variable:

@@ -26,9 +26,10 @@ After that, enable also the security group drivers for policies::
pod_security_groups_driver = policy
.. warning::
The correct behavior for pods that have no network policy applied is to allow
all ingress and egress traffic. If you want that to be enforced, please make
sure to create an SG allowing all traffic and add it to
The correct behavior for pods that have no network policy applied is to
allow all ingress and egress traffic. If you want that to be enforced,
please make sure to create an SG allowing all traffic and add it to
``[neutron_defaults]pod_security_groups`` setting in ``kuryr.conf``::
[neutron_defaults]
@@ -68,9 +69,10 @@ to add the policy, pod_label and namespace handler and drivers with::
KURYR_SUBNET_DRIVER=namespace
.. note::
If the loadbalancer maintains the source IP (such as ovn-octavia driver),
there is no need to enforce sg rules at the load balancer level.
To disable the enforcement, you need to set the following variable:
there is no need to enforce sg rules at the load balancer level. To disable
the enforcement, you need to set the following variable:
KURYR_ENFORCE_SG_RULES=False

@@ -127,9 +127,9 @@ the right pod-vif driver set.
.. note::
Previously, `pools_vif_drivers` configuration option provided similar
functionality, but is now deprecated and not recommended.
It stored a mapping from pool_driver => pod_vif_driver instead, disallowing
the use of a single pool driver as keys for multiple pod_vif_drivers.
functionality, but is now deprecated and not recommended. It stored a
mapping from pool_driver => pod_vif_driver instead, disallowing the use of a
single pool driver as keys for multiple pod_vif_drivers.
.. code-block:: ini

@@ -34,7 +34,7 @@ upgrade check`` utility **before upgrading Kuryr-Kubernetes services to T**.
In case of running Kuryr-Kubernetes containerized you can use ``kubectl
exec`` to run kuryr-k8s-status
.. code-block:: bash
.. code-block:: console
$ kubectl -n kube-system exec -it <controller-pod-name> kuryr-k8s-status upgrade check