From 80b5ecd41b1d3855f27f1de6d872e81d0696ecaf Mon Sep 17 00:00:00 2001
From: Roman Dobosz
Date: Wed, 13 Nov 2019 09:47:26 +0100
Subject: [PATCH] Change inline hyperlinks to link-target pairs.

Inline, named hyperlinks seem to be fine, but often they just add noise
to the paragraphs. In this patch we propose to use the link-target style
for hyperlinks, where the named links are placed in the paragraph while
their targets are collected at the bottom of the document.
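For illustration, here is the conversion applied below to README.rst. An
inline hyperlink like:

    For the process of new feature addition, refer to the `Kuryr Policy
    <https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_

becomes a named link whose target lives at the bottom of the document:

    For the process of new feature addition, refer to the `Kuryr Policy`_.

    .. _Kuryr Policy: https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies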
Change-Id: Ia4f4c66f51ea193dc201b3dba5be2788f20765e0
---
 README.rst                                    |  6 ++--
 doc/source/devref/high_availability.rst       | 14 ++++----
 doc/source/devref/kuryr_kubernetes_design.rst | 20 ++++++-----
 .../kuryr_kubernetes_ingress_design.rst       | 15 ++++----
 .../kuryr_kubernetes_ocp_route_design.rst     | 15 ++++----
 doc/source/devref/port_crd_usage.rst          |  8 ++---
 doc/source/devref/service_support.rst         | 10 +++---
 doc/source/installation/containerized.rst     | 11 +++---
 doc/source/installation/devstack/basic.rst    | 19 +++++-----
 .../devstack/dragonflow_support.rst           | 15 +++----
 .../installation/devstack/nested-vlan.rst     | 14 ++++----
 .../installation/devstack/odl_support.rst     | 18 ++++------
 .../installation/multi_vif_with_npwg_spec.rst |  9 ++---
 doc/source/installation/services.rst          | 11 +++---
 doc/source/installation/sriov.rst             | 36 +++++++++----------
 .../installation/testing_sriov_functional.rst | 12 +++----
 releasenotes/source/README.rst                |  9 +++--
 17 files changed, 116 insertions(+), 126 deletions(-)

diff --git a/README.rst b/README.rst
index 91a60465f..f493bec4e 100644
--- a/README.rst
+++ b/README.rst
@@ -29,5 +29,7 @@ require it or to use different segments and, for example, route between them.
 
 Contribution guidelines
 -----------------------
-For the process of new feature addition, refer to the `Kuryr Policy
-<https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_
+For the process of new feature addition, refer to the `Kuryr Policy`_.
+
+
+.. _Kuryr Policy: https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies
diff --git a/doc/source/devref/high_availability.rst b/doc/source/devref/high_availability.rst
index eaaa0a801..2af75d718 100644
--- a/doc/source/devref/high_availability.rst
+++ b/doc/source/devref/high_availability.rst
@@ -44,12 +44,11 @@ Leader election
 ~~~~~~~~~~~~~~~
 
 The idea here is to use leader election mechanism based on Kubernetes
-endpoints. The idea is neatly `explained on Kubernetes blog
-<https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/>`_.
-Election is based on Endpoint resources, that hold annotation about current
-leader and its leadership lease time. If leader dies, other instances of the
-service are free to take over the record. Kubernetes API mechanisms will
-provide update exclusion mechanisms to prevent race conditions.
+endpoints. The idea is neatly `explained on Kubernetes blog`_. Election is
+based on Endpoint resources, which hold an annotation about the current
+leader and its leadership lease time. If the leader dies, other instances of
+the service are free to take over the record. Kubernetes API mechanisms will
+provide update exclusion mechanisms to prevent race conditions.
 
 This can be implemented by adding another *leader-elector* container to each
 of kuryr-controller pods:
@@ -139,3 +138,6 @@ consistent hash ring to decide which instance will process which resource.
 Potentially this can be extended with support for non-containerized
 deployments by using Tooz and some other tool providing leader-election - like
 Consul or Zookeeper.
+
+
+.. _explained on Kubernetes blog: https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/
diff --git a/doc/source/devref/kuryr_kubernetes_design.rst b/doc/source/devref/kuryr_kubernetes_design.rst
index af1c04a97..dd72e8b62 100644
--- a/doc/source/devref/kuryr_kubernetes_design.rst
+++ b/doc/source/devref/kuryr_kubernetes_design.rst
@@ -147,16 +147,15 @@ ResourceEventHandler
 ~~~~~~~~~~~~~~~~~~~~
 
 ResourceEventHandler is a convenience base class for the Kubernetes event
-processing. The specific Handler associates itself with specific Kubernetes
+processing. The specific Handler associates itself with specific Kubernetes
 object kind (through setting OBJECT_KIND) and is expected to implement at
 least one of the methods of the base class to handle at least one of the
 ADDED/MODIFIED/DELETED events of the Kubernetes object. For details, see
-`k8s-api
-<https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds>`_.
-Since both ADDED and MODIFIED event types trigger very similar sequence of
-actions, Handler has 'on_present' method that is invoked for both event types.
-The specific Handler implementation should strive to put all the common ADDED
-and MODIFIED event handling logic in this method to avoid code duplication.
+`k8s-api`_. Since both ADDED and MODIFIED event types trigger a very similar
+sequence of actions, Handler has an 'on_present' method that is invoked for
+both event types. The specific Handler implementation should strive to put
+all the common ADDED and MODIFIED event handling logic in this method to
+avoid code duplication.
 
 
 Pluggable Handlers
@@ -306,6 +305,9 @@ APIs to perform its tasks and wait on socket for result.
 
 Kubernetes Documentation
 ------------------------
-The `Kubernetes reference documentation
-<https://kubernetes.io/docs/reference/>`_ is a great source for finding more
+The `Kubernetes reference documentation`_ is a great source for finding more
 details about Kubernetes API, CLIs, and tools.
+
+
+.. _k8s-api: https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds
+.. _Kubernetes reference documentation: https://kubernetes.io/docs/reference/
diff --git a/doc/source/devref/kuryr_kubernetes_ingress_design.rst b/doc/source/devref/kuryr_kubernetes_ingress_design.rst
index 3f71d6cb8..434109fee 100644
--- a/doc/source/devref/kuryr_kubernetes_ingress_design.rst
+++ b/doc/source/devref/kuryr_kubernetes_ingress_design.rst
@@ -26,13 +26,13 @@ is supported by the kuryr integration.
 Overview
 --------
 
-A Kubernetes Ingress [1]_ is used to give services externally-reachable URLs,
+A `Kubernetes Ingress`_ is used to give services externally-reachable URLs,
 load balance traffic, terminate SSL, offer name based virtual hosting, and
 more.
 
 Each Ingress consists of a name, service identifier, and (optionally) security
 configuration.
 
-A Kubernetes Ingress Controller [2]_ is an entity that watches the apiserver's
+A `Kubernetes Ingress Controller`_ is an entity that watches the apiserver's
 /ingress resources for updates. Its job is to satisfy requests for Ingresses.
 
@@ -50,7 +50,7 @@ A L7 router is a logical entity responsible for L7 routing based on L7 rules
 database, when an HTTP packet hits the L7 router, the L7 router uses its rules
 database to determine the endpoint destination (based on the fields content in
 HTTP header, e.g: HOST_NAME).
-Kuryr will use neutron LBaaS L7 policy capability [3]_ to perform
+Kuryr will use neutron LBaaS `L7 policy capability`_ to perform
 the L7 routing task.
@@ -262,9 +262,6 @@ This section describe in details the following scenarios:
 
    handler will set its internal state to 'no Ingress is pointing' state.
 
-References
-==========
-
-.. [1] https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
-.. [2] https://github.com/kubernetes/ingress-nginx/blob/master/README.md
-.. [3] https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
+.. _Kubernetes Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
+.. _Kubernetes Ingress Controller: https://github.com/kubernetes/ingress-nginx/blob/master/README.md
+.. _L7 policy capability: https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
diff --git a/doc/source/devref/kuryr_kubernetes_ocp_route_design.rst b/doc/source/devref/kuryr_kubernetes_ocp_route_design.rst
index d4e3eb708..05f9f9bb7 100644
--- a/doc/source/devref/kuryr_kubernetes_ocp_route_design.rst
+++ b/doc/source/devref/kuryr_kubernetes_ocp_route_design.rst
@@ -26,7 +26,7 @@ by kuryr-kubernetes.
 Overview
 --------
 
-OpenShift Origin [1]_ is an open source cloud application development and
+`OpenShift Origin`_ is an open source cloud application development and
 hosting platform that automates the provisioning, management and scaling of
 applications.
 
@@ -35,11 +35,11 @@ application development and multi-tenancy deployment. OpenShift adds developer
 and operations-centric tools on top of Kubernetes to enable rapid application
 development, easy deployment and scaling, and long-term lifecycle maintenance.
 
-An OpenShift Route [2]_ exposes a Service at a host name, like www.example.com,
+The `OpenShift Route`_ exposes a Service at a host name, like www.example.com,
 so that external clients can reach it by name. The Route is an Openshift
 resource that defines the rules you want to apply to incoming connections.
 
-The Openshift Routes concept introduced before Ingress [3]_ was supported by
+The Openshift Routes concept was `introduced before Ingress`_ was supported by
 kubernetes, the Openshift Route matches the functionality of kubernetes
 Ingress.
 
@@ -162,9 +162,6 @@ B. Create Service/Endpoints, create OCP-Route, delete OCP-Route.
 
    handler will set its internal state to 'no Ingress is pointing' state.
 
-References
-==========
-
-.. [1] https://www.openshift.org/
-.. [2] https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
-.. [3] https://kubernetes.io/docs/concepts/Services-networking/ingress/
+.. _OpenShift Origin: https://www.openshift.org/
+.. _OpenShift Route: https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
+.. _introduced before Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
diff --git a/doc/source/devref/port_crd_usage.rst b/doc/source/devref/port_crd_usage.rst
index c0f3e1143..131f5254c 100644
--- a/doc/source/devref/port_crd_usage.rst
+++ b/doc/source/devref/port_crd_usage.rst
@@ -20,8 +20,7 @@ Purpose
 -------
 
 The purpose of this document is to present Kuryr Kubernetes Port and PortPool
-CRD [1]_ usage, capturing the design decisions currently taken by the Kuryr
-team.
+`CRD`_ usage, capturing the design decisions currently taken by the Kuryr team.
 
 The main purpose of the Port CRD is to keep Neutron resources tracking as part
 of K8s data model. The main idea behind is to try to minimize the amount of
@@ -199,7 +198,4 @@ namespace subnet driver and it could be similarly applied to other SDN
 resources, such as LoadBalancers.
 
 
-References
-==========
-
-.. [1] https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources
+.. _CRD: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources
diff --git a/doc/source/devref/service_support.rst b/doc/source/devref/service_support.rst
index 4aae89ec5..5a0c9bcf6 100644
--- a/doc/source/devref/service_support.rst
+++ b/doc/source/devref/service_support.rst
@@ -31,10 +31,8 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods and
 a policy by which to access them. Service is a Kubernetes managed API object.
 For Kubernetes-native applications, Kubernetes offers an Endpoints API that is
 updated whenever the set of Pods in a Service changes. For detailed information
-please refer to `Kubernetes service
-<http://kubernetes.io/docs/user-guide/services/>`_ Kubernetes supports services
-with kube-proxy component that runs on each node, `Kube-Proxy
-<http://kubernetes.io/docs/admin/kube-proxy/>`_.
+please refer to `Kubernetes service`_. Kubernetes supports services with
+kube-proxy component that runs on each node, `Kube-Proxy`_.
 
 
 Proposed Solution
@@ -84,3 +82,7 @@ details for service mapping.
 LBaaS Driver is added to manage service translation to the LBaaSv2-like API.
 It abstracts all the details of service translation to Load Balancer.
 LBaaSv2Driver supports this interface by mapping to neutron LBaaSv2 constructs.
+
+
+.. _Kubernetes service: http://kubernetes.io/docs/user-guide/services/
+.. _Kube-Proxy: http://kubernetes.io/docs/admin/kube-proxy/
diff --git a/doc/source/installation/containerized.rst b/doc/source/installation/containerized.rst
index 625e8dc81..4644d853d 100644
--- a/doc/source/installation/containerized.rst
+++ b/doc/source/installation/containerized.rst
@@ -27,10 +27,9 @@ If you want to run kuryr CNI without the daemon, build the image with:
 
     $ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .
 
 Alternatively, you can remove ``imagePullPolicy: Never`` from kuryr-controller
-Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller
-<https://hub.docker.com/r/kuryr/controller/>`_ and `cni
-<https://hub.docker.com/r/kuryr/cni/>`_ images from the Docker Hub. Those
-definitions will be generated in next step.
+Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller`_
+and `cni`_ images from the Docker Hub. Those definitions will be generated in
+next step.
 
 
 Generating Kuryr resource definitions for Kubernetes
@@ -169,3 +168,7 @@ To see kuryr-controller logs:
 
 NOTE: kuryr-cni has no logs and to debug failures you need to check out
 kubelet logs.
+
+
+.. _controller: https://hub.docker.com/r/kuryr/controller/
+.. _cni: https://hub.docker.com/r/kuryr/cni/
diff --git a/doc/source/installation/devstack/basic.rst b/doc/source/installation/devstack/basic.rst
index acc6c2e66..f6038b7e1 100644
--- a/doc/source/installation/devstack/basic.rst
+++ b/doc/source/installation/devstack/basic.rst
@@ -156,11 +156,14 @@ You can verify that this IP is really assigned to Neutron port:
 
 If those steps were successful, then it looks like your DevStack with
 kuryr-kubernetes is working correctly. In case of errors, copy last ~50 lines
-of the logs, paste them into `paste.openstack.org
-<http://paste.openstack.org>`_ and ask other developers for help on `Kuryr's
-IRC channel <irc://chat.freenode.net:6667/openstack-kuryr>`_. More info on how to use
-DevStack can be found in `DevStack Documentation
-<https://docs.openstack.org/devstack/latest/>`_, especially in section `Using
-Systemd in DevStack
-<https://docs.openstack.org/devstack/latest/systemd.html>`_, which explains how
-to use ``systemctl`` to control services and ``journalctl`` to read its logs.
+of the logs, paste them into `paste.openstack.org`_ and ask other developers
+for help on `Kuryr's IRC channel`_. More info on how to use DevStack can be
+found in `DevStack Documentation`_, especially in section `Using Systemd in
+DevStack`_, which explains how to use ``systemctl`` to control services and
+``journalctl`` to read its logs.
+
+
+.. _paste.openstack.org: http://paste.openstack.org
+.. _Kuryr's IRC channel: irc://chat.freenode.net:6667/openstack-kuryr
+.. _DevStack Documentation: https://docs.openstack.org/devstack/latest/
+.. _Using Systemd in DevStack: https://docs.openstack.org/devstack/latest/systemd.html
diff --git a/doc/source/installation/devstack/dragonflow_support.rst b/doc/source/installation/devstack/dragonflow_support.rst
index b24557c25..bb7769510 100644
--- a/doc/source/installation/devstack/dragonflow_support.rst
+++ b/doc/source/installation/devstack/dragonflow_support.rst
@@ -67,8 +67,6 @@ Feel free to edit it if you'd like, but it should work as-is.
 Optionally, the ports pool funcionality can be enabled by following:
 `How to enable ports pool with devstack`_.
 
-.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
-
 5. Run DevStack.
 
 Expect it to take a while. It installs required packages, clones a bunch
@@ -108,8 +106,6 @@ In order to check the default configuration, in term of networks, subnets,
 security groups and loadbalancers created upon a successful devstack stacking,
 you can check the `Inspect default Configuration`_.
 
-.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
-
 Testing Network Connectivity
 ++++++++++++++++++++++++++++
 
@@ -117,8 +113,6 @@ Testing Network Connectivity
 Once the environment is ready, we can test that network connectivity works
 among pods. To do that check out `Testing Network Connectivity`_.
 
-.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
-
 
 Nested Containers Test Environment (VLAN)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -144,7 +138,6 @@ use (step 4), in this case:
    $ cd devstack
    $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf
 
-
 The main differences with the default dragonflow local.conf sample are that:
 
 - There is no need to enable the kuryr-kubernetes plugin as this will be
@@ -167,8 +160,6 @@ creating the overcloud VM by using a parent port of a Trunk so that containers
 can be created inside with their own networks. To do that we follow the next
 steps detailed at `Boot VM with a Trunk Port`_.
 
-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
-
 
 Overcloud deployment
 ++++++++++++++++++++
@@ -186,7 +177,6 @@ same steps as for ML2/OVS:
 2. Deploy devstack following steps 3 and 4 detailed at `How to try out
    nested-pods locally (VLAN + trunk)`_.
 
-.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
 
 
 Testing Nested Network Connectivity
@@ -198,4 +188,9 @@ the deployment was successful. To do that check out `Testing Nested Network
 Connectivity`_.
 
 
+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
+.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
+.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
 .. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html
diff --git a/doc/source/installation/devstack/nested-vlan.rst b/doc/source/installation/devstack/nested-vlan.rst
index b10f42b00..a4d0965bc 100644
--- a/doc/source/installation/devstack/nested-vlan.rst
+++ b/doc/source/installation/devstack/nested-vlan.rst
@@ -16,11 +16,8 @@ for the VM:
 
     [DEFAULT]
     service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin
 
-2. Launch a VM with `Neutron trunk port.
-   <https://wiki.openstack.org/wiki/Neutron/TrunkPort>`_. The next steps can be
-   followed: `Boot VM with a Trunk Port`_.
-
-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+2. Launch a VM with `Neutron trunk port`_. The next steps can be followed:
+   `Boot VM with a Trunk Port`_.
 
 3. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
    - Since undercloud Neutron will be used by pods, Neutron services should be
@@ -52,8 +49,6 @@ for the VM:
 
    - Optionally, the ports pool funcionality can be enabled by following:
     `How to enable ports pool with devstack`_.
 
-     .. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
-
   - [OPTIONAL] If you want to enable the subport pools driver and the VIF
      Pool Manager you need to include:
@@ -92,3 +87,8 @@ for the VM:
    $ sudo systemctl restart devstack@kuryr-daemon.service
 
 Now launch pods using kubectl, Undercloud Neutron will serve the networking.
+
+
+.. _Neutron trunk port: https://wiki.openstack.org/wiki/Neutron/TrunkPort
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
diff --git a/doc/source/installation/devstack/odl_support.rst b/doc/source/installation/devstack/odl_support.rst
index ad693ddb7..0608547cf 100644
--- a/doc/source/installation/devstack/odl_support.rst
+++ b/doc/source/installation/devstack/odl_support.rst
@@ -58,12 +58,9 @@ Feel free to edit it if you'd like, but it should work as-is.
 
    $ cd devstack
    $ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
 
-
 Optionally, the ports pool funcionality can be enabled by following:
 `How to enable ports pool with devstack`_.
 
-.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
-
 5. Run DevStack.
 
 This is going to take a while. It installs a bunch of packages, clones a bunch
@@ -113,8 +110,6 @@ In order to check the default configuration, in term of networks, subnets,
 security groups and loadbalancers created upon a successful devstack stacking,
 you can check the `Inspect default Configuration`_.
 
-.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
-
 Testing Network Connectivity
 ++++++++++++++++++++++++++++
 
@@ -122,8 +117,6 @@ Testing Network Connectivity
 Once the environment is ready, we can test that network connectivity works
 among pods. To do that check out `Testing Network Connectivity`_.
 
-.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
-
 
 Nested Containers Test Environment (VLAN)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -164,14 +157,11 @@ The main differences with the default odl local.conf sample are that:
 
 - ODL Trunk service plugin need to be enable to ensure Trunk ports support.
 
-
 Once the undercloud deployment has finished, the next steps are related to
 create the overcloud VM by using a parent port of a Trunk so that containers
 can be created inside with their own networks. To do that we follow the next
 steps detailed at `Boot VM with a Trunk Port`_.
 
-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
-
 
 Overcloud deployment
 ++++++++++++++++++++
@@ -189,8 +179,6 @@ same steps as for ML2/OVS:
 2. Deploy devstack following steps 3 and 4 detailed at `How to try out
    nested-pods locally (VLAN + trunk)`_.
 
-.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
-
 
 Testing Nested Network Connectivity
 +++++++++++++++++++++++++++++++++++
 
@@ -200,4 +188,10 @@ overcloud VM, scale it to any number of pods and expose the service to check if
 the deployment was successful. To do that check out `Testing Nested Network
 Connectivity`_.
 
+
+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
+.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
+.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
 .. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html
diff --git a/doc/source/installation/multi_vif_with_npwg_spec.rst b/doc/source/installation/multi_vif_with_npwg_spec.rst
index 611b93948..56dca14dc 100644
--- a/doc/source/installation/multi_vif_with_npwg_spec.rst
+++ b/doc/source/installation/multi_vif_with_npwg_spec.rst
@@ -2,8 +2,8 @@ Configure Pod with Additional Interfaces
 ========================================
 
-To create pods with additional Interfaces follow the Kubernetes Network Custom
-Resource Definition De-facto Standard Version 1 [#]_, the next steps can be
+To create pods with additional Interfaces follow the `Kubernetes Network Custom
+Resource Definition De-facto Standard Version 1`_, the next steps can be
 followed:
 
 1. Create Neutron net/subnets which you want the additional interfaces attach
@@ -91,7 +91,4 @@ defined in step 1.
 
 You may put a list of network separated with comma to attach Pods to more
 networks.
 
-Reference
----------
-
-.. [#] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit?usp=sharing
+.. _Kubernetes Network Custom Resource Definition De-facto Standard Version 1: https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit?usp=sharing
diff --git a/doc/source/installation/services.rst b/doc/source/installation/services.rst
index 5f49583a3..977609a80 100644
--- a/doc/source/installation/services.rst
+++ b/doc/source/installation/services.rst
@@ -13,10 +13,6 @@ be implemented in the following way:
    LoadBalancer's VIP.
 * **Endpoints**: The Endpoint object is translated to a LoadBalancer's VIP.
 
-
-.. _services: https://kubernetes.io/docs/concepts/services-networking/service/
-.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0
-
 .. figure:: ../../images/lbaas_translation.svg
    :width: 100%
    :alt: Graphical depiction of the translation explained above
 
@@ -83,8 +79,6 @@ adds over the Neutron HAProxy agent are:
 You can find a good explanation about the involved steps to install Octavia
 in the `Octavia installation docs`_.
 
-.. _Octavia installation docs: https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html
-
 Simplifying a lot, Octavia works by instantiating a compute resource, i.e. a
 Nova VM, and running HAProxy inside. These single load balancer Nova VMs are
 called *Amphorae*. Each *Amphora* has a separate linux network namespace where
 
@@ -789,3 +783,8 @@ Troubleshooting
   If you want your current pods to get this change applied, the most
   comfortable way to do that is to delete them and let the Kubernetes
   Deployment create them automatically for you.
+
+
+.. _services: https://kubernetes.io/docs/concepts/services-networking/service/
+.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0
+.. _Octavia installation docs: https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html
diff --git a/doc/source/installation/sriov.rst b/doc/source/installation/sriov.rst
index 26196f001..fb8866a3f 100644
--- a/doc/source/installation/sriov.rst
+++ b/doc/source/installation/sriov.rst
@@ -4,7 +4,7 @@
 How to configure SR-IOV ports
 =============================
 
-Current approach of SR-IOV relies on sriov-device-plugin [2]_. While creating
+Current approach of SR-IOV relies on `sriov-device-plugin`_. While creating
 pods with SR-IOV, sriov-device-plugin should be turned on on all nodes. To use
 a SR-IOV port on a baremetal installation the 3 following steps should be
 done:
@@ -27,13 +27,13 @@ Subnet id will be used later in NetworkAttachmentDefini
 
     default_physnet_subnets = physnet1:<subnet_id>
 
 This mapping is required for ability to find appropriate PF/VF functions at
-binding phase. physnet1 is just an identifier for subnet <subnet_id>. Such kind of transition is necessary to support many-to-many
-relation.
+binding phase. physnet1 is just an identifier for subnet <subnet_id>. Such kind of transition is necessary to support many-to-many relation.
 
 3. Prepare NetworkAttachmentDefinition object.
 
 Apply NetworkAttachmentDefinition with "sriov" driverType inside,
-as described in [1]_.
+as described in `NPWG spec`_.
 
 .. code-block:: yaml
@@ -79,15 +79,16 @@ They may have different subnetId.
 The resource name *intel.com/sriov*, which used in the above example is the
 default resource name. This name was used in SR-IOV network device plugin in
 version 1 (release-v1 branch). But since latest version the device plugin can
-use any arbitrary name of the resources [3]_. This name should match
-"^\[a-zA-Z0-9\_\]+$" regular expression. To be able to work with arbitrary
-resource names physnet_resource_mappings and device_plugin_resource_prefix in
-[sriov] section of kuryr-controller configuration file should be filled. The
-default value for device_plugin_resource_prefix is intel.com, the same as in
-SR-IOV network device plugin, in case of SR-IOV network device plugin was
-started with value of -resource-prefix option different from intel.com, than
-value should be set to device_plugin_resource_prefix, otherwise
-kuryr-kubernetes will not work with resource.
+use any arbitrary name of the resources (see `SRIOV network device plugin for
+Kubernetes`_). This name should match the "^\[a-zA-Z0-9\_\]+$" regular
+expression. To be able to work with arbitrary resource names,
+physnet_resource_mappings and device_plugin_resource_prefix in the [sriov]
+section of the kuryr-controller configuration file should be filled in. The
+default value for device_plugin_resource_prefix is intel.com, the same as in
+the SR-IOV network device plugin. If the SR-IOV network device plugin was
+started with a -resource-prefix value different from intel.com, then that
+value should be set in device_plugin_resource_prefix, otherwise
+kuryr-kubernetes will not work with the resource.
 
 Assume we have following SR-IOV network device plugin (defined by -config-file
 option)
@@ -169,9 +170,6 @@ ports with binding:profile information. Due to this it is necessary to make
 actions with privileged user with admin rights.
 
-Reference
----------
-
-.. [1] https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html
-.. [2] https://docs.google.com/document/d/1D3dJeUUmta3sMzqw8JtWFoG2rvcJiWitVro9bsfUTEw
-.. [3] https://github.com/intel/sriov-network-device-plugin
+.. _NPWG spec: https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html
+.. _sriov-device-plugin: https://docs.google.com/document/d/1D3dJeUUmta3sMzqw8JtWFoG2rvcJiWitVro9bsfUTEw
+.. _SRIOV network device plugin for Kubernetes: https://github.com/intel/sriov-network-device-plugin
diff --git a/doc/source/installation/testing_sriov_functional.rst b/doc/source/installation/testing_sriov_functional.rst
index 1d7dd1a2b..dda96b369 100644
--- a/doc/source/installation/testing_sriov_functional.rst
+++ b/doc/source/installation/testing_sriov_functional.rst
@@ -2,12 +2,9 @@ Testing SRIOV functionality
 ===========================
 
-Following the steps explained on :ref:`sriov` make sure that you have
-already created and applied a ``NetworkAttachmentDefinition``
-containing a ``sriov`` driverType. Also make sure that
-`sriov-device-plugin <https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc>`_
-is enabled on the nodes.
-
+Following the steps explained on :ref:`sriov` make sure that you have already
+created and applied a ``NetworkAttachmentDefinition`` containing a ``sriov``
+driverType. Also make sure that `sriov-device-plugin`_ is enabled on the nodes.
 
 ``NetworkAttachmentDefinition`` containing a ``sriov`` driverType might
 look like:
@@ -244,3 +241,6 @@ match the ones on the container. Currently the neutron-sriov-nic-agent does
 not properly detect SR-IOV ports assigned to containers. This means that direct
 ports in neutron would always remain in *DOWN* state. This doesn't affect the
 feature in any way other than cosmetically.
+
+
+.. _sriov-device-plugin: https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc
diff --git a/releasenotes/source/README.rst b/releasenotes/source/README.rst
index df6c1d0db..7ed60214c 100644
--- a/releasenotes/source/README.rst
+++ b/releasenotes/source/README.rst
@@ -4,8 +4,11 @@ Kuryr-Kubernetes Release Notes Howto
 
 Release notes are a new feature for documenting new features in OpenStack
 projects. Background on the process, tooling, and methodology is documented in
-a `mailing list post by Doug Hellmann
-<http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html>`_.
+a `mailing list post by Doug Hellmann`_.
 
 For information on how to create release notes, please consult the `Release
-Notes documentation <https://docs.openstack.org/reno/latest/>`_.
+Notes documentation`_.
+
+
+.. _mailing list post by Doug Hellmann: http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html
+.. _Release Notes documentation: https://docs.openstack.org/reno/latest/
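
A note on verifying a conversion like this: a named link such as
```Kuryr Policy`_`` must match its ``.. _Kuryr Policy:`` target verbatim,
and docutils reports any reference left without a matching target as an
``Unknown target name`` error when the docs are built. A minimal check,
assuming this repository carries the usual OpenStack ``docs`` tox
environment (the environment itself is not part of this patch):

    $ tox -e docs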