Change inline hyperlinks to link-target pairs.
Inline, named hyperlinks seem fine, but they often just add noise to the paragraphs. This patch proposes the link-target style for hyperlinks, where the named link is placed in the paragraph while its target sits at the bottom of the document.

Change-Id: Ia4f4c66f51ea193dc201b3dba5be2788f20765e0
parent fd440fcdcb
commit 80b5ecd41b
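For illustration, the two styles compared by this patch look like this (a minimal sketch using the `Kuryr Policy` link from the first hunk below):

.. code-block:: rst

   Inline style, which buries the URL in the paragraph:

   Refer to the `Kuryr Policy <https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_.

   Link-target style, which keeps the paragraph readable:

   Refer to the `Kuryr Policy`_.

   .. _Kuryr Policy: https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies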
@@ -29,5 +29,7 @@ require it or to use different segments and, for example, route between them.
 Contribution guidelines
 -----------------------
 
-For the process of new feature addition, refer to the `Kuryr Policy
-<https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_
+For the process of new feature addition, refer to the `Kuryr Policy`_.
+
+
+.. _Kuryr Policy: https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies
@@ -44,12 +44,11 @@ Leader election
 ~~~~~~~~~~~~~~~
 
 The idea here is to use leader election mechanism based on Kubernetes
-endpoints. The idea is neatly `explained on Kubernetes blog
-<https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/>`_.
-Election is based on Endpoint resources, that hold annotation about current
-leader and its leadership lease time. If leader dies, other instances of the
-service are free to take over the record. Kubernetes API mechanisms will
-provide update exclusion mechanisms to prevent race conditions.
+endpoints. The idea is neatly `explained on Kubernetes blog`_. Election is
+based on Endpoint resources, that hold annotation about current leader and its
+leadership lease time. If leader dies, other instances of the service are free
+to take over the record. Kubernetes API mechanisms will provide update
+exclusion mechanisms to prevent race conditions.
 
 This can be implemented by adding another *leader-elector* container to each
 of kuryr-controller pods:
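The pod definition referenced at the end of that hunk is elided by the diff. A minimal sketch of such a sidecar, modeled on the leader-elector image from the linked blog post (image tag and election name are assumptions, not taken from this change), might look like:

.. code-block:: yaml

   # Hedged sketch: one leader-elector sidecar per kuryr-controller pod.
   # It maintains the leader annotation on a "kuryr-controller" Endpoints
   # object and reports the current leader over HTTP on port 4040.
   - name: leader-elector
     image: gcr.io/google_containers/leader-elector:0.5
     args:
       - --election=kuryr-controller
       - --http=0.0.0.0:4040
     ports:
       - containerPort: 4040
         protocol: TCP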
@@ -139,3 +138,6 @@ consistent hash ring to decide which instance will process which resource.
 Potentially this can be extended with support for non-containerized deployments
 by using Tooz and some other tool providing leader-election - like Consul or
 Zookeeper.
+
+
+.. _explained on Kubernetes blog: https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/
@@ -147,16 +147,15 @@ ResourceEventHandler
 ~~~~~~~~~~~~~~~~~~~~
 
 ResourceEventHandler is a convenience base class for the Kubernetes event
 processing. The specific Handler associates itself with specific Kubernetes
 object kind (through setting OBJECT_KIND) and is expected to implement at
 least one of the methods of the base class to handle at least one of the
 ADDED/MODIFIED/DELETED events of the Kubernetes object. For details, see
-`k8s-api
-<https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds>`_.
-Since both ADDED and MODIFIED event types trigger very similar sequence of
-actions, Handler has 'on_present' method that is invoked for both event types.
-The specific Handler implementation should strive to put all the common ADDED
-and MODIFIED event handling logic in this method to avoid code duplication.
+`k8s-api`_. Since both ADDED and MODIFIED event types trigger very similar
+sequence of actions, Handler has 'on_present' method that is invoked for both
+event types. The specific Handler implementation should strive to put all the
+common ADDED and MODIFIED event handling logic in this method to avoid code
+duplication.
 
 
 Pluggable Handlers
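As a reading aid, a hypothetical handler following the contract described in that hunk might look like this sketch (the module path and class names are assumptions, not taken from this change):

.. code-block:: python

   from kuryr_kubernetes.handlers import k8s_base  # assumed module path

   class SamplePodHandler(k8s_base.ResourceEventHandler):
       """Illustrative handler reacting to Pod events."""

       OBJECT_KIND = 'Pod'

       def on_present(self, pod):
           # Invoked for both ADDED and MODIFIED events, so the common
           # handling logic lives here, as the design text recommends.
           self._ensure_vif(pod)

       def on_deleted(self, pod):
           # Invoked only for DELETED events.
           self._release_vif(pod)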
@@ -306,6 +305,9 @@ APIs to perform its tasks and wait on socket for result.
 Kubernetes Documentation
 ------------------------
 
-The `Kubernetes reference documentation
-<https://kubernetes.io/docs/reference/>`_ is a great source for finding more
+The `Kubernetes reference documentation`_ is a great source for finding more
 details about Kubernetes API, CLIs, and tools.
+
+
+.. _k8s-api: https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds
+.. _Kubernetes reference documentation: https://kubernetes.io/docs/reference/
@@ -26,13 +26,13 @@ is supported by the kuryr integration.
 Overview
 --------
 
-A Kubernetes Ingress [1]_ is used to give services externally-reachable URLs,
+A `Kubernetes Ingress`_ is used to give services externally-reachable URLs,
 load balance traffic, terminate SSL, offer name based virtual
 hosting, and more.
 Each Ingress consists of a name, service identifier, and (optionally)
 security configuration.
 
-A Kubernetes Ingress Controller [2]_ is an entity that watches the apiserver's
+A `Kubernetes Ingress Controller`_ is an entity that watches the apiserver's
 /ingress resources for updates. Its job is to satisfy requests for Ingresses.
 
 
@@ -50,7 +50,7 @@ A L7 router is a logical entity responsible for L7 routing based on L7 rules
 database, when an HTTP packet hits the L7 router, the L7 router uses its
 rules database to determine the endpoint destination (based on the fields
 content in HTTP header, e.g: HOST_NAME).
-Kuryr will use neutron LBaaS L7 policy capability [3]_ to perform
+Kuryr will use neutron LBaaS `L7 policy capability`_ to perform
 the L7 routing task.
 
 
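For context, a hedged sketch of the kind of L7 policy and rule Kuryr would create through that API (names are placeholders; exact neutron client syntax may differ by release):

.. code-block:: console

   $ neutron lbaas-l7policy-create --listener ingress-listener \
         --action REDIRECT_TO_POOL --redirect-pool www-pool --name www-policy
   $ neutron lbaas-l7rule-create www-policy --type HOST_NAME \
         --compare-type EQUAL_TO --value www.example.com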
@@ -262,9 +262,6 @@ This section describe in details the following scenarios:
 handler will set its internal state to 'no Ingress is pointing' state.
 
 
-References
-==========
-
-.. [1] https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
-.. [2] https://github.com/kubernetes/ingress-nginx/blob/master/README.md
-.. [3] https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
+.. _Kubernetes Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
+.. _Kubernetes Ingress Controller: https://github.com/kubernetes/ingress-nginx/blob/master/README.md
+.. _L7 policy capability: https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
@@ -26,7 +26,7 @@ by kuryr-kubernetes.
 Overview
 --------
 
-OpenShift Origin [1]_ is an open source cloud application development and
+`OpenShift Origin`_ is an open source cloud application development and
 hosting platform that automates the provisioning, management and scaling
 of applications.
 
@@ -35,11 +35,11 @@ application development and multi-tenancy deployment. OpenShift adds developer
 and operations-centric tools on top of Kubernetes to enable rapid application
 development, easy deployment and scaling, and long-term lifecycle maintenance.
 
-An OpenShift Route [2]_ exposes a Service at a host name, like www.example.com,
+The `OpenShift Route`_ exposes a Service at a host name, like www.example.com,
 so that external clients can reach it by name.
 The Route is an Openshift resource that defines the rules you want to apply to
 incoming connections.
-The Openshift Routes concept introduced before Ingress [3]_ was supported by
+The Openshift Routes concept was `introduced before Ingress`_ was supported by
 kubernetes, the Openshift Route matches the functionality of kubernetes Ingress.
 
 
@@ -162,9 +162,6 @@ B. Create Service/Endpoints, create OCP-Route, delete OCP-Route.
 handler will set its internal state to 'no Ingress is pointing' state.
 
 
-References
-==========
-
-.. [1] https://www.openshift.org/
-.. [2] https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
-.. [3] https://kubernetes.io/docs/concepts/Services-networking/ingress/
+.. _OpenShift Origin: https://www.openshift.org/
+.. _OpenShift Route: https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
+.. _introduced before Ingress: https://kubernetes.io/docs/concepts/Services-networking/ingress/
@@ -20,8 +20,7 @@ Purpose
 -------
 
 The purpose of this document is to present Kuryr Kubernetes Port and PortPool
-CRD [1]_ usage, capturing the design decisions currently taken by the Kuryr
-team.
+`CRD`_ usage, capturing the design decisions currently taken by the Kuryr team.
 
 The main purpose of the Port CRD is to keep Neutron resources tracking as part
 of K8s data model. The main idea behind is to try to minimize the amount of
@@ -199,7 +198,4 @@ namespace subnet driver and it could be similarly applied to other SDN
 resources, such as LoadBalancers.
 
 
-References
-==========
-
-.. [1] https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources
+.. _CRD: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources
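To make the CRD idea concrete, a purely illustrative definition in the v1beta1 CRD schema of that era (the group and kind names are assumptions, not the project's actual schema):

.. code-block:: yaml

   apiVersion: apiextensions.k8s.io/v1beta1
   kind: CustomResourceDefinition
   metadata:
     # Hypothetical name; shown only to illustrate the mechanism.
     name: kuryrports.openstack.org
   spec:
     group: openstack.org
     version: v1
     scope: Namespaced
     names:
       plural: kuryrports
       singular: kuryrport
       kind: KuryrPort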
@@ -31,10 +31,8 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods and
 a policy by which to access them. Service is a Kubernetes managed API object.
 For Kubernetes-native applications, Kubernetes offers an Endpoints API that is
 updated whenever the set of Pods in a Service changes. For detailed information
-please refer to `Kubernetes service
-<http://kubernetes.io/docs/user-guide/services/>`_ Kubernetes supports services
-with kube-proxy component that runs on each node, `Kube-Proxy
-<http://kubernetes.io/docs/admin/kube-proxy/>`_.
+please refer to `Kubernetes service`_. Kubernetes supports services with
+kube-proxy component that runs on each node, `Kube-Proxy`_.
 
 
 Proposed Solution
@@ -84,3 +82,7 @@ details for service mapping.
 LBaaS Driver is added to manage service translation to the LBaaSv2-like API.
 It abstracts all the details of service translation to Load Balancer.
 LBaaSv2Driver supports this interface by mapping to neutron LBaaSv2 constructs.
+
+
+.. _Kubernetes service: http://kubernetes.io/docs/user-guide/services/
+.. _Kube-Proxy: http://kubernetes.io/docs/admin/kube-proxy/
@@ -27,10 +27,9 @@ If you want to run kuryr CNI without the daemon, build the image with:
 $ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .
 
 Alternatively, you can remove ``imagePullPolicy: Never`` from kuryr-controller
-Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller
-<https://hub.docker.com/r/kuryr/controller/>`_ and `cni
-<https://hub.docker.com/r/kuryr/cni/>`_ images from the Docker Hub. Those
-definitions will be generated in next step.
+Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller`_
+and `cni`_ images from the Docker Hub. Those definitions will be generated in
+next step.
 
 
 Generating Kuryr resource definitions for Kubernetes
@@ -169,3 +168,7 @@ To see kuryr-controller logs:
 
 NOTE: kuryr-cni has no logs and to debug failures you need to check out kubelet
 logs.
+
+
+.. _controller: https://hub.docker.com/r/kuryr/controller/
+.. _cni: https://hub.docker.com/r/kuryr/cni/
@@ -156,11 +156,14 @@ You can verify that this IP is really assigned to Neutron port:
 
 If those steps were successful, then it looks like your DevStack with
 kuryr-kubernetes is working correctly. In case of errors, copy last ~50 lines
-of the logs, paste them into `paste.openstack.org
-<http://paste.openstack.org>`_ and ask other developers for help on `Kuryr's
-IRC channel <chat.freenode.net:6667/openstack-kuryr>`_. More info on how to use
-DevStack can be found in `DevStack Documentation
-<https://docs.openstack.org/devstack/latest/>`_, especially in section `Using
-Systemd in DevStack
-<https://docs.openstack.org/devstack/latest/systemd.html>`_, which explains how
-to use ``systemctl`` to control services and ``journalctl`` to read its logs.
+of the logs, paste them into `paste.openstack.org`_ and ask other developers
+for help on `Kuryr's IRC channel`_. More info on how to use DevStack can be
+found in `DevStack Documentation`_, especially in section `Using Systemd in
+DevStack`_, which explains how to use ``systemctl`` to control services and
+``journalctl`` to read its logs.
+
+
+.. _paste.openstack.org: http://paste.openstack.org
+.. _Kuryr's IRC channel: chat.freenode.net:6667/openstack-kuryr
+.. _DevStack Documentation: https://docs.openstack.org/devstack/latest/
+.. _Using Systemd in DevStack: https://docs.openstack.org/devstack/latest/systemd.html
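For example, assuming the controller runs under a ``devstack@kuryr-kubernetes`` unit (the exact unit name depends on your DevStack configuration), checking a service and pulling its last log lines looks like:

.. code-block:: console

   $ sudo systemctl status devstack@kuryr-kubernetes.service
   $ sudo journalctl -u devstack@kuryr-kubernetes.service --no-pager | tail -50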
@@ -67,8 +67,6 @@ Feel free to edit it if you'd like, but it should work as-is.
 Optionally, the ports pool funcionality can be enabled by following:
 `How to enable ports pool with devstack`_.
 
-.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
-
 5. Run DevStack.
 
 Expect it to take a while. It installs required packages, clones a bunch
@@ -108,8 +106,6 @@ In order to check the default configuration, in term of networks, subnets,
 security groups and loadbalancers created upon a successful devstack stacking,
 you can check the `Inspect default Configuration`_.
 
-.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
-
 
 Testing Network Connectivity
 ++++++++++++++++++++++++++++
@@ -117,8 +113,6 @@ Testing Network Connectivity
 Once the environment is ready, we can test that network connectivity works
 among pods. To do that check out `Testing Network Connectivity`_.
 
-.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
-
 
 Nested Containers Test Environment (VLAN)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -144,7 +138,6 @@ use (step 4), in this case:
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf
 
-
 The main differences with the default dragonflow local.conf sample are that:
 
 - There is no need to enable the kuryr-kubernetes plugin as this will be
@@ -167,8 +160,6 @@ creating the overcloud VM by using a parent port of a Trunk so that containers
 can be created inside with their own networks. To do that we follow the next
 steps detailed at `Boot VM with a Trunk Port`_.
 
-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
-
 
 Overcloud deployment
 ++++++++++++++++++++
@@ -186,7 +177,6 @@ same steps as for ML2/OVS:
 2. Deploy devstack following steps 3 and 4 detailed at
 `How to try out nested-pods locally (VLAN + trunk)`_.
 
-.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
 
 
 Testing Nested Network Connectivity
@@ -198,4 +188,9 @@ the deployment was successful. To do that check out
 `Testing Nested Network Connectivity`_.
 
 
-
+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
+.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
+.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
+.. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html
@@ -16,11 +16,8 @@ for the VM:
 [DEFAULT]
 service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin
 
-2. Launch a VM with `Neutron trunk port.
-<https://wiki.openstack.org/wiki/Neutron/TrunkPort>`_. The next steps can be
-followed: `Boot VM with a Trunk Port`_.
-
-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+2. Launch a VM with `Neutron trunk port`_. The next steps can be followed:
+`Boot VM with a Trunk Port`_.
 
 3. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
 - Since undercloud Neutron will be used by pods, Neutron services should be
|
||||
- Optionally, the ports pool funcionality can be enabled by following:
|
||||
`How to enable ports pool with devstack`_.
|
||||
|
||||
.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
|
||||
|
||||
- [OPTIONAL] If you want to enable the subport pools driver and the
|
||||
VIF Pool Manager you need to include:
|
||||
|
||||
@@ -92,3 +87,8 @@ for the VM:
 $ sudo systemctl restart devstack@kuryr-daemon.service
 
 Now launch pods using kubectl, Undercloud Neutron will serve the networking.
+
+
+.. _Neutron trunk port: https://wiki.openstack.org/wiki/Neutron/TrunkPort
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
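Step 2 of that file boots the VM on a trunk's parent port. A hedged sketch of the Neutron-side commands (names are placeholders; ``<image>``/``<flavor>`` must be filled in for your cloud):

.. code-block:: console

   $ openstack port create --network private parent-port0
   $ openstack network trunk create --parent-port parent-port0 trunk0
   $ openstack server create --image <image> --flavor <flavor> \
         --nic port-id=parent-port0 overcloud-vm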
@@ -58,12 +58,9 @@ Feel free to edit it if you'd like, but it should work as-is.
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
 
-
 Optionally, the ports pool funcionality can be enabled by following:
 `How to enable ports pool with devstack`_.
 
-.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
-
 5. Run DevStack.
 
 This is going to take a while. It installs a bunch of packages, clones a bunch
@@ -113,8 +110,6 @@ In order to check the default configuration, in term of networks, subnets,
 security groups and loadbalancers created upon a successful devstack stacking,
 you can check the `Inspect default Configuration`_.
 
-.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
-
 
 Testing Network Connectivity
 ++++++++++++++++++++++++++++
@@ -122,8 +117,6 @@ Testing Network Connectivity
 Once the environment is ready, we can test that network connectivity works
 among pods. To do that check out `Testing Network Connectivity`_.
 
-.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
-
 
 Nested Containers Test Environment (VLAN)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -164,14 +157,11 @@ The main differences with the default odl local.conf sample are that:
 
 - ODL Trunk service plugin need to be enable to ensure Trunk ports support.
 
-
 Once the undercloud deployment has finished, the next steps are related to
 create the overcloud VM by using a parent port of a Trunk so that containers
 can be created inside with their own networks. To do that we follow the next
 steps detailed at `Boot VM with a Trunk Port`_.
 
-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
-
 
 Overcloud deployment
 ++++++++++++++++++++
@@ -189,8 +179,6 @@ same steps as for ML2/OVS:
 2. Deploy devstack following steps 3 and 4 detailed at
 `How to try out nested-pods locally (VLAN + trunk)`_.
 
-.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
-
 
 Testing Nested Network Connectivity
 +++++++++++++++++++++++++++++++++++
@@ -200,4 +188,10 @@ overcloud VM, scale it to any number of pods and expose the service to check if
 the deployment was successful. To do that check out
 `Testing Nested Network Connectivity`_.
 
 
+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
+.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
+.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
+.. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html
@@ -2,8 +2,8 @@
 Configure Pod with Additional Interfaces
 ========================================
 
-To create pods with additional Interfaces follow the Kubernetes Network Custom
-Resource Definition De-facto Standard Version 1 [#]_, the next steps can be
+To create pods with additional Interfaces follow the `Kubernetes Network Custom
+Resource Definition De-facto Standard Version 1`_, the next steps can be
 followed:
 
 1. Create Neutron net/subnets which you want the additional interfaces attach
@@ -91,7 +91,4 @@ defined in step 1.
 You may put a list of network separated with comma to attach Pods to more networks.
 
 
-Reference
----------
-
-.. [#] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit?usp=sharing
+.. _Kubernetes Network Custom Resource Definition De-facto Standard Version 1: https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit?usp=sharing
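The comma-separated list mentioned in that hunk is applied through the NPWG standard pod annotation. A sketch, where ``net-a`` and ``net-b`` are hypothetical NetworkAttachmentDefinition names created in step 1:

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: multi-net-pod
     annotations:
       # NPWG v1 annotation key; values reference NetworkAttachmentDefinitions.
       k8s.v1.cni.cncf.io/networks: net-a,net-b
   spec:
     containers:
       - name: app
         image: busybox
         command: ["sleep", "3600"]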
@@ -13,10 +13,6 @@ be implemented in the following way:
 LoadBalancer's VIP.
 * **Endpoints**: The Endpoint object is translated to a LoadBalancer's VIP.
 
-
-.. _services: https://kubernetes.io/docs/concepts/services-networking/service/
-.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0
-
 .. figure:: ../../images/lbaas_translation.svg
    :width: 100%
    :alt: Graphical depiction of the translation explained above
@@ -83,8 +79,6 @@ adds over the Neutron HAProxy agent are:
 You can find a good explanation about the involved steps to install Octavia in
 the `Octavia installation docs`_.
 
-.. _Octavia installation docs: https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html
-
 Simplifying a lot, Octavia works by instantiating a compute resource, i.e. a
 Nova VM, and running HAProxy inside. These single load balancer Nova VMs are
 called *Amphorae*. Each *Amphora* has a separate linux network namespace where
@@ -789,3 +783,8 @@ Troubleshooting
 If you want your current pods to get this change applied, the most
 comfortable way to do that is to delete them and let the Kubernetes
 Deployment create them automatically for you.
+
+
+.. _services: https://kubernetes.io/docs/concepts/services-networking/service/
+.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0
+.. _Octavia installation docs: https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html
@@ -4,7 +4,7 @@
 How to configure SR-IOV ports
 =============================
 
-Current approach of SR-IOV relies on sriov-device-plugin [2]_. While creating
+Current approach of SR-IOV relies on `sriov-device-plugin`_. While creating
 pods with SR-IOV, sriov-device-plugin should be turned on on all nodes. To use
 a SR-IOV port on a baremetal installation the 3 following steps should be done:
 
@@ -27,13 +27,13 @@ Subnet id <UUID of vlan-sriov-net> will be used later in NetworkAttachmentDefini
 default_physnet_subnets = physnet1:<UUID of vlan-sriov-net>
 
 This mapping is required for ability to find appropriate PF/VF functions at
 binding phase. physnet1 is just an identifier for subnet <UUID of
 vlan-sriov-net>. Such kind of transition is necessary to support many-to-many
 relation.
 
 3. Prepare NetworkAttachmentDefinition object.
 Apply NetworkAttachmentDefinition with "sriov" driverType inside,
-as described in [1]_.
+as described in `NPWG spec`_.
 
 .. code-block:: yaml
 
@@ -79,15 +79,16 @@ They may have different subnetId.
 The resource name *intel.com/sriov*, which used in the above example is the
 default resource name. This name was used in SR-IOV network device plugin in
 version 1 (release-v1 branch). But since latest version the device plugin can
-use any arbitrary name of the resources [3]_. This name should match
-"^\[a-zA-Z0-9\_\]+$" regular expression. To be able to work with arbitrary
-resource names physnet_resource_mappings and device_plugin_resource_prefix in
-[sriov] section of kuryr-controller configuration file should be filled. The
-default value for device_plugin_resource_prefix is intel.com, the same as in
-SR-IOV network device plugin, in case of SR-IOV network device plugin was
-started with value of -resource-prefix option different from intel.com, than
-value should be set to device_plugin_resource_prefix, otherwise
-kuryr-kubernetes will not work with resource.
+use any arbitrary name of the resources (see `SRIOV network device plugin for
+Kubernetes`_). This name should match "^\[a-zA-Z0-9\_\]+$" regular expression.
+To be able to work with arbitrary resource names physnet_resource_mappings and
+device_plugin_resource_prefix in [sriov] section of kuryr-controller
+configuration file should be filled. The default value for
+device_plugin_resource_prefix is intel.com, the same as in SR-IOV network
+device plugin, in case of SR-IOV network device plugin was started with value
+of -resource-prefix option different from intel.com, than value should be set
+to device_plugin_resource_prefix, otherwise kuryr-kubernetes will not work with
+resource.
 
 Assume we have following SR-IOV network device plugin (defined by -config-file
 option)
@@ -169,9 +170,6 @@ ports with binding:profile information. Due to this it is necessary to make
 actions with privileged user with admin rights.
 
 
-Reference
----------
-
-.. [1] https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html
-.. [2] https://docs.google.com/document/d/1D3dJeUUmta3sMzqw8JtWFoG2rvcJiWitVro9bsfUTEw
-.. [3] https://github.com/intel/sriov-network-device-plugin
+.. _NPWG spec: https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html
+.. _sriov-device-plugin: https://docs.google.com/document/d/1D3dJeUUmta3sMzqw8JtWFoG2rvcJiWitVro9bsfUTEw
+.. _SRIOV network device plugin for Kubernetes: https://github.com/intel/sriov-network-device-plugin
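Pulling the [sriov] options discussed in that file together, a hedged kuryr.conf fragment (the option names come from the text above; the mapping value format is an assumption) might be:

.. code-block:: ini

   [sriov]
   # Must match the -resource-prefix the SR-IOV device plugin was started with.
   device_plugin_resource_prefix = intel.com
   # Maps a Neutron physnet to an arbitrary device plugin resource name.
   physnet_resource_mappings = physnet1:sriov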
@@ -2,12 +2,9 @@
 Testing SRIOV functionality
 ===========================
 
-Following the steps explained on :ref:`sriov` make sure that you have
-already created and applied a ``NetworkAttachmentDefinition``
-containing a ``sriov`` driverType. Also make sure that
-`sriov-device-plugin <https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc>`_
-is enabled on the nodes.
-
+Following the steps explained on :ref:`sriov` make sure that you have already
+created and applied a ``NetworkAttachmentDefinition`` containing a ``sriov``
+driverType. Also make sure that `sriov-device-plugin`_ is enabled on the nodes.
 ``NetworkAttachmentDefinition`` containing a ``sriov`` driverType might
 look like:
 
@@ -244,3 +241,6 @@ match the ones on the container. Currently the neutron-sriov-nic-agent does
 not properly detect SR-IOV ports assigned to containers. This means that direct
 ports in neutron would always remain in *DOWN* state. This doesn't affect the
 feature in any way other than cosmetically.
+
+
+.. _sriov-device-plugin: https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc
@@ -4,8 +4,11 @@ Kuryr-Kubernetes Release Notes Howto
 
 Release notes are a new feature for documenting new features in OpenStack
 projects. Background on the process, tooling, and methodology is documented in
-a `mailing list post by Doug Hellmann
-<http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html>`_.
+a `mailing list post by Doug Hellmann`_.
 
 For information on how to create release notes, please consult the `Release
-Notes documentation <https://docs.openstack.org/reno/latest/>`_.
+Notes documentation`_.
 
 
+.. _mailing list post by Doug Hellmann: http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html
+.. _Release Notes documentation: https://docs.openstack.org/reno/latest/
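In practice, creating a new note with reno usually boils down to a single command (assuming the project's standard tox setup; the slug is arbitrary):

.. code-block:: console

   $ tox -e venv -- reno new add-some-feature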