Change inline hyperlinks to link-target pairs.
Inline, named hyperlinks seem to be fine, but often they just add noise to the paragraphs. This patch proposes the link-target style for hyperlinks, where the named link(s) are placed in the paragraph while the target sits at the bottom of the document.

Change-Id: Ia4f4c66f51ea193dc201b3dba5be2788f20765e0
parent fd440fcdcb
commit 80b5ecd41b
@@ -29,5 +29,7 @@ require it or to use different segments and, for example, route between them.
 Contribution guidelines
 -----------------------

-For the process of new feature addition, refer to the `Kuryr Policy
-<https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_
+For the process of new feature addition, refer to the `Kuryr Policy`_.
+
+
+.. _Kuryr Policy: https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies
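The conversion this patch performs can be sketched mechanically. The snippet below is an illustration only — the actual edits were made by hand, and the regex covers just the simple `Name <url>`_ shape visible in the hunks:

```python
import re

def inline_to_target(text):
    """Convert RST inline named hyperlinks (`Name <url>`_) to the
    link-target style: `Name`_ stays in the paragraph, while a
    .. _Name: url block is appended at the end of the document."""
    targets = []

    def repl(match):
        name, url = match.group(1).strip(), match.group(2)
        targets.append(f".. _{name}: {url}")
        return f"`{name}`_"

    # `Name <url>`_ — name and URL may be split across lines, so \s*
    # (which matches newlines) is allowed between them.
    converted = re.sub(r"`([^`<]+?)\s*<([^>]+)>`_", repl, text)
    if targets:
        converted = converted.rstrip() + "\n\n" + "\n".join(targets) + "\n"
    return converted

doc = ("Refer to the `Kuryr Policy "
       "<https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_.")
print(inline_to_target(doc))
```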
@@ -44,12 +44,11 @@ Leader election
 ~~~~~~~~~~~~~~~

-The idea here is to use leader election mechanism based on Kubernetes
-endpoints. The idea is neatly `explained on Kubernetes blog
-<https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/>`_.
-Election is based on Endpoint resources, that hold annotation about current
-leader and its leadership lease time. If leader dies, other instances of the
-service are free to take over the record. Kubernetes API mechanisms will
-provide update exclusion mechanisms to prevent race conditions.
+The idea here is to use leader election mechanism based on Kubernetes
+endpoints. The idea is neatly `explained on Kubernetes blog`_. Election is
+based on Endpoint resources, that hold annotation about current leader and its
+leadership lease time. If leader dies, other instances of the service are free
+to take over the record. Kubernetes API mechanisms will provide update
+exclusion mechanisms to prevent race conditions.

 This can be implemented by adding another *leader-elector* container to each
 of kuryr-controller pods:
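The lease-takeover behaviour described in this hunk can be modelled in a few lines. This is a toy sketch: the class and field names are invented for illustration and do not match the annotation keys the leader-elector image actually writes:

```python
import time

class EndpointLease:
    """Toy model of the Endpoints-annotation lease described above:
    the annotation records the current leader and a lease deadline."""

    def __init__(self, duration=5.0):
        self.duration = duration
        self.holder = None
        self.expires = 0.0

    def try_acquire(self, candidate, now=None):
        """A candidate takes the lease if it is free, expired, or already
        held by the candidate (renewal). Returns the current leader."""
        now = time.monotonic() if now is None else now
        if self.holder is None or now >= self.expires or self.holder == candidate:
            self.holder = candidate
            self.expires = now + self.duration
        return self.holder

lease = EndpointLease(duration=5.0)
assert lease.try_acquire("controller-a", now=0.0) == "controller-a"
# While the lease is fresh, another instance cannot take over.
assert lease.try_acquire("controller-b", now=2.0) == "controller-a"
# Once the lease expires (leader died), takeover succeeds.
assert lease.try_acquire("controller-b", now=6.0) == "controller-b"
```

In the real mechanism the "update exclusion" the text mentions comes from Kubernetes API optimistic concurrency (resourceVersion), which this sketch does not model.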
@@ -139,3 +138,6 @@ consistent hash ring to decide which instance will process which resource.
 Potentially this can be extended with support for non-containerized deployments
 by using Tooz and some other tool providing leader-election - like Consul or
 Zookeeper.
+
+
+.. _explained on Kubernetes blog: https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/
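The consistent hash ring mentioned in this hunk's context (deciding which controller instance processes which resource) works roughly as below; the instance names are invented for illustration:

```python
import bisect
import hashlib

def _h(key):
    # Stable hash: Python's built-in hash() is salted per process,
    # so a cryptographic digest is used instead.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_ring(instances, vnodes=16):
    """Place several virtual points per instance on a hash ring."""
    return sorted((_h(f"{inst}-{i}"), inst)
                  for inst in instances for i in range(vnodes))

def assign(ring, resource_uid):
    """Walk clockwise from the resource's hash to the next point."""
    points = [p for p, _ in ring]
    idx = bisect.bisect(points, _h(resource_uid)) % len(ring)
    return ring[idx][1]

ring = build_ring(["controller-0", "controller-1", "controller-2"])
owner = assign(ring, "pod-uid-1234")
# The same resource always maps to the same instance.
assert owner == assign(ring, "pod-uid-1234")
assert owner in {"controller-0", "controller-1", "controller-2"}
```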
@@ -151,12 +151,11 @@ processing. The specific Handler associates itself with specific Kubernetes
 object kind (through setting OBJECT_KIND) and is expected to implement at
 least one of the methods of the base class to handle at least one of the
 ADDED/MODIFIED/DELETED events of the Kubernetes object. For details, see
-`k8s-api
-<https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds>`_.
-Since both ADDED and MODIFIED event types trigger very similar sequence of
-actions, Handler has 'on_present' method that is invoked for both event types.
-The specific Handler implementation should strive to put all the common ADDED
-and MODIFIED event handling logic in this method to avoid code duplication.
+`k8s-api`_. Since both ADDED and MODIFIED event types trigger very similar
+sequence of actions, Handler has 'on_present' method that is invoked for both
+event types. The specific Handler implementation should strive to put all the
+common ADDED and MODIFIED event handling logic in this method to avoid code
+duplication.


 Pluggable Handlers
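The on_present dispatch described in this hunk can be sketched as follows; class and method names follow the text, not the exact kuryr-kubernetes source:

```python
class ResourceEventHandler:
    """Base class sketch: ADDED and MODIFIED both funnel into
    on_present, so subclasses implement the shared logic once."""
    OBJECT_KIND = None

    def __call__(self, event):
        event_type = event.get("type")
        obj = event.get("object")
        if event_type in ("ADDED", "MODIFIED"):
            self.on_present(obj)
        elif event_type == "DELETED":
            self.on_deleted(obj)

    def on_present(self, obj):
        pass

    def on_deleted(self, obj):
        pass

class PodHandler(ResourceEventHandler):
    OBJECT_KIND = "Pod"

    def __init__(self):
        self.seen = []

    def on_present(self, pod):
        self.seen.append(pod["metadata"]["name"])

handler = PodHandler()
handler({"type": "ADDED", "object": {"metadata": {"name": "pod-1"}}})
handler({"type": "MODIFIED", "object": {"metadata": {"name": "pod-1"}}})
print(handler.seen)  # both event types reached on_present
```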
@@ -306,6 +305,9 @@ APIs to perform its tasks and wait on socket for result.
 Kubernetes Documentation
 ------------------------

-The `Kubernetes reference documentation
-<https://kubernetes.io/docs/reference/>`_ is a great source for finding more
-details about Kubernetes API, CLIs, and tools.
+The `Kubernetes reference documentation`_ is a great source for finding more
+details about Kubernetes API, CLIs, and tools.
+
+
+.. _k8s-api: https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds
+.. _Kubernetes reference documentation: https://kubernetes.io/docs/reference/
@@ -26,13 +26,13 @@ is supported by the kuryr integration.
 Overview
 --------

-A Kubernetes Ingress [1]_ is used to give services externally-reachable URLs,
+A `Kubernetes Ingress`_ is used to give services externally-reachable URLs,
 load balance traffic, terminate SSL, offer name based virtual
 hosting, and more.
 Each Ingress consists of a name, service identifier, and (optionally)
 security configuration.

-A Kubernetes Ingress Controller [2]_ is an entity that watches the apiserver's
+A `Kubernetes Ingress Controller`_ is an entity that watches the apiserver's
 /ingress resources for updates. Its job is to satisfy requests for Ingresses.

@@ -50,7 +50,7 @@ A L7 router is a logical entity responsible for L7 routing based on L7 rules
 database, when an HTTP packet hits the L7 router, the L7 router uses its
 rules database to determine the endpoint destination (based on the fields
 content in HTTP header, e.g: HOST_NAME).
-Kuryr will use neutron LBaaS L7 policy capability [3]_ to perform
+Kuryr will use neutron LBaaS `L7 policy capability`_ to perform
 the L7 routing task.

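The Host-header lookup this hunk describes amounts to a match against a rules table. A minimal sketch, with an invented rule shape (Neutron LBaaS expresses the same thing via L7Policy/L7Rule objects):

```python
def route_http(host_header, l7_rules, default_pool):
    """Pick a backend pool by matching the HTTP Host header against
    the L7 rules database; fall back to the default pool."""
    for rule in l7_rules:
        if rule["hostname"] == host_header:
            return rule["pool"]
    return default_pool

rules = [
    {"hostname": "www.example.com", "pool": "pool-ingress-example"},
    {"hostname": "api.example.com", "pool": "pool-ingress-api"},
]
assert route_http("api.example.com", rules, "pool-default") == "pool-ingress-api"
assert route_http("unknown.host", rules, "pool-default") == "pool-default"
```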
@@ -262,9 +262,6 @@ This section describe in details the following scenarios:
 handler will set its internal state to 'no Ingress is pointing' state.


-References
-==========
-
-.. [1] https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
-.. [2] https://github.com/kubernetes/ingress-nginx/blob/master/README.md
-.. [3] https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
+.. _Kubernetes Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
+.. _Kubernetes Ingress Controller: https://github.com/kubernetes/ingress-nginx/blob/master/README.md
+.. _L7 policy capability: https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
@@ -26,7 +26,7 @@ by kuryr-kubernetes.
 Overview
 --------

-OpenShift Origin [1]_ is an open source cloud application development and
+`OpenShift Origin`_ is an open source cloud application development and
 hosting platform that automates the provisioning, management and scaling
 of applications.

@@ -35,11 +35,11 @@ application development and multi-tenancy deployment. OpenShift adds developer
 and operations-centric tools on top of Kubernetes to enable rapid application
 development, easy deployment and scaling, and long-term lifecycle maintenance.

-An OpenShift Route [2]_ exposes a Service at a host name, like www.example.com,
+The `OpenShift Route`_ exposes a Service at a host name, like www.example.com,
 so that external clients can reach it by name.
 The Route is an Openshift resource that defines the rules you want to apply to
 incoming connections.
-The Openshift Routes concept introduced before Ingress [3]_ was supported by
+The Openshift Routes concept was `introduced before Ingress`_ was supported by
 kubernetes, the Openshift Route matches the functionality of kubernetes Ingress.

@@ -162,9 +162,6 @@ B. Create Service/Endpoints, create OCP-Route, delete OCP-Route.
 handler will set its internal state to 'no Ingress is pointing' state.


-References
-==========
-
-.. [1] https://www.openshift.org/
-.. [2] https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
-.. [3] https://kubernetes.io/docs/concepts/Services-networking/ingress/
+.. _OpenShift Origin: https://www.openshift.org/
+.. _OpenShift Route: https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
+.. _introduced before Ingress: https://kubernetes.io/docs/concepts/Services-networking/ingress/
@@ -20,8 +20,7 @@ Purpose
 -------

-The purpose of this document is to present Kuryr Kubernetes Port and PortPool
-CRD [1]_ usage, capturing the design decisions currently taken by the Kuryr
-team.
+The purpose of this document is to present Kuryr Kubernetes Port and PortPool
+`CRD`_ usage, capturing the design decisions currently taken by the Kuryr team.

 The main purpose of the Port CRD is to keep Neutron resources tracking as part
 of K8s data model. The main idea behind is to try to minimize the amount of
@@ -199,7 +198,4 @@ namespace subnet driver and it could be similarly applied to other SDN
 resources, such as LoadBalancers.


-References
-==========
-
-.. [1] https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources
+.. _CRD: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources
@@ -31,10 +31,8 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods and
 a policy by which to access them. Service is a Kubernetes managed API object.
 For Kubernetes-native applications, Kubernetes offers an Endpoints API that is
 updated whenever the set of Pods in a Service changes. For detailed information
-please refer to `Kubernetes service
-<http://kubernetes.io/docs/user-guide/services/>`_ Kubernetes supports services
-with kube-proxy component that runs on each node, `Kube-Proxy
-<http://kubernetes.io/docs/admin/kube-proxy/>`_.
+please refer to `Kubernetes service`_. Kubernetes supports services with
+kube-proxy component that runs on each node, `Kube-Proxy`_.


 Proposed Solution
@@ -84,3 +82,7 @@ details for service mapping.
 LBaaS Driver is added to manage service translation to the LBaaSv2-like API.
 It abstracts all the details of service translation to Load Balancer.
 LBaaSv2Driver supports this interface by mapping to neutron LBaaSv2 constructs.
+
+
+.. _Kubernetes service: http://kubernetes.io/docs/user-guide/services/
+.. _Kube-Proxy: http://kubernetes.io/docs/admin/kube-proxy/
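The Service-to-load-balancer translation this hunk's context mentions can be sketched roughly as below. The field names only mimic LBaaSv2 concepts (load balancer, listener, pool, member) and are not the driver's actual data model:

```python
def service_to_lbaas(service, endpoint_addresses):
    """Sketch of the mapping: a Service becomes a load balancer with
    one listener per port, and Endpoints become pool members."""
    lb = {"vip_address": service["spec"]["clusterIP"], "listeners": []}
    for port in service["spec"]["ports"]:
        lb["listeners"].append({
            "protocol": port["protocol"],
            "protocol_port": port["port"],
            "pool": {
                "members": [
                    {"address": addr, "protocol_port": port["targetPort"]}
                    for addr in endpoint_addresses
                ]
            },
        })
    return lb

svc = {"spec": {"clusterIP": "10.0.0.10",
                "ports": [{"protocol": "TCP", "port": 80, "targetPort": 8080}]}}
lb = service_to_lbaas(svc, ["10.1.0.5", "10.1.0.6"])
assert lb["vip_address"] == "10.0.0.10"
assert len(lb["listeners"][0]["pool"]["members"]) == 2
```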
@@ -27,10 +27,9 @@ If you want to run kuryr CNI without the daemon, build the image with:
 $ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .

 Alternatively, you can remove ``imagePullPolicy: Never`` from kuryr-controller
-Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller
-<https://hub.docker.com/r/kuryr/controller/>`_ and `cni
-<https://hub.docker.com/r/kuryr/cni/>`_ images from the Docker Hub. Those
-definitions will be generated in next step.
+Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller`_
+and `cni`_ images from the Docker Hub. Those definitions will be generated in
+next step.


 Generating Kuryr resource definitions for Kubernetes
@@ -169,3 +168,7 @@ To see kuryr-controller logs:

 NOTE: kuryr-cni has no logs and to debug failures you need to check out kubelet
 logs.
+
+
+.. _controller: https://hub.docker.com/r/kuryr/controller/
+.. _cni: https://hub.docker.com/r/kuryr/cni/
@@ -156,11 +156,14 @@ You can verify that this IP is really assigned to Neutron port:

 If those steps were successful, then it looks like your DevStack with
 kuryr-kubernetes is working correctly. In case of errors, copy last ~50 lines
-of the logs, paste them into `paste.openstack.org
-<http://paste.openstack.org>`_ and ask other developers for help on `Kuryr's
-IRC channel <chat.freenode.net:6667/openstack-kuryr>`_. More info on how to use
-DevStack can be found in `DevStack Documentation
-<https://docs.openstack.org/devstack/latest/>`_, especially in section `Using
-Systemd in DevStack
-<https://docs.openstack.org/devstack/latest/systemd.html>`_, which explains how
-to use ``systemctl`` to control services and ``journalctl`` to read its logs.
+of the logs, paste them into `paste.openstack.org`_ and ask other developers
+for help on `Kuryr's IRC channel`_. More info on how to use DevStack can be
+found in `DevStack Documentation`_, especially in section `Using Systemd in
+DevStack`_, which explains how to use ``systemctl`` to control services and
+``journalctl`` to read its logs.
+
+
+.. _paste.openstack.org: http://paste.openstack.org
+.. _Kuryr's IRC channel: chat.freenode.net:6667/openstack-kuryr
+.. _DevStack Documentation: https://docs.openstack.org/devstack/latest/
+.. _Using Systemd in DevStack: https://docs.openstack.org/devstack/latest/systemd.html
@@ -67,8 +67,6 @@ Feel free to edit it if you'd like, but it should work as-is.
 Optionally, the ports pool funcionality can be enabled by following:
 `How to enable ports pool with devstack`_.

-.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
-
 5. Run DevStack.

 Expect it to take a while. It installs required packages, clones a bunch
@@ -108,8 +106,6 @@ In order to check the default configuration, in term of networks, subnets,
 security groups and loadbalancers created upon a successful devstack stacking,
 you can check the `Inspect default Configuration`_.

-.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
-

 Testing Network Connectivity
 ++++++++++++++++++++++++++++
@@ -117,8 +113,6 @@ Testing Network Connectivity
 Once the environment is ready, we can test that network connectivity works
 among pods. To do that check out `Testing Network Connectivity`_.

-.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
-

 Nested Containers Test Environment (VLAN)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -144,7 +138,6 @@ use (step 4), in this case:
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf

-
 The main differences with the default dragonflow local.conf sample are that:

 - There is no need to enable the kuryr-kubernetes plugin as this will be
@@ -167,8 +160,6 @@ creating the overcloud VM by using a parent port of a Trunk so that containers
 can be created inside with their own networks. To do that we follow the next
 steps detailed at `Boot VM with a Trunk Port`_.

-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
-

 Overcloud deployment
 ++++++++++++++++++++
@@ -186,7 +177,6 @@ same steps as for ML2/OVS:
 2. Deploy devstack following steps 3 and 4 detailed at
 `How to try out nested-pods locally (VLAN + trunk)`_.

-.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
-

 Testing Nested Network Connectivity
@@ -198,4 +188,9 @@ the deployment was successful. To do that check out
 `Testing Nested Network Connectivity`_.


+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
+.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
+.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
 .. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html
@@ -16,11 +16,8 @@ for the VM:
 [DEFAULT]
 service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin

-2. Launch a VM with `Neutron trunk port.
-<https://wiki.openstack.org/wiki/Neutron/TrunkPort>`_. The next steps can be
-followed: `Boot VM with a Trunk Port`_.
-
-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+2. Launch a VM with `Neutron trunk port`_. The next steps can be followed:
+`Boot VM with a Trunk Port`_.

 3. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
 - Since undercloud Neutron will be used by pods, Neutron services should be
@@ -52,8 +49,6 @@ for the VM:
 - Optionally, the ports pool funcionality can be enabled by following:
 `How to enable ports pool with devstack`_.

-.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
-
 - [OPTIONAL] If you want to enable the subport pools driver and the
 VIF Pool Manager you need to include:

@@ -92,3 +87,8 @@ for the VM:
 $ sudo systemctl restart devstack@kuryr-daemon.service

 Now launch pods using kubectl, Undercloud Neutron will serve the networking.
+
+
+.. _Neutron trunk port: https://wiki.openstack.org/wiki/Neutron/TrunkPort
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
@@ -58,12 +58,9 @@ Feel free to edit it if you'd like, but it should work as-is.
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf


 Optionally, the ports pool funcionality can be enabled by following:
 `How to enable ports pool with devstack`_.

-.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
-
-
 5. Run DevStack.

 This is going to take a while. It installs a bunch of packages, clones a bunch
@@ -113,8 +110,6 @@ In order to check the default configuration, in term of networks, subnets,
 security groups and loadbalancers created upon a successful devstack stacking,
 you can check the `Inspect default Configuration`_.

-.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
-

 Testing Network Connectivity
 ++++++++++++++++++++++++++++
@@ -122,8 +117,6 @@ Testing Network Connectivity
 Once the environment is ready, we can test that network connectivity works
 among pods. To do that check out `Testing Network Connectivity`_.

-.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
-

 Nested Containers Test Environment (VLAN)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -164,14 +157,11 @@ The main differences with the default odl local.conf sample are that:

 - ODL Trunk service plugin need to be enable to ensure Trunk ports support.


 Once the undercloud deployment has finished, the next steps are related to
 create the overcloud VM by using a parent port of a Trunk so that containers
 can be created inside with their own networks. To do that we follow the next
 steps detailed at `Boot VM with a Trunk Port`_.

-.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
-
-
 Overcloud deployment
 ++++++++++++++++++++
@@ -189,8 +179,6 @@ same steps as for ML2/OVS:
 2. Deploy devstack following steps 3 and 4 detailed at
 `How to try out nested-pods locally (VLAN + trunk)`_.

-.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
-

 Testing Nested Network Connectivity
 +++++++++++++++++++++++++++++++++++
@@ -200,4 +188,10 @@ overcloud VM, scale it to any number of pods and expose the service to check if
 the deployment was successful. To do that check out
 `Testing Nested Network Connectivity`_.


+.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
+.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
+.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
+.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
+.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
 .. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html
@@ -2,8 +2,8 @@
 Configure Pod with Additional Interfaces
 ========================================

-To create pods with additional Interfaces follow the Kubernetes Network Custom
-Resource Definition De-facto Standard Version 1 [#]_, the next steps can be
+To create pods with additional Interfaces follow the `Kubernetes Network Custom
+Resource Definition De-facto Standard Version 1`_, the next steps can be
 followed:

 1. Create Neutron net/subnets which you want the additional interfaces attach
@@ -91,7 +91,4 @@ defined in step 1.
 You may put a list of network separated with comma to attach Pods to more networks.


-Reference
----------
-
-.. [#] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit?usp=sharing
+.. _Kubernetes Network Custom Resource Definition De-facto Standard Version 1: https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit?usp=sharing
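The comma-separated network list mentioned in this hunk's context can be split with a trivial helper. Only the comma-separated format comes from the text; the annotation key named in the docstring is an assumption:

```python
def parse_networks_annotation(value):
    """Split a comma-separated list of additional networks, as in the
    text above. The pod annotation key carrying this value is assumed
    to be 'k8s.v1.cni.cncf.io/networks' per the de-facto standard."""
    return [name.strip() for name in value.split(",") if name.strip()]

assert parse_networks_annotation("net-a, net-b") == ["net-a", "net-b"]
```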
@@ -13,10 +13,6 @@ be implemented in the following way:
 LoadBalancer's VIP.
 * **Endpoints**: The Endpoint object is translated to a LoadBalancer's VIP.

-
-.. _services: https://kubernetes.io/docs/concepts/services-networking/service/
-.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0
-
 .. figure:: ../../images/lbaas_translation.svg
    :width: 100%
    :alt: Graphical depiction of the translation explained above
@@ -83,8 +79,6 @@ adds over the Neutron HAProxy agent are:
 You can find a good explanation about the involved steps to install Octavia in
 the `Octavia installation docs`_.

-.. _Octavia installation docs: https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html
-
 Simplifying a lot, Octavia works by instantiating a compute resource, i.e. a
 Nova VM, and running HAProxy inside. These single load balancer Nova VMs are
 called *Amphorae*. Each *Amphora* has a separate linux network namespace where
@ -789,3 +783,8 @@ Troubleshooting
If you want your current pods to get this change applied, the most
comfortable way to do that is to delete them and let the Kubernetes
Deployment create them automatically for you.


.. _services: https://kubernetes.io/docs/concepts/services-networking/service/
.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0
.. _Octavia installation docs: https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html

@ -4,7 +4,7 @@
How to configure SR-IOV ports
=============================

Current approach of SR-IOV relies on sriov-device-plugin [2]_. While creating
Current approach of SR-IOV relies on `sriov-device-plugin`_. While creating
pods with SR-IOV, sriov-device-plugin should be turned on on all nodes. To use
a SR-IOV port on a baremetal installation the 3 following steps should be done:

@ -33,7 +33,7 @@ relation.

3. Prepare NetworkAttachmentDefinition object.
   Apply NetworkAttachmentDefinition with "sriov" driverType inside,
   as described in [1]_.
   as described in `NPWG spec`_.

   .. code-block:: yaml

@ -79,15 +79,16 @@ They may have different subnetId.
The resource name *intel.com/sriov*, which used in the above example is the
default resource name. This name was used in SR-IOV network device plugin in
version 1 (release-v1 branch). But since latest version the device plugin can
use any arbitrary name of the resources [3]_. This name should match
"^\[a-zA-Z0-9\_\]+$" regular expression. To be able to work with arbitrary
resource names physnet_resource_mappings and device_plugin_resource_prefix in
[sriov] section of kuryr-controller configuration file should be filled. The
default value for device_plugin_resource_prefix is intel.com, the same as in
SR-IOV network device plugin, in case of SR-IOV network device plugin was
started with value of -resource-prefix option different from intel.com, than
value should be set to device_plugin_resource_prefix, otherwise
kuryr-kubernetes will not work with resource.
use any arbitrary name of the resources (see `SRIOV network device plugin for
Kubernetes`_). This name should match "^\[a-zA-Z0-9\_\]+$" regular expression.
To be able to work with arbitrary resource names physnet_resource_mappings and
device_plugin_resource_prefix in [sriov] section of kuryr-controller
configuration file should be filled. The default value for
device_plugin_resource_prefix is intel.com, the same as in SR-IOV network
device plugin, in case of SR-IOV network device plugin was started with value
of -resource-prefix option different from intel.com, than value should be set
to device_plugin_resource_prefix, otherwise kuryr-kubernetes will not work with
resource.

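The ``[sriov]`` options described above might be filled in roughly as follows. This is a hypothetical sketch of a kuryr-controller configuration fragment: the physnet and resource names are made up, and the ``physnet:resource`` mapping format is inferred from the surrounding description.

```ini
# Hypothetical [sriov] fragment of a kuryr-controller configuration file.
# "physnet2" and "sriov_nic" are placeholder names.
[sriov]
physnet_resource_mappings = physnet2:sriov_nic
# Matches the SR-IOV network device plugin's default -resource-prefix;
# change it if the plugin was started with a different prefix.
device_plugin_resource_prefix = intel.com
```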
Assume we have following SR-IOV network device plugin (defined by -config-file
option)

@ -169,9 +170,6 @@ ports with binding:profile information. Due to this it is necessary to make
actions with privileged user with admin rights.


Reference
---------

.. [1] https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html
.. [2] https://docs.google.com/document/d/1D3dJeUUmta3sMzqw8JtWFoG2rvcJiWitVro9bsfUTEw
.. [3] https://github.com/intel/sriov-network-device-plugin
.. _NPWG spec: https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html
.. _sriov-device-plugin: https://docs.google.com/document/d/1D3dJeUUmta3sMzqw8JtWFoG2rvcJiWitVro9bsfUTEw
.. _SRIOV network device plugin for Kubernetes: https://github.com/intel/sriov-network-device-plugin

@ -2,12 +2,9 @@
Testing SRIOV functionality
===========================

Following the steps explained on :ref:`sriov` make sure that you have
already created and applied a ``NetworkAttachmentDefinition``
containing a ``sriov`` driverType. Also make sure that
`sriov-device-plugin <https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc>`_
is enabled on the nodes.
Following the steps explained on :ref:`sriov` make sure that you have already
created and applied a ``NetworkAttachmentDefinition`` containing a ``sriov``
driverType. Also make sure that `sriov-device-plugin`_ is enabled on the nodes.

``NetworkAttachmentDefinition`` containing a ``sriov`` driverType might
look like:

@ -244,3 +241,6 @@ match the ones on the container. Currently the neutron-sriov-nic-agent does
not properly detect SR-IOV ports assigned to containers. This means that direct
ports in neutron would always remain in *DOWN* state. This doesn't affect the
feature in any way other than cosmetically.


.. _sriov-device-plugin: https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc

@ -4,8 +4,11 @@ Kuryr-Kubernetes Release Notes Howto

Release notes are a new feature for documenting new features in OpenStack
projects. Background on the process, tooling, and methodology is documented in
a `mailing list post by Doug Hellmann
<http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html>`_.
a `mailing list post by Doug Hellmann`_.

For information on how to create release notes, please consult the `Release
Notes documentation <https://docs.openstack.org/reno/latest/>`_.
Notes documentation`_.


.. _mailing list post by Doug Hellmann: http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html
.. _Release Notes documentation: https://docs.openstack.org/reno/latest/