From c3c270c75203a86f366edfd9295aea534b7d4c4b Mon Sep 17 00:00:00 2001
From: Roman Dobosz
Date: Tue, 12 Nov 2019 16:16:18 +0100
Subject: [PATCH] Fix text blocks formatting

There are cases where text blocks in reStructuredText files exceed the
79-column limit or are formatted in a weird way. This patch fixes them.

Also, a couple of typos were tidied up.

Change-Id: I78c20cbb45c74e817d60582439acc7b01b577a83
---
 CONTRIBUTING.rst                              |  15 +--
 README.rst                                    |   3 +-
 contrib/devstack-heat/README.rst              |  30 ++--
 contrib/pools-management/README.rst           |   6 +-
 doc/source/devref/kuryr_kubernetes_design.rst | 111 ++++++++--------
 .../kuryr_kubernetes_ocp_route_design.rst     |  27 ++--
 doc/source/devref/port_crd_usage.rst          |  15 +--
 doc/source/devref/service_support.rst         |  26 ++--
 .../devref/vif_handler_drivers_design.rst     |  30 ++---
 doc/source/installation/containerized.rst     |  37 +++---
 doc/source/installation/devstack/basic.rst    |  18 +--
 .../installation/devstack/containerized.rst   |   8 +-
 .../devstack/dragonflow_support.rst           |   2 +-
 .../installation/devstack/nested-macvlan.rst  |   3 +-
 .../installation/devstack/nested-vlan.rst     |  17 ++-
 .../installation/devstack/odl_support.rst     |   2 +-
 .../installation/devstack/ports-pool.rst      |   2 +-
 doc/source/installation/ipv6.rst              |   6 +-
 doc/source/installation/manual.rst            |   7 +-
 doc/source/installation/network_policy.rst    |   8 +-
 doc/source/installation/ocp_route.rst         | 121 +++++++++---------
 doc/source/installation/ports-pool.rst        |  20 +--
 doc/source/installation/services.rst          |  75 +++++------
 doc/source/installation/sriov.rst             |  52 ++++----
 .../installation/testing_sriov_functional.rst |  37 +++---
 .../installation/testing_udp_services.rst     |  26 ++--
 doc/source/installation/trunk_ports.rst       |   4 +-
 doc/source/installation/upgrades.rst          |   8 +-
 releasenotes/source/README.rst                |  11 +-
 29 files changed, 375 insertions(+), 352 deletions(-)

diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index a7d90c63a..f863f307a 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -1,20 +1,17 @@
 If you would like to contribute to the development of OpenStack, you must
 follow the steps in this page:
-
-   https://docs.openstack.org/infra/manual/developers.html
+https://docs.openstack.org/infra/manual/developers.html
 
 If you already have a good understanding of how the system works and your
-OpenStack accounts are set up, you can skip to the development workflow
-section of this documentation to learn how changes to OpenStack should be
-submitted for review via the Gerrit tool:
-
-   https://docs.openstack.org/infra/manual/developers.html#development-workflow
+OpenStack accounts are set up, you can skip to the development workflow section
+of this documentation to learn how changes to OpenStack should be submitted for
+review via the Gerrit tool:
+https://docs.openstack.org/infra/manual/developers.html#development-workflow
 
 Pull requests submitted through GitHub will be ignored.
 
 Bugs should be filed on Launchpad, not GitHub:
-
-   https://bugs.launchpad.net/kuryr-kubernetes
+https://bugs.launchpad.net/kuryr-kubernetes
 
 If you want to have your code checked for pep8 automatically before
 committing changes, you can just do::
diff --git a/README.rst b/README.rst
index 1dac51a80..0c727bea0 100644
--- a/README.rst
+++ b/README.rst
@@ -29,4 +29,5 @@ require it or to use different segments and, for example, route between them.
 
 Contribution guidelines
 -----------------------
 
-For the process of new feature addition, refer to the `Kuryr Policy `_
+For the process of new feature addition, refer to the `Kuryr Policy
+`_
diff --git a/contrib/devstack-heat/README.rst b/contrib/devstack-heat/README.rst
index 19bedc022..6f58ba86b 100644
--- a/contrib/devstack-heat/README.rst
+++ b/contrib/devstack-heat/README.rst
@@ -3,41 +3,41 @@ Kuryr Heat Templates
 ====================
 
 This set of scripts and Heat templates are useful for deploying devstack
-scenarios. It handles the creation of an allinone devstack nova instance and its
-networking needs.
+scenarios. It handles the creation of an all-in-one devstack nova instance and
+its networking needs.
 
 Prerequisites
 -------------
 
-Packages to install on the host you run devstack-heat (not on the cloud server):
+Packages to install on the host you run devstack-heat (not on the cloud
+server):
 
 * jq
 * openstack-cli
 
 If you want to run devstack from the master commit, this application requires a
-github token due to the github api rate limiting:
-You can generate one without any permissions at:
+GitHub token due to the GitHub API rate limiting.
+You can generate one without any permissions at
+https://github.com/settings/tokens/new.
 
-    https://github.com/settings/tokens/new
-
-Then put it in your ~/.bashrc an ENV variable called DEVSTACK_HEAT_GH_TOKEN like
-so:
+Then put it in your ``~/.bashrc`` as an environment variable called
+``DEVSTACK_HEAT_GH_TOKEN``, like so:
 
     echo "export DEVSTACK_HEAT_GH_TOKEN=my_token" >> ~/.bashrc
 
 After creating the instance, devstack-heat will immediately start creating a
-devstack `stack` user and using devstack to stack kuryr-kubernetes. When it is
-finished, there'll be a file names `/opt/stack/ready`.
+devstack ``stack`` user and using devstack to stack kuryr-kubernetes. When it
+is finished, there'll be a file named ``/opt/stack/ready``.
 
 How to run
 ----------
 
 In order to run it, make sure that you have sourced your OpenStack cloud
-provider openrc file and tweaked `hot/parameters.yml` to your liking then launch
-with::
+provider openrc file and tweaked ``hot/parameters.yml`` to your liking, then
+launch with::
 
     ./devstack-heat stack
 
@@ -89,5 +89,5 @@ To delete the deployment::
 Supported images
 ~~~~~~~~~~~~~~~~
 
-It should work with the latest centos7 image. It is not tested with the latest
-ubuntu 16.04 cloud image but it will probably work.
+It should work with the latest CentOS 7 image. It is not tested with the latest
+Ubuntu 16.04 cloud image, but it will probably work.
diff --git a/contrib/pools-management/README.rst b/contrib/pools-management/README.rst
index 46a28f091..7621e341c 100644
--- a/contrib/pools-management/README.rst
+++ b/contrib/pools-management/README.rst
@@ -7,7 +7,7 @@ a given amount of subports at the specified pools (i.e., at the VM trunks), as
 well as to free the unused ones.
 
 The first step to perform is to enable the pool manager by adding this to
-`/etc/kuryr/kuryr.conf`::
+``/etc/kuryr/kuryr.conf``::
 
     [kubernetes]
     enable_manager = True
@@ -17,7 +17,7 @@ If the environment has been deployed with devstack, the socket file directory
 will have been created automatically. However, if that is not the case, you
 need to create the directory for the socket file with the right permissions.
If no other path is specified, the default location for the socket file is: -`/run/kuryr/kuryr_manage.sock` +``/run/kuryr/kuryr_manage.sock`` Hence, you need to create that directory and give it read/write access to the user who is running the kuryr-kubernetes.service, for instance:: @@ -36,7 +36,7 @@ Populate subport pools for nested environment Once the nested environment is up and running, and the pool manager has been started, we can populate the pools, i.e., the trunk ports in used by the -overcloud VMs, with subports. From the `undercloud` we just need to make use +overcloud VMs, with subports. From the *undercloud* we just need to make use of the subports.py tool. To obtain information about the tool options:: diff --git a/doc/source/devref/kuryr_kubernetes_design.rst b/doc/source/devref/kuryr_kubernetes_design.rst index 2f7164150..e8af57dfd 100644 --- a/doc/source/devref/kuryr_kubernetes_design.rst +++ b/doc/source/devref/kuryr_kubernetes_design.rst @@ -21,7 +21,7 @@ Purpose The purpose of this document is to present the main Kuryr-K8s integration components and capture the design decisions of each component currently taken -by the kuryr team. +by the Kuryr team. Goal Statement @@ -30,19 +30,19 @@ Goal Statement Enable OpenStack Neutron realization of the Kubernetes networking. Start by supporting network connectivity and expand to support advanced features, such as Network Policies. In the future, it may be extended to some other -openstack services. +OpenStack services. Overview -------- -In order to integrate Neutron into kubernetes networking, 2 components are +In order to integrate Neutron into Kubernetes networking, 2 components are introduced: Controller and CNI Driver. Controller is a supervisor component responsible to maintain translation of networking relevant Kubernetes model into the OpenStack (i.e. Neutron) model. This can be considered as a centralized service (supporting HA mode in the future). -CNI driver is responsible for binding kubernetes pods on worker nodes into +CNI driver is responsible for binding Kubernetes pods on worker nodes into Neutron ports ensuring requested level of isolation. Please see below the component view of the integrated system: @@ -62,13 +62,13 @@ Design Principles should rely on existing communication channels, currently added to the pod metadata via annotations. 4. CNI Driver should not depend on Neutron. It gets all required details - from Kubernetes API server (currently through Kubernetes annotations), therefore - depending on Controller to perform its translation tasks. -5. Allow different neutron backends to bind Kubernetes pods without code modification. - This means that both Controller and CNI binding mechanism should allow - loading of the vif management and binding components, manifested via - configuration. If some vendor requires some extra code, it should be handled - in one of the stevedore drivers. + from Kubernetes API server (currently through Kubernetes annotations), + therefore depending on Controller to perform its translation tasks. +5. Allow different neutron backends to bind Kubernetes pods without code + modification. This means that both Controller and CNI binding mechanism + should allow loading of the vif management and binding components, + manifested via configuration. If some vendor requires some extra code, it + should be handled in one of the stevedore drivers. 
Kuryr Controller Design @@ -86,10 +86,10 @@ Watcher ~~~~~~~ Watcher is a common software component used by both the Controller and the CNI -driver. Watcher connects to Kubernetes API. Watcher's responsibility is to observe the -registered (either on startup or dynamically during its runtime) endpoints and -invoke registered callback handler (pipeline) to pass all events from -registered endpoints. +driver. Watcher connects to Kubernetes API. Watcher's responsibility is to +observe the registered (either on startup or dynamically during its runtime) +endpoints and invoke registered callback handler (pipeline) to pass all events +from registered endpoints. Event Handler @@ -125,16 +125,17 @@ ControllerPipeline ControllerPipeline serves as an event dispatcher of the Watcher for Kuryr-K8s controller Service. Currently watched endpoints are 'pods', 'services' and -'endpoints'. Kubernetes resource event handlers (Event Consumers) are registered into -the Controller Pipeline. There is a special EventConsumer, ResourceEventHandler, -that provides API for Kubernetes event handling. When a watched event arrives, it is -processed by all Resource Event Handlers registered for specific Kubernetes object -kind. Pipeline retries on resource event handler invocation in -case of the ResourceNotReady exception till it succeeds or the number of -retries (time-based) is reached. Any unrecovered failure is logged without -affecting other Handlers (of the current and other events). -Events of the same group (same Kubernetes object) are handled sequentially in the -order arrival. Events of different Kubernetes objects are handled concurrently. +'endpoints'. Kubernetes resource event handlers (Event Consumers) are +registered into the Controller Pipeline. There is a special EventConsumer, +ResourceEventHandler, that provides API for Kubernetes event handling. When a +watched event arrives, it is processed by all Resource Event Handlers +registered for specific Kubernetes object kind. Pipeline retries on resource +event handler invocation in case of the ResourceNotReady exception till it +succeeds or the number of retries (time-based) is reached. Any unrecovered +failure is logged without affecting other Handlers (of the current and other +events). Events of the same group (same Kubernetes object) are handled +sequentially in the order arrival. Events of different Kubernetes objects are +handled concurrently. .. image:: ../..//images/controller_pipeline.png :alt: controller pipeline @@ -145,11 +146,13 @@ order arrival. Events of different Kubernetes objects are handled concurrently. ResourceEventHandler ~~~~~~~~~~~~~~~~~~~~ -ResourceEventHandler is a convenience base class for the Kubernetes event processing. -The specific Handler associates itself with specific Kubernetes object kind (through -setting OBJECT_KIND) and is expected to implement at least one of the methods -of the base class to handle at least one of the ADDED/MODIFIED/DELETED events -of the Kubernetes object. For details, see `k8s-api `_. +ResourceEventHandler is a convenience base class for the Kubernetes event +processing. The specific Handler associates itself with specific Kubernetes +object kind (through setting OBJECT_KIND) and is expected to implement at +least one of the methods of the base class to handle at least one of the +ADDED/MODIFIED/DELETED events of the Kubernetes object. For details, see +`k8s-api +`_. 
Since both ADDED and MODIFIED event types trigger very similar sequence of actions, Handler has 'on_present' method that is invoked for both event types. The specific Handler implementation should strive to put all the common ADDED @@ -161,6 +164,7 @@ Pluggable Handlers Starting with the Rocky release, Kuryr-Kubernetes includes a pluggable interface for the Kuryr controller handlers. + The pluggable handlers framework allows : - Using externally provided handlers. @@ -179,8 +183,8 @@ lb Endpoint lbaasspec Service ================ ========================= -For example, to enable only the 'vif' controller handler we should set the following -at kuryr.conf:: +For example, to enable only the 'vif' controller handler we should set the +following at kuryr.conf:: [kubernetes] enabled_handlers=vif @@ -190,19 +194,19 @@ Providers ~~~~~~~~~ Provider (Drivers) are used by ResourceEventHandlers to manage specific aspects -of the Kubernetes resource in the OpenStack domain. For example, creating a Kubernetes Pod -will require a neutron port to be created on a specific network with the proper -security groups applied to it. There will be dedicated Drivers for Project, -Subnet, Port and Security Groups settings in neutron. For instance, the Handler -that processes pod events, will use PodVIFDriver, PodProjectDriver, -PodSubnetsDriver and PodSecurityGroupsDriver. The Drivers model is introduced -in order to allow flexibility in the Kubernetes model mapping to the OpenStack. There -can be different drivers that do Neutron resources management, i.e. create on -demand or grab one from the precreated pool. There can be different drivers for -the Project management, i.e. single Tenant or multiple. Same goes for the other -drivers. There are drivers that handle the Pod based on the project, subnet -and security groups specified via configuration settings during cluster -deployment phase. +of the Kubernetes resource in the OpenStack domain. For example, creating a +Kubernetes Pod will require a neutron port to be created on a specific network +with the proper security groups applied to it. There will be dedicated Drivers +for Project, Subnet, Port and Security Groups settings in neutron. For +instance, the Handler that processes pod events, will use PodVIFDriver, +PodProjectDriver, PodSubnetsDriver and PodSecurityGroupsDriver. The Drivers +model is introduced in order to allow flexibility in the Kubernetes model +mapping to the OpenStack. There can be different drivers that do Neutron +resources management, i.e. create on demand or grab one from the precreated +pool. There can be different drivers for the Project management, i.e. single +Tenant or multiple. Same goes for the other drivers. There are drivers that +handle the Pod based on the project, subnet and security groups specified via +configuration settings during cluster deployment phase. NeutronPodVifDriver @@ -250,10 +254,10 @@ Processes communicate between each other using Python's responsible for extracting VIF annotations from Pod events and putting them into the shared dictionary. Server is a regular WSGI server that will answer CNI Driver calls. When a CNI request comes, Server is waiting for VIF object to -appear in the shared dictionary. As annotations are read from -kubernetes API and added to the registry by Watcher thread, Server will -eventually get VIF it needs to connect for a given pod. Then it waits for the -VIF to become active before returning to the CNI Driver. +appear in the shared dictionary. 
As annotations are read from kubernetes API +and added to the registry by Watcher thread, Server will eventually get VIF it +needs to connect for a given pod. Then it waits for the VIF to become active +before returning to the CNI Driver. Communication @@ -293,12 +297,13 @@ deserialized using o.vo's ``obj_from_primitive()`` method. **Return body:** None. -When running in daemonized mode, CNI Driver will call CNI Daemon over those APIs -to perform its tasks and wait on socket for result. +When running in daemonized mode, CNI Driver will call CNI Daemon over those +APIs to perform its tasks and wait on socket for result. Kubernetes Documentation ------------------------ -The `Kubernetes reference documentation `_ -is a great source for finding more details about Kubernetes API, CLIs, and tools. +The `Kubernetes reference documentation +`_ is a great source for finding more +details about Kubernetes API, CLIs, and tools. diff --git a/doc/source/devref/kuryr_kubernetes_ocp_route_design.rst b/doc/source/devref/kuryr_kubernetes_ocp_route_design.rst index afab0f11e..ba41cf43b 100644 --- a/doc/source/devref/kuryr_kubernetes_ocp_route_design.rst +++ b/doc/source/devref/kuryr_kubernetes_ocp_route_design.rst @@ -46,10 +46,10 @@ kubernetes, the Openshift Route matches the functionality of kubernetes Ingress. Proposed Solution ----------------- -The solution will rely on L7 router, Service/Endpoints handler and -L7 router driver components described at kuryr-kubernetes Ingress integration -design, where a new component - OCP-Route handler, will satisfy requests for -Openshift Route resources. +The solution will rely on L7 router, Service/Endpoints handler and L7 router +driver components described at kuryr-kubernetes Ingress integration design, +where a new component - OCP-Route handler, will satisfy requests for Openshift +Route resources. Controller Handlers impact: @@ -72,14 +72,13 @@ The following scheme describes OCP-Route controller SW architecture: Similar to Kubernetes Ingress, each OCP-Route object being translated to a L7 policy in L7 router, and the rules on OCP-Route become L7 (URL) mapping rules -in that L7 policy. -The L7 policy is configured to forward the filtered traffic to LbaaS Pool. -The LbaaS pool represents an Endpoints resource, and it's the Service/Endpoints -handler responsibility to attach all its members to this pool. -Since the Endpoints resource is not aware of changes in OCP-Route objects -pointing to it, the OCP-Route handler should trigger this notification, -the notification will be implemented using annotation of the relevant -Endpoint resource. +in that L7 policy. The L7 policy is configured to forward the filtered traffic +to LbaaS Pool. The LbaaS pool represents an Endpoints resource, and it's the +Service/Endpoints handler responsibility to attach all its members to this +pool. Since the Endpoints resource is not aware of changes in OCP-Route objects +pointing to it, the OCP-Route handler should trigger this notification, the +notification will be implemented using annotation of the relevant Endpoint +resource. Use cases examples @@ -87,8 +86,8 @@ Use cases examples This section describes in details the following scenarios: - A. Create OCP-Route, create Service/Endpoints. - B. Create Service/Endpoints, create OCP-Route, delete OCP-Route. +A. Create OCP-Route, create Service/Endpoints. +B. Create Service/Endpoints, create OCP-Route, delete OCP-Route. 
 * Create OCP-Route, create Service/Endpoints:
 
diff --git a/doc/source/devref/port_crd_usage.rst b/doc/source/devref/port_crd_usage.rst
index 188f63747..de9c8e5b1 100644
--- a/doc/source/devref/port_crd_usage.rst
+++ b/doc/source/devref/port_crd_usage.rst
@@ -47,7 +47,7 @@ container creation, and Neutron ports are deleted after container deletion.
 But there is still a need to keep the Ports and Port pools details and have
 them available in case of Kuryr Controller restart. Since Kuryr is stateless
 service, the details should be kept either as part of Neutron or Kubernetes
-data. Due to the perfromance costs, K8s option is more performant.
+data. Due to the performance costs, the K8s option is preferred.
 
 
 Proposed Solution
@@ -171,13 +171,12 @@ KuryrPorts objects that were annotated with `deleting` label at the
 (e.g. ports) in case the controller crashed while deleting the Neutron (or
 any other SDN) associated resources.
 
-As for the Ports Pools, right now they reside on memory on the
-Kuryr-controller and need to be recovered every time the controller gets
-restarted. To perform this recovery we are relying on Neutron Port
-device-owner information which may not be completely waterproof in all
-situations (e.g., if there is another entity using the same device
-owner name). Consequently, by storing the information into K8s CRD objests we
-have the benefit of:
+As for the Ports Pools, right now they reside in memory on the Kuryr-controller
+and need to be recovered every time the controller gets restarted. To perform
+this recovery we are relying on Neutron Port device-owner information, which
+may not be completely waterproof in all situations (e.g., if there is another
+entity using the same device owner name). Consequently, by storing the
+information into K8s CRD objects we have the benefit of:
 
 * Calling K8s API instead of Neutron API
 * Being sure the recovered ports into the pools were created by
diff --git a/doc/source/devref/service_support.rst b/doc/source/devref/service_support.rst
index 25d0c53de..4aae89ec5 100644
--- a/doc/source/devref/service_support.rst
+++ b/doc/source/devref/service_support.rst
@@ -31,9 +31,10 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods and
 a policy by which to access them. Service is a Kubernetes managed API object.
 For Kubernetes-native applications, Kubernetes offers an Endpoints API that is
 updated whenever the set of Pods in a Service changes. For detailed information
-please refer to `Kubernetes service `_
-Kubernetes supports services with kube-proxy component that runs on each node,
-`Kube-Proxy `_.
+please refer to `Kubernetes service
+`_. Kubernetes supports services
+with the kube-proxy component that runs on each node, `Kube-Proxy
+`_.
 
 
 Proposed Solution
@@ -43,18 +44,20 @@ Kubernetes service in its essence is a Load Balancer across Pods that fit the
 service selection. Kuryr's choice is to support Kubernetes services by using
 Neutron LBaaS service. The initial implementation is based on the OpenStack
 LBaaSv2 API, so compatible with any LBaaSv2 API provider.
-In order to be compatible with Kubernetes networking, Kuryr-Kubernetes
-makes sure that services Load Balancers have access to Pods Neutron ports.
-This may be affected once Kubernetes Network Policies will be supported.
-Oslo versioned objects are used to keep translation details in Kubernetes entities
-annotation. This will allow future changes to be backward compatible.
+ +In order to be compatible with Kubernetes networking, Kuryr-Kubernetes makes +sure that services Load Balancers have access to Pods Neutron ports. This may +be affected once Kubernetes Network Policies will be supported. Oslo versioned +objects are used to keep translation details in Kubernetes entities annotation. +This will allow future changes to be backward compatible. Data Model Translation ~~~~~~~~~~~~~~~~~~~~~~ Kubernetes service is mapped to the LBaaSv2 Load Balancer with associated -Listeners and Pools. Service endpoints are mapped to Load Balancer Pool members. +Listeners and Pools. Service endpoints are mapped to Load Balancer Pool +members. Kuryr Controller Impact @@ -71,11 +74,10 @@ Two Kubernetes Event Handlers are added to the Controller pipeline. Endpoints (LoadBalancer) handler. To avoid conflicting annotations, K8s Services's resourceVersion is used for Service and Endpoints while handling Services events. - - LoadBalancerHandler manages Kubernetes endpoints events. It manages LoadBalancer, LoadBalancerListener, LoadBalancerPool and LoadBalancerPool - members to reflect and keep in sync with the Kubernetes service. It keeps details of - Neutron resources by annotating the Kubernetes Endpoints object. + members to reflect and keep in sync with the Kubernetes service. It keeps + details of Neutron resources by annotating the Kubernetes Endpoints object. Both Handlers use Project, Subnet and SecurityGroup service drivers to get details for service mapping. diff --git a/doc/source/devref/vif_handler_drivers_design.rst b/doc/source/devref/vif_handler_drivers_design.rst index 659bbb31e..167ea28a0 100644 --- a/doc/source/devref/vif_handler_drivers_design.rst +++ b/doc/source/devref/vif_handler_drivers_design.rst @@ -92,9 +92,9 @@ Additional Subnets Driver Since it is possible to request additional subnets for the pod through the pod annotations it is necessary to have new driver. According to parsed information (requested subnets) by Multi-vif driver it has to return dictionary containing -the mapping 'subnet_id' -> 'network' for all requested subnets in unified format -specified in PodSubnetsDriver class. -Here's how a Pod Spec with additional subnets requests might look like: +the mapping 'subnet_id' -> 'network' for all requested subnets in unified +format specified in PodSubnetsDriver class. Here's how a Pod Spec with +additional subnets requests might look like: .. code-block:: yaml @@ -137,11 +137,11 @@ Specific ports support Specific ports support is enabled by default and will be a part of the drivers to implement it. It is possile to have manually precreated specific ports in neutron and specify them in pod annotations as preferably used. This means that -drivers will use specific ports if it is specified in pod annotations, otherwise -it will create new ports by default. It is important that specific ports can have -vnic_type both direct and normal, so it is necessary to provide processing -support for specific ports in both SRIOV and generic driver. -Pod annotation with requested specific ports might look like this: +drivers will use specific ports if it is specified in pod annotations, +otherwise it will create new ports by default. It is important that specific +ports can have vnic_type both direct and normal, so it is necessary to provide +processing support for specific ports in both SRIOV and generic driver. Pod +annotation with requested specific ports might look like this: .. 
code-block:: yaml @@ -158,10 +158,10 @@ Pod annotation with requested specific ports might look like this: "id_of_normal_precreated_port" ]' -Pod spec above should be interpreted the following way: -Multi-vif driver parses pod annotations and gets ids of specific ports. -If vnic_type is "normal" and such ports exist, it calls generic driver to create vif -objects for these ports. Else if vnic_type is "direct" and such ports exist, it calls -sriov driver to create vif objects for these ports. If certain ports are not -requested in annotations then driver doesn't return additional vifs to Multi-vif -driver. +Pod spec above should be interpreted the following way: Multi-vif driver parses +pod annotations and gets ids of specific ports. If vnic_type is "normal" and +such ports exist, it calls generic driver to create vif objects for these +ports. Else if vnic_type is "direct" and such ports exist, it calls sriov +driver to create vif objects for these ports. If certain ports are not +requested in annotations then driver doesn't return additional vifs to +Multi-vif driver. diff --git a/doc/source/installation/containerized.rst b/doc/source/installation/containerized.rst index 402fd08c0..151b49566 100644 --- a/doc/source/installation/containerized.rst +++ b/doc/source/installation/containerized.rst @@ -21,9 +21,10 @@ If you want to run kuryr CNI without the daemon, build theimage with: :: $ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False . Alternatively, you can remove ``imagePullPolicy: Never`` from kuryr-controller -Deployment and kuryr-cni DaemonSet definitions to use pre-built -`controller `_ and `cni `_ -images from the Docker Hub. Those definitions will be generated in next step. +Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller +`_ and `cni +`_ images from the Docker Hub. Those +definitions will be generated in next step. Generating Kuryr resource definitions for Kubernetes @@ -36,7 +37,8 @@ that can be used to Deploy Kuryr on Kubernetes. The script is placed in $ ./tools/generate_k8s_resource_definitions [] [] [] * ``output_dir`` - directory where to put yaml files with definitions. -* ``controller_conf_path`` - path to custom kuryr-controller configuration file. +* ``controller_conf_path`` - path to custom kuryr-controller configuration + file. * ``cni_conf_path`` - path to custom kuryr-cni configuration file (defaults to ``controller_conf_path``). * ``ca_certificate_path`` - path to custom CA certificate for OpenStack API. It @@ -49,24 +51,29 @@ that can be used to Deploy Kuryr on Kubernetes. The script is placed in still be mounted in kuryr-controller ``Deployment`` definition. If no path to config files is provided, script automatically generates minimal -configuration. However some of the options should be filled by the user. You can -do that either by editing the file after the ConfigMap definition is generated -or provide your options as environment variables before running the script. -Below is the list of available variables: +configuration. However some of the options should be filled by the user. You +can do that either by editing the file after the ConfigMap definition is +generated or provide your options as environment variables before running the +script. 
Below is the list of available variables: -* ``$KURYR_K8S_API_ROOT`` - ``[kubernetes]api_root`` (default: https://127.0.0.1:6443) -* ``$KURYR_K8S_AUTH_URL`` - ``[neutron]auth_url`` (default: http://127.0.0.1/identity) +* ``$KURYR_K8S_API_ROOT`` - ``[kubernetes]api_root`` (default: + https://127.0.0.1:6443) +* ``$KURYR_K8S_AUTH_URL`` - ``[neutron]auth_url`` (default: + http://127.0.0.1/identity) * ``$KURYR_K8S_USERNAME`` - ``[neutron]username`` (default: admin) * ``$KURYR_K8S_PASSWORD`` - ``[neutron]password`` (default: password) -* ``$KURYR_K8S_USER_DOMAIN_NAME`` - ``[neutron]user_domain_name`` (default: Default) +* ``$KURYR_K8S_USER_DOMAIN_NAME`` - ``[neutron]user_domain_name`` (default: + Default) * ``$KURYR_K8S_KURYR_PROJECT_ID`` - ``[neutron]kuryr_project_id`` -* ``$KURYR_K8S_PROJECT_DOMAIN_NAME`` - ``[neutron]project_domain_name`` (default: Default) +* ``$KURYR_K8S_PROJECT_DOMAIN_NAME`` - ``[neutron]project_domain_name`` + (default: Default) * ``$KURYR_K8S_PROJECT_ID`` - ``[neutron]k8s_project_id`` * ``$KURYR_K8S_POD_SUBNET_ID`` - ``[neutron_defaults]pod_subnet_id`` * ``$KURYR_K8S_POD_SG`` - ``[neutron_defaults]pod_sg`` * ``$KURYR_K8S_SERVICE_SUBNET_ID`` - ``[neutron_defaults]service_subnet_id`` * ``$KURYR_K8S_WORKER_NODES_SUBNET`` - ``[pod_vif_nested]worker_nodes_subnet`` -* ``$KURYR_K8S_BINDING_DRIVER`` - ``[binding]driver`` (default: ``kuryr.lib.binding.drivers.vlan``) +* ``$KURYR_K8S_BINDING_DRIVER`` - ``[binding]driver`` (default: + ``kuryr.lib.binding.drivers.vlan``) * ``$KURYR_K8S_BINDING_IFACE`` - ``[binding]link_iface`` (default: eth0) .. note:: @@ -131,8 +138,8 @@ After successful completion: * kuryr-controller Deployment object, with single replica count, will get created in kube-system namespace. -* kuryr-cni gets installed as a daemonset object on all the nodes in kube-system - namespace +* kuryr-cni gets installed as a daemonset object on all the nodes in + kube-system namespace To see kuryr-controller logs :: $ kubectl logs diff --git a/doc/source/installation/devstack/basic.rst b/doc/source/installation/devstack/basic.rst index 8eeb9c6ff..012af37d0 100644 --- a/doc/source/installation/devstack/basic.rst +++ b/doc/source/installation/devstack/basic.rst @@ -46,8 +46,8 @@ Now edit ``devstack/local.conf`` to set up some initial options: * If you have multiple network interfaces, you need to set ``HOST_IP`` variable to the IP on the interface you want to use as DevStack's primary. * ``KURYR_K8S_LBAAS_USE_OCTAVIA`` can be set to False if you want more - lightweight installation. In that case installation of Glance and Nova will be - omitted. + lightweight installation. In that case installation of Glance and Nova will + be omitted. * If you already have Docker installed on the machine, you can comment out line starting with ``enable_plugin devstack-plugin-container``. @@ -133,12 +133,12 @@ You can verify that this IP is really assigned to Neutron port: :: | 3ce7fd13-ad0a-4e92-9b6f-0d38d50b1699 | | fa:16:3e:8e:f4:30 | ip_address='10.0.0.73', subnet_id='ddfbc8e9-68da-48f9-8a05-238ea0607e0d' | ACTIVE | If those steps were successful, then it looks like your DevStack with -kuryr-kubernetes is working correctly. In case of errors, copy last ~50 lines of -the logs, paste them into `paste.openstack.org `_ -and ask other developers for help on `Kuryr's IRC channel -`_. More info on how to use DevStack can -be found in `DevStack Documentation -`_, especially in section -`Using Systemd in DevStack +kuryr-kubernetes is working correctly. 
In case of errors, copy last ~50 lines +of the logs, paste them into `paste.openstack.org +`_ and ask other developers for help on `Kuryr's +IRC channel `_. More info on how to use +DevStack can be found in `DevStack Documentation +`_, especially in section `Using +Systemd in DevStack `_, which explains how to use ``systemctl`` to control services and ``journalctl`` to read its logs. diff --git a/doc/source/installation/devstack/containerized.rst b/doc/source/installation/devstack/containerized.rst index 006b151c8..85864a472 100644 --- a/doc/source/installation/devstack/containerized.rst +++ b/doc/source/installation/devstack/containerized.rst @@ -24,8 +24,8 @@ Rebuilding container images --------------------------- Instructions on how to manually rebuild both kuryr-controller and kuryr-cni -container images are presented on :doc:`../containerized` page. In case you want -to test any code changes, you need to rebuild the images first. +container images are presented on :doc:`../containerized` page. In case you +want to test any code changes, you need to rebuild the images first. Changing configuration @@ -39,8 +39,8 @@ associated ConfigMap. On DevStack deployment this can be done using: :: Then the editor will appear that will let you edit the config map. Make sure to keep correct indentation when doing changes. Also note that there are two files present in the ConfigMap: kuryr.conf and kuryr-cni.conf. First one is attached -to kuryr-controller and second to kuryr-cni. Make sure to modify both when doing -changes important for both services. +to kuryr-controller and second to kuryr-cni. Make sure to modify both when +doing changes important for both services. Restarting services diff --git a/doc/source/installation/devstack/dragonflow_support.rst b/doc/source/installation/devstack/dragonflow_support.rst index e5b3cf77f..f1b8a3bfd 100644 --- a/doc/source/installation/devstack/dragonflow_support.rst +++ b/doc/source/installation/devstack/dragonflow_support.rst @@ -25,7 +25,7 @@ Testing with DevStack The next points describe how to test OpenStack with Dragonflow using DevStack. We will start by describing how to test the baremetal case on a single host, -and then cover a nested environemnt where containers are created inside VMs. +and then cover a nested environment where containers are created inside VMs. Single Node Test Environment diff --git a/doc/source/installation/devstack/nested-macvlan.rst b/doc/source/installation/devstack/nested-macvlan.rst index c199233c2..125d76ba4 100644 --- a/doc/source/installation/devstack/nested-macvlan.rst +++ b/doc/source/installation/devstack/nested-macvlan.rst @@ -5,7 +5,8 @@ How to try out nested-pods locally (MACVLAN) Following are the instructions for an all-in-one setup, using the nested MACVLAN driver rather than VLAN and trunk ports. -1. To install OpenStack services run devstack with ``devstack/local.conf.pod-in-vm.undercloud.sample``. +1. To install OpenStack services run devstack with + ``devstack/local.conf.pod-in-vm.undercloud.sample``. 2. Launch a Nova VM with MACVLAN support .. 
todo:: diff --git a/doc/source/installation/devstack/nested-vlan.rst b/doc/source/installation/devstack/nested-vlan.rst index fe33cec7b..f35b4833b 100644 --- a/doc/source/installation/devstack/nested-vlan.rst +++ b/doc/source/installation/devstack/nested-vlan.rst @@ -2,18 +2,21 @@ How to try out nested-pods locally (VLAN + trunk) ================================================= -Following are the instructions for an all-in-one setup where Kubernetes will also be -running inside the same Nova VM in which Kuryr-controller and Kuryr-cni will be -running. 4GB memory and 2 vCPUs, is the minimum resource requirement for the VM: +Following are the instructions for an all-in-one setup where Kubernetes will +also be running inside the same Nova VM in which Kuryr-controller and Kuryr-cni +will be running. 4GB memory and 2 vCPUs, is the minimum resource requirement +for the VM: -1. To install OpenStack services run devstack with ``devstack/local.conf.pod-in-vm.undercloud.sample``. - Ensure that "trunk" service plugin is enabled in ``/etc/neutron/neutron.conf``:: +1. To install OpenStack services run devstack with + ``devstack/local.conf.pod-in-vm.undercloud.sample``. Ensure that "trunk" + service plugin is enabled in ``/etc/neutron/neutron.conf``:: [DEFAULT] service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin -2. Launch a VM with `Neutron trunk port. `_. - The next steps can be followed: `Boot VM with a Trunk Port`_. +2. Launch a VM with `Neutron trunk port. + `_. The next steps can be + followed: `Boot VM with a Trunk Port`_. .. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html diff --git a/doc/source/installation/devstack/odl_support.rst b/doc/source/installation/devstack/odl_support.rst index 9c93326e0..8467ed1f1 100644 --- a/doc/source/installation/devstack/odl_support.rst +++ b/doc/source/installation/devstack/odl_support.rst @@ -20,7 +20,7 @@ Testing with DevStack The next points describe how to test OpenStack with ODL using DevStack. We will start by describing how to test the baremetal case on a single host, -and then cover a nested environemnt where containers are created inside VMs. +and then cover a nested environment where containers are created inside VMs. Single Node Test Environment diff --git a/doc/source/installation/devstack/ports-pool.rst b/doc/source/installation/devstack/ports-pool.rst index 67ae6008b..5e911da87 100644 --- a/doc/source/installation/devstack/ports-pool.rst +++ b/doc/source/installation/devstack/ports-pool.rst @@ -25,7 +25,7 @@ options needs to be set at the local.conf file: 3. Then, in case you want to set a limit to the maximum number of ports, or - increase/reduce the default one for the mininum number, as well as to modify + increase/reduce the default one for the minimum number, as well as to modify the way the pools are repopulated, both in time as well as regarding bulk operation sizes, the next option can be included and modified accordingly:: diff --git a/doc/source/installation/ipv6.rst b/doc/source/installation/ipv6.rst index 47baffc90..54172e48a 100644 --- a/doc/source/installation/ipv6.rst +++ b/doc/source/installation/ipv6.rst @@ -191,9 +191,9 @@ Setting it up Note that it is /113 because the other half of the /112 will be used by the Octavia LB vrrp ports. -#. Follow the :ref:`k8s_lb_reachable` guide but using IPv6 addresses instead for - the host Kubernetes API. 
You should also make sure that the Kubernetes API - server binds on the IPv6 address of the host. +#. Follow the :ref:`k8s_lb_reachable` guide but using IPv6 addresses instead + for the host Kubernetes API. You should also make sure that the Kubernetes + API server binds on the IPv6 address of the host. Troubleshooting diff --git a/doc/source/installation/manual.rst b/doc/source/installation/manual.rst index 43266b6ad..9732121e3 100644 --- a/doc/source/installation/manual.rst +++ b/doc/source/installation/manual.rst @@ -61,9 +61,10 @@ Kubernetes load balancers and their members: Neutron port to the subnet of each of the members. This way the traffic from the Service Haproxy to the members will not go through the router again, only will have gone through the router to reach the service. -* Layer3: Octavia only creates the VIP port. The traffic from the service VIP to - the members will go back to the router to reach the pod subnet. It is - important to note that it will have some performance impact depending on the SDN. +* Layer3: Octavia only creates the VIP port. The traffic from the service VIP + to the members will go back to the router to reach the pod subnet. It is + important to note that it will have some performance impact depending on the + SDN. To support the L3 mode (both for Octavia and for the deprecated Neutron-LBaaSv2): diff --git a/doc/source/installation/network_policy.rst b/doc/source/installation/network_policy.rst index 697e1241e..21d79c27c 100644 --- a/doc/source/installation/network_policy.rst +++ b/doc/source/installation/network_policy.rst @@ -2,10 +2,10 @@ Enable network policy support functionality =========================================== -Enable policy, pod_label and namespace handlers to respond to network policy events. -As this is not done by default you'd have to explicitly add that to the list of enabled -handlers at kuryr.conf (further info on how to do this can be found at -:doc:`./devstack/containerized`):: +Enable policy, pod_label and namespace handlers to respond to network policy +events. As this is not done by default you'd have to explicitly add that to +the list of enabled handlers at kuryr.conf (further info on how to do this can +be found at :doc:`./devstack/containerized`):: [kubernetes] enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy diff --git a/doc/source/installation/ocp_route.rst b/doc/source/installation/ocp_route.rst index 215c233ca..08e9475f5 100644 --- a/doc/source/installation/ocp_route.rst +++ b/doc/source/installation/ocp_route.rst @@ -11,71 +11,74 @@ To enable OCP-Router functionality we should set the following: Setting L7 Router ------------------ -The L7 Router is the ingress point for the external traffic destined -for services in the K8S/OCP cluster. -The next steps are needed for setting the L7 Router: +The L7 Router is the ingress point for the external traffic destined for +services in the K8S/OCP cluster. The next steps are needed for setting the L7 +Router: -1. Create LoadBalancer that will run the L7 loadbalancing:: +#. 
Create LoadBalancer that will run the L7 loadbalancing: - $ openstack loadbalancer create --name kuryr-l7-router --vip-subnet-id k8s-service-subnet - +---------------------+--------------------------------------+ - | Field | Value | - +---------------------+--------------------------------------+ - | admin_state_up | True | - | created_at | 2018-06-28T06:34:15 | - | description | | - | flavor | | - | id | 99f580e6-d894-442a-bc5f-4d14b41e10d2 | - | listeners | | - | name | kuryr-l7-router | - | operating_status | OFFLINE | - | pools | | - | project_id | 24042703aba141b89217e098e495cea1 | - | provider | amphora | - | provisioning_status | PENDING_CREATE | - | updated_at | None | - | vip_address | 10.0.0.171 | - | vip_network_id | 65875d24-5a54-43fb-91a7-087e956deb1a | - | vip_port_id | 42c6062a-644a-4004-a4a6-5a88bf596196 | - | vip_qos_policy_id | None | - | vip_subnet_id | 01f21201-65a3-4bc5-a7a8-868ccf4f0edd | - +---------------------+--------------------------------------+ - $ + .. code-block:: console + $ openstack loadbalancer create --name kuryr-l7-router --vip-subnet-id k8s-service-subnet + +---------------------+--------------------------------------+ + | Field | Value | + +---------------------+--------------------------------------+ + | admin_state_up | True | + | created_at | 2018-06-28T06:34:15 | + | description | | + | flavor | | + | id | 99f580e6-d894-442a-bc5f-4d14b41e10d2 | + | listeners | | + | name | kuryr-l7-router | + | operating_status | OFFLINE | + | pools | | + | project_id | 24042703aba141b89217e098e495cea1 | + | provider | amphora | + | provisioning_status | PENDING_CREATE | + | updated_at | None | + | vip_address | 10.0.0.171 | + | vip_network_id | 65875d24-5a54-43fb-91a7-087e956deb1a | + | vip_port_id | 42c6062a-644a-4004-a4a6-5a88bf596196 | + | vip_qos_policy_id | None | + | vip_subnet_id | 01f21201-65a3-4bc5-a7a8-868ccf4f0edd | + +---------------------+--------------------------------------+ + $ +#. Create floating IP address that should be accessible from external network: -2. Create floating IP address that should be accessible from external network:: + .. 
code-block:: console - $ openstack floating ip create --subnet public-subnet public - +---------------------+--------------------------------------+ - | Field | Value | - +---------------------+--------------------------------------+ - | created_at | 2018-06-28T06:31:36Z | - | description | | - | dns_domain | None | - | dns_name | None | - | fixed_ip_address | None | - | floating_ip_address | 172.24.4.3 | - | floating_network_id | 3371c2ba-edb5-45f2-a589-d35080177311 | - | id | c971f6d3-ba63-4318-a9e7-43cbf85437c2 | - | name | 172.24.4.3 | - | port_details | None | - | port_id | None | - | project_id | 24042703aba141b89217e098e495cea1 | - | qos_policy_id | None | - | revision_number | 0 | - | router_id | None | - | status | DOWN | - | subnet_id | 939eeb1f-20b8-4185-a6b1-6477fbe73409 | - | tags | [] | - | updated_at | 2018-06-28T06:31:36Z | - +---------------------+--------------------------------------+ - $ + $ openstack floating ip create --subnet public-subnet public + +---------------------+--------------------------------------+ + | Field | Value | + +---------------------+--------------------------------------+ + | created_at | 2018-06-28T06:31:36Z | + | description | | + | dns_domain | None | + | dns_name | None | + | fixed_ip_address | None | + | floating_ip_address | 172.24.4.3 | + | floating_network_id | 3371c2ba-edb5-45f2-a589-d35080177311 | + | id | c971f6d3-ba63-4318-a9e7-43cbf85437c2 | + | name | 172.24.4.3 | + | port_details | None | + | port_id | None | + | project_id | 24042703aba141b89217e098e495cea1 | + | qos_policy_id | None | + | revision_number | 0 | + | router_id | None | + | status | DOWN | + | subnet_id | 939eeb1f-20b8-4185-a6b1-6477fbe73409 | + | tags | [] | + | updated_at | 2018-06-28T06:31:36Z | + +---------------------+--------------------------------------+ + $ +#. Bind the floating IP to LB vip: -3. Bind the floating IP to LB vip:: + .. code-block:: console - [stack@gddggd devstack]$ openstack floating ip set --port 42c6062a-644a-4004-a4a6-5a88bf596196 172.24.4.3 + [stack@gddggd devstack]$ openstack floating ip set --port 42c6062a-644a-4004-a4a6-5a88bf596196 172.24.4.3 Configure Kuryr to support L7 Router and OCP-Route resources @@ -88,9 +91,9 @@ Configure Kuryr to support L7 Router and OCP-Route resources 2. Enable the ocp-route and k8s-endpoint handlers. For that you need to add - this handlers to the enabled handlers list at kuryr.conf (details on how - to edit this for containerized deployment can be found - at :doc:`./devstack/containerized`):: + this handlers to the enabled handlers list at kuryr.conf (details on how to + edit this for containerized deployment can be found at + :doc:`./devstack/containerized`):: [kubernetes] enabled_handlers=vif,lb,lbaasspec,ocproute,ingresslb diff --git a/doc/source/installation/ports-pool.rst b/doc/source/installation/ports-pool.rst index fc236c0a5..e3d6d4ea1 100644 --- a/doc/source/installation/ports-pool.rst +++ b/doc/source/installation/ports-pool.rst @@ -22,8 +22,8 @@ maximum size can be disabled by setting it to 0:: ports_pool_max = 10 ports_pool_min = 5 -In addition the size of the bulk operation, e.g., the number -of ports created in a bulk request upon pool population, can be modified:: +In addition the size of the bulk operation, e.g., the number of ports created +in a bulk request upon pool population, can be modified:: [vif_pool] ports_pool_batch = 5 @@ -101,15 +101,15 @@ nodes are Bare Metal while others are running inside VMs, therefore having different VIF drivers (e.g., neutron and nested-vlan). 
This new multi pool driver is the default pool driver used even if a different -vif_pool_driver is set at the config option. However if the configuration -about the mappings between the different pod vif and pools drivers is not -provided at the vif_pool_mapping config option of vif_pool configuration -section only one pool driver will be loaded -- using the standard -pod_vif_driver and vif_pool_driver config options, i.e., using the one -selected at kuryr.conf options. +vif_pool_driver is set at the config option. However if the configuration about +the mappings between the different pod vif and pools drivers is not provided at +the vif_pool_mapping config option of vif_pool configuration section only one +pool driver will be loaded -- using the standard pod_vif_driver and +vif_pool_driver config options, i.e., using the one selected at kuryr.conf +options. -To enable the option of having different pools depending on the node's pod -vif types, you need to state the type of pool that you want for each pod vif +To enable the option of having different pools depending on the node's pod vif +types, you need to state the type of pool that you want for each pod vif driver, e.g.: .. code-block:: ini diff --git a/doc/source/installation/services.rst b/doc/source/installation/services.rst index 09df5de91..a2dd9ac66 100644 --- a/doc/source/installation/services.rst +++ b/doc/source/installation/services.rst @@ -9,7 +9,8 @@ be implemented in the following way: * **Service**: It is translated to a single **LoadBalancer** and as many **Listeners** and **Pools** as ports the Kubernetes Service spec defines. * **ClusterIP**: It is translated to a LoadBalancer's VIP. -* **loadBalancerIP**: Translated to public IP associated with the LoadBalancer's VIP. +* **loadBalancerIP**: Translated to public IP associated with the + LoadBalancer's VIP. * **Endpoints**: The Endpoint object is translated to a LoadBalancer's VIP. @@ -20,16 +21,15 @@ be implemented in the following way: :width: 100% :alt: Graphical depiction of the translation explained above - In this diagram you can see how the Kubernetes entities in the top left corner - are implemented in plain Kubernetes networking (top-right) and in Kuryr's - default configuration (bottom) + In this diagram you can see how the Kubernetes entities in the top left + corner are implemented in plain Kubernetes networking (top-right) and in + Kuryr's default configuration (bottom) If you are paying attention and are familiar with the `LBaaS API`_ you probably noticed that we have separate pools for each exposed port in a service. This is -probably not optimal and we would probably benefit from keeping a single Neutron -pool that lists each of the per port listeners. -Since `LBaaS API`_ doesn't support UDP load balancing, service exported UDP -ports will be ignored. +probably not optimal and we would probably benefit from keeping a single +Neutron pool that lists each of the per port listeners. Since `LBaaS API`_ +doesn't support UDP load balancing, service exported UDP ports will be ignored. When installing you can decide to use the legacy Neutron HAProxy driver for LBaaSv2 or install and configure OpenStack Octavia, which as of Pike implements @@ -43,8 +43,8 @@ will be offered on each. 
Legacy Neutron HAProxy agent ---------------------------- -The requirements for running Kuryr with the legacy Neutron HAProxy agent are the -following: +The requirements for running Kuryr with the legacy Neutron HAProxy agent are +the following: * Keystone * Neutron @@ -53,9 +53,10 @@ following: As you can see, the only addition from the minimal OpenStack deployment for Kuryr is the Neutron lbaasv2 agent. -In order to use Neutron HAProxy as the Neutron LBaaSv2 implementation you should -not only install the neutron-lbaas agent but also place this snippet in the -*[service_providers]* section of neutron.conf in your network controller node:: +In order to use Neutron HAProxy as the Neutron LBaaSv2 implementation you +should not only install the neutron-lbaas agent but also place this snippet in +the *[service_providers]* section of neutron.conf in your network controller +node:: NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default" @@ -90,12 +91,11 @@ network namespace is used by Octavia to reconfigure and monitor the Load Balancer, which it talks to via HAProxy's control unix domain socket. Running Kuryr with Octavia means that each Kubernetes service that runs in the -cluster will need at least one Load Balancer VM, i.e., an *Amphora*. -To avoid single point of failure at Amphora, Octavia should be configured to -support active/standby loadbalancer topology. -In addition, it is important to configure the right Octavia flavor for your -deployment and to size the compute nodes appropriately so that Octavia can -operate well. +cluster will need at least one Load Balancer VM, i.e., an *Amphora*. To avoid +single point of failure at Amphora, Octavia should be configured to support +active/standby loadbalancer topology. In addition, it is important to +configure the right Octavia flavor for your deployment and to size the compute +nodes appropriately so that Octavia can operate well. Another important consideration is where do the Amphorae run, i.e., whether the worker nodes should also be compute nodes so that they run the Amphorae or if @@ -396,13 +396,13 @@ The services and pods subnets should be created. --service-cluster-ip-range=10.2.0.0/17 - As a result of this, Kubernetes will allocate the **10.2.0.1** address to the - Kubernetes API service, i.e., the service used for pods to talk to the - Kubernetes API server. It will be able to allocate service addresses up until - **10.2.127.254**. The rest of the addresses, as stated above, will be for - Octavia load balancer *vrrp* ports. **If this subnetting was not done, - Octavia would allocate *vrrp* ports with the Neutron IPAM from the same range - as Kubernetes service IPAM and we'd end up with conflicts**. + As a result of this, Kubernetes will allocate the **10.2.0.1** address to + the Kubernetes API service, i.e., the service used for pods to talk to the + Kubernetes API server. It will be able to allocate service addresses up + until **10.2.127.254**. The rest of the addresses, as stated above, will be + for Octavia load balancer *vrrp* ports. **If this subnetting was not done, + Octavia would allocate *vrrp* ports with the Neutron IPAM from the same + range as Kubernetes service IPAM and we'd end up with conflicts**. #. Once you have Kubernetes installed and you have the API host reachable from the pod subnet, follow the `Making the Pods be able to reach the Kubernetes @@ -453,7 +453,8 @@ The services and pods subnets should be created. 
| updated_at | 2017-10-02T09:22:37Z | +---------------------+--------------------------------------+ - and then create k8s service with type=LoadBalancer and load-balancer-ip= (e.g: 172.24.4.13) + and then create k8s service with type=LoadBalancer and + load-balancer-ip= (e.g: 172.24.4.13) In both 'User' and 'Pool' methods, the external IP address could be found in k8s service status information (under loadbalancer/ingress/ip) @@ -507,9 +508,9 @@ of doing the following: +---------------------------+--------------------------------------+ Create the subnet. Note that we disable dhcp as Kuryr-Kubernetes pod subnets - have no need for them for Pod networking. We also put the gateway on the last - IP of the subnet range so that the beginning of the range can be kept for - Kubernetes driven service IPAM:: + have no need for them for Pod networking. We also put the gateway on the + last IP of the subnet range so that the beginning of the range can be kept + for Kubernetes driven service IPAM:: $ openstack subnet create --network k8s --no-dhcp \ --gateway 10.0.255.254 \ @@ -558,15 +559,15 @@ of doing the following: --service-cluster-ip-range=10.0.0.0/18 - As a result of this, Kubernetes will allocate the **10.0.0.1** address to the - Kubernetes API service, i.e., the service used for pods to talk to the - Kubernetes API server. It will be able to allocate service addresses up until - **10.0.63.255**. The rest of the addresses will be for pods or Octavia load - balancer *vrrp* ports. + As a result of this, Kubernetes will allocate the **10.0.0.1** address to + the Kubernetes API service, i.e., the service used for pods to talk to the + Kubernetes API server. It will be able to allocate service addresses up + until **10.0.63.255**. The rest of the addresses will be for pods or Octavia + load balancer *vrrp* ports. #. Once you have Kubernetes installed and you have the API host reachable from - the pod subnet, follow the `Making the Pods be able to reach the Kubernetes API`_ - section + the pod subnet, follow the `Making the Pods be able to reach the Kubernetes + API`_ section .. _k8s_lb_reachable: diff --git a/doc/source/installation/sriov.rst b/doc/source/installation/sriov.rst index 447c8d754..19de264f2 100644 --- a/doc/source/installation/sriov.rst +++ b/doc/source/installation/sriov.rst @@ -4,10 +4,9 @@ How to configure SR-IOV ports ============================= -Current approach of SR-IOV relies on sriov-device-plugin [2]_. While -creating pods with SR-IOV, sriov-device-plugin should be turned on -on all nodes. To use a SR-IOV port on a baremetal installation the 3 -following steps should be done: +Current approach of SR-IOV relies on sriov-device-plugin [2]_. While creating +pods with SR-IOV, sriov-device-plugin should be turned on on all nodes. To use +a SR-IOV port on a baremetal installation the 3 following steps should be done: 1. Create OpenStack network and subnet for SR-IOV. Following steps should be done with admin rights. @@ -27,9 +26,10 @@ Subnet id will be used later in NetworkAttachmentDefini physical_device_mappings = physnet1:ens4f0 default_physnet_subnets = physnet1: -This mapping is required for ability to find appropriate PF/VF functions at binding phase. -physnet1 is just an identifier for subnet . -Such kind of transition is necessary to support many-to-many relation. +This mapping is required for ability to find appropriate PF/VF functions at +binding phase. physnet1 is just an identifier for subnet . Such kind of transition is necessary to support many-to-many +relation. 

 3. Prepare NetworkAttachmentDefinition object.
 Apply NetworkAttachmentDefinition with "sriov" driverType inside,
@@ -72,25 +72,27 @@ into the pod's yaml.
           intel.com/sriov: '2'

-In the above example two SR-IOV devices will be attached to pod. First one is described
-in sriov-net1 NetworkAttachmentDefinition, second one in sriov-net2. They may have
-different subnetId.
+In the above example two SR-IOV devices will be attached to pod. First one is
+described in sriov-net1 NetworkAttachmentDefinition, second one in sriov-net2.
+They may have different subnetId.

 4. Specify resource names

-The resource name *intel.com/sriov*, which used in the above example is the default
-resource name. This name was used in SR-IOV network device plugin in
-version 1 (release-v1 branch). But since latest version the device plugin can use any
-arbitrary name of the resources [3]_. This name should match "^\[a-zA-Z0-9\_\]+$"
-regular expression. To be able to work with arbitrary resource names
-physnet_resource_mappings and device_plugin_resource_prefix in [sriov] section
-of kuryr-controller configuration file should be filled. The default value for
-device_plugin_resource_prefix is intel.com, the same as in SR-IOV network device plugin,
-in case of SR-IOV network device plugin was started with value of -resource-prefix option
-different from intel.com, than value should be set to
-device_plugin_resource_prefix, otherwise kuryr-kubernetes will not work with resource.
+The resource name *intel.com/sriov*, which is used in the above example, is
+the default resource name. This name was used in SR-IOV network device plugin
+in version 1 (release-v1 branch). But since the latest version the device
+plugin can use any arbitrary name of the resources [3]_. This name should
+match "^\[a-zA-Z0-9\_\]+$" regular expression. To be able to work with
+arbitrary resource names physnet_resource_mappings and
+device_plugin_resource_prefix in [sriov] section of kuryr-controller
+configuration file should be filled. The default value for
+device_plugin_resource_prefix is intel.com, the same as in SR-IOV network
+device plugin. In case the SR-IOV network device plugin was started with a
+value of the -resource-prefix option different from intel.com, then
+device_plugin_resource_prefix should be set to that value, otherwise
+kuryr-kubernetes will not work with the resource.

-Assume we have following SR-IOV network device plugin (defined by -config-file option)
+Assume we have the following SR-IOV network device plugin (defined by the
+-config-file option)

 .. code-block:: json

@@ -107,9 +109,9 @@ Assume we have following SR-IOV network device plugin (defined by -config-file o
    }

 We defined numa0 resource name, also assume we started sriovdp with
--resource-prefix samsung.com value. The PCI address of ens4f0 interface
-is "0000:02:00.0". If we assigned 8 VF to ens4f0 and launch SR-IOV network
-device plugin, we can see following state of kubernetes
+-resource-prefix samsung.com value. The PCI address of ens4f0 interface is
+"0000:02:00.0". If we assigned 8 VF to ens4f0 and launch SR-IOV network device
+plugin, we can see the following state of kubernetes

 .. code-block:: bash
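A quick way to confirm that state is to check what the node advertises once
the device plugin has registered the resource; the node name below is a
placeholder, and the resource name follows the samsung.com/numa0 example
above.

.. code-block:: bash

   # Node name is an assumption; the resource should show up under
   # Capacity/Allocatable after the SR-IOV device plugin registers it.
   $ kubectl describe node k8s-worker-0 | grep samsung.com/numa0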
diff --git a/doc/source/installation/testing_sriov_functional.rst b/doc/source/installation/testing_sriov_functional.rst
index 70a133a12..0a27a2023 100644
--- a/doc/source/installation/testing_sriov_functional.rst
+++ b/doc/source/installation/testing_sriov_functional.rst
@@ -23,8 +23,8 @@ look like:
        "driverType": "sriov"
      }'

-Here ``88d0b025-2710-4f02-a348-2829853b45da`` is an id of precreated
-subnet that is expected to be used for SR-IOV ports:
+Here ``88d0b025-2710-4f02-a348-2829853b45da`` is an id of a precreated subnet
+that is expected to be used for SR-IOV ports:

 .. code-block:: bash

@@ -55,9 +55,9 @@ subnet that is expected to be used for SR-IOV ports:
    | updated_at | 2018-11-21T10:57:34Z |
    +-------------------+--------------------------------------------------+

-1. Create deployment definition with one
-SR-IOV interface (apart from default one). Deployment definition
-file might look like:
+1. Create deployment definition with one SR-IOV
+   interface (apart from default one). Deployment definition file might look
+   like:

 .. code-block:: yaml

@@ -106,8 +106,8 @@ created before.
    nginx-sriov-558db554d7-rvpxs 1/1 Running 0 1m

 4. If your image contains ``iputils`` (for example, busybox image), you can
-attach to the pod and check that the correct interface has been attached
-to the Pod.
+   attach to the pod and check that the correct interface has been attached to
+   the Pod.

 .. code-block:: bash

@@ -138,16 +138,15 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
       inet6 fe80::f816:3eff:fea8:55af/64 scope link
       valid_lft forever preferred_lft forever

-4.1. Alternatively you can login to k8s worker and do the same from the
-host system.
-Use the following command to find out ID of running SR-IOV container:
+4.1. Alternatively you can log in to the k8s worker and do the same from the
+host system. Use the following command to find out the ID of the running
+SR-IOV container:

 .. code-block:: bash

    $ docker ps

-Suppose that ID of created container is ``eb4e10f38763``.
-Use the following command to get PID of that container:
+Suppose that ID of created container is ``eb4e10f38763``. Use the following
+command to get PID of that container:

 .. code-block:: bash

@@ -190,7 +189,8 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
 In our example sriov interface has address 192.168.2.6

-5. Use neutron CLI to check the port with exact address has been created on neutron:
+5. Use neutron CLI to check that the port with the exact address has been
+   created on neutron:

 .. code-block:: bash

@@ -238,8 +238,9 @@ with the following command:
    | updated_at | 2018-11-26T09:13:07Z |
    +-----------------------+----------------------------------------------------------------------------+

-The port would have the name of the pod, ``compute::kuryr::sriov`` for device owner and 'direct' vnic_type.
-Verify that IP and MAC addresses of the port match the ones on the container.
-Currently the neutron-sriov-nic-agent does not properly detect SR-IOV ports assigned to containers. This
-means that direct ports in neutron would always remain in *DOWN* state. This doesn't affect the feature
-in any way other than cosmetically.
+The port would have the name of the pod, ``compute::kuryr::sriov`` for device
+owner and 'direct' vnic_type. Verify that IP and MAC addresses of the port
+match the ones on the container. Currently the neutron-sriov-nic-agent does
+not properly detect SR-IOV ports assigned to containers. This means that direct
+ports in neutron would always remain in *DOWN* state. This doesn't affect the
+feature in any way other than cosmetically.
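One way to do the lookup described in step 5 is to filter ports by the fixed
IP observed inside the pod; the address below is the one used in the example,
everything else is generic.

.. code-block:: bash

   # List the port(s) carrying the pod's SR-IOV address, then inspect the match.
   $ openstack port list --fixed-ip ip-address=192.168.2.6
   $ openstack port show <port-id-from-the-listing>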
diff --git a/doc/source/installation/testing_udp_services.rst b/doc/source/installation/testing_udp_services.rst
index 43ea0d91c..237e423cc 100644
--- a/doc/source/installation/testing_udp_services.rst
+++ b/doc/source/installation/testing_udp_services.rst
@@ -2,9 +2,9 @@
 Testing UDP Services
 ====================

-In this example, we will use the `kuryr-udp-demo`_ image.
-This image implements a simple UDP server that listens on port 9090,
-and replies towards client when a packet is received.
+In this example, we will use the `kuryr-udp-demo`_ image. This image
+implements a simple UDP server that listens on port 9090, and replies towards
+the client when a packet is received.

 We first create a deployment named demo::

@@ -37,7 +37,8 @@ Next, we expose the deployment as a service, setting UDP port to 90::
    demo ClusterIP 10.0.0.150 90/UDP 16s
    kubernetes ClusterIP 10.0.0.129 443/TCP 17m

-Now, let's check the OpenStack load balancer created by Kuryr for **demo** service::
+Now, let's check the OpenStack load balancer created by Kuryr for **demo**
+service::

    $ openstack loadbalancer list
    +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
@@ -113,14 +114,13 @@ And the load balancer has two members listening on UDP port 9090::
    +--------------------------------------+-----------------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+

 At this point, we have both the kubernetes **demo** service and corresponding
-openstack load balancer running, and we are ready to run the
-client application.
+openstack load balancer running, and we are ready to run the client
+application.

-For the client application we will use the `udp-client`_ python script.
-The UDP client script sends UDP message towards specific IP and port, and
-waits for a response from the server.
-The way that the client application can communicate with the server is by
-leveraging the Kubernetes service functionality.
+For the client application we will use the `udp-client`_ python script. The
+UDP client script sends a UDP message towards a specific IP and port, and
+waits for a response from the server. The way that the client application can
+communicate with the server is by leveraging the Kubernetes service
+functionality.

 First we clone the client script::

@@ -149,8 +149,8 @@ Last step will be to ping the UDP server service::
    demo-fbb89f54c-q9fq7: HELLO, I AM ALIVE!!!

 Since the `kuryr-udp-demo`_ application concatenates the pod's name to the
-replyed message, it is plain to see that both service's pods are
-replying to the requests from the client.
+replied message, it is plain to see that both service's pods are replying to
+the requests from the client.

 .. _kuryr-udp-demo: https://hub.docker.com/r/yboaron/kuryr-udp-demo/
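As a rough sanity check of the UDP service, independent of the udp-client
script, a datagram can be sent from any pod that happens to have netcat
available; the pod name is a placeholder, while the service IP and port are
the ones from the example above.

.. code-block:: bash

   # Send a single UDP datagram to the demo service and wait briefly for a reply.
   $ kubectl exec -it <pod-with-netcat> -- \
       sh -c 'echo "hello" | nc -u -w 1 10.0.0.150 90'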
diff --git a/doc/source/installation/trunk_ports.rst b/doc/source/installation/trunk_ports.rst
index 6de8ade7f..4b4142393 100644
--- a/doc/source/installation/trunk_ports.rst
+++ b/doc/source/installation/trunk_ports.rst
@@ -40,7 +40,7 @@ steps can be followed:

    $ openstack floating ip create --port port0 public

-Note subports can be added to the trunk port, and be used inside the VM with the
-specific vlan, 102 in the example, by doing::
+Note that subports can be added to the trunk port, and be used inside the VM
+with the specific vlan, 102 in the example, by doing::

    $ openstack network trunk set --subport port=subport0,segmentation-type=vlan,segmentation-id=102 trunk0
diff --git a/doc/source/installation/upgrades.rst b/doc/source/installation/upgrades.rst
index 200361498..d290c3f5a 100644
--- a/doc/source/installation/upgrades.rst
+++ b/doc/source/installation/upgrades.rst
@@ -2,8 +2,8 @@
 Upgrading kuryr-kubernetes
 ==========================

-Kuryr-Kubernetes supports standard OpenStack utility for checking upgrade
-is possible and safe:
+Kuryr-Kubernetes supports the standard OpenStack utility for checking whether
+an upgrade is possible and safe:

 .. code-block:: bash

@@ -87,5 +87,5 @@ It's possible that some annotations were somehow malformed. That will generate
 a warning that should be investigated, but isn't blocking upgrading to T (it
 won't make things any worse).

-If in any case you need to rollback those changes, there is
-``kuryr-k8s-status upgrade downgrade-annotations`` command as well.
+If in any case you need to roll back those changes, there is the
+``kuryr-k8s-status upgrade downgrade-annotations`` command as well.
diff --git a/releasenotes/source/README.rst b/releasenotes/source/README.rst
index b85e6bcb9..df6c1d0db 100644
--- a/releasenotes/source/README.rst
+++ b/releasenotes/source/README.rst
@@ -2,9 +2,10 @@
 Kuryr-Kubernetes Release Notes Howto
 ====================================

-Release notes are a new feature for documenting new features in
-OpenStack projects. Background on the process, tooling, and
-methodology is documented in a `mailing list post by Doug Hellmann `_.
+Release notes are a new feature for documenting new features in OpenStack
+projects. Background on the process, tooling, and methodology is documented in
+a `mailing list post by Doug Hellmann
+`_.

-For information on how to create release notes, please consult the
-`Release Notes documentation `_.
+For information on how to create release notes, please consult the `Release
+Notes documentation `_.
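A typical check-then-rollback sequence with the ``kuryr-k8s-status`` utility
might look like the sketch below; it assumes the commands are run on a host
where the kuryr-controller configuration is available.

.. code-block:: bash

   # Verify that the deployment is ready to be upgraded.
   $ kuryr-k8s-status upgrade check

   # If the annotation conversion has to be reverted, downgrade it again.
   $ kuryr-k8s-status upgrade downgrade-annotations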