Explicitly use code-block.
In this patch we convert the preformatted directive (double colon at the end of a line) to the Sphinx code-block directive. We also fix formatting of existing code-block directives and change the reStructuredText code directive into the Sphinx code-block directive.

Change-Id: I9db48fbb169263e3bf66eacca7d9bce6c355739f
commit fd440fcdcb
parent feff260509
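Every conversion in this patch has the same shape. For reference, here is one instance taken from the diff below, with the source indentation of the rendered docs restored for illustration.

Before, an implicit literal block introduced by a trailing double colon:

    Create a namespace and label it with ``purpose=test``::

        $ kubectl create namespace dev

After, an explicit Sphinx directive that names a highlighting language:

    Create a namespace and label it with ``purpose=test``:

    .. code-block:: console

        $ kubectl create namespace dev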
@@ -70,6 +70,8 @@ leak.
 disabled. In order to enable, set the following option in kuryr.conf to a
 limit value of memory in MiBs.

+.. code-block:: ini
+
 [cni_health_server]
 max_memory_usage = -1

@@ -54,7 +54,7 @@ provide update exclusion mechanisms to prevent race conditions.
 This can be implemented by adding another *leader-elector* container to each
 of kuryr-controller pods:

-.. code:: yaml
+.. code-block:: yaml

 - image: gcr.io/google_containers/leader-elector:0.5
 name: leader-elector

@@ -184,7 +184,9 @@ lbaasspec Service
 ================ =========================

 For example, to enable only the 'vif' controller handler we should set the
-following at kuryr.conf::
+following at kuryr.conf:

+.. code-block:: ini
+
 [kubernetes]
 enabled_handlers=vif

@@ -101,7 +101,9 @@ For achieving external connectivity the L7 router is attached to a floating
 IP (allocated from 'external_svc_subnet').

 The following parameters should be configured in kuryr.conf file to
-enable L7 Router::
+enable L7 Router:

+.. code-block:: ini
+
 [ingress]
 l7_router_uuid=<loadbalancer uuid>

@@ -250,7 +250,9 @@ on a single port from the group of pods that have the label ``role=monitoring``.
 - protocol: TCP
 port: 8080

-Create the following pod with label ``role=monitoring``::
+Create the following pod with label ``role=monitoring``:

+.. code-block:: console
+
 $ kubectl run monitor --image=busybox --restart=Never --labels=role=monitoring

@@ -321,7 +323,9 @@ from namespace with the label ``purpose=test``:
 - protocol: TCP
 port: 8080

-Create a namespace and label it with ``purpose=test``::
+Create a namespace and label it with ``purpose=test``:

+.. code-block:: console
+
 $ kubectl create namespace dev
 $ kubectl label namespace dev purpose=test

@@ -58,13 +58,15 @@ The first action is to create a KuryrPort CRD where the needed information
 about the Neutron Ports will be stored (or any other SDN).

 Currently, the pods are annotated with the vif information of the port
-assigned to it::
+assigned to it:

+.. code-block::
+
 "kind": "Pod",
 "metadata": {
 "annotations": {
-"openstack.org/kuryr-vif": "{\"eth0\": {\"versioned_object.data\": {\"active\": true, \"address\": \"fa:16:3e:bf:84:ff\", \"has_traffic_filtering\
-": false, \"id\": \"18f968a5-c420-4318-92d7-941eb5f9e60e\", \"network\": {\"versioned_object.data\": {\"id\": \"144164d9-8c21-4274-acec-43245de0aed0\", \"labe
+"openstack.org/kuryr-vif": "{\"eth0\": {\"versioned_object.data\": {\"active\": true, \"address\": \"fa:16:3e:bf:84:ff\", \"has_traffic_filtering\": false,
+\"id\": \"18f968a5-c420-4318-92d7-941eb5f9e60e\", \"network\": {\"versioned_object.data\": {\"id\": \"144164d9-8c21-4274-acec-43245de0aed0\", \"labe
 l\": \"ns/luis-net\", \"mtu\": 1350, \"multi_host\": false, \"should_provide_bridge\": false, \"should_provide_vlan\": false, \"subnets\": {\"versioned_object
 .data\": {\"objects\": [{\"versioned_object.data\": {\"cidr\": \"10.11.9.0/24\", \"dns\": [], \"gateway\": \"10.11.9.1\", \"ips\": {\"versioned_object.data\":
 {\"objects\": [{\"versioned_object.data\": {\"address\": \"10.11.9.5\"}, \"versioned_object.name\": \"FixedIP\", \"versioned_object.namespace\": \"os_vif\",

@@ -77,7 +79,6 @@ assigned to it::
 .version\": \"1.0\"}}"
 },

-
 The proposal is to store the information of the VIF in the new defined
 KuryrPort CRD as a new KuryrPort object, including similar information to the
 one we currently have on os_vif objects. Then we annotate the KuryrPort

@@ -85,7 +86,9 @@ object selfLink at the pod by using oslo.versionedobject to easy identify
 the changes into the annotation format. Note the selfLink should contain the
 Neutron Port UUID if that is used as the name for the KuryrPort CRD object.
 In case of other SDN a unique value that represents the port should be used
-as the name for the KuryrPort CRD object::
+as the name for the KuryrPort CRD object:

+.. code-block:: console
+
 $ kubectl get POD_NAME -o json
 "kind": "Pod",

@@ -160,7 +163,6 @@ as the name for the KuryrPort CRD object::
 }
 }

-
 This allows a more standard way of annotating the pods, ensuring all needed
 information is there regardless of the SDN backend.

@@ -48,17 +48,23 @@ Automated update
 ``contrib/regenerate_pod_resources_api.sh`` script could be used to re-generate
 PodResources gRPC API files. By default, this script will download ``v1alpha1``
 version of ``api.proto`` file from the Kubernetes GitHub repo and create
-required kuryr-kubernetes files from it::
+required kuryr-kubernetes files from it:

+.. code-block:: console
+
 [kuryr-kubernetes]$ ./contrib/regenerate_pod_resources_api.sh

 Alternatively, path to ``api.proto`` file could be specified in
-``KUBERNETES_API_PROTO`` environment variable::
+``KUBERNETES_API_PROTO`` environment variable:

+.. code-block:: console
+
 $ export KUBERNETES_API_PROTO=/path/to/api.proto

 Define ``API_VERSION`` environment variable to use specific version of
-``api.proto`` from the Kubernetes GitHub::
+``api.proto`` from the Kubernetes GitHub:

+.. code-block:: console
+
 $ export API_VERSION=v1alpha1

@@ -71,7 +77,9 @@ Preparing the new api.proto

 Copy the ``api.proto`` from K8s sources to ``kuryr_kubernetes/pod_resources/``
 and remove all the lines that contains ``gogoproto`` since this is unwanted
-dependency that is not needed for python bindings::
+dependency that is not needed for python bindings:

+.. code-block:: console
+
 $ sed '/gogoproto/d' \
 ../kubernetes/pkg/kubelet/apis/podresources/<version>/api.proto \

@@ -88,14 +96,18 @@ Don't forget to update the file header that should point to the original
 Generating the python bindings
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-* (Optional) Create the python virtual environment::
+* (Optional) Create the python virtual environment:

+.. code-block:: console
+
 [kuryr-kubernetes]$ python3 -m venv venv
 [kuryr-kubernetes]$ . ./venv/bin/activate

 * To generate python bindings we need a ``protoc`` compiler and the
 ``gRPC plugin`` for it. The most simple way to get them is to install
-``grpcio-tools``::
+``grpcio-tools``:

+.. code-block:: console
+
 (venv) [kuryr-kubernetes]$ pip install grpcio-tools==1.19

@@ -109,12 +121,16 @@ Generating the python bindings
 you need update ``requirements.txt`` and ``lower-constraints.txt``
 accordingly.

-To check version of compiler installed with ``grpcio-tools`` use::
+To check version of compiler installed with ``grpcio-tools`` use:

+.. code-block:: console
+
 (venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc --version
 libprotoc 3.6.1

-* Following command will generate ``api_pb2_grpc.py`` and ``api_pb2.py``::
+* Following command will generate ``api_pb2_grpc.py`` and ``api_pb2.py``:

+.. code-block:: console
+
 (venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \
 --python_out=. --grpc_python_out=. \

@@ -74,7 +74,6 @@ Option in config file might look like this:
 .. code-block:: ini

 [kubernetes]
-
 multi_vif_drivers = sriov, additional_subnets

 Or like this:

@@ -82,7 +81,6 @@ Or like this:
 .. code-block:: ini

 [kubernetes]
-
 multi_vif_drivers = npwg_multiple_interfaces

@@ -8,15 +8,21 @@ Building images
 First you should build kuryr-controller and kuryr-cni docker images and place
 them on cluster-wide accessible registry.

-For creating controller image on local machine: ::
+For creating controller image on local machine:

+.. code-block:: console
+
 $ docker build -t kuryr/controller -f controller.Dockerfile .

-For creating cni daemonset image on local machine: ::
+For creating cni daemonset image on local machine:

+.. code-block:: console
+
 $ docker build -t kuryr/cni -f cni.Dockerfile .

-If you want to run kuryr CNI without the daemon, build theimage with: ::
+If you want to run kuryr CNI without the daemon, build the image with:

+.. code-block:: console
+
 $ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .

@@ -32,7 +38,9 @@ Generating Kuryr resource definitions for Kubernetes

 kuryr-kubernetes includes a tool that lets you generate resource definitions
 that can be used to Deploy Kuryr on Kubernetes. The script is placed in
-``tools/generate_k8s_resource_definitions.sh`` and takes up to 3 arguments: ::
+``tools/generate_k8s_resource_definitions.sh`` and takes up to 3 arguments:

+.. code-block:: console
+
 $ ./tools/generate_k8s_resource_definitions <output_dir> [<controller_conf_path>] [<cni_conf_path>] [<ca_certificate_path>]

@@ -83,8 +91,9 @@ script. Below is the list of available variables:
 and ``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo``
 to raise privileges, even though container is priviledged by itself or
 ``sudo`` is missing from container OS (e.g. default CentOS 7). To prevent
-that make sure to set following options in kuryr.conf used for
-kuryr-daemon::
+that make sure to set following options in kuryr.conf used for kuryr-daemon:

+.. code-block:: ini
+
 [vif_plug_ovs_privileged]
 helper_command=privsep-helper

@@ -104,7 +113,9 @@ variable must be set:

 * ``$KURYR_USE_PORTS_POOLS`` - ``True`` (default: False)

-Example run: ::
+Example run:

+.. code-block:: console
+
 $ KURYR_K8S_API_ROOT="192.168.0.1:6443" ./tools/generate_k8s_resource_definitions /tmp

@@ -133,7 +144,9 @@ This should generate 5 files in your ``<output_dir>``:
 Deploying Kuryr resources on Kubernetes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-To deploy the files on your Kubernetes cluster run: ::
+To deploy the files on your Kubernetes cluster run:

+.. code-block:: console
+
 $ kubectl apply -f config_map.yml -n kube-system
 $ kubectl apply -f certificates_secret.yml -n kube-system

@@ -148,7 +161,10 @@ After successful completion:
 * kuryr-cni gets installed as a daemonset object on all the nodes in
 kube-system namespace

-To see kuryr-controller logs ::
+To see kuryr-controller logs:
+
+.. code-block:: console
+
 $ kubectl logs <pod-name>

 NOTE: kuryr-cni has no logs and to debug failures you need to check out kubelet

@@ -2,7 +2,9 @@
 Inspect default Configuration
 =============================

-By default, DevStack creates networks called ``private`` and ``public``::
+By default, DevStack creates networks called ``private`` and ``public``:

+.. code-block:: console
+
 $ openstack network list --project demo
 +--------------------------------------+---------+----------------------------------------------------------------------------+

@@ -18,9 +20,10 @@ By default, DevStack creates networks called ``private`` and ``public``::
 | 646baf54-6178-4a26-a52b-68ad0ba1e057 | public | 00e0b1e4-4bee-4204-bd02-610291c56334, b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b |
 +--------------------------------------+--------+----------------------------------------------------------------------------+

-
 And kuryr-kubernetes creates two extra ones for the kubernetes services and
-pods under the project k8s::
+pods under the project k8s:

+.. code-block:: console
+
 $ openstack network list --project k8s
 +--------------------------------------+-----------------+--------------------------------------+

@@ -30,8 +33,9 @@ pods under the project k8s::
 | d4be7efc-b84d-480e-a1db-34205877e6c4 | k8s-service-net | 55405e9d-4e25-4a55-bac2-e25ee88584e1 |
 +--------------------------------------+-----------------+--------------------------------------+

+And similarly for the subnets:

-And similarly for the subnets::
+.. code-block:: console

 $ openstack subnet list --project k8s
 +--------------------------------------+--------------------+--------------------------------------+---------------+

@@ -41,9 +45,9 @@ And similarly for the subnets::
 | 55405e9d-4e25-4a55-bac2-e25ee88584e1 | k8s-service-subnet | d4be7efc-b84d-480e-a1db-34205877e6c4 | 10.0.0.128/26 |
 +--------------------------------------+--------------------+--------------------------------------+---------------+

+In addition to that, security groups for both pods and services are created too:

-In addition to that, security groups for both pods and services are created
-too::
+.. code-block:: console

 $ openstack security group list --project k8s
 +--------------------------------------+--------------------+------------------------+----------------------------------+

@@ -53,9 +57,10 @@ too::
 | fe7cee41-6021-4d7b-ab03-1ce1e391a1ca | default | Default security group | 49e2683370f245e38ac2d6a8c16697b3 |
 +--------------------------------------+--------------------+------------------------+----------------------------------+

-
 And finally, the loadbalancer for the kubernetes API service is also created,
-with the subsequence listener, pool and added members::
+with the subsequence listener, pool and added members:

+.. code-block:: console
+
 $ openstack loadbalancer list
 +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

@@ -14,23 +14,31 @@ and dependencies of both systems.
 Cloning required repositories
 -----------------------------

-First of all you need to clone DevStack: ::
+First of all you need to clone DevStack:

+.. code-block:: console
+
 $ git clone https://opendev.org/openstack-dev/devstack

-Create user *stack*, give it required permissions and log in as that user: ::
+Create user *stack*, give it required permissions and log in as that user:

+.. code-block:: console
+
 $ ./devstack/tools/create-stack-user.sh
 $ sudo su stack

 *stack* user has ``/opt/stack`` set as its home directory. It will need its own
-repository with DevStack. Also clone kuryr-kubernetes: ::
+repository with DevStack. Also clone kuryr-kubernetes:

+.. code-block:: console
+
 $ git clone https://opendev.org/openstack-dev/devstack
 $ git clone https://opendev.org/openstack/kuryr-kubernetes

 Copy sample ``local.conf`` (DevStack configuration file) to devstack
-directory: ::
+directory:

+.. code-block:: console
+
 $ cp kuryr-kubernetes/devstack/local.conf.sample devstack/local.conf

@@ -51,12 +59,16 @@ Now edit ``devstack/local.conf`` to set up some initial options:
 * If you already have Docker installed on the machine, you can comment out line
 starting with ``enable_plugin devstack-plugin-container``.

-Once ``local.conf`` is configured, you can start the installation: ::
+Once ``local.conf`` is configured, you can start the installation:

+.. code-block:: console
+
 $ ./devstack/stack.sh

 Installation takes from 15 to 30 minutes. Once that's done you should see
-similar output: ::
+similar output:

+.. code-block:: console
+
 =========================
 DevStack Component Timing

@@ -94,7 +106,9 @@ similar output: ::
 Change: 301d4d1678c3c1342abc03e51a74574f7792a58b Merge "Use "pip list" in check_libs_from_git" 2017-10-04 07:22:59 +0000
 OS Version: CentOS 7.4.1708 Core

-You can test DevStack by sourcing credentials and trying some commands: ::
+You can test DevStack by sourcing credentials and trying some commands:

+.. code-block:: console
+
 $ source /devstack/openrc admin admin
 $ openstack service list

@@ -107,13 +121,17 @@ You can test DevStack by sourcing credentials and trying some commands: ::
 +----------------------------------+------------------+------------------+

 To verify if Kubernetes is running properly, list its nodes and check status of
-the only node you should have. The correct value is "Ready": ::
+the only node you should have. The correct value is "Ready":

+.. code-block:: console
+
 $ kubectl get nodes
 NAME STATUS AGE VERSION
 localhost Ready 2m v1.6.2

-To test kuryr-kubernetes itself try creating a Kubernetes pod: ::
+To test kuryr-kubernetes itself try creating a Kubernetes pod:

+.. code-block:: console
+
 $ kubectl run --image busybox test -- sleep 3600
 $ kubectl get pods -o wide

@@ -121,13 +139,17 @@ To test kuryr-kubernetes itself try creating a Kubernetes pod: ::
 test-3202410914-1dp7g 0/1 ContainerCreating 0 7s <none> localhost

 After a moment (even up to few minutes as Docker image needs to be downloaded)
-you should see that pod got the IP from OpenStack network: ::
+you should see that pod got the IP from OpenStack network:

+.. code-block:: console
+
 $ kubectl get pods -o wide
 NAME READY STATUS RESTARTS AGE IP NODE
 test-3202410914-1dp7g 1/1 Running 0 35s 10.0.0.73 localhost

-You can verify that this IP is really assigned to Neutron port: ::
+You can verify that this IP is really assigned to Neutron port:

+.. code-block:: console
+
 [stack@localhost kuryr-kubernetes]$ openstack port list | grep 10.0.0.73
 | 3ce7fd13-ad0a-4e92-9b6f-0d38d50b1699 | | fa:16:3e:8e:f4:30 | ip_address='10.0.0.73', subnet_id='ddfbc8e9-68da-48f9-8a05-238ea0607e0d' | ACTIVE |

@@ -12,7 +12,9 @@ Installation

 To configure DevStack to install Kuryr services as containerized Kubernetes
 resources, you need to switch ``KURYR_K8S_CONTAINERIZED_DEPLOYMENT``. Add this
-line to your ``local.conf``: ::
+line to your ``local.conf``:

+.. code-block:: ini
+
 KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True

@@ -32,7 +34,9 @@ Changing configuration
 ----------------------

 To change kuryr.conf files that are put into containers you need to edit the
-associated ConfigMap. On DevStack deployment this can be done using: ::
+associated ConfigMap. On DevStack deployment this can be done using:

+.. code-block:: console
+
 $ kubectl -n kube-system edit cm kuryr-config

@@ -54,7 +58,9 @@ kuryr-controller
 ~~~~~~~~~~~~~~~~

 To restart kuryr-controller and let it load new image and configuration, simply
-kill existing pod: ::
+kill existing pod:

+.. code-block:: console
+
 $ kubectl -n kube-system get pods
 <find kuryr-controller pod you want to restart>

@@ -71,7 +77,9 @@ actually idling with ``sleep infinity`` once all the files are copied into
 correct locations on Kubernetes host.

 You can force it to redeploy new files by killing it. DaemonSet controller
-should make sure to restart it with new image and configuration files. ::
+should make sure to restart it with new image and configuration files.

+.. code-block:: console
+
 $ kubectl -n kube-system get pods
 <find kuryr-cni pods you want to restart>

@@ -38,14 +38,14 @@ to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).

 2. Create the ``stack`` user.

-::
+.. code-block:: console

 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ sudo ./devstack/tools/create-stack-user.sh

 3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

-::
+.. code-block:: console

 $ sudo su - stack
 $ git clone https://opendev.org/openstack-dev/devstack.git

@@ -58,7 +58,7 @@ you can start with. You may change some values for the various variables in
 that file, like password settings or what LBaaS service provider to use.
 Feel free to edit it if you'd like, but it should work as-is.

-::
+.. code-block:: console

 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.df.sample local.conf

@@ -74,12 +74,15 @@ Optionally, the ports pool funcionality can be enabled by following:
 Expect it to take a while. It installs required packages, clones a bunch
 of git repos, and installs everything from these git repos.

-::
+.. code-block:: console

 $ ./stack.sh


 Once DevStack completes successfully, you should see output that looks
-something like this::
+something like this:
+
+.. code-block:: console
+
 This is your host IP address: 192.168.5.10
 This is your host IPv6 address: ::1

@@ -93,7 +96,7 @@ something like this::
 Create NAT rule that will cause "external" traffic from your instances to get
 rewritten to your network controller's ip address and sent out on the network:

-::
+.. code-block:: console

 $ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE

@@ -134,7 +137,9 @@ Undercloud deployment

 The steps to deploy the undercloud environment are the same as described above
 for the `Single Node Test Environment` with the different sample local.conf to
-use (step 4), in this case::
+use (step 4), in this case:

+.. code-block:: console
+
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf

@@ -172,7 +177,9 @@ Once the VM is up and running, we can start with the overcloud configuration.
 The steps to perform are the same as without Dragonflow integration, i.e., the
 same steps as for ML2/OVS:

-1. Log in into the VM::
+1. Log in into the VM:

+.. code-block:: console
+
 $ ssh -i id_rsa_demo centos@FLOATING_IP

@@ -23,23 +23,31 @@ nested MACVLAN driver rather than VLAN and trunk ports.
 4. Once devstack is done and all services are up inside VM. Next steps are to
 configure the missing information at ``/etc/kuryr/kuryr.conf``:

-- Configure worker VMs subnet::
+- Configure worker VMs subnet:

+.. code-block:: ini
+
 [pod_vif_nested]
 worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>

-- Configure "pod_vif_driver" as "nested-macvlan"::
+- Configure "pod_vif_driver" as "nested-macvlan":

+.. code-block:: ini
+
 [kubernetes]
 pod_vif_driver = nested-macvlan

-- Configure binding section::
+- Configure binding section:

+.. code-block:: ini
+
 [binding]
 link_iface = <VM interface name eg. eth0>

-- Restart kuryr-k8s-controller::
+- Restart kuryr-k8s-controller:

-sudo systemctl restart devstack@kuryr-kubernetes.service
+.. code-block:: console
+
+$ sudo systemctl restart devstack@kuryr-kubernetes.service

 Now launch pods using kubectl, Undercloud Neutron will serve the networking.

@@ -9,7 +9,9 @@ for the VM:

 1. To install OpenStack services run devstack with
 ``devstack/local.conf.pod-in-vm.undercloud.sample``. Ensure that "trunk"
-service plugin is enabled in ``/etc/neutron/neutron.conf``::
+service plugin is enabled in ``/etc/neutron/neutron.conf``:

+.. code-block:: ini
+
 [DEFAULT]
 service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin

@@ -26,19 +28,24 @@ for the VM:
 - Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
 but first fill in the needed information:

-- Point to the undercloud deployment by setting::
+- Point to the undercloud deployment by setting:

+.. code-block:: bash
+
 SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP

-
 - Fill in the subnetpool id of the undercloud deployment, as well as
 the router where the new pod and service networks need to be
-connected::
+connected:

+.. code-block:: bash
+
 KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
 KURYR_NEUTRON_DEFAULT_ROUTER=router1

-- Ensure the nested-vlan driver is going to be set by setting::
+- Ensure the nested-vlan driver is going to be set by setting:

+.. code-block:: bash
+
 KURYR_POD_VIF_DRIVER=nested-vlan

@@ -48,31 +55,40 @@ for the VM:
 .. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html

 - [OPTIONAL] If you want to enable the subport pools driver and the
-VIF Pool Manager you need to include::
+VIF Pool Manager you need to include:

+.. code-block:: bash
+
 KURYR_VIF_POOL_MANAGER=True

-
 4. Once devstack is done and all services are up inside VM. Next steps are to
 configure the missing information at ``/etc/kuryr/kuryr.conf``:

-- Configure worker VMs subnet::
+- Configure worker VMs subnet:

+.. code-block:: ini
+
 [pod_vif_nested]
 worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>

-- Configure binding section::
+- Configure binding section:

+.. code-block:: ini
+
 [binding]
 driver = kuryr.lib.binding.drivers.vlan
 link_iface = <VM interface name eg. eth0>

-- Restart kuryr-k8s-controller::
+- Restart kuryr-k8s-controller:

-sudo systemctl restart devstack@kuryr-kubernetes.service
+.. code-block:: console
+
+$ sudo systemctl restart devstack@kuryr-kubernetes.service

-- Restart kuryr-daemon::
+- Restart kuryr-daemon:

-sudo systemctl restart devstack@kuryr-daemon.service
+.. code-block:: console
+
+$ sudo systemctl restart devstack@kuryr-daemon.service

 Now launch pods using kubectl, Undercloud Neutron will serve the networking.

@@ -33,14 +33,14 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).

 2. Create the ``stack`` user.

-::
+.. code-block:: console

 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ sudo ./devstack/tools/create-stack-user.sh

 3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

-::
+.. code-block:: console

 $ sudo su - stack
 $ git clone https://opendev.org/openstack-dev/devstack.git

@@ -53,7 +53,7 @@ can start with. For example, you may want to set some values for the various
 PASSWORD variables in that file, or change the LBaaS service provider to use.
 Feel free to edit it if you'd like, but it should work as-is.

-::
+.. code-block:: console

 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf

@@ -69,12 +69,14 @@ Optionally, the ports pool funcionality can be enabled by following:
 This is going to take a while. It installs a bunch of packages, clones a bunch
 of git repos, and installs everything from these git repos.

-::
+.. code-block:: console

 $ ./stack.sh

 Once DevStack completes successfully, you should see output that looks
-something like this::
+something like this:

+.. code-block:: console
+
 This is your host IP address: 192.168.5.10
 This is your host IPv6 address: ::1

@@ -82,24 +84,22 @@ something like this::
 The default users are: admin and demo
 The password: pass

-
 6. Extra configurations.

 Devstack does not wire up the public network by default so we must do
 some extra steps for floating IP usage as well as external connectivity:

-::
+.. code-block:: console

 $ sudo ip link set br-ex up
 $ sudo ip route add 172.24.4.0/24 dev br-ex
 $ sudo ip addr add 172.24.4.1/24 dev br-ex

-
 Then you can create forwarding and NAT rules that will cause "external"
 traffic from your instances to get rewritten to your network controller's
 ip address and sent out on the network:

-::
+.. code-block:: console

 $ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
 $ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT

@@ -142,12 +142,13 @@ Undercloud deployment

 The steps to deploy the undercloud environment are the same described above
 for the `Single Node Test Environment` with the different of the sample
-local.conf to use (step 4), in this case::
+local.conf to use (step 4), in this case:

+.. code-block:: console
+
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.odl.sample local.conf

-
 The main differences with the default odl local.conf sample are that:

 - There is no need to enable the kuryr-kubernetes plugin as this will be

@@ -179,7 +180,9 @@ Once the VM is up and running, we can start with the overcloud configuration.
 The steps to perform are the same as without ODL integration, i.e., the
 same steps as for ML2/OVS:

-1. Log in into the VM::
+1. Log in into the VM:

+.. code-block:: console
+
 $ ssh -i id_rsa_demo centos@FLOATING_IP

@@ -30,14 +30,14 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).

 2. Create the ``stack`` user.

-::
+.. code-block:: console

 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ sudo ./devstack/tools/create-stack-user.sh

 3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

-::
+.. code-block:: console

 $ sudo su - stack
 $ git clone https://opendev.org/openstack-dev/devstack.git

@@ -50,12 +50,11 @@ can start with. For example, you may want to set some values for the various
 PASSWORD variables in that file, or change the LBaaS service provider to use.
 Feel free to edit it if you'd like, but it should work as-is.

-::
+.. code-block:: console

 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf

-
 Note that due to OVN compiling OVS from source at
 /usr/local/var/run/openvswitch we need to state at the local.conf that the path
 is different from the default one (i.e., /var/run/openvswitch).

@@ -68,7 +67,7 @@ Optionally, the ports pool functionality can be enabled by following:
 This is going to take a while. It installs a bunch of packages, clones a bunch
 of git repos, and installs everything from these git repos.

-::
+.. code-block:: console

 $ ./stack.sh

@@ -87,18 +86,17 @@ something like this::
 Devstack does not wire up the public network by default so we must do
 some extra steps for floating IP usage as well as external connectivity:

-::
+.. code-block:: console

 $ sudo ip link set br-ex up
 $ sudo ip route add 172.24.4.0/24 dev br-ex
 $ sudo ip addr add 172.24.4.1/24 dev br-ex

-
 Then you can create forwarding and NAT rules that will cause "external"
 traffic from your instances to get rewritten to your network controller's
 ip address and sent out on the network:

-::
+.. code-block:: console

 $ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
 $ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT

@@ -136,12 +134,13 @@ Undercloud deployment

 The steps to deploy the undercloud environment are the same described above
 for the `Single Node Test Environment` with the different of the sample
-local.conf to use (step 4), in this case::
+local.conf to use (step 4), in this case:

+.. code-block:: console
+
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf

-
 The main differences with the default ovn local.conf sample are that:

 - There is no need to enable the kuryr-kubernetes plugin as this will be

@@ -171,7 +170,9 @@ Once the VM is up and running, we can start with the overcloud configuration.
 The steps to perform are the same as without OVN integration, i.e., the
 same steps as for ML2/OVS:

-1. Log in into the VM::
+1. Log in into the VM:

+.. code-block:: console
+
 $ ssh -i id_rsa_demo centos@FLOATING_IP

@@ -5,29 +5,34 @@ How to enable ports pool with devstack
 To enable the utilization of the ports pool feature through devstack, the next
 options needs to be set at the local.conf file:

-1. First, you need to enable the pools by setting::
+1. First, you need to enable the pools by setting:

+.. code-block:: bash
+
 KURYR_USE_PORT_POOLS=True

-
 2. Then, the proper pool driver needs to be set. This means that for the
 baremetal case you need to ensure the pod vif driver and the vif pool driver
-are set to the right baremetal drivers, for instance::
+are set to the right baremetal drivers, for instance:

+.. code-block:: bash
+
 KURYR_POD_VIF_DRIVER=neutron-vif
 KURYR_VIF_POOL_DRIVER=neutron

+And if the use case is the nested one, then they should be set to:

-And if the use case is the nested one, then they should be set to::
+.. code-block:: bash

 KURYR_POD_VIF_DRIVER=nested-vlan
 KURYR_VIF_POOL_DRIVER=nested

-
 3. Then, in case you want to set a limit to the maximum number of ports, or
 increase/reduce the default one for the minimum number, as well as to modify
 the way the pools are repopulated, both in time as well as regarding bulk
-operation sizes, the next option can be included and modified accordingly::
+operation sizes, the next option can be included and modified accordingly:

+.. code-block:: bash
+
 KURYR_PORT_POOL_MIN=5
 KURYR_PORT_POOL_MAX=0

@@ -3,7 +3,9 @@ Watching Kubernetes api-server over HTTPS
 =========================================

 Add absolute path of client side cert file and key file for Kubernetes server
-in ``kuryr.conf``::
+in ``kuryr.conf``:

+.. code-block:: ini
+
 [kubernetes]
 api_root = https://your_server_address:server_ssl_port

@@ -11,13 +13,17 @@ in ``kuryr.conf``::
 ssl_client_key_file = <absolute file path eg. /etc/kubernetes/admin.key>

 If server ssl certification verification is also to be enabled, add absolute
-path to the ca cert::
+path to the ca cert:

+.. code-block:: ini
+
 [kubernetes]
 ssl_ca_crt_file = <absolute file path eg. /etc/kubernetes/ca.crt>
 ssl_verify_server_crt = True

-If want to query HTTPS Kubernetes api server with ``--insecure`` mode::
+If want to query HTTPS Kubernetes api server with ``--insecure`` mode:

+.. code-block:: ini
+
 [kubernetes]
 ssl_verify_server_crt = False

@@ -10,7 +10,9 @@ Kuryr-Kubernetes to achieve an IPv6 only Kubernetes cluster.
 Setting it up
 -------------

-#. Create pods network::
+#. Create pods network:

+.. code-block:: console
+
 $ openstack network create pods
 +---------------------------+--------------------------------------+

@@ -45,7 +47,9 @@ Setting it up
 | updated_at | 2017-08-11T10:51:25Z |
 +---------------------------+--------------------------------------+

-#. Create the pod subnet::
+#. Create the pod subnet:

+.. code-block:: console
+
 $ openstack subnet create --network pods --no-dhcp \
 --subnet-range fd10:0:0:1::/64 \

@@ -79,7 +83,9 @@ Setting it up
 +-------------------------+-------------------------------------------+


-#. Create services network::
+#. Create services network:

+.. code-block:: console
+
 $ openstack network create services
 +---------------------------+--------------------------------------+

@@ -115,7 +121,9 @@ Setting it up
 +---------------------------+--------------------------------------+

 #. Create services subnet. We reserve the first half of the subnet range for the
-VIPs and the second half for the loadbalancer vrrp ports ::
+VIPs and the second half for the loadbalancer vrrp ports.

+.. code-block:: console
+
 $ openstack subnet create --network services --no-dhcp \
 --gateway fd10:0:0:2:0:0:0:fffe \

@@ -150,7 +158,9 @@ Setting it up
 | use_default_subnet_pool | None |
 +-------------------------+--------------------------------------+

-#. Create a router::
+#. Create a router:

+.. code-block:: console
+
 $ openstack router create k8s-ipv6
 +-------------------------+--------------------------------------+

@@ -175,16 +185,22 @@ Setting it up
 | updated_at | 2017-08-11T13:17:10Z |
 +-------------------------+--------------------------------------+

-#. Add the router to the pod subnet::
+#. Add the router to the pod subnet:

+.. code-block:: console
+
 $ openstack router add subnet k8s-ipv6 pod_subnet

-#. Add the router to the service subnet::
+#. Add the router to the service subnet:

+.. code-block:: console
+
 $ openstack router add subnet k8s-ipv6 service_subnet

 #. Modify Kubernetes API server command line so that it points to the right
-CIDR::
+CIDR:

+.. code-block:: console
+
 --service-cluster-ip-range=fd10:0:0:2::/113

@@ -203,7 +219,9 @@ Troubleshooting

 This means that most likely you forgot to create a security group or rule
 for the pods to be accessible by the service CIDR. You can find an example
-here::
+here:

+.. code-block:: console
+
 $ openstack security group create service_pod_access_v6
 +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+

@@ -5,7 +5,9 @@ Installing kuryr-kubernetes manually
 Configure kuryr-k8s-controller
 ------------------------------

-Install ``kuryr-k8s-controller`` in a virtualenv::
+Install ``kuryr-k8s-controller`` in a virtualenv:

+.. code-block:: console
+
 $ mkdir kuryr-k8s-controller
 $ cd kuryr-k8s-controller

@@ -14,19 +16,22 @@ Install ``kuryr-k8s-controller`` in a virtualenv::
 $ . env/bin/activate
 $ pip install -e kuryr-kubernetes

-
 In neutron or in horizon create subnet for pods, subnet for services and a
 security-group for pods. You may use existing if you like. In case that you
 decide to create new networks and subnets with the cli, you can follow the
 services guide, specifically its :ref:`k8s_default_configuration` section.

-Create ``/etc/kuryr/kuryr.conf``::
+Create ``/etc/kuryr/kuryr.conf``:

+.. code-block:: console
+
 $ cd kuryr-kubernetes
 $ ./tools/generate_config_file_samples.sh
 $ cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf

-Edit ``kuryr.conf``::
+Edit ``kuryr.conf``:

+.. code-block:: ini
+
 [DEFAULT]
 use_stderr = true

@@ -71,11 +76,13 @@ Neutron-LBaaSv2):

 * There should be a router between the two subnets.
 * The pod_security_groups setting should include a security group with a rule
-granting access to all the CIDR of the service subnet, e.g.::
+granting access to all the CIDR of the service subnet, e.g.:

-openstack security group create --project k8s_cluster_project \
+.. code-block:: console
+
+$ openstack security group create --project k8s_cluster_project \
 service_pod_access_sg
-openstack security group rule create --project k8s_cluster_project \
+$ openstack security group rule create --project k8s_cluster_project \
 --remote-ip cidr_of_service_subnet --ethertype IPv4 --protocol tcp \
 service_pod_access_sg

@@ -85,23 +92,28 @@ Neutron-LBaaSv2):
 Alternatively, to support Octavia L2 mode:

 * The pod security_groups setting should include a security group with a rule
-granting access to all the CIDR of the pod subnet, e.g.::
+granting access to all the CIDR of the pod subnet, e.g.:

-openstack security group create --project k8s_cluster_project \
+.. code-block:: console
+
+$ openstack security group create --project k8s_cluster_project \
 octavia_pod_access_sg
-openstack security group rule create --project k8s_cluster_project \
+$ openstack security group rule create --project k8s_cluster_project \
 --remote-ip cidr_of_pod_subnet --ethertype IPv4 --protocol tcp \
 octavia_pod_access_sg

 * The uuid of this security group id should be added to the comma separated
 list of pod security groups. *pod_security_groups* in *[neutron_defaults]*.

+Run kuryr-k8s-controller:

-Run kuryr-k8s-controller::
+.. code-block:: console

 $ kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d

-Alternatively you may run it in screen::
+Alternatively you may run it in screen:

+.. code-block:: console
+
 $ screen -dm kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d

@@ -112,7 +124,9 @@ Configure kuryr-cni
 On every kubernetes minion node (and on master if you intend to run containers
 there) you need to configure kuryr-cni.

-Install ``kuryr-cni`` in a virtualenv::
+Install ``kuryr-cni`` in a virtualenv:

+.. code-block:: console
+
 $ mkdir kuryr-k8s-cni
 $ cd kuryr-k8s-cni

@@ -121,13 +135,17 @@ Install ``kuryr-cni`` in a virtualenv::
 $ git clone https://opendev.org/openstack/kuryr-kubernetes
 $ pip install -e kuryr-kubernetes

-Create ``/etc/kuryr/kuryr.conf``::
+Create ``/etc/kuryr/kuryr.conf``:

+.. code-block:: console
+
 $ cd kuryr-kubernetes
 $ ./tools/generate_config_file_samples.sh
 $ cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf

-Edit ``kuryr.conf``::
+Edit ``kuryr.conf``:

+.. code-block:: ini
+
 [DEFAULT]
 use_stderr = true

@@ -135,14 +153,18 @@ Edit ``kuryr.conf``::
 [kubernetes]
 api_root = http://{ip_of_kubernetes_apiserver}:8080

-Link the CNI binary to CNI directory, where kubelet would find it::
+Link the CNI binary to CNI directory, where kubelet would find it:

+.. code-block:: console
+
 $ mkdir -p /opt/cni/bin
 $ ln -s $(which kuryr-cni) /opt/cni/bin/

 Create the CNI config file for kuryr-cni: ``/etc/cni/net.d/10-kuryr.conf``.
 Kubelet would only use the lexicographically first file in that directory, so
-make sure that it is kuryr's config file::
+make sure that it is kuryr's config file:

+.. code-block:: json
+
 {
 "cniVersion": "0.3.1",

@@ -155,10 +177,12 @@ make sure that it is kuryr's config file::
 Install ``os-vif`` and ``oslo.privsep`` libraries globally. These modules
 are used to plug interfaces and would be run with raised privileges. ``os-vif``
 uses ``sudo`` to raise privileges, and they would need to be installed globally
-to work correctly::
+to work correctly:

-deactivate
-sudo pip install 'oslo.privsep>=1.20.0' 'os-vif>=1.5.0'
+.. code-block:: console
+
+$ deactivate
+$ sudo pip install 'oslo.privsep>=1.20.0' 'os-vif>=1.5.0'


 Configure Kuryr CNI Daemon

@@ -177,7 +201,9 @@ steps need to be repeated.
 crucial for scalability of the whole deployment. In general the timeout to
 serve CNI request from kubelet to Kuryr is 180 seconds. After that time
 kubelet will retry the request. Additionally there are two configuration
-options::
+options:

+.. code-block:: ini
+
 [cni_daemon]
 vif_annotation_timeout=60

@@ -198,11 +224,15 @@ steps need to be repeated.
 value denotes *maximum* time to wait for kernel to complete the operations.
 If operation succeeds earlier, request isn't delayed.

-Run kuryr-daemon::
+Run kuryr-daemon:

+.. code-block:: console
+
 $ kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d

-Alternatively you may run it in screen::
+Alternatively you may run it in screen:

+.. code-block:: console
+
 $ screen -dm kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d

@@ -216,11 +246,14 @@ and readiness.

 If you want to make use of all of its facilities, you should run the
 kuryr-daemon in its own cgroup. It will get its own cgroup if you:

 * Run it as a systemd service,
 * run it containerized,
 * create a memory cgroup for it.

-In order to make the daemon run in its own cgroup, you can do the following::
+In order to make the daemon run in its own cgroup, you can do the following:
+
+.. code-block:: console
+
 systemd-run --unit=kuryr-daemon --scope --slice=kuryr-cni \
 kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d

@@ -229,7 +262,9 @@ After this, with the CNI daemon running inside its own cgroup, we can enable
 the CNI daemon memory health check. This health check allows us to limit the
 memory consumption of the CNI Daemon. The health checks will fail if CNI starts
 taking more memory that it is set and the orchestration layer should restart.
-The setting is::
+The setting is:

+.. code-block:: ini
+
 [cni_health_server]
 max_memory_usage = 4096 # Set the memory limit to 4GiB

@@ -8,80 +8,90 @@ the next steps are needed:
 1. Enable the namespace handler to reach to namespace events, in this case,
 creation and deletion. To do that you need to add it to the list of the
 enabled handlers at kuryr.conf (details on how to edit this for
-containerized deployment can be found at :doc:`./devstack/containerized`)::
+containerized deployment can be found at :doc:`./devstack/containerized`):

+.. code-block:: ini
+
 [kubernetes]
 enabled_handlers=vif,lb,lbaasspec,namespace

-
 Note that if you also want to enable prepopulation of ports pools upon new
 namespace creation, you need to add the kuryrnet handler (more details on
-:doc:`./ports-pool`)::
+:doc:`./ports-pool`):

+.. code-block:: ini
+
 [kubernetes]
 enabled_handlers=vif,lb,lbaasspec,namespace,kuryrnet

-
 2. Enable the namespace subnet driver by modifying the default
-pod_subnet_driver option at kuryr.conf::
+pod_subnet_driver option at kuryr.conf:

+.. code-block:: ini
+
 [kubernetes]
 pod_subnets_driver = namespace

-
 In addition, to ensure that pods and services at one given namespace
 cannot reach (or be reached by) the ones at another namespace, except the
 pods at the default namespace that can reach (and be reached by) any pod at
-a different namespace, the next security group driver needs to be set too::
+a different namespace, the next security group driver needs to be set too:

+.. code-block:: ini
+
 [kubernetes]
 pod_security_groups_driver = namespace
 service_security_groups_driver = namespace

-
 3. Select (and create if needed) the subnet pool from where the new subnets
 will get their CIDR (e.g., the default on devstack deployment is
-shared-default-subnetpool-v4)::
+shared-default-subnetpool-v4):

+.. code-block:: ini
+
 [namespace_subnet]
 pod_subnet_pool = SUBNET_POOL_ID

-
 4. Select (and create if needed) the router where the new subnet will be
-connected (e.g., the default on devstack deployments is router1)::
+connected (e.g., the default on devstack deployments is router1):

+.. code-block:: ini
+
 [namespace_subnet]
 pod_router = ROUTER_ID

-
 Note that if a new router is created, it must ensure the connectivity
 requirements between pod, service and public subnets, as in the case for
 the default subnet driver.

-
 5. Select (and create if needed) the security groups to be attached to the
 pods at the default namespace and to the others, enabling the cross access
-between them::
+between them:

+.. code-block:: ini
+
 [namespace_sg]
 sg_allow_from_namespaces = SG_ID_1 # Makes SG_ID_1 allow traffic from the sg sg_allow_from_default
 sg_allow_from_default = SG_ID_2 # Makes SG_ID_2 allow traffic from the sg sg_allow_from_namespaces

-
 Note you need to restart the kuryr controller after applying the above
-detailed steps. For devstack non-containerized deployments::
+detailed steps. For devstack non-containerized deployments:

-sudo systemctl restart devstack@kuryr-kubernetes.service
+.. code-block:: console
+
+$ sudo systemctl restart devstack@kuryr-kubernetes.service

-And for containerized deployments::
+And for containerized deployments:

-kubectl -n kube-system get pod | grep kuryr-controller
-kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME
+.. code-block:: console
+
+$ kubectl -n kube-system get pod | grep kuryr-controller
+$ kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME

 For directly enabling the driver when deploying with devstack, you just need
-to add the namespace handler and state the namespace subnet driver with::
+to add the namespace handler and state the namespace subnet driver with:

+.. code-block:: console
+
 KURYR_SUBNET_DRIVER=namespace
 KURYR_SG_DRIVER=namespace

@@ -98,12 +108,16 @@ to add the namespace handler and state the namespace subnet driver with::
 Testing the network per namespace functionality
 -----------------------------------------------

-1. Create two namespaces::
+1. Create two namespaces:

+.. code-block:: console
+
 $ kubectl create namespace test1
 $ kubectl create namespace test2

-2. Check resources has been created::
+2. Check resources has been created:

+.. code-block:: console
+
 $ kubectl get namespaces
 NAME STATUS AGE

@@ -122,7 +136,9 @@ Testing the network per namespace functionality
 $ openstack subnet list | grep test1
 | 8640d134-5ea2-437d-9e2a-89236f6c0198 | ns/test1-subnet | 7c7b68c5-d3c4-431c-9f69-fbc777b43ee5 | 10.0.1.128/26 |

-3. Create a pod in the created namespaces::
+3. Create a pod in the created namespaces:

+.. code-block:: console
+
 $ kubectl run -n test1 --image kuryr/demo demo
 deployment "demo" created

@@ -138,8 +154,9 @@ Testing the network per namespace functionality
 NAME READY STATUS RESTARTS AGE IP NODE
 demo-5135352253-dfghd 1/1 Running 0 7s 10.0.1.134 node1

+4. Create a service:

-4. Create a service::
+.. code-block:: console

 $ kubectl expose -n test1 deploy/demo --port 80 --target-port 8080
 service "demo" exposed

@@ -148,8 +165,9 @@ Testing the network per namespace functionality
 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
 demo ClusterIP 10.0.0.141 <none> 80/TCP 18s

+5. Test service connectivity from both namespaces:

-5. Test service connectivity from both namespaces::
+.. code-block:: console

 $ kubectl exec -n test1 -it demo-5995548848-lmmjc /bin/sh
 test-1-pod$ curl 10.0.0.141

@@ -159,9 +177,10 @@ Testing the network per namespace functionality
 test-2-pod$ curl 10.0.0.141
 ## No response

-
 6. And finally, to remove the namespace and all its resources, including
-openstack networks, kuryrnet CRD, svc, pods, you just need to do::
+openstack networks, kuryrnet CRD, svc, pods, you just need to do:

+.. code-block:: console
+
 $ kubectl delete namespace test1
 $ kubectl delete namespace test2

@@ -5,21 +5,25 @@ Enable network policy support functionality
 Enable policy, pod_label and namespace handlers to respond to network policy
 events. As this is not done by default you'd have to explicitly add that to
 the list of enabled handlers at kuryr.conf (further info on how to do this can
-be found at :doc:`./devstack/containerized`)::
+be found at :doc:`./devstack/containerized`):

+.. code-block:: ini
+
 [kubernetes]
 enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy

-
 Note that if you also want to enable prepopulation of ports pools upon new
 namespace creation, you need to add the kuryrnet handler (more details on
-:doc:`./ports-pool`)::
+:doc:`./ports-pool`):

+.. code-block:: ini
+
 [kubernetes]
 enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy,kuryrnet

+After that, enable also the security group drivers for policies:

-After that, enable also the security group drivers for policies::
+.. code-block:: ini

 [kubernetes]
 service_security_groups_driver = policy

@ -30,39 +34,53 @@ After that, enable also the security group drivers for policies::
|
|||
The correct behavior for pods that have no network policy applied is to
|
||||
allow all ingress and egress traffic. If you want that to be enforced,
|
||||
please make sure to create an SG allowing all traffic and add it to
|
||||
``[neutron_defaults]pod_security_groups`` setting in ``kuryr.conf``::
|
||||
``[neutron_defaults]pod_security_groups`` setting in ``kuryr.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[neutron_defaults]
|
||||
pod_security_groups = ALLOW_ALL_SG_ID
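
For reference, such an allow-all SG could be created along the following
lines (a sketch; egress is already allowed by the default rules of a new SG,
and the exact rule flags may vary between client versions):

.. code-block:: console

$ openstack security group create allow-all
$ openstack security group rule create --ingress --ethertype IPv4 allow-all
$ openstack security group rule create --ingress --ethertype IPv6 allow-all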

Enable the namespace subnet driver by modifying the default pod_subnets_driver
option::
option:

.. code-block:: ini

[kubernetes]
pod_subnets_driver = namespace

Select the subnet pool from where the new subnets will get their CIDR::
Select the subnet pool from where the new subnets will get their CIDR:

.. code-block:: ini

[namespace_subnet]
pod_subnet_pool = SUBNET_POOL_ID

Lastly, select the router where the new subnet will be connected::
Lastly, select the router where the new subnet will be connected:

.. code-block:: ini

[namespace_subnet]
pod_router = ROUTER_ID
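
The router ID can be looked up with the openstack CLI, e.g. (the router name
kuryr-kubernetes is just an example):

.. code-block:: console

$ openstack router show kuryr-kubernetes -f value -c id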

Note you need to restart the kuryr controller after applying the above step.
For devstack non-containerized deployments::
For devstack non-containerized deployments:

.. code-block:: console

$ sudo systemctl restart devstack@kuryr-kubernetes.service

Same for containerized deployments::
Same for containerized deployments:

.. code-block:: console

$ kubectl -n kube-system get pod | grep kuryr-controller
$ kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME

For directly enabling the driver when deploying with devstack, you just need
to add the policy, pod_label and namespace handlers and drivers with::
to add the policy, pod_label and namespace handlers and drivers with:

.. code-block:: bash

KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy
KURYR_SG_DRIVER=policy

@ -73,13 +91,18 @@ to add the policy, pod_label and namespace handler and drivers with::

If the loadbalancer maintains the source IP (such as ovn-octavia driver),
there is no need to enforce sg rules at the load balancer level. To disable
the enforcement, you need to set the following variable:

.. code-block:: bash

KURYR_ENFORCE_SG_RULES=False


Testing the network policy support functionality
------------------------------------------------

1. Given a yaml file with a network policy, such as::
1. Given a yaml file with a network policy, such as:

.. code-block:: yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy

@ -110,11 +133,15 @@ Testing the network policy support functionality

- protocol: TCP
port: 5978

2. Apply the network policy::
2. Apply the network policy:

.. code-block:: console

$ kubectl apply -f network_policy.yml

3. Check that the resources have been created::
3. Check that the resources have been created:

.. code-block:: console

$ kubectl get kuryrnetpolicies
NAME                     AGE

@ -127,7 +154,9 @@ Testing the network policy support functionality

$ openstack security group list | grep sg-test-network-policy
| dabdf308-7eed-43ef-a058-af84d1954acb | sg-test-network-policy

4. Check that the rules are in place for the security group::
4. Check that the rules are in place for the security group:

.. code-block:: console

$ kubectl get kuryrnetpolicy np-test-network-policy -o yaml

@ -201,7 +230,9 @@ Testing the network policy support functionality

| tcp         | 5978:5978  | egress    |
+-------------+------------+-----------+

5. Create a pod::
5. Create a pod:

.. code-block:: console

$ kubectl create deployment --image kuryr/demo demo
deployment "demo" created

@ -210,7 +241,9 @@ Testing the network policy support functionality

NAME                    READY     STATUS    RESTARTS   AGE       IP
demo-5558c7865d-fdkdv   1/1       Running   0          44s       10.0.0.68

6. Get the pod port and check its security group rules::
6. Get the pod port and check its security group rules:

.. code-block:: console

$ openstack port list --fixed-ip ip-address=10.0.0.68 -f value -c ID
5d29b83c-714c-4579-8987-d0c0558420b3

@ -227,11 +260,15 @@ Testing the network policy support functionality

| tcp         | 5978:5978  | egress    |
+-------------+------------+-----------+

7. Try to curl the pod on port 8080 (hint: it won't work!)::
7. Try to curl the pod on port 8080 (hint: it won't work!):

.. code-block:: console

$ curl 10.0.0.68:8080

8. Update the network policy to allow ingress on port 8080::
8. Update the network policy to allow ingress on port 8080:

.. code-block:: console

$ kubectl patch networkpolicy test-network-policy -p '{"spec":{"ingress":[{"ports":[{"port": 8080,"protocol": "TCP"}]}]}}'
networkpolicy "test-network-policy" patched

@ -306,19 +343,21 @@ Testing the network policy support functionality

| tcp         | 5978:5978  | egress    |
+-------------+------------+-----------+

9. Try to curl the pod ip after patching the network policy::
9. Try to curl the pod ip after patching the network policy:

.. code-block:: console

$ curl 10.0.0.68:8080
demo-5558c7865d-fdkdv: HELLO! I AM ALIVE!!!


Note the curl only works from pods (neutron ports) in a namespace that has
the label `project: default` as stated in the policy namespaceSelector.


10. We can also create a single pod, without a label, and check that there is
no connectivity to it, as it does not match the network policy
podSelector::
podSelector:

.. code-block:: console

$ cat sample-pod.yml
apiVersion: v1

@ -335,17 +374,19 @@ the label `project: default` as stated on the policy namespaceSelector.

$ curl demo-pod-IP:8080
NO REPLY


11. If we add to the pod a label that matches a network policy podSelector, in
this case 'project: default', the network policy will get applied on the
pod, and the traffic will be allowed::
pod, and the traffic will be allowed:

.. code-block:: console

$ kubectl label pod demo-pod project=default
$ curl demo-pod-IP:8080
demo-pod-XXX: HELLO! I AM ALIVE!!!

12. Confirm the teardown of the resources once the network policy is removed::
12. Confirm the teardown of the resources once the network policy is removed:

.. code-block:: console

$ kubectl delete -f network_policy.yml
$ kubectl get kuryrnetpolicies
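
The Neutron side can be verified in the same way; once the cleanup has
finished, the security group created for the policy should no longer be
listed:

.. code-block:: console

$ openstack security group list | grep sg-test-network-policy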


@ -84,34 +84,41 @@ Router:

Configure Kuryr to support L7 Router and OCP-Route resources
------------------------------------------------------------

1. Configure the L7 Router by adding the LB UUID at kuryr.conf::
1. Configure the L7 Router by adding the LB UUID at kuryr.conf:

.. code-block:: ini

[ingress]
l7_router_uuid = 99f580e6-d894-442a-bc5f-4d14b41e10d2
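
If you do not have the load balancer UUID at hand, it can be retrieved from
the list of existing load balancers:

.. code-block:: console

$ openstack loadbalancer list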


2. Enable the ocp-route and k8s-endpoint handlers. For that you need to add
these handlers to the enabled handlers list at kuryr.conf (details on how to
edit this for containerized deployment can be found at
:doc:`./devstack/containerized`)::
:doc:`./devstack/containerized`):

.. code-block:: ini

[kubernetes]
enabled_handlers=vif,lb,lbaasspec,ocproute,ingresslb

Note: you need to restart the kuryr controller after applying the above
detailed steps. For devstack non-containerized deployments::
detailed steps. For devstack non-containerized deployments:

sudo systemctl restart devstack@kuryr-kubernetes.service
.. code-block:: console

$ sudo systemctl restart devstack@kuryr-kubernetes.service

And for containerized deployments::
And for containerized deployments:

kubectl -n kube-system get pod | grep kuryr-controller
kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME
.. code-block:: console

$ kubectl -n kube-system get pod | grep kuryr-controller
$ kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME

For directly enabling both L7 router and OCP-Route handlers when deploying
with devstack, you just need to add the following at local.conf file::
with devstack, you just need to add the following at local.conf file:

.. code-block:: bash

KURYR_ENABLE_INGRESS=True
KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,ocproute,ingresslb

@ -120,14 +127,17 @@ with devstack, you just need to add the following at local.conf file::

Testing OCP-Route functionality
-------------------------------

1. Create a service::
1. Create a service:

.. code-block:: console

$ oc run --image=celebdor/kuryr-demo kuryr-demo
$ oc scale dc/kuryr-demo --replicas=2
$ oc expose dc/kuryr-demo --port 80 --target-port 8080

2. Create a Route object pointing to above service (kuryr-demo)::
2. Create a Route object pointing to above service (kuryr-demo):

.. code-block:: console

$ cat >> route.yaml << EOF
> apiVersion: v1

@ -142,8 +152,9 @@ Testing OCP-Route functionality

> EOF
$ oc create -f route.yaml

3. Curl L7 router's FIP using specified hostname::
3. Curl L7 router's FIP using specified hostname:

.. code-block:: console

$ curl --header 'Host: www.firstroute.com' 172.24.4.3
kuryr-demo-1-gzgj2: HELLO, I AM ALIVE!!!


@ -4,26 +4,34 @@ How to enable ports pool support

To enable the utilization of the ports pool feature, the selected pool driver
needs to be set in the kuryr.conf, in the kubernetes section. So, for the
baremetal deployment::
baremetal deployment:

.. code-block:: ini

[kubernetes]
vif_pool_driver = neutron

And for the nested (VLAN+Trunk) case::
And for the nested (VLAN+Trunk) case:

.. code-block:: ini

[kubernetes]
vif_pool_driver = nested

On the other hand, there are a few extra (optional) configuration options
regarding the maximum and minimum desired sizes of the pools, where the
maximum size can be disabled by setting it to 0::
maximum size can be disabled by setting it to 0:

.. code-block:: ini

[vif_pool]
ports_pool_max = 10
ports_pool_min = 5

In addition, the size of the bulk operation, e.g., the number of ports created
in a bulk request upon pool population, can be modified::
in a bulk request upon pool population, can be modified:

.. code-block:: ini

[vif_pool]
ports_pool_batch = 5

@ -36,32 +44,42 @@ modified, and it should be adjusted based on your specific deployment, e.g., if

the port creation actions are slow, it is desirable to raise it in order not to
have overlapping actions. As a simple rule of thumb, the frequency should be
at least as large as the time needed to perform the bulk requests (ports
creation, including subports attachment for the nested case)::
creation, including subports attachment for the nested case):

.. code-block:: ini

[vif_pool]
ports_pool_update_frequency = 20

After these configurations, the final step is to restart the
kuryr-k8s-controller. At devstack deployment::
kuryr-k8s-controller. At devstack deployment:

sudo systemctl restart devstack@kuryr-kubernetes.service
.. code-block:: console

$ sudo systemctl restart devstack@kuryr-kubernetes.service

And for RDO packaging based installations::
And for RDO packaging based installations:

sudo systemctl restart kuryr-controller
.. code-block:: console

$ sudo systemctl restart kuryr-controller

Note that for the containerized deployment, you need to edit the associated
ConfigMap to change the kuryr.conf files with::
ConfigMap to change the kuryr.conf files with:

kubectl -n kube-system edit cm kuryr-config
.. code-block:: console

$ kubectl -n kube-system edit cm kuryr-config

Then modify the kuryr.conf (not the kuryr-cni.conf) to modify the controller
configuration regarding the pools. After that, to have the new configuration
applied you need to restart the kuryr-controller just by killing the existing
pod::
pod:

kubectl -n kube-system get pod | grep kuryr-controller
kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME
.. code-block:: console

$ kubectl -n kube-system get pod | grep kuryr-controller
$ kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME


Ports loading into pools

@ -112,7 +130,7 @@ To enable the option of having different pools depending on the node's pod vif

types, you need to state the type of pool that you want for each pod vif
driver, e.g.:

.. code-block:: ini

[vif_pool]
vif_pool_mapping=nested-vlan:nested,neutron-vif:neutron

@ -147,13 +165,17 @@ When the namespace subnet driver is used (either for namespace isolation or

for network policies) a new subnet is created for each namespace. The ports
associated to each namespace will therefore be on different pools. In order
to prepopulate the pools associated to a newly created namespace (i.e.,
subnet), the following handler needs to be enabled::
subnet), the following handler needs to be enabled:

.. code-block:: ini

[kubernetes]
enabled_handlers=vif,lb,lbaasspec,namespace,kuryrnet


This can be enabled at devstack deployment time by adding the following to the
local.conf::
local.conf:

.. code-block:: bash

KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace,kuryrnet


@ -56,7 +56,9 @@ Kuryr is the Neutron lbaasv2 agent.

In order to use Neutron HAProxy as the Neutron LBaaSv2 implementation you
should not only install the neutron-lbaas agent but also place this snippet in
the *[service_providers]* section of neutron.conf in your network controller
node::
node:

.. code-block:: ini

NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"

@ -125,7 +127,9 @@ Kuryr can use Octavia in two ways:

The services and pods subnets should be created.

#. Create pod network::
#. Create pod network:

.. code-block:: console

$ openstack network create pod
+---------------------------+--------------------------------------+

@ -160,7 +164,9 @@ The services and pods subnets should be created.

| updated_at                | 2017-08-11T10:51:25Z                 |
+---------------------------+--------------------------------------+

#. Create pod subnet::
#. Create pod subnet:

.. code-block:: console

$ openstack subnet create --network pod --no-dhcp \
--gateway 10.1.255.254 \

@ -193,7 +199,9 @@ The services and pods subnets should be created.

| use_default_subnet_pool | None                                 |
+-------------------------+--------------------------------------+

#. Create services network::
#. Create services network:

.. code-block:: console

$ openstack network create services
+---------------------------+--------------------------------------+

@ -229,7 +237,9 @@ The services and pods subnets should be created.

+---------------------------+--------------------------------------+

#. Create service subnet. We reserve the first half of the subnet range for the
VIPs and the second half for the loadbalancer vrrp ports ::
VIPs and the second half for the loadbalancer vrrp ports:

.. code-block:: console

$ openstack subnet create --network services --no-dhcp \
--gateway 10.2.255.254 \

@ -265,7 +275,9 @@ The services and pods subnets should be created.

+-------------------------+--------------------------------------+

#. Create a router to give L3 connectivity between the pod and the service
subnets. If you already have one, you can use it::
subnets. If you already have one, you can use it:

.. code-block:: console

$ openstack router create kuryr-kubernetes
+-------------------------+--------------------------------------+

@ -290,7 +302,9 @@ The services and pods subnets should be created.

| updated_at              | 2017-08-11T11:06:21Z                 |
+-------------------------+--------------------------------------+

#. Create router ports in the pod and service subnets::
#. Create router ports in the pod and service subnets:

.. code-block:: console

$ openstack port create --network pod --fixed-ip ip-address=10.1.255.254 pod_subnet_router
+-----------------------+---------------------------------------------------------------------------+

@ -372,7 +386,9 @@ The services and pods subnets should be created.

| updated_at            | 2017-08-11T11:16:57Z                 |
+-----------------------+-----------------------------------------------------------------------------+

#. Add the router to the service and the pod subnets::
#. Add the router to the service and the pod subnets:

.. code-block:: console

$ openstack router add port \
d2a06d95-8abd-471b-afbe-9dfe475dd8a4 \

@ -383,7 +399,9 @@ The services and pods subnets should be created.

572cee3d-c30a-4ee6-a59c-fe9529a6e168

#. Configure kuryr.conf pod subnet and service subnet to point to their
respective subnets created in steps (2) and (4)::
respective subnets created in steps (2) and (4):

.. code-block:: ini

[neutron_defaults]
pod_subnet = e0a888ab-9915-4685-a600-bffe240dc58b

@ -392,7 +410,9 @@ The services and pods subnets should be created.

#. Configure Kubernetes API server to use only a subset of the service
addresses, **10.2.0.0/17**. The rest will be used for loadbalancer *vrrp*
ports managed by Octavia. To configure Kubernetes with this CIDR range you
have to add the following parameter to its command line invocation::
have to add the following parameter to its command line invocation:

.. code-block:: console

--service-cluster-ip-range=10.2.0.0/17
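
Where exactly this flag is set depends on how the API server is managed; on a
kubeadm based deployment, for instance, it would appear in the API server
static pod manifest (path assumed):

.. code-block:: console

$ grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-cluster-ip-range=10.2.0.0/17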

@ -419,7 +439,7 @@ The services and pods subnets should be created.

A. Create an external/provider network
B. Create subnet/pool range of external CIDR
C. Connect external subnet to kuryr-kubernetes router
D. Configure external network details in Kuryr.conf as follows::
D. Configure external network details in Kuryr.conf as follows:

.. code-block:: ini

[neutron_defaults]
external_svc_net= <id of external network>

@ -431,10 +451,12 @@ The services and pods subnets should be created.

'load-balancer-ip' is not specified, an external IP from
'external_svc_subnet' will be allocated.

For the 'User' case, the user should first create an external/floating IP::
For the 'User' case, the user should first create an external/floating IP:

$#openstack floating ip create --subnet <ext-subnet-id> <ext-network-id>
$openstack floating ip create --subnet 48ddcfec-1b29-411b-be92-8329cc09fc12 3b4eb25e-e103-491f-a640-a6246d588561
.. code-block:: console

$ #openstack floating ip create --subnet <ext-subnet-id> <ext-network-id>
$ openstack floating ip create --subnet 48ddcfec-1b29-411b-be92-8329cc09fc12 3b4eb25e-e103-491f-a640-a6246d588561
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------+--------------------------------------+

@ -472,7 +494,9 @@ of doing the following:

out of the service range so that neither Octavia nor Kuryr-Kubernetes pod
allocation creates ports in the part reserved for services.

Create the network::
Create the network:

.. code-block:: console

$ openstack network create k8s
+---------------------------+--------------------------------------+

@ -510,7 +534,9 @@ of doing the following:

Create the subnet. Note that we disable dhcp as Kuryr-Kubernetes pod subnets
have no need for it for Pod networking. We also put the gateway on the
last IP of the subnet range so that the beginning of the range can be kept
for Kubernetes driven service IPAM::
for Kubernetes driven service IPAM:

.. code-block:: console

$ openstack subnet create --network k8s --no-dhcp \
--gateway 10.0.255.254 \

@ -546,7 +572,9 @@ of doing the following:

+-------------------------+--------------------------------------+

#. Configure kuryr.conf pod subnet and service subnet to point to the same
subnet created in step (1)::
subnet created in step (1):

.. code-block:: ini

[neutron_defaults]
pod_subnet = 3a1df0d9-f738-4293-8de6-6c624f742980

@ -555,7 +583,9 @@ of doing the following:

#. Configure Kubernetes API server to use only a subset of the addresses for
services, **10.0.0.0/18**. The rest will be used for pods. To configure
Kubernetes with this CIDR range you have to add the following parameter to
its command line invocation::
its command line invocation:

.. code-block:: console

--service-cluster-ip-range=10.0.0.0/18


@ -581,7 +611,9 @@ access to it), you should create a load balancer configuration for the

Kubernetes service to be accessible to Pods.

#. Create the load balancer (Kubernetes always picks the first address of the
range we gave in *--service-cluster-ip-range*)::
range we gave in *--service-cluster-ip-range*):

.. code-block:: console

$ openstack loadbalancer create --vip-address 10.0.0.1 \
--vip-subnet-id 3a1df0d9-f738-4293-8de6-6c624f742980 \

@ -608,7 +640,9 @@ Kubernetes service to be accessible to Pods.

| vip_subnet_id       | 3a1df0d9-f738-4293-8de6-6c624f742980 |
+---------------------+--------------------------------------+

#. Create the Pool for all the Kubernetes API hosts::
#. Create the Pool for all the Kubernetes API hosts:

.. code-block:: console

$ openstack loadbalancer pool create --name default/kubernetes:HTTPS:443 \
--protocol HTTPS --lb-algorithm LEAST_CONNECTIONS \

@ -635,7 +669,9 @@ Kubernetes service to be accessible to Pods.

+---------------------+--------------------------------------+

#. Add a member for each Kubernetes API server. We recommend setting the name
to be the hostname of the host where the Kubernetes API runs::
to be the hostname of the host where the Kubernetes API runs:

.. code-block:: console

$ openstack loadbalancer member create \
--name k8s-master-0 \

@ -661,7 +697,9 @@ Kubernetes service to be accessible to Pods.

| monitor_address     | None                                 |
+---------------------+--------------------------------------+

#. Create a listener for the load balancer that defaults to the created pool::
#. Create a listener for the load balancer that defaults to the created pool:

.. code-block:: console

$ openstack loadbalancer listener create \
--name default/kubernetes:HTTPS:443 \

@ -702,7 +740,9 @@ Troubleshooting

This means that most likely you forgot to create a security group or rule
for the pods to be accessible by the service CIDR. You can find an example
here::
here:

.. code-block:: console

$ openstack security group create service_pod_access
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+


@ -11,7 +11,7 @@ a SR-IOV port on a baremetal installation the 3 following steps should be done:

1. Create OpenStack network and subnet for SR-IOV.
Following steps should be done with admin rights.

.. code-block:: bash
.. code-block:: console

neutron net-create vlan-sriov-net --shared --provider:physical_network physnet10_4 --provider:network_type vlan --provider:segmentation_id 3501
neutron subnet-create vlan-sriov-net 203.0.114.0/24 --name vlan-sriov-subnet --gateway 203.0.114.1

@ -47,7 +46,6 @@ as described in [1]_.

"driverType": "sriov"
}'

Then add k8s.v1.cni.cncf.io/networks and request/limits for SR-IOV
into the pod's yaml.

@ -71,7 +70,6 @@ into the pod's yaml.

limits:
intel.com/sriov: '2'

In the above example two SR-IOV devices will be attached to the pod. The first
one is described in the sriov-net1 NetworkAttachmentDefinition, the second one
in sriov-net2. They may have different subnetId.

@ -113,7 +111,7 @@ We defined numa0 resource name, also assume we started sriovdp with

"0000:02:00.0". If we assign 8 VFs to ens4f0 and launch the SR-IOV network
device plugin, we can see the following state of kubernetes:

.. code-block:: bash
.. code-block:: console

$ kubectl get node node1 -o json | jq '.status.allocatable'
{

@ -146,7 +144,9 @@ for particular container.

To enable the Pod Resources service, you need to add
``--feature-gates KubeletPodResources=true`` to ``/etc/sysconfig/kubelet``.
This file could look like::
This file could look like:

.. code-block:: bash

KUBELET_EXTRA_ARGS="--feature-gates KubeletPodResources=true"
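
After editing the file, kubelet needs to be restarted for the flag to take
effect, e.g. on a systemd based host:

.. code-block:: console

$ sudo systemctl restart kubelet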


@ -3,7 +3,9 @@ Testing Network Connectivity

============================

Once the environment is ready, we can test that network connectivity works
among pods. First we check the status of the kubernetes cluster::
among pods. First we check the status of the kubernetes cluster:

.. code-block:: console

$ kubectl get nodes
NAME          STATUS    AGE       VERSION

@ -21,14 +23,17 @@ with the kubernetes API service listening on port 443 at 10.0.0.129 (which

matches the ip assigned to the load balancer created for it).

To test proper configuration and connectivity we first create a sample
deployment with::
deployment with:

.. code-block:: console

$ kubectl run demo --image=celebdor/kuryr-demo
deployment "demo" created


After a few seconds, the container is up and running, and a neutron port was
created with the same IP that got assigned to the pod::
created with the same IP that got assigned to the pod:

.. code-block:: console

$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE

@ -40,9 +45,10 @@ created with the same IP that got assigned to the pod::

$ openstack port list | grep demo
| 73100cdb-84d6-4f33-93b2-e212966c65ac | demo-2293951457-j29nb | fa:16:3e:99:ac:ce | ip_address='10.0.0.69', subnet_id='3c3e18f9-d1d0-4674-b3be-9fc8561980d3' | ACTIVE |


We can then scale the deployment to 2 pods, and check connectivity between
them::
them:

.. code-block:: console

$ kubectl scale deploy/demo --replicas=2
deployment "demo" scaled

@ -69,9 +75,10 @@ them::

64 bytes from 10.0.0.75: icmp_seq=1 ttl=64 time=1.14 ms
64 bytes from 10.0.0.75: icmp_seq=2 ttl=64 time=0.250 ms


Next, we expose the service so that a neutron load balancer is created and
the service is exposed and load balanced among the available pods::
the service is exposed and load balanced among the available pods:

.. code-block:: console

$ kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE

@ -117,11 +124,12 @@ the service is exposed and load balanced among the available pods::

| 7a0c0ef9-35ce-4134-b92a-2e73f0f8fe98 | default/demo-2293951457-gdrv2:8080 | 49e2683370f245e38ac2d6a8c16697b3 | 10.0.0.75 | 8080 | 1 | 55405e9d-4e25-4a55-bac2-e25ee88584e1 | True |
+--------------------------------------+------------------------------------+----------------------------------+-----------+---------------+--------+--------------------------------------+----------------+


We can see that both pods are included as members and that the demo cluster-ip
matches with the loadbalancer vip_address. In order to check loadbalancing
among them, we are going to curl the cluster-ip from one of the pods and see
that each of the pods replies in turn::
that each of the pods replies in turn:

.. code-block:: console

$ kubectl exec -it demo-2293951457-j29nb -- /bin/sh


@ -4,39 +4,39 @@ Testing Nested Network Connectivity

Similarly to the baremetal testing, we can create a demo deployment, scale it
to any number of pods and expose the service to check if the deployment was
successful::
successful:

.. code-block:: console

$ kubectl run demo --image=celebdor/kuryr-demo
$ kubectl scale deploy/demo --replicas=2
$ kubectl expose deploy/demo --port=80 --target-port=8080


After a few seconds you can check that the pods are up and running and the
neutron subports have been created (and in ACTIVE status) at the undercloud::
neutron subports have been created (and in ACTIVE status) at the undercloud:

(OVERCLOUD)
$ kubectl get pods
.. code-block:: console

(OVERCLOUD) $ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-1575152709-4k19q   1/1       Running   0          2m
demo-1575152709-vmjwx   1/1       Running   0          12s

(UNDERCLOUD)
$ openstack port list | grep demo
(UNDERCLOUD) $ openstack port list | grep demo
| 1019bc07-fcdd-4c78-adbd-72a04dffd6ba | demo-1575152709-4k19q | fa:16:3e:b5:de:1f | ip_address='10.0.0.65', subnet_id='b98d40d1-57ac-4909-8db5-0bf0226719d8' | ACTIVE |
| 33c4d79f-4fde-4817-b672-a5ec026fa833 | demo-1575152709-vmjwx | fa:16:3e:32:58:38 | ip_address='10.0.0.70', subnet_id='b98d40d1-57ac-4909-8db5-0bf0226719d8' | ACTIVE |


Then, we can check that the service has been created, as well as the
respective loadbalancer at the undercloud::
respective loadbalancer at the undercloud:

(OVERCLOUD)
$ kubectl get svc
.. code-block:: console

(OVERCLOUD) $ kubectl get svc
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/demo         10.0.0.171   <none>        80/TCP    1m
svc/kubernetes   10.0.0.129   <none>        443/TCP   45m

(UNDERCLOUD)
$ openstack loadbalancer list
(UNDERCLOUD) $ openstack loadbalancer list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| id | name | tenant_id | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

@ -46,7 +46,9 @@ respective loadbalancer at the undercloud::


Finally, you can log into one of the containers and curl the service IP to
check that each time a different pod answers the request::
check that each time a different pod answers the request:

.. code-block:: console

$ kubectl exec -it demo-1575152709-4k19q -- /bin/sh
sh-4.2$ curl 10.0.0.171


@ -26,7 +26,7 @@ look like:

Here ``88d0b025-2710-4f02-a348-2829853b45da`` is an id of precreated subnet
that is expected to be used for SR-IOV ports:

.. code-block:: bash
.. code-block:: console

$ neutron subnet-show 88d0b025-2710-4f02-a348-2829853b45da
+-------------------+--------------------------------------------------+

@ -93,13 +93,13 @@ created before.

2. Create deployment with the following command:

.. code-block:: bash
.. code-block:: console

$ kubectl create -f <DEFINITION_FILE_NAME>

3. Wait for the pod to get to Running phase.

.. code-block:: bash
.. code-block:: console

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE

@ -109,7 +109,7 @@ created before.

attach to the pod and check that the correct interface has been attached to
the Pod.

.. code-block:: bash
.. code-block:: console

$ kubectl get pod
$ kubectl exec -it nginx-sriov-558db554d7-rvpxs -- /bin/bash

@ -117,7 +117,7 @@ created before.

You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.

.. code-block:: bash
.. code-block:: console

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

@ -141,32 +141,32 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.

4.1. Alternatively you can log in to the k8s worker and do the same from the
host system. Use the following command to find out the ID of the running
SR-IOV container:

.. code-block:: bash
.. code-block:: console

$ docker ps

Suppose that the ID of the created container is ``eb4e10f38763``. Use the
following command to get the PID of that container:

.. code-block:: bash
.. code-block:: console

$ docker inspect --format {{.State.Pid}} eb4e10f38763

Suppose that the output of the previous command is as below:

.. code-block:: bash
.. code-block:: console

32609

Use the following command to get the interfaces of the container:

.. code-block:: bash
.. code-block:: console

$ nsenter -n -t 32609 ip a

You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.

.. code-block:: bash
.. code-block:: console

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

@ -192,7 +192,7 @@ In our example sriov interface has address 192.168.2.6

5. Use the neutron CLI to check that the port with the exact address has been
created in Neutron:

.. code-block:: bash
.. code-block:: console

$ openstack port list | grep 192.168.2.6

@ -200,7 +200,7 @@ Suppose that previous command returns a list with one openstack port that

has ID ``545ec21d-6bfc-4179-88c6-9dacaf435ea7``. You can see its information
with the following command:

.. code-block:: bash
.. code-block:: console

$ openstack port show 545ec21d-6bfc-4179-88c6-9dacaf435ea7
+-----------------------+----------------------------------------------------------------------------+


@ -6,24 +6,32 @@ In this example, we will use the `kuryr-udp-demo`_ image. This image

implements a simple UDP server that listens on port 9090, and replies to the
client when a packet is received.

We first create a deployment named demo::
We first create a deployment named demo:

.. code-block:: console

$ kubectl run --image=yboaron/kuryr-udp-demo demo
deployment "demo" created

As the next step, we will scale the deployment to 2 pods::
As the next step, we will scale the deployment to 2 pods:

.. code-block:: console

$ kubectl scale deploy/demo --replicas=2
deployment "demo" scaled

At this point we should have two pods running the `kuryr-udp-demo`_ image::
At this point we should have two pods running the `kuryr-udp-demo`_ image:

.. code-block:: console

$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
demo-fbb89f54c-92ttl   1/1       Running   0          31s
demo-fbb89f54c-q9fq7   1/1       Running   0          1m

Next, we expose the deployment as a service, setting UDP port to 90::
Next, we expose the deployment as a service, setting UDP port to 90:

.. code-block:: console

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE

@ -38,7 +46,9 @@ Next, we expose the deployment as a service, setting UDP port to 90::

kubernetes   ClusterIP   10.0.0.129   <none>        443/TCP   17m

Now, let's check the OpenStack load balancer created by Kuryr for **demo**
service::
service:

.. code-block:: console

$ openstack loadbalancer list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

@ -73,7 +83,9 @@ service::

+---------------------+--------------------------------------+

Checking the load balancer's details, we can see that the load balancer is
listening on UDP port 90::
listening on UDP port 90:

.. code-block:: console

$ openstack loadbalancer listener show 7b374ecf-80c4-44be-a725-9b0c3fa2d0fa
+---------------------------+--------------------------------------+

@ -103,7 +115,9 @@ listening on UDP port 90::

| updated_at                | 2018-10-09T06:07:53                  |
+---------------------------+--------------------------------------+

And the load balancer has two members listening on UDP port 9090::
And the load balancer has two members listening on UDP port 9090:

.. code-block:: console

$ openstack loadbalancer member list d549df5b-e008-49a6-8695-b6578441553e
+--------------------------------------+-----------------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+

@ -122,7 +136,9 @@ client script sends UDP message towards specific IP and port, and waits for a

response from the server. The way that the client application can communicate
with the server is by leveraging the Kubernetes service functionality.

First we clone the client script::
First we clone the client script:

.. code-block:: console

$ git clone https://github.com/yboaron/udp-client-script.git
Cloning into 'udp-client-script'...

@ -133,14 +149,18 @@ First we clone the client script::

Unpacking objects: 100% (15/15), done.
$

And we need the UDP server service IP and port::
And we need the UDP server service IP and port:

.. code-block:: console

$ kubectl get svc demo
NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
demo      ClusterIP   10.0.0.150   <none>        90/UDP    20m
$

The last step is to ping the UDP server service::
The last step is to ping the UDP server service:

.. code-block:: console

$ python udp-client-script/client.py 10.0.0.150 90
demo-fbb89f54c-92ttl: HELLO, I AM ALIVE!!!


@ -6,41 +6,48 @@ To create a VM that makes use of the Neutron Trunk port support, the next

steps can be followed:

1. Use the demo tenant and create a key to be used to log in to the overcloud
VM::
VM:

.. code-block:: console

$ source ~/devstack/openrc demo
$ openstack keypair create demo > id_rsa_demo
$ chmod 600 id_rsa_demo

2. Ensure the demo default security group allows ping and ssh access::
2. Ensure the demo default security group allows ping and ssh access:

.. code-block:: console

$ openstack security group rule create --protocol icmp default
$ openstack security group rule create --protocol tcp --dst-port 22 default


3. Download and import an image that allows vlans, as cirros does not support
it::
it:

.. code-block:: console

$ wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
$ openstack image create --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2 centos7


4. Create a port for the overcloud VM and create the trunk with that port as
the parent port (untagged traffic)::
the parent port (untagged traffic):

.. code-block:: console

$ openstack port create --network private --security-group default port0
$ openstack network trunk create --parent-port port0 trunk0


5. Create the overcloud VM and assign a floating ip to it to be able to log
in to it::
in to it:

.. code-block:: console

$ openstack server create --image centos7 --flavor ds4G --nic port-id=port0 --key-name demo overcloud_vm
$ openstack floating ip create --port port0 public


Note subports can be added to the trunk port, and be used inside the VM with
the specific vlan, 102 in the example, by doing::
the specific vlan, 102 in the example, by doing:

.. code-block:: console

$ openstack network trunk set --subport port=subport0,segmentation-type=vlan,segmentation-id=102 trunk0
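
Note that ``subport0`` is a regular Neutron port that has to exist before it
can be attached to the trunk; it can be created in the same way as the parent
port, for example:

.. code-block:: console

$ openstack port create --network private --security-group default subport0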


@ -5,7 +5,7 @@ Upgrading kuryr-kubernetes

Kuryr-Kubernetes supports the standard OpenStack utility for checking whether
an upgrade is possible and safe:

.. code-block:: bash
.. code-block:: console

$ kuryr-k8s-status upgrade check
+---------------------------------------+

@ -38,7 +38,7 @@ upgrade check`` utility **before upgrading Kuryr-Kubernetes services to T**.

$ kubectl -n kube-system exec -it <controller-pod-name> kuryr-k8s-status upgrade check

.. code-block:: bash
.. code-block:: console

$ kuryr-k8s-status upgrade check
+---------------------------------------+

@ -52,7 +52,7 @@ upgrade check`` utility **before upgrading Kuryr-Kubernetes services to T**.

In case of a *Failure* result of the *Pod annotations* check, you should run
the ``kuryr-k8s-status upgrade update-annotations`` command and check again:

.. code-block:: bash
.. code-block:: console

$ kuryr-k8s-status upgrade check
+----------------------------------------------------------------------+