Installing kuryr-kubernetes manually
====================================

Configure kuryr-k8s-controller
------------------------------

Install ``kuryr-k8s-controller`` in a virtualenv::

    $ mkdir kuryr-k8s-controller
    $ cd kuryr-k8s-controller
    $ virtualenv env
    $ git clone http://git.openstack.org/openstack/kuryr-kubernetes
    $ . env/bin/activate
    $ pip install -e kuryr-kubernetes

In Neutron or in Horizon create a subnet for pods, a subnet for services and a
security group for pods. You may use existing ones if you like. In case you
decide to create new networks and subnets with the CLI, you can follow the
services guide, specifically its :ref:`k8s_default_configuration` section.

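For illustration only, a minimal CLI sketch is shown below; the names and CIDR
ranges are placeholders and the authoritative steps are those in the referenced
services guide::

    $ openstack network create pods_net
    $ openstack subnet create --network pods_net --subnet-range 10.1.0.0/16 pods_subnet
    $ openstack network create services_net
    $ openstack subnet create --network services_net --subnet-range 10.2.0.0/16 services_subnet
    $ openstack security group create pods_sg
    $ openstack router create k8s_router
    $ openstack router add subnet k8s_router pods_subnet
    $ openstack router add subnet k8s_router services_subnet
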
Create ``/etc/kuryr/kuryr.conf``::

    $ cd kuryr-kubernetes
    $ ./tools/generate_config_file_samples.sh
    $ cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf

Edit ``kuryr.conf``::

    [DEFAULT]
    use_stderr = true
    bindir = {path_to_env}/libexec/kuryr

    [kubernetes]
    api_root = http://{ip_of_kubernetes_apiserver}:8080

    [neutron]
    auth_url = http://127.0.0.1:35357/v3/
    username = admin
    user_domain_name = Default
    password = ADMIN_PASSWORD
    project_name = service
    project_domain_name = Default
    auth_type = password

    [neutron_defaults]
    ovs_bridge = br-int
    pod_security_groups = {id_of_security_group_for_pods}
    pod_subnet = {id_of_subnet_for_pods}
    project = {id_of_project}
    service_subnet = {id_of_subnet_for_k8s_services}

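In the options above ``{path_to_env}`` is the path of the virtualenv created
earlier; if in doubt, it can be printed while the virtualenv is active, e.g.::

    $ echo $VIRTUAL_ENV
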
Note that the service_subnet and the pod_subnet *should be routable* and that
the pods should allow access from the service subnet.

Octavia supports two ways of performing the load balancing between the
Kubernetes load balancers and their members:

* Layer2: Apart from the VIP port in the services subnet, Octavia creates a
  Neutron port in the subnet of each of the members. This way the traffic from
  the Service HAProxy to the members does not go through the router again; only
  the traffic to reach the service will have gone through the router.

* Layer3: Octavia only creates the VIP port. The traffic from the service VIP
  to the members goes back through the router to reach the pod subnet. It is
  important to note that this has some performance impact, depending on the
  SDN.

To support the L3 mode (both for Octavia and for the deprecated
Neutron-LBaaSv2):

* There should be a router between the two subnets.

* The pod_security_groups setting should include a security group with a rule
  granting access to the whole CIDR of the service subnet, e.g.::

      openstack security group create --project k8s_cluster_project \
          service_pod_access_sg
      openstack --project k8s_cluster_project security group rule create \
          --remote-ip cidr_of_service_subnet --ethertype IPv4 --protocol tcp \
          service_pod_access_sg

* The UUID of this security group should be added to the comma separated list
  of pod security groups: *pod_security_groups* in *[neutron_defaults]*, as in
  the sketch below.

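For example, reusing the placeholder ID from the ``[neutron_defaults]`` section
above plus the ID of the newly created group (both values are placeholders)::

    [neutron_defaults]
    pod_security_groups = {id_of_security_group_for_pods},{id_of_service_pod_access_sg}
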
Alternatively, to support Octavia L2 mode:

* The pod_security_groups setting should include a security group with a rule
  granting access to the whole CIDR of the pod subnet, e.g.::

      openstack security group create --project k8s_cluster_project \
          octavia_pod_access_sg
      openstack --project k8s_cluster_project security group rule create \
          --remote-ip cidr_of_pod_subnet --ethertype IPv4 --protocol tcp \
          octavia_pod_access_sg

* The UUID of this security group should be added to the comma separated list
  of pod security groups: *pod_security_groups* in *[neutron_defaults]*.

Run kuryr-k8s-controller::

    $ kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d

Alternatively you may run it in screen::

    $ screen -dm kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d

Configure kuryr-cni
-------------------

On every Kubernetes minion node (and on the master, if you intend to run
containers there) you need to configure kuryr-cni.

Install ``kuryr-cni`` in a virtualenv::

    $ mkdir kuryr-k8s-cni
    $ cd kuryr-k8s-cni
    $ virtualenv env
    $ . env/bin/activate
    $ git clone http://git.openstack.org/openstack/kuryr-kubernetes
    $ pip install -e kuryr-kubernetes

Create ``/etc/kuryr/kuryr.conf``::

    $ cd kuryr-kubernetes
    $ ./tools/generate_config_file_samples.sh
    $ cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf

Edit ``kuryr.conf``::

    [DEFAULT]
    use_stderr = true
    bindir = {path_to_env}/libexec/kuryr

    [kubernetes]
    api_root = http://{ip_of_kubernetes_apiserver}:8080

Link the CNI binary to the CNI directory, where kubelet will find it::

    $ mkdir -p /opt/cni/bin
    $ ln -s $(which kuryr-cni) /opt/cni/bin/

Create the CNI config file for kuryr-cni: ``/etc/cni/net.d/10-kuryr.conf``.
Kubelet will only use the lexicographically first file in that directory, so
make sure that it is kuryr's config file::

    {
        "cniVersion": "0.3.1",
        "name": "kuryr",
        "type": "kuryr-cni",
        "kuryr_conf": "/etc/kuryr/kuryr.conf",
        "debug": true
    }

Install the ``os-vif`` and ``oslo.privsep`` libraries globally. These modules
are used to plug interfaces and are run with raised privileges. ``os-vif``
uses ``sudo`` to raise privileges, so these libraries need to be installed
globally to work correctly::

    deactivate
    sudo pip install 'oslo.privsep>=1.20.0' 'os-vif>=1.5.0'

Configure Kuryr CNI Daemon
--------------------------

Kuryr CNI Daemon is a service designed to increase the scalability of the
Kuryr operations done on Kubernetes nodes. More information can be found on
the :ref:`cni-daemon` page.

Kuryr CNI Daemon should be installed on every Kubernetes node, so the
following steps need to be repeated on each of them.

.. note::

   You can tweak the configuration of some timeouts to match your environment.
   It's crucial for the scalability of the whole deployment. In general the
   timeout to serve a CNI request from kubelet to Kuryr is 180 seconds. After
   that time kubelet will retry the request. Additionally there are two
   configuration options::

       [cni_daemon]
       vif_annotation_timeout=60
       pyroute2_timeout=10

   ``vif_annotation_timeout`` is the time the Kuryr CNI Daemon will wait for
   the Kuryr Controller to create a port in Neutron and add information about
   it to the Pod's metadata. If either Neutron or the Kuryr Controller doesn't
   keep up with a high number of requests, it's advised to increase this
   timeout. Please note that increasing it over 180 seconds will not have any
   effect, as the request will time out anyway and will be retried (which is
   safe).

   ``pyroute2_timeout`` is the internal timeout of the pyroute2 library, which
   is responsible for doing modifications to the Linux kernel networking stack
   (e.g. moving interfaces to the Pod's namespace, adding routes and ports or
   assigning addresses to interfaces). When serving a lot of ADD/DEL CNI
   requests on a regular basis it's advised to increase that timeout. Please
   note that the value denotes the *maximum* time to wait for the kernel to
   complete the operations. If the operation succeeds earlier, the request
   isn't delayed.

Run kuryr-daemon::

    $ kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d

Alternatively you may run it in screen::

    $ screen -dm kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d

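Once both the controller and the CNI daemon are up, a quick sanity check is to
start a pod and verify that it gets an IP from the pod subnet and that a
corresponding Neutron port shows up; the image and pod name below are just
examples::

    $ kubectl run demo --image=nginx
    $ kubectl get pods -o wide
    $ openstack port list | grep {ip_of_the_pod}
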
Kuryr CNI Daemon health checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The CNI daemon health checks allow the deployer or the orchestration layer
(like for example Kubernetes or OpenShift) to probe the CNI daemon for
liveness and readiness.

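For a quick manual probe you can use ``curl``. The port and paths below assume
the defaults of the ``[cni_health_server]`` section (port 8090 serving
``/alive`` and ``/ready``); double check them against the sample configuration
of your release::

    $ curl -i http://localhost:8090/alive
    $ curl -i http://localhost:8090/ready
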
If you want to make use of all of its facilities, you should run the
kuryr-daemon in its own cgroup. It will get its own cgroup if you:

* run it as a systemd service,
* run it containerized,
* create a memory cgroup for it.

In order to make the daemon run in its own cgroup, you can do the following::

    systemd-run --unit=kuryr-daemon --scope --slice=kuryr-cni \
        kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d

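If you prefer a persistent service over an ad-hoc scope, a minimal systemd
unit could look like the sketch below; the ``ExecStart`` path assumes the
virtualenv layout from the installation steps above and may need adjusting::

    [Unit]
    Description=Kuryr CNI daemon
    After=network-online.target

    [Service]
    ExecStart={path_to_env}/bin/kuryr-daemon --config-file /etc/kuryr/kuryr.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
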
After this, with the CNI daemon running inside its own cgroup, we can enable
the CNI daemon memory health check. This health check allows us to limit the
memory consumption of the CNI daemon. The health check will fail if the CNI
daemon starts taking more memory than the configured limit, and the
orchestration layer should then restart it. The setting is::

    [cni_health_server]
    max_memory_usage = 4096  # Set the memory limit to 4GiB