Explicitly use code-block.
This patch converts preformatted blocks (introduced by a double colon at the end of a line) to the Sphinx code-block directive. It also fixes the formatting of existing code-block directives and changes the reStructuredText code directive into the Sphinx code-block.

Change-Id: I9db48fbb169263e3bf66eacca7d9bce6c355739f
commit fd440fcdcb
parent feff260509
@@ -70,8 +70,10 @@ leak.
 disabled. In order to enable, set the following option in kuryr.conf to a
 limit value of memory in MiBs.
 
+.. code-block:: ini
+
     [cni_health_server]
     max_memory_usage = -1
 
 The CNI Health Manager is added as a process to CNI daemon and communicates
 to the other two processes i.e. Watcher and Server with a shared boolean
@@ -54,18 +54,18 @@ provide update exclusion mechanisms to prevent race conditions.
 This can be implemented by adding another *leader-elector* container to each
 of kuryr-controller pods:
 
-.. code:: yaml
+.. code-block:: yaml
 
     - image: gcr.io/google_containers/leader-elector:0.5
      name: leader-elector
      args:
      - "--election=kuryr-controller"
      - "--http=0.0.0.0:${KURYR_CONTROLLER_HA_PORT:-16401}"
      - "--election-namespace=kube-system"
      - "--ttl=5s"
      ports:
      - containerPort: ${KURYR_CONTROLLER_HA_PORT:-16401}
        protocol: TCP
 
 This adds a new container to the pod. This container will do the
 leader-election and expose the simple JSON API on port 16401 by default. This
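As a side note, the leader-elector sidecar described in the hunk above serves a small JSON document naming the current leader, and a process can compare that name with its own pod name to decide whether it should act. A minimal sketch of that check (the payload shape and pod names are illustrative assumptions; in a real pod the body would be fetched from the sidecar's HTTP endpoint on port 16401):

```python
import json

def is_leader(response_body: str, my_pod_name: str) -> bool:
    """Return True when this pod's name matches the elected leader."""
    # The sidecar returns a JSON object whose "name" field is the pod
    # currently holding the election lock.
    return json.loads(response_body).get("name") == my_pod_name

# Simulated response; in a real deployment this would come from
# http://localhost:16401/ served by the leader-elector container.
body = '{"name": "kuryr-controller-0"}'
print(is_leader(body, "kuryr-controller-0"))  # True
print(is_leader(body, "kuryr-controller-1"))  # False
```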
@@ -184,10 +184,12 @@ lbaasspec Service
 ================ =========================
 
 For example, to enable only the 'vif' controller handler we should set the
-following at kuryr.conf::
+following at kuryr.conf:
 
+.. code-block:: ini
+
     [kubernetes]
     enabled_handlers=vif
 
 Providers
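The ``enabled_handlers`` option shown above is a comma-separated list of handler names. A short stdlib sketch of how such a setting can be read and split (the config fragment is illustrative, and this is not kuryr's actual option parsing, which goes through oslo.config):

```python
import configparser

# Illustrative kuryr.conf fragment matching the hunk above.
conf = configparser.ConfigParser()
conf.read_string("""\
[kubernetes]
enabled_handlers = vif,lb,lbaasspec
""")

# Split the comma-separated value into individual handler names.
handlers = [h.strip() for h in conf["kubernetes"]["enabled_handlers"].split(",")]
print(handlers)  # ['vif', 'lb', 'lbaasspec']
```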
@@ -101,14 +101,16 @@ For achieving external connectivity the L7 router is attached to a floating
 IP (allocated from 'external_svc_subnet').
 
 The following parameters should be configured in kuryr.conf file to
-enable L7 Router::
+enable L7 Router:
 
+.. code-block:: ini
+
     [ingress]
     l7_router_uuid=<loadbalancer uuid>
     [kubernetes]
     enable_ingress = True
     # make sure that 'ingresslb' appears in enabled_handlers
     enabled_handlers= vif,lb,lbaasspec,ingresslb
 
 The L7 Router should notify the Health Manager upon any failure that could
 lead to L7 Router's malfunctionality.
@@ -190,18 +192,18 @@ This section describe in details the following scenarios:
 
 .. code-block:: yaml
 
     apiVersion: extensions/v1beta1
     kind: Ingress
     metadata:
      name: test
    spec:
      rules:
      - host: www.example.com
        http:
          paths:
          - backend:
              serviceName: s1
              servicePort: 80
 
 * Since its the first Ingress pointing to this Service, the Ingress
   handler will create LBaaS pool (attached to L7 router)- named
@@ -97,15 +97,15 @@ B. Create Service/Endpoints, create OCP-Route, delete OCP-Route.
 
 .. code-block:: yaml
 
     apiVersion: v1
    kind: Route
    metadata:
      name: test
    spec:
      host: www.example.com
      to:
        kind: Service
        name: s1
 
 * Since it's the first route pointing to this Service, the OCP-Route
   handler will create LbaaS pool (attached to L7 router)- named
@@ -46,23 +46,23 @@ The network policy CRD has the following format:
 
 .. code-block:: yaml
 
     apiVersion: openstack.org/v1
    kind: KuryrNetPolicy
    metadata:
      ...
    spec:
      egressSgRules:
      - security_group_rule:
          ...
      ingressSgRules:
      - security_group_rule:
          ...
      networkpolicy_spec:
        ...
      podSelector:
        ...
      securityGroupId: ...
      securityGroupName: ...
 
 A new handler has been added to react to Network Policy events, and the existing
 ones, for instance service/pod handlers, have been modified to account for the
@@ -183,14 +183,14 @@ policy becomes denied. As specified in the following policy:
 
 .. code-block:: yaml
 
     apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
 
 The following CRD is the translation of policy rules to security group rules.
 No ingress rule was created, which means traffic is blocked, and since
@@ -200,27 +200,27 @@ are assumed to affect Ingress.
 
 .. code-block:: yaml
 
     apiVersion: openstack.org/v1
    kind: KuryrNetPolicy
    metadata:
      name: np-default-deny
      namespace: default
      ...
    spec:
      egressSgRules:
      - security_group_rule:
          description: Kuryr-Kubernetes NetPolicy SG rule
          direction: egress
          ethertype: IPv4
          id: 60a0d59c-2102-43e0-b025-75c98b7d9315
          security_group_id: 20d9b623-f1e0-449d-95c1-01624cb3e315
      ingressSgRules: []
      networkpolicy_spec:
        ...
      podSelector:
        ...
      securityGroupId: 20d9b623-f1e0-449d-95c1-01624cb3e315
      securityGroupName: sg-default-deny
 
 
 Allow traffic from pod
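The translation above can be summarized as: a direction listed in ``policyTypes`` with no rules spelled out becomes an empty (deny-all) rule list, while an unrestricted direction keeps a default allow rule. A toy model of that decision logic (an illustration only, not kuryr's implementation):

```python
def translate_policy(policy_types, ingress_rules):
    """Toy model of the NetworkPolicy -> security-group translation."""
    # Egress is not restricted by the default-deny policy above, so a
    # default allow-all egress rule is kept.
    allow_all_egress = {"security_group_rule": {"direction": "egress",
                                                "ethertype": "IPv4"}}
    crd = {"egressSgRules": [allow_all_egress]}
    if "Ingress" in policy_types:
        # Only the rules the policy actually spells out are kept; an empty
        # list means all ingress traffic is blocked.
        crd["ingressSgRules"] = list(ingress_rules)
    return crd

crd = translate_policy(["Ingress"], [])
print(crd["ingressSgRules"])  # []
```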
@@ -231,28 +231,30 @@ on a single port from the group of pods that have the label ``role=monitoring``.
 
 .. code-block:: yaml
 
     apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-monitoring-via-pod-selector
    spec:
      podSelector:
        matchLabels:
          app: server
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: monitoring
        ports:
        - protocol: TCP
          port: 8080
 
-Create the following pod with label ``role=monitoring``::
+Create the following pod with label ``role=monitoring``:
 
+.. code-block:: console
+
     $ kubectl run monitor --image=busybox --restart=Never --labels=role=monitoring
 
 The generated CRD contains an ingress rule allowing traffic on port 8080 from
 the created pod, and an egress rule allowing traffic to everywhere, since no
@@ -260,38 +262,38 @@ restriction was enforced.
 
 .. code-block:: yaml
 
     apiVersion: openstack.org/v1
    kind: KuryrNetPolicy
    metadata:
      name: np-allow-monitoring-via-pod-selector
      namespace: default
      ...
    spec:
      egressSgRules:
      - security_group_rule:
          description: Kuryr-Kubernetes NetPolicy SG rule
          direction: egress
          ethertype: IPv4
          id: 203a14fe-1059-4eff-93ed-a42bd957145d
          security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
      ingressSgRules:
      - namespace: default
        security_group_rule:
          description: Kuryr-Kubernetes NetPolicy SG rule
          direction: ingress
          ethertype: IPv4
          id: 7987c382-f2a9-47f7-b6e8-1a3a1bcb7d95
          port_range_max: 8080
          port_range_min: 8080
          protocol: tcp
          remote_ip_prefix: 10.0.1.143
          security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
      networkpolicy_spec:
        ...
      podSelector:
        ...
      securityGroupId: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
      securityGroupName: sg-allow-monitoring-via-pod-selector
 
 
 Allow traffic from namespace
@@ -302,29 +304,31 @@ from namespace with the label ``purpose=test``:
 
 .. code-block:: yaml
 
     apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-test-via-ns-selector
    spec:
      podSelector:
        matchLabels:
          app: server
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              purpose: test
        ports:
        - protocol: TCP
          port: 8080
 
-Create a namespace and label it with ``purpose=test``::
+Create a namespace and label it with ``purpose=test``:
 
+.. code-block:: console
+
     $ kubectl create namespace dev
     $ kubectl label namespace dev purpose=test
 
 The resulting CRD has an ingress rule allowing traffic
 from the namespace CIDR on the specified port, and an
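The namespaceSelector case differs from the podSelector case in one detail: the generated rule's ``remote_ip_prefix`` is the whole namespace subnet CIDR rather than a single pod IP. A sketch of building such a rule dict, mirroring the CRD field names used in this document (the helper itself is illustrative, not kuryr's code):

```python
def namespace_ingress_rule(namespace, cidr, port, sg_id):
    """Build an ingress rule allowing `port` from a namespace's CIDR."""
    return {
        "namespace": namespace,
        "security_group_rule": {
            "description": "Kuryr-Kubernetes NetPolicy SG rule",
            "direction": "ingress",
            "ethertype": "IPv4",
            "protocol": "tcp",
            "port_range_min": port,
            "port_range_max": port,
            # Whole namespace subnet, not an individual pod IP.
            "remote_ip_prefix": cidr,
            "security_group_id": sg_id,
        },
    }

rule = namespace_ingress_rule("dev", "10.0.1.192/26", 8080,
                              "c480327c-2db4-4eb6-af1e-eeb0ce9b46c9")
print(rule["security_group_rule"]["remote_ip_prefix"])  # 10.0.1.192/26
```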
@@ -332,37 +336,37 @@ egress rule allowing traffic to everywhere.
 
 .. code-block:: yaml
 
     apiVersion: openstack.org/v1
    kind: KuryrNetPolicy
    name: np-allow-test-via-ns-selector
    namespace: default
    ...
    spec:
      egressSgRules:
      - security_group_rule:
          description: Kuryr-Kubernetes NetPolicy SG rule
          direction: egress
          ethertype: IPv4
          id: 8c21bf42-c8b9-4628-b0a1-bd0dbb192e6b
          security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
      ingressSgRules:
      - namespace: dev
        security_group_rule:
          description: Kuryr-Kubernetes NetPolicy SG rule
          direction: ingress
          ethertype: IPv4
          id: 2a33b802-56ad-430a-801d-690f653198ef
          port_range_max: 8080
          port_range_min: 8080
          protocol: tcp
          remote_ip_prefix: 10.0.1.192/26
          security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
      networkpolicy_spec:
        ...
      podSelector:
        ...
      securityGroupId: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
      securityGroupName: sg-allow-test-via-ns-selector
 
 .. note::
 
@@ -58,25 +58,26 @@ The first action is to create a KuryrPort CRD where the needed information
 about the Neutron Ports will be stored (or any other SDN).
 
 Currently, the pods are annotated with the vif information of the port
-assigned to it::
+assigned to it:
 
+.. code-block::
+
     "kind": "Pod",
    "metadata": {
      "annotations": {
-          "openstack.org/kuryr-vif": "{\"eth0\": {\"versioned_object.data\": {\"active\": true, \"address\": \"fa:16:3e:bf:84:ff\", \"has_traffic_filtering\
-    ": false, \"id\": \"18f968a5-c420-4318-92d7-941eb5f9e60e\", \"network\": {\"versioned_object.data\": {\"id\": \"144164d9-8c21-4274-acec-43245de0aed0\", \"labe
+          "openstack.org/kuryr-vif": "{\"eth0\": {\"versioned_object.data\": {\"active\": true, \"address\": \"fa:16:3e:bf:84:ff\", \"has_traffic_filtering\": false,
+          \"id\": \"18f968a5-c420-4318-92d7-941eb5f9e60e\", \"network\": {\"versioned_object.data\": {\"id\": \"144164d9-8c21-4274-acec-43245de0aed0\", \"labe
     l\": \"ns/luis-net\", \"mtu\": 1350, \"multi_host\": false, \"should_provide_bridge\": false, \"should_provide_vlan\": false, \"subnets\": {\"versioned_object
     .data\": {\"objects\": [{\"versioned_object.data\": {\"cidr\": \"10.11.9.0/24\", \"dns\": [], \"gateway\": \"10.11.9.1\", \"ips\": {\"versioned_object.data\":
     {\"objects\": [{\"versioned_object.data\": {\"address\": \"10.11.9.5\"}, \"versioned_object.name\": \"FixedIP\", \"versioned_object.namespace\": \"os_vif\",
     \"versioned_object.version\": \"1.0\"}]}, \"versioned_object.name\": \"FixedIPList\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\"
     : \"1.0\"}, \"routes\": {\"versioned_object.data\": {\"objects\": []}, \"versioned_object.name\": \"RouteList\", \"versioned_object.namespace\": \"os_vif\", \
     "versioned_object.version\": \"1.0\"}}, \"versioned_object.name\": \"Subnet\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.0
     \"}]}, \"versioned_object.name\": \"SubnetList\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.0\"}}, \"versioned_object.name
     \": \"Network\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.1\"}, \"plugin\": \"noop\", \"preserve_on_delete\": false, \"vi
     f_name\": \"tap18f968a5-c4\", \"vlan_id\": 1346}, \"versioned_object.name\": \"VIFVlanNested\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object
     .version\": \"1.0\"}}"
      },
 
 The proposal is to store the information of the VIF in the new defined
 KuryrPort CRD as a new KuryrPort object, including similar information to the
@@ -85,81 +86,82 @@ object selfLink at the pod by using oslo.versionedobject to easy identify
 the changes into the annotation format. Note the selfLink should contain the
 Neutron Port UUID if that is used as the name for the KuryrPort CRD object.
 In case of other SDN a unique value that represents the port should be used
-as the name for the KuryrPort CRD object::
+as the name for the KuryrPort CRD object:
 
+.. code-block:: console
+
     $ kubectl get POD_NAME -o json
    "kind": "Pod",
    "metadata": {
      "annotations": {
          "openstack.org/kuryr-vif": "{"eth0": {\"versioned_object.data\": {\"selfLink\": \"/apis/openstack.org/v1/kuryrports/18f968a5-c420-4318-92d7-941eb5f9e60e\"}},
          \"versioned_object.name\": \"KuryrPortCRD\", \"versioned_object.version\": \"1.0\"}"
      },
    ...
 
     $ openstack port show 18f968a5-c420-4318-92d7-941eb5f9e60e
    +-----------------------+---------------------------------------------------------------------------+
    | Field                 | Value                                                                     |
    +-----------------------+---------------------------------------------------------------------------+
    | admin_state_up        | UP                                                                        |
    | allowed_address_pairs |                                                                           |
    | binding_host_id       | None                                                                      |
    | binding_profile       | None                                                                      |
    | binding_vif_details   | None                                                                      |
    | binding_vif_type      | None                                                                      |
    | binding_vnic_type     | normal                                                                    |
    | created_at            | 2018-06-18T15:58:23Z                                                      |
    | data_plane_status     | None                                                                      |
    | description           |                                                                           |
    | device_id             |                                                                           |
    | device_owner          | trunk:subport                                                             |
    | dns_assignment        | None                                                                      |
    | dns_domain            | None                                                                      |
    | dns_name              | None                                                                      |
    | extra_dhcp_opts       |                                                                           |
    | fixed_ips             | ip_address='10.11.9.5', subnet_id='fa660385-65f1-4677-8dc7-3f4f9cd15d7f'  |
    | id                    | 18f968a5-c420-4318-92d7-941eb5f9e60e                                      |
    | ip_address            | None                                                                      |
    | mac_address           | fa:16:3e:bf:84:ff                                                         |
    | name                  |                                                                           |
    | network_id            | 144164d9-8c21-4274-acec-43245de0aed0                                      |
    | option_name           | None                                                                      |
    | option_value          | None                                                                      |
    | port_security_enabled | True                                                                      |
    | project_id            | d85bdba083204fe2845349a86cb87d82                                          |
    | qos_policy_id         | None                                                                      |
    | revision_number       | 4                                                                         |
    | security_group_ids    | 32704585-8cbe-43f3-a4d5-56ffe2d3ab24                                      |
    | status                | ACTIVE                                                                    |
    | subnet_id             | None                                                                      |
    | tags                  |                                                                           |
    | trunk_details         | None                                                                      |
    | updated_at            | 2018-06-18T15:58:30Z                                                      |
    +-----------------------+---------------------------------------------------------------------------+
 
     $ kubectl get kuryrports 18f968a5-c420-4318-92d7-941eb5f9e60e -o json
    {
      "apiVersion": "openstack.org/v1",
      "kind": "KuryrPort",
      "metadata": {
          "resourceVersion": "164682",
          "selfLink": "/apis/openstack.org/v1/kuryrports/18f968a5-c420-4318-92d7-941eb5f9e60e",
          "uid": "d2834c13-6e6e-11e8-8acd-fa163ed12aae"
          "name": "18f968a5-c420-4318-92d7-941eb5f9e60e"
          "portStatus": "created"
      },
      "spec": {
          "active": true",
          "address": "fa:16:3e:bf:84:ff",
          "id": "18f968a5-c420-4318-92d7-941eb5f9e60e",
          "network": {
              "id": "144164d9-8c21-4274-acec-43245de0aed0",
              "mtu": 1350,
              ...
          }
          ...
      }
    }
 
 This allows a more standard way of annotating the pods, ensuring all needed
 information is there regardless of the SDN backend.
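In the proposal above, the fat serialized-VIF annotation is replaced by a small pointer to the KuryrPort CRD object. A sketch of building and reading such a pointer annotation (the dict shape is inferred from the example above and is an illustration, not kuryr's implementation):

```python
import json

KURYRPORT_PATH = "/apis/openstack.org/v1/kuryrports/"

def slim_annotation(port_uuid):
    """Build the proposed pod annotation: a selfLink pointing at the
    KuryrPort object instead of the full serialized VIF."""
    data = {"eth0": {"versioned_object.data":
                     {"selfLink": KURYRPORT_PATH + port_uuid}}}
    return {"openstack.org/kuryr-vif": json.dumps(data)}

ann = slim_annotation("18f968a5-c420-4318-92d7-941eb5f9e60e")
vif = json.loads(ann["openstack.org/kuryr-vif"])
print(vif["eth0"]["versioned_object.data"]["selfLink"])
# /apis/openstack.org/v1/kuryrports/18f968a5-c420-4318-92d7-941eb5f9e60e
```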
@@ -48,19 +48,25 @@ Automated update
 ``contrib/regenerate_pod_resources_api.sh`` script could be used to re-generate
 PodResources gRPC API files. By default, this script will download ``v1alpha1``
 version of ``api.proto`` file from the Kubernetes GitHub repo and create
-required kuryr-kubernetes files from it::
+required kuryr-kubernetes files from it:
 
+.. code-block:: console
+
     [kuryr-kubernetes]$ ./contrib/regenerate_pod_resources_api.sh
 
 Alternatively, path to ``api.proto`` file could be specified in
-``KUBERNETES_API_PROTO`` environment variable::
+``KUBERNETES_API_PROTO`` environment variable:
 
+.. code-block:: console
+
     $ export KUBERNETES_API_PROTO=/path/to/api.proto
 
 Define ``API_VERSION`` environment variable to use specific version of
-``api.proto`` from the Kubernetes GitHub::
+``api.proto`` from the Kubernetes GitHub:
 
+.. code-block:: console
+
     $ export API_VERSION=v1alpha1
 
 
 Manual update steps
@@ -71,11 +77,13 @@ Preparing the new api.proto
 
 Copy the ``api.proto`` from K8s sources to ``kuryr_kubernetes/pod_resources/``
 and remove all the lines that contains ``gogoproto`` since this is unwanted
-dependency that is not needed for python bindings::
+dependency that is not needed for python bindings:
 
+.. code-block:: console
+
     $ sed '/gogoproto/d' \
        ../kubernetes/pkg/kubelet/apis/podresources/<version>/api.proto \
        > kuryr_kubernetes/pod_resources/api.proto
 
 Don't forget to update the file header that should point to the original
 ``api.proto`` and to this reference document::
@@ -88,16 +96,20 @@ Don't forget to update the file header that should point to the original
 Generating the python bindings
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-* (Optional) Create the python virtual environment::
+* (Optional) Create the python virtual environment:
 
+  .. code-block:: console
+
     [kuryr-kubernetes]$ python3 -m venv venv
     [kuryr-kubernetes]$ . ./venv/bin/activate
 
 * To generate python bindings we need a ``protoc`` compiler and the
   ``gRPC plugin`` for it. The most simple way to get them is to install
-  ``grpcio-tools``::
+  ``grpcio-tools``:
 
+  .. code-block:: console
+
     (venv) [kuryr-kubernetes]$ pip install grpcio-tools==1.19
 
 .. note::
@@ -109,13 +121,17 @@ Generating the python bindings

     you need update ``requirements.txt`` and ``lower-constraints.txt``
     accordingly.

-    To check version of compiler installed with ``grpcio-tools`` use::
+    To check version of compiler installed with ``grpcio-tools`` use:

-        (venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc --version
-        libprotoc 3.6.1
+    .. code-block:: console
+
+       (venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc --version
+       libprotoc 3.6.1

-* Following command will generate ``api_pb2_grpc.py`` and ``api_pb2.py``::
+* Following command will generate ``api_pb2_grpc.py`` and ``api_pb2.py``:

-      (venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \
-          --python_out=. --grpc_python_out=. \
-          kuryr_kubernetes/pod_resources/api.proto
+  .. code-block:: console
+
+     (venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \
+         --python_out=. --grpc_python_out=. \
+         kuryr_kubernetes/pod_resources/api.proto
@@ -73,17 +73,15 @@ Option in config file might look like this:

 .. code-block:: ini

    [kubernetes]
-
    multi_vif_drivers = sriov, additional_subnets

 Or like this:

 .. code-block:: ini

    [kubernetes]
-
    multi_vif_drivers = npwg_multiple_interfaces


 Additional Subnets Driver
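kuryr reads ``multi_vif_drivers`` through oslo.config as a comma-separated list option; the semantics of the ini fragments in the hunk above can be sketched with the stdlib alone (the helper name and the ``noop`` fallback are illustrative assumptions, not kuryr's actual code):

```python
import configparser

KURYR_CONF = """
[kubernetes]
multi_vif_drivers = sriov, additional_subnets
"""


def enabled_multi_vif_drivers(conf_text: str) -> list:
    """Return the multi_vif_drivers list from a kuryr.conf snippet.

    Approximates oslo.config's list-option handling: split on commas,
    strip whitespace, fall back to a single default when unset.
    """
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    raw = parser.get("kubernetes", "multi_vif_drivers", fallback="noop")
    return [item.strip() for item in raw.split(",") if item.strip()]


print(enabled_multi_vif_drivers(KURYR_CONF))
```

Either of the two configurations shown above parses to a plain list of driver names, which is what the controller iterates over when requesting additional VIFs.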
@@ -98,17 +96,17 @@ additional subnets requests might look like:

 .. code-block:: yaml

    spec:
      replicas: 1
      template:
        metadata:
          name: some-name
          labels:
            app: some-name
          annotations:
            openstack.org/kuryr-additional-subnets: '[
                "id_of_neutron_subnet_created_previously"
            ]'


 SRIOV Driver
@@ -122,13 +120,13 @@ Here's how a Pod Spec with sriov requests might look like:

 .. code-block:: yaml

    spec:
      containers:
      - name: vf-container
        image: vf-image
        resources:
          requests:
            pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-physnet2: 1


 Specific ports support
@@ -145,18 +143,18 @@ annotation with requested specific ports might look like this:

 .. code-block:: yaml

    spec:
      replicas: 1
      template:
        metadata:
          name: some-name
          labels:
            app: some-name
          annotations:
            spec-ports: '[
                "id_of_direct_precreated_port",
                "id_of_normal_precreated_port"
            ]'

 Pod spec above should be interpreted the following way: Multi-vif driver parses
 pod annotations and gets ids of specific ports. If vnic_type is "normal" and
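The driver-side parsing described above boils down to reading a JSON list out of pod metadata (note the annotation value must be valid JSON, i.e. comma-separated items). A minimal sketch, with the annotation keys taken from the examples above and the helper name purely hypothetical:

```python
import json


def annotated_ids(pod: dict, annotation: str) -> list:
    """Extract a JSON list of Neutron IDs from a pod annotation.

    Returns [] when the annotation is absent, mirroring how a multi-vif
    driver would fall back to its default behaviour.
    """
    annotations = pod.get("metadata", {}).get("annotations", {})
    raw = annotations.get(annotation)
    return json.loads(raw) if raw else []


pod = {
    "metadata": {
        "name": "some-name",
        "annotations": {
            "openstack.org/kuryr-additional-subnets": '["id_of_neutron_subnet_created_previously"]',
            "spec-ports": '["id_of_direct_precreated_port", "id_of_normal_precreated_port"]',
        },
    }
}

print(annotated_ids(pod, "spec-ports"))
```

Each returned ID is then looked up in Neutron, and the port's ``vnic_type`` decides which VIF driver handles it.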
@@ -8,17 +8,23 @@ Building images
 First you should build kuryr-controller and kuryr-cni docker images and place
 them on cluster-wide accessible registry.

-For creating controller image on local machine: ::
+For creating controller image on local machine:

-    $ docker build -t kuryr/controller -f controller.Dockerfile .
+.. code-block:: console
+
+   $ docker build -t kuryr/controller -f controller.Dockerfile .

-For creating cni daemonset image on local machine: ::
+For creating cni daemonset image on local machine:

-    $ docker build -t kuryr/cni -f cni.Dockerfile .
+.. code-block:: console
+
+   $ docker build -t kuryr/cni -f cni.Dockerfile .

-If you want to run kuryr CNI without the daemon, build theimage with: ::
+If you want to run kuryr CNI without the daemon, build the image with:

-    $ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .
+.. code-block:: console
+
+   $ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .

 Alternatively, you can remove ``imagePullPolicy: Never`` from kuryr-controller
 Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller
@@ -32,9 +38,11 @@ Generating Kuryr resource definitions for Kubernetes

 kuryr-kubernetes includes a tool that lets you generate resource definitions
 that can be used to Deploy Kuryr on Kubernetes. The script is placed in
-``tools/generate_k8s_resource_definitions.sh`` and takes up to 3 arguments: ::
+``tools/generate_k8s_resource_definitions.sh`` and takes up to 3 arguments:

-    $ ./tools/generate_k8s_resource_definitions <output_dir> [<controller_conf_path>] [<cni_conf_path>] [<ca_certificate_path>]
+.. code-block:: console
+
+   $ ./tools/generate_k8s_resource_definitions <output_dir> [<controller_conf_path>] [<cni_conf_path>] [<ca_certificate_path>]

 * ``output_dir`` - directory where to put yaml files with definitions.
 * ``controller_conf_path`` - path to custom kuryr-controller configuration
@@ -83,13 +91,14 @@ script. Below is the list of available variables:
 and ``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo``
 to raise privileges, even though container is priviledged by itself or
 ``sudo`` is missing from container OS (e.g. default CentOS 7). To prevent
-that make sure to set following options in kuryr.conf used for
-kuryr-daemon::
+that make sure to set following options in kuryr.conf used for kuryr-daemon:

-    [vif_plug_ovs_privileged]
-    helper_command=privsep-helper
-    [vif_plug_linux_bridge_privileged]
-    helper_command=privsep-helper
+.. code-block:: ini
+
+   [vif_plug_ovs_privileged]
+   helper_command=privsep-helper
+   [vif_plug_linux_bridge_privileged]
+   helper_command=privsep-helper

 Those options will prevent oslo.privsep from doing that. If rely on
 aformentioned script to generate config files, those options will be added
@@ -104,9 +113,11 @@ variable must be set:

 * ``$KURYR_USE_PORTS_POOLS`` - ``True`` (default: False)

-Example run: ::
+Example run:

-    $ KURYR_K8S_API_ROOT="192.168.0.1:6443" ./tools/generate_k8s_resource_definitions /tmp
+.. code-block:: console
+
+   $ KURYR_K8S_API_ROOT="192.168.0.1:6443" ./tools/generate_k8s_resource_definitions /tmp

 This should generate 5 files in your ``<output_dir>``:
@@ -133,13 +144,15 @@ This should generate 5 files in your ``<output_dir>``:
 Deploying Kuryr resources on Kubernetes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-To deploy the files on your Kubernetes cluster run: ::
+To deploy the files on your Kubernetes cluster run:

-    $ kubectl apply -f config_map.yml -n kube-system
-    $ kubectl apply -f certificates_secret.yml -n kube-system
-    $ kubectl apply -f service_account.yml -n kube-system
-    $ kubectl apply -f controller_deployment.yml -n kube-system
-    $ kubectl apply -f cni_ds.yml -n kube-system
+.. code-block:: console
+
+   $ kubectl apply -f config_map.yml -n kube-system
+   $ kubectl apply -f certificates_secret.yml -n kube-system
+   $ kubectl apply -f service_account.yml -n kube-system
+   $ kubectl apply -f controller_deployment.yml -n kube-system
+   $ kubectl apply -f cni_ds.yml -n kube-system

 After successful completion:
@@ -148,8 +161,11 @@ After successful completion:
 * kuryr-cni gets installed as a daemonset object on all the nodes in
   kube-system namespace

-To see kuryr-controller logs ::
-    $ kubectl logs <pod-name>
+To see kuryr-controller logs:
+
+.. code-block:: console
+
+   $ kubectl logs <pod-name>

 NOTE: kuryr-cni has no logs and to debug failures you need to check out kubelet
 logs.
@@ -2,85 +2,90 @@
 Inspect default Configuration
 =============================

-By default, DevStack creates networks called ``private`` and ``public``::
+By default, DevStack creates networks called ``private`` and ``public``:

+.. code-block:: console
+
    $ openstack network list --project demo
    +--------------------------------------+---------+----------------------------------------------------------------------------+
    | ID                                   | Name    | Subnets                                                                    |
    +--------------------------------------+---------+----------------------------------------------------------------------------+
    | 12bc346b-35ed-4cfa-855b-389305c05740 | private | 1ee73076-e01e-4cec-a3a4-cbb275f94d0f, 8376a091-dcea-4ed5-b738-c16446e861da |
    +--------------------------------------+---------+----------------------------------------------------------------------------+

    $ openstack network list --project admin
    +--------------------------------------+--------+----------------------------------------------------------------------------+
    | ID                                   | Name   | Subnets                                                                    |
    +--------------------------------------+--------+----------------------------------------------------------------------------+
    | 646baf54-6178-4a26-a52b-68ad0ba1e057 | public | 00e0b1e4-4bee-4204-bd02-610291c56334, b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b |
    +--------------------------------------+--------+----------------------------------------------------------------------------+

 And kuryr-kubernetes creates two extra ones for the kubernetes services and
-pods under the project k8s::
+pods under the project k8s:

+.. code-block:: console
+
    $ openstack network list --project k8s
    +--------------------------------------+-----------------+--------------------------------------+
    | ID                                   | Name            | Subnets                              |
    +--------------------------------------+-----------------+--------------------------------------+
    | 1bff74a6-e4e2-42fb-a81b-33c9c144987c | k8s-pod-net     | 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 |
    | d4be7efc-b84d-480e-a1db-34205877e6c4 | k8s-service-net | 55405e9d-4e25-4a55-bac2-e25ee88584e1 |
    +--------------------------------------+-----------------+--------------------------------------+

-And similarly for the subnets::
+And similarly for the subnets:

+.. code-block:: console
+
    $ openstack subnet list --project k8s
    +--------------------------------------+--------------------+--------------------------------------+---------------+
    | ID                                   | Name               | Network                              | Subnet        |
    +--------------------------------------+--------------------+--------------------------------------+---------------+
    | 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 | k8s-pod-subnet     | 1bff74a6-e4e2-42fb-a81b-33c9c144987c | 10.0.0.64/26  |
    | 55405e9d-4e25-4a55-bac2-e25ee88584e1 | k8s-service-subnet | d4be7efc-b84d-480e-a1db-34205877e6c4 | 10.0.0.128/26 |
    +--------------------------------------+--------------------+--------------------------------------+---------------+

-In addition to that, security groups for both pods and services are created
-too::
+In addition to that, security groups for both pods and services are created too:

+.. code-block:: console
+
    $ openstack security group list --project k8s
    +--------------------------------------+--------------------+------------------------+----------------------------------+
    | ID                                   | Name               | Description            | Project                          |
    +--------------------------------------+--------------------+------------------------+----------------------------------+
    | 00fd78f9-484d-4ea7-b677-82f73c54064a | service_pod_access | service_pod_access     | 49e2683370f245e38ac2d6a8c16697b3 |
    | fe7cee41-6021-4d7b-ab03-1ce1e391a1ca | default            | Default security group | 49e2683370f245e38ac2d6a8c16697b3 |
    +--------------------------------------+--------------------+------------------------+----------------------------------+

 And finally, the loadbalancer for the kubernetes API service is also created,
-with the subsequence listener, pool and added members::
+with the subsequence listener, pool and added members:

+.. code-block:: console
+
    $ openstack loadbalancer list
    +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
    | id                                   | name               | tenant_id                        | vip_address | provisioning_status | provider |
    +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
    | 7d0cf5b5-b164-4b32-87d3-ae6c82513927 | default/kubernetes | 47c28e562795468ea52e92226e3bc7b1 | 10.0.0.129  | ACTIVE              | haproxy  |
    +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

    $ openstack loadbalancer listener list
    +--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
    | id                                   | default_pool_id                      | name                   | tenant_id                        | protocol | protocol_port | admin_state_up |
    +--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
    | abfbafd8-7609-4b7d-9def-4edddf2b887b | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | HTTPS    | 443           | True           |
    +--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+

    $ openstack loadbalancer pool list
    +--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
    | id                                   | name                   | tenant_id                        | lb_algorithm | protocol | admin_state_up |
    +--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
    | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | ROUND_ROBIN  | HTTPS    | True           |
    +--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+

    $ openstack loadbalancer member list default/kubernetes:443
    +--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
    | id                                   | name | tenant_id                        | address      | protocol_port | weight | subnet_id                            | admin_state_up |
    +--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
    | 5ddceaff-180b-47fa-b787-8921f4591cb0 |      | 47c28e562795468ea52e92226e3bc7b1 | 192.168.5.10 | 6443          | 1      | b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b | True           |
    +--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
@@ -14,25 +14,33 @@ and dependencies of both systems.
 Cloning required repositories
 -----------------------------

-First of all you need to clone DevStack: ::
+First of all you need to clone DevStack:

-    $ git clone https://opendev.org/openstack-dev/devstack
+.. code-block:: console
+
+   $ git clone https://opendev.org/openstack-dev/devstack

-Create user *stack*, give it required permissions and log in as that user: ::
+Create user *stack*, give it required permissions and log in as that user:

-    $ ./devstack/tools/create-stack-user.sh
-    $ sudo su stack
+.. code-block:: console
+
+   $ ./devstack/tools/create-stack-user.sh
+   $ sudo su stack

 *stack* user has ``/opt/stack`` set as its home directory. It will need its own
-repository with DevStack. Also clone kuryr-kubernetes: ::
+repository with DevStack. Also clone kuryr-kubernetes:

-    $ git clone https://opendev.org/openstack-dev/devstack
-    $ git clone https://opendev.org/openstack/kuryr-kubernetes
+.. code-block:: console
+
+   $ git clone https://opendev.org/openstack-dev/devstack
+   $ git clone https://opendev.org/openstack/kuryr-kubernetes

 Copy sample ``local.conf`` (DevStack configuration file) to devstack
-directory: ::
+directory:

-    $ cp kuryr-kubernetes/devstack/local.conf.sample devstack/local.conf
+.. code-block:: console
+
+   $ cp kuryr-kubernetes/devstack/local.conf.sample devstack/local.conf

 .. note::
@ -51,86 +59,100 @@ Now edit ``devstack/local.conf`` to set up some initial options:
|
||||||
* If you already have Docker installed on the machine, you can comment out line
|
* If you already have Docker installed on the machine, you can comment out line
|
||||||
starting with ``enable_plugin devstack-plugin-container``.
|
starting with ``enable_plugin devstack-plugin-container``.
|
||||||
|
|
||||||
Once ``local.conf`` is configured, you can start the installation: ::
|
Once ``local.conf`` is configured, you can start the installation:
|
||||||
|
|
||||||
$ ./devstack/stack.sh
|
.. code-block:: console
|
||||||
|
|
||||||
|
$ ./devstack/stack.sh
|
||||||
|
|
||||||
Installation takes from 15 to 30 minutes. Once that's done you should see
|
Installation takes from 15 to 30 minutes. Once that's done you should see
|
||||||
similar output: ::
|
similar output:
|
||||||
|
|
||||||
=========================
|
.. code-block:: console
|
||||||
DevStack Component Timing
|
|
||||||
(times are in seconds)
|
=========================
|
||||||
=========================
|
DevStack Component Timing
|
||||||
run_process 5
|
(times are in seconds)
|
||||||
test_with_retry 2
|
=========================
|
||||||
pip_install 48
|
run_process 5
|
||||||
osc 121
|
test_with_retry 2
|
||||||
wait_for_service 1
|
pip_install 48
|
||||||
yum_install 31
|
osc 121
|
||||||
dbsync 27
|
wait_for_service 1
|
||||||
-------------------------
|
yum_install 31
|
||||||
Unaccounted time 125
|
dbsync 27
|
||||||
=========================
|
-------------------------
|
||||||
Total runtime 360
|
Unaccounted time 125
|
||||||
|
=========================
|
||||||
|
Total runtime 360
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
This is your host IP address: 192.168.101.249
|
This is your host IP address: 192.168.101.249
|
||||||
This is your host IPv6 address: fec0::5054:ff:feb0:213a
|
This is your host IPv6 address: fec0::5054:ff:feb0:213a
|
||||||
Keystone is serving at http://192.168.101.249/identity/
|
Keystone is serving at http://192.168.101.249/identity/
|
||||||
The default users are: admin and demo
|
The default users are: admin and demo
|
||||||
The password: password

WARNING:
Using lib/neutron-legacy is deprecated, and it will be removed in the future

Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html

DevStack Version: queens
Change: 301d4d1678c3c1342abc03e51a74574f7792a58b Merge "Use "pip list" in check_libs_from_git" 2017-10-04 07:22:59 +0000
OS Version: CentOS 7.4.1708 Core

You can test DevStack by sourcing credentials and trying some commands:

.. code-block:: console

   $ source /devstack/openrc admin admin
   $ openstack service list
   +----------------------------------+------------------+------------------+
   | ID                               | Name             | Type             |
   +----------------------------------+------------------+------------------+
   | 091e3e2813cc4904b74b60c41e8a98b3 | kuryr-kubernetes | kuryr-kubernetes |
   | 2b6076dd5fc04bf180e935f78c12d431 | neutron          | network          |
   | b598216086944714aed2c233123fc22d | keystone         | identity         |
   +----------------------------------+------------------+------------------+

To verify if Kubernetes is running properly, list its nodes and check the
status of the only node you should have. The correct value is "Ready":

.. code-block:: console

   $ kubectl get nodes
   NAME        STATUS    AGE       VERSION
   localhost   Ready     2m        v1.6.2

To test kuryr-kubernetes itself, try creating a Kubernetes pod:

.. code-block:: console

   $ kubectl run --image busybox test -- sleep 3600
   $ kubectl get pods -o wide
   NAME                    READY     STATUS              RESTARTS   AGE       IP        NODE
   test-3202410914-1dp7g   0/1       ContainerCreating   0          7s        <none>    localhost
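If the pod stays in ``ContainerCreating`` for long, the pod's events usually
show which step failed. A quick way to inspect them (using the pod name from
the example above; substitute your own):

.. code-block:: console

   $ kubectl describe pod test-3202410914-1dp7g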
After a moment (even up to a few minutes, as the Docker image needs to be
downloaded) you should see that the pod got its IP from the OpenStack network:

.. code-block:: console

   $ kubectl get pods -o wide
   NAME                    READY     STATUS    RESTARTS   AGE       IP          NODE
   test-3202410914-1dp7g   1/1       Running   0          35s       10.0.0.73   localhost

You can verify that this IP is really assigned to a Neutron port:

.. code-block:: console

   [stack@localhost kuryr-kubernetes]$ openstack port list | grep 10.0.0.73
   | 3ce7fd13-ad0a-4e92-9b6f-0d38d50b1699 |      | fa:16:3e:8e:f4:30 | ip_address='10.0.0.73', subnet_id='ddfbc8e9-68da-48f9-8a05-238ea0607e0d' | ACTIVE |

If those steps were successful, then it looks like your DevStack with
kuryr-kubernetes is working correctly. In case of errors, copy the last ~50 lines
@@ -12,9 +12,11 @@ Installation

To configure DevStack to install Kuryr services as containerized Kubernetes
resources, you need to switch ``KURYR_K8S_CONTAINERIZED_DEPLOYMENT``. Add this
line to your ``local.conf``:

.. code-block:: ini

   KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True

This will trigger building the kuryr-controller and kuryr-cni containers during
installation, as well as deploying them on the Kubernetes cluster it installs.
@@ -32,9 +34,11 @@ Changing configuration
----------------------

To change the kuryr.conf files that are put into containers, you need to edit
the associated ConfigMap. On a DevStack deployment this can be done using:

.. code-block:: console

   $ kubectl -n kube-system edit cm kuryr-config

An editor will then appear, letting you edit the ConfigMap. Make sure to keep
correct indentation when making changes. Also note that there are two files
@@ -54,11 +58,13 @@ kuryr-controller
~~~~~~~~~~~~~~~~

To restart kuryr-controller and let it load the new image and configuration,
simply kill the existing pod:

.. code-block:: console

   $ kubectl -n kube-system get pods
   <find kuryr-controller pod you want to restart>
   $ kubectl -n kube-system delete pod <pod-name>

The Deployment controller will make sure to restart the pod with the new
configuration.
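To watch the Deployment bring a replacement pod up, you can, for instance,
leave a watch running:

.. code-block:: console

   $ kubectl -n kube-system get pods -w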
@@ -71,8 +77,10 @@ actually idling with ``sleep infinity`` once all the files are copied into
correct locations on Kubernetes host.

You can force it to redeploy new files by killing it. The DaemonSet controller
will make sure to restart it with the new image and configuration files.

.. code-block:: console

   $ kubectl -n kube-system get pods
   <find kuryr-cni pods you want to restart>
   $ kubectl -n kube-system delete pod <pod-name1> <pod-name2> <...>
@@ -38,18 +38,18 @@ to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).

2. Create the ``stack`` user.

   .. code-block:: console

      $ git clone https://opendev.org/openstack-dev/devstack.git
      $ sudo ./devstack/tools/create-stack-user.sh

3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

   .. code-block:: console

      $ sudo su - stack
      $ git clone https://opendev.org/openstack-dev/devstack.git
      $ git clone https://opendev.org/openstack/kuryr-kubernetes.git

4. Configure DevStack to use Dragonflow.
@@ -58,10 +58,10 @@ you can start with. You may change some values for the various variables in
that file, like password settings or what LBaaS service provider to use.
Feel free to edit it if you'd like, but it should work as-is.

.. code-block:: console

   $ cd devstack
   $ cp ../kuryr-kubernetes/devstack/local.conf.df.sample local.conf


Optionally, the ports pool functionality can be enabled by following:
@@ -74,18 +74,21 @@ Optionally, the ports pool functionality can be enabled by following:
Expect it to take a while. It installs required packages, clones a bunch
of git repos, and installs everything from these git repos.

.. code-block:: console

   $ ./stack.sh

Once DevStack completes successfully, you should see output that looks
something like this:

.. code-block:: console

   This is your host IP address: 192.168.5.10
   This is your host IPv6 address: ::1
   Keystone is serving at http://192.168.5.10/identity/
   The default users are: admin and demo
   The password: pass

6. Extra configurations.
@@ -93,9 +96,9 @@ something like this::

Create a NAT rule that will cause "external" traffic from your instances to be
rewritten to your network controller's IP address and sent out on the network:

.. code-block:: console

   $ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
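You can confirm the rule was inserted, for instance, by listing the
POSTROUTING chain:

.. code-block:: console

   $ sudo iptables -t nat -L POSTROUTING -n --line-numbers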

Inspect default Configuration
@@ -134,10 +137,12 @@ Undercloud deployment

The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, with a different sample local.conf to
use (step 4), in this case:

.. code-block:: console

   $ cd devstack
   $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf

The main differences with the default Dragonflow local.conf sample are that:
|
||||||
The steps to perform are the same as without Dragonflow integration, i.e., the
|
The steps to perform are the same as without Dragonflow integration, i.e., the
|
||||||
same steps as for ML2/OVS:
|
same steps as for ML2/OVS:
|
||||||
|
|
||||||
1. Log in into the VM::
|
1. Log in into the VM:
|
||||||
|
|
||||||
$ ssh -i id_rsa_demo centos@FLOATING_IP
|
.. code-block:: console
|
||||||
|
|
||||||
|
$ ssh -i id_rsa_demo centos@FLOATING_IP
|
||||||
|
|
||||||
2. Deploy devstack following steps 3 and 4 detailed at
|
2. Deploy devstack following steps 3 and 4 detailed at
|
||||||
`How to try out nested-pods locally (VLAN + trunk)`_.
|
`How to try out nested-pods locally (VLAN + trunk)`_.
|
||||||
|
|
|
@@ -23,23 +23,31 @@ nested MACVLAN driver rather than VLAN and trunk ports.

4. Once devstack is done and all services are up inside the VM, the next step
   is to configure the missing information in ``/etc/kuryr/kuryr.conf``:

   - Configure the worker VMs subnet:

     .. code-block:: ini

        [pod_vif_nested]
        worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>

   - Configure ``pod_vif_driver`` as ``nested-macvlan``:

     .. code-block:: ini

        [kubernetes]
        pod_vif_driver = nested-macvlan

   - Configure the binding section:

     .. code-block:: ini

        [binding]
        link_iface = <VM interface name eg. eth0>

   - Restart kuryr-k8s-controller:

     .. code-block:: console

        $ sudo systemctl restart devstack@kuryr-kubernetes.service

Now launch pods using kubectl; Undercloud Neutron will serve the networking.
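For example, a minimal busybox pod (as used elsewhere in these guides) is
enough to exercise the nested driver:

.. code-block:: console

   $ kubectl run --image busybox test -- sleep 3600
   $ kubectl get pods -o wide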
@@ -9,10 +9,12 @@ for the VM:

1. To install OpenStack services run devstack with
   ``devstack/local.conf.pod-in-vm.undercloud.sample``. Ensure that the "trunk"
   service plugin is enabled in ``/etc/neutron/neutron.conf``:

   .. code-block:: ini

      [DEFAULT]
      service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin

2. Launch a VM with a `Neutron trunk port.
   <https://wiki.openstack.org/wiki/Neutron/TrunkPort>`_. The next steps can be
|
||||||
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
|
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
|
||||||
but first fill in the needed information:
|
but first fill in the needed information:
|
||||||
|
|
||||||
- Point to the undercloud deployment by setting::
|
- Point to the undercloud deployment by setting:
|
||||||
|
|
||||||
SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
|
.. code-block:: bash
|
||||||
|
|
||||||
|
SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
|
||||||
|
|
||||||
- Fill in the subnetpool id of the undercloud deployment, as well as
|
- Fill in the subnetpool id of the undercloud deployment, as well as
|
||||||
the router where the new pod and service networks need to be
|
the router where the new pod and service networks need to be
|
||||||
connected::
|
connected:
|
||||||
|
|
||||||
KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
|
.. code-block:: bash
|
||||||
KURYR_NEUTRON_DEFAULT_ROUTER=router1
|
|
||||||
|
|
||||||
- Ensure the nested-vlan driver is going to be set by setting::
|
KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
|
||||||
|
KURYR_NEUTRON_DEFAULT_ROUTER=router1
|
||||||
|
|
||||||
KURYR_POD_VIF_DRIVER=nested-vlan
|
- Ensure the nested-vlan driver is going to be set by setting:
|
||||||
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
KURYR_POD_VIF_DRIVER=nested-vlan
|
||||||
|
|
||||||
- Optionally, the ports pool funcionality can be enabled by following:
|
- Optionally, the ports pool funcionality can be enabled by following:
|
||||||
`How to enable ports pool with devstack`_.
|
`How to enable ports pool with devstack`_.
|
||||||
|
@ -48,31 +55,40 @@ for the VM:
|
||||||
.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
|
.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
|
||||||
|
|
||||||
- [OPTIONAL] If you want to enable the subport pools driver and the
|
- [OPTIONAL] If you want to enable the subport pools driver and the
|
||||||
VIF Pool Manager you need to include::
|
VIF Pool Manager you need to include:
|
||||||
|
|
||||||
KURYR_VIF_POOL_MANAGER=True
|
.. code-block:: bash
|
||||||
|
|
||||||
|
KURYR_VIF_POOL_MANAGER=True
|
||||||
|
|
||||||
4. Once devstack is done and all services are up inside VM. Next steps are to
|
4. Once devstack is done and all services are up inside VM. Next steps are to
|
||||||
configure the missing information at ``/etc/kuryr/kuryr.conf``:
|
configure the missing information at ``/etc/kuryr/kuryr.conf``:
|
||||||
|
|
||||||
- Configure worker VMs subnet::
|
- Configure worker VMs subnet:
|
||||||
|
|
||||||
[pod_vif_nested]
|
.. code-block:: ini
|
||||||
worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>
|
|
||||||
|
|
||||||
- Configure binding section::
|
[pod_vif_nested]
|
||||||
|
worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>
|
||||||
|
|
||||||
[binding]
|
- Configure binding section:
|
||||||
driver = kuryr.lib.binding.drivers.vlan
|
|
||||||
link_iface = <VM interface name eg. eth0>
|
|
||||||
|
|
||||||
- Restart kuryr-k8s-controller::
|
.. code-block:: ini
|
||||||
|
|
||||||
sudo systemctl restart devstack@kuryr-kubernetes.service
|
[binding]
|
||||||
|
driver = kuryr.lib.binding.drivers.vlan
|
||||||
|
link_iface = <VM interface name eg. eth0>
|
||||||
|
|
||||||
- Restart kuryr-daemon::
|
- Restart kuryr-k8s-controller:
|
||||||
|
|
||||||
sudo systemctl restart devstack@kuryr-daemon.service
|
.. code-block:: console
|
||||||
|
|
||||||
|
$ sudo systemctl restart devstack@kuryr-kubernetes.service
|
||||||
|
|
||||||
|
- Restart kuryr-daemon:
|
||||||
|
|
||||||
|
.. code-block:: console
|
||||||
|
|
||||||
|
$ sudo systemctl restart devstack@kuryr-daemon.service
|
||||||
|
|
||||||
Now launch pods using kubectl, Undercloud Neutron will serve the networking.
|
Now launch pods using kubectl, Undercloud Neutron will serve the networking.
|
||||||
|
|
|
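To confirm both services came back up after the restarts, you can check their
status in one go:

.. code-block:: console

   $ sudo systemctl status devstack@kuryr-kubernetes.service devstack@kuryr-daemon.service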
@@ -33,18 +33,18 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).

2. Create the ``stack`` user.

   .. code-block:: console

      $ git clone https://opendev.org/openstack-dev/devstack.git
      $ sudo ./devstack/tools/create-stack-user.sh

3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

   .. code-block:: console

      $ sudo su - stack
      $ git clone https://opendev.org/openstack-dev/devstack.git
      $ git clone https://opendev.org/openstack/kuryr-kubernetes.git

4. Configure DevStack to use ODL.
|
||||||
PASSWORD variables in that file, or change the LBaaS service provider to use.
|
PASSWORD variables in that file, or change the LBaaS service provider to use.
|
||||||
Feel free to edit it if you'd like, but it should work as-is.
|
Feel free to edit it if you'd like, but it should work as-is.
|
||||||
|
|
||||||
::
|
.. code-block:: console
|
||||||
|
|
||||||
$ cd devstack
|
$ cd devstack
|
||||||
$ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
|
$ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
|
||||||
|
|
||||||
|
|
||||||
Optionally, the ports pool funcionality can be enabled by following:
|
Optionally, the ports pool funcionality can be enabled by following:
|
||||||
|
@ -69,41 +69,41 @@ Optionally, the ports pool funcionality can be enabled by following:
|
||||||
This is going to take a while. It installs a bunch of packages, clones a bunch
|
This is going to take a while. It installs a bunch of packages, clones a bunch
|
||||||
of git repos, and installs everything from these git repos.
|
of git repos, and installs everything from these git repos.
|
||||||
|
|
||||||
::
|
.. code-block:: console
|
||||||
|
|
||||||
$ ./stack.sh
|
$ ./stack.sh
|
||||||
|
|
||||||
Once DevStack completes successfully, you should see output that looks
|
Once DevStack completes successfully, you should see output that looks
|
||||||
something like this::
|
something like this:
|
||||||
|
|
||||||
This is your host IP address: 192.168.5.10
|
.. code-block:: console
|
||||||
This is your host IPv6 address: ::1
|
|
||||||
Keystone is serving at http://192.168.5.10/identity/
|
|
||||||
The default users are: admin and demo
|
|
||||||
The password: pass
|
|
||||||
|
|
||||||
|
This is your host IP address: 192.168.5.10
|
||||||
|
This is your host IPv6 address: ::1
|
||||||
|
Keystone is serving at http://192.168.5.10/identity/
|
||||||
|
The default users are: admin and demo
|
||||||
|
The password: pass
|
||||||
|
|
||||||
6. Extra configurations.
|
6. Extra configurations.
|
||||||
|
|
||||||
Devstack does not wire up the public network by default so we must do
|
Devstack does not wire up the public network by default so we must do
|
||||||
some extra steps for floating IP usage as well as external connectivity:
|
some extra steps for floating IP usage as well as external connectivity:
|
||||||
|
|
||||||
::
|
.. code-block:: console
|
||||||
|
|
||||||
$ sudo ip link set br-ex up
|
|
||||||
$ sudo ip route add 172.24.4.0/24 dev br-ex
|
|
||||||
$ sudo ip addr add 172.24.4.1/24 dev br-ex
|
|
||||||
|
|
||||||
|
$ sudo ip link set br-ex up
|
||||||
|
$ sudo ip route add 172.24.4.0/24 dev br-ex
|
||||||
|
$ sudo ip addr add 172.24.4.1/24 dev br-ex
|
||||||
|
|
||||||
Then you can create forwarding and NAT rules that will cause "external"
|
Then you can create forwarding and NAT rules that will cause "external"
|
||||||
traffic from your instances to get rewritten to your network controller's
|
traffic from your instances to get rewritten to your network controller's
|
||||||
ip address and sent out on the network:
|
ip address and sent out on the network:
|
||||||
|
|
||||||
::
|
.. code-block:: console
|
||||||
|
|
||||||
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
|
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
|
||||||
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
|
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
|
||||||
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
|
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
|
||||||
|
|
||||||
|
|
||||||
Inspect default Configuration
|
Inspect default Configuration
|
||||||
|
@ -142,11 +142,12 @@ Undercloud deployment
|
||||||
|
|
||||||
The steps to deploy the undercloud environment are the same described above
|
The steps to deploy the undercloud environment are the same described above
|
||||||
for the `Single Node Test Environment` with the different of the sample
|
for the `Single Node Test Environment` with the different of the sample
|
||||||
local.conf to use (step 4), in this case::
|
local.conf to use (step 4), in this case:
|
||||||
|
|
||||||
$ cd devstack
|
.. code-block:: console
|
||||||
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.odl.sample local.conf
|
|
||||||
|
|
||||||
|
$ cd devstack
|
||||||
|
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.odl.sample local.conf
|
||||||
|
|
||||||
The main differences with the default odl local.conf sample are that:
|
The main differences with the default odl local.conf sample are that:
|
||||||
|
|
||||||
|
@ -179,9 +180,11 @@ Once the VM is up and running, we can start with the overcloud configuration.
|
||||||
The steps to perform are the same as without ODL integration, i.e., the
|
The steps to perform are the same as without ODL integration, i.e., the
|
||||||
same steps as for ML2/OVS:
|
same steps as for ML2/OVS:
|
||||||
|
|
||||||
1. Log in into the VM::
|
1. Log in into the VM:
|
||||||
|
|
||||||
$ ssh -i id_rsa_demo centos@FLOATING_IP
|
.. code-block:: console
|
||||||
|
|
||||||
|
$ ssh -i id_rsa_demo centos@FLOATING_IP
|
||||||
|
|
||||||
2. Deploy devstack following steps 3 and 4 detailed at
|
2. Deploy devstack following steps 3 and 4 detailed at
|
||||||
`How to try out nested-pods locally (VLAN + trunk)`_.
|
`How to try out nested-pods locally (VLAN + trunk)`_.
|
||||||
|
|
|
@@ -30,18 +30,18 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).

2. Create the ``stack`` user.

   .. code-block:: console

      $ git clone https://opendev.org/openstack-dev/devstack.git
      $ sudo ./devstack/tools/create-stack-user.sh

3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

   .. code-block:: console

      $ sudo su - stack
      $ git clone https://opendev.org/openstack-dev/devstack.git
      $ git clone https://opendev.org/openstack/kuryr-kubernetes.git

4. Configure DevStack to use OVN.
|
||||||
PASSWORD variables in that file, or change the LBaaS service provider to use.
|
PASSWORD variables in that file, or change the LBaaS service provider to use.
|
||||||
Feel free to edit it if you'd like, but it should work as-is.
|
Feel free to edit it if you'd like, but it should work as-is.
|
||||||
|
|
||||||
::
|
.. code-block:: console
|
||||||
|
|
||||||
$ cd devstack
|
|
||||||
$ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf
|
|
||||||
|
|
||||||
|
$ cd devstack
|
||||||
|
$ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf
|
||||||
|
|
||||||
Note that due to OVN compiling OVS from source at
|
Note that due to OVN compiling OVS from source at
|
||||||
/usr/local/var/run/openvswitch we need to state at the local.conf that the path
|
/usr/local/var/run/openvswitch we need to state at the local.conf that the path
|
||||||
|
@ -68,9 +67,9 @@ Optionally, the ports pool functionality can be enabled by following:
|
||||||
This is going to take a while. It installs a bunch of packages, clones a bunch
|
This is going to take a while. It installs a bunch of packages, clones a bunch
|
||||||
of git repos, and installs everything from these git repos.
|
of git repos, and installs everything from these git repos.
|
||||||
|
|
||||||
::
|
.. code-block:: console
|
||||||
|
|
||||||
$ ./stack.sh
|
$ ./stack.sh
|
||||||
|
|
||||||
Once DevStack completes successfully, you should see output that looks
|
Once DevStack completes successfully, you should see output that looks
|
||||||
something like this::
|
something like this::
|
||||||
|
@ -87,22 +86,21 @@ something like this::
|
||||||
Devstack does not wire up the public network by default so we must do
|
Devstack does not wire up the public network by default so we must do
|
||||||
some extra steps for floating IP usage as well as external connectivity:
|
some extra steps for floating IP usage as well as external connectivity:
|
||||||
|
|
||||||
::
|
.. code-block:: console
|
||||||
|
|
||||||
$ sudo ip link set br-ex up
|
|
||||||
$ sudo ip route add 172.24.4.0/24 dev br-ex
|
|
||||||
$ sudo ip addr add 172.24.4.1/24 dev br-ex
|
|
||||||
|
|
||||||
|
$ sudo ip link set br-ex up
|
||||||
|
$ sudo ip route add 172.24.4.0/24 dev br-ex
|
||||||
|
$ sudo ip addr add 172.24.4.1/24 dev br-ex
|
||||||
|
|
||||||
Then you can create forwarding and NAT rules that will cause "external"
|
Then you can create forwarding and NAT rules that will cause "external"
|
||||||
traffic from your instances to get rewritten to your network controller's
|
traffic from your instances to get rewritten to your network controller's
|
||||||
ip address and sent out on the network:
|
ip address and sent out on the network:
|
||||||
|
|
||||||
::
|
.. code-block:: console
|
||||||
|
|
||||||
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
|
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
|
||||||
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
|
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
|
||||||
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
|
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
|
||||||
|
|
||||||
|
|
||||||
Inspect default Configuration
|
Inspect default Configuration
|
||||||
|
@ -136,11 +134,12 @@ Undercloud deployment
|
||||||
|
|
||||||
The steps to deploy the undercloud environment are the same described above
|
The steps to deploy the undercloud environment are the same described above
|
||||||
for the `Single Node Test Environment` with the different of the sample
|
for the `Single Node Test Environment` with the different of the sample
|
||||||
local.conf to use (step 4), in this case::
|
local.conf to use (step 4), in this case:
|
||||||
|
|
||||||
$ cd devstack
|
.. code-block:: console
|
||||||
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf
|
|
||||||
|
|
||||||
|
$ cd devstack
|
||||||
|
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf
|
||||||
|
|
||||||
The main differences with the default ovn local.conf sample are that:
|
The main differences with the default ovn local.conf sample are that:
|
||||||
|
|
||||||
|
@ -171,9 +170,11 @@ Once the VM is up and running, we can start with the overcloud configuration.
|
||||||
The steps to perform are the same as without OVN integration, i.e., the
|
The steps to perform are the same as without OVN integration, i.e., the
|
||||||
same steps as for ML2/OVS:
|
same steps as for ML2/OVS:
|
||||||
|
|
||||||
1. Log in into the VM::
|
1. Log in into the VM:
|
||||||
|
|
||||||
$ ssh -i id_rsa_demo centos@FLOATING_IP
|
.. code-block:: console
|
||||||
|
|
||||||
|
$ ssh -i id_rsa_demo centos@FLOATING_IP
|
||||||
|
|
||||||
2. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`
|
2. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`
|
||||||
|
|
||||||
|
|
|
@ -5,31 +5,36 @@ How to enable ports pool with devstack

To enable the utilization of the ports pool feature through devstack, the next
options need to be set in the local.conf file:

1. First, you need to enable the pools by setting:

   .. code-block:: bash

      KURYR_USE_PORT_POOLS=True

2. Then, the proper pool driver needs to be set. This means that for the
   baremetal case you need to ensure the pod vif driver and the vif pool driver
   are set to the right baremetal drivers, for instance:

   .. code-block:: bash

      KURYR_POD_VIF_DRIVER=neutron-vif
      KURYR_VIF_POOL_DRIVER=neutron

   And if the use case is the nested one, then they should be set to:

   .. code-block:: bash

      KURYR_POD_VIF_DRIVER=nested-vlan
      KURYR_VIF_POOL_DRIVER=nested

3. Then, in case you want to set a limit to the maximum number of ports, or
   increase/reduce the default one for the minimum number, as well as to modify
   the way the pools are repopulated, both in time as well as regarding bulk
   operation sizes, the next options can be included and modified accordingly:

   .. code-block:: bash

      KURYR_PORT_POOL_MIN=5
      KURYR_PORT_POOL_MAX=0
      KURYR_PORT_POOL_BATCH=10
      KURYR_PORT_POOL_UPDATE_FREQ=20
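As an editorial aside (not part of the patch), the interplay between the
minimum pool size and the batch size can be sketched in Python. This is an
illustrative model only, not kuryr's actual repopulation code; the function
name and rounding policy are assumptions:

```python
def ports_to_request(available, pool_min, batch):
    """How many ports a repopulation pass would bulk-create (sketch only).

    When the pool drops below pool_min, ports are requested in whole
    batch-sized chunks until the pool is back above the minimum.
    """
    if available >= pool_min:
        return 0
    missing = pool_min - available
    # Round the shortfall up to a whole batch (ceiling division).
    return -(-missing // batch) * batch

# With KURYR_PORT_POOL_MIN=5 and KURYR_PORT_POOL_BATCH=10, a pool holding
# only 2 ports would trigger one bulk request of 10 ports.
print(ports_to_request(available=2, pool_min=5, batch=10))
```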
@ -3,21 +3,27 @@ Watching Kubernetes api-server over HTTPS

=========================================

Add the absolute path of the client side cert file and key file for the
Kubernetes server in ``kuryr.conf``:

.. code-block:: ini

   [kubernetes]
   api_root = https://your_server_address:server_ssl_port
   ssl_client_crt_file = <absolute file path eg. /etc/kubernetes/admin.crt>
   ssl_client_key_file = <absolute file path eg. /etc/kubernetes/admin.key>

If server SSL certificate verification is also to be enabled, add the absolute
path to the ca cert:

.. code-block:: ini

   [kubernetes]
   ssl_ca_crt_file = <absolute file path eg. /etc/kubernetes/ca.crt>
   ssl_verify_server_crt = True

If you want to query the HTTPS Kubernetes api server in ``--insecure`` mode:

.. code-block:: ini

   [kubernetes]
   ssl_verify_server_crt = False
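To illustrate how a client might consume these options (an editorial sketch,
not kuryr's actual implementation; the helper name and the sample paths are
assumptions), the stdlib ``configparser`` and ``ssl`` modules map them onto a
TLS verification mode:

```python
import configparser
import ssl

def tls_settings(conf_text):
    """Derive TLS client settings from the [kubernetes] section of kuryr.conf."""
    conf = configparser.ConfigParser()
    conf.read_string(conf_text)
    k8s = conf["kubernetes"]
    return {
        "api_root": k8s.get("api_root"),
        "cert": k8s.get("ssl_client_crt_file"),
        "key": k8s.get("ssl_client_key_file"),
        "ca": k8s.get("ssl_ca_crt_file"),
        # --insecure mode means skipping server certificate verification.
        "verify_mode": (ssl.CERT_REQUIRED
                        if k8s.getboolean("ssl_verify_server_crt", fallback=False)
                        else ssl.CERT_NONE),
    }

sample = """
[kubernetes]
api_root = https://your_server_address:6443
ssl_client_crt_file = /etc/kubernetes/admin.crt
ssl_client_key_file = /etc/kubernetes/admin.key
ssl_ca_crt_file = /etc/kubernetes/ca.crt
ssl_verify_server_crt = True
"""

print(tls_settings(sample)["verify_mode"] == ssl.CERT_REQUIRED)
```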
@ -10,183 +10,199 @@ Kuryr-Kubernetes to achieve an IPv6 only Kubernetes cluster.

Setting it up
-------------

#. Create pods network:

   .. code-block:: console

      $ openstack network create pods
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        |                                      |
      | created_at                | 2017-08-11T10:51:25Z                 |
      | description               |                                      |
      | dns_domain                | None                                 |
      | id                        | 4593045c-4233-4b4c-8527-35608ab0eaae |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | False                                |
      | is_vlan_transparent       | None                                 |
      | mtu                       | 1450                                 |
      | name                      | pods                                 |
      | port_security_enabled     | True                                 |
      | project_id                | 90baf12877ba49a786419b2cacc2c954     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 21                                   |
      | qos_policy_id             | None                                 |
      | revision_number           | 2                                    |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      | []                                   |
      | updated_at                | 2017-08-11T10:51:25Z                 |
      +---------------------------+--------------------------------------+

#. Create the pod subnet:

   .. code-block:: console

      $ openstack subnet create --network pods --no-dhcp \
            --subnet-range fd10:0:0:1::/64 \
            --ip-version 6 \
            pod_subnet
      +-------------------------+-------------------------------------------+
      | Field                   | Value                                     |
      +-------------------------+-------------------------------------------+
      | allocation_pools        | fd10:0:0:1::2-fd10::1:ffff:ffff:ffff:ffff |
      | cidr                    | fd10:0:0:1::/64                           |
      | created_at              | 2017-08-11T17:02:20Z                      |
      | description             |                                           |
      | dns_nameservers         |                                           |
      | enable_dhcp             | False                                     |
      | gateway_ip              | fd10:0:0:1::1                             |
      | host_routes             |                                           |
      | id                      | eef12d65-4d02-4344-b255-295f9adfd4e9      |
      | ip_version              | 6                                         |
      | ipv6_address_mode       | None                                      |
      | ipv6_ra_mode            | None                                      |
      | name                    | pod_subnet                                |
      | network_id              | 4593045c-4233-4b4c-8527-35608ab0eaae      |
      | project_id              | 90baf12877ba49a786419b2cacc2c954          |
      | revision_number         | 0                                         |
      | segment_id              | None                                      |
      | service_types           |                                           |
      | subnetpool_id           | None                                      |
      | tags                    | []                                        |
      | updated_at              | 2017-08-11T17:02:20Z                      |
      | use_default_subnet_pool | None                                      |
      +-------------------------+-------------------------------------------+

#. Create services network:

   .. code-block:: console

      $ openstack network create services
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        |                                      |
      | created_at                | 2017-08-11T10:53:36Z                 |
      | description               |                                      |
      | dns_domain                | None                                 |
      | id                        | 560df0c2-537c-41c0-b22c-40ef3d752574 |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | False                                |
      | is_vlan_transparent       | None                                 |
      | mtu                       | 1450                                 |
      | name                      | services                             |
      | port_security_enabled     | True                                 |
      | project_id                | 90baf12877ba49a786419b2cacc2c954     |
      | provider:network_type     | vxlan                                |
      | provider:physical_network | None                                 |
      | provider:segmentation_id  | 94                                   |
      | qos_policy_id             | None                                 |
      | revision_number           | 2                                    |
      | router:external           | Internal                             |
      | segments                  | None                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      | []                                   |
      | updated_at                | 2017-08-11T10:53:37Z                 |
      +---------------------------+--------------------------------------+

#. Create services subnet. We reserve the first half of the subnet range for the
   VIPs and the second half for the loadbalancer vrrp ports.

   .. code-block:: console

      $ openstack subnet create --network services --no-dhcp \
            --gateway fd10:0:0:2:0:0:0:fffe \
            --ip-version 6 \
            --allocation-pool start=fd10:0:0:2:0:0:0:8000,end=fd10:0:0:2:0:0:0:fffd \
            --subnet-range fd10:0:0:2::/112 \
            service_subnet
      +-------------------------+--------------------------------------+
      | Field                   | Value                                |
      +-------------------------+--------------------------------------+
      | allocation_pools        | fd10:0:0:2::8000-fd10:0:0:2::fffd    |
      | cidr                    | fd10:0:0:2::/112                     |
      | created_at              | 2017-08-14T19:08:34Z                 |
      | description             |                                      |
      | dns_nameservers         |                                      |
      | enable_dhcp             | False                                |
      | gateway_ip              | fd10:0:0:2::fffe                     |
      | host_routes             |                                      |
      | id                      | 3c53ff94-40e2-4399-bc45-6e210f1e8064 |
      | ip_version              | 6                                    |
      | ipv6_address_mode       | None                                 |
      | ipv6_ra_mode            | None                                 |
      | name                    | service_subnet                       |
      | network_id              | 560df0c2-537c-41c0-b22c-40ef3d752574 |
      | project_id              | 90baf12877ba49a786419b2cacc2c954     |
      | revision_number         | 0                                    |
      | segment_id              | None                                 |
      | service_types           |                                      |
      | subnetpool_id           | None                                 |
      | tags                    | []                                   |
      | updated_at              | 2017-08-14T19:08:34Z                 |
      | use_default_subnet_pool | None                                 |
      +-------------------------+--------------------------------------+

#. Create a router:

   .. code-block:: console

      $ openstack router create k8s-ipv6
      +-------------------------+--------------------------------------+
      | Field                   | Value                                |
      +-------------------------+--------------------------------------+
      | admin_state_up          | UP                                   |
      | availability_zone_hints |                                      |
      | availability_zones      |                                      |
      | created_at              | 2017-08-11T13:17:10Z                 |
      | description             |                                      |
      | distributed             | False                                |
      | external_gateway_info   | None                                 |
      | flavor_id               | None                                 |
      | ha                      | False                                |
      | id                      | f802a968-2f83-4006-80cb-5070415f69bf |
      | name                    | k8s-ipv6                             |
      | project_id              | 90baf12877ba49a786419b2cacc2c954     |
      | revision_number         | None                                 |
      | routes                  |                                      |
      | status                  | ACTIVE                               |
      | tags                    | []                                   |
      | updated_at              | 2017-08-11T13:17:10Z                 |
      +-------------------------+--------------------------------------+

#. Add the router to the pod subnet:

   .. code-block:: console

      $ openstack router add subnet k8s-ipv6 pod_subnet

#. Add the router to the service subnet:

   .. code-block:: console

      $ openstack router add subnet k8s-ipv6 service_subnet

#. Modify the Kubernetes API server command line so that it points to the right
   CIDR:

   .. code-block:: console

      --service-cluster-ip-range=fd10:0:0:2::/113

   Note that it is /113 because the other half of the /112 will be used by the
   Octavia LB vrrp ports.
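As a quick sanity check of the /113 arithmetic (an editorial aside, not part
of the patch), Python's stdlib ``ipaddress`` module can split the /112 service
subnet into its two halves:

```python
import ipaddress

# The service subnet from the guide: a /112 whose lower half is handed to
# Kubernetes as the ClusterIP range (--service-cluster-ip-range) and whose
# upper half holds the Octavia loadbalancer vrrp ports.
service_subnet = ipaddress.ip_network("fd10:0:0:2::/112")

# Splitting the /112 in two yields the two /113 halves.
vip_half, vrrp_half = service_subnet.subnets(prefixlen_diff=1)

print(vip_half)   # the Kubernetes service range
print(vrrp_half)  # starts at fd10:0:0:2::8000, matching the allocation pool
```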
@ -203,45 +219,47 @@ Troubleshooting

This means that most likely you forgot to create a security group or rule
for the pods to be accessible by the service CIDR. You can find an example
here:

.. code-block:: console

   $ openstack security group create service_pod_access_v6
   +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
   | Field           | Value                                                                                                                                                 |
   +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
   | created_at      | 2017-08-16T10:01:45Z                                                                                                                                  |
   | description     | service_pod_access_v6                                                                                                                                 |
   | id              | f0b6f0bd-40f7-4ab6-a77b-3cf9f7cc28ac                                                                                                                  |
   | name            | service_pod_access_v6                                                                                                                                 |
   | project_id      | 90baf12877ba49a786419b2cacc2c954                                                                                                                      |
   | revision_number | 2                                                                                                                                                     |
   | rules           | created_at='2017-08-16T10:01:45Z', direction='egress', ethertype='IPv4', id='bd759b4f-c0f5-4cff-a30a-3cd8544d2822', updated_at='2017-08-16T10:01:45Z' |
   |                 | created_at='2017-08-16T10:01:45Z', direction='egress', ethertype='IPv6', id='c89c3f3e-a326-4902-ba26-5315e2d95320', updated_at='2017-08-16T10:01:45Z' |
   | updated_at      | 2017-08-16T10:01:45Z                                                                                                                                  |
   +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+

   $ openstack security group rule create --remote-ip fd10:0:0:2::/112 \
         --ethertype IPv6 f0b6f0bd-40f7-4ab6-a77b-3cf9f7cc28ac
   +-------------------+--------------------------------------+
   | Field             | Value                                |
   +-------------------+--------------------------------------+
   | created_at        | 2017-08-16T10:04:57Z                 |
   | description       |                                      |
   | direction         | ingress                              |
   | ether_type        | IPv6                                 |
   | id                | cface77f-666f-4a4c-8a15-a9c6953acf08 |
   | name              | None                                 |
   | port_range_max    | None                                 |
   | port_range_min    | None                                 |
   | project_id        | 90baf12877ba49a786419b2cacc2c954     |
   | protocol          | tcp                                  |
   | remote_group_id   | None                                 |
   | remote_ip_prefix  | fd10:0:0:2::/112                     |
   | revision_number   | 0                                    |
   | security_group_id | f0b6f0bd-40f7-4ab6-a77b-3cf9f7cc28ac |
   | updated_at        | 2017-08-16T10:04:57Z                 |
   +-------------------+--------------------------------------+

Then remember to add the new security groups to the comma-separated
*pod_security_groups* setting in the section *[neutron_defaults]* of
/etc/kuryr/kuryr.conf.
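A minimal way to verify that such a rule actually admits the whole service
range is to check CIDR containment (an editorial sketch; the helper name is an
assumption, and it only compares prefixes, not the rule's direction or
protocol):

```python
import ipaddress

def rule_covers(remote_ip_prefix, service_cidr):
    """True if a rule's remote_ip_prefix contains the whole service CIDR."""
    return ipaddress.ip_network(service_cidr).subnet_of(
        ipaddress.ip_network(remote_ip_prefix))

# The rule above uses the whole /112, which covers the /113 service range.
print(rule_covers("fd10:0:0:2::/112", "fd10:0:0:2::/113"))  # prints True
```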
@ -5,51 +5,56 @@ Installing kuryr-kubernetes manually

Configure kuryr-k8s-controller
------------------------------

Install ``kuryr-k8s-controller`` in a virtualenv:

.. code-block:: console

   $ mkdir kuryr-k8s-controller
   $ cd kuryr-k8s-controller
   $ virtualenv env
   $ git clone https://opendev.org/openstack/kuryr-kubernetes
   $ . env/bin/activate
   $ pip install -e kuryr-kubernetes

In neutron or in horizon create a subnet for pods, a subnet for services and a
security-group for pods. You may use existing ones if you like. In case you
decide to create new networks and subnets with the cli, you can follow the
services guide, specifically its :ref:`k8s_default_configuration` section.

Create ``/etc/kuryr/kuryr.conf``:

.. code-block:: console

   $ cd kuryr-kubernetes
   $ ./tools/generate_config_file_samples.sh
   $ cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf

Edit ``kuryr.conf``:

.. code-block:: ini

   [DEFAULT]
   use_stderr = true
   bindir = {path_to_env}/libexec/kuryr

   [kubernetes]
   api_root = http://{ip_of_kubernetes_apiserver}:8080

   [neutron]
   auth_url = http://127.0.0.1:35357/v3/
   username = admin
   user_domain_name = Default
   password = ADMIN_PASSWORD
   project_name = service
   project_domain_name = Default
   auth_type = password

   [neutron_defaults]
   ovs_bridge = br-int
   pod_security_groups = {id_of_security_group_for_pods}
   pod_subnet = {id_of_subnet_for_pods}
   project = {id_of_project}
   service_subnet = {id_of_subnet_for_k8s_services}

Note that the service_subnet and the pod_subnet *should be routable* and that
the pods should allow service subnet access.
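Before starting the controller it can help to confirm the draft config carries
every option the section above fills in. This is an editorial sketch (not a
kuryr tool; the ``REQUIRED`` map and function name are assumptions) using the
stdlib ``configparser``:

```python
import configparser

# Options the walkthrough above sets; extend as needed for your deployment.
REQUIRED = {
    "kubernetes": ["api_root"],
    "neutron_defaults": ["pod_security_groups", "pod_subnet", "project",
                         "service_subnet"],
}

def missing_options(conf_text):
    """Return (section, option) pairs a kuryr.conf draft still needs."""
    conf = configparser.ConfigParser()
    conf.read_string(conf_text)
    return [(section, option)
            for section, options in REQUIRED.items()
            for option in options
            if not conf.has_option(section, option)]

draft = """
[kubernetes]
api_root = http://127.0.0.1:8080

[neutron_defaults]
pod_subnet = some-subnet-id
"""
print(missing_options(draft))
```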
@ -71,13 +76,15 @@ Neutron-LBaaSv2):

* There should be a router between the two subnets.
* The pod_security_groups setting should include a security group with a rule
  granting access to all the CIDR of the service subnet, e.g.:

  .. code-block:: console

     $ openstack security group create --project k8s_cluster_project \
           service_pod_access_sg
     $ openstack security group rule create --project k8s_cluster_project \
           --remote-ip cidr_of_service_subnet --ethertype IPv4 --protocol tcp \
           service_pod_access_sg

* The uuid of this security group id should be added to the comma separated
  list of pod security groups. *pod_security_groups* in *[neutron_defaults]*.
@ -85,25 +92,30 @@ Neutron-LBaaSv2):

Alternatively, to support Octavia L2 mode:

* The pod security_groups setting should include a security group with a rule
  granting access to all the CIDR of the pod subnet, e.g.:

  .. code-block:: console

     $ openstack security group create --project k8s_cluster_project \
           octavia_pod_access_sg
     $ openstack security group rule create --project k8s_cluster_project \
           --remote-ip cidr_of_pod_subnet --ethertype IPv4 --protocol tcp \
           octavia_pod_access_sg

* The uuid of this security group id should be added to the comma separated
  list of pod security groups. *pod_security_groups* in *[neutron_defaults]*.

Run kuryr-k8s-controller:

.. code-block:: console

   $ kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d

Alternatively you may run it in screen:

.. code-block:: console

   $ screen -dm kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d

Configure kuryr-cni
@ -112,53 +124,65 @@ Configure kuryr-cni
|
||||||
On every kubernetes minion node (and on master if you intend to run containers
|
On every kubernetes minion node (and on master if you intend to run containers
|
||||||
there) you need to configure kuryr-cni.
|
there) you need to configure kuryr-cni.
|
||||||
|
|
Install ``kuryr-cni`` in a virtualenv:

.. code-block:: console

   $ mkdir kuryr-k8s-cni
   $ cd kuryr-k8s-cni
   $ virtualenv env
   $ . env/bin/activate
   $ git clone https://opendev.org/openstack/kuryr-kubernetes
   $ pip install -e kuryr-kubernetes
Create ``/etc/kuryr/kuryr.conf``:

.. code-block:: console

   $ cd kuryr-kubernetes
   $ ./tools/generate_config_file_samples.sh
   $ cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf

Edit ``kuryr.conf``:

.. code-block:: ini

   [DEFAULT]
   use_stderr = true
   bindir = {path_to_env}/libexec/kuryr

   [kubernetes]
   api_root = http://{ip_of_kubernetes_apiserver}:8080

Link the CNI binary to the CNI directory, where kubelet would find it:

.. code-block:: console

   $ mkdir -p /opt/cni/bin
   $ ln -s $(which kuryr-cni) /opt/cni/bin/
Create the CNI config file for kuryr-cni: ``/etc/cni/net.d/10-kuryr.conf``.
Kubelet would only use the lexicographically first file in that directory, so
make sure that it is kuryr's config file:

.. code-block:: json

   {
     "cniVersion": "0.3.1",
     "name": "kuryr",
     "type": "kuryr-cni",
     "kuryr_conf": "/etc/kuryr/kuryr.conf",
     "debug": true
   }
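A quick way to catch typos in this file is to load it and check the keys that
kuryr-cni relies on. The snippet below is only a sketch: the required-key set
is taken from the example above and is not an exhaustive CNI schema check.

```python
import json

# Keys taken from the 10-kuryr.conf example above (not a full CNI schema).
REQUIRED_KEYS = {"cniVersion", "name", "type", "kuryr_conf"}


def check_cni_conf(raw):
    """Parse a CNI config file and do a minimal sanity check."""
    conf = json.loads(raw)
    missing = REQUIRED_KEYS - conf.keys()
    if missing:
        raise ValueError("missing keys: %s" % sorted(missing))
    if conf["type"] != "kuryr-cni":
        raise ValueError("kubelet would call a different CNI plugin")
    return conf


example = '''{
  "cniVersion": "0.3.1",
  "name": "kuryr",
  "type": "kuryr-cni",
  "kuryr_conf": "/etc/kuryr/kuryr.conf",
  "debug": true
}'''

conf = check_cni_conf(example)
print(conf["name"])  # kuryr
```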
Install the ``os-vif`` and ``oslo.privsep`` libraries globally. These modules
are used to plug interfaces and are run with raised privileges. ``os-vif``
uses ``sudo`` to raise privileges, so both need to be installed globally to
work correctly:

.. code-block:: console

   $ deactivate
   $ sudo pip install 'oslo.privsep>=1.20.0' 'os-vif>=1.5.0'
Configure Kuryr CNI Daemon
--------------------------

crucial for scalability of the whole deployment. In general the timeout to
serve a CNI request from kubelet to Kuryr is 180 seconds. After that time
kubelet will retry the request. Additionally there are two configuration
options:

.. code-block:: ini

   [cni_daemon]
   vif_annotation_timeout=60
   pyroute2_timeout=10
``vif_annotation_timeout`` is the time the Kuryr CNI Daemon will wait for the
Kuryr Controller to create a port in Neutron and add information about it to
the Pod's

value denotes the *maximum* time to wait for the kernel to complete the
operations. If an operation succeeds earlier, the request isn't delayed.
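The relationship between these timeouts and kubelet's 180-second retry window
can be sketched as simple arithmetic. This is only an illustration: the
180-second figure comes from the text above, while the 10-second overhead
margin is an assumption, not a Kuryr setting.

```python
# Kubelet gives Kuryr 180 s to serve a CNI request before retrying (see above).
KUBELET_CNI_TIMEOUT = 180


def within_budget(vif_annotation_timeout, pyroute2_timeout, margin=10):
    """Check that the configured stage timeouts fit the kubelet budget.

    ``margin`` (an assumed value) stands in for daemon-side overhead.
    """
    return vif_annotation_timeout + pyroute2_timeout + margin <= KUBELET_CNI_TIMEOUT


# The defaults from the example above (60 s + 10 s) fit comfortably.
print(within_budget(60, 10))   # True
print(within_budget(170, 20))  # False
```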
Run kuryr-daemon:

.. code-block:: console

   $ kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d

Alternatively you may run it in screen:

.. code-block:: console

   $ screen -dm kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d
Kuryr CNI Daemon health checks
------------------------------

If you want to make use of all of its facilities, you should run the
kuryr-daemon in its own cgroup. It will get its own cgroup if you:

* run it as a systemd service,
* run it containerized,
* create a memory cgroup for it.

In order to make the daemon run in its own cgroup, you can do the following:

.. code-block:: console

   systemd-run --unit=kuryr-daemon --scope --slice=kuryr-cni \
       kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d

After this, with the CNI daemon running inside its own cgroup, we can enable
the CNI daemon memory health check. This health check allows us to limit the
memory consumption of the CNI Daemon. The health checks will fail if CNI
starts taking more memory than is set, and the orchestration layer should
then restart it. The setting is:

.. code-block:: ini

   [cni_health_server]
   max_memory_usage = 4096  # Set the memory limit to 4GiB
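Conceptually, the health check boils down to comparing the daemon's cgroup
memory usage against ``max_memory_usage``. The sketch below is illustrative
only, not Kuryr's actual implementation (which reads the real memory cgroup);
the function name and the usage argument are assumptions.

```python
# Simplified sketch of the CNI daemon memory health check. The real code
# reads the daemon's memory cgroup; here usage is a plain argument.
# Units are MiB, matching the max_memory_usage option above.
def is_healthy(current_usage_mib, max_memory_usage):
    # max_memory_usage = -1 disables the check (the default).
    if max_memory_usage == -1:
        return True
    return current_usage_mib <= max_memory_usage


print(is_healthy(512, 4096))   # True: well under the 4 GiB limit
print(is_healthy(5000, 4096))  # False: orchestration layer should restart it
print(is_healthy(5000, -1))    # True: check disabled
```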
the next steps are needed:

1. Enable the namespace handler to react to namespace events, in this case,
   creation and deletion. To do that you need to add it to the list of the
   enabled handlers at kuryr.conf (details on how to edit this for a
   containerized deployment can be found at :doc:`./devstack/containerized`):

   .. code-block:: ini

      [kubernetes]
      enabled_handlers=vif,lb,lbaasspec,namespace

   Note that if you also want to enable prepopulation of ports pools upon new
   namespace creation, you need to add the kuryrnet handler (more details on
   :doc:`./ports-pool`):

   .. code-block:: ini

      [kubernetes]
      enabled_handlers=vif,lb,lbaasspec,namespace,kuryrnet
2. Enable the namespace subnet driver by modifying the default
   pod_subnet_driver option at kuryr.conf:

   .. code-block:: ini

      [kubernetes]
      pod_subnets_driver = namespace

   In addition, to ensure that pods and services at one given namespace
   cannot reach (or be reached by) the ones at another namespace, except the
   pods at the default namespace that can reach (and be reached by) any pod at
   a different namespace, the next security group driver needs to be set too:

   .. code-block:: ini

      [kubernetes]
      pod_security_groups_driver = namespace
      service_security_groups_driver = namespace
3. Select (and create if needed) the subnet pool from where the new subnets
   will get their CIDR (e.g., the default on devstack deployments is
   shared-default-subnetpool-v4):

   .. code-block:: ini

      [namespace_subnet]
      pod_subnet_pool = SUBNET_POOL_ID
4. Select (and create if needed) the router where the new subnet will be
   connected (e.g., the default on devstack deployments is router1):

   .. code-block:: ini

      [namespace_subnet]
      pod_router = ROUTER_ID

   Note that if a new router is created, it must ensure the connectivity
   requirements between pod, service and public subnets, as in the case for
   the default subnet driver.
5. Select (and create if needed) the security groups to be attached to the
   pods at the default namespace and to the others, enabling the cross access
   between them:

   .. code-block:: ini

      [namespace_sg]
      sg_allow_from_namespaces = SG_ID_1  # Makes SG_ID_1 allow traffic from the sg sg_allow_from_default
      sg_allow_from_default = SG_ID_2  # Makes SG_ID_2 allow traffic from the sg sg_allow_from_namespaces
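Taken together, the steps above amount to a ``kuryr.conf`` along these lines
(the IDs are placeholders, as in the individual steps):

.. code-block:: ini

   [kubernetes]
   enabled_handlers = vif,lb,lbaasspec,namespace
   pod_subnets_driver = namespace
   pod_security_groups_driver = namespace
   service_security_groups_driver = namespace

   [namespace_subnet]
   pod_subnet_pool = SUBNET_POOL_ID
   pod_router = ROUTER_ID

   [namespace_sg]
   sg_allow_from_namespaces = SG_ID_1
   sg_allow_from_default = SG_ID_2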
Note you need to restart the kuryr controller after applying the above
detailed steps. For devstack non-containerized deployments:

.. code-block:: console

   $ sudo systemctl restart devstack@kuryr-kubernetes.service

And for containerized deployments:

.. code-block:: console

   $ kubectl -n kube-system get pod | grep kuryr-controller
   $ kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME

For directly enabling the driver when deploying with devstack, you just need
to add the namespace handler and state the namespace subnet driver with:

.. code-block:: bash

   KURYR_SUBNET_DRIVER=namespace
   KURYR_SG_DRIVER=namespace
   KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace

.. note::
Testing the network per namespace functionality
-----------------------------------------------

1. Create two namespaces:

   .. code-block:: console

      $ kubectl create namespace test1
      $ kubectl create namespace test2

2. Check that the resources have been created:

   .. code-block:: console

      $ kubectl get namespaces
      NAME        STATUS        AGE
      test1       Active        14s
      test2       Active        5s
      ...         ...           ...

      $ kubectl get kuryrnets
      NAME       AGE
      ns-test1   1m
      ns-test2   1m

      $ openstack network list | grep test1
      | 7c7b68c5-d3c4-431c-9f69-fbc777b43ee5 | ns/test1-net | 8640d134-5ea2-437d-9e2a-89236f6c0198 |

      $ openstack subnet list | grep test1
      | 8640d134-5ea2-437d-9e2a-89236f6c0198 | ns/test1-subnet | 7c7b68c5-d3c4-431c-9f69-fbc777b43ee5 | 10.0.1.128/26 |

3. Create a pod in each of the created namespaces:

   .. code-block:: console

      $ kubectl run -n test1 --image kuryr/demo demo
      deployment "demo" created

      $ kubectl run -n test2 --image kuryr/demo demo
      deployment "demo" created

      $ kubectl -n test1 get pod -o wide
      NAME                    READY     STATUS    RESTARTS   AGE       IP           NODE
      demo-5995548848-lmmjc   1/1       Running   0          7s        10.0.1.136   node1

      $ kubectl -n test2 get pod -o wide
      NAME                    READY     STATUS    RESTARTS   AGE       IP           NODE
      demo-5135352253-dfghd   1/1       Running   0          7s        10.0.1.134   node1

4. Create a service:

   .. code-block:: console

      $ kubectl expose -n test1 deploy/demo --port 80 --target-port 8080
      service "demo" exposed

      $ kubectl -n test1 get svc
      NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
      demo      ClusterIP   10.0.0.141   <none>        80/TCP    18s

5. Test service connectivity from both namespaces:

   .. code-block:: console

      $ kubectl exec -n test1 -it demo-5995548848-lmmjc /bin/sh
      test-1-pod$ curl 10.0.0.141
      demo-5995548848-lmmjc: HELLO! I AM ALIVE!!!

      $ kubectl exec -n test2 -it demo-5135352253-dfghd /bin/sh
      test-2-pod$ curl 10.0.0.141
      ## No response

6. And finally, to remove the namespace and all its resources, including
   openstack networks, kuryrnet CRD, svc and pods, you just need to do:

   .. code-block:: console

      $ kubectl delete namespace test1
      $ kubectl delete namespace test2
Enable network policy support functionality
===========================================

Enable the policy, pod_label and namespace handlers to respond to network
policy events. As this is not done by default, you'd have to explicitly add
them to the list of enabled handlers at kuryr.conf (further info on how to do
this can be found at :doc:`./devstack/containerized`):

.. code-block:: ini

   [kubernetes]
   enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy

Note that if you also want to enable prepopulation of ports pools upon new
namespace creation, you need to add the kuryrnet handler (more details on
:doc:`./ports-pool`):

.. code-block:: ini

   [kubernetes]
   enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy,kuryrnet

After that, also enable the security group drivers for policies:

.. code-block:: ini

   [kubernetes]
   service_security_groups_driver = policy
   pod_security_groups_driver = policy
.. warning::

   The correct behavior for pods that have no network policy applied is to
   allow all ingress and egress traffic. If you want that to be enforced,
   please make sure to create an SG allowing all traffic and add it to the
   ``[neutron_defaults]pod_security_groups`` setting in ``kuryr.conf``:

   .. code-block:: ini

      [neutron_defaults]
      pod_security_groups = ALLOW_ALL_SG_ID
Enable the namespace subnet driver by modifying the default pod_subnet_driver
option:

.. code-block:: ini

   [kubernetes]
   pod_subnets_driver = namespace

Select the subnet pool from where the new subnets will get their CIDR:

.. code-block:: ini

   [namespace_subnet]
   pod_subnet_pool = SUBNET_POOL_ID

Lastly, select the router where the new subnet will be connected:

.. code-block:: ini

   [namespace_subnet]
   pod_router = ROUTER_ID
Note you need to restart the kuryr controller after applying the above step.
For devstack non-containerized deployments:

.. code-block:: console

   $ sudo systemctl restart devstack@kuryr-kubernetes.service

Same for containerized deployments:

.. code-block:: console

   $ kubectl -n kube-system get pod | grep kuryr-controller
   $ kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME

For directly enabling the driver when deploying with devstack, you just need
to add the policy, pod_label and namespace handlers and drivers with:

.. code-block:: bash

   KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy
   KURYR_SG_DRIVER=policy
   KURYR_SUBNET_DRIVER=namespace
.. note::
|
.. note::
|
||||||
|
|
||||||
If the loadbalancer maintains the source IP (such as ovn-octavia driver),
|
If the loadbalancer maintains the source IP (such as ovn-octavia driver),
|
||||||
there is no need to enforce sg rules at the load balancer level. To disable
|
there is no need to enforce sg rules at the load balancer level. To disable
|
||||||
the enforcement, you need to set the following variable:
|
the enforcement, you need to set the following variable:
|
||||||
KURYR_ENFORCE_SG_RULES=False
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
KURYR_ENFORCE_SG_RULES=False
|
||||||
|
|
||||||
|
|
||||||
Testing the network policy support functionality
------------------------------------------------

1. Given a yaml file with a network policy, such as:

   .. code-block:: yaml

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: test-network-policy
        namespace: default
      spec:
        podSelector:
          matchLabels:
            project: default
        policyTypes:
        - Ingress
        - Egress
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                project: default
          ports:
          - protocol: TCP
            port: 6379
        egress:
        - to:
          - namespaceSelector:
              matchLabels:
                project: default
          ports:
          - protocol: TCP
            port: 5978
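Conceptually, Kuryr translates each ``ports`` entry of such a policy into a
Neutron security group rule with a matching direction and port range. The
sketch below is illustrative only: the helper function is hypothetical, while
the field names follow the Neutron security group rule API.

```python
# Illustrative sketch: derive Neutron SG rule dicts from the ports section
# of a NetworkPolicy rule, roughly as Kuryr's policy driver does.
def ports_to_sg_rules(ports, direction, ethertype="IPv4"):
    rules = []
    for p in ports:
        rules.append({
            "description": "Kuryr-Kubernetes NetPolicy SG rule",
            "direction": direction,
            "ethertype": ethertype,
            # Kubernetes uses upper-case protocol names, Neutron lower-case.
            "protocol": p["protocol"].lower(),
            # A single policy port maps to a one-port range.
            "port_range_min": p["port"],
            "port_range_max": p["port"],
        })
    return rules


# The ports sections from the example policy above.
ingress = ports_to_sg_rules([{"protocol": "TCP", "port": 6379}], "ingress")
egress = ports_to_sg_rules([{"protocol": "TCP", "port": 5978}], "egress")
print(ingress[0]["port_range_min"])  # 6379
print(egress[0]["direction"])        # egress
```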
2. Apply the network policy:

   .. code-block:: console

      $ kubectl apply -f network_policy.yml

3. Check that the resources have been created:

   .. code-block:: console

      $ kubectl get kuryrnetpolicies
      NAME                     AGE
      np-test-network-policy   2s

      $ kubectl get networkpolicies
      NAME                  POD-SELECTOR   AGE
      test-network-policy   role=db        2s

      $ openstack security group list | grep sg-test-network-policy
      | dabdf308-7eed-43ef-a058-af84d1954acb | sg-test-network-policy

4. Check that the rules are in place for the security group:

   .. code-block:: console

      $ kubectl get kuryrnetpolicy np-test-network-policy -o yaml

      apiVersion: openstack.org/v1
      kind: KuryrNetPolicy
      metadata:
        annotations:
          networkpolicy_name: test-network-policy
          networkpolicy_namespace: default
          networkpolicy_uid: aee1c59f-c634-11e8-b63d-002564fdd760
        clusterName: ""
        creationTimestamp: 2018-10-02T11:17:02Z
        generation: 0
        name: np-test-network-policy
        namespace: default
        resourceVersion: "2117"
        selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetpolicies/np-test-network-policy
        uid: afb99326-c634-11e8-b63d-002564fdd760
      spec:
        egressSgRules:
        - security_group_rule:
            description: Kuryr-Kubernetes NetPolicy SG rule
            direction: egress
            ethertype: IPv4
            id: 6297c198-b385-44f3-8b43-29951f933a8f
            port_range_max: 5978
            port_range_min: 5978
            protocol: tcp
            security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        ingressSgRules:
        - security_group_rule:
            description: Kuryr-Kubernetes NetPolicy SG rule
            direction: ingress
            ethertype: IPv4
            id: f4e11e73-81c6-4c1b-9760-714eedff417b
            port_range_max: 6379
            port_range_min: 6379
            protocol: tcp
            security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        securityGroupId: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        securityGroupName: sg-test-network-policy
        networkpolicy_spec:
          egress:
          - to:
            - namespaceSelector:
                matchLabels:
                  project: default
            ports:
            - port: 5978
              protocol: TCP
          ingress:
          - from:
            - namespaceSelector:
                matchLabels:
                  project: default
            ports:
            - port: 6379
              protocol: TCP
          podSelector:
            matchLabels:
              project: default
          policyTypes:
          - Ingress
          - Egress

      $ openstack security group rule list sg-test-network-policy --protocol tcp -c "IP Protocol" -c "Port Range" -c "Direction" --long
      +-------------+------------+-----------+
      | IP Protocol | Port Range | Direction |
      +-------------+------------+-----------+
      | tcp         | 6379:6379  | ingress   |
      | tcp         | 5978:5978  | egress    |
      +-------------+------------+-----------+

5. Create a pod:

   .. code-block:: console

      $ kubectl create deployment --image kuryr/demo demo
      deployment "demo" created

      $ kubectl get pod -o wide
      NAME                    READY     STATUS    RESTARTS   AGE       IP
      demo-5558c7865d-fdkdv   1/1       Running   0          44s       10.0.0.68

6. Get the pod port and check its security group rules:

   .. code-block:: console

      $ openstack port list --fixed-ip ip-address=10.0.0.68 -f value -c ID
      5d29b83c-714c-4579-8987-d0c0558420b3

      $ openstack port show 5d29b83c-714c-4579-8987-d0c0558420b3 | grep security_group_ids
      | security_group_ids | bb2ac605-56ff-4688-b4f1-1d045ad251d0

      $ openstack security group rule list bb2ac605-56ff-4688-b4f1-1d045ad251d0 --protocol tcp -c "IP Protocol" -c "Port Range"
      +-------------+------------+-----------+
      | IP Protocol | Port Range | Direction |
      +-------------+------------+-----------+
      | tcp         | 6379:6379  | ingress   |
      | tcp         | 5978:5978  | egress    |
      +-------------+------------+-----------+

7. Try to curl the pod on port 8080 (hint: it won't work!):

   .. code-block:: console

      $ curl 10.0.0.68:8080

8. Update the network policy to allow ingress on port 8080:

   .. code-block:: console

      $ kubectl patch networkpolicy test-network-policy -p '{"spec":{"ingress":[{"ports":[{"port": 8080,"protocol": "TCP"}]}]}}'
      networkpolicy "test-network-policy" patched

      $ kubectl get knp np-test-network-policy -o yaml
      apiVersion: openstack.org/v1
      kind: KuryrNetPolicy
      metadata:
        annotations:
          networkpolicy_name: test-network-policy
          networkpolicy_namespace: default
          networkpolicy_uid: aee1c59f-c634-11e8-b63d-002564fdd760
        clusterName: ""
        creationTimestamp: 2018-10-02T11:17:02Z
        generation: 0
        name: np-test-network-policy
        namespace: default
        resourceVersion: "1546"
        selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetpolicies/np-test-network-policy
        uid: afb99326-c634-11e8-b63d-002564fdd760
      spec:
        egressSgRules:
        - security_group_rule:
            description: Kuryr-Kubernetes NetPolicy SG rule
            direction: egress
            ethertype: IPv4
            id: 1969a0b3-55e1-43d7-ba16-005b4ed4cbb7
            port_range_max: 5978
            port_range_min: 5978
            protocol: tcp
            security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        ingressSgRules:
        - security_group_rule:
            description: Kuryr-Kubernetes NetPolicy SG rule
            direction: ingress
            ethertype: IPv4
            id: 6598aa1f-4f94-4fb2-81ce-d3649ba28f33
            port_range_max: 8080
            port_range_min: 8080
            protocol: tcp
            security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        securityGroupId: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        networkpolicy_spec:
          egress:
          - ports:
            - port: 5978
              protocol: TCP
            to:
            - namespaceSelector:
                matchLabels:
                  project: default
          ingress:
          - ports:
            - port: 8080
              protocol: TCP
            from:
            - namespaceSelector:
                matchLabels:
                  project: default
          podSelector:
            matchLabels:
              project: default
          policyTypes:
          - Ingress
          - Egress

      $ openstack security group rule list sg-test-network-policy -c "IP Protocol" -c "Port Range" -c "Direction" --long
      +-------------+------------+-----------+
|
|
||||||
| IP Protocol | Port Range | Direction |
|
|
||||||
+-------------+------------+-----------+
|
|
||||||
| tcp | 8080:8080 | ingress |
|
|
||||||
| tcp | 5978:5978 | egress |
|
|
||||||
+-------------+------------+-----------+
|
|
||||||
|
|
||||||
9. Try to curl the pod ip after patching the network policy::
|
.. code-block:: console
|
||||||
|
|
||||||
$ curl 10.0.0.68:8080
|
$ kubectl apply -f network_policy.yml
|
||||||
demo-5558c7865d-fdkdv: HELLO! I AM ALIVE!!!
|
|
||||||
|
|
||||||
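The content of ``network_policy.yml`` is not shown in this excerpt. A manifest consistent with the translated rules listed in the following steps (ingress TCP 6379 from namespaces labeled ``project: default``, egress TCP 5978 to the same namespaces, pod selector ``role=db``) could look like the sketch below; it is a reconstruction from those outputs, not the exact file used:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  # matches the POD-SELECTOR column shown by `kubectl get networkpolicies`
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: default
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          project: default
    ports:
    - protocol: TCP
      port: 5978
```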
3. Check that the resources have been created:

   .. code-block:: console

      $ kubectl get kuryrnetpolicies
      NAME                     AGE
      np-test-network-policy   2s

      $ kubectl get networkpolicies
      NAME                  POD-SELECTOR   AGE
      test-network-policy   role=db        2s

      $ openstack security group list | grep sg-test-network-policy
      | dabdf308-7eed-43ef-a058-af84d1954acb | sg-test-network-policy
4. Check that the rules are in place for the security group:

   .. code-block:: console

      $ kubectl get kuryrnetpolicy np-test-network-policy -o yaml

      apiVersion: openstack.org/v1
      kind: KuryrNetPolicy
      metadata:
        annotations:
          networkpolicy_name: test-network-policy
          networkpolicy_namespace: default
          networkpolicy_uid: aee1c59f-c634-11e8-b63d-002564fdd760
        clusterName: ""
        creationTimestamp: 2018-10-02T11:17:02Z
        generation: 0
        name: np-test-network-policy
        namespace: default
        resourceVersion: "2117"
        selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetpolicies/np-test-network-policy
        uid: afb99326-c634-11e8-b63d-002564fdd760
      spec:
        egressSgRules:
        - security_group_rule:
            description: Kuryr-Kubernetes NetPolicy SG rule
            direction: egress
            ethertype: IPv4
            id: 6297c198-b385-44f3-8b43-29951f933a8f
            port_range_max: 5978
            port_range_min: 5978
            protocol: tcp
            security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        ingressSgRules:
        - security_group_rule:
            description: Kuryr-Kubernetes NetPolicy SG rule
            direction: ingress
            ethertype: IPv4
            id: f4e11e73-81c6-4c1b-9760-714eedff417b
            port_range_max: 6379
            port_range_min: 6379
            protocol: tcp
            security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        securityGroupId: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        securityGroupName: sg-test-network-policy
        networkpolicy_spec:
          egress:
          - to:
            - namespaceSelector:
                matchLabels:
                  project: default
            ports:
            - port: 5978
              protocol: TCP
          ingress:
          - from:
            - namespaceSelector:
                matchLabels:
                  project: default
            ports:
            - port: 6379
              protocol: TCP
          podSelector:
            matchLabels:
              project: default
          policyTypes:
          - Ingress
          - Egress

      $ openstack security group rule list sg-test-network-policy --protocol tcp -c "IP Protocol" -c "Port Range" -c "Direction" --long
      +-------------+------------+-----------+
      | IP Protocol | Port Range | Direction |
      +-------------+------------+-----------+
      | tcp         | 6379:6379  | ingress   |
      | tcp         | 5978:5978  | egress    |
      +-------------+------------+-----------+
5. Create a pod:

   .. code-block:: console

      $ kubectl create deployment --image kuryr/demo demo
      deployment "demo" created

      $ kubectl get pod -o wide
      NAME                    READY     STATUS    RESTARTS   AGE       IP
      demo-5558c7865d-fdkdv   1/1       Running   0          44s       10.0.0.68
6. Get the pod port and check its security group rules:

   .. code-block:: console

      $ openstack port list --fixed-ip ip-address=10.0.0.68 -f value -c ID
      5d29b83c-714c-4579-8987-d0c0558420b3

      $ openstack port show 5d29b83c-714c-4579-8987-d0c0558420b3 | grep security_group_ids
      | security_group_ids | bb2ac605-56ff-4688-b4f1-1d045ad251d0

      $ openstack security group rule list bb2ac605-56ff-4688-b4f1-1d045ad251d0 --protocol tcp -c "IP Protocol" -c "Port Range"
      +-------------+------------+-----------+
      | IP Protocol | Port Range | Direction |
      +-------------+------------+-----------+
      | tcp         | 6379:6379  | ingress   |
      | tcp         | 5978:5978  | egress    |
      +-------------+------------+-----------+
7. Try to curl the pod on port 8080 (hint: it won't work!):

   .. code-block:: console

      $ curl 10.0.0.68:8080
8. Update network policy to allow ingress 8080 port:

   .. code-block:: console

      $ kubectl patch networkpolicy test-network-policy -p '{"spec":{"ingress":[{"ports":[{"port": 8080,"protocol": "TCP"}]}]}}'
      networkpolicy "test-network-policy" patched

      $ kubectl get knp np-test-network-policy -o yaml

      apiVersion: openstack.org/v1
      kind: KuryrNetPolicy
      metadata:
        annotations:
          networkpolicy_name: test-network-policy
          networkpolicy_namespace: default
          networkpolicy_uid: aee1c59f-c634-11e8-b63d-002564fdd760
        clusterName: ""
        creationTimestamp: 2018-10-02T11:17:02Z
        generation: 0
        name: np-test-network-policy
        namespace: default
        resourceVersion: "1546"
        selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetpolicies/np-test-network-policy
        uid: afb99326-c634-11e8-b63d-002564fdd760
      spec:
        egressSgRules:
        - security_group_rule:
            description: Kuryr-Kubernetes NetPolicy SG rule
            direction: egress
            ethertype: IPv4
            id: 1969a0b3-55e1-43d7-ba16-005b4ed4cbb7
            port_range_max: 5978
            port_range_min: 5978
            protocol: tcp
            security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        ingressSgRules:
        - security_group_rule:
            description: Kuryr-Kubernetes NetPolicy SG rule
            direction: ingress
            ethertype: IPv4
            id: 6598aa1f-4f94-4fb2-81ce-d3649ba28f33
            port_range_max: 8080
            port_range_min: 8080
            protocol: tcp
            security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        securityGroupId: cdee7815-3b49-4a3e-abc8-31e384ab75c5
        networkpolicy_spec:
          egress:
          - ports:
            - port: 5978
              protocol: TCP
            to:
            - namespaceSelector:
                matchLabels:
                  project: default
          ingress:
          - ports:
            - port: 8080
              protocol: TCP
            from:
            - namespaceSelector:
                matchLabels:
                  project: default
          podSelector:
            matchLabels:
              project: default
          policyTypes:
          - Ingress
          - Egress

      $ openstack security group rule list sg-test-network-policy -c "IP Protocol" -c "Port Range" -c "Direction" --long
      +-------------+------------+-----------+
      | IP Protocol | Port Range | Direction |
      +-------------+------------+-----------+
      | tcp         | 8080:8080  | ingress   |
      | tcp         | 5978:5978  | egress    |
      +-------------+------------+-----------+
9. Try to curl the pod ip after patching the network policy:

   .. code-block:: console

      $ curl 10.0.0.68:8080
      demo-5558c7865d-fdkdv: HELLO! I AM ALIVE!!!
Note the curl only works from pods (neutron ports) on a namespace that has
the label `project: default` as stated on the policy namespaceSelector.
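A namespace qualifies for that selector simply by carrying the label; for instance, a manifest like the following sketch would do (the namespace name here is a made-up example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: client-ns          # hypothetical namespace name
  labels:
    project: default       # must match the policy's namespaceSelector
```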
10. We can also create a single pod, without a label and check that there is
    no connectivity to it, as it does not match the network policy
    podSelector:

    .. code-block:: console

       $ cat sample-pod.yml
       apiVersion: v1

       $ curl demo-pod-IP:8080
       NO REPLY
11. If we add to the pod a label that matches a network policy podSelector, in
    this case 'project: default', the network policy will get applied on the
    pod, and the traffic will be allowed:

    .. code-block:: console

       $ kubectl label pod demo-pod project=default
       $ curl demo-pod-IP:8080
       demo-pod-XXX: HELLO! I AM ALIVE!!!
12. Confirm the teardown of the resources once the network policy is removed:

    .. code-block:: console

       $ kubectl delete -f network_policy.yml
       $ kubectl get kuryrnetpolicies
       $ kubectl get networkpolicies
       $ openstack security group list | grep sg-test-network-policy

Configure Kuryr to support L7 Router and OCP-Route resources
------------------------------------------------------------

1. Configure the L7 Router by adding the LB UUID at kuryr.conf:

   .. code-block:: ini

      [ingress]
      l7_router_uuid = 99f580e6-d894-442a-bc5f-4d14b41e10d2

2. Enable the ocp-route and k8s-endpoint handlers. For that you need to add
   these handlers to the enabled handlers list at kuryr.conf (details on how
   to edit this for containerized deployment can be found at
   :doc:`./devstack/containerized`):

   .. code-block:: ini

      [kubernetes]
      enabled_handlers=vif,lb,lbaasspec,ocproute,ingresslb

Note: you need to restart the kuryr controller after applying the above
detailed steps. For devstack non-containerized deployments:

.. code-block:: console

   $ sudo systemctl restart devstack@kuryr-kubernetes.service

And for containerized deployments:

.. code-block:: console

   $ kubectl -n kube-system get pod | grep kuryr-controller
   $ kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME

For directly enabling both L7 router and OCP-Route handlers when deploying
with devstack, you just need to add the following to the local.conf file:

.. code-block:: bash

   KURYR_ENABLE_INGRESS=True
   KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,ocproute,ingresslb

Testing OCP-Route functionality
-------------------------------

1. Create a service:

   .. code-block:: console

      $ oc run --image=celebdor/kuryr-demo kuryr-demo
      $ oc scale dc/kuryr-demo --replicas=2
      $ oc expose dc/kuryr-demo --port 80 --target-port 8080

2. Create a Route object pointing to the above service (kuryr-demo):

   .. code-block:: console

      $ cat >> route.yaml << EOF
      > apiVersion: v1
      > kind: Route
      > metadata:
      >   name: testroute
      > spec:
      >   host: www.firstroute.com
      >   to:
      >     kind: Service
      >     name: kuryr-demo
      > EOF
      $ oc create -f route.yaml

3. Curl L7 router's FIP using specified hostname:

   .. code-block:: console

      $ curl --header 'Host: www.firstroute.com' 172.24.4.3
      kuryr-demo-1-gzgj2: HELLO, I AM ALIVE!!!
      $
How to enable ports pool support
================================

To enable the utilization of the ports pool feature, the selected pool driver
needs to be included at the kuryr.conf at the kubernetes section. So, for the
baremetal deployment:

.. code-block:: ini

   [kubernetes]
   vif_pool_driver = neutron

And for the nested (VLAN+Trunk) case:

.. code-block:: ini

   [kubernetes]
   vif_pool_driver = nested

On the other hand, there are a few extra (optional) configuration options
regarding the maximum and minimum desired sizes of the pools, where the
maximum size can be disabled by setting it to 0:

.. code-block:: ini

   [vif_pool]
   ports_pool_max = 10
   ports_pool_min = 5

In addition the size of the bulk operation, e.g., the number of ports created
in a bulk request upon pool population, can be modified:

.. code-block:: ini

   [vif_pool]
   ports_pool_batch = 5

Note this value should be smaller than the ports_pool_max (if the
ports_pool_max is enabled).
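Putting the sizing options together, a kuryr.conf fragment respecting that constraint could look like the following; the values are only illustrative, not recommendations:

```ini
[vif_pool]
ports_pool_min = 5
# 0 would disable the maximum entirely
ports_pool_max = 10
# must stay below ports_pool_max while the maximum is enabled
ports_pool_batch = 5
```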
modified, and it should be adjusted based on your specific deployment, e.g., if
the port creation actions are slow, it is desirable to raise it in order not to
have overlapping actions. As a simple rule of thumb, the frequency should be
at least as large as the time needed to perform the bulk requests (ports
creation, including subports attachment for the nested case):

.. code-block:: ini

   [vif_pool]
   ports_pool_update_frequency = 20

After these configurations, the final step is to restart the
kuryr-k8s-controller. At devstack deployment:

.. code-block:: console

   $ sudo systemctl restart devstack@kuryr-kubernetes.service

And for RDO packaging based installations:

.. code-block:: console

   $ sudo systemctl restart kuryr-controller

Note that for the containerized deployment, you need to edit the associated
ConfigMap to change the kuryr.conf files with:

.. code-block:: console

   $ kubectl -n kube-system edit cm kuryr-config

Then modify the kuryr.conf (not the kuryr-cni.conf) to modify the controller
configuration regarding the pools. After that, to have the new configuration
applied you need to restart the kuryr-controller just by killing the existing
pod:

.. code-block:: console

   $ kubectl -n kube-system get pod | grep kuryr-controller
   $ kubectl -n kube-system delete pod KURYR_CONTROLLER_POD_NAME

Ports loading into pools
------------------------

To enable the option of having different pools depending on the node's pod vif
types, you need to state the type of pool that you want for each pod vif
driver, e.g.:

.. code-block:: ini

   [vif_pool]
   vif_pool_mapping=nested-vlan:nested,neutron-vif:neutron

This will use a pool driver nested to handle the pods whose vif driver is
nested-vlan, and a pool driver neutron to handle the pods whose vif driver is

When the namespace subnet driver is used (either for namespace isolation or
for network policies) a new subnet is created for each namespace. The ports
associated to each namespace will therefore be on different pools. In order
to prepopulate the pools associated to a newly created namespace (i.e.,
subnet), the next handler needs to be enabled:

.. code-block:: ini

   [kubernetes]
   enabled_handlers=vif,lb,lbaasspec,namespace,*kuryrnet*

This can be enabled at devstack deployment time by adding the following to
the local.conf:

.. code-block:: bash

   KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace,*kuryrnet*
a SR-IOV port on a baremetal installation the 3 following steps should be done:

1. Create OpenStack network and subnet for SR-IOV.
   Following steps should be done with admin rights.

   .. code-block:: console

      neutron net-create vlan-sriov-net --shared --provider:physical_network physnet10_4 --provider:network_type vlan --provider:segmentation_id 3501
      neutron subnet-create vlan-sriov-net 203.0.114.0/24 --name vlan-sriov-subnet --gateway 203.0.114.1

Subnet id <UUID of vlan-sriov-net> will be used later in NetworkAttachmentDefinition.

.. code-block:: ini

   [sriov]
   physical_device_mappings = physnet1:ens4f0
   default_physnet_subnets = physnet1:<UUID of vlan-sriov-net>

This mapping is required to be able to find appropriate PF/VF functions at
binding phase. physnet1 is just an identifier for subnet <UUID of
as described in [1]_.

.. code-block:: yaml

   apiVersion: "k8s.cni.cncf.io/v1"
   kind: NetworkAttachmentDefinition
   metadata:
     name: "sriov-net1"
     annotations:
       openstack.org/kuryr-config: '{
         "subnetId": "UUID of vlan-sriov-net",
         "driverType": "sriov"
         }'

Then add k8s.v1.cni.cncf.io/networks and request/limits for SR-IOV
into the pod's yaml.

.. code-block:: yaml

   kind: Pod
   metadata:
     name: my-pod
     namespace: my-namespace
     annotations:
       k8s.v1.cni.cncf.io/networks: sriov-net1,sriov-net2
   spec:
     containers:
     - name: containerName
       image: containerImage
       imagePullPolicy: IfNotPresent
       command: ["tail", "-f", "/dev/null"]
       resources:
         requests:
           intel.com/sriov: '2'
         limits:
           intel.com/sriov: '2'

In the above example two SR-IOV devices will be attached to the pod. First one
is described in sriov-net1 NetworkAttachmentDefinition, second one in
sriov-net2.
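The sriov-net2 definition is not shown in this excerpt; it would be a second NetworkAttachmentDefinition of the same shape as sriov-net1, for example (the subnet UUID below is a placeholder, not a value from this guide):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: "sriov-net2"
  annotations:
    openstack.org/kuryr-config: '{
      "subnetId": "UUID of a second SR-IOV subnet",
      "driverType": "sriov"
      }'
```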

.. code-block:: json

   {
       "resourceList":
       [
           {
               "resourceName": "numa0",
               "rootDevices": ["0000:02:00.0"],
               "sriovMode": true,
               "deviceType": "netdevice"
           }
       ]
   }

We defined the numa0 resource name, and we also assume we started sriovdp with
the -resource-prefix samsung.com value. The PCI address of ens4f0 interface is
"0000:02:00.0". If we assigned 8 VF to ens4f0 and launch SR-IOV network device
plugin, we can see the following state of kubernetes

.. code-block:: console

   $ kubectl get node node1 -o json | jq '.status.allocatable'
   {
     "cpu": "4",
     "ephemeral-storage": "269986638772",
     "hugepages-1Gi": "8Gi",
     "hugepages-2Mi": "0Gi",
     "samsung.com/numa0": "8",
     "memory": "7880620Ki",
     "pods": "1k"
   }

We have to add the following mapping to the sriov section:

.. code-block:: ini

   [sriov]
   device_plugin_resource_prefix = samsung.com
   physnet_resource_mappings = physnet1:numa0
5. Enable Kubelet Pod Resources feature
|
5. Enable Kubelet Pod Resources feature
|
||||||
|
|
||||||
|
@ -146,9 +144,11 @@ for particular container.
|
||||||
|
|
||||||
To enable Pod Resources service it is needed to add
|
To enable Pod Resources service it is needed to add
|
||||||
``--feature-gates KubeletPodResources=true`` into ``/etc/sysconfig/kubelet``.
|
``--feature-gates KubeletPodResources=true`` into ``/etc/sysconfig/kubelet``.
|
||||||
This file could look like::
|
This file could look like:
|
||||||
|
|
||||||
KUBELET_EXTRA_ARGS="--feature-gates KubeletPodResources=true"
|
.. code-block:: bash
|
||||||
|
|
||||||
|
KUBELET_EXTRA_ARGS="--feature-gates KubeletPodResources=true"
|
||||||
|
|
||||||
Note that it is important to set right value for parameter ``kubelet_root_dir``
|
Note that it is important to set right value for parameter ``kubelet_root_dir``
|
||||||
in ``kuryr.conf``. By default it is ``/var/lib/kubelet``.
|
in ``kuryr.conf``. By default it is ``/var/lib/kubelet``.
|
||||||
|
@ -159,8 +159,8 @@ To use this feature add ``enable_pod_resource_service`` into kuryr.conf.
|
||||||

.. code-block:: ini

   [sriov]
   enable_pod_resource_service = True

6. Use privileged user

Testing Network Connectivity
============================
|
|
||||||
Once the environment is ready, we can test that network connectivity works
|
Once the environment is ready, we can test that network connectivity works
|
||||||
among pods. First we check the status of the kubernetes cluster::
|
among pods. First we check the status of the kubernetes cluster:
|
||||||
|
|
||||||
$ kubectl get nodes
|
.. code-block:: console
|
||||||
NAME STATUS AGE VERSION
|
|
||||||
masterodl-vm Ready 1h v1.6.2
|
|
||||||
|
|
||||||
$ kubectl get pods
|
$ kubectl get nodes
|
||||||
No resources found.
|
NAME STATUS AGE VERSION
|
||||||
|
masterodl-vm Ready 1h v1.6.2
|
||||||
|
|
||||||
$ kubectl get svc
|
$ kubectl get pods
|
||||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
No resources found.
|
||||||
kubernetes 10.0.0.129 <none> 443/TCP 1h
|
|
||||||
|
$ kubectl get svc
|
||||||
|
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||||
|
kubernetes 10.0.0.129 <none> 443/TCP 1h
|
||||||
|
|
||||||

As we can see, this is a one node cluster with currently no pods running, and
with the kubernetes API service listening on port 443 at 10.0.0.129 (which
matches the IP assigned to the load balancer created for it).

To test proper configuration and connectivity we first create a sample
deployment with:

.. code-block:: console

   $ kubectl run demo --image=celebdor/kuryr-demo
   deployment "demo" created

After a few seconds, the container is up and running, and a neutron port has
been created with the same IP that got assigned to the pod:

.. code-block:: console

   $ kubectl get pods
   NAME                    READY     STATUS    RESTARTS   AGE
   demo-2293951457-j29nb   1/1       Running   0          1m

   $ kubectl describe pod demo-2293951457-j29nb | grep IP:
   IP:           10.0.0.69

   $ openstack port list | grep demo
   | 73100cdb-84d6-4f33-93b2-e212966c65ac | demo-2293951457-j29nb | fa:16:3e:99:ac:ce | ip_address='10.0.0.69', subnet_id='3c3e18f9-d1d0-4674-b3be-9fc8561980d3' | ACTIVE |

We can then scale the deployment to 2 pods, and check connectivity between
them:

.. code-block:: console

   $ kubectl scale deploy/demo --replicas=2
   deployment "demo" scaled

   $ kubectl get pods
   NAME                    READY     STATUS    RESTARTS   AGE
   demo-2293951457-gdrv2   1/1       Running   0          9s
   demo-2293951457-j29nb   1/1       Running   0          14m

   $ openstack port list | grep demo
   | 73100cdb-84d6-4f33-93b2-e212966c65ac | demo-2293951457-j29nb | fa:16:3e:99:ac:ce | ip_address='10.0.0.69', subnet_id='3c3e18f9-d1d0-4674-b3be-9fc8561980d3' | ACTIVE |
   | 95e89edd-f513-4ec8-80d0-36839725e62d | demo-2293951457-gdrv2 | fa:16:3e:e6:b4:b9 | ip_address='10.0.0.75', subnet_id='3c3e18f9-d1d0-4674-b3be-9fc8561980d3' | ACTIVE |

   $ kubectl exec -it demo-2293951457-j29nb -- /bin/sh

   sh-4.2$ curl 10.0.0.69:8080
   demo-2293951457-j29nb: HELLO, I AM ALIVE!!!

   sh-4.2$ curl 10.0.0.75:8080
   demo-2293951457-gdrv2: HELLO, I AM ALIVE!!!

   sh-4.2$ ping 10.0.0.75
   PING 10.0.0.75 (10.0.0.75) 56(84) bytes of data.
   64 bytes from 10.0.0.75: icmp_seq=1 ttl=64 time=1.14 ms
   64 bytes from 10.0.0.75: icmp_seq=2 ttl=64 time=0.250 ms

Next, we expose the service so that a neutron load balancer is created and
the traffic is load balanced among the available pods:

.. code-block:: console

   $ kubectl get svc
   NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
   kubernetes   10.0.0.129   <none>        443/TCP   1h

   $ kubectl expose deploy/demo --port=80 --target-port=8080
   service "demo" exposed

   $ kubectl get svc
   NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
   demo         10.0.0.161   <none>        80/TCP    6s
   kubernetes   10.0.0.129   <none>        443/TCP   1h

   $ openstack loadbalancer list
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
   | id                                   | name               | tenant_id                        | vip_address | provisioning_status | provider |
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
   | 7d0cf5b5-b164-4b32-87d3-ae6c82513927 | default/kubernetes | 47c28e562795468ea52e92226e3bc7b1 | 10.0.0.129  | ACTIVE              | haproxy  |
   | c34c8d0c-a683-497f-9530-a49021e4b502 | default/demo       | 49e2683370f245e38ac2d6a8c16697b3 | 10.0.0.161  | ACTIVE              | haproxy  |
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

   $ openstack loadbalancer listener list
   +--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
   | id                                   | default_pool_id                      | name                   | tenant_id                        | protocol | protocol_port | admin_state_up |
   +--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
   | fc485508-c37a-48bd-9be3-898bbb7700fa | b12f00b9-44c0-430e-b1a1-e92b57247ad2 | default/demo:TCP:80    | 49e2683370f245e38ac2d6a8c16697b3 | TCP      | 80            | True           |
   | abfbafd8-7609-4b7d-9def-4edddf2b887b | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | HTTPS    | 443           | True           |
   +--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+

   $ openstack loadbalancer pool list
   +--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
   | id                                   | name                   | tenant_id                        | lb_algorithm | protocol | admin_state_up |
   +--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
   | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | ROUND_ROBIN  | HTTPS    | True           |
   | b12f00b9-44c0-430e-b1a1-e92b57247ad2 | default/demo:TCP:80    | 49e2683370f245e38ac2d6a8c16697b3 | ROUND_ROBIN  | TCP      | True           |
   +--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+

   $ openstack loadbalancer member list default/demo:TCP:80
   +--------------------------------------+------------------------------------+----------------------------------+-----------+---------------+--------+--------------------------------------+----------------+
   | id                                   | name                               | tenant_id                        | address   | protocol_port | weight | subnet_id                            | admin_state_up |
   +--------------------------------------+------------------------------------+----------------------------------+-----------+---------------+--------+--------------------------------------+----------------+
   | c0057ce6-64da-4613-b284-faf5477533ab | default/demo-2293951457-j29nb:8080 | 49e2683370f245e38ac2d6a8c16697b3 | 10.0.0.69 | 8080          | 1      | 55405e9d-4e25-4a55-bac2-e25ee88584e1 | True           |
   | 7a0c0ef9-35ce-4134-b92a-2e73f0f8fe98 | default/demo-2293951457-gdrv2:8080 | 49e2683370f245e38ac2d6a8c16697b3 | 10.0.0.75 | 8080          | 1      | 55405e9d-4e25-4a55-bac2-e25ee88584e1 | True           |
   +--------------------------------------+------------------------------------+----------------------------------+-----------+---------------+--------+--------------------------------------+----------------+

We can see that both pods are included as members and that the demo
cluster-ip matches the loadbalancer vip_address. In order to check load
balancing among them, we are going to curl the cluster-ip from one of the
pods and see that each of the pods replies in turn:

.. code-block:: console

   $ kubectl exec -it demo-2293951457-j29nb -- /bin/sh

   sh-4.2$ curl 10.0.0.161
   demo-2293951457-j29nb: HELLO, I AM ALIVE!!!

   sh-4.2$ curl 10.0.0.161
   demo-2293951457-gdrv2: HELLO, I AM ALIVE!!!
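
The alternating replies come from the ROUND_ROBIN algorithm configured on the
pool. A self-contained sketch of that selection logic, using the two demo pod
IPs from the session above as the member set:

```shell
# Sketch of ROUND_ROBIN selection: members are picked in order, wrapping
# around once the end of the member list is reached.
members="10.0.0.69 10.0.0.75"
n=2
i=0
sequence=""
for request in 1 2 3 4; do
    pick=$(( i % n + 1 ))                           # 1-based member index
    member=$(echo "$members" | cut -d' ' -f"$pick")
    echo "request $request handled by $member"
    sequence="$sequence$member "
    i=$(( i + 1 ))
done
```

Four requests are served 69, 75, 69, 75, which is exactly the alternation the
curl loop shows.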

Testing Nested Network Connectivity
===================================

Similarly to the baremetal testing, we can create a demo deployment, scale it
to any number of pods and expose the service to check if the deployment was
successful:

.. code-block:: console

   $ kubectl run demo --image=celebdor/kuryr-demo
   $ kubectl scale deploy/demo --replicas=2
   $ kubectl expose deploy/demo --port=80 --target-port=8080

After a few seconds you can check that the pods are up and running and the
neutron subports have been created (and in ACTIVE status) at the undercloud:

.. code-block:: console

   (OVERCLOUD) $ kubectl get pods
   NAME                    READY     STATUS    RESTARTS   AGE
   demo-1575152709-4k19q   1/1       Running   0          2m
   demo-1575152709-vmjwx   1/1       Running   0          12s

   (UNDERCLOUD) $ openstack port list | grep demo
   | 1019bc07-fcdd-4c78-adbd-72a04dffd6ba | demo-1575152709-4k19q | fa:16:3e:b5:de:1f | ip_address='10.0.0.65', subnet_id='b98d40d1-57ac-4909-8db5-0bf0226719d8' | ACTIVE |
   | 33c4d79f-4fde-4817-b672-a5ec026fa833 | demo-1575152709-vmjwx | fa:16:3e:32:58:38 | ip_address='10.0.0.70', subnet_id='b98d40d1-57ac-4909-8db5-0bf0226719d8' | ACTIVE |

Then, we can check that the service has been created, as well as the
respective loadbalancer at the undercloud:

.. code-block:: console

   (OVERCLOUD) $ kubectl get svc
   NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
   svc/demo         10.0.0.171   <none>        80/TCP    1m
   svc/kubernetes   10.0.0.129   <none>        443/TCP   45m

   (UNDERCLOUD) $ openstack loadbalancer list
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
   | id                                   | name               | tenant_id                        | vip_address | provisioning_status | provider |
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
   | a3b85089-1fbd-47e1-a697-bbdfd0fa19e3 | default/kubernetes | 672bc45aedfe4ec7b0e90959b1029e30 | 10.0.0.129  | ACTIVE              | haproxy  |
   | e55b3f75-15dc-4bc5-b4f4-bce65fc15aa4 | default/demo       | e4757688696641218fba0bac86ff7117 | 10.0.0.171  | ACTIVE              | haproxy  |
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

Finally, you can log into one of the containers and curl the service IP to
check that each time a different pod answers the request:

.. code-block:: console

   $ kubectl exec -it demo-1575152709-4k19q -- /bin/sh
   sh-4.2$ curl 10.0.0.171
   demo-1575152709-4k19q: HELLO, I AM ALIVE!!!
   sh-4.2$ curl 10.0.0.171
   demo-1575152709-vmjwx: HELLO, I AM ALIVE!!!

.. code-block:: yaml

   apiVersion: "k8s.cni.cncf.io/v1"
   kind: NetworkAttachmentDefinition
   metadata:
     name: "net-sriov"
     annotations:
       openstack.org/kuryr-config: '{
       "subnetId": "88d0b025-2710-4f02-a348-2829853b45da",
       "driverType": "sriov"
       }'

Here ``88d0b025-2710-4f02-a348-2829853b45da`` is the ID of a precreated
subnet that is expected to be used for SR-IOV ports:

.. code-block:: console

   $ neutron subnet-show 88d0b025-2710-4f02-a348-2829853b45da
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | allocation_pools  | {"start": "192.168.2.2", "end": "192.168.2.254"} |
   | cidr              | 192.168.2.0/24                                   |
   | created_at        | 2018-11-21T10:57:34Z                             |
   | description       |                                                  |
   | dns_nameservers   |                                                  |
   | enable_dhcp       | True                                             |
   | gateway_ip        | 192.168.2.1                                      |
   | host_routes       |                                                  |
   | id                | 88d0b025-2710-4f02-a348-2829853b45da             |
   | ip_version        | 4                                                |
   | ipv6_address_mode |                                                  |
   | ipv6_ra_mode      |                                                  |
   | name              | sriov_subnet                                     |
   | network_id        | 2f8b9103-e9ec-47fa-9617-0fb9deacfc00             |
   | project_id        | 92a4d7734b17486ba24e635bc7fad595                 |
   | revision_number   | 2                                                |
   | service_types     |                                                  |
   | subnetpool_id     |                                                  |
   | tags              |                                                  |
   | tenant_id         | 92a4d7734b17486ba24e635bc7fad595                 |
   | updated_at        | 2018-11-21T10:57:34Z                             |
   +-------------------+--------------------------------------------------+
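
If you need the subnet ID programmatically (for instance to feed it into the
``neutron subnet-show`` call above), it can be pulled out of the
``kuryr-config`` annotation value. A minimal sketch using ``sed`` on a
flattened copy of the annotation from the example definition (on a live
cluster you would first fetch the object with ``kubectl``, which is an
assumption about your tooling):

```shell
# Sketch: extract the subnet ID from the kuryr-config annotation string.
annotation='{ "subnetId": "88d0b025-2710-4f02-a348-2829853b45da", "driverType": "sriov" }'
subnet_id=$(echo "$annotation" | sed -n 's/.*"subnetId": "\([^"]*\)".*/\1/p')
echo "$subnet_id"
```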

1. Create deployment definition <DEFINITION_FILE_NAME> with one SR-IOV
   interface (apart from the default one). The deployment definition file
   might look like:

   .. code-block:: yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: nginx-sriov
      spec:
        replicas: 1
        template:
          metadata:
            name: nginx-sriov
            labels:
              app: nginx-sriov
            annotations:
              k8s.v1.cni.cncf.io/networks: net-sriov
          spec:
            containers:
            - name: nginx-sriov
              image: nginx
              resources:
                requests:
                  intel.com/sriov: '1'
                  cpu: "1"
                  memory: "512Mi"
                limits:
                  intel.com/sriov: '1'
                  cpu: "1"
                  memory: "512Mi"
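
Before creating the deployment it can be useful to confirm that the node
actually advertises the resource the pod requests. A sketch with an embedded,
hypothetical allocatable JSON (on a real node you would pipe
``kubectl get node <node> -o json`` through ``jq`` as shown earlier):

```shell
# Sketch: check that the node advertises the requested SR-IOV resource.
# The allocatable JSON below is a made-up, flattened example.
allocatable='{"cpu": "4", "intel.com/sriov": "8", "memory": "7880620Ki"}'
count=$(echo "$allocatable" | sed -n 's|.*"intel.com/sriov": "\([0-9]*\)".*|\1|p')
echo "node advertises $count intel.com/sriov VFs"
```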

Here ``net-sriov`` is the name of the ``NetworkAttachmentDefinition``
created before.

2. Create the deployment with the following command:

   .. code-block:: console

      $ kubectl create -f <DEFINITION_FILE_NAME>

3. Wait for the pod to get to Running phase.

   .. code-block:: console

      $ kubectl get pods
      NAME                           READY     STATUS    RESTARTS   AGE
      nginx-sriov-558db554d7-rvpxs   1/1       Running   0          1m

4. If your image contains ``iputils`` (for example, busybox image), you can
   attach to the pod and check that the correct interface has been attached
   to the Pod.

   .. code-block:: console

      $ kubectl get pod
      $ kubectl exec -it nginx-sriov-558db554d7-rvpxs -- /bin/bash
      $ ip a

   You should see the default and eth1 interfaces. eth1 is the SR-IOV VF
   interface.

   .. code-block:: console

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      3: eth0@if43: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
          link/ether fa:16:3e:1a:c0:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 192.168.0.9/24 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::f816:3eff:fe1a:c043/64 scope link
             valid_lft forever preferred_lft forever
      13: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether fa:16:3e:b3:2e:70 brd ff:ff:ff:ff:ff:ff
          inet 192.168.2.6/24 scope global eth1
             valid_lft forever preferred_lft forever
          inet6 fe80::f816:3eff:fea8:55af/64 scope link
             valid_lft forever preferred_lft forever

4.1. Alternatively you can login to the k8s worker and do the same from the
host system. Use the following command to find out the ID of the running
SR-IOV container:

.. code-block:: console

   $ docker ps

Suppose that the ID of the created container is ``eb4e10f38763``. Use the
following command to get the PID of that container:

.. code-block:: console

   $ docker inspect --format {{.State.Pid}} eb4e10f38763

Suppose that the output of the previous command is below:

.. code-block:: console

   32609

Use the following command to get the interfaces of the container:

.. code-block:: console

   $ nsenter -n -t 32609 ip a
You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
|
You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
|
||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: console
|
||||||
|
|
||||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
|
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
|
||||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||||
inet 127.0.0.1/8 scope host lo
|
inet 127.0.0.1/8 scope host lo
|
||||||
valid_lft forever preferred_lft forever
|
valid_lft forever preferred_lft forever
|
||||||
inet6 ::1/128 scope host
|
inet6 ::1/128 scope host
|
||||||
valid_lft forever preferred_lft forever
|
valid_lft forever preferred_lft forever
|
||||||
3: eth0@if43: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
|
3: eth0@if43: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
|
||||||
link/ether fa:16:3e:1a:c0:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
|
link/ether fa:16:3e:1a:c0:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
|
||||||
inet 192.168.0.9/24 scope global eth0
|
inet 192.168.0.9/24 scope global eth0
|
||||||
valid_lft forever preferred_lft forever
|
valid_lft forever preferred_lft forever
|
||||||
inet6 fe80::f816:3eff:fe1a:c043/64 scope link
|
inet6 fe80::f816:3eff:fe1a:c043/64 scope link
|
||||||
valid_lft forever preferred_lft forever
|
valid_lft forever preferred_lft forever
|
||||||
13: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
|
13: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
|
||||||
link/ether fa:16:3e:b3:2e:70 brd ff:ff:ff:ff:ff:ff
|
link/ether fa:16:3e:b3:2e:70 brd ff:ff:ff:ff:ff:ff
|
||||||
inet 192.168.2.6/24 scope global eth1
|
inet 192.168.2.6/24 scope global eth1
|
||||||
valid_lft forever preferred_lft forever
|
valid_lft forever preferred_lft forever
|
||||||
inet6 fe80::f816:3eff:fea8:55af/64 scope link
|
inet6 fe80::f816:3eff:fea8:55af/64 scope link
|
||||||
valid_lft forever preferred_lft forever
|
valid_lft forever preferred_lft forever
|
||||||
|
|
||||||
In our example sriov interface has address 192.168.2.6
|
In our example sriov interface has address 192.168.2.6
|
||||||
|
|
||||||
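The PID lookup and the namespace inspection above can be combined into a
single command. This is only a sketch, assuming ``eb4e10f38763`` is still the
container ID; ``$()`` substitutes the PID returned by ``docker inspect``:

.. code-block:: console

   $ nsenter -n -t "$(docker inspect --format '{{.State.Pid}}' eb4e10f38763)" ip a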
5. Use the neutron CLI to check that a port with that exact address has been
created in neutron:

.. code-block:: console

   $ openstack port list | grep 192.168.2.6

Suppose that the previous command returns a list with one OpenStack port that
has the ID ``545ec21d-6bfc-4179-88c6-9dacaf435ea7``. You can see its
information with the following command:

.. code-block:: console

   $ openstack port show 545ec21d-6bfc-4179-88c6-9dacaf435ea7
   +-----------------------+----------------------------------------------------------------------------+
   | Field                 | Value                                                                      |
   +-----------------------+----------------------------------------------------------------------------+
   | admin_state_up        | UP                                                                         |
   | allowed_address_pairs |                                                                            |
   | binding_host_id       | novactl                                                                    |
   | binding_profile       |                                                                            |
   | binding_vif_details   | port_filter='True'                                                         |
   | binding_vif_type      | hw_veb                                                                     |
   | binding_vnic_type     | direct                                                                     |
   | created_at            | 2018-11-26T09:13:07Z                                                       |
   | description           |                                                                            |
   | device_id             | 7ab02cf9-f15b-11e8-bdf4-525400152cf3                                       |
   | device_owner          | compute:kuryr:sriov                                                        |
   | dns_assignment        | None                                                                       |
   | dns_name              | None                                                                       |
   | extra_dhcp_opts       |                                                                            |
   | fixed_ips             | ip_address='192.168.2.6', subnet_id='88d0b025-2710-4f02-a348-2829853b45da' |
   | id                    | 545ec21d-6bfc-4179-88c6-9dacaf435ea7                                       |
   | ip_address            | None                                                                       |
   | mac_address           | fa:16:3e:b3:2e:70                                                          |
   | name                  | default/nginx-sriov-558db554d7-rvpxs                                       |
   | network_id            | 2f8b9103-e9ec-47fa-9617-0fb9deacfc00                                       |
   | option_name           | None                                                                       |
   | option_value          | None                                                                       |
   | port_security_enabled | False                                                                      |
   | project_id            | 92a4d7734b17486ba24e635bc7fad595                                           |
   | qos_policy_id         | None                                                                       |
   | revision_number       | 5                                                                          |
   | security_groups       | 1e7bb965-2ad5-4a09-a5ac-41aa466af25b                                       |
   | status                | DOWN                                                                       |
   | subnet_id             | None                                                                       |
   | updated_at            | 2018-11-26T09:13:07Z                                                       |
   +-----------------------+----------------------------------------------------------------------------+

The port should have the name of the pod, ``compute:kuryr:sriov`` as the
device owner and 'direct' vnic_type. Verify that the IP and MAC addresses of
the port
@@ -6,112 +6,126 @@ In this example, we will use the `kuryr-udp-demo`_ image. This image
implements a simple UDP server that listens on port 9090, and replies to the
client when a packet is received.

We first create a deployment named demo:

.. code-block:: console

   $ kubectl run --image=yboaron/kuryr-udp-demo demo
   deployment "demo" created

As the next step, we will scale the deployment to 2 pods:

.. code-block:: console

   $ kubectl scale deploy/demo --replicas=2
   deployment "demo" scaled

At this point we should have two pods running the `kuryr-udp-demo`_ image:

.. code-block:: console

   $ kubectl get pods
   NAME                   READY     STATUS    RESTARTS   AGE
   demo-fbb89f54c-92ttl   1/1       Running   0          31s
   demo-fbb89f54c-q9fq7   1/1       Running   0          1m

Next, we expose the deployment as a service, setting the UDP port to 90:

.. code-block:: console

   $ kubectl get svc
   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
   kubernetes   ClusterIP   10.0.0.129   <none>        443/TCP   17m

   $ kubectl expose deploy/demo --protocol UDP --port 90 --target-port 9090
   service "demo" exposed

   $ kubectl get svc
   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
   demo         ClusterIP   10.0.0.150   <none>        90/UDP    16s
   kubernetes   ClusterIP   10.0.0.129   <none>        443/TCP   17m
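The ``kubectl expose`` command above can also be written declaratively. The
following is only a sketch of the equivalent Service manifest, under the
assumption that the pods created by ``kubectl run`` carry the default
``run: demo`` label:

.. code-block:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: demo
   spec:
     selector:
       run: demo        # label added by "kubectl run demo" (assumption)
     ports:
     - protocol: UDP
       port: 90
       targetPort: 9090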
Now, let's check the OpenStack load balancer created by Kuryr for the **demo**
service:

.. code-block:: console

   $ openstack loadbalancer list
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
   | id                                   | name               | project_id                       | vip_address | provisioning_status | provider |
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
   | eb5123e8-6bb5-4680-ac64-dcf25c57ced3 | default/kubernetes | fdc9ac3b36474fbf8c7ab77f4f783ec5 | 10.0.0.129  | ACTIVE              | amphora  |
   | 67f19a39-dfb9-4a7a-bafe-7d6789982d91 | default/demo       | fdc9ac3b36474fbf8c7ab77f4f783ec5 | 10.0.0.150  | ACTIVE              | amphora  |
   +--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

   $ openstack loadbalancer show default/demo
   +---------------------+--------------------------------------+
   | Field               | Value                                |
   +---------------------+--------------------------------------+
   | admin_state_up      | True                                 |
   | created_at          | 2018-10-09T06:06:14                  |
   | description         |                                      |
   | flavor              |                                      |
   | id                  | 67f19a39-dfb9-4a7a-bafe-7d6789982d91 |
   | listeners           | 7b374ecf-80c4-44be-a725-9b0c3fa2d0fa |
   | name                | default/demo                         |
   | operating_status    | ONLINE                               |
   | pools               | d549df5b-e008-49a6-8695-b6578441553e |
   | project_id          | fdc9ac3b36474fbf8c7ab77f4f783ec5     |
   | provider            | amphora                              |
   | provisioning_status | ACTIVE                               |
   | updated_at          | 2018-10-09T06:07:53                  |
   | vip_address         | 10.0.0.150                           |
   | vip_network_id      | eee6af72-9fbb-48b5-8e52-9f8bdf61cbab |
   | vip_port_id         | ccd8be94-c65e-4bb2-afe7-44aa3d0617ea |
   | vip_qos_policy_id   | None                                 |
   | vip_subnet_id       | 3376291d-6c23-48cb-b6c6-37cefd57f914 |
   +---------------------+--------------------------------------+

Checking the load balancer's details, we can see that the load balancer is
listening on UDP port 90:

.. code-block:: console

   $ openstack loadbalancer listener show 7b374ecf-80c4-44be-a725-9b0c3fa2d0fa
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | connection_limit          | -1                                   |
   | created_at                | 2018-10-09T06:07:37                  |
   | default_pool_id           | d549df5b-e008-49a6-8695-b6578441553e |
   | default_tls_container_ref | None                                 |
   | description               |                                      |
   | id                        | 7b374ecf-80c4-44be-a725-9b0c3fa2d0fa |
   | insert_headers            | None                                 |
   | l7policies                |                                      |
   | loadbalancers             | 67f19a39-dfb9-4a7a-bafe-7d6789982d91 |
   | name                      | default/demo:UDP:90                  |
   | operating_status          | ONLINE                               |
   | project_id                | fdc9ac3b36474fbf8c7ab77f4f783ec5     |
   | protocol                  | UDP                                  |
   | protocol_port             | 90                                   |
   | provisioning_status       | ACTIVE                               |
   | sni_container_refs        | []                                   |
   | timeout_client_data       | 50000                                |
   | timeout_member_connect    | 5000                                 |
   | timeout_member_data       | 50000                                |
   | timeout_tcp_inspect       | 0                                    |
   | updated_at                | 2018-10-09T06:07:53                  |
   +---------------------------+--------------------------------------+

And the load balancer has two members listening on UDP port 9090:

.. code-block:: console

   $ openstack loadbalancer member list d549df5b-e008-49a6-8695-b6578441553e
   +--------------------------------------+-----------------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
   | id                                   | name                              | project_id                       | provisioning_status | address   | protocol_port | operating_status | weight |
   +--------------------------------------+-----------------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
   | b2c63e7b-47ed-4a6f-b8bb-acaa6742a0ad | default/demo-fbb89f54c-q9fq7:9090 | fdc9ac3b36474fbf8c7ab77f4f783ec5 | ACTIVE              | 10.0.0.74 | 9090          | ONLINE           | 1      |
   | 7fa773b1-cf76-4a0b-8004-153423e59ef6 | default/demo-fbb89f54c-92ttl:9090 | fdc9ac3b36474fbf8c7ab77f4f783ec5 | ACTIVE              | 10.0.0.88 | 9090          | ONLINE           | 1      |
   +--------------------------------------+-----------------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+

At this point, we have both the Kubernetes **demo** service and the
corresponding OpenStack load balancer running, and we are ready to run the
client
@@ -122,31 +136,37 @@ client script sends a UDP message towards a specific IP and port, and waits for a
response from the server. The way that the client application can communicate
with the server is by leveraging the Kubernetes service functionality.
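Such a client can be sketched in a few lines of Python. This is a hypothetical
reimplementation for illustration, not the actual ``client.py`` script; the
``udp_ping`` helper name is ours:

.. code-block:: python

   # Minimal UDP "ping" client: send one datagram and print the reply.
   import socket
   import sys


   def udp_ping(host, port, message=b"ping", timeout=5.0):
       """Send a single UDP datagram and return the server's reply as text."""
       with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
           sock.settimeout(timeout)
           sock.sendto(message, (host, int(port)))
           reply, _addr = sock.recvfrom(4096)
       return reply.decode()


   if __name__ == "__main__":
       # Usage: python client.py <service-ip> <service-port>
       print(udp_ping(sys.argv[1], sys.argv[2]))

Pointing such a client at the service VIP exercises the Octavia load balancer
in the same way as the script used in this example.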
First we clone the client script:

.. code-block:: console

   $ git clone https://github.com/yboaron/udp-client-script.git
   Cloning into 'udp-client-script'...
   remote: Enumerating objects: 15, done.
   remote: Counting objects: 100% (15/15), done.
   remote: Compressing objects: 100% (13/13), done.
   remote: Total 15 (delta 4), reused 3 (delta 1), pack-reused 0
   Unpacking objects: 100% (15/15), done.
   $

And we need the UDP server service IP and port:

.. code-block:: console

   $ kubectl get svc demo
   NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
   demo      ClusterIP   10.0.0.150   <none>        90/UDP    20m
   $

The last step will be to ping the UDP server service:

.. code-block:: console

   $ python udp-client-script/client.py 10.0.0.150 90
   demo-fbb89f54c-92ttl: HELLO, I AM ALIVE!!!

   $ python udp-client-script/client.py 10.0.0.150 90
   demo-fbb89f54c-q9fq7: HELLO, I AM ALIVE!!!

Since the `kuryr-udp-demo`_ application concatenates the pod's name to the
replied message, it is plain to see that both of the service's pods are
replying to
@@ -6,41 +6,48 @@ To create a VM that makes use of the Neutron Trunk port support, the next
steps can be followed:

1. Use the demo tenant and create a key to be used to log in to the overcloud
   VM:

   .. code-block:: console

      $ source ~/devstack/openrc demo
      $ openstack keypair create demo > id_rsa_demo
      $ chmod 600 id_rsa_demo

2. Ensure the demo default security group allows ping and ssh access:

   .. code-block:: console

      $ openstack security group rule create --protocol icmp default
      $ openstack security group rule create --protocol tcp --dst-port 22 default

3. Download and import an image that supports VLANs, as cirros does not
   support them:

   .. code-block:: console

      $ wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
      $ openstack image create --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2 centos7

4. Create a port for the overcloud VM and create the trunk with that port as
   the parent port (untagged traffic):

   .. code-block:: console

      $ openstack port create --network private --security-group default port0
      $ openstack network trunk create --parent-port port0 trunk0

5. Create the overcloud VM and assign a floating IP to it to be able to log
   in to it:

   .. code-block:: console

      $ openstack server create --image centos7 --flavor ds4G --nic port-id=port0 --key-name demo overcloud_vm
      $ openstack floating ip create --port port0 public

Note that subports can be added to the trunk port and used inside the VM with
a specific VLAN, 102 in the example, by doing:

.. code-block:: console

   $ openstack network trunk set --subport port=subport0,segmentation-type=vlan,segmentation-id=102 trunk0
@@ -5,16 +5,16 @@ Upgrading kuryr-kubernetes
Kuryr-Kubernetes supports the standard OpenStack utility for checking that an
upgrade is possible and safe:

.. code-block:: console

   $ kuryr-k8s-status upgrade check
   +---------------------------------------+
   | Upgrade Check Results                 |
   +---------------------------------------+
   | Check: Pod annotations                |
   | Result: Success                       |
   | Details: All annotations are updated. |
   +---------------------------------------+

If any issue is found, the utility will give you an explanation and possible
remediations. Also note that *Warning* results aren't blocking an upgrade, but
@@ -38,50 +38,50 @@ upgrade check`` utility **before upgrading Kuryr-Kubernetes services to T**.

   $ kubectl -n kube-system exec -it <controller-pod-name> kuryr-k8s-status upgrade check

.. code-block:: console

   $ kuryr-k8s-status upgrade check
   +---------------------------------------+
   | Upgrade Check Results                 |
   +---------------------------------------+
   | Check: Pod annotations                |
   | Result: Success                       |
   | Details: All annotations are updated. |
   +---------------------------------------+

In case of a *Failure* result of the *Pod annotations* check you should run
the ``kuryr-k8s-status upgrade update-annotations`` command and check again:

.. code-block:: console

   $ kuryr-k8s-status upgrade check
   +----------------------------------------------------------------------+
   | Upgrade Check Results                                                |
   +----------------------------------------------------------------------+
   | Check: Pod annotations                                               |
   | Result: Failure                                                      |
   | Details: You have 3 Kuryr pod annotations in old format. You need to |
   |          run `kuryr-k8s-status upgrade update-annotations`           |
   |          before proceeding with the upgrade.                         |
   +----------------------------------------------------------------------+
   $ kuryr-k8s-status upgrade update-annotations
   +-----------------------+--------+
   | Stat                  | Number |
   +-----------------------+--------+
   | Updated annotations   | 3      |
   +-----------------------+--------+
   | Malformed annotations | 0      |
   +-----------------------+--------+
   | Annotations left      | 0      |
   +-----------------------+--------+
   $ kuryr-k8s-status upgrade check
   +---------------------------------------+
   | Upgrade Check Results                 |
   +---------------------------------------+
   | Check: Pod annotations                |
   | Result: Success                       |
   | Details: All annotations are updated. |
   +---------------------------------------+

It's possible that some annotations were somehow malformed. That will generate
a warning that should be investigated, but isn't blocking upgrading to T