Explicitly use code-block.

In this patch we convert preformatted blocks (introduced by a double
colon at the end of a line) to the Sphinx code-block directive. We also
fix the formatting of existing code-block directives and change the
reStructuredText code directive into the Sphinx code-block directive.
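
For illustration, the typical conversion applied across these files turns a
double-colon preformatted block such as:

    Start the installation::

        $ ./devstack/stack.sh

into an explicit Sphinx code-block directive with a language name:

    Start the installation:

    .. code-block:: console

       $ ./devstack/stack.sh

(This snippet is schematic; the hunks below show the actual conversions.)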

Change-Id: I9db48fbb169263e3bf66eacca7d9bce6c355739f
changes/30/693930/3
Roman Dobosz, 3 weeks ago
commit fd440fcdcb
34 changed files with 2756 additions and 2410 deletions
  1. doc/source/devref/health_manager.rst (+4, -2)
  2. doc/source/devref/high_availability.rst (+12, -12)
  3. doc/source/devref/kuryr_kubernetes_design.rst (+5, -3)
  4. doc/source/devref/kuryr_kubernetes_ingress_design.rst (+21, -19)
  5. doc/source/devref/kuryr_kubernetes_ocp_route_design.rst (+9, -9)
  6. doc/source/devref/network_policy.rst (+158, -154)
  7. doc/source/devref/port_crd_usage.rst (+96, -94)
  8. doc/source/devref/updating_pod_resources_api.rst (+38, -22)
  9. doc/source/devref/vif_handler_drivers_design.rst (+34, -36)
  10. doc/source/installation/containerized.rst (+40, -24)
  11. doc/source/installation/default_configuration.rst (+72, -67)
  12. doc/source/installation/devstack/basic.rst (+99, -77)
  13. doc/source/installation/devstack/containerized.rst (+20, -12)
  14. doc/source/installation/devstack/dragonflow_support.rst (+32, -25)
  15. doc/source/installation/devstack/nested-macvlan.rst (+19, -11)
  16. doc/source/installation/devstack/nested-vlan.rst (+39, -23)
  17. doc/source/installation/devstack/odl_support.rst (+35, -32)
  18. doc/source/installation/devstack/ovn_support.rst (+28, -27)
  19. doc/source/installation/devstack/ports-pool.rst (+18, -13)
  20. doc/source/installation/https_kubernetes.rst (+18, -12)
  21. doc/source/installation/ipv6.rst (+232, -214)
  22. doc/source/installation/manual.rst (+127, -92)
  23. doc/source/installation/network_namespace.rst (+96, -77)
  24. doc/source/installation/network_policy.rst (+298, -257)
  25. doc/source/installation/ocp_route.rst (+46, -35)
  26. doc/source/installation/ports-pool.rst (+55, -33)
  27. doc/source/installation/services.rst (+542, -502)
  28. doc/source/installation/sriov.rst (+63, -63)
  29. doc/source/installation/testing_connectivity.rst (+99, -91)
  30. doc/source/installation/testing_nested_connectivity.rst (+36, -34)
  31. doc/source/installation/testing_sriov_functional.rst (+160, -160)
  32. doc/source/installation/testing_udp_services.rst (+132, -112)
  33. doc/source/installation/trunk_ports.rst (+25, -18)
  34. doc/source/installation/upgrades.rst (+48, -48)

doc/source/devref/health_manager.rst (+4, -2)

@@ -70,8 +70,10 @@ leak.
disabled. In order to enable, set the following option in kuryr.conf to a
limit value of memory in MiBs.

[cni_health_server]
max_memory_usage = -1
.. code-block:: ini

[cni_health_server]
max_memory_usage = -1

The CNI Health Manager is added as a process to CNI daemon and communicates
to the other two processes i.e. Watcher and Server with a shared boolean

doc/source/devref/high_availability.rst (+12, -12)

@@ -54,18 +54,18 @@ provide update exclusion mechanisms to prevent race conditions.
This can be implemented by adding another *leader-elector* container to each
of kuryr-controller pods:

.. code:: yaml
- image: gcr.io/google_containers/leader-elector:0.5
name: leader-elector
args:
- "--election=kuryr-controller"
- "--http=0.0.0.0:${KURYR_CONTROLLER_HA_PORT:-16401}"
- "--election-namespace=kube-system"
- "--ttl=5s"
ports:
- containerPort: ${KURYR_CONTROLLER_HA_PORT:-16401}
protocol: TCP
.. code-block:: yaml
- image: gcr.io/google_containers/leader-elector:0.5
name: leader-elector
args:
- "--election=kuryr-controller"
- "--http=0.0.0.0:${KURYR_CONTROLLER_HA_PORT:-16401}"
- "--election-namespace=kube-system"
- "--ttl=5s"
ports:
- containerPort: ${KURYR_CONTROLLER_HA_PORT:-16401}
protocol: TCP

This adds a new container to the pod. This container will do the
leader-election and expose the simple JSON API on port 16401 by default. This

doc/source/devref/kuryr_kubernetes_design.rst (+5, -3)

@@ -184,10 +184,12 @@ lbaasspec Service
================ =========================

For example, to enable only the 'vif' controller handler we should set the
following at kuryr.conf::
following at kuryr.conf:

[kubernetes]
enabled_handlers=vif
.. code-block:: ini

[kubernetes]
enabled_handlers=vif


Providers

doc/source/devref/kuryr_kubernetes_ingress_design.rst (+21, -19)

@@ -101,14 +101,16 @@ For achieving external connectivity the L7 router is attached to a floating
IP (allocated from 'external_svc_subnet').

The following parameters should be configured in kuryr.conf file to
enable L7 Router::
enable L7 Router:

[ingress]
l7_router_uuid=<loadbalancer uuid>
[kubernetes]
enable_ingress = True
# make sure that 'ingresslb' appears in enabled_handlers
enabled_handlers= vif,lb,lbaasspec,ingresslb
.. code-block:: ini

[ingress]
l7_router_uuid=<loadbalancer uuid>
[kubernetes]
enable_ingress = True
# make sure that 'ingresslb' appears in enabled_handlers
enabled_handlers= vif,lb,lbaasspec,ingresslb

The L7 Router should notify the Health Manager upon any failure that could
lead to L7 Router's malfunctionality.
@@ -190,18 +192,18 @@ This section describe in details the following scenarios:

.. code-block:: yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
serviceName: s1
servicePort: 80
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
serviceName: s1
servicePort: 80

* Since its the first Ingress pointing to this Service, the Ingress
handler will create LBaaS pool (attached to L7 router)- named

doc/source/devref/kuryr_kubernetes_ocp_route_design.rst (+9, -9)

@@ -97,15 +97,15 @@ B. Create Service/Endpoints, create OCP-Route, delete OCP-Route.

.. code-block:: yaml

apiVersion: v1
kind: Route
metadata:
name: test
spec:
host: www.example.com
to:
kind: Service
name: s1
apiVersion: v1
kind: Route
metadata:
name: test
spec:
host: www.example.com
to:
kind: Service
name: s1

* Since it's the first route pointing to this Service, the OCP-Route
handler will create LbaaS pool (attached to L7 router)- named

doc/source/devref/network_policy.rst (+158, -154)

@@ -46,23 +46,23 @@ The network policy CRD has the following format:

.. code-block:: yaml

apiVersion: openstack.org/v1
kind: KuryrNetPolicy
metadata:
...
spec:
egressSgRules:
- security_group_rule:
...
ingressSgRules:
- security_group_rule:
...
networkpolicy_spec:
...
podSelector:
...
securityGroupId: ...
securityGroupName: ...
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
metadata:
...
spec:
egressSgRules:
- security_group_rule:
...
ingressSgRules:
- security_group_rule:
...
networkpolicy_spec:
...
podSelector:
...
securityGroupId: ...
securityGroupName: ...

A new handler has been added to react to Network Policy events, and the existing
ones, for instance service/pod handlers, have been modified to account for the
@@ -183,14 +183,14 @@ policy becomes denied. As specified in the following policy:

.. code-block:: yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny
spec:
podSelector: {}
policyTypes:
- Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny
spec:
podSelector: {}
policyTypes:
- Ingress

The following CRD is the translation of policy rules to security group rules.
No ingress rule was created, which means traffic is blocked, and since
@@ -200,27 +200,27 @@ are assumed to assumed to affect Ingress.

.. code-block:: yaml

apiVersion: openstack.org/v1
kind: KuryrNetPolicy
metadata:
name: np-default-deny
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 60a0d59c-2102-43e0-b025-75c98b7d9315
security_group_id: 20d9b623-f1e0-449d-95c1-01624cb3e315
ingressSgRules: []
networkpolicy_spec:
...
podSelector:
...
securityGroupId: 20d9b623-f1e0-449d-95c1-01624cb3e315
securityGroupName: sg-default-deny
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
metadata:
name: np-default-deny
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 60a0d59c-2102-43e0-b025-75c98b7d9315
security_group_id: 20d9b623-f1e0-449d-95c1-01624cb3e315
ingressSgRules: []
networkpolicy_spec:
...
podSelector:
...
securityGroupId: 20d9b623-f1e0-449d-95c1-01624cb3e315
securityGroupName: sg-default-deny


Allow traffic from pod
@@ -231,28 +231,30 @@ on a single port from the group of pods that have the label ``role=monitoring``.

.. code-block:: yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-monitoring-via-pod-selector
spec:
podSelector:
matchLabels:
app: server
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
role: monitoring
ports:
- protocol: TCP
port: 8080

Create the following pod with label ``role=monitoring``::

$ kubectl run monitor --image=busybox --restart=Never --labels=role=monitoring
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-monitoring-via-pod-selector
spec:
podSelector:
matchLabels:
app: server
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
role: monitoring
ports:
- protocol: TCP
port: 8080

Create the following pod with label ``role=monitoring``:

.. code-block:: console

$ kubectl run monitor --image=busybox --restart=Never --labels=role=monitoring

The generated CRD contains an ingress rule allowing traffic on port 8080 from
the created pod, and an egress rule allowing traffic to everywhere, since no
@@ -260,38 +262,38 @@ restriction was enforced.

.. code-block:: yaml

apiVersion: openstack.org/v1
kind: KuryrNetPolicy
metadata:
name: np-allow-monitoring-via-pod-selector
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 203a14fe-1059-4eff-93ed-a42bd957145d
security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
ingressSgRules:
- namespace: default
security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
id: 7987c382-f2a9-47f7-b6e8-1a3a1bcb7d95
port_range_max: 8080
port_range_min: 8080
protocol: tcp
remote_ip_prefix: 10.0.1.143
security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
networkpolicy_spec:
...
podSelector:
...
securityGroupId: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
securityGroupName: sg-allow-monitoring-via-pod-selector
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
metadata:
name: np-allow-monitoring-via-pod-selector
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 203a14fe-1059-4eff-93ed-a42bd957145d
security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
ingressSgRules:
- namespace: default
security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
id: 7987c382-f2a9-47f7-b6e8-1a3a1bcb7d95
port_range_max: 8080
port_range_min: 8080
protocol: tcp
remote_ip_prefix: 10.0.1.143
security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
networkpolicy_spec:
...
podSelector:
...
securityGroupId: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
securityGroupName: sg-allow-monitoring-via-pod-selector


Allow traffic from namespace
@@ -302,29 +304,31 @@ from namespace with the label ``purpose=test``:

.. code-block:: yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-test-via-ns-selector
spec:
podSelector:
matchLabels:
app: server
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
purpose: test
ports:
- protocol: TCP
port: 8080

Create a namespace and label it with ``purpose=test``::

$ kubectl create namespace dev
$ kubectl label namespace dev purpose=test
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-test-via-ns-selector
spec:
podSelector:
matchLabels:
app: server
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
purpose: test
ports:
- protocol: TCP
port: 8080

Create a namespace and label it with ``purpose=test``:

.. code-block:: console

$ kubectl create namespace dev
$ kubectl label namespace dev purpose=test

The resulting CRD has an ingress rule allowing traffic
from the namespace CIDR on the specified port, and an
@@ -332,37 +336,37 @@ egress rule allowing traffic to everywhere.

.. code-block:: yaml

apiVersion: openstack.org/v1
kind: KuryrNetPolicy
name: np-allow-test-via-ns-selector
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 8c21bf42-c8b9-4628-b0a1-bd0dbb192e6b
security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
ingressSgRules:
- namespace: dev
security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
id: 2a33b802-56ad-430a-801d-690f653198ef
port_range_max: 8080
port_range_min: 8080
protocol: tcp
remote_ip_prefix: 10.0.1.192/26
security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
networkpolicy_spec:
...
podSelector:
...
securityGroupId: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
securityGroupName: sg-allow-test-via-ns-selector
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
name: np-allow-test-via-ns-selector
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 8c21bf42-c8b9-4628-b0a1-bd0dbb192e6b
security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
ingressSgRules:
- namespace: dev
security_group_rule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
id: 2a33b802-56ad-430a-801d-690f653198ef
port_range_max: 8080
port_range_min: 8080
protocol: tcp
remote_ip_prefix: 10.0.1.192/26
security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
networkpolicy_spec:
...
podSelector:
...
securityGroupId: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
securityGroupName: sg-allow-test-via-ns-selector

.. note::


doc/source/devref/port_crd_usage.rst (+96, -94)

@@ -58,25 +58,26 @@ The first action is to create a KuryrPort CRD where the needed information
about the Neutron Ports will be stored (or any other SDN).

Currently, the pods are annotated with the vif information of the port
assigned to it::

"kind": "Pod",
"metadata": {
"annotations": {
"openstack.org/kuryr-vif": "{\"eth0\": {\"versioned_object.data\": {\"active\": true, \"address\": \"fa:16:3e:bf:84:ff\", \"has_traffic_filtering\
": false, \"id\": \"18f968a5-c420-4318-92d7-941eb5f9e60e\", \"network\": {\"versioned_object.data\": {\"id\": \"144164d9-8c21-4274-acec-43245de0aed0\", \"labe
l\": \"ns/luis-net\", \"mtu\": 1350, \"multi_host\": false, \"should_provide_bridge\": false, \"should_provide_vlan\": false, \"subnets\": {\"versioned_object
.data\": {\"objects\": [{\"versioned_object.data\": {\"cidr\": \"10.11.9.0/24\", \"dns\": [], \"gateway\": \"10.11.9.1\", \"ips\": {\"versioned_object.data\":
{\"objects\": [{\"versioned_object.data\": {\"address\": \"10.11.9.5\"}, \"versioned_object.name\": \"FixedIP\", \"versioned_object.namespace\": \"os_vif\",
\"versioned_object.version\": \"1.0\"}]}, \"versioned_object.name\": \"FixedIPList\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\"
: \"1.0\"}, \"routes\": {\"versioned_object.data\": {\"objects\": []}, \"versioned_object.name\": \"RouteList\", \"versioned_object.namespace\": \"os_vif\", \
"versioned_object.version\": \"1.0\"}}, \"versioned_object.name\": \"Subnet\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.0
\"}]}, \"versioned_object.name\": \"SubnetList\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.0\"}}, \"versioned_object.name
\": \"Network\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.1\"}, \"plugin\": \"noop\", \"preserve_on_delete\": false, \"vi
f_name\": \"tap18f968a5-c4\", \"vlan_id\": 1346}, \"versioned_object.name\": \"VIFVlanNested\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object
.version\": \"1.0\"}}"
},

assigned to it:

.. code-block::

"kind": "Pod",
"metadata": {
"annotations": {
"openstack.org/kuryr-vif": "{\"eth0\": {\"versioned_object.data\": {\"active\": true, \"address\": \"fa:16:3e:bf:84:ff\", \"has_traffic_filtering\": false,
\"id\": \"18f968a5-c420-4318-92d7-941eb5f9e60e\", \"network\": {\"versioned_object.data\": {\"id\": \"144164d9-8c21-4274-acec-43245de0aed0\", \"labe
l\": \"ns/luis-net\", \"mtu\": 1350, \"multi_host\": false, \"should_provide_bridge\": false, \"should_provide_vlan\": false, \"subnets\": {\"versioned_object
.data\": {\"objects\": [{\"versioned_object.data\": {\"cidr\": \"10.11.9.0/24\", \"dns\": [], \"gateway\": \"10.11.9.1\", \"ips\": {\"versioned_object.data\":
{\"objects\": [{\"versioned_object.data\": {\"address\": \"10.11.9.5\"}, \"versioned_object.name\": \"FixedIP\", \"versioned_object.namespace\": \"os_vif\",
\"versioned_object.version\": \"1.0\"}]}, \"versioned_object.name\": \"FixedIPList\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\"
: \"1.0\"}, \"routes\": {\"versioned_object.data\": {\"objects\": []}, \"versioned_object.name\": \"RouteList\", \"versioned_object.namespace\": \"os_vif\", \
"versioned_object.version\": \"1.0\"}}, \"versioned_object.name\": \"Subnet\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.0
\"}]}, \"versioned_object.name\": \"SubnetList\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.0\"}}, \"versioned_object.name
\": \"Network\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object.version\": \"1.1\"}, \"plugin\": \"noop\", \"preserve_on_delete\": false, \"vi
f_name\": \"tap18f968a5-c4\", \"vlan_id\": 1346}, \"versioned_object.name\": \"VIFVlanNested\", \"versioned_object.namespace\": \"os_vif\", \"versioned_object
.version\": \"1.0\"}}"
},

The proposal is to store the information of the VIF in the new defined
KuryrPort CRD as a new KuryrPort object, including similar information to the
@@ -85,81 +86,82 @@ object selfLink at the pod by using oslo.versionedobject to easy identify
the changes into the annotation format. Note the selfLink should contain the
Neutron Port UUID if that is used as the name for the KuryrPort CRD object.
In case of other SDN a unique value that represents the port should be used
as the name for the KuryrPort CRD object::

$ kubectl get POD_NAME -o json
"kind": "Pod",
"metadata": {
"annotations": {
"openstack.org/kuryr-vif": "{"eth0": {\"versioned_object.data\": {\"selfLink\": \"/apis/openstack.org/v1/kuryrports/18f968a5-c420-4318-92d7-941eb5f9e60e\"}},
\"versioned_object.name\": \"KuryrPortCRD\", \"versioned_object.version\": \"1.0\"}"
},
...

$ openstack port show 18f968a5-c420-4318-92d7-941eb5f9e60e
+-----------------------+---------------------------------------------------------------------------+
| Field | Value |
+-----------------------+---------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | None |
| binding_profile | None |
| binding_vif_details | None |
| binding_vif_type | None |
| binding_vnic_type | normal |
| created_at | 2018-06-18T15:58:23Z |
| data_plane_status | None |
| description | |
| device_id | |
| device_owner | trunk:subport |
| dns_assignment | None |
| dns_domain | None |
| dns_name | None |
| extra_dhcp_opts | |
| fixed_ips | ip_address='10.11.9.5', subnet_id='fa660385-65f1-4677-8dc7-3f4f9cd15d7f' |
| id | 18f968a5-c420-4318-92d7-941eb5f9e60e |
| ip_address | None |
| mac_address | fa:16:3e:bf:84:ff |
| name | |
| network_id | 144164d9-8c21-4274-acec-43245de0aed0 |
| option_name | None |
| option_value | None |
| port_security_enabled | True |
| project_id | d85bdba083204fe2845349a86cb87d82 |
| qos_policy_id | None |
| revision_number | 4 |
| security_group_ids | 32704585-8cbe-43f3-a4d5-56ffe2d3ab24 |
| status | ACTIVE |
| subnet_id | None |
| tags | |
| trunk_details | None |
| updated_at | 2018-06-18T15:58:30Z |
+-----------------------+---------------------------------------------------------------------------+

$ kubectl get kuryrports 18f968a5-c420-4318-92d7-941eb5f9e60e -o json
{
"apiVersion": "openstack.org/v1",
"kind": "KuryrPort",
"metadata": {
"resourceVersion": "164682",
"selfLink": "/apis/openstack.org/v1/kuryrports/18f968a5-c420-4318-92d7-941eb5f9e60e",
"uid": "d2834c13-6e6e-11e8-8acd-fa163ed12aae"
"name": "18f968a5-c420-4318-92d7-941eb5f9e60e"
"portStatus": "created"
},
"spec": {
"active": true",
"address": "fa:16:3e:bf:84:ff",
"id": "18f968a5-c420-4318-92d7-941eb5f9e60e",
"network": {
"id": "144164d9-8c21-4274-acec-43245de0aed0",
"mtu": 1350,
...
}
...
}
}

as the name for the KuryrPort CRD object:

.. code-block:: console

$ kubectl get POD_NAME -o json
"kind": "Pod",
"metadata": {
"annotations": {
"openstack.org/kuryr-vif": "{"eth0": {\"versioned_object.data\": {\"selfLink\": \"/apis/openstack.org/v1/kuryrports/18f968a5-c420-4318-92d7-941eb5f9e60e\"}},
\"versioned_object.name\": \"KuryrPortCRD\", \"versioned_object.version\": \"1.0\"}"
},
...

$ openstack port show 18f968a5-c420-4318-92d7-941eb5f9e60e
+-----------------------+---------------------------------------------------------------------------+
| Field | Value |
+-----------------------+---------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | None |
| binding_profile | None |
| binding_vif_details | None |
| binding_vif_type | None |
| binding_vnic_type | normal |
| created_at | 2018-06-18T15:58:23Z |
| data_plane_status | None |
| description | |
| device_id | |
| device_owner | trunk:subport |
| dns_assignment | None |
| dns_domain | None |
| dns_name | None |
| extra_dhcp_opts | |
| fixed_ips | ip_address='10.11.9.5', subnet_id='fa660385-65f1-4677-8dc7-3f4f9cd15d7f' |
| id | 18f968a5-c420-4318-92d7-941eb5f9e60e |
| ip_address | None |
| mac_address | fa:16:3e:bf:84:ff |
| name | |
| network_id | 144164d9-8c21-4274-acec-43245de0aed0 |
| option_name | None |
| option_value | None |
| port_security_enabled | True |
| project_id | d85bdba083204fe2845349a86cb87d82 |
| qos_policy_id | None |
| revision_number | 4 |
| security_group_ids | 32704585-8cbe-43f3-a4d5-56ffe2d3ab24 |
| status | ACTIVE |
| subnet_id | None |
| tags | |
| trunk_details | None |
| updated_at | 2018-06-18T15:58:30Z |
+-----------------------+---------------------------------------------------------------------------+

$ kubectl get kuryrports 18f968a5-c420-4318-92d7-941eb5f9e60e -o json
{
"apiVersion": "openstack.org/v1",
"kind": "KuryrPort",
"metadata": {
"resourceVersion": "164682",
"selfLink": "/apis/openstack.org/v1/kuryrports/18f968a5-c420-4318-92d7-941eb5f9e60e",
"uid": "d2834c13-6e6e-11e8-8acd-fa163ed12aae"
"name": "18f968a5-c420-4318-92d7-941eb5f9e60e"
"portStatus": "created"
},
"spec": {
"active": true",
"address": "fa:16:3e:bf:84:ff",
"id": "18f968a5-c420-4318-92d7-941eb5f9e60e",
"network": {
"id": "144164d9-8c21-4274-acec-43245de0aed0",
"mtu": 1350,
...
}
...
}
}

This allows a more standard way of annotating the pods, ensuring all needed
information is there regardless of the SDN backend.

doc/source/devref/updating_pod_resources_api.rst (+38, -22)

@@ -48,19 +48,25 @@ Automated update
``contrib/regenerate_pod_resources_api.sh`` script could be used to re-generate
PodResources gRPC API files. By default, this script will download ``v1alpha1``
version of ``api.proto`` file from the Kubernetes GitHub repo and create
required kuryr-kubernetes files from it::
required kuryr-kubernetes files from it:

[kuryr-kubernetes]$ ./contrib/regenerate_pod_resources_api.sh
.. code-block:: console

[kuryr-kubernetes]$ ./contrib/regenerate_pod_resources_api.sh

Alternatively, path to ``api.proto`` file could be specified in
``KUBERNETES_API_PROTO`` environment variable::
``KUBERNETES_API_PROTO`` environment variable:

.. code-block:: console

$ export KUBERNETES_API_PROTO=/path/to/api.proto
$ export KUBERNETES_API_PROTO=/path/to/api.proto

Define ``API_VERSION`` environment variable to use specific version of
``api.proto`` from the Kubernetes GitHub::
``api.proto`` from the Kubernetes GitHub:

$ export API_VERSION=v1alpha1
.. code-block:: console

$ export API_VERSION=v1alpha1


Manual update steps
@@ -71,11 +77,13 @@ Preparing the new api.proto

Copy the ``api.proto`` from K8s sources to ``kuryr_kubernetes/pod_resources/``
and remove all the lines that contains ``gogoproto`` since this is unwanted
dependency that is not needed for python bindings::
dependency that is not needed for python bindings:

.. code-block:: console

$ sed '/gogoproto/d' \
../kubernetes/pkg/kubelet/apis/podresources/<version>/api.proto \
> kuryr_kubernetes/pod_resources/api.proto
$ sed '/gogoproto/d' \
../kubernetes/pkg/kubelet/apis/podresources/<version>/api.proto \
> kuryr_kubernetes/pod_resources/api.proto

Don't forget to update the file header that should point to the original
``api.proto`` and to this reference document::
@@ -88,16 +96,20 @@ Don't forget to update the file header that should point to the original
Generating the python bindings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* (Optional) Create the python virtual environment::
* (Optional) Create the python virtual environment:

[kuryr-kubernetes]$ python3 -m venv venv
[kuryr-kubernetes]$ . ./venv/bin/activate
.. code-block:: console

[kuryr-kubernetes]$ python3 -m venv venv
[kuryr-kubernetes]$ . ./venv/bin/activate

* To generate python bindings we need a ``protoc`` compiler and the
``gRPC plugin`` for it. The most simple way to get them is to install
``grpcio-tools``::
``grpcio-tools``:

.. code-block:: console

(venv) [kuryr-kubernetes]$ pip install grpcio-tools==1.19
(venv) [kuryr-kubernetes]$ pip install grpcio-tools==1.19

.. note::

@@ -109,13 +121,17 @@ Generating the python bindings
you need update ``requirements.txt`` and ``lower-constraints.txt``
accordingly.

To check version of compiler installed with ``grpcio-tools`` use::
To check version of compiler installed with ``grpcio-tools`` use:

.. code-block:: console

(venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc --version
libprotoc 3.6.1

(venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc --version
libprotoc 3.6.1
* Following command will generate ``api_pb2_grpc.py`` and ``api_pb2.py``:

* Following command will generate ``api_pb2_grpc.py`` and ``api_pb2.py``::
.. code-block:: console

(venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \
--python_out=. --grpc_python_out=. \
kuryr_kubernetes/pod_resources/api.proto
(venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \
--python_out=. --grpc_python_out=. \
kuryr_kubernetes/pod_resources/api.proto

doc/source/devref/vif_handler_drivers_design.rst (+34, -36)

@@ -73,17 +73,15 @@ Option in config file might look like this:

.. code-block:: ini

[kubernetes]

multi_vif_drivers = sriov, additional_subnets
[kubernetes]
multi_vif_drivers = sriov, additional_subnets

Or like this:

.. code-block:: ini

[kubernetes]

multi_vif_drivers = npwg_multiple_interfaces
[kubernetes]
multi_vif_drivers = npwg_multiple_interfaces


Additional Subnets Driver
@@ -98,17 +96,17 @@ additional subnets requests might look like:

.. code-block:: yaml

spec:
replicas: 1
template:
metadata:
name: some-name
labels:
app: some-name
annotations:
openstack.org/kuryr-additional-subnets: '[
"id_of_neutron_subnet_created_previously"
]'
spec:
replicas: 1
template:
metadata:
name: some-name
labels:
app: some-name
annotations:
openstack.org/kuryr-additional-subnets: '[
"id_of_neutron_subnet_created_previously"
]'


SRIOV Driver
@@ -122,13 +120,13 @@ Here's how a Pod Spec with sriov requests might look like:

.. code-block:: yaml

spec:
containers:
- name: vf-container
image: vf-image
resources:
requests:
pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-physnet2: 1
spec:
containers:
- name: vf-container
image: vf-image
resources:
requests:
pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-physnet2: 1


Specific ports support
@@ -145,18 +143,18 @@ annotation with requested specific ports might look like this:

.. code-block:: yaml

spec:
replicas: 1
template:
metadata:
name: some-name
labels:
app: some-name
annotations:
spec-ports: '[
"id_of_direct_precreated_port".
"id_of_normal_precreated_port"
]'
spec:
replicas: 1
template:
metadata:
name: some-name
labels:
app: some-name
annotations:
spec-ports: '[
"id_of_direct_precreated_port".
"id_of_normal_precreated_port"
]'

Pod spec above should be interpreted the following way: Multi-vif driver parses
pod annotations and gets ids of specific ports. If vnic_type is "normal" and

doc/source/installation/containerized.rst (+40, -24)

@@ -8,17 +8,23 @@ Building images
First you should build kuryr-controller and kuryr-cni docker images and place
them on cluster-wide accessible registry.

For creating controller image on local machine: ::
For creating controller image on local machine:

$ docker build -t kuryr/controller -f controller.Dockerfile .
.. code-block:: console

For creating cni daemonset image on local machine: ::
$ docker build -t kuryr/controller -f controller.Dockerfile .

$ docker build -t kuryr/cni -f cni.Dockerfile .
For creating cni daemonset image on local machine:

If you want to run kuryr CNI without the daemon, build theimage with: ::
.. code-block:: console

$ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .
$ docker build -t kuryr/cni -f cni.Dockerfile .

If you want to run kuryr CNI without the daemon, build the image with:

.. code-block:: console

$ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .

Alternatively, you can remove ``imagePullPolicy: Never`` from kuryr-controller
Deployment and kuryr-cni DaemonSet definitions to use pre-built `controller
@@ -32,9 +38,11 @@ Generating Kuryr resource definitions for Kubernetes

kuryr-kubernetes includes a tool that lets you generate resource definitions
that can be used to Deploy Kuryr on Kubernetes. The script is placed in
``tools/generate_k8s_resource_definitions.sh`` and takes up to 3 arguments: ::
``tools/generate_k8s_resource_definitions.sh`` and takes up to 3 arguments:

$ ./tools/generate_k8s_resource_definitions <output_dir> [<controller_conf_path>] [<cni_conf_path>] [<ca_certificate_path>]
.. code-block:: console

$ ./tools/generate_k8s_resource_definitions <output_dir> [<controller_conf_path>] [<cni_conf_path>] [<ca_certificate_path>]

* ``output_dir`` - directory where to put yaml files with definitions.
* ``controller_conf_path`` - path to custom kuryr-controller configuration
@@ -83,13 +91,14 @@ script. Below is the list of available variables:
and ``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo``
to raise privileges, even though container is priviledged by itself or
``sudo`` is missing from container OS (e.g. default CentOS 7). To prevent
that make sure to set following options in kuryr.conf used for
kuryr-daemon::
that make sure to set following options in kuryr.conf used for kuryr-daemon:

.. code-block:: ini

[vif_plug_ovs_privileged]
helper_command=privsep-helper
[vif_plug_linux_bridge_privileged]
helper_command=privsep-helper
[vif_plug_ovs_privileged]
helper_command=privsep-helper
[vif_plug_linux_bridge_privileged]
helper_command=privsep-helper

Those options will prevent oslo.privsep from doing that. If rely on
aformentioned script to generate config files, those options will be added
@@ -104,9 +113,11 @@ variable must be set:

* ``$KURYR_USE_PORTS_POOLS`` - ``True`` (default: False)

Example run: ::
Example run:

$ KURYR_K8S_API_ROOT="192.168.0.1:6443" ./tools/generate_k8s_resource_definitions /tmp
.. code-block:: console

$ KURYR_K8S_API_ROOT="192.168.0.1:6443" ./tools/generate_k8s_resource_definitions /tmp

This should generate 5 files in your ``<output_dir>``:

@@ -133,13 +144,15 @@ This should generate 5 files in your ``<output_dir>``:
Deploying Kuryr resources on Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To deploy the files on your Kubernetes cluster run: ::
To deploy the files on your Kubernetes cluster run:

.. code-block:: console

$ kubectl apply -f config_map.yml -n kube-system
$ kubectl apply -f certificates_secret.yml -n kube-system
$ kubectl apply -f service_account.yml -n kube-system
$ kubectl apply -f controller_deployment.yml -n kube-system
$ kubectl apply -f cni_ds.yml -n kube-system
$ kubectl apply -f config_map.yml -n kube-system
$ kubectl apply -f certificates_secret.yml -n kube-system
$ kubectl apply -f service_account.yml -n kube-system
$ kubectl apply -f controller_deployment.yml -n kube-system
$ kubectl apply -f cni_ds.yml -n kube-system

After successful completion:

@@ -148,8 +161,11 @@ After successful completion:
* kuryr-cni gets installed as a daemonset object on all the nodes in
kube-system namespace

To see kuryr-controller logs ::
$ kubectl logs <pod-name>
To see kuryr-controller logs:

.. code-block:: console

$ kubectl logs <pod-name>

NOTE: kuryr-cni has no logs and to debug failures you need to check out kubelet
logs.

doc/source/installation/default_configuration.rst (+72, -67)

@@ -2,85 +2,90 @@
Inspect default Configuration
=============================

By default, DevStack creates networks called ``private`` and ``public``::
By default, DevStack creates networks called ``private`` and ``public``:

$ openstack network list --project demo
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 12bc346b-35ed-4cfa-855b-389305c05740 | private | 1ee73076-e01e-4cec-a3a4-cbb275f94d0f, 8376a091-dcea-4ed5-b738-c16446e861da |
+--------------------------------------+---------+----------------------------------------------------------------------------+
.. code-block:: console

$ openstack network list --project admin
+--------------------------------------+--------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------+----------------------------------------------------------------------------+
| 646baf54-6178-4a26-a52b-68ad0ba1e057 | public | 00e0b1e4-4bee-4204-bd02-610291c56334, b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b |
+--------------------------------------+--------+----------------------------------------------------------------------------+
$ openstack network list --project demo
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 12bc346b-35ed-4cfa-855b-389305c05740 | private | 1ee73076-e01e-4cec-a3a4-cbb275f94d0f, 8376a091-dcea-4ed5-b738-c16446e861da |
+--------------------------------------+---------+----------------------------------------------------------------------------+

$ openstack network list --project admin
+--------------------------------------+--------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------+----------------------------------------------------------------------------+
| 646baf54-6178-4a26-a52b-68ad0ba1e057 | public | 00e0b1e4-4bee-4204-bd02-610291c56334, b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b |
+--------------------------------------+--------+----------------------------------------------------------------------------+

And kuryr-kubernetes creates two extra ones for the kubernetes services and
pods under the project k8s::
pods under the project k8s:

$ openstack network list --project k8s
+--------------------------------------+-----------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-----------------+--------------------------------------+
| 1bff74a6-e4e2-42fb-a81b-33c9c144987c | k8s-pod-net | 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 |
| d4be7efc-b84d-480e-a1db-34205877e6c4 | k8s-service-net | 55405e9d-4e25-4a55-bac2-e25ee88584e1 |
+--------------------------------------+-----------------+--------------------------------------+
.. code-block:: console

$ openstack network list --project k8s
+--------------------------------------+-----------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-----------------+--------------------------------------+
| 1bff74a6-e4e2-42fb-a81b-33c9c144987c | k8s-pod-net | 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 |
| d4be7efc-b84d-480e-a1db-34205877e6c4 | k8s-service-net | 55405e9d-4e25-4a55-bac2-e25ee88584e1 |
+--------------------------------------+-----------------+--------------------------------------+

And similarly for the subnets::
And similarly for the subnets:

$ openstack subnet list --project k8s
+--------------------------------------+--------------------+--------------------------------------+---------------+
| ID | Name | Network | Subnet |
+--------------------------------------+--------------------+--------------------------------------+---------------+
| 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 | k8s-pod-subnet | 1bff74a6-e4e2-42fb-a81b-33c9c144987c | 10.0.0.64/26 |
| 55405e9d-4e25-4a55-bac2-e25ee88584e1 | k8s-service-subnet | d4be7efc-b84d-480e-a1db-34205877e6c4 | 10.0.0.128/26 |
+--------------------------------------+--------------------+--------------------------------------+---------------+
.. code-block:: console

$ openstack subnet list --project k8s
+--------------------------------------+--------------------+--------------------------------------+---------------+
| ID | Name | Network | Subnet |
+--------------------------------------+--------------------+--------------------------------------+---------------+
| 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 | k8s-pod-subnet | 1bff74a6-e4e2-42fb-a81b-33c9c144987c | 10.0.0.64/26 |
| 55405e9d-4e25-4a55-bac2-e25ee88584e1 | k8s-service-subnet | d4be7efc-b84d-480e-a1db-34205877e6c4 | 10.0.0.128/26 |
+--------------------------------------+--------------------+--------------------------------------+---------------+

In addition to that, security groups for both pods and services are created
too::
In addition to that, security groups for both pods and services are created too:

$ openstack security group list --project k8s
+--------------------------------------+--------------------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+--------------------+------------------------+----------------------------------+
| 00fd78f9-484d-4ea7-b677-82f73c54064a | service_pod_access | service_pod_access | 49e2683370f245e38ac2d6a8c16697b3 |
| fe7cee41-6021-4d7b-ab03-1ce1e391a1ca | default | Default security group | 49e2683370f245e38ac2d6a8c16697b3 |
+--------------------------------------+--------------------+------------------------+----------------------------------+
.. code-block:: console

$ openstack security group list --project k8s
+--------------------------------------+--------------------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+--------------------+------------------------+----------------------------------+
| 00fd78f9-484d-4ea7-b677-82f73c54064a | service_pod_access | service_pod_access | 49e2683370f245e38ac2d6a8c16697b3 |
| fe7cee41-6021-4d7b-ab03-1ce1e391a1ca | default | Default security group | 49e2683370f245e38ac2d6a8c16697b3 |
+--------------------------------------+--------------------+------------------------+----------------------------------+

And finally, the loadbalancer for the kubernetes API service is also created,
with the subsequence listener, pool and added members::

$ openstack loadbalancer list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| id | name | tenant_id | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| 7d0cf5b5-b164-4b32-87d3-ae6c82513927 | default/kubernetes | 47c28e562795468ea52e92226e3bc7b1 | 10.0.0.129 | ACTIVE | haproxy |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

$ openstack loadbalancer listener list
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| id | default_pool_id | name | tenant_id | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| abfbafd8-7609-4b7d-9def-4edddf2b887b | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | HTTPS | 443 | True |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+

$ openstack loadbalancer pool list
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| id | name | tenant_id | lb_algorithm | protocol | admin_state_up |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | ROUND_ROBIN | HTTPS | True |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+

$ openstack loadbalancer member list default/kubernetes:443
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| id | name | tenant_id | address | protocol_port | weight | subnet_id | admin_state_up |
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| 5ddceaff-180b-47fa-b787-8921f4591cb0 | | 47c28e562795468ea52e92226e3bc7b1 | 192.168.5.10 | 6443 | 1 | b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b | True |
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
with the subsequence listener, pool and added members:

.. code-block:: console

$ openstack loadbalancer list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| id | name | tenant_id | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| 7d0cf5b5-b164-4b32-87d3-ae6c82513927 | default/kubernetes | 47c28e562795468ea52e92226e3bc7b1 | 10.0.0.129 | ACTIVE | haproxy |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

$ openstack loadbalancer listener list
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| id | default_pool_id | name | tenant_id | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| abfbafd8-7609-4b7d-9def-4edddf2b887b | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | HTTPS | 443 | True |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+

$ openstack loadbalancer pool list
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| id | name | tenant_id | lb_algorithm | protocol | admin_state_up |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | ROUND_ROBIN | HTTPS | True |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+

$ openstack loadbalancer member list default/kubernetes:443
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| id | name | tenant_id | address | protocol_port | weight | subnet_id | admin_state_up |
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| 5ddceaff-180b-47fa-b787-8921f4591cb0 | | 47c28e562795468ea52e92226e3bc7b1 | 192.168.5.10 | 6443 | 1 | b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b | True |
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+

doc/source/installation/devstack/basic.rst (+99, -77)

@@ -14,25 +14,33 @@ and dependencies of both systems.
Cloning required repositories
-----------------------------

First of all you need to clone DevStack: ::
First of all you need to clone DevStack:

$ git clone https://opendev.org/openstack-dev/devstack
.. code-block:: console

Create user *stack*, give it required permissions and log in as that user: ::
$ git clone https://opendev.org/openstack-dev/devstack

$ ./devstack/tools/create-stack-user.sh
$ sudo su stack
Create user *stack*, give it required permissions and log in as that user:

.. code-block:: console

$ ./devstack/tools/create-stack-user.sh
$ sudo su stack

*stack* user has ``/opt/stack`` set as its home directory. It will need its own
repository with DevStack. Also clone kuryr-kubernetes: ::
repository with DevStack. Also clone kuryr-kubernetes:

.. code-block:: console

$ git clone https://opendev.org/openstack-dev/devstack
$ git clone https://opendev.org/openstack/kuryr-kubernetes
$ git clone https://opendev.org/openstack-dev/devstack
$ git clone https://opendev.org/openstack/kuryr-kubernetes

Copy sample ``local.conf`` (DevStack configuration file) to devstack
directory: ::
directory:

$ cp kuryr-kubernetes/devstack/local.conf.sample devstack/local.conf
.. code-block:: console

$ cp kuryr-kubernetes/devstack/local.conf.sample devstack/local.conf

.. note::

@@ -51,86 +59,100 @@ Now edit ``devstack/local.conf`` to set up some initial options:
* If you already have Docker installed on the machine, you can comment out line
starting with ``enable_plugin devstack-plugin-container``.

Once ``local.conf`` is configured, you can start the installation: ::
Once ``local.conf`` is configured, you can start the installation:

.. code-block:: console

$ ./devstack/stack.sh
$ ./devstack/stack.sh

Installation takes from 15 to 30 minutes. Once that's done you should see
similar output: ::

=========================
DevStack Component Timing
(times are in seconds)
=========================
run_process 5
test_with_retry 2
pip_install 48
osc 121
wait_for_service 1
yum_install 31
dbsync 27
-------------------------
Unaccounted time 125
=========================
Total runtime 360



This is your host IP address: 192.168.101.249
This is your host IPv6 address: fec0::5054:ff:feb0:213a
Keystone is serving at http://192.168.101.249/identity/
The default users are: admin and demo
The password: password

WARNING:
Using lib/neutron-legacy is deprecated, and it will be removed in the future


Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html

DevStack Version: queens
Change: 301d4d1678c3c1342abc03e51a74574f7792a58b Merge "Use "pip list" in check_libs_from_git" 2017-10-04 07:22:59 +0000
OS Version: CentOS 7.4.1708 Core

You can test DevStack by sourcing credentials and trying some commands: ::

$ source /devstack/openrc admin admin
$ openstack service list
+----------------------------------+------------------+------------------+
| ID | Name | Type |
+----------------------------------+------------------+------------------+
| 091e3e2813cc4904b74b60c41e8a98b3 | kuryr-kubernetes | kuryr-kubernetes |
| 2b6076dd5fc04bf180e935f78c12d431 | neutron | network |
| b598216086944714aed2c233123fc22d | keystone | identity |
+----------------------------------+------------------+------------------+
similar output:

.. code-block:: console

=========================
DevStack Component Timing
(times are in seconds)
=========================
run_process 5
test_with_retry 2
pip_install 48
osc 121
wait_for_service 1
yum_install 31
dbsync 27
-------------------------
Unaccounted time 125
=========================
Total runtime 360



This is your host IP address: 192.168.101.249
This is your host IPv6 address: fec0::5054:ff:feb0:213a
Keystone is serving at http://192.168.101.249/identity/
The default users are: admin and demo
The password: password

WARNING:
Using lib/neutron-legacy is deprecated, and it will be removed in the future


Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html

DevStack Version: queens
Change: 301d4d1678c3c1342abc03e51a74574f7792a58b Merge "Use "pip list" in check_libs_from_git" 2017-10-04 07:22:59 +0000
OS Version: CentOS 7.4.1708 Core

You can test DevStack by sourcing credentials and trying some commands:

.. code-block:: console

$ source /devstack/openrc admin admin
$ openstack service list
+----------------------------------+------------------+------------------+
| ID | Name | Type |
+----------------------------------+------------------+------------------+
| 091e3e2813cc4904b74b60c41e8a98b3 | kuryr-kubernetes | kuryr-kubernetes |
| 2b6076dd5fc04bf180e935f78c12d431 | neutron | network |
| b598216086944714aed2c233123fc22d | keystone | identity |
+----------------------------------+------------------+------------------+

To verify if Kubernetes is running properly, list its nodes and check status of
the only node you should have. The correct value is "Ready": ::
the only node you should have. The correct value is "Ready":

.. code-block:: console

$ kubectl get nodes
NAME STATUS AGE VERSION
localhost Ready 2m v1.6.2

$ kubectl get nodes
NAME STATUS AGE VERSION
localhost Ready 2m v1.6.2
To test kuryr-kubernetes itself try creating a Kubernetes pod:

To test kuryr-kubernetes itself try creating a Kubernetes pod: ::
.. code-block:: console

$ kubectl run --image busybox test -- sleep 3600
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test-3202410914-1dp7g 0/1 ContainerCreating 0 7s <none> localhost
$ kubectl run --image busybox test -- sleep 3600
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test-3202410914-1dp7g 0/1 ContainerCreating 0 7s <none> localhost

After a moment (even up to few minutes as Docker image needs to be downloaded)
you should see that pod got the IP from OpenStack network: ::
you should see that pod got the IP from OpenStack network:

.. code-block:: console

$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test-3202410914-1dp7g 1/1 Running 0 35s 10.0.0.73 localhost

$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test-3202410914-1dp7g 1/1 Running 0 35s 10.0.0.73 localhost
You can verify that this IP is really assigned to Neutron port:

You can verify that this IP is really assigned to Neutron port: ::
.. code-block:: console

[stack@localhost kuryr-kubernetes]$ openstack port list | grep 10.0.0.73
| 3ce7fd13-ad0a-4e92-9b6f-0d38d50b1699 | | fa:16:3e:8e:f4:30 | ip_address='10.0.0.73', subnet_id='ddfbc8e9-68da-48f9-8a05-238ea0607e0d' | ACTIVE |
[stack@localhost kuryr-kubernetes]$ openstack port list | grep 10.0.0.73
| 3ce7fd13-ad0a-4e92-9b6f-0d38d50b1699 | | fa:16:3e:8e:f4:30 | ip_address='10.0.0.73', subnet_id='ddfbc8e9-68da-48f9-8a05-238ea0607e0d' | ACTIVE |

If those steps were successful, then it looks like your DevStack with
kuryr-kubernetes is working correctly. In case of errors, copy last ~50 lines

doc/source/installation/devstack/containerized.rst (+20, -12)

@@ -12,9 +12,11 @@ Installation

To configure DevStack to install Kuryr services as containerized Kubernetes
resources, you need to switch ``KURYR_K8S_CONTAINERIZED_DEPLOYMENT``. Add this
line to your ``local.conf``: ::
line to your ``local.conf``:

KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True
.. code-block:: ini

KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True

This will trigger building the kuryr-controller and kuryr-cni containers during
installation, as well as will deploy those on Kubernetes cluster it installed.
@@ -32,9 +34,11 @@ Changing configuration
----------------------

To change kuryr.conf files that are put into containers you need to edit the
associated ConfigMap. On DevStack deployment this can be done using: ::
associated ConfigMap. On DevStack deployment this can be done using:

.. code-block:: console

$ kubectl -n kube-system edit cm kuryr-config
$ kubectl -n kube-system edit cm kuryr-config

Then the editor will appear that will let you edit the config map. Make sure to
keep correct indentation when doing changes. Also note that there are two files
@@ -54,11 +58,13 @@ kuryr-controller
~~~~~~~~~~~~~~~~

To restart kuryr-controller and let it load new image and configuration, simply
kill existing pod: ::
kill existing pod:

$ kubectl -n kube-system get pods
<find kuryr-controller pod you want to restart>
$ kubectl -n kube-system delete pod <pod-name>
.. code-block:: console

$ kubectl -n kube-system get pods
<find kuryr-controller pod you want to restart>
$ kubectl -n kube-system delete pod <pod-name>

Deployment controller will make sure to restart the pod with new configuration.

@@ -71,8 +77,10 @@ actually idling with ``sleep infinity`` once all the files are copied into
correct locations on Kubernetes host.

You can force it to redeploy new files by killing it. DaemonSet controller
should make sure to restart it with new image and configuration files. ::
should make sure to restart it with new image and configuration files.

.. code-block:: console

$ kubectl -n kube-system get pods
<find kuryr-cni pods you want to restart>
$ kubectl -n kube-system delete pod <pod-name1> <pod-name2> <...>
$ kubectl -n kube-system get pods
<find kuryr-cni pods you want to restart>
$ kubectl -n kube-system delete pod <pod-name1> <pod-name2> <...>

doc/source/installation/devstack/dragonflow_support.rst (+32, -25)

@@ -38,18 +38,18 @@ to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).

2. Create the ``stack`` user.

::
.. code-block:: console

$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh

3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

::
.. code-block:: console

$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git

4. Configure DevStack to use Dragonflow.

@@ -58,10 +58,10 @@ you can start with. You may change some values for the various variables in
that file, like password settings or what LBaaS service provider to use.
Feel free to edit it if you'd like, but it should work as-is.

::
.. code-block:: console

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.df.sample local.conf
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.df.sample local.conf


Optionally, the ports pool functionality can be enabled by following:
@@ -74,18 +74,21 @@ Optionally, the ports pool funcionality can be enabled by following:
Expect it to take a while. It installs required packages, clones a bunch
of git repos, and installs everything from these git repos.

::
.. code-block:: console

$ ./stack.sh

$ ./stack.sh

Once DevStack completes successfully, you should see output that looks
something like this::
something like this:

.. code-block:: console

This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
Keystone is serving at http://192.168.5.10/identity/
The default users are: admin and demo
The password: pass
This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
Keystone is serving at http://192.168.5.10/identity/
The default users are: admin and demo
The password: pass


6. Extra configurations.
@@ -93,9 +96,9 @@ something like this::
Create a NAT rule that will cause "external" traffic from your instances to get
rewritten to your network controller's IP address and sent out on the network:

::
.. code-block:: console

$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
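
To double-check that the MASQUERADE rule is in place, list the NAT table, for
example:

.. code-block:: console

   $ sudo iptables -t nat -L POSTROUTING -n --line-numbers | head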


Inspect default Configuration
@@ -134,10 +137,12 @@ Undercloud deployment

The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, the only difference being the sample
local.conf to use (step 4), in this case::
local.conf to use (step 4), in this case:

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf
.. code-block:: console

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf


The main differences with the default dragonflow local.conf sample are that:
@@ -172,9 +177,11 @@ Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without Dragonflow integration, i.e., the
same steps as for ML2/OVS:

1. Log in to the VM::
1. Log in to the VM:

.. code-block:: console

$ ssh -i id_rsa_demo centos@FLOATING_IP
$ ssh -i id_rsa_demo centos@FLOATING_IP

2. Deploy devstack following steps 3 and 4 detailed at
`How to try out nested-pods locally (VLAN + trunk)`_.

+ 19
- 11
doc/source/installation/devstack/nested-macvlan.rst

@@ -23,23 +23,31 @@ nested MACVLAN driver rather than VLAN and trunk ports.
4. Once devstack is done and all services are up inside the VM, the next steps are to
configure the missing information at ``/etc/kuryr/kuryr.conf``:

- Configure worker VMs subnet::
- Configure worker VMs subnet:

[pod_vif_nested]
worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>
.. code-block:: ini

- Configure "pod_vif_driver" as "nested-macvlan"::
[pod_vif_nested]
worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>

[kubernetes]
pod_vif_driver = nested-macvlan
- Configure "pod_vif_driver" as "nested-macvlan":

- Configure binding section::
.. code-block:: ini

[binding]
link_iface = <VM interface name eg. eth0>
[kubernetes]
pod_vif_driver = nested-macvlan

- Restart kuryr-k8s-controller::
- Configure binding section:

sudo systemctl restart devstack@kuryr-kubernetes.service
.. code-block:: ini

[binding]
link_iface = <VM interface name eg. eth0>

- Restart kuryr-k8s-controller:

.. code-block:: console

$ sudo systemctl restart devstack@kuryr-kubernetes.service

Now launch pods using kubectl; the undercloud Neutron will serve the networking.
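
For example, a minimal smoke test could look like the following (the image is
only an illustration; any image that keeps running will do):

.. code-block:: console

   $ kubectl create deployment demo --image=quay.io/kuryr/demo
   $ kubectl get pods -o wide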

+ 39
- 23
doc/source/installation/devstack/nested-vlan.rst

@@ -9,10 +9,12 @@ for the VM:

1. To install OpenStack services run devstack with
``devstack/local.conf.pod-in-vm.undercloud.sample``. Ensure that "trunk"
service plugin is enabled in ``/etc/neutron/neutron.conf``::
service plugin is enabled in ``/etc/neutron/neutron.conf``:

[DEFAULT]
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin
.. code-block:: ini

[DEFAULT]
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin

2. Launch a VM with `Neutron trunk port.
<https://wiki.openstack.org/wiki/Neutron/TrunkPort>`_. The next steps can be
@@ -26,21 +28,26 @@ for the VM:
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``,
but first fill in the needed information:

- Point to the undercloud deployment by setting::
- Point to the undercloud deployment by setting:

SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
.. code-block:: bash

SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP

- Fill in the subnetpool id of the undercloud deployment, as well as
the router where the new pod and service networks need to be
connected::
connected:

.. code-block:: bash

KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
KURYR_NEUTRON_DEFAULT_ROUTER=router1
KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
KURYR_NEUTRON_DEFAULT_ROUTER=router1

- Ensure the nested-vlan driver is selected by setting::
- Ensure the nested-vlan driver is selected by setting:

KURYR_POD_VIF_DRIVER=nested-vlan
.. code-block:: bash

KURYR_POD_VIF_DRIVER=nested-vlan

- Optionally, the ports pool functionality can be enabled by following:
`How to enable ports pool with devstack`_.
@@ -48,31 +55,40 @@ for the VM:
.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html

- [OPTIONAL] If you want to enable the subport pools driver and the
VIF Pool Manager, you need to include::
VIF Pool Manager, you need to include:

KURYR_VIF_POOL_MANAGER=True
.. code-block:: bash

KURYR_VIF_POOL_MANAGER=True

4. Once devstack is done and all services are up inside the VM, the next steps are to
configure the missing information at ``/etc/kuryr/kuryr.conf``:

- Configure worker VMs subnet::
- Configure worker VMs subnet:

.. code-block:: ini

[pod_vif_nested]
worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>

- Configure binding section:

.. code-block:: ini

[pod_vif_nested]
worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>
[binding]
driver = kuryr.lib.binding.drivers.vlan
link_iface = <VM interface name eg. eth0>

- Configure binding section::
- Restart kuryr-k8s-controller:

[binding]
driver = kuryr.lib.binding.drivers.vlan
link_iface = <VM interface name eg. eth0>
.. code-block:: console

- Restart kuryr-k8s-controller::
$ sudo systemctl restart devstack@kuryr-kubernetes.service

sudo systemctl restart devstack@kuryr-kubernetes.service
- Restart kuryr-daemon:

- Restart kuryr-daemon::
.. code-block:: console

sudo systemctl restart devstack@kuryr-daemon.service
$ sudo systemctl restart devstack@kuryr-daemon.service

Now launch pods using kubectl; the undercloud Neutron will serve the networking.
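
After launching a pod, you can cross-check in the undercloud that Kuryr created
a matching Neutron port, for instance by grepping for the pod IP reported by
kubectl:

.. code-block:: console

   $ kubectl get pods -o wide
   $ openstack port list | grep <POD_IP>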

+ 35
- 32
doc/source/installation/devstack/odl_support.rst

@@ -33,18 +33,18 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).

2. Create the ``stack`` user.

::
.. code-block:: console

$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh

3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

::
.. code-block:: console

$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git

4. Configure DevStack to use ODL.

@@ -53,10 +53,10 @@ can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to use.
Feel free to edit it if you'd like, but it should work as-is.

::
.. code-block:: console

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf


Optionally, the ports pool functionality can be enabled by following:
@@ -69,41 +69,41 @@ Optionally, the ports pool funcionality can be enabled by following:
This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.

::
.. code-block:: console

$ ./stack.sh
$ ./stack.sh

Once DevStack completes successfully, you should see output that looks
something like this::
something like this:

This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
Keystone is serving at http://192.168.5.10/identity/
The default users are: admin and demo
The password: pass
.. code-block:: console

This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
Keystone is serving at http://192.168.5.10/identity/
The default users are: admin and demo
The password: pass

6. Extra configurations.

Devstack does not wire up the public network by default, so we must do
some extra steps for floating IP usage as well as external connectivity:

::

$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
.. code-block:: console

$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex

Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's
IP address and sent out on the network:

::
.. code-block:: console

$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
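
A quick way to confirm the wiring took effect is to inspect br-ex and the
related route afterwards, for example:

.. code-block:: console

   $ ip addr show br-ex
   $ ip route | grep 172.24.4.0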


Inspect default Configuration
@@ -142,11 +142,12 @@ Undercloud deployment

The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, the only difference being the sample
local.conf to use (step 4), in this case::
local.conf to use (step 4), in this case:

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.odl.sample local.conf
.. code-block:: console

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.odl.sample local.conf

The main differences with the default odl local.conf sample are that:

@@ -179,9 +180,11 @@ Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without ODL integration, i.e., the
same steps as for ML2/OVS:

1. Log in to the VM::
1. Log in to the VM:

.. code-block:: console

$ ssh -i id_rsa_demo centos@FLOATING_IP
$ ssh -i id_rsa_demo centos@FLOATING_IP

2. Deploy devstack following steps 3 and 4 detailed at
`How to try out nested-pods locally (VLAN + trunk)`_.

+ 28
- 27
doc/source/installation/devstack/ovn_support.rst

@@ -30,18 +30,18 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).

2. Create the ``stack`` user.

::
.. code-block:: console

$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh

3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

::
.. code-block:: console

$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git

4. Configure DevStack to use OVN.

@@ -50,11 +50,10 @@ can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to use.
Feel free to edit it if you'd like, but it should work as-is.

::

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf
.. code-block:: console

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf

Note that due to OVN compiling OVS from source at
/usr/local/var/run/openvswitch we need to state in the local.conf that the path
@@ -68,9 +67,9 @@ Optionally, the ports pool functionality can be enabled by following:
This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.

::
.. code-block:: console

$ ./stack.sh
$ ./stack.sh

Once DevStack completes successfully, you should see output that looks
something like this::
@@ -87,22 +86,21 @@ something like this::
Devstack does not wire up the public network by default, so we must do
some extra steps for floating IP usage as well as external connectivity:

::

$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
.. code-block:: console

$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex

Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's
IP address and sent out on the network:

::
.. code-block:: console

$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE


Inspect default Configuration
@@ -136,11 +134,12 @@ Undercloud deployment

The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, the only difference being the sample
local.conf to use (step 4), in this case::
local.conf to use (step 4), in this case:

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf
.. code-block:: console

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf

The main differences with the default ovn local.conf sample are that:

@@ -171,9 +170,11 @@ Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without OVN integration, i.e., the
same steps as for ML2/OVS:

1. Log in to the VM::
1. Log in to the VM:

.. code-block:: console

$ ssh -i id_rsa_demo centos@FLOATING_IP
$ ssh -i id_rsa_demo centos@FLOATING_IP

2. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`


+ 18
- 13
doc/source/installation/devstack/ports-pool.rst

@@ -5,31 +5,36 @@ How to enable ports pool with devstack
To enable the ports pool feature through devstack, the following options
need to be set in the local.conf file:

1. First, you need to enable the pools by setting::
1. First, you need to enable the pools by setting:

KURYR_USE_PORT_POOLS=True
.. code-block:: bash

KURYR_USE_PORT_POOLS=True

2. Then, the proper pool driver needs to be set. This means that for the
baremetal case you need to ensure the pod vif driver and the vif pool driver
are set to the right baremetal drivers, for instance::
are set to the right baremetal drivers, for instance:

KURYR_POD_VIF_DRIVER=neutron-vif
KURYR_VIF_POOL_DRIVER=neutron
.. code-block:: bash

KURYR_POD_VIF_DRIVER=neutron-vif
KURYR_VIF_POOL_DRIVER=neutron

And if the use case is the nested one, then they should be set to::
And if the use case is the nested one, then they should be set to:

KURYR_POD_VIF_DRIVER=nested-vlan
KURYR_VIF_POOL_DRIVER=nested
.. code-block:: bash

KURYR_POD_VIF_DRIVER=nested-vlan
KURYR_VIF_POOL_DRIVER=nested

3. Then, if you want to set a limit on the maximum number of ports, change the
default minimum number, or modify the way the pools are repopulated, both in
time and in bulk operation sizes, the next options can be included and
modified accordingly::
modified accordingly:

KURYR_PORT_POOL_MIN=5
KURYR_PORT_POOL_MAX=0
KURYR_PORT_POOL_BATCH=10
KURYR_PORT_POOL_UPDATE_FREQ=20
.. code-block:: bash

KURYR_PORT_POOL_MIN=5
KURYR_PORT_POOL_MAX=0
KURYR_PORT_POOL_BATCH=10
KURYR_PORT_POOL_UPDATE_FREQ=20
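
These variables are expected to end up as the corresponding pool options in
``/etc/kuryr/kuryr.conf`` once stacking finishes; a quick way to verify what was
written (assuming the options land in the ``[vif_pool]`` section) is:

.. code-block:: console

   $ grep -A 6 '^\[vif_pool\]' /etc/kuryr/kuryr.conf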

+ 18
- 12
doc/source/installation/https_kubernetes.rst

@@ -3,21 +3,27 @@ Watching Kubernetes api-server over HTTPS
=========================================

Add the absolute paths of the client side cert file and key file for the Kubernetes server