OpenDaylight support: Installation & Configuration

Partially Implements blueprint kuryr-k8s-odl-integration

Change-Id: I27309b2fbd45874e8b6fa0d81851c5007ddc88c2
Luis Tomas Bolivar 2017-07-27 16:51:50 +02:00
parent d47fa2e498
commit 0c5b37c2ca
10 changed files with 619 additions and 6 deletions


@@ -37,7 +37,11 @@ enable_service q-svc
enable_plugin neutron-lbaas \
git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
# Currently there is a problem with the ODL LBaaS driver integration, so we
# fall back to the default neutron one
#NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:opendaylight:networking_odl.lbaas.driver_v2.OpenDaylightLbaasDriverV2:default"
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
ODL_MODE=allinone
ODL_RELEASE=carbon-snapshot-0.6
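# After stacking you can double check which LBaaSv2 provider devstack ended up
# configuring (a hedged example; the exact neutron config file may vary
# between branches):
#   grep -r service_provider /etc/neutron/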


@@ -0,0 +1,85 @@
[[local|localrc]]
# If you do not want stacking to clone new versions of the enabled services,
# like for example when you did local modifications and need to ./unstack.sh
# and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# If you want the screen tabs logged in a specific location, you can use:
# SCREEN_LOGDIR="${HOME}/devstack_logs"
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3
# For the sake of speed and to keep the deployment lightweight, we are
# explicit about which services we enable
ENABLED_SERVICES=""
# Neutron services
enable_service neutron
enable_service q-dhcp
enable_service q-svc
enable_service q-meta
### Nova
enable_service n-api
enable_service n-api-meta
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg
### Neutron-lbaas
# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
# Currently there is a problem with the ODL LBaaS driver integration, so we
# fall back to the default neutron one
#NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:opendaylight:networking_odl.lbaas.driver_v2.OpenDaylightLbaasDriverV2:default"
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
# Keystone
enable_service key
# dependencies
enable_service mysql
enable_service rabbit
ODL_MODE=allinone
ODL_RELEASE=carbon-snapshot-0.6
Q_USE_PUBLIC_VETH=False
PUBLIC_BRIDGE=br-ex
PUBLIC_PHYSICAL_NETWORK=public
ODL_PROVIDER_MAPPINGS=public:br-ex
ODL_L3=True
ODL_NETVIRT_KARAF_FEATURE=odl-neutron-service,odl-restconf-all,odl-aaa-authn,odl-dlux-core,odl-mdsal-apidocs,odl-netvirt-openstack,odl-neutron-logger,odl-neutron-hostconfig-ovs
ODL_PORT_BINDING_CONTROLLER=pseudo-agentdb-binding
ODL_TIMEOUT=60
ODL_V2DRIVER=True
ODL_NETVIRT_DEBUG_LOGS=True
Q_SERVICE_PLUGIN_CLASSES=trunk
EBTABLES_RACE_FIX=True
enable_plugin networking-odl http://git.openstack.org/openstack/networking-odl
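# Once stacking completes, a quick sanity check that networking-odl is wired
# up is to list the Neutron agents; with pseudo-agentdb binding you should see
# ODL-backed entries (a hedged example, the exact agent names depend on the
# ODL release):
#   openstack network agent list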


@@ -0,0 +1,85 @@
Inspect default Configuration
=============================
By default, DevStack creates networks called ``private`` and ``public``::
$ openstack network list --project demo
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 12bc346b-35ed-4cfa-855b-389305c05740 | private | 1ee73076-e01e-4cec-a3a4-cbb275f94d0f, 8376a091-dcea-4ed5-b738-c16446e861da |
+--------------------------------------+---------+----------------------------------------------------------------------------+
$ openstack network list --project admin
+--------------------------------------+--------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------+----------------------------------------------------------------------------+
| 646baf54-6178-4a26-a52b-68ad0ba1e057 | public | 00e0b1e4-4bee-4204-bd02-610291c56334, b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b |
+--------------------------------------+--------+----------------------------------------------------------------------------+
And kuryr-kubernetes creates two extra ones for the kubernetes services and
pods under the project k8s::
$ openstack network list --project k8s
+--------------------------------------+-----------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-----------------+--------------------------------------+
| 1bff74a6-e4e2-42fb-a81b-33c9c144987c | k8s-pod-net | 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 |
| d4be7efc-b84d-480e-a1db-34205877e6c4 | k8s-service-net | 55405e9d-4e25-4a55-bac2-e25ee88584e1 |
+--------------------------------------+-----------------+--------------------------------------+
And similarly for the subnets::
$ openstack subnet list --project k8s
+--------------------------------------+--------------------+--------------------------------------+---------------+
| ID | Name | Network | Subnet |
+--------------------------------------+--------------------+--------------------------------------+---------------+
| 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 | k8s-pod-subnet | 1bff74a6-e4e2-42fb-a81b-33c9c144987c | 10.0.0.64/26 |
| 55405e9d-4e25-4a55-bac2-e25ee88584e1 | k8s-service-subnet | d4be7efc-b84d-480e-a1db-34205877e6c4 | 10.0.0.128/26 |
+--------------------------------------+--------------------+--------------------------------------+---------------+
In addition to that, security groups for both pods and services are created
too::
$ openstack security group list --project k8s
+--------------------------------------+--------------------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+--------------------+------------------------+----------------------------------+
| 00fd78f9-484d-4ea7-b677-82f73c54064a | service_pod_access | service_pod_access | 49e2683370f245e38ac2d6a8c16697b3 |
| fe7cee41-6021-4d7b-ab03-1ce1e391a1ca | default | Default security group | 49e2683370f245e38ac2d6a8c16697b3 |
+--------------------------------------+--------------------+------------------------+----------------------------------+
And finally, the loadbalancer for the kubernetes API service is also created,
with its corresponding listener, pool and members::
$ neutron lbaas-loadbalancer-list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| id | name | tenant_id | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| 7d0cf5b5-b164-4b32-87d3-ae6c82513927 | default/kubernetes | 47c28e562795468ea52e92226e3bc7b1 | 10.0.0.129 | ACTIVE | haproxy |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
$ neutron lbaas-listener-list
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| id | default_pool_id | name | tenant_id | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| abfbafd8-7609-4b7d-9def-4edddf2b887b | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | HTTPS | 443 | True |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
$ neutron lbaas-pool-list
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| id | name | tenant_id | lb_algorithm | protocol | admin_state_up |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | ROUND_ROBIN | HTTPS | True |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
$ neutron lbaas-member-list default/kubernetes:443
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| id | name | tenant_id | address | protocol_port | weight | subnet_id | admin_state_up |
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| 5ddceaff-180b-47fa-b787-8921f4591cb0 | | 47c28e562795468ea52e92226e3bc7b1 | 192.168.5.10 | 6443 | 1 | b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b | True |
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
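To dig deeper into any of these resources, the usual show commands can be
used, for instance (an illustrative sketch, using the names from the listings
above)::
$ openstack network show k8s-pod-net
$ neutron lbaas-loadbalancer-show default/kubernetes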


@@ -32,3 +32,4 @@ ML2 drivers.
nested-vlan
nested-macvlan
odl_support


@@ -11,17 +11,29 @@ running. 4GB memory and 2 vCPUs, is the minimum resource requirement for the VM:
[DEFAULT]
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin
2. Launch a VM with a `Neutron trunk port <https://wiki.openstack.org/wiki/Neutron/TrunkPort>`_.
To do that, the steps detailed at `Boot VM with a Trunk Port`_ can be followed.
.. todo::
Add a list of neutron commands, required to launch a trunk port
.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
3. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
- Since undercloud Neutron will be used by pods, Neutron services should be
disabled in localrc.
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``,
but first fill in the needed information:
- Point to the undercloud deployment by setting::
SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
- Fill in the subnetpool id of the undercloud deployment, as well as
the router where the new pod and service networks need to be
connected::
KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
KURYR_NEUTRON_DEFAULT_ROUTER=router1
4. Once devstack is done and all services are up inside the VM, the next step
is to configure the missing information at ``/etc/kuryr/kuryr.conf``:


@@ -0,0 +1,192 @@
=========================================
Kuryr Kubernetes OpenDayLight Integration
=========================================
OpenDaylight is a highly available, modular, extensible, scalable and
multi-protocol controller infrastructure built for SDN deployments on modern
heterogeneous multi-vendor networks.
OpenStack can use OpenDaylight as its network management provider through the
Modular Layer 2 (ML2) north-bound plug-in. OpenDaylight manages the network
flows for the OpenStack compute nodes via the OVSDB south-bound plug-in.
Integrating these allows Kuryr to be used to bridge (both baremetal and
nested) containers and VM networking in an OpenDaylight-based OpenStack
deployment. Kuryr acts as the container networking interface for OpenDaylight.
Testing with DevStack
=====================
The next points describe how to test OpenStack with ODL using DevStack.
We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment
----------------------------
1. Create a test system.
It's best to use a throwaway dev system for running DevStack. Your best bet is
to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
2. Create the ``stack`` user.
::
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
::
$ sudo su - stack
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ git clone https://git.openstack.org/openstack/kuryr-kubernetes.git
4. Configure DevStack to use ODL.
kuryr-kubernetes comes with a sample DevStack configuration file for ODL you
can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to use.
Feel free to edit it if you'd like, but it should work as-is.
::
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
5. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.
::
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this::
This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
Keystone is serving at http://192.168.5.10/identity/
The default users are: admin and demo
The password: pass
6. Extra configurations.
Devstack does not wire up the public network by default so we must do
some extra steps for floating IP usage as well as external connectivity:
::
$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's
ip address and sent out on the network:
::
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
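If external connectivity still does not work, you can verify that the NAT rule
is actually in place with (an illustrative check)::
$ sudo iptables -t nat -L POSTROUTING -n -v | grep 172.24.4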
Inspect default Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to check the default configuration created upon a successful devstack
stacking, in terms of networks, subnets, security groups and loadbalancers,
see `Inspect default Configuration`_.
.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
Testing Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the environment is ready, we can test that network connectivity works
among pods. To do that check out `Testing Network Connectivity`_.
.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
Nested Containers Test Environment (VLAN)
-----------------------------------------
Another deployment option is the nested-vlan where containers are created
inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
deploy an undercloud devstack environment with the needed components to
create VMs (e.g., Glance, Nova, Neutron, Keystone, ...), as well as the needed
ODL configurations, such as enabling the trunk support that the VM will need,
and then install the overcloud deployment inside the VM with the kuryr
components.
Undercloud deployment
~~~~~~~~~~~~~~~~~~~~~
The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, the difference being the sample
local.conf to use (step 4), in this case::
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.odl.sample local.conf
The main differences from the default ODL local.conf sample are:
- There is no need to enable the kuryr-kubernetes plugin as this will be
installed inside the VM (overcloud).
- There is no need to enable the kuryr related services as they will also
be installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
kubernetes-controller-manager and kubernetes-scheduler.
- Nova and Glance components need to be enabled to be able to create the VM
where we will install the overcloud.
- The ODL Trunk service plugin needs to be enabled to ensure Trunk ports support (see the snippet below).
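For reference, the trunk support is enabled in the ODL samples through lines
like the following (taken from the baremetal ODL sample shown earlier; treat
them as illustrative rather than exhaustive)::
enable_plugin networking-odl http://git.openstack.org/openstack/networking-odl
Q_SERVICE_PLUGIN_CLASSES=trunk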
Once the undercloud deployment has finished, the next step is to create the
overcloud VM by using a parent port of a Trunk so that containers can be
created inside it with their own networks. To do that, follow the steps
detailed at `Boot VM with a Trunk Port`_.
.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
Overcloud deployment
~~~~~~~~~~~~~~~~~~~~
Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without ODL integration, i.e., the
same steps as for ML2/OVS:
1. Log in to the VM::
$ ssh -i id_rsa_demo centos@FLOATING_IP
2. Deploy devstack following steps 3 and 4 detailed at
`How to try out nested-pods locally (VLAN + trunk)`_.
.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
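In practice this means copying ``devstack/local.conf.pod-in-vm.overcloud.sample``
to ``local.conf`` inside the VM and pointing it at the undercloud; a minimal
sketch of the variables to fill in (placeholder values as in the nested-vlan
guide)::
SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
KURYR_NEUTRON_DEFAULT_ROUTER=router1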
Testing Nested Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Similarly to the baremetal testing, we can create a demo deployment at the
overcloud VM, scale it to any number of pods and expose the service to check if
the deployment was successful. To do that check out
`Testing Nested Network Connectivity`_.
.. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html


@@ -34,3 +34,7 @@ This section describes how you can install and configure kuryr-kubernetes
services
ipv6
devstack/index
default_configuration
trunk_ports
testing_connectivity
testing_nested_connectivity


@@ -0,0 +1,131 @@
Testing Network Connectivity
============================
Once the environment is ready, we can test that network connectivity works
among pods. First we check the status of the kubernetes cluster::
$ kubectl get nodes
NAME STATUS AGE VERSION
masterodl-vm Ready 1h v1.6.2
$ kubectl get pods
No resources found.
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.129 <none> 443/TCP 1h
As we can see, this is a one node cluster with currently no pods running, and
with the kubernetes API service listening on port 443 at 10.0.0.129 (which
matches the ip assigned to the load balancer created for it).
To test proper configuration and connectivity we first create a sample
deployment with::
$ kubectl run demo --image=celebdor/kuryr-demo
deployment "demo" created
After a few seconds, the container is up and running, and a Neutron port has
been created with the same IP that was assigned to the pod::
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-2293951457-j29nb 1/1 Running 0 1m
$ kubectl describe pod demo-2293951457-j29nb | grep IP:
IP: 10.0.0.69
$ openstack port list | grep demo
| 73100cdb-84d6-4f33-93b2-e212966c65ac | demo-2293951457-j29nb | fa:16:3e:99:ac:ce | ip_address='10.0.0.69', subnet_id='3c3e18f9-d1d0-4674-b3be-9fc8561980d3' | ACTIVE |
We can then scale the deployment to 2 pods, and check connectivity between
them::
$ kubectl scale deploy/demo --replicas=2
deployment "demo" scaled
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-2293951457-gdrv2 1/1 Running 0 9s
demo-2293951457-j29nb 1/1 Running 0 14m
$ openstack port list | grep demo
| 73100cdb-84d6-4f33-93b2-e212966c65ac | demo-2293951457-j29nb | fa:16:3e:99:ac:ce | ip_address='10.0.0.69', subnet_id='3c3e18f9-d1d0-4674-b3be-9fc8561980d3' | ACTIVE |
| 95e89edd-f513-4ec8-80d0-36839725e62d | demo-2293951457-gdrv2 | fa:16:3e:e6:b4:b9 | ip_address='10.0.0.75', subnet_id='3c3e18f9-d1d0-4674-b3be-9fc8561980d3' | ACTIVE |
$ kubectl exec -it demo-2293951457-j29nb -- /bin/sh
sh-4.2$ curl 10.0.0.69:8080
demo-2293951457-j29nb: HELLO, I AM ALIVE!!!
sh-4.2$ curl 10.0.0.75:8080
demo-2293951457-gdrv2: HELLO, I AM ALIVE!!!
sh-4.2$ ping 10.0.0.75
PING 10.0.0.75 (10.0.0.75) 56(84) bytes of data.
64 bytes from 10.0.0.75: icmp_seq=1 ttl=64 time=1.14 ms
64 bytes from 10.0.0.75: icmp_seq=2 ttl=64 time=0.250 ms
Next, we expose the service so that a Neutron load balancer is created and
the traffic is load balanced among the available pods::
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.129 <none> 443/TCP 1h
$ kubectl expose deploy/demo --port=80 --target-port=8080
service "demo" exposed
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo 10.0.0.161 <none> 80/TCP 6s
kubernetes 10.0.0.129 <none> 443/TCP 1h
$ neutron lbaas-loadbalancer-list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| id | name | tenant_id | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| 7d0cf5b5-b164-4b32-87d3-ae6c82513927 | default/kubernetes | 47c28e562795468ea52e92226e3bc7b1 | 10.0.0.129 | ACTIVE | haproxy |
| c34c8d0c-a683-497f-9530-a49021e4b502 | default/demo | 49e2683370f245e38ac2d6a8c16697b3 | 10.0.0.161 | ACTIVE | haproxy |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
$ neutron lbaas-listener-list
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| id | default_pool_id | name | tenant_id | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| fc485508-c37a-48bd-9be3-898bbb7700fa | b12f00b9-44c0-430e-b1a1-e92b57247ad2 | default/demo:TCP:80 | 49e2683370f245e38ac2d6a8c16697b3 | TCP | 80 | True |
| abfbafd8-7609-4b7d-9def-4edddf2b887b | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | HTTPS | 443 | True |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
$ neutron lbaas-pool-list
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| id | name | tenant_id | lb_algorithm | protocol | admin_state_up |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | ROUND_ROBIN | HTTPS | True |
| b12f00b9-44c0-430e-b1a1-e92b57247ad2 | default/demo:TCP:80 | 49e2683370f245e38ac2d6a8c16697b3 | ROUND_ROBIN | TCP | True |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
$ neutron lbaas-member-list default/demo:TCP:80
+--------------------------------------+------------------------------------+----------------------------------+-----------+---------------+--------+--------------------------------------+----------------+
| id | name | tenant_id | address | protocol_port | weight | subnet_id | admin_state_up |
+--------------------------------------+------------------------------------+----------------------------------+-----------+---------------+--------+--------------------------------------+----------------+
| c0057ce6-64da-4613-b284-faf5477533ab | default/demo-2293951457-j29nb:8080 | 49e2683370f245e38ac2d6a8c16697b3 | 10.0.0.69 | 8080 | 1 | 55405e9d-4e25-4a55-bac2-e25ee88584e1 | True |
| 7a0c0ef9-35ce-4134-b92a-2e73f0f8fe98 | default/demo-2293951457-gdrv2:8080 | 49e2683370f245e38ac2d6a8c16697b3 | 10.0.0.75 | 8080 | 1 | 55405e9d-4e25-4a55-bac2-e25ee88584e1 | True |
+--------------------------------------+------------------------------------+----------------------------------+-----------+---------------+--------+--------------------------------------+----------------+
We can see that both pods are included as members and that the demo cluster-ip
matches the loadbalancer vip_address. In order to check the load balancing
among them, we curl the cluster-ip from one of the pods and see that each of
the pods replies in turn::
$ kubectl exec -it demo-2293951457-j29nb -- /bin/sh
sh-4.2$ curl 10.0.0.161
demo-2293951457-j29nb: HELLO, I AM ALIVE!!!
sh-4.2$ curl 10.0.0.161
demo-2293951457-gdrv2: HELLO, I AM ALIVE!!!
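When you are done testing, the demo resources can be removed again; a minimal
sketch (kuryr should then also clean up the corresponding Neutron ports and
loadbalancer)::
$ kubectl delete svc/demo
$ kubectl delete deploy/demo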


@@ -0,0 +1,54 @@
Testing Nested Network Connectivity
===================================
Similarly to the baremetal testing, we can create a demo deployment, scale it
to any number of pods and expose the service to check if the deployment was
successful::
$ kubectl run demo --image=celebdor/kuryr-demo
$ kubectl scale deploy/demo --replicas=2
$ kubectl expose deploy/demo --port=80 --target-port=8080
After a few seconds you can check that the pods are up and running and the
neutron subports have been created (and in ACTIVE status) at the undercloud::
(OVERCLOUD)
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-1575152709-4k19q 1/1 Running 0 2m
demo-1575152709-vmjwx 1/1 Running 0 12s
(UNDERCLOUD)
$ openstack port list | grep demo
| 1019bc07-fcdd-4c78-adbd-72a04dffd6ba | demo-1575152709-4k19q | fa:16:3e:b5:de:1f | ip_address='10.0.0.65', subnet_id='b98d40d1-57ac-4909-8db5-0bf0226719d8' | ACTIVE |
| 33c4d79f-4fde-4817-b672-a5ec026fa833 | demo-1575152709-vmjwx | fa:16:3e:32:58:38 | ip_address='10.0.0.70', subnet_id='b98d40d1-57ac-4909-8db5-0bf0226719d8' | ACTIVE |
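You can also confirm that those ports were attached as subports of the VM
trunk (an illustrative check, assuming the trunk is named ``trunk0`` as in the
trunk ports guide)::
(UNDERCLOUD)
$ openstack network trunk show trunk0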
Then, we can check that the service has been created, as well as the
respective loadbalancer at the undercloud::
(OVERCLOUD)
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/demo 10.0.0.171 <none> 80/TCP 1m
svc/kubernetes 10.0.0.129 <none> 443/TCP 45m
(UNDERCLOUD)
$ neutron lbaas-loadbalancer-list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| id | name | tenant_id | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| a3b85089-1fbd-47e1-a697-bbdfd0fa19e3 | default/kubernetes | 672bc45aedfe4ec7b0e90959b1029e30 | 10.0.0.129 | ACTIVE | haproxy |
| e55b3f75-15dc-4bc5-b4f4-bce65fc15aa4 | default/demo | e4757688696641218fba0bac86ff7117 | 10.0.0.171 | ACTIVE | haproxy |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
Finally, you can log in to one of the containers and curl the service IP to
check that a different pod answers each request::
$ kubectl exec -it demo-1575152709-4k19q -- /bin/sh
sh-4.2$ curl 10.0.0.171
demo-1575152709-4k19q: HELLO, I AM ALIVE!!!
sh-4.2$ curl 10.0.0.171
demo-1575152709-vmjwx: HELLO, I AM ALIVE!!!


@@ -0,0 +1,45 @@
Boot VM with a Trunk Port
=========================
To create a VM that makes use of the Neutron Trunk port support, the next
steps can be followed:
1. Use the demo tenant and create a key to be used to log in to the overcloud
VM::
$ source ~/devstack/openrc demo
$ openstack keypair create demo > id_rsa_demo
$ chmod 600 id_rsa_demo
2. Ensure the demo default security group allows ping and ssh access::
$ openstack security group rule create --protocol icmp default
$ openstack security group rule create --protocol tcp --dst-port 22 default
3. Download and import an image that supports vlans, as cirros does not::
$ wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
$ openstack image create --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2 centos7
4. Create a port for the overcloud VM and create the trunk with that port as
the parent port (untagged traffic)::
$ openstack port create --network private --security-group default port0
$ openstack network trunk create --parent-port port0 trunk0
5. Create the overcloud VM and assign a floating ip to it to be able to log
in to it::
$ openstack server create --image centos7 --flavor ds4G --nic port-id=port0 --key-name demo overcloud_vm
$ openstack floating ip create --port port0 public
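Once the floating IP is associated, you can log in to the VM with the key
created in step 1 (substitute the address returned by the previous command)::
$ ssh -i id_rsa_demo centos@FLOATING_IP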
Note that subports can be added to the trunk port and used inside the VM with
a specific vlan, 102 in the example, by doing::
$ openstack network trunk set --subport port=subport0,segmentation-type=vlan,segmentation-id=102 trunk0
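The ``subport0`` referenced above is just a regular Neutron port that must
exist beforehand; it can be created, for instance, on the same private network
(an illustrative example, the network to use is up to your deployment)::
$ openstack port create --network private subport0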