Remove dragonflow

Dragonflow was removed from governance in 2018 and is now being retired.
This cleans up references to dragonflow jobs and configuration.

Change-Id: Ie990da4e68e82d998768fa0c047cca4cccd59915
Signed-off-by: Sean McGinnis <>
5 changed files with 0 additions and 507 deletions

.zuul.d/sdn.yaml (+0, -29)

@@ -99,32 +99,3 @@
        KURYR_ENABLED_HANDLERS: vif,lb,lbaasspec,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork

- job:
    name: kuryr-kubernetes-tempest-dragonflow
    parent: kuryr-kubernetes-tempest
    description: |
      Kuryr-Kubernetes tempest job using Dragonflow
    required-projects:
      - openstack/dragonflow
    vars:
      devstack_localrc:
        OVS_BRANCH: master
      devstack_services:
        q-agt: false
        q-dhcp: false
        q-l3: false
        q-trunk: true
        df-redis: true
        df-redis-server: true
        df-controller: true
        df-ext-services: true
        df-l3-agent: true
    voting: false

devstack/local.conf.df.sample (+0, -210)

@@ -1,210 +0,0 @@

enable_plugin kuryr-kubernetes \

enable_plugin dragonflow

# If you do not want stacking to clone new versions of the enabled services,
# like for example when you did local modifications and need to ./
# and ./ again, uncomment the following
# RECLONE="no"

# Log settings for better readability

# Credentials
# Enable Keystone v3

# For the sake of speed and keeping the deployment lightweight, we are explicit
# about which services we enable

# DF services
enable_service df-redis
enable_service df-redis-server
enable_service df-controller

# Neutron services
enable_service neutron
enable_service q-svc

# Keystone
enable_service key

# Dependencies
enable_service mysql
enable_service rabbit

# enable DF local controller

# DF settings

# Uncomment it to use L2 communication between loadbalancer and member pods

# Octavia LBaaSv2
enable_plugin octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
### Image
### Barbican
enable_plugin barbican
### Nova
enable_service n-api
enable_service n-api-meta
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg

# By default use all the services from the kuryr-kubernetes plugin

# Docker
# ======
# If you already have docker configured, running and with its socket writable
# by the stack user, you can omit the following line.
enable_plugin devstack-plugin-container

# Etcd
# ====
# The default is for devstack to run etcd for you.
enable_service etcd3
# If you already have an etcd cluster configured and running, you can just
# comment out the lines enabling legacy_etcd and etcd3
# then uncomment and set the following line:
# KURYR_ETCD_CLIENT_URL="http://etcd_ip:etcd_client_port"
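# As an illustration only (the endpoint below is a placeholder, not a default),
# pointing at an existing external cluster would look like:
# KURYR_ETCD_CLIENT_URL="http://192.168.0.10:2379"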

# Kubernetes
# ==========
# Kubernetes is run from the hyperkube docker image
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes service (except Kubelet, which must be run by
# devstack so that it uses our development CNI driver).
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler

# We use hyperkube to run the services. You can select the hyperkube image
# and/or version by uncommenting the following ENV vars and setting them to
# values different from the defaults:
# If you have the 8080 port already bound to another service, you will need to
# have the kubernetes API server bind to another port. In order to do that,
# uncomment and set a different port number in:
# If you want to test with a different range for the Cluster IPs uncomment and
# set the following ENV var to a different CIDR
# If, however, you are reusing an existing deployment, you should uncomment and
# set an ENV var so that the Kubelet devstack runs can find the API server:
# KURYR_K8S_API_URL="http://k8s_api_ip:k8s_api_port"
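# Illustrative placeholder values only (address and port depend on your
# existing deployment):
# KURYR_K8S_API_URL="http://192.168.0.20:8080"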

# Kubelet
# =======
# Kubelet should almost invariably be run by devstack
enable_service kubelet

# You can specify a different location for the hyperkube binary that will be
# extracted from the hyperkube container into the Host filesystem:
# KURYR_HYPERKUBE_BINARY=/usr/local/bin/hyperkube
# the selected binary for the Kubelet.

# Kuryr watcher
# =============
# Just like the Kubelet, you'll want to have the watcher enabled. It is the
# part of the codebase that connects to the Kubernetes API server to read the
# resource events and convert them to Neutron actions
enable_service kuryr-kubernetes

# Kuryr Daemon
# ============
# Kuryr runs CNI plugin in daemonized way - i.e. kubelet will run kuryr CNI
# driver and the driver will pass requests to Kuryr daemon running on the node,
# instead of processing them on its own. This limits the number of Kubernetes
# API requests (as only Kuryr Daemon will watch for new pod events) and should
# increase scalability in environments that often delete and create pods.
# Since Rocky release this is a default deployment configuration.
enable_service kuryr-daemon

# Kuryr POD VIF Driver
# ====================
# Set up the VIF Driver to be used. The default one is neutron-vif, but if
# a nested deployment is desired, the corresponding driver needs to be set,
# e.g.: nested-vlan or nested-macvlan
# KURYR_POD_VIF_DRIVER=neutron-vif
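# For example, a nested (VM-based) deployment would select one of the nested
# drivers mentioned above; the exact value to use depends on your setup:
# KURYR_POD_VIF_DRIVER=nested-vlan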

# Kuryr Enabled Handlers
# ======================
# By default, some Kuryr Handlers are set for DevStack installation. This can be
# further tweaked in order to enable additional ones such as Network Policy. If
# you want to add additional handlers those can be set here:
# KURYR_ENABLED_HANDLERS = vif,lb,lbaasspec
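# For example, to also react to namespace and network policy events you could
# extend the list as below (handler names are illustrative; the valid set
# depends on your kuryr-kubernetes version):
# KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace,policy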

# Kuryr Ports Pools
# =================
# To speed up container boot time the kuryr ports pool driver can be enabled
# by uncommenting the next line, so that neutron port resources are precreated
# and ready to be used by the pods when needed
# By default the pool driver is noop, i.e., there is no pool. If you want to
# use the pool optimizations, set it to 'neutron' for the baremetal case, or
# to 'nested' for the nested case
# There are extra configuration options for the pools that can be set to decide
# on the minimum number of ports that should be ready to use at each pool, the
# maximum (0 to unset), and the batch size for the repopulation actions, i.e.,
# the number of neutron ports to create in bulk operations. Finally, the update
# frequency between actions over the pool can be set too
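# Sketch only: the variable names below are assumptions and may differ between
# kuryr-kubernetes releases, and the values are arbitrary examples. Check the
# plugin settings for the exact names before uncommenting anything like this:
# KURYR_USE_PORT_POOLS=True
# KURYR_VIF_POOL_DRIVER=neutron
# KURYR_VIF_POOL_MIN=5
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=10
# KURYR_VIF_POOL_UPDATE_FREQ=20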

# Increase Octavia amphorae timeout so that the first LB amphora has time to
# build and boot
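# One way to do this is a post-config section on the Octavia config; the
# option names below come from Octavia's [haproxy_amphora] section and the
# values are illustrative only:
# [[post-config|$OCTAVIA_CONF]]
# [haproxy_amphora]
# connection_max_retries=1500
# build_active_retries=300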


devstack/local.conf.pod-in-vm.undercloud.df.sample (+0, -77)

@@ -1,77 +0,0 @@




# Dragonflow plugin and services
enable_plugin dragonflow
enable_service df-controller
enable_service df-redis
enable_service df-redis-server
enable_service df-metadata
enable_service q-trunk

# Neutron services
disable_service n-net
enable_service q-svc
enable_service q-qos
disable_service q-l3
disable_service df-l3-agent
# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt
# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp

# Octavia LBaaSv2
enable_plugin octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
# In order to skip building the Octavia Amphora image you can fetch a
# precreated qcow image from here [1] and set up octavia to use it by
# uncommenting the following lines.
# [1]
# OCTAVIA_AMP_IMAGE_FILE=/tmp/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_NAME=test-only-amphora-x64-haproxy-ubuntu-xenial
### Image
### Barbican
enable_plugin barbican
### Nova
enable_service n-api
enable_service n-api-meta
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg


# Enable heat services if you want to deploy overcloud using Heat stack
enable_plugin heat
enable_service h-eng h-api h-api-cfn h-api-cw

disable_service tempest


doc/source/installation/devstack/dragonflow_support.rst (+0, -190)

@@ -1,190 +0,0 @@
Kuryr Kubernetes Dragonflow Integration
=======================================

Dragonflow is a distributed, modular and extendable SDN controller that
makes it possible to connect cloud network instances (VMs, containers and
bare metal servers) at scale.

Dragonflow adopts a distributed approach to mitigate the scaling issues of
large deployments. With Dragonflow the load is distributed to the compute
nodes, each running a local controller. Dragonflow manages the network
services for the OpenStack compute nodes by distributing the network topology
and policies to the compute nodes, where they are translated into OpenFlow
rules and programmed into the Open vSwitch pipeline. Network services are
implemented as applications in the local controller. OpenStack can use
Dragonflow as its network provider through the Modular Layer-2 (ML2) plugin.

Integrating with Dragonflow allows Kuryr to be used to bridge containers and
VM networking in an OpenStack deployment. Kuryr acts as the container
networking interface for Dragonflow.

Testing with DevStack
---------------------

The next points describe how to test OpenStack with Dragonflow using DevStack.
We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs.

Single Node Test Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Create a test system.

It's best to use a throwaway dev system for running DevStack. Your best bet
is to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).

#. Create the ``stack`` user.

.. code-block:: console

$ git clone
$ sudo ./devstack/tools/

#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

.. code-block:: console

$ sudo su - stack
$ git clone
$ git clone

#. Configure DevStack to use Dragonflow.

kuryr-kubernetes comes with a sample DevStack configuration file for
Dragonflow you can start with. You may change some values for the various
variables in that file, like password settings or what LBaaS service
provider to use. Feel free to edit it if you'd like, but it should work as-is.

.. code-block:: console

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.df.sample local.conf

Optionally, the ports pool functionality can be enabled by following:
`How to enable ports pool with devstack`_.

#. Run DevStack.

Expect it to take a while. It installs required packages, clones a bunch of
git repos, and installs everything from these git repos.

.. code-block:: console

$ ./

Once DevStack completes successfully, you should see output that looks
something like this:

.. code-block:: console

This is your host IP address:
This is your host IPv6 address: ::1
Keystone is serving at
The default users are: admin and demo
The password: pass

#. Extra configurations.

Create a NAT rule that will cause "external" traffic from your instances to
get rewritten to your network controller's IP address and sent out on the
network:

.. code-block:: console

$ sudo iptables -t nat -I POSTROUTING 1 -s -j MASQUERADE

Inspect default Configuration
+++++++++++++++++++++++++++++

In order to check the default configuration, in terms of networks, subnets,
security groups and load balancers created upon a successful devstack stacking,
you can check the `Inspect default Configuration`_.

Testing Network Connectivity
++++++++++++++++++++++++++++

Once the environment is ready, we can test that network connectivity works
among pods. To do that check out `Testing Network Connectivity`_.
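
As a rough sketch of what such a check involves (the image, port and resource
names below are placeholders rather than values from the referenced guide),
pod-to-pod connectivity can be exercised with:

.. code-block:: console

   $ kubectl create deployment demo --image=quay.io/kuryr/demo
   $ kubectl scale deployment/demo --replicas=2
   $ kubectl get pods -o wide
   $ kubectl exec -it <pod-name> -- curl <other-pod-ip>:8080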

Nested Containers Test Environment (VLAN)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Another deployment option is nested-vlan, where containers are created inside
OpenStack VMs by using the Trunk ports support. For this, we first need to
deploy an undercloud devstack environment with the components needed to create
VMs (e.g., Glance, Nova, Neutron, Keystone, ...), as well as the Dragonflow
configuration needed by the VM, such as enabling trunk support. Then the
overcloud deployment, with the kuryr components, is installed inside the VM.

Undercloud deployment
+++++++++++++++++++++

The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, except for the sample local.conf to
use (step 4), which in this case is:

.. code-block:: console

$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf

The main differences from the default dragonflow local.conf sample are:

- There is no need to enable the kuryr-kubernetes plugin, as it will be
  installed inside the VM (overcloud).
- There is no need to enable the kuryr related services, as they will also be
  installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
  kubernetes-controller-manager and kubernetes-scheduler.
- Nova and Glance components need to be enabled to be able to create the VM
  where we will install the overcloud.
- The Dragonflow Trunk service plugin needs to be enabled to ensure Trunk
  ports are available.

Once the undercloud deployment has finished, the next steps are related to
creating the overcloud VM by using a parent port of a Trunk, so that containers
can be created inside it with their own networks. To do that, follow the steps
detailed at `Boot VM with a Trunk Port`_.
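
For orientation only (the names, image and flavor below are placeholders; the
linked guide is authoritative), creating the trunk and booting the overcloud
VM on its parent port looks roughly like:

.. code-block:: console

   $ openstack port create --network private trunk-parent
   $ openstack network trunk create --parent-port trunk-parent trunk0
   $ openstack server create --image centos7 --flavor m1.medium \
       --nic port-id=trunk-parent --key-name demo overcloud-vm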

Overcloud deployment
++++++++++++++++++++

Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without Dragonflow integration, i.e., the
same steps as for ML2/OVS:

#. Log in to the VM:

.. code-block:: console

$ ssh -i id_rsa_demo centos@FLOATING_IP

#. Deploy devstack following steps 3 and 4 detailed at
`How to try out nested-pods locally (VLAN + trunk)`_.

Testing Nested Network Connectivity
+++++++++++++++++++++++++++++++++++

Similarly to the baremetal testing, we can create a demo deployment in the
overcloud VM, scale it to any number of pods and expose the service to check
whether the deployment was successful. To do that, check out
`Testing Nested Network Connectivity`_.
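
For example (the deployment name and ports are placeholders), exposing the
demo deployment and querying the resulting service IP would look like:

.. code-block:: console

   $ kubectl expose deployment/demo --port=80 --target-port=8080
   $ kubectl get svc demo
   $ curl <service-cluster-ip>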

.. _How to enable ports pool with devstack:
.. _Inspect default Configuration:
.. _Testing Network Connectivity:
.. _Boot VM with a Trunk Port:
.. _How to try out nested-pods locally (VLAN + trunk):
.. _Testing Nested Network Connectivity:

doc/source/installation/devstack/index.rst (+0, -1)

@@ -38,6 +38,5 @@ ML2 drivers.
   dragonflow_support