Add Neutron DVR aiopcpu support

Configure the subnode to run the extra services required
for Neutron DVR, like the L3 agent (in 'dvr' mode).

The data path between the controller and the subnode is
established via a GRE tunnel created with OVS. The collective
troubleshooting effort [1] confirmed that n-net also works
correctly with GRE tunnels set up through OVS, therefore only
one code path needs to be maintained.

[1] https://review.openstack.org/#/c/171714/12

Depends-on: I6b4e5b82958e6d29dd450f1c4c9513f6a9e5053a
Change-Id: Id8d16877f09a1128c10f5c52438f6b7726c01f8e
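The hub-and-spoke wiring described above boils down to one keyed GRE port per subnode on a shared OVS bridge, as done by the `ovs_gre_bridge` function in this change. A dry-run sketch of the command construction (commands are collected into a string rather than executed, so no OVS or root is needed; the node addresses are hypothetical examples, not values from the gate):

```shell
#!/bin/bash
# Dry-run sketch: build the ovs-vsctl commands that would wire a hub
# bridge on the primary node to each subnode with a keyed GRE port.
bridge=br-ex
host_ip=10.0.0.1              # primary node (the hub); example value
peer_ips="10.0.0.2 10.0.0.3"  # subnodes (the spokes); example values
key=1
cmds="ovs-vsctl --may-exist add-br $bridge"
for node_ip in $peer_ips; do
    (( key++ ))   # each tunnel gets its own GRE key
    cmds+=$'\n'"ovs-vsctl add-port $bridge ${bridge}_${node_ip}"
    cmds+=" -- set interface ${bridge}_${node_ip} type=gre"
    cmds+=" options:remote_ip=${node_ip} options:key=${key} options:local_ip=${host_ip}"
done
echo "$cmds"
```

Each subnode runs the mirror-image command with `local_ip` and `remote_ip` swapped, which is what the `remote_command` calls in the diff below take care of.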
Swaminathan Vasudevan 2015-02-23 17:26:54 -08:00 committed by armando-migliaccio
parent 73e0d5f7a2
commit 06e4511fed
4 changed files with 125 additions and 83 deletions


@@ -81,6 +81,13 @@ function setup_localrc {
if [[ $role = sub ]]; then
if [[ "$DEVSTACK_GATE_NEUTRON" -eq "1" ]]; then
MY_ENABLED_SERVICES="q-agt,n-cpu,ceilometer-acompute"
if [[ "$DEVSTACK_GATE_NEUTRON_DVR" -eq "1" ]]; then
# As per reference architecture described in
# https://wiki.openstack.org/wiki/Neutron/DVR
# for DVR multi-node, add the following services
# on all compute nodes (q-fwaas being optional):
MY_ENABLED_SERVICES+=",q-l3,q-fwaas,q-meta"
fi
else
MY_ENABLED_SERVICES="n-cpu,n-net,n-api-meta,ceilometer-acompute"
fi
@@ -107,7 +114,13 @@ function setup_localrc {
fi
if [[ "$DEVSTACK_GATE_NEUTRON_DVR" -eq "1" ]]; then
echo "Q_DVR_MODE=dvr_snat" >>"$localrc_file"
if [[ "$DEVSTACK_GATE_TOPOLOGY" != "aio" ]] && [[ $role = sub ]]; then
# The role for L3 agents running on compute nodes is 'dvr'
echo "Q_DVR_MODE=dvr" >>"$localrc_file"
else
# The role for L3 agents running on controller nodes is 'dvr_snat'
echo "Q_DVR_MODE=dvr_snat" >>"$localrc_file"
fi
fi
cat <<EOF >>"$localrc_file"
@@ -463,20 +476,26 @@ else
done
PRIMARY_NODE=`cat /etc/nodepool/primary_node_private`
SUB_NODES=`cat /etc/nodepool/sub_nodes_private`
NODES="$PRIMARY_NODE $SUB_NODES"
if [[ "$DEVSTACK_GATE_NEUTRON" -ne '1' ]]; then
(source $BASE/new/devstack/functions-common; install_package bridge-utils)
gre_bridge "flat_if" "pub_if" 1 $FLOATING_HOST_PREFIX $FLOATING_HOST_MASK $NODES
ovs_gre_bridge "br_pub" $PRIMARY_NODE "True" 1 \
$FLOATING_HOST_PREFIX $FLOATING_HOST_MASK \
$SUB_NODES
ovs_gre_bridge "br_flat" $PRIMARY_NODE "False" 128 \
$SUB_NODES
cat <<EOF >>"$BASE/new/devstack/sub_localrc"
FLAT_INTERFACE=flat_if
PUBLIC_INTERFACE=pub_if
FLAT_INTERFACE=br_flat
PUBLIC_INTERFACE=br_pub
MULTI_HOST=True
EOF
cat <<EOF >>"$BASE/new/devstack/localrc"
FLAT_INTERFACE=flat_if
PUBLIC_INTERFACE=pub_if
FLAT_INTERFACE=br_flat
PUBLIC_INTERFACE=br_pub
MULTI_HOST=True
EOF
elif [[ "$DEVSTACK_GATE_NEUTRON_DVR" -eq '1' ]]; then
ovs_gre_bridge "br-ex" $PRIMARY_NODE "True" 1 \
$FLOATING_HOST_PREFIX $FLOATING_HOST_MASK \
$SUB_NODES
fi
fi
# Make the workspace owned by the stack user


@@ -773,13 +773,16 @@ function remote_copy_file {
scp $ssh_opts "$src" "$dest"
}
# flat_if_name: Interface name on each host for the "flat" network
# pub_if_name: Interface name on each host for the "public" network.
# IPv4 addresses will be assigned to these interfaces using
# the details provided below.
# offset: starting key value for the gre tunnels (MUST not be overlapping)
# note that two keys are used for each subnode. one for flat
# interface and the other for the pub interface.
# This function creates an internal gre bridge to connect all external
# network bridges across the compute and network nodes.
# bridge_name: Bridge name on each host for logical l2 network
# connectivity.
# host_ip: ip address of the bridge host, reachable by all peers;
# the hub for all of our spokes.
# set_ips: Whether or not to set l3 addresses on our logical l2 network.
# This can be helpful for setting up routing tables.
# offset: starting value for gre tunnel key and the ip addr suffix
# The next two parameters are only used if set_ips is "True".
# pub_addr_prefix: The IPv4 address three octet prefix used to give compute
# nodes non conflicting addresses on the pub_if_name'd
# network. Should be provided as X.Y.Z. Offset will be
@@ -787,48 +790,54 @@ function remote_copy_file {
# resulting address.
# pub_addr_mask: the CIDR mask less the '/' for the IPv4 addresses used
# above.
# host_ip: ip address of the bridge host which is reachable for all peer
# every additinal paramater is considered as a peer host
# every additional parameter is considered as a peer host (spokes)
#
# See the nova_network_multihost_diagram.txt file in this repo for an
# illustration of what the network ends up looking like.
function gre_bridge {
local flat_if_name=$1
local pub_if_name=$2
local offset=$3
local pub_addr_prefix=$4
local pub_addr_mask=$5
local host_ip=$6
shift 6
function ovs_gre_bridge {
local install_ovs_deps="source $BASE/new/devstack/functions-common; \
install_package openvswitch-switch; \
restart_service openvswitch-switch"
local mtu=1450
local bridge_name=$1
local host_ip=$2
local set_ips=$3
local offset=$4
if [[ "$set_ips" == "True" ]] ; then
local pub_addr_prefix=$5
local pub_addr_mask=$6
shift 6
else
shift 4
fi
local peer_ips=$@
sudo brctl addbr gre_${flat_if_name}_br
sudo brctl addbr gre_${pub_if_name}_br
sudo iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
local key=$offset
for node in $peer_ips; do
sudo ip link add gretap_${flat_if_name}${key} type gretap local $host_ip remote $node key $key
sudo ip link set gretap_${flat_if_name}${key} up
remote_command $node sudo -i ip link add ${flat_if_name} type gretap local $node remote $host_ip key $key
remote_command $node sudo -i ip link set ${flat_if_name} up
sudo brctl addif gre_${flat_if_name}_br gretap_${flat_if_name}${key}
(( key++ ))
sudo ip link add gretap_${pub_if_name}${key} type gretap local $host_ip remote $node key $key
sudo ip link set gretap_${pub_if_name}${key} up
remote_command $node sudo -i ip link add ${pub_if_name} type gretap local $node remote $host_ip key $key
remote_command $node sudo -i ip link set ${pub_if_name} up
remote_command $node sudo -i ip address add ${pub_addr_prefix}.${key}/${pub_addr_mask} brd + dev ${pub_if_name}
sudo brctl addif gre_${pub_if_name}_br gretap_${pub_if_name}${key}
(( key++ ))
eval $install_ovs_deps
sudo ovs-vsctl --may-exist add-br $bridge_name
sudo ip link set mtu $mtu dev $bridge_name
if [[ "$set_ips" == "True" ]] ; then
sudo ip addr add ${pub_addr_prefix}.${offset}/${pub_addr_mask} dev ${bridge_name}
fi
for node_ip in $peer_ips; do
(( offset++ ))
# Setup the gre tunnel for the Controller/Network Node
sudo ovs-vsctl add-port $bridge_name \
${bridge_name}_${node_ip} \
-- set interface ${bridge_name}_${node_ip} type=gre \
options:remote_ip=${node_ip} \
options:key=${offset} \
options:local_ip=${host_ip}
# Now complete the gre tunnel setup for the Compute Node
remote_command $node_ip "$install_ovs_deps"
remote_command $node_ip sudo ovs-vsctl --may-exist add-br $bridge_name
remote_command $node_ip sudo ip link set mtu $mtu dev $bridge_name
remote_command $node_ip sudo ovs-vsctl add-port $bridge_name \
${bridge_name}_${host_ip} \
-- set interface ${bridge_name}_${host_ip} type=gre \
options:remote_ip=${host_ip} \
options:key=${offset} \
options:local_ip=${node_ip}
if [[ "$set_ips" == "True" ]] ; then
remote_command $node_ip \
sudo ip addr add ${pub_addr_prefix}.${offset}/${pub_addr_mask} \
dev ${bridge_name}
fi
done
sudo ip link add ${flat_if_name}_br_if type veth peer name ${flat_if_name}
sudo brctl addif gre_${flat_if_name}_br ${flat_if_name}_br_if
sudo ip link set ${flat_if_name}_br_if up
sudo ip link set ${flat_if_name} up
sudo ip link set gre_${flat_if_name}_br up
sudo ip link add ${pub_if_name}_br_if type veth peer name ${pub_if_name}
sudo brctl addif gre_${pub_if_name}_br ${pub_if_name}_br_if
sudo ip link set ${pub_if_name}_br_if up
sudo ip link set ${pub_if_name} up
sudo ip link set gre_${pub_if_name}_br up
sudo ip address add ${pub_addr_prefix}.${offset}/${pub_addr_mask} brd + dev ${pub_if_name}
}
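Reading the loop above: when `set_ips` is "True", the hub bridge takes `${pub_addr_prefix}.${offset}` and each peer takes the next suffix, which doubles as that tunnel's GRE key. A self-contained sketch of that numbering, with example inputs chosen to match the 172.24.4.x addresses in the multinode_diagram.txt file added by this change (the subnode management IPs are hypothetical):

```shell
#!/bin/bash
# Sketch of the addressing scheme ovs_gre_bridge applies when set_ips
# is "True": hub gets prefix.offset, peers get successive suffixes.
pub_addr_prefix=172.24.4          # X.Y.Z prefix, as in the gate job
pub_addr_mask=23
offset=1
peer_ips="10.1.0.2 10.1.0.3"      # hypothetical subnode addresses
hub_addr="${pub_addr_prefix}.${offset}/${pub_addr_mask}"
peer_addrs=""
for node_ip in $peer_ips; do
    (( offset++ ))                # suffix and GRE key advance together
    peer_addrs+=" ${pub_addr_prefix}.${offset}/${pub_addr_mask}"
done
echo "hub=$hub_addr peers=$peer_addrs"
```

With `offset=1` and two subnodes this yields 172.24.4.1/23 on the primary node and 172.24.4.2/23, 172.24.4.3/23 on the subnodes, exactly the layout drawn in the diagrams below.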

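The `mtu=1450` hard-coded in `ovs_gre_bridge` presumably leaves headroom for the GRE encapsulation added on a standard 1500-byte underlay; the exact margin is the author's choice, but the overhead arithmetic (assuming GRE with a 4-byte key over IPv4, as configured above) checks out:

```shell
#!/bin/bash
# Why a 1450-byte bridge MTU fits under a 1500-byte physical MTU:
# each tunnelled frame gains an outer IPv4 header plus a GRE header
# carrying the tunnel key.
physical_mtu=1500
outer_ipv4=20     # outer IPv4 header, no options
gre_base=4        # base GRE header
gre_key=4         # optional key field (options:key=... above)
max_inner=$(( physical_mtu - outer_ipv4 - gre_base - gre_key ))
bridge_mtu=1450   # value used by ovs_gre_bridge
echo "max_inner=$max_inner bridge_mtu=$bridge_mtu"
```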
multinode_diagram.txt (new file, 41 lines)

@@ -0,0 +1,41 @@
Nova Network
============
Subnode1 Primary Node Subnode2
+--------------------------+ +--------------------------+ +--------------------------+
| | | | | |
| | | | | |
| | | | | |
|172.24.4.2/23 | |172.24.4.1/23 | |172.24.4.3/23 |
|+------+ +--------+ | |+-------+ +-------+ | |+-------+ +-------+ |
||br_pub| | br_flat| | ||br_pub | |br_flat| | ||br_pub | |br_flat| |
|+--+---+ +---+----+ | |+---+---+ +---+---+ | |+---+---+ +---+---+ |
| | | | | | | | | | | |
| | +-------------------gre-tunnel--+-----------------gre-tunnel---+ |
| | | | | | | | |
| +----------gre-tunnel-----------+----------gre-tunnel----------+ |
| | | | | |
+--------------------------+ +--------------------------+ +--------------------------+
Diagram shows how the primary node acts as two separate network bridges for all of the subnodes.
Note spanning tree is not necessary because there is no loop between the flat interface bridge
and the pub interface bridge, and there are no loops within each bridge.
Neutron
=======
Subnode1 Primary Node Subnode2
+--------------------------+ +--------------------------+ +--------------------------+
| | | | | |
| | | | | |
| | | | | |
|172.24.4.2/23 | |172.24.4.1/23 | |172.24.4.3/23 |
|+------+ | |+-------+ | |+-------+ |
||br-ex | | ||br-ex | | ||br-ex | |
|+--+---+ | |+---+---+ | |+---+---+ |
| | | | | | | | |
| | | | | | | | |
| +----------gre-tunnel-----------+----------gre-tunnel----------+ |
| | | | | |
+--------------------------+ +--------------------------+ +--------------------------+

nova_network_multihost_diagram.txt (deleted)

@ -1,27 +0,0 @@
(172.24.4.2/23)
pub_if Subnode1 flat_if
| |
----------|---------------------------------------------|----------
| g g |
| r r |
gretap_pub_if2 e-pub_if_br_if e gretap_flat_if1
\ | | | /
\p pub_if(172.24.4.1/23) f/
u l
b Primary Node a
| t
/i flat_if |\
/ f | i \
gretap_pub_if4 | flat_if_br_if-f gretap_flat_if3
| b | |
| r b |
| r |
----------|---------------------------------------------|----------
| |
pub_if Subnode2 flat_if
(172.24.4.4/23)
Diagram shows how the primary node acts as two separate network bridges
for all of the subnodes. Note spanning tree is not necessary because
there is no loop between the flat interface bridge and the pub interface
bridge, and there are no loops within each bridge.