Support for MOS 7.0

The plugin can now be used on Fuel 7.0. It will install and set up the
OpenDaylight Lithium SR2 controller together with the networking_odl
driver. Users can now decide where the ODL controller will be installed
by assigning the OPENDAYLIGHT role to one of the nodes.

An experimental option for managing L3 traffic with ODL was added. It
prepares the necessary configuration on the ODL and Neutron sides and
also disables the neutron l3 agent. This feature requires further
development and should only be enabled by users who know what they are
doing.
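
A rough post-deployment check of the experimental L3 option (a sketch:
the karaf properties path is an assumption, the other paths are the
usual MOS locations):

# On a controller: the l3 agent should no longer be managed by pacemaker
crm status | grep p_neutron-l3-agent || echo "l3 agent resource removed"
# On the OPENDAYLIGHT node: l3 forwarding should be switched on in the
# ovsdb properties (path assumed under the /opt/opendaylight prefix)
grep ovsdb.l3.fwd.enabled /opt/opendaylight/etc/custom.properties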

Change-Id: I99bb9434f0e2baec52748e20551681d63d2bf1ce
Author: Michal Skalski, 2015-08-03 13:27:05 +02:00
Parent: fc689a8fe5
Commit: d7f301a1d8
23 changed files with 199 additions and 665 deletions

CHANGELOG.md (new file)

@ -0,0 +1,5 @@
## 0.7.0
- Support for MOS 7.0
- Include OpenDaylight Lithium SR2
- Introduce separate role for ODL controller


@ -17,7 +17,7 @@ Requirements
| Requirement | Version/Comment |
|----------------------------------|-----------------|
| Mirantis OpenStack compatibility | 6.1 |
| Mirantis OpenStack compatibility | 7.0 |
Recommendations
---------------
@ -27,9 +27,9 @@ None.
Limitations
-----------
* Supports only environments with Neutron
* HA for ovsdb feature is not implemented in Lithium release - one instance of ODL controller runs on primary OpenStack controller.
* L3 traffic managed by neutron agent - lack of drivers in OpenStack Juno.
* Supports only environments with Neutron.
* HA for ovsdb feature is not implemented yet.
* L3 traffic managed by neutron agent.
Installation Guide
==================
@ -37,9 +37,9 @@ Installation Guide
OpenDaylight plugin installation
----------------------------------------
1. Clone the fuel-plugin-opendaylight repo from stackforge:
1. Clone the fuel-plugin-opendaylight repo from github:
git clone https://github.com/stackforge/fuel-plugin-opendaylight
git clone https://github.com/openstack/fuel-plugin-opendaylight
2. Install the Fuel Plugin Builder:
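
(The commands for this step fall outside the hunk shown; purely for
reference, the usual fuel-plugin-builder workflow looks roughly like the
sketch below, with an illustrative rpm name.)

pip install fuel-plugin-builder
fpb --build fuel-plugin-opendaylight/
# copy the resulting rpm to the Fuel master node, then:
fuel plugins --install opendaylight-0.7-0.7.0-1.noarch.rpm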
@ -82,12 +82,11 @@ OpenDaylight plugin configuration
1. Create a new environment with the Fuel UI wizard.
2. Click on the Settings tab of the Fuel web UI.
3. Scroll down the page, select the "OpenDaylight plugin" checkbox.
Rest of configuration is optional
3. Select "OpenDaylight Lithium plugin" section.
4. Tick the checkbox and click "Save Settings" button.
5. Assign the OPENDAYLIGHT role to one of the nodes.
![OpenDaylight options](./figures/opendaylight-options.png "OpenDaylight options")
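
The role from step 5 can also be assigned from the Fuel CLI; a sketch
assuming standard Fuel 7.0 CLI syntax (node and environment IDs are
illustrative):

fuel node --env 1                                      # list nodes and current roles
fuel node set --node-id 4 --role opendaylight --env 1  # give node 4 the OPENDAYLIGHT role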
Build options
-------------
@ -123,25 +122,6 @@ Known issues
* VM live migration not supported by ODL ovsdb
* ODL ignores the MTU size from Neutron and creates tap devices for VMs with MTU 1500. Things like Jumbo frames will not work on the VM side.
Release Notes
-------------
**0.5.2**
* Initial release of the plugin. This is a beta version.
**0.6.0**
* Integrate Lithium release with OpenStack Juno.
**0.6.1**
* Integrate Lithium SR1 with OpenStack Juno.
**0.6.2**
* Fix MTU for vxlan segmentation type.
Development
===========
@ -164,7 +144,7 @@ follow the [OpenStack development workflow](
http://docs.openstack.org/infra/manual/developers.html#development-workflow).
Patch reviews take place on the [OpenStack gerrit](
https://review.openstack.org/#/q/status:open+project:stackforge/fuel-plugin-opendaylight,n,z)
https://review.openstack.org/#/q/status:open+project:openstack/fuel-plugin-opendaylight,n,z)
system.
Contributors


@ -1,5 +1,6 @@
$nodes_hash = hiera('nodes', {})
$roles = node_roles($nodes_hash, hiera('uid'))
$odl = hiera('opendaylight')
$ovs_agent_name = $operatingsystem ? {
'CentOS' => 'neutron-openvswitch-agent',
@ -10,6 +11,13 @@ if member($roles, 'primary-controller') {
cs_resource { "p_${ovs_agent_name}":
ensure => absent,
}
if $odl['enable_l3_odl'] {
cs_resource { 'p_neutron-l3-agent':
ensure => absent,
}
}
} else {
service {$ovs_agent_name:
ensure => stopped,
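
A quick check of what this manifest should leave behind (a sketch;
resource names are taken from the code above, the agent service name
differs per distro):

# Primary controller: the pacemaker resources should be gone
crm status | egrep 'p_neutron-openvswitch-agent|p_neutron-l3-agent' || echo "agents removed"
# Other nodes: the openvswitch agent is only stopped, not removed
service neutron-openvswitch-agent status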


@ -2,41 +2,9 @@ include opendaylight
$address = hiera('management_vip')
$port = $opendaylight::rest_api_port
$vni_start = $opendaylight::odl_settings['vni_range_start']
$vni_end = $opendaylight::odl_settings['vni_range_end']
$neutron_settings = hiera('quantum_settings')
$network_scheme = hiera('network_scheme', {})
prepare_network_config($network_scheme)
neutron_plugin_ml2 {
'ml2/mechanism_drivers': value => 'opendaylight';
'ml2_odl/password': value => 'admin';
'ml2_odl/username': value => 'admin';
'ml2_odl/url': value => "http://${address}:${port}/controller/nb/v2/neutron";
}
$segmentation_type = $neutron_settings['L2']['segmentation_type']
if $segmentation_type != 'vlan' {
# MTU need to be static because ODL ignore MTU value from neturon
# and always create tap interfaces for VMs with MTU 1500
if $opendaylight::odl_settings['use_vxlan'] {
neutron_plugin_ml2 {
'ml2/tenant_network_types': value => 'vxlan';
'ml2_type_vxlan/vni_ranges': value => "${vni_start}:${vni_end}";
}
$mtu = 1450
} else {
$mtu = 1458
}
neutron_config {
'DEFAULT/network_device_mtu': value => $mtu;
}
file { '/etc/neutron/dnsmasq-neutron.conf':
owner => 'root',
group => 'root',
content => template('openstack/neutron/dnsmasq-neutron.conf.erb'),
}
}
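
The settings above should land in the ml2 plugin configuration; a quick
check (the file path is the usual MOS location, it is not stated in this
manifest):

grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini   # expect: opendaylight
grep -A3 '\[ml2_odl\]' /etc/neutron/plugins/ml2/ml2_conf.ini   # url, username, password as set above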


@ -1,5 +1,10 @@
include opendaylight
$network_scheme = hiera('network_scheme', {})
$neutron_config = hiera_hash('quantum_settings')
prepare_network_config($network_scheme)
$ovs_service_name = $operatingsystem ? {
'CentOS' => 'openvswitch',
'Ubuntu' => 'openvswitch-switch',
@ -24,22 +29,26 @@ exec { 'ovs-set-manager':
path => '/usr/bin'
}
if $opendaylight::node_private_address != undef {
if $neutron_config['L2']['segmentation_type'] != 'vlan' {
$net_role_property = 'neutron/mesh'
$tunneling_ip = get_network_role_property($net_role_property, 'ipaddr')
exec { 'ovs-set-tunnel-endpoint':
command => "ovs-vsctl set Open_vSwitch $(ovs-vsctl show | head -n 1) other_config={'local_ip'='${opendaylight::node_private_address}'}",
command => "ovs-vsctl set Open_vSwitch $(ovs-vsctl show | head -n 1) other_config={'local_ip'='${tunneling_ip}'}",
path => '/usr/bin',
require => Exec['ovs-set-manager']
}
} else {
$net_role_property = 'neutron/private'
$iface = get_network_role_property($net_role_property, 'phys_dev')
exec { 'ovs-br-int-to-phy':
command => 'ovs-vsctl --may-exist add-port br-int p_br-prv-0 -- set Interface p_br-prv-0 type=internal',
command => "ovs-vsctl --may-exist add-port br-int ${iface} -- set Interface ${iface} type=internal",
path => '/usr/bin',
tries => 30,
try_sleep => 5,
require => Exec['ovs-set-manager']
}
exec { 'ovs-set-provider-mapping':
command => "ovs-vsctl set Open_vSwitch $(ovs-vsctl show | head -n 1) other_config:provider_mappings=physnet2:p_br-prv-0",
command => "ovs-vsctl set Open_vSwitch $(ovs-vsctl show | head -n 1) other_config:provider_mappings=physnet2:${iface}",
path => '/usr/bin',
require => Exec['ovs-br-int-to-phy']
}
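
A sketch of the OVS state expected after this manifest runs (the manager
target and the other_config keys come from the exec resources above):

ovs-vsctl get-manager                        # should point at the ODL controller
ovs-vsctl get Open_vSwitch . other_config    # local_ip for tunnels, or provider_mappings for vlan
ovs-vsctl list-ports br-int                  # vlan case: should include the physical interface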


@ -1,3 +1,7 @@
$bridges = ['br-floating', 'br-ex']
$patch_jacks_names = get_pair_of_jack_names($bridges)
exec { 'add-br-floating':
command => 'ovs-vsctl add-br br-floating',
unless => 'ovs-vsctl br-exists br-floating',
@ -8,6 +12,6 @@ exec { 'set-br-floating-id':
path => '/usr/bin',
} ->
exec { 'add-floating-patch':
command => 'ovs-vsctl --may-exist add-port br-floating p_br-floating-0 -- set Interface p_br-floating-0 type=internal',
command => "ovs-vsctl --may-exist add-port br-floating ${patch_jacks_names[0]} -- set Interface ${patch_jacks_names[0]} type=internal",
path => '/usr/bin',
}
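
Expected result of this task, checked by hand (bridge and patch-port
names come from the code above):

ovs-vsctl br-exists br-floating && echo "br-floating present"
ovs-vsctl list-ports br-floating    # should include the first jack name from get_pair_of_jack_names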


@ -2,32 +2,23 @@ include opendaylight
$access_hash = hiera('access', {})
$keystone_admin_tenant = $access_hash[tenant]
$neutron_settings = hiera('quantum_settings')
$nets = $neutron_settings['predefined_networks']
$neutron_config = hiera_hash('quantum_settings')
$segmentation_type = $neutron_config['L2']['segmentation_type']
$nets = $neutron_config['predefined_networks']
$odl = hiera('opendaylight')
$nodes_hash = hiera('nodes', {})
$roles = node_roles($nodes_hash, hiera('uid'))
$physnet = $nets['net04']['L2']['physnet']
$segment_id = $nets['net04']['L2']['segment_id']
$vm_net_l3 = $nets['net04']['L3']
if $opendaylight::odl_settings['use_vxlan'] {
$segmentation_type = 'vxlan'
if $segmentation_type != 'vlan' {
if $segmentation_type =='gre' {
$network_type = 'gre'
} else {
$network_type = 'vxlan'
}
} else {
$segmentation_type = $neutron_settings['L2']['segmentation_type']
$network_type = 'vlan'
}
$vm_net = { shared => false,
"L2" => { network_type => $segmentation_type,
router_ext => false,
physnet => $physnet,
segment_id => $segment_id,
},
"L3" => $vm_net_l3,
tenant => 'admin'
}
service { 'neutron-server':
ensure => running,
}
@ -44,23 +35,37 @@ if member($roles, 'primary-controller') {
path => '/usr/bin:/usr/sbin',
tries => 3,
try_sleep => 10,
} ->
exec {'refresh-l3-agent':
command => 'crm resource restart p_neutron-l3-agent',
path => '/usr/bin:/usr/sbin',
tries => 3,
try_sleep => 10,
} ->
openstack::network::create_network{'net04':
netdata => $vm_net,
require => Service['neutron-server']
} ->
openstack::network::create_network{'net04_ext':
netdata => $nets['net04_ext']
} ->
openstack::network::create_router{'router04':
internal_network => 'net04',
external_network => 'net04_ext',
tenant_name => $keystone_admin_tenant
}
unless $odl['enable_l3_odl'] {
exec {'refresh-l3-agent':
command => 'crm resource restart p_neutron-l3-agent',
path => '/usr/bin:/usr/sbin',
tries => 3,
try_sleep => 10,
}
}
if $nets and !empty($nets) {
Service<| title == 'neutron-server' |> ->
Openstack::Network::Create_network <||>
Service<| title == 'neutron-server' |> ->
Openstack::Network::Create_router <||>
openstack::network::create_network{'net04':
netdata => $nets['net04'],
segmentation_type => $network_type,
} ->
openstack::network::create_network{'net04_ext':
netdata => $nets['net04_ext'],
segmentation_type => 'local',
} ->
openstack::network::create_router{'router04':
internal_network => 'net04',
external_network => 'net04_ext',
tenant_name => $keystone_admin_tenant
}
}
}
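
After this task the predefined networks should be recreated with the
right provider type; a quick check from a controller (openrc path is the
MOS default):

source /root/openrc
neutron net-list
neutron net-show net04 | grep provider:network_type   # gre, vxlan or vlan depending on segmentation
neutron router-show router04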


@ -1,3 +1,13 @@
$odl = hiera('opendaylight')
service { 'neutron-server':
ensure => stopped,
}
package {'python-networking-odl':
ensure => installed,
}
if $odl['enable_l3_odl'] {
neutron_config { 'DEFAULT/service_plugins': value => 'networking_odl.l3.l3_odl.OpenDaylightL3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin'; }
}
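
The outcome on a controller with the L3 option enabled can be checked
directly (Ubuntu package check shown; the path is the standard neutron
location):

grep '^service_plugins' /etc/neutron/neutron.conf   # should list OpenDaylightL3RouterPlugin
dpkg -l python-networking-odl                       # the ml2 driver package installed above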


@ -1,213 +0,0 @@
require 'csv'
require 'puppet/util/inifile'
class Puppet::Provider::Neutron < Puppet::Provider
def self.conf_filename
'/etc/neutron/neutron.conf'
end
def self.withenv(hash, &block)
saved = ENV.to_hash
hash.each do |name, val|
ENV[name.to_s] = val
end
yield
ensure
ENV.clear
saved.each do |name, val|
ENV[name] = val
end
end
def self.neutron_credentials
@neutron_credentials ||= get_neutron_credentials
end
def self.get_neutron_credentials
auth_keys = ['auth_host', 'auth_port', 'auth_protocol',
'admin_tenant_name', 'admin_user', 'admin_password']
conf = neutron_conf
if conf and conf['keystone_authtoken'] and
auth_keys.all?{|k| !conf['keystone_authtoken'][k].nil?}
creds = Hash[ auth_keys.map \
{ |k| [k, conf['keystone_authtoken'][k].strip] } ]
if conf['DEFAULT'] and !conf['DEFAULT']['nova_region_name'].nil?
creds['nova_region_name'] = conf['DEFAULT']['nova_region_name']
end
return creds
else
raise(Puppet::Error, "File: #{conf_filename} does not contain all \
required sections. Neutron types will not work if neutron is not \
correctly configured.")
end
end
def neutron_credentials
self.class.neutron_credentials
end
def self.auth_endpoint
@auth_endpoint ||= get_auth_endpoint
end
def self.get_auth_endpoint
q = neutron_credentials
"#{q['auth_protocol']}://#{q['auth_host']}:#{q['auth_port']}/v2.0/"
end
def self.neutron_conf
return @neutron_conf if @neutron_conf
@neutron_conf = Puppet::Util::IniConfig::File.new
@neutron_conf.read(conf_filename)
@neutron_conf
end
def self.auth_neutron(*args)
q = neutron_credentials
authenv = {
:OS_AUTH_URL => self.auth_endpoint,
:OS_USERNAME => q['admin_user'],
:OS_TENANT_NAME => q['admin_tenant_name'],
:OS_PASSWORD => q['admin_password'],
:OS_ENDPOINT_TYPE => 'internalURL'
}
if q.key?('nova_region_name')
authenv[:OS_REGION_NAME] = q['nova_region_name']
end
rv = nil
timeout = 120
end_time = Time.now.to_i + timeout
loop do
begin
withenv authenv do
rv = neutron(args)
end
break
rescue Puppet::ExecutionFailure => e
if ! e.message =~ /(\(HTTP\s+400\))|
(400-\{\'message\'\:\s+\'\'\})|
(\[Errno 111\]\s+Connection\s+refused)|
(503\s+Service\s+Unavailable)|
(504\s+Gateway\s+Time-out)|
(\:\s+Maximum\s+attempts\s+reached)|
(Unauthorized\:\s+bad\s+credentials)|
(Max\s+retries\s+exceeded)/
raise(e)
end
current_time = Time.now.to_i
if current_time > end_time
break
else
wait = end_time - current_time
Puppet::debug("Non-fatal error: \"#{e.message}\"")
notice("Neutron API not avalaible. Wait up to #{wait} sec.")
end
sleep(2)
# Note(xarses): Don't remove, we know that there is one of the
# Recoverable erros above, So we will retry a few more times
end
end
return rv
end
def auth_neutron(*args)
self.class.auth_neutron(args)
end
def self.reset
@neutron_conf = nil
@neutron_credentials = nil
end
def self.list_neutron_resources(type)
ids = []
list = auth_neutron("#{type}-list", '--format=csv',
'--column=id', '--quote=none')
# NOTE(bogdando) contribute change to upstream #1384101:
# raise Puppet exception, if resources list is empty
if list.nil?
raise(Puppet::ExecutionFailure, "Can't prefetch #{type}-list Neutron or Keystone API is not avalaible.")
end
(list.split("\n")[1..-1] || []).compact.collect do |line|
ids << line.strip
end
return ids
end
def self.get_neutron_resource_attrs(type, id)
attrs = {}
net = auth_neutron("#{type}-show", '--format=shell', id)
# NOTE(bogdando) contribute change to upstream #1384101:
# raise Puppet exception, if list of resources' attributes is empty
if net.nil?
raise(Puppet::ExecutionFailure, "Can't prefetch #{type}-show Neutron or Keystone API is not avalaible.")
end
last_key = nil
(net.split("\n") || []).compact.collect do |line|
if line.include? '='
k, v = line.split('=', 2)
attrs[k] = v.gsub(/\A"|"\Z/, '')
last_key = k
else
# Handle the case of a list of values
v = line.gsub(/\A"|"\Z/, '')
attrs[last_key] = [attrs[last_key], v].flatten
end
end
return attrs
end
def self.list_router_ports(router_name_or_id)
results = []
cmd_output = auth_neutron("router-port-list",
'--format=csv',
router_name_or_id)
if ! cmd_output
return results
end
headers = nil
CSV.parse(cmd_output) do |row|
if headers == nil
headers = row
else
result = Hash[*headers.zip(row).flatten]
match_data = /.*"subnet_id": "(.*)", .*/.match(result['fixed_ips'])
if match_data
result['subnet_id'] = match_data[1]
end
results << result
end
end
return results
end
def self.get_tenant_id(catalog, name)
instance_type = 'keystone_tenant'
instance = catalog.resource("#{instance_type.capitalize!}[#{name}]")
if ! instance
instance = Puppet::Type.type(instance_type).instances.find do |i|
i.provider.name == name
end
end
if instance
return instance.provider.id
else
fail("Unable to find #{instance_type} for name #{name}")
end
end
def self.parse_creation_output(data)
hash = {}
data.split("\n").compact.each do |line|
if line.include? '='
hash[line.split('=').first] = line.split('=', 2)[1].gsub(/\A"|"\Z/, '')
end
end
hash
end
end


@ -1,140 +0,0 @@
require File.join(File.dirname(__FILE__), '..','..','..',
'puppet/provider/neutron')
Puppet::Type.type(:neutron_network).provide(
:neutron,
:parent => Puppet::Provider::Neutron
) do
desc <<-EOT
Neutron provider to manage neutron_network type.
Assumes that the neutron service is configured on the same host.
EOT
commands :neutron => 'neutron'
commands :keystone => 'keystone'
mk_resource_methods
def self.neutron_type
'net'
end
def self.instances
list_neutron_resources(neutron_type).collect do |id|
attrs = get_neutron_resource_attrs(neutron_type, id)
new(
:ensure => :present,
:name => attrs['name'],
:id => attrs['id'],
:admin_state_up => attrs['admin_state_up'],
:provider_network_type => attrs['provider:network_type'],
:provider_physical_network => attrs['provider:physical_network'],
:provider_segmentation_id => attrs['provider:segmentation_id'],
:router_external => attrs['router:external'],
:shared => attrs['shared'],
:tenant_id => attrs['tenant_id']
)
end
end
def self.prefetch(resources)
networks = instances
resources.keys.each do |name|
if provider = networks.find{ |net| net.name == name }
resources[name].provider = provider
end
end
end
def exists?
@property_hash[:ensure] == :present
end
def create
network_opts = Array.new
if @resource[:shared] =~ /true/i
network_opts << '--shared'
end
if @resource[:tenant_name]
tenant_id = self.class.get_tenant_id(model.catalog,
@resource[:tenant_name])
notice("***N*** neutron_network::create *** tenant_id='#{tenant_id.inspect}'")
network_opts << "--tenant_id=#{tenant_id}"
elsif @resource[:tenant_id]
network_opts << "--tenant_id=#{@resource[:tenant_id]}"
end
if @resource[:provider_network_type]
network_opts << \
"--provider:network_type=#{@resource[:provider_network_type]}"
end
if @resource[:provider_physical_network]
network_opts << \
"--provider:physical_network=#{@resource[:provider_physical_network]}"
end
if @resource[:provider_segmentation_id]
network_opts << \
"--provider:segmentation_id=#{@resource[:provider_segmentation_id]}"
end
if @resource[:router_external]
network_opts << "--router:external=#{@resource[:router_external]}"
end
results = auth_neutron('net-create', '--format=shell',
network_opts, resource[:name])
if results =~ /Created a new network:/
attrs = self.class.parse_creation_output(results)
@property_hash = {
:ensure => :present,
:name => resource[:name],
:id => attrs['id'],
:admin_state_up => attrs['admin_state_up'],
:provider_network_type => attrs['provider:network_type'],
:provider_physical_network => attrs['provider:physical_network'],
:provider_segmentation_id => attrs['provider:segmentation_id'],
:router_external => attrs['router:external'],
:shared => attrs['shared'],
:tenant_id => attrs['tenant_id'],
}
else
fail("did not get expected message on network creation, got #{results}")
end
end
def destroy
auth_neutron('net-delete', name)
@property_hash[:ensure] = :absent
end
def admin_state_up=(value)
auth_neutron('net-update', "--admin_state_up=#{value}", name)
end
def shared=(value)
auth_neutron('net-update', "--shared=#{value}", name)
end
def router_external=(value)
auth_neutron('net-update', "--router:external=#{value}", name)
end
[
:provider_network_type,
:provider_physical_network,
:provider_segmentation_id,
:tenant_id,
].each do |attr|
define_method(attr.to_s + "=") do |value|
fail("Property #{attr.to_s} does not support being updated")
end
end
end


@ -1,90 +0,0 @@
Puppet::Type.newtype(:neutron_network) do
ensurable
newparam(:name, :namevar => true) do
desc 'Symbolic name for the network'
newvalues(/.*/)
end
newproperty(:id) do
desc 'The unique id of the network'
validate do |v|
raise(Puppet::Error, 'This is a read only property')
end
end
newproperty(:admin_state_up) do
desc 'The administrative status of the network'
newvalues(/(t|T)rue/, /(f|F)alse/)
munge do |v|
v.to_s.capitalize
end
end
newproperty(:shared) do
desc 'Whether this network should be shared across all tenants or not'
newvalues(/(t|T)rue/, /(f|F)alse/)
munge do |v|
v.to_s.capitalize
end
end
newparam(:tenant_name) do
desc 'The name of the tenant which will own the network.'
end
newproperty(:tenant_id) do
desc 'A uuid identifying the tenant which will own the network.'
end
newproperty(:provider_network_type) do
desc 'The physical mechanism by which the virtual network is realized.'
newvalues(:flat, :vlan, :local, :gre, :l3_ext, :vxlan)
end
newproperty(:provider_physical_network) do
desc <<-EOT
The name of the physical network over which the virtual network
is realized for flat and VLAN networks.
EOT
newvalues(/\S+/)
end
newproperty(:provider_segmentation_id) do
desc 'Identifies an isolated segment on the physical network.'
munge do |v|
Integer(v)
end
end
newproperty(:router_external) do
desc 'Whether this router will route traffic to an external network'
newvalues(/(t|T)rue/, /(f|F)alse/)
munge do |v|
v.to_s.capitalize
end
end
# Require the neutron-server service to be running
autorequire(:service) do
['neutron-server']
end
autorequire(:keystone_tenant) do
[self[:tenant_name]] if self[:tenant_name]
end
validate do
if self[:ensure] != :present
return
end
if self[:tenant_id] && self[:tenant_name]
raise(Puppet::Error, <<-EOT
Please provide a value for only one of tenant_name and tenant_id.
EOT
)
end
end
end


@ -15,45 +15,38 @@
#
class opendaylight::ha::haproxy {
Haproxy::Service { use_include => true }
Haproxy::Balancermember { use_include => true }
$public_vip = hiera('public_vip')
$management_vip = hiera('management_vip')
$nodes_hash = hiera('nodes')
$primary_controller_nodes = filter_nodes($nodes_hash,'role','primary-controller')
$controllers = concat($primary_controller_nodes, filter_nodes($nodes_hash,'role','controller'))
$odl_controllers = filter_nodes($nodes_hash,'role','opendaylight')
Opendaylight::Ha::Haproxy_service {
server_names => filter_hash($controllers, 'name'),
ipaddresses => filter_hash($controllers, 'internal_address'),
public_virtual_ip => $public_vip,
internal_virtual_ip => $management_vip,
# defaults for any haproxy_service within this class
Openstack::Ha::Haproxy_service {
internal_virtual_ip => $management_vip,
ipaddresses => filter_hash($odl_controllers, 'internal_address'),
public_virtual_ip => $public_vip,
server_names => filter_hash($odl_controllers, 'name'),
public => true,
internal => true,
}
opendaylight::ha::haproxy_service { 'odl-jetty':
public => true,
openstack::ha::haproxy_service { 'odl-jetty':
order => '216',
listen_port => '8181',
balancermember_port => '8181',
haproxy_config_options => {
'option' => ['httpchk /dlux/index.html', 'httplog'],
'option' => ['httpchk /index.html', 'httplog'],
'timeout client' => '3h',
'timeout server' => '3h',
'balance' => 'source',
'mode' => 'http'
},
balancermember_options => 'check inter 5000 rise 2 fall 3',
balancermember_options => 'check inter 2000 fall 3',
}
opendaylight::ha::haproxy_service { 'odl-tomcat':
public => true,
openstack::ha::haproxy_service { 'odl-tomcat':
order => '215',
listen_port => $opendaylight::rest_api_port,
balancermember_port => $opendaylight::rest_api_port,
haproxy_config_options => {
'option' => ['httpchk /apidoc/explorer', 'httplog'],
'timeout client' => '3h',
@ -61,21 +54,6 @@ class opendaylight::ha::haproxy {
'balance' => 'source',
'mode' => 'http'
},
balancermember_options => 'check inter 5000 rise 2 fall 3',
}
exec { 'haproxy reload':
command => 'export OCF_ROOT="/usr/lib/ocf"; (ip netns list | grep haproxy) && ip netns exec haproxy /usr/lib/ocf/resource.d/fuel/ns_haproxy reload',
path => '/usr/bin:/usr/sbin:/bin:/sbin',
logoutput => true,
provider => 'shell',
tries => 10,
try_sleep => 10,
returns => [0, ''],
}
Haproxy::Listen <||> -> Exec['haproxy reload']
Haproxy::Balancermember <||> -> Exec['haproxy reload']
}
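
The two frontends can be exercised with the same health checks haproxy
performs (the VIP address is illustrative; ports and URLs come from the
class above):

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.2:8181/index.html                  # odl-jetty
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.2:8282/apidoc/explorer/index.html  # odl-tomcat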


@ -1,11 +1,11 @@
class opendaylight {
$odl_settings = hiera('opendaylight')
$nodes_hash = hiera('nodes')
$primary_controller_hash = filter_nodes($nodes_hash,'role','primary-controller')
$odl_controller_hash = filter_nodes($nodes_hash,'role','opendaylight')
$node = filter_nodes($nodes_hash,'name',$::hostname)
$rest_api_port = $odl_settings['rest_api_port']
$manager_ip_address = $primary_controller_hash[0]['internal_address']
$manager_ip_address = $odl_controller_hash[0]['internal_address']
$node_private_address = $node[0]['private_address']
$node_internal_address = $node[0]['internal_address']
}


@ -1,16 +1,17 @@
class opendaylight::service (
$rest_port = 8282,
$bind_address = undef
$bind_address = undef,
) {
$nodes_hash = hiera('nodes', {})
$roles = node_roles($nodes_hash, hiera('uid'))
$management_vip = hiera('management_vip')
$odl = hiera("opendaylight")
$odl = hiera('opendaylight')
$features = $odl['metadata']['odl_features']
$enable = {}
$enable_l3_odl = $odl['enable_l3_odl']
if member($roles, 'primary-controller') {
if member($roles, 'opendaylight') {
firewall {'215 odl':
port => [ $opendaylight::rest_api_port, 6633, 6640, 6653, 8181, 8101],
@ -63,12 +64,4 @@ class opendaylight::service (
if member($roles, 'controller') or member($roles, 'primary-controller') {
include opendaylight::ha::haproxy
}
if $opendaylight::odl_settings['use_vxlan'] {
firewall {'216 vxlan':
port => [4789],
proto => 'udp',
action => 'accept',
}
}
}


@ -83,7 +83,8 @@ ovsdb.of.version=1.3
# ovsdb can be configured with ml2 to perform l3 forwarding. The config below enables that functionality, which is
# disabled by default.
# ovsdb.l3.fwd.enabled=yes
# ovsdb.l3.fwd.enabled=false
<% if @enable_l3_odl %>ovsdb.l3.fwd.enabled=true<% end %>
# ovsdb can be configured with ml2 to perform l3 forwarding. When used in that scenario, the mac address of the default
# gateway --on the external subnet-- is expected to be resolved from its inet address. The config below overrides that


@ -36,7 +36,7 @@
#
# Comma separated list of features repositories to register by default
#
featuresRepositories = mvn:org.apache.karaf.features/standard/3.0.3/xml/features,mvn:org.apache.karaf.features/enterprise/3.0.3/xml/features,mvn:org.ops4j.pax.web/pax-web-features/3.1.4/xml/features,mvn:org.apache.karaf.features/spring/3.0.3/xml/features,mvn:org.opendaylight.integration/features-integration-index/0.3.1-Lithium-SR1/xml/features
featuresRepositories = mvn:org.apache.karaf.features/standard/3.0.3/xml/features,mvn:org.apache.karaf.features/enterprise/3.0.3/xml/features,mvn:org.ops4j.pax.web/pax-web-features/3.1.4/xml/features,mvn:org.apache.karaf.features/spring/3.0.3/xml/features,mvn:org.opendaylight.integration/features-integration-index/0.3.2-Lithium-SR2/xml/features
#
# Comma separated list of features to install at startup
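
A way to confirm the SR2 features actually load on the ODL node (a
sketch; karaf client defaults assumed, 8101 is the karaf ssh port opened
by the firewall rule above, /opt/opendaylight comes from the build hook):

/opt/opendaylight/bin/client -a 8101 -u karaf 'feature:list -i | grep odl-ovsdb-openstack'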


@ -1,14 +1,28 @@
- id: opendaylight
type: group
role: [opendaylight]
requires: [deploy_start]
required_for: [deploy_end, primary-controller, controller]
tasks: [fuel_pkgs, hiera, globals, tools, logging, netconfig,
hosts, firewall, deploy_start, odl_install]
parameters:
strategy:
type: parallel
- id: odl_install
role: ['primary-controller']
stage: pre_deployment/4450
type: puppet
groups: [opendaylight]
requires: [deploy_start]
required_for: [deploy_end]
requires: [hosts, firewall, globals]
required_for: [deploy_end, openstack-network]
parameters:
puppet_manifest: puppet/manifests/controller-pre.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 720
- id: odl_configure
role: ['primary-controller', 'controller', 'compute']
stage: post_deployment/4455
role: ['primary-controller', 'controller', 'compute', 'opendaylight']
requires: [post_deployment_start]
required_for: [post_deployment_end]
type: puppet
parameters:
puppet_manifest: puppet/manifests/odl-service.pp
@ -16,14 +30,16 @@
timeout: 1400
- id: odl_delete_predefined_net
role: ['primary-controller']
stage: post_deployment/4460
requires: [odl_configure]
required_for: [post_deployment_end]
type: shell
parameters:
cmd: ./clean-neutron.sh
timeout: 120
- id: odl_disable_ovs_agent
role: ['primary-controller', 'compute']
stage: post_deployment/4465
requires: [odl_delete_predefined_net]
required_for: [post_deployment_end]
type: puppet
parameters:
puppet_manifest: puppet/manifests/disable-ovs-agent.pp
@ -31,7 +47,8 @@
timeout: 120
- id: odl_stop_neutron
role: ['primary-controller', 'controller']
stage: post_deployment/4470
requires: [odl_disable_ovs_agent]
required_for: [post_deployment_end]
type: puppet
parameters:
puppet_manifest: puppet/manifests/stop-neutron.pp
@ -39,7 +56,8 @@
timeout: 120
- id: odl_recreate_ovs
role: ['primary-controller', 'controller', 'compute']
stage: post_deployment/4475
requires: [odl_stop_neutron]
required_for: [post_deployment_end]
type: puppet
parameters:
puppet_manifest: puppet/manifests/recreate-ovs.pp
@ -47,7 +65,8 @@
timeout: 120
- id: odl_ml2_configuration
role: ['primary-controller', 'controller', 'compute']
stage: post_deployment/4480
requires: [odl_recreate_ovs]
required_for: [post_deployment_end]
type: puppet
parameters:
puppet_manifest: puppet/manifests/ml2-configuration.pp
@ -55,15 +74,17 @@
timeout: 120
- id: odl_recreate_neutron_db
role: ['primary-controller']
stage: post_deployment/4485
requires: [odl_ml2_configuration]
required_for: [post_deployment_end]
type: puppet
parameters:
puppet_manifest: puppet/manifests/recreate-neutron-db.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 180
- id: odl_setup_floating
role: ['primary-controller', 'controller']
stage: post_deployment/4490
role: ['primary-controller', 'controller', 'compute']
requires: [odl_recreate_neutron_db]
required_for: [post_deployment_end]
type: puppet
parameters:
puppet_manifest: puppet/manifests/setup-floating.pp
@ -71,7 +92,8 @@
timeout: 120
- id: odl_start_neutron
role: ['primary-controller', 'controller']
stage: post_deployment/4495
requires: [odl_setup_floating]
required_for: [post_deployment_end]
type: puppet
parameters:
puppet_manifest: puppet/manifests/start-neutron.pp


@ -17,40 +17,11 @@ attributes:
- odl-dlux-all
- odl-mdsal-apidocs
- odl-ovsdb-openstack
use_vxlan:
enable_l3_odl:
weight: 12
type: "checkbox"
weight: 20
value: false
label: "Use vxlan"
description: "Configure neutron to use VXLAN tunneling"
restrictions:
- condition: "networking_parameters:segmentation_type == 'vlan'"
message: "Neutron with GRE segmentation required"
action: "disable"
vni_range_start:
value: '10'
label: 'VNI range start'
description: 'VXLAN VNI IDs range start'
type: 'text'
weight: 30
restrictions:
- condition: "networking_parameters:segmentation_type == 'vlan'"
action: "hide"
regex:
source: '^\d+$'
error: 'Invalid ID number'
vni_range_end:
value: '10000'
label: 'VNI range end'
description: 'VXLAN VNI IDs range end'
type: 'text'
weight: 31
restrictions:
- condition: "networking_parameters:segmentation_type == 'vlan'"
action: "hide"
regex:
source: '^\d+$'
error: 'Invalid ID number'
label: "EXPERIMENTAL: Use ODL to manage L3 traffic"
rest_api_port:
value: '8282'
label: 'Port number'


@ -3,14 +3,14 @@ name: opendaylight
# Human-readable name for your plugin
title: OpenDaylight Lithium plugin
# Plugin version
version: '0.6.2'
version: '0.7.0'
# Description
description: 'This plugin provides OpenDaylight as a backend for neutron.
Use the same IP address as for OpenStack Horizon and port 8181 to reach dlux web ui and apidoc explorer.
DLUX: http://horizon_ip:8181/index.html,
APIDOC: http://horizon_ip:8181/apidoc/explorer/index.html'
# Required fuel version
fuel_version: ['6.1']
fuel_version: ['7.0']
# Specify license of your plugin
licenses: ['Apache License Version 2.0']
# Specify author or company name
@ -24,15 +24,10 @@ groups: ['network']
# The plugin is compatible with releases in the list
releases:
- os: ubuntu
version: 2014.2-6.1
version: 2015.1.0-7.0
mode: ['ha', 'multinode']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
- os: centos
version: 2014.2-6.1
mode: ['ha', 'multinode']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/centos
# Version of plugin package
package_version: '2.0.0'
package_version: '3.0.0'

node_roles.yaml (new file)

@ -0,0 +1,9 @@
opendaylight:
name: "OpenDaylight controller"
description: "Install and setup OpenDaylight SDN controller"
has_primary: false # whether has primary role or not
public_ip_required: false # whether requires public net or not
weight: 150 # weight that will be used for ordering on fuel ui
limits:
max: 1
min: 1


@ -7,17 +7,22 @@ set -eux
# Where we can find odl karaf distribution tarball
# can be http(s) url or absolute path
ODL_TARBALL_LOCATION=${ODL_TARBALL_LOCATION:-https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.3.1-Lithium-SR1/distribution-karaf-0.3.1-Lithium-SR1.tar.gz}
ODL_TARBALL_LOCATION=${ODL_TARBALL_LOCATION:-https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.3.2-Lithium-SR2/distribution-karaf-0.3.2-Lithium-SR2.tar.gz}
# Version number used in deb/rpm package
ODL_VERSION_NUMBER=${ODL_VERSION_NUMBER:-0.3.1}
ODL_VERSION_NUMBER=${ODL_VERSION_NUMBER:-0.3.2}
ODL_DESCRIPTION="OpenDaylight SDN Controller"
TMP_NAME="karaf-odl.tar.gz"
#Networking odl
NETWORKING_ODL_REPO=${NETWORKING_ODL_REPO:-https://github.com/openstack/networking-odl.git}
NETWORKING_ODL_BRANCH=${NETWORKING_ODL_BRANCH:-stable/kilo}
# For which systems odl package should be build
BUILD_FOR=${BUILD_FOR:-centos ubuntu}
BUILD_FOR=${BUILD_FOR:-ubuntu}
DIR="$(dirname `readlink -f $0`)"
TMP_DIR="${DIR}/tmp"
MODULES="${DIR}/deployment_scripts/puppet/modules"
# If true, java and its dependencies will be part of the plugin.
@ -28,8 +33,7 @@ INCLUDE_DEPENDENCIES=${INCLUDE_DEPENDENCIES:-false}
USE_FUEL_PATCH=${USE_FUEL_PATCH:-true}
function cleanup {
rm -f "${DIR}/${TMP_NAME}"
rm -rf "${DIR}/package"
rm -rf "${TMP_DIR}"
}
function download {
@ -37,11 +41,12 @@ function download {
}
function unpack {
tar xzf $1 --strip-components=1 -C "${DIR}/package"
mkdir "${TMP_DIR}/${2}"
tar xzf $1 --strip-components=1 -C "${TMP_DIR}/${2}"
}
function patch_odl {
cp "${DIR}/odl_package/odl_lithium_patch/openstack.net-virt-1.1.1-Lithium-SR1.jar" "${DIR}/package/system/org/opendaylight/ovsdb/openstack.net-virt/1.1.1-Lithium-SR1/openstack.net-virt-1.1.1-Lithium-SR1.jar"
cp "${DIR}/odl_package/odl_lithium_patch/openstack.net-virt-1.1.2-Lithium-SR2.jar" "${TMP_DIR}/opendaylight_src/system/org/opendaylight/ovsdb/openstack.net-virt/1.1.2-Lithium-SR2/openstack.net-virt-1.1.2-Lithium-SR2.jar"
}
# Download packages required by ODL.
@ -59,13 +64,14 @@ function build_pkg {
case $1 in
centos)
pushd "${DIR}/repositories/${1}/"
fpm --force -s dir -t rpm --version "${ODL_VERSION_NUMBER}" --description "${ODL_DESCRIPTION}" --prefix /opt/opendaylight --rpm-init "${DIR}/odl_package/${1}/opendaylight" --after-install "${DIR}/odl_package/${1}/opendaylight-post" --name opendaylight -d "java-1.7.0-openjdk" -C "${DIR}/package"
fpm --force -s dir -t rpm --version "${ODL_VERSION_NUMBER}" --description "${ODL_DESCRIPTION}" --prefix /opt/opendaylight --rpm-init "${DIR}/odl_package/${1}/opendaylight" --after-install "${DIR}/odl_package/${1}/opendaylight-post" --name opendaylight -d "java-1.7.0-openjdk" -C "${TMP_DIR}/opendaylight_src"
download_dependencies ${1}
popd
;;
ubuntu)
pushd "${DIR}/repositories/${1}/"
fpm --force -s dir -t deb --version "${ODL_VERSION_NUMBER}" --description "${ODL_DESCRIPTION}" --prefix /opt/opendaylight --deb-upstart "${DIR}/odl_package/${1}/opendaylight" --after-install "${DIR}/odl_package/${1}/opendaylight-post" --name opendaylight -d "openjdk-7-jre-headless" -C "${DIR}/package"
fpm --force -s dir -t deb -m 'mskalski@mirantis.com' --version "${ODL_VERSION_NUMBER}" --description "${ODL_DESCRIPTION}" --prefix /opt/opendaylight --deb-upstart "${DIR}/odl_package/${1}/opendaylight" --after-install "${DIR}/odl_package/${1}/opendaylight-post" --name opendaylight -d "openjdk-7-jre-headless" -C "${TMP_DIR}/opendaylight_src"
fpm --force -s python -t deb -m 'mskalski@mirantis.com' --no-python-dependencies -d python-pbr -d python-babel -d python-neutron ${TMP_DIR}/networking_odl/setup.py
download_dependencies ${1}
popd
;;
@ -77,14 +83,16 @@ command -v fpm >/dev/null 2>&1 || { echo >&2 "fpm ruby gem required but it's not
cleanup
mkdir -p "${DIR}/package"
mkdir -p "${TMP_DIR}"
pushd $TMP_DIR
if [[ "$ODL_TARBALL_LOCATION" =~ ^http.* ]]
then
download $ODL_TARBALL_LOCATION ${DIR}/${TMP_NAME}
unpack ${DIR}/${TMP_NAME}
download $ODL_TARBALL_LOCATION $TMP_NAME
unpack $TMP_NAME 'opendaylight_src'
else
unpack $ODL_TARBALL_LOCATION
unpack $ODL_TARBALL_LOCATION 'opendaylight_src'
fi
if [ "$USE_FUEL_PATCH" = true ]
@ -92,6 +100,11 @@ then
patch_odl
fi
git clone $NETWORKING_ODL_REPO networking_odl
pushd networking_odl
git checkout $NETWORKING_ODL_BRANCH
popd
for system in $BUILD_FOR
do
build_pkg $system
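
Typical invocation of this build hook (a sketch; it assumes the script is
the plugin's pre_build_hook and that fpm and fuel-plugin-builder are
already installed):

ODL_TARBALL_LOCATION=/tmp/distribution-karaf-0.3.2-Lithium-SR2.tar.gz BUILD_FOR=ubuntu ./pre_build_hook
fpb --build .    # then build the plugin package from the populated repositories/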

volumes.yaml (new file)

@ -0,0 +1,6 @@
# Set here new volumes for your role
volumes: []
volumes_roles_mapping:
opendaylight:
# Default role mapping
- {allocate_size: "min", id: "os"}