Install and configure compute node

The compute node handles connectivity and security groups for instances.

To configure prerequisites

Before you install and configure OpenStack Networking, you must configure certain kernel networking parameters.

Edit the /etc/sysctl.conf file to contain the following parameters:

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Implement the changes:

# sysctl -p

To install the Networking components

On Ubuntu:

# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent

On Red Hat Enterprise Linux and CentOS:

# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

On SUSE:

# zypper install --no-recommends openstack-neutron-openvswitch-agent ipset

SUSE does not use a separate ML2 plug-in package.

To install and configure the Networking components (Debian)

# apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms

Debian does not use a separate ML2 plug-in package.

Respond to prompts for database management, Identity service credentials, service endpoint registration, and message queue credentials.

Select the ML2 plug-in. Selecting the ML2 plug-in also populates the core_plugin and service_plugins options in the /etc/neutron/neutron.conf file with the appropriate values.

To configure the Networking common components

The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, comment out any connection options because compute nodes do not directly access the database.

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
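As an optional sanity check, confirm that the compute node can resolve and reach the controller host name referenced above, for example:

$ ping -c 4 controller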
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:

[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True
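For reference, after these edits the [DEFAULT] section of the /etc/neutron/neutron.conf file on the compute node should contain values similar to the following (verbose appears only if you enabled it):

[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True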
To configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:

In the [ml2] section, enable the flat, VLAN, generic routing encapsulation (GRE), and virtual extensible LAN (VXLAN) network type drivers, GRE tenant networks, and the OVS mechanism driver:

[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch

In the [ml2_type_gre] section, configure the tunnel identifier (id) range:

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

In the [securitygroup] section, enable security groups, enable ipset, and configure the OVS iptables firewall driver:

[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

In the [ovs] section, enable tunnels and configure the local tunnel endpoint:

[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS

Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node.
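For example, on a compute node whose instance tunnels interface carries the (hypothetical) address 10.0.1.31, the option would read local_ip = 10.0.1.31. You can list the addresses assigned to the node's interfaces with:

$ ip addr show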
In the [agent] section, enable GRE tunnels:

[agent]
...
tunnel_types = gre

To configure the Open vSwitch (OVS) service

The OVS service provides the underlying virtual networking framework for instances.

On Red Hat Enterprise Linux, CentOS, and SUSE, start the OVS service and configure it to start when the system boots:

# systemctl enable openvswitch.service
# systemctl start openvswitch.service

On Ubuntu and Debian, restart the OVS service:

# service openvswitch-switch restart
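As an optional check, confirm that the OVS service responds by listing its configuration; exact output varies by distribution and is typically short on a freshly prepared compute node:

# ovs-vsctl show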
To configure Compute to use Networking

By default, distribution packages configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.

Edit the /etc/nova/nova.conf file and complete the following actions:

In the [DEFAULT] section, configure the APIs and drivers:

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.

In the [neutron] section, configure access parameters:

[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
To finalize the installation

On Red Hat Enterprise Linux and CentOS, the Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
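Optionally, verify that the link points to the ML2 plug-in configuration file:

# ls -l /etc/neutron/plugin.ini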
Also on Red Hat Enterprise Linux and CentOS, due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:

# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service
On SUSE, the Networking service initialization scripts expect the variable NEUTRON_PLUGIN_CONF in the /etc/sysconfig/neutron file to reference the ML2 plug-in configuration file. Edit the /etc/sysconfig/neutron file and add the following:

NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"

Restart the Compute service.

On Red Hat Enterprise Linux, CentOS, and SUSE:

# systemctl restart openstack-nova-compute.service

On Ubuntu and Debian:

# service nova-compute restart

Start the Open vSwitch (OVS) agent and configure it to start when the system boots.

On Red Hat Enterprise Linux and CentOS:

# systemctl enable neutron-openvswitch-agent.service
# systemctl start neutron-openvswitch-agent.service

On SUSE:

# systemctl enable openstack-neutron-openvswitch-agent.service
# systemctl start openstack-neutron-openvswitch-agent.service

On Ubuntu and Debian, restart the Open vSwitch (OVS) agent:

# service neutron-plugin-openvswitch-agent restart
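Optionally, confirm that the agent started successfully; on distributions that use systemd, a status check looks similar to the following (substitute the agent service name for your distribution, as listed above):

# systemctl status neutron-openvswitch-agent.service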
Verify operation

Perform these commands on the controller node.

Source the admin credentials to gain access to admin-only CLI commands:

$ source admin-openrc.sh

List agents to verify successful launch of the neutron agents:

$ neutron agent-list

+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent | network | :-) | True | neutron-metadata-agent |
| 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent | network | :-) | True | neutron-l3-agent |
| 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent | network | :-) | True | neutron-dhcp-agent |
| a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

This output should indicate four agents alive on the network node and one agent alive on the compute node.
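If the Open vSwitch agent on the compute node is missing from this list or is not reported as alive, restart the agent on the compute node (using the service name for your distribution) and repeat the listing on the controller node; for example:

# systemctl restart neutron-openvswitch-agent.service
$ neutron agent-list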