From b183c2372aa1a48baece4af5cda68381142f2043 Mon Sep 17 00:00:00 2001 From: Darren Date: Tue, 3 Jun 2014 12:37:20 +1000 Subject: [PATCH] Edits to the Installation Guide Networking introduction Minor edits to wording and sentence structure in the Installation Guide Networking introduction Change-Id: I6ba52ba9b101d64c8803665bb6efadb5a4df0140 Implements: blueprint installation-guide-improvements --- .../compute/section_hypervisor_hyper-v.xml | 4 +- doc/install-guide/ch_networking.xml | 35 +- .../section_dashboard-install.xml | 16 +- .../section_neutron-concepts.xml | 96 +++--- .../section_neutron-ml2-compute-node.xml | 241 ++++++------- .../section_neutron-ml2-network-node.xml | 323 +++++++++--------- doc/pom.xml | 3 +- 7 files changed, 365 insertions(+), 353 deletions(-) diff --git a/doc/config-reference/compute/section_hypervisor_hyper-v.xml b/doc/config-reference/compute/section_hypervisor_hyper-v.xml index 1af9c39d0a..ca44535fc3 100644 --- a/doc/config-reference/compute/section_hypervisor_hyper-v.xml +++ b/doc/config-reference/compute/section_hypervisor_hyper-v.xml @@ -351,9 +351,7 @@ connection=mysql://nova:passwd@IP_ADDRESS/nova Verify that you are synchronized with a network time - source. Instructions for configuring NTP on your Hyper-V compute node are - located here - + source. For instructions about how to configure NTP on your Hyper-V compute node, see . diff --git a/doc/install-guide/ch_networking.xml b/doc/install-guide/ch_networking.xml index c5a8b0665e..691de64300 100644 --- a/doc/install-guide/ch_networking.xml +++ b/doc/install-guide/ch_networking.xml @@ -3,24 +3,26 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_networking"> - Add a networking service - Configuring networking in OpenStack can be a bewildering - experience. This guide provides step-by-step instructions for both - OpenStack Networking (neutron) and the legacy networking (nova-network) - service. 
If you are unsure which to use, we recommend trying - OpenStack Networking because it offers a considerable number of - features and flexibility including plug-ins for a variety of emerging products - supporting virtual networking. See the - Add a networking component + This chapter explains how to install and configure either + OpenStack Networking (neutron) or the legacy nova-network networking service. + The nova-network service + enables you to deploy one network type per instance and is + suitable for basic network functionality. OpenStack Networking + enables you to deploy multiple network types per instance and + includes plug-ins for a + variety of products that support virtual + networking. + For more information, see the Networking chapter of the OpenStack Cloud - Administrator Guide for more information. + Administrator Guide.
OpenStack Networking (neutron)
- Modular Layer 2 (ML2) plug-in + Modular Layer 2 (ML2) plug-in @@ -35,10 +37,9 @@
Next steps - - Your OpenStack environment now includes the core components necessary - to launch a basic instance. You can - launch an instance or add more - services to your environment in the following chapters. + Your OpenStack environment now includes the core components + necessary to launch a basic instance. You can launch an instance or add + more OpenStack services to your environment.
diff --git a/doc/install-guide/section_dashboard-install.xml b/doc/install-guide/section_dashboard-install.xml index 9a0b978953..5c955c63ad 100644 --- a/doc/install-guide/section_dashboard-install.xml +++ b/doc/install-guide/section_dashboard-install.xml @@ -16,7 +16,8 @@ For more information about how to deploy the dashboard, see deployment topics in the developer documentation. + >deployment topics in the developer + documentation. Install the dashboard on the node that can contact @@ -71,8 +72,7 @@ 'LOCATION' : '127.0.0.1:11211' } } - + Notes @@ -118,8 +118,7 @@ os="ubuntu;debian" >/etc/openstack-dashboard/local_settings.py/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py: - + >/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py: ALLOWED_HOSTS = ['localhost', 'my-desktop'] @@ -158,10 +157,9 @@ linkend="dashboard-session-database"/>. - - Ensure that the SELinux policy of the system is configured to - allow network connections to the HTTP server. - + Ensure that the SELinux policy of the system is + configured to allow network connections to the HTTP + server. # setsebool -P httpd_can_network_connect on diff --git a/doc/install-guide/section_neutron-concepts.xml b/doc/install-guide/section_neutron-concepts.xml index c55ed49df5..c3cad580b2 100644 --- a/doc/install-guide/section_neutron-concepts.xml +++ b/doc/install-guide/section_neutron-concepts.xml @@ -4,57 +4,59 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> Networking concepts - OpenStack Networking (neutron) manages all of the networking - facets for the Virtual Networking Infrastructure (VNI) in your - OpenStack environment. OpenStack Networking also manages the access - layer aspects of the Physical Networking Infrastructure (PNI). - Tenants can create advanced virtual network topologies using - OpenStack Networking. 
These topologies include services such as - firewalls, - load balancers, and - - virtual private networks (VPNs). - Networking provides the following object abstractions: networks, - routers, and subnets. Each has a functionality that mimics its + OpenStack Networking (neutron) manages all networking facets + for the Virtual Networking Infrastructure (VNI) and the access + layer aspects of the Physical Networking Infrastructure (PNI) in + your OpenStack environment. OpenStack Networking enables tenants + to create advanced virtual network topologies including services + such as firewalls, + load balancers, + and virtual + private networks (VPNs). + Networking provides the networks, subnets, and routers object + abstractions. Each abstraction has functionality that mimics its physical counterpart: networks contain subnets, and routers route traffic between different subnets and networks. - Each router has one gateway that connects to a network, and many - interfaces connected to subnets. Subnets can access machines on - other subnets connected to the same router. + Each router has one gateway that connects to a network, and + many interfaces connected to subnets. Subnets can access machines + on other subnets connected to the same router. Any given Networking setup has at least one external network. - This external network, unlike the other networks, is not solely a - virtually defined network. It instead provides a view into a slice - of the network accessible outside the OpenStack installation, which - is the outside network. IP addresses on the external network are - accessible by anybody physically on the outside network. DHCP is - disabled on this network. - Machines can access the outside network through the gateway - for the router. For the outside network to access VMs, and for VM's - to access the outside network, routers between the networks are - needed. - In addition to external networks, any Networking set up has one - or more internal networks. 
These software-defined networks connect - directly to the VMs. Only the VMs on any given internal network, - or those on subnets connected through interfaces to a similar - router, can access VMs connected to that network directly. - Additionally, you can allocate IP addresses on external + This network, unlike the other networks, is not merely a virtually + defined network. Instead, it represents the view into a slice of + the external network that is accessible outside the OpenStack + installation. IP addresses on the Networking external network are + accessible by anybody physically on the outside network. Because + this network merely represents a slice of the outside network, + DHCP is disabled on this network. + In addition to external networks, any Networking setup has + one or more internal networks. These software-defined networks + connect directly to the VMs. Only the VMs on any given internal + network, or those on subnets connected through interfaces to a + similar router, can access VMs connected to that network + directly. + For the outside network to access VMs, and vice versa, routers + between the networks are needed. Each router has one gateway that + is connected to a network and many interfaces that are connected + to subnets. Like a physical router, subnets can access machines on + other subnets that are connected to the same router, and machines + can access the outside network through the gateway for the + router. + Additionally, you can allocate IP addresses on external networks to ports on the internal network. Whenever something is connected to a subnet, that connection is called a port. You can - associate external network IP addresses with ports to VMs. - This way, entities on the outside network can access VMs. + associate external network IP addresses with ports to VMs. This + way, entities on the outside network can access VMs. 
Networking also supports security - groups, which enable administrators to define - firewall rules in groups. A VM can belong to one or more - security groups. Networking applies the rules in those security - groups to block or unblock ports, port ranges, or traffic types - for that VM. - Networking plug-ins - Each plug-in that Networking uses has its own concepts. These - plug-in concepts are not vital to operating Networking. - Understanding these concepts can help you set up the Openstack - Networking service, however. All Networking installations use a core - plug-in and a security group plug-in (or just the No-Op security - group plug-in). Additionally, Firewall-as-a-service (FWaaS) and - Load-balancing-as-a-service (LBaaS) plug-ins are available. - + groups. Security groups enable administrators to + define firewall rules in groups. A VM can belong to one or more + security groups, and Networking applies the rules in those + security groups to block or unblock ports, port ranges, or traffic + types for that VM. + Each plug-in that Networking uses has its own concepts. While + not vital to operating Networking, understanding these concepts + can help you set up Networking. All Networking installations use a + core plug-in and a security group plug-in (or just the No-Op + security group plug-in). Additionally, Firewall as a Service + (FWaaS) and Load Balancer as a Service (LBaaS) plug-ins are + available.
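The object abstractions that the concepts section above describes (networks contain subnets, routers attach one gateway to a network and many interfaces to subnets, and anything plugged into a subnet is a port) can be sketched as a toy model. This is an illustration only, not neutron's actual data model or API; every class and attribute name here is invented.

```python
# Toy model of the Networking abstractions described above.
# All names are illustrative, not neutron's real API.

class Network:
    def __init__(self, name, external=False):
        self.name = name
        self.external = external  # external networks have DHCP disabled
        self.subnets = []

class Subnet:
    def __init__(self, cidr, network):
        self.cidr = cidr
        self.network = network
        network.subnets.append(self)
        self.ports = []

class Port:
    """Any connection to a subnet is a port."""
    def __init__(self, subnet, device):
        self.subnet = subnet
        self.device = device
        subnet.ports.append(self)

class Router:
    """One gateway to a network, many interfaces to subnets."""
    def __init__(self, name):
        self.name = name
        self.gateway = None      # a Network
        self.interfaces = []     # Ports bound to Subnets

    def set_gateway(self, network):
        self.gateway = network

    def add_interface(self, subnet):
        self.interfaces.append(Port(subnet, self))

    def routes_between(self, a, b):
        # Subnets attached to the same router can reach each other.
        attached = {p.subnet for p in self.interfaces}
        return a in attached and b in attached

ext = Network("ext-net", external=True)
demo = Network("demo-net")
s1 = Subnet("192.168.1.0/24", demo)
s2 = Subnet("192.168.2.0/24", demo)

r = Router("demo-router")
r.set_gateway(ext)        # VMs reach the outside network through this gateway
r.add_interface(s1)
r.add_interface(s2)

print(r.routes_between(s1, s2))  # → True
```

The sketch mirrors the prose: the router routes between its two attached subnets, and its gateway on the external network is what gives VMs a path to the outside.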
diff --git a/doc/install-guide/section_neutron-ml2-compute-node.xml b/doc/install-guide/section_neutron-ml2-compute-node.xml index 49e11d641a..10b9157172 100644 --- a/doc/install-guide/section_neutron-ml2-compute-node.xml +++ b/doc/install-guide/section_neutron-ml2-compute-node.xml @@ -4,13 +4,13 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> Configure compute node + Before you install and configure OpenStack Networking, you + must enable certain kernel networking functions. - Prerequisites - Before you configure OpenStack Networking, you must enable certain - kernel networking functions. + To enable kernel networking functions - Edit /etc/sysctl.conf to contain the - following: + Edit the /etc/sysctl.conf file and + add the following lines: net.ipv4.conf.all.rp_filter=0 net.ipv4.conf.default.rp_filter=0 @@ -27,8 +27,9 @@ net.ipv4.conf.default.rp_filter=0 # yum install openstack-neutron-ml2 openstack-neutron-openvswitch # zypper install openstack-neutron-openvswitch-agent - Ubuntu installations using Linux kernel version 3.11 or newer - do not require the openvswitch-datapath-dkms + Ubuntu installations that use Linux kernel version 3.11 + or later do not require the + openvswitch-datapath-dkms package. @@ -41,21 +42,17 @@ net.ipv4.conf.default.rp_filter=0 The Networking common component configuration includes the authentication mechanism, message broker, and plug-in. - Respond to prompts for - database management, - Identity service - credentials, - service endpoint - registration, and - message broker - credentials. + Respond to prompts for database + management, Identity service credentials, service endpoint + registration, and message broker credentials. Configure Networking to use the Identity service for authentication: - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. 
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ auth_strategy keystone # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ @@ -72,23 +69,24 @@ net.ipv4.conf.default.rp_filter=0 admin_user neutron # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ admin_password NEUTRON_PASS + Replace NEUTRON_PASS with the + password you chose for the neutron user in + the Identity service. Configure Networking to use the Identity service for authentication: - Edit the /etc/neutron/neutron.conf - file and add the following key to the - [DEFAULT] section: - [DEFAULT] + Edit the + /etc/neutron/neutron.conf file and + add the following key to the [DEFAULT] + section: + [DEFAULT] ... auth_strategy = keystone Add the following keys to the - [keystone_authtoken] section: - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. + [keystone_authtoken] section: [keystone_authtoken] ... auth_uri = http://controller:5000 @@ -98,14 +96,14 @@ auth_port = 35357 admin_tenant_name = service admin_user = neutron admin_password = NEUTRON_PASS + Replace NEUTRON_PASS with + the password you chose for the neutron + user in the Identity service. Configure Networking to use the message broker: - Replace RABBIT_PASS with the password - you chose for the guest account in - RabbitMQ. # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ rpc_backend neutron.openstack.common.rpc.impl_kombu # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ @@ -114,17 +112,21 @@ admin_password = NEUTRON_PASS rabbit_userid guest # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ rabbit_password RABBIT_PASS + Replace RABBIT_PASS with the + password you chose for the guest account in + RabbitMQ. 
Configure Networking to use the message broker: - Edit the /etc/neutron/neutron.conf file - and add the following keys to the [DEFAULT] + Edit the + /etc/neutron/neutron.conf file and + add the following keys to the [DEFAULT] section: - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. + Replace RABBIT_PASS with + the password you chose for the guest + account in RabbitMQ. [DEFAULT] ... rpc_backend = neutron.openstack.common.rpc.impl_kombu @@ -134,26 +136,27 @@ rabbit_password = RABBIT_PASS - Configure Networking to use the Modular Layer 2 (ML2) plug-in - and associated services: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ + Configure Networking to use the Modular Layer 2 (ML2) + plug-in and associated services: + # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ core_plugin ml2 # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ service_plugins router - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/neutron.conf to assist with - troubleshooting. + To assist with troubleshooting, add verbose = + True to the [DEFAULT] section + in the /etc/neutron/neutron.conf + file. - Configure Networking to use the Modular Layer 2 (ML2) plug-in - and associated services: + Configure Networking to use the Modular Layer 2 (ML2) + plug-in and associated services: - Edit the /etc/neutron/neutron.conf file - and add the following keys to the [DEFAULT] + Edit the + /etc/neutron/neutron.conf file and + add the following keys to the [DEFAULT] section: [DEFAULT] ... @@ -161,10 +164,11 @@ core_plugin = ml2 service_plugins = router allow_overlapping_ips = True - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/neutron.conf to assist with - troubleshooting. + To assist with troubleshooting, add verbose + = True to the [DEFAULT] + section in the + /etc/neutron/neutron.conf + file. 
@@ -172,17 +176,11 @@ allow_overlapping_ips = True To configure the Modular Layer 2 (ML2) plug-in - The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to - build the virtual networking framework for instances. + The ML2 plug-in uses the Open vSwitch (OVS) mechanism + (agent) to build the virtual networking framework for + instances. Run the following commands: - Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface on - your compute node. This guide uses - 10.0.1.31 for the IP address of the - instance tunnels network interface on the first compute - node. # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ type_drivers gre # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ @@ -201,29 +199,35 @@ allow_overlapping_ips = True firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \ enable_security_group True + Replace + INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS + with the IP address of the instance tunnels network interface + on your compute node. This guide uses + 10.0.1.31 for the IP address of the + instance tunnels network interface on the first compute + node. Edit the - /etc/neutron/plugins/ml2/ml2_conf.ini - file: - Add the following keys to the [ml2] - section: + /etc/neutron/plugins/ml2/ml2_conf.ini + file and add the following keys to the + [ml2] section: [ml2] ... type_drivers = gre tenant_network_types = gre mechanism_drivers = openvswitch Add the following keys to the - [ml2_type_gre] section: + [ml2_type_gre] section: [ml2_type_gre] ... tunnel_id_ranges = 1:1000 Add the [ovs] section and the following keys to it: Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface on - your compute node. 
+ INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS + with the IP address of the instance tunnels network interface + on your compute node. [ovs] ... local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS @@ -239,28 +243,29 @@ enable_security_group = True To configure the Open vSwitch (OVS) service - The OVS service provides the underlying virtual networking framework - for instances. The integration bridge br-int handles - internal instance network traffic within OVS. + The OVS service provides the underlying virtual networking + framework for instances. The integration bridge + br-int handles internal instance network + traffic within OVS. - Start the OVS service and configure it to start when the system - boots: - # service openvswitch start + Start the OVS service and configure it to start when the + system boots: + # service openvswitch start # chkconfig openvswitch on - Start the OVS service and configure it to start when the system - boots: - # service openvswitch-switch start + Start the OVS service and configure it to start when the + system boots: + # service openvswitch-switch start # chkconfig openvswitch-switch on Restart the OVS service: - # service openvswitch-switch restart + # service openvswitch-switch restart Restart the OVS service: - # service openvswitch restart + # service openvswitch restart Add the integration bridge: @@ -269,14 +274,11 @@ enable_security_group = True To configure Compute to use Networking - By default, most distributions configure Compute to use legacy - networking. You must reconfigure Compute to manage networks through - Networking. + By default, most distributions configure Compute to use + legacy networking. You must reconfigure Compute to manage + networks through Networking. Run the following commands: - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. 
# openstack-config --set /etc/nova/nova.conf DEFAULT \ network_api_class nova.network.neutronv2.api.API # openstack-config --set /etc/nova/nova.conf DEFAULT \ @@ -297,20 +299,24 @@ enable_security_group = True firewall_driver nova.virt.firewall.NoopFirewallDriver # openstack-config --set /etc/nova/nova.conf DEFAULT \ security_group_api neutron + Replace NEUTRON_PASS with the + password you chose for the neutron user in + the Identity service. - By default, Compute uses an internal firewall service. Since - Networking includes a firewall service, you must disable the - Compute firewall service by using the - nova.virt.firewall.NoopFirewallDriver firewall - driver. + By default, Compute uses an internal firewall service. + Since Networking includes a firewall service, you must + disable the Compute firewall service by using the + nova.virt.firewall.NoopFirewallDriver + firewall driver. - Edit the /etc/nova/nova.conf and add the - following keys to the [DEFAULT] section: + Edit the /etc/nova/nova.conf and add + the following keys to the [DEFAULT] + section: Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. + password you chose for the neutron user in + the Identity service. [DEFAULT] ... network_api_class = nova.network.neutronv2.api.API @@ -324,42 +330,43 @@ linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver firewall_driver = nova.virt.firewall.NoopFirewallDriver security_group_api = neutron - By default, Compute uses an internal firewall service. Since - Networking includes a firewall service, you must disable the - Compute firewall service by using the - nova.virt.firewall.NoopFirewallDriver firewall - driver. + By default, Compute uses an internal firewall service. + Since Networking includes a firewall service, you must + disable the Compute firewall service by using the + nova.virt.firewall.NoopFirewallDriver + firewall driver. 
To finalize the installation - The Networking service initialization scripts expect a symbolic - link /etc/neutron/plugin.ini pointing to the - configuration file associated with your chosen plug-in. Using - the ML2 plug-in, for example, the symbolic link must point to - /etc/neutron/plugins/ml2/ml2_conf.ini. + The Networking service initialization scripts expect a + symbolic link /etc/neutron/plugin.ini + pointing to the configuration file associated with your chosen + plug-in. Using the ML2 plug-in, for example, the symbolic link + must point to + /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following commands: # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - - Due to a packaging bug, the Open vSwitch agent initialization - script explicitly looks for the Open vSwitch plug-in configuration - file rather than a symbolic link - /etc/neutron/plugin.ini pointing to the ML2 - plug-in configuration file. Run the following commands to resolve this - issue: + + Due to a packaging bug, the Open vSwitch agent + initialization script explicitly looks for the Open vSwitch + plug-in configuration file rather than a symbolic link + /etc/neutron/plugin.ini pointing to the + ML2 plug-in configuration file. Run the following commands to + resolve this issue: # cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig # sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent - The Networking service initialization scripts expect the variable - NEUTRON_PLUGIN_CONF in the - /etc/sysconfig/neutron file to reference the - configuration file associated with your chosen plug-in. 
Using - ML2, for example, edit the - /etc/sysconfig/neutron file and add the + The Networking service initialization scripts expect the + variable NEUTRON_PLUGIN_CONF in the + /etc/sysconfig/neutron file to + reference the configuration file associated with your chosen + plug-in. Using ML2, for example, edit the + /etc/sysconfig/neutron file and add the following: NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" @@ -369,8 +376,8 @@ security_group_api = neutron # service nova-compute restart - Start the Open vSwitch (OVS) agent and configure it to start when - the system boots: + Start the Open vSwitch (OVS) agent and configure it to + start when the system boots: # service neutron-openvswitch-agent start # chkconfig neutron-openvswitch-agent on # service openstack-neutron-openvswitch-agent start diff --git a/doc/install-guide/section_neutron-ml2-network-node.xml b/doc/install-guide/section_neutron-ml2-network-node.xml index 65528e241c..a1fb2530f0 100644 --- a/doc/install-guide/section_neutron-ml2-network-node.xml +++ b/doc/install-guide/section_neutron-ml2-network-node.xml @@ -4,10 +4,10 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> Configure network node + Before you install and configure OpenStack Networking, you + must enable certain kernel networking functions. - Prerequisites - Before you configure OpenStack Networking, you must enable certain - kernel networking functions. + To enable kernel networking functions Edit /etc/sysctl.conf to contain the following: @@ -30,8 +30,9 @@ net.ipv4.conf.default.rp_filter=0 # zypper install openstack-neutron-openvswitch-agent openstack-neutron-l3-agent \ openstack-neutron-dhcp-agent openstack-neutron-metadata-agent - Ubuntu installations using Linux kernel version 3.11 or newer - do not require the openvswitch-datapath-dkms + Ubuntu installations using Linux kernel version 3.11 or + newer do not require the + openvswitch-datapath-dkms package. 
@@ -44,21 +45,20 @@ net.ipv4.conf.default.rp_filter=0 The Networking common component configuration includes the authentication mechanism, message broker, and plug-in. - Respond to prompts for - database management, - Identity service - credentials, - service endpoint - registration, and - message broker - credentials. + Respond to prompts for database + management, Identity service credentials, service endpoint + registration, and message broker credentials. Configure Networking to use the Identity service for authentication: Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. + password you chose for the neutron user in + the Identity service. # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ auth_strategy keystone # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ @@ -81,17 +81,18 @@ net.ipv4.conf.default.rp_filter=0 authentication: - Edit the /etc/neutron/neutron.conf - file and add the following key to the - [DEFAULT] section: - [DEFAULT] + Edit the + /etc/neutron/neutron.conf file and + add the following key to the [DEFAULT] + section: + [DEFAULT] ... auth_strategy = keystone Add the following keys to the - [keystone_authtoken] section: - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. + [keystone_authtoken] section: + Replace NEUTRON_PASS with + the password you chose for the neutron + user in the Identity service. [keystone_authtoken] ... auth_uri = http://controller:5000 @@ -106,9 +107,9 @@ admin_password = NEUTRON_PASS Configure Networking to use the message broker: - Replace RABBIT_PASS with the password - you chose for the guest account in - RabbitMQ. + Replace RABBIT_PASS with the + password you chose for the guest account in + RabbitMQ. 
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ rpc_backend neutron.openstack.common.rpc.impl_kombu # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ @@ -122,12 +123,13 @@ admin_password = NEUTRON_PASS Configure Networking to use the message broker: - Edit the /etc/neutron/neutron.conf file - and add the following keys to the [DEFAULT] + Edit the + /etc/neutron/neutron.conf file and + add the following keys to the [DEFAULT] section: - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. + Replace RABBIT_PASS with + the password you chose for the guest + account in RabbitMQ. [DEFAULT] ... rpc_backend = neutron.openstack.common.rpc.impl_kombu @@ -137,26 +139,27 @@ rabbit_password = RABBIT_PASS - Configure Networking to use the Modular Layer 2 (ML2) plug-in - and associated services: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ + Configure Networking to use the Modular Layer 2 (ML2) + plug-in and associated services: + # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ core_plugin ml2 # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ service_plugins router - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/neutron.conf to assist with - troubleshooting. + To assist with troubleshooting, add verbose = + True to the [DEFAULT] section + in the /etc/neutron/neutron.conf + file. - Configure Networking to use the Modular Layer 2 (ML2) plug-in - and associated services: + Configure Networking to use the Modular Layer 2 (ML2) + plug-in and associated services: - Edit the /etc/neutron/neutron.conf file - and add the following keys to the [DEFAULT] + Edit the + /etc/neutron/neutron.conf file and + add the following keys to the [DEFAULT] section: [DEFAULT] ... 
@@ -164,10 +167,11 @@ core_plugin = ml2 service_plugins = router allow_overlapping_ips = True - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/neutron.conf to assist with - troubleshooting. + To assist with troubleshooting, add verbose + = True to the [DEFAULT] + section in the + /etc/neutron/neutron.conf + file. @@ -175,8 +179,8 @@ allow_overlapping_ips = True To configure the Layer-3 (L3) agent - The Layer-3 (L3) agent provides routing - services for instance virtual networks. + The Layer-3 (L3) agent provides + routing services for instance virtual networks. Run the following commands: # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \ @@ -184,32 +188,32 @@ allow_overlapping_ips = True # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \ use_namespaces True - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/l3_agent.ini to assist with - troubleshooting. + To assist with troubleshooting, add verbose = + True to the [DEFAULT] section + in the /etc/neutron/l3_agent.ini + file. - Edit the /etc/neutron/l3_agent.ini file - and add the following keys to the [DEFAULT] - section: + Edit the /etc/neutron/l3_agent.ini + file and add the following keys to the + [DEFAULT] section: [DEFAULT] ... interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver use_namespaces = True - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/l3_agent.ini to assist with - troubleshooting. + To assist with troubleshooting, add verbose = + True to the [DEFAULT] section + in the /etc/neutron/l3_agent.ini + file. To configure the DHCP agent The DHCP agent provides - DHCP services for instance virtual + DHCP services for instance virtual networks. 
Run the following commands: @@ -220,39 +224,40 @@ use_namespaces = True # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \ use_namespaces True - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/dhcp_agent.ini to assist with - troubleshooting. + To assist with troubleshooting, add verbose = + True to the [DEFAULT] section + in the /etc/neutron/dhcp_agent.ini + file. - Edit the /etc/neutron/dhcp_agent.ini file - and add the following keys to the [DEFAULT] - section: + Edit the /etc/neutron/dhcp_agent.ini + file and add the following keys to the + [DEFAULT] section: [DEFAULT] ... interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq use_namespaces = True - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/dhcp_agent.ini to assist with - troubleshooting. + To assist with troubleshooting, add verbose = + True to the [DEFAULT] section + in the /etc/neutron/dhcp_agent.ini + file. To configure the metadata agent - The metadata agent provides configuration - information such as credentials for remote access to instances. + The metadata agent provides + configuration information such as credentials for remote access + to instances. Run the following commands: Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. Replace - METADATA_SECRET with a suitable + password you chose for the neutron user in + the Identity service. Replace + METADATA_SECRET with a suitable secret for the metadata proxy. # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ auth_url http://controller:5000/v2.0 @@ -269,20 +274,21 @@ use_namespaces = True # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ metadata_proxy_shared_secret METADATA_SECRET - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/metadata_agent.ini to assist with - troubleshooting. 
+ To assist with troubleshooting, add verbose = + True to the [DEFAULT] section + in the /etc/neutron/metadata_agent.ini + file. - Edit the /etc/neutron/metadata_agent.ini file + Edit the + /etc/neutron/metadata_agent.ini file and add the following keys to the [DEFAULT] section: Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. Replace - METADATA_SECRET with a suitable + password you chose for the neutron user in + the Identity service. Replace + METADATA_SECRET with a suitable secret for the metadata proxy. [DEFAULT] ... @@ -294,24 +300,23 @@ admin_password = NEUTRON_PASS nova_metadata_ip = controller metadata_proxy_shared_secret = METADATA_SECRET - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/metadata_agent.ini to assist with - troubleshooting. + To assist with troubleshooting, add verbose = + True to the [DEFAULT] section + in the /etc/neutron/metadata_agent.ini + file. Perform the next two steps on the - controller node. + controller node. - On the controller node, configure Compute to - use the metadata service: - Replace - METADATA_SECRET with the secret you chose - for the metadata proxy. + On the controller node, configure + Compute to use the metadata service: + Replace METADATA_SECRET with + the secret you chose for the metadata proxy. # openstack-config --set /etc/nova/nova.conf DEFAULT \ service_neutron_metadata_proxy true # openstack-config --set /etc/nova/nova.conf DEFAULT \ @@ -319,36 +324,36 @@ metadata_proxy_shared_secret = METADATA_SECRET On the controller node, edit the - /etc/nova/nova.conf file and add the following - keys to the [DEFAULT] section: - Replace - METADATA_SECRET with the secret you chose - for the metadata proxy. + /etc/nova/nova.conf file and add the + following keys to the [DEFAULT] + section: + Replace METADATA_SECRET with + the secret you chose for the metadata proxy. [DEFAULT] ... 
service_neutron_metadata_proxy = true neutron_metadata_proxy_shared_secret = METADATA_SECRET - On the controller node, restart the Compute - API service: + On the controller node, restart the + Compute API service: # service openstack-nova-api restart # service nova-api restart To configure the Modular Layer 2 (ML2) plug-in - The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to - build virtual networking framework for instances. + The ML2 plug-in uses the Open vSwitch (OVS) mechanism + (agent) to build the virtual networking framework for + instances. Run the following commands: Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS + INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node. This guide uses - 10.0.1.21 for the IP address of the - instance tunnels network interface on the network - node. + 10.0.1.21 for the IP address of the + instance tunnels network interface on the network node. # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ type_drivers gre # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ @@ -370,7 +375,7 @@ neutron_metadata_proxy_shared_secret = METADATA_SECRET Edit the - /etc/neutron/plugins/ml2/ml2_conf.ini + /etc/neutron/plugins/ml2/ml2_conf.ini file. Add the following keys to the [ml2] section: @@ -380,16 +385,16 @@ type_drivers = gre tenant_network_types = gre mechanism_drivers = openvswitch Add the following keys to the - [ml2_type_gre] section: + [ml2_type_gre] section: [ml2_type_gre] ... tunnel_id_ranges = 1:1000 Add the [ovs] section and the following keys to it: Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface on - your network node. + INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS + with the IP address of the instance tunnels network interface + on your network node. [ovs] ...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS @@ -405,33 +410,34 @@ enable_security_group = True To configure the Open vSwitch (OVS) service - The OVS service provides the underlying virtual networking framework - for instances. The integration bridge br-int handles - internal instance network traffic within OVS. The external bridge - br-ex handles external instance network traffic - within OVS. The external bridge requires a port on the physical external - network interface to provide instances with external network access. - In essence, this port bridges the virtual and physical external + The OVS service provides the underlying virtual networking + framework for instances. The integration bridge + br-int handles internal instance network + traffic within OVS. The external bridge br-ex + handles external instance network traffic within OVS. The + external bridge requires a port on the physical external network + interface to provide instances with external network access. In + essence, this port bridges the virtual and physical external networks in your environment. 
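The bridge wiring described above can be verified with `ovs-vsctl list-ports br-ex`, which prints one attached port name per line. The following sketch checks such output for the physical interface; the helper name and the sample port list are illustrative assumptions, not output from the guide's environment, so it runs offline without an OVS installation:

```python
# Hypothetical helper: given the text printed by
# `ovs-vsctl list-ports BRIDGE` (one port name per line), report
# whether the physical interface is attached to the external bridge.

def port_on_bridge(list_ports_output: str, interface: str) -> bool:
    """Return True if `interface` appears as a port in the output."""
    ports = [line.strip() for line in list_ports_output.splitlines() if line.strip()]
    return interface in ports

# Sample output resembling what `ovs-vsctl list-ports br-ex` might
# print after `ovs-vsctl add-port br-ex eth2` (eth2 is an example).
sample = "eth2\nphy-br-ex\n"

print(port_on_bridge(sample, "eth2"))   # expected: True
print(port_on_bridge(sample, "eth1"))   # expected: False
```

In a live environment you would feed the helper the real command output rather than the sample string.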
- Start the OVS service and configure it to start when the system - boots: - # service openvswitch start + Start the OVS service and configure it to start when the + system boots: + # service openvswitch start # chkconfig openvswitch on - Start the OVS service and configure it to start when the system - boots: - # service openvswitch-switch start + Start the OVS service and configure it to start when the + system boots: + # service openvswitch-switch start # chkconfig openvswitch-switch on Restart the OVS service: - # service openvswitch-switch restart + # service openvswitch-switch restart Restart the OVS service: - # service openvswitch restart + # service openvswitch restart Add the integration bridge: @@ -442,19 +448,19 @@ enable_security_group = True # ovs-vsctl add-br br-ex - Add a port to the external bridge that connects to the physical - external network interface: + Add a port to the external bridge that connects to the + physical external network interface: Replace INTERFACE_NAME with the - actual interface name. For example, eth2 or - ens256. + actual interface name. For example, eth2 + or ens256. # ovs-vsctl add-port br-ex INTERFACE_NAME - Depending on your network interface driver, you may need to - disable Generic Receive Offload (GRO) to - achieve suitable throughput between your instances and the external - network. - To temporarily disable GRO on the external network interface - while testing your environment: + Depending on your network interface driver, you may need + to disable Generic Receive Offload + (GRO) to achieve suitable throughput between + your instances and the external network. 
+ To temporarily disable GRO on the external network + interface while testing your environment: # ethtool -K INTERFACE_NAME gro off @@ -462,37 +468,38 @@ enable_security_group = True To finalize the installation - The Networking service initialization scripts expect a symbolic - link /etc/neutron/plugin.ini pointing to the - configuration file associated with your chosen plug-in. Using - the ML2 plug-in, for example, the symbolic link must point to - /etc/neutron/plugins/ml2/ml2_conf.ini. + The Networking service initialization scripts expect a + symbolic link /etc/neutron/plugin.ini + pointing to the configuration file associated with your chosen + plug-in. Using the ML2 plug-in, for example, the symbolic link + must point to + /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following commands: - # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - - Due to a packaging bug, the Open vSwitch agent initialization - script explicitly looks for the Open vSwitch plug-in configuration - file rather than a symbolic link - /etc/neutron/plugin.ini pointing to the ML2 - plug-in configuration file. Run the following commands to resolve this - issue: + # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini + + Due to a packaging bug, the Open vSwitch agent + initialization script explicitly looks for the Open vSwitch + plug-in configuration file rather than a symbolic link + /etc/neutron/plugin.ini pointing to the + ML2 plug-in configuration file. Run the following commands to + resolve this issue: # cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig # sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent - The Networking service initialization scripts expect the variable - NEUTRON_PLUGIN_CONF in the - /etc/sysconfig/neutron file to reference the - configuration file associated with your chosen plug-in. 
Using - ML2, for example, edit the - /etc/sysconfig/neutron file and add the + The Networking service initialization scripts expect the + variable NEUTRON_PLUGIN_CONF in the + /etc/sysconfig/neutron file to + reference the configuration file associated with your chosen + plug-in. Using ML2, for example, edit the + /etc/sysconfig/neutron file and add the following: NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" - Start the Networking services and configure them to start when - the system boots: + Start the Networking services and configure them to start + when the system boots: # service neutron-openvswitch-agent start # service neutron-l3-agent start # service neutron-dhcp-agent start diff --git a/doc/pom.xml b/doc/pom.xml index 84ec6257ce..2befcddd18 100644 --- a/doc/pom.xml +++ b/doc/pom.xml @@ -17,7 +17,6 @@ image-guide install-guide security-guide - training-guides user-guide user-guide-admin @@ -48,7 +47,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 2.0.4 + 2.1.0