Update Neutron chapter of Install Guide

* Add content from Solly Ross
* Remove plugin info
* Flatten deployment use cases
* Fix typos, misspellings, missing periods

Change-Id: I68334c2ea910326623474dab5e7632569d164acd
parent ce64eba805, commit 1fbcf5fef7
@ -361,12 +361,10 @@ hwclock -w</programlisting>
    </section>
    <section xml:id="basics-packages">
        <title>OpenStack Packages</title>
        <para>Distribution releases and OpenStack releases are often independent of
            each other and thus you might need to add some extra steps to access
            the latest OpenStack release after installation of the machine before
            installation of any OpenStack packages.</para>
        <para os="fedora;centos;rhel">This guide uses the OpenStack packages from
            the RDO repository. These packages work on Red Hat Enterprise Linux 6 and
            compatible versions of CentOS, as well as Fedora 19. Enable the RDO repository
@ -1,33 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="ch_neutron">
    <title>Installing OpenStack Networking Service</title>
    <section xml:id="neutron-considerations">
        <title>Considerations for OpenStack Networking</title>
        <para>There are many different drivers for OpenStack Networking,
            ranging from software bridges to full control of certain
            switching hardware. This guide focuses on Open vSwitch. However,
            the concepts presented here should be mostly applicable to other
            mechanisms, and the <citetitle>OpenStack Configuration
            Reference</citetitle> offers additional information.</para>
        <para>See <link
            xlink:href="http://docs.openstack.org/trunk/install-guide/install/apt/content/basics-packages.html"
            >OpenStack Packages</link> for specific instructions on preparing
            your system for installation.</para>
        <warning><para>If you have followed the previous section on
            setting up networking for your compute node using
            nova-network, this configuration will override those
            settings.</para></warning>
    </section>
    <xi:include href="section_neutron-concepts.xml"/>
    <xi:include href="section_neutron-install.xml"/>
    <xi:include href="section_neutron-deploy-use-cases.xml"/>
</chapter>
84  doc/install-guide/section_neutron-concepts.xml  Executable file
@ -0,0 +1,84 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="install-neutron"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:svg="http://www.w3.org/2000/svg"
    xmlns:html="http://www.w3.org/1999/xhtml"
    version="5.0">
    <title>Neutron concepts</title>
    <para>Like Nova Networking, Neutron manages software-defined networking for your OpenStack
        installation. However, unlike Nova Networking, Neutron can be configured for advanced virtual
        network topologies, such as per-tenant private networks, and more.</para>
    <para>Neutron has three main object abstractions: networks, subnets, and routers. Each has
        functionality that mimics its physical counterpart: networks contain subnets, and routers route
        traffic between different subnets and networks.</para>
    <para>In any given Neutron setup, there is at least one external network. This network, unlike the
        other networks, is not merely a virtually defined network. Instead, it represents a view into
        a slice of the external network, accessible outside the OpenStack installation. IP addresses on
        Neutron's external network are in fact accessible by anybody physically on the outside network.
        Because this network merely represents a slice of the outside network, DHCP is disabled on this
        network.</para>
    <para>In addition to external networks, any Neutron setup has one or more internal networks.
        These software-defined networks connect directly to the VMs. Only the VMs on a given internal
        network, or those on subnets connected through interfaces to a similar router, can access VMs
        connected to that network directly.</para>
    <para>For the outside network to be able to access VMs, and vice versa, routers between
        the networks are needed. Each router has one gateway, connected to a network, and many
        interfaces, connected to subnets. Like a physical router, subnets can access machines on other
        subnets connected to the same router, and machines can access the outside network through the
        router's gateway.</para>
    <para>Additionally, IP addresses on an external network can be allocated to ports on the internal
        network. Whenever something is connected to a subnet, that connection is called a port. External
        network IP addresses can be associated with ports to VMs. This way, entities on the outside
        network can access VMs.</para>
    <para>Neutron also supports "security groups." Security groups allow administrators to define
        firewall rules in groups. A given VM can then have one or more security groups to which it
        belongs, and Neutron applies those rules to block or unblock ports, port ranges, or traffic
        types for that VM.</para>
    <para>Each of the plugins that Neutron uses has its own concepts as well. While not vital to
        operating Neutron, these concepts can be useful when setting up Neutron. All Neutron
        installations use a core plugin, as well as a security group plugin (or just the No-Op security
        group plugin). Additionally, Firewall-as-a-Service (FWaaS) and Load-Balancing-as-a-Service
        (LBaaS) plugins are available.</para>
    <section xml:id="concepts-neutron.openvswitch">
        <title>Open vSwitch Concepts</title>
        <para>The Open vSwitch plugin is one of the most popular core plugins. Open vSwitch
            configurations consist of bridges and ports. Ports represent connections to other things,
            such as physical interfaces and patch cables. Packets from any given port on a bridge are
            shared with all other ports on that bridge. Bridges can be connected through Open vSwitch
            virtual patch cables, or through Linux virtual Ethernet cables (<literal>veth</literal>).
            Additionally, bridges appear as network interfaces to Linux, so they can be assigned IP
            addresses.</para>
        <para>In Neutron, there are several main bridges. The integration bridge, called
            <literal>br-int</literal>, connects directly to the VMs and associated services. The
            external bridge, called <literal>br-ex</literal>, connects to the external network. Finally,
            the VLAN configuration of the Open vSwitch plugin uses bridges associated with each physical
            network.</para>
        <para>In addition to defining bridges, Open vSwitch has OpenFlow, which allows you to define
            networking flow rules. These rules are used in certain configurations to transfer packets
            between VLANs.</para>
        <para>Finally, some configurations of Open vSwitch use network namespaces, which allow Linux to
            group adapters into unique namespaces that are not visible to other namespaces, allowing
            multiple Neutron routers to be managed by the same network node.</para>
        <para>With Open vSwitch, there are two different technologies that can be used to create the
            virtual networks: GRE or VLANs.</para>
        <para>Generic Routing Encapsulation, or GRE for short, is the technology used in many VPNs. In
            essence, it works by wrapping IP packets in entirely new packets with different routing
            information. When the new packet reaches its destination, it is unwrapped, and the underlying
            packet is routed. To use GRE with Open vSwitch, Neutron creates GRE tunnels. These tunnels are
            ports on a bridge, and allow bridges on different systems to act as though they were in fact
            one bridge, allowing the compute node and network node to act as one for the purposes of
            routing.</para>
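As a concrete aside (not part of the install steps), the base GRE header that precedes each encapsulated frame is only four bytes: a flags/version field followed by an EtherType-style protocol field. A minimal shell sketch, assuming the standard values (0x0000 for no optional fields, 0x6558 for transparent Ethernet bridging, the protocol carried by bridged tunnels):

```shell
# Sketch only: assemble the 4-byte base GRE header as hex.
FLAGS_VERSION=0x0000   # no checksum/key/sequence bits, version 0
PROTOCOL=0x6558        # transparent Ethernet bridging (bridged frames)
GRE_HDR=$(printf '%04x%04x' "$FLAGS_VERSION" "$PROTOCOL")
echo "$GRE_HDR"        # prints 00006558
```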
        <para>Virtual LANs, or VLANs for short, on the other hand, use a special modification to the
            Ethernet header: a 4-byte VLAN tag whose VLAN ID field ranges between 1 and 4094 (the 0 tag is
            special, and the 4095 tag, made of all ones, is equivalent to an untagged packet). Special
            NICs, switches, and routers know how to interpret the VLAN tags, as does Open vSwitch. Packets
            tagged for one VLAN are only shared with other devices configured to be on that VLAN,
            even though all of the devices are on the same physical network.</para>
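The tag layout can be sketched with shell arithmetic: the 0x8100 TPID marks the frame as tagged, and the 16-bit TCI that follows packs the 3 priority bits above the 12-bit VLAN ID. The priority and VLAN ID values below are illustrative:

```shell
# Sketch only: build the 4-byte 802.1Q tag for an illustrative VLAN.
TPID=0x8100   # identifies the frame as VLAN-tagged
PRIORITY=0    # 3-bit priority code point
VID=100       # 12-bit VLAN ID, valid range 1-4094
TCI=$(( (PRIORITY << 13) | (VID & 0x0FFF) ))
VLAN_TAG=$(printf '%04x%04x' "$TPID" "$TCI")
echo "$VLAN_TAG"   # prints 81000064
```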
        <para>The most common security group driver used with Open vSwitch is the Hybrid IPTables/Open
            vSwitch plugin. It uses a combination of iptables and OpenFlow rules. iptables is a tool used
            for creating firewalls and setting up NATs on Linux. It uses a complex rule system and
            "chains" of rules to allow for the complex rules required by Neutron's security groups.</para>
    </section>
</section>
13  doc/install-guide/section_neutron-deploy-use-cases.xml  Normal file
@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="neutron-deploy-use-cases">
    <title>Neutron deployment use cases</title>
    <para>This section describes how to configure the
        Networking service and its components for some typical use
        cases.</para>
    <xi:include href="section_neutron-single-flat.xml"/>
    <xi:include href="section_neutron-provider-router-with-private_networks.xml"/>
    <xi:include href="section_neutron-per-tenant-routers-with-private-networks.xml"/>
</section>
614  doc/install-guide/section_neutron-install.xml  Executable file
@ -0,0 +1,614 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="neutron-install-network-node"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:svg="http://www.w3.org/2000/svg"
    xmlns:html="http://www.w3.org/1999/xhtml"
    version="5.0">
    <title>Install Networking Services on the network node</title>
    <note>
        <para>Before we start, you need to make sure that your machine is properly set up
            to be a dedicated network node. Dedicated network nodes should have three NICs:
            the management NIC (called <replaceable>MGMT_INTERFACE</replaceable>), the data
            NIC (called <replaceable>DATA_INTERFACE</replaceable>), and the external NIC
            (called <replaceable>EXTERNAL_INTERFACE</replaceable>).</para>
        <para>The management network is responsible for communication between nodes, the
            data network is responsible for communication coming to and from VMs, and the
            external NIC connects the network node to the outside world, so that your VMs
            can have outside connectivity.</para>
        <para>All three NICs should have static IPs. However, the data and external NICs
            require some special setup. See the <link linkend="install-neutron.install-plugin">Neutron
            plugin section</link> for your chosen Neutron plugin for details.</para>
    </note>
    <warning os="rhel;centos">
        <para>By default, an automated firewall configuration tool called
            <literal>system-config-firewall</literal> is in place on RHEL. This tool is
            a graphical interface (and a curses-style interface with
            <literal>-tui</literal> on the end of the name) for configuring iptables
            as a basic firewall. You should disable it when working with
            Neutron unless you are familiar with the underlying network technologies,
            because, by default, it blocks various types of network traffic that are
            important to Neutron. To disable it, simply launch the program and uncheck
            the "Enabled" check box.</para>
        <para>Once you have successfully set up OpenStack with Neutron, you can
            re-enable it if you wish and work out exactly how you need to configure
            it. For the duration of the setup, however, finding network issues is
            easier if it is not blocking all unrecognized traffic.</para>
    </warning>
    <para>First, we must install the OpenStack Networking service on the node:</para>
    <screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install neutron</userinput></screen>
    <screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-neutron</userinput></screen>
    <screen os="opensuse"><prompt>#</prompt> <userinput>zypper install openstack-neutron</userinput></screen>
    <para>Next, we must enable packet forwarding and disable reverse-path (source)
        filtering, so that the network node can coordinate traffic for the VMs. We
        do this by editing the file <filename>/etc/sysctl.conf</filename>:</para>
    <programlisting language="ini">net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0</programlisting>
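After editing the file, the settings can be loaded without a reboot via <literal>sysctl -p</literal>. The sketch below is a hedged illustration: it stages the three settings in a temporary copy and checks them, rather than touching the real <filename>/etc/sysctl.conf</filename> (applying them for real requires root):

```shell
# Sketch only: stage the settings in a temp file and verify them;
# on the real node you would edit /etc/sysctl.conf and run `sysctl -p`.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF
SETTINGS=$(grep -c '^net\.ipv4\.' "$CONF")
echo "$SETTINGS settings staged"
# sysctl -p "$CONF"   # root-only: actually apply the staged settings
```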
    <note>
        <para>When dealing with system network-related configuration, it may be necessary to
            restart the network service for the changes to take effect. This can be done with the
            following command:</para>
        <screen os="ubuntu"><prompt>#</prompt> <userinput>service networking restart</userinput></screen>
        <screen os="rhel;centos;fedora;opensuse"><prompt>#</prompt> <userinput>service network restart</userinput></screen>
    </note>
    <para>Before continuing, we must create the required user, service, and
        endpoint so that Neutron can interface with the Identity Service,
        Keystone.</para>
    <screen><prompt>#</prompt> <userinput>keystone user-create --name=neutron --pass=NEUTRON_PASSWORD --tenant-id SERVICE_TENANT_ID --email=neutron@SOME_DOMAIN_HERE</userinput>
<prompt>#</prompt> <userinput>keystone user-role-add --tenant-id SERVICE_TENANT_ID --user-id NEUTRON_USER_ID --role-id ADMIN_ROLE_ID</userinput>
<prompt>#</prompt> <userinput>keystone service-create --name=neutron --type=network --description="OpenStack Networking Service"</userinput>
<prompt>#</prompt> <userinput>keystone endpoint-create --region RegionOne --service-id NEUTRON_SERVICE_ID --publicurl http://CONTROLLER_NODE_HOST:9696 --adminurl http://CONTROLLER_NODE_HOST:9696 --internalurl http://CONTROLLER_NODE_HOST:9696</userinput></screen>
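The uppercase placeholders above (<replaceable>SERVICE_TENANT_ID</replaceable>, <replaceable>NEUTRON_USER_ID</replaceable>, and so on) come from earlier keystone output. As a hedged sketch of one way to pull an ID out of the client's table output with awk; the table below is made-up sample data standing in for `keystone tenant-list` output, not real IDs:

```shell
# Sketch only: extract the ID column of the row whose name column is
# "service" from keystone-style table output (fields split on whitespace,
# so $2 is the ID and $4 is the name).
SAMPLE='+----------------------------------+---------+---------+
| 9a1b2c3d4e5f60718293a4b5c6d7e8f9 | service | True    |
+----------------------------------+---------+---------+'
SERVICE_TENANT_ID=$(echo "$SAMPLE" | awk '$4 == "service" {print $2}')
echo "$SERVICE_TENANT_ID"
```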
    <para>Now, we can install, and then configure, our networking plugin. The networking
        plugin is what Neutron uses to perform the actual software-defined networking. There
        are several options for this. Choose one, follow
        the <link linkend="install-neutron.install-plugin">instructions</link> in the linked
        section, and then return here.</para>
    <para>Now that you've installed and configured a plugin (you did do that, right?), it
        is time to configure the main part of Neutron. First, we configure Neutron core by
        editing <filename>/etc/neutron/neutron.conf</filename>:</para>
    <programlisting language="ini">auth_host = CONTROLLER_NODE_MGMT_IP
admin_tenant_name = service
admin_user = neutron
admin_password = ADMIN_PASSWORD
auth_url = http://CONTROLLER_NODE_MGMT_IP:35357/v2.0
auth_strategy = keystone
rpc_backend = YOUR_RPC_BACKEND
PUT_YOUR_RPC_BACKEND_SETTINGS_HERE_TOO</programlisting>
    <para>Then, we just need to tell the DHCP agent how to actually manage DHCP.
        Neutron supports plugins for this purpose, but in general we just use the
        Dnsmasq plugin. Edit <filename>/etc/neutron/dhcp_agent.ini</filename>:</para>
    <programlisting language="ini">dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq</programlisting>
    <para>Now, restart the rest of Neutron:</para>
    <screen><prompt>#</prompt> <userinput>service neutron-dhcp-agent restart</userinput>
<prompt>#</prompt> <userinput>service neutron-l3-agent restart</userinput></screen>
    <!-- TODO(sross): enable Neutron metadata as well? -->
    <para>Next, <link linkend="install-neutron.configure-networks">configure the
        base networks</link> and return here.</para>
    <section xml:id="install-neutron.install-plugin">
        <title>Installing and configuring the Neutron plugins</title>
        <section xml:id="install-neutron.install-plugin.ovs">
            <title>Installing the Open vSwitch (OVS) plugin</title>
            <para>First, we must install the Open vSwitch plugin and its
                dependencies:</para>
            <screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install neutron-plugin-openvswitch</userinput></screen>
            <screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install openstack-neutron-openvswitch</userinput></screen>
            <para>Now, we start up Open vSwitch:</para>
            <screen><prompt>#</prompt> <userinput>service openvswitch start</userinput></screen>
            <para>Next, we must do some initial configuration for Open vSwitch, no
                matter whether we are using VLANs or GRE tunneling. We need to add the
                integration bridge (this connects to the VMs) and the external bridge
                (this connects to the outside world), called <literal>br-int</literal>
                and <literal>br-ex</literal>, respectively:</para>
            <screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-br br-ex</userinput></screen>
            <para>Then, we add a "port" (connection) from the interface
                <replaceable>EXTERNAL_INTERFACE</replaceable> to <literal>br-ex</literal>:</para>
            <screen><prompt>#</prompt> <userinput>ovs-vsctl add-port br-ex EXTERNAL_INTERFACE</userinput></screen>
            <para>For this to work correctly, we must also
                configure <replaceable>EXTERNAL_INTERFACE</replaceable> to not have an IP address and
                to be in promiscuous mode. Additionally, we need to set the newly
                created <literal>br-ex</literal> interface to have the IP address that formerly
                belonged to <replaceable>EXTERNAL_INTERFACE</replaceable>.</para>
            <para os="rhel;fedora;centos">Do this by first editing
                the <filename>/etc/sysconfig/network-scripts/ifcfg-EXTERNAL_INTERFACE</filename> file:</para>
            <programlisting language="ini" os="rhel;fedora;centos">DEVICE_INFO_HERE
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes</programlisting>
            <para os="rhel;fedora;centos">Then, edit the <filename>/etc/sysconfig/network-scripts/ifcfg-br-ex</filename> file:</para>
            <programlisting language="ini" os="rhel;fedora;centos">DEVICE=br-ex
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=EXTERNAL_INTERFACE_IP
NETMASK=EXTERNAL_INTERFACE_NETMASK
GATEWAY=EXTERNAL_INTERFACE_GATEWAY</programlisting>
            <!-- TODO(sross): support other distros -->
            <para>Finally, we can configure the settings for the particular plugins.
                First, there are some general <acronym>OVS</acronym> configuration options to set,
                no matter whether you use VLANs or GRE tunneling. We need to tell the L3 agent and
                the DHCP agent that we are using <acronym>OVS</acronym>, by editing
                <filename>/etc/neutron/l3_agent.ini</filename> and
                <filename>/etc/neutron/dhcp_agent.ini</filename>, respectively:</para>
            <programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
            <para>Similarly, we need to tell Neutron core to use <acronym>OVS</acronym>, by
                editing <filename>/etc/neutron/neutron.conf</filename>:</para>
            <programlisting language="ini">core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2</programlisting>
            <para>Finally, we need to tell the <acronym>OVS</acronym> plugin how to connect to
                the database, by editing <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
            <programlisting language="ini">[database]
sql_connection = DATABASE_TYPE://neutron:NEUTRON_PASSWORD@CONTROLLER_NODE_HOSTNAME/neutron</programlisting>
            <para>Now, we must decide which networking type we want. We can either use GRE tunneling
                or VLANs. <link linkend="install-neutron.install-plugin.ovs.gre">GRE tunneling</link>
                can be easier and simpler to set up, but is less flexible in certain regards.
                <link linkend="install-neutron.install-plugin.ovs.vlan">VLANs</link> are more
                flexible, but can be harder to set up and have more issues.</para>
            <!-- TODO(sross): support provider networks? We need to modify things above for this to work -->
            <para>Now, you have the option of configuring a firewall. If you do not wish to enforce
                firewall rules (called <firstterm>security groups</firstterm> by Neutron), you may use
                <literal>neutron.agent.firewall.NoopFirewallDriver</literal>. Otherwise, you may choose
                one of the Neutron firewall plugins. To use the Hybrid OVS-IPTables driver (the most
                common choice), edit
                <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
            <programlisting language="ini">[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</programlisting>
            <warning>
                <para>You must use at least the No-Op firewall mentioned above. Otherwise, Horizon and
                    other OpenStack services will not be able to get and set required VM boot options.</para>
            </warning>
            <!-- TODO(sross): document other firewall options -->
            <para>After you have configured <acronym>OVS</acronym>, restart the
                <acronym>OVS</acronym> plugin agent:</para>
            <screen><prompt>#</prompt> <userinput>service neutron-openvswitch-agent restart</userinput></screen>
            <para>Now, return whence you came!</para>
            <section xml:id="install-neutron.install-plugin.ovs.gre">
                <title>Configuring the Neutron <acronym>OVS</acronym> plugin for GRE tunneling</title>
                <para>First, we must configure the L3 agent and the DHCP agent to not use
                    namespaces, by editing <filename>/etc/neutron/l3_agent.ini</filename> and
                    <filename>/etc/neutron/dhcp_agent.ini</filename>, respectively:</para>
                <programlisting language="ini">use_namespaces = False</programlisting>
                <para>Then, we tell the <acronym>OVS</acronym> plugin to use GRE tunneling, using
                    an integration bridge of <literal>br-int</literal> and a tunneling bridge of
                    <literal>br-tun</literal>, and to use a local IP for the tunnel of
                    <replaceable>DATA_INTERFACE</replaceable>'s IP. Edit
                    <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
                <programlisting language="ini">[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP</programlisting>
                <para>Now, return to the general <acronym>OVS</acronym> instructions.</para>
            </section>
            <section xml:id="install-neutron.install-plugin.ovs.vlan">
                <title>Configuring the Neutron <acronym>OVS</acronym> plugin for VLANs</title>
                <para>First, we must tell <acronym>OVS</acronym> that we want to use VLANs, by
                    editing <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
                <programlisting language="ini">[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-DATA_INTERFACE</programlisting>
                <para>Then, create the bridge for <replaceable>DATA_INTERFACE</replaceable> and add
                    <replaceable>DATA_INTERFACE</replaceable> to it:</para>
                <screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-DATA_INTERFACE</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE</userinput></screen>
                <!-- TODO(sross): verify this next part -->
                <para>Now that we have added <replaceable>DATA_INTERFACE</replaceable> to a bridge,
                    we need to transfer its IP address over to the bridge. This is done in a manner
                    similar to the way <replaceable>EXTERNAL_INTERFACE</replaceable>'s IP address was
                    transferred to <literal>br-ex</literal>. However, in this case, we do not need to
                    turn promiscuous mode on.</para>
                <para>Next, we must tell the L3 and DHCP agents that we want to use namespaces, by
                    editing <filename>/etc/neutron/l3_agent.ini</filename> and
                    <filename>/etc/neutron/dhcp_agent.ini</filename>, respectively:</para>
                <programlisting language="ini">use_namespaces = True</programlisting>
                <para os="rhel;centos">Additionally, if you are using certain kernels with partial
                    support for namespaces, you need to enable veth support, by editing the above
                    files again:</para>
                <programlisting language="ini" os="rhel;centos">ovs_use_veth = True</programlisting>
                <para>Now, return to the general <acronym>OVS</acronym> instructions.</para>
            </section>
        </section>
    </section>
    <section xml:id="install-neutron.configure-networks">
        <title>Creating the base Neutron networks</title>
        <note>
            <para>In the upcoming sections, the text
                <replaceable>SPECIAL_OPTIONS</replaceable> may occur. It should be
                replaced with any options specific to your networking plugin choices.
                See <link linkend="install-neutron.configure-networks.plugin-specific"
                >here</link> to check whether your plugin needs any special options.</para>
        </note>
        <para>First, we will create the external network, called
            <literal>ext-net</literal> (or something else; the name is your choice). This
            network represents a slice of the outside world. VMs are not directly
            linked to this network; instead, they are on sub-networks and are
            assigned floating IPs from this network's subnet's pool of floating IPs.
            Neutron then routes the traffic appropriately.</para>
        <screen><prompt>#</prompt> <userinput>neutron net-create ext-net -- --router:external=True SPECIAL_OPTIONS</userinput></screen>
        <para>Next, we will create the associated subnet. It should have the same gateway
            as <replaceable>EXTERNAL_INTERFACE</replaceable> would have had, and the same CIDR
            details as well. It does not have DHCP, because it represents a slice of the external
            world:</para>
        <screen><prompt>#</prompt> <userinput>neutron subnet-create ext-net --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --gateway=EXTERNAL_INTERFACE_GATEWAY --enable_dhcp=False EXTERNAL_INTERFACE_CIDR</userinput></screen>
        <para>Now, create one or more initial tenants. Choose one (we'll call it
            <replaceable>DEMO_TENANT</replaceable>) to use for the following
            parts.</para>
        <para>Then, we will create the router attached to the external network. This
            router routes traffic to the internal subnets as appropriate (you may
            wish to create it under a given tenant, in which case you should
            append <literal>--tenant-id DEMO_TENANT_ID</literal> to the
            command):</para>
        <screen><prompt>#</prompt> <userinput>neutron router-create ext-to-int</userinput></screen>
        <para>Now, we'll connect the router to <literal>ext-net</literal> by setting the
            router's gateway to <literal>ext-net</literal>:</para>
        <screen><prompt>#</prompt> <userinput>neutron router-gateway-set EXT_TO_INT_ID EXT_NET_ID</userinput></screen>
        <para>Then, we'll create an internal network for <replaceable>DEMO_TENANT</replaceable>
            (and an associated subnet over an arbitrary internal IP range, say,
            <literal>10.5.5.0/24</literal>), and connect it to the router by setting it as a port:</para>
        <screen><prompt>#</prompt> <userinput>neutron net-create --tenant-id DEMO_TENANT_ID demo-net SPECIAL_OPTIONS</userinput>
<prompt>#</prompt> <userinput>neutron subnet-create --tenant-id DEMO_TENANT_ID demo-net 10.5.5.0/24 --gateway 10.5.5.1</userinput>
<prompt>#</prompt> <userinput>neutron router-interface-add EXT_TO_INT_ID DEMO_NET_SUBNET_ID</userinput></screen>
        <para>Now, check your plugin's special options page to see whether there are steps left to
            perform, and then return whence you came.</para>
|
||||
<section xml:id="install-neutron.configure-networks.plugin-specific">
|
||||
<title>Plugin-specific Neutron networks options</title>
|
||||
<section xml:id="install-neutron.configure-networks.plugin-specific.ovs">
|
||||
<title>Open vSwitch Network Configuration Options</title>
|
||||
<section xml:id="install-neutron.configure-networks.plugin-specific.ovs.gre">
|
||||
<title>GRE Tunneling network options</title>
|
||||
<para>When creating networks, you should use the options:</para>
<screen>
<userinput>--provider:network_type gre --provider:segmentation_id SEG_ID</userinput>
</screen>
<para><replaceable>SEG_ID</replaceable> should be <literal>2</literal>
for the external network, and any unique number inside the
tunnel range specified before for any other network.</para>
<note>
<para>These options are not needed beyond the first network, as
Neutron will automatically increment the segmentation ID and copy
the network type option for any additional networks.</para>
</note>
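<para>For example, with the GRE options filled in for <replaceable>SPECIAL_OPTIONS</replaceable>, creating an internal network (with a hypothetical segmentation ID of <literal>3</literal>) might look like:</para>
<screen>
<prompt>#</prompt> <userinput>neutron net-create --tenant-id DEMO_TENANT_ID demo-net --provider:network_type gre --provider:segmentation_id 3</userinput>
</screen>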
<para>After you have finished creating all the networks, we need to
specify some more details for the L3 agent. We need to tell it
what the external network's ID is, as well as the ID of the router
associated with this machine (because we are not using namespaces,
there can be only one router per machine). To do this, edit
<filename>/etc/neutron/l3_agent.ini</filename>:</para>
<programlisting language="ini">
gateway_external_network_id = EXT_NET_ID
router_id = EXT_TO_INT_ID
</programlisting>
<para>Then, restart the L3 agent.</para>
<screen>
<prompt>#</prompt> <userinput>service neutron-l3-agent restart</userinput>
</screen>
<para>Return to the starting point.</para>
</section>
<section xml:id="install-neutron.configure-networks.plugin-specific.ovs.vlan">
<title>VLAN network options</title>
<para>When creating networks, you should use the options:</para>
<screen>
<userinput>--provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id SEG_ID</userinput>
</screen>
<para><replaceable>SEG_ID</replaceable> should be <literal>2</literal> for the external network, and any unique number
inside the VLAN range specified before for any other network.</para>
<note>
<para>These options are not needed beyond the first network, as
Neutron will automatically increment the segmentation ID and copy
the network type and physical network options for any additional
networks.</para>
</note>
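<para>As with GRE, these options replace <replaceable>SPECIAL_OPTIONS</replaceable> in the earlier network creation commands. For an internal network with a hypothetical segmentation ID of <literal>3</literal>, this might look like:</para>
<screen>
<prompt>#</prompt> <userinput>neutron net-create --tenant-id DEMO_TENANT_ID demo-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 3</userinput>
</screen>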
<warning>
<para>Some NICs have Linux drivers that do not handle VLANs properly.
See the <literal>ovs-vlan-bug-workaround</literal> and <literal>ovs-vlan-test</literal>
man pages for more information. Additionally, you may try turning off
<literal>rx-vlan-offload</literal> and <literal>tx-vlan-offload</literal> using <literal>ethtool</literal> on
the <replaceable>DATA_INTERFACE</replaceable>. VLAN tags also add an additional 4 bytes to the packet size. If your NICs cannot handle large packets, make sure to set the MTU 4 bytes lower than normal on the <replaceable>DATA_INTERFACE</replaceable>.</para>
<para>If you are running OpenStack inside a virtualized environment (for testing purposes),
switching to the <literal>virtio</literal> NIC type (or a similar technology if
you are not using KVM/QEMU) may solve the issue.</para>
</warning>
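<para>For example (assuming a reasonably recent <literal>ethtool</literal> and a data interface named <literal>eth1</literal>), the offload and MTU workarounds from the warning above might look like:</para>
<screen>
<prompt>#</prompt> <userinput>ethtool -K eth1 rxvlan off txvlan off</userinput>
<prompt>#</prompt> <userinput>ip link set dev eth1 mtu 1496</userinput>
</screen>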
</section>
</section>
</section>
</section>
<section xml:id="install-neutron.dedicated-compute-node">
<title>Install Required Networking Support on a Dedicated Compute Node</title>
<note>
<para>This is for any node which is running compute services but is not running the full
network stack.</para>
</note>
<warning os="rhel;centos">
<para>By default, an automated firewall configuration tool called <literal>system-config-firewall</literal> is in place on RHEL. This tool is a graphical interface (and a curses-style interface with <literal>-tui</literal> on the end of the name) for configuring IP tables as a basic firewall. You should disable it when working with Neutron unless you are familiar with the underlying network technologies, as, by default, it will block various types of network traffic that are important to Neutron. To disable it, simply launch the program and uncheck the "Enabled" checkbox.</para>
<para>Once you have successfully set up OpenStack with Neutron, you can
re-enable it if you wish and figure out exactly how you need to configure
it. For the duration of the setup, however, it will make finding network
issues easier if you don't have it blocking all unrecognized
traffic.</para>
</warning>
<!--
<note>
<para>Before we start, make sure your compute node is set up according to <link linkend="">common setup</link> directions.</para>
</note>
-->
<para>To start out, we need to disable reverse-path filtering (source route verification) so that the networking services can route traffic to the VMs. Edit <filename>/etc/sysctl.conf</filename> (and then restart networking):</para>
<programlisting language="ini">net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0</programlisting>
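<para>To apply these settings without rebooting, you can also load them directly from <filename>/etc/sysctl.conf</filename> and verify the result:</para>
<screen>
<prompt>#</prompt> <userinput>sysctl -p</userinput>
<prompt>#</prompt> <userinput>sysctl net.ipv4.conf.all.rp_filter</userinput>
</screen>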
<para>Next, we need to install and configure plugin components. Follow the <link
linkend="install-neutron.install-plugin-compute">instructions</link> for configuring and
installing your plugin of choice.</para>
<para>Now that you have installed and configured a plugin, it is time to configure the main part of Neutron by editing <filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting language="ini">
auth_host = CONTROLLER_NODE_MGMT_IP
admin_tenant_name = service
admin_user = neutron
admin_password = ADMIN_PASSWORD
auth_url = http://CONTROLLER_NODE_MGMT_IP:35357/v2.0
auth_strategy = keystone
rpc_backend = YOUR_RPC_BACKEND
PUT_YOUR_RPC_BACKEND_SETTINGS_HERE_TOO</programlisting>
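<para>For example, if RabbitMQ is your RPC backend, the last two lines might look like the following (the host and password values are placeholders for your own deployment, and the exact option names depend on your Neutron release):</para>
<programlisting language="ini">
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = CONTROLLER_NODE_MGMT_IP
rabbit_password = RABBIT_PASSWORD
</programlisting>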
<section xml:id="install-neutron.install-plugin-compute">
<title>Installing and configuring the Neutron plugins on the dedicated compute node</title>
<section xml:id="install-neutron.install-plugin-compute.ovs">
<title>Installing the Open vSwitch (OVS) plugin on the dedicated compute node</title>
<para>First, we must install the Open vSwitch plugin and its
dependencies.</para>
<screen os="rhel;fedora;centos">
<prompt>#</prompt> <userinput>yum install openstack-neutron-openvswitch</userinput>
</screen>
<!-- TODO(sross): support other distros -->
<para>Now, we start up Open vSwitch.</para>
<screen os="rhel;fedora;centos">
<prompt>#</prompt> <userinput>service openvswitch start</userinput>
</screen>
<para>Next, we must do some initial configuration for Open vSwitch, no
matter whether we are using VLANs or GRE tunneling. We need to add the
integration bridge (this connects to the VMs), called
<literal>br-int</literal>.</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput></screen>
<para>Finally, we can now configure the settings for the particular plugins. First,
there are some general <acronym>OVS</acronym> configuration options to set, no matter
whether you use VLANs or GRE tunneling. We need to tell Neutron core to
use <acronym>OVS</acronym> by editing <filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting language="ini">
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
</programlisting>
<para>We also need to tell the <acronym>OVS</acronym> plugin how to connect to the
database by editing <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting language="ini">
[database]
sql_connection = DATABASE_TYPE://neutron:NEUTRON_PASSWORD@CONTROLLER_NODE_HOSTNAME/neutron
</programlisting>
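<para>For example, with MySQL, a hypothetical controller host name of <literal>controlnode</literal>, and a hypothetical password of <literal>neutronpass</literal>, the connection string would look like:</para>
<programlisting language="ini">
[database]
sql_connection = mysql://neutron:neutronpass@controlnode/neutron
</programlisting>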
<para>Now, we must perform the configuration for the network type we chose when
configuring the network node: <link linkend="install-neutron.install-plugin-compute.ovs.gre">GRE tunneling</link> or <link linkend="install-neutron.install-plugin-compute.ovs.vlan">VLANs</link>.</para>
<!-- TODO(sross): support provider networks? We need to modify things above for this to work -->
<para>Now, you have the option of configuring a firewall. If you do not wish to enforce
firewall rules (called <firstterm>Security Groups</firstterm> by Neutron), you may use
<literal>neutron.agent.firewall.NoopFirewallDriver</literal>. Otherwise, you may choose one of
the Neutron firewall plugins to use. To use the Hybrid OVS-IPTables driver (the most
common choice), edit
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting language="ini">
[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
</programlisting>
<warning>
<para>You must use at least the no-op firewall driver mentioned above.
Otherwise, Horizon and other OpenStack services will not be able to
get and set required VM boot options.</para>
</warning>
<!-- TODO(sross): document other firewall options -->
<para>After you have finished the above OVS configuration <emphasis>as
well as the core Neutron configuration after this
section</emphasis>, restart the Neutron Open vSwitch agent:</para>
<screen>
<prompt>#</prompt> <userinput>service neutron-openvswitch-agent restart</userinput>
</screen>
<para>Now, return to where you started.</para>
<section xml:id="install-neutron.install-plugin-compute.ovs.gre">
<title>Configuring the Neutron <acronym>OVS</acronym> plugin for GRE Tunneling on the dedicated compute node</title>
<para>We must tell the <acronym>OVS</acronym> plugin to use GRE tunneling,
using an integration bridge of <literal>br-int</literal> and a tunneling bridge of <literal>br-tun</literal>, and to use a local IP for the tunnel of <replaceable>DATA_INTERFACE</replaceable>'s IP. Edit <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting language="ini">
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP
</programlisting>
<para>Now, return to the <acronym>OVS</acronym> general
instructions.</para>
</section>
<section xml:id="install-neutron.install-plugin-compute.ovs.vlan">
<title>Configuring the Neutron <acronym>OVS</acronym> plugin for VLANs
(work in progress)</title>
<!-- NOTE(sross): this is a WIP, and has yet to be tested. Additionally, a plugin install guide specific to compute nodes will have to be written -->
<para>First, we must tell <acronym>OVS</acronym> that we want to use VLANs by editing <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting language="ini">
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-DATA_INTERFACE
</programlisting>
<para>Then, create the bridge for <replaceable>DATA_INTERFACE</replaceable> and add <replaceable>DATA_INTERFACE</replaceable> to it:</para>
<screen>
<prompt>#</prompt> <userinput>ovs-vsctl add-br br-DATA_INTERFACE</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE</userinput>
</screen>
<para>Now, return to the <acronym>OVS</acronym> general
instructions.</para>
</section>
</section>
</section>
</section>
<section xml:id="install-neutron.dedicated-controller-node">
<title>Install required Networking support on a dedicated controller node</title>
<warning os="rhel;centos">
<para>By default, an automated firewall configuration tool called
<literal>system-config-firewall</literal> is in place on RHEL. This tool is a
graphical interface (and a curses-style interface with <literal>-tui</literal> on
the end of the name) for configuring IP tables as a basic firewall. You should
disable it when working with Neutron unless you are familiar with the underlying
network technologies, as, by default, it will block various types of network traffic
that are important to Neutron. To disable it, simply launch the program and uncheck
the "Enabled" checkbox.</para>
<para>Once you have successfully set up OpenStack with Neutron, you can
re-enable it if you wish and figure out exactly how you need to
configure it. For the duration of the setup, however, it will make
finding network issues easier if you don't have it blocking all
unrecognized traffic.</para>
</warning>
<para>First, we need to install the main Neutron server, the Neutron libraries for Python, and the Neutron CLI:</para>
<screen os="fedora;rhel;centos">
<prompt>#</prompt> <userinput>yum install openstack-neutron python-neutron python-neutronclient</userinput>
</screen>
<!-- TODO(sross): support other distros -->
<para>Now, we need to set up the Neutron server, as usual. Make sure to do the core
server component setup (RPC backend config, auth_strategy, and so on). Then, we'll
need to configure Neutron's copy of <filename>api-paste.ini</filename> at <filename>/etc/neutron/api-paste.ini</filename>:</para>
<programlisting language="ini">
[filter:authtoken]
EXISTING_STUFF_HERE
admin_tenant_name = service
admin_user = neutron
admin_password = ADMIN_PASSWORD
</programlisting>
<para>Now, we need to configure the plugin you chose when we configured the network node. Follow the <link linkend="install-neutron.install-plugin-controller">instructions</link> and return.</para>
<para>Next, we need to tell Nova about Neutron. Specifically, we need to tell Nova about Neutron's endpoint, and that Neutron will handle firewall issues, so Nova should not use a firewall of its own. We can do this by editing <filename>/etc/nova/nova.conf</filename>:</para>
<programlisting language="ini">
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://CONTROLLER_MGMT_IP:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=ADMIN_PASSWORD
neutron_admin_auth_url=http://CONTROLLER_MGMT_IP:35357/v2.0
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
</programlisting>
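<para>For these settings to take effect, restart the Compute services that read <filename>nova.conf</filename>. On RHEL-based distributions the service names are typically prefixed with <literal>openstack-</literal>, for example:</para>
<screen>
<prompt>#</prompt> <userinput>service openstack-nova-api restart</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-scheduler restart</userinput>
</screen>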
<para>Finally, we just need to start neutron-server:</para>
<screen>
<prompt>#</prompt> <userinput>service neutron-server start</userinput>
</screen>
<note>
<para>Make sure to check that the server started successfully. If you
get errors about the missing file <filename>plugin.ini</filename>,
simply make a symlink pointing at
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>
with the name <filename>/etc/neutron/plugin.ini</filename>.</para>
</note>
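<para>Creating that symlink (assuming the Open vSwitch plugin paths used in this guide) and restarting the server looks like:</para>
<screen>
<prompt>#</prompt> <userinput>ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini</userinput>
<prompt>#</prompt> <userinput>service neutron-server restart</userinput>
</screen>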
<section xml:id="install-neutron.install-plugin-controller">
<title>Installing and configuring the Neutron plugins on the dedicated controller node</title>
<section xml:id="install-neutron.install-plugin-controller.ovs">
<title>Installing the Open vSwitch (OVS) plugin on the dedicated controller node</title>
<para>First, we must install the Open vSwitch plugin:</para>
<screen os="rhel;fedora;centos">
<prompt>#</prompt> <userinput>yum install openstack-neutron-openvswitch</userinput>
</screen>
<!-- TODO(sross): support other distros -->
<para>Then, we can configure the settings for the particular plugins. First, there are some general <acronym>OVS</acronym> configuration options to set, no matter whether you use VLANs or GRE tunneling. We need to tell Neutron core to use <acronym>OVS</acronym> by editing <filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting language="ini">
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
</programlisting>
<para>We also need to tell the <acronym>OVS</acronym> plugin how to connect to the database by editing <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting language="ini">
[database]
sql_connection = DATABASE_TYPE://neutron:NEUTRON_PASSWORD@CONTROLLER_NODE_HOSTNAME/neutron
</programlisting>
<para>Now, we must perform the configuration for the network type we chose when configuring the network node: <link linkend="install-neutron.install-plugin-controller.ovs.gre">GRE tunneling</link> or <link linkend="install-neutron.install-plugin-controller.ovs.vlan">VLANs</link>.</para>
<!-- TODO(sross): support provider networks? We need to modify things above for this to work -->
<!-- TODO(sross): document firewall? -->
<note>
<para>Notice that the dedicated controller node does not actually need
to run the Open vSwitch agent, nor does it need to run Open vSwitch
itself.</para>
</note>
<para>Now, return to where you started.</para>
<section xml:id="install-neutron.install-plugin-controller.ovs.gre">
<title>Configuring the Neutron <acronym>OVS</acronym> plugin for GRE Tunneling on the dedicated controller node</title>
<para>We must tell the <acronym>OVS</acronym> plugin to use GRE tunneling.
Edit <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting language="ini">
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
</programlisting>
<para>Now, return to the <acronym>OVS</acronym> general instructions.</para>
</section>
<section xml:id="install-neutron.install-plugin-controller.ovs.vlan">
<title>Configuring the Neutron <acronym>OVS</acronym> plugin for VLANs</title>
<!-- NOTE(sross): this is a WIP, and has yet to be tested. Additionally, a plugin install guide specific to compute nodes will have to be written -->
<para>First, we must tell <acronym>OVS</acronym> that we want to use VLANs by
editing <filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting language="ini">
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
</programlisting>
<para>Now, return to the <acronym>OVS</acronym> general instructions.</para>
</section>
</section>
</section>
</section>
</section>
@ -3,10 +3,9 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_networking-routers-with-private-networks">
<title>Per-tenant routers with private networks</title>
<para>This section describes how to install the Networking service and its components for a
per-tenant routers with private networks use case.</para>
<informalfigure>
<mediaobject>
<imageobject>
@ -16,7 +15,7 @@
</imageobject>
</mediaobject>
</informalfigure>
<para>The following figure shows the setup:</para>
<informalfigure>
<mediaobject>
<imageobject>
@ -26,7 +25,7 @@
</imageobject>
</mediaobject>
</informalfigure>
<para>As shown in the figure, the setup includes:</para>
<itemizedlist>
<listitem>
<para>An interface for management traffic on each
@ -65,52 +64,35 @@
<tbody>
<tr>
<td>Controller Node</td>
<td><para>Runs the Networking service, Identity, and all of the Compute services that are required to
deploy VMs (<systemitem class="service">nova-api</systemitem>, <systemitem
class="service">nova-scheduler</systemitem>, for example). The node must
have at least one network interface, which is connected to the Management
Network. The host name is controlnode, which every other node resolves to
the IP of the controller node.</para><note>
<para>The <systemitem class="service">nova-network</systemitem> service
should not be running. This is replaced by Networking.</para>
</note></td>
</tr>
<tr>
<td>Compute Node</td>
<td>Runs the Networking L2 agent and the Compute services that run VMs (<systemitem
class="service">nova-compute</systemitem> specifically, and optionally other
<systemitem class="service">nova-*</systemitem> services depending on
configuration). The node must have at least two network interfaces. One
interface communicates with the controller node through the management network.
The other interface is used for the VM traffic on the data network. The VM receives
its IP address from the DHCP agent on this network.</td>
</tr>
<tr>
<td>Network Node</td>
<td>Runs the Networking L2 agent, DHCP agent, and L3 agent. This node has access to the
external network. The DHCP agent allocates IP addresses to the VMs on the data
network. (Technically, the addresses are allocated by the Networking server, and
distributed by the DHCP agent.) The node must have at least two network
interfaces. One interface communicates with the controller node through the
management network. The other interface is used as the external network. GRE tunnels
are set up as data networks.</td>
</tr>
<tr>
<td>Router</td>
@ -120,67 +102,53 @@
</tr>
</tbody>
</informaltable>
<para>The use case assumes the following:</para>
<para><emphasis role="bold">Controller node</emphasis></para>
<orderedlist>
<listitem>
<para>Relevant Compute services are installed, configured, and running.</para>
</listitem>
<listitem>
<para>Glance is installed, configured, and running. In
addition, an image named tty must be present.</para>
</listitem>
<listitem>
<para>Identity is installed, configured, and running. A Networking user named <emphasis
role="bold">neutron</emphasis> should be created on tenant <emphasis role="bold"
>servicetenant</emphasis> with password <emphasis role="bold"
>servicepassword</emphasis>.</para>
</listitem>
<listitem>
<para>Additional services: <itemizedlist>
<listitem>
<para>RabbitMQ is running with the default guest user and password</para>
</listitem>
<listitem>
<para>MySQL server (user is <emphasis role="bold">root</emphasis> and
password is <emphasis role="bold">root</emphasis>)</para>
</listitem>
</itemizedlist></para>
</listitem>
</orderedlist>
<para><emphasis role="bold">Compute node</emphasis></para>
<para>Compute is installed and configured.</para>
<section xml:id="demo_routers_with_private_networks_installions">
<title>Install</title>
<para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Controller node - Networking server</emphasis><orderedlist>
<listitem>
<para>Install the Networking server.</para>
</listitem>
<listitem>
<para>Create database <emphasis role="bold"
>ovs_neutron</emphasis>.</para>
</listitem>
<listitem>
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename>, with plug-in choice
and Identity Service user as necessary:</para>
<programlisting>[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
@ -194,9 +162,8 @@ admin_password=servicepassword
</programlisting>
</listitem>
<listitem>
<para>Update the plug-in configuration file,
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting>[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
@ -206,34 +173,25 @@ enable_tunneling = True
</programlisting>
</listitem>
<listitem>
<para>Start the Networking server.</para>
<para>The Networking server can be a service of the operating
system. The command to start the service depends on your
operating system. The following command runs the Networking
server directly:</para>
<screen><prompt>#</prompt> <userinput>neutron-server --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
--config-file /etc/neutron/neutron.conf</userinput></screen>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Compute node - Compute</emphasis><orderedlist>
<listitem>
<para>Install Compute services.</para>
</listitem>
<listitem>
<para>Update the Compute configuration file, <filename>
/etc/nova/nova.conf</filename>. Make sure the following lines
appear at the end of this file:</para>
<programlisting>network_api_class=nova.network.neutronv2.api.API

neutron_admin_username=neutron
@ -247,31 +205,25 @@ libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
</programlisting>
</listitem>
<listitem>
<para>Restart relevant Compute services.</para>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Compute and Network node - L2 agent</emphasis><orderedlist>
<listitem>
<para>Install and start Open vSwitch.</para>
</listitem>
<listitem>
<para>Install the L2 agent (Neutron Open vSwitch agent).</para>
</listitem>
<listitem>
<para>Add the integration bridge to Open vSwitch:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput></screen>
</listitem>
<listitem>
|
||||
<para>Update the OpenStack Networking
|
||||
configuration file, <filename>
|
||||
/etc/neutron/neutron.conf</filename></para>
|
||||
<para>Update the Networking configuration file, <filename>
|
||||
/etc/neutron/neutron.conf</filename></para>
|
||||
<programlisting language="ini">[DEFAULT]
|
||||
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
|
||||
control_exchange = neutron
|
||||
@ -280,10 +232,9 @@ notification_driver = neutron.openstack.common.notifier.rabbit_notifier
|
||||
</programlisting>
</listitem>
<listitem>
<para>Update the plug-in configuration file, <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>.</para>
<para>Compute node:</para>
<programlisting language="ini">[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
@ -292,7 +243,7 @@ tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = 9.181.89.202
</programlisting>
<para>Network node:</para>
<programlisting language="ini">[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
@ -303,36 +254,29 @@ local_ip = 9.181.89.203
</programlisting>
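Since the plug-in file is identical on every node except for `local_ip`, stamping the per-node tunnel endpoint into a template is easy to script. A sketch on a temporary copy (the real path is /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini; GNU `sed -i` is assumed):

```shell
#!/bin/sh
# Sketch: set this node's data-network IP in a copy of the plug-in file.
INI=$(mktemp)
cat > "$INI" <<'EOF'
[ovs]
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = PLACEHOLDER
EOF
NODE_IP=9.181.89.202   # this node's tunnel endpoint (example value)
sed -i "s/^local_ip = .*/local_ip = $NODE_IP/" "$INI"
grep '^local_ip' "$INI"
```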
</listitem>
<listitem>
<para>Create the integration bridge <emphasis role="bold"
>br-int</emphasis>:</para>
<screen><prompt>#</prompt> <userinput>sudo ovs-vsctl --may-exist add-br br-int</userinput></screen>
</listitem>
<listitem>
<para>Start the Networking L2 agent.</para>
<para>The Networking Open vSwitch L2 agent can run as an
operating system service. The command to start the service
depends on your operating system. The following command runs
the agent directly:</para>
<screen><prompt>#</prompt> <userinput>sudo neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
--config-file /etc/neutron/neutron.conf</userinput></screen>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Network node: DHCP agent</emphasis><orderedlist>
<listitem>
<para>Install the DHCP agent.</para>
</listitem>
<listitem>
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename>:</para>
<programlisting>[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
@ -340,71 +284,56 @@ rabbit_host = controller
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
allow_overlapping_ips = True</programlisting>
<para><emphasis role="bold">Set
<literal>allow_overlapping_ips</literal> because TenantA
and TenantC use overlapping subnets.</emphasis></para>
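To see why overlapping tenant subnets need this flag, here is a small sketch of a CIDR overlap test in plain shell (the helper names are ours, not a Neutron tool): two tenants can both use 10.0.0.0/24, and only namespace isolation keeps the ranges apart.

```shell
#!/bin/sh
# Sketch: detect whether two IPv4 CIDRs overlap.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
cidrs_overlap() {  # usage: cidrs_overlap 10.0.0.0/24 10.0.0.0/24
    n1=$(ip_to_int "${1%/*}"); p1=${1#*/}
    n2=$(ip_to_int "${2%/*}"); p2=${2#*/}
    m1=$(( 0xffffffff << (32 - p1) & 0xffffffff ))
    m2=$(( 0xffffffff << (32 - p2) & 0xffffffff ))
    # Overlap iff one network contains the other's base address.
    [ $(( n1 & m2 )) -eq $(( n2 & m2 )) ] || [ $(( n2 & m1 )) -eq $(( n1 & m1 )) ]
}
cidrs_overlap 10.0.0.0/24 10.0.0.0/24 && echo "overlap"    # prints overlap
cidrs_overlap 10.0.0.0/24 10.0.1.0/24 || echo "disjoint"   # prints disjoint
```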
</listitem>
<listitem>
<para>Update the DHCP configuration file <filename>
/etc/neutron/dhcp_agent.ini</filename>:</para>
<programlisting>interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>
<listitem>
<para>Start the DHCP agent.</para>
<para>The Networking DHCP agent can run as an operating
system service. The command to start the service depends on your
operating system. The following command runs the service
directly:</para>
<screen><prompt>#</prompt> <userinput>sudo neutron-dhcp-agent --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/dhcp_agent.ini</userinput></screen>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Network node: L3 agent</emphasis><orderedlist>
<listitem>
<para>Install the L3 agent.</para>
</listitem>
<listitem>
<para>Add the external network bridge:</para>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-br br-ex</userinput></screen>
</listitem>
<listitem>
<para>Add the physical interface, for example eth0, that is
connected to the outside network to this bridge:</para>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-port br-ex eth0</userinput></screen>
</listitem>
<listitem>
<para>Update the L3 configuration file <filename>
/etc/neutron/l3_agent.ini</filename>:</para>
<programlisting>[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces=True</programlisting>
<para><emphasis role="bold">Set the
<literal>use_namespaces</literal> option (it is True by
default) because TenantA and TenantC have overlapping
subnets, and the routers are hosted on one L3 agent network
node.</emphasis></para>
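When namespaces are in use, each router gets a <literal>qrouter-&lt;uuid&gt;</literal> network namespace and each DHCP-enabled network a <literal>qdhcp-&lt;uuid&gt;</literal> one. A sketch that composes the expected names (the UUIDs are example values; on a real network node you would compare against the output of `ip netns list`):

```shell
#!/bin/sh
# Sketch: build the namespace names Neutron is expected to create for a
# router and a DHCP-enabled network. UUIDs below are illustrative only.
ROUTER_ID=7e5c2628-8d0d-48cc-b6b8-f60f2db6aa28
NET_ID=91309738-c317-40a3-81bb-bed7a3917a85
expected_ns="qrouter-$ROUTER_ID
qdhcp-$NET_ID"
echo "$expected_ns"
```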
</listitem>
<listitem>
<para>Start the L3 agent.</para>
<para>The Networking L3 agent can run as an operating system service.
The command to start the service depends on your operating
system. The following command starts the agent directly:</para>
<screen><prompt>$</prompt> <userinput>sudo neutron-l3-agent --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/l3_agent.ini</userinput></screen>
</listitem>
@ -414,13 +343,12 @@ use_namespaces=True</programlisting>
</para>
</section>
<section xml:id="demo_per_tenant_router_network_config">
<title>Configure logical network</title>
<para>All of the following commands can be executed on the network
node.</para>
<note>
<para>Ensure that the following environment variables are set. These are used by the
various clients to access the Identity service.</para>
</note>
<para>
<programlisting language="bash">export OS_USERNAME=admin
@ -431,9 +359,8 @@ use_namespaces=True</programlisting>
<para>
<orderedlist>
<listitem>
<para>Get the tenant ID (Used as $TENANT_ID later):</para>
<screen><prompt>#</prompt> <userinput>keystone tenant-list</userinput>
<computeroutput>+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
@ -446,8 +373,8 @@ use_namespaces=True</programlisting>
</computeroutput></screen>
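The tenant ID can be pulled out of the table programmatically instead of copied by hand. A sketch with awk; the `tenant_id_of` helper is ours, and the sample table mimics the output format above:

```shell
#!/bin/sh
# Sketch: extract a tenant's ID from `keystone tenant-list` table output
# so it can be reused as $TENANT_ID.
tenant_id_of() {  # usage: keystone tenant-list | tenant_id_of TenantA
    awk -F'|' -v name="$1" '$3 ~ ("^ *" name " *$") { gsub(/ /, "", $2); print $2 }'
}
sample='+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 247e478c599f45b5bd297e8ddbbc9b6a | TenantA |   True  |
+----------------------------------+---------+---------+'
TENANT_ID=$(echo "$sample" | tenant_id_of TenantA)
echo "$TENANT_ID"   # prints 247e478c599f45b5bd297e8ddbbc9b6a
```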
</listitem>
<listitem>
<para>Get the user information:</para>
<screen><prompt>#</prompt> <userinput>keystone user-list</userinput>
<computeroutput>+----------------------------------+-------+---------+-------------------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-------------------+
@ -462,7 +389,7 @@ use_namespaces=True</programlisting>
<listitem>
<para>Create the external network and its subnet
as the admin user:</para>
<screen><prompt>#</prompt> <userinput>neutron net-create Ext-Net --provider:network_type local --router:external true</userinput>
<computeroutput>Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
@ -481,7 +408,7 @@ use_namespaces=True</programlisting>
+---------------------------+--------------------------------------+
</computeroutput></screen>

<screen><prompt>#</prompt> <userinput>neutron subnet-create Ext-Net 30.0.0.0/24 --disable-dhcp</userinput>
<computeroutput>Created a new subnet:
+------------------+--------------------------------------------+
| Field | Value |
@ -500,19 +427,14 @@ use_namespaces=True</programlisting>
+------------------+--------------------------------------------+
</computeroutput></screen>
<para><emphasis role="bold">
<literal>provider:network_type local</literal> means that Networking
does not have to realize this network through a provider network.
<literal>router:external true</literal> means that an external
network is created, where you can create floating IPs and router gateway
ports.</emphasis></para>
</listitem>
<listitem>
<para>Add an IP on the external network to br-ex.</para>
<para>Because br-ex is the external network
bridge, add an IP 30.0.0.100/24 to br-ex and
ping the floating IP of the VM from our
@ -521,14 +443,14 @@ use_namespaces=True</programlisting>
<prompt>$</prompt> sudo ip link set br-ex up</userinput></screen>
</listitem>
<listitem>
<para>Serve TenantA.</para>
<para>For TenantA, create a private network,
subnet, server, router, and floating
IP.</para>
<orderedlist>
<listitem>
<para>Create a network for TenantA:</para>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 net-create TenantA-Net</userinput>
<computeroutput>Created a new network:
+-----------------+--------------------------------------+
@ -546,7 +468,7 @@ use_namespaces=True</programlisting>
<para>After that, you can use the admin user
to query the provider network
information:</para>
<screen><prompt>#</prompt> <userinput>neutron net-show TenantA-Net</userinput>
<computeroutput>+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
@ -568,9 +490,8 @@ use_namespaces=True</programlisting>
1.</para>
</listitem>
<listitem>
<para>Create a subnet on the network TenantA-Net:</para>
<screen><prompt>#</prompt> <userinput>
neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 subnet-create TenantA-Net 10.0.0.0/24</userinput>
<computeroutput>Created a new subnet:
@ -617,7 +538,7 @@ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
<listitem>
<para>Create and configure a router for
TenantA:</para>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 router-create TenantA-R1</userinput>
<computeroutput>Created a new router:
+-----------------------+--------------------------------------+
@ -631,19 +552,18 @@ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
| tenant_id | 247e478c599f45b5bd297e8ddbbc9b6a |
+-----------------------+--------------------------------------+
</computeroutput></screen>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 router-interface-add \
TenantA-R1 51e2c223-0492-4385-b6e9-83d4e6d10657</userinput></screen>
<para>Added interface to router TenantA-R1.</para>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 \
router-gateway-set TenantA-R1 Ext-Net</userinput></screen>
</listitem>
<listitem>
<para>Associate a floating IP with TenantA_VM1.</para>
<para>1. Create a floating IP:</para>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 floatingip-create Ext-Net</userinput>
<computeroutput>Created a new floatingip:
+---------------------+--------------------------------------+
@ -659,7 +579,7 @@ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
+---------------------+--------------------------------------+
</computeroutput></screen>
<para>2. Get the port ID of the VM with ID
7c5e6499-7ef7-4e36-8216-62c2941d21ff:</para>
<screen><prompt>$</prompt> <userinput>neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 port-list -- \
--device_id 7c5e6499-7ef7-4e36-8216-62c2941d21ff</userinput>
@ -669,8 +589,7 @@ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
| 6071d430-c66e-4125-b972-9a937c427520 | | fa:16:3e:a0:73:0d | {"subnet_id": "51e2c223-0492-4385-b6e9-83d4e6d10657", "ip_address": "10.0.0.3"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
</computeroutput></screen>
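The port ID and fixed IP needed in the next step can be scraped from that row rather than copied by hand. A sketch; the sample row mirrors the output format shown above:

```shell
#!/bin/sh
# Sketch: from a `neutron port-list -- --device_id <vm-id>` row, grab the
# port ID (column 2) and the fixed IP (inside the fixed_ips field).
row='| 6071d430-c66e-4125-b972-9a937c427520 |      | fa:16:3e:a0:73:0d | {"subnet_id": "51e2c223-0492-4385-b6e9-83d4e6d10657", "ip_address": "10.0.0.3"} |'
PORT_ID=$(echo "$row" | awk -F'|' '{ gsub(/ /, "", $2); print $2 }')
FIXED_IP=$(echo "$row" | sed -n 's/.*"ip_address": "\([^"]*\)".*/\1/p')
echo "$PORT_ID $FIXED_IP"
```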
<para>3. Associate the floating IP with the VM port:</para>
<screen><prompt>$</prompt> <userinput>neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 floatingip-associate \
5a1f90ed-aa3c-4df3-82cb-116556e96bf1 6071d430-c66e-4125-b972-9a937c427520</userinput>
@ -685,8 +604,7 @@ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
</computeroutput></screen>
</listitem>
<listitem>
<para>Ping the public network from the server of TenantA.</para>
<para>In my environment, 192.168.1.0/24 is
my public network connected with my
physical router, which also connects
@ -706,8 +624,7 @@ rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms
</computeroutput></screen>
</listitem>
<listitem>
<para>Ping the floating IP of TenantA's server:</para>
<screen><prompt>$</prompt> <userinput>ping 30.0.0.2</userinput>
<computeroutput>PING 30.0.0.2 (30.0.0.2) 56(84) bytes of data.
64 bytes from 30.0.0.2: icmp_req=1 ttl=63 time=45.0 ms
@ -720,8 +637,7 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
</computeroutput></screen>
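Right after floatingip-associate the NAT rule can take a moment to appear, so polling beats a single manual ping. A sketch (the `wait_for_ping` helper is ours; `-W` is the GNU iputils timeout flag and may differ on other ping implementations):

```shell
#!/bin/sh
# Sketch: poll an IP until it answers a ping, or give up after N tries.
wait_for_ping() {  # usage: wait_for_ping 30.0.0.2 [tries]
    ip=$1; tries=${2:-10}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
            echo "$ip is reachable"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "$ip did not answer after $tries pings" >&2
    return 1
}
wait_for_ping 127.0.0.1 3 || true   # demo invocation; result depends on the host
```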
</listitem>
<listitem>
<para>Create other servers for TenantA.</para>
<para>We can create more servers for
TenantA and add floating IPs for
them.</para>
@ -729,7 +645,7 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
</orderedlist>
</listitem>
<listitem>
<para>Serve TenantC.</para>
<para>For TenantC, we will create two private
networks with subnet 10.0.0.0/24 and subnet
10.0.1.0/24, some servers, one router to
@ -737,23 +653,22 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
IPs.</para>
<orderedlist>
<listitem>
<para>Create networks and subnets for TenantC:</para>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 net-create TenantC-Net1</userinput>
<prompt>#</prompt> <userinput>neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 subnet-create TenantC-Net1 \
10.0.0.0/24 --name TenantC-Subnet1</userinput>
<prompt>#</prompt> <userinput>neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 net-create TenantC-Net2</userinput>
<prompt>#</prompt> <userinput>neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 subnet-create TenantC-Net2 \
10.0.1.0/24 --name TenantC-Subnet2</userinput>
</screen>
<para>After that we can use the admin user to
query the network's provider network
information:</para>
<screen><prompt>#</prompt> <userinput>neutron net-show TenantC-Net1</userinput>
<computeroutput>+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
@ -770,7 +685,7 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
| tenant_id | 2b4fec24e62e4ff28a8445ad83150f9d |
+---------------------------+--------------------------------------+
</computeroutput></screen>
<screen><prompt>#</prompt> <userinput>neutron net-show TenantC-Net2</userinput>
<computeroutput>+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
@ -794,22 +709,20 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
them to create VMs and router.</para>
</listitem>
<listitem>
<para>Create a server TenantC-VM1 for TenantC on TenantC-Net1:</para>
<screen><prompt>#</prompt> <userinput>nova --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
--nic net-id=91309738-c317-40a3-81bb-bed7a3917a85 TenantC_VM1</userinput></screen>
</listitem>
<listitem>
<para>Create a server TenantC-VM3 for TenantC on TenantC-Net2:</para>
<screen><prompt>#</prompt> <userinput>nova --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
--nic net-id=5b373ad2-7866-44f4-8087-f87148abd623 TenantC_VM3</userinput></screen>
</listitem>
<listitem>
<para>List servers of TenantC:</para>
<screen><prompt>#</prompt> <userinput>nova --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 list</userinput>
<computeroutput>
+--------------------------------------+-------------+--------+-----------------------+
@ -823,37 +736,33 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
will use them later.</para>
</listitem>
<listitem>
<para>Make sure servers get their IPs.</para>
<para>We can use VNC to log on the VMs to check if they get IPs. If not,
we have to make sure the Networking components are running correctly and
the GRE tunnels work.</para>
</listitem>
<listitem>
<para>Create and configure a router for
TenantC:</para>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 router-create TenantC-R1</userinput></screen>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 router-interface-add \
TenantC-R1 cf03fd1e-164b-4527-bc87-2b2631634b83</userinput>
<prompt>#</prompt> <userinput>neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 router-interface-add \
TenantC-R1 38f0b2f0-9f98-4bf6-9520-f4abede03300</userinput></screen>
<screen><prompt>#</prompt> <userinput>neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 \
router-gateway-set TenantC-R1 Ext-Net</userinput></screen>
</listitem>
<listitem>
<para>Checkpoint: ping from within TenantC's servers.</para>
<para>Since we have a router connecting to two subnets, the VMs on these subnets are able to ping each other.
And since we have set the router's gateway interface, TenantC's servers are able to ping external network IPs, such as 192.168.1.1, 30.0.0.1, and so on.</para>
</listitem>
<listitem>
<para>Associate floating IPs with TenantC's servers.</para>
<para>We can use commands similar to those used in TenantA's section to finish
this task.</para>
@ -874,4 +782,24 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
</orderedlist>
</para>
</section>
<section xml:id="section_use-cases-tenant-router">
<title>Use case: per-tenant routers with private networks</title>
<para>This use case represents a more advanced router scenario in which each tenant gets at
least one router, and potentially has access to the Networking API to create additional
routers. The tenant can create their own networks, potentially uplinking those networks
to a router. This model enables tenant-defined, multi-tier applications, with each tier
being a separate network behind the router. Because there are multiple routers, tenant
subnets can overlap without conflicting, since access to external networks all happens
via SNAT or floating IPs. Each router uplink and floating IP is allocated from the
external network subnet.</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="55"
fileref="../common/figures/UseCase-MultiRouter.png" align="left"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1mmQc8cBUoTEfEns-ehIyQSTvOrjUdl5xeGDv9suVyAY/edit -->
</para>
</section>
</section>
@ -3,18 +9,
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_networking-provider-router_with-provate-networks">
<title>Provider router with private networks</title>
<para>This section describes how to install the OpenStack Networking service and its components
for a single router use case: a provider router with private networks.</para>
<para>The following figure shows the setup:</para>
<note>
<para>Because you run the DHCP agent and L3 agent on one node, you must set
@ -31,7 +22,7 @@
</mediaobject>
</informalfigure>
<para>The following nodes are in the setup:<table rules="all">
<caption>Nodes for use case</caption>
<thead>
<tr>
<th>Node</th>
@ -87,7 +78,7 @@
</tbody>
</table></para>
<section xml:id="demo_installions">
<title>Install</title>
<section xml:id="controller-install-neutron-server">
<title>Controller</title>
<procedure>
@ -137,7 +128,7 @@ admin_password = password
</section>
<section
xml:id="network-node-install-plugin-openvswitch-agent">
<title>Network node</title>
<procedure>
<title>To install and configure the network
node</title>
@ -179,14 +170,14 @@ bridge_mappings = physnet1:br-eth1
role="bold">br-eth1</emphasis> (All VM
communication between the nodes occurs through
eth1):</para>
<screen><prompt>#</prompt> <userinput>sudo ovs-vsctl add-br br-eth1
<prompt>#</prompt> sudo ovs-vsctl add-port br-eth1 eth1</userinput></screen>
|
||||
</step>
|
||||
<step>
|
||||
<para>Create the external network bridge to the
|
||||
Open vSwitch:</para>
|
||||
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-br br-ex
|
||||
<prompt>$</prompt> sudo ovs-vsctl add-port br-ex eth2</userinput></screen>
|
||||
<screen><prompt>#</prompt> <userinput>sudo ovs-vsctl add-br br-ex
|
||||
<prompt>#</prompt> sudo ovs-vsctl add-port br-ex eth2</userinput></screen>
|
||||
</step>
|
||||
<step>
|
||||
<para>Edit the file <filename>
|
||||
@ -297,7 +288,7 @@ export SERVICE_TOKEN=password</programlisting></para>
<itemizedlist>
<listitem>
<para>Export the
variables:<screen><prompt>#</prompt> <userinput>source novarc; echo "source novarc" >> .bashrc</userinput></screen>
</para>
</listitem>
</itemizedlist>
@ -309,7 +300,7 @@ export SERVICE_TOKEN=password</programlisting></para>
<step>
<para>Get the tenant ID (used as $TENANT_ID
later).</para>
<screen><prompt>#</prompt> <userinput>keystone tenant-list</userinput>
<computeroutput>+----------------------------------+--------------------+---------+
| id | name | enabled |
+----------------------------------+--------------------+---------+
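These table-style listings are meant for human reading; when scripting, the ID column can be pulled out with ordinary text processing. A minimal sketch (the sample rows below reuse the tenant_A ID shown later in this walkthrough; in a real deployment the text would come from the keystone client itself):

```python
# Sketch: extract a tenant's ID from 'keystone tenant-list' style
# table output. Sample data only, copied from this walkthrough.
table = """\
+----------------------------------+----------+---------+
| id                               | name     | enabled |
+----------------------------------+----------+---------+
| e40fa60181524f9f9ee7aa1038748f08 | tenant_A | True    |
+----------------------------------+----------+---------+
"""
# Keep only the pipe-delimited rows, skip the header row, then
# build a name -> id mapping from the first two columns.
rows = [line.split("|") for line in table.splitlines() if line.startswith("|")]
ids = {cols[2].strip(): cols[1].strip() for cols in rows[1:]}
tenant_id = ids["tenant_A"]
print(tenant_id)
```

The same approach works for the `keystone user-list` and `nova list` tables later in this section.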
@ -324,7 +315,7 @@ export SERVICE_TOKEN=password</programlisting></para>
role="bold">net1</emphasis> for tenant_A
($TENANT_ID will be
e40fa60181524f9f9ee7aa1038748f08):</para>
<screen><prompt>#</prompt> <userinput>neutron net-create --tenant-id $TENANT_ID net1</userinput>
<computeroutput>+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
@ -345,7 +336,7 @@ export SERVICE_TOKEN=password</programlisting></para>
<para>Create a subnet on the network <emphasis
role="bold">net1</emphasis> (ID field
below is used as $SUBNET_ID later):</para>
<screen><prompt>#</prompt> <userinput>neutron subnet-create --tenant-id $TENANT_ID net1 10.5.5.0/24</userinput>
<computeroutput>+------------------+--------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------+
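When no gateway is given on the command line, Neutron defaults to the first usable address in the CIDR as the gateway and builds the allocation pool from the remaining addresses. A quick sketch of that arithmetic with the standard library (the 10.5.5.0/24 CIDR comes from the command above; the defaulting behaviour described is a general observation, not quoted from this guide):

```python
import ipaddress

# Sketch: default addressing Neutron derives for a subnet created
# without an explicit gateway, using the 10.5.5.0/24 example CIDR.
net = ipaddress.ip_network("10.5.5.0/24")
hosts = list(net.hosts())             # 10.5.5.1 .. 10.5.5.254
gateway = hosts[0]                    # first usable address becomes the gateway
pool_start, pool_end = hosts[1], hosts[-1]
print(gateway, pool_start, pool_end)  # 10.5.5.1 10.5.5.2 10.5.5.254
```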
@ -371,7 +362,7 @@ export SERVICE_TOKEN=password</programlisting></para>
<para>Create a router named <emphasis role="bold"
>router1</emphasis> (ID is used as
$ROUTER_ID later):</para>
<screen><prompt>#</prompt> <userinput>neutron router-create router1</userinput>
<computeroutput>+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
@ -394,7 +385,7 @@ export SERVICE_TOKEN=password</programlisting></para>
>router1</emphasis> and attach it to the
subnet from <emphasis role="bold"
>net1</emphasis>:</para>
<screen><prompt>#</prompt> <userinput>neutron router-interface-add $ROUTER_ID $SUBNET_ID</userinput>
<computeroutput>Added interface to router 685f64e7-a020-4fdf-a8ad-e41194ae124b</computeroutput></screen>
<note>
<para>You can repeat this step to add more
@ -405,7 +396,7 @@ export SERVICE_TOKEN=password</programlisting></para>
<step>
<para>Create the external network named <emphasis
role="bold">ext_net</emphasis>:</para>
<screen><prompt>#</prompt> <userinput>neutron net-create ext_net --router:external=True</userinput>
<computeroutput>+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
@ -428,7 +419,7 @@ export SERVICE_TOKEN=password</programlisting></para>
<para>The DHCP service is disabled for this
subnet.</para>
</note>
<screen><prompt>#</prompt> <userinput>neutron subnet-create ext_net \
--allocation-pool start=7.7.7.130,end=7.7.7.150 \
--gateway 7.7.7.1 7.7.7.0/24 --disable-dhcp</userinput>
<computeroutput>+------------------+--------------------------------------------------+
@ -450,7 +441,7 @@ export SERVICE_TOKEN=password</programlisting></para>
<step>
<para>Set the router's gateway to be the external
network:</para>
<screen><prompt>#</prompt> <userinput>neutron router-gateway-set $ROUTER_ID $EXTERNAL_NETWORK_ID</userinput>
<computeroutput>Set gateway for router 685f64e7-a020-4fdf-a8ad-e41194ae124b</computeroutput></screen>
</step>
</procedure></para>
@ -463,7 +454,7 @@ export SERVICE_TOKEN=password</programlisting></para>
a VM after it starts. The ID of the port
($PORT_ID) that was allocated for the VM is
required and can be found as follows:</para>
<screen><prompt>#</prompt> <userinput>nova list</userinput>
<computeroutput>+--------------------------------------+--------+--------+---------------+
| ID | Name | Status | Networks |
+--------------------------------------+--------+--------+---------------+
@ -480,7 +471,7 @@ export SERVICE_TOKEN=password</programlisting></para>
<step>
<para>Allocate a floating IP (used as
$FLOATING_ID):</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-create ext_net</userinput>
<computeroutput>+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
@ -496,12 +487,12 @@ export SERVICE_TOKEN=password</programlisting></para>
<step>
<para>Associate the floating IP with the VM's
port:</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-associate $FLOATING_ID $PORT_ID</userinput>
<computeroutput>Associated floatingip 40952c83-2541-4d0c-b58e-812c835079a5</computeroutput></screen>
</step>
<step>
<para>Show the floating IP:</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-show $FLOATING_ID</userinput>
<computeroutput>+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
@ -516,7 +507,7 @@ export SERVICE_TOKEN=password</programlisting></para>
</step>
<step>
<para>Test the floating IP:</para>
<screen><prompt>#</prompt> <userinput>ping 7.7.7.131</userinput>
<computeroutput>PING 7.7.7.131 (7.7.7.131) 56(84) bytes of data.
64 bytes from 7.7.7.131: icmp_req=2 ttl=64 time=0.152 ms
64 bytes from 7.7.7.131: icmp_req=3 ttl=64 time=0.049 ms
@ -525,4 +516,35 @@ export SERVICE_TOKEN=password</programlisting></para>
</procedure>
</para>
</section>
<section xml:id="section_use-cases-single-router">
<title>Use case: provider router with private networks</title>
<para>This use case provides each tenant with one or more private networks, which connect to
the outside world via an OpenStack Networking router. When each tenant gets exactly one
network, this architecture maps to the same logical topology as the VlanManager in
OpenStack Compute (although, of course, OpenStack Networking does not require VLANs).
Using the OpenStack Networking API, the tenant can see only the private
networks assigned to that tenant. The router object in the API is created and owned by
the cloud administrator.</para>
<para>This model supports giving VMs public addresses using "floating IPs", in which the
router maps public addresses from the external network to fixed IPs on private networks.
Hosts without floating IPs can still create outbound connections to the external
network, because the provider router performs SNAT to the router's external IP. The IP
address of the physical router is used as the <literal>gateway_ip</literal> of the
external network subnet, so the provider has a default router for Internet traffic.</para>
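The relationship between the external CIDR, the physical router's address, and the floating pool can be sanity-checked with the standard ipaddress module. A sketch using the 7.7.7.0/24 values from the commands earlier in this chapter:

```python
import ipaddress

# Sketch: check that the example external subnet (7.7.7.0/24,
# gateway 7.7.7.1, floating pool 7.7.7.130-7.7.7.150) is
# self-consistent before handing it to neutron subnet-create.
net = ipaddress.ip_network("7.7.7.0/24")
gateway = ipaddress.ip_address("7.7.7.1")
start = ipaddress.ip_address("7.7.7.130")
end = ipaddress.ip_address("7.7.7.150")

assert gateway in net                 # gateway_ip lies on the subnet
assert start in net and end in net    # pool fits inside the CIDR
assert not (start <= gateway <= end)  # pool leaves the gateway free
pool_size = int(end) - int(start) + 1
print(pool_size)                      # 21 floating IPs available
```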
<para>
The router provides L3 connectivity between private networks, meaning
that different tenants can reach each other's instances unless additional
filtering is used (for example, security groups). Because there is only a single
router, tenant networks cannot use overlapping IPs. Thus, it is likely
that the administrator would create the private networks on behalf of the tenants.
</para>
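Before creating a tenant network behind the single provider router, an administrator can verify that a proposed CIDR does not collide with the ones already routed. A minimal sketch (10.5.5.0/24 is the subnet from this guide; the candidate CIDR is illustrative):

```python
import ipaddress

# Sketch: reject a tenant subnet whose CIDR overlaps one already
# attached to the single provider router.
existing = [ipaddress.ip_network("10.5.5.0/24")]
candidate = ipaddress.ip_network("10.5.6.0/24")
conflict = any(candidate.overlaps(net) for net in existing)
print("conflict" if conflict else "ok")   # ok
```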
<para>
<mediaobject>
<imageobject>
<imagedata scale="55" fileref="../common/figures/UseCase-SingleRouter.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1DKxeZZXml_fNZHRoGPKkC7sGdkPJZCtWytYZqHIp_ZE/edit -->
</para>
</section>
</section>
@ -1,8 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_neutron-single-flat">
<title>Single flat network</title>
<para>This section describes how to install the OpenStack Networking service and its components
for a single flat network use case.</para>
<para>The diagram below shows the setup. For simplicity, all of the
@ -30,41 +31,31 @@
<tbody>
<tr>
<td>Controller Node</td>
<td>Runs the Networking service, Identity, and all of the Compute services that
are required to deploy VMs (<systemitem class="service">nova-api</systemitem>,
<systemitem class="service">nova-scheduler</systemitem>, for example). The
node must have at least one network interface, which is connected to the
"Management Network". The hostname is 'controlnode', which every other node
resolves to the controller node's IP. <emphasis role="bold">Note</emphasis> The
nova-network service should not be running. This is replaced by Networking.</td>
</tr>
<tr>
<td>Compute Node</td>
<td>Runs the OpenStack Networking L2 agent and the Compute services that run VMs
(<systemitem class="service">nova-compute</systemitem> specifically, and
optionally other nova-* services depending on configuration). The node must have
at least two network interfaces. The first is used to communicate with the
controller node via the management network. The second interface is used for the
VM traffic on the Data network. The VM will be able to receive its IP address
from the DHCP agent on this network.</td>
</tr>
<tr>
<td>Network Node</td>
<td>Runs the Networking L2 agent and the DHCP agent. The DHCP agent will allocate IP
addresses to the VMs on the network. The node must have at least two network
interfaces. The first is used to communicate with the controller node via the
management network. The second interface will be used for the VM traffic on the
data network.</td>
</tr>
<tr>
<td>Router</td>
@ -74,58 +65,56 @@
</tbody>
</informaltable>
<para>The demo assumes the following:</para>
<para><emphasis role="bold">Controller node</emphasis></para>
<orderedlist>
<listitem>
<para>Relevant Compute services are installed, configured and running.</para>
</listitem>
<listitem>
<para>Glance is installed, configured and running. In
addition, there should be an image.</para>
</listitem>
<listitem>
<para>OpenStack Identity is installed, configured and running. A Networking user
<emphasis role="bold">neutron</emphasis> should be created on tenant <emphasis
role="bold">servicetenant</emphasis> with password <emphasis role="bold"
>servicepassword</emphasis>.</para>
</listitem>
<listitem>
<para>Additional services: <itemizedlist>
<listitem>
<para>RabbitMQ is running with the default guest user and password.</para>
</listitem>
<listitem>
<para>MySQL server (user is <emphasis role="bold">root</emphasis> and
password is <emphasis role="bold">root</emphasis>)</para>
</listitem>
</itemizedlist></para>
</listitem>
</orderedlist>
<para><emphasis role="bold">Compute node</emphasis></para>
<orderedlist>
<listitem>
<para>Compute is installed and configured.</para>
</listitem>
</orderedlist>
<section xml:id="demo_flat_installions">
<title>Install</title>
<para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Controller node: Networking server</emphasis><orderedlist>
<listitem>
<para>Install the Networking server.</para>
</listitem>
<listitem>
<para>Create database <emphasis role="bold"
>ovs_neutron</emphasis>.</para>
</listitem>
<listitem>
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename>, setting plugin choice
and Identity Service user as necessary:</para>
<programlisting>[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
@ -140,7 +129,7 @@ admin_password=servicepassword
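The option that matters most in the fragment above is core_plugin, which names the class the Networking server loads at startup. As a sketch, the stdlib configparser can confirm what a given neutron.conf would select (the fragment is copied from the listing; treating it as a complete file is an assumption for illustration):

```python
import configparser

# Sketch: parse a neutron.conf-style fragment and report the plugin
# class the server would load (values copied from the listing above).
fragment = """\
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
"""
conf = configparser.ConfigParser()
conf.read_string(fragment)
plugin_class = conf["DEFAULT"]["core_plugin"].rsplit(".", 1)[-1]
print(plugin_class)   # OVSNeutronPluginV2
```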
</listitem>
<listitem>
<para>Update the plugin configuration file, <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting>[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
@ -149,19 +138,20 @@ bridge_mappings = physnet1:br-eth0
</programlisting>
</listitem>
<listitem>
<para>Start the Networking service.</para>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Compute node: Compute</emphasis><orderedlist>
<listitem>
<para>Install the <systemitem class="service"
>nova-compute</systemitem> service.</para>
</listitem>
<listitem>
<para>Update the Compute configuration file, <filename>
/etc/nova/nova.conf</filename>. Make sure the following is
at the end of this file:</para>
<programlisting>network_api_class=nova.network.neutronv2.api.API

neutron_admin_username=neutron
@ -175,25 +165,25 @@ libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
</programlisting>
</listitem>
<listitem>
<para>Restart the Compute service.</para>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Compute and Network node: L2 agent</emphasis><orderedlist>
<listitem>
<para>Install and start Open vSwitch.</para>
</listitem>
<listitem>
<para>Install the L2 agent (Neutron Open vSwitch agent).</para>
</listitem>
<listitem>
<para>Add the integration bridge to the Open vSwitch:</para>
<screen><prompt>#</prompt> <userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
</listitem>
<listitem>
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename>:</para>
<programlisting>[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
@ -202,7 +192,7 @@ notification_driver = neutron.openstack.common.notifier.rabbit_notifier</program
</listitem>
<listitem>
<para>Update the plugin configuration file, <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting>[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
@ -210,13 +200,11 @@ network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0</programlisting>
</listitem>
<listitem>
<para>Create the network bridge <emphasis role="bold"
>br-eth0</emphasis> (All VM communication between the nodes
will be done via eth0):</para>
<screen><prompt>#</prompt> <userinput>sudo ovs-vsctl add-br br-eth0</userinput>
<prompt>#</prompt> <userinput>sudo ovs-vsctl add-port br-eth0 eth0</userinput></screen>
</listitem>
<listitem>
<para>Start the OpenStack Networking L2 agent.</para>
@ -224,13 +212,13 @@ bridge_mappings = physnet1:br-eth0</programlisting>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Network node: DHCP agent</emphasis><orderedlist>
<listitem>
<para>Install the DHCP agent.</para>
</listitem>
<listitem>
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename>:</para>
<programlisting>[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
@ -239,11 +227,11 @@ notification_driver = neutron.openstack.common.notifier.rabbit_notifier</program
</listitem>
<listitem>
<para>Update the DHCP configuration file, <filename>
/etc/neutron/dhcp_agent.ini</filename>:</para>
<programlisting>interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>
<listitem>
<para>Start the DHCP agent.</para>
</listitem>
</orderedlist></para>
</listitem>
@ -251,11 +239,10 @@ notification_driver = neutron.openstack.common.notifier.rabbit_notifier</program
</para>
</section>
<section xml:id="demo_flat_logical_network_config">
<title>Configure logical network</title>
<para>All of the commands below can be executed on the network node.</para>
<para><emphasis role="bold">Note</emphasis> Please ensure that the following environment
variables are set. These are used by the various clients to access Identity.</para>
<para>
<programlisting language="bash">export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
@ -267,7 +254,7 @@ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/</programlisting>
<listitem>
<para>Get the tenant ID (used as
$TENANT_ID later):</para>
<screen><prompt>#</prompt> <userinput>keystone tenant-list</userinput>
<computeroutput>+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
@ -281,7 +268,7 @@ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/</programlisting>
</listitem>
<listitem>
<para>Get the user information:</para>
<screen><prompt>#</prompt> <userinput>keystone user-list</userinput>
<computeroutput>+----------------------------------+-------+---------+-------------------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-------------------+
@ -319,7 +306,7 @@ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/</programlisting>
</listitem>
<listitem>
<para>Create a subnet on the network:</para>
<screen><prompt>#</prompt> <userinput>neutron subnet-create --tenant-id $TENANT_ID sharednet1 30.0.0.0/24</userinput>
<computeroutput>Created a new subnet:
+------------------+--------------------------------------------+
| Field | Value |
@ -340,10 +327,10 @@ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/</programlisting>
</listitem>
<listitem>
<para>Create a server for tenant A:</para>
<screen><prompt>#</prompt> <userinput>nova --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
--nic net-id=04457b44-e22a-4a5c-be54-a53a9b2818e7 TenantA_VM1</userinput></screen>
<screen><prompt>#</prompt> <userinput>nova --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 list</userinput>
<computeroutput>+--------------------------------------+-------------+--------+---------------------+
| ID | Name | Status | Networks |
@ -362,7 +349,7 @@ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/</programlisting>
</listitem>
<listitem>
<para>Ping the public network within the server of tenant A:</para>
<screen><prompt>#</prompt> <userinput>ping 192.168.1.1</userinput>
<computeroutput>PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=1.74 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=1.50 ms
@ -382,4 +369,66 @@ rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms
</orderedlist>
</para>
</section>
<section xml:id="section_use-cases-single-flat">
<title>Use case: single flat network</title>
<para>The simplest use case is a single network. This is a "shared" network, meaning it is
visible to all tenants via the Networking API. Tenant VMs have a single NIC, and receive
a fixed IP address from the subnet(s) associated with that network. This use case
essentially maps to the FlatManager and FlatDHCPManager models provided by Compute.
Floating IPs are not supported.</para>
<para>This network type is often created by the OpenStack administrator
to map directly to an existing physical network in the data center (called a
"provider network"). This allows the provider to use a physical
router on that data center network as the gateway for VMs to reach
the outside world. For each subnet on an external network, the gateway
configuration on the physical router must be manually configured
outside of OpenStack.</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="80" fileref="../common/figures/UseCase-SingleFlat.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1Jb6iSoBo4G7fv7i2EMpYTMTxesLPmEPKIbI7sVbhhqY/edit -->
</para>
</section>
<?hard-pagebreak?>
<section xml:id="section_use-cases-multi-flat">
<title>Use case: multiple flat networks</title>
<para>This use case is similar to the above single flat network use case, except that tenants
can see multiple shared networks via the Networking API and can choose which network (or
networks) to plug into.</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="60" fileref="../common/figures/UseCase-MultiFlat.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/14ayGsyunW_P-wvY8OiueE407f7540JD3VsWUH18KHvU/edit -->
</para>
</section>
<section xml:id="section_use-cases-mixed">
<title>Use case: mixed flat and private network</title>
<para>
This use case is an extension of the above flat network use cases.
In addition to being able to see one or more shared networks via
the OpenStack Networking API, tenants can also have access to private per-tenant
networks (only visible to tenant users).
</para>
<para>
Created VMs can have NICs on any of the shared networks and/or any of the private networks
belonging to the tenant. This enables the creation of "multi-tier"
topologies using VMs with multiple NICs. It also supports a model where
a VM acting as a gateway can provide services such as routing, NAT, or
load balancing.
</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="55" fileref="../common/figures/UseCase-MixedFlatPrivate.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1efSqR6KA2gv-OKl5Rl-oV_zwgYP8mgQHFP2DsBj5Fqo/edit -->
</para>
</section>
</section>