import installation guide pages from openstack-manuals

Co-Authored-By: Akihiro Motoki <amotoki@gmail.com>
Change-Id: Id8057d229add4daf3093d362eab7614685fdb8ac
chenxing 2017-06-30 05:58:47 +00:00 committed by Akihiro Motoki
parent df762b0d46
commit d19c7e7d59
56 changed files with 5080 additions and 0 deletions


@ -39,6 +39,14 @@ The `Neutron Development wiki`_ is also a good resource for new contributors.
Enjoy!
Installation Guide
------------------
.. toctree::
:maxdepth: 2
Installation Guide <install/index>
Networking Guide
----------------


@ -0,0 +1,33 @@
===========================
Networking service overview
===========================
OpenStack Networking (neutron) allows you to create and attach interface
devices managed by other OpenStack services to networks. Plug-ins can be
implemented to accommodate different networking equipment and software,
providing flexibility to OpenStack architecture and deployment.
It includes the following components:
neutron-server
Accepts and routes API requests to the appropriate OpenStack
Networking plug-in for action.
OpenStack Networking plug-ins and agents
Plug and unplug ports, create networks or subnets, and provide
IP addressing. These plug-ins and agents differ depending on the
vendor and technologies used in the particular cloud. OpenStack
Networking ships with plug-ins and agents for Cisco virtual and
physical switches, NEC OpenFlow products, Open vSwitch, Linux
bridging, and the VMware NSX product.
The common agents are L3 (layer 3), DHCP (Dynamic Host Configuration
Protocol), and a plug-in agent.
Messaging queue
Used by most OpenStack Networking installations to route information
between the neutron-server and various agents. Also acts as a database
to store networking state for particular plug-ins.
OpenStack Networking mainly interacts with OpenStack Compute to provide
networks and connectivity for its instances.


@ -0,0 +1,160 @@
Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The compute node handles connectivity and security groups for instances.
Install the components
----------------------
.. code-block:: console
# zypper install --no-recommends \
openstack-neutron-linuxbridge-agent bridge-utils
.. end
Configure the common component
------------------------------
The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.
.. include:: shared/note_configuration_vary_by_distribution.rst
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, comment out any ``connection`` options
because compute nodes do not directly access the database.
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
Configure networking options
----------------------------
Choose the same networking option that you chose for the controller node to
configure services specific to it. Afterwards, return here and proceed to
:ref:`neutron-compute-compute-obs`.
.. toctree::
:maxdepth: 1
compute-install-option1-obs.rst
compute-install-option2-obs.rst
.. _neutron-compute-compute-obs:
Configure the Compute service to use the Networking service
-----------------------------------------------------------
* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[neutron]`` section, configure access parameters:
.. path /etc/nova/nova.conf
.. code-block:: ini
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
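As an optional sanity check, you can confirm from the compute node that the
Networking API named in ``url`` is reachable; the API root should return a
short JSON list of versions:
.. code-block:: console
$ curl http://controller:9696
.. end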
Finalize installation
---------------------
#. The Networking service initialization scripts expect the variable
``NEUTRON_PLUGIN_CONF`` in the ``/etc/sysconfig/neutron`` file to
reference the ML2 plug-in configuration file. Ensure that the
``/etc/sysconfig/neutron`` file contains the following:
.. path /etc/sysconfig/neutron
.. code-block:: ini
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"
.. end
#. Restart the Compute service:
.. code-block:: console
# systemctl restart openstack-nova-compute.service
.. end
#. Start the Linux bridge agent and configure it to start when the
system boots:
.. code-block:: console
# systemctl enable openstack-neutron-linuxbridge-agent.service
# systemctl start openstack-neutron-linuxbridge-agent.service
.. end
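Optionally, verify that the agent came up cleanly; a minimal check,
assuming the systemd unit shown above, is:
.. code-block:: console
# systemctl status openstack-neutron-linuxbridge-agent.service
.. end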


@ -0,0 +1,53 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure the Networking components on a *compute* node.
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-obs`
for more information; a command for listing candidate interfaces appears
at the end of this section.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = false
.. end
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
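If you are unsure which interface to use, the following generic command
lists the physical interfaces on the node (interface names such as
``eth1`` are examples and vary by system):
.. code-block:: console
# ip link show
.. end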
Return to *Networking compute node configuration*.


@ -0,0 +1,53 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure the Networking components on a *compute* node.
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-rdo`
for more information.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = false
.. end
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Return to *Networking compute node configuration*.


@ -0,0 +1,53 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure the Networking components on a *compute* node.
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-ubuntu`
for more information.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = false
.. end
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Return to *Networking compute node configuration*.


@ -0,0 +1,64 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure the Networking components on a *compute* node.
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-obs`
for more information.
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
.. end
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
the other nodes, so this is the management IP address of the compute
node. See :doc:`environment-networking-obs` for more information; a
command for checking this address appears at the end of this section.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
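To confirm the address to use for ``local_ip``, you can list the IPv4
addresses on the management interface; here ``eth0`` is only an assumed
example name:
.. code-block:: console
# ip -4 addr show eth0
.. end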
Return to *Networking compute node configuration*.


@ -0,0 +1,64 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure the Networking components on a *compute* node.
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-rdo`
for more information.
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
.. end
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
the other nodes, so this is the management IP address of the compute
node. See :doc:`environment-networking-rdo` for more information.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Return to *Networking compute node configuration*.


@ -0,0 +1,64 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure the Networking components on a *compute* node.
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-ubuntu`
for more information.
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
.. end
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
the other nodes, so this is the management IP address of the compute
node. See :doc:`environment-networking-ubuntu` for more information.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Return to *Networking compute node configuration*.


@ -0,0 +1,163 @@
Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The compute node handles connectivity and security groups for instances.
Install the components
----------------------
.. todo:
https://bugzilla.redhat.com/show_bug.cgi?id=1334626
.. code-block:: console
# yum install openstack-neutron-linuxbridge ebtables ipset
.. end
Configure the common component
------------------------------
The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.
.. include:: shared/note_configuration_vary_by_distribution.rst
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, comment out any ``connection`` options
because compute nodes do not directly access the database.
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
.. end
Configure networking options
----------------------------
Choose the same networking option that you chose for the controller node to
configure services specific to it. Afterwards, return here and proceed to
:ref:`neutron-compute-compute-rdo`.
.. toctree::
:maxdepth: 1
compute-install-option1-rdo.rst
compute-install-option2-rdo.rst
.. _neutron-compute-compute-rdo:
Configure the Compute service to use the Networking service
-----------------------------------------------------------
* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[neutron]`` section, configure access parameters:
.. path /etc/nova/nova.conf
.. code-block:: ini
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Finalize installation
---------------------
#. Restart the Compute service:
.. code-block:: console
# systemctl restart openstack-nova-compute.service
.. end
#. Start the Linux bridge agent and configure it to start when the
system boots:
.. code-block:: console
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
.. end


@ -0,0 +1,145 @@
Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The compute node handles connectivity and security groups for instances.
Install the components
----------------------
.. code-block:: console
# apt install neutron-linuxbridge-agent
.. end
Configure the common component
------------------------------
The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.
.. include:: shared/note_configuration_vary_by_distribution.rst
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, comment out any ``connection`` options
because compute nodes do not directly access the database.
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
Configure networking options
----------------------------
Choose the same networking option that you chose for the controller node to
configure services specific to it. Afterwards, return here and proceed to
:ref:`neutron-compute-compute-ubuntu`.
.. toctree::
:maxdepth: 1
compute-install-option1-ubuntu.rst
compute-install-option2-ubuntu.rst
.. _neutron-compute-compute-ubuntu:
Configure the Compute service to use the Networking service
-----------------------------------------------------------
* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[neutron]`` section, configure access parameters:
.. path /etc/nova/nova.conf
.. code-block:: ini
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Finalize installation
---------------------
#. Restart the Compute service:
.. code-block:: console
# service nova-compute restart
.. end
#. Restart the Linux bridge agent:
.. code-block:: console
# service neutron-linuxbridge-agent restart
.. end
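If the agent fails to restart, its log file is the first place to look;
the path below is the usual Ubuntu packaging default, though it may differ
on your deployment:
.. code-block:: console
# tail /var/log/neutron/neutron-linuxbridge-agent.log
.. end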


@ -0,0 +1,53 @@
Networking (neutron) concepts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack Networking (neutron) manages all networking facets for the
Virtual Networking Infrastructure (VNI) and the access layer aspects
of the Physical Networking Infrastructure (PNI) in your OpenStack
environment. OpenStack Networking enables projects to create advanced
virtual network topologies which may include services such as a
firewall, a load balancer, and a virtual private network (VPN).
Networking provides networks, subnets, and routers as object abstractions.
Each abstraction has functionality that mimics its physical counterpart:
networks contain subnets, and routers route traffic between different
subnets and networks.
Any given Networking setup has at least one external network. Unlike
the other networks, the external network is not merely a virtually
defined network. Instead, it represents a view into a slice of the
physical, external network accessible outside the OpenStack
installation. IP addresses on the external network are accessible by
anybody physically on the outside network.
In addition to external networks, any Networking setup has one or more
internal networks. These software-defined networks connect directly to
the VMs. Only VMs on the same internal network, or on subnets connected
to the same router, can directly access the VMs attached to that network.
For the outside network to access VMs, and vice versa, routers between
the networks are needed. Each router has one gateway that is connected
to an external network and one or more interfaces connected to internal
networks. As with a physical router, machines on one subnet can reach
machines on other subnets connected to the same router, and they can
reach the outside network through the router's gateway.
Additionally, you can allocate IP addresses on external networks to
ports on an internal network. Whenever something is connected to a
subnet, that connection is called a port. You can associate external
network IP addresses with the ports that connect to VMs, so that
entities on the outside network can reach those VMs.
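As a concrete sketch using the standard OpenStack client (``provider``
and ``myinstance`` are placeholder names), allocating an external address
and attaching it to a VM looks like this:
.. code-block:: console
$ openstack floating ip create provider
$ openstack server add floating ip myinstance FLOATING_IP_ADDRESS
.. end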
Networking also supports *security groups*. Security groups enable
administrators to define firewall rules in groups. A VM can belong to
one or more security groups, and Networking applies the rules in those
security groups to block or unblock ports, port ranges, or traffic types
for that VM.
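For example, permitting inbound SSH to every VM in a group takes a single
rule; this sketch uses the standard client and the pre-created ``default``
group:
.. code-block:: console
$ openstack security group rule create --protocol tcp --dst-port 22 default
.. end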
Each plug-in that Networking uses has its own concepts. While not vital
to operating the VNI and OpenStack environment, understanding these
concepts can help you set up Networking. All Networking installations
use a core plug-in and a security group plug-in (or just the No-Op
security group plug-in). Additionally, Firewall-as-a-Service (FWaaS) and
Load-Balancer-as-a-Service (LBaaS) plug-ins are available.


@ -0,0 +1,319 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prerequisites
-------------
Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
* Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
$ mysql -u root -p
.. end
* Create the ``neutron`` database:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE neutron;
.. end
* Grant proper access to the ``neutron`` database, replacing
``NEUTRON_DBPASS`` with a suitable password:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
.. end
* Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code-block:: console
$ . admin-openrc
.. end
#. To create the service credentials, complete these steps:
* Create the ``neutron`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
.. end
* Add the ``admin`` role to the ``neutron`` user:
.. code-block:: console
$ openstack role add --project service --user neutron admin
.. end
.. note::
This command provides no output.
* Create the ``neutron`` service entity:
.. code-block:: console
$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
.. end
#. Create the Networking service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
.. end
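Before proceeding, you can optionally verify the work above; the first
command should connect to the new database without errors, and the second
should list the three ``network`` endpoints:
.. code-block:: console
$ mysql -u neutron -p -h controller -e 'USE neutron;'
$ openstack endpoint list --service network
.. end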
Configure networking options
----------------------------
You can deploy the Networking service using one of two architectures
represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports
attaching instances to provider (external) networks; it offers no
self-service (private) networks, routers, or floating IP addresses. Only
the ``admin`` or another privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or another unprivileged
user can manage self-service networks, including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses allow instances on self-service networks to be
reached from external networks such as the Internet.
Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease space available for the payload or user data. Without knowledge
of the virtual network infrastructure, instances attempt to send packets
using the default Ethernet maximum transmission unit (MTU) of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.
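For images that ignore the DHCP MTU option, a minimal workaround is to
lower the MTU from within the instance, for example in a boot script; the
interface name ``eth0`` and the VXLAN-safe value of 1450 bytes are
assumptions that depend on your image and overlay protocol:
.. code-block:: console
# ip link set dev eth0 mtu 1450
.. end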
.. note::
Option 2 also supports attaching instances to provider networks.
Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent-obs`.
.. toctree::
:maxdepth: 1
controller-install-option1-obs.rst
controller-install-option2-obs.rst
.. _neutron-controller-metadata-agent-obs:
Configure the metadata agent
----------------------------
The metadata agent provides configuration information
such as credentials to instances.
* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the metadata host and shared
secret:
.. path /etc/neutron/metadata_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
.. end
Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
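One way to generate a suitable random secret is:
.. code-block:: console
$ openssl rand -hex 10
.. end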
Configure the Compute service to use the Networking service
-----------------------------------------------------------
* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:
* In the ``[neutron]`` section, configure access parameters, enable the
metadata proxy, and configure the secret:
.. path /etc/nova/nova.conf
.. code-block:: ini
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Replace ``METADATA_SECRET`` with the secret you chose for the metadata
proxy.
Finalize installation
---------------------
.. note::
SLES enables AppArmor by default and restricts dnsmasq. You need to
either completely disable AppArmor or disable only the dnsmasq
profile:
.. code-block:: console
# ln -s /etc/apparmor.d/usr.sbin.dnsmasq /etc/apparmor.d/disable/
# systemctl restart apparmor
.. end
#. Restart the Compute API service:
.. code-block:: console
# systemctl restart openstack-nova-api.service
.. end
#. Start the Networking services and configure them to start when the system
boots.
For both networking options:
.. code-block:: console
# systemctl enable openstack-neutron.service \
openstack-neutron-linuxbridge-agent.service \
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service
# systemctl start openstack-neutron.service \
openstack-neutron-linuxbridge-agent.service \
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service
.. end
For networking option 2, also enable and start the layer-3 service:
.. code-block:: console
# systemctl enable openstack-neutron-l3-agent.service
# systemctl start openstack-neutron-l3-agent.service
.. end
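Once the services are running, you can list the agents from the controller
(with the ``admin`` credentials sourced) to confirm that each registered;
expect the metadata, Linux bridge, and DHCP agents and, for option 2, the
L3 agent:
.. code-block:: console
$ openstack network agent list
.. end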


@ -0,0 +1,289 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Install the components
----------------------
.. code-block:: console
# zypper install --no-recommends openstack-neutron \
openstack-neutron-server openstack-neutron-linuxbridge-agent \
openstack-neutron-dhcp-agent openstack-neutron-metadata-agent \
bridge-utils
.. end
Configure the server component
------------------------------
The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.
.. include:: shared/note_configuration_vary_by_distribution.rst
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
.. end
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
.. note::
Comment out or remove any other ``connection`` options in the
``[database]`` section.
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in and disable additional plug-ins:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
.. end
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat and VLAN networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
type_drivers = flat,vlan
.. end
* In the ``[ml2]`` section, disable self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
tenant_network_types =
.. end
* In the ``[ml2]`` section, enable the Linux bridge mechanism:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
mechanism_drivers = linuxbridge
.. end
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
* In the ``[ml2]`` section, enable the port security extension driver:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
extension_drivers = port_security
.. end
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_flat]
# ...
flat_networks = provider
.. end
* In the ``[securitygroup]`` section, enable ipset to increase
efficiency of security group rules:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[securitygroup]
# ...
enable_ipset = true
.. end
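Taken together, the edits above amount to the following consolidated
``ml2_conf.ini`` fragment; your actual file will also contain many
commented defaults:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
.. end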
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-obs`
for more information.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = false
.. end
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Configure the DHCP agent
------------------------
The DHCP agent provides DHCP services for virtual networks.
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. path /etc/neutron/dhcp_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
.. end
Return to *Networking controller node configuration*.


@ -0,0 +1,299 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Install the components
----------------------
.. code-block:: console
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
.. end
Configure the server component
------------------------------
The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.
.. include:: shared/note_configuration_vary_by_distribution.rst
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
.. end
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
.. note::
Comment out or remove any other ``connection`` options in the
``[database]`` section.
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in and disable additional plug-ins:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
.. end
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
.. end
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat and VLAN networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
type_drivers = flat,vlan
.. end
* In the ``[ml2]`` section, disable self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
tenant_network_types =
.. end
* In the ``[ml2]`` section, enable the Linux bridge mechanism:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
mechanism_drivers = linuxbridge
.. end
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
* In the ``[ml2]`` section, enable the port security extension driver:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
extension_drivers = port_security
.. end
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_flat]
# ...
flat_networks = provider
.. end
* In the ``[securitygroup]`` section, enable ipset to increase
efficiency of security group rules:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[securitygroup]
# ...
enable_ipset = true
.. end
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-rdo`
for more information.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = false
.. end
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Configure the DHCP agent
------------------------
The DHCP agent provides DHCP services for virtual networks.
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. path /etc/neutron/dhcp_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
.. end
Return to *Networking controller node configuration*.


@ -0,0 +1,288 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Install the components
----------------------
.. code-block:: console
# apt install neutron-server neutron-plugin-ml2 \
neutron-linuxbridge-agent neutron-dhcp-agent \
neutron-metadata-agent
.. end
Configure the server component
------------------------------
The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.
.. include:: shared/note_configuration_vary_by_distribution.rst
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
.. end
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
.. note::
Comment out or remove any other ``connection`` options in the
``[database]`` section.
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in and disable additional plug-ins:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
.. end
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat and VLAN networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
type_drivers = flat,vlan
.. end
* In the ``[ml2]`` section, disable self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
tenant_network_types =
.. end
* In the ``[ml2]`` section, enable the Linux bridge mechanism:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
mechanism_drivers = linuxbridge
.. end
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
* In the ``[ml2]`` section, enable the port security extension driver:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
extension_drivers = port_security
.. end
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_flat]
# ...
flat_networks = provider
.. end
* In the ``[securitygroup]`` section, enable ipset to increase
efficiency of security group rules:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[securitygroup]
# ...
enable_ipset = true
.. end
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-ubuntu`
for more information.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = false
.. end
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Configure the DHCP agent
------------------------
The DHCP agent provides DHCP services for virtual networks.
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. path /etc/neutron/dhcp_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
.. end
Return to *Networking controller node configuration*.


@ -0,0 +1,337 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Install the components
----------------------
.. code-block:: console
# zypper install --no-recommends openstack-neutron \
openstack-neutron-server openstack-neutron-linuxbridge-agent \
openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
openstack-neutron-metadata-agent bridge-utils
.. end
Configure the server component
------------------------------
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
.. end
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
.. note::
Comment out or remove any other ``connection`` options in the
``[database]`` section.
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in, router service, and overlapping IP addresses:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
.. end
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
.. end
* In the ``[ml2]`` section, enable VXLAN self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
tenant_network_types = vxlan
.. end
* In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
mechanisms:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
.. end
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
.. note::
The Linux bridge agent only supports VXLAN overlay networks.
* In the ``[ml2]`` section, enable the port security extension driver:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
extension_drivers = port_security
.. end
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_flat]
# ...
flat_networks = provider
.. end
* In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
range for self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
.. end
* In the ``[securitygroup]`` section, enable ipset to increase
efficiency of security group rules:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[securitygroup]
# ...
enable_ipset = true
.. end
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-obs`
for more information.
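As a concrete sketch, if the provider interface on this node were named
``eth1`` (a hypothetical name; substitute your actual interface), the
mapping would read:

.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini

   [linux_bridge]
   # "eth1" is an assumed interface name used for illustration only
   physical_interface_mappings = provider:eth1

.. end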
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
.. end
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
the management IP address of the controller node. See
:doc:`environment-networking-obs` for more information.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Configure the layer-3 agent
---------------------------
The Layer-3 (L3) agent provides routing and NAT services for
self-service virtual networks.
* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver:
.. path /etc/neutron/l3_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
.. end
Configure the DHCP agent
------------------------
The DHCP agent provides DHCP services for virtual networks.
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. path /etc/neutron/dhcp_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
.. end
Return to *Networking controller node configuration*.
View File
@ -0,0 +1,347 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Install the components
----------------------
.. code-block:: console
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
.. end
Configure the server component
------------------------------
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
.. end
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
.. note::
Comment out or remove any other ``connection`` options in the
``[database]`` section.
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in, router service, and overlapping IP addresses:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
.. end
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
.. end
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
.. end
* In the ``[ml2]`` section, enable VXLAN self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
tenant_network_types = vxlan
.. end
* In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
mechanisms:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
.. end
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
.. note::
The Linux bridge agent only supports VXLAN overlay networks.
* In the ``[ml2]`` section, enable the port security extension driver:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
extension_drivers = port_security
.. end
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_flat]
# ...
flat_networks = provider
.. end
* In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
range for self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
.. end
* In the ``[securitygroup]`` section, enable ipset to increase
efficiency of security group rules:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[securitygroup]
# ...
enable_ipset = true
.. end
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-rdo`
for more information.
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
.. end
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
the management IP address of the controller node. See
:doc:`environment-networking-rdo` for more information.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Configure the layer-3 agent
---------------------------
The Layer-3 (L3) agent provides routing and NAT services for
self-service virtual networks.
* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver:
.. path /etc/neutron/l3_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
.. end
Configure the DHCP agent
------------------------
The DHCP agent provides DHCP services for virtual networks.
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. path /etc/neutron/dhcp_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
.. end
Return to *Networking controller node configuration*.
View File
@ -0,0 +1,336 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Install the components
----------------------
.. code-block:: console
# apt install neutron-server neutron-plugin-ml2 \
neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent
.. end
Configure the server component
------------------------------
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
.. end
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
.. note::
Comment out or remove any other ``connection`` options in the
``[database]`` section.
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in, router service, and overlapping IP addresses:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
.. end
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
.. end
* In the ``[ml2]`` section, enable VXLAN self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
tenant_network_types = vxlan
.. end
* In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
mechanisms:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
.. end
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
.. note::
The Linux bridge agent only supports VXLAN overlay networks.
* In the ``[ml2]`` section, enable the port security extension driver:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
# ...
extension_drivers = port_security
.. end
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_flat]
# ...
flat_networks = provider
.. end
* In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
range for self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
.. end
* In the ``[securitygroup]`` section, enable ipset to increase
efficiency of security group rules:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[securitygroup]
# ...
enable_ipset = true
.. end
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :doc:`environment-networking-ubuntu`
for more information.
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
.. end
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
the management IP address of the controller node. See
:doc:`environment-networking-ubuntu` for more information.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge iptables firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Configure the layer-3 agent
---------------------------
The Layer-3 (L3) agent provides routing and NAT services for
self-service virtual networks.
* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver:
.. path /etc/neutron/l3_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
.. end
Configure the DHCP agent
------------------------
The DHCP agent provides DHCP services for virtual networks.
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. path /etc/neutron/dhcp_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
.. end
Return to *Networking controller node configuration*.
View File
@ -0,0 +1,329 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prerequisites
-------------
Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
* Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
$ mysql -u root -p
.. end
* Create the ``neutron`` database:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE neutron;
.. end
* Grant proper access to the ``neutron`` database, replacing
``NEUTRON_DBPASS`` with a suitable password:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
.. end
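Optionally, confirm the grants before exiting (a sketch; the output
should list the privileges granted above):

.. code-block:: console

   MariaDB [(none)]> SHOW GRANTS FOR 'neutron'@'localhost';
   MariaDB [(none)]> SHOW GRANTS FOR 'neutron'@'%';

.. end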
* Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code-block:: console
$ . admin-openrc
.. end
#. To create the service credentials, complete these steps:
* Create the ``neutron`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
.. end
* Add the ``admin`` role to the ``neutron`` user:
.. code-block:: console
$ openstack role add --project service --user neutron admin
.. end
.. note::
This command provides no output.
* Create the ``neutron`` service entity:
.. code-block:: console
$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
.. end
#. Create the Networking service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
.. end
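To double-check the result, list the endpoints registered for the
network service (a sketch; run with the ``admin`` credentials sourced):

.. code-block:: console

   $ openstack endpoint list --service network

.. end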
Configure networking options
----------------------------
You can deploy the Networking service using one of two architectures
represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports
attaching instances to provider (external) networks. It does not include
self-service (private) networks, routers, or floating IP addresses. Only the
``admin`` or another privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or other unprivileged
user can manage self-service networks including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses provide connectivity to instances using self-service
networks from external networks such as the Internet.
Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease space available for the payload or user data. Without knowledge
of the virtual network infrastructure, instances attempt to send packets
using the default Ethernet maximum transmission unit (MTU) of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.
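As a sketch of such manual configuration, an instance on a VXLAN
self-service network over a 1500-byte physical network typically needs
an MTU of 1450 bytes to account for the 50-byte VXLAN overhead (the
interface name and values here are assumptions for illustration):

.. code-block:: console

   # ip link set dev eth0 mtu 1450

.. end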
.. note::
Option 2 also supports attaching instances to provider networks.
Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent-rdo`.
.. toctree::
:maxdepth: 1
controller-install-option1-rdo.rst
controller-install-option2-rdo.rst
.. _neutron-controller-metadata-agent-rdo:
Configure the metadata agent
----------------------------
The metadata agent provides configuration information
such as credentials to instances.
* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the metadata host and shared
secret:
.. path /etc/neutron/metadata_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
.. end
Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
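Any sufficiently random string works as the secret. For example, you
could generate one as follows (a sketch):

.. code-block:: console

   $ openssl rand -hex 10

.. end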
Configure the Compute service to use the Networking service
-----------------------------------------------------------
* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:
* In the ``[neutron]`` section, configure access parameters, enable the
metadata proxy, and configure the secret:
.. path /etc/nova/nova.conf
.. code-block:: ini
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Replace ``METADATA_SECRET`` with the secret you chose for the metadata
proxy.
Finalize installation
---------------------
#. The Networking service initialization scripts expect a symbolic link
``/etc/neutron/plugin.ini`` pointing to the ML2 plug-in configuration
file, ``/etc/neutron/plugins/ml2/ml2_conf.ini``. If this symbolic
link does not exist, create it using the following command:
.. code-block:: console
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
.. end
#. Populate the database:
.. code-block:: console
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
.. end
.. note::
Database population occurs later for Networking because the script
requires complete server and plug-in configuration files.
#. Restart the Compute API service:
.. code-block:: console
# systemctl restart openstack-nova-api.service
.. end
#. Start the Networking services and configure them to start when the system
boots.
For both networking options:
.. code-block:: console
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
.. end
For networking option 2, also enable and start the layer-3 service:
.. code-block:: console
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
.. end
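After the services start, you can verify that the agents registered
with the Networking service (a sketch; sourced ``admin`` credentials
are assumed, and the agent list varies by networking option):

.. code-block:: console

   $ openstack network agent list

.. end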
View File
@ -0,0 +1,314 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prerequisites
-------------
Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
* Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
# mysql
.. end
* Create the ``neutron`` database:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE neutron;
.. end
* Grant proper access to the ``neutron`` database, replacing
``NEUTRON_DBPASS`` with a suitable password:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
.. end
* Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code-block:: console
$ . admin-openrc
.. end
#. To create the service credentials, complete these steps:
* Create the ``neutron`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
.. end
* Add the ``admin`` role to the ``neutron`` user:
.. code-block:: console
$ openstack role add --project service --user neutron admin
.. end
.. note::
This command provides no output.
* Create the ``neutron`` service entity:
.. code-block:: console
$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
.. end
#. Create the Networking service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
.. end
Configure networking options
----------------------------
You can deploy the Networking service using one of two architectures
represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports
attaching instances to provider (external) networks. It does not include
self-service (private) networks, routers, or floating IP addresses. Only the
``admin`` or another privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or other unprivileged
user can manage self-service networks including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses provide connectivity to instances using self-service
networks from external networks such as the Internet.
Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease space available for the payload or user data. Without knowledge
of the virtual network infrastructure, instances attempt to send packets
using the default Ethernet maximum transmission unit (MTU) of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.
.. note::
Option 2 also supports attaching instances to provider networks.
Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent-ubuntu`.
.. toctree::
:maxdepth: 1
controller-install-option1-ubuntu.rst
controller-install-option2-ubuntu.rst
.. _neutron-controller-metadata-agent-ubuntu:
Configure the metadata agent
----------------------------
The metadata agent provides configuration information
such as credentials to instances.
* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the metadata host and shared
secret:
.. path /etc/neutron/metadata_agent.ini
.. code-block:: ini
[DEFAULT]
# ...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
.. end
Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
Configure the Compute service to use the Networking service
-----------------------------------------------------------
* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:
* In the ``[neutron]`` section, configure access parameters, enable the
metadata proxy, and configure the secret:
.. path /etc/nova/nova.conf
.. code-block:: ini
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Replace ``METADATA_SECRET`` with the secret you chose for the metadata
proxy.
Finalize installation
---------------------
#. Populate the database:
.. code-block:: console
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
.. end
.. note::
Database population occurs later for Networking because the script
requires complete server and plug-in configuration files.
#. Restart the Compute API service:
.. code-block:: console
# service nova-api restart
.. end
#. Restart the Networking services.
For both networking options:
.. code-block:: console
# service neutron-server restart
# service neutron-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
.. end
For networking option 2, also restart the layer-3 service:
.. code-block:: console
# service neutron-l3-agent restart
.. end
View File
@ -0,0 +1,48 @@
Compute node
~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
contain the following:
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
.. code-block:: bash
STARTMODE='auto'
BOOTPROTO='static'
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``compute1``.
#. .. include:: shared/edit_hosts_file.txt
View File
@ -0,0 +1,52 @@
Compute node
~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
to contain the following:
Do not change the ``HWADDR`` and ``UUID`` keys.
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
.. code-block:: bash
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``compute1``.
View File
@ -0,0 +1,50 @@
@ -0,0 +1,50 @@
Compute node
~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: bash
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``compute1``.
#. .. include:: shared/edit_hosts_file.txt
View File
@ -0,0 +1,44 @@
Controller node
~~~~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
contain the following:
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
.. code-block:: ini
STARTMODE='auto'
BOOTPROTO='static'
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``controller``.
#. .. include:: shared/edit_hosts_file.txt
View File
@ -0,0 +1,48 @@
Controller node
~~~~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
to contain the following:
Do not change the ``HWADDR`` and ``UUID`` keys.
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
.. code-block:: ini
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``controller``.
#. .. include:: shared/edit_hosts_file.txt
View File
@ -0,0 +1,46 @@
Controller node
~~~~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: bash
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``controller``.
#. .. include:: shared/edit_hosts_file.txt
View File
@ -0,0 +1,91 @@
Host networking
~~~~~~~~~~~~~~~
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `SLES 12
<https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__
or `openSUSE
<https://doc.opensuse.org/documentation/leap/reference/html/book.opensuse.reference/cha.basicnet.html>`__
documentation.
All nodes require Internet access for administrative purposes such as package
installation, security updates, Domain Name System (DNS), and
Network Time Protocol (NTP). In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via Network Address Translation (NAT)
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using Network Address Translation (NAT) through
the provider network.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
* Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, Domain Name System (DNS), and
Network Time Protocol (NTP).
* Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to
instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
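To see the interface names on a particular node, you can list them, for
example, with:

.. code-block:: console

   # ip link show

.. end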
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
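As a sketch, the name resolution entries on each node would look like
this when using the example addresses:

.. code-block:: bash

   # controller
   10.0.0.11       controller

   # compute1
   10.0.0.31       compute1

   # block1 (optional Block Storage node)
   10.0.0.41       block1

.. end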
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
.. note::
Your distribution enables a restrictive firewall by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.
.. toctree::
:maxdepth: 1
environment-networking-controller-obs.rst
environment-networking-compute-obs.rst
environment-networking-storage-cinder.rst
environment-networking-verify-obs.rst
View File
@ -0,0 +1,88 @@
Host networking
~~~~~~~~~~~~~~~
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation
<https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Using_the_Command_Line_Interface.html>`__.
All nodes require Internet access for administrative purposes such as package
installation, security updates, Domain Name System (DNS), and
Network Time Protocol (NTP). In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via Network Address Translation (NAT)
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using Network Address Translation (NAT) through
the provider network.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
* Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, Domain Name System (DNS), and
Network Time Protocol (NTP).
* Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to
instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
.. note::
Your distribution enables a restrictive firewall by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.
.. toctree::
:maxdepth: 1
environment-networking-controller-rdo.rst
environment-networking-compute-rdo.rst
environment-networking-storage-cinder.rst
environment-networking-verify-rdo.rst
View File
@ -0,0 +1,25 @@
Block storage node (Optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to deploy the Block Storage service, configure one
additional storage node.
Configure network interfaces
----------------------------
* Configure the management interface:
* IP address: ``10.0.0.41``
* Network mask: ``255.255.255.0`` (or ``/24``)
* Default gateway: ``10.0.0.1``
Configure name resolution
-------------------------
#. Set the hostname of the node to ``block1``.
#. .. include:: shared/edit_hosts_file.txt
#. Reboot the system to activate the changes.
View File
@ -0,0 +1,85 @@
Host networking
~~~~~~~~~~~~~~~
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://help.ubuntu.com/lts/serverguide/network-configuration.html>`_.
All nodes require Internet access for administrative purposes such as package
installation, security updates, Domain Name System (DNS), and
Network Time Protocol (NTP). In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via Network Address Translation (NAT)
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using Network Address Translation (NAT) through
the provider network.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
* Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, Domain Name System (DNS), and
Network Time Protocol (NTP).
* Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to
instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
.. note::
Your distribution does not enable a restrictive firewall by
default. For more information about securing your environment,
refer to the `OpenStack Security Guide
<https://docs.openstack.org/security-guide/>`_.
.. toctree::
:maxdepth: 1
environment-networking-controller-ubuntu.rst
environment-networking-compute-ubuntu.rst
environment-networking-storage-cinder.rst
environment-networking-verify-ubuntu.rst
View File
@ -0,0 +1,89 @@
Verify connectivity
-------------------
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
#. From the *controller* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
Your distribution enables a restrictive firewall by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.
View File
@ -0,0 +1,89 @@
Verify connectivity
-------------------
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
#. From the *controller* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
Your distribution enables a restrictive firewall by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.

View File

@ -0,0 +1,87 @@
Verify connectivity
-------------------
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
#. From the *controller* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
Your distribution does not enable a restrictive firewall by
default. For more information about securing your environment,
refer to the `OpenStack Security Guide
<https://docs.openstack.org/security-guide/>`_.
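If you want to confirm that no restrictive firewall is active, a quick
check on distributions that ship ufw might look like the following (the
command and its output are illustrative):
.. code-block:: console
   # ufw status
   Status: inactive
.. end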

(Binary image files are added by this commit but not shown in the diff.
These are the figures referenced in this guide, such as
``figures/hwreqs.png``, ``figures/network1-services.png``, and
``figures/network2-services.png``.)

View File

@ -0,0 +1,23 @@
.. _networking:
==================
Networking service
==================
.. toctree::
:maxdepth: 1
overview.rst
common/get-started-networking.rst
concepts.rst
install-obs.rst
install-rdo.rst
install-ubuntu.rst
This chapter explains how to install and configure the Networking
service (neutron) using the :ref:`provider networks <network1>` or
:ref:`self-service networks <network2>` option.
For more information about the Networking service including virtual
networking components, layout, and traffic flows, see the
:doc:`OpenStack Networking Guide </admin/index>`.

View File

@ -0,0 +1,13 @@
.. _networking-obs:
============================================================
Install and configure for openSUSE and SUSE Linux Enterprise
============================================================
.. toctree::
:maxdepth: 2
environment-networking-obs.rst
controller-install-obs.rst
compute-install-obs.rst
verify.rst

View File

@ -0,0 +1,13 @@
.. _networking-rdo:
=============================================================
Install and configure for Red Hat Enterprise Linux and CentOS
=============================================================
.. toctree::
:maxdepth: 2
environment-networking-rdo.rst
controller-install-rdo.rst
compute-install-rdo.rst
verify.rst

View File

@ -0,0 +1,13 @@
.. _networking-ubuntu:
================================
Install and configure for Ubuntu
================================
.. toctree::
:maxdepth: 2
environment-networking-ubuntu.rst
controller-install-ubuntu.rst
compute-install-ubuntu.rst
verify.rst

View File

@ -0,0 +1,179 @@
========
Overview
========
The OpenStack project is an open source cloud computing platform that
supports all types of cloud environments. The project aims for simple
implementation, massive scalability, and a rich set of features. Cloud
computing experts from around the world contribute to the project.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution
through a variety of complementary services. Each service offers an
Application Programming Interface (API) that facilitates this
integration.
This guide covers step-by-step deployment of the major OpenStack
services using a functional example architecture suitable for
new users of OpenStack with sufficient Linux experience. This guide is not
intended to be used for production system installations, but to create a
minimum proof-of-concept for the purpose of learning about OpenStack.
After becoming familiar with basic installation, configuration, operation,
and troubleshooting of these OpenStack services, you should consider the
following steps toward deployment using a production architecture:
* Determine and implement the necessary core and optional services to
meet performance and redundancy requirements.
* Increase security using methods such as firewalls, encryption, and
service policies.
* Implement a deployment tool such as Ansible, Chef, Puppet, or Salt
to automate deployment and management of the production environment.
.. _overview-example-architectures:
Example architecture
~~~~~~~~~~~~~~~~~~~~
The example architecture requires at least two nodes (hosts) to launch a basic
virtual machine (VM) or instance. Optional services such as Block Storage and
Object Storage require additional nodes.
.. important::
The example architecture used in this guide is a minimum configuration,
and is not intended for production system installations. It is designed to
provide a minimum proof-of-concept for the purpose of learning about
OpenStack. For information on creating architectures for specific
use cases, or how to determine which architecture is required, see the
`Architecture Design Guide <https://docs.openstack.org/arch-design/>`_.
This example architecture differs from a minimal production architecture as
follows:
* Networking agents reside on the controller node instead of one or more
dedicated network nodes.
* Overlay (tunnel) traffic for self-service networks traverses the management
network instead of a dedicated network.
For more information on production architectures, see the
`Architecture Design Guide <https://docs.openstack.org/arch-design/>`_,
`OpenStack Operations Guide <https://docs.openstack.org/ops-guide/>`_, and
:doc:`OpenStack Networking Guide </admin/index>`.
.. _figure-hwreqs:
.. figure:: figures/hwreqs.png
:alt: Hardware requirements
**Hardware requirements**
Controller
----------
The controller node runs the Identity service, the Image service, the
management portions of Compute and Networking, various Networking
agents, and the Dashboard. It also includes supporting services such as
an SQL database, message queue, and Network Time Protocol (NTP).
Optionally, the controller node runs portions of the Block Storage, Object
Storage, Orchestration, and Telemetry services.
The controller node requires a minimum of two network interfaces.
Compute
-------
The compute node runs the hypervisor portion of Compute that
operates instances. By default, Compute uses the kernel-based VM (KVM)
hypervisor. The compute node also runs a Networking service
agent that connects instances to virtual networks
and provides firewalling services to instances via security groups.
You can deploy more than one compute node. Each node requires a minimum
of two network interfaces.
Block Storage
-------------
The optional Block Storage node contains the disks that the Block
Storage and Shared File System services provision for instances.
For simplicity, service traffic between compute nodes and this node
uses the management network. Production environments should implement
a separate storage network to increase performance and security.
You can deploy more than one block storage node. Each node requires a
minimum of one network interface.
Object Storage
--------------
The optional Object Storage node contains the disks that the
Object Storage service uses for storing accounts, containers, and
objects.
For simplicity, service traffic between compute nodes and this node
uses the management network. Production environments should implement
a separate storage network to increase performance and security.
This service requires two nodes. Each node requires a minimum of one
network interface. You can deploy more than two object storage nodes.
Networking
~~~~~~~~~~
Choose one of the following virtual networking options.
.. _network1:
Networking Option 1: Provider networks
--------------------------------------
The provider networks option deploys the OpenStack Networking service
in the simplest way possible with primarily layer-2 (bridging/switching)
services and VLAN segmentation of networks. Essentially, it bridges virtual
networks to physical networks and relies on physical network infrastructure
for layer-3 (routing) services. Additionally, a Dynamic Host
Configuration Protocol (DHCP) service provides IP address information to
instances.
The OpenStack user requires more information about the underlying network
infrastructure to create a virtual network that exactly matches that
infrastructure.
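To illustrate this dependency, a hedged sketch of creating a provider
network follows; the network type, physical network label, and segment ID
are example values that must match your physical infrastructure:
.. code-block:: console
   $ openstack network create --share --external \
     --provider-network-type vlan \
     --provider-physical-network provider \
     --provider-segment 101 provider-vlan101
.. end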
.. warning::
This option lacks support for self-service (private) networks, layer-3
(routing) services, and advanced services such as
Load-Balancer-as-a-Service (LBaaS) and FireWall-as-a-Service (FWaaS).
Consider the self-service networks option below if you desire these features.
.. _figure-network1-services:
.. figure:: figures/network1-services.png
:alt: Networking Option 1: Provider networks - Service layout
.. _network2:
Networking Option 2: Self-service networks
------------------------------------------
The self-service networks option augments the provider networks option
with layer-3 (routing) services that enable
self-service networks using overlay segmentation methods such
as Virtual Extensible LAN (VXLAN). Essentially, it routes
virtual networks to physical networks using Network Address
Translation (NAT). Additionally, this option provides the foundation for
advanced services such as LBaaS and FWaaS.
The OpenStack user can create virtual networks without knowledge of the
underlying infrastructure on the data network. This can also include
VLAN networks if the layer-2 plug-in is configured accordingly.
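By contrast with the provider networks option, a user can create a
self-service network without supplying any provider details; a minimal
sketch (the network name and subnet range are illustrative):
.. code-block:: console
   $ openstack network create selfservice
   $ openstack subnet create --network selfservice \
     --subnet-range 172.16.1.0/24 selfservice
.. end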
.. _figure-network2-services:
.. figure:: figures/network2-services.png
:alt: Networking Option 2: Self-service networks - Service layout

View File

@ -0,0 +1,34 @@
Edit the ``/etc/hosts`` file to contain the following:
.. path /etc/hosts
.. code-block:: none
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
.. end
.. warning::
Some distributions add an extraneous entry in the ``/etc/hosts``
file that resolves the actual hostname to another loopback IP
address such as ``127.0.1.1``. You must comment out or remove this
entry to prevent name resolution problems.
**Do not remove the 127.0.0.1 entry.**
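For example, a corrected fragment with the extraneous entry commented out
might look like this (the hostname on the ``127.0.1.1`` line is
illustrative):
.. path /etc/hosts
.. code-block:: none
   127.0.0.1   localhost
   # Commented out to prevent name resolution problems:
   #127.0.1.1  controller
.. end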
.. note::
This guide includes host entries for optional services in order to reduce
complexity should you choose to deploy them.

View File

@ -0,0 +1,7 @@
.. note::
Default configuration files vary by distribution. You might need
to add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (``...``) in the configuration
snippets indicates potential default configuration options that you
should retain.

View File

@ -0,0 +1,22 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0400c2f6-4d3b-44bc-89fa-99093432f3bf | Metadata agent | controller | None | True | UP | neutron-metadata-agent |
| 83cf853d-a2f2-450a-99d7-e9c6fc08f4c3 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent |
| ec302e51-6101-43cf-9f19-88a78613cbee | Linux bridge agent | compute | None | True | UP | neutron-linuxbridge-agent |
| fcb9bc6e-22b1-43bc-9054-272dd517d025 | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
.. end
The output should indicate three agents on the controller node and one
agent on each compute node.
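If an agent is missing or shows ``False`` in the ``Alive`` column, you can
restart it on the affected node and then list the agents again; a sketch
(the service name varies by distribution and agent type):
.. code-block:: console
   # systemctl restart neutron-linuxbridge-agent
.. end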

View File

@ -0,0 +1,23 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | None | True | UP | neutron-metadata-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent |
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1 | None | True | UP | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent | controller | nova | True | UP | neutron-l3-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
.. end
The output should indicate four agents on the controller node and one
agent on each compute node.
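To narrow the output to a single agent type, the list command accepts
filters; a sketch assuming your client version supports the
``--agent-type`` option:
.. code-block:: console
   $ openstack network agent list --agent-type l3
.. end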

View File

@ -0,0 +1,128 @@
Verify operation
~~~~~~~~~~~~~~~~
.. note::
Perform these commands on the controller node.
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code-block:: console
$ . admin-openrc
.. end
#. List loaded extensions to verify successful launch of the
``neutron-server`` process:
.. code-block:: console
$ openstack extension list --network
+---------------------------+---------------------------+----------------------------+
| Name | Alias | Description |
+---------------------------+---------------------------+----------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark |
| | | and use a subnetpool as |
| | | the default |
| Availability Zone | availability_zone | The availability zone |
| | | extension. |
| Network Availability Zone | network_availability_zone | Availability zone support |
| | | for network. |
| Port Binding | binding | Expose port bindings of a |
| | | virtual port to external |
| | | application |
| agent | agent | The agent management |
| | | extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of |
| | | subnets from a subnet pool |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among |
| | | dhcp agents |
| Tag support | tag | Enables to set tag on |
| | | resources. |
| Neutron external network | external-net | Adds external network |
| | | attribute to network |
| | | resource. |
| Neutron Service Flavors | flavors | Flavor specification for |
| | | Neutron advanced services |
| Network MTU | net-mtu | Provides MTU attribute for |
| | | a network resource. |
| Network IP Availability | network-ip-availability | Provides IP availability |
| | | data for each network and |
| | | subnet. |
| Quota management support | quotas | Expose functions for |
| | | quotas management per |
| | | tenant |
| Provider Network | provider | Expose mapping of virtual |
| | | networks to physical |
| | | networks |
| Multi Provider Network | multi-provider | Expose mapping of virtual |
| | | networks to multiple |
| | | physical networks |
| Address scope | address-scope | Address scopes extension. |
| Subnet service types | subnet-service-types | Provides ability to set |
| | | the subnet service_types |
| | | field |
| Resource timestamps | standard-attr-timestamp | Adds created_at and |
| | | updated_at fields to all |
| | | Neutron resources that |
| | | have Neutron standard |
| | | attributes. |
| Neutron Service Type | service-type | API for retrieving service |
| Management | | providers for Neutron |
| | | advanced services |
| Tag support for | tag-ext | Extends tag support to |
| resources: subnet, | | more L2 and L3 resources. |
| subnetpool, port, router | | |
| Neutron Extra DHCP opts | extra_dhcp_opt | Extra options |
| | | configuration for DHCP. |
| | | For example PXE boot |
| | | options to DHCP clients |
| | | can be specified (e.g. |
| | | tftp-server, server-ip- |
| | | address, bootfile-name) |
| Resource revision numbers | standard-attr-revisions | This extension will |
| | | display the revision |
| | | number of neutron |
| | | resources. |
| Pagination support | pagination | Extension that indicates |
| | | that pagination is |
| | | enabled. |
| Sorting support | sorting | Extension that indicates |
| | | that sorting is enabled. |
| security-group | security-group | The security groups |
| | | extension. |
| RBAC Policies | rbac-policies | Allows creation and |
| | | modification of policies |
| | | that control tenant access |
| | | to resources. |
| standard-attr-description | standard-attr-description | Extension to add |
| | | descriptions to standard |
| | | attributes |
| Port Security | port-security | Provides port security |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address |
| | | pairs |
| project_id field enabled | project-id | Extension that indicates |
| | | that project_id field is |
| | | enabled. |
+---------------------------+---------------------------+----------------------------+
.. end
.. note::
Actual output may differ slightly from this example.
You can perform further testing of your networking using the
`neutron-sanity-check command line client <https://docs.openstack.org/cli-reference/neutron-sanity-check.html>`_.
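A typical invocation passes the same configuration files used by the
agents; a sketch assuming the default paths used in this guide:
.. code-block:: console
   # neutron-sanity-check --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
.. end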
Use the verification section for the networking option that you chose to
deploy.
.. toctree::
verify-option1.rst
verify-option2.rst