[install] Initial networking architecture changes

Implement initial networking architecture changes for Liberty
as follows:

1) Remove nova-network.
2) Develop architecture for provider networks with Linux
   bridge agent.
3) Develop architecture for self-service networks with
   Linux bridge agent.
4) Merge the neutron controller and network node configuration.
5) Reconfigure neutron to use the Linux bridge agent.
6) Restructure launch an instance content to account for
   two networking options.
7) Other restructuring as necessary to meet the primary
   goal.

For simplicity, both architectures require only two nodes,
each with two network interfaces, to deploy core OpenStack
services. Also, to address recurring questions about the lack
of support for connecting instances directly to the
public/external network, the self-service architecture builds
on the provider networks architecture, allowing instances to
connect to both private and public networks.

Change-Id: Ie3ab9a15ebfe82c0ce54f709c87a66d7cc46db3f
Implements: blueprint installguide-liberty
Matthew Kassawara 2015-09-08 13:38:37 -05:00
parent 09314e1f72
commit 374a38f49a
93 changed files with 3365 additions and 3936 deletions

View File

@ -3468,7 +3468,7 @@
</glossentry>
<glossentry>
<glossterm>Firewall-as-a-Service (FWaaS)</glossterm>
<glossterm>FWaaS</glossterm>
<indexterm class="singular">
<primary>Firewall-as-a-Service (FWaaS)</primary>
</indexterm>
@ -5171,7 +5171,7 @@
</glossentry>
<glossentry>
<glossterm>Load-Balancer-as-a-Service (LBaaS)</glossterm>
<glossterm>LBaaS</glossterm>
<indexterm
class="singular">
<primary>Load-Balancer-as-a-Service (LBaaS)</primary>
@ -5692,7 +5692,7 @@
</glossentry>
<glossentry>
<glossterm>Network Address Translation (NAT)</glossterm>
<glossterm>NAT</glossterm>
<indexterm class="singular">
<primary>networks</primary>
@ -5700,8 +5700,9 @@
</indexterm>
<glossdef>
<para>The process of modifying IP address information while in
transit. Supported by Compute and Networking.</para>
<para>Network Address Translation; Process of modifying IP address
information while in transit. Supported by Compute and
Networking.</para>
</glossdef>
</glossentry>
@ -5790,7 +5791,7 @@
</glossentry>
<glossentry>
<glossterm>Network Time Protocol (NTP)</glossterm>
<glossterm>NTP</glossterm>
<indexterm class="singular">
<primary>networks</primary>
@ -5798,8 +5799,9 @@
</indexterm>
<glossdef>
<para>A method of keeping a clock for a host or node correct through
communications with a trusted, accurate time source.</para>
<para>Network Time Protocol; Method of keeping a clock for a host or
node correct via communication with a trusted, accurate time
source.</para>
</glossdef>
</glossentry>
@ -7691,6 +7693,19 @@
</glossdef>
</glossentry>
<glossentry>
<glossterm>self-service</glossterm>
<indexterm class="singular">
<primary>self-service</primary>
</indexterm>
<glossdef>
<para>For IaaS, ability for a regular (non-privileged) account to
manage a virtual infrastructure component such as networks without
involving an administrator.</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>SELinux</glossterm>
<indexterm class="singular">
@ -8805,7 +8820,7 @@
</glossentry>
<glossentry>
<glossterm>virtual extensible LAN (VXLAN)</glossterm>
<glossterm>VXLAN</glossterm>
<indexterm class="singular">
<primary>virtual extensible LAN (VXLAN)</primary>
</indexterm>

View File

@ -1,418 +0,0 @@
.. highlight:: ini
==============================
OpenStack Networking (neutron)
==============================
The example architecture with OpenStack Networking (neutron) requires
one controller node, one network node, and at least one compute node.
The controller node contains one network interface on the
:term:`management network`. The network node contains one network interface
on the management network, one on the :term:`instance tunnels network`,
and one on the :term:`external network`. The compute node contains one
network interface on the management network and one on the instance
tunnels network.
The example architecture assumes use of the following networks:
- Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide internet access to all nodes for
administrative purposes such as package installation, security updates,
:term:`DNS`, and :term:`Network Time Protocol (NTP)`.
- Instance tunnels on 10.0.1.0/24 without a gateway
This network does not require a gateway because communication only occurs
among network and compute nodes in your OpenStack environment.
- External on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide internet access to instances in
your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
.. note::
Network interface names vary by distribution. Traditionally,
interfaces use "eth" followed by a sequential number. To cover all
variations, this guide simply refers to the first interface as the
interface with the lowest number, the second interface as the
interface with the middle number, and the third interface as the
interface with the highest number.
|
.. figure:: figures/installguidearch-neutron-networks.png
:alt: Minimal architecture example with OpenStack Networking
(neutron)—Network layout
**Minimal architecture example with OpenStack Networking
(neutron)—Network layout**
|
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Also, each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
|
Controller node
---------------
**To configure networking:**
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. Reboot the system to activate the changes.
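As a point of reference, the management interface settings above
translate to a static configuration along these lines on systems that
use ``/etc/network/interfaces``; the interface name ``eth0`` is only an
assumption and varies by system:

.. code-block:: ini

   # The management network interface
   auto eth0
   iface eth0 inet static
       address 10.0.0.11
       netmask 255.255.255.0
       gateway 10.0.0.1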
|
**To configure name resolution:**
#. Set the hostname of the node to ``controller``.
#. Edit the :file:`/etc/hosts:` file to contain the following:
.. code-block:: ini
:linenos:
# controller
10.0.0.11 controller
# network
10.0.0.21 network
# compute1
10.0.0.31 compute1
.. warning::
Some distributions add an extraneous entry in the :file:`/etc/hosts`
file that resolves the actual hostname to another loopback IP
address such as ``127.0.1.1``. You must comment out or remove this
entry to prevent name resolution problems, but do not remove the
required ``127.0.0.1`` entry.
|
Network node
------------
**To configure networking:**
#. Configure the first interface as the management interface:
IP address: 10.0.0.21
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. Configure the second interface as the instance tunnels interface:
IP address: 10.0.1.21
Network mask: 255.255.255.0 (or /24)
#. The external interface uses a special configuration without an IP
address assigned to it. Configure the third interface as the external
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth2* or *ens256*.
.. only:: ubuntu or debian
a. Edit the :file:`/etc/network/interfaces` file to contain the following:
.. code-block:: ini
:linenos:
# The external network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. only:: rdo
a. Edit the :file:`/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME` file
to contain the following:
Do not change the ``HWADDR`` and ``UUID`` keys.
.. code-block:: ini
:linenos:
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
.. only:: obs
a. Edit the :file:`/etc/sysconfig/network/ifcfg-INTERFACE_NAME` file
to contain the following:
.. code-block:: ini
:linenos:
STARTMODE='auto'
BOOTPROTO='static'
4. Reboot the system to activate the changes.
|
**To configure name resolution:**
#. Set the hostname of the node to ``network``.
#. Edit the :file:`/etc/hosts` file to contain the following:
.. code-block:: ini
:linenos:
# network
10.0.0.21 network
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
.. warning::
Some distributions add an extraneous entry in the :file:`/etc/hosts`
file that resolves the actual hostname to another loopback IP
address such as ``127.0.1.1``. You must comment out or remove this
entry to prevent name resolution problems, but do not remove the
required ``127.0.0.1`` entry.
|
Compute node
------------
**To configure networking:**
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. Configure the second interface as the instance tunnels interface:
IP address: 10.0.1.31
Network mask: 255.255.255.0 (or /24)
.. note::
Additional compute nodes should use 10.0.1.32, 10.0.1.33, and so on.
#. Reboot the system to activate the changes.
|
**To configure name resolution:**
#. Set the hostname of the node to ``compute1``.
#. Edit the :file:`/etc/hosts` file to contain the following:
.. code-block:: ini
:linenos:
# compute1
10.0.0.31 compute1
# controller
10.0.0.11 controller
# network
10.0.0.21 network
.. warning::
Some distributions add an extraneous entry in the :file:`/etc/hosts`
file that resolves the actual hostname to another loopback IP
address such as ``127.0.1.1``. You must comment out or remove this
entry to prevent name resolution problems.
|
Verify connectivity
-------------------
We recommend that you verify network connectivity to the internet and
among the nodes before proceeding further.
#. From the *controller* node, :command:`ping` a site on the internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
#. From the *controller* node, :command:`ping` the management interface
on the *network* node:
.. code-block:: console
# ping -c 4 network
PING network (10.0.0.21) 56(84) bytes of data.
64 bytes from network (10.0.0.21): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from network (10.0.0.21): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from network (10.0.0.21): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from network (10.0.0.21): icmp_seq=4 ttl=64 time=0.202 ms
--- network ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
#. From the *controller* node, :command:`ping` the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
#. From the *network* node, :command:`ping` a site on the internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
#. From the *network* node, :command:`ping` the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
#. From the *network* node, :command:`ping` the instance tunnels interface
on the *compute* node:
.. code-block:: console
# ping -c 4 10.0.1.31
PING 10.0.1.31 (10.0.1.31) 56(84) bytes of data.
64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=4 ttl=64 time=0.202 ms
--- 10.0.1.31 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
#. From the *compute* node, :command:`ping` a site on the internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
#. From the *compute* node, :command:`ping` the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
#. From the *compute* node, :command:`ping` the instance tunnels interface
on the *network* node:
.. code-block:: console
# ping -c 4 10.0.1.21
PING 10.0.1.21 (10.0.1.21) 56(84) bytes of data.
64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=4 ttl=64 time=0.202 ms
--- 10.0.1.21 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

View File

@ -1,92 +0,0 @@
==========
Networking
==========
.. toctree::
basics-networking-neutron.rst
basics-networking-nova.rst
.. only:: ubuntu
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://help.ubuntu.com/lts/serverguide/network-configuration.html>`__ .
.. only:: debian
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://wiki.debian.org/NetworkConfiguration>`__ .
.. only:: rdo
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Using_the_Command_Line_Interface.html>`__ .
.. only:: obs
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `SLES 12 <https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__ or `openSUSE <http://activedoc.opensuse.org/book/opensuse-reference/chapter-13-basic-networking>`__ documentation.
All nodes require Internet access for administrative purposes such as
package installation, security updates, :term:`DNS`, and
:term:`Network Time Protocol (NTP)`. In most cases,
nodes should obtain Internet access through the management network
interface. To highlight the importance of network separation, the
example architectures use `private address
space <https://tools.ietf.org/html/rfc1918>`__ for the management
network and assume that network infrastructure provides Internet access
via :term:`Network Address Translation (NAT)`. To illustrate the flexibility
of :term:`IaaS`, the example architectures use public IP address space
for the external network and assume that network infrastructure provides
direct Internet access to instances in your OpenStack environment.
In environments with only one block of public IP address space,
both the management and external networks must ultimately obtain Internet
access using it. For simplicity, the diagrams in this guide only show
Internet access for OpenStack services.
.. only:: obs
**To disable Network Manager**
* Use the YaST network module:
.. code-block:: console
# yast2 network
For more information, see the
`SLES <https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_nm_activate.html>`__
or the `openSUSE <http://activedoc.opensuse.org/book/opensuse-reference/chapter-13-basic-networking#sec.basicnet.yast.netcard.global>`__ documentation.
.. note::
.. only:: rdo or obs
Your distribution enables a restrictive :term:`firewall` by
default. During the installation process, certain steps will
fail unless you alter or disable the firewall. For more
information about securing your environment, refer to the
`OpenStack Security Guide <http://docs.openstack.org/sec/>`__.
.. only:: ubuntu or debian
Your distribution does not enable a restrictive :term:`firewall`
by default. For more information about securing your environment,
refer to the
`OpenStack Security Guide <http://docs.openstack.org/sec/>`__.

View File

@ -1,207 +0,0 @@
.. highlight:: ini
:linenothreshold: 1
Configure Network Time Protocol (NTP)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You should install Chrony, an implementation of
:term:`Network Time Protocol (NTP)`, to properly synchronize services among
nodes. We recommend that you configure the controller node to reference more
accurate (lower stratum) servers and other nodes to reference the controller
node.
Controller node
---------------
**To install the NTP service**
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install chrony
.. only:: rdo
.. code-block:: console
# yum install chrony
.. only:: obs
On openSUSE:
.. code-block:: console
# zypper addrepo http://download.opensuse.org/repositories/network:time/openSUSE_13.2/network:time.repo
# zypper refresh
# zypper install chrony
On SLES:
.. code-block:: console
# zypper addrepo http://download.opensuse.org/repositories/network:time/SLE_12/network:time.repo
# zypper refresh
# zypper install chrony
|
**To configure the NTP service**
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative servers such
as those provided by your organization.
.. only:: ubuntu or debian
1. Edit the :file:`/etc/chrony/chrony.conf` file and add, change, or remove the
following keys as necessary for your environment:
.. code:: ini
server NTP_SERVER iburst
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
2. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. only:: rdo or obs
1. Edit the :file:`/etc/chrony.conf` file and add, change, or remove the
following keys as necessary for your environment:
.. code:: ini
server NTP_SERVER iburst
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
2. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
|
.. _basics-ntp-other-nodes:
Other nodes
-----------
**To install the NTP service**
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install chrony
.. only:: rdo
.. code-block:: console
# yum install chrony
.. only:: obs
On openSUSE:
.. code-block:: console
# zypper addrepo http://download.opensuse.org/repositories/network:time/openSUSE_13.2/network:time.repo
# zypper refresh
# zypper install chrony
On SLES:
.. code-block:: console
# zypper addrepo http://download.opensuse.org/repositories/network:time/SLE_12/network:time.repo
# zypper refresh
# zypper install chrony
|
**To configure the NTP service**
Configure the network and compute nodes to reference the controller
node.
.. only:: ubuntu or debian
1. Edit the :file:`/etc/chrony/chrony.conf` and comment out or remove all but one ``server`` key. Change
it to reference the controller node:
.. code:: ini
server controller iburst
2. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. only:: rdo or obs
1. Edit the :file:`/etc/chrony.conf` and comment out or remove all but one ``server`` key. Change
it to reference the controller node:
.. code:: ini
server controller iburst
2. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
|
Verify operation
----------------
We recommend that you verify NTP synchronization before proceeding
further. Some nodes, particularly those that reference the controller
node, can take several minutes to synchronize.
#. Run this command on the *controller* node:
.. code-block:: console
# chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11 2 7 12 137 -2814us[-3000us] +/- 43ms
^* 192.0.2.12 2 6 177 46 +17us[ -23us] +/- 68ms
Contents in the *Name/IP address* column should indicate the hostname or IP
address of one or more NTP servers. Contents in the *S* column should indicate
*\** for the server to which the NTP service is currently synchronized.
#. Run the same command on *all other* nodes:
.. code-block:: console
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 3 9 377 421 +15us[ -87us] +/- 15ms
Contents in the *Name/IP address* column should indicate the hostname of the controller node.

View File

@ -2,6 +2,5 @@
Next steps
==========
Your OpenStack environment now includes Telemetry.
You can :doc:`launch an instance <launch-instance>` or add more
services to your environment in the previous chapters.
Your OpenStack environment now includes Telemetry. You can
:ref:`launch-instance` or add more services to your environment.

View File

@ -21,7 +21,7 @@ configure the volume service on it. Similar to the controller node,
the storage node contains one network interface on the
:term:`management network`. The storage node also
needs an empty block storage device of suitable size for your
environment. For more information, see :doc:`basic_environment`.
environment. For more information, see :ref:`environment`.
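Before configuring the volume service, it can help to confirm that the
node actually provides an unused block device; the device name
``/dev/sdb`` is only an assumption:

.. code-block:: console

   # lsblk

The intended device should show no partitions or mount points in the
output.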
1. Configure the management interface:
@ -42,9 +42,8 @@ environment. For more information, see :doc:`basic_environment`.
Also add this content to the :file:`/etc/hosts` file
on all other nodes in your environment.
4. Install and configure :term:`NTP
<Network Time Protocol (NTP)>` using the instructions in
:ref:`the section called "Other nodes" <basics-ntp-other-nodes>`.
4. Install and configure :term:`NTP` using the instructions in
:ref:`environment-ntp`.
.. only:: obs

View File

@ -1,13 +1,10 @@
.. _cinder-verify:
================
Verify operation
================
This section describes how to verify operation of the Block Storage
service by creating a volume.
For more information about how to manage volumes, see the
`OpenStack User Guide
<http://docs.openstack.org/user-guide/index.html>`__.
Verify operation of the Block Storage service.
.. note::
@ -38,61 +35,3 @@ For more information about how to manage volumes, see the
| cinder-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None |
| cinder-volume | block1@lvm | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
#. Source the ``demo`` credentials to perform
the following steps as a non-administrative project:
.. code-block:: console
$ source demo-openrc.sh
#. Create a 1 GB volume:
.. code-block:: console
$ cinder create --name demo-volume1 1
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2015-04-21T23:46:08.000000 |
| description | None |
| encrypted | False |
| id | 6c7a3d28-e1ef-42a0-b1f7-8d6ce9218412 |
| metadata | {} |
| multiattach | False |
| name | demo-volume1 |
| os-vol-tenant-attr:tenant_id | ab8ea576c0574b6092bb99150449b2d3 |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 3a81e6c8103b46709ef8d141308d4c72 |
| volume_type | None |
+---------------------------------------+--------------------------------------+
#. Verify creation and availability of the volume:
.. code-block:: console
$ cinder list
+--------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------+-----------+--------------+------+-------------+----------+-------------+
| 6c7a3d28-... | available | demo-volume1 | 1 | None | false | |
+--------------+-----------+--------------+------+-------------+----------+-------------+
If the status does not indicate ``available``,
check the logs in the :file:`/var/log/cinder` directory
on the controller and volume nodes for more information.
.. note::
The :doc:`launch an instance <launch-instance>` chapter includes
instructions for attaching this volume to an instance.

View File

@ -3,8 +3,7 @@ Next steps
==========
Your OpenStack environment now includes the dashboard. You can
:doc:`launch an instance <launch-instance>` or add
more services to your environment in the following chapters.
:ref:`launch-instance` or add more services to your environment.
After you install and configure the dashboard, you can
complete the following tasks:

View File

@ -6,7 +6,7 @@ OpenStack service dependencies
==============================
OpenStack packages
------------------
~~~~~~~~~~~~~~~~~~
Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
@ -19,7 +19,8 @@ these procedures on all nodes.
.. only:: ubuntu
**To enable the OpenStack repository**
Enable the OpenStack repository
-------------------------------
* Install the Ubuntu Cloud archive keyring and repository:
@ -27,12 +28,13 @@ these procedures on all nodes.
# apt-get install ubuntu-cloud-keyring
# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
"trusty-updates/kilo main" > /etc/apt/sources.list.d/ \
cloudarchive-kilo.list
"trusty-updates/liberty main" > /etc/apt/sources.list.d/ \
cloudarchive-liberty.list
.. only:: rdo
**To configure prerequisites**
Prerequisites
-------------
#. On RHEL and CentOS, enable the
`EPEL <https://fedoraproject.org/wiki/EPEL>`_ repository:
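The command itself falls outside this hunk. On CentOS, one common way
to enable EPEL is sketched below; RHEL systems may instead need the
``epel-release`` package from the Fedora project:

.. code-block:: console

   # yum install epel-release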
@ -59,18 +61,19 @@ these procedures on all nodes.
.. only:: rdo
**To enable the OpenStack repository**
Enable the OpenStack repository
-------------------------------
* Install the ``rdo-release-kilo`` package to enable the RDO repository:
* Install the ``rdo-release-liberty`` package to enable the RDO repository:
.. code-block:: console
# yum install http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm
# yum install http://rdo.fedorapeople.org/openstack-liberty/rdo-release-liberty.rpm
.. only:: obs
**To enable the OpenStack repository**
Enable the OpenStack repository
-------------------------------
* Enable the Open Build Service repositories based on your openSUSE or
SLES version:
@ -79,7 +82,7 @@ these procedures on all nodes.
.. code-block:: console
# zypper addrepo -f obs://Cloud:OpenStack:Kilo/openSUSE_13.2 Kilo
# zypper addrepo -f obs://Cloud:OpenStack:Liberty/openSUSE_13.2 Liberty
The openSUSE distribution uses the concept of patterns to represent
collections of packages. If you selected 'Minimal Server Selection (Text
@ -95,7 +98,7 @@ these procedures on all nodes.
.. code-block:: console
# zypper addrepo -f obs://Cloud:OpenStack:Kilo/SLE_12 Kilo
# zypper addrepo -f obs://Cloud:OpenStack:Liberty/SLE_12 Liberty
.. note::
@ -112,9 +115,10 @@ these procedures on all nodes.
.. only:: debian
**To use the Debian 8 (Jessie) backports archive for Kilo**
Enable the backports repository
-------------------------------
The Kilo release is available directly through the official
The Liberty release is available directly through the official
Debian backports repository. To use this repository, follow
the instructions from the official
`Debian website <http://backports.debian.org/Instructions/>`_,
@ -138,7 +142,8 @@ these procedures on all nodes.
# apt-get -t jessie-backports install ``PACKAGE``
**To finalize the installation**
Finalize the installation
-------------------------
.. only:: ubuntu or debian
@ -215,7 +220,7 @@ these procedures on all nodes.
|
SQL database
------------
~~~~~~~~~~~~
Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
@ -224,7 +229,8 @@ services also support other SQL databases including
`PostgreSQL <http://www.postgresql.org/>`__.
**To install and configure the database server**
Install and configure the database server
-----------------------------------------
1. Install the packages:
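The package list is not shown in this hunk. As a rough sketch, an
Ubuntu-based controller might use something like the following; the
package names here are assumptions that vary by distribution and
release:

.. code-block:: console

   # apt-get install mariadb-server python-pymysql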
@ -319,7 +325,8 @@ services also support other SQL databases including
init-connect = 'SET NAMES utf8'
character-set-server = utf8
**To finalize installation**
Finalize the installation
-------------------------
.. only:: ubuntu or debian
@ -364,7 +371,7 @@ services also support other SQL databases including
|
Message queue
-------------
~~~~~~~~~~~~~
OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically
@ -377,7 +384,8 @@ service because most distributions support it. If you prefer to
implement a different message queue service, consult the documentation
associated with it.
**To install the message queue service**
Install the message queue service
---------------------------------
* Install the package:
@ -400,7 +408,8 @@ associated with it.
# zypper install rabbitmq-server
**To configure the message queue service**
Configure the message queue service
-----------------------------------
#. Start the message queue service and configure it to start when the
system boots:
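The commands are truncated in this view. On systemd-based
distributions, starting and enabling RabbitMQ typically looks like this
sketch:

.. code-block:: console

   # systemctl enable rabbitmq-server.service
   # systemctl start rabbitmq-server.service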

View File

@ -1,16 +1,63 @@
.. highlight:: ini
Host Networking
~~~~~~~~~~~~~~~
================================
Legacy networking (nova-network)
================================
.. only:: ubuntu
The example architecture with legacy networking (nova-network) requires
a controller node and at least one compute node. The controller node
contains one network interface on the :term:`management network`. The
compute node contains one network interface on the management network
and one on the :term:`external network`.
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://help.ubuntu.com/lts/serverguide/network-configuration.html>`__ .
The example architecture assumes use of the following networks:
.. only:: debian
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://wiki.debian.org/NetworkConfiguration>`__ .
.. only:: rdo
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Using_the_Command_Line_Interface.html>`__ .
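On RHEL and CentOS, for example, disabling NetworkManager in favor of
manually edited network scripts might look like the following sketch;
treat it as an illustration rather than a required step:

.. code-block:: console

   # systemctl stop NetworkManager.service
   # systemctl disable NetworkManager.service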
.. only:: obs
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `SLES 12 <https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__ or `openSUSE <http://activedoc.opensuse.org/book/opensuse-reference/chapter-13-basic-networking>`__ documentation.
All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS`, and :term:`NTP`. In most cases,
nodes should obtain Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT` or other method. The example
architectures use routable IP address space for the public network and
assume that the physical network infrastructure provides direct Internet
access. In the provider networks architecture, all instances attach directly
to the public network. In the self-service networks architecture, instances
can attach to a private or public network. Private networks can reside
entirely within OpenStack or provide some level of public network access
using :term:`NAT`.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
- Management on 10.0.0.0/24 with gateway 10.0.0.1
@ -18,9 +65,9 @@ The example architecture assumes use of the following networks:
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, :term:`DNS`, and :term:`Network Time Protocol (NTP)`.
security updates, :term:`DNS`, and :term:`NTP`.
- External on 203.0.113.0/24 with gateway 203.0.113.1
- Public on 203.0.113.0/24 with gateway 203.0.113.1
.. note::
@ -40,15 +87,6 @@ network infrastructure.
|
.. figure:: figures/installguidearch-nova-networks.png
:alt: Minimal architecture example with legacy networking
(nova-network)—Network layout
**Minimal architecture example with legacy networking
(nova-network)—Network layout**
|
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Also, each node must resolve the other nodes by
@ -62,12 +100,11 @@ the controller node.
connectivity. We recommend using a local terminal session for these
procedures.
|
Controller node
---------------
**To configure networking:**
Configure networking
~~~~~~~~~~~~~~~~~~~~
#. Configure the first interface as the management interface:
@ -81,14 +118,14 @@ Controller node
|
**To configure name resolution:**
Configure name resolution
~~~~~~~~~~~~~~~~~~~~~~~~~
#. Set the hostname of the node to ``controller``.
#. Edit the :file:`/etc/hosts` file to contain the following:
.. code-block:: ini
:linenos:
# controller
10.0.0.11 controller
@ -100,16 +137,17 @@ Controller node
Some distributions add an extraneous entry in the :file:`/etc/hosts`
file that resolves the actual hostname to another loopback IP
address such as ``127.0.1.1``. Note it's ``127.0.*1.1*``, do not
remove the required ``127.0.0.1`` entry. You must comment out or
remove this entry to prevent name resolution problems.
address such as ``127.0.1.1``. You must comment out or remove this
entry to prevent name resolution problems. Do not remove the required
``127.0.0.1`` entry.
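For illustration, a corrected :file:`/etc/hosts` on the controller
node might look like this, with the extraneous entry commented out:

.. code-block:: ini

   127.0.0.1       localhost
   #127.0.1.1      controller
   10.0.0.11       controller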
|
Compute node
------------
**To configure networking:**
Configure networking
~~~~~~~~~~~~~~~~~~~~
#. Configure the first interface as the management interface:
@ -123,8 +161,8 @@ Compute node
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The external interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the external
#. The public interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the public
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
@ -135,9 +173,8 @@ Compute node
a. Edit the :file:`/etc/network/interfaces` file to contain the following:
.. code-block:: ini
:linenos:
# The external network interface
# The public network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
@ -151,7 +188,6 @@ Compute node
Do not change the ``HWADDR`` and ``UUID`` keys.
.. code-block:: ini
:linenos:
DEVICE=INTERFACE_NAME
TYPE=Ethernet
@ -164,7 +200,6 @@ Compute node
contain the following:
.. code-block:: ini
:linenos:
STARTMODE='auto'
BOOTPROTO='static'
@ -173,28 +208,28 @@ Compute node
|
**To configure name resolution:**
Configure name resolution
~~~~~~~~~~~~~~~~~~~~~~~~~
#. Set the hostname of the node to ``compute1``.
#. Edit the :file:`/etc/hosts` file to contain the following:
.. code-block:: ini
:linenos:
# compute1
10.0.0.31 compute1
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
.. warning::
Some distributions add an extraneous entry in the :file:`/etc/hosts`
file that resolves the actual hostname to another loopback IP
address such as ``127.0.1.1``. Note it's ``127.0.*1.1*``, do not
remove the required ``127.0.0.1`` entry. You must comment out or
remove this entry to prevent name resolution problems.
address such as ``127.0.1.1``. You must comment out or remove this
entry to prevent name resolution problems. Do not remove the required
``127.0.0.1`` entry.
|
@ -204,7 +239,7 @@ Verify connectivity
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
#. From the *controller* node, :command:`ping` a site on the Internet:
#. From the *controller* node, test access to the Internet:
.. code-block:: console
@ -219,8 +254,8 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
#. From the *controller* node, :command:`ping` the management interface
on the *compute* node:
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
@ -235,7 +270,7 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
#. From the *compute* node, ``ping`` a site on the Internet:
#. From the *compute* node, test access to the Internet:
.. code-block:: console
@ -250,7 +285,7 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
#. From the *compute* node, :command:`ping` the management interface on the
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
@ -265,3 +300,20 @@ among the nodes before proceeding further.
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. note::
.. only:: rdo or obs
Your distribution enables a restrictive :term:`firewall` by
default. During the installation process, certain steps will
fail unless you alter or disable the firewall. For more
information about securing your environment, refer to the
`OpenStack Security Guide <http://docs.openstack.org/sec/>`__.
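In a proof-of-concept environment on RHEL or CentOS, for example, one
way to avoid firewall interference is to stop and disable
``firewalld``; treat this as a testing convenience, not a production
recommendation:

.. code-block:: console

   # systemctl stop firewalld.service
   # systemctl disable firewalld.service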
.. only:: ubuntu or debian
Your distribution does not enable a restrictive :term:`firewall`
by default. For more information about securing your environment,
refer to the
`OpenStack Security Guide <http://docs.openstack.org/sec/>`__.

View File

@ -0,0 +1,85 @@
.. _environment-ntp-controller:
Controller node
~~~~~~~~~~~~~~~
Install the NTP service
-----------------------
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install chrony
.. only:: rdo
.. code-block:: console
# yum install chrony
.. only:: obs
On openSUSE:
.. code-block:: console
# zypper addrepo http://download.opensuse.org/repositories/network:time/openSUSE_13.2/network:time.repo
# zypper refresh
# zypper install chrony
On SLES:
.. code-block:: console
# zypper addrepo http://download.opensuse.org/repositories/network:time/SLE_12/network:time.repo
# zypper refresh
# zypper install chrony
|
Configure the NTP service
-------------------------
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative servers such
as those provided by your organization.
.. only:: ubuntu or debian
1. Edit the :file:`/etc/chrony/chrony.conf` file and add, change, or remove the
following keys as necessary for your environment:
.. code:: ini
server NTP_SERVER iburst
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
2. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. only:: rdo or obs
1. Edit the :file:`/etc/chrony.conf` file and add, change, or remove the
following keys as necessary for your environment:
.. code:: ini
server NTP_SERVER iburst
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
2. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service

View File

@ -0,0 +1,76 @@
.. _environment-ntp-other:
Other nodes
~~~~~~~~~~~
Install the NTP service
-----------------------
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install chrony
.. only:: rdo
.. code-block:: console
# yum install chrony
.. only:: obs
On openSUSE:
.. code-block:: console
# zypper addrepo http://download.opensuse.org/repositories/network:time/openSUSE_13.2/network:time.repo
# zypper refresh
# zypper install chrony
On SLES:
.. code-block:: console
# zypper addrepo http://download.opensuse.org/repositories/network:time/SLE_12/network:time.repo
# zypper refresh
# zypper install chrony
|
Configure the NTP service
-------------------------
Configure the network and compute nodes to reference the controller
node.
.. only:: ubuntu or debian
1. Edit the :file:`/etc/chrony/chrony.conf` and comment out or remove all
but one ``server`` key. Change it to reference the controller node:
.. code:: ini
server controller iburst
2. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. only:: rdo or obs
1. Edit the :file:`/etc/chrony.conf` and comment out or remove all but one
``server`` key. Change it to reference the controller node:
.. code:: ini
server controller iburst
2. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service

View File

@ -0,0 +1,52 @@
.. highlight:: ini
:linenothreshold: 1
.. _environment-ntp:
Configure Network Time Protocol (NTP)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You should install Chrony, an implementation of :term:`NTP`, to properly
synchronize services among nodes. We recommend that you configure the
controller node to reference more accurate (lower stratum) servers and other
nodes to reference the controller node.
.. toctree::
:maxdepth: 1
environment-ntp-controller.rst
environment-ntp-other.rst
Verify operation
----------------
We recommend that you verify NTP synchronization before proceeding
further. Some nodes, particularly those that reference the controller
node, can take several minutes to synchronize.
#. Run this command on the *controller* node:
.. code-block:: console
# chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11 2 7 12 137 -2814us[-3000us] +/- 43ms
^* 192.0.2.12 2 6 177 46 +17us[ -23us] +/- 68ms
Contents in the *Name/IP address* column should indicate the hostname or IP
address of one or more NTP servers. Contents in the *S* column should indicate
*\** for the server to which the NTP service is currently synchronized.
#. Run the same command on *all other* nodes:
.. code-block:: console
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 3 9 377 421 +15us[ -87us] +/- 15ms
Contents in the *Name/IP address* column should indicate the hostname of the controller node.

View File

@ -1,6 +1,8 @@
=================
Basic environment
=================
.. _environment:
===========
Environment
===========
.. note::
@ -9,11 +11,8 @@ Basic environment
to install Kilo, you must use the `Kilo
version <http://docs.openstack.org>`__ of this guide instead.
This section explains how to configure each node in the
:ref:`overview-example-architectures`, including the two-node architecture
with legacy networking :ref:`figure-legacy-network-hw` and three-node
architecture with OpenStack Networking (neutron)
:ref:`figure-neutron-network-hw`.
This section explains how to configure the controller and one compute
node using the example architecture.
Although most environments include Identity, Image service, Compute, at least
one networking service, and the dashboard, the Object Storage service can
@ -34,26 +33,19 @@ utility.
Before you begin
~~~~~~~~~~~~~~~~
For best performance, we recommend that your environment meets or
exceeds the hardware requirements in
:ref:`figure-neutron-network-hw` or
:ref:`figure-legacy-network-hw`. However, OpenStack does not require a
significant amount of resources and the following minimum requirements
should support a proof-of-concept environment with core services
For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`. However, OpenStack does
not require a significant amount of resources and the following minimum
requirements should support a proof-of-concept environment with core services
and several :term:`CirrOS` instances:
- Controller Node: 1 processor, 2 GB memory, and 5 GB storage
- Network Node: 1 processor, 512 MB memory, and 5 GB storage
- Compute Node: 1 processor, 2 GB memory, and 10 GB storage
To minimize clutter and provide more resources for OpenStack, we
recommend a minimal installation of your Linux distribution. Also, we
strongly recommend that you install a 64-bit version of your
distribution on at least the compute node. If you install a 32-bit
version of your distribution on the compute node, attempting to start an
instance using a 64-bit image will fail.
To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.
A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
@ -152,14 +144,8 @@ and their associated references in the guide:
- Password of Compute service user ``nova``
* - ``RABBIT_PASS``
- Password of user guest of RabbitMQ
* - ``SAHARA_DBPASS``
- Database password of Data processing service
* - ``SWIFT_PASS``
- Password of Object Storage service user ``swift``
* - ``TROVE_DBPASS``
- Database password of Database service
* - ``TROVE_PASS``
- Password of Database service user ``trove``
|
@ -172,20 +158,20 @@ policies. See the `Cloud Administrator Guide <http://docs.openstack.org/
admin-guide-cloud/compute-root-wrap-reference.html>`__
for more information.
Also, the Networking service assumes default
values for kernel network parameters and modifies firewall rules. To
avoid most issues during your initial installation, we recommend using a
stock deployment of a supported distribution on your hosts. However, if
you choose to automate deployment of your hosts, review the
configuration and policies applied to them before proceeding further.
Also, the Networking service assumes default values for kernel network
parameters and modifies firewall rules. To avoid most issues during your
initial installation, we recommend using a stock deployment of a supported
distribution on your hosts. However, if you choose to automate deployment
of your hosts, review the configuration and policies applied to them before
proceeding further.
Networking, NTP, OpenStack service dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Host networking, NTP, OpenStack service dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2
basics-networking.rst
basics-ntp.rst
basics-packages.rst
environment-networking.rst
environment-ntp.rst
environment-dependencies.rst

(Binary figure files and their large SVG sources were added and removed in this commit; image content and oversized diffs are not shown.)

View File

@ -3,5 +3,5 @@ Next steps
==========
Your OpenStack environment now includes Orchestration. You can
:doc:`launch an instance <launch-instance>` or add more
services to your environment in the following chapters.
:ref:`launch-instance` or add more services to your environment
in the following chapters.

View File

@ -80,12 +80,12 @@ Contents
common/conventions.rst
overview.rst
basic_environment.rst
environment.rst
debconf/debconf.rst
keystone.rst
glance.rst
nova.rst
networking.rst
neutron.rst
horizon.rst
cinder.rst
swift.rst

View File

@ -80,11 +80,11 @@ Contents
common/conventions.rst
overview.rst
basic_environment.rst
environment.rst
keystone.rst
glance.rst
nova.rst
networking.rst
neutron.rst
horizon.rst
cinder.rst
swift.rst

View File

@ -0,0 +1,127 @@
.. _launch-instance-cinder:
Block Storage
~~~~~~~~~~~~~
Create a volume
---------------
#. Source the ``demo`` credentials to perform
the following steps as a non-administrative project:
.. code-block:: console
$ source demo-openrc.sh
#. Create a 1 GB volume:
.. code-block:: console
$ cinder create --display-name volume1 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-09-22T13:36:19.457750 |
| display_description | None |
| display_name | volume1 |
| encrypted | False |
| id | 0a816b7c-e578-4290-bb74-c13b8b90d4e7 |
| metadata | {} |
| multiattach | false |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
#. After a short time, the volume status should change from ``creating``
to ``available``:
.. code-block:: console
$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 0a816b7c-e578-4290-bb74-c13b8b90d4e7 | available | volume1 | 1 | - | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
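If the status does not change to ``available`` after a reasonable time, you can optionally inspect the volume for more detail. This is only a sketch and not part of the original steps:
.. code-block:: console
$ cinder show volume1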
Attach the volume to an instance
--------------------------------
#. Attach a volume to an instance:
.. code-block:: console
$ nova volume-attach INSTANCE_NAME VOLUME_ID
Replace ``INSTANCE_NAME`` with the name of the instance and ``VOLUME_ID``
with the ID of the volume you want to attach to it.
**Example**
Attach the ``0a816b7c-e578-4290-bb74-c13b8b90d4e7`` volume to the
``public-instance`` instance:
.. code-block:: console
$ nova volume-attach public-instance 0a816b7c-e578-4290-bb74-c13b8b90d4e7
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
| serverId | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf |
| volumeId | 0a816b7c-e578-4290-bb74-c13b8b90d4e7 |
+----------+--------------------------------------+
#. List volumes:
.. code-block:: console
$ nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | in-use | | 1 | - | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf |
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
#. Access your instance using SSH and use the ``fdisk`` command to verify
presence of the volume as the ``/dev/vdb`` block storage device:
.. code-block:: console
$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/vda1 * 16065 2088449 1036192+ 83 Linux
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
.. note::
You must create a partition table and file system to use the volume.
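As an optional illustration only (not shown in the original steps), a minimal sketch of preparing and mounting the volume from inside the instance follows. It creates the file system directly on the device; you can instead partition it first with :command:`fdisk` as the note suggests. The ``/mnt/volume1`` mount point is an arbitrary example, and the minimal CirrOS image may not ship every filesystem tool, so adjust the commands to whatever your image provides:
.. code-block:: console
$ sudo mkfs.ext3 /dev/vdb
$ sudo mkdir -p /mnt/volume1
$ sudo mount /dev/vdb /mnt/volume1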
For more information about how to manage volumes, see the
`OpenStack User Guide
<http://docs.openstack.org/user-guide/index.html>`__.
Return to :ref:`launch-instance`.

View File

@ -0,0 +1,245 @@
.. _launch-instance-networks-private:
Private project network
~~~~~~~~~~~~~~~~~~~~~~~
If you chose networking option 2, you can also create a private project
virtual network that connects to the physical network infrastructure
via layer-3 (routing) and NAT. This network includes a DHCP server that
provides IP addresses to instances. An instance on this network can
automatically access external networks such as the Internet. However, access
to an instance on this network from external networks such as the Internet
requires a :term:`floating IP address`.
The ``demo`` or other unprivileged user can create this network because it
only provides connectivity to instances within the ``demo`` project.
.. warning::
You must :ref:`create the public provider network
<launch-instance-networks-public>` before the private project network.
.. note::
The following instructions and diagrams use example IP address ranges. You
must adjust them for your particular environment.
.. figure:: figures/network2-overview.png
:alt: Networking Option 2: Self-service networks - Overview
**Networking Option 2: Self-service networks - Overview**
.. figure:: figures/network2-connectivity.png
:alt: Networking Option 2: Self-service networks - Connectivity
**Networking Option 2: Self-service networks - Connectivity**
Create the private project network
----------------------------------
#. On the controller node, source the ``demo`` credentials to gain access to
user-only CLI commands:
.. code-block:: console
$ source demo-openrc.sh
#. Create the network:
.. code-block:: console
$ neutron net-create private
Created a new network:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| id | 7c6f9b37-76b4-463e-98d8-27e5686ed083 |
| mtu | 0 |
| name | private |
| port_security_enabled | True |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-----------------------+--------------------------------------+
Non-privileged users typically cannot supply additional parameters to
this command. The service automatically chooses parameters using
information from the following file:
``ml2_conf.ini``:
.. code-block:: ini
[ml2]
tenant_network_types = vxlan
[ml2_type_vxlan]
vni_ranges = 1:1000
#. Create a subnet on the network:
.. code-block:: console
$ neutron subnet-create private PRIVATE_NETWORK_CIDR --name private \
--gateway PRIVATE_NETWORK_GATEWAY
Replace ``PRIVATE_NETWORK_CIDR`` with the subnet you want to use on the
private network. You can use any arbitrary value, although we recommend
a network from `RFC 1918 <https://tools.ietf.org/html/rfc1918>`_.
Replace ``PRIVATE_NETWORK_GATEWAY`` with the gateway you want to use on
the private network, typically the ".1" IP address.
**Example**
The private network uses 172.16.1.0/24 with a gateway on 172.16.1.1:
.. code-block:: console
$ neutron subnet-create private 172.16.1.0/24 --name private --gateway 172.16.1.1
Created a new subnet:
+-------------------+------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------+
| allocation_pools | {"start": "172.16.1.2", "end": "172.16.1.254"} |
| cidr | 172.16.1.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 172.16.1.1 |
| host_routes | |
| id | 3482f524-8bff-4871-80d4-5774c2730728 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | private |
| network_id | 7c6f9b37-76b4-463e-98d8-27e5686ed083 |
| subnetpool_id | |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-------------------+------------------------------------------------+
Create a router
---------------
Private project networks connect to public provider networks using a virtual
router. Each router contains an interface to at least one private project
network and a gateway on a public provider network.
The public provider network must include the ``router:external`` option to
enable project routers to use it for connectivity to external networks such
as the Internet. The ``admin`` or other privileged user must include this
option during network creation or add it later. In this case, we can add it
to the existing ``public`` provider network.
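For reference only, a privileged user could instead include the option when initially creating the network. The following is just a sketch of that alternative, reusing the same parameters as the existing ``public`` network; this guide does not use it and adds the option to the existing network in the next steps:
.. code-block:: console
$ neutron net-create public --shared --router:external \
--provider:physical_network public --provider:network_type flat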
#. On the controller node, source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ source admin-openrc.sh
#. Add the ``router:external`` option to the ``public`` provider network:
.. code-block:: console
$ neutron net-update public --router:external
Updated network: public
#. Source the ``demo`` credentials to gain access to user-only CLI commands:
.. code-block:: console
$ source demo-openrc.sh
#. Create the router:
.. code-block:: console
$ neutron router-create router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | 89dd2083-a160-4d75-ab3a-14239f01ea0b |
| name | router |
| routes | |
| status | ACTIVE |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-----------------------+--------------------------------------+
#. Add the private network subnet as an interface on the router:
.. code-block:: console
$ neutron router-interface-add router private
Added interface bff6605d-824c-41f9-b744-21d128fc86e1 to router router.
#. Set a gateway on the public network on the router:
.. code-block:: console
$ neutron router-gateway-set router public
Set gateway for router router
Verify operation
----------------
We recommend that you verify operation and fix any issues before proceeding.
The following steps use the IP address ranges from the network and subnet
creation examples.
#. On the controller node, source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ source admin-openrc.sh
#. List network namespaces. You should see one ``qrouter`` namespace and two
``qdhcp`` namespaces.
.. code-block:: console
$ ip netns
qrouter-89dd2083-a160-4d75-ab3a-14239f01ea0b
qdhcp-7c6f9b37-76b4-463e-98d8-27e5686ed083
qdhcp-0e62efcd-8cee-46c7-b163-d8df05c3c5ad
#. List ports on the router to determine the gateway IP address on the public
provider network:
.. code-block:: console
$ neutron router-port-list router
+--------------------------------------+------+-------------------+------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+------------------------------------------+
| bff6605d-824c-41f9-b744-21d128fc86e1 | | fa:16:3e:2f:34:9b | {"subnet_id": |
| | | | "3482f524-8bff-4871-80d4-5774c2730728", |
| | | | "ip_address": "172.16.1.1"} |
| d6fe98db-ae01-42b0-a860-37b1661f5950 | | fa:16:3e:e8:c1:41 | {"subnet_id": |
| | | | "5cc70da8-4ee7-4565-be53-b9c011fca011", |
| | | | "ip_address": "203.0.113.102"} |
+--------------------------------------+------+-------------------+------------------------------------------+
#. Ping this IP address from the controller node or any host on the public
physical network:
.. code-block:: console
$ ping -c 4 203.0.113.102
PING 203.0.113.102 (203.0.113.102) 56(84) bytes of data.
64 bytes from 203.0.113.102: icmp_req=1 ttl=64 time=0.619 ms
64 bytes from 203.0.113.102: icmp_req=2 ttl=64 time=0.189 ms
64 bytes from 203.0.113.102: icmp_req=3 ttl=64 time=0.165 ms
64 bytes from 203.0.113.102: icmp_req=4 ttl=64 time=0.216 ms
--- 203.0.113.102 ping statistics ---
rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms
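If the ping fails, you can optionally inspect the router namespace directly. The following is only a sketch that reuses the example ``qrouter`` namespace name from the ``ip netns`` output above and the example gateway address; run the commands as root on the node that hosts the namespace:
.. code-block:: console
# ip netns exec qrouter-89dd2083-a160-4d75-ab3a-14239f01ea0b ip addr show
# ip netns exec qrouter-89dd2083-a160-4d75-ab3a-14239f01ea0b ping -c 4 203.0.113.1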
Return to :ref:`Launch an instance - Create virtual networks
<launch-instance-networks>`.

View File

@ -0,0 +1,136 @@
.. _launch-instance-networks-public:
Public provider network
~~~~~~~~~~~~~~~~~~~~~~~
Before launching an instance, you must create the necessary virtual network
infrastructure. For networking option 1, an instance uses a public provider
virtual network that connects to the physical network infrastructure
via layer-2 (bridging/switching). This network includes a DHCP server that
provides IP addresses to instances.
The ``admin`` or other privileged user must create this network because it
connects directly to the physical network infrastructure.
.. note::
The following instructions and diagrams use example IP address ranges. You
must adjust them for your particular environment.
.. figure:: figures/network1-overview.png
:alt: Networking Option 1: Provider networks - Overview
**Networking Option 1: Provider networks - Overview**
.. figure:: figures/network1-connectivity.png
:alt: Networking Option 1: Provider networks - Connectivity
**Networking Option 1: Provider networks - Connectivity**
Create the public network
-------------------------
#. On the controller node, source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ source admin-openrc.sh
#. Create the network:
.. code-block:: console
$ neutron net-create public --shared --provider:physical_network public \
--provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad |
| mtu | 0 |
| name | public |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | public |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | d84313397390425c8ed50b2f6e18d092 |
+---------------------------+--------------------------------------+
The ``--shared`` option allows all projects to use the virtual network.
The ``--provider:physical_network public`` and
``--provider:network_type flat`` options connect the flat virtual network
to the flat (native/untagged) public physical network on the ``eth1``
interface on the host using information from the following files:
``ml2_conf.ini``:
.. code-block:: ini
[ml2_type_flat]
flat_networks = public
``linuxbridge_agent.ini``:
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = public:eth1
#. Create a subnet on the network:
.. code-block:: console
$ neutron subnet-create public PUBLIC_NETWORK_CIDR --name public \
--allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS \
--gateway PUBLIC_NETWORK_GATEWAY
Replace ``PUBLIC_NETWORK_CIDR`` with the subnet on the public physical
network in CIDR notation.
Replace ``START_IP_ADDRESS`` and ``END_IP_ADDRESS`` with the first and
last IP address of the range within the subnet that you want to allocate
for instances. This range must not include any existing active IP
addresses.
Replace ``PUBLIC_NETWORK_GATEWAY`` with the gateway IP address on the
public physical network, typically the ".1" IP address.
**Example**
The public physical network uses 203.0.113.0/24 with a gateway on
203.0.113.1 and instances can use 203.0.113.101 to 203.0.113.200.
.. code-block:: console
$ neutron subnet-create public 203.0.113.0/24 --name public \
--allocation-pool start=203.0.113.101,end=203.0.113.200 \
--gateway 203.0.113.1
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
+-------------------+----------------------------------------------------+
| allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.200"} |
| cidr | 203.0.113.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| host_routes | |
| id | 5cc70da8-4ee7-4565-be53-b9c011fca011 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | public |
| network_id | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad |
| subnetpool_id | |
| tenant_id | d84313397390425c8ed50b2f6e18d092 |
+-------------------+----------------------------------------------------+
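As an optional sanity check (not part of the original steps), you can confirm that the ``public`` network and its subnet now exist before returning to the instance launch steps:
.. code-block:: console
$ neutron net-list
$ neutron subnet-list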
Return to :ref:`Launch an instance - Create virtual networks
<launch-instance-networks>`.

View File

@ -1,414 +0,0 @@
======================================================
Launch an instance with OpenStack Networking (neutron)
======================================================
To generate a key pair
~~~~~~~~~~~~~~~~~~~~~~
Most cloud images support :term:`public key authentication`
rather than conventional user name/password authentication.
Before launching an instance, you must
generate a public/private key pair.
1. Source the ``demo`` tenant credentials:
.. code-block:: console
$ source demo-openrc.sh
2. Generate and add a key pair:
.. code-block:: console
$ nova keypair-add demo-key
3. Verify addition of the key pair:
.. code-block:: console
$ nova keypair-list
+----------+-------------------------------------------------+
| Name | Fingerprint |
+----------+-------------------------------------------------+
| demo-key | 6c:74:ec:3a:08:05:4e:9e:21:22:a6:dd:b2:62:b8:28 |
+----------+-------------------------------------------------+
To launch an instance
~~~~~~~~~~~~~~~~~~~~~
To launch an instance, you must at least specify the flavor, image
name, network, security group, key, and instance name.
1. A flavor specifies a virtual resource allocation profile which
includes processor, memory, and storage.
List available flavors:
.. code-block:: console
$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Your first instance uses the ``m1.tiny`` flavor.
.. note::
You can also reference a flavor by ID.
2. List available images:
.. code-block:: console
$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.4-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
Your first instance uses the ``cirros-0.3.4-x86_64`` image.
3. List available networks:
.. code-block:: console
$ neutron net-list
+------------------------+----------+-----------------------------------------+
| id | name | subnets |
+------------------------+----------+-----------------------------------------+
| 3c612b5a-d1db-498a-... | demo-net | 20bcd3fd-5785-41fe-... 192.168.1.0/24 |
| 9bce64a3-a963-4c05-... | ext-net | b54a8d85-b434-4e85-... 203.0.113.0/24 |
+------------------------+----------+-----------------------------------------+
Your first instance uses the ``demo-net`` tenant network. However,
you must reference this network using the ID instead of the name.
4. List available security groups:
.. code-block:: console
$ nova secgroup-list
+--------------------------------------+---------+-------------+
| Id | Name | Description |
+--------------------------------------+---------+-------------+
| ad8d4ea5-3cad-4f7d-b164-ada67ec59473 | default | default |
+--------------------------------------+---------+-------------+
Your first instance uses the ``default`` security
group. By default, this security group implements a firewall that
blocks remote access to instances. If you would like to permit
remote access to your instance, launch it and then
:ref:`configure remote access <launch-instance-neutron-remoteaccess>`.
5. Launch the instance:
Replace ``DEMO_NET_ID`` with the ID of the ``demo-net`` tenant network.
.. code-block:: console
$ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=DEMO_NET_ID \
--security-group default --key-name demo-key demo-instance1
+--------------------------------------+------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | vFW7Bp8PQGNo |
| config_drive | |
| created | 2014-04-09T19:24:27Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 05682b91-81a1-464c-8f40-8b3da7e... |
| image | cirros-0.3.4-x86_64 (acafc7c0-...) |
| key_name | demo-key |
| metadata | {} |
| name | demo-instance1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 7cf50047f8df4824bc76c2fdf66d11ec |
| updated | 2014-04-09T19:24:27Z |
| user_id | 0e47686e72114d7182f7569d70c519c9 |
+--------------------------------------+------------------------------------+
6. Check the status of your instance:
.. code-block:: console
$ nova list
+--------------+----------------+--------+------------+-------------+-------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------+----------------+--------+------------+-------------+-------------------------+
| 05682b91-... | demo-instance1 | ACTIVE | - | Running | demo-net=192.168.1.3 |
+--------------+----------------+--------+------------+-------------+-------------------------+
The status changes from ``BUILD`` to ``ACTIVE``
when your instance finishes the build process.
To access your instance using a virtual console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Obtain a :term:`Virtual Network Computing (VNC)`
session URL for your instance and access it from a web browser:
.. code-block:: console
$ nova get-vnc-console demo-instance1 novnc
+-------+------------------------------------------------------------------------------------+
| Type | Url |
+-------+------------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=2f6dd985-f906-4bfc-b566-e87ce656375b |
+-------+------------------------------------------------------------------------------------+
.. note::
If your web browser runs on a host that cannot resolve the
``controller`` host name, you can replace ``controller`` with the
IP address of the management interface on your controller node.
The CirrOS image includes conventional user name/password
authentication and provides these credentials at the login prompt.
After logging into CirrOS, we recommend that you verify network
connectivity using ``ping``.
Verify the ``demo-net`` tenant network gateway:
.. code-block:: console
$ ping -c 4 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms
64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms
--- 192.168.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
Verify the ``ext-net`` external network:
.. code-block:: console
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms
64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms
64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms
64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
.. _launch-instance-neutron-remoteaccess:
To access your instance remotely
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Add rules to the ``default`` security group:
a. Permit :term:`ICMP` (ping):
.. code-block:: console
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
b. Permit secure shell (SSH) access:
.. code-block:: console
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
2. Create a :term:`floating IP address` on the ``ext-net`` external network:
.. code-block:: console
$ neutron floatingip-create ext-net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 203.0.113.102 |
| floating_network_id | 9bce64a3-a963-4c05-bfcd-161f708042d1 |
| id | 05e36754-e7f3-46bb-9eaa-3521623b3722 |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 7cf50047f8df4824bc76c2fdf66d11ec |
+---------------------+--------------------------------------+
3. Associate the floating IP address with your instance:
.. code-block:: console
$ nova floating-ip-associate demo-instance1 203.0.113.102
.. note::
This command provides no output.
4. Check the status of your floating IP address:
.. code-block:: console
$ nova list
+--------------+----------------+--------+------------+-------------+-----------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------+----------------+--------+------------+-------------+-----------------------------------------+
| 05682b91-... | demo-instance1 | ACTIVE | - | Running | demo-net=192.168.1.3, 203.0.113.102 |
+--------------+----------------+--------+------------+-------------+-----------------------------------------+
5. Verify network connectivity using :command:`ping` from the
controller node or any host on the external network:
.. code-block:: console
$ ping -c 4 203.0.113.102
PING 203.0.113.102 (203.0.113.112) 56(84) bytes of data.
64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms
--- 203.0.113.102 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
6. Access your instance using SSH from the controller node or any
host on the external network:
.. code-block:: console
$ ssh cirros@203.0.113.102
The authenticity of host '203.0.113.102 (203.0.113.102)' can't be established.
RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.102' (RSA) to the list of known hosts.
$
.. note::
If your host does not contain the public/private key pair created
in an earlier step, SSH prompts for the default password associated
with the ``cirros`` user, ``cubswin:)``.
To attach a Block Storage volume to your instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If your environment includes the Block Storage service, you can
attach a volume to the instance.
1. Source the ``demo`` credentials:
.. code-block:: console
$ source demo-openrc.sh
2. List volumes:
.. code-block:: console
$ nova volume-list
+--------------+-----------+--------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------+-----------+--------------+------+-------------+-------------+
| 158bea89-... | available | | 1 | - | |
+--------------+-----------+--------------+------+-------------+-------------+
3. Attach the ``demo-volume1`` volume to the ``demo-instance1`` instance:
.. code-block:: console
$ nova volume-attach demo-instance1 158bea89-07db-4ac2-8115-66c0d6a4bb48
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
| serverId | 05682b91-81a1-464c-8f40-8b3da7ee92c5 |
| volumeId | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
+----------+--------------------------------------+
.. note::
You must reference volumes using the IDs instead of names.
4. List volumes:
.. code-block:: console
$ nova volume-list
+--------------+-----------+--------------+------+-------------+--------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------+-----------+--------------+------+-------------+--------------+
| 158bea89-... | in-use | | 1 | - | 05682b91-... |
+--------------+-----------+--------------+------+-------------+--------------+
The ID of the ``demo-volume1`` volume should indicate ``in-use``
status by the ID of the ``demo-instance1`` instance.
5. Access your instance using SSH from the controller node or any
host on the external network and use the :command:`fdisk`
command to verify presence of the volume as the
``/dev/vdb`` block storage device:
.. code-block:: console
$ ssh cirros@203.0.113.102
$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/vda1 * 16065 2088449 1036192+ 83 Linux
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
.. note::
You must create a partition table and file system to use the volume.
If your instance does not launch or seem to work as you expect, see the
`OpenStack Operations Guide <http://docs.openstack.org/ops>`__ for more
information or use one of the :doc:`many other options <common/app_support>`
to seek assistance. We want your environment to work!

View File

@ -1,378 +0,0 @@
========================================================
Launch an instance with legacy networking (nova-network)
========================================================
To generate a key pair
~~~~~~~~~~~~~~~~~~~~~~
Most cloud images support :term:`public key authentication`
rather than conventional user name/password authentication.
Before launching an instance, you must generate
a public/private key pair using :command:`ssh-keygen` and
add the public key to your OpenStack environment.
1. Source the ``demo`` tenant credentials:
.. code-block:: console
$ source demo-openrc.sh
2. Generate a key pair:
.. code-block:: console
$ ssh-keygen
3. Add the public key to your OpenStack environment:
.. code-block:: console
$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
.. note::
This command provides no output.
4. Verify addition of the public key:
.. code-block:: console
$ nova keypair-list
+----------+-------------------------------------------------+
| Name | Fingerprint |
+----------+-------------------------------------------------+
| demo-key | 6c:74:ec:3a:08:05:4e:9e:21:22:a6:dd:b2:62:b8:28 |
+----------+-------------------------------------------------+
To launch an instance
~~~~~~~~~~~~~~~~~~~~~
To launch an instance, you must at least specify the flavor, image
name, network, security group, key, and instance name.
1. A flavor specifies a virtual resource allocation profile which
includes processor, memory, and storage.
List available flavors:
.. code-block:: console
$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Your first instance uses the ``m1.tiny`` flavor.
.. note::
You can also reference a flavor by ID.
2. List available images:
.. code-block:: console
$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.4-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
Your first instance uses the ``cirros-0.3.4-x86_64`` image.
3. List available networks:
.. note::
You must source the ``admin`` tenant credentials for this step and
then source the ``demo`` tenant credentials for the remaining steps.
.. code-block:: console
$ source admin-openrc.sh
.. code-block:: console
$ nova net-list
+--------------------------------------+----------+------------------+
| ID | Label | CIDR |
+--------------------------------------+----------+------------------+
| 7f849be3-4494-495a-95a1-0f99ccb884c4 | demo-net | 203.0.113.24/29 |
+--------------------------------------+----------+------------------+
Your first instance uses the ``demo-net`` tenant network. However,
you must reference this network using the ID instead of the name.
4. List available security groups:
.. code-block:: console
$ nova secgroup-list
+--------------------------------------+---------+-------------+
| Id | Name | Description |
+--------------------------------------+---------+-------------+
| ad8d4ea5-3cad-4f7d-b164-ada67ec59473 | default | default |
+--------------------------------------+---------+-------------+
Your first instance uses the ``default`` security
group. By default, this security group implements a firewall that
blocks remote access to instances. If you would like to permit
remote access to your instance, launch it and then
:ref:`configure remote access <launch-instance-nova-remoteaccess>`.
5. Launch the instance:
Replace ``DEMO_NET_ID`` with the ID of the ``demo-net`` tenant network.
.. code-block:: console
$ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=DEMO_NET_ID \
--security-group default --key-name demo-key demo-instance1
+--------------------------------------+------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | ThZqrg7ach78 |
| config_drive | |
| created | 2014-04-10T00:09:16Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 45ea195c-c469-43eb-83db-1a663bb... |
| image | cirros-0.3.4-x86_64 (acafc7c0-...) |
| key_name | demo-key |
| metadata | {} |
| name | demo-instance1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 93849608fe3d462ca9fa0e5dbfd4d040 |
| updated | 2014-04-10T00:09:16Z |
| user_id | 8397567baf4746cca7a1e608677c3b23 |
+--------------------------------------+------------------------------------+
6. Check the status of your instance:
.. code-block:: console
$ nova list
+--------------+----------------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------+----------------+--------+------------+-------------+------------------------+
| 45ea195c-... | demo-instance1 | ACTIVE | - | Running | demo-net=203.0.113.26 |
+--------------+----------------+--------+------------+-------------+------------------------+
The status changes from ``BUILD`` to ``ACTIVE``
when your instance finishes the build process.
To access your instance using a virtual console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Obtain a :term:`Virtual Network Computing (VNC)`
session URL for your instance and access it from a web browser:
.. code-block:: console
$ nova get-vnc-console demo-instance1 novnc
+-------+------------------------------------------------------------------------------------+
| Type | Url |
+-------+------------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=2f6dd985-f906-4bfc-b566-e87ce656375b |
+-------+------------------------------------------------------------------------------------+
.. note::
If your web browser runs on a host that cannot resolve the
``controller`` host name, you can replace ``controller`` with the
IP address of the management interface on your controller node.
The CirrOS image includes conventional user name/password
authentication and provides these credentials at the login prompt.
After logging into CirrOS, we recommend that you verify network
connectivity using ``ping``.
Verify the ``demo-net`` network:
.. code-block:: console
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms
64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms
64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms
64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
.. _launch-instance-nova-remoteaccess:
To access your instance remotely
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Add rules to the ``default`` security group:
a. Permit :term:`ICMP` (ping):
.. code-block:: console
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
b. Permit secure shell (SSH) access:
.. code-block:: console
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
2. Verify network connectivity using :command:`ping` from the
controller node or any host on the external network:
.. code-block:: console
$ ping -c 4 203.0.113.26
PING 203.0.113.102 (203.0.113.26) 56(84) bytes of data.
64 bytes from 203.0.113.26: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.26: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.26: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.26: icmp_req=4 ttl=63 time=0.929 ms
--- 203.0.113.26 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
3. Access your instance using SSH from the controller node or any
host on the external network:
.. code-block:: console
$ ssh cirros@203.0.113.26
The authenticity of host '203.0.113.26 (203.0.113.26)' can't be established.
RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.26' (RSA) to the list of known hosts.
$
.. note::
If your host does not contain the public/private key pair created
in an earlier step, SSH prompts for the default password associated
with the ``cirros`` user.
To attach a Block Storage volume to your instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If your environment includes the Block Storage service, you can
attach a volume to the instance.
1. Source the ``demo`` credentials:
.. code-block:: console
$ source demo-openrc.sh
2. List volumes:
.. code-block:: console
$ nova volume-list
+--------------+-----------+--------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------+-----------+--------------+------+-------------+-------------+
| 158bea89-... | available | | 1 | - | |
+--------------+-----------+--------------+------+-------------+-------------+
3. Attach the ``demo-volume1`` volume to the ``demo-instance1`` instance:
.. code-block:: console
$ nova volume-attach demo-instance1 158bea89-07db-4ac2-8115-66c0d6a4bb48
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
| serverId | 45ea195c-c469-43eb-83db-1a663bbad2fc |
| volumeId | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
+----------+--------------------------------------+
.. note::
You must reference volumes using the IDs instead of names.
4. List volumes:
.. code-block:: console
$ nova volume-list
+--------------+-----------+--------------+------+-------------+--------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------+-----------+--------------+------+-------------+--------------+
| 158bea89-... | in-use | | 1 | - | 45ea195c-... |
+--------------+-----------+--------------+------+-------------+--------------+
The ID of the ``demo-volume1`` volume should indicate ``in-use``
status by the ID of the ``demo-instance1`` instance.
5. Access your instance using SSH from the controller node or any
host on the external network and use the :command:`fdisk`
command to verify presence of the volume as the
``/dev/vdb`` block storage device:
.. code-block:: console
$ ssh cirros@203.0.113.102
$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/vda1 * 16065 2088449 1036192+ 83 Linux
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
.. note::
You must create a partition table and file system to use the volume.
If your instance does not launch or seem to work as you expect, see the
`OpenStack Operations Guide <http://docs.openstack.org/ops>`__ for more
information or use one of the :doc:`many other options <common/app_support>`
to seek assistance. We want your environment to work!

View File

@ -0,0 +1,270 @@
.. _launch-instance-private:
Launch an instance on the private network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Determine instance options
--------------------------
To launch an instance, you must at least specify the flavor, image
name, network, security group, key, and instance name.
#. A flavor specifies a virtual resource allocation profile which
includes processor, memory, and storage.
List available flavors:
.. code-block:: console
$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
This instance uses the ``m1.tiny`` flavor.
.. note::
You can also reference a flavor by ID.
#. List available images:
.. code-block:: console
$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.4-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
This instance uses the ``cirros-0.3.4-x86_64`` image.
#. List available networks:
.. code-block:: console
$ neutron net-list
+--------------------------------------+---------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+----------------------------------------------------+
| 0e62efcd-8cee-46c7-b163-d8df05c3c5ad | public | 5cc70da8-4ee7-4565-be53-b9c011fca011 203.0.113.0/24 |
| 7c6f9b37-76b4-463e-98d8-27e5686ed083 | private | 3482f524-8bff-4871-80d4-5774c2730728 172.16.1.0/24 |
+--------------------------------------+---------+----------------------------------------------------+
This instance uses the ``private`` project network. However, you must
reference this network using the ID instead of the name.
#. List available security groups:
.. code-block:: console
$ nova secgroup-list
+--------------------------------------+---------+-------------+
| Id | Name | Description |
+--------------------------------------+---------+-------------+
| ad8d4ea5-3cad-4f7d-b164-ada67ec59473 | default | default |
+--------------------------------------+---------+-------------+
This instance uses the ``default`` security group.
#. Launch the instance:
Replace ``PRIVATE_NET_ID`` with the ID of the ``private`` project network.
.. code-block:: console
$ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=PRIVATE_NET_ID \
--security-group default --key-name mykey private-instance
+--------------------------------------+------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | oMeLMk9zVGpk |
| config_drive | |
| created | 2015-09-17T22:36:05Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 113c5892-e58e-4093-88c7-e33f502eaaa4 |
| image | cirros-0.3.4-x86_64 (939ad102-c74e-405d-a957-2798071d0a7c) |
| key_name | mykey |
| metadata | {} |
| name | private-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
| updated | 2015-09-17T22:36:05Z |
| user_id | 684286a9079845359882afc3aa5011fb |
+--------------------------------------+------------------------------------------------------------+
#. Check the status of your instance:
.. code-block:: console
$ nova list
+--------------------------------------+------------------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------------+--------+------------+-------------+----------------------+
| 113c5892-e58e-4093-88c7-e33f502eaaa4 | private-instance | ACTIVE | - | Running | private=172.16.1.3 |
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | public-instance | ACTIVE | - | Running | public=203.0.113.103 |
+--------------------------------------+------------------+--------+------------+-------------+----------------------+
The status changes from ``BUILD`` to ``ACTIVE`` when the build process
successfully completes.
Access the instance using a virtual console
-------------------------------------------
#. Obtain a :term:`Virtual Network Computing (VNC)`
session URL for your instance and access it from a web browser:
.. code-block:: console
$ nova get-vnc-console private-instance novnc
+-------+------------------------------------------------------------------------------------+
| Type | Url |
+-------+------------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=2f6dd985-f906-4bfc-b566-e87ce656375b |
+-------+------------------------------------------------------------------------------------+
.. note::
If your web browser runs on a host that cannot resolve the
``controller`` host name, you can replace ``controller`` with the
IP address of the management interface on your controller node.
The CirrOS image includes conventional user name/password
authentication and provides these credentials at the login prompt.
After logging into CirrOS, we recommend that you verify network
connectivity using ``ping``.
#. Verify access to the ``private`` project network gateway:
.. code-block:: console
$ ping -c 4 172.16.1.1
PING 172.16.1.1 (172.16.1.1) 56(84) bytes of data.
64 bytes from 172.16.1.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 172.16.1.1: icmp_req=2 ttl=64 time=0.473 ms
64 bytes from 172.16.1.1: icmp_req=3 ttl=64 time=0.504 ms
64 bytes from 172.16.1.1: icmp_req=4 ttl=64 time=0.470 ms
--- 172.16.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
#. Verify access to the internet:
.. code-block:: console
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms
64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms
64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms
64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
Access the instance remotely
----------------------------
#. Create a :term:`floating IP address` on the ``public`` provider network:
.. code-block:: console
$ neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 203.0.113.104 |
| floating_network_id | 9bce64a3-a963-4c05-bfcd-161f708042d1 |
| id | 05e36754-e7f3-46bb-9eaa-3521623b3722 |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 7cf50047f8df4824bc76c2fdf66d11ec |
+---------------------+--------------------------------------+
#. Associate the floating IP address with the instance:
.. code-block:: console
$ nova floating-ip-associate private-instance 203.0.113.104
.. note::
This command provides no output.
#. Check the status of your floating IP address:
.. code-block:: console
$ nova list
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
| 113c5892-e58e-4093-88c7-e33f502eaaa4 | private-instance | ACTIVE | - | Running | private=172.16.1.3, 203.0.113.104 |
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | public-instance | ACTIVE | - | Running | public=203.0.113.103 |
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
#. Verify connectivity to the instance via floating IP address from
the controller node or any host on the public physical network:
.. code-block:: console
$ ping -c 4 203.0.113.104
PING 203.0.113.104 (203.0.113.104) 56(84) bytes of data.
64 bytes from 203.0.113.104: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.104: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.104: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.104: icmp_req=4 ttl=63 time=0.929 ms
--- 203.0.113.104 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
#. Access your instance using SSH from the controller node or any
host on the public physical network:
.. code-block:: console
$ ssh cirros@203.0.113.104
The authenticity of host '203.0.113.104 (203.0.113.104)' can't be established.
RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.104' (RSA) to the list of known hosts.
$
.. note::
If your host does not contain the public/private key pair created
in an earlier step, SSH prompts for the default password associated
with the ``cirros`` user, ``cubswin:)``.
If your instance does not launch or seem to work as you expect, see the
`OpenStack Operations Guide <http://docs.openstack.org/ops>`__ for more
information or use one of the :doc:`many other options <common/app_support>`
to seek assistance. We want your first installation to work!
Return to :ref:`Launch an instance <launch-instance-complete>`.

View File

@ -0,0 +1,240 @@
.. _launch-instance-public:
Launch an instance on the public network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Determine instance options
--------------------------
To launch an instance, you must at least specify the flavor, image
name, network, security group, key, and instance name.
#. A flavor specifies a virtual resource allocation profile which
includes processor, memory, and storage.
List available flavors:
.. code-block:: console
$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
This instance uses the ``m1.tiny`` flavor.
.. note::
You can also reference a flavor by ID.
#. List available images:
.. code-block:: console
$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.4-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
This instance uses the ``cirros-0.3.4-x86_64`` image.
#. List available networks:
.. code-block:: console
$ neutron net-list
+--------------------------------------+---------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+-----------------------------------------------------+
| 0e62efcd-8cee-46c7-b163-d8df05c3c5ad | public | 5cc70da8-4ee7-4565-be53-b9c011fca011 203.0.113.0/24 |
+--------------------------------------+---------+-----------------------------------------------------+
This instance uses the ``public`` provider network. However, you must
reference this network using the ID instead of the name (see the optional
sketch after this list).
.. note::
If you chose option 2, the output should also contain the private network.
#. List available security groups:
.. code-block:: console
$ nova secgroup-list
+--------------------------------------+---------+-------------+
| Id | Name | Description |
+--------------------------------------+---------+-------------+
| ad8d4ea5-3cad-4f7d-b164-ada67ec59473 | default | default |
+--------------------------------------+---------+-------------+
This instance uses the ``default`` security group.
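Because the network must be referenced by ID, you may find it convenient to capture the ID in a shell variable before launching the instance. The following is only a sketch; ``PUBLIC_NET_ID`` is an arbitrary variable name, and the ``awk`` filter simply extracts the ``id`` column from the ``public`` row of the :command:`neutron net-list` output:
.. code-block:: console
$ PUBLIC_NET_ID=$(neutron net-list | awk '/ public / { print $2 }')
You can then use ``--nic net-id=$PUBLIC_NET_ID`` in the :command:`nova boot` command below instead of pasting the ID by hand.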
Launch the instance
-------------------
#. Launch the instance:
Replace ``PUBLIC_NET_ID`` with the ID of the ``public`` provider network.
.. note::
If you chose option 1 and your environment contains only one network,
you can omit the ``--nic`` option because OpenStack automatically
chooses the only network available.
.. code-block:: console
$ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=PUBLIC_NET_ID \
--security-group default --key-name mykey public-instance
+--------------------------------------+------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | hdF4LMQqC5PB |
| config_drive | |
| created | 2015-09-17T21:58:18Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf |
| image | cirros-0.3.4-x86_64 (939ad102-c74e-405d-a957-2798071d0a7c) |
| key_name | mykey |
| metadata | {} |
| name | public-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
| updated | 2015-09-17T21:58:18Z |
| user_id | 684286a9079845359882afc3aa5011fb |
+--------------------------------------+------------------------------------------------------------+
#. Check the status of your instance:
.. code-block:: console
$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | public-instance | ACTIVE | - | Running | public=203.0.113.103 |
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+
The status changes from ``BUILD`` to ``ACTIVE`` when the build process
successfully completes.
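You can also inspect a single instance by name to see its status and, if
the launch fails, any fault information:
.. code-block:: console
   $ nova show public-instance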
Access the instance using the virtual console
---------------------------------------------
#. Obtain a :term:`Virtual Network Computing (VNC)`
session URL for your instance and access it from a web browser:
.. code-block:: console
$ nova get-vnc-console public-instance novnc
+-------+------------------------------------------------------------------------------------+
| Type | Url |
+-------+------------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=2f6dd985-f906-4bfc-b566-e87ce656375b |
+-------+------------------------------------------------------------------------------------+
.. note::
If your web browser runs on a host that cannot resolve the
``controller`` host name, you can replace ``controller`` with the
IP address of the management interface on your controller node.
The CirrOS image includes conventional user name/password
authentication and provides these credentials at the login prompt.
After logging into CirrOS, we recommend that you verify network
connectivity using ``ping``.
#. Verify access to the public provider network gateway:
.. code-block:: console
$ ping -c 4 203.0.113.1
PING 203.0.113.1 (203.0.113.1) 56(84) bytes of data.
64 bytes from 203.0.113.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 203.0.113.1: icmp_req=2 ttl=64 time=0.473 ms
64 bytes from 203.0.113.1: icmp_req=3 ttl=64 time=0.504 ms
64 bytes from 203.0.113.1: icmp_req=4 ttl=64 time=0.470 ms
--- 203.0.113.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
#. Verify access to the internet:
.. code-block:: console
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms
64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms
64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms
64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
Access the instance remotely
----------------------------
#. Verify connectivity to the instance from the controller node or any host
on the public physical network:
.. code-block:: console
$ ping -c 4 203.0.113.103
PING 203.0.113.103 (203.0.113.103) 56(84) bytes of data.
64 bytes from 203.0.113.103: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.103: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.103: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.103: icmp_req=4 ttl=63 time=0.929 ms
--- 203.0.113.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
#. Access your instance using SSH from the controller node or any
host on the public physical network:
.. code-block:: console
$ ssh cirros@203.0.113.103
The authenticity of host '203.0.113.103 (203.0.113.103)' can't be established.
RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.103' (RSA) to the list of known hosts.
$
.. note::
If your host does not contain the public/private key pair created
in an earlier step, SSH prompts for the default password associated
with the ``cirros`` user, ``cubswin:)``.
If your instance does not launch or does not work as you expect, see the
`OpenStack Operations Guide <http://docs.openstack.org/ops>`__ for more
information or use one of the :doc:`many other options <common/app_support>`
to seek assistance. We want your first installation to work!
Return to :ref:`Launch an instance <launch-instance-complete>`.
@ -1,30 +1,130 @@
.. _launch-instance:
==================
Launch an instance
==================
.. toctree::
:maxdepth: 1
launch-instance-neutron.rst
launch-instance-nova.rst
An instance is a VM that OpenStack provisions on a compute node.
This guide shows you how to launch a minimal instance using the
:term:`CirrOS` image that you added to your environment
in the :doc:`glance` chapter. In these steps, you use the
command-line interface (CLI) on your controller node or any system with
the appropriate OpenStack client libraries. To use the dashboard, see the
This section creates the necessary virtual networks to support launching
one or more instances. Networking option 1 includes one public virtual
network and one instance that uses it. Networking option 2 includes one
public virtual network, one private virtual network, and one instance
that uses each network. The instructions in this section use command-line
interface (CLI) tools on the controller node. For more information on the
CLI tools, see the `OpenStack User Guide
<http://docs.openstack.org/user-guide/cli_launch_instances.html>`__.
To use the dashboard, see the
`OpenStack User Guide
<http://docs.openstack.org/user-guide/dashboard.html>`__.
Launch an instance using
:doc:`OpenStack Networking (neutron) <launch-instance-neutron>` or
:doc:`legacy networking (nova-network) <launch-instance-nova>`.
For more information, see the `OpenStack User Guide
<http://docs.openstack.org/user-guide/cli_launch_instances.html>`__.
.. _launch-instance-networks:
Create virtual networks
-----------------------
.. note::
These steps reference example components created in previous
chapters. You must adjust certain values such as IP addresses to
match your environment.
Create virtual networks for the networking option that you chose
in :ref:`networking`. If you chose option 1, create only the public
virtual network. If you chose option 2, create the public and private
virtual networks.
.. toctree::
:maxdepth: 1
launch-instance-networks-public.rst
launch-instance-networks-private.rst
After creating the appropriate networks for your environment, you can
continue preparing the environment to launch an instance.
Generate a key pair
-------------------
Most cloud images support :term:`public key authentication` rather than
conventional password authentication. Before launching an instance, you
must add a public key to the Compute service.
#. Source the ``demo`` tenant credentials:
.. code-block:: console
$ source demo-openrc.sh
#. Generate and add a key pair:
.. code-block:: console
$ ssh-keygen -q -N ""
$ nova keypair-add --pub-key .ssh/id_rsa.pub mykey
.. note::
Alternatively, you can skip the ``ssh-keygen`` command and use an
existing public key.
#. Verify addition of the key pair:
.. code-block:: console
$ nova keypair-list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 6c:74:ec:3a:08:05:4e:9e:21:22:a6:dd:b2:62:b8:28 |
+-------+-------------------------------------------------+
Add security group rules
------------------------
By default, the ``default`` security group applies to all instances and
includes firewall rules that deny remote access to instances. For Linux
images such as CirrOS, we recommend allowing at least ICMP (ping) and
secure shell (SSH).
#. Add rules to the ``default`` security group:
#. Permit :term:`ICMP` (ping):
.. code-block:: console
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
#. Permit secure shell (SSH) access:
.. code-block:: console
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
Launch an instance
------------------
If you chose networking option 1, you can only launch an instance on the
public network. If you chose networking option 2, you can launch an instance
on the public network and the private network.
.. toctree::
:maxdepth: 1
launch-instance-public.rst
launch-instance-private.rst
.. _launch-instance-complete:
Block Storage
-------------
If your environment includes the Block Storage service, you can create a
volume and attach it to an instance.
.. toctree::
:maxdepth: 1
launch-instance-cinder.rst
@ -1,42 +0,0 @@
==============================
OpenStack Networking (neutron)
==============================
.. toctree::
neutron-concepts.rst
neutron-controller-node.rst
neutron-network-node.rst
neutron-compute-node.rst
neutron-initial-networks.rst
OpenStack Networking allows you to create and attach interface devices
managed by other OpenStack services to networks. Plug-ins can be
implemented to accommodate different networking equipment and
software, providing flexibility to OpenStack architecture and
deployment.
It includes the following components:
neutron-server
Accepts and routes API requests to the appropriate OpenStack
Networking plug-in for action.
OpenStack Networking plug-ins and agents
Plugs and unplugs ports, creates networks or subnets, and provides
IP addressing. These plug-ins and agents differ depending on the
vendor and technologies used in the particular cloud. OpenStack
Networking ships with plug-ins and agents for Cisco virtual and
physical switches, NEC OpenFlow products, Open vSwitch, Linux
bridging, and the VMware NSX product.
The common agents are L3 (layer 3), DHCP (dynamic host IP
addressing), and a plug-in agent.
Messaging queue
Used by most OpenStack Networking installations to route
information between the neutron-server and various agents, as well
as a database to store networking state for particular plug-ins.
OpenStack Networking mainly interacts with OpenStack Compute to
provide networks and connectivity for its instances.
@ -1,8 +0,0 @@
==========
Next steps
==========
Your OpenStack environment now includes the core components necessary
to launch a basic instance. You can :doc:`launch an
instance <launch-instance-neutron>` or add more OpenStack services
to your environment.
@ -1,171 +0,0 @@
================================
Legacy networking (nova-network)
================================
Configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~
Legacy networking primarily involves compute nodes. However, you must
configure the controller node to use legacy networking.
**To configure legacy networking**
#. Open the :file:`/etc/nova/nova.conf` file and edit the ``[DEFAULT]``
section. Configure the network and security group APIs:
.. code-block:: ini
[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
#. Restart the Compute services:
.. only:: rdo or obs
.. code-block:: console
# systemctl restart openstack-nova-api.service \
openstack-nova-scheduler.service openstack-nova-conductor.service
.. only:: ubuntu or debian
.. code-block:: console
# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart
Configure compute node
~~~~~~~~~~~~~~~~~~~~~~
This section covers deployment of a simple :term:`flat network` that provides
IP addresses to your instances via :term:`DHCP`. If your environment includes
multiple compute nodes, the :term:`multi-host` feature provides redundancy by
spreading network functions across compute nodes.
**To install legacy networking components**
.. only:: ubuntu
.. code-block:: console
# apt-get install nova-network nova-api-metadata
.. only:: debian
.. code-block:: console
# apt-get install nova-network nova-api
.. only:: rdo
.. code-block:: console
# yum install openstack-nova-network openstack-nova-api
.. only:: obs
.. code-block:: console
# zypper install openstack-nova-network openstack-nova-api
**To configure legacy networking**
#. Open the :file:`/etc/nova/nova.conf` file and edit the ``[DEFAULT]``
section. Configure the network parameters:
.. code-block:: ini
:linenos:
[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = INTERFACE_NAME
public_interface = INTERFACE_NAME
Replace ``INTERFACE_NAME`` with the actual interface name for the external
network. For example, *eth1* or *ens224*. You can also leave these two
parameters undefined if you are serving multiple networks with
individual bridges for each.
.. only:: ubuntu or debian
2. Restart the services:
.. code-block:: console
# service nova-network restart
# service nova-api-metadata restart
.. only:: rdo or obs
2. Start the services and configure them to start when the system boots:
.. code-block:: console
# systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service
# systemctl start openstack-nova-network.service openstack-nova-metadata-api.service
Create initial network
~~~~~~~~~~~~~~~~~~~~~~
Before launching your first instance, you must create the necessary
virtual network infrastructure to which the instance will connect. This
network typically provides Internet access *from* instances. You can
enable Internet access *to* individual instances using a :term:`floating IP
address` and suitable :term:`security group` rules. The ``admin`` tenant owns
this network because it provides external network access for multiple
tenants.
This network shares the same :term:`subnet` associated with the physical
network connected to the external :term:`interface` on the compute node. You
should specify an exclusive slice of this subnet to prevent interference with
other devices on the external network.
**To create the network**
#. On the controller node, source the ``admin`` tenant credentials:
.. code-block:: console
$ source admin-openrc.sh
#. Create the network:
Replace ``NETWORK_CIDR`` with the subnet associated with the physical
network.
.. code-block:: console
$ nova network-create demo-net --bridge br100 --multi-host T \
--fixed-range-v4 NETWORK_CIDR
For example, using an exclusive slice of ``203.0.113.0/24`` with IP
address range ``203.0.113.24`` to ``203.0.113.31``:
.. code-block:: console
$ nova network-create demo-net --bridge br100 --multi-host T \
--fixed-range-v4 203.0.113.24/29
.. note:: This command provides no output.
#. Verify creation of the network:
.. code-block:: console
$ nova net-list
+--------------------------------------+----------+------------------+
| ID | Label | CIDR |
+--------------------------------------+----------+------------------+
| 84b34a65-a762-44d6-8b5e-3b461a53f513 | demo-net | 203.0.113.24/29 |
+--------------------------------------+----------+------------------+
@ -1,21 +0,0 @@
==========================
Add a networking component
==========================
This chapter explains how to install and configure either OpenStack
Networking (neutron), or the legacy ``nova-network`` component. The
``nova-network`` service enables you to deploy one network type per
instance and is suitable for basic network functionality. OpenStack
Networking enables you to deploy multiple network types per instance
and includes :term:`plug-in` s for a variety of products that support
:term:`virtual networking`.
For more information, see the
`Networking <http://docs.openstack.org/admin-guide-cloud/networking.html>`__
chapter of the OpenStack Cloud Administrator Guide.
.. toctree::
networking-neutron.rst
networking-nova.rst
networking-next-steps.rst
@ -0,0 +1,46 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on a *compute* node.
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances, including VXLAN tunnels for private
networks, and handles security groups.
Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file.
#. In the ``[linux_bridge]`` section, map the public virtual network to the
public physical network interface:
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME
Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
public network interface.
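For example, assuming the public physical network uses the ``eth1``
interface (an illustrative name; use the interface name from your
environment):
.. code-block:: ini
   [linux_bridge]
   physical_interface_mappings = public:eth1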
#. In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. code-block:: ini
[vxlan]
enable_vxlan = False
#. In the ``[securitygroup]`` section, enable security groups, enable
:term:`ipset`, and configure the Linux bridge :term:`iptables` firewall
driver:
.. code-block:: ini
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Return to
:ref:`Networking compute node configuration <neutron-compute-compute>`.
@ -0,0 +1,54 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on a *compute* node.
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances, including VXLAN tunnels for private
networks, and handles security groups.
Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file.
#. In the ``[linux_bridge]`` section, map the public virtual network to the
public physical network interface:
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME
Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
public network interface.
#. In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. code-block:: ini
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface.
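For example, assuming the compute node uses ``10.0.0.31`` as its
management interface address (an illustrative value; substitute the
address from your environment):
.. code-block:: ini
   [vxlan]
   enable_vxlan = True
   local_ip = 10.0.0.31
   l2_population = True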
#. In the ``[securitygroup]`` section, enable security groups, enable
:term:`ipset`, and configure the Linux bridge :term:`iptables` firewall
driver:
.. code-block:: ini
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Return to
:ref:`Networking compute node configuration <neutron-compute-compute>`.
@ -0,0 +1,301 @@
Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The compute node handles connectivity and :term:`security groups <security
group>` for instances.
Prerequisites
-------------
Before you install and configure OpenStack Networking, you must configure
certain kernel networking parameters to disable reverse-path filtering:
#. Edit the :file:`/etc/sysctl.conf` file to contain the following parameters:
.. code-block:: ini
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
#. Implement the changes:
.. code-block:: console
# sysctl -p
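Optionally, you can confirm that the values took effect (a quick check,
not required by this guide):
.. code-block:: console
   # sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter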
.. only:: ubuntu or rdo or obs
Install the Networking components
---------------------------------
.. only:: ubuntu
.. code-block:: console
# apt-get install neutron-plugin-linuxbridge-agent
.. only:: rdo
.. code-block:: console
# yum install openstack-neutron-linuxbridge
.. only:: obs
.. code-block:: console
# zypper install --no-recommends openstack-neutron-linuxbridge-agent ipset
.. only:: debian
Install and configure the Networking components
-----------------------------------------------
#. .. code-block:: console
# apt-get install neutron-plugin-linuxbridge-agent
#. Respond to prompts for ``database management``, ``Identity service
credentials``, ``service endpoint``, and ``message queue credentials``.
#. Select the ML2 plug-in:
.. image:: figures/debconf-screenshots/neutron_1_plugin_selection.png
:alt: Neutron plug-in selection dialog
.. note::
Selecting the ML2 plug-in also populates the ``service_plugins`` and
``allow_overlapping_ips`` options in the
:file:`/etc/neutron/neutron.conf` file with the appropriate values.
.. only:: ubuntu or rdo or obs
To configure the Networking common components
---------------------------------------------
The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.
.. note::
Default configuration files vary by distribution. You might need to
add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.
Edit the ``/etc/neutron/neutron.conf`` file.
#. In the ``[database]`` section, comment out any ``connection`` options
because compute nodes do not directly access the database.
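For example, the section might contain only a commented-out option (a
sketch; the exact default line varies by distribution):
.. code-block:: ini
   [database]
   #connection =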
#. In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure
RabbitMQ message queue access:
.. code-block:: ini
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. code-block:: ini
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
Configure networking options
----------------------------
Choose the same networking option that you chose for the controller node to
configure services specific to it.
.. note::
Option 2 augments option 1 with the layer-3 (routing) service and
enables self-service (private) networks. If you want to use public
(provider) and private (self-service) networks, choose option 2.
.. toctree::
:maxdepth: 1
neutron-compute-install-option1.rst
neutron-compute-install-option2.rst
.. _neutron-compute-compute:
Configure Compute to use Networking
-----------------------------------
Edit the ``/etc/nova/nova.conf`` file.
#. In the ``[DEFAULT]`` section, configure Compute to use the Networking
service:
.. code-block:: ini
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
The ``firewall_driver`` option uses the ``NoopFirewallDriver`` value
because Compute delegates security group (firewall) operation to the
Networking service.
#. In the ``[neutron]`` section, configure access parameters:
.. code-block:: ini
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Finalize installation
---------------------
.. only:: rdo
#. The Networking service initialization scripts expect a symbolic link
:file:`/etc/neutron/plugin.ini` pointing to the ML2 plug-in configuration
file, :file:`/etc/neutron/plugins/ml2/ml2_conf.ini`. If this symbolic
link does not exist, create it using the following command:
.. code-block:: console
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
#. Due to a packaging issue, the Linux bridge agent initialization script
explicitly looks for the Linux bridge plug-in configuration file rather
than the agent configuration file. Run the following commands to resolve
this issue:
.. code-block:: console
# cp /usr/lib/systemd/system/neutron-linuxbridge-agent.service \
/usr/lib/systemd/system/neutron-linuxbridge-agent.service.orig
# sed -i 's,openvswitch/linuxbridge_neutron_plugin.ini,ml2/linuxbridge_agent.ini,g' \
/usr/lib/systemd/system/neutron-linuxbridge-agent.service
.. note::
Future upgrades of the ``neutron-linuxbridge-agent`` package may
overwrite this modification.
#. Restart the Compute service:
.. code-block:: console
# systemctl restart openstack-nova-compute.service
#. Start the Linux bridge agent and configure it to start when the
system boots:
.. code-block:: console
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
.. only:: obs
#. The Networking service initialization scripts expect the variable
``NEUTRON_PLUGIN_CONF`` in the :file:`/etc/sysconfig/neutron` file to
reference the ML2 plug-in configuration file. Edit the
:file:`/etc/sysconfig/neutron` file and add the following:
.. code-block:: ini
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"
#. Restart the Compute service:
.. code-block:: console
# systemctl restart openstack-nova-compute.service
#. Start the Linux Bridge agent and configure it to start when the
system boots:
.. code-block:: console
# systemctl enable openstack-neutron-linuxbridge-agent.service
# systemctl start openstack-neutron-linuxbridge-agent.service
.. only:: ubuntu or debian
#. Restart the Compute service:
.. code-block:: console
# service nova-compute restart
#. Due to a packaging issue, the Linux bridge agent initialization script
explicitly looks for the ML2 plug-in configuration file rather than the
agent configuration file. Run the following commands to resolve this
issue:
.. code:: console
# cp /etc/init/neutron-plugin-linuxbridge-agent.conf \
/etc/init/neutron-plugin-linuxbridge-agent.conf.orig
# sed -i 's,ml2_conf.ini,linuxbridge_agent.ini,g' \
/etc/init/neutron-plugin-linuxbridge-agent.conf
#. Restart the Linux bridge agent:
.. code-block:: console
# service neutron-plugin-linuxbridge-agent restart
@ -1,394 +0,0 @@
==================================
Install and configure compute node
==================================
The compute node handles connectivity and :term:`security groups <security
group>` for instances.
**To configure prerequisites**
Before you install and configure OpenStack Networking, you must
configure certain kernel networking parameters.
#. Edit the :file:`/etc/sysctl.conf` file to contain the following parameters:
.. code-block:: ini
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
#. Implement the changes:
.. code-block:: ini
# sysctl -p
.. only:: ubuntu or rdo or obs
**To install the Networking components**
.. only:: ubuntu
.. code-block:: console
# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent
.. only:: rdo
.. code-block:: console
# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
.. only:: obs
.. code-block:: console
# zypper install --no-recommends openstack-neutron-openvswitch-agent ipset
.. note:: SUSE does not use a separate ML2 plug-in package.
.. only:: debian
**To install and configure the Networking components**
#. .. code-block:: console
# apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms
.. note::
Debian does not use a separate ML2 plug-in package.
#. Respond to prompts for ``database management``, ``Identity service
credentials``, ``service endpoint``, and ``message queue credentials``.
#. Select the ML2 plug-in:
.. image:: figures/debconf-screenshots/neutron_1_plugin_selection.png
:alt: Neutron plug-in selection dialog
.. note::
Selecting the ML2 plug-in also populates the ``service_plugins`` and
``allow_overlapping_ips`` options in the
:file:`/etc/neutron/neutron.conf` file with the appropriate values.
.. only:: ubuntu or rdo or obs
**To configure the Networking common components**
The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.
.. note::
Default configuration files vary by distribution. You might need to
add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.
#. Open the :file:`/etc/neutron/neutron.conf` file and edit the
``[database]`` section. Comment out any ``connection`` options because
compute nodes do not directly access the database.
#. In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure
RabbitMQ message queue access:
.. code-block:: ini
:linenos:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. code-block:: ini
:linenos:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) plug-in,
router service, and overlapping IP addresses:
.. code-block:: ini
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
**To configure the Modular Layer 2 (ML2) plug-in**
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build
the virtual networking framework for instances.
#. Open the :file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file and edit the
``[ml2]`` section. Enable the :term:`flat <flat network>`, :term:`VLAN
<VLAN network>`, :term:`generic routing encapsulation (GRE)`, and
:term:`virtual extensible LAN (VXLAN)` network type
drivers, GRE tenant networks, and the OVS mechanism driver:
.. code-block:: ini
[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
#. In the ``[ml2_type_gre]`` section, configure the tunnel identifier (id)
range:
.. code-block:: ini
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
#. In the ``[securitygroup]`` section, enable security groups, enable
:term:`ipset`, and configure the OVS :term:`iptables` firewall driver:
.. code-block:: ini
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
#. In the ``[ovs]`` section, enable tunnels and configure the local tunnel
endpoint:
.. code-block:: ini
[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
Replace ``INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS`` with the IP address of
the instance tunnels network interface on your compute node.
#. In the ``[agent]`` section, enable GRE tunnels:
.. code-block:: ini
[agent]
...
tunnel_types = gre
**To configure the Open vSwitch (OVS) service**
The OVS service provides the underlying virtual networking framework for
instances.
.. only:: rdo or obs
Start the OVS service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
.. only:: ubuntu or debian
Restart the OVS service:
.. code-block:: console
# service openvswitch-switch restart
**To configure Compute to use Networking**
By default, distribution packages configure Compute to use legacy
networking. You must reconfigure Compute to manage networks through
Networking.
#. Open the :file:`/etc/nova/nova.conf` file and edit the ``[DEFAULT]``
section. Configure the :term:`APIs <API>` and drivers:
.. code-block:: ini
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall service. Since
Networking includes a firewall service, you must disable the Compute
firewall service by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
#. In the ``[neutron]`` section, configure access parameters:
.. code-block:: ini
:linenos:
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
**To finalize the installation**
.. only:: rdo
#. The Networking service initialization scripts expect a symbolic link
:file:`/etc/neutron/plugin.ini` pointing to the ML2 plug-in configuration
file, :file:`/etc/neutron/plugins/ml2/ml2_conf.ini`. If this symbolic
link does not exist, create it using the following command:
.. code-block:: console
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
#. Due to a packaging bug, the Open vSwitch agent initialization script
explicitly looks for the Open vSwitch plug-in configuration file rather
than a symbolic link :file:`/etc/neutron/plugin.ini` pointing to the ML2
plug-in configuration file. Run the following commands to resolve this
issue:
.. code-block:: console
# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service
#. Restart the Compute service:
.. code-block:: console
# systemctl restart openstack-nova-compute.service
#. Start the Open vSwitch (OVS) agent and configure it to start when the
system boots:
.. code-block:: console
# systemctl enable neutron-openvswitch-agent.service
# systemctl start neutron-openvswitch-agent.service
.. only:: obs
#. The Networking service initialization scripts expect the variable
``NEUTRON_PLUGIN_CONF`` in the :file:`/etc/sysconfig/neutron` file to
reference the ML2 plug-in configuration file. Edit the
:file:`/etc/sysconfig/neutron` file and add the following:
.. code-block:: ini
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"
#. Restart the Compute service:
.. code-block:: console
# systemctl restart openstack-nova-compute.service
#. Start the Open vSwitch (OVS) agent and configure it to start when the
system boots:
.. code-block:: console
# systemctl enable openstack-neutron-openvswitch-agent.service
# systemctl start openstack-neutron-openvswitch-agent.service
.. only:: ubuntu or debian
#. Restart the Compute service:
.. code-block:: console
# service nova-compute restart
#. Restart the Open vSwitch (OVS) agent:
.. code-block:: console
# service neutron-plugin-openvswitch-agent restart
**Verify operation**
Perform the following commands on the controller node:
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code-block:: console
$ source admin-openrc.sh
#. List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ neutron agent-list
+------+--------------------+----------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+------+--------------------+----------+-------+----------------+---------------------------+
|302...| Metadata agent | network | :-) | True | neutron-metadata-agent |
|4bd...| Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
|756...| L3 agent | network | :-) | True | neutron-l3-agent |
|9c4...| DHCP agent | network | :-) | True | neutron-dhcp-agent |
|a5a...| Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent |
+------+--------------------+----------+-------+----------------+---------------------------+
This output should indicate four agents alive on the network node
and one agent alive on the compute node.
@ -1,6 +1,5 @@
=============================
Networking (neutron) concepts
=============================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack Networking (neutron) manages all networking facets for the
Virtual Networking Infrastructure (VNI) and the access layer aspects
@ -0,0 +1,331 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Prerequisites
-------------
Before you configure networking option 1, you must configure kernel
parameters to disable reverse-path filtering.
#. Edit the :file:`/etc/sysctl.conf` file to contain the following parameters:
.. code-block:: ini
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
#. Implement the changes:
.. code-block:: console
# sysctl -p
Install the networking components
---------------------------------
.. only:: ubuntu
.. code:: console
# apt-get install neutron-server neutron-plugin-ml2 \
neutron-plugin-linuxbridge-agent neutron-dhcp-agent \
neutron-metadata-agent python-neutronclient
.. only:: rdo
.. code:: console
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge python-neutronclient
.. only:: obs
.. code:: console
# zypper install --no-recommends openstack-neutron \
openstack-neutron-server openstack-neutron-linuxbridge-agent \
openstack-neutron-dhcp-agent openstack-neutron-metadata-agent \
ipset
.. only:: debian
Install and configure the networking components
-----------------------------------------------
#. .. code:: console
# apt-get install neutron-server neutron-plugin-linuxbridge-agent \
neutron-dhcp-agent neutron-metadata-agent
For networking option 2, also install the ``neutron-l3-agent`` package.
#. Respond to prompts for `database
management <#debconf-dbconfig-common>`__, `Identity service
credentials <#debconf-keystone_authtoken>`__, `service endpoint
registration <#debconf-api-endpoints>`__, and `message queue
credentials <#debconf-rabbitmq>`__.
#. Select the ML2 plug-in:
.. image:: figures/debconf-screenshots/neutron_1_plugin_selection.png
.. note::
Selecting the ML2 plug-in also populates the ``service_plugins`` and
``allow_overlapping_ips`` options in the
:file:`/etc/neutron/neutron.conf` file with the appropriate values.
.. only:: ubuntu or rdo or obs
Configure the Networking server component
-----------------------------------------
The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.
.. note::
Default configuration files vary by distribution. You might need to
add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.
Edit the ``/etc/neutron/neutron.conf`` file.
#. In the ``[database]`` section, configure database access:
.. code:: ini
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
#. In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in and disable additional plug-ins:
.. code:: ini
[DEFAULT]
...
core_plugin = ml2
service_plugins =
#. In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure RabbitMQ message queue access:
.. code-block:: ini
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. code-block:: ini
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. code-block:: ini
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
[nova]
...
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
#. (Optional) To assist with troubleshooting, enable verbose logging in
the ``[DEFAULT]`` section:
.. code:: ini
[DEFAULT]
...
verbose = True
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file.
#. In the ``[ml2]`` section, enable flat and VLAN networks:
.. code:: ini
[ml2]
...
type_drivers = flat,vlan
#. In the ``[ml2]`` section, disable project (private) networks:
.. code:: ini
[ml2]
...
tenant_network_types =
#. In the ``[ml2]`` section, enable the Linux bridge mechanism:
.. code:: ini
[ml2]
...
mechanism_drivers = linuxbridge
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
#. In the ``[ml2]`` section, enable the port security extension driver:
.. code:: ini
[ml2]
...
extension_drivers = port_security
#. In the ``[ml2_type_flat]`` section, configure the public flat provider
network:
.. code-block:: ini
[ml2_type_flat]
...
flat_networks = public
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances, including VXLAN tunnels for private
networks, and handles security groups.
Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file.
#. In the ``[linux_bridge]`` section, map the public virtual network to the
public physical network interface:
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME
Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
public network interface.
#. In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. code-block:: ini
[vxlan]
enable_vxlan = False
#. In the ``[agent]`` section, enable ARP spoofing protection:
.. code-block:: ini
[agent]
...
prevent_arp_spoofing = True
#. In the ``[securitygroup]`` section, enable security groups, enable
:term:`ipset`, and configure the Linux bridge :term:`iptables` firewall
driver:
.. code-block:: ini
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the DHCP agent
------------------------
The :term:`DHCP agent` provides DHCP services for virtual networks.
Edit the ``/etc/neutron/dhcp_agent.ini`` file.
#. In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on public
networks can access metadata over the network:
.. code-block:: ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
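With isolated metadata enabled, instances on public networks reach the
metadata service at its well-known link-local address. As a quick check
from inside a running instance (a sketch, assuming the image provides
``curl``):
.. code-block:: console
   $ curl http://169.254.169.254/latest/meta-data/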
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
Return to
:ref:`Networking controller node configuration
<neutron-controller-metadata-agent>`.
@ -0,0 +1,420 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Prerequisites
-------------
Before you configure networking option 2, you must configure kernel
parameters to enable IP forwarding (routing) and disable reverse-path
filtering.
#. Edit the :file:`/etc/sysctl.conf` file to contain the following parameters:
.. code-block:: ini
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
#. Implement the changes:
.. code-block:: console
# sysctl -p
Install the Networking components
---------------------------------
.. only:: ubuntu
.. code:: console
# apt-get install neutron-server neutron-plugin-ml2 \
neutron-plugin-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent python-neutronclient
.. only:: rdo
.. code:: console
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge python-neutronclient
.. only:: obs
.. code:: console
# zypper install --no-recommends openstack-neutron \
openstack-neutron-server openstack-neutron-linuxbridge-agent \
openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
openstack-neutron-metadata-agent ipset
.. only:: debian
Install and configure the Networking components
-----------------------------------------------
#. .. code:: console
# apt-get install neutron-server neutron-plugin-linuxbridge-agent \
neutron-dhcp-agent neutron-metadata-agent
For networking option 2, also install the ``neutron-l3-agent`` package.
#. Respond to prompts for `database
management <#debconf-dbconfig-common>`__, `Identity service
credentials <#debconf-keystone_authtoken>`__, `service endpoint
registration <#debconf-api-endpoints>`__, and `message queue
credentials <#debconf-rabbitmq>`__.
#. Select the ML2 plug-in:
.. image:: figures/debconf-screenshots/neutron_1_plugin_selection.png
.. note::
Selecting the ML2 plug-in also populates the ``service_plugins`` and
``allow_overlapping_ips`` options in the
:file:`/etc/neutron/neutron.conf` file with the appropriate values.
.. only:: ubuntu or rdo or obs
Configure the Networking server component
-----------------------------------------
Edit the ``/etc/neutron/neutron.conf`` file.
#. In the ``[database]`` section, configure database access:
.. code:: ini
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
#. In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in, router service, and overlapping IP addresses:
.. code:: ini
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
#. In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure RabbitMQ message queue access:
.. code-block:: ini
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. code-block:: ini
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. code-block:: ini
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
[nova]
...
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
#. (Optional) To assist with troubleshooting, enable verbose logging in
the ``[DEFAULT]`` section:
.. code:: ini
[DEFAULT]
...
verbose = True
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file.
#. In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:
.. code:: ini
[ml2]
...
type_drivers = flat,vlan,vxlan
#. In the ``[ml2]`` section, enable VXLAN project (private) networks:
.. code:: ini
[ml2]
...
tenant_network_types = vxlan
#. In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
mechanisms:
.. code:: ini
[ml2]
...
mechanism_drivers = linuxbridge,l2population
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
.. note::
The Linux bridge agent only supports VXLAN overlay networks.
#. In the ``[ml2]`` section, enable the port security extension driver:
.. code:: ini
[ml2]
...
extension_drivers = port_security
#. In the ``[ml2_type_flat]`` section, configure the public flat provider
network:
.. code-block:: ini
[ml2_type_flat]
...
flat_networks = public
#. In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
range for private networks:
.. code:: ini
[ml2_type_vxlan]
...
vni_ranges = 1:1000
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances, including VXLAN tunnels for private
networks, and handles security groups.
Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file.
#. In the ``[linux_bridge]`` section, map the public virtual network to the
public physical network interface:
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME
Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
public network interface.
#. In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. code-block:: ini
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface.
#. In the ``[agent]`` section, enable ARP spoofing protection:
.. code-block:: ini
[agent]
...
prevent_arp_spoofing = True
#. In the ``[securitygroup]`` section, enable security groups, enable
:term:`ipset`, and configure the Linux bridge :term:`iptables` firewall
driver:
.. code-block:: ini
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the layer-3 agent
---------------------------
The :term:`Layer-3 (L3) agent` provides routing and NAT services for virtual
networks.
Edit the ``/etc/neutron/l3_agent.ini`` file:
#. In the ``[DEFAULT]`` section, configure the Linux bridge interface driver
and external network bridge:
.. code-block:: ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
.. note::
The ``external_network_bridge`` option intentionally lacks a value
to enable multiple external networks on a single agent.
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
Configure the DHCP agent
------------------------
The :term:`DHCP agent` provides DHCP services for virtual networks.
Edit the ``/etc/neutron/dhcp_agent.ini`` file.
#. In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on public
networks can access metadata over the network:
.. code-block:: ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Overlay networks such as VXLAN include additional packet headers that
increase overhead and decrease space available for the payload or user
data. Without knowledge of the virtual network infrastructure, instances
attempt to send packets using the default Ethernet :term:`maximum
transmission unit (MTU)` of 1500 bytes. :term:`Internet protocol (IP)`
networks contain the :term:`path MTU discovery (PMTUD)` mechanism to detect
end-to-end MTU and adjust packet size accordingly. However, some operating
systems and networks block or otherwise lack support for PMTUD causing
performance degradation or connectivity failure.
Ideally, you can prevent these problems by enabling :term:`jumbo frames
<jumbo frame>` on the physical network that contains your tenant virtual
networks. Jumbo frames support MTUs up to approximately 9000 bytes which
negates the impact of VXLAN overhead on virtual networks. However, many
network devices lack support for jumbo frames and OpenStack administrators
often lack control over network infrastructure. Given the latter
complications, you can also prevent MTU problems by reducing the
instance MTU to account for VXLAN overhead. Determining the proper MTU
value often takes experimentation, but 1450 bytes works in most
environments. You can configure the DHCP server that assigns IP
addresses to your instances to also adjust the MTU.
.. note::
Some cloud images ignore the DHCP MTU option in which case you
should configure it using metadata, a script, or other suitable
method.
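For example, a boot script inside the instance could set the MTU directly
(a sketch; the ``eth0`` interface name is an assumption):
.. code-block:: console
   # ip link set dev eth0 mtu 1450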
#. In the ``[DEFAULT]`` section, enable the :term:`dnsmasq` configuration
file:
.. code-block:: ini
[DEFAULT]
...
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
#. Create and edit the :file:`/etc/neutron/dnsmasq-neutron.conf` file to
enable the DHCP MTU option (26) and configure it to 1450 bytes:
.. code-block:: ini
dhcp-option-force=26,1450
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
Return to
:ref:`Networking controller node configuration
<neutron-controller-metadata-agent>`.
@ -0,0 +1,393 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prerequisites
-------------
Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
a. Use the database access client to connect to the database server as the
``root`` user:
.. code:: console
$ mysql -u root -p
#. Create the ``neutron`` database:
.. code:: console
CREATE DATABASE neutron;
#. Grant proper access to the ``neutron`` database, replacing
``NEUTRON_DBPASS`` with a suitable password:
.. code:: console
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
#. Exit the database access client.
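Optionally, you can confirm that the new account works before continuing by
reconnecting as the ``neutron`` user; this is only a sanity check and
prompts for ``NEUTRON_DBPASS``:
.. code-block:: console
$ mysql -u neutron -p neutron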
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code:: console
$ source admin-openrc.sh
#. To create the service credentials, complete these steps:
a. Create the ``neutron`` user:
.. code:: console
$ openstack user create --password-prompt neutron
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field | Value |
+----------+----------------------------------+
| email | None |
| enabled | True |
| id | ab67f043d9304017aaa73d692eeb4945 |
| name | neutron |
| username | neutron |
+----------+----------------------------------+
#. Add the ``admin`` role to the ``neutron`` user:
.. code:: console
$ openstack role add --project service --user neutron admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | cd2cb9a39e874ea69e5d4b896eb16128 |
| name | admin |
+-------+----------------------------------+
#. Create the ``neutron`` service entity:
.. code:: console
$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
#. Create the Networking service API endpoints:
.. code:: console
$ openstack endpoint create \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696 \
--region RegionOne \
network
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| adminurl | http://controller:9696 |
| id | 04a7d3c1de784099aaba83a8a74100b3 |
| internalurl | http://controller:9696 |
| publicurl | http://controller:9696 |
| region | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
+--------------+----------------------------------+
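If you want to double-check the registration before continuing, you could
list the service and endpoint catalog entries (output omitted here):
.. code-block:: console
$ openstack service list
$ openstack endpoint list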
Configure networking options
----------------------------
Choose one of the following networking options to configure services
specific to it.
.. note::
Option 2 augments option 1 with the layer-3 (routing) service and
enables self-service (private) networks. If you want to use public
(provider) and private (self-service) networks, choose option 2.
.. toctree::
:maxdepth: 1
neutron-controller-install-option1.rst
neutron-controller-install-option2.rst
.. _neutron-controller-metadata-agent:
Configure the metadata agent
----------------------------
The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.
Edit the ``/etc/neutron/metadata_agent.ini`` file.
#. In the ``[DEFAULT]`` section, configure access parameters:
.. code-block:: ini
[DEFAULT]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
#. In the ``[DEFAULT]`` section, configure the metadata host:
.. code-block:: ini
[DEFAULT]
...
nova_metadata_ip = controller
#. In the ``[DEFAULT]`` section, configure the metadata proxy shared
secret:
.. code-block:: ini
[DEFAULT]
...
metadata_proxy_shared_secret = METADATA_SECRET
Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
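One way to generate a suitable random value, assuming ``openssl`` is
available on the node, is:
.. code-block:: console
$ openssl rand -hex 10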
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
Configure Compute to use Networking
-----------------------------------
Edit the ``/etc/nova/nova.conf`` file:
#. In the ``[DEFAULT]`` section, configure Compute to use the Networking
service:
.. code-block:: ini
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
The ``firewall_driver`` option uses the ``NoopFirewallDriver`` value
because Compute delegates security group (firewall) operation to the
Networking service.
#. In the ``[neutron]`` section, configure access parameters, enable the
metadata proxy, and configure the secret:
.. code-block:: ini
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Replace ``METADATA_SECRET`` with the secret you chose for the metadata
proxy.
Finalize installation
---------------------
.. only:: rdo
#. The Networking service initialization scripts expect a symbolic link
:file:`/etc/neutron/plugin.ini` pointing to the ML2 plug-in configuration
file, :file:`/etc/neutron/plugins/ml2/ml2_conf.ini`. If this symbolic
link does not exist, create it using the following command:
.. code:: console
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
#. Due to a packaging issue, the Linux bridge agent initialization script
explicitly looks for the Linux bridge plug-in configuration file rather
than the agent configuration file. Run the following commands to resolve
this issue:
.. code-block:: console
# cp /usr/lib/systemd/system/neutron-linuxbridge-agent.service \
/usr/lib/systemd/system/neutron-linuxbridge-agent.service.orig
# sed -i 's,openvswitch/linuxbridge_neutron_plugin.ini,ml2/linuxbridge_agent.ini,g' \
/usr/lib/systemd/system/neutron-linuxbridge-agent.service
.. note::
Future upgrades of the ``neutron-linuxbridge-agent`` package may
overwrite this modification.
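After such an upgrade, you could quickly check whether the modification is
still in place; if the following command returns no output, repeat the
``sed`` command above:
.. code-block:: console
# grep linuxbridge_agent.ini /usr/lib/systemd/system/neutron-linuxbridge-agent.service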
#. Populate the database:
.. code:: console
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
.. note::
Database population occurs later for Networking because the script
requires complete server and plug-in configuration files.
#. Restart the Compute services:
.. code:: console
# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service
#. Start the Networking services and configure them to start when the system
boots.
For both networking options:
.. code:: console
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service:
.. code:: console
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
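If a service fails to start, you could inspect its status before
continuing, for example:
.. code-block:: console
# systemctl status neutron-server.service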
.. only:: obs
#. The Networking service initialization scripts expect the variable
``NEUTRON_PLUGIN_CONF`` in the :file:`/etc/sysconfig/neutron` file to
reference the ML2 plug-in configuration file. Edit the
:file:`/etc/sysconfig/neutron` file and add the following:
.. code:: console
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"
#. Restart the Compute services:
.. code:: console
# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service
#. Start the Networking services and configure them to start when the system
boots.
For both networking options:
.. code:: console
# systemctl enable openstack-neutron.service \
openstack-neutron-linuxbridge.service \
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service
# systemctl start openstack-neutron.service \
openstack-neutron-linuxbridge.service \
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service:
.. code:: console
# systemctl enable openstack-neutron-l3-agent.service
# systemctl start openstack-neutron-l3-agent.service
.. only:: ubuntu
#. Due to a packaging issue, the Linux bridge agent initialization script
explicitly looks for the ML2 plug-in configuration file rather than the
agent configuration file. Run the following commands to resolve this
issue:
.. code:: console
# cp /etc/init/neutron-plugin-linuxbridge-agent.conf \
/etc/init/neutron-plugin-linuxbridge-agent.conf.orig
# sed -i 's,ml2_conf.ini,linuxbridge_agent.ini,g' \
/etc/init/neutron-plugin-linuxbridge-agent.conf
#. Populate the database:
.. code:: console
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
.. note::
Database population occurs later for Networking because the script
requires complete server and plug-in configuration files.
#. Restart the nova-api service:
.. code:: console
# service nova-api restart
#. Restart the Networking services.
For both networking options:
.. code:: console
# service neutron-server restart
# service neutron-plugin-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
For networking option 2, also restart the layer-3 service:
.. code:: console
# service neutron-l3-agent restart
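As a quick sanity check, you can confirm that each service reports a
running state, for example:
.. code-block:: console
# service neutron-server status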

View File

@ -1,505 +0,0 @@
=====================================
Install and configure controller node
=====================================
**To configure prerequisites**
Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoint.
#. To create the database, complete these steps:
a. Use the database access client to connect to the database server as the
``root`` user:
.. code:: console
$ mysql -u root -p
#. Create the ``neutron`` database:
.. code:: console
CREATE DATABASE neutron;
#. Grant proper access to the ``neutron`` database, replacing
``NEUTRON_DBPASS`` with a suitable password:
.. code:: console
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
#. Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code:: console
$ source admin-openrc.sh
#. To create the service credentials, complete these steps:
a. Create the ``neutron`` user:
.. code:: console
$ openstack user create --password-prompt neutron
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field | Value |
+----------+----------------------------------+
| email | None |
| enabled | True |
| id | ab67f043d9304017aaa73d692eeb4945 |
| name | neutron |
| username | neutron |
+----------+----------------------------------+
#. Add the ``admin`` role to the ``neutron`` user:
.. code:: console
$ openstack role add --project service --user neutron admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | cd2cb9a39e874ea69e5d4b896eb16128 |
| name | admin |
+-------+----------------------------------+
#. Create the ``neutron`` service entity:
.. code:: console
$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
#. Create the Networking service API endpoint:
.. code:: console
$ openstack endpoint create \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696 \
--region RegionOne \
network
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| adminurl | http://controller:9696 |
| id | 04a7d3c1de784099aaba83a8a74100b3 |
| internalurl | http://controller:9696 |
| publicurl | http://controller:9696 |
| region | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
+--------------+----------------------------------+
**To install the Networking components**
.. only:: ubuntu
.. code:: console
# apt-get install neutron-server neutron-plugin-ml2 python-neutronclient
.. only:: rdo
.. code:: console
# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which
.. only:: obs
.. code:: console
# zypper install openstack-neutron openstack-neutron-server
.. note::
SUSE does not use a separate ML2 plug-in package.
.. only:: debian
**To install and configure the Networking components**
#. .. code:: console
# apt-get install neutron-server
.. note::
Debian does not use a separate ML2 plug-in package.
#. Respond to prompts for `database
management <#debconf-dbconfig-common>`__, `Identity service
credentials <#debconf-keystone_authtoken>`__, `service endpoint
registration <#debconf-api-endpoints>`__, and `message queue
credentials <#debconf-rabbitmq>`__.
#. Select the ML2 plug-in:
.. image:: figures/debconf-screenshots/neutron_1_plugin_selection.png
.. note::
Selecting the ML2 plug-in also populates the ``service_plugins`` and
``allow_overlapping_ips`` options in the
:file:`/etc/neutron/neutron.conf` file with the appropriate values.
.. only:: ubuntu or rdo or obs
**To configure the Networking server component**
The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.
.. note::
Default configuration files vary by distribution. You might need to
add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.
#. Open the :file:`/etc/neutron/neutron.conf` file and edit the
``[database]`` section to configure database access:
.. code:: ini
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
#. In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure RabbitMQ message queue access:
.. code-block:: ini
:linenos:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. code-block:: ini
:linenos:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in, router service, and overlapping IP addresses:
.. code:: ini
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
#. In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. code-block:: ini
:linenos:
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
[nova]
...
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
#. (Optional) To assist with troubleshooting, enable verbose logging in
the ``[DEFAULT]`` section:
.. code:: ini
[DEFAULT]
...
verbose = True
**To configure the Modular Layer 2 (ML2) plug-in**
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build
the virtual networking framework for instances. However, the controller
node does not need the OVS components because it does not handle
instance network traffic.
#. Open the :file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file and edit the
``[ml2]`` section, to enable the flat, VLAN, generic routing
encapsulation (GRE), and virtual extensible LAN (VXLAN) network type
drivers, GRE tenant networks, and the OVS mechanism driver:
.. code:: ini
[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
.. warning::
After you configure the ML2 plug-in, changing values in the
``type_drivers`` option can lead to database inconsistency.
#. In the ``[ml2_type_gre]`` section, configure the tunnel identifier (id)
range:
.. code:: ini
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
#. In the ``[securitygroup]`` section, enable security groups, enable
ipset, and configure the OVS iptables firewall driver:
.. code:: ini
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
**To configure Compute to use Networking**
By default, distribution packages configure Compute to use legacy
networking. You must reconfigure Compute to manage networks through
Networking.
#. Open the :file:`/etc/nova/nova.conf` file on the controller node and edit
the ``[DEFAULT]`` section to configure the APIs and drivers:
.. code:: ini
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall service. Since
Networking includes a firewall service, you must disable the Compute
firewall service by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
#. In the ``[neutron]`` section, configure access parameters:
.. code-block:: ini
:linenos:
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
**To finalize installation**
.. only:: rdo
#. The Networking service initialization scripts expect a symbolic link
:file:`/etc/neutron/plugin.ini` pointing to the ML2 plug-in configuration
file, :file:`/etc/neutron/plugins/ml2/ml2_conf.ini`. If this symbolic
link does not exist, create it using the following command:
.. code:: console
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
#. Populate the database:
.. code:: console
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade kilo" neutron
.. note::
Database population occurs later for Networking because the script
requires complete server and plug-in configuration files.
#. Restart the Compute services:
.. code:: console
# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service
#. Start the Networking service and configure it to start when the system
boots:
.. code:: console
# systemctl enable neutron-server.service
# systemctl start neutron-server.service
.. only:: obs
#. The Networking service initialization scripts expect the variable
``NEUTRON_PLUGIN_CONF`` in the :file:`/etc/sysconfig/neutron` file to
reference the ML2 plug-in configuration file. Edit the
:file:`/etc/sysconfig/neutron` file and add the following:
.. code:: console
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"
#. Restart the Compute services:
.. code:: console
# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service
#. Start the Networking service and configure it to start when the system
boots:
.. code:: console
# systemctl enable openstack-neutron.service
# systemctl start openstack-neutron.service
.. only:: ubuntu
#. Populate the database:
.. code:: console
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade kilo" neutron
.. note::
Database population occurs later for Networking because the script
requires complete server and plug-in configuration files.
#. Restart the nova-api service:
.. code:: console
# service nova-api restart
#. Restart the Networking service:
.. code:: console
# service neutron-server restart
**Verify operation**
Perform the following commands on the controller node.
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code:: console
$ source admin-openrc.sh
#. List loaded extensions to verify successful launch of the
``neutron-server`` process:
.. code:: console
$ neutron ext-list
+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| security-group | security-group |
| l3_agent_scheduler | L3 Agent Scheduler |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| provider | Provider Network |
| agent | agent |
| quotas | Quota management support |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| l3-ha | HA Router extension |
| multi-provider | Multi Provider Network |
| external-net | Neutron external network |
| router | Neutron L3 Router |
| allowed-address-pairs | Allowed Address Pairs |
| extraroute | Neutron Extra Route |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| dvr | Distributed Virtual Router |
+-----------------------+-----------------------------------------------+

View File

@ -1,276 +0,0 @@
=======================
Create initial networks
=======================
Before launching your first instance, you must create the necessary
virtual network infrastructure to which the instances connect, including
the :ref:`external-network` and :ref:`tenant-network`. After creating this
infrastructure, we recommend that you :ref:`verify-connectivity` and resolve
any issues before proceeding further. :ref:`Initial networks <initialnetworks>`
provides a basic architectural overview of the components that Networking
implements for the initial networks and shows how network traffic flows from
the instance to the external network or Internet.
.. _initialnetworks:
.. figure:: /figures/installguide-neutron-initialnetworks.png
:alt: OpenStack networking (neutron) initial networks
.. _external-network:
External network
~~~~~~~~~~~~~~~~
The external network typically provides Internet access for your
instances. By default, this network only allows Internet access *from*
instances using :term:`Network Address Translation (NAT)`. You can enable
Internet access *to* individual instances using a :term:`floating IP address`
and suitable :term:`security group` rules. The ``admin`` tenant owns this
network because it provides external network access for multiple
tenants.
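For later reference, after you launch an instance you would typically
allocate a floating IP address from this network and open suitable security
group rules; a sketch using the default security group (association with an
instance port happens after launch):
.. code-block:: console
$ neutron floatingip-create ext-net
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0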
**To create the external network**
#. On the controller node, source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ source admin-openrc.sh
#. Create the network:
.. code-block:: console
$ neutron net-create ext-net --router:external \
--provider:physical_network external --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| name | ext-net |
| provider:network_type | flat |
| provider:physical_network | external |
| provider:segmentation_id | |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 54cd044c64d5408b83f843d63624e0d8 |
+---------------------------+--------------------------------------+
Like a physical network, a virtual network requires a :term:`subnet` assigned
to it. The external network shares the same subnet and :term:`gateway`
associated with the physical network connected to the external interface on the
network node. You should specify an exclusive slice of this subnet for
:term:`router` and floating IP addresses to prevent interference with other
devices on the external network.
**To create a subnet on the external network**
Create the subnet:
.. code-block:: console
$ neutron subnet-create ext-net EXTERNAL_NETWORK_CIDR --name ext-subnet \
--allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
--disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY
- Replace ``FLOATING_IP_START`` and ``FLOATING_IP_END`` with
the first and last IP addresses of the range that you want to allocate for
floating IP addresses.
- Replace ``EXTERNAL_NETWORK_CIDR`` with the subnet associated with the
physical network.
- Replace ``EXTERNAL_NETWORK_GATEWAY`` with the gateway associated with the
physical network, typically the ".1" IP address.
- You should disable :term:`DHCP` on this subnet because instances do not
connect directly to the external network and floating IP addresses
require manual assignment.
For example, using ``203.0.113.0/24`` with floating IP address range
``203.0.113.101`` to ``203.0.113.200``:
.. code-block:: console
$ neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet \
--allocation-pool start=203.0.113.101,end=203.0.113.200 \
--disable-dhcp --gateway 203.0.113.1
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.200"} |
| cidr | 203.0.113.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 203.0.113.1 |
| host_routes | |
| id | 9159f0dc-2b63-41cf-bd7a-289309da1391 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | ext-subnet |
| network_id | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| tenant_id | 54cd044c64d5408b83f843d63624e0d8 |
+-------------------+------------------------------------------------------+
.. _tenant-network:
Tenant network
~~~~~~~~~~~~~~
The tenant network provides internal network access for instances. The
architecture isolates this type of network from other tenants. The
``demo`` tenant owns this network because it only provides network
access for instances within it.
**To create the tenant network**
#. On the controller node, source the ``demo`` credentials to gain access to
user-only CLI commands:
.. code-block:: console
$ source demo-openrc.sh
#. Create the network:
.. code-block:: console
$ neutron net-create demo-net
Created a new network:
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | True |
| id | ac108952-6096-4243-adf4-bb6615b3de28 |
| name | demo-net |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+-----------------+--------------------------------------+
Like the external network, your tenant network also requires a subnet
attached to it. You can specify any valid subnet because the
architecture isolates tenant networks. By default, this subnet uses DHCP
so your instances can obtain IP addresses.
**To create a subnet on the tenant network**
Create the subnet:
.. code-block:: console
$ neutron subnet-create demo-net TENANT_NETWORK_CIDR \
--name demo-subnet --gateway TENANT_NETWORK_GATEWAY
- Replace ``TENANT_NETWORK_CIDR`` with the subnet you want to associate with
the tenant network.
- Replace ``TENANT_NETWORK_GATEWAY`` with the gateway you want to associate
with it, typically the ".1" IP address.
Example using ``192.168.1.0/24``:
.. code-block:: console
$ neutron subnet-create demo-net 192.168.1.0/24 \
--name demo-subnet --gateway 192.168.1.1
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr | 192.168.1.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.1.1 |
| host_routes | |
| id | 69d38773-794a-4e49-b887-6de6734e792d |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | demo-subnet |
| network_id | ac108952-6096-4243-adf4-bb6615b3de28 |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+-------------------+------------------------------------------------------+
A virtual router passes network traffic between two or more virtual
networks. Each router requires one or more :term:`interfaces <interface>`
and/or gateways that provide access to specific networks. In this case, you
create a router and attach your tenant and external networks to it.
**To create a router on the tenant network and attach the external and tenant
networks to it**
#. Create the router:
.. code-block:: console
$ neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | 635660ae-a254-4feb-8993-295aa9ec6418 |
| name | demo-router |
| routes | |
| status | ACTIVE |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+-----------------------+--------------------------------------+
#. Attach the router to the ``demo`` tenant subnet:
.. code-block:: console
$ neutron router-interface-add demo-router demo-subnet
Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router.
#. Attach the router to the external network by setting it as the gateway:
.. code-block:: console
$ neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
.. _verify-connectivity:
Verify connectivity
~~~~~~~~~~~~~~~~~~~
We recommend that you verify network connectivity and resolve any issues
before proceeding further. Following the external network subnet example
using ``203.0.113.0/24``, the tenant router gateway should occupy the
lowest IP address in the floating IP address range, ``203.0.113.101``.
If you configured your external physical network and virtual networks
correctly, you should be able to ``ping`` this IP address from any host
on your external physical network.
.. note::
If you are building your OpenStack nodes as virtual machines, you
must configure the hypervisor to permit promiscuous mode on the
external network.
**To verify network connectivity**
From a host on the external network, ping the tenant router gateway:
.. code-block:: console
$ ping -c 4 203.0.113.101
PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data.
64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms
64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms
64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms
64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms
--- 203.0.113.101 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms

View File

@ -1,579 +0,0 @@
==================================
Install and configure network node
==================================
The network node primarily handles internal and external routing and
:term:`DHCP` services for virtual networks.
**To configure prerequisites**
Before you install and configure OpenStack Networking, you must
configure certain kernel networking parameters.
#. Edit the :file:`/etc/sysctl.conf` file to contain the following parameters:
.. code-block:: ini
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
#. Implement the changes:
.. code-block:: console
# sysctl -p
.. only:: rdo or ubuntu or obs
**To install the Networking components**
.. only:: ubuntu
.. code-block:: console
# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
.. only:: rdo
.. code-block:: console
# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
.. only:: obs
.. code-block:: console
# zypper install --no-recommends openstack-neutron-openvswitch-agent \
openstack-neutron-l3-agent \
openstack-neutron-dhcp-agent openstack-neutron-metadata-agent ipset
.. note:: SUSE does not use a separate ML2 plug-in package.
.. only:: debian
**To install and configure the Networking components**
#. .. code-block:: console
# apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms \
neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
.. note:: Debian does not use a separate ML2 plug-in package.
#. Respond to prompts for `database
management <#debconf-dbconfig-common>`__, `Identity service
credentials <#debconf-keystone_authtoken>`__, `service endpoint
registration <#debconf-api-endpoints>`__, and `message queue
credentials <#debconf-rabbitmq>`__.
#. Select the ML2 plug-in:
.. image:: figures/debconf-screenshots/neutron_1_plugin_selection.png
.. note::
Selecting the ML2 plug-in also populates the ``service_plugins`` and
``allow_overlapping_ips`` options in the
:file:`/etc/neutron/neutron.conf` file with the appropriate values.
.. only:: rdo or ubuntu or obs
**To configure the Networking common components**
The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.
.. note::
Default configuration files vary by distribution. You might need to
add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.
#. Open the :file:`/etc/neutron/neutron.conf` file and edit the
``[database]`` section. Comment out any ``connection`` options
because network nodes do not directly access the database.
#. In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure
RabbitMQ message queue access:
.. code-block:: ini
:linenos:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. code-block:: ini
:linenos:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) plug-in,
router service, and overlapping IP addresses:
.. code-block:: ini
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
**To configure the Modular Layer 2 (ML2) plug-in**
The ML2 plug-in uses the :term:`Open vSwitch (OVS) <Open vSwitch>` mechanism
(agent) to build the virtual networking framework for instances.
#. Open the :file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file and edit the
``[ml2]`` section. Enable the :term:`flat <flat network>`, :term:`VLAN <VLAN
network>`, :term:`generic routing encapsulation (GRE)`, and
:term:`virtual extensible LAN (VXLAN)` network type drivers, GRE tenant
networks, and the OVS mechanism driver:
.. code-block:: ini
[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
#. In the ``[ml2_type_flat]`` section, configure the external flat provider
network:
.. code-block:: ini
[ml2_type_flat]
...
flat_networks = external
#. In the ``[ml2_type_gre]`` section, configure the tunnel identifier (id)
range:
.. code-block:: ini
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
#. In the ``[securitygroup]`` section, enable security groups, enable
:term:`ipset`, and configure the OVS :term:`iptables` firewall driver:
.. code-block:: ini
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
#. In the ``[ovs]`` section, enable tunnels, configure the local tunnel
endpoint, and map the external flat provider network to the ``br-ex``
external network bridge:
.. code-block:: ini
[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
bridge_mappings = external:br-ex
Replace ``INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS`` with the IP address of
the instance tunnels network interface on your network node.
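If you are unsure which address to use, you can list the addresses
configured on the node and pick the one assigned to the instance tunnels
interface (interface names vary by environment):
.. code-block:: console
# ip addr show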
#. In the ``[agent]`` section, enable GRE tunnels:
.. code-block:: ini
[agent]
...
tunnel_types = gre
**To configure the Layer-3 (L3) agent**
The :term:`Layer-3 (L3) agent` provides routing services for virtual networks.
#. Open the :file:`/etc/neutron/l3_agent.ini` file and edit the ``[DEFAULT]``
section to configure the interface driver and external network bridge, and
to enable deletion of defunct router namespaces:
.. code-block:: ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True
.. note::
The ``external_network_bridge`` option intentionally lacks a value
to enable multiple external networks on a single agent.
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
**To configure the DHCP agent**
The :term:`DHCP agent` provides DHCP services for virtual networks.
#. Open the :file:`/etc/neutron/dhcp_agent.ini` file and edit the ``[DEFAULT]``
section to configure the interface and DHCP drivers and to enable deletion
of defunct DHCP namespaces:
.. code-block:: ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
#. (Optional)
Tunneling protocols such as GRE include additional packet headers that
increase overhead and decrease space available for the payload or user
data. Without knowledge of the virtual network infrastructure, instances
attempt to send packets using the default Ethernet :term:`maximum
transmission unit (MTU)` of 1500 bytes. :term:`Internet protocol (IP)`
networks contain the :term:`path MTU discovery (PMTUD)` mechanism to detect
end-to-end MTU and adjust packet size accordingly. However, some operating
systems and networks block or otherwise lack support for PMTUD, causing
performance degradation or connectivity failure.
Ideally, you can prevent these problems by enabling :term:`jumbo frames
<jumbo frame>` on the physical network that contains your tenant virtual
networks. Jumbo frames support MTUs up to approximately 9000 bytes, which
negates the impact of GRE overhead on virtual networks. However, many
network devices lack support for jumbo frames and OpenStack administrators
often lack control over network infrastructure. Given the latter
complications, you can also prevent MTU problems by reducing the
instance MTU to account for GRE overhead. Determining the proper MTU
value often takes experimentation, but 1454 bytes works in most
environments. You can configure the DHCP server that assigns IP
addresses to your instances to also adjust the MTU.
.. note::
Some cloud images ignore the DHCP MTU option, in which case you
should configure it using metadata, a script, or another suitable
method.
#. Open the :file:`/etc/neutron/dhcp_agent.ini` file and edit the
``[DEFAULT]`` section. Enable the :term:`dnsmasq` configuration file:
.. code-block:: ini
[DEFAULT]
...
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
#. Create and edit the :file:`/etc/neutron/dnsmasq-neutron.conf` file to
enable the DHCP MTU option (26) and configure it to 1454 bytes:
.. code-block:: ini
dhcp-option-force=26,1454
#. Kill any existing dnsmasq processes:
.. code-block:: console
# pkill dnsmasq
**To configure the metadata agent**
The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.
#. Open the :file:`/etc/neutron/metadata_agent.ini` file and edit the
``[DEFAULT]`` section to configure access parameters:
.. code-block:: ini
:linenos:
[DEFAULT]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
#. In the ``[DEFAULT]`` section, configure the metadata host:
.. code-block:: ini
[DEFAULT]
...
nova_metadata_ip = controller
#. In the ``[DEFAULT]`` section, configure the metadata proxy shared
secret:
.. code-block:: ini
[DEFAULT]
...
metadata_proxy_shared_secret = METADATA_SECRET
Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
#. (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
.. code-block:: ini
[DEFAULT]
...
verbose = True
#. On the *controller* node, open the :file:`/etc/nova/nova.conf` file and
edit the ``[neutron]`` section to enable the metadata proxy and configure
the secret:
.. code-block:: ini
[neutron]
...
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Replace ``METADATA_SECRET`` with the secret you chose for the metadata
proxy.
#. On the *controller* node, restart the Compute :term:`API` service:
.. only:: rdo or obs
.. code-block:: console
# systemctl restart openstack-nova-api.service
.. only:: ubuntu or debian
.. code-block:: console
# service nova-api restart
**To configure the Open vSwitch (OVS) service**
The OVS service provides the underlying virtual networking framework for
instances. The integration bridge ``br-int`` handles internal instance
network traffic within OVS. The external bridge ``br-ex`` handles
external instance network traffic within OVS. The external bridge
requires a port on the physical external network interface to provide
instances with external network access. In essence, this port connects
the virtual and physical external networks in your environment.
.. only:: rdo or obs
#. Start the OVS service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
.. only:: ubuntu or debian
#. Restart the OVS service:
.. code-block:: console
# service openvswitch-switch restart
2. Add the external bridge:
.. code-block:: console
# ovs-vsctl add-br br-ex
#. Add a port to the external bridge that connects to the physical external
network interface. Replace ``INTERFACE_NAME`` with the actual interface
name. For example, *eth2* or *ens256*:
.. code-block:: console
# ovs-vsctl add-port br-ex INTERFACE_NAME
.. note::
Depending on your network interface driver, you may need to disable
:term:`generic receive offload (GRO)` to achieve suitable throughput
between your instances and the external network.
To temporarily disable GRO on the external network interface while
testing your environment:
.. code-block:: console
# ethtool -K INTERFACE_NAME gro off
**To finalize the installation**
.. only:: rdo
#. The Networking service initialization scripts expect a symbolic link
:file:`/etc/neutron/plugin.ini` pointing to the ML2 plug-in configuration
file, :file:`/etc/neutron/plugins/ml2/ml2_conf.ini`. If this symbolic
link does not exist, create it using the following command:
.. code-block:: console
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
#. Due to a packaging bug, the Open vSwitch agent initialization script
explicitly looks for the Open vSwitch plug-in configuration file rather
than a symbolic link :file:`/etc/neutron/plugin.ini` pointing to the ML2
plug-in configuration file. Run the following commands to resolve this
issue:
.. code-block:: console
# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service
#. Start the Networking services and configure them to start when the
system boots:
.. code-block:: console
# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service \
neutron-ovs-cleanup.service
# systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
.. note:: Do not explicitly start the neutron-ovs-cleanup service.
.. only:: obs
#. The Networking service initialization scripts expect the variable
``NEUTRON_PLUGIN_CONF`` in the :file:`/etc/sysconfig/neutron` file to
reference the ML2 plug-in configuration file. Edit the
:file:`/etc/sysconfig/neutron` file and add the following:
.. code-block:: ini
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"
#. Start the Networking services and configure them to start when the
system boots:
.. code-block:: console
# systemctl enable openstack-neutron-openvswitch-agent.service \
openstack-neutron-l3-agent.service \
openstack-neutron-dhcp-agent.service openstack-neutron-metadata-agent.service \
openstack-neutron-ovs-cleanup.service
# systemctl start openstack-neutron-openvswitch-agent.service \
openstack-neutron-l3-agent.service \
openstack-neutron-dhcp-agent.service openstack-neutron-metadata-agent.service
.. note:: Do not explicitly start the neutron-ovs-cleanup service.
.. only:: ubuntu or debian
#. Restart the Networking services:
.. code-block:: console
# service neutron-plugin-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
.. note:: Perform these commands on the controller node.
**Verify operation**
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code-block:: console
$ source admin-openrc.sh
#. List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ neutron agent-list
+-------+--------------------+---------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+-------+--------------------+---------+-------+----------------+---------------------------+
| 302...| Metadata agent | network | :-) | True | neutron-metadata-agent |
| 4bd...| Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| 756...| L3 agent | network | :-) | True | neutron-l3-agent |
| 9c4...| DHCP agent | network | :-) | True | neutron-dhcp-agent |
+-------+--------------------+---------+-------+----------------+---------------------------+

View File

@ -0,0 +1,7 @@
==========
Next steps
==========
Your OpenStack environment now includes the core components necessary
to launch a basic instance. You can :ref:`launch-instance` or add more
OpenStack services to your environment.

View File

@ -0,0 +1,19 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1 | :-) | True | neutron-linuxbridge-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent | controller | :-) | True | neutron-dhcp-agent |
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
The output should indicate three agents on the controller node and one
agent on each compute node.

View File

@ -0,0 +1,20 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1 | :-) | True | neutron-linuxbridge-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent | controller | :-) | True | neutron-l3-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent | controller | :-) | True | neutron-dhcp-agent |
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
The output should indicate four agents on the controller node and one
agent on each compute node.

View File

@ -0,0 +1,51 @@
Verify operation
~~~~~~~~~~~~~~~~
#. Source the ``admin`` credentials to gain access to admin-only CLI
commands:
.. code:: console
$ source admin-openrc.sh
#. List loaded extensions to verify successful launch of the
``neutron-server`` process:
.. code:: console
$ neutron ext-list
+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| dns-integration | DNS Integration |
| address-scope | Address scope |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| l3_agent_scheduler | L3 Agent Scheduler |
| external-net | Neutron external network |
| flavors | Neutron Service Flavors |
| net-mtu | Network MTU |
| quotas | Quota management support |
| l3-ha | HA Router extension |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| extraroute | Neutron Extra Route |
| router | Neutron L3 Router |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| security-group | security-group |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| rbac-policies | RBAC Policies |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
| dvr | Distributed Virtual Router |
+-----------------------+-----------------------------------------------+
Use the verification section for the networking option that you chose to
deploy.
.. toctree::
neutron-verify-option1.rst
neutron-verify-option2.rst

View File

@ -0,0 +1,22 @@
.. _networking:
==========================
Add the Networking service
==========================
This chapter explains how to install and configure the OpenStack Networking
service (neutron) using the :ref:`provider networks <network1>` or
:ref:`self-service networks <network2>` option. For more information about
the Networking service including virtual networking components, layout, and
traffic flows, see the
`Networking Guide <http://docs.openstack.org/networking-guide>`__.
.. toctree::
:maxdepth: 1
common/get_started_openstack_networking.rst
neutron-concepts.rst
neutron-controller-install.rst
neutron-compute-install.rst
neutron-verify.rst
neutron-next-steps.rst

View File

@ -18,15 +18,8 @@ scale your environment with additional compute nodes.
this guide step-by-step to configure the first compute node. If you
want to configure additional compute nodes, prepare them in a similar
fashion to the first compute node in the :ref:`example architectures
<overview-example-architectures>` section using the same networking
service as your existing environment. For either networking service,
follow the :ref:`NTP configuration <basics-ntp-other-nodes>` and
:doc:`OpenStack packages <basics-packages>` instructions.
For OpenStack Networking (neutron), also follow the
:doc:`OpenStack Networking compute node <basics-networking-neutron>`
instructions. For legacy networking (nova-network), also follow the
:doc:`legacy networking compute node <basics-networking-nova>`
instructions. Each additional compute node requires unique IP addresses.
<overview-example-architectures>` section. Each additional compute node
requires a unique IP address.
To install and configure the Compute hypervisor components
----------------------------------------------------------

View File

@ -227,6 +227,24 @@ To install and configure Compute controller components
...
my_ip = 10.0.0.11
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. code-block:: ini
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall service. Since
Networking includes a firewall service, you must disable the Compute
firewall service by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
* In the ``[DEFAULT]`` section, configure the VNC proxy to use
the management interface IP address of the controller node:
@ -276,7 +294,6 @@ To install and configure Compute controller components
...
lock_path = /var/lib/nova/tmp
* (Optional) To assist with troubleshooting, enable verbose
logging in the ``[DEFAULT]`` section:

View File

@ -1,9 +1,6 @@
============
Architecture
============
========
Overview
~~~~~~~~
========
The :term:`OpenStack` project is an open source cloud computing platform that
supports all types of cloud environments. The project aims for simple
@ -13,7 +10,11 @@ computing experts from around the world contribute to the project.
OpenStack provides an :term:`Infrastructure-as-a-Service (IaaS)<IaaS>` solution
through a variety of complemental services. Each service offers an
:term:`application programming interface (API)<API>` that facilitates this
integration. The following table provides a list of OpenStack services:
integration.
This guide covers step-by-step deployment of the following major OpenStack
services using a functional example architecture suitable for new users of
OpenStack with sufficient Linux experience:
.. list-table:: **OpenStack services**
:widths: 20 15 70
@ -86,221 +87,147 @@ integration. The following table provides a list of OpenStack services:
format or the AWS CloudFormation template format, through both an
OpenStack-native REST API and a CloudFormation-compatible
Query API.
* - `Database service <http://www.openstack.org/software/openstack-shared-services/>`_
- `Trove <http://docs.openstack.org/developer/trove/>`_
- Provides scalable and reliable Cloud Database-as-a-Service
functionality for both relational and non-relational database
engines.
* - `Data processing service
<http://www.openstack.org/software/openstack-shared-services/>`_
- `Sahara <http://docs.openstack.org/developer/sahara/>`_
- Provides capabilities to provision and scale Hadoop clusters in OpenStack by
specifying parameters like Hadoop version, cluster topology and nodes hardware
details.
|
This guide describes how to deploy these services in a functional test
environment and, by example, teaches you how to build a production
environment. Realistically, you would use automation tools such as
Ansible, Chef, and Puppet to deploy and manage a production environment.
After becoming familiar with basic installation, configuration, operation,
and troubleshooting of these OpenStack services, you should consider the
following steps toward deployment using a production architecture:
.. _overview-conceptual-architecture:
- Determine and implement the necessary core and optional services to
meet performance and redundancy requirements.
Conceptual architecture
~~~~~~~~~~~~~~~~~~~~~~~
- Increase security using methods such as firewalls, encryption, and
service policies.
Launching a virtual machine or instance involves many interactions among
several services. The following diagram provides the conceptual
architecture of a typical OpenStack environment.
.. figure:: figures/openstack_kilo_conceptual_arch.png
:alt: Conceptual view of OpenStack Kilo architecture
:width: 7in
:height: 7in
Figure 1.1 Conceptual architecture
|
- Implement a deployment tool such as Ansible, Chef, Puppet, or Salt
to automate deployment and management of the production environment.
.. _overview-example-architectures:
Example architectures
~~~~~~~~~~~~~~~~~~~~~
Example architecture
~~~~~~~~~~~~~~~~~~~~
OpenStack is highly configurable to meet different needs with various
compute, networking, and storage options. This guide presents several
combinations of core and optional services for you to choose from. This guide
uses the following example architectures:
The example architecture requires at least two nodes (hosts) to launch a basic
:term:`virtual machine <virtual machine (VM)>` or instance. Optional
services such as Block Storage and Object Storage require additional nodes.
- Three-node architecture with OpenStack Networking (neutron) and
optional nodes for Block Storage and Object Storage services.
This example architecture differs from a minimal production architecture as
follows:
- The :term:`controller node <cloud controller node>` runs the
Identity service, Image Service, management portions of Compute
and Networking, Networking plug-in, and the dashboard. It also
includes supporting services such as an SQL database,
:term:`message queue`, and :term:`Network Time Protocol (NTP)`.
- Networking agents reside on the controller node instead of one or more
dedicated network nodes.
Optionally, the controller node runs portions of Block Storage,
Object Storage, Orchestration, Telemetry, Database, and Data
processing services. These components provide additional features
for your environment.
- Overlay (tunnel) traffic for private networks traverses the management
network instead of a dedicated network.
- The network node runs the Networking plug-in and several agents
that provision tenant networks and provide switching, routing,
:term:`NAT<Network Address Translation (NAT)>`, and
:term:`DHCP` services. This node also handles external (internet)
connectivity for tenant virtual machine instances.
For more information on production architectures, see the
`Architecture Design Guide <http://docs.openstack.org/arch-design/content/>`__,
`Operations Guide <http://docs.openstack.org/ops/>`__, and
`Networking Guide <http://docs.openstack.org/networking-guide/>`__.
- The :term:`compute node` runs the :term:`hypervisor` portion of
Compute that operates :term:`tenant`
:term:`virtual machines <virtual machine (VM)>` or instances. By
default, Compute uses :term:`KVM <kernel-based VM (KVM)>` as the
:term:`hypervisor`. The compute node also runs the Networking
plug-in and an agent that connect tenant networks to instances and
provide firewalling (:term:`security groups <security group>`)
services. You can run more than one compute node.
.. _figure-hwreqs:
Optionally, the compute node runs a Telemetry agent to collect
meters. Also, it can contain a third network interface on a
separate storage network to improve performance of storage
services.
.. figure:: figures/hwreqs.png
:alt: Hardware requirements
- The optional Block Storage node contains the disks that the Block
Storage service provisions for tenant virtual machine instances.
You can run more than one of these nodes.
**Hardware requirements**
Optionally, the Block Storage node runs a Telemetry agent to
collect meters. Also, it can contain a second network interface on
a separate storage network to improve performance of storage
services.
Controller
----------
- The optional Object Storage nodes contain the disks that the
Object Storage service uses for storing accounts, containers, and
objects. You can run more than two of these nodes. However, the
minimal architecture example requires two nodes.
The controller node runs the Identity service, Image service, management
portions of Compute and Networking, various Networking
agents, and the dashboard. It also includes supporting services such as
an SQL database, :term:`message queue`, and :term:`NTP`.
Optionally, these nodes can contain a second network interface on
a separate storage network to improve performance of storage
services.
Optionally, the controller node runs portions of Block Storage, Object
Storage, Orchestration, and Telemetry services.
.. note::
When you implement this architecture, skip the section
:doc:`networking-nova`. Optional services might require
additional nodes or additional resources on existing nodes.
The controller node requires a minimum of two network interfaces.
|
Compute
-------
.. _figure-neutron-network-hw:
The compute node runs the :term:`hypervisor` portion of Compute that
operates instances. By default, Compute uses the
:term:`KVM <kernel-based VM (KVM)>` hypervisor. The compute node also
runs a Networking service agent that connects instances to virtual networks
and provides firewalling services to instances via
:term:`security groups <security group>`.
.. figure:: figures/installguidearch-neutron-hw.png
:alt: Minimal architecture example with OpenStack Networking
(neutron)—Hardware requirements
You can deploy more than one compute node. Each node requires a minimum
of two network interfaces.
Figure 1.2 Minimal architecture example with OpenStack Networking
(neutron)—Hardware requirements
Block Storage
-------------
|
The optional Block Storage node contains the disks that the Block
Storage service provisions for instances.
.. _figure-neutron-networks:
For simplicity, service traffic between compute nodes and this node
uses the management network. Production environments should implement
a separate storage network to increase performance and security.
.. figure:: figures/installguidearch-neutron-networks.png
:alt: Minimal architecture example with OpenStack Networking
(neutron)—Network layout
You can deploy more than one block storage node. Each node requires a
minimum of one network interface.
Figure 1.3 Minimal architecture example with OpenStack Networking
(neutron)—Network layout
Object Storage
--------------
|
The optional Object Storage nodes contain the disks that the
Object Storage service uses for storing accounts, containers, and
objects.
.. figure:: figures/installguidearch-neutron-services.png
:alt: Minimal architecture example with OpenStack Networking
(neutron)—Service layout
For simplicity, service traffic between compute nodes and this node
uses the management network. Production environments should implement
a separate storage network to increase performance and security.
Figure 1.4 Minimal architecture example with OpenStack Networking
(neutron)—Service layout
This service requires two nodes. Each node requires a minimum of one
network interface. You can deploy more than two object storage nodes.
|
Networking
~~~~~~~~~~
- Two-node architecture with legacy networking (nova-network) and
optional nodes for Block Storage and Object Storage services.
Choose one of the following virtual networking options.
- The :term:`controller node <cloud controller node>` runs the
Identity service, Image service, management portion of Compute,
and the dashboard. It also includes supporting services such as an
SQL database, :term:`message queue`, and :term:`Network Time
Protocol (NTP)`.
.. _network1:
Optionally, the controller node runs portions of Block Storage,
Object Storage, Orchestration, Telemetry, Database, and Data
processing services. These components provide additional features
for your environment.
Networking Option 1: Provider networks
--------------------------------------
- The :term:`compute node` runs the :term:`hypervisor` portion of
Compute that operates :term:`tenant` :term:`virtual machines
<virtual machine (VM)>` or instances. By default, Compute uses
:term:`KVM <kernel-based VM (KVM)>` as the :term:`hypervisor`.
Compute also provisions tenant networks and provides firewalling
(:term:`security groups <security group>`) services. You can run
more than one compute node.
The provider networks option deploys the OpenStack Networking service
in the simplest way possible with primarily layer-2 (bridging/switching)
services and VLAN segmentation of networks. Essentially, it bridges virtual
networks to physical networks and relies on physical network infrastructure
for layer-3 (routing) services. Additionally, a :term:`DHCP` service provides
IP address information to instances.
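As a hedged sketch of what the ML2 plug-in settings for this option might look like, assuming the Linux bridge mechanism driver, a single flat provider network named ``public``, and a physical interface ``eth1`` (names and paths are illustrative, not taken from this guide):

.. code-block:: ini

   # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
   [ml2]
   # Provider networks only: flat and VLAN, no tenant (self-service) types.
   type_drivers = flat,vlan
   tenant_network_types =
   mechanism_drivers = linuxbridge

   [ml2_type_flat]
   flat_networks = public

   # /etc/neutron/plugins/ml2/linuxbridge_agent.ini (sketch)
   [linux_bridge]
   # Map the provider virtual network to the physical interface.
   physical_interface_mappings = public:eth1

With this layout, the physical network infrastructure supplies routing while
Networking only bridges instances onto the provider network.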
Optionally, the compute node runs a Telemetry agent to collect
meters. Also, it can contain a third network interface on a
separate storage network to improve performance of storage
services.
.. note::
- The optional Block Storage node contains the disks that the Block
Storage service provisions for tenant virtual machine instances.
You can run more than one of these nodes.
This option lacks support for self-service private networks, layer-3
(routing) services, and advanced services such as :term:`LBaaS` and
:term:`FWaaS`. Consider the self-service networks option if you
desire these features.
Optionally, the Block Storage node runs a Telemetry agent to
collect meters. Also, it can contain a second network interface on
a separate storage network to improve performance of storage
services.
.. _figure-network1-services:
- The optional Object Storage nodes contain the disks that the
Object Storage service uses for storing accounts, containers, and
objects. You can run more than two of these nodes. However, the
minimal architecture example requires two nodes.
.. figure:: figures/network1-services.png
:alt: Networking Option 1: Provider networks - Service layout
Optionally, these nodes can contain a second network interface on
a separate storage network to improve performance of storage
services.
.. _network2:
.. note::
Networking Option 2: Self-service networks
------------------------------------------
When you implement this architecture, skip the section
:doc:`networking-neutron`. To use optional services, you might need to
build additional nodes.
The self-service networks option augments the provider networks option
with layer-3 (routing) services that enable
:term:`self-service` networks using overlay segmentation methods such
as :term:`VXLAN`. Essentially, it routes virtual networks to physical networks
using :term:`NAT`. Additionally, this option provides the foundation
for advanced services such as LBaaS and FWaaS.
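As a hedged sketch of the additional settings this option layers on top of the provider configuration, assuming VXLAN overlays with the controller management address ``10.0.0.11`` as the local tunnel endpoint (values are illustrative):

.. code-block:: ini

   # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
   [ml2]
   type_drivers = flat,vlan,vxlan
   # Self-service networks use VXLAN overlay segmentation.
   tenant_network_types = vxlan
   mechanism_drivers = linuxbridge,l2population

   [ml2_type_vxlan]
   vni_ranges = 1:1000

   # /etc/neutron/plugins/ml2/linuxbridge_agent.ini (sketch)
   [vxlan]
   enable_vxlan = True
   # Management-network address of this node; terminates the overlay tunnels.
   local_ip = 10.0.0.11
   l2_population = True

A layer-3 agent then performs the NAT between these overlay networks and the
provider (external) network.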
|
.. _figure-network2-services:
.. _figure-legacy-network-hw:
.. figure:: figures/installguidearch-nova-hw.png
:alt: Minimal architecture example with legacy networking
(nova-network)—Hardware requirements
Figure 1.5 Minimal architecture example with legacy networking
(nova-network)—Hardware requirements
|
.. _figure-nova-networks:
.. figure:: figures/installguidearch-nova-networks.png
:alt: Minimal architecture example with legacy networking
(nova-network)—Network layout
Figure 1.6 Minimal architecture example with legacy networking
(nova-network)—Network layout
|
.. figure:: figures/installguidearch-nova-services.png
:alt: Minimal architecture example with legacy networking
(nova-network)—Service layout
Figure 1.7 Minimal architecture example with legacy networking
(nova-network)—Service layout
.. figure:: figures/network2-services.png
:alt: Networking Option 2: Self-service networks - Service layout

View File

@ -3,5 +3,4 @@ Next steps
==========
Your OpenStack environment now includes Object Storage. You can
:doc:`launch an instance <launch-instance>` or add more services
to your environment in the following chapters.
:ref:`launch-instance` or add more services to your environment.

View File

@ -22,7 +22,7 @@ the Object Storage service on it. Similar to the controller node, each
storage node contains one network interface on the :term:`management network`.
Optionally, each storage node can contain a second network interface on
a separate network for replication. For more information, see
:doc:`basic_environment`.
:ref:`environment`.
#. Configure unique items on the first storage node:
@ -61,8 +61,8 @@ a separate network for replication. For more information, see
Also add this content to the :file:`/etc/hosts` file on all other
nodes in your environment.
* Install and configure :term:`NTP <Network Time Protocol (NTP)>` using
the instructions in :doc:`basics-ntp`.
* Install and configure :term:`NTP` using the instructions in
:ref:`environment-ntp`.
* Install the supporting utility packages: