fix pdf build

The PDF build does not include a file's content multiple times when the
same file is included in more than one toctree. That means we need to
restructure the guide to handle the common parts differently. This
approach merges some of the previously split sections back together,
using inline prose to indicate where minor variations apply for
different operating systems, while retaining separate files for cases
where the differences are significant.

Change-Id: I5d9ff549b05ca4ce54486719d70858589b8fcfa3
Depends-On: Ia750cb049c0f53a234ea70ce1f2bbbb7a2aa9454
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
Doug Hellmann 2017-06-19 10:28:43 -04:00
parent 5be9babf5e
commit e39304d4ae
56 changed files with 627 additions and 2130 deletions

View File

@ -1,76 +0,0 @@
===========
Environment
===========
This section explains how to configure the controller node and one compute
node using the example architecture.
Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.
You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.
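For example, a minimal ``sudoers`` drop-in granting full access to an
administrative account might look like the following. This is an
illustration only; the ``stack`` user name is an assumption, not part of
this guide:
.. code-block:: console
   # echo 'stack ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/stack
   # chmod 0440 /etc/sudoers.d/stack
.. end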
For best performance, we recommend that your environment meet or exceed
the hardware requirements in :ref:`figure-hwreqs`.
The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:
* Controller Node: 1 processor, 4 GB memory, and 5 GB storage
* Compute Node: 1 processor, 2 GB memory, and 10 GB storage
As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for good performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.
To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.
A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.
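For example, preparing a dedicated data disk as an LVM volume group for
the Block Storage service might look like the following sketch, where
``/dev/sdb`` is an assumed device name:
.. code-block:: console
   # pvcreate /dev/sdb
   # vgcreate cinder-volumes /dev/sdb
.. end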
For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:
* One physical server can support multiple nodes, each with almost any
number of network interfaces.
* Ability to take periodic "snapshots" throughout the installation
process and "roll back" to a working configuration in the event of a
problem.
However, VMs will reduce the performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.
.. note::
If you choose to install on VMs, make sure your hypervisor provides
a way to disable MAC address filtering on the provider network
interface.
For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.
.. toctree::
:maxdepth: 1
environment-security.rst
environment-networking.rst
environment-ntp.rst
environment-packages.rst
environment-sql-database.rst
environment-messaging.rst
environment-memcached.rst

View File

@ -1,5 +1,5 @@
Memcached
~~~~~~~~~
Debian Memcached
~~~~~~~~~~~~~~~~
The Identity service authentication mechanism for services uses Memcached
to cache tokens. The memcached service typically runs on the controller

View File

@ -1,5 +1,5 @@
Memcached
~~~~~~~~~
SUSE Memcached
~~~~~~~~~~~~~~
The Identity service authentication mechanism for services uses Memcached
to cache tokens. The memcached service typically runs on the controller

View File

@ -1,5 +1,5 @@
Memcached
~~~~~~~~~
Red Hat Memcached
~~~~~~~~~~~~~~~~~
The Identity service authentication mechanism for services uses Memcached
to cache tokens. The memcached service typically runs on the controller

View File

@ -1,5 +1,5 @@
Memcached
~~~~~~~~~
Ubuntu Memcached
~~~~~~~~~~~~~~~~
The Identity service authentication mechanism for services uses Memcached
to cache tokens. The memcached service typically runs on the controller

View File

@ -1,5 +1,5 @@
Message queue
~~~~~~~~~~~~~
Debian Message queue
~~~~~~~~~~~~~~~~~~~~
OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically

View File

@ -1,5 +1,5 @@
Message queue
~~~~~~~~~~~~~
SUSE Message queue
~~~~~~~~~~~~~~~~~~
OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically

View File

@ -1,5 +1,5 @@
Message queue
~~~~~~~~~~~~~
Red Hat Message queue
~~~~~~~~~~~~~~~~~~~~~
OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically

View File

@ -1,5 +1,5 @@
Message queue
~~~~~~~~~~~~~
Ubuntu Message queue
~~~~~~~~~~~~~~~~~~~~
OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically

View File

@ -1,50 +0,0 @@
Compute node
~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: bash
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``compute1``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,48 +0,0 @@
Compute node
~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
contain the following:
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
.. code-block:: bash
STARTMODE='auto'
BOOTPROTO='static'
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``compute1``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,52 +0,0 @@
Compute node
~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
to contain the following:
Do not change the ``HWADDR`` and ``UUID`` keys.
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
.. code-block:: bash
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``compute1``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,50 +0,0 @@
Compute node
~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: bash
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``compute1``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,7 +1,78 @@
Compute node
~~~~~~~~~~~~
.. toctree::
:glob:
Configure network interfaces
----------------------------
environment-networking-compute-*
#. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
.. note::
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
For Ubuntu or Debian:
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: bash
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
For Red Hat or CentOS:
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
to contain the following:
Do not change the ``HWADDR`` and ``UUID`` keys.
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
.. code-block:: bash
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
.. end
For SUSE:
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
contain the following:
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
.. code-block:: bash
STARTMODE='auto'
BOOTPROTO='static'
.. end
#. Reboot the system to activate the changes.
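After rebooting, you can optionally confirm that the provider interface
came up without an IP address assigned, as intended. Replace
``INTERFACE_NAME`` as before:
.. code-block:: console
   # ip addr show INTERFACE_NAME
.. end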
Configure name resolution
-------------------------
#. Set the hostname of the node to ``compute1``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,46 +0,0 @@
Controller node
~~~~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: bash
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``controller``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,44 +0,0 @@
Controller node
~~~~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
contain the following:
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
.. code-block:: ini
STARTMODE='auto'
BOOTPROTO='static'
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``controller``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,48 +0,0 @@
Controller node
~~~~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
to contain the following:
Do not change the ``HWADDR`` and ``UUID`` keys.
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
.. code-block:: ini
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``controller``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,46 +0,0 @@
Controller node
~~~~~~~~~~~~~~~
Configure network interfaces
----------------------------
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: bash
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``controller``.
#. .. include:: shared/edit_hosts_file.txt

View File

@ -1,7 +1,74 @@
Controller node
~~~~~~~~~~~~~~~
.. toctree::
:glob:
Configure network interfaces
----------------------------
environment-networking-controller-*
#. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
#. The provider interface uses a special configuration without an IP
address assigned to it. Configure the second interface as the provider
interface:
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
*eth1* or *ens224*.
For Ubuntu or Debian:
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: bash
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
For Red Hat or CentOS:
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
to contain the following:
Do not change the ``HWADDR`` and ``UUID`` keys.
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
.. code-block:: ini
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
.. end
For SUSE:
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
contain the following:
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
.. code-block:: ini
STARTMODE='auto'
BOOTPROTO='static'
.. end
#. Reboot the system to activate the changes.
Configure name resolution
-------------------------
#. Set the hostname of the node to ``controller``.
#. .. include:: shared/edit_hosts_file.txt
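The shared ``edit_hosts_file.txt`` snippet is not expanded in this
diff. As an illustration only, the resulting ``/etc/hosts`` entries for
this example architecture would resemble:
.. code-block:: none
   # controller
   10.0.0.11       controller

   # compute1
   10.0.0.31       compute1
.. end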

View File

@ -1,91 +0,0 @@
Host networking
~~~~~~~~~~~~~~~
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation
<https://wiki.debian.org/NetworkConfiguration>`__ .
All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
* Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`.
* Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to
instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
.. note::
Your distribution does not enable a restrictive :term:`firewall` by
default. For more information about securing your environment,
refer to the `OpenStack Security Guide
<https://docs.openstack.org/security-guide/>`_.
.. toctree::
:maxdepth: 1
environment-networking-controller.rst
environment-networking-compute.rst
environment-networking-storage-cinder.rst
environment-networking-verify.rst

View File

@ -1,96 +0,0 @@
Host networking
~~~~~~~~~~~~~~~
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `SLES 12
<https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__
or `openSUSE
<https://doc.opensuse.org/documentation/leap/reference/html/book.opensuse.reference/cha.basicnet.html>`__
documentation.
All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
* Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`.
* Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to
instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
.. note::
Your distribution enables a restrictive :term:`firewall` by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.
.. toctree::
:maxdepth: 1
environment-networking-controller.rst
environment-networking-compute.rst
environment-networking-storage-cinder.rst
environment-networking-verify.rst

View File

@ -1,93 +0,0 @@
Host networking
~~~~~~~~~~~~~~~
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation
<https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Using_the_Command_Line_Interface.html>`__ .
All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
* Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`.
* Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to
instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
.. note::
Your distribution enables a restrictive :term:`firewall` by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.
.. toctree::
:maxdepth: 1
environment-networking-controller.rst
environment-networking-compute.rst
environment-networking-storage-cinder.rst
environment-networking-verify.rst

View File

@ -1,90 +0,0 @@
Host networking
~~~~~~~~~~~~~~~
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://help.ubuntu.com/lts/serverguide/network-configuration.html>`_.
All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
* Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`.
* Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to
instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
.. note::
Your distribution does not enable a restrictive :term:`firewall` by
default. For more information about securing your environment,
refer to the `OpenStack Security Guide
<https://docs.openstack.org/security-guide/>`_.
.. toctree::
:maxdepth: 1
environment-networking-controller.rst
environment-networking-compute.rst
environment-networking-storage-cinder.rst
environment-networking-verify.rst

View File

@ -1,87 +0,0 @@
Verify connectivity
-------------------
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
#. From the *controller* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
Your distribution does not enable a restrictive :term:`firewall` by
default. For more information about securing your environment,
refer to the `OpenStack Security Guide
<https://docs.openstack.org/security-guide/>`_.

View File

@ -1,89 +0,0 @@
Verify connectivity
-------------------
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
#. From the *controller* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
Your distribution enables a restrictive :term:`firewall` by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.

View File

@ -1,89 +0,0 @@
Verify connectivity
-------------------
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
#. From the *controller* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
Your distribution enables a restrictive :term:`firewall` by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.

View File

@ -1,87 +0,0 @@
Verify connectivity
-------------------
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
#. From the *controller* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
Your distribution does not enable a restrictive :term:`firewall` by
default. For more information about securing your environment,
refer to the `OpenStack Security Guide
<https://docs.openstack.org/security-guide/>`_.

View File

@ -1,7 +1,92 @@
Verify connectivity
-------------------
.. toctree::
:glob:
We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.
environment-networking-verify-*
#. From the *controller* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
Red Hat and SUSE enable a restrictive :term:`firewall` by
default. During the installation process, certain steps will fail
unless you alter or disable the firewall. For more information
about securing your environment, refer to the `OpenStack Security
Guide <https://docs.openstack.org/security-guide/>`_.
Debian and Ubuntu do not enable a restrictive :term:`firewall` by
default. For more information about securing your environment,
refer to the `OpenStack Security Guide
<https://docs.openstack.org/security-guide/>`_.

View File

@ -3,9 +3,98 @@
Host networking
~~~~~~~~~~~~~~~
.. toctree::
After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the documentation.
environment-networking-debian
environment-networking-obs
environment-networking-rdo
environment-networking-ubuntu
.. seealso::
* `Debian Network Configuration <https://wiki.debian.org/NetworkConfiguration>`__
* `Ubuntu Network Configuration
<https://help.ubuntu.com/lts/serverguide/network-configuration.html>`__
* `Red Hat Network Configuration
<https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Using_the_Command_Line_Interface.html>`__
* `SLES 12
<https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__
or `openSUSE
<https://doc.opensuse.org/documentation/leap/reference/html/book.opensuse.reference/cha.basicnet.html>`__ Network Configuration
All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.
.. _figure-networklayout:
.. figure:: figures/networklayout.png
:alt: Network layout
The example architectures assume use of the following networks:
* Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all
nodes for administrative purposes such as package installation,
security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`.
* Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to
instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular
network infrastructure.
Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
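To see the interface names on a particular node before editing any
configuration files, you can list them with a command that works on all
of the distributions covered here:
.. code-block:: console
   # ip link show
.. end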
Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
.. warning::
Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
.. note::
Red Hat and SUSE distributions enable a restrictive
:term:`firewall` by default. Ubuntu and Debian do not. For more
information about securing your environment, refer to the
`OpenStack Security Guide
<https://docs.openstack.org/security-guide/>`_.
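As an illustration only, and not a hardening recommendation, on a Red
Hat based test node you might stop the default firewall during the
installation and revisit the rules afterwards:
.. code-block:: console
   # systemctl stop firewalld
   # systemctl disable firewalld
.. end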
.. toctree::
:maxdepth: 1
environment-networking-controller.rst
environment-networking-compute.rst
environment-networking-storage-cinder.rst
environment-networking-verify.rst

View File

@ -1,59 +0,0 @@
Controller node
~~~~~~~~~~~~~~~
Perform these steps on the controller node.
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# apt install chrony
.. end
2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove
these keys as necessary for your environment:
.. code-block:: shell
server NTP_SERVER iburst
.. end
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable, more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
.. note::
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative
servers such as those provided by your organization.
3. To enable other nodes to connect to the chrony daemon on the
controller node, add this key to the ``/etc/chrony/chrony.conf``
file:
.. code-block:: shell
allow 10.0.0.0/24
.. end
4. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. end

View File

@ -1,61 +0,0 @@
Controller node
~~~~~~~~~~~~~~~
Perform these steps on the controller node.
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# zypper install chrony
.. end
2. Edit the ``/etc/chrony.conf`` file and add, change, or remove these
keys as necessary for your environment:
.. code-block:: shell
server NTP_SERVER iburst
.. end
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable, more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
.. note::
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative
servers such as those provided by your organization.
3. To enable other nodes to connect to the chrony daemon on the
controller node, add this key to the ``/etc/chrony.conf`` file:
.. code-block:: shell
allow 10.0.0.0/24
.. end
If necessary, replace ``10.0.0.0/24`` with a description of your subnet.
4. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end

View File

@ -1,61 +0,0 @@
Controller node
~~~~~~~~~~~~~~~
Perform these steps on the controller node.
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# yum install chrony
.. end
2. Edit the ``/etc/chrony.conf`` file and add, change, or remove these
keys as necessary for your environment:
.. code-block:: shell
server NTP_SERVER iburst
.. end
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable, more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
.. note::
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative
servers such as those provided by your organization.
3. To enable other nodes to connect to the chrony daemon on the
controller node, add this key to the ``/etc/chrony.conf`` file:
.. code-block:: shell
allow 10.0.0.0/24
.. end
If necessary, replace ``10.0.0.0/24`` with a description of your subnet.
4. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end

View File

@ -1,59 +0,0 @@
Controller node
~~~~~~~~~~~~~~~
Perform these steps on the controller node.
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# apt install chrony
.. end
2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove
these keys as necessary for your environment:
.. code-block:: shell
server NTP_SERVER iburst
.. end
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable, more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
.. note::
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative
servers such as those provided by your organization.
3. To enable other nodes to connect to the chrony daemon on the
controller node, add this key to the ``/etc/chrony/chrony.conf``
file:
.. code-block:: shell
allow 10.0.0.0/24
.. end
4. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. end

View File

@ -1,9 +1,87 @@
.. _environment-ntp-controller:
Controller node
~~~~~~~~~~~~~~~
=================
Controller node
=================
.. toctree::
:glob:
Perform these steps on the controller node.
environment-ntp-controller-*
Install and configure components
================================
1. Install the packages:
For Debian or Ubuntu:
.. code-block:: console
# apt install chrony
.. end
For Red Hat or CentOS:
.. code-block:: console
# yum install chrony
.. end
For SUSE:
.. code-block:: console
# zypper install chrony
.. end
2. Edit the ``/etc/chrony/chrony.conf`` file (``/etc/chrony.conf`` on Red
Hat and SUSE) and add, change, or remove these keys as necessary for
your environment:
.. code-block:: shell
server NTP_SERVER iburst
.. end
Replace ``NTP_SERVER`` with the hostname or IP address of a
suitable, more accurate (lower stratum) NTP server. The
configuration supports multiple ``server`` keys.
.. note::
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative
servers such as those provided by your organization.
3. To enable other nodes to connect to the chrony daemon on the
controller node, add this key to the ``/etc/chrony/chrony.conf``
file:
.. code-block:: shell
allow 10.0.0.0/24
.. end
If necessary, replace ``10.0.0.0/24`` with a description of your
subnet.
4. Restart the NTP service:
For Debian or Ubuntu:
.. code-block:: console
# service chrony restart
.. end
For Red Hat or SUSE:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end
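Although verification is covered later in the guide, a quick sanity
check on the controller is to ask chrony which sources it is using. The
upstream ``NTP_SERVER`` entries you configured should appear:
.. code-block:: console
   # chronyc sources
.. end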

View File

@ -1,43 +0,0 @@
Other nodes
~~~~~~~~~~~
Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# apt install chrony
.. end
2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
but one ``server`` key. Change it to reference the controller node:
.. path /etc/chrony/chrony.conf
.. code-block:: shell
server controller iburst
.. end
3. Comment out the ``pool 2.debian.pool.ntp.org offline iburst`` line.
4. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. end
View File
@ -1,42 +0,0 @@
Other nodes
~~~~~~~~~~~
Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# zypper install chrony
.. end
2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
``server`` key. Change it to reference the controller node:
.. path /etc/chrony.conf
.. code-block:: shell
server controller iburst
.. end
3. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end
View File
@ -1,42 +0,0 @@
Other nodes
~~~~~~~~~~~
Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# yum install chrony
.. end
2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
``server`` key. Change it to reference the controller node:
.. path /etc/chrony.conf
.. code-block:: shell
server controller iburst
.. end
3. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end
View File
@ -1,43 +0,0 @@
Other nodes
~~~~~~~~~~~
Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.
Install and configure components
--------------------------------
1. Install the packages:
.. code-block:: console
# apt install chrony
.. end
2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
but one ``server`` key. Change it to reference the controller node:
.. path /etc/chrony/chrony.conf
.. code-block:: shell
server controller iburst
.. end
3. Comment out the ``pool 2.debian.pool.ntp.org offline iburst`` line.
4. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. end
View File
@ -1,12 +1,66 @@
.. _environment-ntp-other:
Other nodes
~~~~~~~~~~~
=============
Other nodes
=============
Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.
.. toctree::
:glob:
Install and configure components
================================
environment-ntp-other-*
1. Install the packages.
For Debian or Ubuntu:
.. code-block:: console
# apt install chrony
.. end
For Red Hat:
.. code-block:: console
# yum install chrony
.. end
For SUSE:
.. code-block:: console
# zypper install chrony
.. end
2. Edit the ``/etc/chrony/chrony.conf`` file (``/etc/chrony.conf`` on Red
Hat, CentOS, and SUSE) and comment out or remove all but one ``server``
key. Change it to reference the controller node:
.. path /etc/chrony/chrony.conf
.. code-block:: shell
server controller iburst
.. end
3. On Debian or Ubuntu, comment out the
``pool 2.debian.pool.ntp.org offline iburst`` line.
4. Restart the NTP service (on Red Hat and SUSE, start it and configure
it to start when the system boots).
For Debian or Ubuntu:
.. code-block:: console
# service chrony restart
.. end
For Red Hat or SUSE:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end
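To confirm that a node is tracking the controller, run
:command:`chronyc tracking` on it and check that the ``Reference ID``
line names the controller:
.. code-block:: console
   # chronyc tracking
.. end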
View File
@ -1,81 +0,0 @@
===========
Environment
===========
This section explains how to configure the controller node and one compute
node using the example architecture.
Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.
You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.
The :command:`systemctl enable` call on openSUSE outputs a warning message
when the service uses SysV Init scripts instead of native systemd files. This
warning can be ignored.
For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.
The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:
* Controller Node: 1 processor, 4 GB memory, and 5 GB storage
* Compute Node: 1 processor, 2 GB memory, and 10 GB storage
As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for the best performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.
To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.
A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.
For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:
* One physical server can support multiple nodes, each with almost any
number of network interfaces.
* Ability to take periodic "snapshots" throughout the installation
process and "roll back" to a working configuration in the event of a
problem.
However, VMs will reduce performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.
.. note::
If you choose to install on VMs, make sure your hypervisor provides
a way to disable MAC address filtering on the provider network
interface.
For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.
.. toctree::
:maxdepth: 1
environment-security.rst
environment-networking.rst
environment-ntp.rst
environment-packages.rst
environment-sql-database.rst
environment-messaging.rst
environment-memcached.rst
View File
@ -1,5 +1,5 @@
OpenStack packages
~~~~~~~~~~~~~~~~~~
Debian OpenStack packages
~~~~~~~~~~~~~~~~~~~~~~~~~
Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
View File
@ -1,5 +1,5 @@
OpenStack packages
~~~~~~~~~~~~~~~~~~
SUSE OpenStack packages
~~~~~~~~~~~~~~~~~~~~~~~
Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
View File
@ -1,5 +1,5 @@
OpenStack packages
~~~~~~~~~~~~~~~~~~
Red Hat OpenStack packages
~~~~~~~~~~~~~~~~~~~~~~~~~~
Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
View File
@ -1,5 +1,5 @@
OpenStack packages
~~~~~~~~~~~~~~~~~~
Ubuntu OpenStack packages
~~~~~~~~~~~~~~~~~~~~~~~~~
Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
View File
@ -1,76 +0,0 @@
===========
Environment
===========
This section explains how to configure the controller node and one compute
node using the example architecture.
Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.
You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.
For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.
The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:
* Controller Node: 1 processor, 4 GB memory, and 5 GB storage
* Compute Node: 1 processor, 2 GB memory, and 10 GB storage
As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for the best performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.
To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.
A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.
For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:
* One physical server can support multiple nodes, each with almost any
number of network interfaces.
* Ability to take periodic "snapshots" throughout the installation
process and "roll back" to a working configuration in the event of a
problem.
However, VMs will reduce performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.
.. note::
If you choose to install on VMs, make sure your hypervisor provides
a way to disable MAC address filtering on the provider network
interface.
For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.
.. toctree::
:maxdepth: 1
environment-security.rst
environment-networking.rst
environment-ntp.rst
environment-packages.rst
environment-sql-database.rst
environment-messaging.rst
environment-memcached.rst
View File
@ -1,5 +1,5 @@
SQL database
~~~~~~~~~~~~
Debian SQL database
~~~~~~~~~~~~~~~~~~~
Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
View File
@ -1,5 +1,5 @@
SQL database
~~~~~~~~~~~~
SUSE SQL database
~~~~~~~~~~~~~~~~~
Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
View File
@ -1,5 +1,5 @@
SQL database
~~~~~~~~~~~~
Red Hat SQL database
~~~~~~~~~~~~~~~~~~~~
Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
View File
@ -1,5 +1,5 @@
SQL database
~~~~~~~~~~~~
Ubuntu SQL database
~~~~~~~~~~~~~~~~~~~
Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
View File
@ -1,76 +0,0 @@
===========
Environment
===========
This section explains how to configure the controller node and one compute
node using the example architecture.
Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.
You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.
For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.
The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:
* Controller Node: 1 processor, 4 GB memory, and 5 GB storage
* Compute Node: 1 processor, 2 GB memory, and 10 GB storage
As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for the best performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.
To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.
A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.
For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:
* One physical server can support multiple nodes, each with almost any
number of network interfaces.
* Ability to take periodic "snapshots" throughout the installation
process and "roll back" to a working configuration in the event of a
problem.
However, VMs will reduce performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.
.. note::
If you choose to install on VMs, make sure your hypervisor provides
a way to disable MAC address filtering on the provider network
interface.
For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.
.. toctree::
:maxdepth: 1
environment-security.rst
environment-networking.rst
environment-ntp.rst
environment-packages.rst
environment-sql-database.rst
environment-messaging.rst
environment-memcached.rst
View File
@ -1,12 +1,82 @@
.. _environment:
===========
Environment
===========
.. toctree::
This section explains how to configure the controller node and one compute
node using the example architecture.
environment-debian
environment-obs
environment-rdo
environment-ubuntu
Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.
You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.
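For example, one minimal way to set up ``sudo`` for a hypothetical
administrative user named ``stack`` is a drop-in file under
``/etc/sudoers.d/``:
.. code-block:: console
   # echo "stack ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/stack
   # chmod 440 /etc/sudoers.d/stack
.. end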
.. note::
The :command:`systemctl enable` call on openSUSE outputs a warning
message when the service uses SysV Init scripts instead of native
systemd files. This warning can be ignored.
For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.
The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:
* Controller Node: 1 processor, 4 GB memory, and 5 GB storage
* Compute Node: 1 processor, 2 GB memory, and 10 GB storage
As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for the best performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.
To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.
A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.
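If you do use LVM, the usual preparation is to dedicate a spare disk or
partition to a volume group; assuming an example disk named
``/dev/sdb``, a minimal sketch is:
.. code-block:: console
   # pvcreate /dev/sdb
   # vgcreate cinder-volumes /dev/sdb
.. end
Here ``cinder-volumes`` is the volume group name conventionally used by
the Block Storage service; adjust it to match your configuration.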
For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:
* One physical server can support multiple nodes, each with almost any
number of network interfaces.
* Ability to take periodic "snapshots" throughout the installation
process and "roll back" to a working configuration in the event of a
problem.
However, VMs will reduce performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.
.. note::
If you choose to install on VMs, make sure your hypervisor provides
a way to disable MAC address filtering on the provider network
interface.
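On VirtualBox, for example, this amounts to putting the provider
network NIC into promiscuous mode; the VM name and NIC index below are
assumptions to adapt to your setup:
.. code-block:: console
   $ VBoxManage modifyvm controller --nicpromisc2 allow-all
.. end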
For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.
.. toctree::
:maxdepth: 1
environment-security.rst
environment-networking.rst
environment-ntp.rst
environment-packages.rst
environment-sql-database.rst
environment-messaging.rst
environment-memcached.rst
View File
@ -1,65 +0,0 @@
======================================================================
OpenStack Installation Tutorial for openSUSE and SUSE Linux Enterprise
======================================================================
Abstract
~~~~~~~~
The OpenStack system consists of several key services that are separately
installed. These services work together depending on your cloud
needs and include the Compute, Identity, Networking, Image, Block Storage,
Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.
This guide will show you how to install OpenStack by using packages
on openSUSE Leap 42.2 and SUSE Linux Enterprise Server 12 (SP1 and
SP2) through the Open Build Service Cloud repository.
Explanations of configuration options and sample configuration files
are included.
.. note::
The Training Labs scripts provide an automated way of deploying the
cluster described in this Installation Guide into VirtualBox or KVM
VMs. You will need a desktop computer or a laptop with at least 8
GB memory and 20 GB free storage running Linux, macOS, or Windows.
Please see the
`OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.
This guide documents the OpenStack Ocata release.
.. warning::
This guide is a work in progress and is updated frequently.
Pre-release packages have been used for testing, and some instructions
may not work with final versions. Please help us make this guide better
by reporting any errors you encounter.
Contents
~~~~~~~~
.. toctree::
:maxdepth: 2
common/conventions.rst
overview.rst
environment.rst
launch-instance.rst
common/appendix.rst
.. Pseudo only directive for each distribution used by the build tool.
This pseudo only directive for toctree only works fine with Tox.
When you directly build this guide with Sphinx,
some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
at the openstack-doc-tools repo until it is changed.
.. end of contents
View File
@ -1,66 +0,0 @@
=======================================================================
OpenStack Installation Tutorial for Red Hat Enterprise Linux and CentOS
=======================================================================
Abstract
~~~~~~~~
The OpenStack system consists of several key services that are separately
installed. These services work together depending on your cloud
needs and include the Compute, Identity, Networking, Image, Block Storage,
Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.
This guide will show you how to install OpenStack by using packages
available on Red Hat Enterprise Linux 7 and its derivatives through
the RDO repository.
Explanations of configuration options and sample configuration files
are included.
.. note::
The Training Labs scripts provide an automated way of deploying the
cluster described in this Installation Guide into VirtualBox or KVM
VMs. You will need a desktop computer or a laptop with at least 8
GB memory and 20 GB free storage running Linux, macOS, or Windows.
Please see the
`OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.
This guide documents the OpenStack Ocata release.
.. warning::
This guide is a work in progress and is updated frequently.
Pre-release packages have been used for testing, and some instructions
may not work with final versions. Please help us make this guide better
by reporting any errors you encounter.
Contents
~~~~~~~~
.. toctree::
:maxdepth: 2
common/conventions.rst
overview.rst
environment.rst
launch-instance.rst
common/appendix.rst
.. Pseudo only directive for each distribution used by the build tool.
This pseudo only directive for toctree only works fine with Tox.
When you directly build this guide with Sphinx,
some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
at the openstack-doc-tools repo until it is changed.
.. end of contents
View File
@ -1,64 +0,0 @@
==========================================
OpenStack Installation Tutorial for Ubuntu
==========================================
Abstract
~~~~~~~~
The OpenStack system consists of several key services that are separately
installed. These services work together depending on your cloud
needs and include the Compute, Identity, Networking, Image, Block Storage,
Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.
This guide will walk through an installation by using packages
available through Canonical's Ubuntu Cloud archive repository for
Ubuntu 16.04 (LTS).
Explanations of configuration options and sample configuration files
are included.
.. note::
The Training Labs scripts provide an automated way of deploying the
cluster described in this Installation Guide into VirtualBox or KVM
VMs. You will need a desktop computer or a laptop with at least 8
GB memory and 20 GB free storage running Linux, macOS, or Windows.
Please see the
`OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.
This guide documents the OpenStack Ocata release.
.. warning::
This guide is a work in progress and is updated frequently.
Pre-release packages have been used for testing, and some instructions
may not work with final versions. Please help us make this guide better
by reporting any errors you encounter.
Contents
~~~~~~~~
.. toctree::
:maxdepth: 2
common/conventions.rst
overview.rst
environment.rst
launch-instance.rst
common/appendix.rst
.. Pseudo only directive for each distribution used by the build tool.
This pseudo only directive for toctree only works fine with Tox.
When you directly build this guide with Sphinx,
some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
at the openstack-doc-tools repo until it is changed.
.. end of contents
View File
@ -5,8 +5,10 @@
.. toctree::
:maxdepth: 3
preface
get-started-with-openstack
index-debian
index-obs
index-rdo
index-ubuntu
common/conventions.rst
overview.rst
environment.rst
launch-instance.rst
common/appendix.rst
View File
@ -1,7 +1,6 @@
==========================================
OpenStack Installation Tutorial for Debian
==========================================
=========
Preface
=========
Abstract
~~~~~~~~
@ -13,29 +12,6 @@ Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.
This guide walks through an installation by using packages
available through Debian 8 (code name: Jessie).
.. note::
This guide uses installation with debconf set to non-interactive
mode. That is, there will be no debconf prompt. To configure a computer
to use this mode, run the following command:
.. code-block:: console
# dpkg-reconfigure debconf
.. end
If you prefer to use debconf, refer to the debconf
install-guide for Debian.
Explanations of configuration options and sample configuration files
are included.
@ -56,22 +32,47 @@ This guide documents the OpenStack Ocata release.
may not work with final versions. Please help us make this guide better
by reporting any errors you encounter.
Contents
~~~~~~~~
Operating Systems
~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2
Debian
++++++
common/conventions.rst
overview.rst
environment.rst
launch-instance.rst
common/appendix.rst
This guide walks through an installation by using packages
available through Debian 8 (code name: Jessie).
.. Pseudo only directive for each distribution used by the build tool.
This pseudo only directive for toctree only works fine with Tox.
When you directly build this guide with Sphinx,
some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
at the openstack-doc-tools repo until it is changed.
.. end of contents
.. note::
This guide uses installation with debconf set to non-interactive
mode. That is, there will be no debconf prompt. To configure a computer
to use this mode, run the following command:
.. code-block:: console
# dpkg-reconfigure debconf
.. end
If you prefer to use debconf, refer to the debconf
install-guide for Debian.
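If you prefer to preseed the non-interactive frontend from a script
instead, the standard debconf mechanism is:
.. code-block:: console
   # echo "debconf debconf/frontend select Noninteractive" | debconf-set-selections
.. end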
openSUSE and SUSE Linux Enterprise Server
+++++++++++++++++++++++++++++++++++++++++
This guide will show you how to install OpenStack by using packages
on openSUSE Leap 42.2 and SUSE Linux Enterprise Server 12 (SP1 and
SP2) through the Open Build Service Cloud repository.
Red Hat Enterprise Linux and CentOS
+++++++++++++++++++++++++++++++++++
This guide will show you how to install OpenStack by using packages
available on Red Hat Enterprise Linux 7 and its derivatives through
the RDO repository.
Ubuntu
++++++
This guide will walk through an installation by using packages
available through Canonical's Ubuntu Cloud archive repository for
Ubuntu 16.04 (LTS).