install: Update syntax for the training labs parser.

The training labs parser will allow us to automatically convert RST code
to BASH. This BASH code in turn will be invoked by install-guides for
validating the install guides. To provide the parser with the correct
information for generating BASH code, a few changes to the RST syntax
are required.

This commit introduces the following changes to the RST syntax:

  - `.. end`

    This tag tells the parser where to stop extracting the given
    block, which could be code, a file injection, or a configuration
    file edit.
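
    For example, a console block in these guides is now explicitly
    terminated:

    ```rst
    .. code-block:: console

       # apt-get install cinder-backup

    .. end
    ```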

  - `.. endonly`

    This tag provides the parser with the correct distro-switch logic
    for identifying distro-specific code.

    For `.. only::` tags, it is better to avoid nesting. If nesting
    is unavoidable, it is preferable to add the `.. endonly` tag to
    close the outer block immediately.
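
    For example, a distro-specific block is now closed explicitly:

    ```rst
    .. only:: ubuntu or debian

       #. Install the packages:

          .. code-block:: console

             # apt-get install cinder-backup

          .. end

    .. endonly
    ```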

  - Extra new lines in code-blocks

    Some commands in the code-blocks are followed by the expected
    output of the given command. This output is not a BASH command
    that we want to run, but rather a visual aid for the reader. An
    extra empty line gives the parser the information it needs to
    identify the end of the command: the basic logic is to look for
    the first empty line (for example, a '\n\n' sequence in Python),
    which marks where the command ends and the sample output begins.
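
    A minimal sketch of this heuristic (the function name is
    illustrative, not part of this commit): split a console block at
    the first empty line, treating everything before it as the command
    and everything after it as sample output.

```python
def split_command_output(lines):
    """Split a console code-block into (command_lines, output_lines).

    The first empty line marks the end of the command; anything after
    it is sample output that the parser should not execute.
    """
    for i, line in enumerate(lines):
        if line.strip() == "":
            return lines[:i], lines[i + 1:]
    # No empty line: the whole block is one or more commands.
    return lines, []


commands, output = split_command_output([
    "$ openstack user create --domain default --password-prompt cinder",
    "",
    "User Password:",
    "Repeat User Password:",
])
```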

  - `mysql>`

    This prompt introduces mysql commands. It could potentially be
    changed to `pgsql>` or similar for other SQL-type databases. It
    allows the parser to identify mysql commands and run them in mysql
    instead of in 'sh' or 'bash'.
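
    For example, database statements are now prefixed:

    ```rst
    .. code-block:: console

       mysql> CREATE DATABASE cinder;

    .. end
    ```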

  - `.. path`

    This tag provides the parser with the path of the configuration
    file being edited. Relying on the surrounding description text for
    this is not reliable, since the description text may not be
    consistent.
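
    For example, a configuration file edit now carries its path:

    ```rst
    .. path /etc/cinder/cinder.conf
    .. code-block:: ini

       [database]
       ...
       connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

    .. end
    ```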

This commit should ideally introduce all the syntax changes required
for the parser to convert the code-blocks in these guides to BASH code.
These changes should have no impact on the HTML output of the RST
source.

Change-Id: I47830b1bc61c8b1a0f3350932d15aa3ce88fa672
Pranav Salunke 2016-08-24 02:46:17 +02:00
parent 52d17f2604
commit de38f2767f
46 changed files with 1474 additions and 40 deletions


@ -35,6 +35,8 @@ Command prompts
$ command
.. end
Any user, including the ``root`` user, can run commands that are
prefixed with the ``$`` prompt.
@ -42,6 +44,8 @@ prefixed with the ``$`` prompt.
# command
.. end
The ``root`` user must run commands that are prefixed with the ``#``
prompt. You can also prefix these commands with the :command:`sudo`
command, if available, to run them.


@ -28,6 +28,10 @@ Install and configure components
# zypper install openstack-cinder-backup
.. end
.. endonly
.. only:: rdo
#. Install the packages:
@ -36,6 +40,10 @@ Install and configure components
# yum install openstack-cinder
.. end
.. endonly
.. only:: ubuntu or debian
#. Install the packages:
@ -44,11 +52,16 @@ Install and configure components
# apt-get install cinder-backup
.. end
.. endonly
2. Edit the ``/etc/cinder/cinder.conf`` file
and complete the following actions:
* In the ``[DEFAULT]`` section, configure backup options:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
@ -56,6 +69,8 @@ Install and configure components
backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL
.. end
Replace ``SWIFT_URL`` with the URL of the Object Storage service, typically
``http://10.0.0.51:8080/v1/AUTH_`` if using the installation guide
architecture.
@ -73,6 +88,10 @@ Finalize installation
# systemctl enable openstack-cinder-backup.service
# systemctl start openstack-cinder-backup.service
.. end
.. endonly
.. only:: ubuntu or debian
Restart the Block Storage backup service:
@ -80,3 +99,7 @@ Finalize installation
.. code-block:: console
# service cinder-backup restart
.. end
.. endonly


@ -23,21 +23,27 @@ must create a database, service credentials, and API endpoints.
$ mysql -u root -p
.. end
* Create the ``cinder`` database:
.. code-block:: console
CREATE DATABASE cinder;
mysql> CREATE DATABASE cinder;
.. end
* Grant proper access to the ``cinder`` database:
.. code-block:: console
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
.. end
Replace ``CINDER_DBPASS`` with a suitable password.
* Exit the database access client.
@ -49,6 +55,8 @@ must create a database, service credentials, and API endpoints.
$ . admin-openrc
.. end
#. To create the service credentials, complete these steps:
* Create a ``cinder`` user:
@ -56,6 +64,7 @@ must create a database, service credentials, and API endpoints.
.. code-block:: console
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+-----------+----------------------------------+
@ -67,12 +76,16 @@ must create a database, service credentials, and API endpoints.
| name | cinder |
+-----------+----------------------------------+
.. end
* Add the ``admin`` role to the ``cinder`` user:
.. code-block:: console
$ openstack role add --project service --user cinder admin
.. end
.. note::
This command provides no output.
@ -83,6 +96,7 @@ must create a database, service credentials, and API endpoints.
$ openstack service create --name cinder \
--description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
@ -93,10 +107,13 @@ must create a database, service credentials, and API endpoints.
| type | volume |
+-------------+----------------------------------+
.. end
.. code-block:: console
$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
@ -107,6 +124,8 @@ must create a database, service credentials, and API endpoints.
| type | volumev2 |
+-------------+----------------------------------+
.. end
.. note::
The Block Storage services require two service entities.
@ -117,6 +136,7 @@ must create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
@ -133,6 +153,7 @@ must create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
@ -149,6 +170,7 @@ must create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
@ -163,10 +185,13 @@ must create a database, service credentials, and API endpoints.
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
.. end
.. code-block:: console
$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
@ -183,6 +208,7 @@ must create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
@ -199,6 +225,7 @@ must create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
@ -213,6 +240,8 @@ must create a database, service credentials, and API endpoints.
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
.. end
.. note::
The Block Storage services require endpoints for each service
@ -229,6 +258,10 @@ Install and configure components
# zypper install openstack-cinder-api openstack-cinder-scheduler
.. end
.. endonly
.. only:: rdo
#. Install the packages:
@ -237,6 +270,10 @@ Install and configure components
# yum install openstack-cinder
.. end
.. endonly
.. only:: ubuntu or debian
#. Install the packages:
@ -245,23 +282,31 @@ Install and configure components
# apt-get install cinder-api cinder-scheduler
.. end
.. endonly
2. Edit the ``/etc/cinder/cinder.conf`` file and complete the
following actions:
* In the ``[database]`` section, configure database access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[database]
...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
.. end
Replace ``CINDER_DBPASS`` with the password you chose for the
Block Storage database.
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure ``RabbitMQ`` message queue access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
@ -274,12 +319,15 @@ Install and configure components
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in ``RabbitMQ``.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
@ -298,6 +346,8 @@ Install and configure components
username = cinder
password = CINDER_PASS
.. end
Replace ``CINDER_PASS`` with the password you chose for
the ``cinder`` user in the Identity service.
@ -309,22 +359,30 @@ Install and configure components
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
use the management interface IP address of the controller node:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
...
my_ip = 10.0.0.11
.. end
.. only:: obs or rdo or ubuntu
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
.. end
.. endonly
.. only:: rdo or ubuntu or debian
3. Populate the Block Storage database:
@ -333,21 +391,28 @@ Install and configure components
# su -s /bin/sh -c "cinder-manage db sync" cinder
.. end
.. note::
Ignore any deprecation messages in this output.
.. endonly
Configure Compute to use Block Storage
--------------------------------------
* Edit the ``/etc/nova/nova.conf`` file and add the following
to it:
.. path /etc/nova/nova.conf
.. code-block:: ini
[cinder]
os_region_name = RegionOne
.. end
Finalize installation
---------------------
@ -359,6 +424,8 @@ Finalize installation
# systemctl restart openstack-nova-api.service
.. end
#. Start the Block Storage services and configure them to start when
the system boots:
@ -367,6 +434,10 @@ Finalize installation
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
.. end
.. endonly
.. only:: ubuntu or debian
#. Restart the Compute API service:
@ -375,9 +446,15 @@ Finalize installation
# service nova-api restart
.. end
#. Restart the Block Storage services:
.. code-block:: console
# service cinder-scheduler restart
# service cinder-api restart
.. end
.. endonly


@ -35,6 +35,8 @@ storage node, you must prepare the storage device.
# zypper install lvm2
.. end
* (Optional) If you intend to use non-raw image types such as QCOW2
and VMDK, install the QEMU package:
@ -42,6 +44,10 @@ storage node, you must prepare the storage device.
# zypper install qemu
.. end
.. endonly
.. only:: rdo
* Install the LVM packages:
@ -50,6 +56,8 @@ storage node, you must prepare the storage device.
# yum install lvm2
.. end
* Start the LVM metadata service and configure it to start when the
system boots:
@ -58,12 +66,20 @@ storage node, you must prepare the storage device.
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
.. end
.. endonly
.. only:: ubuntu
.. code-block:: console
# apt-get install lvm2
.. end
.. endonly
.. note::
Some distributions include LVM by default.
@ -73,15 +89,21 @@ storage node, you must prepare the storage device.
.. code-block:: console
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
.. end
#. Create the LVM volume group ``cinder-volumes``:
.. code-block:: console
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
.. end
The Block Storage service creates logical volumes in this volume group.
#. Only instances can access Block Storage volumes. However, the
@ -98,12 +120,15 @@ storage node, you must prepare the storage device.
* In the ``devices`` section, add a filter that accepts the
``/dev/sdb`` device and rejects all other devices:
.. path /etc/lvm/lvm.conf
.. code-block:: ini
devices {
...
filter = [ "a/sdb/", "r/.*/"]
.. end
Each item in the filter array begins with ``a`` for **accept** or
``r`` for **reject** and includes a regular expression for the
device name. The array must end with ``r/.*/`` to reject any
@ -116,20 +141,26 @@ storage node, you must prepare the storage device.
must also add the associated device to the filter. For example,
if the ``/dev/sda`` device contains the operating system:
.. ignore_path /etc/lvm/lvm.conf
.. code-block:: ini
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
.. end
Similarly, if your compute nodes use LVM on the operating
system disk, you must also modify the filter in the
``/etc/lvm/lvm.conf`` file on those nodes to include only
the operating system disk. For example, if the ``/dev/sda``
device contains the operating system:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
filter = [ "a/sda/", "r/.*/"]
.. end
Install and configure components
--------------------------------
@ -141,6 +172,10 @@ Install and configure components
# zypper install openstack-cinder-volume tgt
.. end
.. endonly
.. only:: rdo
#. Install the packages:
@ -149,6 +184,10 @@ Install and configure components
# yum install openstack-cinder targetcli python-keystone
.. end
.. endonly
.. only:: ubuntu or debian
#. Install the packages:
@ -157,23 +196,31 @@ Install and configure components
# apt-get install cinder-volume
.. end
.. endonly
2. Edit the ``/etc/cinder/cinder.conf`` file
and complete the following actions:
* In the ``[database]`` section, configure database access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[database]
...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
.. end
Replace ``CINDER_DBPASS`` with the password you chose for
the Block Storage database.
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure ``RabbitMQ`` message queue access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
@ -186,12 +233,15 @@ Install and configure components
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
.. end
Replace ``RABBIT_PASS`` with the password you chose for
the ``openstack`` account in ``RabbitMQ``.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
@ -210,6 +260,8 @@ Install and configure components
username = cinder
password = CINDER_PASS
.. end
Replace ``CINDER_PASS`` with the password you chose for the
``cinder`` user in the Identity service.
@ -220,12 +272,15 @@ Install and configure components
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
.. end
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network interface on your storage node,
typically 10.0.0.41 for the first node in the
@ -237,6 +292,7 @@ Install and configure components
LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
and appropriate iSCSI service:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[lvm]
@ -246,12 +302,17 @@ Install and configure components
iscsi_protocol = iscsi
iscsi_helper = tgtadm
.. end
.. endonly
.. only:: rdo
* In the ``[lvm]`` section, configure the LVM back end with the
LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
and appropriate iSCSI service:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[lvm]
@ -261,14 +322,21 @@ Install and configure components
iscsi_protocol = iscsi
iscsi_helper = lioadm
.. end
.. endonly
* In the ``[DEFAULT]`` section, enable the LVM back end:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
...
enabled_backends = lvm
.. end
.. note::
Back-end names are arbitrary. As an example, this guide
@ -277,20 +345,26 @@ Install and configure components
* In the ``[DEFAULT]`` section, configure the location of the
Image service API:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
...
glance_api_servers = http://controller:9292
.. end
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
.. end
.. only:: obs
3. Create the ``/etc/tgt/conf.d/cinder.conf`` file
@ -300,6 +374,10 @@ Install and configure components
include /var/lib/cinder/volumes/*
.. end
.. endonly
Finalize installation
---------------------
@ -313,6 +391,10 @@ Finalize installation
# systemctl enable openstack-cinder-volume.service tgtd.service
# systemctl start openstack-cinder-volume.service tgtd.service
.. end
.. endonly
.. only:: rdo
* Start the Block Storage volume service including its dependencies
@ -323,6 +405,10 @@ Finalize installation
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
.. end
.. endonly
.. only:: ubuntu or debian
#. Restart the Block Storage volume service including its dependencies:
@ -331,3 +417,7 @@ Finalize installation
# service tgt restart
# service cinder-volume restart
.. end
.. endonly


@ -16,11 +16,14 @@ Verify operation of the Block Storage service.
$ . admin-openrc
.. end
#. List service components to verify successful launch of each process:
.. code-block:: console
$ cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
@ -29,6 +32,8 @@ Verify operation of the Block Storage service.
| cinder-backup | block1 | nova | enabled | up | 2014-10-18T01:30:59.000000 | None |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
.. end
.. note::
The ``cinder-backup`` service only appears if you :ref:`cinder-backup-install`.


@ -17,18 +17,30 @@ Install and configure components
# apt-get install memcached python-memcache
.. end
.. endonly
.. only:: rdo
.. code-block:: console
# yum install memcached python-memcached
.. end
.. endonly
.. only:: obs
.. code-block:: console
# zypper install memcached python-python-memcached
.. end
.. endonly
.. only:: ubuntu or debian
2. Edit the ``/etc/memcached.conf`` file and configure the
@ -39,10 +51,14 @@ Install and configure components
-l 10.0.0.11
.. end
.. note::
Change the existing line with ``-l 127.0.0.1``.
.. endonly
Finalize installation
---------------------
@ -54,6 +70,10 @@ Finalize installation
# service memcached restart
.. end
.. endonly
.. only:: rdo or obs
* Start the Memcached service and configure it to start when the system
@ -63,3 +83,7 @@ Finalize installation
# systemctl enable memcached.service
# systemctl start memcached.service
.. end
.. endonly


@ -25,18 +25,30 @@ Install and configure components
# apt-get install rabbitmq-server
.. end
.. endonly
.. only:: rdo
.. code-block:: console
# yum install rabbitmq-server
.. end
.. endonly
.. only:: obs
.. code-block:: console
# zypper install rabbitmq-server
.. end
.. endonly
.. only:: rdo or obs
2. Start the message queue service and configure it to start when the
@ -47,13 +59,18 @@ Install and configure components
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
.. end
3. Add the ``openstack`` user:
.. code-block:: console
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
.. end
Replace ``RABBIT_PASS`` with a suitable password.
4. Permit configuration, write, and read access for the
@ -62,8 +79,13 @@ Install and configure components
.. code-block:: console
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
.. end
.. endonly
.. only:: ubuntu or debian
2. Add the ``openstack`` user:
@ -71,9 +93,12 @@ Install and configure components
.. code-block:: console
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
...done.
.. end
Replace ``RABBIT_PASS`` with a suitable password.
3. Permit configuration, write, and read access for the
@ -82,5 +107,10 @@ Install and configure components
.. code-block:: console
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
.. end
.. endonly


@ -27,6 +27,7 @@ Configure network interfaces
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: ini
# The provider network interface
@ -35,6 +36,10 @@ Configure network interfaces
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
.. endonly
.. only:: rdo
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
@ -42,6 +47,7 @@ Configure network interfaces
Do not change the ``HWADDR`` and ``UUID`` keys.
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
.. code-block:: ini
DEVICE=INTERFACE_NAME
@ -49,16 +55,25 @@ Configure network interfaces
ONBOOT="yes"
BOOTPROTO="none"
.. end
.. endonly
.. only:: obs
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
contain the following:
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
.. code-block:: ini
STARTMODE='auto'
BOOTPROTO='static'
.. end
.. endonly
#. Reboot the system to activate the changes.
Configure name resolution


@ -23,6 +23,7 @@ Configure network interfaces
* Edit the ``/etc/network/interfaces`` file to contain the following:
.. path /etc/network/interfaces
.. code-block:: ini
# The provider network interface
@ -31,6 +32,10 @@ Configure network interfaces
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
.. endonly
.. only:: rdo
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
@ -38,6 +43,7 @@ Configure network interfaces
Do not change the ``HWADDR`` and ``UUID`` keys.
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
.. code-block:: ini
DEVICE=INTERFACE_NAME
@ -45,16 +51,25 @@ Configure network interfaces
ONBOOT="yes"
BOOTPROTO="none"
.. end
.. endonly
.. only:: obs
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
contain the following:
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
.. code-block:: ini
STARTMODE='auto'
BOOTPROTO='static'
.. end
.. endonly
#. Reboot the system to activate the changes.
Configure name resolution


@ -9,6 +9,7 @@ among the nodes before proceeding further.
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
@ -19,12 +20,15 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
.. code-block:: console
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
@ -35,11 +39,14 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
@ -50,12 +57,15 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
.. code-block:: console
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
@ -66,6 +76,8 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
.. only:: rdo or obs
@ -76,9 +88,13 @@ among the nodes before proceeding further.
information about securing your environment, refer to the
`OpenStack Security Guide <http://docs.openstack.org/sec/>`__.
.. endonly
.. only:: ubuntu or debian
Your distribution does not enable a restrictive :term:`firewall`
by default. For more information about securing your environment,
refer to the
`OpenStack Security Guide <http://docs.openstack.org/sec/>`__.
.. endonly


@ -12,6 +12,8 @@ Host networking
For more information on how to configure networking on your
distribution, see the `documentation <https://help.ubuntu.com/lts/serverguide/network-configuration.html>`__ .
.. endonly
.. only:: debian
After installing the operating system on each node for the architecture
@ -21,6 +23,8 @@ Host networking
For more information on how to configure networking on your
distribution, see the `documentation <https://wiki.debian.org/NetworkConfiguration>`__ .
.. endonly
.. only:: rdo
After installing the operating system on each node for the architecture
@ -30,6 +34,8 @@ Host networking
For more information on how to configure networking on your
distribution, see the `documentation <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Using_the_Command_Line_Interface.html>`__ .
.. endonly
.. only:: obs
After installing the operating system on each node for the architecture
@ -39,6 +45,8 @@ Host networking
For more information on how to configure networking on your
distribution, see the `SLES 12 <https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__ or `openSUSE <http://activedoc.opensuse.org/book/opensuse-reference/chapter-13-basic-networking>`__ documentation.
.. endonly
All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
@ -109,6 +117,8 @@ the controller node.
information about securing your environment, refer to the
`OpenStack Security Guide <http://docs.openstack.org/sec/>`__.
.. endonly
.. only:: ubuntu or debian
Your distribution does not enable a restrictive :term:`firewall`
@ -116,6 +126,8 @@ the controller node.
refer to the
`OpenStack Security Guide <http://docs.openstack.org/sec/>`__.
.. endonly
.. toctree::
:maxdepth: 1


@ -16,12 +16,20 @@ Install and configure components
# apt-get install chrony
.. end
.. endonly
.. only:: rdo
.. code-block:: console
# yum install chrony
.. end
.. endonly
.. only:: obs
On openSUSE:
@ -32,6 +40,8 @@ Install and configure components
# zypper refresh
# zypper install chrony
.. end
On SLES:
.. code-block:: console
@ -40,6 +50,8 @@ Install and configure components
# zypper refresh
# zypper install chrony
.. end
.. note::
The packages are signed by GPG key ``17280DDF``. You should
@ -52,6 +64,10 @@ Install and configure components
Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC
.. end
.. endonly
.. only:: ubuntu or debian
2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove the
@ -61,6 +77,8 @@ Install and configure components
server NTP_SERVER iburst
.. end
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
@ -77,6 +95,10 @@ Install and configure components
# service chrony restart
.. end
.. endonly
.. only:: rdo or obs
2. Edit the ``/etc/chrony.conf`` file and add, change, or remove the
@ -86,6 +108,8 @@ Install and configure components
server NTP_SERVER iburst
.. end
Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
@ -103,6 +127,8 @@ Install and configure components
allow 10.0.0.0/24
.. end
If necessary, replace ``10.0.0.0/24`` with a description of your subnet.
4. Start the NTP service and configure it to start when the system boots:
@ -111,3 +137,7 @@ Install and configure components
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end
.. endonly


@ -17,12 +17,20 @@ Install and configure components
# apt-get install chrony
.. end
.. endonly
.. only:: rdo
.. code-block:: console
# yum install chrony
.. end
.. endonly
.. only:: obs
On openSUSE:
@ -33,6 +41,8 @@ Install and configure components
# zypper refresh
# zypper install chrony
.. end
On SLES:
.. code-block:: console
@ -41,6 +51,8 @@ Install and configure components
# zypper refresh
# zypper install chrony
.. end
.. note::
The packages are signed by GPG key ``17280DDF``. You should
@ -53,33 +65,51 @@ Install and configure components
Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC
.. end
.. endonly
.. only:: ubuntu or debian
2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
but one ``server`` key. Change it to reference the controller node:
.. path /etc/chrony/chrony.conf
.. code-block:: ini
server controller iburst
.. end
3. Restart the NTP service:
.. code-block:: console
# service chrony restart
.. end
.. endonly
.. only:: rdo or obs
2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
``server`` key. Change it to reference the controller node:
.. path /etc/chrony.conf
.. code-block:: ini
server controller iburst
.. end
3. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end
.. endonly


@ -12,12 +12,15 @@ node, can take several minutes to synchronize.
.. code-block:: console
# chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11 2 7 12 137 -2814us[-3000us] +/- 43ms
^* 192.0.2.12 2 6 177 46 +17us[ -23us] +/- 68ms
.. end
Contents in the *Name/IP address* column should indicate the hostname or IP
address of one or more NTP servers. Contents in the *MS* column should
indicate *\** for the server to which the NTP service is currently
synchronized.
@ -27,10 +30,13 @@ node, can take several minutes to synchronize.
.. code-block:: console
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 3 9 377 421 +15us[ -87us] +/- 15ms
.. end
Contents in the *Name/IP address* column should indicate the hostname of the
controller node.
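If this verification is scripted (the helper name below is invented for illustration), the ``^*`` marker at the start of a source line is what to look for:

```shell
# Succeeds when the chronyc output contains a currently synchronized
# source, which chronyc marks with "^*" at the start of the line.
is_synchronized() {
    grep -q '^\^\*'
}

# A captured sample line stands in for a live "chronyc sources" call:
sample='^* controller                 3   9   377   421    +15us[  -87us] +/-   15ms'
if printf '%s\n' "$sample" | is_synchronized; then
    echo "NTP synchronized"
fi
```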

View File

@ -30,6 +30,8 @@ these procedures on all nodes.
# apt-get install software-properties-common
# add-apt-repository cloud-archive:newton
.. end
.. note::
For pre-release testing, use the staging repository:
@ -38,6 +40,10 @@ these procedures on all nodes.
# add-apt-repository cloud-archive:newton-proposed
.. end
.. endonly
.. only:: rdo
Prerequisites
@ -60,12 +66,16 @@ these procedures on all nodes.
# subscription-manager register --username="USERNAME" --password="PASSWORD"
.. end
#. Find entitlement pools containing the channels for your RHEL system:
.. code-block:: console
# subscription-manager list --available
.. end
#. Use the pool identifiers found in the previous step to attach your RHEL
entitlements:
@ -73,6 +83,8 @@ these procedures on all nodes.
# subscription-manager attach --pool="POOLID"
.. end
#. Enable required repositories:
.. code-block:: console
@ -80,6 +92,10 @@ these procedures on all nodes.
# subscription-manager repos --enable=rhel-7-server-optional-rpms \
--enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms
.. end
.. endonly
.. only:: rdo
Enable the OpenStack repository
@ -94,6 +110,8 @@ these procedures on all nodes.
# yum install centos-release-openstack-newton
.. end
* On RHEL, download and install the RDO repository RPM to enable the
OpenStack repository.
@ -101,6 +119,8 @@ these procedures on all nodes.
# yum install https://rdoproject.org/repos/rdo-release.rpm
.. end
.. only:: obs
Enable the OpenStack repository
@ -115,6 +135,8 @@ these procedures on all nodes.
# zypper addrepo -f obs://Cloud:OpenStack:Newton/openSUSE_Leap_42.1 Newton
.. end
.. note::
The openSUSE distribution uses the concept of patterns to
@ -128,12 +150,16 @@ these procedures on all nodes.
# zypper rm patterns-openSUSE-minimal_base-conflicts
.. end
**On SLES:**
.. code-block:: console
# zypper addrepo -f obs://Cloud:OpenStack:Newton/SLE_12_SP2 Newton
.. end
.. note::
The packages are signed by GPG key ``D85F9316``. You should
@ -146,6 +172,10 @@ these procedures on all nodes.
Key Created: 2015-12-16T16:48:37 CET
Key Expires: 2018-02-23T16:48:37 CET
.. end
.. endonly
.. only:: debian
Enable the backports repository
@ -165,6 +195,8 @@ these procedures on all nodes.
# echo "deb http://http.debian.net/debian jessie-backports main" \
>>/etc/apt/sources.list
.. end
.. note::
Later you can use the following command to install a package:
@ -173,6 +205,10 @@ these procedures on all nodes.
# apt-get -t jessie-backports install ``PACKAGE``
.. end
.. endonly
Finalize the installation
-------------------------
@ -184,18 +220,30 @@ Finalize the installation
# apt-get update && apt-get dist-upgrade
.. end
.. endonly
.. only:: rdo
.. code-block:: console
# yum upgrade
.. end
.. endonly
.. only:: obs
.. code-block:: console
# zypper refresh && zypper dist-upgrade
.. end
.. endonly
.. note::
If the upgrade process includes a new kernel, reboot your host
@ -209,18 +257,30 @@ Finalize the installation
# apt-get install python-openstackclient
.. end
.. endonly
.. only:: rdo
.. code-block:: console
# yum install python-openstackclient
.. end
.. endonly
.. only:: obs
.. code-block:: console
# zypper install python-openstackclient
.. end
.. endonly
.. only:: rdo
3. RHEL and CentOS enable :term:`SELinux` by default. Install the
@ -230,3 +290,7 @@ Finalize the installation
.. code-block:: console
# yum install openstack-selinux
.. end
.. endonly

View File

@ -15,6 +15,8 @@ following command:
$ openssl rand -hex 10
.. end
For OpenStack services, this guide uses ``SERVICE_PASS`` to reference
service account passwords and ``SERVICE_DBPASS`` to reference database
passwords.
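As a small sketch of putting that command to work (the variable names are placeholders chosen for this example, and ``openssl`` is assumed to be installed):

```shell
# Generate random passwords for two of the accounts the guide mentions.
# "openssl rand -hex 10" emits 10 random bytes as 20 hex characters.
KEYSTONE_DBPASS=$(openssl rand -hex 10)
GLANCE_DBPASS=$(openssl rand -hex 10)

# Confirm the expected length before using the values anywhere.
echo "${#KEYSTONE_DBPASS} ${#GLANCE_DBPASS}"
```

This prints ``20 20``, confirming each value is 20 hex characters long.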

View File

@ -18,24 +18,40 @@ Install and configure components
# apt-get install mariadb-server python-pymysql
.. end
.. endonly
.. only:: debian
.. code-block:: console
# apt-get install mysql-server python-pymysql
.. end
.. endonly
.. only:: rdo
.. code-block:: console
# yum install mariadb mariadb-server python2-PyMySQL
.. end
.. endonly
.. only:: obs
.. code-block:: console
# zypper install mariadb-client mariadb python-PyMySQL
.. end
.. endonly
.. only:: debian
2. Choose a suitable password for the database ``root`` account.
@ -49,6 +65,7 @@ Install and configure components
additional keys to enable useful options and the UTF-8
character set:
.. path /etc/mysql/conf.d/openstack.cnf
.. code-block:: ini
[mysqld]
@ -60,6 +77,10 @@ Install and configure components
collation-server = utf8_general_ci
character-set-server = utf8
.. end
.. endonly
.. only:: ubuntu
2. Create and edit the ``/etc/mysql/mariadb.conf.d/99-openstack.cnf`` file
@ -81,6 +102,9 @@ Install and configure components
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
.. end
.. endonly
.. only:: obs or rdo
@ -93,6 +117,7 @@ Install and configure components
additional keys to enable useful options and the UTF-8
character set:
.. path /etc/my.cnf.d/openstack.cnf
.. code-block:: ini
[mysqld]
@ -104,6 +129,10 @@ Install and configure components
collation-server = utf8_general_ci
character-set-server = utf8
.. end
.. endonly
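The ``.. path`` directive above is what makes the target file machine-readable; the parser no longer has to guess the filename from the surrounding prose. A toy sketch of reading it (this is not the actual parser code):

```shell
# Toy sketch: pull the target filename out of the first ".. path"
# directive on standard input.
get_config_path() {
    awk '/^\.\. path / { print $3; exit }'
}

printf '%s\n' '.. path /etc/mysql/conf.d/openstack.cnf' | get_config_path
```

This prints ``/etc/mysql/conf.d/openstack.cnf``, the file the following ini block should be written to.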
Finalize installation
---------------------
@ -115,6 +144,10 @@ Finalize installation
# service mysql restart
.. end
.. endonly
.. only:: rdo or obs
#. Start the database service and configure it to start when the system
@ -127,6 +160,10 @@ Finalize installation
# systemctl enable mariadb.service
# systemctl start mariadb.service
.. end
.. endonly
.. only:: obs
.. code-block:: console
@ -134,6 +171,10 @@ Finalize installation
# systemctl enable mysql.service
# systemctl start mysql.service
.. end
.. endonly
.. only:: rdo or obs or ubuntu
2. Secure the database service by running the ``mysql_secure_installation``
@ -143,3 +184,7 @@ Finalize installation
.. code-block:: console
# mysql_secure_installation
.. end
.. endonly
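The ``.. only::``/``.. endonly`` pairs above are easiest for a parser when they balance without nesting. A toy balance check along the lines the parser needs (not the actual implementation):

```shell
# Toy sketch: verify that ".. only::" openers and ".. endonly" closers
# balance, the way the parser must track distro-specific regions.
check_balance() {
    awk '
        /^\.\. only::/  { depth++ }
        /^\.\. endonly/ { depth-- }
        END             { print (depth == 0 ? "balanced" : "unbalanced") }
    '
}

check_balance <<'EOF'
.. only:: rdo or obs or ubuntu
.. code-block:: console
# mysql_secure_installation
.. end
.. endonly
EOF
```

For the fragment above this prints ``balanced``; a missing ``.. endonly`` would print ``unbalanced``.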

View File

@ -24,6 +24,8 @@ utility.
when the service uses SysV Init scripts instead of native systemd files. This
warning can be ignored.
.. endonly
For best performance, we recommend that your environment meet or exceed
the hardware requirements in :ref:`figure-hwreqs`.

View File

@ -20,21 +20,27 @@ create a database, service credentials, and API endpoints.
$ mysql -u root -p
.. end
* Create the ``glance`` database:
.. code-block:: console
CREATE DATABASE glance;
mysql> CREATE DATABASE glance;
.. end
* Grant proper access to the ``glance`` database:
.. code-block:: console
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
.. end
Replace ``GLANCE_DBPASS`` with a suitable password.
* Exit the database access client.
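The ``mysql>`` prefix above is what tells the parser which lines should be fed to the MySQL client rather than to a shell. A rough sketch of that routing (the heredoc stands in for the RST block; continuation lines such as the backslash-wrapped ``GRANT`` arguments are ignored in this toy version, which the real parser must handle):

```shell
# Toy sketch: select the "mysql>"-prefixed lines and strip the prefix,
# yielding SQL that could be piped to the mysql client.
extract_sql() {
    sed -n 's/^mysql> //p'
}

extract_sql <<'EOF'
mysql> CREATE DATABASE glance;
.. end
EOF
# With a live server the result could be replayed as:
#   extract_sql < block.rst | mysql -u root -p
```

For the fragment above this prints ``CREATE DATABASE glance;``.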
@ -46,6 +52,8 @@ create a database, service credentials, and API endpoints.
$ . admin-openrc
.. end
#. To create the service credentials, complete these steps:
* Create the ``glance`` user:
@ -53,6 +61,7 @@ create a database, service credentials, and API endpoints.
.. code-block:: console
$ openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+-----------+----------------------------------+
@ -64,6 +73,8 @@ create a database, service credentials, and API endpoints.
| name | glance |
+-----------+----------------------------------+
.. end
* Add the ``admin`` role to the ``glance`` user and
``service`` project:
@ -71,6 +82,8 @@ create a database, service credentials, and API endpoints.
$ openstack role add --project service --user glance admin
.. end
.. note::
This command provides no output.
@ -81,6 +94,7 @@ create a database, service credentials, and API endpoints.
$ openstack service create --name glance \
--description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
@ -91,12 +105,15 @@ create a database, service credentials, and API endpoints.
| type | image |
+-------------+----------------------------------+
.. end
#. Create the Image service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
@ -113,6 +130,7 @@ create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
@ -129,6 +147,7 @@ create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
@ -143,6 +162,8 @@ create a database, service credentials, and API endpoints.
| url | http://controller:9292 |
+--------------+----------------------------------+
.. end
Install and configure components
--------------------------------
@ -156,6 +177,10 @@ Install and configure components
# zypper install openstack-glance
.. end
.. endonly
.. only:: rdo
#. Install the packages:
@ -164,6 +189,10 @@ Install and configure components
# yum install openstack-glance
.. end
.. endonly
.. only:: ubuntu or debian
#. Install the packages:
@ -172,23 +201,31 @@ Install and configure components
# apt-get install glance
.. end
.. endonly
2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
following actions:
* In the ``[database]`` section, configure database access:
.. path /etc/glance/glance-api.conf
.. code-block:: ini
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
.. end
Replace ``GLANCE_DBPASS`` with the password you chose for the
Image service database.
* In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
configure Identity service access:
.. path /etc/glance/glance-api.conf
.. code-block:: ini
[keystone_authtoken]
@ -207,6 +244,8 @@ Install and configure components
...
flavor = keystone
.. end
Replace ``GLANCE_PASS`` with the password you chose for the
``glance`` user in the Identity service.
@ -218,6 +257,7 @@ Install and configure components
* In the ``[glance_store]`` section, configure the local file
system store and location of image files:
.. path /etc/glance/glance-api.conf
.. code-block:: ini
[glance_store]
@ -226,23 +266,29 @@ Install and configure components
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
.. end
3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
the following actions:
* In the ``[database]`` section, configure database access:
.. path /etc/glance/glance-registry.conf
.. code-block:: ini
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
.. end
Replace ``GLANCE_DBPASS`` with the password you chose for the
Image service database.
* In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
configure Identity service access:
.. path /etc/glance/glance-registry.conf
.. code-block:: ini
[keystone_authtoken]
@ -261,6 +307,8 @@ Install and configure components
...
flavor = keystone
.. end
Replace ``GLANCE_PASS`` with the password you chose for the
``glance`` user in the Identity service.
@ -277,10 +325,14 @@ Install and configure components
# su -s /bin/sh -c "glance-manage db_sync" glance
.. end
.. note::
Ignore any deprecation messages in this output.
.. endonly
Finalize installation
---------------------
@ -296,6 +348,10 @@ Finalize installation
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service
.. end
.. endonly
.. only:: ubuntu or debian
#. Restart the Image services:
@ -304,3 +360,7 @@ Finalize installation
# service glance-registry restart
# service glance-api restart
.. end
.. endonly

View File

@ -23,12 +23,16 @@ For information about how to manage images, see the
$ . admin-openrc
.. end
#. Download the source image:
.. code-block:: console
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
.. end
.. note::
Install ``wget`` if your distribution does not include it.
@ -43,6 +47,7 @@ For information about how to manage images, see the
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
@ -66,6 +71,8 @@ For information about how to manage images, see the
| visibility | public |
+------------------+------------------------------------------------------+
.. end
For information about the :command:`openstack image create` parameters,
see `Image service command-line client
<http://docs.openstack.org/cli-reference/openstack.html#openstack-image-create>`__
@ -86,8 +93,11 @@ For information about how to manage images, see the
.. code-block:: console
$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+
.. end
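Blocks like this one mix a command with its expected output; the ``$``/``#`` prompt and the blank line after the command are what let a parser keep the command and drop the table. A rough sketch of that filtering (not the actual parser):

```shell
# Keep only the prompt-prefixed lines; everything else in the block is
# treated as expected output and discarded.
strip_expected_output() {
    sed -n -e 's/^\$ //p' -e 's/^# //p'
}

strip_expected_output <<'EOF'
$ openstack image list

+----+--------+--------+
| ID | Name   | Status |
+----+--------+--------+
EOF
```

For the fragment above this prints only ``openstack image list``; the table rows are dropped.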

View File

@ -23,6 +23,8 @@ Install and configure components
.. include:: shared/note_configuration_vary_by_distribution.rst
.. endonly
.. only:: obs
1. Install the packages:
@ -31,6 +33,10 @@ Install and configure components
# zypper install openstack-dashboard
.. end
.. endonly
.. only:: rdo
1. Install the packages:
@ -39,6 +45,10 @@ Install and configure components
# yum install openstack-dashboard
.. end
.. endonly
.. only:: ubuntu
1. Install the packages:
@ -47,6 +57,10 @@ Install and configure components
# apt-get install openstack-dashboard
.. end
.. endonly
.. only:: debian
1. Install the packages:
@ -55,6 +69,8 @@ Install and configure components
# apt-get install openstack-dashboard-apache
.. end
2. Respond to prompts for web server configuration.
.. note::
@ -73,6 +89,8 @@ Install and configure components
manually, install the ``openstack-dashboard`` package instead of
``openstack-dashboard-apache``.
.. endonly
.. only:: obs
2. Configure the web server:
@ -83,6 +101,8 @@ Install and configure components
/etc/apache2/conf.d/openstack-dashboard.conf
# a2enmod rewrite
.. end
3. Edit the
``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
file and complete the following actions:
@ -90,18 +110,25 @@ Install and configure components
* Configure the dashboard to use OpenStack services on the
``controller`` node:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
OPENSTACK_HOST = "controller"
.. end
* Allow all hosts to access the dashboard:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
ALLOWED_HOSTS = ['*', ]
.. end
* Configure the ``memcached`` session storage service:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
@ -113,24 +140,33 @@ Install and configure components
}
}
.. end
.. note::
Comment out any other session storage configuration.
* Enable the Identity API version 3:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
.. end
* Enable support for domains:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
.. end
* Configure API versions:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
OPENSTACK_API_VERSIONS = {
@ -139,23 +175,32 @@ Install and configure components
"volume": 2,
}
.. end
* Configure ``default`` as the default domain for users that you create
via the dashboard:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
.. end
* Configure ``user`` as the default role for
users that you create via the dashboard:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
.. end
* If you chose networking option 1, disable support for layer-3
networking services:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
OPENSTACK_NEUTRON_NETWORK = {
@ -170,16 +215,23 @@ Install and configure components
'enable_fip_topology_check': False,
}
.. end
* Optionally, configure the time zone:
.. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
.. code-block:: ini
TIME_ZONE = "TIME_ZONE"
.. end
Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
.. endonly
.. only:: rdo
2. Edit the
@ -189,18 +241,25 @@ Install and configure components
* Configure the dashboard to use OpenStack services on the
``controller`` node:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
OPENSTACK_HOST = "controller"
.. end
* Allow all hosts to access the dashboard:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
ALLOWED_HOSTS = ['*', ]
.. end
* Configure the ``memcached`` session storage service:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
@ -212,24 +271,33 @@ Install and configure components
}
}
.. end
.. note::
Comment out any other session storage configuration.
* Enable the Identity API version 3:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
.. end
* Enable support for domains:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
.. end
* Configure API versions:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
OPENSTACK_API_VERSIONS = {
@ -238,23 +306,32 @@ Install and configure components
"volume": 2,
}
.. end
* Configure ``default`` as the default domain for users that you create
via the dashboard:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
.. end
* Configure ``user`` as the default role for
users that you create via the dashboard:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
.. end
* If you chose networking option 1, disable support for layer-3
networking services:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
OPENSTACK_NEUTRON_NETWORK = {
@ -269,16 +346,23 @@ Install and configure components
'enable_fip_topology_check': False,
}
.. end
* Optionally, configure the time zone:
.. path /etc/openstack-dashboard/local_settings
.. code-block:: ini
TIME_ZONE = "TIME_ZONE"
.. end
Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
.. endonly
.. only:: ubuntu
2. Edit the
@ -288,18 +372,25 @@ Install and configure components
* Configure the dashboard to use OpenStack services on the
``controller`` node:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
OPENSTACK_HOST = "controller"
.. end
* Allow all hosts to access the dashboard:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
ALLOWED_HOSTS = ['*', ]
.. end
* Configure the ``memcached`` session storage service:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
@ -311,24 +402,33 @@ Install and configure components
}
}
.. end
.. note::
Comment out any other session storage configuration.
* Enable the Identity API version 3:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
.. end
* Enable support for domains:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
.. end
* Configure API versions:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
OPENSTACK_API_VERSIONS = {
@ -337,23 +437,32 @@ Install and configure components
"volume": 2,
}
.. end
* Configure ``default`` as the default domain for users that you create
via the dashboard:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
.. end
* Configure ``user`` as the default role for
users that you create via the dashboard:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
.. end
* If you chose networking option 1, disable support for layer-3
networking services:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
OPENSTACK_NEUTRON_NETWORK = {
@ -368,16 +477,23 @@ Install and configure components
'enable_fip_topology_check': False,
}
.. end
* Optionally, configure the time zone:
.. path /etc/openstack-dashboard/local_settings.py
.. code-block:: ini
TIME_ZONE = "TIME_ZONE"
.. end
Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
.. endonly
Finalize installation
---------------------
@ -389,6 +505,10 @@ Finalize installation
# service apache2 reload
.. end
.. endonly
.. only:: obs
* Restart the web server and session storage service:
@ -397,11 +517,15 @@ Finalize installation
# systemctl restart apache2.service memcached.service
.. end
.. note::
The ``systemctl restart`` command starts each service if
not currently running.
.. endonly
.. only:: rdo
* Restart the web server and session storage service:
@ -410,7 +534,11 @@ Finalize installation
# systemctl restart httpd.service memcached.service
.. end
.. note::
The ``systemctl restart`` command starts each service if
not currently running.
.. endonly

View File

@ -8,15 +8,21 @@ Verify operation of the dashboard.
Access the dashboard using a web browser at
``http://controller/``.
.. endonly
.. only:: rdo
Access the dashboard using a web browser at
``http://controller/dashboard``.
.. endonly
.. only:: ubuntu
Access the dashboard using a web browser at
``http://controller/horizon``.
.. endonly
Authenticate using ``admin`` or ``demo`` user
and ``default`` domain credentials.

View File

@ -8,24 +8,31 @@
OpenStack Installation Tutorial for Red Hat Enterprise Linux and CentOS
=======================================================================
.. endonly
.. only:: obs
======================================================================
OpenStack Installation Tutorial for openSUSE and SUSE Linux Enterprise
======================================================================
.. endonly
.. only:: ubuntu
==========================================
OpenStack Installation Tutorial for Ubuntu
==========================================
.. endonly
.. only:: debian
==========================================
OpenStack Installation Tutorial for Debian
==========================================
.. endonly
Abstract
~~~~~~~~
@ -43,17 +50,23 @@ or as connected entities.
available on Red Hat Enterprise Linux 7 and its derivatives through
the RDO repository.
.. endonly
.. only:: ubuntu
This guide will walk through an installation by using packages
available through Canonical's Ubuntu Cloud archive repository.
.. endonly
.. only:: obs
This guide will show you how to install OpenStack by using packages
on openSUSE Leap 42.1 and SUSE Linux Enterprise Server 12 - for
both SP1 and SP2 - through the Open Build Service Cloud repository.
.. endonly
.. only:: debian
This guide walks through an installation by using packages
@ -69,9 +82,13 @@ or as connected entities.
# dpkg-reconfigure debconf
.. end
If you prefer to use debconf, refer to the debconf
install-guide for Debian.
.. endonly
Explanations of configuration options and sample configuration files
are included.

View File

@ -23,21 +23,27 @@ database and an administration token.
$ mysql -u root -p
.. end
* Create the ``keystone`` database:
.. code-block:: console
CREATE DATABASE keystone;
mysql> CREATE DATABASE keystone;
.. end
* Grant proper access to the ``keystone`` database:
.. code-block:: console
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
.. end
Replace ``KEYSTONE_DBPASS`` with a suitable password.
* Exit the database access client.
@ -56,6 +62,8 @@ Install and configure components
keystone service still listens on these ports. Therefore, this guide
manually disables the keystone service.
.. endonly
.. only:: ubuntu or debian
.. note::
@ -72,49 +80,70 @@ Install and configure components
# apt-get install keystone
.. only:: obs or rdo
.. end
.. endonly
.. only:: rdo
#. Run the following command to install the packages:
.. only:: rdo
.. code-block:: console
# yum install openstack-keystone httpd mod_wsgi
.. only:: obs
.. end
.. endonly
.. only:: obs
#. Run the following command to install the packages:
.. code-block:: console
# zypper install openstack-keystone apache2-mod_wsgi
.. end
.. endonly
2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
.. path /etc/keystone/keystone.conf
.. code-block:: ini
[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
.. end
Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.
* In the ``[token]`` section, configure the Fernet token provider:
.. path /etc/keystone/keystone.conf
.. code-block:: ini
[token]
...
provider = fernet
.. end
3. Populate the Identity service database:
.. code-block:: console
# su -s /bin/sh -c "keystone-manage db_sync" keystone
.. end
.. note::
Ignore any deprecation messages in this output.
@ -126,6 +155,8 @@ Install and configure components
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
.. end
5. Bootstrap the Identity service:
.. code-block:: console
@ -136,25 +167,32 @@ Install and configure components
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
.. end
Replace ``ADMIN_PASSWORD`` with a suitable password for an administrative user.
.. only:: obs or rdo or ubuntu
.. only:: rdo
Configure the Apache HTTP server
--------------------------------
.. only:: rdo
#. Edit the ``/etc/httpd/conf/httpd.conf`` file and configure the
``ServerName`` option to reference the controller node:
.. path /etc/httpd/conf/httpd.conf
.. code-block:: apache
ServerName controller
.. end
#. Create the ``/etc/httpd/conf.d/wsgi-keystone.conf`` file with
the following content:
.. path /etc/httpd/conf.d/wsgi-keystone.conf
.. code-block:: apache
Listen 5000
@ -190,18 +228,26 @@ Install and configure components
</Directory>
</VirtualHost>
.. end
.. endonly
.. only:: ubuntu
#. Edit the ``/etc/apache2/apache2.conf`` file and configure the
``ServerName`` option to reference the controller node:
.. path /etc/apache2/apache2.conf
.. code-block:: apache
ServerName controller
.. end
#. Create the ``/etc/apache2/sites-available/wsgi-keystone.conf`` file
with the following content:
.. path /etc/apache2/sites-available/wsgi-keystone.conf
.. code-block:: apache
Listen 5000
@ -237,24 +283,34 @@ Install and configure components
</Directory>
</VirtualHost>
.. end
#. Enable the Identity service virtual hosts:
.. code-block:: console
# ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled
.. end
.. endonly
.. only:: obs
#. Edit the ``/etc/sysconfig/apache2`` file and configure the
``APACHE_SERVERNAME`` option to reference the controller node:
.. path /etc/sysconfig/apache2
.. code-block:: apache
APACHE_SERVERNAME="controller"
.. end
#. Create the ``/etc/apache2/conf.d/wsgi-keystone.conf`` file
with the following content:
.. path /etc/apache2/conf.d/wsgi-keystone.conf
.. code-block:: apache
Listen 5000
@ -290,43 +346,56 @@ Install and configure components
</Directory>
</VirtualHost>
.. end
6. Recursively change the ownership of the ``/etc/keystone`` directory:
.. code-block:: console
# chown -R keystone:keystone /etc/keystone
.. end
.. endonly
.. only:: ubuntu or rdo or obs
Finalize the installation
-------------------------
.. only:: ubuntu
.. endonly
#. Restart the Apache HTTP server:
.. only:: ubuntu
.. code-block:: console
# service apache2 restart
.. end
#. By default, the Ubuntu packages create an SQLite database.
Because this configuration uses an SQL database server, you can remove
the SQLite database file:
.. code-block:: console
# rm -f /var/lib/keystone/keystone.db
.. only:: rdo
.. end
* Start the Apache HTTP service and configure it to start when the system boots:
.. endonly
.. only:: rdo
.. code-block:: console
# systemctl enable httpd.service
# systemctl start httpd.service
.. end
.. endonly
.. only:: obs
#. Start the Apache HTTP service and configure it to start when the system boots:
@ -336,6 +405,10 @@ Install and configure components
# systemctl enable apache2.service
# systemctl start apache2.service
.. end
.. endonly
6. Configure the administrative account:
.. code-block:: console
@ -348,8 +421,12 @@ Install and configure components
$ export OS_AUTH_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
.. end
.. only:: obs or rdo or ubuntu
Replace ``ADMIN_PASSWORD`` with the password used in the
``keystone-manage bootstrap`` command from the section called
:ref:`keystone-install`.
.. endonly

View File

@ -30,6 +30,8 @@ scripts to load appropriate credentials for client operations.
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
.. end
Replace ``ADMIN_PASS`` with the password you chose
for the ``admin`` user in the Identity service.
@ -46,6 +48,8 @@ scripts to load appropriate credentials for client operations.
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
.. end
Replace ``DEMO_PASS`` with the password you chose
for the ``demo`` user in the Identity service.
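As a sketch of how such a client-environment file is consumed (the filename and values here are illustrative, not from the guide):

```shell
# Write a minimal openrc-style file, source it, and confirm that the
# variables the OpenStack client expects are in the environment.
cat > /tmp/demo-openrc <<'EOF'
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF

. /tmp/demo-openrc
echo "$OS_USERNAME $OS_IDENTITY_API_VERSION"
```

This prints ``demo 3``, showing the credentials are loaded for subsequent ``openstack`` client calls.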
@ -64,11 +68,14 @@ For example:
$ . admin-openrc
.. end
#. Request an authentication token:
.. code-block:: console
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
@ -79,3 +86,5 @@ For example:
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
.. end


@ -14,6 +14,7 @@ service. The authentication service uses a combination of :term:`domains
$ openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
@ -26,6 +27,8 @@ service. The authentication service uses a combination of :term:`domains
| parent_id | e0353a670a9e496da891347c589539e9 |
+-------------+----------------------------------+
.. end
#. Regular (non-admin) tasks should use an unprivileged project and user.
As an example, this guide creates the ``demo`` project and user.
@ -35,6 +38,7 @@ service. The authentication service uses a combination of :term:`domains
$ openstack project create --domain default \
--description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
@ -47,6 +51,8 @@ service. The authentication service uses a combination of :term:`domains
| parent_id | e0353a670a9e496da891347c589539e9 |
+-------------+----------------------------------+
.. end
.. note::
Do not repeat this step when creating additional users for this
@ -58,6 +64,7 @@ service. The authentication service uses a combination of :term:`domains
$ openstack user create --domain default \
--password-prompt demo
User Password:
Repeat User Password:
+-----------+----------------------------------+
@ -69,11 +76,14 @@ service. The authentication service uses a combination of :term:`domains
| name | demo |
+-----------+----------------------------------+
.. end
* Create the ``user`` role:
.. code-block:: console
$ openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
@ -82,12 +92,16 @@ service. The authentication service uses a combination of :term:`domains
| name | user |
+-----------+----------------------------------+
.. end
* Add the ``user`` role to the ``demo`` project and user:
.. code-block:: console
$ openstack role add --project demo --user demo user
.. end
.. note::
This command provides no output.


@ -18,6 +18,8 @@ services.
``[pipeline:public_api]``, ``[pipeline:admin_api]``,
and ``[pipeline:api_v3]`` sections.
.. endonly
.. only:: rdo
#. For security reasons, disable the temporary authentication
@ -28,12 +30,16 @@ services.
``[pipeline:public_api]``, ``[pipeline:admin_api]``,
and ``[pipeline:api_v3]`` sections.
.. endonly
2. Unset the temporary ``OS_URL`` environment variable:
.. code-block:: console
$ unset OS_URL
.. end
3. As the ``admin`` user, request an authentication token:
.. code-block:: console
@ -41,6 +47,7 @@ services.
$ openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
@ -53,6 +60,8 @@ services.
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
.. end
.. note::
This command uses the password for the ``admin`` user.
@ -64,6 +73,7 @@ services.
$ openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
@ -76,6 +86,8 @@ services.
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+------------+-----------------------------------------------------------------+
.. end
.. note::
This command uses the password for the ``demo`` user.


@ -13,11 +13,14 @@ Create a volume
$ . demo-openrc
.. end
#. Create a 1 GB volume:
.. code-block:: console
$ openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
@ -42,18 +45,23 @@ Create a volume
| user_id | 684286a9079845359882afc3aa5011fb |
+---------------------+--------------------------------------+
.. end
#. After a short time, the volume status should change from ``creating``
to ``available``:
.. code-block:: console
$ openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| a1e8be72-a395-4a6f-8e07-856a57c39524 | volume1 | available | 1 | |
+--------------------------------------+--------------+-----------+------+-------------+
.. end
Attach the volume to an instance
--------------------------------
@ -63,6 +71,8 @@ Attach the volume to an instance
$ openstack server add volume INSTANCE_NAME VOLUME_NAME
.. end
Replace ``INSTANCE_NAME`` with the name of the instance and ``VOLUME_NAME``
with the name of the volume you want to attach to it.
@ -74,6 +84,8 @@ Attach the volume to an instance
$ openstack server add volume provider-instance volume1
.. end
.. note::
This command provides no output.
@ -83,12 +95,15 @@ Attach the volume to an instance
.. code-block:: console
$ openstack volume list
+--------------------------------------+--------------+--------+------+--------------------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+--------+------+--------------------------------------------+
| a1e8be72-a395-4a6f-8e07-856a57c39524 | volume1 | in-use | 1 | Attached to provider-instance on /dev/vdb |
+--------------------------------------+--------------+--------+------+--------------------------------------------+
.. end
#. Access your instance using SSH and use the ``fdisk`` command to verify
the presence of the volume as the ``/dev/vdb`` block storage device:
@ -115,6 +130,8 @@ Attach the volume to an instance
Disk /dev/vdb doesn't contain a valid partition table
.. end
.. note::
You must create a file system on the device and mount it
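The formatting and mounting steps the note refers to could look like the sketch below. It assumes the volume appeared as ``/dev/vdb`` (as in the ``fdisk`` output above); the ``/mnt/volume1`` mount point is an arbitrary example, and ``DRY_RUN`` defaults to printing the commands rather than running them:

```shell
# Sketch: create a file system on the attached volume and mount it.
# DRY_RUN=1 (the default here) prints each command instead of running it.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}
run sudo mkfs.ext4 /dev/vdb
run sudo mkdir -p /mnt/volume1
run sudo mount /dev/vdb /mnt/volume1
```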


@ -37,12 +37,15 @@ Create the provider network
$ . admin-openrc
.. end
#. Create the network:
.. code-block:: console
$ neutron net-create --shared --provider:physical_network provider \
--provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
@ -62,6 +65,8 @@ Create the provider network
| tenant_id | d84313397390425c8ed50b2f6e18d092 |
+---------------------------+--------------------------------------+
.. end
The ``--shared`` option allows all projects to use the virtual network.
The ``--provider:physical_network provider`` and
``--provider:network_type flat`` options connect the flat virtual network
to the flat (native/untagged) physical network on the ``eth1`` interface
on the host using information from the following files:
@ -76,6 +81,8 @@ Create the provider network
[ml2_type_flat]
flat_networks = provider
.. end
``linuxbridge_agent.ini``:
.. code-block:: ini
@ -83,6 +90,8 @@ Create the provider network
[linux_bridge]
physical_interface_mappings = provider:eth1
.. end
#. Create a subnet on the network:
.. code-block:: console
@ -92,6 +101,8 @@ Create the provider network
--dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY \
provider PROVIDER_NETWORK_CIDR
.. end
Replace ``PROVIDER_NETWORK_CIDR`` with the subnet on the provider
physical network in CIDR notation.
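Because a malformed CIDR value only fails once ``neutron subnet-create`` runs, a rough shape check beforehand can save a round trip. This is an illustrative sketch only; it checks the a.b.c.d/nn pattern, not octet or prefix ranges:

```shell
# Sketch: rough format check for a PROVIDER_NETWORK_CIDR value.
# Matches the a.b.c.d/nn shape only; it does not validate octet ranges.
is_cidr() {
    echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'
}
is_cidr "203.0.113.0/24" && echo "ok"
```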
@ -119,6 +130,7 @@ Create the provider network
--allocation-pool start=203.0.113.101,end=203.0.113.250 \
--dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
provider 203.0.113.0/24
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
@ -139,5 +151,7 @@ Create the provider network
| tenant_id | d84313397390425c8ed50b2f6e18d092 |
+-------------------+----------------------------------------------------+
.. end
Return to :ref:`Launch an instance - Create virtual networks
<launch-instance-networks>`.


@ -43,11 +43,14 @@ Create the self-service network
$ . demo-openrc
.. end
#. Create the network:
.. code-block:: console
$ neutron net-create selfservice
Created a new network:
+-----------------------+--------------------------------------+
| Field | Value |
@ -64,6 +67,8 @@ Create the self-service network
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-----------------------+--------------------------------------+
.. end
Non-privileged users typically cannot supply additional parameters to
this command. The service automatically chooses parameters using
information from the following files:
@ -78,6 +83,8 @@ Create the self-service network
[ml2_type_vxlan]
vni_ranges = 1:1000
.. end
#. Create a subnet on the network:
.. code-block:: console
@ -86,6 +93,8 @@ Create the self-service network
--dns-nameserver DNS_RESOLVER --gateway SELFSERVICE_NETWORK_GATEWAY \
selfservice SELFSERVICE_NETWORK_CIDR
.. end
Replace ``DNS_RESOLVER`` with the IP address of a DNS resolver. In
most cases, you can use one from the ``/etc/resolv.conf`` file on
the host.
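For scripting, the resolver address can be pulled straight out of a ``resolv.conf``-style file. A small sketch (the helper name is invented here for illustration):

```shell
# Sketch: return the first nameserver entry from a resolv.conf-style file,
# suitable as a DNS_RESOLVER value.
first_nameserver() {
    awk '/^nameserver/ {print $2; exit}' "$1"
}
# Typical use: DNS_RESOLVER=$(first_nameserver /etc/resolv.conf)
```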
@ -108,6 +117,7 @@ Create the self-service network
$ neutron subnet-create --name selfservice \
--dns-nameserver 8.8.4.4 --gateway 172.16.1.1 \
selfservice 172.16.1.0/24
Created a new subnet:
+-------------------+------------------------------------------------+
| Field | Value |
@ -128,6 +138,8 @@ Create the self-service network
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-------------------+------------------------------------------------+
.. end
Create a router
---------------
@ -148,24 +160,32 @@ to the existing ``provider`` provider network.
$ . admin-openrc
.. end
#. Add the ``router:external`` option to the ``provider`` network:
.. code-block:: console
$ neutron net-update provider --router:external
Updated network: provider
.. end
#. Source the ``demo`` credentials to gain access to user-only CLI commands:
.. code-block:: console
$ . demo-openrc
.. end
#. Create the router:
.. code-block:: console
$ neutron router-create router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
@ -179,20 +199,28 @@ to the existing ``provider`` provider network.
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+-----------------------+--------------------------------------+
.. end
#. Add the self-service network subnet as an interface on the router:
.. code-block:: console
$ neutron router-interface-add router selfservice
Added interface bff6605d-824c-41f9-b744-21d128fc86e1 to router router.
.. end
#. Set a gateway on the provider network on the router:
.. code-block:: console
$ neutron router-gateway-set router provider
Set gateway for router router
.. end
Verify operation
----------------
@ -207,22 +235,28 @@ creation examples.
$ . admin-openrc
.. end
#. List network namespaces. You should see one ``qrouter`` namespace and two
``qdhcp`` namespaces.
.. code-block:: console
$ ip netns
qrouter-89dd2083-a160-4d75-ab3a-14239f01ea0b
qdhcp-7c6f9b37-76b4-463e-98d8-27e5686ed083
qdhcp-0e62efcd-8cee-46c7-b163-d8df05c3c5ad
.. end
#. List ports on the router to determine the gateway IP address on the
provider network:
.. code-block:: console
$ neutron router-port-list router
+--------------------------------------+------+-------------------+------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+------------------------------------------+
@ -234,12 +268,15 @@ creation examples.
| | | | "ip_address": "203.0.113.102"} |
+--------------------------------------+------+-------------------+------------------------------------------+
.. end
#. Ping this IP address from the controller node or any host on the physical
provider network:
.. code-block:: console
$ ping -c 4 203.0.113.102
PING 203.0.113.102 (203.0.113.102) 56(84) bytes of data.
64 bytes from 203.0.113.102: icmp_req=1 ttl=64 time=0.619 ms
64 bytes from 203.0.113.102: icmp_req=2 ttl=64 time=0.189 ms
@ -249,5 +286,7 @@ creation examples.
--- 203.0.113.102 ping statistics ---
rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms
.. end
Return to :ref:`Launch an instance - Create virtual networks
<launch-instance-networks>`.


@ -16,6 +16,8 @@ name, network, security group, key, and instance name.
$ . demo-openrc
.. end
#. A flavor specifies a virtual resource allocation profile which
includes processor, memory, and storage.
@ -34,6 +36,8 @@ name, network, security group, key, and instance name.
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
.. end
This instance uses the ``m1.tiny`` flavor. If you created the optional
``m1.nano`` flavor, use it instead of the ``m1.tiny`` flavor.
@ -52,6 +56,8 @@ name, network, security group, key, and instance name.
| 390eb5f7-8d49-41ec-95b7-68c0d5d54b34 | cirros | active |
+--------------------------------------+--------+--------+
.. end
This instance uses the ``cirros`` image.
#. List available networks:
@ -66,6 +72,8 @@ name, network, security group, key, and instance name.
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+--------------+--------------------------------------+
.. end
This instance uses the ``provider`` provider network. However, you must
reference this network using the ID instead of the name.
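One way to script the ID lookup is to parse the table output shown above; the sample row is embedded below so the sketch is self-contained. (Newer clients may avoid the parsing entirely with something like ``openstack network list -f value -c ID -c Name``, if available in your client version.)

```shell
# Sketch: extract the "provider" network ID from `openstack network list`
# table output (sample row taken from the output above).
net_table='| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider     | 310911f6-acf0-4a47-824e-3032916582ff |'
PROVIDER_NET_ID=$(echo "$net_table" | awk -F'|' '$3 ~ /provider/ {gsub(/ /, "", $2); print $2}')
echo "$PROVIDER_NET_ID"
```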
@ -85,6 +93,8 @@ name, network, security group, key, and instance name.
| dd2b614c-3dad-48ed-958b-b155a3b38515 | default | Default security group |
+--------------------------------------+---------+------------------------+
.. end
This instance uses the ``default`` security group.
Launch the instance
@ -138,17 +148,22 @@ Launch the instance
| user_id | 684286a9079845359882afc3aa5011fb |
+--------------------------------------+-----------------------------------------------+
.. end
#. Check the status of your instance:
.. code-block:: console
$ openstack server list
+--------------------------------------+-------------------+--------+---------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+-------------------+--------+---------------------------------+
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 |
+--------------------------------------+-------------------+--------+---------------------------------+
.. end
The status changes from ``BUILD`` to ``ACTIVE`` when the build process
successfully completes.
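If you script this step, a simple poll until the status leaves ``BUILD`` works. In this sketch, ``server_status`` is a stub standing in for a real status query such as ``openstack server list -f value -c Status --name provider-instance`` (adjust for your client version):

```shell
# Sketch: wait for the instance status to leave BUILD.
# server_status is a stub; replace it with a real status query.
server_status() { echo "ACTIVE"; }
wait_for_active() {
    local status tries
    for tries in 1 2 3 4 5 6; do
        status=$(server_status)
        [ "$status" != "BUILD" ] && break
        sleep 5
    done
    echo "$status"
}
wait_for_active
```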
@ -161,6 +176,7 @@ Access the instance using the virtual console
.. code-block:: console
$ openstack console url show provider-instance
+-------+---------------------------------------------------------------------------------+
| Field | Value |
+-------+---------------------------------------------------------------------------------+
@ -168,6 +184,8 @@ Access the instance using the virtual console
| url | http://controller:6080/vnc_auto.html?token=5eeccb47-525c-4918-ac2a-3ad1e9f1f493 |
+-------+---------------------------------------------------------------------------------+
.. end
.. note::
If your web browser runs on a host that cannot resolve the
@ -184,6 +202,7 @@ Access the instance using the virtual console
.. code-block:: console
$ ping -c 4 203.0.113.1
PING 203.0.113.1 (203.0.113.1) 56(84) bytes of data.
64 bytes from 203.0.113.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 203.0.113.1: icmp_req=2 ttl=64 time=0.473 ms
@ -194,11 +213,14 @@ Access the instance using the virtual console
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
.. end
#. Verify access to the internet:
.. code-block:: console
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms
64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms
@ -209,6 +231,8 @@ Access the instance using the virtual console
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
.. end
Access the instance remotely
----------------------------
@ -218,6 +242,7 @@ Access the instance remotely
.. code-block:: console
$ ping -c 4 203.0.113.103
PING 203.0.113.103 (203.0.113.103) 56(84) bytes of data.
64 bytes from 203.0.113.103: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.103: icmp_req=2 ttl=63 time=0.981 ms
@ -228,18 +253,23 @@ Access the instance remotely
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
.. end
#. Access your instance using SSH from the controller node or any
host on the provider physical network:
.. code-block:: console
$ ssh cirros@203.0.113.103
The authenticity of host '203.0.113.103 (203.0.113.103)' can't be established.
RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.103' (RSA) to the list of known hosts.
$
.. end
If your instance does not launch or does not work as you expect, see the
`Instance Boot Failures
<http://docs.openstack.org/ops-guide/ops-maintenance-compute.html#instances>`__


@ -16,6 +16,8 @@ name, network, security group, key, and instance name.
$ . demo-openrc
.. end
#. A flavor specifies a virtual resource allocation profile which
includes processor, memory, and storage.
@ -24,6 +26,7 @@ name, network, security group, key, and instance name.
.. code-block:: console
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
@ -34,6 +37,8 @@ name, network, security group, key, and instance name.
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
.. end
This instance uses the ``m1.tiny`` flavor. If you created the optional
``m1.nano`` flavor, use it instead of the ``m1.tiny`` flavor.
@ -46,12 +51,15 @@ name, network, security group, key, and instance name.
.. code-block:: console
$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 390eb5f7-8d49-41ec-95b7-68c0d5d54b34 | cirros | active |
+--------------------------------------+--------+--------+
.. end
This instance uses the ``cirros`` image.
#. List available networks:
@ -59,6 +67,7 @@ name, network, security group, key, and instance name.
.. code-block:: console
$ openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-------------+--------------------------------------+
@ -66,6 +75,8 @@ name, network, security group, key, and instance name.
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+-------------+--------------------------------------+
.. end
This instance uses the ``selfservice`` self-service network. However, you
must reference this network using the ID instead of the name.
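To script the name-to-ID lookup, the table output can be parsed with a small helper. This is a sketch only — the helper name and the sample IDs in the usage note are invented for illustration — and newer clients may support ``openstack network list -f value -c ID -c Name`` instead, which avoids parsing tables:

```shell
# Sketch: resolve a network name to its ID from `openstack network list`
# table output. The helper name is illustrative only.
net_id_by_name() {
    printf '%s\n' "$1" | awk -F'|' -v name="$2" \
        '$3 ~ name {gsub(/ /, "", $2); print $2; exit}'
}
```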
@ -74,12 +85,15 @@ name, network, security group, key, and instance name.
.. code-block:: console
$ openstack security group list
+--------------------------------------+---------+------------------------+
| ID | Name | Description |
+--------------------------------------+---------+------------------------+
| dd2b614c-3dad-48ed-958b-b155a3b38515 | default | Default security group |
+--------------------------------------+---------+------------------------+
.. end
This instance uses the ``default`` security group.
#. Launch the instance:
@ -91,6 +105,7 @@ name, network, security group, key, and instance name.
$ openstack server create --flavor m1.tiny --image cirros \
--nic net-id=SELFSERVICE_NET_ID --security-group default \
--key-name mykey selfservice-instance
+--------------------------------------+---------------------------------------+
| Field | Value |
+--------------------------------------+---------------------------------------+
@ -124,11 +139,14 @@ name, network, security group, key, and instance name.
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+--------------------------------------+---------------------------------------+
.. end
#. Check the status of your instance:
.. code-block:: console
$ openstack server list
+--------------------------------------+----------------------+--------+---------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+----------------------+--------+---------------------------------+
@ -136,6 +154,8 @@ name, network, security group, key, and instance name.
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 |
+--------------------------------------+----------------------+--------+---------------------------------+
.. end
The status changes from ``BUILD`` to ``ACTIVE`` when the build process
successfully completes.
@ -148,6 +168,7 @@ Access the instance using a virtual console
.. code-block:: console
$ openstack console url show selfservice-instance
+-------+---------------------------------------------------------------------------------+
| Field | Value |
+-------+---------------------------------------------------------------------------------+
@ -155,6 +176,8 @@ Access the instance using a virtual console
| url | http://controller:6080/vnc_auto.html?token=5eeccb47-525c-4918-ac2a-3ad1e9f1f493 |
+-------+---------------------------------------------------------------------------------+
.. end
.. note::
If your web browser runs on a host that cannot resolve the
@ -171,6 +194,7 @@ Access the instance using a virtual console
.. code-block:: console
$ ping -c 4 172.16.1.1
PING 172.16.1.1 (172.16.1.1) 56(84) bytes of data.
64 bytes from 172.16.1.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 172.16.1.1: icmp_req=2 ttl=64 time=0.473 ms
@ -181,11 +205,14 @@ Access the instance using a virtual console
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
.. end
#. Verify access to the internet:
.. code-block:: console
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms
64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms
@ -196,6 +223,8 @@ Access the instance using a virtual console
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
.. end
Access the instance remotely
----------------------------
@ -204,6 +233,7 @@ Access the instance remotely
.. code-block:: console
$ openstack ip floating create provider
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
@ -214,12 +244,16 @@ Access the instance remotely
| pool | provider |
+-------------+--------------------------------------+
.. end
#. Associate the floating IP address with the instance:
.. code-block:: console
$ openstack ip floating add 203.0.113.104 selfservice-instance
.. end
.. note::
This command provides no output.
@ -229,6 +263,7 @@ Access the instance remotely
.. code-block:: console
$ openstack server list
+--------------------------------------+----------------------+--------+---------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+----------------------+--------+---------------------------------------+
@ -236,12 +271,15 @@ Access the instance remotely
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 |
+--------------------------------------+----------------------+--------+---------------------------------------+
.. end
#. Verify connectivity to the instance via floating IP address from
the controller node or any host on the provider physical network:
.. code-block:: console
$ ping -c 4 203.0.113.104
PING 203.0.113.104 (203.0.113.104) 56(84) bytes of data.
64 bytes from 203.0.113.104: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.104: icmp_req=2 ttl=63 time=0.981 ms
@ -252,18 +290,23 @@ Access the instance remotely
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
.. end
#. Access your instance using SSH from the controller node or any
host on the provider physical network:
.. code-block:: console
$ ssh cirros@203.0.113.104
The authenticity of host '203.0.113.104 (203.0.113.104)' can't be established.
RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.104' (RSA) to the list of known hosts.
$
.. end
If your instance does not launch or does not work as you expect, see the
`Instance Boot Failures
<http://docs.openstack.org/ops-guide/ops-maintenance-compute.html#instances>`__


@ -48,6 +48,7 @@ purposes.
.. code-block:: console
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
@ -63,6 +64,8 @@ purposes.
| vcpus | 1 |
+----------------------------+---------+
.. end
Generate a key pair
-------------------
@ -76,12 +79,15 @@ must add a public key to the Compute service.
$ . demo-openrc
.. end
#. Generate and add a key pair:
.. code-block:: console
$ ssh-keygen -q -N ""
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
@ -90,6 +96,8 @@ must add a public key to the Compute service.
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+-------------+-------------------------------------------------+
.. end
.. note::
Alternatively, you can skip the ``ssh-keygen`` command and use an
@ -100,12 +108,15 @@ must add a public key to the Compute service.
.. code-block:: console
$ openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | ee:3d:2e:97:d4:e2:6a:54:6d:0d:ce:43:39:2c:ba:4d |
+-------+-------------------------------------------------+
.. end
Add security group rules
------------------------
@ -121,6 +132,7 @@ secure shell (SSH).
.. code-block:: console
$ openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
@ -132,11 +144,14 @@ secure shell (SSH).
| remote_security_group | |
+-----------------------+--------------------------------------+
.. end
* Permit secure shell (SSH) access:
.. code-block:: console
$ openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
@ -148,6 +163,8 @@ secure shell (SSH).
| remote_security_group | |
+-----------------------+--------------------------------------+
.. end
Launch an instance
------------------


@ -15,25 +15,32 @@ networking infrastructure for instances and handles security groups.
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :ref:`environment-networking`
for more information.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = False
.. end
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
@ -41,5 +48,7 @@ networking infrastructure for instances and handles security groups.
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Return to
:ref:`Networking compute node configuration <neutron-compute-compute>`.


@ -15,11 +15,14 @@ networking infrastructure for instances and handles security groups.
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :ref:`environment-networking`
for more information.
@ -28,6 +31,7 @@ networking infrastructure for instances and handles security groups.
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
@ -35,6 +39,8 @@ networking infrastructure for instances and handles security groups.
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
.. end
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
@ -45,6 +51,7 @@ networking infrastructure for instances and handles security groups.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
@ -52,5 +59,7 @@ networking infrastructure for instances and handles security groups.
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Return to
:ref:`Networking compute node configuration <neutron-compute-compute>`.


@ -4,19 +4,24 @@ Install and configure compute node
The compute node handles connectivity and :term:`security groups <security
group>` for instances.
.. only:: ubuntu or rdo or obs
.. only:: ubuntu or debian
Install the components
----------------------
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install neutron-linuxbridge-agent
.. end
.. endonly
.. only:: rdo
Install the components
----------------------
.. todo:
https://bugzilla.redhat.com/show_bug.cgi?id=1334626
@ -25,12 +30,23 @@ Install the components
# yum install openstack-neutron-linuxbridge ebtables ipset
.. end
.. endonly
.. only:: obs
Install the components
----------------------
.. code-block:: console
# zypper install --no-recommends openstack-neutron-linuxbridge-agent
.. end
.. endonly
Configure the common component
------------------------------
@ -48,6 +64,7 @@ authentication mechanism, message queue, and plug-in.
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure
RabbitMQ message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -60,12 +77,15 @@ authentication mechanism, message queue, and plug-in.
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
.. end
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -84,6 +104,8 @@ authentication mechanism, message queue, and plug-in.
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
@ -96,12 +118,18 @@ authentication mechanism, message queue, and plug-in.
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
.. end
.. endonly
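Once a `.. path` block has been extracted, applying it is mechanical. A sketch using `configparser` (one possible mechanism; the real parser generates BASH, for example `sed` or `crudini` calls, rather than running Python):

```python
# Sketch: apply an extracted ini payload ("[section]" plus
# "key = value" lines) to a config file.  Illustrative only.
import configparser

def apply_ini_edit(path, payload):
    cfg = configparser.ConfigParser()
    cfg.read(path)
    section = None
    for line in payload:
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]
            if section != "DEFAULT" and not cfg.has_section(section):
                cfg.add_section(section)
        elif "=" in line:
            # Skip "..." placeholder lines; they contain no "=".
            key, value = (part.strip() for part in line.split("=", 1))
            cfg[section][key] = value
    with open(path, "w") as handle:
        cfg.write(handle)
```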
Configure networking options
----------------------------
@ -124,6 +152,7 @@ Configure Compute to use Networking
* In the ``[neutron]`` section, configure access parameters:
.. path /etc/nova/nova.conf
.. code-block:: ini
[neutron]
@ -138,6 +167,8 @@ Configure Compute to use Networking
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
@ -152,6 +183,8 @@ Finalize installation
# systemctl restart openstack-nova-compute.service
.. end
#. Start the Linux bridge agent and configure it to start when the
system boots:
@ -160,6 +193,10 @@ Finalize installation
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
.. end
.. endonly
.. only:: obs
#. The Networking service initialization scripts expect the variable
@ -167,16 +204,21 @@ Finalize installation
reference the ML2 plug-in configuration file. Ensure that the
``/etc/sysconfig/neutron`` file contains the following:
.. path /etc/sysconfig/neutron
.. code-block:: ini
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"
.. end
#. Restart the Compute service:
.. code-block:: console
# systemctl restart openstack-nova-compute.service
.. end
#. Start the Linux bridge agent and configure it to start when the
system boots:
@ -185,6 +227,10 @@ Finalize installation
# systemctl enable openstack-neutron-linuxbridge-agent.service
# systemctl start openstack-neutron-linuxbridge-agent.service
.. end
.. endonly
.. only:: ubuntu or debian
#. Restart the Compute service:
@ -193,8 +239,14 @@ Finalize installation
# service nova-compute restart
.. end
#. Restart the Linux bridge agent:
.. code-block:: console
# service neutron-linuxbridge-agent restart
.. end
.. endonly


@ -14,6 +14,8 @@ Install the components
neutron-linuxbridge-agent neutron-dhcp-agent \
neutron-metadata-agent
.. end
.. only:: debian
.. code-block:: console
@ -21,6 +23,8 @@ Install the components
# apt-get install neutron-server neutron-linuxbridge-agent \
neutron-dhcp-agent neutron-metadata-agent python-neutronclient
.. end
.. only:: rdo
.. code-block:: console
@ -28,6 +32,8 @@ Install the components
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
.. end
.. only:: obs
.. code-block:: console
@ -36,6 +42,8 @@ Install the components
openstack-neutron-server openstack-neutron-linuxbridge-agent \
openstack-neutron-dhcp-agent openstack-neutron-metadata-agent
.. end
Configure the server component
------------------------------
@ -50,18 +58,22 @@ and plug-in.
* In the ``[database]`` section, configure database access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
.. end
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in and disable additional plug-ins:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -69,9 +81,12 @@ and plug-in.
core_plugin = ml2
service_plugins =
.. end
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure RabbitMQ message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -84,12 +99,15 @@ and plug-in.
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -108,6 +126,8 @@ and plug-in.
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
@ -119,6 +139,7 @@ and plug-in.
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -137,6 +158,8 @@ and plug-in.
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
@ -144,12 +167,15 @@ and plug-in.
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
.. end
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
@ -161,28 +187,37 @@ and switching) virtual networking infrastructure for instances.
* In the ``[ml2]`` section, enable flat and VLAN networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
...
type_drivers = flat,vlan
.. end
* In the ``[ml2]`` section, disable self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
...
tenant_network_types =
.. end
* In the ``[ml2]`` section, enable the Linux bridge mechanism:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
...
mechanism_drivers = linuxbridge
.. end
.. warning::
After you configure the ML2 plug-in, removing values in the
@ -190,30 +225,39 @@ and switching) virtual networking infrastructure for instances.
* In the ``[ml2]`` section, enable the port security extension driver:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
...
extension_drivers = port_security
.. end
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_flat]
...
flat_networks = provider
.. end
* In the ``[securitygroup]`` section, enable :term:`ipset` to increase
efficiency of security group rules:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[securitygroup]
...
enable_ipset = True
.. end
Configure the Linux bridge agent
--------------------------------
@ -226,25 +270,32 @@ networking infrastructure for instances and handles security groups.
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :ref:`environment-networking`
for more information.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
enable_vxlan = False
.. end
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
@ -252,6 +303,8 @@ networking infrastructure for instances and handles security groups.
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Configure the DHCP agent
------------------------
@ -264,6 +317,7 @@ The :term:`DHCP agent` provides DHCP services for virtual networks.
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. path /etc/neutron/dhcp_agent.ini
.. code-block:: ini
[DEFAULT]
@ -272,6 +326,8 @@ The :term:`DHCP agent` provides DHCP services for virtual networks.
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
.. end
Return to
:ref:`Networking controller node configuration
<neutron-controller-metadata-agent>`.


@ -14,6 +14,10 @@ Install the components
neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent
.. end
.. endonly
.. only:: rdo
.. code-block:: console
@ -21,6 +25,10 @@ Install the components
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
.. end
.. endonly
.. only:: obs
.. code-block:: console
@ -30,6 +38,10 @@ Install the components
openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
openstack-neutron-metadata-agent
.. end
.. endonly
.. only:: debian
#. .. code-block:: console
@ -37,6 +49,10 @@ Install the components
# apt-get install neutron-server neutron-linuxbridge-agent \
neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
.. end
.. endonly
Configure the server component
------------------------------
@ -45,18 +61,22 @@ Configure the server component
* In the ``[database]`` section, configure database access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
.. end
Replace ``NEUTRON_DBPASS`` with the password you chose for the
database.
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
plug-in, router service, and overlapping IP addresses:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -65,9 +85,12 @@ Configure the server component
service_plugins = router
allow_overlapping_ips = True
.. end
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure RabbitMQ message queue access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -80,12 +103,15 @@ Configure the server component
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in RabbitMQ.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -104,6 +130,8 @@ Configure the server component
username = neutron
password = NEUTRON_PASS
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
@ -115,6 +143,7 @@ Configure the server component
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[DEFAULT]
@ -133,6 +162,8 @@ Configure the server component
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
@ -140,12 +171,15 @@ Configure the server component
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/neutron/neutron.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
.. end
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
@ -157,29 +191,38 @@ and switching) virtual networking infrastructure for instances.
* In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
...
type_drivers = flat,vlan,vxlan
.. end
* In the ``[ml2]`` section, enable VXLAN self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
...
tenant_network_types = vxlan
.. end
* In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
mechanisms:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
...
mechanism_drivers = linuxbridge,l2population
.. end
.. warning::
After you configure the ML2 plug-in, removing values in the
@ -191,39 +234,51 @@ and switching) virtual networking infrastructure for instances.
* In the ``[ml2]`` section, enable the port security extension driver:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2]
...
extension_drivers = port_security
.. end
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_flat]
...
flat_networks = provider
.. end
* In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
range for self-service networks:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[ml2_type_vxlan]
...
vni_ranges = 1:1000
.. end
* In the ``[securitygroup]`` section, enable :term:`ipset` to increase
efficiency of security group rules:
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
.. code-block:: ini
[securitygroup]
...
enable_ipset = True
.. end
Configure the Linux bridge agent
--------------------------------
@ -236,11 +291,14 @@ networking infrastructure for instances and handles security groups.
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
.. end
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :ref:`environment-networking`
for more information.
@ -249,6 +307,7 @@ networking infrastructure for instances and handles security groups.
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[vxlan]
@ -256,6 +315,8 @@ networking infrastructure for instances and handles security groups.
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
.. end
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
@ -266,6 +327,7 @@ networking infrastructure for instances and handles security groups.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini
[securitygroup]
@ -273,6 +335,8 @@ networking infrastructure for instances and handles security groups.
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
.. end
Configure the layer-3 agent
---------------------------
@ -285,6 +349,7 @@ self-service virtual networks.
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver
and external network bridge:
.. path /etc/neutron/l3_agent.ini
.. code-block:: ini
[DEFAULT]
@ -292,6 +357,8 @@ self-service virtual networks.
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
.. end
.. note::
The ``external_network_bridge`` option intentionally lacks a value
@ -309,6 +376,7 @@ The :term:`DHCP agent` provides DHCP services for virtual networks.
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. path /etc/neutron/dhcp_agent.ini
.. code-block:: ini
[DEFAULT]
@ -317,6 +385,8 @@ The :term:`DHCP agent` provides DHCP services for virtual networks.
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
.. end
Return to
:ref:`Networking controller node configuration
<neutron-controller-metadata-agent>`.


@ -14,24 +14,30 @@ must create a database, service credentials, and API endpoints.
.. code-block:: console
$ mysql -u root -p
.. end
* Create the ``neutron`` database:
.. code-block:: console
CREATE DATABASE neutron;
mysql> CREATE DATABASE neutron;
.. end
* Grant proper access to the ``neutron`` database, replacing
``NEUTRON_DBPASS`` with a suitable password:
.. code-block:: console
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
.. end
* Exit the database access client.
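The `mysql>` markers in the blocks above let the parser route SQL statements to mysql while plain `$`/`#` lines go to the shell. A rough sketch of that separation, including the `\` line continuations used in the GRANT statements (names are assumptions, not the parser's real API):

```python
# Sketch: collect "mysql>"-prefixed lines as SQL statements, joining
# backslash-continued lines; everything else is left for sh/bash.
def extract_sql(console_lines):
    statements, current = [], ""
    for line in console_lines:
        stripped = line.strip()
        if stripped.startswith("mysql>"):
            current = stripped[len("mysql>"):].strip()
        elif current:
            current += " " + stripped
        else:
            continue  # shell line, not SQL
        if current.endswith("\\"):
            current = current[:-1].rstrip()  # statement continues
        else:
            statements.append(current)
            current = ""
    return statements
```

A `pgsql>` variant for other databases would only change the prefix being matched.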
#. Source the ``admin`` credentials to gain access to admin-only CLI
@ -41,6 +47,8 @@ must create a database, service credentials, and API endpoints.
$ . admin-openrc
.. end
#. To create the service credentials, complete these steps:
* Create the ``neutron`` user:
@ -48,6 +56,7 @@ must create a database, service credentials, and API endpoints.
.. code-block:: console
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+-----------+----------------------------------+
@ -59,6 +68,7 @@ must create a database, service credentials, and API endpoints.
| name | neutron |
+-----------+----------------------------------+
.. end
* Add the ``admin`` role to the ``neutron`` user:
@ -66,6 +76,8 @@ must create a database, service credentials, and API endpoints.
$ openstack role add --project service --user neutron admin
.. end
.. note::
This command provides no output.
@ -76,6 +88,7 @@ must create a database, service credentials, and API endpoints.
$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
@ -86,12 +99,15 @@ must create a database, service credentials, and API endpoints.
| type | network |
+-------------+----------------------------------+
.. end
#. Create the Networking service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
@ -108,6 +124,7 @@ must create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
@ -124,6 +141,7 @@ must create a database, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
@ -138,6 +156,8 @@ must create a database, service credentials, and API endpoints.
| url | http://controller:9696 |
+--------------+----------------------------------+
.. end
Configure networking options
----------------------------
@ -193,6 +213,7 @@ such as credentials to instances.
* In the ``[DEFAULT]`` section, configure the metadata host and shared
secret:
.. path /etc/neutron/metadata_agent.ini
.. code-block:: ini
[DEFAULT]
@ -200,6 +221,8 @@ such as credentials to instances.
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
.. end
Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
Configure Compute to use Networking
@ -210,6 +233,7 @@ Configure Compute to use Networking
* In the ``[neutron]`` section, configure access parameters, enable the
metadata proxy, and configure the secret:
.. path /etc/nova/nova.conf
.. code-block:: ini
[neutron]
@ -223,10 +247,11 @@ Configure Compute to use Networking
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
.. end
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
@ -247,6 +272,8 @@ Finalize installation
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
.. end
#. Populate the database:
.. code-block:: console
@ -254,6 +281,8 @@ Finalize installation
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
.. end
.. note::
Database population occurs later for Networking because the script
@ -265,6 +294,8 @@ Finalize installation
# systemctl restart openstack-nova-api.service
.. end
#. Start the Networking services and configure them to start when the system
boots.
@ -279,6 +310,8 @@ Finalize installation
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
.. end
For networking option 2, also enable and start the layer-3 service:
.. code-block:: console
@ -286,6 +319,10 @@ Finalize installation
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
.. end
.. endonly
.. only:: obs
#. Restart the Compute API service:
@ -294,6 +331,8 @@ Finalize installation
# systemctl restart openstack-nova-api.service
.. end
#. Start the Networking services and configure them to start when the system
boots.
@ -310,6 +349,8 @@ Finalize installation
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service
.. end
For networking option 2, also enable and start the layer-3 service:
.. code-block:: console
@ -317,6 +358,10 @@ Finalize installation
# systemctl enable openstack-neutron-l3-agent.service
# systemctl start openstack-neutron-l3-agent.service
.. end
.. endonly
.. only:: ubuntu or debian
#. Populate the database:
@ -326,6 +371,8 @@ Finalize installation
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
.. end
.. note::
Database population occurs later for Networking because the script
@ -337,6 +384,8 @@ Finalize installation
# service nova-api restart
.. end
#. Restart the Networking services.
For both networking options:
@ -348,8 +397,14 @@ Finalize installation
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
.. end
For networking option 2, also restart the layer-3 service:
.. code-block:: console
# service neutron-l3-agent restart
.. end
.. endonly


@ -12,6 +12,7 @@ List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
@ -21,5 +22,7 @@ List agents to verify successful launch of the neutron agents:
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
.. end
The output should indicate three agents on the controller node and one
agent on each compute node.
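The empty line between `$ neutron agent-list` and its table is one of the deliberate "extra new lines" this syntax change introduces: it tells the parser where the command ends and the illustrative output begins. A sketch of that split (assuming one command per console block):

```python
# Sketch: in a console block, the first empty line separates the
# command to run from expected output shown only for the reader.
def split_command_and_output(console_lines):
    for i, line in enumerate(console_lines):
        if not line.strip():
            return console_lines[:i], console_lines[i + 1:]
    return console_lines, []
```

Only the first element would be handed to BASH; the table is never executed.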


@ -12,6 +12,7 @@ List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
@ -22,5 +23,7 @@ List agents to verify successful launch of the neutron agents:
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
.. end
The output should indicate four agents on the controller node and one
agent on each compute node.


@ -12,12 +12,15 @@ Verify operation
$ . admin-openrc
.. end
#. List loaded extensions to verify successful launch of the
``neutron-server`` process:
.. code-block:: console
$ neutron ext-list
+---------------------------+-----------------------------------------------+
| alias | name |
+---------------------------+-----------------------------------------------+
@ -55,6 +58,8 @@ Verify operation
| dvr | Distributed Virtual Router |
+---------------------------+-----------------------------------------------+
.. end
.. note::
Actual output may differ slightly from this example.


@ -34,6 +34,10 @@ Install and configure components
# zypper install openstack-nova-compute genisoimage kvm libvirt
.. end
.. endonly
.. only:: rdo
#. Install the packages:
@ -42,6 +46,10 @@ Install and configure components
# yum install openstack-nova-compute
.. end
.. endonly
.. only:: ubuntu or debian
#. Install the packages:
@ -50,6 +58,10 @@ Install and configure components
# apt-get install nova-compute
.. end
.. endonly
.. only:: debian
Respond to prompts for debconf.
@ -60,21 +72,27 @@ Install and configure components
sure that you do not activate database management handling by debconf,
as a compute node should not access the central database.
.. endonly
2. Edit the ``/etc/nova/nova.conf`` file and
complete the following actions:
* In the ``[DEFAULT]`` section, enable only the compute and
metadata APIs:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
.. end
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]``
sections, configure ``RabbitMQ`` message queue access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
@ -87,12 +105,15 @@ Install and configure components
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
.. end
Replace ``RABBIT_PASS`` with the password you chose for
the ``openstack`` account in ``RabbitMQ``.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
@ -111,6 +132,8 @@ Install and configure components
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the
``nova`` user in the Identity service.
@ -125,27 +148,35 @@ Install and configure components
is correctly set (this value is handled by the config and postinst
scripts of the ``nova-common`` package using debconf):
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
.. end
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network interface on your compute node,
typically 10.0.0.31 for the first node in the
:ref:`example architecture <overview-example-architectures>`.
.. endonly
.. only:: obs or rdo or ubuntu
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
.. end
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network interface on your compute node,
typically 10.0.0.31 for the first node in the
@ -153,6 +184,7 @@ Install and configure components
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
@ -160,6 +192,8 @@ Install and configure components
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. end
.. note::
By default, Compute uses an internal firewall service. Since
@ -167,8 +201,11 @@ Install and configure components
firewall service by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
.. endonly
* In the ``[vnc]`` section, enable and configure remote console access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[vnc]
@ -178,6 +215,8 @@ Install and configure components
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
.. end
The server component listens on all IP addresses and the proxy
component only listens on the management interface IP address of
the compute node. The base URL indicates the location where you
@ -194,32 +233,45 @@ Install and configure components
* In the ``[glance]`` section, configure the location of the
Image service API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[glance]
...
api_servers = http://controller:9292
.. end
.. only:: obs
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/run/nova
.. end
.. endonly
.. only:: rdo or ubuntu
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
.. end
.. endonly
.. only:: ubuntu
.. todo:
@ -229,6 +281,8 @@ Install and configure components
* Due to a packaging bug, remove the ``logdir`` option from the
``[DEFAULT]`` section.
.. endonly
.. only:: obs or debian
3. Ensure the kernel module ``nbd`` is loaded.
@ -237,9 +291,13 @@ Install and configure components
# modprobe nbd
.. end
4. Ensure the module loads on every boot by adding ``nbd``
to the ``/etc/modules-load.d/nbd.conf`` file.
.. endonly
Finalize installation
---------------------
@ -250,6 +308,8 @@ Finalize installation
$ egrep -c '(vmx|svm)' /proc/cpuinfo
.. end
If this command returns a value of ``one or greater``, your compute
node supports hardware acceleration, which typically requires no
additional configuration.
@ -263,23 +323,33 @@ Finalize installation
* Edit the ``[libvirt]`` section in the
``/etc/nova/nova.conf`` file as follows:
.. path /etc/nova/nova.conf
.. code-block:: ini
[libvirt]
...
virt_type = qemu
.. end
.. endonly
.. only:: ubuntu
* Edit the ``[libvirt]`` section in the
``/etc/nova/nova-compute.conf`` file as follows:
.. path /etc/nova/nova-compute.conf
.. code-block:: ini
[libvirt]
...
virt_type = qemu
.. end
.. endonly
.. only:: debian
* Replace the ``nova-compute-kvm`` package with ``nova-compute-qemu``
@ -290,6 +360,10 @@ Finalize installation
# apt-get install nova-compute-qemu
.. end
.. endonly
.. only:: obs or rdo
2. Start the Compute service including its dependencies and configure
@ -300,6 +374,10 @@ Finalize installation
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
.. end
.. endonly
.. only:: ubuntu or debian
2. Restart the Compute service:
@ -307,3 +385,7 @@ Finalize installation
.. code-block:: console
# service nova-compute restart
.. end
.. endonly


@ -19,26 +19,32 @@ create databases, service credentials, and API endpoints.
$ mysql -u root -p
.. end
* Create the ``nova_api`` and ``nova`` databases:
.. code-block:: console
CREATE DATABASE nova_api;
CREATE DATABASE nova;
mysql> CREATE DATABASE nova_api;
mysql> CREATE DATABASE nova;
.. end
* Grant proper access to the databases:
.. code-block:: console
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
.. end
Replace ``NOVA_DBPASS`` with a suitable password.
* Exit the database access client.
@ -50,6 +56,8 @@ create databases, service credentials, and API endpoints.
$ . admin-openrc
.. end
#. To create the service credentials, complete these steps:
* Create the ``nova`` user:
@ -58,6 +66,7 @@ create databases, service credentials, and API endpoints.
$ openstack user create --domain default \
--password-prompt nova
User Password:
Repeat User Password:
+-----------+----------------------------------+
@ -69,12 +78,16 @@ create databases, service credentials, and API endpoints.
| name | nova |
+-----------+----------------------------------+
.. end
* Add the ``admin`` role to the ``nova`` user:
.. code-block:: console
$ openstack role add --project service --user nova admin
.. end
.. note::
This command provides no output.
@@ -85,6 +98,7 @@ create databases, service credentials, and API endpoints.
$ openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
@@ -95,12 +109,15 @@ create databases, service credentials, and API endpoints.
| type | compute |
+-------------+----------------------------------+
.. end
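The blank lines this change inserts after each command serve as the parser's end-of-command signal: the tables and prompts that follow (such as the ``openstack service create`` output above) are expected output for the reader, not commands to run. A minimal sketch of that split, assuming one command per console block (hypothetical helper name):

```python
def split_command_output(block_lines):
    """Split a console block at its first blank line.

    Lines before the blank line form the command (backslash
    continuations are joined); lines after it are expected output
    and are discarded.
    """
    command_lines = []
    for raw in block_lines:
        if not raw.strip():
            break  # first empty line marks the end of the command
        command_lines.append(raw.strip().rstrip("\\").rstrip())
    if not command_lines:
        return ""
    # Drop the "$ " or "# " prompt from the first line.
    command_lines[0] = command_lines[0].lstrip("$# ").strip()
    return " ".join(command_lines)
```

This matches the ``'\r\n'`` heuristic described in the commit message: the first empty line terminates the runnable part of the block.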
#. Create the Compute service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
@@ -117,6 +134,7 @@ create databases, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
@@ -133,6 +151,7 @@ create databases, service credentials, and API endpoints.
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
@@ -147,6 +166,8 @@ create databases, service credentials, and API endpoints.
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
.. end
Install and configure components
--------------------------------
@@ -162,6 +183,10 @@ Install and configure components
openstack-nova-conductor openstack-nova-consoleauth \
openstack-nova-novncproxy iptables
.. end
.. endonly
.. only:: rdo
#. Install the packages:
@@ -172,6 +197,10 @@ Install and configure components
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler
.. end
.. endonly
.. only:: ubuntu
#. Install the packages:
@@ -181,6 +210,10 @@ Install and configure components
# apt-get install nova-api nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler
.. end
.. endonly
.. only:: debian
#. Install the packages:
@@ -190,6 +223,8 @@ Install and configure components
# apt-get install nova-api nova-conductor nova-consoleauth \
nova-consoleproxy nova-scheduler
.. end
.. note::
``nova-api-metadata`` is included in the ``nova-api`` package,
@@ -204,21 +239,27 @@ Install and configure components
You can also manually edit the ``/etc/default/nova-consoleproxy``
file, and stop and start the console daemons.
.. endonly
2. Edit the ``/etc/nova/nova.conf`` file and
complete the following actions:
* In the ``[DEFAULT]`` section, enable only the compute and metadata
APIs:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
.. end
* In the ``[api_database]`` and ``[database]`` sections, configure
database access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api_database]
@@ -229,12 +270,15 @@ Install and configure components
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
.. end
Replace ``NOVA_DBPASS`` with the password you chose for
the Compute databases.
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure ``RabbitMQ`` message queue access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
@@ -247,12 +291,15 @@ Install and configure components
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
.. end
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in ``RabbitMQ``.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
@@ -271,6 +318,8 @@ Install and configure components
username = nova
password = NOVA_PASS
.. end
Replace ``NOVA_PASS`` with the password you chose for the
``nova`` user in the Identity service.
@@ -282,12 +331,15 @@ Install and configure components
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
use the management interface IP address of the controller node:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
...
my_ip = 10.0.0.11
.. end
.. only:: debian
* The ``.config`` and ``.postinst`` maintainer scripts of the
@@ -296,14 +348,20 @@ Install and configure components
value will normally still be prompted, and you can check that it
is correct in the nova.conf after ``nova-common`` is installed:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
...
my_ip = 10.0.0.11
.. end
.. endonly
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
@@ -311,6 +369,8 @@ Install and configure components
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. end
.. note::
By default, Compute uses an internal firewall driver. Since the
@@ -321,6 +381,7 @@ Install and configure components
* In the ``[vnc]`` section, configure the VNC proxy to use the management
interface IP address of the controller node:
.. path /etc/nova/nova.conf
.. code-block:: ini
[vnc]
@@ -328,45 +389,65 @@ Install and configure components
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
.. end
* In the ``[glance]`` section, configure the location of the
Image service API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[glance]
...
api_servers = http://controller:9292
.. end
.. only:: obs
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/run/nova
.. end
.. endonly
.. only:: rdo
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
.. end
.. endonly
.. only:: ubuntu
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
.. end
.. endonly
.. only:: ubuntu
.. todo:
@@ -376,6 +457,8 @@ Install and configure components
* Due to a packaging bug, remove the ``logdir`` option from the
``[DEFAULT]`` section.
.. endonly
.. only:: rdo or ubuntu or debian
3. Populate the Compute databases:
@@ -385,10 +468,14 @@ Install and configure components
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
.. end
.. note::
Ignore any deprecation messages in this output.
.. endonly
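The ``.. path`` directives added before each ``nova.conf`` fragment in step 2 name the configuration file explicitly, so the parser does not have to recover the path from the surrounding description text. A sketch of how one path-plus-ini fragment could be turned into machine-usable edits (hypothetical helper and return shape, not the actual training-labs code):

```python
def parse_config_edit(lines):
    """Parse a '.. path' directive plus ini fragment into edits.

    Returns the target file path and a list of
    (section, key, value) triples for each assignment.
    """
    path = None
    section = None
    edits = []
    for raw in lines:
        line = raw.strip()
        if line.startswith(".. path"):
            path = line.split(None, 2)[2]  # third token is the file path
        elif line.startswith("[") and line.endswith("]"):
            section = line[1:-1]  # current ini section
        elif "=" in line:
            key, _, value = line.partition("=")
            edits.append((section, key.strip(), value.strip()))
    return path, edits
```

The resulting triples could then drive any ini-editing tool (crudini, or an iniset-style shell function) against the named file.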
Finalize installation
---------------------
@@ -406,6 +493,10 @@ Finalize installation
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
.. end
.. endonly
.. only:: rdo
* Start the Compute services and configure them to start
@@ -420,6 +511,10 @@ Finalize installation
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
.. end
.. endonly
.. only:: ubuntu or debian
* Restart the Compute services:
@@ -431,3 +526,7 @@ Finalize installation
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart
.. end
.. endonly
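The ``.. endonly`` markers added in this file close each ``.. only::`` span explicitly, so a parser can include or exclude distro-specific blocks without tracking indentation. A minimal sketch of that switch, assuming the unnested usage this change recommends (hypothetical helper; nesting deliberately unsupported):

```python
def filter_distro(lines, distro):
    """Keep lines outside '.. only::' blocks, plus lines inside blocks
    whose distro expression mentions the chosen distro.

    Blocks are closed by '.. endonly'; nesting is not supported,
    matching the guidance to avoid nested only blocks.
    """
    keep = []
    active = True  # currently emitting lines?
    for raw in lines:
        line = raw.strip()
        if line.startswith(".. only::"):
            # "ubuntu or debian" -> ["ubuntu", "debian"]
            distros = line[len(".. only::"):].replace(" or ", " ").split()
            active = distro in distros
        elif line == ".. endonly":
            active = True
        elif active:
            keep.append(raw)
    return keep
```

With explicit terminators the same source produces a clean per-distro command stream, which is exactly what the BASH generator needs.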

View File

@@ -14,12 +14,15 @@ Verify operation of the Compute service.
$ . admin-openrc
.. end
#. List service components to verify successful launch and
registration of each process:
.. code-block:: console
$ openstack compute service list
+----+--------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+--------------------+------------+----------+---------+-------+----------------------------+
@@ -29,6 +32,8 @@ Verify operation of the Compute service.
| 4 | nova-compute | compute1 | nova | enabled | up | 2016-02-09T23:11:20.000000 |
+----+--------------------+------------+----------+---------+-------+----------------------------+
.. end
.. note::
This output should indicate three service components enabled on

View File

@@ -1,5 +1,6 @@
Edit the ``/etc/hosts`` file to contain the following:
.. path /etc/hosts
.. code-block:: ini
# controller
@@ -17,6 +18,8 @@ Edit the ``/etc/hosts`` file to contain the following:
# object2
10.0.0.52 object2
.. end
.. warning::
Some distributions add an extraneous entry in the ``/etc/hosts``