[install] Fix various minor problems

Fix the following minor problems to reduce work after stable/liberty
branching:

1) RDO: Revert Python MySQL library from PyMySQL to MySQL-python due
   to lack of support for the former.
2) RDO: Explicitly install 'ebtables' and 'ipset' packages due to
   dependency problems.
3) General: Change numbered list to bulleted list for lists with only
   one item.
4) General: Restructure horizon content to match other services. More
   duplication of content, but sometimes RST conditionals are terrible
   and distro packages should use the same configuration files.
5) General: Restructure NoSQL content to match SQL content.
6) General: Improve clarity of NTP content.

Change-Id: I2620250aa27c7d41b525aa2646ad25e0692140c4
Closes-Bug: #1514760
Closes-Bug: #1514683
Implements: bp installguide-liberty
This commit is contained in:
parent e135ca8442
commit 5ee72cfa36
@@ -14,13 +14,13 @@ Configure Cinder to use Telemetry
 Edit the ``/etc/cinder/cinder.conf`` file and complete the
 following actions:

-#. In the ``[DEFAULT]`` section, configure notifications:
+* In the ``[DEFAULT]`` section, configure notifications:

    .. code-block:: ini

       [DEFAULT]
       ...
       notification_driver = messagingv2

 Finalize installation
 ---------------------
@@ -7,45 +7,45 @@ these steps on the controller node.
 Configure the Image service to use Telemetry
 --------------------------------------------

-Edit the ``/etc/glance/glance-api.conf`` and
+* Edit the ``/etc/glance/glance-api.conf`` and
   ``/etc/glance/glance-registry.conf`` files and
   complete the following actions:

-  #. In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
+  * In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
     configure notifications and RabbitMQ message broker access:

     .. code-block:: ini

        [DEFAULT]
        ...
        notification_driver = messagingv2
        rpc_backend = rabbit

        [oslo_messaging_rabbit]
        ...
        rabbit_host = controller
        rabbit_userid = openstack
        rabbit_password = RABBIT_PASS

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

 Finalize installation
 ---------------------

 .. only:: obs or rdo

-   #. Restart the Image service:
+   * Restart the Image service:

      .. code-block:: console

         # systemctl restart openstack-glance-api.service openstack-glance-registry.service

 .. only:: ubuntu

-   #. Restart the Image service:
+   * Restart the Image service:

      .. code-block:: console

         # service glance-registry restart
         # service glance-api restart
@@ -284,66 +284,66 @@ Install and configure components
        dispatcher = database

   * (Optional) To assist with troubleshooting, enable verbose
     logging in the ``[DEFAULT]`` section:

     .. code-block:: ini

        [DEFAULT]
        ...
        verbose = True

 Finalize installation
 ---------------------

 .. only:: obs

-   #. Start the Telemetry services and configure them to start when the
+   * Start the Telemetry services and configure them to start when the
      system boots:

      .. code-block:: console

         # systemctl enable openstack-ceilometer-api.service \
           openstack-ceilometer-agent-notification.service \
           openstack-ceilometer-agent-central.service \
           openstack-ceilometer-collector.service \
           openstack-ceilometer-alarm-evaluator.service \
           openstack-ceilometer-alarm-notifier.service
         # systemctl start openstack-ceilometer-api.service \
           openstack-ceilometer-agent-notification.service \
           openstack-ceilometer-agent-central.service \
           openstack-ceilometer-collector.service \
           openstack-ceilometer-alarm-evaluator.service \
           openstack-ceilometer-alarm-notifier.service

 .. only:: rdo

-   #. Start the Telemetry services and configure them to start when the
+   * Start the Telemetry services and configure them to start when the
      system boots:

      .. code-block:: console

         # systemctl enable openstack-ceilometer-api.service \
           openstack-ceilometer-notification.service \
           openstack-ceilometer-central.service \
           openstack-ceilometer-collector.service \
           openstack-ceilometer-alarm-evaluator.service \
           openstack-ceilometer-alarm-notifier.service
         # systemctl start openstack-ceilometer-api.service \
           openstack-ceilometer-notification.service \
           openstack-ceilometer-central.service \
           openstack-ceilometer-collector.service \
           openstack-ceilometer-alarm-evaluator.service \
           openstack-ceilometer-alarm-notifier.service

 .. only:: ubuntu

-   #. Restart the Telemetry services:
+   * Restart the Telemetry services:

      .. code-block:: console

         # service ceilometer-agent-central restart
         # service ceilometer-agent-notification restart
         # service ceilometer-api restart
         # service ceilometer-collector restart
         # service ceilometer-alarm-evaluator restart
         # service ceilometer-alarm-notifier restart
@@ -121,8 +121,7 @@ Finalize installation

 .. only:: obs

-   #. Start the Telemetry agent and configure it to start when the
-      system boots:
+   #. Start the agent and configure it to start when the system boots:

      .. code-block:: console

@@ -131,8 +130,7 @@ Finalize installation

 .. only:: rdo

-   #. Start the Telemetry agent and configure it to start when the
-      system boots:
+   #. Start the agent and configure it to start when the system boots:

      .. code-block:: console

@@ -73,59 +73,59 @@ Configure Object Storage to use Telemetry
 Perform these steps on the controller and any other nodes that
 run the Object Storage proxy service.

-#. Edit the ``/etc/swift/proxy-server.conf`` file
+* Edit the ``/etc/swift/proxy-server.conf`` file
   and complete the following actions:

   * In the ``[filter:keystoneauth]`` section, add the
     ``ResellerAdmin`` role:

     .. code-block:: ini

        [filter:keystoneauth]
        ...
        operator_roles = admin, user, ResellerAdmin

   * In the ``[pipeline:main]`` section, add ``ceilometer``:

     .. code-block:: ini

        [pipeline:main]
        pipeline = catch_errors gatekeeper healthcheck proxy-logging cache
          container_sync bulk ratelimit authtoken keystoneauth container-quotas
          account-quotas slo dlo versioned_writes proxy-logging ceilometer
          proxy-server

   * In the ``[filter:ceilometer]`` section, configure notifications:

     .. code-block:: ini

        [filter:ceilometer]
        paste.filter_factory = ceilometermiddleware.swift:filter_factory
        ...
        control_exchange = swift
        url = rabbit://openstack:RABBIT_PASS@controller:5672/
        driver = messagingv2
        topic = notifications
        log_level = WARN

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

 Finalize installation
 ---------------------

 .. only:: rdo or obs

-   #. Restart the Object Storage proxy service:
+   * Restart the Object Storage proxy service:

      .. code-block:: console

         # systemctl restart openstack-swift-proxy.service

 .. only:: ubuntu

-   #. Restart the Object Storage proxy service:
+   * Restart the Object Storage proxy service:

      .. code-block:: console

         # service swift-proxy restart
@@ -314,23 +314,23 @@ Finalize installation

 .. only:: obs

-   #. Start the Block Storage volume service including its dependencies
+   * Start the Block Storage volume service including its dependencies
      and configure them to start when the system boots:

      .. code-block:: console

         # systemctl enable openstack-cinder-volume.service tgtd.service
         # systemctl start openstack-cinder-volume.service tgtd.service

 .. only:: rdo

-   #. Start the Block Storage volume service including its dependencies
+   * Start the Block Storage volume service including its dependencies
      and configure them to start when the system boots:

      .. code-block:: console

         # systemctl enable openstack-cinder-volume.service target.service
         # systemctl start openstack-cinder-volume.service target.service

 .. only:: ubuntu

@@ -1,207 +0,0 @@
-=====================
-Install and configure
-=====================
-
-This section describes how to install and configure the dashboard
-on the controller node.
-
-The dashboard relies on functional core services including
-Identity, Image service, Compute, and either Networking (neutron)
-or legacy networking (nova-network). Environments with
-stand-alone services such as Object Storage cannot use the
-dashboard. For more information, see the
-`developer documentation <http://docs.openstack.org/developer/
-horizon/topics/deployment.html>`__.
-
-This section assumes proper installation, configuration, and
-operation of the Identity service using the Apache HTTP server and
-Memcached as described in the ":doc:`keystone-install`" section.
-
-To install the dashboard components
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. only:: obs
-
-   * Install the packages:
-
-     .. code-block:: console
-
-        # zypper install openstack-dashboard apache2-mod_wsgi \
-          memcached python-python-memcached
-
-.. only:: rdo
-
-   * Install the packages:
-
-     .. code-block:: console
-
-        # yum install openstack-dashboard httpd mod_wsgi \
-          memcached python-memcached
-
-.. only:: ubuntu
-
-   * Install the packages:
-
-     .. code-block:: console
-
-        # apt-get install openstack-dashboard
-
-.. only:: debian
-
-   * Install the packages:
-
-     .. code-block:: console
-
-        # apt-get install openstack-dashboard-apache
-
-   * Respond to prompts for web server configuration.
-
-     .. note::
-
-        The automatic configuration process generates a self-signed
-        SSL certificate. Consider obtaining an official certificate
-        for production environments.
-
-     .. note::
-
-        There are two modes of installation. One using ``/horizon`` as the URL,
-        keeping your default vhost and only adding an Alias directive: this is
-        the default. The other mode will remove the default Apache vhost and install
-        the dashboard on the webroot. It was the only available option
-        before the Liberty release. If you prefer to set the Apache configuration
-        manually, install the ``openstack-dashboard`` package instead of
-        ``openstack-dashboard-apache``.
-
-.. only:: ubuntu
-
-   .. note::
-
-      Ubuntu installs the ``openstack-dashboard-ubuntu-theme``
-      package as a dependency. Some users reported issues with
-      this theme in previous releases. If you encounter issues,
-      remove this package to restore the original OpenStack theme.
-
-To configure the dashboard
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. only:: obs
-
-   * Configure the web server:
-
-     .. code-block:: console
-
-        # cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
-          /etc/apache2/conf.d/openstack-dashboard.conf
-        # a2enmod rewrite;a2enmod ssl;a2enmod wsgi
-
-.. only:: obs
-
-   * Edit the
-     ``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
-     file and complete the following actions:
-
-.. only:: rdo
-
-   * Edit the
-     ``/etc/openstack-dashboard/local_settings``
-     file and complete the following actions:
-
-.. only:: ubuntu or debian
-
-   * Edit the
-     ``/etc/openstack-dashboard/local_settings.py``
-     file and complete the following actions:
-
-* Configure the dashboard to use OpenStack services on the
-  ``controller`` node:
-
-  .. code-block:: ini
-
-     OPENSTACK_HOST = "controller"
-
-* Allow all hosts to access the dashboard:
-
-  .. code-block:: ini
-
-     ALLOWED_HOSTS = ['*', ]
-
-* Configure the ``memcached`` session storage service:
-
-  .. code-block:: ini
-
-     CACHES = {
-         'default': {
-              'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
-              'LOCATION': '127.0.0.1:11211',
-         }
-     }
-
-  .. note::
-
-     Comment out any other session storage configuration.
-
-  .. only:: obs
-
-     .. note::
-
-        By default, SLES and openSUSE use an SQL database for session
-        storage. For simplicity, we recommend changing the configuration
-        to use ``memcached`` for session storage.
-
-* Configure ``user`` as the default role for
-  users that you create via the dashboard:
-
-  .. code-block:: ini
-
-     OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
-
-* Optionally, configure the time zone:
-
-  .. code-block:: ini
-
-     TIME_ZONE = "TIME_ZONE"
-
-  Replace ``TIME_ZONE`` with an appropriate time zone identifier.
-  For more information, see the `list of time zones
-  <http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
-
-To finalize installation
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. only:: ubuntu or debian
-
-   Reload the web server configuration:
-
-   .. code-block:: console
-
-      # service apache2 reload
-
-.. only:: obs
-
-   Start the web server and session storage service and configure
-   them to start when the system boots:
-
-   .. code-block:: console
-
-      # systemctl enable apache2.service memcached.service
-      # systemctl restart apache2.service memcached.service
-
-   .. note::
-
-      The ``systemctl restart`` command starts the Apache HTTP service if
-      not currently running.
-
-.. only:: rdo
-
-   Start the web server and session storage service and configure
-   them to start when the system boots:
-
-   .. code-block:: console
-
-      # systemctl enable httpd.service memcached.service
-      # systemctl restart httpd.service memcached.service
-
-   .. note::
-
-      The ``systemctl restart`` command starts the Apache HTTP service if
-      not currently running.
@@ -12,36 +12,32 @@ service because most distributions support it. If you prefer to
 implement a different message queue service, consult the documentation
 associated with it.

-Install the message queue service
----------------------------------
+Install and configure components
+--------------------------------

-* Install the package:
+1. Install the package:

    .. only:: ubuntu or debian

       .. code-block:: console

          # apt-get install rabbitmq-server

    .. only:: rdo

       .. code-block:: console

          # yum install rabbitmq-server

    .. only:: obs

       .. code-block:: console

          # zypper install rabbitmq-server

-
-Configure the message queue service
------------------------------------
-
 .. only:: rdo or obs

-   #. Start the message queue service and configure it to start when the
+   2. Start the message queue service and configure it to start when the
       system boots:

       .. code-block:: console
@@ -71,7 +67,7 @@ Configure the message queue service

    * Start the message queue service again.

-   #. Add the ``openstack`` user:
+   3. Add the ``openstack`` user:

       .. code-block:: console

@@ -80,7 +76,7 @@ Configure the message queue service

       Replace ``RABBIT_PASS`` with a suitable password.

-   #. Permit configuration, write, and read access for the
+   4. Permit configuration, write, and read access for the
       ``openstack`` user:

       .. code-block:: console
@@ -90,7 +86,7 @@ Configure the message queue service

 .. only:: ubuntu or debian

-   #. Add the ``openstack`` user:
+   2. Add the ``openstack`` user:

       .. code-block:: console

@@ -99,7 +95,7 @@ Configure the message queue service

       Replace ``RABBIT_PASS`` with a suitable password.

-   #. Permit configuration, write, and read access for the
+   3. Permit configuration, write, and read access for the
       ``openstack`` user:

       .. code-block:: console
@@ -7,13 +7,13 @@ additional storage node.
 Configure network interfaces
 ----------------------------

-#. Configure the management interface:
+* Configure the management interface:

   * IP address: ``10.0.0.41``

   * Network mask: ``255.255.255.0`` (or ``/24``)

   * Default gateway: ``10.0.0.1``

 Configure name resolution
 -------------------------
@@ -10,13 +10,13 @@ First node
 Configure network interfaces
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-#. Configure the management interface:
+* Configure the management interface:

   * IP address: ``10.0.0.51``

   * Network mask: ``255.255.255.0`` (or ``/24``)

   * Default gateway: ``10.0.0.1``

 Configure name resolution
 ^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -33,13 +33,13 @@ Second node
 Configure network interfaces
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-#. Configure the management interface:
+* Configure the management interface:

   * IP address: ``10.0.0.52``

   * Network mask: ``255.255.255.0`` (or ``/24``)

   * Default gateway: ``10.0.0.1``

 Configure name resolution
 ^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -12,12 +12,12 @@ MongoDB.
 The installation of the NoSQL database server is only necessary when
 installing the Telemetry service as documented in :ref:`install_ceilometer`.

-Install and configure the database server
------------------------------------------
+Install and configure components
+--------------------------------

 .. only:: obs

-   #. Enable the Open Build Service repositories for MongoDB based on
+   1. Enable the Open Build Service repositories for MongoDB based on
      your openSUSE or SLES version:

      On openSUSE:
@@ -52,7 +52,7 @@ Install and configure the database server

 .. only:: rdo

-   #. Install the MongoDB package:
+   1. Install the MongoDB packages:

      .. code-block:: console

@@ -60,7 +60,7 @@ Install and configure the database server

 .. only:: ubuntu

-   #. Install the MongoDB package:
+   1. Install the MongoDB packages:

      .. code-block:: console

@@ -91,18 +91,8 @@ Install and configure the database server
      You can also disable journaling. For more information, see the
      `MongoDB manual <http://docs.mongodb.org/manual/>`__.

-   * Start the MongoDB service and configure it to start when
-     the system boots:
-
-     .. code-block:: console
-
-        # systemctl enable mongodb.service
-        # systemctl start mongodb.service
-
 .. only:: rdo

-   .. The use of mongod, and not mongodb, in the below screen is intentional.
-
    2. Edit the ``/etc/mongod.conf`` file and complete the following
       actions:

@@ -126,14 +116,6 @@ Install and configure the database server
      You can also disable journaling. For more information, see the
      `MongoDB manual <http://docs.mongodb.org/manual/>`__.

-   * Start the MongoDB service and configure it to start when
-     the system boots:
-
-     .. code-block:: console
-
-        # systemctl enable mongod.service
-        # systemctl start mongod.service
-
 .. only:: ubuntu

    2. Edit the ``/etc/mongodb.conf`` file and complete the following
@@ -156,14 +138,39 @@ Install and configure the database server

         smallfiles = true

-     If you change the journaling configuration, stop the MongoDB
-     service, remove the initial journal files, and start the service:
-
-     .. code-block:: console
-
-        # service mongodb stop
-        # rm /var/lib/mongodb/journal/prealloc.*
-        # service mongodb start
-
      You can also disable journaling. For more information, see the
      `MongoDB manual <http://docs.mongodb.org/manual/>`__.
+
+Finalize installation
+---------------------
+
+.. only:: ubuntu
+
+   * If you change the journaling configuration, stop the MongoDB
+     service, remove the initial journal files, and start the service:
+
+     .. code-block:: console
+
+        # service mongodb stop
+        # rm /var/lib/mongodb/journal/prealloc.*
+        # service mongodb start
+
+.. only:: rdo
+
+   * Start the MongoDB service and configure it to start when
+     the system boots:
+
+     .. code-block:: console
+
+        # systemctl enable mongod.service
+        # systemctl start mongod.service
+
+.. only:: obs
+
+   * Start the MongoDB service and configure it to start when
+     the system boots:
+
+     .. code-block:: console
+
+        # systemctl enable mongodb.service
+        # systemctl start mongodb.service
Controller node
~~~~~~~~~~~~~~~

Perform these steps on the controller node.

Install and configure components
--------------------------------

1. Install the packages:

   .. only:: ubuntu or debian

      .. code-block:: console

         # apt-get install chrony

   .. only:: rdo

      .. code-block:: console

         # yum install chrony

   .. only:: obs

      On openSUSE:

      .. code-block:: console

         # zypper addrepo -f obs://network:time/openSUSE_13.2 network_time
         # zypper refresh
         # zypper install chrony

      On SLES:

      .. code-block:: console

         # zypper addrepo -f obs://network:time/SLE_12 network_time
         # zypper refresh
         # zypper install chrony

      .. note::

         The packages are signed by GPG key ``17280DDF``. You should
         verify the fingerprint of the imported GPG key before using it.

         .. code-block:: console

            Key Name: network OBS Project <network@build.opensuse.org>
            Key Fingerprint: 0080689B E757A876 CB7DC269 62EB1A09 17280DDF
            Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
            Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC

.. only:: ubuntu or debian

   2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove the
      following keys as necessary for your environment:

      .. code-block:: ini

         server NTP_SERVER iburst

      Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
      accurate (lower stratum) NTP server. The configuration supports multiple
      ``server`` keys.

      .. note::

         By default, the controller node synchronizes the time via a pool of
         public servers. However, you can optionally configure alternative
         servers such as those provided by your organization.

   3. Restart the NTP service:

      .. code-block:: console

         # service chrony restart

.. only:: rdo or obs

   2. Edit the ``/etc/chrony.conf`` file and add, change, or remove the
      following keys as necessary for your environment:

      .. code-block:: ini

         server NTP_SERVER iburst

      Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
      accurate (lower stratum) NTP server. The configuration supports multiple
      ``server`` keys.

      .. note::

         By default, the controller node synchronizes the time via a pool of
         public servers. However, you can optionally configure alternative
         servers such as those provided by your organization.

   3. To enable other nodes to connect to the chrony daemon on the controller,
      add the following key to the ``/etc/chrony.conf`` file:

      .. code-block:: ini

         allow 10.0.0.0/24

      If necessary, replace ``10.0.0.0/24`` with a description of your subnet.

   4. Start the NTP service and configure it to start when the system boots:

      .. code-block:: console

         # systemctl enable chronyd.service
         # systemctl start chronyd.service
Other nodes
~~~~~~~~~~~

Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.

Install and configure components
--------------------------------

1. Install the packages:

   .. only:: ubuntu or debian

      .. code-block:: console

         # apt-get install chrony

   .. only:: rdo

      .. code-block:: console

         # yum install chrony

   .. only:: obs

      On openSUSE:

      .. code-block:: console

         # zypper addrepo -f obs://network:time/openSUSE_13.2 network_time
         # zypper refresh
         # zypper install chrony

      On SLES:

      .. code-block:: console

         # zypper addrepo -f obs://network:time/SLE_12 network_time
         # zypper refresh
         # zypper install chrony

      .. note::

         The packages are signed by GPG key ``17280DDF``. You should
         verify the fingerprint of the imported GPG key before using it.

         .. code-block:: console

            Key Name: network OBS Project <network@build.opensuse.org>
            Key Fingerprint: 0080689B E757A876 CB7DC269 62EB1A09 17280DDF
            Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
            Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC

.. only:: ubuntu or debian

   2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
      but one ``server`` key. Change it to reference the controller node:

      .. code-block:: ini

         server controller iburst

   3. Restart the NTP service:

      .. code-block:: console

         # service chrony restart

.. only:: rdo or obs

   2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
      ``server`` key. Change it to reference the controller node:

      .. code-block:: ini

         server controller iburst

   3. Start the NTP service and configure it to start when the system boots:

      .. code-block:: console

         # systemctl enable chronyd.service
         # systemctl start chronyd.service
Enable the OpenStack repository
-------------------------------

* On CentOS, the *extras* repository provides the RPM that enables the
  OpenStack repository. CentOS includes the *extras* repository by
  default, so you can simply install the package to enable the OpenStack
  repository.

  .. code-block:: console

     # yum install centos-release-openstack-liberty

* On RHEL, download and install the RDO repository RPM to enable the
  OpenStack repository.

  .. code-block:: console

     # yum install https://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm

.. only:: obs

`Debian website <http://backports.debian.org/Instructions/>`_,
which basically suggests the following steps:

#. On all nodes, add the Debian 8 (Jessie) backport repository to
   the source list:
services also support other SQL databases including
`PostgreSQL <http://www.postgresql.org/>`__.

Install and configure components
--------------------------------

#. Install the packages:

   .. only:: ubuntu

      .. code-block:: console

         # apt-get install mariadb-server python-pymysql

   .. only:: rdo

      .. code-block:: console

         # yum install mariadb mariadb-server MySQL-python

   .. only:: obs

#. In the database configuration file, set the UTF-8 character set and
   collation:

   .. code-block:: ini

      collation-server = utf8_general_ci
      character-set-server = utf8

Finalize installation
---------------------

.. only:: ubuntu or debian
.. only:: obs or rdo

   * Start the Image service services and configure them to start when
     the system boots:

     .. code-block:: console

        # systemctl enable openstack-glance-api.service \
          openstack-glance-registry.service
        # systemctl start openstack-glance-api.service \
          openstack-glance-registry.service

.. only:: ubuntu

Finalize installation
---------------------

.. only:: obs or rdo

   * Start the Orchestration services and configure them to start
     when the system boots:

     .. code-block:: console

        # systemctl enable openstack-heat-api.service \
          openstack-heat-api-cfn.service openstack-heat-engine.service
        # systemctl start openstack-heat-api.service \
          openstack-heat-api-cfn.service openstack-heat-engine.service

.. only:: ubuntu or debian

   1. Restart the Orchestration services:

      .. code-block:: console

         # service heat-api restart
         # service heat-api-cfn restart
         # service heat-engine restart
doc/install-guide/source/horizon-install.rst (new file, 279 lines)

Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The dashboard relies on functional core services including
Identity, Image service, Compute, and either Networking (neutron)
or legacy networking (nova-network). Environments with
stand-alone services such as Object Storage cannot use the
dashboard. For more information, see the
`developer documentation <http://docs.openstack.org/developer/
horizon/topics/deployment.html>`__.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. only:: obs or rdo or ubuntu

   .. include:: shared/note_configuration_vary_by_distribution.rst

.. only:: obs

   1. Install the packages:

      .. code-block:: console

         # zypper install openstack-dashboard

.. only:: rdo

   1. Install the packages:

      .. code-block:: console

         # yum install openstack-dashboard

.. only:: ubuntu

   1. Install the packages:

      .. code-block:: console

         # apt-get install openstack-dashboard

.. only:: debian

   1. Install the packages:

      .. code-block:: console

         # apt-get install openstack-dashboard-apache

   2. Respond to prompts for web server configuration.

      .. note::

         The automatic configuration process generates a self-signed
         SSL certificate. Consider obtaining an official certificate
         for production environments.

      .. note::

         There are two modes of installation. One using ``/horizon`` as the
         URL, keeping your default vhost and only adding an Alias directive:
         this is the default. The other mode will remove the default Apache
         vhost and install the dashboard on the webroot. It was the only
         available option before the Liberty release. If you prefer to set
         the Apache configuration manually, install the
         ``openstack-dashboard`` package instead of
         ``openstack-dashboard-apache``.

.. only:: obs

   2. Configure the web server:

      .. code-block:: console

         # cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
           /etc/apache2/conf.d/openstack-dashboard.conf
         # a2enmod rewrite;a2enmod ssl;a2enmod wsgi

   3. Edit the
      ``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
      file and complete the following actions:

      * Configure the dashboard to use OpenStack services on the
        ``controller`` node:

        .. code-block:: ini

           OPENSTACK_HOST = "controller"

      * Allow all hosts to access the dashboard:

        .. code-block:: ini

           ALLOWED_HOSTS = ['*', ]

      * Configure the ``memcached`` session storage service:

        .. code-block:: ini

           CACHES = {
               'default': {
                   'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                   'LOCATION': '127.0.0.1:11211',
               }
           }

        .. note::

           Comment out any other session storage configuration.

      * Configure ``user`` as the default role for
        users that you create via the dashboard:

        .. code-block:: ini

           OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

      * Optionally, configure the time zone:

        .. code-block:: ini

           TIME_ZONE = "TIME_ZONE"

        Replace ``TIME_ZONE`` with an appropriate time zone identifier.
        For more information, see the `list of time zones
        <http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

.. only:: rdo

   2. Edit the
      ``/etc/openstack-dashboard/local_settings``
      file and complete the following actions:

      * Configure the dashboard to use OpenStack services on the
        ``controller`` node:

        .. code-block:: ini

           OPENSTACK_HOST = "controller"

      * Allow all hosts to access the dashboard:

        .. code-block:: ini

           ALLOWED_HOSTS = ['*', ]

      * Configure the ``memcached`` session storage service:

        .. code-block:: ini

           CACHES = {
               'default': {
                   'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                   'LOCATION': '127.0.0.1:11211',
               }
           }

        .. note::

           Comment out any other session storage configuration.

      * Configure ``user`` as the default role for
        users that you create via the dashboard:

        .. code-block:: ini

           OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

      * Optionally, configure the time zone:

        .. code-block:: ini

           TIME_ZONE = "TIME_ZONE"

        Replace ``TIME_ZONE`` with an appropriate time zone identifier.
        For more information, see the `list of time zones
        <http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

.. only:: ubuntu

   2. Edit the
      ``/etc/openstack-dashboard/local_settings.py``
      file and complete the following actions:

      * Configure the dashboard to use OpenStack services on the
        ``controller`` node:

        .. code-block:: ini

           OPENSTACK_HOST = "controller"

      * Allow all hosts to access the dashboard:

        .. code-block:: ini

           ALLOWED_HOSTS = ['*', ]

      * Configure the ``memcached`` session storage service:

        .. code-block:: ini

           CACHES = {
               'default': {
                   'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                   'LOCATION': '127.0.0.1:11211',
               }
           }

        .. note::

           Comment out any other session storage configuration.

      * Configure ``user`` as the default role for
        users that you create via the dashboard:

        .. code-block:: ini

           OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

      * Optionally, configure the time zone:

        .. code-block:: ini

           TIME_ZONE = "TIME_ZONE"

        Replace ``TIME_ZONE`` with an appropriate time zone identifier.
        For more information, see the `list of time zones
        <http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

Finalize installation
---------------------

.. only:: ubuntu or debian

   * Reload the web server configuration:

     .. code-block:: console

        # service apache2 reload

.. only:: obs

   * Start the web server and session storage service and configure
     them to start when the system boots:

     .. code-block:: console

        # systemctl enable apache2.service memcached.service
        # systemctl restart apache2.service memcached.service

     .. note::

        The ``systemctl restart`` command starts the Apache HTTP service if
        not currently running.

.. only:: rdo

   * Start the web server and session storage service and configure
     them to start when the system boots:

     .. code-block:: console

        # systemctl enable httpd.service memcached.service
        # systemctl restart httpd.service memcached.service

     .. note::

        The ``systemctl restart`` command starts the Apache HTTP service if
        not currently running.
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

.. only:: obs or debian

This example deployment uses an Apache web server.

.. toctree::

   horizon-install.rst
   horizon-verify.rst
   horizon-next-steps.rst
.. _keystone-install:

Install and configure
~~~~~~~~~~~~~~~~~~~~~
in the `Heat developer documentation
<http://docs.openstack.org/developer/heat/index.html>`__.

* Create the ``demo-template.yml`` file with the following content:

  .. code-block:: yaml

     heat_template_version: 2015-10-15
     description: Launch a basic instance using the ``m1.tiny`` flavor and one network.

     parameters:
       ImageID:
         type: string
         description: Image to use for the instance.
       NetID:
         type: string
         description: Network ID to use for the instance.

     resources:
       server:
         type: OS::Nova::Server
         properties:
           image: { get_param: ImageID }
           flavor: m1.tiny
           networks:
           - network: { get_param: NetID }

     outputs:
       instance_name:
         description: Name of the instance.
         value: { get_attr: [ server, name ] }
       instance_ip:
         description: IP address of the instance.
         value: { get_attr: [ server, first_address ] }

Create a stack
--------------
images such as CirrOS, we recommend allowing at least ICMP (ping) and
secure shell (SSH).

* Add rules to the ``default`` security group:

  * Permit :term:`ICMP` (ping):

    .. code-block:: console

       $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
       +-------------+-----------+---------+-----------+--------------+
       | IP Protocol | From Port | To Port | IP Range  | Source Group |
       +-------------+-----------+---------+-----------+--------------+
       | icmp        | -1        | -1      | 0.0.0.0/0 |              |
       +-------------+-----------+---------+-----------+--------------+

  * Permit secure shell (SSH) access:

    .. code-block:: console

       $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
       +-------------+-----------+---------+-----------+--------------+
       | IP Protocol | From Port | To Port | IP Range  | Source Group |
       +-------------+-----------+---------+-----------+--------------+
       | tcp         | 22        | 22      | 0.0.0.0/0 |              |
       +-------------+-----------+---------+-----------+--------------+

Launch an instance
------------------
@ -10,44 +10,44 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the public virtual network to the
    public physical network interface:

    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

    Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
    public network interface.

  * In the ``[vxlan]`` section, disable VXLAN overlay networks:

    .. code-block:: ini

       [vxlan]
       enable_vxlan = False

  * In the ``[agent]`` section, enable ARP spoofing protection:

    .. code-block:: ini

       [agent]
       ...
       prevent_arp_spoofing = True

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_security_group = True
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Return to
:ref:`Networking compute node configuration <neutron-compute-compute>`.
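The four snippets above can be cross-checked in one pass. A minimal sketch, not part of the official guide, that parses the finished file with Python's ``configparser``; ``eth1`` is only a stand-in for ``PUBLIC_INTERFACE_NAME``:

```python
# Sanity-check a finished linuxbridge_agent.ini against the options this
# section sets. "eth1" below is a placeholder for PUBLIC_INTERFACE_NAME.
import configparser

EXPECTED = {
    "linux_bridge": {"physical_interface_mappings": "public:eth1"},
    "vxlan": {"enable_vxlan": "False"},
    "agent": {"prevent_arp_spoofing": "True"},
    "securitygroup": {
        "enable_security_group": "True",
        "firewall_driver": "neutron.agent.linux.iptables_firewall."
                           "IptablesFirewallDriver",
    },
}

def missing_options(ini_text):
    """Return section/option pairs that are absent or differ from EXPECTED."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    problems = []
    for section, options in EXPECTED.items():
        for key, want in options.items():
            # get() returns the fallback for a missing section or option.
            if cfg.get(section, key, fallback=None) != want:
                problems.append(section + "/" + key)
    return problems
```

Feed it the contents of ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini``; an empty list means every option above is present with the expected value.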
@ -10,52 +10,52 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the public virtual network to the
    public physical network interface:

    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

    Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
    public network interface.

  * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
    IP address of the physical network interface that handles overlay
    networks, and enable layer-2 population:

    .. code-block:: ini

       [vxlan]
       enable_vxlan = True
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS
       l2_population = True

    Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface.

  * In the ``[agent]`` section, enable ARP spoofing protection:

    .. code-block:: ini

       [agent]
       ...
       prevent_arp_spoofing = True

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_security_group = True
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Return to
:ref:`Networking compute node configuration <neutron-compute-compute>`.
@ -19,7 +19,7 @@ Install the components

.. code-block:: console

   # yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset

.. only:: obs
@ -60,76 +60,76 @@ authentication mechanism, message queue, and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, comment out any ``connection`` options
    because compute nodes do not directly access the database.

  * In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure
    RabbitMQ message queue access:

    .. code-block:: ini

       [DEFAULT]
       ...
       rpc_backend = rabbit

       [oslo_messaging_rabbit]
       ...
       rabbit_host = controller
       rabbit_userid = openstack
       rabbit_password = RABBIT_PASS

    Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
    account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. code-block:: ini

       [DEFAULT]
       ...
       auth_strategy = keystone

       [keystone_authtoken]
       ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  .. only:: rdo

     * In the ``[oslo_concurrency]`` section, configure the lock path:

       .. code-block:: ini

          [oslo_concurrency]
          ...
          lock_path = /var/lib/neutron/tmp

  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
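The three ``rabbit_*`` options above are equivalent to a single AMQP URL; later OpenStack releases accept that form directly in a ``transport_url`` option. A hedged sketch of the mapping, assuming RabbitMQ's default port 5672:

```python
# Compose the AMQP URL equivalent to the rabbit_host / rabbit_userid /
# rabbit_password trio configured above. Defaults mirror the guide's
# values; 5672 is RabbitMQ's standard AMQP port.
def rabbit_url(host="controller", userid="openstack",
               password="RABBIT_PASS", port=5672):
    return "rabbit://{}:{}@{}:{}/".format(userid, password, host, port)
```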
Configure networking options
----------------------------
@ -154,26 +154,26 @@ configure services specific to it.
Configure Compute to use Networking
-----------------------------------

* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

  * In the ``[neutron]`` section, configure access parameters:

    .. code-block:: ini

       [neutron]
       ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.
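The two endpoints in the ``[neutron]`` section use well-known ports: 9696 for the Networking API and 35357 for the Identity admin API. A tiny hedged check, not from the guide, that a candidate pair of values matches:

```python
# Verify that the [neutron] url and auth_url values point at the
# expected service ports: 9696 (Networking API) and 35357 (Identity
# admin API). Hostnames are not checked; only the ports matter here.
from urllib.parse import urlparse

def neutron_endpoints_ok(url, auth_url):
    return (urlparse(url).port, urlparse(auth_url).port) == (9696, 35357)
```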
Finalize installation
---------------------
@ -19,7 +19,7 @@ Install the components

.. code-block:: console

   # yum install openstack-neutron openstack-neutron-ml2 \
     openstack-neutron-linuxbridge python-neutronclient ebtables ipset

.. only:: obs
@ -69,129 +69,129 @@ Install the components

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. only:: ubuntu or obs

       .. code-block:: ini

          [database]
          ...
          connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. only:: rdo

       .. code-block:: ini

          [database]
          ...
          connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in and disable additional plug-ins:

    .. code-block:: ini

       [DEFAULT]
       ...
       core_plugin = ml2
       service_plugins =

  * In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
    configure RabbitMQ message queue access:

    .. code-block:: ini

       [DEFAULT]
       ...
       rpc_backend = rabbit

       [oslo_messaging_rabbit]
       ...
       rabbit_host = controller
       rabbit_userid = openstack
       rabbit_password = RABBIT_PASS

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. code-block:: ini

       [DEFAULT]
       ...
       auth_strategy = keystone

       [keystone_authtoken]
       ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. code-block:: ini

       [DEFAULT]
       ...
       notify_nova_on_port_status_changes = True
       notify_nova_on_port_data_changes = True
       nova_url = http://controller:8774/v2

       [nova]
       ...
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

  .. only:: rdo

     * In the ``[oslo_concurrency]`` section, configure the lock path:

       .. code-block:: ini

          [oslo_concurrency]
          ...
          lock_path = /var/lib/neutron/tmp

  * (Optional) To assist with troubleshooting, enable verbose logging in
    the ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
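The ``auth_plugin = password`` options in the ``[nova]`` section correspond to an Identity v3 password authentication request. A hedged sketch of the token request body those options imply; the field layout follows the Identity v3 API, and ``NOVA_PASS`` remains the guide's placeholder:

```python
# Build the Identity v3 token request body implied by the [nova] auth
# options above. Field layout follows the Identity v3 password-auth
# schema; all defaults mirror the guide's configuration values.
def v3_password_auth(username="nova", password="NOVA_PASS",
                     project_name="service", domain_id="default"):
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": domain_id},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {
                    "name": project_name,
                    "domain": {"id": domain_id},
                }
            },
        }
    }
```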
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
@ -199,63 +199,63 @@ Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat and VLAN networks:

    .. code-block:: ini

       [ml2]
       ...
       type_drivers = flat,vlan

  * In the ``[ml2]`` section, disable project (private) networks:

    .. code-block:: ini

       [ml2]
       ...
       tenant_network_types =

  * In the ``[ml2]`` section, enable the Linux bridge mechanism:

    .. code-block:: ini

       [ml2]
       ...
       mechanism_drivers = linuxbridge

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. code-block:: ini

       [ml2]
       ...
       extension_drivers = port_security

  * In the ``[ml2_type_flat]`` section, configure the public flat provider
    network:

    .. code-block:: ini

       [ml2_type_flat]
       ...
       flat_networks = public

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_ipset = True
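The warning above about removing ``type_drivers`` values suggests a quick consistency check: every type listed in ``tenant_network_types`` should also appear in ``type_drivers``. A minimal sketch, not from the guide, operating on the comma-separated strings from ``ml2_conf.ini``:

```python
# Check that every network type in tenant_network_types is also an
# enabled type driver, mirroring the constraint behind the warning
# above. Both arguments are the raw comma-separated option values.
def types_consistent(type_drivers, tenant_network_types):
    enabled = {t.strip() for t in type_drivers.split(",") if t.strip()}
    wanted = {t.strip() for t in tenant_network_types.split(",") if t.strip()}
    return wanted <= enabled
```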
Configure the Linux bridge agent
--------------------------------
@ -264,73 +264,73 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the public virtual network to the
    public physical network interface:

    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

    Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
    public network interface.

  * In the ``[vxlan]`` section, disable VXLAN overlay networks:

    .. code-block:: ini

       [vxlan]
       enable_vxlan = False

  * In the ``[agent]`` section, enable ARP spoofing protection:

    .. code-block:: ini

       [agent]
       ...
       prevent_arp_spoofing = True

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_security_group = True
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on public
    networks can access metadata over the network:

    .. code-block:: ini

       [DEFAULT]
       ...
       interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = True

  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True

Return to
:ref:`Networking controller node configuration
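The two driver options in ``dhcp_agent.ini`` are dotted Python paths that neutron imports at startup, so a typo only surfaces at runtime. A hedged static check, not from the guide, that a value at least looks importable:

```python
# Check that a driver option value looks like a dotted Python import
# path (e.g. neutron.agent.linux.dhcp.Dnsmasq). This catches obvious
# typos before the agent fails to import the class at startup.
def looks_like_dotted_path(value):
    parts = value.split(".")
    return len(parts) >= 2 and all(p.isidentifier() for p in parts)
```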
@ -19,7 +19,7 @@ Install the components

.. code-block:: console

   # yum install openstack-neutron openstack-neutron-ml2 \
     openstack-neutron-linuxbridge python-neutronclient ebtables ipset

.. only:: obs
@ -63,130 +63,130 @@ Install the components
Configure the server component
------------------------------

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. only:: ubuntu or obs

       .. code-block:: ini

          [database]
          ...
          connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. only:: rdo

       .. code-block:: ini

          [database]
          ...
          connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in, router service, and overlapping IP addresses:

    .. code-block:: ini

       [DEFAULT]
       ...
       core_plugin = ml2
       service_plugins = router
       allow_overlapping_ips = True

  * In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
    configure RabbitMQ message queue access:

    .. code-block:: ini

       [DEFAULT]
       ...
       rpc_backend = rabbit

       [oslo_messaging_rabbit]
       ...
       rabbit_host = controller
       rabbit_userid = openstack
       rabbit_password = RABBIT_PASS

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. code-block:: ini

       [DEFAULT]
       ...
       auth_strategy = keystone

       [keystone_authtoken]
       ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. code-block:: ini

       [DEFAULT]
       ...
       notify_nova_on_port_status_changes = True
       notify_nova_on_port_data_changes = True
       nova_url = http://controller:8774/v2

       [nova]
       ...
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

  .. only:: rdo

     * In the ``[oslo_concurrency]`` section, configure the lock path:

       .. code-block:: ini

          [oslo_concurrency]
          ...
          lock_path = /var/lib/neutron/tmp

  * (Optional) To assist with troubleshooting, enable verbose logging in
    the ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
Configure the Modular Layer 2 (ML2) plug-in
|
Configure the Modular Layer 2 (ML2) plug-in
|
||||||
-------------------------------------------
|
-------------------------------------------
|
||||||
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:

    .. code-block:: ini

       [ml2]
       ...
       type_drivers = flat,vlan,vxlan

  * In the ``[ml2]`` section, enable VXLAN project (private) networks:

    .. code-block:: ini

       [ml2]
       ...
       tenant_network_types = vxlan

  * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
    mechanisms:

    .. code-block:: ini

       [ml2]
       ...
       mechanism_drivers = linuxbridge,l2population

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

    .. note::

       The Linux bridge agent only supports VXLAN overlay networks.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. code-block:: ini

       [ml2]
       ...
       extension_drivers = port_security

  * In the ``[ml2_type_flat]`` section, configure the public flat provider
    network:

    .. code-block:: ini

       [ml2_type_flat]
       ...
       flat_networks = public

  * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
    range for private networks:

    .. code-block:: ini

       [ml2_type_vxlan]
       ...
       vni_ranges = 1:1000

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_ipset = True

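The ``vni_ranges`` value is a ``MIN:MAX`` pair drawn from the 24-bit VXLAN
identifier space. As a rough illustration of what such an option encodes —
a hypothetical parser for sketch purposes, not neutron's actual code:

```python
def parse_vni_ranges(value):
    """Parse 'MIN:MAX[,MIN:MAX...]' VXLAN identifier ranges."""
    ranges = []
    for chunk in value.split(','):
        low, high = (int(part) for part in chunk.strip().split(':'))
        # VXLAN network identifiers occupy a 24-bit field.
        if not 0 < low <= high <= 2 ** 24 - 1:
            raise ValueError('invalid VNI range: %s' % chunk)
        ranges.append((low, high))
    return ranges

print(parse_vni_ranges('1:1000'))  # [(1, 1000)]
```

Each tuple is a pool of identifiers that the plug-in may assign to project
networks; widening the range later is safe, shrinking it risks orphaning
networks that already hold an identifier outside the new range.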
Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the public virtual network to the
    public physical network interface:

    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

    Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
    public network interface.

  * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
    IP address of the physical network interface that handles overlay
    networks, and enable layer-2 population:

    .. code-block:: ini

       [vxlan]
       enable_vxlan = True
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS
       l2_population = True

    Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface.

  * In the ``[agent]`` section, enable ARP spoofing protection:

    .. code-block:: ini

       [agent]
       ...
       prevent_arp_spoofing = True

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_security_group = True
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

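The agent reads these options with standard INI semantics. A quick sketch
using Python's ``configparser`` on a sample fragment — ``eth1`` and
``10.0.0.11`` are placeholders standing in for ``PUBLIC_INTERFACE_NAME``
and ``OVERLAY_INTERFACE_IP_ADDRESS``:

```python
import configparser
import textwrap

# Sample fragment mirroring the options above (placeholder values).
sample = textwrap.dedent("""\
    [linux_bridge]
    physical_interface_mappings = public:eth1

    [vxlan]
    enable_vxlan = True
    local_ip = 10.0.0.11
    l2_population = True
    """)

config = configparser.ConfigParser()
config.read_string(sample)

# A mapping entry is a NETWORK:INTERFACE pair.
network, _, interface = config['linux_bridge'][
    'physical_interface_mappings'].partition(':')
print(network, interface)                          # public eth1
print(config.getboolean('vxlan', 'enable_vxlan'))  # True
```

This also shows why the ``public`` label in ``physical_interface_mappings``
must match the ``flat_networks`` label configured in the ML2 plug-in: both
sides refer to the same provider network by that name.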
Configure the layer-3 agent
---------------------------

The :term:`Layer-3 (L3) agent` provides routing and NAT services for virtual
networks.

* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver
    and external network bridge:

    .. code-block:: ini

       [DEFAULT]
       ...
       interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
       external_network_bridge =

    .. note::

       The ``external_network_bridge`` option intentionally lacks a value
       to enable multiple external networks on a single agent.

  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on public
    networks can access metadata over the network:

    .. code-block:: ini

       [DEFAULT]
       ...
       interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = True

  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True

  Overlay networks such as VXLAN include additional packet headers that
  increase overhead and decrease space available for the payload or user
  data. Without knowledge of the virtual network infrastructure, instances
  attempt to send packets using the default Ethernet :term:`maximum
  transmission unit (MTU)` of 1500 bytes. :term:`Internet protocol (IP)`
  networks contain the :term:`path MTU discovery (PMTUD)` mechanism to detect
  end-to-end MTU and adjust packet size accordingly. However, some operating
  systems and networks block or otherwise lack support for PMTUD causing
  performance degradation or connectivity failure.

  Ideally, you can prevent these problems by enabling :term:`jumbo frames
  <jumbo frame>` on the physical network that contains your tenant virtual
  networks. Jumbo frames support MTUs up to approximately 9000 bytes which
  negates the impact of VXLAN overhead on virtual networks. However, many
  network devices lack support for jumbo frames and OpenStack administrators
  often lack control over network infrastructure. Given the latter
  complications, you can also prevent MTU problems by reducing the
  instance MTU to account for VXLAN overhead. Determining the proper MTU
  value often takes experimentation, but 1450 bytes works in most
  environments. You can configure the DHCP server that assigns IP
  addresses to your instances to also adjust the MTU.

  .. note::

     Some cloud images ignore the DHCP MTU option in which case you
     should configure it using metadata, a script, or other suitable
     method.

  * In the ``[DEFAULT]`` section, enable the :term:`dnsmasq` configuration
    file:

    .. code-block:: ini

       [DEFAULT]
       ...
       dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

  * Create and edit the ``/etc/neutron/dnsmasq-neutron.conf`` file to
    enable the DHCP MTU option (26) and configure it to 1450 bytes:

    .. code-block:: ini

       dhcp-option-force=26,1450

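The 1450-byte figure follows directly from the headers that VXLAN
encapsulation wraps around each instance frame; the arithmetic, using the
standard IPv4 transport header sizes:

```python
ETHERNET_MTU = 1500  # default instance MTU

# Headers that VXLAN encapsulation adds around each instance frame.
VXLAN_OVERHEAD = (
    20    # outer IPv4 header
    + 8   # outer UDP header
    + 8   # VXLAN header
    + 14  # inner Ethernet header
)

print(ETHERNET_MTU - VXLAN_OVERHEAD)  # 1450
```

With IPv6 transport or VLAN tags on the inner frame the overhead grows,
which is why some deployments choose an even smaller instance MTU.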
Return to
:ref:`Networking controller node configuration

The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.

* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure access parameters:

    .. code-block:: ini

       [DEFAULT]
       ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       auth_region = RegionOne
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

  * In the ``[DEFAULT]`` section, configure the metadata host:

    .. code-block:: ini

       [DEFAULT]
       ...
       nova_metadata_ip = controller

  * In the ``[DEFAULT]`` section, configure the metadata proxy shared
    secret:

    .. code-block:: ini

       [DEFAULT]
       ...
       metadata_proxy_shared_secret = METADATA_SECRET

    Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.

  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True

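The shared secret works because both services can compute the same keyed
hash over request data, so the receiver can verify that a proxied request
really came from the metadata proxy. A simplified sketch of the idea — not
neutron's exact headers or wire format:

```python
import hashlib
import hmac

secret = b'METADATA_SECRET'  # placeholder, never a real secret
instance_id = b'0563183e-1afc-4807-b1d2-13b2c2b651e5'  # example value

# The proxy signs the instance identifier and sends the signature along.
signature = hmac.new(secret, instance_id, hashlib.sha256).hexdigest()

# The receiver recomputes the signature with its copy of the secret and
# compares in constant time to avoid timing side channels.
expected = hmac.new(secret, instance_id, hashlib.sha256).hexdigest()
print(hmac.compare_digest(signature, expected))  # True
```

This is why the same ``METADATA_SECRET`` must appear in both
``metadata_agent.ini`` and ``nova.conf`` below: a mismatch makes every
signature check fail.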
Configure Compute to use Networking
-----------------------------------

* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:

  * In the ``[neutron]`` section, configure access parameters, enable the
    metadata proxy, and configure the secret:

    .. code-block:: ini

       [neutron]
       ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

       service_metadata_proxy = True
       metadata_proxy_shared_secret = METADATA_SECRET

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    Replace ``METADATA_SECRET`` with the secret you chose for the metadata
    proxy.

Finalize installation
---------------------

Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. todo:

   Cannot use bulleted list here due to the following bug:

   https://bugs.launchpad.net/openstack-manuals/+bug/1515377

List agents to verify successful launch of the neutron agents:

.. code-block:: console

   $ neutron agent-list
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
   | id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
   | 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
   | 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
   | dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
   | f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

The output should indicate three agents on the controller node and one
agent on each compute node.

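A quick way to sanity-check output like the table above is to tally agents
per host. A small sketch over the sample rows (copied from the example
output, so the counts are known):

```python
from collections import Counter

# Data rows from the sample ``neutron agent-list`` output.
rows = """\
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-) | True | neutron-linuxbridge-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-) | True | neutron-dhcp-agent        |
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-) | True | neutron-metadata-agent    |
"""

# The host column is the third cell of each row.
agents_per_host = Counter(
    line.split('|')[3].strip() for line in rows.splitlines()
)
print(agents_per_host['controller'], agents_per_host['compute1'])  # 3 1
```

The same tally applied to the self-service output below would show four
agents on the controller node, the extra one being the L3 agent.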
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. todo:

   Cannot use bulleted list here due to the following bug:

   https://bugs.launchpad.net/openstack-manuals/+bug/1515377

List agents to verify successful launch of the neutron agents:

.. code-block:: console

   $ neutron agent-list
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
   | id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
   | 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
   | 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
   | 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
   | dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
   | f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

The output should indicate four agents on the controller node and one
agent on each compute node.

|
@ -247,7 +247,7 @@ on local devices.
|
|||||||
Distribute ring configuration files
|
Distribute ring configuration files
|
||||||
-----------------------------------
|
-----------------------------------
|
||||||
|
|
||||||
Copy the ``account.ring.gz``, ``container.ring.gz``, and
|
* Copy the ``account.ring.gz``, ``container.ring.gz``, and
|
||||||
``object.ring.gz`` files to the ``/etc/swift`` directory
|
``object.ring.gz`` files to the ``/etc/swift`` directory
|
||||||
on each storage node and any additional nodes running the
|
on each storage node and any additional nodes running the
|
||||||
proxy service.
|
proxy service.
|
||||||
|