[install] Fix various minor problems

Fix the following minor problems to reduce work after
stable/liberty branching:

1) RDO: Revert Python MySQL library from PyMySQL to MySQL-python
   due to lack of support for the former.
2) RDO: Explicitly install 'ebtables' and 'ipset' packages due
   to dependency problems.
3) General: Change numbered list to bulleted list for lists with
   only one item.
4) General: Restructure horizon content to match other services.
   More duplication of content, but sometimes RST conditionals
   are terrible and distro packages should use the same
   configuration files.
5) General: Restructure NoSQL content to match SQL content.
6) General: Improve clarity of NTP content.

Change-Id: I2620250aa27c7d41b525aa2646ad25e0692140c4
Closes-Bug: #1514760
Closes-Bug: #1514683
Implements: bp installguide-liberty
Author: Matthew Kassawara
Date:   2015-11-11 08:49:46 -07:00
parent e135ca8442
commit 5ee72cfa36
33 changed files with 1274 additions and 1186 deletions


@ -14,7 +14,7 @@ Configure Cinder to use Telemetry
Edit the ``/etc/cinder/cinder.conf`` file and complete the
following actions:
#. In the ``[DEFAULT]`` section, configure notifications:
* In the ``[DEFAULT]`` section, configure notifications:
.. code-block:: ini
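The body of this ``ini`` block lies outside the hunk's context. For orientation, a Liberty-era notifications key typically looks like the following (illustrative sketch, not taken from this diff):

```ini
; /etc/cinder/cinder.conf -- illustrative sketch only
[DEFAULT]
; Emit notifications on the message bus so Telemetry can meter Block Storage
notification_driver = messagingv2
```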


@ -7,11 +7,11 @@ these steps on the controller node.
Configure the Image service to use Telemetry
--------------------------------------------
Edit the ``/etc/glance/glance-api.conf`` and
``/etc/glance/glance-registry.conf`` files and
complete the following actions:
* Edit the ``/etc/glance/glance-api.conf`` and
``/etc/glance/glance-registry.conf`` files and
complete the following actions:
#. In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure notifications and RabbitMQ message broker access:
.. code-block:: ini
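The ``ini`` body itself is outside the hunk. A hedged, era-typical sketch of the keys this step refers to (all values illustrative; ``RABBIT_PASS`` is the guide's usual placeholder):

```ini
; /etc/glance/glance-api.conf and glance-registry.conf -- illustrative
[DEFAULT]
notification_driver = messagingv2
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
```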
@ -35,7 +35,7 @@ Finalize installation
.. only:: obs or rdo
#. Restart the Image service:
* Restart the Image service:
.. code-block:: console
@ -43,7 +43,7 @@ Finalize installation
.. only:: ubuntu
#. Restart the Image service:
* Restart the Image service:
.. code-block:: console


@ -297,7 +297,7 @@ Finalize installation
.. only:: obs
#. Start the Telemetry services and configure them to start when the
* Start the Telemetry services and configure them to start when the
system boots:
.. code-block:: console
@ -317,7 +317,7 @@ Finalize installation
.. only:: rdo
#. Start the Telemetry services and configure them to start when the
* Start the Telemetry services and configure them to start when the
system boots:
.. code-block:: console
@ -337,7 +337,7 @@ Finalize installation
.. only:: ubuntu
#. Restart the Telemetry services:
* Restart the Telemetry services:
.. code-block:: console


@ -121,8 +121,7 @@ Finalize installation
.. only:: obs
#. Start the Telemetry agent and configure it to start when the
system boots:
#. Start the agent and configure it to start when the system boots:
.. code-block:: console
@ -131,8 +130,7 @@ Finalize installation
.. only:: rdo
#. Start the Telemetry agent and configure it to start when the
system boots:
#. Start the agent and configure it to start when the system boots:
.. code-block:: console


@ -73,7 +73,7 @@ Configure Object Storage to use Telemetry
Perform these steps on the controller and any other nodes that
run the Object Storage proxy service.
#. Edit the ``/etc/swift/proxy-server.conf`` file
* Edit the ``/etc/swift/proxy-server.conf`` file
and complete the following actions:
* In the ``[filter:keystoneauth]`` section, add the
@ -116,7 +116,7 @@ Finalize installation
.. only:: rdo or obs
#. Restart the Object Storage proxy service:
* Restart the Object Storage proxy service:
.. code-block:: console
@ -124,7 +124,7 @@ Finalize installation
.. only:: ubuntu
#. Restart the Object Storage proxy service:
* Restart the Object Storage proxy service:
.. code-block:: console
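The ``[filter:keystoneauth]`` change referenced earlier in this file is truncated by the hunk. As a hedged sketch of what such a Telemetry-related edit commonly looks like (the role name is an assumption, not taken from this diff):

```ini
; /etc/swift/proxy-server.conf -- illustrative sketch
[filter:keystoneauth]
; Telemetry typically needs the ResellerAdmin role to poll Object Storage
operator_roles = admin, user, ResellerAdmin
```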


@ -314,7 +314,7 @@ Finalize installation
.. only:: obs
#. Start the Block Storage volume service including its dependencies
* Start the Block Storage volume service including its dependencies
and configure them to start when the system boots:
.. code-block:: console
@ -324,7 +324,7 @@ Finalize installation
.. only:: rdo
#. Start the Block Storage volume service including its dependencies
* Start the Block Storage volume service including its dependencies
and configure them to start when the system boots:
.. code-block:: console


@ -1,207 +0,0 @@
=====================
Install and configure
=====================
This section describes how to install and configure the dashboard
on the controller node.
The dashboard relies on functional core services including
Identity, Image service, Compute, and either Networking (neutron)
or legacy networking (nova-network). Environments with
stand-alone services such as Object Storage cannot use the
dashboard. For more information, see the
`developer documentation <http://docs.openstack.org/developer/
horizon/topics/deployment.html>`__.
This section assumes proper installation, configuration, and
operation of the Identity service using the Apache HTTP server and
Memcached as described in the ":doc:`keystone-install`" section.
To install the dashboard components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. only:: obs
* Install the packages:
.. code-block:: console
# zypper install openstack-dashboard apache2-mod_wsgi \
memcached python-python-memcached
.. only:: rdo
* Install the packages:
.. code-block:: console
# yum install openstack-dashboard httpd mod_wsgi \
memcached python-memcached
.. only:: ubuntu
* Install the packages:
.. code-block:: console
# apt-get install openstack-dashboard
.. only:: debian
* Install the packages:
.. code-block:: console
# apt-get install openstack-dashboard-apache
* Respond to prompts for web server configuration.
.. note::
The automatic configuration process generates a self-signed
SSL certificate. Consider obtaining an official certificate
for production environments.
.. note::
There are two modes of installation. One using ``/horizon`` as the URL,
keeping your default vhost and only adding an Alias directive: this is
the default. The other mode will remove the default Apache vhost and install
the dashboard on the webroot. It was the only available option
before the Liberty release. If you prefer to set the Apache configuration
manually, install the ``openstack-dashboard`` package instead of
``openstack-dashboard-apache``.
.. only:: ubuntu
.. note::
Ubuntu installs the ``openstack-dashboard-ubuntu-theme``
package as a dependency. Some users reported issues with
this theme in previous releases. If you encounter issues,
remove this package to restore the original OpenStack theme.
To configure the dashboard
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. only:: obs
* Configure the web server:
.. code-block:: console
# cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
/etc/apache2/conf.d/openstack-dashboard.conf
# a2enmod rewrite;a2enmod ssl;a2enmod wsgi
.. only:: obs
* Edit the
``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
file and complete the following actions:
.. only:: rdo
* Edit the
``/etc/openstack-dashboard/local_settings``
file and complete the following actions:
.. only:: ubuntu or debian
* Edit the
``/etc/openstack-dashboard/local_settings.py``
file and complete the following actions:
* Configure the dashboard to use OpenStack services on the
``controller`` node:
.. code-block:: ini
OPENSTACK_HOST = "controller"
* Allow all hosts to access the dashboard:
.. code-block:: ini
ALLOWED_HOSTS = ['*', ]
* Configure the ``memcached`` session storage service:
.. code-block:: ini
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
.. note::
Comment out any other session storage configuration.
.. only:: obs
.. note::
By default, SLES and openSUSE use an SQL database for session
storage. For simplicity, we recommend changing the configuration
to use ``memcached`` for session storage.
* Configure ``user`` as the default role for
users that you create via the dashboard:
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
* Optionally, configure the time zone:
.. code-block:: ini
TIME_ZONE = "TIME_ZONE"
Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
To finalize installation
~~~~~~~~~~~~~~~~~~~~~~~~
.. only:: ubuntu or debian
Reload the web server configuration:
.. code-block:: console
# service apache2 reload
.. only:: obs
Start the web server and session storage service and configure
them to start when the system boots:
.. code-block:: console
# systemctl enable apache2.service memcached.service
# systemctl restart apache2.service memcached.service
.. note::
The ``systemctl restart`` command starts the Apache HTTP service if
not currently running.
.. only:: rdo
Start the web server and session storage service and configure
them to start when the system boots:
.. code-block:: console
# systemctl enable httpd.service memcached.service
# systemctl restart httpd.service memcached.service
.. note::
The ``systemctl restart`` command starts the Apache HTTP service if
not currently running.


@ -12,10 +12,10 @@ service because most distributions support it. If you prefer to
implement a different message queue service, consult the documentation
associated with it.
Install the message queue service
---------------------------------
Install and configure components
--------------------------------
* Install the package:
1. Install the package:
.. only:: ubuntu or debian
@ -35,13 +35,9 @@ Install the message queue service
# zypper install rabbitmq-server
Configure the message queue service
-----------------------------------
.. only:: rdo or obs
#. Start the message queue service and configure it to start when the
2. Start the message queue service and configure it to start when the
system boots:
.. code-block:: console
@ -71,7 +67,7 @@ Configure the message queue service
* Start the message queue service again.
#. Add the ``openstack`` user:
3. Add the ``openstack`` user:
.. code-block:: console
@ -80,7 +76,7 @@ Configure the message queue service
Replace ``RABBIT_PASS`` with a suitable password.
#. Permit configuration, write, and read access for the
4. Permit configuration, write, and read access for the
``openstack`` user:
.. code-block:: console
@ -90,7 +86,7 @@ Configure the message queue service
.. only:: ubuntu or debian
#. Add the ``openstack`` user:
2. Add the ``openstack`` user:
.. code-block:: console
@ -99,7 +95,7 @@ Configure the message queue service
Replace ``RABBIT_PASS`` with a suitable password.
#. Permit configuration, write, and read access for the
3. Permit configuration, write, and read access for the
``openstack`` user:
.. code-block:: console
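The renumbered user and permission steps above correspond to ``rabbitmqctl`` commands of this shape (``RABBIT_PASS`` is the guide's placeholder):

```console
# rabbitmqctl add_user openstack RABBIT_PASS
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```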


@ -7,7 +7,7 @@ additional storage node.
Configure network interfaces
----------------------------
#. Configure the management interface:
* Configure the management interface:
* IP address: ``10.0.0.41``


@ -10,7 +10,7 @@ First node
Configure network interfaces
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#. Configure the management interface:
* Configure the management interface:
* IP address: ``10.0.0.51``
@ -33,7 +33,7 @@ Second node
Configure network interfaces
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#. Configure the management interface:
* Configure the management interface:
* IP address: ``10.0.0.52``


@ -12,12 +12,12 @@ MongoDB.
The installation of the NoSQL database server is only necessary when
installing the Telemetry service as documented in :ref:`install_ceilometer`.
Install and configure the database server
-----------------------------------------
Install and configure components
--------------------------------
.. only:: obs
#. Enable the Open Build Service repositories for MongoDB based on
1. Enable the Open Build Service repositories for MongoDB based on
your openSUSE or SLES version:
On openSUSE:
@ -52,7 +52,7 @@ Install and configure the database server
.. only:: rdo
#. Install the MongoDB package:
1. Install the MongoDB packages:
.. code-block:: console
@ -60,7 +60,7 @@ Install and configure the database server
.. only:: ubuntu
#. Install the MongoDB package:
1. Install the MongoDB packages:
.. code-block:: console
@ -91,18 +91,8 @@ Install and configure the database server
You can also disable journaling. For more information, see the
`MongoDB manual <http://docs.mongodb.org/manual/>`__.
* Start the MongoDB service and configure it to start when
the system boots:
.. code-block:: console
# systemctl enable mongodb.service
# systemctl start mongodb.service
.. only:: rdo
.. The use of mongod, and not mongodb, in the below screen is intentional.
2. Edit the ``/etc/mongod.conf`` file and complete the following
actions:
@ -126,14 +116,6 @@ Install and configure the database server
You can also disable journaling. For more information, see the
`MongoDB manual <http://docs.mongodb.org/manual/>`__.
* Start the MongoDB service and configure it to start when
the system boots:
.. code-block:: console
# systemctl enable mongod.service
# systemctl start mongod.service
.. only:: ubuntu
2. Edit the ``/etc/mongodb.conf`` file and complete the following
@ -156,7 +138,15 @@ Install and configure the database server
smallfiles = true
If you change the journaling configuration, stop the MongoDB
You can also disable journaling. For more information, see the
`MongoDB manual <http://docs.mongodb.org/manual/>`__.
Finalize installation
---------------------
.. only:: ubuntu
* If you change the journaling configuration, stop the MongoDB
service, remove the initial journal files, and start the service:
.. code-block:: console
@ -165,5 +155,22 @@ Install and configure the database server
# rm /var/lib/mongodb/journal/prealloc.*
# service mongodb start
You can also disable journaling. For more information, see the
`MongoDB manual <http://docs.mongodb.org/manual/>`__.
.. only:: rdo
* Start the MongoDB service and configure it to start when
the system boots:
.. code-block:: console
# systemctl enable mongod.service
# systemctl start mongod.service
.. only:: obs
* Start the MongoDB service and configure it to start when
the system boots:
.. code-block:: console
# systemctl enable mongodb.service
# systemctl start mongodb.service


@ -3,26 +3,28 @@
Controller node
~~~~~~~~~~~~~~~
Perform these steps on the controller node.
Install and configure components
--------------------------------
Install the packages:
1. Install the packages:
.. only:: ubuntu or debian
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install chrony
.. only:: rdo
.. only:: rdo
.. code-block:: console
# yum install chrony
.. only:: obs
.. only:: obs
On openSUSE 13.2:
On openSUSE:
.. code-block:: console
@ -30,7 +32,7 @@ Install the packages:
# zypper refresh
# zypper install chrony
On SLES 12:
On SLES:
.. code-block:: console
@ -50,13 +52,9 @@ Install the packages:
Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative servers such
as those provided by your organization.
.. only:: ubuntu or debian
#. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove the
2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove the
following keys as necessary for your environment:
.. code-block:: ini
@ -67,7 +65,13 @@ as those provided by your organization.
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
#. Restart the NTP service:
.. note::
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative
servers such as those provided by your organization.
3. Restart the NTP service:
.. code-block:: console
@ -75,7 +79,7 @@ as those provided by your organization.
.. only:: rdo or obs
#. Edit the ``/etc/chrony.conf`` file and add, change, or remove the
2. Edit the ``/etc/chrony.conf`` file and add, change, or remove the
following keys as necessary for your environment:
.. code-block:: ini
@ -86,7 +90,13 @@ as those provided by your organization.
accurate (lower stratum) NTP server. The configuration supports multiple
``server`` keys.
#. To enable other nodes to connect to the chrony daemon on the controller,
.. note::
By default, the controller node synchronizes the time via a pool of
public servers. However, you can optionally configure alternative
servers such as those provided by your organization.
3. To enable other nodes to connect to the chrony daemon on the controller,
add the following key to the ``/etc/chrony.conf`` file:
.. code-block:: ini
@ -95,7 +105,7 @@ as those provided by your organization.
If necessary, replace ``10.0.0.0/24`` with a description of your subnet.
#. Start the NTP service and configure it to start when the system boots:
4. Start the NTP service and configure it to start when the system boots:
.. code-block:: console
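Pulling the pieces of this file together, the controller's chrony configuration sketched by these hunks looks roughly like the following (``NTP_SERVER`` is a placeholder for a real server name):

```ini
; /etc/chrony.conf (RDO/obs) or /etc/chrony/chrony.conf (Ubuntu/Debian)
; One or more upstream servers; lower-stratum servers give better accuracy
server NTP_SERVER iburst
; Let other nodes in the management subnet synchronize against this node
allow 10.0.0.0/24
```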


@ -3,24 +3,27 @@
Other nodes
~~~~~~~~~~~
Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.
Install and configure components
--------------------------------
Install the packages:
1. Install the packages:
.. only:: ubuntu or debian
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install chrony
.. only:: rdo
.. only:: rdo
.. code-block:: console
# yum install chrony
.. only:: obs
.. only:: obs
On openSUSE:
@ -50,19 +53,16 @@ Install the packages:
Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC
Configure the network and compute nodes to reference the controller
node.
.. only:: ubuntu or debian
#. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
but one ``server`` key. Change it to reference the controller node:
.. code-block:: ini
server controller iburst
#. Restart the NTP service:
3. Restart the NTP service:
.. code-block:: console
@ -70,14 +70,14 @@ node.
.. only:: rdo or obs
#. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
``server`` key. Change it to reference the controller node:
.. code-block:: ini
server controller iburst
#. Start the NTP service and configure it to start when the system boots:
3. Start the NTP service and configure it to start when the system boots:
.. code-block:: console


@ -53,7 +53,7 @@ these procedures on all nodes.
Enable the OpenStack repository
-------------------------------
On CentOS, the *extras* repository provides the RPM that enables the
* On CentOS, the *extras* repository provides the RPM that enables the
OpenStack repository. CentOS includes the *extras* repository by
default, so you can simply install the package to enable the OpenStack
repository.
@ -62,7 +62,7 @@ these procedures on all nodes.
# yum install centos-release-openstack-liberty
On RHEL, download and install the RDO repository RPM to enable the
* On RHEL, download and install the RDO repository RPM to enable the
OpenStack repository.
.. code-block:: console
@ -122,7 +122,6 @@ these procedures on all nodes.
`Debian website <http://backports.debian.org/Instructions/>`_,
which basically suggest doing the following steps:
#. On all nodes, adding the Debian 8 (Jessie) backport repository to
the source list:


@ -7,17 +7,11 @@ guide use MariaDB or MySQL depending on the distribution. OpenStack
services also support other SQL databases including
`PostgreSQL <http://www.postgresql.org/>`__.
Install and configure the database server
-----------------------------------------
Install and configure components
--------------------------------
#. Install the packages:
.. only:: rdo or ubuntu or obs
.. note::
The Python MySQL library is compatible with MariaDB.
.. only:: ubuntu
.. code-block:: console
@ -34,7 +28,7 @@ Install and configure the database server
.. code-block:: console
# yum install mariadb mariadb-server python2-PyMySQL
# yum install mariadb mariadb-server MySQL-python
.. only:: obs
@ -116,9 +110,8 @@ Install and configure the database server
collation-server = utf8_general_ci
character-set-server = utf8
To finalize installation
------------------------
Finalize installation
---------------------
.. only:: ubuntu or debian
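For orientation, the ``[mysqld]`` keys that the surrounding hunks reference typically look like this; only the collation and character-set lines appear in the diff, the rest are hedged, era-typical values:

```ini
; /etc/mysql/my.cnf (or distro equivalent) -- illustrative sketch
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8
```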


@ -374,7 +374,7 @@ Install and configure components
.. only:: obs or rdo
#. Start the Image service services and configure them to start when
* Start the Image service services and configure them to start when
the system boots:
.. code-block:: console


@ -489,7 +489,7 @@ Finalize installation
.. only:: obs or rdo
#. Start the Orchestration services and configure them to start
* Start the Orchestration services and configure them to start
when the system boots:
.. code-block:: console
@ -501,7 +501,7 @@ Finalize installation
.. only:: ubuntu or debian
#. Restart the Orchestration services:
1. Restart the Orchestration services:
.. code-block:: console


@ -0,0 +1,279 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the dashboard
on the controller node.
The dashboard relies on functional core services including
Identity, Image service, Compute, and either Networking (neutron)
or legacy networking (nova-network). Environments with
stand-alone services such as Object Storage cannot use the
dashboard. For more information, see the
`developer documentation <http://docs.openstack.org/developer/
horizon/topics/deployment.html>`__.
.. note::
This section assumes proper installation, configuration, and operation
of the Identity service using the Apache HTTP server and Memcached
service as described in the :ref:`Install and configure the Identity
service <keystone-install>` section.
Install and configure components
--------------------------------
.. only:: obs or rdo or ubuntu
.. include:: shared/note_configuration_vary_by_distribution.rst
.. only:: obs
1. Install the packages:
.. code-block:: console
# zypper install openstack-dashboard
.. only:: rdo
1. Install the packages:
.. code-block:: console
# yum install openstack-dashboard
.. only:: ubuntu
1. Install the packages:
.. code-block:: console
# apt-get install openstack-dashboard
.. only:: debian
1. Install the packages:
.. code-block:: console
# apt-get install openstack-dashboard-apache
2. Respond to prompts for web server configuration.
.. note::
The automatic configuration process generates a self-signed
SSL certificate. Consider obtaining an official certificate
for production environments.
.. note::
There are two modes of installation. One using ``/horizon`` as the URL,
keeping your default vhost and only adding an Alias directive: this is
the default. The other mode will remove the default Apache vhost and install
the dashboard on the webroot. It was the only available option
before the Liberty release. If you prefer to set the Apache configuration
manually, install the ``openstack-dashboard`` package instead of
``openstack-dashboard-apache``.
.. only:: obs
2. Configure the web server:
.. code-block:: console
# cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
/etc/apache2/conf.d/openstack-dashboard.conf
# a2enmod rewrite;a2enmod ssl;a2enmod wsgi
3. Edit the
``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
file and complete the following actions:
* Configure the dashboard to use OpenStack services on the
``controller`` node:
.. code-block:: ini
OPENSTACK_HOST = "controller"
* Allow all hosts to access the dashboard:
.. code-block:: ini
ALLOWED_HOSTS = ['*', ]
* Configure the ``memcached`` session storage service:
.. code-block:: ini
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
.. note::
Comment out any other session storage configuration.
* Configure ``user`` as the default role for
users that you create via the dashboard:
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
* Optionally, configure the time zone:
.. code-block:: ini
TIME_ZONE = "TIME_ZONE"
Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
.. only:: rdo
2. Edit the
``/etc/openstack-dashboard/local_settings``
file and complete the following actions:
* Configure the dashboard to use OpenStack services on the
``controller`` node:
.. code-block:: ini
OPENSTACK_HOST = "controller"
* Allow all hosts to access the dashboard:
.. code-block:: ini
ALLOWED_HOSTS = ['*', ]
* Configure the ``memcached`` session storage service:
.. code-block:: ini
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
.. note::
Comment out any other session storage configuration.
* Configure ``user`` as the default role for
users that you create via the dashboard:
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
* Optionally, configure the time zone:
.. code-block:: ini
TIME_ZONE = "TIME_ZONE"
Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
.. only:: ubuntu
2. Edit the
``/etc/openstack-dashboard/local_settings.py``
file and complete the following actions:
* Configure the dashboard to use OpenStack services on the
``controller`` node:
.. code-block:: ini
OPENSTACK_HOST = "controller"
* Allow all hosts to access the dashboard:
.. code-block:: ini
ALLOWED_HOSTS = ['*', ]
* Configure the ``memcached`` session storage service:
.. code-block:: ini
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
.. note::
Comment out any other session storage configuration.
* Configure ``user`` as the default role for
users that you create via the dashboard:
.. code-block:: ini
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
* Optionally, configure the time zone:
.. code-block:: ini
TIME_ZONE = "TIME_ZONE"
Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
Finalize installation
---------------------
.. only:: ubuntu or debian
* Reload the web server configuration:
.. code-block:: console
# service apache2 reload
.. only:: obs
* Start the web server and session storage service and configure
them to start when the system boots:
.. code-block:: console
# systemctl enable apache2.service memcached.service
# systemctl restart apache2.service memcached.service
.. note::
The ``systemctl restart`` command starts the Apache HTTP service if
not currently running.
.. only:: rdo
* Start the web server and session storage service and configure
them to start when the system boots:
.. code-block:: console
# systemctl enable httpd.service memcached.service
# systemctl restart httpd.service memcached.service
.. note::
The ``systemctl restart`` command starts the Apache HTTP service if
not currently running.


@ -1,8 +1,7 @@
================
Verify operation
================
~~~~~~~~~~~~~~~~
This section describes how to verify operation of the dashboard.
Verify operation of the dashboard.
.. only:: obs or debian


@ -18,6 +18,6 @@ This example deployment uses an Apache web server.
.. toctree::
dashboard-install.rst
dashboard-verify.rst
dashboard-next-step.rst
horizon-install.rst
horizon-verify.rst
horizon-next-steps.rst


@ -1,3 +1,5 @@
.. _keystone-install:
Install and configure
~~~~~~~~~~~~~~~~~~~~~


@ -12,7 +12,7 @@ To learn about the template language, see `the Template Guide
in the `Heat developer documentation
<http://docs.openstack.org/developer/heat/index.html>`__.
#. Create the ``demo-template.yml`` file with the following content:
* Create the ``demo-template.yml`` file with the following content:
.. code-block:: yaml
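The actual ``demo-template.yml`` content lies outside this hunk. A minimal HOT template of the kind the step describes (all names illustrative; this is not the guide's file):

```yaml
heat_template_version: 2015-04-30

description: Minimal illustrative template; not the guide's actual file.

parameters:
  ImageID:
    type: string
    description: Image to use for the server
  NetID:
    type: string
    description: Network for the server

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageID }
      flavor: m1.tiny
      networks:
        - network: { get_param: NetID }
```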


@ -79,7 +79,7 @@ includes firewall rules that deny remote access to instances. For Linux
images such as CirrOS, we recommend allowing at least ICMP (ping) and
secure shell (SSH).
#. Add rules to the ``default`` security group:
* Add rules to the ``default`` security group:
* Permit :term:`ICMP` (ping):
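With the Liberty-era ``nova`` client, the two recommended rules take this shape (illustrative; arguments are protocol, from-port, to-port, then CIDR, with ``-1 -1`` covering all ICMP types):

```console
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
```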


@ -10,7 +10,7 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.
#. Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the public virtual network to the


@ -10,7 +10,7 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.
#. Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the public virtual network to the
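The mapping this bullet starts to describe is typically a single key (the interface name is a placeholder):

```ini
; /etc/neutron/plugins/ml2/linuxbridge_agent.ini -- illustrative
[linux_bridge]
; Map the 'public' provider network to the underlying physical interface
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME
```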


@ -19,7 +19,7 @@ Install the components
.. code-block:: console
# yum install openstack-neutron openstack-neutron-linuxbridge
# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset
.. only:: obs
@ -60,7 +60,7 @@ authentication mechanism, message queue, and plug-in.
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Edit the ``/etc/neutron/neutron.conf`` file and complete the following
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, comment out any ``connection`` options
@ -154,7 +154,7 @@ configure services specific to it.
Configure Compute to use Networking
-----------------------------------
#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[neutron]`` section, configure access parameters:
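The ``[neutron]`` access parameters referenced here are outside the hunk. A hedged, era-typical sketch (all values illustrative; ``NEUTRON_PASS`` is the guide's usual placeholder):

```ini
; /etc/nova/nova.conf -- illustrative sketch
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
```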


@ -19,7 +19,7 @@ Install the components
.. code-block:: console
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge python-neutronclient
openstack-neutron-linuxbridge python-neutronclient ebtables ipset
.. only:: obs
@ -69,7 +69,7 @@ Install the components
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Edit the ``/etc/neutron/neutron.conf`` file and complete the following
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
@ -199,7 +199,7 @@ Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
#. Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat and VLAN networks:
@ -264,7 +264,7 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.
#. Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the public virtual network to the
@ -308,7 +308,7 @@ Configure the DHCP agent
The :term:`DHCP agent` provides DHCP services for virtual networks.
#. Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
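As a hedged example, a typical DHCP agent configuration for the Linux
bridge scenario resembles the following; ``enable_isolated_metadata``
lets instances on provider networks reach the metadata service via DHCP:

.. code-block:: ini

   [DEFAULT]
   interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
   dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
   enable_isolated_metadata = True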

View File

@ -19,7 +19,7 @@ Install the components
.. code-block:: console
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge python-neutronclient
openstack-neutron-linuxbridge python-neutronclient ebtables ipset
.. only:: obs
@ -63,7 +63,7 @@ Install the components
Configure the server component
------------------------------
#. Edit the ``/etc/neutron/neutron.conf`` file and complete the following
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access:
@ -194,7 +194,7 @@ Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
#. Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:
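For the self-service option, the ``[ml2]`` section differs from the
provider-networks variant chiefly by enabling VXLAN and layer-2 population.
A sketch under typical Liberty assumptions:

.. code-block:: ini

   [ml2]
   type_drivers = flat,vlan,vxlan
   tenant_network_types = vxlan
   mechanism_drivers = linuxbridge,l2population
   extension_drivers = port_security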
@ -273,7 +273,7 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances, including VXLAN tunnels for private
networks, and handles security groups.
#. Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the public virtual network to the
@ -326,7 +326,7 @@ Configure the layer-3 agent
The :term:`Layer-3 (L3) agent` provides routing and NAT services for virtual
networks.
#. Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver
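A minimal sketch of the L3 agent configuration; the deliberately empty
``external_network_bridge`` value disables the legacy bridge so that the
agent can serve multiple external networks on a single agent:

.. code-block:: ini

   [DEFAULT]
   interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
   external_network_bridge =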
@ -358,7 +358,7 @@ Configure the DHCP agent
The :term:`DHCP agent` provides DHCP services for virtual networks.
#. Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,

View File

@ -169,7 +169,7 @@ Configure the metadata agent
The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.
#. Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure access parameters:
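The access parameters for the metadata agent typically resemble the
following sketch; ``NEUTRON_PASS`` and ``METADATA_SECRET`` are placeholders
for the ``neutron`` user password and a shared secret of your choosing,
which must match the value later set in ``/etc/nova/nova.conf``:

.. code-block:: ini

   [DEFAULT]
   auth_uri = http://controller:5000
   auth_url = http://controller:35357
   auth_region = RegionOne
   auth_plugin = password
   project_domain_id = default
   user_domain_id = default
   project_name = service
   username = neutron
   password = NEUTRON_PASS
   nova_metadata_ip = controller
   metadata_proxy_shared_secret = METADATA_SECRET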
@ -222,7 +222,7 @@ such as credentials to instances.
Configure Compute to use Networking
-----------------------------------
#. Edit the ``/etc/nova/nova.conf`` file and perform the following actions:
* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:
* In the ``[neutron]`` section, configure access parameters, enable the
metadata proxy, and configure the secret:
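On the controller node the ``[neutron]`` section additionally enables the
metadata proxy. A sketch under typical Liberty assumptions;
``METADATA_SECRET`` must match the secret configured in
``/etc/neutron/metadata_agent.ini``:

.. code-block:: ini

   [neutron]
   url = http://controller:9696
   auth_url = http://controller:35357
   auth_plugin = password
   project_domain_id = default
   user_domain_id = default
   region_name = RegionOne
   project_name = service
   username = neutron
   password = NEUTRON_PASS
   service_metadata_proxy = True
   metadata_proxy_shared_secret = METADATA_SECRET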

View File

@ -1,9 +1,15 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. List agents to verify successful launch of the neutron agents:
.. todo:
.. code-block:: console
Cannot use bulleted list here due to the following bug:
https://bugs.launchpad.net/openstack-manuals/+bug/1515377
List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
@ -15,5 +21,5 @@ Networking Option 1: Provider networks
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
The output should indicate three agents on the controller node and one
agent on each compute node.
The output should indicate three agents on the controller node and one
agent on each compute node.

View File

@ -1,9 +1,15 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. List agents to verify successful launch of the neutron agents:
.. todo:
.. code-block:: console
Cannot use bulleted list here due to the following bug:
https://bugs.launchpad.net/openstack-manuals/+bug/1515377
List agents to verify successful launch of the neutron agents:
.. code-block:: console
$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
@ -16,5 +22,5 @@ Networking Option 2: Self-service networks
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
The output should indicate four agents on the controller node and one
agent on each compute node.
The output should indicate four agents on the controller node and one
agent on each compute node.

View File

@ -247,7 +247,7 @@ on local devices.
Distribute ring configuration files
-----------------------------------
Copy the ``account.ring.gz``, ``container.ring.gz``, and
``object.ring.gz`` files to the ``/etc/swift`` directory
on each storage node and any additional nodes running the
proxy service.
* Copy the ``account.ring.gz``, ``container.ring.gz``, and
``object.ring.gz`` files to the ``/etc/swift`` directory
on each storage node and any additional nodes running the
proxy service.
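For example, assuming a hypothetical storage node named ``object1`` and
that the ring files were built in the current directory on the controller
node, the copy might look like:

.. code-block:: console

   # scp account.ring.gz container.ring.gz object.ring.gz \
     object1:/etc/swift/

Repeat for each storage node and any additional proxy nodes.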