Port basic installation guide

Convert RPC installation guide from DocBook to RST, remove content
specific to Rackspace, and create initial OSAD installation guide.

Change-Id: I3eedadc8ba441b4d931720dd6e3f7f3489302a9c
Co-Authored-By: Matt Kassawara <mkassawara@gmail.com>
@@ -56,7 +56,8 @@ Everything we do is in launchpad and gerrit. If you'd like to raise a bug, featu

Documentation
-------------

While no os-ansible-deployment community documentation exists (yet), other than
the .rst files present in this repository, comprehensive installation guides for
Rackspace Private Cloud (an opinionated version of os-ansible-deployment) are
available at http://www.rackspace.com/knowledge_center/getting-started/rackspace-private-cloud.

Note:
These docs may not be up to date with the current release of this repository;
however, they are still a good source of documentation.

To build the docs, make sure that you have installed the Python requirements
listed in the ``dev-requirements.txt`` file, and then run the following command
from within the ``doc`` directory:

.. code-block:: bash

    make html
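Assuming pip is available, the full sequence from the repository root might look like this (a sketch, not part of any tested script):

.. code-block:: bash

    $ pip install -r dev-requirements.txt
    $ cd doc
    $ make html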
@@ -5,3 +5,4 @@ pep8==1.5.7
pyflakes==0.8.1
mccabe==0.2.1 # capped for flake8
Sphinx==1.3.1
oslosphinx>=3.0.0 # added for doc template
@@ -28,6 +28,7 @@
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'oslosphinx'
]

# Add any paths that contain templates here, relative to this directory.
@@ -18,6 +18,7 @@ Contents:
   :maxdepth: 2

   playbooks
   install-guide/index
   extending
doc/source/install-guide/app-configfiles.rst (new file, 26 lines)
@@ -0,0 +1,26 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Appendix A. Configuration files
-------------------------------

`openstack_user_config.yml
<https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/etc/openstack_deploy/openstack_user_config.yml.example>`_

`user_variables.yml
<https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/etc/openstack_deploy/user_variables.yml>`_

`user_secrets.yml
<https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/etc/openstack_deploy/user_secrets.yml>`_

`openstack_environment.yml
<https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/etc/openstack_deploy/openstack_environment.yml>`_

`swift.yml
<https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/etc/openstack_deploy/conf.d/swift.yml.example>`_

`extra_container.yml
<https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/etc/openstack_deploy/env.d/extra_container.yml.example>`_

--------------

.. include:: navigation.txt
doc/source/install-guide/app-resources.rst (new file, 39 lines)
@@ -0,0 +1,39 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Appendix B. Additional resources
--------------------------------

These additional resources may also be helpful:

- `OpenStack Documentation <http://docs.openstack.org>`__

- `OpenStack Developer Documentation <http://developer.openstack.org/>`__

- `OpenStack API Quick Start <http://docs.openstack.org/api/quick-start/content/>`__

- `OpenStack Block Storage (cinder) Developer Documentation <http://docs.openstack.org/developer/cinder/>`__

- `OpenStack Compute (nova) Developer Documentation <http://docs.openstack.org/developer/nova/>`__

- `OpenStack Compute API v2 Developer Guide <http://developer.openstack.org/api-ref-compute-v2.html>`__

- `OpenStack Dashboard (horizon) Developer Documentation <http://docs.openstack.org/developer/horizon/>`__

- `OpenStack Identity (keystone) Developer Documentation <http://docs.openstack.org/developer/keystone/>`__

- `OpenStack Image service (glance) Developer Documentation <http://docs.openstack.org/developer/glance/>`__

- `OpenStack Object Storage (swift) Developer Documentation <http://docs.openstack.org/developer/swift/>`__

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-cinder-az.rst (new file, 37 lines)
@@ -0,0 +1,37 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Availability zones
------------------

Multiple availability zones can be created to manage Block Storage
storage hosts. Edit the
``/etc/openstack_deploy/openstack_user_config.yml`` file to set up
availability zones.

#. For each cinder storage host, configure the availability zone under
   the ``container_vars`` stanza:

   .. code-block:: yaml

      cinder_storage_availability_zone: CINDERAZ

   Replace ``CINDERAZ`` with a suitable name. For example,
   ``cinderAZ_2``.

#. If more than one availability zone is created, configure the default
   availability zone for scheduling volume creation:

   .. code-block:: yaml

      cinder_default_availability_zone: CINDERAZ_DEFAULT

   Replace ``CINDERAZ_DEFAULT`` with a suitable name. For example,
   ``cinderAZ_1``. The default availability zone should be the same
   for all cinder storage hosts.

   If ``cinder_default_availability_zone`` is not defined, the
   default variable value will be used.
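Taken together, a storage host stanza in ``openstack_user_config.yml`` might look like the following; the host name, IP address, and zone names here are illustrative, not defaults:

.. code-block:: yaml

   storage_hosts:
     123456-storage01:
       ip: 172.29.236.16
       container_vars:
         cinder_storage_availability_zone: cinderAZ_1
         cinder_default_availability_zone: cinderAZ_1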

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-cinder-backup.rst (new file, 61 lines)
@@ -0,0 +1,61 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Backup
------

You can configure Block Storage (cinder) to back up volumes to Object
Storage (swift) by setting variables. If enabled, the default
configuration backs up volumes to an Object Storage installation
accessible within your environment. Alternatively, you can set
``cinder_service_backup_swift_url`` and the other variables listed below to
back up to an external Object Storage installation.

#. Add or edit the following line in the
   ``/etc/openstack_deploy/user_variables.yml`` file and set the value
   to ``True``:

   .. code-block:: yaml

      cinder_service_backup_program_enabled: True

#. By default, Block Storage will use the access credentials of the user
   initiating the backup. Default values are set in the
   ``/opt/os-ansible-deployment/playbooks/roles/os_cinder/defaults/main.yml``
   file. You can override those defaults by setting variables in
   ``/etc/openstack_deploy/user_variables.yml`` to change how Block
   Storage performs backups. As needed, add and edit any of the
   following variables in the
   ``/etc/openstack_deploy/user_variables.yml`` file:

   .. code-block:: yaml

      ...
      cinder_service_backup_swift_auth: per_user
      # Options include 'per_user' or 'single_user'. We default to
      # 'per_user' so that backups are saved to a user's swift
      # account.
      cinder_service_backup_swift_url:
      # This is your swift storage url when using 'per_user', or keystone
      # endpoint when using 'single_user'. When using 'per_user', you
      # can leave this empty or set it to None to allow cinder-backup to
      # obtain the storage url from the environment.
      cinder_service_backup_swift_auth_version: 2
      cinder_service_backup_swift_user:
      cinder_service_backup_swift_tenant:
      cinder_service_backup_swift_key:
      cinder_service_backup_swift_container: volumebackups
      cinder_service_backup_swift_object_size: 52428800
      cinder_service_backup_swift_retry_attempts: 3
      cinder_service_backup_swift_retry_backoff: 2
      cinder_service_backup_compression_algorithm: zlib
      cinder_service_backup_metadata_version: 2

The backup service is configured during installation of Block Storage.
For more information about swift, refer to the Standalone Object Storage
Deployment guide.

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-cinder-nfs.rst (new file, 44 lines)
@@ -0,0 +1,44 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

NFS back end
------------

If the NetApp back end is configured to use an NFS storage protocol,
edit ``/etc/openstack_deploy/openstack_user_config.yml``, and configure
the NFS client on each storage node that will use it.

#. Add the ``cinder_backends`` stanza (which includes
   ``cinder_nfs_client``) under the ``container_vars`` stanza for
   each storage node:

   .. code-block:: yaml

      container_vars:
        cinder_backends:
          cinder_nfs_client:

#. Configure the location of the file that lists shares available to the
   Block Storage service. This configuration file must include
   ``nfs_shares_config``:

   .. code-block:: yaml

      nfs_shares_config: SHARE_CONFIG

   Replace ``SHARE_CONFIG`` with the location of the share
   configuration file. For example, ``/etc/cinder/nfs_shares``.

#. Configure one or more NFS shares:

   .. code-block:: yaml

      shares:
        - { ip: "NFS_HOST", share: "NFS_SHARE" }

   Replace ``NFS_HOST`` with the IP address or hostname of the NFS
   server, and ``NFS_SHARE`` with the absolute path to an existing
   and accessible NFS share.
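For reference, the share configuration file named by ``nfs_shares_config`` is a plain-text list of exports, one ``host:/path`` entry per line; the address and path below are examples only:

.. code-block:: text

   203.0.113.15:/vol/cinder_volumes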

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-cinder.rst (new file, 136 lines)
@@ -0,0 +1,136 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring the Block Storage service (optional)
------------------------------------------------

.. toctree::

   configure-cinder-nfs.rst
   configure-cinder-backup.rst
   configure-cinder-az.rst

By default, the Block Storage service uses the LVM back end. To use a
NetApp storage appliance back end, edit the
``/etc/openstack_deploy/openstack_user_config.yml`` file and configure
each storage node that will use it.

Ensure that the NAS team enables httpd.admin.access.

#. Add the ``netapp`` stanza under the ``cinder_backends`` stanza for
   each storage node:

   .. code-block:: yaml

      cinder_backends:
        netapp:

   The options in subsequent steps fit under the ``netapp`` stanza.

   The back end name is arbitrary and becomes a volume type within the
   Block Storage service.

#. Configure the storage family:

   .. code-block:: yaml

      netapp_storage_family: STORAGE_FAMILY

   Replace ``STORAGE_FAMILY`` with ``ontap_7mode`` for Data ONTAP
   operating in 7-mode or ``ontap_cluster`` for Data ONTAP operating as
   a cluster.

#. Configure the storage protocol:

   .. code-block:: yaml

      netapp_storage_protocol: STORAGE_PROTOCOL

   Replace ``STORAGE_PROTOCOL`` with ``iscsi`` for iSCSI or ``nfs``
   for NFS.

   For the NFS protocol, you must also specify the location of the
   configuration file that lists the shares available to the Block
   Storage service:

   .. code-block:: yaml

      nfs_shares_config: SHARE_CONFIG

   Replace ``SHARE_CONFIG`` with the location of the share
   configuration file. For example, ``/etc/cinder/nfs_shares``.

#. Configure the server:

   .. code-block:: yaml

      netapp_server_hostname: SERVER_HOSTNAME

   Replace ``SERVER_HOSTNAME`` with the hostnames of both NetApp
   controllers.

#. Configure the server API port:

   .. code-block:: yaml

      netapp_server_port: PORT_NUMBER

   Replace ``PORT_NUMBER`` with 80 for HTTP or 443 for HTTPS.

#. Configure the server credentials:

   .. code-block:: yaml

      netapp_login: USER_NAME
      netapp_password: PASSWORD

   Replace ``USER_NAME`` and ``PASSWORD`` with the appropriate
   values.

#. Select the NetApp driver:

   .. code-block:: yaml

      volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver

#. Configure the volume back end name:

   .. code-block:: yaml

      volume_backend_name: BACKEND_NAME

   Replace ``BACKEND_NAME`` with a suitable value that provides a hint
   for the Block Storage scheduler. For example, ``NETAPP_iSCSI``.

#. Check that the ``openstack_user_config.yml`` configuration is
   accurate:

   .. code-block:: yaml

      storage_hosts:
        xxxxxx-Infra01:
          ip: 172.29.236.16
          container_vars:
            cinder_backends:
              limit_container_types: cinder_volume
              netapp:
                netapp_storage_family: ontap_7mode
                netapp_storage_protocol: nfs
                netapp_server_hostname: 203.0.113.44
                netapp_server_port: 80
                netapp_login: openstack_cinder
                netapp_password: password
                volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
                volume_backend_name: NETAPP_NFS

   For ``netapp_server_hostname``, specify the IP address of the Data
   ONTAP server. Specify ``iscsi`` or ``nfs`` for
   ``netapp_storage_protocol`` depending on the configuration. Use 80 if
   using HTTP or 443 if using HTTPS for ``netapp_server_port``.

   The ``cinder-volume.yml`` playbook will automatically install the
   ``nfs-common`` package across the hosts, transitioning from an LVM to a
   NetApp back end.

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-creds.rst (new file, 36 lines)
@@ -0,0 +1,36 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring service credentials
-------------------------------

Configure credentials for each service in the
``/etc/openstack_deploy/*_secrets.yml`` files. Consider using `Ansible
Vault <http://docs.ansible.com/playbooks_vault.html>`__ to increase
security by encrypting any files containing credentials.
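For example, a secrets file could be encrypted in place with Ansible Vault (a sketch; this assumes ``ansible-vault`` is available on the deployment host, and you choose the vault password):

.. code-block:: bash

   $ ansible-vault encrypt /etc/openstack_deploy/user_secrets.yml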

Adjust permissions on these files to restrict access by non-privileged
users.

Note that the following options configure passwords for the web
interfaces:

- ``keystone_auth_admin_password`` configures the ``admin`` tenant
  password for both the OpenStack API and dashboard access.

- ``kibana_password`` configures the password for Kibana web interface
  access.

Recommended: Use the ``pw-token-gen.py`` script to generate random
values for the variables in each file that contains service credentials:

.. code-block:: bash

   $ cd /opt/os-ansible-deployment/scripts
   $ python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml

To regenerate existing passwords, add the ``--regen`` flag.

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-glance.rst (new file, 112 lines)
@@ -0,0 +1,112 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring the Image service
-----------------------------

In an all-in-one deployment with a single infrastructure node, the Image
service uses the local file system on the target host to store images.
When deploying production clouds, we recommend backing glance with a
swift back end or some other form of shared storage.

The following procedure describes how to modify the
``/etc/openstack_deploy/user_variables.yml`` file to enable Cloud Files
usage.

#. Change the default store to use Object Storage (swift), the
   underlying architecture of Cloud Files:

   .. code-block:: yaml

      glance_default_store: swift

#. Set the appropriate authentication URL:

   .. code-block:: yaml

      glance_swift_store_auth_address: https://127.0.0.1/v2.0

#. Set the swift account credentials:

   .. code-block:: yaml

      # Replace these capitalized variables with actual data.
      glance_swift_store_user: GLANCE_SWIFT_TENANT:GLANCE_SWIFT_USER
      glance_swift_store_key: SWIFT_PASSWORD_OR_KEY

#. If needed, change ``glance_swift_store_endpoint_type`` from the
   default ``internalURL`` setting to ``publicURL``:

   .. code-block:: yaml

      glance_swift_store_endpoint_type: publicURL

#. Define the store name:

   .. code-block:: yaml

      glance_swift_store_container: STORE_NAME

   Replace ``STORE_NAME`` with the name of the swift container to use
   for storing images. If the container doesn't exist, it will be
   automatically created.

#. Define the store region:

   .. code-block:: yaml

      glance_swift_store_region: STORE_REGION

   Replace ``STORE_REGION`` if needed.

#. (Optional) Set the paste deploy flavor:

   .. code-block:: yaml

      glance_flavor: GLANCE_FLAVOR

   By default, the Image service uses caching and authenticates with the
   Identity service. The default maximum size of the image cache is 10
   GB. The default Image service container size is 12 GB. In some
   configurations, the Image service might attempt to cache an image
   which exceeds the available disk space. If necessary, you can disable
   caching. For example, to use Identity without caching, replace
   ``GLANCE_FLAVOR`` with ``keystone``:

   .. code-block:: yaml

      glance_flavor: keystone

   Or, to disable both authentication and caching, set
   ``GLANCE_FLAVOR`` to no value:

   .. code-block:: yaml

      glance_flavor:

   This option is set by default to use authentication and cache
   management in the ``playbooks/roles/os_glance/defaults/main.yml``
   file. To override the default behavior, set ``glance_flavor`` to a
   different value in ``/etc/openstack_deploy/user_variables.yml``.

   The possible values for ``GLANCE_FLAVOR`` are:

   - (Nothing)

   - ``caching``

   - ``cachemanagement``

   - ``keystone``

   - ``keystone+caching``

   - ``keystone+cachemanagement`` (default)

   - ``trusted-auth``

   - ``trusted-auth+cachemanagement``

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-haproxy.rst (new file, 28 lines)
@@ -0,0 +1,28 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring HAProxy (optional)
------------------------------

For evaluation, testing, and development, HAProxy can temporarily
provide load balancing services in lieu of hardware load balancers. The
default HAProxy configuration does not provide highly-available load
balancing services. For production deployments, deploy a hardware load
balancer prior to deploying OSAD.

- In the ``/etc/openstack_deploy/openstack_user_config.yml`` file, add
  the ``haproxy_hosts`` section with one or more infrastructure target
  hosts, for example:

  .. code-block:: yaml

     haproxy_hosts:
       123456-infra01:
         ip: 172.29.236.51
       123457-infra02:
         ip: 172.29.236.52
       123458-infra03:
         ip: 172.29.236.53

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-hostlist.rst (new file, 108 lines)
@@ -0,0 +1,108 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring target hosts
------------------------

Modify the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure the target hosts.

Do not assign the same IP address to different target hostnames;
unexpected results may occur. Each IP address and hostname must be a
matching pair. To use the same host in multiple roles, for example
infrastructure and networking, specify the same hostname and IP in each
section.

Use short hostnames rather than fully-qualified domain names (FQDN) to
prevent length limitation issues with LXC and SSH. For example, a
suitable short hostname for a compute host might be
``123456-Compute001``.

#. Configure a list containing at least three infrastructure target
   hosts in the ``infra_hosts`` section:

   .. code-block:: yaml

      infra_hosts:
        603975-infra01:
          ip: INFRA01_IP_ADDRESS
        603989-infra02:
          ip: INFRA02_IP_ADDRESS
        627116-infra03:
          ip: INFRA03_IP_ADDRESS
        628771-infra04: ...

   Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt``
   container management bridge on each infrastructure target host. Use
   the same net block as bond0 on the nodes, for example:

   .. code-block:: yaml

      infra_hosts:
        603975-infra01:
          ip: 10.240.0.80
        603989-infra02:
          ip: 10.240.0.81
        627116-infra03:
          ip: 10.240.0.184

#. Configure a list containing at least one network target host in the
   ``network_hosts`` section:

   .. code-block:: yaml

      network_hosts:
        602117-network01:
          ip: NETWORK01_IP_ADDRESS
        602534-network02: ...

   Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt``
   container management bridge on each network target host.

#. Configure a list containing at least one compute target host in the
   ``compute_hosts`` section:

   .. code-block:: yaml

      compute_hosts:
        900089-compute001:
          ip: COMPUTE001_IP_ADDRESS
        900090-compute002: ...

   Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt``
   container management bridge on each compute target host.

#. Configure a list containing at least one logging target host in the
   ``log_hosts`` section:

   .. code-block:: yaml

      log_hosts:
        900088-logging01:
          ip: LOGGER1_IP_ADDRESS
        903877-logging02: ...

   Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt``
   container management bridge on each logging target host.

#. Configure a list containing at least one optional storage host in the
   ``storage_hosts`` section:

   .. code-block:: yaml

      storage_hosts:
        100338-storage01:
          ip: STORAGE01_IP_ADDRESS
        100392-storage02: ...

   Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt``
   container management bridge on each storage target host. Each storage
   host also requires additional configuration to define the back end
   driver.

   The default configuration includes an optional storage host. To
   install without storage hosts, comment out the stanza beginning with
   the *storage\_hosts:* line.

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-hypervisor.rst (new file, 18 lines)
@@ -0,0 +1,18 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring the hypervisor (optional)
-------------------------------------

By default, the KVM hypervisor is used. If you are deploying to a host
that does not support KVM hardware acceleration extensions, select a
suitable hypervisor type such as ``qemu`` or ``lxc``. To change the
hypervisor type, uncomment and edit the following line in the
``/etc/openstack_deploy/user_variables.yml`` file:

.. code-block:: yaml

   # nova_virt_type: kvm

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-networking.rst (new file, 163 lines)
@@ -0,0 +1,163 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring target host networking
----------------------------------

Edit the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure target host networking.

#. Configure the IP address ranges associated with each network in the
   ``cidr_networks`` section:

   .. code-block:: yaml

      cidr_networks:
        # Management (same range as br-mgmt on the target hosts)
        management: CONTAINER_MGMT_CIDR
        # Tunnel endpoints for VXLAN tenant networks
        # (same range as br-vxlan on the target hosts)
        tunnel: TUNNEL_CIDR
        # Storage (same range as br-storage on the target hosts)
        storage: STORAGE_CIDR

   Replace ``*_CIDR`` with the appropriate IP address range in CIDR
   notation. For example, 203.0.113.0/24.

   Use the same IP address ranges as the underlying physical network
   interfaces or bridges configured in `the section called "Configuring
   the network" <sec-hosts-target-network.html>`__. For example, if the
   container network uses 203.0.113.0/24, the ``CONTAINER_MGMT_CIDR``
   should also use 203.0.113.0/24.

   The default configuration includes the optional storage and service
   networks. To remove one or both of them, comment out the appropriate
   network name.

#. Configure the existing IP addresses in the ``used_ips`` section:

   .. code-block:: yaml

      used_ips:
        - EXISTING_IP_ADDRESSES

   Replace ``EXISTING_IP_ADDRESSES`` with a list of existing IP
   addresses in the ranges defined in the previous step. This list
   should include all IP addresses manually configured on target hosts
   in `the section called "Configuring the
   network" <sec-hosts-target-network.html>`__, internal load balancers,
   the service network bridge, and any other devices, to avoid conflicts
   during the automatic IP address generation process.

   Add individual IP addresses on separate lines. For example, to
   prevent use of 203.0.113.101 and 203.0.113.201:

   .. code-block:: yaml

      used_ips:
        - 203.0.113.101
        - 203.0.113.201

   Add a range of IP addresses using a comma. For example, to prevent
   use of 203.0.113.101-201:

   .. code-block:: yaml

      used_ips:
        - 203.0.113.101, 203.0.113.201

#. Configure load balancing in the ``global_overrides`` section:

   .. code-block:: yaml

      global_overrides:
        # Internal load balancer VIP address
        internal_lb_vip_address: INTERNAL_LB_VIP_ADDRESS
        # External (DMZ) load balancer VIP address
        external_lb_vip_address: EXTERNAL_LB_VIP_ADDRESS
        # Container network bridge device
        management_bridge: "MGMT_BRIDGE"
        # Tunnel network bridge device
        tunnel_bridge: "TUNNEL_BRIDGE"

   Replace ``INTERNAL_LB_VIP_ADDRESS`` with the internal IP address of
   the load balancer. Infrastructure and OpenStack services use this IP
   address for internal communication.

   Replace ``EXTERNAL_LB_VIP_ADDRESS`` with the external, public, or
   DMZ IP address of the load balancer. Users primarily use this IP
   address for external API and web interface access.

   Replace ``MGMT_BRIDGE`` with the container bridge device name,
   typically ``br-mgmt``.

   Replace ``TUNNEL_BRIDGE`` with the tunnel/overlay bridge device
   name, typically ``br-vxlan``.

#. Configure optional networks in the ``provider_networks`` subsection:

   .. code-block:: yaml

      provider_networks:
        - network:
            group_binds:
              - glance_api
              - cinder_api
              - cinder_volume
              - nova_compute
            type: "raw"
            container_bridge: "br-storage"
            container_interface: "eth2"
            ip_from_q: "storage"

   The default configuration includes the optional storage and service
   networks. To remove one or both of them, comment out the entire
   associated stanza beginning with the *- network:* line.

#. Configure the OpenStack Networking tunnel/overlay network in the
   ``provider_networks`` subsection:

   .. code-block:: yaml

      provider_networks:
        - network:
            group_binds:
              - neutron_linuxbridge_agent
            container_bridge: "br-vxlan"
            container_interface: "eth10"
            ip_from_q: "tunnel"
            type: "vxlan"
            range: "TUNNEL_ID_RANGE"
            net_name: "vxlan"

   Replace ``TUNNEL_ID_RANGE`` with the tunnel ID range. For example,
   1:1000.

#. Configure OpenStack Networking provider networks in the
   ``provider_networks`` subsection:

   .. code-block:: yaml

      provider_networks:
        - network:
            group_binds:
              - neutron_linuxbridge_agent
            container_bridge: "br-vlan"
            container_interface: "eth11"
            type: "flat"
            net_name: "vlan"
        - network:
            group_binds:
              - neutron_linuxbridge_agent
            container_bridge: "br-vlan"
            container_interface: "eth11"
            type: "vlan"
            range: VLAN_ID_RANGE
            net_name: "vlan"

   Replace ``VLAN_ID_RANGE`` with the VLAN ID range for each VLAN
   provider network. For example, 1:1000. Create a similar stanza for
   each additional provider network.

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-prereq.rst (new file, 17 lines)
@@ -0,0 +1,17 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Prerequisites
-------------

#. Recursively copy the contents of the
   ``/opt/os-ansible-deployment/etc/openstack_deploy`` directory to the
   ``/etc/openstack_deploy`` directory.

#. Change to the ``/etc/openstack_deploy`` directory.

#. Copy the ``openstack_user_config.yml.example`` file to
   ``/etc/openstack_deploy/openstack_user_config.yml``.
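The three steps above might look like the following on the deployment host; the paths assume the default checkout location:

.. code-block:: bash

   $ cp -r /opt/os-ansible-deployment/etc/openstack_deploy /etc/
   $ cd /etc/openstack_deploy
   $ cp openstack_user_config.yml.example openstack_user_config.yml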

--------------

.. include:: navigation.txt
doc/source/install-guide/configure-swift-add.rst (new file, 39 lines)
@@ -0,0 +1,39 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Add to existing deployment
--------------------------

Complete the following procedure to deploy Object Storage on an
existing deployment.

#. `The section called "Configure and mount storage
   devices" <configure-swift-devices.html>`__

#. `The section called "Configure an Object Storage
   deployment" <configure-swift-config.html>`__

#. Optionally, allow all Identity users to use Object Storage by setting
   ``swift_allow_all_users`` in the ``user_variables.yml`` file to
   ``True``. Any user with the ``_member_`` role (all authorized
   Identity (keystone) users) can create containers and upload objects
   to Object Storage.

   If this value is ``False``, then by default only users with the
   admin or swiftoperator role are allowed to create containers or
   manage tenants.

   When the back end type for the Image service (glance) is set to
   ``swift``, the Image service can access the Object Storage cluster
   regardless of whether this value is ``True`` or ``False``.
|
||||
|
||||
#. Run the Object Storage play:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ cd /opt/os-ansible-deployment/playbooks
|
||||
$ openstack-ansible os-swift-install.yml
|
||||
|
||||
|
||||
--------------
|
||||
|
||||
.. include:: navigation.txt
|
290
doc/source/install-guide/configure-swift-config.rst
Normal file
@ -0,0 +1,290 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring the service
-----------------------

**Procedure 5.2. Updating the Object Storage configuration swift.yml file**

#. Copy the ``/etc/openstack_deploy/conf.d/swift.yml.example`` file to
   ``/etc/openstack_deploy/conf.d/swift.yml``:

   .. code-block:: bash

      # cp /etc/openstack_deploy/conf.d/swift.yml.example \
        /etc/openstack_deploy/conf.d/swift.yml
#. Update the global override values:

   .. code-block:: yaml

      # global_overrides:
      #   swift:
      #     part_power: 8
      #     weight: 100
      #     min_part_hours: 1
      #     repl_number: 3
      #     storage_network: 'br-storage'
      #     replication_network: 'br-repl'
      #     drives:
      #       - name: sdc
      #       - name: sdd
      #       - name: sde
      #       - name: sdf
      #     mount_point: /mnt
      #     account:
      #     container:
      #     storage_policies:
      #       - policy:
      #           name: gold
      #           index: 0
      #           default: True
      #       - policy:
      #           name: silver
      #           index: 1
      #           repl_number: 3
      #           deprecated: True
   ``part_power``
      Set the partition power value based on the total amount of
      storage the entire ring will use.

      Multiply the maximum number of drives ever used with this Object
      Storage installation by 100 and round that value up to the
      closest power of two. For example, a maximum of six drives,
      times 100, equals 600. The nearest power of two above 600 is two
      to the power of ten, so the partition power is ten. The
      partition power cannot be changed after the Object Storage rings
      are built.
   ``weight``
      The default weight is 100. If the drives are different sizes, set
      the weight value to avoid uneven distribution of data. For
      example, a 1 TB disk would have a weight of 100, while a 2 TB
      drive would have a weight of 200.

   ``min_part_hours``
      The default value is 1. Set the minimum partition hours to the
      amount of time to lock a partition's replicas after a partition
      has been moved. Moving multiple replicas at the same time might
      make data inaccessible. This value can be set separately in the
      swift, container, account, and policy sections with the value in
      lower sections superseding the value in the swift section.

   ``repl_number``
      The default value is 3. Set the replication number to the number
      of replicas of each object. This value can be set separately in
      the swift, container, account, and policy sections with the value
      in the more granular sections superseding the value in the swift
      section.

   ``storage_network``
      By default, the swift services will listen on the default
      management IP. Optionally, specify the interface of the storage
      network.

      If the ``storage_network`` is not set, but the ``storage_ips``
      per host are set (or the ``storage_ip`` is not on the
      ``storage_network`` interface), the proxy server will not be able
      to connect to the storage services.

   ``replication_network``
      Optionally, specify a dedicated replication network interface, so
      dedicated replication can be set up. If this value is not
      specified, no dedicated ``replication_network`` is set.

      As with the ``storage_network``, if the ``repl_ip`` is not set on
      the ``replication_network`` interface, replication will not work
      properly.

   ``drives``
      Set the default drives per host. This is useful when all hosts
      have the same drives. These can be overridden on a per-host
      basis.

   ``mount_point``
      Set the ``mount_point`` value to the location where the swift
      drives are mounted. For example, with a mount point of ``/mnt``
      and a drive of ``sdc``, a drive is mounted at ``/mnt/sdc`` on the
      ``swift_host``. This can be overridden on a per-host basis.

   ``storage_policies``
      Storage policies determine on which hardware data is stored, how
      the data is stored across that hardware, and in which region the
      data resides. Each storage policy must have a unique ``name``
      and a unique ``index``. There must be a storage policy with an
      index of 0 in the ``swift.yml`` file to use any legacy containers
      created before storage policies were instituted.

   ``default``
      Set the default value to *yes* for at least one policy. This is
      the default storage policy for any non-legacy containers that are
      created.

   ``deprecated``
      Set the deprecated value to *yes* to turn off storage policies.

   For account and container rings, ``min_part_hours`` and
   ``repl_number`` are the only values that can be set. Setting them
   in this section overrides the defaults for the specific ring.
#. Update the Object Storage proxy hosts values:

   .. code-block:: yaml

      # swift-proxy_hosts:
      #   infra-node1:
      #     ip: 192.0.2.1
      #   infra-node2:
      #     ip: 192.0.2.2
      #   infra-node3:
      #     ip: 192.0.2.3

   ``swift-proxy_hosts``
      Set the ``IP`` address of the hosts that Ansible will connect to
      to deploy the swift-proxy containers. The ``swift-proxy_hosts``
      value should match the infra nodes.
#. Update the Object Storage hosts values:

   .. code-block:: yaml

      # swift_hosts:
      #   swift-node1:
      #     ip: 192.0.2.4
      #     container_vars:
      #       swift_vars:
      #         zone: 0
      #   swift-node2:
      #     ip: 192.0.2.5
      #     container_vars:
      #       swift_vars:
      #         zone: 1
      #   swift-node3:
      #     ip: 192.0.2.6
      #     container_vars:
      #       swift_vars:
      #         zone: 2
      #   swift-node4:
      #     ip: 192.0.2.7
      #     container_vars:
      #       swift_vars:
      #         zone: 3
      #   swift-node5:
      #     ip: 192.0.2.8
      #     container_vars:
      #       swift_vars:
      #         storage_ip: 198.51.100.8
      #         repl_ip: 203.0.113.8
      #         zone: 4
      #         region: 3
      #         weight: 200
      #         groups:
      #           - account
      #           - container
      #           - silver
      #         drives:
      #           - name: sdb
      #             storage_ip: 198.51.100.9
      #             repl_ip: 203.0.113.9
      #             weight: 75
      #             groups:
      #               - gold
      #           - name: sdc
      #           - name: sdd
      #           - name: sde
      #           - name: sdf
   ``swift_hosts``
      Specify the hosts to be used as the storage nodes. The ``ip`` is
      the address of the host to which Ansible connects. Set the name
      and IP address of each Object Storage host. The ``swift_hosts``
      section is not required.

   ``swift_vars``
      Contains the Object Storage host specific values.

   ``storage_ip`` and ``repl_ip``
      These values are based on the IP addresses of the host's
      ``storage_network`` or ``replication_network``. For example, if
      the ``storage_network`` is ``br-storage`` and host1 has an IP
      address of 1.1.1.1 on ``br-storage``, then that is the IP address
      that will be used for ``storage_ip``. If only the ``storage_ip``
      is specified, then the ``repl_ip`` defaults to the ``storage_ip``.
      If neither is specified, both default to the host IP address.

      Overriding these values on a host or drive basis can cause
      problems if the IP address that the service listens on is based
      on a specified ``storage_network`` or ``replication_network`` and
      the ring is set to a different IP address.

   ``zone``
      The default is 0. Optionally, set the Object Storage zone for the
      ring.

   ``region``
      Optionally, set the Object Storage region for the ring.

   ``weight``
      The default weight is 100. If the drives are different sizes, set
      the weight value to avoid uneven distribution of data. This value
      can be specified on a host or drive basis (if specified at both,
      the drive setting takes precedence).

   ``groups``
      Set the groups to list the rings to which a host's drive belongs.
      This can be set on a per-drive basis, which will override the
      host setting.

   ``drives``
      Set the names of the drives on this Object Storage host. At least
      one name must be specified.
   In the following example, ``swift-node5`` shows values in the
   ``swift_hosts`` section that will override the global values. Groups
   are set, which overrides the global settings for drive ``sdb``. The
   weight is overridden for the host and specifically adjusted on drive
   ``sdb``. Also, the ``storage_ip`` and ``repl_ip`` are set differently
   for ``sdb``.

   .. code-block:: yaml

      # swift-node5:
      #   ip: 192.0.2.8
      #   container_vars:
      #     swift_vars:
      #       storage_ip: 198.51.100.8
      #       repl_ip: 203.0.113.8
      #       zone: 4
      #       region: 3
      #       weight: 200
      #       groups:
      #         - account
      #         - container
      #         - silver
      #       drives:
      #         - name: sdb
      #           storage_ip: 198.51.100.9
      #           repl_ip: 203.0.113.9
      #           weight: 75
      #           groups:
      #             - gold
      #         - name: sdc
      #         - name: sdd
      #         - name: sde
      #         - name: sdf
#. Ensure the ``swift.yml`` file is in the
   ``/etc/openstack_deploy/conf.d/`` folder.
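The ``part_power`` sizing rule described above (maximum drives times 100, rounded up to a power of two) can be sketched as a small shell calculation. This is assumed arithmetic illustrating the rule, not a command from the playbooks; the drive count is the six-drive example from this page.

```shell
# 6 drives * 100 = 600; the next power of two is 1024 = 2^10,
# so the partition power works out to 10.
max_drives=6
target=$((max_drives * 100))
part_power=0
while [ $((1 << part_power)) -lt "$target" ]; do
    part_power=$((part_power + 1))
done
echo "part_power=$part_power"
```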

--------------

.. include:: navigation.txt
112
doc/source/install-guide/configure-swift-devices.rst
Normal file
@ -0,0 +1,112 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Storage devices
---------------

This section offers a set of prerequisite instructions for setting up
Object Storage storage devices. The storage devices must be set up
before installing Object Storage.

**Procedure 5.1. Configuring and mounting storage devices**

We recommend a minimum of three Object Storage hosts, each with five
storage disks. The example commands in this procedure assume the
storage devices for Object Storage are devices ``sdc`` through
``sdg``.
#. Determine the storage devices on the node to be used for Object
   Storage.

#. Format each device on the node used for storage with XFS. While
   formatting the devices, add a unique label for each device.

   Without labels, a failed drive can cause mount points to shift and
   data to become inaccessible.

   For example, create the file systems on the devices using the
   **mkfs** command:

   .. code-block:: bash

      $ apt-get install xfsprogs

      $ mkfs.xfs -f -i size=1024 -L sdc /dev/sdc
      $ mkfs.xfs -f -i size=1024 -L sdd /dev/sdd
      $ mkfs.xfs -f -i size=1024 -L sde /dev/sde
      $ mkfs.xfs -f -i size=1024 -L sdf /dev/sdf
      $ mkfs.xfs -f -i size=1024 -L sdg /dev/sdg
#. Add the mount locations to the ``/etc/fstab`` file so that the
   storage devices are remounted on boot. The following example mount
   options are recommended when using XFS. Note that these are file
   entries, not commands:

   .. code-block:: text

      LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
      LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
      LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
      LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
      LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
#. Create the mount points for the devices using the **mkdir** command:

   .. code-block:: bash

      $ mkdir -p /srv/node/sdc
      $ mkdir -p /srv/node/sdd
      $ mkdir -p /srv/node/sde
      $ mkdir -p /srv/node/sdf
      $ mkdir -p /srv/node/sdg

   The mount point is referenced as the ``mount_point`` parameter in
   the ``swift.yml`` file (``/etc/openstack_deploy/conf.d/swift.yml``):

   .. code-block:: bash

      $ mount /srv/node/sdc
      $ mount /srv/node/sdd
      $ mount /srv/node/sde
      $ mount /srv/node/sdf
      $ mount /srv/node/sdg

To view an annotated example of the ``swift.yml`` file, see `Appendix A,
*OSAD configuration files* <app-configfiles.html>`__.
For the following mounted devices:

+--------------+-----------------+
| Device       | Mount location  |
+==============+=================+
| /dev/sdc     | /srv/node/sdc   |
+--------------+-----------------+
| /dev/sdd     | /srv/node/sdd   |
+--------------+-----------------+
| /dev/sde     | /srv/node/sde   |
+--------------+-----------------+
| /dev/sdf     | /srv/node/sdf   |
+--------------+-----------------+
| /dev/sdg     | /srv/node/sdg   |
+--------------+-----------------+

Table: Table 5.1. Mounted devices

The entry in the ``swift.yml`` would be:

.. code-block:: yaml

   # drives:
   #   - name: sdc
   #   - name: sdd
   #   - name: sde
   #   - name: sdf
   #   - name: sdg
   # mount_point: /srv/node
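The per-device ``mkfs`` and ``mkdir`` commands above can be collapsed into a single loop. This is a sketch only: it echoes the commands instead of executing them, since ``mkfs.xfs`` is destructive, and the device list is the example set from this page.

```shell
# Print the format and mount-point commands for the example devices
# (sdc..sdg); remove the `echo`s to run them for real, as root.
for dev in sdc sdd sde sdf sdg; do
    echo "mkfs.xfs -f -i size=1024 -L $dev /dev/$dev"
    echo "mkdir -p /srv/node/$dev"
done
```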
--------------

.. include:: navigation.txt
70
doc/source/install-guide/configure-swift-glance.rst
Normal file
@ -0,0 +1,70 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Integrate with the Image Service
--------------------------------

Optionally, the images created by the Image Service (glance) can be
stored using Object Storage.

If there is an existing Image Service (glance) backend (for example,
cloud files) and you want to use Object Storage (swift) as the Image
Service backend instead, re-add any images from the Image Service
after moving to Object Storage. If the Image Service variables are
changed (as described below) before the images are re-added, any
images in the Image Service will no longer be available.

**Procedure 5.3. Integrating Object Storage with Image Service**

This procedure requires the following:

- OSAD Kilo (v11)

- Object Storage v2.2.0
#. Update the glance options in the
   ``/etc/openstack_deploy/user_variables.yml`` file:

   .. code-block:: yaml

      # Glance Options
      glance_default_store: swift
      glance_swift_store_auth_address: '{{ auth_identity_uri }}'
      glance_swift_store_container: glance_images
      glance_swift_store_endpoint_type: internalURL
      glance_swift_store_key: '{{ glance_service_password }}'
      glance_swift_store_region: RegionOne
      glance_swift_store_user: 'service:glance'

   - ``glance_default_store``: Set the default store to ``swift``.

   - ``glance_swift_store_auth_address``: Set to the local
     authentication address using the ``'{{ auth_identity_uri }}'``
     variable.

   - ``glance_swift_store_container``: Set the container name.

   - ``glance_swift_store_endpoint_type``: Set the endpoint type to
     ``internalURL``.

   - ``glance_swift_store_key``: Set the Image Service password using
     the ``{{ glance_service_password }}`` variable.

   - ``glance_swift_store_region``: Set the region. The default value
     is ``RegionOne``.

   - ``glance_swift_store_user``: Set the tenant and user name to
     ``'service:glance'``.

#. Rerun the Image Service (glance) configuration plays.

#. Run the Image Service (glance) playbook:

   .. code-block:: bash

      $ cd /opt/os-ansible-deployment/playbooks
      $ openstack-ansible os-glance-install.yml --tags "glance-config"

--------------

.. include:: navigation.txt
23
doc/source/install-guide/configure-swift-overview.rst
Normal file
@ -0,0 +1,23 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Overview
--------

Object Storage is configured using the
``/etc/openstack_deploy/conf.d/swift.yml`` file and the
``/etc/openstack_deploy/user_variables.yml`` file.

The group variables in the
``/etc/openstack_deploy/conf.d/swift.yml`` file are used by the
Ansible playbooks when installing Object Storage. Some variables cannot
be changed after they are set, while some changes require re-running the
playbooks. The values in the ``swift_hosts`` section supersede values in
the ``swift`` section.
To view the configuration files, including information about which
variables are required and which are optional, see `Appendix A, *OSAD
configuration files* <app-configfiles.html>`__.

--------------

.. include:: navigation.txt
52
doc/source/install-guide/configure-swift-policies.rst
Normal file
@ -0,0 +1,52 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Storage Policies
----------------

Storage Policies allow segmenting the cluster for various purposes
through the creation of multiple object rings. Using policies, different
devices can belong to different rings with varying levels of
replication. By supporting multiple object rings, Object Storage can
segregate the objects within a single cluster.

Storage policies can be used for the following situations:

- Differing levels of replication: A provider may want to offer 2x
  replication and 3x replication, but does not want to maintain two
  separate clusters. They can set up a 2x policy and a 3x policy and
  assign the nodes to their respective rings.

- Improving performance: Just as solid state drives (SSD) can be used
  as the exclusive members of an account or database ring, an SSD-only
  object ring can be created to implement a low-latency or
  high-performance policy.

- Collecting nodes into groups: Different object rings can have
  different physical servers so that objects in specific storage
  policies are always placed in a specific data center or geography.

- Differing storage implementations: A policy can be used to direct
  traffic to collected nodes that use a different disk file (for
  example, Kinetic, GlusterFS).

Most storage clusters do not require more than one storage policy. The
following problems can occur if using multiple storage policies per
cluster:

- Creating a second storage policy without any specified drives (all
  drives are part of only the account, container, and default storage
  policy groups) creates an empty ring for that storage policy.

- A non-default storage policy is used only if specified when creating
  a container, using the ``X-Storage-Policy: <policy-name>`` header.
  After the container is created, it uses the created storage policy.
  Other containers continue using the default or another storage policy
  specified when created.
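Creating a container under a specific policy uses the ``X-Storage-Policy`` header described above. A hypothetical request is sketched below; the endpoint, token, and container name are placeholders, and the command is echoed rather than executed so the sketch performs no network I/O.

```shell
# Placeholder values: obtain a real storage URL and token from the
# Identity service before sending such a request.
STORAGE_URL="https://swift.example.com/v1/AUTH_demo"
TOKEN="example-token"
# Compose the container-create request that would select the
# "silver" policy from the example configuration.
echo curl -i -X PUT "$STORAGE_URL/silver-container" \
    -H "X-Auth-Token: $TOKEN" \
    -H "X-Storage-Policy: silver"
```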
For more information about storage policies, see `Storage
Policies <http://docs.openstack.org/developer/swift/overview_policies.html>`__.

--------------

.. include:: navigation.txt
45
doc/source/install-guide/configure-swift.rst
Normal file
@ -0,0 +1,45 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring the Object Storage service (optional)
-------------------------------------------------

.. toctree::

   configure-swift-overview.rst
   configure-swift-devices.rst
   configure-swift-config.rst
   configure-swift-glance.rst
   configure-swift-add.rst
   configure-swift-policies.rst

Object Storage (swift) is a multi-tenant object storage system. It is
highly scalable, can manage large amounts of unstructured data, and
provides a RESTful HTTP API.

The following procedure describes how to set up storage devices and
modify the Object Storage configuration files to enable Object Storage
usage.

#. `the section called "Configure and mount storage
   devices" <configure-swift-devices.html>`__

#. `the section called "Configure an Object Storage
   deployment" <configure-swift-config.html>`__

#. Optionally, allow all Identity users to use Object Storage by setting
   ``swift_allow_all_users`` in the ``user_variables.yml`` file to
   ``True``. Any users with the ``_member_`` role (all authorized
   Identity (keystone) users) can create containers and upload objects
   to Object Storage.

   If this value is ``False``, then by default, only users with the
   admin or swiftoperator role are allowed to create containers or
   manage tenants.

   When the backend type for the Image Service (glance) is set to
   ``swift``, the Image Service can access the Object Storage cluster
   regardless of whether this value is ``True`` or ``False``.

--------------

.. include:: navigation.txt
49
doc/source/install-guide/configure.rst
Normal file
@ -0,0 +1,49 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Chapter 5. Deployment configuration
-----------------------------------

.. toctree::

   configure-prereq.rst
   configure-networking.rst
   configure-hostlist.rst
   configure-creds.rst
   configure-hypervisor.rst
   configure-glance.rst
   configure-cinder.rst
   configure-swift.rst
   configure-haproxy.rst

**Figure 5.1. Installation work flow**

.. image:: figures/workflow-configdeployment.png

Ansible references a handful of files containing mandatory and optional
configuration directives. These files must be modified to define the
target environment before running the Ansible playbooks. Perform the
following tasks:

- Configure target host networking to define bridge interfaces and
  networks

- Configure a list of target hosts on which to install the software

- Configure virtual and physical network relationships for OpenStack
  Networking (neutron)

- (Optional) Configure the hypervisor

- (Optional) Configure Block Storage (cinder) to use the NetApp back
  end

- (Optional) Configure Block Storage (cinder) backups

- (Optional) Configure Block Storage availability zones

- Configure passwords for all services

--------------

.. include:: navigation.txt
21
doc/source/install-guide/deploymenthost-add.rst
Normal file
@ -0,0 +1,21 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring the operating system
--------------------------------

Install additional software packages and configure NTP.

#. Install additional software packages if not already installed during
   operating system installation:

   .. code-block:: bash

      # apt-get install aptitude build-essential git ntp ntpdate \
        openssh-server python-dev sudo

#. Configure NTP to synchronize with a suitable time source.
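One possible way to carry out the NTP step above (an assumed approach, not prescribed by the playbooks) is to add a pool server to ``/etc/ntp.conf`` and restart the daemon. A temporary file stands in for ``/etc/ntp.conf`` so the sketch is harmless to run:

```shell
# Append a time source to a stand-in ntp.conf; on a real target host,
# edit /etc/ntp.conf and run `service ntp restart` to apply. The
# server name is a placeholder.
conf=$(mktemp)
echo "server 0.pool.ntp.org iburst" >> "$conf"
grep "pool.ntp.org" "$conf"
```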

--------------

.. include:: navigation.txt
13
doc/source/install-guide/deploymenthost-os.rst
Normal file
@ -0,0 +1,13 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Installing the operating system
-------------------------------

Install the `Ubuntu Server 14.04 (Trusty Tahr) LTS
64-bit <http://releases.ubuntu.com/14.04/>`__ operating system on the
deployment host with at least one network interface configured to access
the Internet or suitable local repositories.

--------------

.. include:: navigation.txt
27
doc/source/install-guide/deploymenthost-osad.rst
Normal file
@ -0,0 +1,27 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Installing source and dependencies
----------------------------------

Install the source and dependencies for the deployment host.

#. Clone the OSAD repository into the ``/opt/os-ansible-deployment``
   directory:

   .. code-block:: bash

      # git clone -b TAG https://github.com/stackforge/os-ansible-deployment.git /opt/os-ansible-deployment

   Replace ``TAG`` with the current stable release tag.

#. Change to the ``/opt/os-ansible-deployment`` directory, and run the
   Ansible bootstrap script:

   .. code-block:: bash

      # scripts/bootstrap-ansible.sh

--------------

.. include:: navigation.txt
15
doc/source/install-guide/deploymenthost-sshkeys.rst
Normal file
@ -0,0 +1,15 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring Secure Shell (SSH) keys
-----------------------------------

Ansible uses Secure Shell (SSH) with public key authentication for
connectivity between the deployment and target hosts. To reduce user
interaction during Ansible operations, key pairs should not include
passphrases. However, if a passphrase is required, consider using the
**ssh-agent** and **ssh-add** commands to temporarily store the
passphrase before performing Ansible operations.
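The agent workflow mentioned above can be sketched as follows. A throwaway, passphrase-less demo key is generated so the example runs anywhere; on a real deployment host you would ``ssh-add`` your actual key and enter its passphrase once.

```shell
# Start an agent for this shell, cache a key, and list what is loaded.
eval "$(ssh-agent -s)" > /dev/null
tmpkey=$(mktemp -u)                      # demo key path (placeholder)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$tmpkey"
ssh-add "$tmpkey" 2> /dev/null
ssh-add -l                               # shows the cached key
```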
--------------

.. include:: navigation.txt
28
doc/source/install-guide/deploymenthost.rst
Normal file
@ -0,0 +1,28 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Chapter 3. Deployment host
--------------------------

.. toctree::

   deploymenthost-os.rst
   deploymenthost-add.rst
   deploymenthost-osad.rst
   deploymenthost-sshkeys.rst

**Figure 3.1. Installation work flow**

.. image:: figures/workflow-deploymenthost.png

The OSAD installation process recommends one deployment host. The
deployment host contains Ansible and orchestrates the OSAD installation
on the target hosts. One of the target hosts, preferably one of the
infrastructure variants, can be used as the deployment host. To use a
deployment host as a target host, follow the steps in `Chapter 4,
*Target hosts* <targethosts.html>`__ on the deployment host. This
guide assumes separate deployment and target hosts.

--------------

.. include:: navigation.txt
BIN
doc/source/install-guide/figures/environment-overview.png
Normal file
BIN
doc/source/install-guide/figures/networkarch-bare-external.png
Normal file
BIN
doc/source/install-guide/figures/networkcomponents.png
Normal file
BIN
doc/source/install-guide/figures/networking-compute.png
Normal file
BIN
doc/source/install-guide/figures/networking-neutronagents.png
Normal file
BIN
doc/source/install-guide/figures/workflow-configdeployment.png
Normal file
BIN
doc/source/install-guide/figures/workflow-deploymenthost.png
Normal file
BIN
doc/source/install-guide/figures/workflow-infraplaybooks.png
Normal file
BIN
doc/source/install-guide/figures/workflow-openstackplaybooks.png
Normal file
BIN
doc/source/install-guide/figures/workflow-overview.png
Normal file
BIN
doc/source/install-guide/figures/workflow-targethosts.png
Normal file
62
doc/source/install-guide/index.rst
Normal file
@ -0,0 +1,62 @@
OpenStack Ansible Installation Guide
====================================

`Home <index.html>`__ OpenStack Ansible Installation Guide

Overview
^^^^^^^^

.. toctree::

   overview.rst

Deployment host
^^^^^^^^^^^^^^^

.. toctree::

   deploymenthost.rst

Target hosts
^^^^^^^^^^^^

.. toctree::

   targethosts.rst

Configuration
^^^^^^^^^^^^^

.. toctree::

   configure.rst

Installation
^^^^^^^^^^^^

.. toctree::

   install-foundation.rst
   install-infrastructure.rst
   install-openstack.rst

Operations
^^^^^^^^^^

.. toctree::

   ops.rst

Appendix
^^^^^^^^

.. toctree::

   app-configfiles.rst
   app-resources.rst
32
doc/source/install-guide/install-foundation-run.rst
Normal file
@ -0,0 +1,32 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Running the foundation playbook
-------------------------------

#. Change to the ``/opt/os-ansible-deployment/playbooks`` directory.

#. Run the host setup playbook, which runs a series of sub-playbooks:

   .. code-block:: bash

      $ openstack-ansible setup-hosts.yml

   Confirm satisfactory completion with zero items unreachable or
   failed:

   .. code-block:: bash

      PLAY RECAP ********************************************************************
      ...
      deployment_host : ok=18 changed=11 unreachable=0 failed=0

#. If using HAProxy, run the playbook to deploy it:

   .. code-block:: bash

      $ openstack-ansible haproxy-install.yml
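
Completion can also be checked mechanically rather than by eyeballing the PLAY RECAP. A minimal sketch, not part of the playbooks; the captured log file name and its contents are assumptions created here for illustration:

```shell
# Sketch only: scan a captured playbook log for problem hosts.
# "setup-hosts.log" is an assumed capture file, filled with sample data.
cat > setup-hosts.log <<'EOF'
PLAY RECAP ********************************************************************
deployment_host : ok=18 changed=11 unreachable=0 failed=0
EOF

# Any non-zero unreachable= or failed= count indicates a problem host.
if grep -qE 'unreachable=[1-9]|failed=[1-9]' setup-hosts.log; then
    echo "playbook reported errors"
else
    echo "playbook completed cleanly"   # prints this for the sample data
fi
```

The same check applies to any of the setup playbooks in this guide, provided their output is captured to a file.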

--------------

.. include:: navigation.txt
29
doc/source/install-guide/install-foundation.rst
Normal file
@ -0,0 +1,29 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Chapter 6. Foundation playbooks
-------------------------------

.. toctree::

   install-foundation-run.rst

**Figure 6.1. Installation workflow**

.. image:: figures/workflow-foundationplaybooks.png

The main Ansible foundation playbook prepares the target hosts for
infrastructure and OpenStack services and performs the following
operations:

- Perform deployment host initial setup

- Build containers on target hosts

- Restart containers on target hosts

- Install common components into containers on target hosts

--------------

.. include:: navigation.txt
27
doc/source/install-guide/install-infrastructure-run.rst
Normal file
@ -0,0 +1,27 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Running the infrastructure playbook
-----------------------------------

#. Change to the ``/opt/os-ansible-deployment/playbooks`` directory.

#. Run the infrastructure setup playbook, which runs a series of
   sub-playbooks:

   .. code-block:: bash

      $ openstack-ansible setup-infrastructure.yml

   Confirm satisfactory completion with zero items unreachable or
   failed:

   .. code-block:: bash

      PLAY RECAP ********************************************************************
      ...
      deployment_host : ok=27 changed=0 unreachable=0 failed=0

--------------

.. include:: navigation.txt
60
doc/source/install-guide/install-infrastructure-verify.rst
Normal file
@ -0,0 +1,60 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Verifying infrastructure operation
----------------------------------

Verify the database cluster and Kibana web interface operation.

**Procedure 7.1. Verifying the database cluster**

#. Determine the Galera container name:

   .. code-block:: bash

      $ lxc-ls | grep galera
      infra1_galera_container-4ed0d84a

#. Access the Galera container:

   .. code-block:: bash

      $ lxc-attach -n infra1_galera_container-4ed0d84a

#. Run the MariaDB client, show cluster status, and exit the client:

   .. code-block:: bash

      $ mysql -u root -p
      MariaDB> show status like 'wsrep_cluster%';
      +--------------------------+--------------------------------------+
      | Variable_name            | Value                                |
      +--------------------------+--------------------------------------+
      | wsrep_cluster_conf_id    | 3                                    |
      | wsrep_cluster_size       | 3                                    |
      | wsrep_cluster_state_uuid | bbe3f0f6-3a88-11e4-bd8f-f7c9e138dd07 |
      | wsrep_cluster_status     | Primary                              |
      +--------------------------+--------------------------------------+
      MariaDB> exit

   The ``wsrep_cluster_size`` field should indicate the number of nodes
   in the cluster and the ``wsrep_cluster_status`` field should indicate
   ``Primary``.
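
The two fields can also be tested without reading the listing by hand. A minimal sketch, using sample status output embedded below and an assumed three-node cluster:

```shell
# Sketch only: extract and check the two wsrep fields that matter.
# STATUS holds sample 'show status' output; a 3-node cluster is assumed.
STATUS="wsrep_cluster_size	3
wsrep_cluster_status	Primary"

SIZE=$(printf '%s\n' "$STATUS" | awk '/wsrep_cluster_size/ {print $2}')
STATE=$(printf '%s\n' "$STATUS" | awk '/wsrep_cluster_status/ {print $2}')

if [ "$SIZE" -eq 3 ] && [ "$STATE" = "Primary" ]; then
    echo "cluster healthy"   # prints this for the sample data
else
    echo "cluster degraded: size=$SIZE status=$STATE"
fi
```

In practice the status output would come from the ``mysql`` client shown in the procedure above, piped into the same ``awk`` extraction.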

**Procedure 7.2. Verifying the Kibana web interface**

#. With a web browser, access the Kibana web interface using the
   external load balancer IP address defined by the
   ``external_lb_vip_address`` option in the
   ``/etc/openstack_deploy/openstack_user_config.yml`` file. The Kibana
   web interface uses HTTPS on port 8443.

#. Authenticate using the username ``kibana`` and password defined by
   the ``kibana_password`` option in the
   ``/etc/openstack_deploy/user_variables.yml`` file.

--------------

.. include:: navigation.txt
39
doc/source/install-guide/install-infrastructure.rst
Normal file
@ -0,0 +1,39 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Chapter 7. Infrastructure playbooks
-----------------------------------

.. toctree::

   install-infrastructure-run.rst
   install-infrastructure-verify.rst

**Figure 7.1. Installation workflow**

.. image:: figures/workflow-infraplaybooks.png

The main Ansible infrastructure playbook installs infrastructure
services and performs the following operations:

- Install Memcached

- Install Galera

- Install RabbitMQ

- Install Rsyslog

- Install Elasticsearch

- Install Logstash

- Install Kibana

- Install Elasticsearch command-line utilities

- Configure Rsyslog

--------------

.. include:: navigation.txt
81
doc/source/install-guide/install-openstack-run.rst
Normal file
@ -0,0 +1,81 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Running the OpenStack playbook
------------------------------

#. Change to the ``/opt/os-ansible-deployment/playbooks`` directory.

#. Run the OpenStack setup playbook, which runs a series of
   sub-playbooks:

   .. code-block:: bash

      $ openstack-ansible setup-openstack.yml

   The ``openstack-common.yml`` sub-playbook builds all OpenStack services
   from source and takes up to 30 minutes to complete. As the playbook
   progresses, the quantity of containers in the "polling" state will
   approach zero. If any operations take longer than 30 minutes to
   complete, the playbook will terminate with an error.

   .. code-block:: bash

      changed: [target_host_glance_container-f2ebdc06]
      changed: [target_host_heat_engine_container-36022446]
      changed: [target_host_neutron_agents_container-08ec00cd]
      changed: [target_host_heat_apis_container-4e170279]
      changed: [target_host_keystone_container-c6501516]
      changed: [target_host_neutron_server_container-94d370e5]
      changed: [target_host_nova_api_metadata_container-600fe8b3]
      changed: [target_host_nova_compute_container-7af962fe]
      changed: [target_host_cinder_api_container-df5d5929]
      changed: [target_host_cinder_volumes_container-ed58e14c]
      changed: [target_host_horizon_container-e68b4f66]
      <job 802849856578.7262> finished on target_host_heat_engine_container-36022446
      <job 802849856578.7739> finished on target_host_keystone_container-c6501516
      <job 802849856578.7262> finished on target_host_heat_apis_container-4e170279
      <job 802849856578.7359> finished on target_host_cinder_api_container-df5d5929
      <job 802849856578.7386> finished on target_host_cinder_volumes_container-ed58e14c
      <job 802849856578.7886> finished on target_host_horizon_container-e68b4f66
      <job 802849856578.7582> finished on target_host_nova_compute_container-7af962fe
      <job 802849856578.7604> finished on target_host_neutron_agents_container-08ec00cd
      <job 802849856578.7459> finished on target_host_neutron_server_container-94d370e5
      <job 802849856578.7327> finished on target_host_nova_api_metadata_container-600fe8b3
      <job 802849856578.7363> finished on target_host_glance_container-f2ebdc06
      <job 802849856578.7339> polling, 1675s remaining
      <job 802849856578.7338> polling, 1675s remaining
      <job 802849856578.7322> polling, 1675s remaining
      <job 802849856578.7319> polling, 1675s remaining

   Setting up the compute hosts takes up to 30 minutes to complete,
   particularly in environments with many compute hosts. As the playbook
   progresses, the quantity of containers in the "polling" state will
   approach zero. If any operations take longer than 30 minutes to
   complete, the playbook will terminate with an error.

   .. code-block:: bash

      ok: [target_host_nova_conductor_container-2b495dc4]
      ok: [target_host_nova_api_metadata_container-600fe8b3]
      ok: [target_host_nova_api_ec2_container-6c928c30]
      ok: [target_host_nova_scheduler_container-c3febca2]
      ok: [target_host_nova_api_os_compute_container-9fa0472b]
      <job 409029926086.9909> finished on target_host_nova_api_os_compute_container-9fa0472b
      <job 409029926086.9890> finished on target_host_nova_api_ec2_container-6c928c30
      <job 409029926086.9910> finished on target_host_nova_conductor_container-2b495dc4
      <job 409029926086.9882> finished on target_host_nova_scheduler_container-c3febca2
      <job 409029926086.9898> finished on target_host_nova_api_metadata_container-600fe8b3
      <job 409029926086.8330> polling, 1775s remaining

   Confirm satisfactory completion with zero items unreachable or
   failed:

   .. code-block:: bash

      PLAY RECAP **********************************************************************
      ...
      deployment_host : ok=44 changed=11 unreachable=0 failed=0
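
The "polling" count mentioned above can be tracked from captured playbook output rather than by watching the console. A minimal sketch against sample lines (the log content here is illustrative):

```shell
# Sketch only: count containers still in the "polling" state from
# sample captured playbook output.
printf '%s\n' \
  '<job 802849856578.7339> polling, 1675s remaining' \
  '<job 802849856578.7363> finished on target_host_glance_container-f2ebdc06' \
  '<job 802849856578.7338> polling, 1675s remaining' \
  | grep -c 'polling'
# prints: 2
```

Running the same ``grep -c`` against a live log (for example via ``tail -f log | grep polling``) gives a rough progress indicator.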

--------------

.. include:: navigation.txt
@ -0,0 +1,21 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Utility container
-----------------

The utility container provides a space where miscellaneous tools and
other software can be installed. Tools and objects can be placed in a
utility container if they do not require a dedicated container or if it
is impractical to create a new container for a single tool or object.
Utility containers can also be used when tools cannot be installed
directly onto a host.

For example, the tempest playbooks are installed on the utility
container since tempest testing does not need a container of its own.
For another example of using the utility container, see `the section
called "Verifying OpenStack operation" <install-openstack-verify.html>`__.

--------------

.. include:: navigation.txt
75
doc/source/install-guide/install-openstack-verify.rst
Normal file
@ -0,0 +1,75 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Verifying OpenStack operation
-----------------------------

Verify basic operation of the OpenStack API and dashboard.

**Procedure 8.1. Verifying the API**

The utility container provides a CLI environment for additional
configuration and testing.

#. Determine the utility container name:

   .. code-block:: bash

      $ lxc-ls | grep utility
      infra1_utility_container-161a4084

#. Access the utility container:

   .. code-block:: bash

      $ lxc-attach -n infra1_utility_container-161a4084

#. Source the ``admin`` tenant credentials:

   .. code-block:: bash

      $ source openrc

#. Run an OpenStack command that uses one or more APIs. For example:

   .. code-block:: bash

      $ keystone user-list
      +----------------------------------+----------+---------+-------+
      | id                               | name     | enabled | email |
      +----------------------------------+----------+---------+-------+
      | 090c1023d0184a6e8a70e26a5722710d | admin    | True    |       |
      | 239e04cd3f7d49929c7ead506d118e40 | cinder   | True    |       |
      | e1543f70e56041679c013612bccfd4ee | cinderv2 | True    |       |
      | bdd2df09640e47888f819057c8e80f04 | demo     | True    |       |
      | 453dc7932df64cc58e36bf0ac4f64d14 | ec2      | True    |       |
      | 257da50c5cfb4b7c9ca8334bc096f344 | glance   | True    |       |
      | 6e0bc047206f4f5585f7b700a8ed6e94 | heat     | True    |       |
      | 187ee2e32eec4293a3fa243fa21f6dd9 | keystone | True    |       |
      | dddaca4b39194dc4bcefd0bae542c60a | neutron  | True    |       |
      | f1c232f9d53c4adabb54101ccefaefce | nova     | True    |       |
      | fdfbda23668c4980990708c697384050 | novav3   | True    |       |
      | 744069c771d84f1891314388c1f23686 | s3       | True    |       |
      | 4e7fdfda8d14477f902eefc8731a7fdb | swift    | True    |       |
      +----------------------------------+----------+---------+-------+

**Procedure 8.2. Verifying the dashboard**

#. With a web browser, access the dashboard using the external load
   balancer IP address defined by the ``external_lb_vip_address`` option
   in the ``/etc/openstack_deploy/openstack_user_config.yml`` file. The
   dashboard uses HTTPS on port 443.

#. Authenticate using the username ``admin`` and password defined by the
   ``keystone_auth_admin_password`` option in the
   ``/etc/openstack_deploy/user_variables.yml`` file.

Uploading public images using the dashboard or CLI can only be performed
by users with administrator privileges.

--------------

.. include:: navigation.txt
45
doc/source/install-guide/install-openstack.rst
Normal file
@ -0,0 +1,45 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Chapter 8. OpenStack playbooks
------------------------------

.. toctree::

   install-openstack-run.rst
   install-openstack-utilitycontainer.rst
   install-openstack-verify.rst

**Figure 8.1. Installation workflow**

.. image:: figures/workflow-openstackplaybooks.png

The main Ansible OpenStack playbook installs OpenStack services and
performs the following operations:

- Install common components

- Create utility container that provides utilities to interact with
  services in other containers

- Install Identity (keystone)

- Generate service IDs for all services

- Install the Image service (glance)

- Install Orchestration (heat)

- Install Compute (nova)

- Install Networking (neutron)

- Install Block Storage (cinder)

- Install Dashboard (horizon)

- Reconfigure Rsyslog

--------------

.. include:: navigation.txt
110
doc/source/install-guide/navigation.txt
Normal file
@ -0,0 +1,110 @@
`Main Index <index.html>`__

----------

- `2. Overview <overview.html>`__

  - `About OpenStack Ansible Deployment <overview-osad.html>`__
  - `Ansible <overview-ansible.html>`__
  - `Linux Containers (LXC) <overview-lxc.html>`__
  - `Host layout <overview-hostlayout.html>`__
  - `Host networking <overview-hostnetworking.html>`__
  - `OpenStack Networking <overview-neutron.html>`__
  - `Installation requirements <overview-requirements.html>`__
  - `Installation workflow <overview-workflow.html>`__

- `3. Deployment host <deploymenthost.html>`__

  - `Installing the operating system <deployment-os.html>`__
  - `Configuring the operating system <deploymenthost-add.html>`__
  - `Installing source and dependencies <deploymenthost-osad.html>`__
  - `Configuring Secure Shell (SSH) keys <deploymenthost-sshkeys.html>`__

- `4. Target hosts <targethosts.html>`__

  - `Installing the operating system <targethosts-os.html>`__
  - `Configuring Secure Shell (SSH) keys <targethosts-sshkeys.html>`__
  - `Configuring the operating system <targethosts-add.html>`__
  - `Configuring LVM <targethosts-configlvm.html>`__
  - `Configuring the network <targethosts-network.html>`__

    - `Reference architecture <targethosts-networkrefarch.html>`__
    - `Configuring the network on a target host <targethosts-networkexample.html>`__

- `5. Deployment configuration <configure.html>`__

  - `Prerequisites <configure-prereq.html>`__
  - `Configuring target host networking <configure-networking.html>`__
  - `Configuring target hosts <configure-hostlist.html>`__
  - `Configuring service credentials <configure-creds.html>`__
  - `Configuring the hypervisor (optional) <configure-hypervisor.html>`__
  - `Configuring the Image service (optional) <configure-glance.html>`__
  - `Configuring the Block Storage service (optional) <configure-cinder.html>`__

    - `NFS back-end <configure-cinder-nfs.html>`__
    - `Backup <configure-cinder-backup.html>`__
    - `Availability zones <configure-cinder-az.html>`__

  - `Configuring the Object Storage service (optional) <configure-swift.html>`__

    - `Overview <configure-swift-overview.html>`__
    - `Storage devices <configure-swift-devices.html>`__
    - `Object Storage service <configure-swift-config.html>`__
    - `Integrate with the Image Service <configure-swift-glance.html>`__
    - `Add to existing deployment <configure-swift-add.html>`__
    - `Policies <configure-swift-policies.html>`__

  - `Configuring HAProxy (optional) <configure-haproxy.html>`__

- `6. Installation <install.html>`__

  - `Foundation components <install-foundation.html>`__

    - `Running the foundation playbook <install-foundation-run.html>`__

  - `Infrastructure components <install-infrastructure.html>`__

    - `Running the infrastructure playbook <install-infrastructure-run.html>`__
    - `Verifying infrastructure operation <install-infrastructure-verify.html>`__

  - `OpenStack components <install-openstack.html>`__

    - `Running the OpenStack playbook <install-openstack-run.html>`__
    - `Utility container overview <install-openstack-utilitycontainer.html>`__
    - `Verifying OpenStack operation <install-openstack-verify.html>`__

- `7. Operations <ops.html>`__

  - `Adding a compute host <ops-addcomputehost.html>`__
  - `Galera cluster maintenance <ops-galera.html>`__

    - `Removing nodes <ops-galera-remove.html>`__
    - `Starting a cluster <ops-galera-start.html>`__
    - `Cluster recovery <ops-galera-recovery.html>`__
    - `Single-node failure <ops-galera-recoverysingle.html>`__
    - `Multi-node failure <ops-galera-recoverymulti.html>`__
    - `Complete failure <ops-galera-recoverycomplete.html>`__
    - `Rebuilding a container <ops-galera-recoverycontainer.html>`__

- `A. OSAD configuration files <app-configfiles.html>`__

- `B. Additional resources <app-resources.html>`__
29
doc/source/install-guide/ops-addcomputehost.rst
Normal file
@ -0,0 +1,29 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Adding a compute host
---------------------

Use the following procedure to add a compute host to an operational
cluster.

#. Configure the host as a target host. See `Chapter 4, *Target
   hosts* <targethosts.html>`__ for more information.

#. Edit the ``/etc/openstack_deploy/openstack_user_config.yml`` file and
   add the host to the ``compute_hosts`` stanza.

   If necessary, also modify the ``used_ips`` stanza.

#. Run the following commands to add the host. Replace
   ``NEW_HOST_NAME`` with the name of the new host.

   .. code-block:: bash

      $ cd /opt/os-ansible-deployment/playbooks
      $ openstack-ansible setup-everything.yml \
        rsyslog-config.yml --limit NEW_HOST_NAME

--------------

.. include:: navigation.txt
25
doc/source/install-guide/ops-galera-recovery.rst
Normal file
@ -0,0 +1,25 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Galera cluster recovery
-----------------------

When one or all nodes fail within a Galera cluster, you may need to
re-bootstrap the environment. To take advantage of the automation that
Ansible provides, run the ``galera-install.yml`` play with the
``galera-bootstrap`` tag to automatically recover a node or an entire
environment.

#. Run the following Ansible command to bootstrap the node or
   environment:

   .. code-block:: bash

      $ openstack-ansible galera-install.yml --tags galera-bootstrap

Upon completion of this command, the cluster should be back online and
in a functional state.

--------------

.. include:: navigation.txt
47
doc/source/install-guide/ops-galera-recoverycomplete.rst
Normal file
@ -0,0 +1,47 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Complete failure
----------------

If all of the nodes in a Galera cluster fail (do not shut down
gracefully), then the integrity of the database can no longer be
guaranteed and it should be restored from backup. Run the following
command to determine if all nodes in the cluster have failed:

.. code-block:: bash

    $ ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
    node3_galera_container-3ea2cbd3 | success | rc=0 >>
    # GALERA saved state
    version: 2.1
    uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
    seqno: -1
    cert_index:

    node2_galera_container-49a47d25 | success | rc=0 >>
    # GALERA saved state
    version: 2.1
    uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
    seqno: -1
    cert_index:

    node4_galera_container-76275635 | success | rc=0 >>
    # GALERA saved state
    version: 2.1
    uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
    seqno: -1
    cert_index:

All the nodes have failed if ``mysqld`` is not running on any of the
nodes and all of the nodes contain a ``seqno`` value of -1.

If any single node has a positive ``seqno`` value, then that node can be
used to restart the cluster. However, because there is no guarantee that
each node has an identical copy of the data, it is not recommended to
restart the cluster using the **--wsrep-new-cluster** command on one
node.
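
If recovery from a surviving node is attempted anyway, the usual candidate is the node whose ``grastate.dat`` reports the highest ``seqno`` (the most advanced transaction state). A minimal sketch with hypothetical node/seqno pairs, standing in for values gathered via the ``ansible`` command above:

```shell
# Sketch only: given node names and their grastate.dat seqno values
# (hypothetical sample data), pick the most advanced node.
printf '%s\n' 'node2 -1' 'node3 5' 'node4 2' | sort -k2 -n | tail -n1
# prints: node3 5
```

A ``seqno`` of -1 means the node crashed without recording its position, so such nodes never win this comparison.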

--------------

.. include:: navigation.txt
109
doc/source/install-guide/ops-galera-recoverycontainer.rst
Normal file
@ -0,0 +1,109 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Rebuilding a container
----------------------

Sometimes recovering from a failure requires rebuilding one or more
containers.

#. Disable the failed node on the load balancer.

   Do not rely on the load balancer health checks to disable the node.
   If the node is not disabled, the load balancer will send SQL requests
   to it before it rejoins the cluster and cause data inconsistencies.

#. Use the following commands to destroy the container and remove
   MariaDB data stored outside of the container. In this example, node 3
   failed.

   .. code-block:: bash

      $ lxc-stop -n node3_galera_container-3ea2cbd3
      $ lxc-destroy -n node3_galera_container-3ea2cbd3
      $ rm -rf /openstack/node3_galera_container-3ea2cbd3/*

#. Run the host setup playbook to rebuild the container specifically on
   node 3:

   .. code-block:: bash

      $ openstack-ansible setup-hosts.yml -l node3 \
        -l node3_galera_container-3ea2cbd3

   The playbook will also restart all other containers on the node.

#. Run the infrastructure playbook to configure the container
   specifically on node 3:

   .. code-block:: bash

      $ openstack-ansible setup-infrastructure.yml \
        -l node3_galera_container-3ea2cbd3

   The new container runs a single-node Galera cluster, which is a
   dangerous state because the environment contains more than one active
   database with potentially different data.

   .. code-block:: bash

      $ ansible galera_container -m shell -a "mysql \
        -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
      node3_galera_container-3ea2cbd3 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     1
      wsrep_cluster_size        1
      wsrep_cluster_state_uuid  da078d01-29e5-11e4-a051-03d896dbdb2d
      wsrep_cluster_status      Primary

      node2_galera_container-49a47d25 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     4
      wsrep_cluster_size        2
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

      node4_galera_container-76275635 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     4
      wsrep_cluster_size        2
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

#. Restart MariaDB in the new container and verify that it rejoins the
   cluster.

   .. code-block:: bash

      $ ansible galera_container -m shell -a "mysql \
        -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
      node2_galera_container-49a47d25 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     5
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

      node3_galera_container-3ea2cbd3 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     5
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

      node4_galera_container-76275635 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     5
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

#. Enable the failed node on the load balancer.

--------------

.. include:: navigation.txt
96
doc/source/install-guide/ops-galera-recoverymulti.rst
Normal file
@ -0,0 +1,96 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Multi-node failure
------------------

When all but one node fails, the remaining node cannot achieve quorum
and stops processing SQL requests. In this situation, failed nodes that
recover cannot join the cluster because it no longer exists.

#. Run the following Ansible command to show the failed nodes:

   .. code-block:: bash

      $ ansible galera_container -m shell -a "mysql \
        -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
      node2_galera_container-49a47d25 | FAILED | rc=1 >>
      ERROR 2002 (HY000): Can't connect to local MySQL server
      through socket '/var/run/mysqld/mysqld.sock' (111)

      node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
      ERROR 2002 (HY000): Can't connect to local MySQL server
      through socket '/var/run/mysqld/mysqld.sock' (111)

      node4_galera_container-76275635 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     18446744073709551615
      wsrep_cluster_size        1
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      non-Primary

   In this example, nodes 2 and 3 have failed. The remaining operational
   server indicates ``non-Primary`` because it cannot achieve quorum.

#. Run the following command to
   `rebootstrap <http://galeracluster.com/documentation-webpages/quorumreset.html#id1>`__
   the operational node into the cluster.

   .. code-block:: bash

      $ mysql -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=yes';"
      node4_galera_container-76275635 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     15
      wsrep_cluster_size        1
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

      node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
      ERROR 2002 (HY000): Can't connect to local MySQL server
      through socket '/var/run/mysqld/mysqld.sock' (111)

      node2_galera_container-49a47d25 | FAILED | rc=1 >>
      ERROR 2002 (HY000): Can't connect to local MySQL server
      through socket '/var/run/mysqld/mysqld.sock' (111)

   The remaining operational node becomes the primary node and begins
   processing SQL requests.

#. Restart MariaDB on the failed nodes and verify that they rejoin the
   cluster.

   .. code-block:: bash

      $ ansible galera_container -m shell -a "mysql \
        -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
      node3_galera_container-3ea2cbd3 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     17
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

      node2_galera_container-49a47d25 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     17
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

      node4_galera_container-76275635 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     17
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

#. If MariaDB fails to start on any of the failed nodes, run the
   **mysqld** command and perform further analysis on the output. As a
   last resort, rebuild the container for the node.

--------------

.. include:: navigation.txt
45
doc/source/install-guide/ops-galera-recoverysingle.rst
Normal file
@ -0,0 +1,45 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Single-node failure
-------------------

If a single node fails, the other nodes maintain quorum and continue to
process SQL requests.

#. Run the following Ansible command to determine the failed node:

   .. code-block:: bash

       $ ansible galera_container -m shell -a "mysql -h localhost \
         -e 'show status like \"%wsrep_cluster_%\";'"
       node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
       ERROR 2002 (HY000): Can't connect to local MySQL server through
       socket '/var/run/mysqld/mysqld.sock' (111)

       node2_galera_container-49a47d25 | success | rc=0 >>
       Variable_name             Value
       wsrep_cluster_conf_id     17
       wsrep_cluster_size        3
       wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
       wsrep_cluster_status      Primary

       node4_galera_container-76275635 | success | rc=0 >>
       Variable_name             Value
       wsrep_cluster_conf_id     17
       wsrep_cluster_size        3
       wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
       wsrep_cluster_status      Primary

   In this example, node 3 has failed.

#. Restart MariaDB on the failed node and verify that it rejoins the
   cluster.

#. If MariaDB fails to start, run the **mysqld** command and perform
   further analysis on the output. As a last resort, rebuild the
   container for the node.
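The failed node can also be picked out of the Ansible output mechanically. A minimal sketch, assuming the output format shown above (the embedded sample stands in for the real command output):

```shell
#!/usr/bin/env bash
# Hypothetical helper: list nodes reporting FAILED in Ansible output.
# The node names and line format are assumptions taken from the
# example output above, not a guaranteed Ansible interface.
ansible_output='node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
node2_galera_container-49a47d25 | success | rc=0 >>
node4_galera_container-76275635 | success | rc=0 >>'

# Keep only the host column of lines whose status column is FAILED.
failed_nodes=$(printf '%s\n' "$ansible_output" \
    | grep ' | FAILED | ' | cut -d' ' -f1)
echo "$failed_nodes"
```

On a real deployment, pipe the actual `ansible galera_container ...` output in place of the sample variable.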
--------------

.. include:: navigation.txt
37
doc/source/install-guide/ops-galera-remove.rst
Normal file
@ -0,0 +1,37 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Removing nodes
--------------

In the following example, all but one node was shut down gracefully:

.. code-block:: bash

    $ ansible galera_container -m shell -a "mysql -h localhost \
      -e 'show status like \"%wsrep_cluster_%\";'"
    node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
    ERROR 2002 (HY000): Can't connect to local MySQL server
    through socket '/var/run/mysqld/mysqld.sock' (2)

    node2_galera_container-49a47d25 | FAILED | rc=1 >>
    ERROR 2002 (HY000): Can't connect to local MySQL server
    through socket '/var/run/mysqld/mysqld.sock' (2)

    node4_galera_container-76275635 | success | rc=0 >>
    Variable_name             Value
    wsrep_cluster_conf_id     7
    wsrep_cluster_size        1
    wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
    wsrep_cluster_status      Primary

Compare this example output with the output from the multi-node failure
scenario, where the remaining operational node is non-primary and stops
processing SQL requests. Gracefully shutting down the MariaDB service on
all but one node allows the remaining operational node to continue
processing SQL requests. When gracefully shutting down multiple nodes,
perform the actions sequentially to retain operation.
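A sequential shutdown can be scripted as a loop over the node list. This is a dry-run sketch (it only prints the commands it would run); the per-node stop command is an assumption based on the init script used elsewhere in this guide:

```shell
#!/usr/bin/env bash
# Dry run of a sequential, one-node-at-a-time graceful shutdown.
# Replace `echo` with direct execution once the command list has
# been reviewed; verify cluster state between nodes before continuing.
nodes="node2_galera_container-49a47d25 node3_galera_container-3ea2cbd3"

for node in $nodes; do
    echo "ansible ${node} -m shell -a '/etc/init.d/mysql stop'"
done
```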
--------------

.. include:: navigation.txt
97
doc/source/install-guide/ops-galera-start.rst
Normal file
@ -0,0 +1,97 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Starting a cluster
------------------

Gracefully shutting down all nodes destroys the cluster. Starting or
restarting a cluster from zero nodes requires creating a new cluster on
one of the nodes.

#. The new cluster should be started on the most advanced node. Run the
   following command to check the ``seqno`` value in the
   ``grastate.dat`` file on all of the nodes:

   .. code-block:: bash

       $ ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
       node2_galera_container-49a47d25 | success | rc=0 >>
       # GALERA saved state
       version: 2.1
       uuid:    338b06b0-2948-11e4-9d06-bef42f6c52f1
       seqno:   31
       cert_index:

       node3_galera_container-3ea2cbd3 | success | rc=0 >>
       # GALERA saved state
       version: 2.1
       uuid:    338b06b0-2948-11e4-9d06-bef42f6c52f1
       seqno:   31
       cert_index:

       node4_galera_container-76275635 | success | rc=0 >>
       # GALERA saved state
       version: 2.1
       uuid:    338b06b0-2948-11e4-9d06-bef42f6c52f1
       seqno:   31
       cert_index:

   In this example, all nodes in the cluster contain the same positive
   ``seqno`` values because they were synchronized just prior to
   graceful shutdown. If all ``seqno`` values are equal, any node can
   start the new cluster.
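When the ``seqno`` values differ, the node with the highest value is the bootstrap candidate. A minimal sketch, assuming the per-node values have already been collected from each ``grastate.dat`` (the sample data below is illustrative):

```shell
#!/usr/bin/env bash
# Choose the most advanced node (highest seqno) as the bootstrap
# candidate. The node/seqno pairs are assumed sample input.
seqnos='node2_galera_container-49a47d25 29
node3_galera_container-3ea2cbd3 30
node4_galera_container-76275635 31'

# Sort numerically on the seqno column, descending, keep the top entry.
most_advanced=$(printf '%s\n' "$seqnos" | sort -k2 -n -r | head -n1)
echo "bootstrap candidate: ${most_advanced}"
# prints: bootstrap candidate: node4_galera_container-76275635 31
```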
   .. code-block:: bash

       $ /etc/init.d/mysql start --wsrep-new-cluster

   This command results in a cluster containing a single node. The
   ``wsrep_cluster_size`` value shows the number of nodes in the
   cluster.

   .. code-block:: bash

       node2_galera_container-49a47d25 | FAILED | rc=1 >>
       ERROR 2002 (HY000): Can't connect to local MySQL server
       through socket '/var/run/mysqld/mysqld.sock' (111)

       node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
       ERROR 2002 (HY000): Can't connect to local MySQL server
       through socket '/var/run/mysqld/mysqld.sock' (2)

       node4_galera_container-76275635 | success | rc=0 >>
       Variable_name             Value
       wsrep_cluster_conf_id     1
       wsrep_cluster_size        1
       wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
       wsrep_cluster_status      Primary

#. Restart MariaDB on the other nodes and verify that they rejoin the
   cluster.

   .. code-block:: bash

       node2_galera_container-49a47d25 | success | rc=0 >>
       Variable_name             Value
       wsrep_cluster_conf_id     3
       wsrep_cluster_size        3
       wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
       wsrep_cluster_status      Primary

       node3_galera_container-3ea2cbd3 | success | rc=0 >>
       Variable_name             Value
       wsrep_cluster_conf_id     3
       wsrep_cluster_size        3
       wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
       wsrep_cluster_status      Primary

       node4_galera_container-76275635 | success | rc=0 >>
       Variable_name             Value
       wsrep_cluster_conf_id     3
       wsrep_cluster_size        3
       wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
       wsrep_cluster_status      Primary

--------------

.. include:: navigation.txt
22
doc/source/install-guide/ops-galera.rst
Normal file
@ -0,0 +1,22 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Galera cluster maintenance
--------------------------

.. toctree::

   ops-galera-remove.rst
   ops-galera-start.rst
   ops-galera-recovery.rst
   ops-galera-recoverysingle.rst
   ops-galera-recoverymulti.rst
   ops-galera-recoverycomplete.rst
   ops-galera-recoverycontainer.rst

Routine maintenance includes gracefully adding or removing nodes from
the cluster without impacting operation and also starting a cluster
after gracefully shutting down all nodes.

--------------

.. include:: navigation.txt
17
doc/source/install-guide/ops.rst
Normal file
@ -0,0 +1,17 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Chapter 9. Operations
---------------------

.. toctree::

   ops-addcomputehost.rst
   ops-galera.rst

The following operations apply to environments after initial
installation.

--------------

.. include:: navigation.txt
32
doc/source/install-guide/overview-ansible.rst
Normal file
@ -0,0 +1,32 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Ansible
-------

OpenStack Ansible Deployment uses a combination of Ansible and
Linux Containers (LXC) to install and manage OpenStack. Ansible
provides an automation platform to simplify system and application
deployment. Ansible manages systems using Secure Shell (SSH)
instead of unique protocols that require remote daemons or agents.

Ansible uses *playbooks* written in the YAML language for orchestration.
For more information, see `Ansible - Intro to
Playbooks <http://docs.ansible.com/playbooks_intro.html>`__.

In this guide, we refer to the host running Ansible playbooks as
the *deployment host* and the hosts on which Ansible installs OSAD as the
*target hosts*.

A recommended minimal layout for deployments involves five target
hosts in total: three infrastructure hosts, one compute host, and one
logging host. All hosts require three network interfaces. More
information on setting up target hosts can be found in `the section
called "Host layout" <overview-hostlayout.html>`__.

For more information on physical, logical, and virtual network
interfaces within hosts, see `the section called "Host
networking" <overview-hostnetworking.html>`__.

--------------

.. include:: navigation.txt
82
doc/source/install-guide/overview-hostlayout.rst
Normal file
@ -0,0 +1,82 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Host layout
-----------

The recommended layout contains a minimum of five hosts (or servers):

- Three control plane infrastructure hosts

- One logging infrastructure host

- One compute host

To use the optional Block Storage (cinder) service, a sixth host is
recommended. Block Storage hosts require an LVM volume group named
*cinder-volumes*. See `the section called "Installation
requirements" <overview-requirements.html>`__ and `the section
called "Configuring LVM" <targethosts-configlvm.html>`__ for more information.

The hosts are called *target hosts* because Ansible deploys the OSAD
environment within these hosts. The OSAD environment also recommends a
*deployment host* from which Ansible orchestrates the deployment
process. One of the target hosts can function as the deployment host.

At least one hardware load balancer **must** be included to manage the
traffic among the target hosts.

Infrastructure Control Plane target hosts contain the following
services:

- Infrastructure:

  - Galera

  - RabbitMQ

  - Memcached

  - Logging

- OpenStack:

  - Identity (keystone)

  - Image service (glance)

  - Compute management (nova)

  - Networking (neutron)

  - Orchestration (heat)

  - Dashboard (horizon)

Infrastructure Logging target hosts contain the following services:

- Rsyslog

- Logstash

- Elasticsearch with Kibana

Compute target hosts contain the following services:

- Compute virtualization

- Logging

(Optional) Storage target hosts contain the following services:

- Block Storage scheduler

- Block Storage volumes

**Figure 2.1. Host Layout Overview**

.. image:: figures/environment-overview.png

--------------

.. include:: navigation.txt
121
doc/source/install-guide/overview-hostnetworking.rst
Normal file
@ -0,0 +1,121 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Host networking
---------------

The combination of containers and flexible deployment options requires
implementation of advanced Linux networking features such as bridges and
namespaces.

*Bridges* provide layer 2 connectivity (similar to switches) among
physical, logical, and virtual network interfaces within a host. After
creating a bridge, the network interfaces are virtually "plugged in" to
it.

OSAD uses bridges to connect physical and logical network interfaces
on the host to virtual network interfaces within containers.

*Namespaces* provide logically separate layer 3 environments (similar to
routers) within a host. Namespaces use virtual interfaces to connect
with other namespaces, including the host namespace. These interfaces,
often called ``veth`` pairs, are virtually "plugged in" between
namespaces similar to patch cables connecting physical devices such as
switches and routers.

Each container has a namespace that connects to the host namespace with
one or more ``veth`` pairs. Unless specified, the system generates
random names for ``veth`` pairs.
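The generated ``veth`` interfaces can be listed from ``ip -o link`` output. A sketch, with fabricated sample output standing in for the real command (interface names are illustrative):

```shell
#!/usr/bin/env bash
# List veth interfaces from `ip -o link` style output. On a real
# host, pipe `ip -o link` in directly instead of the sample text.
sample_output='1: lo: <LOOPBACK,UP> mtu 65536
5: vethP4XQLC: <BROADCAST,UP> mtu 1500
7: vethBL3TA9: <BROADCAST,UP> mtu 1500'

# Keep the interface-name column for veth devices only.
printf '%s\n' "$sample_output" | awk -F': ' '$2 ~ /^veth/ {print $2}'
```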
The relationship between physical interfaces, logical interfaces,
bridges, and virtual interfaces within containers is shown in
`Figure 2.2, "Network
components" <overview-hostnetworking.html#fig_overview_networkcomponents>`__.

**Figure 2.2. Network components**

.. image:: figures/networkcomponents.png

Target hosts can contain the following network bridges:

- LXC internal ``lxcbr0``:

  - Mandatory (automatic).

  - Provides external (typically internet) connectivity to containers.

  - Automatically created and managed by LXC. Does not directly attach
    to any physical or logical interfaces on the host because iptables
    handles connectivity. Attaches to ``eth0`` in each container.

- Container management ``br-mgmt``:

  - Mandatory.

  - Provides management of and communication among infrastructure and
    OpenStack services.

  - Manually created and attaches to a physical or logical interface,
    typically a ``bond0`` VLAN subinterface. Also attaches to ``eth1``
    in each container.

- Storage ``br-storage``:

  - Optional.

  - Provides segregated access to block storage devices between
    Compute and Block Storage hosts.

  - Manually created and attaches to a physical or logical interface,
    typically a ``bond0`` VLAN subinterface. Also attaches to ``eth2``
    in each associated container.

- OpenStack Networking tunnel/overlay ``br-vxlan``:

  - Mandatory.

  - Provides infrastructure for VXLAN tunnel/overlay networks.

  - Manually created and attaches to a physical or logical interface,
    typically a ``bond1`` VLAN subinterface. Also attaches to
    ``eth10`` in each associated container.

- OpenStack Networking provider ``br-vlan``:

  - Mandatory.

  - Provides infrastructure for VLAN and flat networks.

  - Manually created and attaches to a physical or logical interface,
    typically ``bond1``. Also attaches to ``eth11`` in each associated
    container. Does not contain an IP address because it only handles
    layer 2 connectivity.
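The manual bridge creation described above can be sketched as a dry run. The bridge/port pairings follow the reference architecture in this guide; adapt them to your environment, then replace the `echo` with direct execution (or the equivalent `/etc/network/interfaces` entries):

```shell
#!/usr/bin/env bash
# Dry run: print the brctl commands that would create each manually
# managed bridge and attach its typical port. Pairings are taken
# from the bridge descriptions above.
pairs='br-mgmt bond0.10
br-storage bond0.20
br-vxlan bond1.30
br-vlan bond1'

printf '%s\n' "$pairs" | while read -r bridge port; do
    echo "brctl addbr ${bridge}"
    echo "brctl addif ${bridge} ${port}"
done
```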
`Figure 2.3, "Container network
architecture" <overview-hostnetworking.html#fig_overview_networkarch-container>`__
provides a visual representation of network components for services in
containers.

**Figure 2.3. Container network architecture**

.. image:: figures/networkarch-container-external.png

By default, OSAD installs the Compute service in a bare metal
environment rather than within a container. `Figure 2.4, "Bare/Metal
network
architecture" <overview-hostnetworking.html#fig_overview_networkarch-bare>`__
provides a visual representation of the unique layout of network
components on a Compute host.

**Figure 2.4. Bare/Metal network architecture**

.. image:: figures/networkarch-bare-external.png

--------------

.. include:: navigation.txt
64
doc/source/install-guide/overview-lxc.rst
Normal file
@ -0,0 +1,64 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Linux Containers (LXC)
----------------------

Containers provide operating-system level virtualization by enhancing
the concept of **chroot** environments, which isolate resources and file
systems for a particular group of processes without the overhead and
complexity of virtual machines. They access the same kernel, devices,
and file systems on the underlying host and provide a thin operational
layer built around a set of rules.

The Linux Containers (LXC) project implements operating system level
virtualization on Linux using kernel namespaces and includes the
following features:

- Resource isolation including CPU, memory, block I/O, and network
  using *cgroups*.

- Selective connectivity to physical and virtual network devices on the
  underlying physical host.

- Support for a variety of backing stores including LVM.

- Built on a foundation of stable Linux technologies with an active
  development and support community.

Useful commands:

- List containers and summary information such as operational state and
  network configuration:

  .. code-block:: bash

      # lxc-ls --fancy

- Show container details including operational state, resource
  utilization, and ``veth`` pairs:

  .. code-block:: bash

      # lxc-info --name container_name

- Start a container:

  .. code-block:: bash

      # lxc-start --name container_name

- Attach to a container:

  .. code-block:: bash

      # lxc-attach --name container_name

- Stop a container:

  .. code-block:: bash

      # lxc-stop --name container_name

--------------

.. include:: navigation.txt
32
doc/source/install-guide/overview-neutron.rst
Normal file
@ -0,0 +1,32 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

OpenStack Networking
--------------------

OpenStack Networking (neutron) is configured to use a DHCP agent, L3
agent, and Linux Bridge agent within a networking agents container.
`Figure 2.5, "Networking agents
containers" <overview-neutron.html#fig_overview_neutron-agents>`__
shows the interaction of these agents, network components, and
connection to a physical network.

**Figure 2.5. Networking agents containers**

.. image:: figures/networking-neutronagents.png

The Compute service uses the KVM hypervisor. `Figure 2.6, "Compute
hosts" <overview-neutron.html#fig_overview_neutron-compute>`__ shows
the interaction of instances, Linux Bridge agent, network components,
and connection to a physical network.

**Figure 2.6. Compute hosts**

.. image:: figures/networking-compute.png

--------------

.. include:: navigation.txt
27
doc/source/install-guide/overview-osad.rst
Normal file
@ -0,0 +1,27 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

About OpenStack Ansible Deployment
----------------------------------

OS-Ansible-Deployment uses the Ansible IT automation framework to
create an OpenStack cluster on Ubuntu Linux. OpenStack components are
installed into Linux Containers (LXC) for isolation and ease of
maintenance.

| OpenStack Ansible Deployment

This documentation is intended for deployers of the OpenStack Ansible
deployment system who are interested in installing an OpenStack cloud.
The document is for informational purposes only and is provided "AS IS."

Third-party trademarks and tradenames appearing in this document are the
property of their respective owners. Such third-party trademarks have
been printed in caps or initial caps and are used for referential
purposes only. We do not intend our use or display of other companies'
tradenames, trademarks, or service marks to imply a relationship with,
or endorsement or sponsorship of us by, these other companies.

`OpenStack.org <http://www.openstack.org>`__

.. include:: navigation.txt
45
doc/source/install-guide/overview-requirements.rst
Normal file
@ -0,0 +1,45 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Installation requirements
-------------------------

Deployment host:

- Required items:

  - Ubuntu 14.04 LTS (Trusty Tahr) or compatible operating system that
    meets all other requirements.

  - Secure Shell (SSH) client supporting public key authentication.

  - Synchronized network time (NTP) client.

  - Python 2.7 or later.

Target hosts:

- Required items:

  - Ubuntu Server 14.04 LTS (Trusty Tahr) 64-bit operating system,
    with Linux kernel version ``3.13.0-34-generic`` or later.

  - SSH server supporting public key authentication.

  - Synchronized NTP client.

- Optional items:

  - For hosts providing Block Storage (cinder) service volumes, a
    Logical Volume Manager (LVM) volume group named *cinder-volumes*.

  - LVM volume group named *lxc* to store container file systems. If
    the lxc volume group does not exist, containers will be
    automatically installed in the root file system of the host.

    By default, Ansible creates a 5 GB logical volume. Plan storage
    accordingly to support the quantity of containers on each target
    host.

--------------

.. include:: navigation.txt
15
doc/source/install-guide/overview-workflow.rst
Normal file
@ -0,0 +1,15 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Installation workflow
---------------------

This diagram shows the general workflow associated with OSAD
installation.

**Figure 2.7. Installation workflow**

.. image:: figures/workflow-overview.png

--------------

.. include:: navigation.txt
19
doc/source/install-guide/overview.rst
Normal file
@ -0,0 +1,19 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Chapter 2. Overview
-------------------

.. toctree::

   overview-osad.rst
   overview-ansible.rst
   overview-lxc.rst
   overview-hostlayout.rst
   overview-hostnetworking.rst
   overview-neutron.rst
   overview-requirements.rst
   overview-workflow.rst

--------------

.. include:: navigation.txt
35
doc/source/install-guide/targethosts-add.rst
Normal file
@ -0,0 +1,35 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring the operating system
--------------------------------

Check the kernel version, install additional software packages, and
configure NTP.

#. Check the kernel version. It should be ``3.13.0-34-generic`` or
   later.
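The check can be automated with a version-aware comparison. A sketch using `uname -r` and `sort -V` (the minimum value comes from the requirement above):

```shell
#!/usr/bin/env bash
# Compare the running kernel against the minimum required version.
# The older of the two versions sorts first with `sort -V`.
minimum="3.13.0-34-generic"
current="$(uname -r)"

oldest="$(printf '%s\n%s\n' "$minimum" "$current" | sort -V | head -n1)"
if [ "$oldest" = "$minimum" ]; then
    echo "kernel ${current} meets the minimum (${minimum})"
else
    echo "kernel ${current} is older than ${minimum}; upgrade first"
fi
```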
#. Install additional software packages if not already installed during
   operating system installation:

   .. code-block:: bash

       # apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 \
         lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan

#. Add the appropriate kernel modules to the ``/etc/modules`` file to
   enable VLAN and bond interfaces:

   .. code-block:: bash

       # echo 'bonding' >> /etc/modules
       # echo '8021q' >> /etc/modules

#. Configure NTP to synchronize with a suitable time source.
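One way to do this is to point ``ntpd`` at a server pool. A dry-run sketch that prints the configuration lines instead of writing them (the pool hostnames are illustrative; substitute your own time source):

```shell
#!/usr/bin/env bash
# Print candidate `server` lines for /etc/ntp.conf so the change can
# be reviewed before applying. Pool hostnames are assumptions.
for i in 0 1 2 3; do
    echo "server ${i}.ubuntu.pool.ntp.org iburst"
done
# Apply by appending the lines above to /etc/ntp.conf and then
# restarting the ntp service.
```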
#. Reboot the host to activate the changes.

--------------

.. include:: navigation.txt
24
doc/source/install-guide/targethosts-configlvm.rst
Normal file
@ -0,0 +1,24 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring LVM
---------------

#. To use the optional Block Storage (cinder) service, create an LVM
   volume group named *cinder-volumes* on the Block Storage host. A
   metadata size of 2048 must be specified during physical volume
   creation. For example:

   .. code-block:: bash

       # pvcreate --metadatasize 2048 physical_volume_device_path
       # vgcreate cinder-volumes physical_volume_device_path

#. Optionally, create an LVM volume group named *lxc* for container file
   systems. If the lxc volume group does not exist, containers will be
   automatically installed into the file system under */var/lib/lxc* by
   default.
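After creating the volume groups, their presence can be confirmed. A sketch in which sample text stands in for the output of `vgs --noheadings -o vg_name` on a real host:

```shell
#!/usr/bin/env bash
# Confirm the expected volume groups exist. Replace the sample text
# with real `vgs --noheadings -o vg_name` output.
sample_vgs='  cinder-volumes
  lxc'

for vg in cinder-volumes lxc; do
    if printf '%s\n' "$sample_vgs" | grep -qw "$vg"; then
        echo "volume group ${vg}: present"
    else
        echo "volume group ${vg}: missing"
    fi
done
```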
--------------

.. include:: navigation.txt
49
doc/source/install-guide/targethosts-network.rst
Normal file
@ -0,0 +1,49 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring the network
-----------------------

Although Ansible automates most deployment operations, networking on
target hosts requires manual configuration because it can vary
dramatically per environment. For demonstration purposes, these
instructions use a reference architecture with example network interface
names, networks, and IP addresses. Modify these values as needed for the
particular environment.

The reference architecture for target hosts contains the following
mandatory components:

- A ``bond0`` interface using two physical interfaces. For redundancy
  purposes, avoid using more than one port on network interface cards
  containing multiple ports. The example configuration uses ``eth0``
  and ``eth2``. Actual interface names can vary depending on hardware
  and drivers. Configure the ``bond0`` interface with a static IP
  address on the host management network.

- A ``bond1`` interface using two physical interfaces. For redundancy
  purposes, avoid using more than one port on network interface cards
  containing multiple ports. The example configuration uses ``eth1``
  and ``eth3``. Actual interface names can vary depending on hardware
  and drivers. Configure the ``bond1`` interface without an IP address.

- Container management network subinterface on the ``bond0`` interface
  and ``br-mgmt`` bridge with a static IP address.

- The OpenStack Networking VXLAN subinterface on the ``bond1``
  interface and ``br-vxlan`` bridge with a static IP address.

- The OpenStack Networking VLAN ``br-vlan`` bridge on the ``bond1``
  interface without an IP address.

The reference architecture for target hosts can also contain the
following optional components:

- Storage network subinterface on the ``bond0`` interface and
  ``br-storage`` bridge with a static IP address.

For more information, see `OpenStack Ansible
Networking <https://github.com/stackforge/os-ansible-deployment/blob/10.1.0/etc/network/README.html>`__.

--------------

.. include:: navigation.txt
166
doc/source/install-guide/targethosts-networkexample.rst
Normal file
@ -0,0 +1,166 @@
|
||||
`Home <index.html>`__ OpenStack Ansible Installation Guide
|
||||
|
||||
Configuring the network on a target host
|
||||
----------------------------------------
|
||||
|
||||
This example uses the following parameters to configure networking on a
|
||||
single target host. See `Figure 4.2, "Target hosts for infrastructure,
|
||||
networking, and storage
|
||||
services" <targethosts-networkexample.html#fig_hosts-target-network-containerexample>`__
|
||||
and `Figure 4.3, "Target hosts for Compute
|
||||
service" <targethosts-networkexample.html#fig_hosts-target-network-bareexample>`__
|
||||
for a visual representation of these parameters in the architecture.
|
||||
|
||||
- VLANs:
|
||||
|
||||
- Host management: Untagged/Native
|
||||
|
||||
- Container management: 10
|
||||
|
||||
- Tunnels: 30
|
||||
|
||||
- Storage: 20
|
||||
|
||||
Networks:
|
||||
|
||||
- Host management: 10.240.0.0/22
|
||||
|
||||
- Container management: 172.29.236.0/22
|
||||
|
||||
- Tunnel: 172.29.240.0/22
|
||||
|
||||
- Storage: 172.29.244.0/22
|
||||
|
||||
Addresses:
|
||||
|
||||
- Host management: 10.240.0.11
|
||||
|
||||
- Host management gateway: 10.240.0.1
|
||||
|
||||
- DNS servers: 69.20.0.164 69.20.0.196
|
||||
|
||||
- Container management: 172.29.236.11
|
||||
|
||||
- Tunnel: 172.29.240.11
|
||||
|
||||
- Storage: 172.29.244.11
|
||||
|
||||
|
||||
|
||||
**Figure 4.2. Target hosts for infrastructure, networking, and storage
|
||||
services**
|
||||
|
||||
.. image:: figures/networkarch-container-external-example.png
|
||||
|
||||
**Figure 4.3. Target hosts for Compute service**
|
||||
|
||||
.. image:: figures/networkarch-bare-external-example.png
|
||||
|
||||
Contents of the ``/etc/network/interfaces`` file:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
# Physical interface 1
|
||||
auto eth0
|
||||
iface eth0 inet manual
|
||||
bond-master bond0
|
||||
bond-primary eth0
|
||||
|
||||
# Physical interface 2
|
||||
auto eth1
|
||||
iface eth1 inet manual
|
||||
bond-master bond1
|
||||
bond-primary eth1
|
||||
|
||||
# Physical interface 3
|
||||
auto eth2
|
||||
iface eth2 inet manual
|
||||
bond-master bond0
|
||||
|
||||
# Physical interface 4
|
||||
auto eth3
|
||||
iface eth3 inet manual
|
||||
bond-master bond1
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
      # Bond interface 0 (physical interfaces 1 and 3)
      auto bond0
      iface bond0 inet static
          bond-slaves eth0 eth2
          bond-mode active-backup
          bond-miimon 100
          bond-downdelay 200
          bond-updelay 200
          address 10.240.0.11
          netmask 255.255.252.0
          gateway 10.240.0.1
          dns-nameservers 69.20.0.164 69.20.0.196

      # Bond interface 1 (physical interfaces 2 and 4)
      auto bond1
      iface bond1 inet manual
          bond-slaves eth1 eth3
          bond-mode active-backup
          bond-miimon 100
          bond-downdelay 250
          bond-updelay 250

      # Container management VLAN interface
      iface bond0.10 inet manual
          vlan-raw-device bond0

      # OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
      iface bond1.30 inet manual
          vlan-raw-device bond1

      # Storage network VLAN interface (optional)
      iface bond0.20 inet manual
          vlan-raw-device bond0

      # Container management bridge
      auto br-mgmt
      iface br-mgmt inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond0.10
          address 172.29.236.11
          netmask 255.255.252.0
          dns-nameservers 69.20.0.164 69.20.0.196

      # OpenStack Networking VXLAN (tunnel/overlay) bridge
      auto br-vxlan
      iface br-vxlan inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond1.30
          address 172.29.240.11
          netmask 255.255.252.0

      # OpenStack Networking VLAN bridge
      auto br-vlan
      iface br-vlan inet manual
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references untagged interface
          bridge_ports bond1

      # Storage bridge (optional)
      auto br-storage
      iface br-storage inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond0.20
          address 172.29.244.11
          netmask 255.255.252.0

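All of the example networks above use a 255.255.252.0 netmask. As a
quick sanity check (a pure POSIX-shell sketch, not part of the guide),
the following converts that netmask to its CIDR prefix length:

.. code-block:: bash

   # Count the set bits in each octet of the netmask to get the prefix.
   mask=255.255.252.0
   prefix=0
   IFS=.
   for octet in $mask; do
       while [ "$octet" -gt 0 ]; do
           prefix=$((prefix + (octet & 1)))
           octet=$((octet >> 1))
       done
   done
   unset IFS
   echo "/$prefix"

This prints ``/22``, confirming that each of the example networks is a
/22 with room for the container addresses the deployment allocates.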
--------------

.. include:: navigation.txt
140
doc/source/install-guide/targethosts-networkrefarch.rst
Normal file
@ -0,0 +1,140 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Reference architecture
----------------------

After establishing initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file as
described in the following procedure.

**Procedure 4.1. Modifying the network interfaces file**

#. Physical interfaces:

   .. code-block:: yaml

      # Physical interface 1
      auto eth0
      iface eth0 inet manual
          bond-master bond0
          bond-primary eth0

      # Physical interface 2
      auto eth1
      iface eth1 inet manual
          bond-master bond1
          bond-primary eth1

      # Physical interface 3
      auto eth2
      iface eth2 inet manual
          bond-master bond0

      # Physical interface 4
      auto eth3
      iface eth3 inet manual
          bond-master bond1

#. Bonding interfaces:

   .. code-block:: yaml

      # Bond interface 0 (physical interfaces 1 and 3)
      auto bond0
      iface bond0 inet static
          bond-slaves eth0 eth2
          bond-mode active-backup
          bond-miimon 100
          bond-downdelay 200
          bond-updelay 200
          address HOST_IP_ADDRESS
          netmask HOST_NETMASK
          gateway HOST_GATEWAY
          dns-nameservers HOST_DNS_SERVERS

      # Bond interface 1 (physical interfaces 2 and 4)
      auto bond1
      iface bond1 inet manual
          bond-slaves eth1 eth3
          bond-mode active-backup
          bond-miimon 100
          bond-downdelay 250
          bond-updelay 250

   If not already complete, replace ``HOST_IP_ADDRESS``,
   ``HOST_NETMASK``, ``HOST_GATEWAY``, and ``HOST_DNS_SERVERS``
   with the appropriate configuration for the host management network.

#. Logical (VLAN) interfaces:

   .. code-block:: yaml

      # Container management VLAN interface
      iface bond0.CONTAINER_MGMT_VLAN_ID inet manual
          vlan-raw-device bond0

      # OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
      iface bond1.TUNNEL_VLAN_ID inet manual
          vlan-raw-device bond1

      # Storage network VLAN interface (optional)
      iface bond0.STORAGE_VLAN_ID inet manual
          vlan-raw-device bond0

   Replace ``*_VLAN_ID`` with the appropriate configuration for the
   environment.

#. Bridge devices:

   .. code-block:: yaml

      # Container management bridge
      auto br-mgmt
      iface br-mgmt inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond0.CONTAINER_MGMT_VLAN_ID
          address CONTAINER_MGMT_BRIDGE_IP_ADDRESS
          netmask CONTAINER_MGMT_BRIDGE_NETMASK
          dns-nameservers CONTAINER_MGMT_BRIDGE_DNS_SERVERS

      # OpenStack Networking VXLAN (tunnel/overlay) bridge
      auto br-vxlan
      iface br-vxlan inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond1.TUNNEL_VLAN_ID
          address TUNNEL_BRIDGE_IP_ADDRESS
          netmask TUNNEL_BRIDGE_NETMASK

      # OpenStack Networking VLAN bridge
      auto br-vlan
      iface br-vlan inet manual
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references untagged interface
          bridge_ports bond1

      # Storage bridge (optional)
      auto br-storage
      iface br-storage inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond0.STORAGE_VLAN_ID
          address STORAGE_BRIDGE_IP_ADDRESS
          netmask STORAGE_BRIDGE_NETMASK

   Replace ``*_VLAN_ID``, ``*_BRIDGE_IP_ADDRESS``, ``*_BRIDGE_NETMASK``,
   and ``*_BRIDGE_DNS_SERVERS`` with the appropriate configuration for
   the environment.

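The placeholder substitution described in the steps above can also be
scripted. A minimal ``sed`` sketch (the file paths and sample values
here are illustrative assumptions, not part of the reference
architecture):

.. code-block:: bash

   # Build an abbreviated template containing the placeholders used above.
   # Paths and values are examples only; adapt them to your environment.
   printf '%s\n' \
       'address HOST_IP_ADDRESS' \
       'netmask HOST_NETMASK' \
       'gateway HOST_GATEWAY' \
       'dns-nameservers HOST_DNS_SERVERS' > /tmp/interfaces.template

   # Substitute host-specific values into a working copy.
   sed -e 's/HOST_IP_ADDRESS/10.240.0.11/' \
       -e 's/HOST_NETMASK/255.255.252.0/' \
       -e 's/HOST_GATEWAY/10.240.0.1/' \
       -e 's/HOST_DNS_SERVERS/69.20.0.164 69.20.0.196/' \
       /tmp/interfaces.template > /tmp/interfaces

   # Confirm no placeholders remain before merging the result into
   # /etc/network/interfaces on the target host.
   ! grep -q 'HOST_' /tmp/interfaces

Leaving a placeholder unreplaced produces an invalid interfaces file,
so the final ``grep`` check is worth keeping in any such script.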
--------------

.. include:: navigation.txt
15
doc/source/install-guide/targethosts-os.rst
Normal file
@ -0,0 +1,15 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Installing the operating system
-------------------------------

Install the Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit operating
system on the target host with at least one network interface configured
to access the Internet or suitable local repositories.

On target hosts without local (console) access, we recommend
adding the Secure Shell (SSH) server packages to the installation.

--------------

.. include:: navigation.txt
18
doc/source/install-guide/targethosts-sshkeys.rst
Normal file
@ -0,0 +1,18 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Configuring Secure Shell (SSH) keys
-----------------------------------

Ansible uses Secure Shell (SSH) for connectivity between the deployment
and target hosts.

#. Copy the contents of the public key file on the deployment host to
   the ``/root/.ssh/authorized_keys`` file on each target host.

#. Test public key authentication from the deployment host to each
   target host. SSH should provide a shell without asking for a
   password.

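For deployments without an existing key pair, these steps are commonly
performed with ``ssh-keygen`` and ``ssh-copy-id``. A minimal sketch —
the key path and ``TARGET_HOST`` placeholder are illustrative, and a
passphrase-protected key with ``ssh-agent`` is preferable in production:

.. code-block:: bash

   # Generate a key pair on the deployment host (empty passphrase here
   # for illustration only).
   rm -f /tmp/deploy_key /tmp/deploy_key.pub
   ssh-keygen -t rsa -b 2048 -N '' -f /tmp/deploy_key -q

   # Copy the public key to each target host (placeholder hostname):
   #   ssh-copy-id -i /tmp/deploy_key.pub root@TARGET_HOST

   # Verify: this should open a shell without prompting for a password.
   #   ssh -i /tmp/deploy_key root@TARGET_HOST

``ssh-copy-id`` appends the public key to
``/root/.ssh/authorized_keys`` on the target host, which accomplishes
the first step above.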
--------------

.. include:: navigation.txt
39
doc/source/install-guide/targethosts.rst
Normal file
@ -0,0 +1,39 @@
`Home <index.html>`__ OpenStack Ansible Installation Guide

Chapter 4. Target hosts
-----------------------

.. toctree::

   targethosts-os.rst
   targethosts-sshkeys.rst
   targethosts-add.rst
   targethosts-configlvm.rst
   targethosts-network.rst
   targethosts-networkrefarch.rst
   targethosts-networkexample.rst


**Figure 4.1. Installation workflow**

.. image:: figures/workflow-targethosts.png

The OSAD installation process recommends at least five target hosts
to contain the OpenStack environment and supporting infrastructure.
On each target host, perform the following tasks:

- Name the target host.

- Install the operating system.

- Generate and set up security measures.

- Update the operating system and install additional software packages.

- Create LVM volume groups.

- Configure networking devices.

--------------

.. include:: navigation.txt