Begin refactor of CDG

Enable htaccess file to allow for HTTP redirections.
The guide's reorg (topic 'cdg-reorg') will require
a number of them. Here, the first one is being
implemented and has been tested locally:

config-openstack.rst --> configure-openstack.rst

Use content from the 'index.rst' file to create
'install-overview.rst' for the new Installation
section.

Minor rewording of the beginnings of a few files.
Remove their Overview sections.

Remove index file 'app.rst' used for appendices.
Display appendices directly in the menu and give
them better visible menu names. Many of these will
eventually become parts of new sections in
subsequent PRs.

Remove rogue file '1' that slipped in during Ceph
erasure coding work.

Add '.html' extension to a few links.

Change-Id: Ib6cbca00f84bb79f9f56c5166e5811fab78ee16c
Peter Matulis 2020-11-12 22:40:27 -05:00
parent 800fcab110
commit 6e8487816b
12 changed files with 85 additions and 256 deletions


@ -1,152 +0,0 @@
Appendix M: Ceph Erasure Coding and Device Classing
===================================================
Overview
++++++++
Ceph pools supporting applications within an OpenStack deployment are
configured as replicated pools by default, which means that every object
stored is copied to multiple hosts or zones so that the pool can survive
the loss of an OSD.
Ceph also supports Erasure Coded pools, which can be used instead to save
raw space within the storage cluster. The following charms can be
configured to use Erasure Coded pools:
* glance
* cinder-ceph
* nova-compute
* ceph-fs
* ceph-radosgw
Configuring charms for Erasure Coding
+++++++++++++++++++++++++++++++++++++
All charms that support Erasure Coded pools expose a consistent set of
configuration options for enabling and tuning the Erasure Coding profile
used to configure the Erasure Coded pool.
Erasure Coding is enabled by setting the 'pool-type' option to 'erasure-coded'.
By default the JErasure plugin is used with K=1 and M=2. This does not
actually save any raw storage compared to a replicated pool with 3 replicas
(it is designed to allow use on a three-node Ceph cluster), so most
deployments using Erasure Coded pools will need to tune the K and M values
based on either the number of hosts deployed or, if the
'customize-failure-domain' option is enabled on the ceph-osd and ceph-mon
charms, the number of zones in the deployment.
The K value defines the number of data chunks that will be used for each
object and the M value defines the number of coding chunks generated for each
object. The M value also defines the number of OSDs that may be lost before
the pool goes into a degraded state.
K + M must always be less than or equal to the number of hosts or zones in the
deployment (depending on the configuration of 'customize-failure-domain').
In the example below, the Erasure Coded pool used by the glance application
will sustain the loss of two hosts or zones while only consuming 2TB instead
of 3TB of storage to store 1TB of data: with K=2 and M=2, raw usage is
(K+M)/K = 2 times the data stored, versus 3 times for a replicated pool with
3 replicas.
.. code-block:: yaml

    glance:
      options:
        pool-type: erasure-coded
        ec-profile-k: 2
        ec-profile-m: 2
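As a further illustration only (the application and cluster size below are
hypothetical, not taken from the guide), a deployment with six hosts or zones
could raise K to 4 while keeping M at 2, storing 1TB of data in 1.5TB of raw
space and still tolerating the loss of two hosts:

.. code-block:: yaml

    # Hypothetical overlay: assumes six hosts (or zones), since K + M must
    # not exceed the number of hosts or zones in the deployment.
    cinder-ceph:
      options:
        pool-type: erasure-coded
        ec-profile-k: 4
        ec-profile-m: 2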
The full list of Erasure Coding configuration options is detailed below.
.. list-table:: Erasure Coding charm options
:widths: 25 5 15 55
:header-rows: 1
* - Option
- Type
- Default Value
- Description
* - pool-type
- string
- replicated
- Ceph pool type to use for storage - valid values are 'replicated' and 'erasure-coded'.
* - ec-profile-name
- string
-
- Name for the EC profile to be created for the EC pools. If not defined a profile name will be generated based on the name of the pool used by the application.
* - ec-rbd-metadata-pool
- string
-
- Name of the metadata pool to be created (for RBD use-cases). If not defined a metadata pool name will be generated based on the name of the data pool used by the application. The metadata pool is always replicated (not erasure coded).
* - ec-profile-k
- int
- 1
- Number of data chunks that will be used for EC data pool. K+M factors should never be greater than the number of available AZs for balancing.
* - ec-profile-m
- int
- 2
- Number of coding chunks that will be used for EC data pool. K+M factors should never be greater than number of available AZs for balancing.
* - ec-profile-locality
- int
-
- (lrc plugin - l) Group the coding and data chunks into sets of size l. For instance, for k=4 and m=2, when l=3 two groups of three are created. Each set can be recovered without reading chunks from another set. Note that using the lrc plugin does incur more raw storage usage than isa or jerasure in order to reduce the cost of recovery operations.
* - ec-profile-crush-locality
- string
-
- (lrc plugin) The type of the crush bucket in which each set of chunks defined by l will be stored. For instance, if it is set to rack, each group of l chunks will be placed in a different rack. It is used to create a CRUSH rule step such as 'step choose rack'. If it is not set, no such grouping is done.
* - ec-profile-durability-estimator
- int
-
- (shec plugin - c) The number of parity chunks each of which includes each data chunk in its calculation range. The number is used as a durability estimator. For instance, if c=2, 2 OSDs can be down without losing data.
* - ec-profile-helper-chunks
- int
-
- (clay plugin - d) Number of OSDs requested to send data during recovery of a single chunk. d needs to be chosen such that k+1 <= d <= k+m-1. The larger the d, the better the savings.
* - ec-profile-scalar-mds
- string
-
- (clay plugin) Specifies the plugin that is used as a building block in the layered construction. It can be one of: jerasure, isa or shec.
* - ec-profile-plugin
- string
- jerasure
- EC plugin to use for this application's pool. These plugins are available: jerasure, lrc, isa, shec, clay.
* - ec-profile-technique
- string
- reed_sol_van
- EC profile technique used for this application's pool - will be validated based on the plugin configured via ec-profile-plugin. Supported techniques are 'reed_sol_van', 'reed_sol_r6_op', 'cauchy_orig', 'cauchy_good', 'liber8tion' for jerasure, 'reed_sol_van', 'cauchy' for isa and 'single', 'multiple' for shec.
* - ec-profile-device-class
- string
-
- Device class from CRUSH map to use for placement groups for erasure profile - valid values: ssd, hdd or nvme (or leave unset to not use a device class).
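Several of these options apply only to a specific plugin. As a sketch only,
reusing the k=4, m=2, l=3 values from the 'ec-profile-locality' description
above (the glance application is used purely as an example), an lrc-based
profile could be requested with:

.. code-block:: yaml

    # Hypothetical overlay: lrc groups the 4+2 chunks into two sets of three,
    # trading some extra raw storage for cheaper recovery within a set.
    glance:
      options:
        pool-type: erasure-coded
        ec-profile-plugin: lrc
        ec-profile-k: 4
        ec-profile-m: 2
        ec-profile-locality: 3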
Ceph automatic device classing
++++++++++++++++++++++++++++++
Newer versions of Ceph perform automatic classing of OSD devices. Each OSD
will be placed into the nvme, ssd, or hdd device class. These classes can
be used when creating erasure profiles or new CRUSH rules (see the following
sections).
The classes can be inspected using:
.. code::

    sudo ceph osd crush tree
    ID CLASS WEIGHT  TYPE NAME
    -1       8.18729 root default
    -5       2.72910     host node-laveran
     2  nvme 0.90970         osd.2
     5   ssd 0.90970         osd.5
     7   ssd 0.90970         osd.7
    -7       2.72910     host node-mees
     1  nvme 0.90970         osd.1
     6   ssd 0.90970         osd.6
     8   ssd 0.90970         osd.8
    -3       2.72910     host node-pytheas
     0  nvme 0.90970         osd.0
     3   ssd 0.90970         osd.3
     4   ssd 0.90970         osd.4
The device class for an Erasure Coded pool can be configured in the
consuming charm using the 'ec-profile-device-class' configuration option.
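For example, a sketch that restricts a hypothetical glance pool to the ssd
device class shown in the tree above:

.. code-block:: yaml

    # Hypothetical overlay: place the pool's placement groups only on OSDs
    # in the 'ssd' device class.
    glance:
      options:
        pool-type: erasure-coded
        ec-profile-device-class: ssd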


@ -0,0 +1,3 @@
RewriteEngine on
Redirect 301 /config-openstack.html /configure-openstack.html


@ -502,4 +502,4 @@ Masakari.
.. _Masakari charm: http://jaas.ai/masakari
.. _openstack-base: https://jaas.ai/openstack-base
.. _OpenStack high availability: app-ha.html#ha-applications
.. _Configure OpenStack: config-openstack.html
.. _Configure OpenStack: configure-openstack.html


@ -1,31 +0,0 @@
==========
Appendices
==========
.. toctree::
:maxdepth: 2
app-ceph-migration.rst
app-upgrade-openstack.rst
app-vault.rst
app-encryption-at-rest.rst
app-certificate-management.rst
app-series-upgrade.rst
app-series-upgrade-openstack.rst
app-nova-cells.rst
app-octavia.rst
app-pci-passthrough-gpu.rst
app-rgw-multisite.rst
app-ceph-rbd-mirror.rst
app-masakari.rst
app-erasure-coding.rst
app-policy-overrides.rst
app-ovn.rst
app-managing-power-events.rst
app-manila-ganesha.rst
app-swift.rst
app-hardware-offload.rst
app-ha.rst
app-trilio-vault.rst
app-bridge-interface-configuration.rst
app-ceph-iscsi.rst


@ -145,6 +145,7 @@ html_theme = 'openstackdocs'
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
html_extra_path = ['_extra']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.


@ -2,9 +2,6 @@
Configure OpenStack
===================
Overview
--------
In the :doc:`previous section <install-openstack>`, we installed OpenStack. We
are now going to configure OpenStack with the intent of making it consumable by
regular users. Configuration will be performed by both the admin user and the


@ -1,74 +1,64 @@
.. OpenStack documentation master file, created by
sphinx-quickstart on Fri Jun 30 11:14:11 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
=================================
OpenStack Charms Deployment Guide
=================================
Overview
--------
The OpenStack Charms Deployment Guide is the main source of information for
OpenStack Charms usage. A search facility is available as a separate
:ref:`search`.
The main purpose of the OpenStack Charms Deployment Guide is to demonstrate how
to build a multi-node OpenStack cloud with `MAAS`_, `Juju`_, and `OpenStack
Charms`_. For easy adoption the cloud will be minimal. Nevertheless, it will be
capable of both performing some real work and scaling to fit more ambitious
projects. High availability will not be implemented beyond natively HA
applications (Ceph, MySQL8, OVN, Swift, and RabbitMQ).
.. note::
For OpenStack Charms project information, development guidelines, release
notes, and release schedules, please refer to the `OpenStack Charm Guide`_.
Requirements
------------
The software versions used in this guide are as follows:
* **Ubuntu 20.04 LTS (Focal)** for the MAAS server, Juju client, Juju
controller, and all cloud nodes (including containers)
* **MAAS 2.8.2**
* **Juju 2.8.1**
* **OpenStack Ussuri**
Hardware requirements are listed on the :doc:`Install MAAS <install-maas>`
page.
Appendices
----------
The guide also includes a wealth of information in the form of appendices.
These cover a wide variety of subjects, such as an elaboration of a specific
charm feature, how to upgrade an OpenStack cloud, or how to manage power events
in a cloud.
Help improve this guide
-----------------------
Also included is a wealth of information in the form of appendices. These
cover a wide variety of subjects, such as an elaboration of a specific charm
feature or instructions for upgrading an OpenStack cloud.
To help improve this guide you may `file an issue`_ or `submit a
contribution`_.
Table of contents
-----------------
.. note::
For project information, development guidelines, release notes, and release
schedules, please refer to the `OpenStack Charm Guide`_.
.. toctree::
:maxdepth: 2
:caption: Installation
:maxdepth: 1
install-maas.rst
install-juju.rst
install-openstack.rst
install-openstack-bundle.rst
config-openstack.rst
app.rst
install-overview
install-maas
install-juju
install-openstack
install-openstack-bundle
configure-openstack
* :ref:`search`
.. toctree::
:caption: Appendices
:maxdepth: 1
OpenStack upgrades <app-upgrade-openstack>
Series upgrade <app-series-upgrade>
Series upgrade OpenStack <app-series-upgrade-openstack>
Vault <app-vault>
Certificate lifecycle management <app-certificate-management>
Encryption at Rest <app-encryption-at-rest>
Additional Nova cells <app-nova-cells>
Octavia LBaaS <app-octavia>
PCI passthrough <app-pci-passthrough-gpu>
Ceph erasure coding <app-erasure-coding>
Ceph RADOS Gateway multisite replication <app-rgw-multisite>
Ceph RBD mirroring <app-ceph-rbd-mirror>
Ceph iSCSI <app-ceph-iscsi>
Ceph charm deprecation <app-ceph-migration>
Masakari <app-masakari>
Policy overrides <app-policy-overrides>
OVN <app-ovn>
Managing power events <app-managing-power-events>
Manila Ganesha <app-manila-ganesha>
Swift usage <app-swift>
NIC hardware offload <app-hardware-offload>
High availability <app-ha>
TrilioVault Data Protection <app-trilio-vault>
Bridge interface configuration <app-bridge-interface-configuration>
.. LINKS
.. _MAAS: https://maas.io
.. _Juju: https://jaas.ai
.. _OpenStack Charms: https://docs.openstack.org/charm-guide
.. _OpenStack Charm Guide: https://docs.openstack.org/charm-guide
.. _file an issue: https://bugs.launchpad.net/charm-deployment-guide/+filebug
.. _submit a contribution: https://opendev.org/openstack/charm-deployment-guide
.. _OpenStack Charm Guide: https://docs.openstack.org/charm-guide


@ -2,9 +2,6 @@
Install Juju
============
Overview
--------
In the :doc:`previous section <install-maas>`, we set up the base environment
in the form of a `MAAS`_ cluster. We are now going to implement `Juju`_ as a
management solution for that environment. The main goal will be the creation of


@ -2,11 +2,9 @@
Install MAAS
============
Overview
--------
In the :doc:`previous section <index>`, we gave a summary of the OpenStack
cloud we'll be building and described the approach we'll take for doing that.
In the :doc:`previous section <install-overview>`, we gave a summary of the
OpenStack cloud we'll be building and described the approach we'll take for
doing that.
This page will cover the installation of `MAAS`_ as well as point out what is
required out of MAAS in terms of post-installation tasks. The goal is to


@ -37,7 +37,7 @@ Administrator Guides`_ for long-term guidance.
.. LINKS
.. _Install OpenStack: install-openstack
.. _Configure OpenStack: config-openstack
.. _Configure OpenStack: configure-openstack.html
.. _Charm bundles: https://jaas.ai/docs/charm-bundles
.. _MAAS: https://maas.io
.. _openstack-base: https://jaas.ai/openstack-base


@ -795,12 +795,12 @@ Next steps
You have successfully deployed OpenStack using both Juju and MAAS. The next
step is to render the cloud functional for users. This will involve setting up
networks, images, and a user environment. Go to :doc:`Configure OpenStack
<config-openstack>` now.
<configure-openstack>` now.
.. LINKS
.. _OpenStack Charms: https://docs.openstack.org/charm-guide/latest/openstack-charms.html
.. _Charm upgrades: app-upgrade-openstack#charm-upgrades
.. _Series upgrade: app-series-upgrade
.. _Charm upgrades: app-upgrade-openstack.html#charm-upgrades
.. _Series upgrade: app-series-upgrade.html
.. _Charm store: https://jaas.ai/store
.. _Post-commission configuration: https://maas.io/docs/commission-nodes#heading--post-commission-configuration
.. _Deploying applications: https://jaas.ai/docs/deploying-applications


@ -0,0 +1,26 @@
========
Overview
========
The purpose of the Installation section is to demonstrate how to build a
multi-node OpenStack cloud with `MAAS`_, `Juju`_, and `OpenStack Charms`_. For
easy adoption the cloud will be minimal. Nevertheless, it will be capable of
both performing some real work and scaling to fit more ambitious projects. High
availability will not be implemented beyond natively HA applications (Ceph,
MySQL8, OVN, Swift, and RabbitMQ).
The software versions used in this guide are as follows:
* **Ubuntu 20.04 LTS (Focal)** for the MAAS server, Juju client, Juju
controller, and all cloud nodes (including containers)
* **MAAS 2.8.2**
* **Juju 2.8.1**
* **OpenStack Ussuri**
Proceed to the :doc:`Install MAAS <install-maas>` page to begin your
installation journey. Hardware requirements are also listed there.
.. LINKS
.. _MAAS: https://maas.io
.. _Juju: https://juju.is
.. _OpenStack Charms: https://docs.openstack.org/charm-guide