Add a spellchecker

To conform to documentation best practices, add a spellchecker. The
sphinxcontrib-spelling Sphinx extension is used.

Correct any current spelling mistakes across the
doc set.

Add a seeding file ('spelling_initial_seeding.txt') that extends the
system dictionary.

Going forward, the 'spelling_words.txt' file will
be used to extend the dictionary.

Do not enforce spelling during a normal doc build; add
a new tox target.
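
The new target can be run locally with, for example:

    tox -e spelling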

Add a non-voting Zuul job that consumes the new tox
target.

The doc-contrib documentation will include information
on the spellchecker in a subsequent PR.

Future work is needed to make the checking more intelligent. In the
meantime, the file 'dubious_words.txt' has been added to temporarily
store words that should be filtered out. For instance, the word 'tis'
is in this file because it is part of the proper noun 'tpm-tis'.
Hyphenated words (or words in single quotes) could eventually be
exempted from the check.
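
One possible direction, sketched below for illustration only (it is not
part of this change): sphinxcontrib-spelling can load custom PyEnchant
tokenizer filters via a 'spelling_filters' setting in conf.py, so a small
filter could skip such fragments instead of carrying them in
'dubious_words.txt'. The class name, word set, and module path used here
are hypothetical.

    # Hypothetical sketch only: skip fragments of hyphenated proper nouns
    # (e.g. the 'tis' in 'tpm-tis') during the spellcheck.
    from enchant.tokenize import Filter

    class HyphenFragmentFilter(Filter):
        """Tell the tokenizer to ignore known hyphenated-term fragments."""

        # Illustrative set; it could instead be derived from the doc sources.
        fragments = {'tis'}

        def _skip(self, word):
            # Returning True excludes the word from spellchecking.
            return word.lower() in self.fragments

    # conf.py would then reference the filter, for example:
    # spelling_filters = ['ext.spelling_filters.HyphenFragmentFilter']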

Change-Id: I70a1d5208b97923c081b359af3208f4de65eb6ca
Peter Matulis 2022-03-31 20:54:19 -04:00
parent 7086480bf8
commit eb1e9e207b
28 changed files with 617 additions and 70 deletions


@ -1,3 +1,18 @@
- project:
    templates:
      - publish-openstack-docs-pti
    check:
      jobs:
        - charm-guide-spellcheck
    gate:
      jobs:
        - charm-guide-spellcheck
- job:
    name: charm-guide-spellcheck
    parent: openstack-tox-py39
    description: |
      Run a spellchecker against the docs
    voting: false
    vars:
      tox_envlist: spelling
      bindep_profile: doc

bindep.txt (new file)

@ -0,0 +1,4 @@
# needed for tox env 'spelling'
enchant-2 [platform:dpkg doc]
aspell [platform:dpkg doc]
aspell-en [platform:dpkg doc]


@ -5,13 +5,14 @@ Emulated Trusted Platform Module (vTPM)
Overview
--------
`Trusted Platform Modules`_ can be used to enhance computer security and privacy.
TPM is even required by some Operating Systems.
`Trusted Platform Modules`_ can be used to enhance computer security and
privacy. TPM is even required by some Operating Systems.
To support TPM devices within guest instances, OpenStack Nova integrates with
software-based emulated TPM devices for QEMU and KVM guest intances. The secrets
stored within the emulated devices are encrypted using Barbican secrets. The
devices are then provided via the :command:`swtpm` software package.
software-based emulated TPM devices for QEMU and KVM guest instances. The
secrets stored within the emulated devices are encrypted using Barbican
secrets. The devices are then provided via the :command:`swtpm` software
package.
Pre-requisites
--------------
@ -23,24 +24,25 @@ nova-compute charm:
* Barbican Key Manager service must be deployed and configured
* swtpm libraries must be available for installation
If you are using an apt mirror, make sure it contains the ``swtpm``, ``swtpm-tools``,
and ``libtpms0`` packages.
If you are using an apt mirror, make sure it contains the ``swtpm``,
``swtpm-tools``, and ``libtpms0`` packages.
.. note::
The swtpm, swtpm-tools, and libtpms libraries are available in Ubuntu 22.04 LTS
(Jammy) release. It is expected that they will be backported to the Ubuntu 20.04
LTS (Focal) archives. Until this is done, the OpenStack Charms team is providing
a Personal Package Archive (PPA) with the necessary packages for Focal.
The swtpm, swtpm-tools, and libtpms libraries are available in Ubuntu 22.04
LTS (Jammy) release. It is expected that they will be backported to the
Ubuntu 20.04 LTS (Focal) archives. Until this is done, the OpenStack Charms
team is providing a Personal Package Archive (PPA) with the necessary
packages for Focal.
Deployment
----------
TPM support is enabled on all compute nodes by using the nova-comptue charm's
TPM support is enabled on all compute nodes by using the nova-compute charm's
``enable-vtpm`` configuration option.
In this example, support is enabled on Focal-based nodes via a PPA. The following YAML
excerpt contains the configuration:
In this example, support is enabled on Focal-based nodes via a PPA. The
following YAML excerpt contains the configuration:
.. code-block:: yaml
@ -48,11 +50,11 @@ excerpt contains the configuration:
enable-vtpm: True
extra-repositories: ppa:openstack-charmers/swtpm
Nova will use the credentials for service discovery from Keystone in order to determine
the Barbican endpoint to use.
Nova will use the credentials for service discovery from Keystone in order to
determine the Barbican endpoint to use.
Once vTPM support has been enabled in the compute nodes, verify that the compute nodes
are registering the TPM traits within the Placement service:
Once vTPM support has been enabled in the compute nodes, verify that the
compute nodes are registering the TPM traits within the Placement service:
.. code-block:: none
@ -64,20 +66,20 @@ are registering the TPM traits within the Placement service:
OpenStack configuration
-----------------------
TPM support is added to a VM by means of an OpenStack flavor. This will specify the TPM
version and model for the vTPM device to emulate.
TPM support is added to a VM by means of an OpenStack flavor. This will specify
the TPM version and model for the vTPM device to emulate.
There are two versions to choose from (1.2 and 2.0) as well as two model types (tpm-tis
and tpm-crb).
There are two versions to choose from (1.2 and 2.0) as well as two model types
(tpm-tis and tpm-crb).
.. note::
The default model is tpm-tis.
The default model is 'tpm-tis'.
The tpm-crb model is only compatible with TPM version 2.0
The following example configures an existing flavor to use TPM 2.0 with the CRB model
(optionally create a new flavor):
The following example configures an existing flavor to use TPM 2.0 with the CRB
model (optionally create a new flavor):
.. code-block:: none
@ -85,11 +87,12 @@ The following example configures an existing flavor to use TPM 2.0 with the CRB
--property hw:tpm_version=2.0 \
--property hw:tpm_model=tpm-crb
The image used to create a TPM-supported VM must be configured to use UEFI firmware.
This is done by setting the ``hw_firmware_type`` property to ``uefi``.
The image used to create a TPM-supported VM must be configured to use UEFI
firmware. This is done by setting the ``hw_firmware_type`` property to
``uefi``.
The following example configures an existing image to use UEFI (optionally import a
new image):
The following example configures an existing image to use UEFI (optionally
import a new image):
.. code-block:: none
@ -98,8 +101,8 @@ new image):
References
----------
More information related to the usage of vTPM can be found in the upstream OpenStack
documentation:
More information related to the usage of vTPM can be found in the upstream
OpenStack documentation:
* `Emulated Trusted Platform Module`_ (Nova)
* `Extra Specs`_ (Nova)


@ -22,7 +22,7 @@ following Launchpad tags:
* `charm-upgrade`_ - Issues upgrading the charm revision, such as cs:foo-100
to cs:foo-101 (not a payload or OpenStack version upgrade, not a series
upgrade).
* `series-upgrade`_ - Issues upgrading from one series to the next, ie. Bionic
* `series-upgrade`_ - Issues upgrading from one series to the next, i.e. Bionic
to Focal.
* `ceph-upgrade`_ - Issues upgrading the Ceph version (not charm upgrade).
* `scaleback`_ - Issues removing a unit, shrinking a cluster, replacing a unit.


@ -796,6 +796,6 @@ taken as an indicator that it is acceptable to add more.
Why?
Adapters and Contexts are regulary called via the update status hook to assess
Adapters and Contexts are regularly called via the update status hook to assess
whether a charm is ready. If calling the Context or Adapter has unexpected
side effects it could interrupt service. See `Bug #1605184 <https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1605184>`__ for an example of this issue.


@ -23,8 +23,8 @@ schema.
Create the skeleton charm
=========================
Prerequists
~~~~~~~~~~~
Prerequisites
~~~~~~~~~~~~~
The charm-tools package and charm-templates-openstack python module are both
needed to construct the charm from a template and to build the resulting charm.
@ -59,7 +59,7 @@ All the questions are optional, below are the responses for Congress.
What port does the primary service listen on ? 1789
What is the name of the api service? congress-server
What type of service is this (used for keystone registration)? congress
What is the earliest OpenStack release this charm is compatible with? mitaka
What is the earliest OpenStack release this charm is compatible with? Mitaka
Where command is used to sync the database? congress-db-manage --config-file /etc/congress/congress.conf upgrade head
What packages should this charm install (space separated list)? congress-server congress-common python-antlr3 python-pymysql
List of config files managed by this charm (space separated) /etc/congress/congress.conf
@ -71,7 +71,7 @@ Configuration Files
The charm code searches through the templates directories looking for a
directory corresponding to the OpenStack release being installed or earlier.
Since Mitaka is the earliest release the charm is supporting a directory called
mitaka will house the templates and files.
Mitaka will house the templates and files.
A template for congress.conf is needed which will have connection
information for MySQL and Keystone as well as user controllable config options.


@ -49,7 +49,7 @@ further editing to produce the functional charm needed.
charm-create -t openstack-manila-plugin new-manila-plugin
INFO: Generating charm for new-manila-plugin in ./new-manila-plugin
INFO: No new-manila-plugin in apt cache; creating an empty charm instead.
What is the earliest OpenStack release this charm will support? mitaka
What is the earliest OpenStack release this charm will support? Mitaka
What packages should this charm install (space separated list)?
What is the package to take the version from (manila-api is probably ok)?
@ -139,7 +139,7 @@ src/tox.ini
src/reactive/{package}_handlers.py
This file contains the reactive handlers for the charm. If the default
behavior of the charm needs to be altered then this is the starting point
behaviour of the charm needs to be altered then this is the starting point
for that change.
src/lib/charm/openstack/{package}.py
@ -151,7 +151,7 @@ src/templates/{release}/manila.conf
The template file makes it easier to write out the configuration section that
will be supplied to the ``manila.conf`` file in the manila charm. **This
file will need editing**. If the earliest release is something other than
mitaka, then the folder name will need to be renamed to the earliest release.
Mitaka, then the folder name will need to be renamed to the earliest release.
src/tests/*
These are the functional tests that can be run on the charm to demonstrate


@ -23,7 +23,7 @@ Hook handlers run before any state handlers. Hooks *can't* be combined with
state/flag handlers. The state handlers then run until there are no more state
changes.
The can cause unexpected behavior as it means that state handlers are run
The can cause unexpected behaviour as it means that state handlers are run
whenever their condition state/flags evaluate to 'true' for *any* hook that
runs.


@ -74,7 +74,7 @@ that depends on whether the function is *pure* or *impure*.
A *pure* function is one that always returns the same results for the same
set of values passed to the function. This means that there are no (input)
side-effects or dependencies on any other state outside of the function.
A pure function is analogous, algorithimically, to a mathematical function.
A pure function is analogous, algorithmically, to a mathematical function.
*Pure* functions also can only call other pure functions. i.e. a pure function
isn't pure if *it* calls a function that is impure. Impurity at a particular
level 'infects' every caller of that function.
@ -163,7 +163,7 @@ OpenStack provider (a cloud) or the Juju LXD provider (all on one machine).
tox -e func
:Smoke Tests: Executes a subset (generaly one) of the Zaza_ deployment test
:Smoke Tests: Executes a subset (generally one) of the Zaza_ deployment test
sets. The smoke test set runs automatically on every proposed patchset.
To manually execute the Zaza smoke test on your locally-defined cloud:
@ -218,7 +218,7 @@ race conditions or problems introduced by the proposed code changes).*
*Developers are expected to have executed tests prior to submitting patches.*
Tests can be retriggered, or additional tests can be requested, simply by
replying on the Gerrit review with one of the recognized magic phrases below.
replying on the Gerrit review with one of the recognised magic phrases below.
``recheck``
Re-triggers events as if a new patchset had been submitted, including


@ -38,13 +38,23 @@ import os
# TODO(ajaeger): enable PDF building, for example add 'rst2pdf.pdfbuilder'
extensions = [
    'openstackdocstheme',
    'sphinx.ext.intersphinx'
    'sphinx.ext.intersphinx',
    'sphinxcontrib.spelling'
]

intersphinx_mapping = {
    'cdg': ('https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest', None)
}

# Spellchecker
spelling_lang="en_GB"
spelling_show_whole_line=True
spelling_word_list_filename = [
    'spelling_initial_seeding.txt',
    'spelling_words.txt',
    'dubious_words.txt'
]
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']


@ -0,0 +1 @@
tis


@ -38,7 +38,7 @@ release. This approach allows new charms to incubate as part of the wider
OpenStack Charms project, with inclusion in the 6-monthly release when this
policy is met.
Charms may choose to opt-out of the co-ordinated charm release, and follow
Charms may choose to opt-out of the coordinated charm release, and follow
a more independent release approach - this may be appropriate for supporting
charms in the wider OpenStack ecosystem which are not aligned to the main
OpenStack release cycle.


@ -155,7 +155,7 @@ services that they manage. This includes:
AppArmor profiles are disabled by default and can be enabled using the
aa-profile-mode configuration option. Valid settings are 'complain',
'enforce' or 'disable':
`
.. code:: bash
juju config neutron-gateway aa-profile-mode=enforce
@ -263,4 +263,4 @@ Bugs Fixed
==========
For the full list of bugs resolved for the 16.10 release please refer to
https://launchpad.net/charms/+milestone/16.10
``https://launchpad.net/charms/+milestone/16.10``


@ -138,7 +138,7 @@ deployments of 3 or more units.
ceph-osd availability zone support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ceph-osd charm now supports an availability zone. This can be utilized to
The ceph-osd charm now supports an availability zone. This can be utilised to
modify the default of having 1 replica per host.
@ -161,7 +161,7 @@ ceph-radsogw FastCGI support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Inline with the Ceph project, the ceph-radosgw charm has dropped support for
deployment of the Rados Gateway using Apache and mod_fastcgi; existing deployments
deployment of the RADOS Gateway using Apache and mod_fastcgi; existing deployments
will be reconfigured on upgrade to use the embedded webserver support provided
by the radosgw binaries.


@ -201,7 +201,7 @@ ZeroMQ messaging support across the charms will be removed during the Queens
development cycle.
PostgreSQL database support across the charms will be removed during the
Queens developement cycle.
Queens development cycle.
Deploy from Source (DFS) support is under review for sustainability and may be
removed during the Queens development cycle.


@ -80,7 +80,7 @@ The OpenStack Charms now provide support for dynamic route propagation via Neutr
The neutron-dynamic-routing charm provides the BGP speaker for dynamic route propagation of tenant networks and floating IP addresses.
For utilizing the dynamic routing feature of OpenStack see upstream documentation for neutron dynamic routing.
For utilising the dynamic routing feature of OpenStack see upstream documentation for neutron dynamic routing.
https://docs.openstack.org/neutron-dynamic-routing/latest


@ -108,7 +108,7 @@ Octavia Load Balancer Charm
The new Octavia charm leverages a lxd container property modeling feature which requires Juju 2.5 or later.
As such, it is classified as a preview charm with this charm release. When Juju 2.5 releases to stable channels, additional charm validation will takeplace, followed by an update and appendix to the 18.11 OpenStack Charms release notes. Subsequent charm changes may need to be back-ported to accommodate that staggered release.
As such, it is classified as a preview charm with this charm release. When Juju 2.5 releases to stable channels, additional charm validation will take place, followed by an update and appendix to the 18.11 OpenStack Charms release notes. Subsequent charm changes may need to be back-ported to accommodate that staggered release.
In the mean-time, the feature is available in the stable charms and it can be previewed using beta 2.5 of Juju.
@ -117,7 +117,7 @@ See the deployment guide for important guidance regarding the use of Octavia Cha
Barbican and Barbican-Vault Charms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prior to this release, the Barbican Charm was in a preview state, having no production back-end charm until now. With this charm release, the Barbican Charm is supported for Rocky and later only. The production use case is to deploy the Barbican-Vault Charm to provide a Vault back-end store (which also leveragesthe Castellan library). A stable update to the barbican-vault charm is anticipated in coordination with the Juju 2.5 stable release.
Prior to this release, the Barbican Charm was in a preview state, having no production back-end charm until now. With this charm release, the Barbican Charm is supported for Rocky and later only. The production use case is to deploy the Barbican-Vault Charm to provide a Vault back-end store (which also leverages the Castellan library). A stable update to the barbican-vault charm is anticipated in coordination with the Juju 2.5 stable release.
Upgrading charms
================


@ -362,19 +362,19 @@ Related to this issue, is an upstream oslo.cache bug which is working its way th
.. _bug 1796653: https://bugs.launchpad.net/juju/+bug/1796653
.. _bug 1812935: https://bugs.launchpad.net/oslo.cache/+bug/1812935
Cinder auto-resume after openstack upgrade action
Cinder auto-resume after OpenStack upgrade action
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There was a conflict between the way the cinder charm handled series-upgrade
and action managed openstack upgrades as described in (`bug 1824545`_).
and action managed OpenStack upgrades as described in (`bug 1824545`_).
When a cinder unit was paused and an action managed openstack upgrade was
When a cinder unit was paused and an action managed OpenStack upgrade was
performed certain necessary steps were accidentally skipped. The solution is
to run an automatic resume immediately after openstack upgrade, which the
to run an automatic resume immediately after OpenStack upgrade, which the
charm now does.
This note is to point out this behavior is different than the other charms.
We may change the other charms to match this behavior at some point in the
This note is to point out this behaviour is different than the other charms.
We may change the other charms to match this behaviour at some point in the
future.
After the following actions:
@ -386,7 +386,7 @@ After the following actions:
juju run-action --wait cinder/0 openstack-upgrade
The cinder charm will be upgraded and resumed. It is no longer necessary to run
the resume action post openstack upgrade.
the resume action post OpenStack upgrade.
.. _bug 1824545: https://bugs.launchpad.net/charm-cinder/+bug/1824545


@ -274,7 +274,7 @@ read/write node and N number of read-only nodes.
and therefore does not support single-unit or non-clustered
deployments.
The mysql-router charm deploys MySQL 8 mysqlrouter which will proxy database
The mysql-router charm deploys a MySQL 8 Router which will proxy database
requests from the principle charm application to a MySQL 8 InnoDB Cluster.
MySQL Router handles cluster communication and understands the cluster schema.


@ -175,7 +175,7 @@ The mysql-innodb-cluster charm deploys MySQL 8 in an InnoDB cluster with a
read/write node and N number of read-only nodes. This charm does not support
single-unit or non-clustered deployments.
The mysql-router charm deploys MySQL 8 mysqlrouter which will proxy database
The mysql-router charm deploys a MySQL 8 Router which will proxy database
requests from the principle charm application to a MySQL 8 InnoDB cluster.
MySQL Router handles cluster communication and understands the cluster schema.
@ -241,7 +241,7 @@ Watcher
~~~~~~~
The watcher charm deploys the OpenStack Watcher service, the resource
optimization service for multi-tenant clouds. The watcher-dashboard charm
optimisation service for multi-tenant clouds. The watcher-dashboard charm
provides a dashboard plugin for use with the OpenStack dashboard (Horizon). As
of the 20.02 OpenStack Charms release these charms are available as a tech
preview.


@ -261,7 +261,7 @@ The mysql-innodb-cluster charm deploys MySQL 8 in an InnoDB cluster with a
read/write node and N number of read-only nodes. This charm does not support
single-unit or non-clustered deployments.
The mysql-router charm deploys MySQL 8 mysqlrouter which will proxy database
The mysql-router charm deploys a MySQL 8 Router which will proxy database
requests from the principle charm application to a MySQL 8 InnoDB cluster.
MySQL Router handles cluster communication and understands the cluster schema.
The charm is deployed as a subordinate on the principle charm application and
@ -360,7 +360,7 @@ Deployment Guide`_.
.. note::
TrilioVault is a commerical snapshot and restore solution for OpenStack and
TrilioVault is a commercial snapshot and restore solution for OpenStack and
does not form part of the OpenStack project.
ceph-iscsi


@ -194,7 +194,7 @@ The symptom is missing sym links to certificates for Subject Alternative Name
(SAN) IP addresses. For example, for Virtual IP (VIP) addresses for services.
Apache configuration may fail as it will point to a certificate for the VIP(s).
The workaround is to set the above settings to False and utilize the
The workaround is to set the above settings to False and utilise the
post-deployment actions for preparing vault as documented in the `Vault
section`_ and the `Certificate Lifecycle Management`_ section of the `OpenStack
Charms Deployment Guide`_.


@ -113,7 +113,7 @@ greater.
* - trilio-dm-api
- dmapi-workers
- New
- Number of dmapi workers. This replaces the previous worker-muliplier option.
- Number of dmapi workers. This replaces the previous worker-multiplier option.
* - trilio-dm-api
- worker-multiplier
@ -144,7 +144,7 @@ greater.
Using S3 to store backups
-------------------------
The Trilio charms now support using an S3 compatable storage service to store
The Trilio charms now support using an S3 compatible storage service to store
backups. This is achieved by setting the ``backup-target-type`` option of the
trilio-data-mover and trilio-wlm charms to `S3` and set the following
configuration options to provide information regarding the S3 service:


@ -90,7 +90,7 @@ nova-cloud-controller charm: new option to configure ``scheduler-max-attempts``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The nova-cloud-controller charm has a new configuration option:
``scheduler-max-attempts``. This option will set the schedular.max_attempts
``scheduler-max-attempts``. This option will set the scheduler.max_attempts
in the nova configuration.
This flag allows to increase the number of retries and hence hosts to schedule


@ -0,0 +1,497 @@
ACL
AWS
Adapter
Adapters
Aiven
AppArmor
Arista
Asumming
Autotuning
BTRFS
Backport
Backporting
Balancer
Barbican
Barbican
Bitwarden
BlueStore
Bluestore
CAAS
CACERT
CIDR
CMR
Castellan
CentOS
Ceph
CephFS
Charmhub
ConnectX
Cron
Crossgrade
DHCP
DN
DNS
Diataxis
Diátaxis
Docstring
Docstrings
EBS
EC2
EOL
Etcd
FC
FIPs
FQDN
FWaaS
FWaaS
Failback
Fernet
Freenode
GCE
Ganesha
GiB
Gitea
HAProxy
HAcluster
Haproxy
Howtos
Hyperconvergence
IPv4
Initialise
Inline
InnoDB
JAAS
KVM
Kerberos
Kubernetes
LBaaS
LDAP
LTS
LVM
LXC
LXD
LightGray
Linuxes
Liveness
MAAS
MTU
Masakari
Matulis
MediaWiki
Mellanox
Memcache
Mesos
MiB
MicroK8s
MicroStack
Microversion
Microversion
Mitaka
Monospace
Multisite
MySQL
NATed
Nagios
Netplan
Nvidia
OSD
OSDs
OVN
OVN
OVS
Ocata
OpenDev
OpenStack
OpenStack's
OpenvSwitch
OverLength
PPA
PoC
PostgreSQL
Pre
QoS
Quickstart
RADOS
RBD
RDN
READMEs
RST
RabbitMQ
Reconfiguring
Respawn
Reweight
Runtime
SAAS
STONITH
SimpleStreams
Supportability
TLS
TODO
Trilio
TrilioVault
UUID
Unregistering
Ussuri
VLAN
VM
VMWare
VMware
VPC
Xena
Xenial
YAML
Yakkety
ZFS
Zaza
aa
aarch
aarch64
alex
algorithmically
amd
amd64
amd64
amqp
aodh
api
arista
arm64
aurelien
autocompletion
autodoc
autoscaler
autotune
autotuning
backend
backends
backport
backports
backticks
balancer
balancers
barbican
base64
bluestore
boolean
bootup
br
btrfs
catalog
cba
cdafb
ceilometer
centralized
ceph
ceph-mon
ceph-osd
cephx
charmhelpers
charmhub
charms
charmstore
chris
chroot
ci
cidr
cinderclient
civetweb
cleartext
codebase
colocate
conf
config
consoleauth
coreycb
corosync
cpu
crm
crossgrade
crossgraded
crossgrading
ctermbg
ctermfg
datacenter
datacentre
datamover
decrypt
decrypts
deployable
dev
developement
dhcp
diataxis
dir
disaggregated
diskimage
distro
dm
dmapi
dmitriis
dns
dnsmasq
docstring
docstrings
documentarian
downscaling
driveby
dsvm
easyrsa
el
enablement
env
eoan
errored
etcd
eth
failback
failover
fastcgi
fernet
filestore
filesystem
filesystems
fileystem
flavor
flavor's
flavors
fnordahl
focussed
formalize
formatoptions
freyes
frontend
frontends
fs
fsid
functools
ganesha
gerrit
github
glusterfs
gnuoy
hacluster
haproxy
hardcode
hardcoded
hlsearch
honored
hostname
hostnames
howto
http
https
hwe
hyperconverged
hypervisor
hypervisor's
hypervisors
iSCSI
icey
ifupdown
imagebin
ini
init
innodb
instantiation
integrations
io
ip
ipmitool
iptables
iscsi
james
joyent
jujucharms
k8s
kB
kerberos
keypair
knownhosts
kubectl
kubernetes
kv
kvm
ldap
libvirt
libvirtd
lifecycle
loadbalancer
loadbalancers
loadtime
localhost
lookups
loopback
lourot
lvm
lxd
maas
macOS
manila
masakari
mediawiki
mellon
memcached
metaclass
minimizes
mitaka
mlnx
mnesia
modeling
mojo
mon
mongodb
monospace
multicast
multisite
murano
mypy
mysql
mysqldb
nWhitespace
nagios
nameserver
namespace
namespaces
namespacing
natively
nd
netapp
netplan
nofoldenable
northd
nrpe
ntp
num
nvidia
objectstore
octavia
odl
olsen
omittance
onwards
opendev
openstack
openvswitch
os
osd
osds
oslo
ovn
ovs
passphraseless
passthrough
pastebin
patchset
percona
physnets
pki
plugin
plugins
policyd
postgresql
poweron
ppc
ppc64el
pre
prewritten
psycopg
purestorage
py
pylxd
pypi
qemu
qos
rabbitmq
rackspace
radosgw
radsogw
rbd
reST
reStructuredText
rebalancing
repos
respawn
respawned
resynced
retriggered
rootfs
routable
rst
runtime
s390x
saml
scalability
scsi
simgplestreams
simplestreams
snapcraft
snat
softhsm
src
sriov
ssd
ssl
sst
stacktraces
standalone
startup
stonith
subcommand
subdirectory
subnet
sudo
superceeded
superset
supportability
swauth
switchdev
sym
symlinked
symlinks
synchronize
testability
textwidth
tinwood
tmpfs
topologies
touchpoint
tox
triaging
trilio
triliovault
tunables
tunneled
txt
ubuntu
ulimits
unclustered
unconfigured
unicast
unmount
unregister
unregisters
upgradability
url
userspace
vGPU
vSphere
vSwitch
veth
vgpu
vif
vip
virtualenv
virtualization
walkthrough
webserver
whitelist
whitelisting
whitespace
wlm
xenial
yakkety
yaml
yy
zaza
zfs
zonegroup


@ -0,0 +1,12 @@
juju
Juju
Juju's
json
vTPM
swtpm
libtpm
backported
ppa
tpm
crb
libtpms


@ -7,3 +7,5 @@ pbr!=2.1.0,>=2.0.0 # Apache-2.0
sphinx>=2.0.0,!=2.1.0 # BSD
openstackdocstheme>=2.2.1 # Apache-2.0
whereto>=0.3.0 # Apache-2.0
pyenchant
sphinxcontrib-spelling


@ -11,6 +11,9 @@ deps = -r{toxinidir}/requirements.txt
[testenv:venv]
commands = {posargs}
[testenv:spelling]
commands = sphinx-build -a -W -d doc/build/doctrees -b html -b spelling doc/source doc/build/html
[testenv:docs]
commands = sphinx-build -a -W -d doc/build/doctrees -b html doc/source doc/build/html
  whereto doc/source/_extra/.htaccess doc/test/redirect-tests.txt