Editorial updates in Fault Management and Storage guides.

Patch 1: Addressed Greg's comments.

Patch 2: Addressed Greg's comment.

Patch 3: Addressed Mary's comments.

Patch 4: Corrected typos (Mary's comments).

https://review.opendev.org/c/starlingx/docs/+/792590

Signed-off-by: egoncalv <elisamaraaoki.goncalves@windriver.com>
Change-Id: Ief8a1687243d38e4ec01d6dd809c83ad752edf96
egoncalv 2021-05-21 09:46:18 -03:00
parent e2e42814e6
commit 0355aefc07
10 changed files with 59 additions and 260 deletions


@@ -1,97 +0,0 @@
.. hmg1558616220923
.. _cli-commands-for-alarms-management:
==================================
CLI Commands for Alarms Management
==================================
You can use the |CLI| to review alarm summaries for the |prod-dc|.
.. _cli-commands-for-alarms-management-ul-ncv-m4y-fdb:
- To show the status of all subclouds, as well as a summary count of alarms
and warnings for each one, use the :command:`alarm summary` command.
For example:
.. code-block:: none
~(keystone_admin)]$ dcmanager alarm summary
+------------+-----------------+--------------+--------------+----------+----------+
| NAME | CRITICAL_ALARMS | MAJOR_ALARMS | MINOR_ALARMS | WARNINGS | STATUS |
+------------+-----------------+--------------+--------------+----------+----------+
| subcloud-5 | 0 | 2 | 0 | 0 | degraded |
| subcloud-1 | 0 | 0 | 0 | 0 | OK |
+------------+-----------------+--------------+--------------+----------+----------+
System Controller alarms and warnings are not included.
The status is one of the following:
**OK**
There are no alarms or warnings, or only warnings.
**degraded**
There are minor or major alarms.
**critical**
There are critical alarms.
- To show the count of alarms and warnings for the System Controller, use the
:command:`alarm-summary` command.
For example:
.. code-block:: none
~(keystone_admin)]$ fm alarm-summary
+-----------------+--------------+--------------+----------+
| Critical Alarms | Major Alarms | Minor Alarms | Warnings |
+-----------------+--------------+--------------+----------+
| 0 | 0 | 0 | 0 |
+-----------------+--------------+--------------+----------+
The following command is equivalent to the :command:`fm alarm-summary`,
providing a count of alarms and warnings for the System Controller:
- :command:`fm --os-region-name RegionOne alarm-summary`
- To show the alarm and warning count for a specific subcloud only, add the
--os-region-name parameter and supply the region name:
For example:
.. code-block:: none
~(keystone_admin)]$ fm --os-region-name subcloud2 --os-auth-url http://192.168.121.2:5000/v3 alarm-summary
+-----------------+--------------+--------------+----------+
| Critical Alarms | Major Alarms | Minor Alarms | Warnings |
+-----------------+--------------+--------------+----------+
| 0 | 0 | 0 | 0 |
+-----------------+--------------+--------------+----------+
- To list the alarms for a subcloud:
.. code-block:: none
~(keystone_admin)]$ fm --os-region-name subcloud2 --os-auth-url http://192.168.121.2:5000/v3 alarm-list
+----------+--------------------------------------------+-------------------+----------+-------------------+
| Alarm ID | Reason Text | Entity ID | Severity | Time Stamp |
+----------+--------------------------------------------+-------------------+----------+-------------------+
| 250.001 | controller-0 Configuration is out-of-date. | host=controller-0 | major | 2018-02-06T21:37: |
| | | | | 32.650217 |
| | | | | |
| 250.001 | controller-1 Configuration is out-of-date. | host=controller-1 | major | 2018-02-06T21:37: |
| | | | | 29.121674 |
| | | | | |
+----------+--------------------------------------------+-------------------+----------+-------------------+
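The OK / degraded / critical status rules described above can be sketched as a small helper. This is an illustration of the documented mapping only, not part of the :command:`dcmanager` CLI; the function name is hypothetical:

```python
def subcloud_status(critical: int, major: int, minor: int, warnings: int) -> str:
    """Map dcmanager alarm summary counts to a subcloud status.

    Per the rules above: critical alarms win, then minor/major alarms
    mean degraded, and warnings alone still count as OK.
    """
    if critical > 0:
        return "critical"   # there are critical alarms
    if major > 0 or minor > 0:
        return "degraded"   # there are minor or major alarms
    return "OK"             # no alarms or warnings, or only warnings
```

For example, subcloud-5 in the output above (0 critical, 2 major) maps to "degraded", while subcloud-1 (all zeros) maps to "OK".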


@@ -15,7 +15,6 @@ Introduction
shared-configurations
regionone-and-systemcontroller-modes
alarms-management-for-distributed-cloud
-cli-commands-for-alarms-management
------------
Installation
@@ -40,11 +39,12 @@ Operation
monitoring-subclouds-using-horizon
managing-subclouds-using-the-cli
-switching-to-a-subcloud-from-the-system-controller
-synchronization-monitoring-and-control
+cli-commands-for-dc-alarms-management
managing-subcloud-groups
creating-subcloud-groups
ochestration-strategy-using-subcloud-groups
+switching-to-a-subcloud-from-the-system-controller
+synchronization-monitoring-and-control
managing-ldap-linux-user-accounts-on-the-system-controller
changing-the-admin-password-on-distributed-cloud
updating-docker-registry-credentials-on-a-subcloud


@@ -88,15 +88,6 @@ SNMP
setting-snmp-identifying-information
uninstalling-snmp
-**********************************
-Distributed Cloud alarm management
-**********************************
-.. toctree::
-:maxdepth: 1
-cli-commands-for-dc-alarms-management
******************************
Troubleshooting log collection
******************************


@@ -9,20 +9,10 @@ Ceph Storage Pools
On a system that uses a Ceph storage backend, kube-rbd pool |PVCs| are
configured on the storage hosts.
-|prod| uses four pools for each Ceph backend:
+|prod| uses up to three pools for the Ceph backend:
.. _ceph-storage-pools-ul-z5w-xwp-dw:
- kube-rbd
-- Cinder Volume Storage pool
-- Glance Image Storage pool
-- Nova Ephemeral Disk Storage pool
-- Swift Object Storage pool
-.. note::
-To increase the available storage, you can also add storage hosts. The
-maximum number depends on the replication factor for the system; see
-:ref:`Storage on Storage Hosts <storage-hosts-storage-on-storage-hosts>`.
+- kube-cephfs-data
+- kube-cephfs-metadata


@@ -2,9 +2,9 @@
.. rzp1584539804482
.. _configure-an-external-netapp-deployment-as-the-storage-backend:
-================================================================
+==============================================================
Configure an External Netapp Deployment as the Storage Backend
-================================================================
+==============================================================
Configure an external Netapp Trident deployment as the storage backend, after
system installation using a |prod|-provided ansible playbook.
@@ -21,11 +21,11 @@ procedure.
#. Configure the storage network.
.. only:: starlingx
Follow the next steps to configure storage network
.. only:: partner
.. include:: ../../_includes/configure-external-netapp.rest
@@ -82,11 +82,11 @@ procedure.
(keystone_admin)$ system host-unlock <hostname>
.. _configuring-an-external-netapp-deployment-as-the-storage-backend-mod-localhost:
#. Configure Netapps configurable parameters and run the provided
install\_netapp\_backend.yml ansible playbook to enable connectivity to
Netapp as a storage backend for |prod|.
#. Provide Netapp backend configurable parameters in an overrides yaml
file.
@@ -98,56 +98,56 @@ Netapp as a storage backend for |prod|.
The following parameters are mandatory:
-**ansible\_become\_pass**
+``ansible\_become\_pass``
Provide the admin password.
-**netapp\_backends**
-**name**
+``netapp\_backends``
+``name``
A name for the storage class.
-**provisioner**
-This value must be **netapp.io/trident**.
+``provisioner``
+This value must be ``netapp.io/trident``.
-**backendType**
+``backendType``
This value can be anything but must be the same as
StorageDriverName below.
-**version**
+``version``
This value must be 1.
-**storageDriverName**
+``storageDriverName``
This value can be anything but must be the same as
backendType below.
-**managementLIF**
+``managementLIF``
The management IP address for the backend logical interface.
-**dataLIF**
+``dataLIF``
The data IP address for the backend logical interface.
-**svm**
+``svm``
The storage virtual machine type to use.
-**username**
+``username``
The username for authentication against the netapp backend.
-**password**
+``password``
The password for authentication against the netapp backend.
The following parameters are optional:
-**trident\_setup\_dir**
+``trident\_setup\_dir``
Set a staging directory for generated configuration files. The
default is /tmp/trident.
-**trident\_namespace**
+``trident\_namespace``
Set this option to use an alternate Kubernetes namespace.
-**trident\_rest\_api\_port**
+``trident\_rest\_api\_port``
Use an alternate port for the Trident REST API. The default is
8000.
-**trident\_install\_extra\_params**
+``trident\_install\_extra\_params``
Add extra space-separated parameters when installing trident.
For complete listings of available parameters, see
@@ -190,8 +190,8 @@ Netapp as a storage backend for |prod|.
username: "admin"
password: "secret"
-This file is sectioned into **netapp\_k8s\_storageclass**,
-**netapp\_k8s\_snapshotstorageclasses**, and **netapp\_backends**
+This file is sectioned into ``netapp\_k8s\_storageclass``,
+``netapp\_k8s\_snapshotstorageclasses``, and ``netapp\_backends``.
You can add multiple backends and/or storage classes.
.. note::
@@ -207,13 +207,13 @@ Netapp as a storage backend for |prod|.
<https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/ontap.html>`__.
.. note::
-By default, Netapp is configured to have **777** as
-unixPermissions.|prod| recommends changing these settings to
-make it more secure, for example, **"unixPermissions": "755"**.
+By default, Netapp is configured to have ``777`` as
+unixPermissions. |prod| recommends changing these settings to
+make it more secure, for example, ``"unixPermissions": "755"``.
Ensure that the right permissions are used, and there is no
conflict with container security.
-Do NOT use **777** as **unixPermissions** to configure an external
+Do NOT use ``777`` as ``unixPermissions`` to configure an external
Netapp deployment as the Storage backend. For more information,
contact Netapp, at `https://www.netapp.com/
<https://www.netapp.com/>`__.
@@ -248,12 +248,18 @@ Netapp as a storage backend for |prod|.
To configure a persistent volume claim for the Netapp backend, add the
appropriate storage-class name you set up in step :ref:`2
<configure-an-external-netapp-deployment-as-the-storage-backend>`
-\(**netapp-nas-backend** in this example\) to the persistent volume
+\(``netapp-nas-backend`` in this example\) to the persistent volume
claim's yaml configuration file. For more information about this file, see
|usertasks-doc|: :ref:`Create ReadWriteOnce Persistent Volume Claims
<kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims>`.
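For illustration, a persistent volume claim that selects the storage class above might look like the following. This is a sketch only; the claim name and requested size are placeholders:

```yaml
# Hypothetical PVC sketch -- binds to the netapp-nas-backend storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: netapp-test-claim        # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # placeholder size
  storageClassName: netapp-nas-backend
```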
.. seealso::
+.. _configure-netapps-using-a-private-docker-registry:
-- :ref:`Configure Netapps Using a Private Docker Registry
-<configure-netapps-using-a-private-docker-registry>`
+-------------------------------------------------
+Configure Netapps Using a Private Docker Registry
+-------------------------------------------------
+Use the ``docker_registries`` parameter to pull from the local registry rather
+than public ones.
+You must first push the files to the local registry.
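Pulling the mandatory parameters listed earlier together, an overrides file for the install_netapp_backend.yml playbook might look like the following. This is a minimal sketch, not a tested configuration; every name, address, and credential is a placeholder:

```yaml
# Hypothetical overrides sketch for install_netapp_backend.yml.
# All values below are placeholders -- substitute your own.
ansible_become_pass: <sysadmin-password>
netapp_backends:
  - name: netapp-nas-backend          # storage class name
    provisioner: netapp.io/trident    # must be exactly this value
    backendType: ontap-nas
    version: 1
    storageDriverName: ontap-nas      # must match backendType
    managementLIF: 10.0.0.10          # management IP of the backend LIF
    dataLIF: 10.0.0.11                # data IP of the backend LIF
    svm: svm0
    username: admin
    password: secret
```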


@@ -1,12 +0,0 @@
.. ucd1592237332728
.. _configure-netapps-using-a-private-docker-registry:
===================================================
Configure Netapps Using a Private Docker Registry
===================================================
Use the ``docker_registries`` parameter to pull from the local registry rather
than public ones.
You must first push the files to the local registry.


@@ -16,13 +16,6 @@ Overview
Disks, Partitions, Volumes, and Volume Groups
---------------------------------------------
-.. toctree::
-:maxdepth: 1
-work-with-local-volume-groups
-local-volume-groups-cli-commands
-increase-the-size-for-lvm-local-volumes-on-controller-filesystems
*************************
Work with Disk Partitions
*************************
@@ -38,6 +31,17 @@ Work with Disk Partitions
increase-the-size-of-a-partition
delete-a-partition
+*****************************
+Work with Local Volume Groups
+*****************************
+.. toctree::
+:maxdepth: 1
+work-with-local-volume-groups
+local-volume-groups-cli-commands
+increase-the-size-for-lvm-local-volumes-on-controller-filesystems
**************************
Work with Physical Volumes
**************************
@@ -61,7 +65,6 @@ Storage Backends
storage-backends
configure-the-internal-ceph-storage-backend
configure-an-external-netapp-deployment-as-the-storage-backend
-configure-netapps-using-a-private-docker-registry
uninstall-the-netapp-backend
----------------
@@ -147,15 +150,6 @@ Ceph File System Provisioner
mount-readwritemany-persistent-volumes-in-containers
enable-readwritemany-pvc-support-in-additional-namespaces
-----------------
-Storage Profiles
-----------------
-.. toctree::
-:maxdepth: 1
-storage-profiles
----------------------------
Storage-Related CLI Commands
----------------------------


@@ -117,10 +117,4 @@ must add the storage tier first. For more about storage tiers, see
reported in the **Status** field.
When the unlock is complete, the host is shown as as **Unlocked**,
-**Enabled**, and **Available**.
-.. rubric:: |postreq|
-You can reuse the same settings with other nodes by creating and applying
-a storage profile. See :ref:`Storage Profiles <storage-profiles>`.
+**Enabled**, and **Available**.


@@ -1,67 +0,0 @@
.. frt1552675083821
.. _storage-profiles:
================
Storage Profiles
================
A storage profile is a named configuration for a list of storage resources
on a storage node or worker node.
Storage profiles for storage nodes are created using the **Create Storage
Profile** button on the storage node Host Detail page.
Storage profiles for worker nodes are created using the **Create Storage
Profile** button on the worker node Host Detail page.
Storage profiles are shown on the **Storage Profiles** tab on the Host
Inventory page. They can be created only after the host has been unlocked
for the first time.
Each storage resource consists of the following elements:
**Name**
This is the name given to the profile when it is created.
**Disk Configuration**
A Linux block storage device \(/dev/disk/by-path/..., identifying a
hard drive by physical location.
**Storage Configuration**
This field provides details on the storage type. The details differ
depending on the intended type of node for the profile.
Profiles for storage nodes indicate the type of storage backend, such
as **osd**, and potentially journal stor in the case of a storage node.
Profiles for worker nodes, and for controller/worker nodes in |prod-os|
Simplex or Duplex systems, provide details for the **nova-local**
volume group used for instance local storage as well as the Physical
volume and any Physical Volume Partitions that have been configured.
CoW Image is the default setting. Concurrent disk operations is now
configured as a helm chart override for containerized OpenStack.
.. _storage-profiles-d87e22:
.. note::
Storage profiles for worker-based or |prod-os| ephemeral storage \(that
is, storage profiles containing volume group and physical volume
information\) can be applied in two scenarios:
- on initial installation where a nova-local volume group has not
been previously provisioned
- on a previously provisioned host where the nova-local volume group
has been marked for removal
On a previously provisioned host, delete the nova-local volume group prior to applying the profile.
The example Storage Profiles screen below lists a storage profile for
image-backed **nova-local** storage, suitable for worker hosts.
.. image:: ../figures/jwe1570638362341.png
To delete storage profiles, select the check boxes next to the profile
names, and then click **Delete Storage Profiles**. This does not affect
hosts where the profiles have already been applied.