Storage Guide update

Applied review feedback.

Fixed 'grey bars' formatting.
Fixed gerunds and lists.
Added abbreviations and references.

Change-Id: I104d678ce3ea52bddcbc141f8aad49ea1e1971db
Signed-off-by: Keane Lim <keane.lim@windriver.com>
Keane Lim 2020-12-01 17:36:57 -05:00
parent cbb1aaca3c
commit ceea9dda0d
65 changed files with 4515 additions and 103 deletions

View File

@ -129,6 +129,15 @@ User Tasks
usertasks/index
-------
Storage
-------
.. toctree::
:maxdepth: 2
storage/index
----------------
Operation guides
----------------

View File

@ -1,104 +1,105 @@
.. Common and domain-specific abbreviations.
.. Plural forms must be defined separately from singular as
.. replacements like |PVC|s won't work.
.. Please keep this list alphabetical.
.. |ACL| replace:: :abbr:`ACL (Access Control List)`
.. |AE| replace:: :abbr:`AE (Aggregated Ethernet)`
.. |AIO| replace:: :abbr:`AIO (All-In-One)`
.. |AVP| replace:: :abbr:`AVP (Accelerated Virtual Port)`
.. |AWS| replace:: :abbr:`AWS (Amazon Web Services)`
.. |BGP| replace:: :abbr:`BGP (Border Gateway Protocol)`
.. |BMC| replace:: :abbr:`BMC (Board Management Controller)`
.. |BMCs| replace:: :abbr:`BMCs (Board Management Controllers)`
.. |BOOTP| replace:: :abbr:`BOOTP (Bootstrap Protocol)`
.. |BPDU| replace:: :abbr:`BPDU (Bridge Protocol Data Unit)`
.. |BPDUs| replace:: :abbr:`BPDUs (Bridge Protocol Data Units)`
.. |CA| replace:: :abbr:`CA (Certificate Authority)`
.. |CAs| replace:: :abbr:`CAs (Certificate Authorities)`
.. |CLI| replace:: :abbr:`CLI (Command Line Interface)`
.. |CNI| replace:: :abbr:`CNI (Container Networking Interface)`
.. |CoW| replace:: :abbr:`CoW (Copy on Write)`
.. |CSK| replace:: :abbr:`CSK (Code Signing Key)`
.. |CSKs| replace:: :abbr:`CSKs (Code Signing Keys)`
.. |CVE| replace:: :abbr:`CVE (Common Vulnerabilities and Exposures)`
.. |DHCP| replace:: :abbr:`DHCP (Dynamic Host Configuration Protocol)`
.. |DPDK| replace:: :abbr:`DPDK (Data Plane Development Kit)`
.. |DRBD| replace:: :abbr:`DRBD (Distributed Replicated Block Device)`
.. |DSCP| replace:: :abbr:`DSCP (Differentiated Services Code Point)`
.. |DVR| replace:: :abbr:`DVR (Distributed Virtual Router)`
.. |FEC| replace:: :abbr:`FEC (Forward Error Correction)`
.. |FPGA| replace:: :abbr:`FPGA (Field Programmable Gate Array)`
.. |FQDN| replace:: :abbr:`FQDN (Fully Qualified Domain Name)`
.. |FQDNs| replace:: :abbr:`FQDNs (Fully Qualified Domain Names)`
.. |GNP| replace:: :abbr:`GNP (Global Network Policy)`
.. |IGMP| replace:: :abbr:`IGMP (Internet Group Management Protocol)`
.. |IoT| replace:: :abbr:`IoT (Internet of Things)`
.. |IPMI| replace:: :abbr:`IPMI (Intelligent Platform Management Interface)`
.. |LACP| replace:: :abbr:`LACP (Link Aggregation Control Protocol)`
.. |LAG| replace:: :abbr:`LAG (Link Aggregation)`
.. |LDAP| replace:: :abbr:`LDAP (Lightweight Directory Access Protocol)`
.. |LDPC| replace:: :abbr:`LDPC (Low-Density Parity Check)`
.. |LLDP| replace:: :abbr:`LLDP (Link Layer Discovery Protocol)`
.. |MAC| replace:: :abbr:`MAC (Media Access Control)`
.. |MEC| replace:: :abbr:`MEC (Multi-access Edge Computing)`
.. |MLD| replace:: :abbr:`MLD (Multicast Listener Discovery)`
.. |MNFA| replace:: :abbr:`MNFA (Multi-Node Failure Avoidance)`
.. |MOTD| replace:: :abbr:`MOTD (Message of the Day)`
.. |MTU| replace:: :abbr:`MTU (Maximum Transmission Unit)`
.. |NIC| replace:: :abbr:`NIC (Network Interface Card)`
.. |NICs| replace:: :abbr:`NICs (Network Interface Cards)`
.. |NTP| replace:: :abbr:`NTP (Network Time Protocol)`
.. |NUMA| replace:: :abbr:`NUMA (Non-Uniform Memory Access)`
.. |NVMe| replace:: :abbr:`NVMe (Non-Volatile Memory express)`
.. |OAM| replace:: :abbr:`OAM (Operations, administration and management)`
.. |ONAP| replace:: :abbr:`ONAP (Open Network Automation Platform)`
.. |OSD| replace:: :abbr:`OSD (Object Storage Device)`
.. |OSDs| replace:: :abbr:`OSDs (Object Storage Devices)`
.. |PAC| replace:: :abbr:`PAC (Programmable Acceleration Card)`
.. |PCI| replace:: :abbr:`PCI (Peripheral Component Interconnect)`
.. |PDU| replace:: :abbr:`PDU (Packet Data Unit)`
.. |PF| replace:: :abbr:`PF (Physical Function)`
.. |PHB| replace:: :abbr:`PHB (Per-Hop Behavior)`
.. |PQDN| replace:: :abbr:`PQDN (Partially Qualified Domain Name)`
.. |PQDNs| replace:: :abbr:`PQDNs (Partially Qualified Domain Names)`
.. |PTP| replace:: :abbr:`PTP (Precision Time Protocol)`
.. |PVC| replace:: :abbr:`PVC (Persistent Volume Claim)`
.. |PVCs| replace:: :abbr:`PVCs (Persistent Volume Claims)`
.. |PXE| replace:: :abbr:`PXE (Preboot Execution Environment)`
.. |QoS| replace:: :abbr:`QoS (Quality of Service)`
.. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
.. |RPC| replace:: :abbr:`RPC (Remote Procedure Call)`
.. |SAN| replace:: :abbr:`SAN (Subject Alternative Name)`
.. |SANs| replace:: :abbr:`SANs (Subject Alternative Names)`
.. |SAS| replace:: :abbr:`SAS (Serial Attached SCSI)`
.. |SATA| replace:: :abbr:`SATA (Serial AT Attachment)`
.. |SLA| replace:: :abbr:`SLA (Service Level Agreement)`
.. |SLAs| replace:: :abbr:`SLAs (Service Level Agreements)`
.. |SNAT| replace:: :abbr:`SNAT (Source Network Address Translation)`
.. |SNMP| replace:: :abbr:`SNMP (Simple Network Management Protocol)`
.. |SRIOV| replace:: :abbr:`SR-IOV (Single Root I/O Virtualization)`
.. |SSD| replace:: :abbr:`SSD (Solid State Drive)`
.. |SSDs| replace:: :abbr:`SSDs (Solid State Drives)`
.. |SSH| replace:: :abbr:`SSH (Secure Shell)`
.. |SSL| replace:: :abbr:`SSL (Secure Sockets Layer)`
.. |STP| replace:: :abbr:`STP (Spanning Tree Protocol)`
.. |TFTP| replace:: :abbr:`TFTP (Trivial File Transfer Protocol)`
.. |ToR| replace:: :abbr:`ToR (Top-of-Rack)`
.. |TPM| replace:: :abbr:`TPM (Trusted Platform Module)`
.. |UDP| replace:: :abbr:`UDP (User Datagram Protocol)`
.. |UEFI| replace:: :abbr:`UEFI (Unified Extensible Firmware Interface)`
.. |VF| replace:: :abbr:`VF (Virtual Function)`
.. |VFs| replace:: :abbr:`VFs (Virtual Functions)`
.. |VLAN| replace:: :abbr:`VLAN (Virtual Local Area Network)`
.. |VLANs| replace:: :abbr:`VLANs (Virtual Local Area Networks)`
.. |VM| replace:: :abbr:`VM (Virtual Machine)`
.. |VMs| replace:: :abbr:`VMs (Virtual Machines)`
.. |VNC| replace:: :abbr:`VNC (Virtual Network Computing)`
.. |VNI| replace:: :abbr:`VNI (VXLAN Network Identifier)`
.. |VNIs| replace:: :abbr:`VNIs (VXLAN Network Identifiers)`
.. |VPC| replace:: :abbr:`VPC (Virtual Port Channel)`
.. |VXLAN| replace:: :abbr:`VXLAN (Virtual eXtensible Local Area Network)`
.. |VXLANs| replace:: :abbr:`VXLANs (Virtual eXtensible Local Area Networks)`
.. |XML| replace:: :abbr:`XML (eXtensible Markup Language)`
.. |YAML| replace:: :abbr:`YAML (YAML Ain't Markup Language)`

13 binary image files (figures for the Storage guide) added; contents not shown in the diff.

View File

@ -0,0 +1,163 @@
=======
Storage
=======
----------
Kubernetes
----------
********
Overview
********
.. toctree::
:maxdepth: 1
kubernetes/storage-configuration-storage-resources
kubernetes/disk-naming-conventions
*********************************************
Disks, Partitions, Volumes, and Volume Groups
*********************************************
.. toctree::
:maxdepth: 1
kubernetes/work-with-local-volume-groups
kubernetes/local-volume-groups-cli-commands
kubernetes/increase-the-size-for-lvm-local-volumes-on-controller-filesystems
*************************
Work with Disk Partitions
*************************
.. toctree::
:maxdepth: 1
kubernetes/work-with-disk-partitions
kubernetes/identify-space-available-for-partitions
kubernetes/list-partitions
kubernetes/view-details-for-a-partition
kubernetes/add-a-partition
kubernetes/increase-the-size-of-a-partition
kubernetes/delete-a-partition
**************************
Work with Physical Volumes
**************************
.. toctree::
:maxdepth: 1
kubernetes/work-with-physical-volumes
kubernetes/add-a-physical-volume
kubernetes/list-physical-volumes
kubernetes/view-details-for-a-physical-volume
kubernetes/delete-a-physical-volume
****************
Storage Backends
****************
.. toctree::
:maxdepth: 1
kubernetes/storage-backends
kubernetes/configure-the-internal-ceph-storage-backend
kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend
kubernetes/configure-netapps-using-a-private-docker-registry
kubernetes/uninstall-the-netapp-backend
****************
Controller Hosts
****************
.. toctree::
:maxdepth: 1
kubernetes/controller-hosts-storage-on-controller-hosts
kubernetes/ceph-cluster-on-a-controller-host
kubernetes/increase-controller-filesystem-storage-allotments-using-horizon
kubernetes/increase-controller-filesystem-storage-allotments-using-the-cli
************
Worker Hosts
************
.. toctree::
:maxdepth: 1
kubernetes/storage-configuration-storage-on-worker-hosts
*************
Storage Hosts
*************
.. toctree::
:maxdepth: 1
kubernetes/storage-hosts-storage-on-storage-hosts
kubernetes/replication-groups
*****************************
Configure Ceph OSDs on a Host
*****************************
.. toctree::
:maxdepth: 1
kubernetes/ceph-storage-pools
kubernetes/osd-replication-factors-journal-functions-and-storage-tiers
kubernetes/storage-functions-osds-and-ssd-backed-journals
kubernetes/add-ssd-backed-journals-using-horizon
kubernetes/add-ssd-backed-journals-using-the-cli
kubernetes/add-a-storage-tier-using-the-cli
kubernetes/replace-osds-and-journal-disks
kubernetes/provision-storage-on-a-controller-or-storage-host-using-horizon
kubernetes/provision-storage-on-a-storage-host-using-the-cli
*************************
Persistent Volume Support
*************************
.. toctree::
:maxdepth: 1
kubernetes/about-persistent-volume-support
kubernetes/default-behavior-of-the-rbd-provisioner
kubernetes/storage-configuration-create-persistent-volume-claims
kubernetes/storage-configuration-mount-persistent-volumes-in-containers
kubernetes/enable-pvc-support-in-additional-namespaces
kubernetes/enable-additional-storage-classes
kubernetes/install-additional-rbd-provisioners
****************
Storage Profiles
****************
.. toctree::
:maxdepth: 1
kubernetes/storage-profiles
****************************
Storage-Related CLI Commands
****************************
.. toctree::
:maxdepth: 1
kubernetes/storage-configuration-storage-related-cli-commands
*********************
Storage Usage Details
*********************
.. toctree::
:maxdepth: 1
kubernetes/storage-usage-details-storage-utilization-display
kubernetes/view-storage-utilization-using-horizon
---------
OpenStack
---------
Coming soon.

View File

@ -0,0 +1,20 @@
.. rhb1561120463240
.. _about-persistent-volume-support:
===============================
About Persistent Volume Support
===============================
Persistent Volume Claims \(PVCs\) are requests for storage resources in your
cluster. By default, containers have an ephemeral file system. For a
container to persist files beyond its lifetime, you can create a Persistent
Volume Claim to obtain a persistent volume, which the container can mount
and use to read and write files.
Management and customization tasks for Kubernetes Persistent Volume Claims can
be accomplished using the **rbd-provisioner** helm chart. The
**rbd-provisioner** helm chart is included in the **platform-integ-apps**
system application, which is automatically loaded and applied as part of the
|prod| installation.
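To confirm that **platform-integ-apps** has been applied on your system, you
can list the system applications; this is a minimal sketch assuming the
standard :command:`system` |CLI| is available:
.. code-block:: none

    ~(keystone_admin)$ system application-list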

View File

@ -0,0 +1,55 @@
.. eiq1590580042262
.. _add-a-partition:
===============
Add a Partition
===============
You can add a partition using the :command:`system host-disk-partition-add`
command.
.. rubric:: |context|
The syntax for the command is:
.. code-block:: none
system host-disk-partition-add <host> <disk> <size>
where:
**<host>**
is the host name or ID.
**<disk>**
is the disk path or UUID.
**<size>**
is the partition size in MiB.
For example, to set up a 512 MiB partition on compute-1, do the following:
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-add compute-1 fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc 512
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part6 |
| device_node | /dev/sdb6 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 512 |
| uuid | a259e898-6390-44ba-a750-e0cb1579d8e0 |
| ihost_uuid | 3b315241-d54f-499b-8566-a6ed7d2d6b39 |
| idisk_uuid | fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc |
| ipv_uuid | None |
| status | Creating |
| created_at | 2017-09-08T19:10:27.506768+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+

View File

@ -0,0 +1,89 @@
.. lle1590587515952
.. _add-a-physical-volume:
=====================
Add a Physical Volume
=====================
You can add a physical volume using the :command:`system host-pv-add` command.
.. rubric:: |prereq|
.. _add-a-physical-volume-ul-zln-ssc-vlb:
- You must lock a host before you can modify its settings.
.. code-block:: none
~(keystone_admin)$ system host-lock <hostname>
- A suitable local volume group must exist on the host. For more
information, see :ref:`Work with Physical Volumes
<work-with-physical-volumes>`.
- An unused disk or partition must be available on the host. For more
information about partitions, see :ref:`Work with Disk Partitions
<work-with-disk-partitions>`.
.. rubric:: |context|
The command syntax is:
.. code-block:: none
system host-pv-add <hostname> <groupname> <uuid>
where:
**<hostname>**
is the host name or ID.
**<groupname>**
is the name of the local volume group to include the physical volume.
**<uuid>**
is the identifier of the disk or partition to use.
You can specify the device node or the device path.
On a compute host with a single disk, you must assign a partition on
the root disk for **nova-local** storage. This is required to support
some small **nova-local** files. The host must not be used for VM local
ephemeral storage.
On a compute host with more than one disk, it is possible to create a
partition on the root disk for use as **nova-local** storage. However,
for performance reasons, you must either use a non-root disk for
**nova-local** storage, or ensure that the host is not used for VMs
with ephemeral local storage.
For example, to add a volume with the UUID
67b368ab-626a-4168-9b2a-d1d239d4f3b0 to compute-1, use the following command.
.. code-block:: none
~(keystone_admin)$ system host-pv-add compute-1 nova-local 67b368ab-626a-4168-9b2a-d1d239d4f3b0
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 1145ac0b-5be1-416c-a080-581fa95fce77 |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 67b368ab-626a-4168-9b2a-d1d239d4f3b0 |
| disk_or_part_device_node | /dev/sdb5 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part5 |
| lvm_pv_name | /dev/sdb5 |
| lvm_vg_name | nova-local |
| lvm_pv_uuid | None |
| lvm_pv_size | 0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 3b315241-d54f-499b-8566-a6ed7d2d6b39 |
| created_at | 2017-09-08T21:14:00.217360+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
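To verify the result, you can list the physical volumes on the host once the
command completes; a sketch, assuming the :command:`system host-pv-list`
command is available in your release:
.. code-block:: none

    ~(keystone_admin)$ system host-pv-list compute-1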

View File

@ -0,0 +1,193 @@
.. kzy1552678575570
.. _add-a-storage-tier-using-the-cli:
================================
Add a Storage Tier Using the CLI
================================
You can add custom storage tiers for |OSDs| to meet specific container disk
requirements.
.. rubric:: |context|
For more information about storage tiers, see |stor-doc|: :ref:`Storage on
Storage Hosts <storage-hosts-storage-on-storage-hosts>`.
.. rubric:: |prereq|
.. _adding-a-storage-tier-using-the-cli-ul-eyx-pwm-k3b:
- On an All-in-One Simplex or Duplex system, controller-0 must be
provisioned and unlocked before you can add a secondary tier.
- On Standard \(2+2\) and Standard with Storage \(2+2+2\) systems, both
controllers must be unlocked and available before secondary tiers can be
added.
.. rubric:: |proc|
#. Ensure that the **storage** tier has a full complement of OSDs.
You cannot add new tiers until the default **storage** tier contains
the number of OSDs required by the replication factor for the storage
backend.
.. code-block:: none
~(keystone_admin)$ system storage-tier-show ceph_cluster storage
+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| uuid | acc8fb74-6dc9-453f-85c8-884f85522639 |
| name | storage |
| type | ceph |
| status | in-use |
| backend_uuid | 649830bf-b628-4170-b275-1f0b01cfc859 |
| cluster_uuid | 364d4f89-bbe1-4797-8e3b-01b745e3a471 |
| OSDs | [0, 1] |
| created_at | 2018-02-15T19:32:28.682391+00:00 |
| updated_at | 2018-02-15T20:01:34.557959+00:00 |
+--------------+--------------------------------------+
#. List the names of any other existing storage tiers.
To create a new tier, you must assign a unique name.
.. code-block:: none
~(keystone_admin)$ system storage-tier-list ceph_cluster
+---------+---------+--------+--------------------------------------+
| uuid | name | status | backend_using |
+---------+---------+--------+--------------------------------------+
| acc8... | storage | in-use | 649830bf-b628-4170-b275-1f0b01cfc859 |
+---------+---------+--------+--------------------------------------+
#. Use the :command:`system storage-tier-add` command to add a new tier.
For example, to add a storage tier called **gold**:
.. code-block:: none
~(keystone_admin)$ system storage-tier-add ceph_cluster gold
+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| uuid | 220f17e2-8564-4f4d-8665-681f73d13dfb |
| name | gold |
| type | ceph |
| status | defined |
| backend_uuid | None |
| cluster_uuid | 5c48ed22-2a03-4b90-abc4-73757a594494 |
| OSDs | [0, 1] |
| created_at | 2018-02-19T21:36:59.302059+00:00 |
| updated_at | None |
+--------------+--------------------------------------+
#. Add a storage backend to provide access to the tier.
.. code-block:: none
~(keystone_admin)$ system storage-backend-add -n <name> -t <tier_uuid> ceph
For example, to add a storage backend named **gold-store** using the
new tier:
.. code-block:: none
~(keystone_admin)$ system storage-backend-add -n gold-store -t 220f17e2-8564-4f4d-8665-681f73d13dfb ceph
System configuration has changed.
Please follow the administrator guide to complete configuring the system.
+-----------+--------------+----------+------------+-----------+----------+--------------------+
| uuid | name | backend | state | task | services | capabilities |
+-----------+--------------+----------+------------+-----------+----------+--------------------+
| 23e396f2- | shared_servi | external | configured | None | glance | |
| | ces | | | | | |
| | | | | | | |
| 558e5573- | gold-store | ceph | configured | None | None | min_replication: 1 |
| | | | | | | replication: 2 |
| | | | | | | |
| 5ccdf53a- | ceph-store | ceph | configured | provision | None | min_replication: 1 |
| | | | |-storage | | replication: 2 |
| | | | | | | |
| | | | | | | |
+-----------+--------------+----------+------------+-----------+----------+--------------------+
#. Enable the Cinder service on the new storage backend.
.. note::
The Cinder Service is ONLY applicable to the |prod-os| application.
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify gold-store
+----------------------+-----------------------------------------+
| Property | Value |
+----------------------+-----------------------------------------+
| backend | ceph |
| name | gold-store |
| state | configuring |
| task | {u'controller-1': 'applying-manifests', |
| | u'controller-0': 'applying-manifests'} |
| services | cinder |
| capabilities | {u'min_replication': u'1', |
| | u'replication': u'2'} |
| object_gateway | False |
| ceph_total_space_gib | 0 |
| object_pool_gib | None |
| cinder_pool_gib | 0 |
| glance_pool_gib | None |
| ephemeral_pool_gib | None |
| tier_name | gold |
| tier_uuid | 220f17e2-8564-4f4d-8665-681f73d13dfb |
| created_at | 2018-02-20T19:55:49.912568+00:00 |
| updated_at | 2018-02-20T20:14:57.476317+00:00 |
+----------------------+-----------------------------------------+
.. note::
During storage backend configuration, OpenStack services may not be
available for a short period of time. Proceed to the next step once
the configuration is complete.
.. rubric:: |postreq|
You must assign OSDs to the tier. For more information, see |stor-doc|:
:ref:`Provision Storage on a Controller or Storage Host Using Horizon
<provision-storage-on-a-controller-or-storage-host-using-horizon>`.
To delete a tier that is not in use by a storage backend and does not have
OSDs assigned to it, use the command:
.. code-block:: none
~(keystone_admin)$ system storage-tier-delete
usage: system storage-tier-delete <cluster name or uuid> <storage tier name or uuid>
For example:
.. code-block:: none
~(keystone_admin)$ system storage-tier-delete ceph_cluster 268c967b-207e-4641-bd5a-6c05cc8706ef
To use the tier for a container volume, include the ``--volume-type`` parameter
when creating the Cinder volume, and supply the name of the cinder type.
For example:
.. code-block:: none
~(keystone_admin)$ cinder create --volume-type ceph-gold --name centos-guest 2
+---------+-----------+-------------+-----------+
| ID | Name | Description | Is_Public |
+---------+-----------+-------------+-----------+
| 77b2... | ceph-gold | - | True |
| df25... | ceph | - | True |
+---------+-----------+-------------+-----------+

View File

@ -0,0 +1,89 @@
.. qhr1552678653880
.. _add-ssd-backed-journals-using-horizon:
=====================================
Add SSD-Backed Journals Using Horizon
=====================================
On storage hosts with SSDs or NVMe drives, you can use SSD-backed Ceph
journals for improved I/O performance.
.. rubric:: |context|
If you prefer, you can use the |CLI|. For more information, see :ref:`Add
SSD-backed Journals Using the CLI
<add-ssd-backed-journals-using-the-cli>`.
For more information about SSD-backed journals, see :ref:`Storage on
Storage Hosts <storage-hosts-storage-on-storage-hosts>`.
.. rubric:: |prereq|
A storage host with a solid-state drive \(SSD\) or Non-Volatile Memory
Express \(NVMe\) drive is required.
To create or edit an SSD-backed journal, you must lock the host. The system
must have at least two other unlocked hosts with Ceph monitors. \(Ceph
monitors run on **controller-0**, **controller-1**, and **storage-0** only\).
.. rubric:: |proc|
#. Lock the host to prepare it for configuration changes.
On the **Hosts** tab of the Host Inventory page, open the drop-down
list for the host, and then select **Lock Host**.
The host is locked and reported as **Locked**, **Disabled**, and
**Online**.
#. Open the Host Detail page for the host.
To open the Host Detail page, click the name of the host on the
**Hosts** tab of the Host Inventory page.
#. Select the **Storage** tab to view the **Disks** and **Storage Functions** for the host.
.. image:: ../figures/yts1496238000598.png
#. Assign the SSD to use for Ceph journals.
.. note::
This option is available only if the storage host is equipped with
at least one SSD.
#. Click **Assign Storage Function** to open the Assign Storage Function dialog box.
.. image:: ../figures/wlx1464876289283.png
#. In the **Function** field, select Journal.
A simplified dialog is displayed.
.. image:: ../figures/pzu1464883037926.png
#. In the **Disks** field, select the SSD device.
#. Click **Assign Storage Function**.
The journal function is assigned to the SSD.
.. image:: ../figures/zfd1464884207881.png
#. Assign the journal function for use by one or more OSDs.
Use the **Edit** button for the OSD to open the Edit Storage Volume
dialog box, and then select the **Journal** to use with the OSD.
.. image:: ../figures/eew1464963403075.png
#. Unlock the host to make it available for use.
On the **Hosts** tab of the Host Inventory page, open the drop-down
list for the host, and then select **Unlock Host**.
The host is rebooted, and its **Availability State** is reported as
**In-Test**. After a few minutes, it is reported as **Unlocked**,
**Enabled**, and **Available**.

View File

@ -0,0 +1,113 @@
.. oim1552678636383
.. _add-ssd-backed-journals-using-the-cli:
=====================================
Add SSD-backed Journals Using the CLI
=====================================
You can use the command line to define SSD-backed journals.
.. rubric:: |context|
For more about SSD-backed journals, see :ref:`Storage on Storage Hosts
<storage-hosts-storage-on-storage-hosts>`.
To use the Horizon Web interface, see :ref:`Add SSD-Backed Journals
Using Horizon <add-ssd-backed-journals-using-horizon>`.
.. rubric:: |prereq|
A storage host with a solid-state drive \(SSD\) or Non-Volatile Memory
Express \(NVMe\) drive is required.
To create or edit an SSD-backed journal, you must lock the host. The system
must have at least two other unlocked hosts with Ceph monitors. \(Ceph
monitors run on **controller-0**, **controller-1**, and **storage-0** only\).
.. rubric:: |proc|
#. List the available physical disks.
.. code-block:: none
~(keystone_admin)$ system host-disk-list storage-3
+-------+-------------+------------+-------------+------------------+
| uuid | device_node | device_num | device_type | journal_size_gib |
+-------+-------------+------------+-------------+------------------+
| ba7...| /dev/sda | 2048 | HDD | 51200 |
| e87...| /dev/sdb | 2064 | HDD | 10240 |
| ae8...| /dev/sdc | 2080 | SSD | 8192 |
+-------+-------------+------------+-------------+------------------+
#. Create a journal function.
Use the :command:`system host-stor-add` command:
.. code-block:: none
~(keystone_admin)$ system host-stor-add <host_name> journal <device_uuid>
where <host\_name> is the name of the storage host \(for example,
storage-3\), and <device\_uuid> identifies an SSD.
For example:
.. code-block:: none
~(keystone_admin)$ system host-stor-add storage-3 journal ae885ad3-8be7-4103-84eb-93892d7182da
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| osdid | None |
| state | None |
| function | journal |
| journal_location | None |
| journal_size_mib | 0 |
| journal_node | None |
| uuid | e639f1a2-e71a-4f65-8246-5cd0662d966b |
| ihost_uuid | 4eb90dc1-2b17-443e-b997-75bdd19e3eeb |
| idisk_uuid | ae8b1434-d8fa-42a0-ac3b-110e2e99c68e |
| created_at | 2016-06-02T20:12:35.382099+00:00 |
| updated_at | None |
+------------------+--------------------------------------+
#. Update one or more OSDs to use the journal function.
.. code-block:: none
~(keystone_admin)$ system host-stor-update <osd_uuid> \
--journal-location <journal_function_uuid> [--journal-size <size_in_gib>]
For example:
.. code-block:: none
~(keystone_admin)$ system host-stor-update --journal-location dc4c9a99-a525-4c7e-baf2-22e8fad3f274 --journal-size 10 355b35d3-1f96-4423-a106-d27d8051af29
+------------------+-------------------------------------------------+
| Property | Value |
+------------------+-------------------------------------------------+
| osdid | 1 |
| function | osd |
| state | configuring-on-unlock |
| journal_location | dc4c9a99-a525-4c7e-baf2-22e8fad3f274 |
| journal_size_gib | 10240 |
| journal_path | /dev/disk/by-path/pci-0000:84:00.0-nvme-1-part1 |
| journal_node | /dev/nvme1n1p1 |
| uuid | 355b35d3-1f96-4423-a106-d27d8051af29 |
| ihost_uuid | 61d70ac5-bf10-4533-b65e-53efb8c20973 |
| idisk_uuid | b28abe19-fc43-4098-8054-e8bfa2136868 |
| tier_uuid | 100d7cf9-51d8-4c15-b7b1-83c082d506a0 |
| tier_name | storage |
| created_at | 2019-11-12T16:14:01.176137+00:00 |
| updated_at | 2019-11-12T19:51:16.034338+00:00 |
+------------------+-------------------------------------------------+
.. rubric:: |postreq|
Unlock the host to make the changes take effect. Wait for the host to be
reported as unlocked, online, and available in the hosts list.
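For example, a sketch for the storage-3 host used above:
.. code-block:: none

    ~(keystone_admin)$ system host-unlock storage-3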

View File

@ -0,0 +1,45 @@
.. gow1564588201550
.. _ceph-cluster-on-a-controller-host:
=================================
Ceph Cluster on a Controller Host
=================================
You can add one or more object storage devices \(OSDs\) per controller host
for data storage.
.. rubric:: |context|
See :ref:`Configure Ceph OSDs on a Host <ceph-storage-pools>` for
details on configuring Ceph on a host.
For Standard-with-controller storage and All-in-one Duplex scenarios with
2x controllers, Ceph replication is nodal. For All-in-one Simplex, with a
single controller, replication is across OSDs.
For All-in-one Simplex and Duplex there is a single Ceph monitor; on
Duplex, the Ceph monitor floats between controllers. For
Standard-with-controller storage, there are three Ceph monitors: two are
configured automatically, one on each controller, and the third is
configured on one of the worker nodes.
.. rubric:: |prereq|
The worker node must be locked.
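For example, assuming the worker node that will host the Ceph monitor is
compute-0, a sketch of locking it:
.. code-block:: none

    ~(keystone_admin)$ system host-lock compute-0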
.. rubric:: |proc|
- To configure a Ceph monitor on a worker node, execute the following
command:
.. code-block:: none
~(keystone_admin)$ system ceph-mon-add compute-0
- To add OSDs on AIO-SX, AIO-DX, and Standard systems, see
:ref:`Provision Storage on a Controller or Storage Host Using Horizon
<provision-storage-on-a-controller-or-storage-host-using-horizon>` for
more information.

View File

@ -0,0 +1,28 @@
.. cmn1552678621471
.. _ceph-storage-pools:
==================
Ceph Storage Pools
==================
On a system that uses a Ceph storage backend, kube-rbd pool |PVCs| are
configured on the storage hosts.
|prod| uses four pools for each Ceph backend:
.. _ceph-storage-pools-ul-z5w-xwp-dw:
- Cinder Volume Storage pool
- Glance Image Storage pool
- Nova Ephemeral Disk Storage pool
- Swift Object Storage pool
.. note::
To increase the available storage, you can also add storage hosts. The
maximum number depends on the replication factor for the system; see
:ref:`Storage on Storage Hosts <storage-hosts-storage-on-storage-hosts>`.
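As an illustration only, the Ceph pools backing the system can be listed
directly with the Ceph client; this sketch assumes the :command:`ceph` CLI
is available on a host running a Ceph monitor, and actual pool names vary
with the configured backends:
.. code-block:: none

    ~(keystone_admin)$ ceph osd lspools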

View File

@ -0,0 +1,244 @@
.. rzp1584539804482
.. _configure-an-external-netapp-deployment-as-the-storage-backend:
================================================================
Configure an External Netapp Deployment as the Storage Backend
================================================================
Configure an external Netapp Trident deployment as the storage backend
after system installation, with the help of a |prod|-provided Ansible
playbook.
..
.. rubric:: |prereq|
.. xbooklink
|prod-long| must be installed and fully deployed before performing this
procedure. See the :ref:`Installation Overview <installation-overview>`
for more information.
.. rubric:: |proc|
#. Configure the storage network.
If you have not created the storage network during system deployment,
you must create it manually.
#. If you have not done so already, create an address pool for the
storage network. This can be done at any time.
.. code-block:: none
system addrpool-add --ranges <start_address>-<end_address> <name_of_address_pool> <network_address> <network_prefix>
For example:
.. code-block:: none
(keystone_admin)$ system addrpool-add --ranges 10.10.20.1-10.10.20.100 storage-pool 10.10.20.0 24
#. If you have not done so already, create the storage network using
the address pool.
For example:
.. code-block:: none
(keystone_admin)$ system addrpool-list | grep storage-pool | awk '{print$2}' | xargs system network-add storage-net storage true
#. For each host in the system, do the following:
1. Lock the host.
.. code-block:: none
(keystone_admin)$ system host-lock <hostname>
2. Create an interface using the address pool.
For example:
.. code-block:: none
(keystone_admin)$ system host-if-modify -n storage0 -c platform --ipv4-mode static --ipv4-pool storage-pool controller-0 enp0s9
3. Assign the interface to the network.
For example:
.. code-block:: none
(keystone_admin)$ system interface-network-assign controller-0 storage0 storage-net
4. Unlock the system.
.. code-block:: none
(keystone_admin)$ system host-unlock <hostname>
.. _configuring-an-external-netapp-deployment-as-the-storage-backend-mod-localhost:
#. Configure the Netapp configurable parameters and run the provided
install\_netapp\_backend.yml Ansible playbook to enable connectivity to
Netapp as a storage backend for |prod|.
#. Provide Netapp backend configurable parameters in an overrides yaml
file.
You can make in-place changes to your existing localhost.yml file
or create another file in an alternative location; the alternative
file must also be named localhost.yml. In either case, you can
optionally use an Ansible vault named secrets.yml for sensitive data.
The following parameters are mandatory:
**ansible\_become\_pass**
Provide the admin password.
**netapp\_backends**
**name**
A name for the storage class.
**provisioner**
This value must be **netapp.io/trident**.
**backendType**
This value can be anything but must be the same as
storageDriverName below.
**version**
This value must be 1.
**storageDriverName**
This value can be anything but must be the same as
backendType above.
**managementLIF**
The management IP address for the backend logical interface.
**dataLIF**
The data IP address for the backend logical interface.
**svm**
The storage virtual machine type to use.
**username**
The username for authentication against the netapp backend.
**password**
The password for authentication against the netapp backend.
The following parameters are optional:
**trident\_setup\_dir**
Set a staging directory for generated configuration files. The
default is /tmp/trident.
**trident\_namespace**
Set this option to use an alternate Kubernetes namespace.
**trident\_rest\_api\_port**
Use an alternate port for the Trident REST API. The default is
8000.
**trident\_install\_extra\_params**
Add extra space-separated parameters when installing trident.
For complete listings of available parameters, see
`https://opendev.org/starlingx/ansible-playbooks/src/commit/d05785ffd9add6553662fcab43f30bf8d9f6d2e3/playbookconfig/src/playbooks/host_vars/netapp/default.yml
<https://opendev.org/starlingx/ansible-playbooks/src/commit/d05785ffd9add6553662fcab43f30bf8d9f6d2e3/playbookconfig/src/playbooks/host_vars/netapp/default.yml>`__
and
`https://opendev.org/starlingx/ansible-playbooks/src/commit/d05785ffd9add6553662fcab43f30bf8d9f6d2e3/playbookconfig/src/playbooks/roles/k8s-storage-backends/netapp/vars/main.yml
<https://opendev.org/starlingx/ansible-playbooks/src/commit/d05785ffd9add6553662fcab43f30bf8d9f6d2e3/playbookconfig/src/playbooks/roles/k8s-storage-backends/netapp/vars/main.yml>`__
The following example shows a minimal configuration in
localhost.yaml:
.. code-block:: none
ansible_become_pass: xx43U~a96DN*m.?
trident_setup_dir: /tmp/trident
netapp_k8s_storageclasses:
- metadata:
name: netapp-nas-backend
provisioner: netapp.io/trident
parameters:
backendType: "ontap-nas"
netapp_k8s_snapshotstorageclasses:
- metadata:
name: csi-snapclass
driver: csi.trident.netapp.io
deletionPolicy: Delete
netapp_backends:
- version: 1
storageDriverName: "ontap-nas"
backendName: "nas-backend"
managementLIF: "10.0.0.1"
dataLIF: "10.0.0.2"
svm: "svm_nfs"
username: "admin"
password: "secret"
This file is sectioned into **netapp\_k8s\_storageclasses**,
**netapp\_k8s\_snapshotstorageclasses**, and **netapp\_backends**.
You can add multiple backends and/or storage classes.
.. note::
To use IPv6 addressing, you must add the following to your configuration:
.. code-block:: none
trident_install_extra_params: "--use-ipv6"
For more information about configuration options, see
`https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/ontap.html
<https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/ontap.html>`__.
#. Run the playbook.
The following example uses the ``-e`` option to specify a customized
location for the localhost.yml file.
.. code-block:: none
# ansible-playbook /usr/share/ansible/stx-ansible/playbooks/install_netapp_backend.yml -e "override_files_dir=</home/sysadmin/mynetappconfig>"
Upon successful launch, there will be one Trident pod running on
each node, plus an extra pod for the REST API running on one of the
controller nodes.
#. Confirm that the pods launched successfully.
In an all-in-one simplex environment you will see pods similar to the
following:
.. code-block:: none
(keystone_admin)$ kubectl -n <tridentNamespace> get pods
NAME READY STATUS RESTARTS AGE
trident-csi-c4575c987-ww49n 5/5 Running 0 0h5m
trident-csi-hv5l7 2/2 Running 0 0h5m
.. rubric:: |postreq|
To configure a persistent volume claim for the Netapp backend, add the
appropriate storage-class name you set up in step :ref:`2
<configure-an-external-netapp-deployment-as-the-storage-backend>`
\(**netapp-nas-backend** in this example\) to the persistent volume
claim's yaml configuration file. For more information about this file, see
|usertasks-doc|: :ref:`Create Persistent Volume Claims
<kubernetes-user-tutorials-creating-persistent-volume-claims>`.
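For example, a minimal sketch of such a |PVC| configuration file, assuming
the **netapp-nas-backend** storage class from the example above; the claim
name and size are hypothetical:
.. code-block:: none

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: netapp-test-claim            # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                   # illustrative size
      storageClassName: netapp-nas-backend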
.. seealso::
- :ref:`Configure Netapps Using a Private Docker Registry
<configure-netapps-using-a-private-docker-registry>`

View File

@ -0,0 +1,22 @@
.. ucd1592237332728
.. _configure-netapps-using-a-private-docker-registry:
===================================================
Configure Netapps Using a Private Docker Registry
===================================================
Use the ``docker_registries`` parameter to pull from the local registry
rather than from public registries.
You must first push the files to the local registry.
.. xbooklink
Refer to the workflow and
yaml file formats described in |inst-doc|: :ref:`Populate a Private Docker
Registry from the Wind River Amazon Registry
<populate-a-private-docker-registry-from-the-wind-river-amazon-registry>`
and |inst-doc|: :ref:`Bootstrap from a Private Docker Registry
<bootstrap-from-a-private-docker-registry>`.
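The following is a minimal sketch of what such overrides might look like in
localhost.yml; the mirror URL and the set of registries shown are
assumptions, and the key layout follows the bootstrap overrides format,
which may differ slightly for this playbook:
.. code-block:: none

    docker_registries:
      quay.io:
        url: registry.local:9001/quay.io      # hypothetical local mirror
      docker.io:
        url: registry.local:9001/docker.io    # hypothetical local mirror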

View File

@ -0,0 +1,55 @@
.. oim1582827207220
.. _configure-the-internal-ceph-storage-backend:
===========================================
Configure the Internal Ceph Storage Backend
===========================================
This section provides steps to configure the internal Ceph storage backend.
Depending on the system type, |prod| can be configured with an internal
Ceph storage backend on controller nodes or on dedicated storage nodes.
.. rubric:: |prereq|
Unlock all controllers before you run the following commands to configure
the internal Ceph storage backend.
.. rubric:: |proc|
.. _configuring-the-internal-ceph-storage-backend-steps-xdm-tmz-vkb:
#. Run the following command:
.. code-block:: none
~(keystone_admin)$ system storage-backend-add ceph --confirmed
#. Wait for Ceph storage to be configured. Run the following command to
check if Ceph storage is configured:
.. code-block:: none
~(keystone_admin)$ system storage-backend-list
#. On a Standard configuration with Controller Storage, that is, where
Ceph OSDs are to be configured on the controller nodes, configure the
third Ceph monitor instance on a worker node, using the following
command:
.. code-block:: none
~(keystone_admin)$ system ceph-mon-add <worker_node>
.. note::
For Standard configuration with dedicated Storage, that is, where
Ceph OSDs are to be configured on dedicated Storage nodes, the
third Ceph monitor instance is configured by default on the first
storage node.
#. Configure Ceph OSDs. For more information, see :ref:`Provision
Storage on a Controller or Storage Host Using Horizon
<provision-storage-on-a-controller-or-storage-host-using-horizon>`.

View File

@ -0,0 +1,203 @@
.. jfg1552671545748
.. _controller-hosts-storage-on-controller-hosts:
===========================
Storage on Controller Hosts
===========================
The controller's root disk provides storage for the |prod| system
databases, system configuration files, local docker images, containers'
ephemeral filesystems, the Local Docker Registry container image store,
platform backup, and the system backup operations.
.. contents::
:local:
:depth: 1
Container local storage is derived from the cgts-vg volume group on the
root disk. You can add storage to the cgts-vg volume group by assigning a
partition or disk to it, making it larger. This allows you to increase the
size of the container local storage for the host; however, you cannot
assign it specifically to a non-root disk.
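For example, a sketch of assigning an unused partition or disk to cgts-vg
on controller-0 using the :command:`system host-pv-add` command described
later in this guide; the UUID is a placeholder and the host must be locked
first:
.. code-block:: none

    ~(keystone_admin)$ system host-pv-add controller-0 cgts-vg <partition_or_disk_uuid>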
On All-in-one Simplex, All-in-one Duplex, and Standard with controller
storage systems, each controller host requires at least one additional disk
to back container |PVCs|; that is, two disks in total, one of which is
configured as a Ceph |OSD|.
.. _controller-hosts-storage-on-controller-hosts-d94e57:
-----------------------
Root Filesystem Storage
-----------------------
Space on the root disk is allocated to provide filesystem storage.
You can increase the allotments for the following filesystems using the
Horizon Web interface or the |CLI|. The following commands are available to
increase various filesystem sizes: :command:`system controllerfs` and
:command:`system host-fs`.
.. _controller-hosts-storage-on-controller-hosts-d94e93:
------------------------
Synchronized Filesystems
------------------------
Synchronized filesystems ensure that files stored in several different
physical locations are up to date. The following commands can be used to
resize a DRBD-synced filesystem \(Database, Docker-distribution, Etcd,
Extension, Platform\) on controllers: :command:`controllerfs-list`,
:command:`controllerfs-modify`, and :command:`controllerfs-show` \(see the
example after the filesystem descriptions below\).
**Platform Storage**
This is the storage allotment for a variety of platform items including
the local helm repository, the StarlingX application repository, and
internal platform configuration data files.
**Database Storage**
The storage allotment for the platform's postgres database is used by
StarlingX, System Inventory, Keystone and Barbican.
Internal database storage is provided using DRBD-synchronized
partitions on the controller primary disks. The size of the database
grows with the number of system resources created by the system
administrator. This includes objects of all kinds such as hosts,
interfaces, and service parameters.
If you add a database filesystem or increase its size, you must also
increase the size of the backup filesystem.
**Docker-distribution Storage \(Local Docker Registry storage\)**
The storage allotment for container images stored in the local docker
registry. This storage is provided using a DRBD-synchronized partition
on the controller primary disk.
**Etcd Storage**
The storage allotment for the Kubernetes etcd database.
Internal database storage is provided using a DRBD-synchronized
partition on the controller primary disk. The size of the database
grows with the number of system resources created by the system
administrator and the users. This includes objects of all kinds such as
pods, services, and secrets.
**Ceph-mon**
Ceph-mon is the cluster monitor daemon for the Ceph distributed file
system that is used for Ceph monitors to synchronize.
**Extension Storage**
This filesystem is reserved for future use. This storage is implemented
on a DRBD-synchronized partition on the controller primary disk.
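For example, a sketch of resizing the database filesystem with
:command:`controllerfs-modify`; the size value is illustrative:
.. code-block:: none

    ~(keystone_admin)$ system controllerfs-modify database=30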
.. _controller-hosts-storage-on-controller-hosts-d94e219:
----------------
Host Filesystems
----------------
The following host filesystem commands can be used to resize non-DRBD
filesystems \(Backup, Docker, Kubelet, and Scratch\) on a per-host basis;
they do not apply to all hosts of a given personality type:
:command:`host-fs-list`, :command:`host-fs-modify`, and :command:`host-fs-show`.
The :command:`host-fs-modify` command increases the storage configuration
for the filesystem specified on a per-host basis. For example, the
following command increases the scratch filesystem size to 10 GB:
.. code-block:: none
~(keystone_admin)$ system host-fs-modify controller-1 scratch=10
**Backup Storage**
This is the storage allotment for backup operations, sized so that
backup = \(2 \* database\) + platform size.
**Docker Storage**
This storage allotment is for ephemeral filesystems for containers on
the host, and for docker image cache.
**Kubelet Storage**
This storage allotment is for ephemeral storage related to
Kubernetes pods on this host.
**Scratch Storage**
This storage allotment is used by the host as a temp area for a variety
of miscellaneous transient host operations.
**Logs Storage**
This is the storage allotment for log data. This filesystem is not
resizable. Logs are rotated within the fixed space allocated.
Replacement root disks for a reinstalled controller should be the same size
or larger to ensure that existing allocation sizes for filesystems will fit
on the replacement disk.
.. _controller-hosts-storage-on-controller-hosts-d94e334:
-------------------------------------------------
Persistent Volume Claims Storage \(Ceph Cluster\)
-------------------------------------------------
For controller-storage systems, additional disks on the controller,
configured as Ceph OSDs, provide a small Ceph cluster for backing
Persistent Volume Claims storage for Containers.
.. _controller-hosts-storage-on-controller-hosts-d94e345:
-----------
Replication
-----------
On AIO-SX systems, replication is done between OSDs within the host.
The following three replication factors are supported:
**1**
This is the default, and requires one or more OSD disks.
**2**
This requires two or more OSD disks.
**3**
This requires three or more OSD disks.
On AIO-DX systems, replication is between the two controllers. Only one replication
group is supported and additional controllers cannot be added.
The following replication factor is supported:
**2**
There can be any number of OSDs on each controller, with a minimum of
one each. It is recommended that you use the same number and same size
OSD disks on the controllers.
.. seealso::
- :ref:`About Persistent Volume Support
<about-persistent-volume-support>`
- :ref:`Ceph Storage Pools <ceph-storage-pools>`
- :ref:`Provision Storage on a Controller or Storage Host Using
Horizon
<provision-storage-on-a-controller-or-storage-host-using-horizon>`

View File

@ -0,0 +1,43 @@
.. yam1561029988526
.. _default-behavior-of-the-rbd-provisioner:
=======================================
Default Behavior of the RBD Provisioner
=======================================
The default Ceph Cluster configuration set up during |prod| installation
contains a single storage tier, storage, containing all the |OSDs|.
The default rbd-provisioner service runs within the kube-system namespace
and has a single storage class, 'general', which is configured to:
.. _default-behavior-of-the-rbd-provisioner-ul-zg2-r2q-43b:
- use the default 'storage' ceph storage tier
- use a **kube-rbd** ceph pool, and
- only support PVC requests from the following namespaces: kube-system, default and kube-public.
The full details of the rbd-provisioner configuration can be viewed with
the following commands:
.. code-block:: none
~(keystone_admin)$ system helm-override-list platform-integ-apps
This command provides the chart names and the overrides namespaces.
.. code-block:: none
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
See :ref:`Create Persistent Volume Claims
<storage-configuration-create-persistent-volume-claims>` and
:ref:`Mount Persistent Volumes in Containers
<storage-configuration-mount-persistent-volumes-in-containers>` for
details on how to create and mount a PVC from this storage class.

View File

@ -0,0 +1,41 @@
.. ols1590583073449
.. _delete-a-partition:
==================
Delete a Partition
==================
You can use the :command:`system host-disk-partition-delete` command to
delete a partition.
.. rubric:: |context|
You can delete only the last partition on a disk. You cannot delete a
partition that is in use by a physical volume.
The syntax for the command is:
.. code-block:: none
system host-disk-partition-delete <host> <partition>
where:
**<host>**
is the host name or ID.
**<partition>**
is the partition device path or UUID.
For example, to delete a partition with the UUID
9f93c549-e26c-4d4c-af71-fb84e3fcae63 from compute-1, do the following.
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-delete compute-1 9f93c549-e26c-4d4c-af71-fb84e3fcae63
To view the progress of the deletion, use the :command:`system
host-disk-partition-list` command. The progress is shown in the status
column.
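For example, a sketch of listing the partitions that remain on compute-1
after the deletion:
.. code-block:: none

    ~(keystone_admin)$ system host-disk-partition-list compute-1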

View File

@ -0,0 +1,55 @@
.. cdw1590589749382
.. _delete-a-physical-volume:
========================
Delete a Physical Volume
========================
You can delete a physical volume using the :command:`system host-pv-delete`
command.
.. rubric:: |prereq|
.. _deleting-a-physical-volume-ul-zln-ssc-vlb:
- You must lock a host before you can modify its settings.
.. code-block:: none
~(keystone_admin)$ system host-lock <hostname>
- A suitable local volume group must exist on the host. For more
information, see :ref:`Work with Physical Volumes
<work-with-physical-volumes>`.
- An unused disk or partition must be available on the host. For more
information about partitions, see :ref:`Work with Disk Partitions
<work-with-disk-partitions>`.
.. rubric:: |context|
The syntax of the command is:
.. code-block:: none
system host-pv-delete <hostname> <uuid>
where:
**<hostname>**
is the name or ID of the host.
**<uuid>**
is the uuid of the physical volume.
For example, to delete a physical volume from compute-1, use the following
command.
.. code-block:: none
~(keystone_admin)$ system host-pv-delete compute-1 9f93c549-e26c-4d4c-af71-fb84e3fcae63

View File

@ -0,0 +1,44 @@
.. sgc1552679032825
.. _disk-naming-conventions:
=======================
Disk Naming Conventions
=======================
|prod| uses persistent disk names to simplify hardware management.
In addition to the device node identification commonly used in Linux
systems \(for example, **/dev/sda**\), |prod| identifies hardware storage
devices by physical location \(device path\). This ensures that the system
can always identify a given disk based on its location, even if its device
node enumeration changes because of a hardware reconfiguration. This helps
to avoid the need for a system re-installation after a change to the disk
complement on a host.
In the Horizon Web interface and in |CLI| output, both identifications are
shown. For example, the output of the :command:`system host-disk-show`
command includes both the **device\_node** and the **device\_path**.
.. code-block:: none
~(keystone_admin)$ system host-disk-show controller-0 \
1722b081-8421-4475-a6e8-a26808cae031
+-------------+--------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------+
| device_node | /dev/sda |
| device_num | 2048 |
| device_type | HDD |
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| size_gib | 120 |
| rpm | Undetermined |
| serial_id | VB77269fb1-ae169607 |
| uuid | 1722b081-8421-4475-a6e8-a26808cae031 |
| ihost_uuid | 78c46728-4108-4b35-8081-bed1bd4cba35 |
| istor_uuid | None |
| ipv_uuid | 2a7e7aad-6da5-4a2d-957c-058d37eace1c |
| created_at | 2017-05-05T07:56:02.969888+00:00 |
| updated_at | 2017-05-08T12:27:04.437818+00:00 |
+-------------+--------------------------------------------+


@ -0,0 +1,204 @@
.. csl1561030322454
.. _enable-additional-storage-classes:
=================================
Enable Additional Storage Classes
=================================
Additional storage classes can be added to the default rbd-provisioner
service.
.. rubric:: |context|
Some reasons for adding an additional storage class include:
.. _enable-additional-storage-classes-ul-nz1-r3q-43b:
- managing Ceph resources for particular namespaces in a separate Ceph
pool, simply for Ceph partitioning reasons
- using an alternate Ceph Storage Tier, for example, one with faster drives
To enable an additional storage class, a modification to the configuration
\(helm overrides\) of the **rbd-provisioner** service is required.
The following example illustrates adding a second storage class to be
used by a specific namespace.
.. note::
Due to limitations with templating and merging of overrides, the entire
storage class must be redefined in the override when updating specific
values.
.. rubric:: |proc|
#. List installed helm chart overrides for the platform-integ-apps.
.. code-block:: none
~(keystone_admin)$ system helm-override-list platform-integ-apps
+------------------+----------------------+
| chart name | overrides namespaces |
+------------------+----------------------+
| ceph-pools-audit | [u'kube-system'] |
| helm-toolkit | [] |
| rbd-provisioner | [u'kube-system'] |
+------------------+----------------------+
#. Review existing overrides for the rbd-provisioner chart. You will refer
to this information in the following step.
.. code-block:: none
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
#. Create an overrides yaml file defining the new namespaces.
In this example we will create the file
/home/sysadmin/update-namespaces.yaml with the following content:
.. code-block:: none
classes:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
name: general
pool_name: kube-rbd
replication: 1
userId: ceph-pool-kube-rbd
userSecretName: ceph-pool-kube-rbd
- additionalNamespaces: [ new-sc-app ]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
name: special-storage-class
pool_name: new-sc-app-pool
replication: 1
userId: ceph-pool-new-sc-app
userSecretName: ceph-pool-new-sc-app
#. Apply the overrides file to the chart.
.. code-block:: none
~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml \
platform-integ-apps rbd-provisioner kube-system
+----------------+-----------------------------------------+
| Property | Value |
+----------------+-----------------------------------------+
| name | rbd-provisioner |
| namespace | kube-system |
| user_overrides | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | - new-app |
| | - new-app2 |
| | - new-app3 |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | - additionalNamespaces: |
| | - new-sc-app |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: special-storage-class |
| | pool_name: new-sc-app-pool |
| | replication: 1 |
| | userId: ceph-pool-new-sc-app |
| | userSecretName: ceph-pool-new-sc-app |
+----------------+-----------------------------------------+
#. Confirm that the new overrides have been applied to the chart.
The following output has been edited for brevity.
.. code-block:: none
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
+--------------------+-----------------------------------------+
| Property | Value |
+--------------------+-----------------------------------------+
| combined_overrides | ... |
| | |
| name | |
| namespace | |
| system_overrides | ... |
| | |
| | |
| user_overrides | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | - new-app |
| | - new-app2 |
| | - new-app3 |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | - additionalNamespaces: |
| | - new-sc-app |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: special-storage-class |
| | pool_name: new-sc-app-pool |
| | replication: 1 |
| | userId: ceph-pool-new-sc-app |
| | userSecretName: ceph-pool-new-sc-app |
+--------------------+-----------------------------------------+
#. Apply the overrides.
#. Run the :command:`application-apply` command.
.. code-block:: none
~(keystone_admin)$ system application-apply platform-integ-apps
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | True |
| app_version | 1.0-5 |
| created_at | 2019-05-26T06:22:20.711732+00:00 |
| manifest_file | manifest.yaml |
| manifest_name | platform-integration-manifest |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2019-05-26T22:50:54.168114+00:00 |
+---------------+----------------------------------+
#. Monitor progress using the :command:`application-list` command.
.. code-block:: none
~(keystone_admin)$ system application-list
+-------------+---------+---------------+---------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+---------+-----------+
| platform- | 1.0-8 | platform- | manifest.yaml | applied | completed |
| integ-apps | | integration- | | | |
| | | manifest | | | |
+-------------+---------+---------------+---------------+---------+-----------+
You can now create and mount persistent volumes from the rbd-provisioner's
new **special-storage-class** storage class from within the
**new-sc-app** application-specific namespace.
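For example, a minimal |PVC| that uses the new storage class might look like
the following. The claim name **test-sc-claim** is illustrative; the storage
class and namespace names are taken from the overrides above.
.. code-block:: none

   ~(keystone_admin)$ cat <<EOF > special-claim.yaml
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: test-sc-claim
     namespace: new-sc-app
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: special-storage-class
   EOF
   ~(keystone_admin)$ kubectl apply -f special-claim.yaml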


@ -0,0 +1,220 @@
.. vqw1561030204071
.. _enable-pvc-support-in-additional-namespaces:
===========================================
Enable PVC Support in Additional Namespaces
===========================================
The default general **rbd-provisioner** storage class is enabled for the
default, kube-system, and kube-public namespaces. To enable an additional
namespace, for example an application-specific namespace, a
modification to the configuration \(helm overrides\) of the
**rbd-provisioner** service is required.
.. rubric:: |context|
The following example illustrates the configuration of three additional
application-specific namespaces to access the rbd-provisioner's **general**
storage class.
.. note::
Due to limitations with templating and merging of overrides, the entire
storage class must be redefined in the override when updating specific
values.
.. rubric:: |proc|
#. List installed helm chart overrides for the platform-integ-apps.
.. code-block:: none
~(keystone_admin)$ system helm-override-list platform-integ-apps
+------------------+----------------------+
| chart name | overrides namespaces |
+------------------+----------------------+
| ceph-pools-audit | [u'kube-system'] |
| helm-toolkit | [] |
| rbd-provisioner | [u'kube-system'] |
+------------------+----------------------+
#. Review existing overrides for the rbd-provisioner chart. You will refer
to this information in the following step.
.. code-block:: none
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
+--------------------+--------------------------------------------------+
| Property | Value |
+--------------------+--------------------------------------------------+
| combined_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-admin |
| | monitors: |
| | - 192.168.204.4:6789 |
| | - 192.168.204.2:6789 |
| | - 192.168.204.3:6789 |
| | - 192.168.204.60:6789 |
| | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 2 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | global: |
| | defaultStorageClass: general |
| | replicas: 2 |
| | |
| name | rbd-provisioner |
| namespace | kube-system |
| system_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-admin |
| | monitors: ['192.168.204.4:6789', |
| |'192.168.204.2:6789', '192.168.204.3:6789', |
| | '192.168.204.60:6789'] |
| | classes: |
| | - additionalNamespaces: [default, kube-public] |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 2 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | global: {defaultStorageClass: general, replicas: |
| | 2} |
| | |
| user_overrides | None |
+--------------------+--------------------------------------------------+
#. Create an overrides yaml file defining the new namespaces.
In this example we will create the file
/home/sysadmin/update-namespaces.yaml with the following content:
.. code-block:: none
classes:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
name: general
pool_name: kube-rbd
replication: 1
userId: ceph-pool-kube-rbd
userSecretName: ceph-pool-kube-rbd
#. Apply the overrides file to the chart.
.. code-block:: none
~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml \
platform-integ-apps rbd-provisioner kube-system
+----------------+-----------------------------------------+
| Property | Value |
+----------------+-----------------------------------------+
| name | rbd-provisioner |
| namespace | kube-system |
| user_overrides | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | - new-app |
| | - new-app2 |
| | - new-app3 |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
+----------------+-----------------------------------------+
#. Confirm that the new overrides have been applied to the chart.
The following output has been edited for brevity.
.. code-block:: none
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
+--------------------+----------------------------------------+
| Property | Value |
+--------------------+----------------------------------------+
| combined_overrides | ... |
| | |
| name | |
| namespace | |
| system_overrides | ... |
| | |
| | |
| user_overrides | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | - new-app |
| | - new-app2 |
| | - new-app3 |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
+--------------------+----------------------------------------+
#. Apply the overrides.
#. Run the :command:`application-apply` command.
.. code-block:: none
~(keystone_admin)$ system application-apply platform-integ-apps
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | True |
| app_version | 1.0-5 |
| created_at | 2019-05-26T06:22:20.711732+00:00 |
| manifest_file | manifest.yaml |
| manifest_name | platform-integration-manifest |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2019-05-26T22:27:26.547181+00:00 |
+---------------+----------------------------------+
#. Monitor progress using the :command:`application-list` command.
.. code-block:: none
~(keystone_admin)$ system application-list
+-------------+---------+---------------+---------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+---------+-----------+
| platform- | 1.0-5 | platform | manifest.yaml | applied | completed |
| integ-apps | | -integration | | | |
| | | -manifest | | | |
+-------------+---------+---------------+---------------+---------+-----------+
You can now create and mount PVCs from the default
**rbd-provisioner's general** storage class, from within these
application-specific namespaces.
#. Apply the secret to the new **rbd-provisioner** namespace.
.. code-block:: none
~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n default -o yaml | grep -v '^\s*namespace:\s' | kubectl apply -n <namespace> -f -
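To confirm that the secret is now present in the target namespace \(the
namespace **new-app** is used here as an illustrative example\):
.. code-block:: none

   ~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n new-app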


@ -0,0 +1,22 @@
.. euf1590523814334
.. _identify-space-available-for-partitions:
=======================================
Identify Space Available for Partitions
=======================================
To identify space available for partitions on a host, use the
:command:`system host-disk-list` command. For example, to show space
available on compute-1:
.. code-block:: none
~(keystone_admin)$ system host-disk-list compute-1
+--------------------------------------+------------+...+---------------+...
| uuid |device_node | | available_gib |...
+--------------------------------------+------------+...+---------------+...
| 6a0cadea-58ae-406f-bedf-b25ba82f0488 | /dev/sda |...| 32698 |...
| fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc | /dev/sdb |...| 9215 |...
+--------------------------------------+------------+...+---------------+...


@ -0,0 +1,104 @@
.. ndt1552678803575
.. _increase-controller-filesystem-storage-allotments-using-horizon:
===============================================================
Increase Controller Filesystem Storage Allotments Using Horizon
===============================================================
Using the Horizon Web interface, you can increase the allotments for
controller-based storage.
.. rubric:: |context|
If you prefer, you can use the |CLI|. See :ref:`Increase Controller
Filesystem Storage Allotments Using the CLI
<increase-controller-filesystem-storage-allotments-using-the-cli>`.
The requested changes are checked against available space on the affected
disks; if there is not enough, the changes are disallowed.
To provide more space for the controller filesystem, you can replace the
primary disk.
With the exception of the Ceph monitor space, you can resize logical
volumes of the filesystem without doing a reboot. Resizing the Ceph monitor
requires a reboot.
.. caution::
Decreasing the filesystem size is not supported.
For more about controller-based storage, see |stor-doc|: :ref:`Storage on
Controller Hosts <controller-hosts-storage-on-controller-hosts>`.
.. rubric:: |prereq|
Before changing storage allotments, prepare as follows:
.. _increase-controller-filesystem-storage-allotments-using-horizon-ul-p3d-2h5-vp:
- Record the current configuration settings in case they need to be
restored \(for example, because of an unexpected interruption during
changes to the system configuration\). Consult the configuration plan for
your system.
- Ensure that the BIOS boot settings for the host are appropriate for a
reinstall operation.
- If necessary, install replacement disks in the controllers.
If you do not need to replace disks, you can skip this step. Be sure to
include the headroom required on the primary disk.
To replace disks in the controllers, see |node-doc|: :ref:`Change
Hardware Components for a Controller Host
<changing-hardware-components-for-a-controller-host>`.
- Add and assign enough disk partition space to accommodate the increased
filesystem size.
.. rubric:: |proc|
#. Edit the disk storage allotments.
#. In the |prod| Horizon interface, open the System Configuration pane.
The System Configuration pane is available from **Admin** \>
**Platform** \> **System Configuration** in the left-hand pane.
#. Select the **Controller Filesystem** tab.
The Controller Filesystem page appears, showing the currently
defined storage allotments.
.. image:: ../figures/ele1569534467005.jpeg
#. Click **Edit Filesystem**.
The Edit Controller Filesystem dialog box appears.
.. image:: ../figures/ngh1569534630524.jpeg
#. Replace the storage allotments as required.
#. Click **Save**.
This raises major alarms against the controllers \(**250.001
Configuration out-of-date**\). You can view the alarms on the Fault
Management page. In addition, the status **Config out-of-date** is
shown for the controllers in the Hosts list.
#. Confirm that the **250.001 Configuration out-of-date** alarms are
cleared for both controllers as the configuration is deployed in the
background.
.. rubric:: |postreq|
After making these changes, ensure that the configuration plan for your
system is updated with the new storage allotments and disk sizes.


@ -0,0 +1,103 @@
.. xuj1552678789246
.. _increase-controller-filesystem-storage-allotments-using-the-cli:
===============================================================
Increase Controller Filesystem Storage Allotments Using the CLI
===============================================================
You can use the |CLI| to list or increase the allotments for controller-based
storage at any time after installation.
.. rubric:: |context|
For more information about increasing filesystem allotments, or to use the
Horizon Web interface, see :ref:`Increase Controller Filesystem Storage
Allotments Using Horizon
<increase-controller-filesystem-storage-allotments-using-horizon>`.
.. caution::
Decreasing the filesystem size is not supported, and can result in
synchronization failures requiring system re-installation. Do not
attempt to decrease the size of the filesystem.
.. rubric:: |prereq|
Before proceeding, review the prerequisites given for :ref:`Increase
Controller Filesystem Storage Allotments Using Horizon
<increase-controller-filesystem-storage-allotments-using-horizon>`.
.. rubric:: |proc|
.. _increase-controller-filesystem-storage-allotments-using-the-cli-steps-ims-sxx-mcb:
#. To review the existing storage configuration, use the
:command:`system controllerfs-list` command.
.. code-block:: none
~(keystone_admin)$ system controllerfs-list
+-------------+-----------+------+--------------+------------+-----------+
| UUID | FS Name | Size | Logical | Replicated | State |
| | | in | Volume | | |
| | | GiB | | | |
| | | | | | |
+-------------+-----------+------+--------------+------------+-----------+
| aa9c7eab... | database | 10 | pgsql-lv | True | available |
| | | | | | |
| 173cbb02... | docker- | 16 | docker | | |
| | | | distribution | True | available |
| | | |-lv | | |
| | | | | | |
| 448f77b9... | etcd | 5 | etcd-lv | True | available |
| | | | | | |
| 9eadf06a... | extension | 1 | extension-lv | True | available |
| | | | | | |
| afcb9f0e... | platform | 10 | platform-lv | True | available |
+-------------+-----------+------+--------------+------------+-----------+
.. note::
The values shown by :command:`system controllerfs-list` are not
adjusted for space used by the filesystem, and therefore may not
agree with the output of the Linux :command:`df` command. Also,
they are rounded compared to the :command:`df` output.
#. Modify the backup filesystem size on controller-0.
.. code-block:: none
~(keystone_admin)$ system host-fs-modify controller-0 backup=35
+-------------+---------+---------+----------------+
| UUID | FS Name | Size in | Logical Volume |
| | | GiB | |
+-------------+---------+---------+----------------+
| bf0ef915... | backup | 35 | backup-lv |
| e8b087ea... | docker | 30 | docker-lv |
| 4cac1020... | kubelet | 10 | kubelet-lv |
| 9c5a53a8... | scratch | 8 | scratch-lv |
+-------------+---------+---------+----------------+
#. On a non-AIO-Simplex system, modify the backup filesystem size on
controller-1.
The backup filesystem is not replicated across controllers. You must
repeat the previous step on the other controller.
For example:
.. code-block:: none
~(keystone_admin)$ system host-fs-modify controller-1 backup=35
+-------------+---------+------+----------------+
| UUID | FS Name | Size | Logical Volume |
| | | in | |
| | | GiB | |
+-------------+---------+------+----------------+
| 45f22520... | backup | 35 | backup-lv |
| 173cbb02... | docker | 30 | docker-lv |
| 4120d512... | kubelet | 10 | kubelet-lv |
| 8885ad63... | scratch | 8 | scratch-lv |
+-------------+---------+------+----------------+
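To increase one of the replicated controller filesystems reported by
:command:`system controllerfs-list`, you can use the
:command:`system controllerfs-modify` command. The following is a sketch
only: the target size of 20 GiB for the database filesystem is illustrative,
must be larger than the current size, and must fit within the free space
available in **cgts-vg**. The exact syntax may vary between releases.
.. code-block:: none

   ~(keystone_admin)$ system controllerfs-modify database=20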


@ -0,0 +1,112 @@
.. dvn1552678726609
.. _increase-the-size-for-lvm-local-volumes-on-controller-filesystems:
=================================================================
Increase the Size for LVM Local Volumes on Controller Filesystems
=================================================================
Controller filesystems are allocated as LVM local volumes inside the
**cgts-vg** volume group. You can increase controller filesystem storage
inside the **cgts-vg** volume group by using the |CLI|, or the Horizon Web
interface.
.. rubric:: |context|
To provision filesystem storage, enough free disk space has to be available
in this volume group. You can increase available space for provisioning by
creating a partition and assigning it to **cgts-vg** volume group. This
partition can be created on the root disk or on a different disk of your
choice. In |prod-long|, Simplex or Duplex systems that use a dedicated disk
for **nova-local**, some root disk space reserved for **nova-local** is
unused. You can recover this space for use by the **cgts-vg** volume group
to allow for controller filesystem expansion.
For convenience, this operation is permitted on an unlocked controller.
.. note::
Using more than one disk for **cgts-vg** increases the likelihood of a
disk failure affecting the volume group. If any disk in the **cgts-vg**
volume group fails, the disk must be replaced and the node must be
reinstalled. It is strongly recommended to limit **cgts-vg** to the root disk.
.. note::
The partition should be the same size on both controllers, otherwise
only the smallest common denominator size can be provisioned from
**cgts-vg**.
.. caution::
Once the **cgts-vg** partition is added, it cannot be removed.
The following example is used for provisioning **cgts-vg** on a root disk.
The default **rootfs** device is **/dev/sda**.
.. rubric:: |proc|
#. Check the free space on the **rootfs** by using the following command:
.. code-block:: none
~(keystone_admin)$ system host-disk-list 1
#. Create a new partition on **rootfs**, for example:
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-add -t lvm_phys_vol controller-0 /dev/sda 22
+---------------+------------------------------------------------+
| Property | Value |
+---------------+------------------------------------------------+
| device_path |/dev/disk/by-path/pci-0000:00:0d.0-ata-1.0-part7|
| device_node | /dev/sda7 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 22528 |
| uuid | 994d7efb-6ac1-4414-b4ef-ae3335dd73c7 |
| ihost_uuid | 75ea78b6-62f0-4821-b713-2618f0d5f834 |
| idisk_uuid | 685bee31-45de-4951-a35c-9159bd7d1295 |
| ipv_uuid | None |
| status | Creating |
| created_at | 2020-07-30T21:29:04.014193+00:00 |
| updated_at | None |
+---------------+------------------------------------------------+
#. Check for free disk space on the new partition, once it is created.
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-list 1
#. Assign the unused partition on **controller-0** as a physical volume to
the **cgts-vg** volume group.
.. code-block:: none
~(keystone_admin)$ system host-pv-add controller-0 cgts-vg /dev/sda
#. Assign the unused partition on **controller-1** as a physical volume to
the **cgts-vg** volume group. You can **swact** the hosts and repeat the
procedure on **controller-1**.
.. code-block:: none
~(keystone_admin)$ system host-pv-add controller-1 cgts-vg /dev/sda
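To verify that the new physical volume has been added to the **cgts-vg**
volume group, you can list the physical volumes and show the volume group
details on each controller. For example:
.. code-block:: none

   ~(keystone_admin)$ system host-pv-list controller-0
   ~(keystone_admin)$ system host-lvg-show controller-0 cgts-vg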
.. rubric:: |postreq|
After increasing the **cgts-vg** volume size, you can provision the
filesystem storage. For more information about increasing filesystem
allotments using the CLI, or the Horizon Web interface, see:
.. _increase-the-size-for-lvm-local-volumes-on-controller-filesystems-ul-mxm-f1c-nmb:
- :ref:`Increase Controller Filesystem Storage Allotments Using Horizon
<increase-controller-filesystem-storage-allotments-using-horizon>`
- :ref:`Increase Controller Filesystem Storage Allotments Using the CLI
<increase-controller-filesystem-storage-allotments-using-the-cli>`


@ -0,0 +1,62 @@
.. gnn1590581447913
.. _increase-the-size-of-a-partition:
================================
Increase the Size of a Partition
================================
You can increase the size of a partition using the :command:`system
host-disk-partition-modify` command.
.. rubric:: |context|
You can modify only the last partition on a disk \(indicated by **part** in
the device path; for example,
``/dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part6``\).
You cannot decrease the size of a partition.
The syntax for the command is:
.. code-block:: none
system host-disk-partition-modify -s <size> <host> <partition>
where:
**<size>**
is the new partition size in MiB.
**<host>**
is the host name or ID.
**<partition>**
is the partition device path or UUID.
For example, to change the size of a partition on compute-1 to 726 MiB, do
the following:
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-modify -s 726 compute-1 a259e898-6390-44ba-a750-e0cb1579d8e0
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part6 |
| device_node | /dev/sdb6 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | LVM Physical Volume |
| start_mib | 512 |
| end_mib | 12545 |
| size_mib | 726 |
| uuid | a259e898-6390-44ba-a750-e0cb1579d8e0 |
| ihost_uuid | 3b315241-d54f-499b-8566-a6ed7d2d6b39 |
| idisk_uuid | fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc |
| ipv_uuid | None |
| status | Modifying |
| created_at | 2017-09-08T19:10:27.506768+00:00 |
| updated_at | 2017-09-08T19:15:06.016996+00:00 |
+-------------+--------------------------------------------------+
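Once the **Modifying** status clears, you can confirm the new size by listing
the partitions on the host. For example:
.. code-block:: none

   ~(keystone_admin)$ system host-disk-partition-list compute-1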


@ -0,0 +1,106 @@
.. vgr1561030583228
.. _install-additional-rbd-provisioners:
===================================
Install Additional RBD Provisioners
===================================
You can launch additional dedicated rbd-provisioners to support specific
applications using dedicated pools, storage classes, and namespaces.
.. rubric:: |context|
This can be useful, for example, to allow an application to have control
over its own persistent volume provisioner, that is, to manage the Ceph
pool, storage tier, allowed namespaces, and so on, without requiring the
Kubernetes admin to modify the default rbd-provisioner service in the
kube-system namespace.
This procedure uses standard Helm mechanisms to install a second
rbd-provisioner.
.. rubric:: |proc|
#. Capture a list of monitors.
The list is stored in the environment variable ``MON_LIST`` and used in
the following step.
.. code-block:: none
~(keystone_admin)$ MON_LIST=$(ceph mon dump 2>&1 | awk /^[0-2]:/'{print $2}' | awk -F'/' '{print " - "$1}')
#. Create an overrides yaml file defining the new provisioner.
In this example we will create the file
/home/sysadmin/my-second-provisioner-overrides.yaml.
.. code-block:: none
~(keystone_admin)$ cat <<EOF > /home/sysadmin/my-second-provisioner-overrides.yaml
global:
adminId: admin
adminSecretName: ceph-admin
name: 2nd-provisioner
provisioner_name: "ceph.com/2nd-rbd"
classdefaults:
monitors:
${MON_LIST}
classes:
- name: 2nd-storage
pool_name: another-pool
chunk_size: 64
crush_rule_name: storage_tier_ruleset
replication: 1
userId: 2nd-user-secret
userSecretName: 2nd-user-secret
rbac:
clusterRole: 2nd-provisioner
clusterRoleBinding: 2nd-provisioner
role: 2nd-provisioner
roleBinding: 2nd-provisioner
serviceAccount: 2nd-provisioner
EOF
#. Install the chart.
.. code-block:: none
~(keystone_admin)$ helm upgrade --install my-2nd-provisioner stx-platform/rbd-provisioner --namespace=isolated-app --values=/home/sysadmin/my-second-provisioner-overrides.yaml
Release "my-2nd-provisioner" does not exist. Installing it now.
NAME: my-2nd-provisioner
LAST DEPLOYED: Mon May 27 05:04:51 2019
NAMESPACE: isolated-app
STATUS: DEPLOYED
...
.. note::
Helm automatically created the namespace **isolated-app** while
installing the chart.
#. Confirm that **my-2nd-provisioner** has been deployed.
.. code-block:: none
~(keystone_admin)$ helm list -a
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
my-2nd-provisioner 1 Mon May 27 05:04:51 2019 DEPLOYED rbd-provisioner-0.1.0 isolated-app
my-app3 1 Sun May 26 22:52:16 2019 DEPLOYED mysql-1.1.1 5.7.14 new-app3
my-new-sc-app 1 Sun May 26 23:11:37 2019 DEPLOYED mysql-1.1.1 5.7.14 new-sc-app
my-release 1 Sun May 26 22:31:08 2019 DEPLOYED mysql-1.1.1 5.7.14 default
...
#. Confirm that the **2nd-storage** storage class was created.
.. code-block:: none
~(keystone_admin)$ kubectl get sc --all-namespaces
NAME PROVISIONER AGE
2nd-storage ceph.com/2nd-rbd 61s
general (default) ceph.com/rbd 6h39m
special-storage-class ceph.com/rbd 5h58m
You can now create and mount PVCs from the new rbd-provisioner's
**2nd-storage** storage class, from within the **isolated-app**
namespace.
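After creating a |PVC| in the **isolated-app** namespace that specifies
``storageClassName: 2nd-storage`` \(for example, a claim named
**test-2nd-claim**; the name is illustrative\), you can verify that it binds:
.. code-block:: none

   ~(keystone_admin)$ kubectl get pvc -n isolated-app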


@ -0,0 +1,39 @@
.. hxb1590524383019
.. _list-partitions:
===============
List Partitions
===============
To list partitions, use the :command:`system host-disk-partition-list`
command.
.. rubric:: |context|
The command has the following format:
.. code-block:: none
system host-disk-partition-list [--nowrap] [--disk [disk_uuid]] <host>
where:
**<host>**
is the hostname or ID.
For example, run the following command to list the partitions on a
compute-1 disk.
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-list --disk fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc compute-1
+------...+------------------+-------------+...+----------+--------+
| uuid | device_path | device_node | | size_mib | status |
+------...+------------------+-------------+...+----------+--------+
| 15943...| ...ata-2.0-part1 | /dev/sdb1 |...| 1024 | In-Use |
| 63440...| ...ata-2.0-part2 | /dev/sdb2 |...| 10240 | In-Use |
| a4aa3...| ...ata-2.0-part3 | /dev/sdb3 |...| 10240 | In-Use |
+------...+------------------+-------------+...+----------+--------+


@ -0,0 +1,27 @@
.. rnd1590588857064
.. _list-physical-volumes:
=====================
List Physical Volumes
=====================
You can list physical volumes using the :command:`system host-pv-list` command.
.. rubric:: |context|
The syntax of the command is:
.. code-block:: none
system host-pv-list <hostname>
where **<hostname>** is the name or ID of the host.
For example, to list physical volumes on compute-1, do the following:
.. code-block:: none
~(keystone_admin)$ system host-pv-list compute-1


@ -0,0 +1,63 @@
.. rtm1590585833668
.. _local-volume-groups-cli-commands:
====================================
CLI Commands for Local Volume Groups
====================================
You can use CLI commands to manage local volume groups.
.. _local-volume-groups-cli-commands-simpletable-kfn-qwk-nx:
.. table::
:widths: auto
+-------------------------------------------------------+-------------------------------------------------------+
| Command Syntax | Description |
+=======================================================+=======================================================+
| .. code-block:: none | List local volume groups. |
| | |
| system host-lvg-list <hostname> | |
+-------------------------------------------------------+-------------------------------------------------------+
| .. code-block:: none | Show details for a particular local volume group. |
| | |
| system host-lvg-show <hostname> <groupname> | |
+-------------------------------------------------------+-------------------------------------------------------+
| .. code-block:: none | Add a local volume group. |
| | |
| system host-lvg-add <hostname> <groupname> | |
+-------------------------------------------------------+-------------------------------------------------------+
| .. code-block:: none | Delete a local volume group. |
| | |
| system host-lvg-delete <hostname> <groupname> | |
+-------------------------------------------------------+-------------------------------------------------------+
| .. code-block:: none | Modify a local volume group. |
| | |
| system host-lvg-modify [-b <instance_backing>] | |
| [-c <concurrent_disk_operations>] [-l <lvm_type>] | |
| <hostname> <groupname> | |
| | |
+-------------------------------------------------------+-------------------------------------------------------+
where:
**<instance\_backing>**
is the storage method for the local volume group \(image or remote\).
The remote option is valid only for systems with dedicated storage.
**<concurrent\_disk\_operations>**
is the number of I/O intensive disk operations, such as glance image
downloads or image format conversions, that can occur at the same time.
**<lvm\_type>**
is the provisioning type for VM volumes \(thick or thin\). The default
value is thin.
**<hostname>**
is the name or ID of the host.
**<groupname>**
is the name or ID of the local volume group.
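For example, to review the local volume groups on a worker host and switch an
existing group to thin provisioning \(the host name **compute-1** and group
name **nova-local** are illustrative, and the host must be locked first\):
.. code-block:: none

   ~(keystone_admin)$ system host-lvg-list compute-1
   ~(keystone_admin)$ system host-lvg-modify -l thin compute-1 nova-local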


@ -0,0 +1,99 @@
.. ldg1564594442097
.. _osd-replication-factors-journal-functions-and-storage-tiers:
============================================================
OSD Replication Factors, Journal Functions and Storage Tiers
============================================================
.. _osd-replication-factors-journal-functions-and-storage-tiers-section-N1003F-N1002B-N10001:
----------------------
OSD Replication Factor
----------------------
.. _osd-replication-factors-journal-functions-and-storage-tiers-d61e23:
.. table::
:widths: auto
+--------------------+-----------------------------+--------------------------------------+
| Replication Factor | Hosts per Replication Group | Maximum Replication Groups Supported |
+====================+=============================+======================================+
| 2 | 2 | 4 |
+--------------------+-----------------------------+--------------------------------------+
| 3 | 3 | 3 |
+--------------------+-----------------------------+--------------------------------------+
You can add up to 16 object storage devices \(OSDs\) per storage host for
data storage.
Space on the storage hosts must be configured at installation before you
can unlock the hosts. You can change the configuration after installation
by adding resources to existing storage hosts or adding more storage hosts.
For more information, see the `StarlingX Installation and Deployment Guide
<https://docs.starlingx.io/deploy_install_guides/index.html>`__.
Storage hosts can achieve faster data access using SSD-backed transaction
journals \(journal functions\). NVMe-compatible SSDs are supported.
.. _osd-replication-factors-journal-functions-and-storage-tiers-section-N10044-N1002B-N10001:
-----------------
Journal Functions
-----------------
Each OSD on a storage host has an associated Ceph transaction journal,
which tracks changes to be committed to disk for data storage and
replication, and if required, for data recovery. This is a full Ceph
journal, containing both meta-data and data. By default, it is collocated
on the OSD, which typically uses slower but less expensive HDD-backed
storage. For faster commits and improved reliability, you can use a
dedicated solid-state drive \(SSD\) installed on the host and assigned as a
journal function. NVMe-compatible SSDs are also supported. You can dedicate
more than one SSD as a journal function.
.. note::
You can also assign an SSD for use as an OSD, but you cannot assign the
same SSD as a journal function.
If a journal function is available, you can configure individual OSDs to
use journals located on the journal function. Each journal is implemented
as a partition. You can adjust the size and location of the journals.
For OSDs implemented on rotational disks, |org| strongly recommends that
you use an SSD-based journal function. For OSDs implemented on SSDs,
collocated journals can be used with no performance cost.
For more information, see |stor-doc|: :ref:`Storage Functions: OSDs and
SSD-backed Journals <storage-functions-osds-and-ssd-backed-journals>`.
.. _osd-replication-factors-journal-functions-and-storage-tiers-section-N10049-N1002B-N10001:
-------------
Storage Tiers
-------------
You can create different tiers of OSD storage to meet different Container
requirements. For example, to meet the needs of Containers with frequent
disk access, you can create a tier containing only high-performance OSDs.
You can then associate new Persistent Volume Claims with this tier for use
with the Containers.
By default, |prod| is configured with one tier, called the Storage
Tier. This is created as part of adding the Ceph storage back-end. It uses
the first OSD in each peer host of the first replication group.
You can add more tiers as required, limited only by the available hardware.
After adding a tier, you can assign OSDs to it. The OSD assignments must
satisfy the replication requirements for the system. That is, in the
replication group used to implement a tier, each peer host must contribute
the same number of OSDs to the tier.
For more information on storage tiers, see |stor-doc|: :ref:`Add a
Storage Tier Using the CLI <add-a-storage-tier-using-the-cli>`.


@ -0,0 +1,126 @@
.. nxl1552678669664
.. _provision-storage-on-a-controller-or-storage-host-using-horizon:
===============================================================
Provision Storage on a Controller or Storage Host Using Horizon
===============================================================
You must configure the object storage devices \(OSDs\) on controllers or
storage hosts to provide container disk storage.
.. rubric:: |context|
For more about OSDs, see :ref:`Storage on Storage Hosts
<storage-hosts-storage-on-storage-hosts>`.
.. rubric:: |prereq|
.. _provision-storage-on-a-controller-or-storage-host-using-horizon-d388e17:
To create or edit an OSD, you must lock the controller or storage host.
.. _provision-storage-on-a-controller-or-storage-host-using-horizon-d388e19:
- When adding storage to a storage host or controller on a Standard
system, you must have at least two other unlocked hosts with Ceph
monitors. \(Ceph monitors typically run on **controller-0**,
**controller-1**, and **storage-0** only\).
- When adding storage to an AIO-SX or AIO-DX system, a single Ceph monitor
is required.
- An AIO-SX system can be locked independently of the Ceph monitor.
- An AIO-DX standby controller can be locked independently of the Ceph
monitor status, since the Ceph monitor runs on the active controller in
this configuration.
.. _provision-storage-on-a-controller-or-storage-host-using-horizon-d388e42:
If you want to use an SSD-backed journal, you must create the journal
first. For more about SSD-backed Ceph journals, see :ref:`Add SSD-Backed
Journals Using Horizon <add-ssd-backed-journals-using-horizon>`.
.. _provision-storage-on-a-controller-or-storage-host-using-horizon-d388e46:
If you want to assign the OSD to a storage tier other than the default, you
must add the storage tier first. For more about storage tiers, see
:ref:`Add a Storage Tier Using the CLI <add-a-storage-tier-using-the-cli>`.
.. rubric:: |proc|
.. _provision-storage-on-a-controller-or-storage-host-using-horizon-d388e50:
#. Lock the host to prepare it for configuration changes.
On the **Hosts** tab of the Host Inventory page, open the drop-down
list for the host, and then select **Lock Host**.
The host is locked and reported as **Locked**, **Disabled**, and
**Online**.
#. Open the Host Detail page for the host.
To open the Host Detail page, click the name of the host on the
**Hosts** tab of the System Inventory page.
#. Select the **Storage** tab to view the disks and storage functions for
the node.
.. image:: ../figures/qgh1567533283603.png
.. note::
User-defined partitions are not supported on storage hosts.
#. Add an OSD storage device.
#. Click **Assign Storage Function** to open the Assign Storage
Function dialog box.
.. image:: ../figures/bse1464884816923.png
#. In the **Disks** field, select the OSD to use for storage.
You cannot use the rootfs disk \(**/dev/sda**\) for storage functions.
#. If applicable, specify the size of the Ceph journal.
If an SSD-backed Ceph journal is available, the **Journal** for the
OSD is automatically set to use the SSD or NVMe device assigned for
journals. You can optionally adjust the **Journal Size**. For
sizing considerations, refer to the guide.
If no journal function is configured on the host, then the
**Journal** is set to **Collocated with OSD**, and the **Journal
Size** is set to a default value. These settings cannot be changed.
#. Select a **Storage Tier**.
If more than one storage tier is available, select the storage tier
for this OSD.
The storage function is added.
.. image:: ../figures/caf1464886132887.png
#. Unlock the host to make it available for use.
#. Select **Admin** \> **Platform** \> **Host Inventory**.
#. On the **Hosts** tab of the Host Inventory page, open the drop-down
list for the host, and then select **Unlock Host**.
The host is rebooted, and the progress of the unlock operation is
reported in the **Status** field.
When the unlock is complete, the host is shown as **Unlocked**,
**Enabled**, and **Available**.
.. rubric:: |postreq|
You can reuse the same settings with other nodes by creating and applying
a storage profile. See :ref:`Storage Profiles <storage-profiles>`.


@ -0,0 +1,143 @@
.. ytc1552678540385
.. _provision-storage-on-a-storage-host-using-the-cli:
=================================================
Provision Storage on a Storage Host Using the CLI
=================================================
You can use the command line to configure the object storage devices \(OSDs\)
on storage hosts.
.. rubric:: |context|
For more about OSDs, see |stor-doc|: :ref:`Storage on Storage Hosts
<storage-hosts-storage-on-storage-hosts>`.
.. xbooklink
To use the Horizon Web interface, see the :ref:`Installation Overview
<installation-overview>` for your system.
.. rubric:: |prereq|
To create or edit an OSD, you must lock the storage host. The system must
have at least two other unlocked hosts with Ceph monitors. \(Ceph monitors
run on **controller-0**, **controller-1**, and **storage-0** only\).
To use a custom storage tier, you must create the tier first.
.. rubric:: |proc|
#. List the available physical disks.
.. code-block:: none
~(keystone_admin)$ system host-disk-list storage-3
+-------+-------------+------------+-------------+----------+---------------+--------------------------------------------+
| uuid | device_node | device_num | device_type | size_gib | available_gib | device_path |
+-------+-------------+------------+-------------+----------+---------------+--------------------------------------------+
| ba7...| /dev/sda | 2048 | HDD | 51.2 | 0 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| e87...| /dev/sdb | 2064 | HDD | 10.2 | 10.1 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| ae8...| /dev/sdc | 2080 | SSD | 8.1 | 8.0 | /dev/disk/by-path/pci-0000:00:0d.0-ata-4.0 |
+-------+-------------+------------+-------------+----------+---------------+--------------------------------------------+
#. List the available storage tiers.
.. code-block:: none
~(keystone_admin)$ system storage-tier-list ceph_cluster
+--------------------------------------+---------+--------+----------------+
| uuid | name | status | backend_using |
+--------------------------------------+---------+--------+----------------+
| 220f17e2-8564-4f4d-8665-681f73d13dfb | gold | in-use | 283e5997-ea... |
| e9ddc040-7d5e-4e28-86be-f8c80f5c0c42 | storage | in-use | f1151da5-bd... |
+--------------------------------------+---------+--------+----------------+
#. Create a storage function \(an OSD\).
.. note::
You cannot add a storage function to the root disk \(/dev/sda in this
example\).
.. code-block:: none
~(keystone_admin)$ system host-stor-add
usage: system host-stor-add [--journal-location [<journal_location>]]
[--journal-size[<size of the journal MiB>]]
[--tier-uuid[<storage tier uuid>]]
<hostname or id> [<function>] <idisk_uuid>
where <idisk\_uuid> identifies the disk to use for the OSD. For example:
.. code-block:: none
~(keystone_admin)$ system host-stor-add storage-3 e8751efe-6101-4d1c-a9d3-7b1a16871791
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 3 |
| function | osd |
| journal_location | e639f1a2-e71a-4f65-8246-5cd0662d966b |
| journal_size_gib | 1 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part2 |
| journal_node | /dev/sdc1 |
| uuid | fc7b2d29-11bf-49a9-b4a9-3bc9a973077d |
| ihost_uuid | 4eb90dc1-2b17-443e-b997-75bdd19e3eeb |
| idisk_uuid | e8751efe-6101-4d1c-a9d3-7b1a16871791 |
| tier_uuid | e9ddc040-7d5e-4e28-86be-f8c80f5c0c42 |
| tier_name | storage |
| created_at | 2018-01-30T22:57:11.561447+00:00 |
| updated_at | 2018-01-30T22:57:27.169874+00:00 |
+------------------+--------------------------------------------------+
In this example, an SSD-backed journal function is available. For
more about SSD-backed journals, see :ref:`Storage Functions: OSDs and
SSD-backed Journals
<storage-functions-osds-and-ssd-backed-journals>`. The Ceph journal for
the OSD is automatically created on the journal function using a
default size of 1 GiB. You can use the ``--journal-size`` option to
specify a different size in GiB.
If multiple journal functions exist \(corresponding to multiple
dedicated SSDs\), then you must include the ``--journal-location``
option and specify the journal function to use for the OSD. You can
obtain the UUIDs for journal functions using the :command:`system
host-stor-list` command:
.. code-block:: none
~(keystone_admin)$ system host-stor-list storage-3
+---------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+
| uuid | function | osdid | capabilities | idisk_uuid | journal_path | journal_size_gib | tier_name |
+---------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+
| e639... | journal | None | {} | ae8b1434-d... | None | 0 | |
| fc7b... | osd | 3 | {} | e8751efe-6... | /dev/disk/by-path/pci... | 1.0 | storage |
+---------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+
If no journal function exists when the storage function is created, the
Ceph journal for the OSD is collocated on the OSD.
If an SSD or NVMe drive is available on the host, you can add a
journal function. For more information, see :ref:`Add SSD-Backed
Journals Using the CLI <add-ssd-backed-journals-using-the-cli>`. You
can update the OSD to use a journal on the SSD by referencing the
journal function UUID, as follows:
.. code-block:: none
~(keystone_admin)$ system host-stor-update <osd_uuid> \
--journal-location <journal_function_uuid> [--journal-size <size>]
.. rubric:: |postreq|
Unlock the host to make the changes take effect. Wait for the host to be
reported as unlocked, online, and available in the hosts list.
You can re-use the same settings with other storage nodes by creating and
applying a storage profile. For more information, see the `StarlingX
Containers Installation Guide
<https://docs.starlingx.io/deploy_install_guides/index.html>`__.


@ -0,0 +1,20 @@
.. xps1552678558589
.. _replace-osds-and-journal-disks:
==============================
Replace OSDs and Journal Disks
==============================
You can replace failed storage devices on storage nodes.
.. rubric:: |context|
For best results, ensure the replacement disk is the same size as others in
the same peer group. Do not substitute a smaller disk than the original.
The replacement disk is automatically formatted and updated with data when the
storage host is unlocked. For more information, see |node-doc|: :ref:`Change
Hardware Components for a Storage Host
<changing-hardware-components-for-a-storage-host>`.


@ -0,0 +1,85 @@
.. awp1552678699112
.. _replication-groups:
==================
Replication Groups
==================
The storage hosts on Ceph systems are organized into replication groups to
provide redundancy.
Each replication group contains a number of hosts, referred to as peers.
Each peer independently replicates the same data. |prod| supports a minimum
of two peers and a maximum of three peers per replication group. This
replication factor is defined when the Ceph storage backend is added.
For a system with two peers per replication group, up to four replication
groups are supported. For a system with three peers per replication group,
up to three replication groups are supported.
For best performance, |org| recommends a balanced storage capacity, in
which each peer has sufficient resources to meet the operational
requirements of the system.
A replication group is considered healthy when all its peers are available.
When only a minimum number of peers are available \(as indicated by the
**min\_replication** value reported for the group\), the group continues to
provide storage services but without full replication, and a HEALTH\_WARN
state is declared. When the number of available peers falls below the
**min\_replication** value, the group no longer provides storage services,
and a HEALTH\_ERR state is declared. The **min\_replication** value is
always one less than the replication factor for the group.
It is not possible to lock more than one peer at a time in a replication
group.
Replication groups are created automatically. When a new storage host is
added and an incomplete replication group exists, the host is added to the
existing group. If all existing replication groups are complete, then a new
incomplete replication group is defined and the host becomes its first
member.
.. note::
Ceph relies on monitoring to detect when to switch from a primary OSD
to a replicated OSD. The Ceph parameter :command:`osd heartbeat grace` sets
the amount of time required to wait before switching OSDs when the
primary OSD is not responding. |prod| currently uses the default value
of 20 seconds. This means that a Ceph filesystem may not respond to I/O
for up to 20 seconds when a storage node or OSD goes out of service.
For more information, see the Ceph documentation:
`http://docs.ceph.com/docs/master/rados/configuration/mon-osd-interaction
<http://docs.ceph.com/docs/master/rados/configuration/mon-osd-interaction>`__.
Replication groups are shown on the Hosts Inventory page in association
with the storage hosts. You can also use the following CLI commands to
obtain information about replication groups:
.. code-block:: none
~(keystone_admin)$ system cluster-list
+-----------+--------------+------+----------+------------------+
| uuid | cluster_uuid | type | name | deployment_model |
+-----------+--------------+------+----------+------------------+
| 335766eb- | None | ceph | ceph_clu | controller-nodes |
| | | | ster | |
| | | | | |
+-----------+--------------+------+----------+------------------+
.. code-block:: none
~(keystone_admin)$ system cluster-show 335766eb-968e-44fc-9ca7-907f93c772a1
+--------------------+----------------------------------------+
| Property | Value |
+--------------------+----------------------------------------+
| uuid | 335766eb-968e-44fc-9ca7-907f93c772a1 |
| cluster_uuid | None |
| type | ceph |
| name | ceph_cluster |
| replication_groups | ["group-0:['storage-0', 'storage-1']"] |
| storage_tiers | ['storage (in-use)'] |
| deployment_model | controller-nodes |
+--------------------+----------------------------------------+


@ -0,0 +1,124 @@
.. qcq1552678925205
.. _storage-backends:
================
Storage Backends
================
|prod-long| supports an internal Ceph block storage backend and an external
Netapp Trident block storage backend. Configuring a storage
backend is optional, but it is required if the applications being hosted
require persistent volume claims \(PVCs\).
.. _storage-backends-section-bgt-gv5-blb:
-------------
Internal Ceph
-------------
|prod| can be configured with an internal Ceph storage backend on |prod|
controller nodes or on dedicated |prod| storage nodes.
You can organize the OSDs in the hosts into *tiers* with different
performance characteristics such as SATA, SAS, SSD, and NVMe.
The following internal Ceph deployment models are supported:
.. _storage-backends-table-hdq-pv5-blb:
.. table:: Table 1. Internal Ceph Deployment Models
:widths: auto
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name | Description |
+======================+======================================================================================================================================================================================+
| **storage-nodes** | Applies to the Standard with Dedicated Storage deployment configuration. |
| | |
| | Storage nodes are deployed in replication groups of 2 or 3, depending on the configured replication factor. |
| | |
| | Ceph OSDs are configured only on storage nodes. Ceph monitors are automatically configured on controller-0, controller-1 and storage-0. |
| | |
| | Data replication is done between storage nodes within a replication group. |
| | |
| | After configuring a storage node, OSDs cannot be added to controllers. |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **controller-nodes** | Applies to the Standard with Controller Storage and the All-in-One Duplex deployment configurations. |
| | |
| | Ceph OSDs are configured on controller nodes. For All-in-One Duplex configurations, a single Ceph monitor is automatically configured that runs on the 'active' controller. |
| | |
| | For Standard with Controller Storage, Ceph monitors are automatically configured on controller-0 and controller-1, and a third must be manually configured by user on a worker node. |
| | |
| | Data replication is done between controllers. |
| | |
| | After configuring an OSD on a controller, storage nodes cannot be installed. |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **aio-sx** | Applies to All-in-One Simplex deployment configurations. |
| | |
| | Ceph OSDs are configured on controller-0. A single Ceph monitor is automatically configured on controller-0. |
| | |
| | Replication is done per OSD, not per node. Configuration updates are applied without requiring a host lock/unlock. |
| | |
| | You can set replication to 1, 2 or 3 \(default is 1\). |
| | |
| | - A replication setting of 1 requires a minimum of one OSD. |
| | |
| | - A replication setting of 2 requires a minimum of two OSDs to provide data security. |
| | |
| | - A replication setting of 3 requires a minimum of three OSDs to provide data security. |
| | |
| | |
| | When replication 2-3 is set, data is replicated between OSDs on the node. |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
For more information on Ceph storage backend provisioning, see
:ref:`Configure the Internal Ceph Storage Backend
<configure-the-internal-ceph-storage-backend>`.
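Before choosing a deployment model, you can check how the internal Ceph
backend and its storage tiers are currently configured. The following is a
quick check only, using commands described later in this guide, and assumes
Ceph has already been configured:

.. code-block:: none

   ~(keystone_admin)$ system storage-backend-list
   ~(keystone_admin)$ system storage-tier-list ceph_cluster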
.. _storage-backends-section-N10151-N10028-N10001:
-----------------------
External Netapp Trident
-----------------------
|prod| can be configured to connect to and use an external Netapp Trident
deployment as its storage backend.
Netapp Trident supports:
.. _storage-backends-d201e23:
- AWS Cloud Volumes
- E and EF-Series SANtricity
- ONTAP AFF, FAS, Select, and Cloud
- Element HCI and SolidFire
- Azure NetApp Files service
.. _storage-backends-d201e56:
For more information about Trident, see
`https://netapp-trident.readthedocs.io
<https://netapp-trident.readthedocs.io>`__.
.. seealso::
- :ref:`Configure the Internal Ceph Storage Backend
<configure-the-internal-ceph-storage-backend>`
- :ref:`Configure an External Netapp Deployment as the Storage Backend
<configure-an-external-netapp-deployment-as-the-storage-backend>`
- :ref:`Uninstall the Netapp Backend <uninstall-the-netapp-backend>`

@@ -0,0 +1,100 @@
.. xco1564696647432
.. _storage-configuration-create-persistent-volume-claims:
===============================
Create Persistent Volume Claims
===============================
Container images have an ephemeral file system by default. For data to
survive beyond the lifetime of a container, the container can read and
write files on a persistent volume obtained with a |PVC|, which is created
to provide persistent storage.
.. rubric:: |context|
The following steps create two 1GiB persistent volume claims.
.. rubric:: |proc|
.. _storage-configuration-create-persistent-volume-claims-d891e32:
#. Create the **test-claim1** persistent volume claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)$ cat <<EOF > claim1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)$ kubectl apply -f claim1.yaml
persistentvolumeclaim/test-claim1 created
#. Create the **test-claim2** persistent volume claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)$ cat <<EOF > claim2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim2
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)$ kubectl apply -f claim2.yaml
persistentvolumeclaim/test-claim2 created
.. rubric:: |result|
Two 1GiB persistent volume claims have been created. You can view them
with the following command.
.. code-block:: none
~(keystone_admin)$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim1 Bound pvc-aaca.. 1Gi RWO general 2m56s
test-claim2 Bound pvc-e93f.. 1Gi RWO general 68s
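To inspect an individual claim, for example to confirm the storage class
and the volume it is bound to, you can describe it. The claim name below is
the one created in this procedure:

.. code-block:: none

   ~(keystone_admin)$ kubectl describe persistentvolumeclaim test-claim1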

@@ -0,0 +1,201 @@
.. pjw1564749970685
.. _storage-configuration-mount-persistent-volumes-in-containers:
======================================
Mount Persistent Volumes in Containers
======================================
You can launch, attach, and terminate a busybox container to mount |PVCs| in
your cluster.
.. rubric:: |context|
This example shows how a volume is claimed and mounted by a simple running
container. It is the responsibility of an individual micro-service within
an application to make a volume claim, mount it, and use it. For example,
the stx-openstack application will make volume claims for **mariadb** and
**rabbitmq** via their helm charts to orchestrate this.
.. rubric:: |prereq|
You must have created the persistent volume claims. This procedure uses
PVCs with names and configurations created in |stor-doc|: :ref:`Create
Persistent Volume Claims
<storage-configuration-create-persistent-volume-claims>`.
.. rubric:: |proc|
.. _storage-configuration-mount-persistent-volumes-in-containers-d583e55:
#. Create the busybox container with the persistent volumes created from
the PVCs mounted.
#. Create a yaml file definition for the busybox container.
.. code-block:: none
% cat <<EOF > busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
selector:
matchLabels:
run: busybox
template:
metadata:
labels:
run: busybox
spec:
containers:
- args:
- sh
image: busybox
imagePullPolicy: Always
name: busybox
stdin: true
tty: true
volumeMounts:
- name: pvc1
mountPath: "/mnt1"
- name: pvc2
mountPath: "/mnt2"
restartPolicy: Always
volumes:
- name: pvc1
persistentVolumeClaim:
claimName: test-claim1
- name: pvc2
persistentVolumeClaim:
claimName: test-claim2
EOF
#. Apply the busybox configuration.
.. code-block:: none
% kubectl apply -f busybox.yaml
#. Attach to the busybox and create files on the persistent volumes.
#. List the available pods.
.. code-block:: none
% kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-5c4f877455-gkg2s 1/1 Running 0 19s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t
#. From the container's console, list the disks to verify that the
persistent volumes are attached.
.. code-block:: none
# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 3239984 28201936 10% /
tmpfs 65536 0 65536 0% /dev
tmpfs 65900776 0 65900776 0% /sys/fs/cgroup
/dev/rbd0 999320 2564 980372 0% /mnt1
/dev/rbd1 999320 2564 980372 0% /mnt2
/dev/sda4 20027216 4952208 14034624 26%
...
The PVCs are mounted as /mnt1 and /mnt2.
#. Create files in the mounted volumes.
.. code-block:: none
# cd /mnt1
# touch i-was-here
# ls /mnt1
i-was-here lost+found
#
# cd /mnt2
# touch i-was-here-too
# ls /mnt2
i-was-here-too lost+found
#. End the container session.
.. code-block:: none
# exit
Session ended, resume using 'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when the pod is running
#. Terminate the busybox container.
.. code-block:: none
% kubectl delete -f busybox.yaml
#. Recreate the busybox container, again attached to persistent volumes.
#. Apply the busybox configuration.
.. code-block:: none
% kubectl apply -f busybox.yaml
#. List the available pods.
.. code-block:: none
% kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-5c4f877455-jgcc4 1/1 Running 0 19s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach busybox-5c4f877455-jgcc4 -c busybox -i -t
#. From the container's console, list the disks to verify that the PVCs
are attached.
.. code-block:: none
# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 3239984 28201936 10% /
tmpfs 65536 0 65536 0% /dev
tmpfs 65900776 0 65900776 0% /sys/fs/cgroup
/dev/rbd0 999320 2564 980372 0% /mnt1
/dev/rbd1 999320 2564 980372 0% /mnt2
/dev/sda4 20027216 4952208 14034624 26%
...
#. Verify that the files created during the earlier container session
still exist.
.. code-block:: none
# ls /mnt1
i-was-here lost+found
# ls /mnt2
i-was-here-too lost+found
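When you are finished with the example, you can optionally remove the
deployment and the claims. Deleting the |PVCs| releases the backing volumes
and their data, so do this only if the test data is no longer needed:

.. code-block:: none

   % kubectl delete -f busybox.yaml
   % kubectl delete persistentvolumeclaim test-claim1 test-claim2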

@@ -0,0 +1,49 @@
.. fap1552671560683
.. _storage-configuration-storage-on-worker-hosts:
=======================
Storage on Worker Hosts
=======================
A worker host's root disk provides storage for host configuration files,
local docker images, and the ephemeral filesystems of hosted containers.
.. note::
On |prod| Simplex or Duplex systems, worker storage is provided
using resources on the combined host. For more information, see
:ref:`Storage on Controller Hosts
<controller-hosts-storage-on-controller-hosts>`.
.. _storage-configuration-storage-on-worker-hosts-d18e38:
-----------------------
Root Filesystem Storage
-----------------------
Space on the root disk is allocated to provide filesystem storage.
You can increase the allotments for the following filesystems using the
Horizon Web interface or the |CLI|; a |CLI| sketch follows the list below.
Resizing must be done on a host-by-host basis for non-DRBD-synced
filesystems.
**Docker Storage**
The storage allotment for the docker image cache for this host, and for
the ephemeral filesystems of containers on this host.
**Kubelet Storage**
The storage allotment for ephemeral storage related to Kubernetes pods on this host.
**Scratch Storage**
The storage allotment for a variety of miscellaneous transient host operations.
**Logs Storage**
The storage allotment for log data. This filesystem is not resizable.
Logs are rotated within the fixed space as allocated.
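As a sketch of the |CLI| approach mentioned above, the host filesystem
commands can be used to review and resize the docker and kubelet
allotments. The host name and sizes below are examples only; verify the
available commands and the free space in the volume group before resizing:

.. code-block:: none

   ~(keystone_admin)$ system host-fs-list worker-0
   ~(keystone_admin)$ system host-fs-modify worker-0 docker=60 kubelet=20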

@@ -0,0 +1,301 @@
.. opm1552678478222
.. _storage-configuration-storage-related-cli-commands:
============================
Storage-Related CLI Commands
============================
You can use |CLI| commands when working with storage.
.. contents::
:local:
:depth: 1
.. _storage-configuration-storage-related-cli-commands-section-N1001F-N1001C-N10001:
-------------------------------
Modify Ceph Monitor Volume Size
-------------------------------
You can change the space allotted for the Ceph monitor, if required.
.. code-block:: none
~(keystone_admin)$ system ceph-mon-modify <controller> ceph_mon_gib=<size>
where ``<size>`` is the size in GiB to use for the Ceph monitor.
The value must be between 21 and 40 GiB.
.. code-block:: none
~(keystone_admin)$ system ceph-mon-modify controller-0 ceph_mon_gib=21
+-------------+-------+--------------+------------+------+
| uuid | ceph_ | hostname | state | task |
| | mon_g | | | |
| | ib | | | |
+-------------+-------+--------------+------------+------+
| 069f106a... | 21 | compute-0 | configured | None |
| 4763139e... | 21 | controller-1 | configured | None |
| e39970e5... | 21 | controller-0 | configured | None |
+-------------+-------+--------------+------------+------+
NOTE: ceph_mon_gib for both controllers are changed.
System configuration has changed.
please follow the System Configuration guide to complete configuring system.
The configuration is out of date after running this command. To update it,
you must lock and then unlock the host.
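For example, to apply the change on controller-0, lock and then unlock the
host. This is a sketch only; if controller-0 is the active controller,
swact away from it first:

.. code-block:: none

   ~(keystone_admin)$ system host-swact controller-0
   ~(keystone_admin)$ system host-lock controller-0
   ~(keystone_admin)$ system host-unlock controller-0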
.. _storage-configuration-storage-related-cli-commands-section-N10044-N1001C-N10001:
----------------------------------------
Add, Modify, or Display Storage Backends
----------------------------------------
To list the storage backend types installed on a system:
.. code-block:: none
~(keystone_admin)$ system storage-backend-list
+--------+-----------+----------+-------+--------------+---------+-----------------+
| uuid |name | backend | state | task | services| capabilities |
+--------+-----------+----------+-------+--------------+---------+-----------------+
| 248a...|ceph-store | ceph | config| resize-ceph..| None |min_replication:1|
| | | | | | |replication: 2 |
| 76dd...|shared_serv| external | config| None | glance | |
| |ices | | | | | |
+--------+-----------+----------+-------+--------------+---------+-----------------+
To show details for a storage backend:
.. code-block:: none
~(keystone_admin)$ system storage-backend-show <name>
For example:
.. code-block:: none
~(keystone_admin)$ system storage-backend-show ceph-store
+----------------------+--------------------------------------+
| Property | Value |
+----------------------+--------------------------------------+
| backend | ceph |
| name | ceph-store |
| state | configured |
| task | provision-storage |
| services | None |
| capabilities | min_replication: 1 |
| | replication: 2 |
| object_gateway | False |
| ceph_total_space_gib | 0 |
| object_pool_gib | None |
| cinder_pool_gib | None |
| kube_pool_gib | None |
| glance_pool_gib | None |
| ephemeral_pool_gib | None |
| tier_name | storage |
| tier_uuid | 249bb348-f1a0-446c-9dd1-256721f043da |
| created_at | 2019-10-07T18:33:19.839445+00:00 |
| updated_at | None |
+----------------------+--------------------------------------+
To add a backend:
.. code-block:: none
~(keystone_admin)$ system storage-backend-add \
[-s <services>] [-n <name>] [-t <tier_uuid>] \
[-c <ceph_conf>] [--confirmed] [--ceph-mon-gib <ceph-mon-gib>] \
<backend> [<parameter>=<value> [<parameter>=<value> ...]]
The following are positional arguments:
**backend**
The storage backend to add. This argument is required.
**<parameter>**
Required backend/service parameters to apply.
The following are optional arguments:
**-s,** ``--services``
A comma-delimited list of storage services to include.
For a Ceph backend, this is an optional parameter. Valid values are
cinder, glance, and swift.
**-n,** ``--name``
For a Ceph backend, this is a user-assigned name for the backend. The
default is **ceph-store** for a Ceph backend.
**-t,** ``--tier_uuid``
For a Ceph backend, this is the UUID of the storage tier to back.
**-c,** ``--ceph_conf``
Location of the Ceph configuration file used for provisioning an
external backend.
``--confirmed``
Provide acknowledgment that the operation should continue as it is not
reversible.
``--ceph-mon-gib``
For a Ceph backend, this is the space in gibibytes allotted for the
Ceph monitor.
.. note::
A Ceph backend is configured by default.
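For example, a minimal invocation that adds the default internal Ceph
backend might look like the following. This is illustrative only, since on
most systems the Ceph backend is already present:

.. code-block:: none

   ~(keystone_admin)$ system storage-backend-add ceph --confirmed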
To modify a backend:
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify [-s <services>] [-c <ceph_conf>] \
<backend_name_or_uuid> [<parameter>=<value> [<parameter>=<value> ...]]
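For example, the replication parameters reported under **capabilities**
can be adjusted with a call similar to the following. The parameter names
are taken from the backend display above and should be verified for your
release:

.. code-block:: none

   ~(keystone_admin)$ system storage-backend-modify ceph-store replication=3 min_replication=2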
To delete a failed backend configuration:
.. code-block:: none
~(keystone_admin)$ system storage-backend-delete <backend>
.. note::
If a backend installation fails before completion, you can use this
command to remove the partial installation so that you can try again.
You cannot delete a successfully installed backend.
.. _storage-configuration-storage-related-cli-commands-section-N10247-N10024-N10001:
-------------------------------------
Add, Modify, or Display Storage Tiers
-------------------------------------
To list storage tiers:
.. code-block:: none
~(keystone_admin)$ system storage-tier-list ceph_cluster
+---------+---------+--------+--------------------------------------+
| uuid | name | status | backend_using |
+---------+---------+--------+--------------------------------------+
| acc8... | storage | in-use | 649830bf-b628-4170-b275-1f0b01cfc859 |
+---------+---------+--------+--------------------------------------+
To display information for a storage tier:
.. code-block:: none
~(keystone_admin)$ system storage-tier-show ceph_cluster <tier_name>
For example:
.. code-block:: none
~(keystone_admin)$ system storage-tier-show ceph_cluster storage
+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| uuid | 2a50cb4a-659d-4586-a5a2-30a5e01172aa |
| name | storage |
| type | ceph |
| status | in-use |
| backend_uuid | 248a90e4-9447-449f-a87a-5195af46d29e |
| cluster_uuid | 4dda5c01-6ea8-4bab-956c-c95eda4be99c |
| OSDs | [0, 1] |
| created_at | 2019-09-25T16:02:19.901343+00:00 |
| updated_at | 2019-09-25T16:04:25.884053+00:00 |
+--------------+--------------------------------------+
To add a storage tier:
.. code-block:: none
~(keystone_admin)$ system storage-tier-add ceph_cluster <tier_name>
To delete a tier that is not in use by a storage backend and does not have
OSDs assigned to it:
.. code-block:: none
~(keystone_admin)$ system storage-tier-delete <tier_name>
.. _storage-configuration-storage-related-cli-commands-section-N1005E-N1001C-N10001:
-------------------
Display File System
-------------------
You can use the :command:`system controllerfs-list` command to list the
storage space allotments on the controllers.
.. code-block:: none
~(keystone_admin)$ system controllerfs-list
+-------+------------+-----+-----------------------+-------+-----------+
| UUID | FS Name | Size| Logical Volume | Rep.. | State |
| | | in | | | |
| | | GiB | | | |
+-------+------------+-----+-----------------------+-------+-----------+
| d0e...| database | 10 | pgsql-lv | True | available |
| 40d...| docker-dist| 16 | dockerdistribution-lv | True | available |
| 20e...| etcd | 5 | etcd-lv | True | available |
| 9e5...| extension | 1 | extension-lv | True | available |
| 55b...| platform | 10 | platform-lv | True | available |
+-------+------------+-----+-----------------------+-------+-----------+
For a system with dedicated storage:
.. code-block:: none
~(keystone_admin)$ system storage-backend-show ceph-store
+----------------------+--------------------------------------+
| Property | Value |
+----------------------+--------------------------------------+
| backend | ceph |
| name | ceph-store |
| state | configured |
| task | resize-ceph-mon-lv |
| services | None |
| capabilities | min_replication: 1 |
| | replication: 2 |
| object_gateway | False |
| ceph_total_space_gib | 0 |
| object_pool_gib | None |
| cinder_pool_gib | None |
| kube_pool_gib | None |
| glance_pool_gib | None |
| ephemeral_pool_gib | None |
| tier_name | storage |
| tier_uuid | 2a50cb4a-659d-4586-a5a2-30a5e01172aa |
| created_at | 2019-09-25T16:04:25.854193+00:00 |
| updated_at | 2019-09-26T18:47:56.563783+00:00 |
+----------------------+--------------------------------------+

@@ -0,0 +1,137 @@
.. jeg1583353455217
.. _storage-configuration-storage-resources:
=================
Storage Resources
=================
|prod| uses storage resources on the controller and worker hosts, and on
storage hosts if they are present.
.. contents::
:local:
:depth: 1
The |prod| storage configuration is highly flexible. The specific
configuration depends on the type of system installed, and the requirements
of the system.
.. _storage-configuration-storage-resources-d153e38:
--------------------
Uses of Disk Storage
--------------------
**StarlingX System**
The |prod| system uses root disk storage for the operating system and
related files, and for internal databases. On controller nodes, the
database storage and selected root file-systems are synchronized
between the controller nodes using DRBD.
**Local Docker Registry**
An HA local docker registry is deployed on controller nodes to provide
local centralized storage of container images. Its image store is a
DRBD synchronized file system.
**Docker Container Images**
Container images are pulled from either a remote or local Docker
Registry, and cached locally by docker on the host worker or controller
node when a container is launched.
**Container Ephemeral Local Disk**
Containers have local filesystems for ephemeral storage of data. This
data is lost when the container is terminated.
Kubernetes Docker ephemeral storage is allocated as part of the
docker-lv and kubelet-lv file systems from the cgts-vg volume group on
the root disk. These filesystems are resizable.
**Container Persistent Volume Claims \(PVCs\)**
Containers can mount remote HA replicated volumes backed by the Ceph
Storage Cluster for managing persistent data. This data survives
restarts of the container.
.. note::
Ceph is not configured by default. For more information, see
|stor-doc|: :ref:`Configure the Internal Ceph Storage Backend
<configure-the-internal-ceph-storage-backend>`.
.. _storage-configuration-storage-resources-d153e134:
-----------------
Storage Locations
-----------------
In addition to the root disks present on each host for system storage, the
following storage may be used:
.. _storage-configuration-storage-resources-d153e143:
- Controller hosts: Container Persistent Volume Claims on dedicated
storage hosts when using that setup or on controller hosts. Additional
Ceph OSD disk\(s\) are present on controllers in configurations
without dedicated storage hosts. These OSD\(s\) provide storage to fill
Persistent Volume Claims made by Kubernetes pods or containers.
- Worker hosts: This storage is derived from docker-lv/kubelet-lv as
defined on the cgts-vg \(root disk\). You can add a disk to cgts-vg and
increase the size of docker-lv/kubelet-lv, as shown in the sketch after
this list.
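The following is a hedged sketch of the worker-host expansion mentioned in
the list above. The host name is an example, the exact commands may vary
by release, and the host must be locked before physical volumes are added:

.. code-block:: none

   ~(keystone_admin)$ system host-lock worker-1
   ~(keystone_admin)$ system host-disk-list worker-1
   ~(keystone_admin)$ system host-pv-add worker-1 cgts-vg <disk_uuid>
   ~(keystone_admin)$ system host-fs-modify worker-1 docker=100 kubelet=20
   ~(keystone_admin)$ system host-unlock worker-1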
**Combined Controller-Worker Hosts**
One or more disks can be used on combined hosts in Simplex or Duplex
systems to provide local ephemeral storage for containers, and a Ceph
cluster for backing Persistent Volume Claims.
Container/Pod ephemeral storage is implemented on the root disk on all
controllers/workers regardless of labeling.
**Storage Hosts**
One or more disks are used on storage hosts to realize a large scale
Ceph cluster providing backing for Persistent Volume Claims for
containers. Storage hosts are used only on |prod| with Dedicated
Storage systems.
.. _storage-configuration-storage-resources-section-N1015E-N10031-N1000F-N10001:
-----------------------
External Netapp Trident
-----------------------
|prod| can be configured to connect to and use an external Netapp Trident
deployment as its storage backend.
Netapp Trident supports:
.. _storage-configuration-storage-resources-d201e23:
- AWS Cloud Volumes
- E and EF-Series SANtricity
- ONTAP AFF, FAS, Select, and Cloud
- Element HCI and SolidFire
- Azure NetApp Files service
.. _storage-configuration-storage-resources-d201e56:
For more information about Trident, see
`https://netapp-trident.readthedocs.io
<https://netapp-trident.readthedocs.io>`__.

@@ -0,0 +1,24 @@
.. dnn1552678684527
.. _storage-functions-osds-and-ssd-backed-journals:
===============================================
Storage Functions: OSDs and SSD-backed Journals
===============================================
Disks on storage hosts are assigned storage functions in |prod| to provide
either OSD storage or Ceph journal storage.
Rotational disks on storage hosts are always assigned as object storage
devices \(OSDs\) to provide storage for Application disks. Solid-state disks
\(SSDs\) can be assigned as OSDs, or as journal functions to provide space for
Ceph transaction journals associated with OSDs. NVMe-compatible SSDs are also
supported.
To assign storage-host disks as OSDs, see :ref:`Provision Storage on a
Controller or Storage Host Using Horizon
<provision-storage-on-a-controller-or-storage-host-using-horizon>`.
To create SSD-backed journals, see :ref:`Add SSD-Backed Journals Using
Horizon <add-ssd-backed-journals-using-horizon>`.
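Equivalent |CLI| provisioning is also possible. The following is a sketch
only; the disk UUIDs are placeholders, and the exact
:command:`system host-stor-add` options should be verified with
:command:`system help host-stor-add` on your release:

.. code-block:: none

   ~(keystone_admin)$ system host-disk-list storage-0
   ~(keystone_admin)$ system host-stor-add storage-0 journal <ssd_disk_uuid>
   ~(keystone_admin)$ system host-stor-add storage-0 osd <disk_uuid> --journal-location <journal_stor_uuid>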

@@ -0,0 +1,62 @@
.. uma1552671621577
.. _storage-hosts-storage-on-storage-hosts:
========================
Storage on Storage Hosts
========================
Storage hosts provide a large-scale, persistent, and highly available Ceph
cluster for backing Persistent Volume Claims.
The storage hosts can only be provisioned in a Standard with dedicated
storage deployment and comprise the storage cluster for the system. Within
the storage cluster, the storage hosts are deployed in replication groups
for redundancy. On dedicated storage setups, the Ceph storage backend is
enabled automatically, and the replication factor is updated later
depending on the number of storage hosts provisioned.
.. _storage-hosts-storage-on-storage-hosts-section-N1003F-N1002B-N10001:
----------------------
OSD Replication Factor
----------------------
.. _storage-hosts-storage-on-storage-hosts-d61e23:
.. table::
:widths: auto
+--------------------+-----------------------------+--------------------------------------+
| Replication Factor | Hosts per Replication Group | Maximum Replication Groups Supported |
+====================+=============================+======================================+
| 2 | 2 | 4 |
+--------------------+-----------------------------+--------------------------------------+
| 3 | 3 | 3 |
+--------------------+-----------------------------+--------------------------------------+
You can add up to 16 object storage devices \(OSDs\) per storage host for
data storage.
Space on the storage hosts must be configured at installation before you
can unlock the hosts. You can change the configuration after installation
by adding resources to existing storage hosts or adding more storage hosts.
For more information, see the `StarlingX Installation and Deployment Guide
<https://docs.starlingx.io/deploy_install_guides/index.html>`__.
Storage hosts can achieve faster data access using SSD-backed transaction
journals \(journal functions\). NVMe-compatible SSDs are supported.
.. seealso::
- :ref:`Provision Storage on a Controller or Storage Host Using
Horizon
<provision-storage-on-a-controller-or-storage-host-using-horizon>`
- :ref:`Ceph Storage Pools <ceph-storage-pools>`
- :ref:`Change Hardware Components for a Storage Host
<changing-hardware-components-for-a-storage-host>`

@@ -0,0 +1,67 @@
.. frt1552675083821
.. _storage-profiles:
================
Storage Profiles
================
A storage profile is a named configuration for a list of storage resources
on a storage node or worker node.
Storage profiles for storage nodes are created using the **Create Storage
Profile** button on the storage node Host Detail page.
Storage profiles for worker nodes are created using the **Create Storage
Profile** button on the worker node Host Detail page.
Storage profiles are shown on the **Storage Profiles** tab on the Host
Inventory page. They can be created only after the host has been unlocked
for the first time.
Each storage resource consists of the following elements:
**Name**
This is the name given to the profile when it is created.
**Disk Configuration**
A Linux block storage device path \(/dev/disk/by-path/...\), identifying
a hard drive by its physical location.
**Storage Configuration**
This field provides details on the storage type. The details differ
depending on the intended type of node for the profile.
Profiles for storage nodes indicate the storage function type, such
as **osd**, and potentially a journal stor in the case of a storage node.
Profiles for worker nodes, and for controller/worker nodes in |prod-os|
Simplex or Duplex systems, provide details for the **nova-local**
volume group used for instance local storage as well as the Physical
volume and any Physical Volume Partitions that have been configured.
**CoW Image** is the default setting. Concurrent disk operations are now
configured as a Helm chart override for containerized OpenStack.
.. _storage-profiles-d87e22:
.. note::
Storage profiles for worker-based or |prod-os| ephemeral storage \(that
is, storage profiles containing volume group and physical volume
information\) can be applied in two scenarios:
- on initial installation where a nova-local volume group has not
been previously provisioned
- on a previously provisioned host where the nova-local volume group
has been marked for removal
On a previously provisioned host, delete the nova-local volume group prior to applying the profile.
The example Storage Profiles screen below lists a storage profile for
image-backed **nova-local** storage, suitable for worker hosts.
.. image:: ../figures/jwe1570638362341.png
To delete storage profiles, select the check boxes next to the profile
names, and then click **Delete Storage Profiles**. This does not affect
hosts where the profiles have already been applied.

@@ -0,0 +1,20 @@
.. dem1552679497653
.. _storage-usage-details-storage-utilization-display:
===========================
Storage Utilization Display
===========================
|prod| provides enhanced backend storage usage details through the Horizon Web
interface.
Upstream storage utilization display is limited to the hypervisor statistics
which include only local storage utilization on the worker nodes. |prod|
provides enhanced storage utilization statistics for the Ceph and
controller-fs backends. The statistics are available using the |CLI| and
Horizon.
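From the |CLI|, a quick way to review overall usage is to combine the
controller filesystem listing with the Ceph cluster summary. This assumes
the :command:`ceph` client is available on the controller, which is
normally the case when the internal Ceph backend is configured:

.. code-block:: none

   ~(keystone_admin)$ system controllerfs-list
   ~(keystone_admin)$ ceph df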
In Horizon, the Storage Overview panel includes storage Services and Usage
with storage details.

@@ -0,0 +1,19 @@
.. vba1584558499981
.. _uninstall-the-netapp-backend:
============================
Uninstall the Netapp Backend
============================
Uninstall the Netapp backend using the :command:`tridentctl` command.
Run the following command.
.. code-block:: none
# tridentctl -n <tridentNamespace> uninstall
Pods and resources created during installation are deleted.
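To confirm that the uninstall completed, you can optionally verify that no
Trident pods remain in the namespace:

.. code-block:: none

   # kubectl get pods -n <tridentNamespace>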

@@ -0,0 +1,52 @@
.. ujn1590525049608
.. _view-details-for-a-partition:
============================
View Details for a Partition
============================
You can view details for a partition with the
:command:`system host-disk-partition-show` command.
.. rubric:: |context|
The syntax of the command is:
.. code-block:: none
system host-disk-partition-show <host> <partition>
Make the following substitutions:
**<host>**
The host name or ID.
**<partition>**
The partition device path or UUID.
This example displays details for a particular partition on compute-1.
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-show compute-1 a4aa3f66-ff3c-49a0-a43f-bc30012f8361
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part3 |
| device_node | /dev/sdb3 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | LVM Physical Volume |
| start_mib | 10240 |
| end_mib | 21505 |
| size_mib | 10240 |
| uuid | a4aa3f66-ff3c-49a0-a43f-bc30012f8361 |
| ihost_uuid | 3b315241-d54f-499b-8566-a6ed7d2d6b39 |
| idisk_uuid | fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc |
| ipv_uuid | c571653b-1d91-4299-adea-1b24f86cb898 |
| status | In-Use |
| created_at | 2017-09-07T19:53:23.743734+00:00 |
| updated_at | 2017-09-07T20:06:06.914404+00:00 |
+-------------+--------------------------------------------------+

@@ -0,0 +1,35 @@
.. nen1590589232375
.. _view-details-for-a-physical-volume:
==================================
View Details for a Physical Volume
==================================
You can view details for a physical volume using the
:command:`system host-pv-show` command.
.. rubric:: |context|
The syntax of the command is:
.. code-block:: none
system host-pv-show <hostname> <uuid>
where:
**<hostname>**
is the name or ID of the host.
**<uuid>**
is the uuid of the physical volume.
For example, to view details for a physical volume on compute-1, do the
following:
.. code-block:: none
~(keystone_admin)$ system host-pv-show compute-1 9f93c549-e26c-4d4c-af71-fb84e3fcae63
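If you do not know the UUID, first list the physical volumes on the host
and copy the **uuid** value from the output:

.. code-block:: none

   ~(keystone_admin)$ system host-pv-list compute-1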

@@ -0,0 +1,32 @@
.. vpi1552679480629
.. _view-storage-utilization-using-horizon:
======================================
View Storage Utilization Using Horizon
======================================
You can view storage utilization in the Horizon Web interface.
.. rubric:: |context|
The storage utilization shows the free, used and total capacity for the
system, as well as storage I/O throughput.
For more information on per-host storage, see |node-doc|: :ref:`Storage Tab
<storage-tab>`.
.. rubric:: |proc|
#. Navigate to **Admin** \> **Platform** \> **Storage Overview** in Horizon.
In the following example screen, two controllers on an AIO-Duplex
system are configured with storage with Ceph OSDs **osd.0** through
**osd.5**.
.. image:: ../figures/gzf1569521230362.png
Rank is evaluated and assigned when a monitor is added to the cluster. It
is based on the IP address and port assigned.

@@ -0,0 +1,58 @@
.. rjs1590523169603
.. _work-with-disk-partitions:
=========================
Work with Disk Partitions
=========================
You can use disk partitions to provide space for local volume groups.
You can create, modify, and delete partitions from the Horizon Web
interface or the |CLI|.
To use |prod-os|, select **Admin** \> **Platform** \> **Host Inventory**,
and then click the host name to open the Host Details page. On the Host
Details page, select the Storage tab. For more information, refer to the
host provisioning sections in the OpenStack Installation guide.
The following restrictions apply:
.. _work-with-disk-partitions-ul-mkv-pgx-5lb:
- Logical volumes are not supported. All user-created partitions are
implemented as physical volumes.
- You cannot specify a start location. Each new partition is created
using the first available location on the disk.
- You can modify or delete only the last partition on a disk.
- You can increase the size of a partition, but you cannot decrease the
size.
- You cannot delete a partition while it is in use \(that is, while its
physical volume is assigned to a local volume group\).
- User-created partitions are not supported for storage hosts.
- Partition operations on a host are limited to one operation at a time.
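A |CLI| sketch of the basic partition workflow is shown below. The host
name, disk UUID, and size are placeholders, and the exact arguments should
be confirmed with :command:`system help host-disk-partition-add` for your
release:

.. code-block:: none

   ~(keystone_admin)$ system host-disk-list compute-1
   ~(keystone_admin)$ system host-disk-partition-add compute-1 <disk_uuid> <size_in_gib>
   ~(keystone_admin)$ system host-disk-partition-list compute-1
   ~(keystone_admin)$ system host-disk-partition-delete compute-1 <partition_uuid>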
.. seealso::
- :ref:`Identify Space Available for Partitions
<identify-space-available-for-partitions>`
- :ref:`List Partitions <list-partitions>`
- :ref:`View Details for a Partition <view-details-for-a-partition>`
- :ref:`Add a Partition <add-a-partition>`
- :ref:`Increase the Size of a Partition
<increase-the-size-of-a-partition>`
- :ref:`Delete a Partition <delete-a-partition>`

@@ -0,0 +1,50 @@
.. zqw1590583956872
.. _work-with-local-volume-groups:
=============================
Work with Local Volume Groups
=============================
You can use the |prod-long| Horizon Web interface or the |CLI| to add local
volume groups and to adjust their settings.
.. rubric:: |context|
To manage the physical volumes that support local volume groups, see
:ref:`Work with Physical Volumes <work-with-physical-volumes>`.
.. rubric:: |proc|
#. Lock the host.
.. code-block:: none
~(keystone_admin)$ system host-lock <hostname>
where:
**<hostname>**
is the name or ID of the host.
#. Open the Storage page for the host.
#. Select **Admin** \> **Platform** \> **Host Inventory**.
#. Click the name of the host to open the Host Details page.
#. Select the **Storage** tab.
#. Click the Name of the group in the **Local Volume Groups** list.
#. Select the Parameters tab on the Local Volume Group Detail page.
You can now review and modify the parameters for the local volume group.
.. image:: ../figures/qig1590585618135.png
:width: 550
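The same information is available from the |CLI|. The following is a
hedged example for the **nova-local** group on a worker host; the exact
output may vary by release:

.. code-block:: none

   ~(keystone_admin)$ system host-lvg-list <hostname>
   ~(keystone_admin)$ system host-lvg-show <hostname> nova-local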

@@ -0,0 +1,34 @@
.. yyw1590586744573
.. _work-with-physical-volumes:
==========================
Work with Physical Volumes
==========================
Physical volumes provide storage for local volume groups on controller or
worker hosts. You can work with them to configure local volume groups.
You can add, delete, and review physical volumes using the |CLI| or OpenStack
Horizon Web interface.
To use OpenStack Horizon, select **Admin** \> **Platform** \> **Host
Inventory**, and then click the host name to open the Host Details page. On
the Host Details page, select the Storage tab.
Physical volumes are created on available disks or partitions. As each
physical volume is created, it is included in an existing local volume group.
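As a brief |CLI| sketch, a physical volume can be added to an existing
group on a locked host as follows; the group name and UUID placeholders
are examples only:

.. code-block:: none

   ~(keystone_admin)$ system host-pv-add compute-1 nova-local <disk_or_partition_uuid>
   ~(keystone_admin)$ system host-pv-list compute-1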
.. seealso::
- :ref:`Add a Physical Volume <add-a-physical-volume>`
- :ref:`List Physical Volumes <list-physical-volumes>`
- :ref:`View Details for a Physical Volume
<view-details-for-a-physical-volume>`
- :ref:`Delete a Physical Volume <delete-a-physical-volume>`