Storage Update
Signed-off-by: Rafael Jardim <rafaeljordao.jardim@windriver.com>
Change-Id: Ic8eea41e912e52ddebc5ed9dca62e8d4f9255b09

parent fad2ab40ae
commit 2e74ccd0b7

New file: doc/source/_includes/configure-external-netapp.rest
@ -11,6 +11,7 @@

.. |prod-long| replace:: StarlingX
.. |prod-os| replace:: StarlingX OpenStack
.. |prod-dc| replace:: Distributed Cloud
.. |prod-p| replace:: StarlingX Platform

.. Guide names; will be formatted in italics by default.

.. |node-doc| replace:: :title:`StarlingX Node Configuration and Management`

@ -89,6 +89,7 @@

.. |ToR| replace:: :abbr:`ToR (Top-of-Rack)`
.. |UDP| replace:: :abbr:`UDP (User Datagram Protocol)`
.. |UEFI| replace:: :abbr:`UEFI (Unified Extensible Firmware Interface)`
.. |UUID| replace:: :abbr:`UUID (Universally Unique Identifier)`
.. |VF| replace:: :abbr:`VF (Virtual Function)`
.. |VFs| replace:: :abbr:`VFs (Virtual Functions)`
.. |VLAN| replace:: :abbr:`VLAN (Virtual Local Area Network)`
@ -9,8 +9,6 @@ Add a Partition

You can add a partition using the :command:`system host-disk-partition-add`
command.

The syntax for the command is:

.. code-block:: none

@ -23,11 +21,13 @@ where:

   is the host name or ID.

**<disk>**
   is the disk path or |UUID|.

**<size>**
   is the partition size in MiB.

.. rubric:: |proc|

For example, to set up a 512 MiB partition on compute-1, do the following:

.. code-block:: none
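The example command elided above is presumably of the following form, per the
syntax shown earlier (the disk UUID is a sample value):

.. code-block:: none

   ~(keystone_admin)$ system host-disk-partition-add compute-1 fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc 512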
@ -32,13 +32,13 @@ monitors run on **controller-0**, **controller-1**, and **storage-0** only\).

.. code-block:: none

   ~(keystone_admin)$ system host-disk-list storage-3
   +--------------------------------------+-------------+------------+-------------+------------------+
   | uuid                                 | device_node | device_num | device_type | journal_size_gib |
   +--------------------------------------+-------------+------------+-------------+------------------+
   | ba785ad3-8be7-3654-45fd-93892d7182da | /dev/sda    | 2048       | HDD         | 51200            |
   | e8785ad3-98sa-1234-32ss-923433dd82da | /dev/sdb    | 2064       | HDD         | 10240            |
   | ae885ad3-8cc7-4103-84eb-9333ff3482da | /dev/sdc    | 2080       | SSD         | 8192             |
   +--------------------------------------+-------------+------------+-------------+------------------+

#. Create a journal function.

@ -46,7 +46,7 @@ monitors run on **controller-0**, **controller-1**, and **storage-0** only\).

   .. code-block:: none

      ~(keystone_admin)]$ system host-stor-add <host_name> journal <device_uuid>

   where <host\_name> is the name of the storage host \(for example,
   storage-3\), and <device\_uuid> identifies an SSD.

@ -55,8 +55,9 @@ monitors run on **controller-0**, **controller-1**, and **storage-0** only\).

   .. code-block:: none

      ~(keystone_admin)]$ system host-stor-add storage-3 journal ae885ad3-8be7-4103-84eb-93892d7182da

      +------------------+--------------------------------------+
      | Property         | Value                                |
      +------------------+--------------------------------------+
      | osdid            | None                                 |
@ -73,12 +74,13 @@ monitors run on **controller-0**, **controller-1**, and **storage-0** only\).

      +------------------+--------------------------------------+

#. Update one or more |OSDs| to use the journal function.

   .. code-block:: none

      ~(keystone_admin)$ system host-stor-update <osd_uuid> --journal-location <journal_function_uuid> [--journal-size <size_in_gib>]

   For example:
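A sketch of what the elided example presumably looks like (the UUIDs are
sample values taken from listings elsewhere in this guide, and the 2 GiB
journal size is illustrative):

.. code-block:: none

   # sample UUIDs and size, for illustration only
   ~(keystone_admin)$ system host-stor-update fc7bdc40-7d5e-4e28-86be-f8c80f5c0c42 --journal-location e6391e2-8564-4f4d-8665-681f73d13dfb --journal-size 2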
@ -6,26 +6,28 @@

Configure an External Netapp Deployment as the Storage Backend
================================================================

Configure an external Netapp Trident deployment as the storage backend after
system installation, using a |prod|-provided ansible playbook.

.. rubric:: |prereq|

|prod-long| must be installed and fully deployed before performing this
procedure.

.. xbooklink See the :ref:`Installation Overview <installation-overview>`
   for more information.

.. rubric:: |proc|

#. Configure the storage network.

   .. only:: starlingx

      Follow the steps below to configure the storage network.

   .. only:: partner

      .. include:: ../../_includes/configure-external-netapp.rest

#. If you have not done so already, create an address pool for the
@ -52,13 +54,13 @@ playbook.

#. For each host in the system, do the following:

   #. Lock the host.

      .. code-block:: none

         (keystone_admin)$ system host-lock <hostname>

   #. Create an interface using the address pool.

      For example:

@ -66,7 +68,7 @@ playbook.

         (keystone_admin)$ system host-if-modify -n storage0 -c platform --ipv4-mode static --ipv4-pool storage-pool controller-0 enp0s9

   #. Assign the interface to the network.

      For example:

@ -74,7 +76,7 @@ playbook.

         (keystone_admin)$ system interface-network-assign controller-0 storage0 storage-net

   #. Unlock the system.

      .. code-block:: none
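The elided unlock command is presumably of the following form, mirroring the
lock step above:

.. code-block:: none

   (keystone_admin)$ system host-unlock <hostname>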
@ -193,7 +195,8 @@ playbook.

You can add multiple backends and/or storage classes.

.. note::
   To use IPv6 addressing, you must add the following to your configuration:

.. code-block:: none
@ -203,6 +206,18 @@ playbook.

`https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/ontap.html
<https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/ontap.html>`__.

.. note::
   By default, Netapp is configured to have **777** as unixPermissions.
   |prod| recommends changing these settings to make it more secure, for
   example, **"unixPermissions": "755"**. Ensure that the right permissions
   are used, and there is no conflict with container security.

   Do NOT use **777** as **unixPermissions** to configure an external
   Netapp deployment as the storage backend. For more information,
   contact Netapp at `https://www.netapp.com/
   <https://www.netapp.com/>`__.
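A sketch of where this setting might appear in a Trident ``ontap-nas`` backend
definition (the field names and placement are assumptions based on Trident's
backend configuration format, not values mandated by |prod|):

.. code-block:: none

   {
     "version": 1,
     "storageDriverName": "ontap-nas",
     "backendName": "nas-backend",
     "defaults": {
       "unixPermissions": "755"
     }
   }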
#. Run the playbook.

   The following example uses the ``-e`` option to specify a customized
@ -0,0 +1,243 @@

.. clb1615317605723

.. _configure-ceph-file-system-for-internal-ceph-storage-backend:

============================================================
Configure Ceph File System for Internal Ceph Storage Backend
============================================================

CephFS \(Ceph File System\) is a highly available, multi-use, performant file
store for a variety of applications, built on top of Ceph's Distributed Object
Store \(RADOS\).

.. rubric:: |context|

CephFS provides the following functionality:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-h2b-h1k-x4b:

-  Enabled by default \(along with existing Ceph RBD\)

-  Highly available, multi-use, performant file storage

-  Scalability using a separate RADOS pool for the file's metadata

-  Metadata using Metadata Servers \(MDS\) that provide high availability and
   scalability

-  Deployed in HA configurations for all |prod| deployment options

-  Integrates **cephfs-provisioner** supporting Kubernetes **StorageClass**

-  Enables configuration of:

   -  **PersistentVolumeClaim** \(|PVC|\) using **StorageClass** and
      ReadWriteMany access mode

   -  Two or more application pods mounting |PVC| and reading/writing data to
      it

CephFS is configured automatically when a Ceph backend is enabled and provides
a Kubernetes **StorageClass**. Once enabled, every node in the cluster that
serves as a Ceph monitor will also be configured as a CephFS Metadata Server
\(MDS\). Creation of the CephFS pools, filesystem initialization, and creation
of Kubernetes resources is done by the **platform-integ-apps** application,
using the **cephfs-provisioner** Helm chart.

When applied, **platform-integ-apps** creates two Ceph pools for each storage
backend configured, one for CephFS data and a second pool for metadata:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-jp2-yn2-x4b:

-  **CephFS data pool**: The pool name for the default storage backend is
   **kube-cephfs-data**

-  **Metadata pool**: The pool name is **kube-cephfs-metadata**

When a new storage backend is created, a new CephFS data pool will be
created with the name **kube-cephfs-data-** \<storage\_backend\_name\>, and
the metadata pool will be created with the name
**kube-cephfs-metadata-** \<storage\_backend\_name\>. The default
filesystem name is **kube-cephfs**.

When a new storage backend is created, a new filesystem will be created
with the name **kube-cephfs-** \<storage\_backend\_name\>.

For example, if the user adds a storage backend named 'test',
**cephfs-provisioner** will create the following pools:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-i3w-h1f-x4b:

-  kube-cephfs-data-test

-  kube-cephfs-metadata-test

Also, the application **platform-integ-apps** will create a filesystem
**kube-cephfs-test**.

If you list all the pools in a cluster with a 'test' storage backend, you
should see four pools created by **cephfs-provisioner** using
**platform-integ-apps**. Use the following command to list the CephFS |OSD|
pools created.

.. code-block:: none

   $ ceph osd lspools

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-nnv-lr2-x4b:

-  kube-rbd

-  kube-rbd-test

-  **kube-cephfs-data**

-  **kube-cephfs-data-test**

-  **kube-cephfs-metadata**

-  **kube-cephfs-metadata-test**

Use the following command to list Ceph File Systems:

.. code-block:: none

   $ ceph fs ls
   name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]
   name: kube-cephfs-silver, metadata pool: kube-cephfs-metadata-silver, data pools: [kube-cephfs-data-silver ]

In a Kubernetes cluster, :command:`cephfs-provisioner` creates a
**StorageClass** for each storage backend present.

These **StorageClass** resources should be used to create
**PersistentVolumeClaim** resources in order to allow pods to use CephFS. The
default **StorageClass** resource is named **cephfs**, and additional resources
are created with the name \<storage\_backend\_name\> **-cephfs** for each
additional storage backend created.

For example, when listing **StorageClass** resources in a cluster that is
configured with a storage backend named 'test', the following storage classes
are created:

.. code-block:: none

   $ kubectl get sc
   NAME                PROVISIONER       RECLAIM..  VOLUME..   ALLOWVOLUME..  AGE
   cephfs              ceph.com/cephfs   Delete     Immediate  false          65m
   general (default)   ceph.com/rbd      Delete     Immediate  false          66m
   test-cephfs         ceph.com/cephfs   Delete     Immediate  false          65m
   test-general        ceph.com/rbd      Delete     Immediate  false          66m

All Kubernetes resources \(pods, StorageClasses, PersistentVolumeClaims,
configmaps, etc.\) used by the provisioner are created in the **kube-system**
namespace.

.. note::
   Multiple Ceph file systems are not enabled by default in the cluster. You
   can enable them manually, for example, using the command
   :command:`ceph fs flag set enable_multiple true --yes-i-really-mean-it`.

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-section-dq5-wgk-x4b:

-------------------------------
Persistent Volume Claim \(PVC\)
-------------------------------

.. rubric:: |context|

If you need to create a Persistent Volume Claim, you can create it using
**kubectl**. For example:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ol-lrh-pdf-x4b:

#. Create a file named **my\_pvc.yaml**, and add the following content:

   .. code-block:: none

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: claim1
        namespace: kube-system
      spec:
        storageClassName: cephfs
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi

#. To apply the updates, use the following command:

   .. code-block:: none

      $ kubectl apply -f my_pvc.yaml

#. After the |PVC| is created, use the following command to see the |PVC|
   bound to the existing **StorageClass**.

   .. code-block:: none

      $ kubectl get pvc -n kube-system
      NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      claim1   Bound    pvc..    1Gi        RWX            cephfs

#. The |PVC| is automatically provisioned by the **StorageClass**, and a
   persistent volume is created. Use the following command to list the
   persistent volumes.

   .. code-block:: none

      $ kubectl get pv -n kube-system
      NAME      CAPACITY   ACCESS..   RECLAIM..   STATUS   CLAIM                STORAGE..   REASON   AGE
      pvc-5..   1Gi        RWX        Delete      Bound    kube-system/claim1   cephfs               26s

#. Create Pods to use the |PVC|. Create a file my\_pod.yaml:

   .. code-block:: none

      kind: Pod
      apiVersion: v1
      metadata:
        name: test-pod
        namespace: kube-system
      spec:
        containers:
        - name: test-pod
          image: gcr.io/google_containers/busybox:1.24
          command:
          - "/bin/sh"
          args:
          - "-c"
          - "touch /mnt/SUCCESS && exit 0 || exit 1"
          volumeMounts:
          - name: pvc
            mountPath: "/mnt"
        restartPolicy: "Never"
        volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: claim1

#. Apply the inputs to the **my\_pod.yaml** file, using the following command.

   .. code-block:: none

      $ kubectl apply -f my_pod.yaml

For more information on Persistent Volume Support, see :ref:`About Persistent
Volume Support <about-persistent-volume-support>`, and |usertasks-doc|:
:ref:`Creating Persistent Volume Claims
<kubernetes-user-tutorials-creating-persistent-volume-claims>`.
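As a quick check \(a sketch; the resource names follow the manifests above,
and the exact output will vary\), the test pod should reach the Completed
state once it has written to the CephFS-backed volume:

.. code-block:: none

   $ kubectl get pod test-pod -n kube-system
   NAME       READY   STATUS      RESTARTS   AGE
   test-pod   0/1     Completed   0          30s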
@ -6,17 +6,7 @@

Configure Netapps Using a Private Docker Registry
===================================================

Use the ``docker_registries`` parameter to pull from the local registry rather
than public ones.

You must first push the files to the local registry.
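A sketch of what such an override might look like (the registry names and the
exact override structure are assumptions, based on the format used for other
|prod| ansible overrides; adjust to your local registry):

.. code-block:: none

   docker_registries:
     docker.io:
       url: registry.local:9001/docker.io
     quay.io:
       url: registry.local:9001/quay.io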
@ -119,7 +119,7 @@ following command increases the scratch filesystem size to 10 GB:

.. code-block:: none

   ~(keystone_admin)]$ system host-fs-modify controller-1 scratch=10
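You can verify the change with :command:`system host-fs-list` (a sketch;
output not shown):

.. code-block:: none

   ~(keystone_admin)]$ system host-fs-list controller-1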
**Backup Storage**
@ -26,9 +26,9 @@ where:

   is the host name or ID.

**<partition>**
   is the partition device path or |UUID|.

For example, to delete a partition with the |UUID|
9f93c549-e26c-4d4c-af71-fb84e3fcae63 from compute-1, do the following.

.. code-block:: none

@ -50,6 +50,6 @@ command.

.. code-block:: none

   ~(keystone_admin)]$ system host-pv-delete compute-1 9f93c549-e26c-4d4c-af71-fb84e3fcae63
@ -22,8 +22,9 @@ command includes both the **device\_node** and the **device\_path**.

.. code-block:: none

   ~(keystone_admin)]$ system host-disk-show controller-0 1722b081-8421-4475-a6e8-a26808cae031

   +-------------+--------------------------------------------+
   | Property    | Value                                      |
   +-------------+--------------------------------------------+
@ -215,6 +215,4 @@ storage class.

.. code-block:: none

   ~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n default -o yaml | grep -v '^\s*namespace:\s' | kubectl apply -n <namespace> -f -
@ -6,17 +6,23 @@

Identify Space Available for Partitions
=======================================

Use the :command:`system host-disk-list` command to identify space available
for partitions.

For example, run the following command to show space available on compute-1.

.. code-block:: none

   ~(keystone_admin)$ system host-disk-list compute-1
   +--------------------------------------+-------------+------------+-------------+----------+---------------+-----+--------------------+--------------------------------------------+
   | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id          | device_path                                |
   +--------------------------------------+-------------+------------+-------------+----------+---------------+-----+--------------------+--------------------------------------------+
   | 2f71f715-ffc8-40f1-b099-f97b8c00e9cc | /dev/sda    | 2048       | SSD         | 447.13   | 357.816       | N/A | PHWA6062001U480FGN | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0 |
   | 5331459b-4eff-4d1a-83ea-555acd198bb6 | /dev/sdb    | 2064       | SSD         | 447.13   | 0.0           | N/A | PHWA6282051N480FGN | /dev/disk/by-path/pci-0000:00:1f.2-ata-2.0 |
   +--------------------------------------+-------------+------------+-------------+----------+---------------+-----+--------------------+--------------------------------------------+
@ -73,33 +73,55 @@ The default **rootfs** device is **/dev/sda**.

      | updated_at    | None                                           |
      +---------------+------------------------------------------------+

#. Check the disk space on the new partition, once it is created.

   .. code-block:: none

      ~(keystone_admin)$ system host-disk-partition-list 1
      ---------------------------------------------------------------------------------
      uuid     device_path           device_node  type_guid   type_name      size_gib  status
      ---------------------------------------------------------------------------------
      69b1b..  /dev/disk/by-path/..  /dev/sda6    ba5eba11..  LVM Phy.Vol..  22.0      Ready
      ---------------------------------------------------------------------------------

#. Assign the unused partition on **controller-0** as a physical volume to
   the **cgts-vg** volume group. For example:

   .. code-block:: none

      ~(keystone_admin)$ system host-pv-add controller-0 cgts-vg 69b1bb35-7326-4bcc-94d7-bef72f064f46
      +---------------------------+---------------------------------------+
      | Property                  | Value                                 |
      +---------------------------+---------------------------------------+
      | uuid                      | 626c450f-4472-485c-bae7-791768630e1e  |
      | pv_state                  | adding                                |
      | pv_type                   | partition                             |
      | disk_or_part_uuid         | 69b1bb35-7326-4bcc-94d7-bef72f064f46  |
      | disk_or_part_device_node  | /dev/sda6                             |
      | disk_or_part_device_path  | /dev/disk/by-path/pci-0000:18:00.     |
      |                           | 0-scsi-0:2:0:0-part6                  |
      | lvm_pv_name               | /dev/sda6                             |
      | lvm_vg_name               | cgts-vg                               |
      | lvm_pv_uuid               | None                                  |
      | lvm_pv_size_gib           | 0.0                                   |
      | lvm_pe_total              | 0                                     |
      | lvm_pe_alloced            | 0                                     |
      | ihost_uuid                | e579a4af-108b-4dc9-9975-0aa089d530d7  |
      | created_at                | 2020-12-09T17:22:19.666250+00:00      |
      | updated_at                | None                                  |
      +---------------------------+---------------------------------------+

#. To assign the unused partition on **controller-1** as a physical volume to
   the **cgts-vg** volume group, **swact** the hosts and repeat the procedure
   on **controller-1**.

.. rubric:: |postreq|

After increasing the **cgts-vg** volume size, you can provision the filesystem
storage. For more information about increasing filesystem allotments using the
|CLI|, or the Horizon Web interface, see:

.. _increase-the-size-for-lvm-local-volumes-on-controller-filesystems-ul-mxm-f1c-nmb:
@ -11,9 +11,8 @@ host-disk-partition-modify` command.

.. rubric:: |context|

You can modify only the last partition on a disk \(indicated by **part** in the
device path; for example, /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part6\).

You cannot decrease the size of a partition.

@ -34,12 +33,12 @@ where:

**<partition>**
   is the partition device path or UUID.

For example, to change the size of a partition on compute-1 to 726 MiB, do
the following:

.. code-block:: none

   ~(keystone_admin)]$ system host-disk-partition-modify -s 726 compute-1 a259e898-6390-44ba-a750-e0cb1579d8e0
   +-------------+--------------------------------------------------+
   | Property    | Value                                            |
   +-------------+--------------------------------------------------+
@ -60,6 +60,7 @@ Storage Backends

   storage-backends
   configure-the-internal-ceph-storage-backend
   configure-ceph-file-system-for-internal-ceph-storage-backend
   configure-an-external-netapp-deployment-as-the-storage-backend
   configure-netapps-using-a-private-docker-registry
   uninstall-the-netapp-backend
@ -6,8 +6,7 @@

List Partitions
===============

To list partitions, use the :command:`system host-disk-partition-list` command.

.. rubric:: |context|

@ -17,23 +16,21 @@ The command has the following format:

   system host-disk-partition-list [--nowrap] [--disk [disk_uuid]] <host>

<host> is the hostname or ID.

For example, run the following command to list the partitions on a compute-1
disk.

.. code-block:: none

   ~(keystone_admin)$ system host-disk-partition-list --disk 84b1ba35-addb-4fb7-9495-c47c3cb10377 compute-1
   +--------------------------------------+--------------------------------------+-------------+--------------------------------------+--------------+----------+--------+
   | uuid                                 | device_path                          | device_node | type_guid                            | type_name    | size_gib | status |
   +--------------------------------------+--------------------------------------+-------------+--------------------------------------+--------------+----------+--------+
   | 921c07dc-a79d-4104-a6a8-34691120514e | /dev/disk/by-path/pci-0000:04:00.0   | /dev/sda5   | ba5eba11-0000-1111-2222-000000000001 | LVM Physical | 22.0     | In-Use |
   |                                      | -sas-0x5001e6768017d000-lun-0-part5  |             |                                      | Volume       |          |        |
   +--------------------------------------+--------------------------------------+-------------+--------------------------------------+--------------+----------+--------+
@ -16,7 +16,7 @@ The syntax of the command is:

   system host-pv-list <hostname>

<hostname> is the name or ID of the host.

For example, to list physical volumes on compute-1, do the following:
@ -2,11 +2,11 @@

.. rtm1590585833668

.. _local-volume-groups-cli-commands:

================================
Local Volume Groups CLI Commands
================================

You can use |CLI| commands to manage local volume groups.

.. _local-volume-groups-cli-commands-simpletable-kfn-qwk-nx:
@ -42,11 +42,9 @@ You can use CLI commands to manage local volume groups.

   |                                                       |                                                       |
   +-------------------------------------------------------+-------------------------------------------------------+

**<instance\_backing>**
   is the storage method for the local volume group \(image or remote\). The
   remote option is valid only for systems with dedicated storage.

**<concurrent\_disk\_operations>**
   is the number of I/O intensive disk operations, such as glance image

@ -61,3 +59,4 @@ where:

**<groupname>**
   is the name or ID of the local volume group.
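These parameters are typically set with :command:`system host-lvg-modify`; a
sketch follows (the ``-b`` and ``-c`` flags are assumed to map to the instance
backing and concurrent disk operations described above, and the host and group
names are illustrative):

.. code-block:: none

   # illustrative host and group names; flag mapping is an assumption
   ~(keystone_admin)$ system host-lvg-modify -b image -c 2 compute-0 nova-local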
@ -6,12 +6,12 @@

Provision Storage on a Storage Host Using the CLI
=================================================

You can use the command line to configure the object storage devices \(|OSDs|\)
on storage hosts.

.. rubric:: |context|

For more about |OSDs|, see |stor-doc|: :ref:`Storage on Storage Hosts
<storage-hosts-storage-on-storage-hosts>`.

.. xbooklink
@ -21,7 +21,7 @@ For more about OSDs, see |stor-doc|: :ref:`Storage on Storage Hosts

.. rubric:: |prereq|

To create or edit an |OSD|, you must lock the storage host. The system must
have at least two other unlocked hosts with Ceph monitors. \(Ceph monitors
run on **controller-0**, **controller-1**, and **storage-0** only\).
@ -33,20 +33,20 @@ To use a custom storage tier, you must create the tier first.

   .. code-block:: none

      ~(keystone_admin)]$ system host-disk-list storage-3
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------------------------------------+
      | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | device_path                                |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------------------------------------+
      | ba751efe-33sd-as34-7u78-df3416875896 | /dev/sda    | 2048       | HDD         | 51.2     | 0             | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
      | e8751efe-6101-4d1c-a9d3-7b1a16871791 | /dev/sdb    | 2064       | HDD         | 10.2     | 10.1          | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
      | ae851efe-87hg-67gv-9ouj-sd3s16877658 | /dev/sdc    | 2080       | SSD         | 8.1      | 8.0           | /dev/disk/by-path/pci-0000:00:0d.0-ata-4.0 |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------------------------------------+

#. List the available storage tiers.

   .. code-block:: none

      ~(keystone_admin)]$ system storage-tier-list ceph_cluster
      +--------------------------------------+---------+--------+----------------+
      | uuid                                 | name    | status | backend_using  |
      +--------------------------------------+---------+--------+----------------+
@ -54,7 +54,7 @@ To use a custom storage tier, you must create the tier first.

      | e9ddc040-7d5e-4e28-86be-f8c80f5c0c42 | storage | in-use | f1151da5-bd... |
      +--------------------------------------+---------+--------+----------------+

#. Create a storage function \(an |OSD|\).

   .. note::
      You cannot add a storage function to the root disk \(/dev/sda in this
@ -62,17 +62,17 @@ To use a custom storage tier, you must create the tier first.

   .. code-block:: none

      ~(keystone_admin)]$ system host-stor-add
      usage: system host-stor-add [--journal-location [<journal_location>]]
                                  [--journal-size[<size of the journal MiB>]]
                                  [--tier-uuid[<storage tier uuid>]]
                                  <hostname or id> [<function>] <idisk_uuid>

   where <idisk\_uuid> identifies an |OSD|. For example:

   .. code-block:: none

      ~(keystone_admin)]$ system host-stor-add storage-3 e8751efe-6101-4d1c-a9d3-7b1a16871791

      +------------------+--------------------------------------------------+
      | Property         | Value                                            |
@ -101,35 +101,34 @@ To use a custom storage tier, you must create the tier first.

   specify a different size in GiB.

   If multiple journal functions exist \(corresponding to multiple
   dedicated |SSDs|\), then you must include the ``--journal-location``
   option and specify the journal function to use for the |OSD|. You can
   obtain the UUIDs for journal functions using the :command:`system
   host-stor-list` command:

   .. code-block:: none

      ~(keystone_admin)]$ system host-stor-list storage-3

      +--------------------------------------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+
      | uuid                                 | function | osdid | capabilities | idisk_uuid    | journal_path             | journal_size_gib | tier_name |
      +--------------------------------------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+
      | e6391e2-8564-4f4d-8665-681f73d13dfb  | journal  | None  | {}           | ae8b1434-d... | None                     | 0                |           |
      | fc7bdc40-7d5e-4e28-86be-f8c80f5c0c42 | osd      | 3     | {}           | e8751efe-6... | /dev/disk/by-path/pci... | 1.0              | storage   |
      +--------------------------------------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+

   If no journal function exists when the storage function is created, the
   Ceph journal for the |OSD| is collocated on the |OSD|.

   If an |SSD| or |NVMe| drive is available on the host, you can add a
   journal function. For more information, see :ref:`Add SSD-Backed
   Journals Using the CLI <add-ssd-backed-journals-using-the-cli>`. You
   can update the |OSD| to use a journal on the |SSD| by referencing the
   journal function |UUID|, as follows:

   .. code-block:: none

      ~(keystone_admin)]$ system host-stor-update <osd_uuid> --journal-location <journal_function_uuid> [--journal-size <size>]

.. rubric:: |postreq|
@ -57,19 +57,19 @@ obtain information about replication groups:

.. code-block:: none

   ~(keystone_admin)]$ system cluster-list
   +--------------------------------------+--------------+------+--------------+------------------+
   | uuid                                 | cluster_uuid | type | name         | deployment_model |
   +--------------------------------------+--------------+------+--------------+------------------+
   | 335766eb-8564-4f4d-8665-681f73d13dfb | None         | ceph | ceph_cluster | controller-nodes |
   +--------------------------------------+--------------+------+--------------+------------------+

.. code-block:: none

   ~(keystone_admin)]$ system cluster-show 335766eb-968e-44fc-9ca7-907f93c772a1

   +--------------------+----------------------------------------+
   | Property           | Value                                  |
@ -116,6 +116,9 @@ For more information about Trident, see

-  :ref:`Configure the Internal Ceph Storage Backend
   <configure-the-internal-ceph-storage-backend>`

-  :ref:`Configure Ceph File System for Internal Ceph Storage Backend
   <configure-ceph-file-system-for-internal-ceph-storage-backend>`

-  :ref:`Configure an External Netapp Deployment as the Storage Backend
   <configure-an-external-netapp-deployment-as-the-storage-backend>`
@ -29,7 +29,7 @@ The following steps create two 1Gb persistent volume claims.

   .. code-block:: none

      ~(keystone_admin)]$ cat <<EOF > claim1.yaml
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:

@ -47,11 +47,10 @@ The following steps create two 1Gb persistent volume claims.

   .. code-block:: none

      ~(keystone_admin)]$ kubectl apply -f claim1.yaml

      persistentvolumeclaim/test-claim1 created

#. Create the **test-claim2** persistent volume claim.

@ -61,7 +60,7 @@ The following steps create two 1Gb persistent volume claims.

   .. code-block:: none

      ~(keystone_admin)]$ cat <<EOF > claim2.yaml
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:

@ -80,12 +79,9 @@ The following steps create two 1Gb persistent volume claims.

   .. code-block:: none

      ~(keystone_admin)]$ kubectl apply -f claim2.yaml
      persistentvolumeclaim/test-claim2 created

.. rubric:: |result|

Two 1Gb persistent volume claims have been created. You can view them with

@ -93,8 +89,10 @@ the following command.

.. code-block:: none

   ~(keystone_admin)]$ kubectl get persistentvolumeclaims
   NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   test-claim1   Bound    pvc-aaca..   1Gi        RWO            general        2m56s
   test-claim2   Bound    pvc-e93f..   1Gi        RWO            general        68s

For more information on using CephFS for internal Ceph backends, see
:ref:`Using CephFS for Internal Ceph Storage Backend <configure-ceph-file-system-for-internal-ceph-storage-backend>`.
@ -22,24 +22,24 @@ You can change the space allotted for the Ceph monitor, if required.

.. code-block:: none

   ~(keystone_admin)]$ system ceph-mon-modify <controller> ceph_mon_gib=<size>

where ``<size>`` is the size in GiB to use for the Ceph monitor.
The value must be between 21 and 40 GiB.

.. code-block:: none

   ~(keystone_admin)]$ system ceph-mon-modify controller-0 ceph_mon_gib=21

   +--------------------------------+-------+--------------+------------+------+
   | uuid                           | ceph_ | hostname     | state      | task |
   |                                | mon_g |              |            |      |
   |                                | ib    |              |            |      |
   +--------------------------------+-------+--------------+------------+------+
   | 069f106-4f4d-8665-681f73d13dfb | 21    | compute-0    | configured | None |
   | 4763139-4f4d-8665-681f73d13dfb | 21    | controller-1 | configured | None |
   | e39970e-4f4d-8665-681f73d13dfb | 21    | controller-0 | configured | None |
   +--------------------------------+-------+--------------+------------+------+

   NOTE: ceph_mon_gib for both controllers are changed.
@ -61,30 +61,30 @@ To list the storage backend types installed on a system:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-list

   +-------------------------------+-------------+----------+--------+--------------+----------+-------------------+
   | uuid                          | name        | backend  | state  | task         | services | capabilities      |
   +-------------------------------+-------------+----------+--------+--------------+----------+-------------------+
   | 248a106-4r54-3324-681f73d13dfb| ceph-store  | ceph     | config | resize-ceph..| None     | min_replication:1 |
   |                               |             |          |        |              |          | replication: 2    |
   | 76dd106-6yth-4356-681f73d13dfb| shared_serv | external | config | None         | glance   |                   |
   |                               | ices        |          |        |              |          |                   |
   +-------------------------------+-------------+----------+--------+--------------+----------+-------------------+

To show details for a storage backend:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-show <name>

For example:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-show ceph-store
   +----------------------+--------------------------------------+
   | Property             | Value                                |
   +----------------------+--------------------------------------+
@@ -114,7 +114,7 @@ To add a backend:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-add \
   [-s <services>] [-n <name>] [-t <tier_uuid>] \
   [-c <ceph_conf>] [--confirmed] [--ceph-mon-gib <ceph-mon-gib>] \
   <backend> [<parameter>=<value> [<parameter>=<value> ...]]
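
For instance, an ``external`` backend serving ``glance``, as shown in the
listing above, might be added as follows. The service and backend values are
illustrative only; use values supported on your system.

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-add -s glance --confirmed external
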
@@ -162,7 +162,7 @@ To modify a backend:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-modify [-s <services>] [-c <ceph_conf>] \
   <backend_name_or_uuid> [<parameter>=<value> [<parameter>=<value> ...]]
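
As an illustration only, the ``replication`` capability reported for the
``ceph-store`` backend in the listing above might be changed like this; the
backend name, parameter, and value are examples, not prescribed settings.

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-modify ceph-store replication=3
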
@@ -170,7 +170,7 @@ To delete a failed backend configuration:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-delete <backend>
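
For example, assuming a backend named ``ceph-store`` is in a failed
configuration state:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-delete ceph-store
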
@@ -190,26 +190,26 @@ To list storage tiers:

.. code-block:: none

   ~(keystone_admin)]$ system storage-tier-list ceph_cluster

   +--------------------------------+---------+--------+--------------------------------------+
   | uuid                           | name    | status | backend_using                        |
   +--------------------------------+---------+--------+--------------------------------------+
   | acc8706-6yth-4356-681f73d13dfb | storage | in-use | 649830bf-b628-4170-b275-1f0b01cfc859 |
   +--------------------------------+---------+--------+--------------------------------------+

To display information for a storage tier:

.. code-block:: none

   ~(keystone_admin)]$ system storage-tier-show ceph_cluster <tier_name>

For example:

.. code-block:: none

   ~(keystone_admin)]$ system storage-tier-show ceph_cluster storage

   +--------------+--------------------------------------+
   | Property     | Value                                |
@@ -230,7 +230,7 @@ To add a storage tier:

.. code-block:: none

   ~(keystone_admin)]$ system storage-tier-add ceph_cluster <tier_name>
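
A concrete invocation might look like the following, where ``gold`` is a
hypothetical name for the new tier:

.. code-block:: none

   ~(keystone_admin)]$ system storage-tier-add ceph_cluster gold
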

To delete a tier that is not in use by a storage backend and does not have
@@ -238,7 +238,7 @@ OSDs assigned to it:

.. code-block:: none

   ~(keystone_admin)]$ system storage-tier-delete <tier_name>
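
Continuing the hypothetical example above, an unused ``gold`` tier with no
OSDs assigned to it could then be removed with:

.. code-block:: none

   ~(keystone_admin)]$ system storage-tier-delete gold
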
@@ -253,26 +253,26 @@ storage space allotments on a host.

.. code-block:: none

   ~(keystone_admin)]$ system controllerfs-list

   +--------------------------------+------------+-----+-----------------------+-------+-----------+
   | UUID                           | FS Name    | Size| Logical Volume        | Rep.. | State     |
   |                                |            | in  |                       |       |           |
   |                                |            | GiB |                       |       |           |
   +--------------------------------+------------+-----+-----------------------+-------+-----------+
   | d0e8706-6yth-4356-681f73d13dfb | database   | 10  | pgsql-lv              | True  | available |
   | 40d8706-ssf4-4356-6814356145tf | docker-dist| 16  | dockerdistribution-lv | True  | available |
   | 20e8706-87gf-4356-681f73d13dfb | etcd       | 5   | etcd-lv               | True  | available |
   | 9e58706-sd42-4356-435673d1sd3b | extension  | 1   | extension-lv          | True  | available |
   | 55b8706-sd13-4356-681f73d16yth | platform   | 10  | platform-lv           | True  | available |
   +--------------------------------+------------+-----+-----------------------+-------+-----------+
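
If one of these allotments must grow, the corresponding controller filesystem
can typically be resized. The sketch below assumes the
:command:`system controllerfs-modify` command is available and uses an
illustrative size in GiB for the ``database`` filesystem:

.. code-block:: none

   ~(keystone_admin)]$ system controllerfs-modify database=20
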

For a system with dedicated storage:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-show ceph-store

   +----------------------+--------------------------------------+
   | Property             | Value                                |
@@ -6,8 +6,8 @@

View Details for a Partition
============================

To view details for a partition, use the **system host-disk-partition-show**
command.

.. rubric:: |context|
@@ -20,16 +20,16 @@ The syntax of the command is:

Make the following substitutions:

**<host>**
   The host name or ID.

**<partition>**
   The partition device path or UUID.

#. This example displays details for a particular partition on compute-1.

   .. code-block:: none

      ~(keystone_admin)]$ system host-disk-partition-show compute-1 a4aa3f66-ff3c-49a0-a43f-bc30012f8361

      +-------------+--------------------------------------------------+
      | Property    | Value                                            |
      +-------------+--------------------------------------------------+
@@ -30,6 +30,6 @@ following:

.. code-block:: none

   ~(keystone_admin)]$ system host-pv-show compute-1 9f93c549-e26c-4d4c-af71-fb84e3fcae63
@@ -8,8 +8,8 @@ Work with Disk Partitions

You can use disk partitions to provide space for local volume groups.

You can create, modify, and delete partitions from the Horizon Web interface
or the |CLI|.

To use |prod-os|, select **Admin** \> **Platform** \> **Host Inventory**,
and then click the host name to open the Host Details page. On the Host
@@ -20,12 +20,9 @@ To manage the physical volumes that support local volume groups, see

.. code-block:: none

   ~(keystone_admin)]$ system host-lock <hostname>

where ``<hostname>`` is the name or ID of the host.
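
For example, to lock compute-1 (the hostname is illustrative):

.. code-block:: none

   ~(keystone_admin)]$ system host-lock compute-1
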
#. Open the Storage page for the host.