Storage Update

Signed-off-by: Rafael Jardim <rafaeljordao.jardim@windriver.com>
Change-Id: Ic8eea41e912e52ddebc5ed9dca62e8d4f9255b09

parent fad2ab40ae
commit 2e74ccd0b7

New file: doc/source/_includes/configure-external-netapp.rest
@ -11,6 +11,7 @@

.. |prod-long| replace:: StarlingX
.. |prod-os| replace:: StarlingX OpenStack
.. |prod-dc| replace:: Distributed Cloud
.. |prod-p| replace:: StarlingX Platform

.. Guide names; will be formatted in italics by default.

.. |node-doc| replace:: :title:`StarlingX Node Configuration and Management`
@ -89,6 +89,7 @@

.. |ToR| replace:: :abbr:`ToR (Top-of-Rack)`
.. |UDP| replace:: :abbr:`UDP (User Datagram Protocol)`
.. |UEFI| replace:: :abbr:`UEFI (Unified Extensible Firmware Interface)`
.. |UUID| replace:: :abbr:`UUID (Universally Unique Identifier)`
.. |VF| replace:: :abbr:`VF (Virtual Function)`
.. |VFs| replace:: :abbr:`VFs (Virtual Functions)`
.. |VLAN| replace:: :abbr:`VLAN (Virtual Local Area Network)`
@ -9,8 +9,6 @@ Add a Partition

You can add a partition using the :command:`system host-disk-partition-add`
command.

.. rubric:: |context|

The syntax for the command is:

.. code-block:: none
@ -23,11 +21,13 @@ where:

   is the host name or ID.

**<disk>**
   is the disk path or |UUID|.

**<size>**
   is the partition size in MiB.

.. rubric:: |proc|

For example, to set up a 512 MiB partition on compute-1, do the following:

.. code-block:: none
@ -32,13 +32,13 @@ monitors run on **controller-0**, **controller-1**, and **storage-0** only\).

   .. code-block:: none

      ~(keystone_admin)$ system host-disk-list storage-3
      +--------------------------------------+-------------+------------+-------------+------------------+
      | uuid                                 | device_node | device_num | device_type | journal_size_gib |
      +--------------------------------------+-------------+------------+-------------+------------------+
      | ba785ad3-8be7-3654-45fd-93892d7182da | /dev/sda    | 2048       | HDD         | 51200            |
      | e8785ad3-98sa-1234-32ss-923433dd82da | /dev/sdb    | 2064       | HDD         | 10240            |
      | ae885ad3-8cc7-4103-84eb-9333ff3482da | /dev/sdc    | 2080       | SSD         | 8192             |
      +--------------------------------------+-------------+------------+-------------+------------------+

#. Create a journal function.
@ -46,7 +46,7 @@ monitors run on **controller-0**, **controller-1**, and **storage-0** only\).

   .. code-block:: none

      ~(keystone_admin)$ system host-stor-add <host_name> journal <device_uuid>

   where <host\_name> is the name of the storage host \(for example,
   storage-3\), and <device\_uuid> identifies an SSD.

@ -55,8 +55,9 @@ monitors run on **controller-0**, **controller-1**, and **storage-0** only\).

   .. code-block:: none

      ~(keystone_admin)$ system host-stor-add storage-3 journal ae885ad3-8be7-4103-84eb-93892d7182da

      +------------------+--------------------------------------+
      | Property         | Value                                |
      +------------------+--------------------------------------+
      | osdid            | None                                 |
@ -73,12 +74,13 @@ monitors run on **controller-0**, **controller-1**, and **storage-0** only\).

      +------------------+--------------------------------------+

#. Update one or more |OSDs| to use the journal function.

   .. code-block:: none

      ~(keystone_admin)$ system host-stor-update <osd_uuid> \
      --journal-location <journal_function_uuid> [--journal-size <size_in_gib>]

   For example:
@ -6,215 +6,230 @@

Configure an External Netapp Deployment as the Storage Backend
================================================================

Configure an external Netapp Trident deployment as the storage backend, after
system installation, using a |prod|-provided ansible playbook.

.. rubric:: |prereq|

|prod-long| must be installed and fully deployed before performing this
procedure.

.. xbooklink See the :ref:`Installation Overview <installation-overview>`
   for more information.

.. rubric:: |proc|

#. Configure the storage network.
   .. only:: starlingx

      If you have not created the storage network during system deployment,
      you must create it manually. Follow the steps below to configure the
      storage network.

   .. only:: partner

      .. include:: ../../_includes/configure-external-netapp.rest

   #. If you have not done so already, create an address pool for the
      storage network. This can be done at any time.

      .. code-block:: none

         system addrpool-add --ranges <start_address>-<end_address> <name_of_address_pool> <network_address> <network_prefix>

      For example:

      .. code-block:: none

         (keystone_admin)$ system addrpool-add --ranges 10.10.20.1-10.10.20.100 storage-pool 10.10.20.0 24

   #. If you have not done so already, create the storage network using
      the address pool.

      For example:

      .. code-block:: none

         (keystone_admin)$ system addrpool-list | grep storage-pool | awk '{print$2}' | xargs system network-add storage-net storage true
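      If you want to double-check the result before continuing, the following
      is a sketch; the pool and network names match the examples above, so
      adjust them if you used different names:

      .. code-block:: none

         (keystone_admin)$ system addrpool-list | grep storage-pool
         (keystone_admin)$ system network-list | grep storage-net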
   #. For each host in the system, do the following:

      #. Lock the host.

         .. code-block:: none

            (keystone_admin)$ system host-lock <hostname>

      #. Create an interface using the address pool.

         For example:

         .. code-block:: none

            (keystone_admin)$ system host-if-modify -n storage0 -c platform --ipv4-mode static --ipv4-pool storage-pool controller-0 enp0s9

      #. Assign the interface to the network.

         For example:

         .. code-block:: none

            (keystone_admin)$ system interface-network-assign controller-0 storage0 storage-net

      #. Unlock the host.

         .. code-block:: none

            (keystone_admin)$ system host-unlock <hostname>
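      After the host unlocks, you can optionally confirm that the interface is
      attached to the storage network; the following is a sketch using the host
      and interface names from the example above:

      .. code-block:: none

         (keystone_admin)$ system interface-network-list controller-0 | grep storage0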
   .. _configuring-an-external-netapp-deployment-as-the-storage-backend-mod-localhost:
#. Configure Netapp's configurable parameters and run the provided
   install\_netapp\_backend.yml ansible playbook to enable connectivity to
   Netapp as a storage backend for |prod|.

   #. Provide the Netapp backend configurable parameters in an overrides yaml
      file.

      You can make changes-in-place to your existing localhost.yml file
      or create another in an alternative location. In either case, you
      also have the option of using an ansible vault named secrets.yml
      for sensitive data. The alternative must be named localhost.yaml.

      The following parameters are mandatory:
      **ansible\_become\_pass**
         Provide the admin password.

      **netapp\_backends**
         **name**
            A name for the storage class.

         **provisioner**
            This value must be **netapp.io/trident**.

         **backendType**
            This value can be anything but must be the same as
            **storageDriverName** below.

         **version**
            This value must be 1.

         **storageDriverName**
            This value can be anything but must be the same as
            **backendType** above.

         **managementLIF**
            The management IP address for the backend logical interface.

         **dataLIF**
            The data IP address for the backend logical interface.

         **svm**
            The storage virtual machine type to use.

         **username**
            The username for authentication against the Netapp backend.

         **password**
            The password for authentication against the Netapp backend.
      The following parameters are optional:

      **trident\_setup\_dir**
         Set a staging directory for generated configuration files. The
         default is /tmp/trident.

      **trident\_namespace**
         Set this option to use an alternate Kubernetes namespace.

      **trident\_rest\_api\_port**
         Use an alternate port for the Trident REST API. The default is
         8000.

      **trident\_install\_extra\_params**
         Add extra space-separated parameters when installing trident.
      For complete listings of available parameters, see
      `https://opendev.org/starlingx/ansible-playbooks/src/commit/d05785ffd9add6553662fcab43f30bf8d9f6d2e3/playbookconfig/src/playbooks/host_vars/netapp/default.yml
      <https://opendev.org/starlingx/ansible-playbooks/src/commit/d05785ffd9add6553662fcab43f30bf8d9f6d2e3/playbookconfig/src/playbooks/host_vars/netapp/default.yml>`__
      and
      `https://opendev.org/starlingx/ansible-playbooks/src/commit/d05785ffd9add6553662fcab43f30bf8d9f6d2e3/playbookconfig/src/playbooks/roles/k8s-storage-backends/netapp/vars/main.yml
      <https://opendev.org/starlingx/ansible-playbooks/src/commit/d05785ffd9add6553662fcab43f30bf8d9f6d2e3/playbookconfig/src/playbooks/roles/k8s-storage-backends/netapp/vars/main.yml>`__.

      The following example shows a minimal configuration in
      localhost.yaml:
      .. code-block:: none

         ansible_become_pass: xx43U~a96DN*m.?
         trident_setup_dir: /tmp/trident
         netapp_k8s_storageclasses:
           - metadata:
               name: netapp-nas-backend
             provisioner: netapp.io/trident
             parameters:
               backendType: "ontap-nas"

         netapp_k8s_snapshotstorageclasses:
           - metadata:
               name: csi-snapclass
             driver: csi.trident.netapp.io
             deletionPolicy: Delete

         netapp_backends:
           - version: 1
             storageDriverName: "ontap-nas"
             backendName: "nas-backend"
             managementLIF: "10.0.0.1"
             dataLIF: "10.0.0.2"
             svm: "svm_nfs"
             username: "admin"
             password: "secret"

      This file is sectioned into **netapp\_k8s\_storageclasses**,
      **netapp\_k8s\_snapshotstorageclasses**, and **netapp\_backends**.
      You can add multiple backends and/or storage classes.
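      If you choose to keep sensitive values such as **ansible\_become\_pass**
      in an ansible vault rather than in localhost.yml, the following is a
      minimal sketch of creating the vault file; secrets.yml is the file name
      referred to above:

      .. code-block:: none

         $ ansible-vault create secrets.yml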
      .. note::
         To use IPv6 addressing, you must add the following to your
         configuration:

         .. code-block:: none

            ansible_become_pass: xx43U~a96DN*m.?
            trident_setup_dir: /tmp/trident
            netapp_k8s_storageclasses:
              - metadata:
                  name: netapp-nas-backend
                provisioner: netapp.io/trident
                parameters:
                  backendType: "ontap-nas"
            trident_install_extra_params: "--use-ipv6"

            netapp_k8s_snapshotstorageclasses:
              - metadata:
                  name: csi-snapclass
                driver: csi.trident.netapp.io
                deletionPolicy: Delete

            netapp_backends:
              - version: 1
                storageDriverName: "ontap-nas"
                backendName: "nas-backend"
                managementLIF: "10.0.0.1"
                dataLIF: "10.0.0.2"
                svm: "svm_nfs"
                username: "admin"
                password: "secret"

      For more information about configuration options, see
      `https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/ontap.html
      <https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/ontap.html>`__.

      .. note::
         By default, Netapp is configured to have **777** as
         unixPermissions. |prod| recommends changing these settings to
         make it more secure, for example, **"unixPermissions": "755"**.
         Ensure that the right permissions are used, and there is no
         conflict with container security.

         Do NOT use **777** as **unixPermissions** to configure an external
         Netapp deployment as the storage backend. For more information,
         contact Netapp, at `https://www.netapp.com/
         <https://www.netapp.com/>`__.

   #. Run the playbook.

      The following example uses the ``-e`` option to specify a customized
      location for the localhost.yml file.

      .. code-block:: none

         # ansible-playbook /usr/share/ansible/stx-ansible/playbooks/install_netapp_backend.yml -e "override_files_dir=</home/sysadmin/mynetappconfig>"
      Upon successful launch, there will be one Trident pod running on
      each node, plus an extra pod for the REST API running on one of the
      controller nodes.

#. Confirm that the pods launched successfully.
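   For example, a quick way to list the Trident pods is shown below; this is a
   sketch that assumes the default ``trident`` namespace \(use the value of
   ``trident_namespace`` instead if you overrode it\):

   .. code-block:: none

      ~(keystone_admin)$ kubectl get pods -n trident -o wide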
@ -0,0 +1,243 @@

.. clb1615317605723
.. _configure-ceph-file-system-for-internal-ceph-storage-backend:

=============================================================
Configure Ceph File System for Internal Ceph Storage Backend
=============================================================

CephFS \(Ceph File System\) is a highly available, multi-use, performant file
store for a variety of applications, built on top of Ceph's Distributed Object
Store \(RADOS\).
.. rubric:: |context|

CephFS provides the following functionality:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-h2b-h1k-x4b:

- Enabled by default \(along with existing Ceph RBD\)

- Highly available, multi-use, performant file storage

- Scalability using a separate RADOS pool for the file's metadata

- Metadata handled by Metadata Servers \(MDS\) that provide high availability
  and scalability

- Deployed in HA configurations for all |prod| deployment options

- Integrates **cephfs-provisioner** supporting Kubernetes **StorageClass**

- Enables configuration of:

  - **PersistentVolumeClaim** \(|PVC|\) using **StorageClass** and
    ReadWriteMany access mode

  - Two or more application pods mounting the |PVC| and reading/writing data to it
CephFS is configured automatically when a Ceph backend is enabled and provides
a Kubernetes **StorageClass**. Once enabled, every node in the cluster that
serves as a Ceph monitor will also be configured as a CephFS Metadata Server
\(MDS\). Creation of the CephFS pools, filesystem initialization, and creation
of Kubernetes resources is done by the **platform-integ-apps** application,
using the **cephfs-provisioner** Helm chart.
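As a quick sanity check that the application responsible for this configuration
has been applied, you can query its status; the following is a sketch \(the
application name is the one referenced above\):

.. code-block:: none

   ~(keystone_admin)$ system application-show platform-integ-apps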
When applied, **platform-integ-apps** creates two Ceph pools for each storage
backend configured, one for CephFS data and a second pool for metadata:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-jp2-yn2-x4b:

- **CephFS data pool**: The pool name for the default storage backend is
  **kube-cephfs-data**

- **Metadata pool**: The pool name is **kube-cephfs-metadata**

When a new storage backend is created, a new CephFS data pool will be
created with the name **kube-cephfs-data-** \<storage\_backend\_name\>, and
the metadata pool will be created with the name
**kube-cephfs-metadata-** \<storage\_backend\_name\>. The default
filesystem name is **kube-cephfs**.

When a new storage backend is created, a new filesystem will be created
with the name **kube-cephfs-** \<storage\_backend\_name\>.
For example, if the user adds a storage backend named 'test',
**cephfs-provisioner** will create the following pools:

.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-i3w-h1f-x4b:

- kube-cephfs-data-test

- kube-cephfs-metadata-test

Also, the application **platform-integ-apps** will create a filesystem
**kube-cephfs-test**.

If you list all the pools in a cluster with a 'test' storage backend, you should
see four pools created by **cephfs-provisioner** using **platform-integ-apps**.
Use the following command to list the CephFS |OSD| pools created.

.. code-block:: none

   $ ceph osd lspools
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-nnv-lr2-x4b:

- kube-rbd

- kube-rbd-test

- **kube-cephfs-data**

- **kube-cephfs-data-test**

- **kube-cephfs-metadata**

- **kube-cephfs-metadata-test**

Use the following command to list Ceph File Systems:

.. code-block:: none

   $ ceph fs ls
   name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]
   name: kube-cephfs-silver, metadata pool: kube-cephfs-metadata-silver, data pools: [kube-cephfs-data-silver ]
:command:`cephfs-provisioner` creates a **StorageClass** in the Kubernetes
cluster for each storage backend present.

These **StorageClass** resources should be used to create
**PersistentVolumeClaim** resources in order to allow pods to use CephFS. The
default **StorageClass** resource is named **cephfs**, and additional resources
are created with the name \<storage\_backend\_name\>\ **-cephfs** for each
additional storage backend created.

For example, when listing **StorageClass** resources in a cluster that is
configured with a storage backend named 'test', the following storage classes
are created:
.. code-block:: none

   $ kubectl get sc
   NAME                PROVISIONER       RECLAIM..   VOLUME..    ALLOWVOLUME..   AGE
   cephfs              ceph.com/cephfs   Delete      Immediate   false           65m
   general (default)   ceph.com/rbd      Delete      Immediate   false           66m
   test-cephfs         ceph.com/cephfs   Delete      Immediate   false           65m
   test-general        ceph.com/rbd      Delete      Immediate   false           66m

All Kubernetes resources \(pods, StorageClasses, PersistentVolumeClaims,
configmaps, etc.\) used by the provisioner are created in the **kube-system**
namespace.

.. note::
   Multiple Ceph file systems are not enabled by default in the cluster. You
   can enable them manually, for example, using the command :command:`ceph fs
   flag set enable\_multiple true --yes-i-really-mean-it`.
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-section-dq5-wgk-x4b:

-------------------------------
Persistent Volume Claim \(PVC\)
-------------------------------

.. rubric:: |context|

If you need to create a Persistent Volume Claim, you can create it using
**kubectl**. For example:
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ol-lrh-pdf-x4b:

#. Create a file named **my\_pvc.yaml**, and add the following content:

   .. code-block:: none

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: claim1
        namespace: kube-system
      spec:
        storageClassName: cephfs
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi

#. To apply the updates, use the following command:

   .. code-block:: none

      $ kubectl apply -f my_pvc.yaml
#. After the |PVC| is created, use the following command to see the |PVC|
   bound to the existing **StorageClass**.

   .. code-block:: none

      $ kubectl get pvc -n kube-system

      NAME     STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      claim1   Bound    pvc..     1Gi        RWX            cephfs

#. The |PVC| is automatically provisioned by the **StorageClass**, and a
   persistent volume is created. Use the following command to list the
   persistent volumes.

   .. code-block:: none

      $ kubectl get pv -n kube-system

      NAME      CAPACITY   ACCESS..   RECLAIM..   STATUS   CLAIM                STORAGE..   REASON   AGE
      pvc-5..   1Gi        RWX        Delete      Bound    kube-system/claim1   cephfs               26s
#. Create Pods to use the |PVC|. Create a file my\_pod.yaml:

   .. code-block:: none

      kind: Pod
      apiVersion: v1
      metadata:
        name: test-pod
        namespace: kube-system
      spec:
        containers:
        - name: test-pod
          image: gcr.io/google_containers/busybox:1.24
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "touch /mnt/SUCCESS && exit 0 || exit 1"
          volumeMounts:
            - name: pvc
              mountPath: "/mnt"
        restartPolicy: "Never"
        volumes:
          - name: pvc
            persistentVolumeClaim:
              claimName: claim1

#. Apply the inputs to the **pod.yaml** file, using the following command.

   .. code-block:: none

      $ kubectl apply -f my_pod.yaml
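#. Optionally, confirm that the pod ran to completion; the following is a
   sketch using the pod name and namespace from my\_pod.yaml above:

   .. code-block:: none

      $ kubectl get pod test-pod -n kube-system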
For more information on Persistent Volume Support, see :ref:`About Persistent
Volume Support <about-persistent-volume-support>` and
|usertasks-doc|: :ref:`Creating Persistent Volume Claims
<kubernetes-user-tutorials-creating-persistent-volume-claims>`.
@ -6,17 +6,7 @@

Configure Netapps Using a Private Docker Registry
===================================================

Use the ``docker_registries`` parameter to pull from the local registry rather
than public ones.

You must first push the files to the local registry.

.. xbooklink

   Refer to the workflow and
   yaml file formats described in |inst-doc|: :ref:`Populate a Private Docker
   Registry from the Wind River Amazon Registry
   <populate-a-private-docker-registry-from-the-wind-river-amazon-registry>`
   and |inst-doc|: :ref:`Bootstrap from a Private Docker Registry
   <bootstrap-from-a-private-docker-registry>`.
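For illustration only, a minimal overrides snippet might look like the sketch
below; the exact structure of ``docker_registries`` should be taken from the
referenced workflow, and ``registry.local:9001`` is an assumed local registry
URL:

.. code-block:: none

   docker_registries:
     defaults:
       url: registry.local:9001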
@ -119,7 +119,7 @@ following command increases the scratch filesystem size to 10 GB:

.. code-block:: none

   ~(keystone_admin)$ system host-fs-modify controller-1 scratch=10

**Backup Storage**
@ -26,9 +26,9 @@ where:

   is the host name or ID.

**<partition>**
   is the partition device path or |UUID|.

For example, to delete a partition with the |UUID|
9f93c549-e26c-4d4c-af71-fb84e3fcae63 from compute-1, do the following.

.. code-block:: none
@ -50,6 +50,6 @@ command.

.. code-block:: none

   ~(keystone_admin)$ system host-pv-delete compute-1 9f93c549-e26c-4d4c-af71-fb84e3fcae63
@ -22,8 +22,9 @@ command includes both the **device\_node** and the **device\_path**.

.. code-block:: none

   ~(keystone_admin)$ system host-disk-show controller-0 \
   1722b081-8421-4475-a6e8-a26808cae031

   +-------------+--------------------------------------------+
   | Property    | Value                                      |
   +-------------+--------------------------------------------+
@ -215,6 +215,4 @@ storage class.

.. code-block:: none

   ~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n default -o yaml | grep -v '^\s*namespace:\s' | kubectl apply -n <namespace> -f -
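If you want to confirm that the secret was copied, the following is a sketch
that uses the same placeholder namespace as the command above:

.. code-block:: none

   ~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n <namespace>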
@ -6,17 +6,23 @@

Identify Space Available for Partitions
=======================================

Use the :command:`system host-disk-list` command to identify space available for partitions.

For example, run the following command to show space available on compute-1.

.. code-block:: none

   ~(keystone_admin)$ system host-disk-list compute-1

   +--------------------------------------+-------------+------------+-------------+----------+---------------+-----+--------------------+--------------------------------------------+
   | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id          | device_path                                |
   +--------------------------------------+-------------+------------+-------------+----------+---------------+-----+--------------------+--------------------------------------------+
   | 2f71f715-ffc8-40f1-b099-f97b8c00e9cc | /dev/sda    | 2048       | SSD         | 447.13   | 357.816       | N/A | PHWA6062001U480FGN | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0 |
   | 5331459b-4eff-4d1a-83ea-555acd198bb6 | /dev/sdb    | 2064       | SSD         | 447.13   | 0.0           | N/A | PHWA6282051N480FGN | /dev/disk/by-path/pci-0000:00:1f.2-ata-2.0 |
   +--------------------------------------+-------------+------------+-------------+----------+---------------+-----+--------------------+--------------------------------------------+
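Once you have identified a disk with free space, you can create a partition on
it using the :command:`system host-disk-partition-add` command described
earlier; the following is a sketch only, reusing the first disk |UUID| from the
listing above and an illustrative 512 MiB size:

.. code-block:: none

   ~(keystone_admin)$ system host-disk-partition-add compute-1 2f71f715-ffc8-40f1-b099-f97b8c00e9cc 512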
@ -73,33 +73,55 @@ The default **rootfs** device is **/dev/sda**.

      | updated_at    | None                                           |
      +---------------+------------------------------------------------+

#. Check the disk space on the new partition, once it is created.

   .. code-block:: none

      ~(keystone_admin)$ system host-disk-partition-list 1
      ---------------------------------------------------------------------------------
      uuid     device_path           device_node  type_guid   type_name      size_gib  status
      ---------------------------------------------------------------------------------
      69b1b..  /dev/disk/by-path/..  /dev/sda6    ba5eba11..  LVM Phy.Vol..  22.0      Ready
      ---------------------------------------------------------------------------------

#. Assign the unused partition on **controller-0** as a physical volume to
   the **cgts-vg** volume group.

   For example:

   .. code-block:: none

      ~(keystone_admin)$ system host-pv-add controller-0 cgts-vg 69b1bb35-7326-4bcc-94d7-bef72f064f46
      +---------------------------+---------------------------------------+
      | Property                  | Value                                 |
      +---------------------------+---------------------------------------+
      | uuid                      | 626c450f-4472-485c-bae7-791768630e1e  |
      | pv_state                  | adding                                |
      | pv_type                   | partition                             |
      | disk_or_part_uuid         | 69b1bb35-7326-4bcc-94d7-bef72f064f46  |
      | disk_or_part_device_node  | /dev/sda6                             |
      | disk_or_part_device_path  | /dev/disk/by-path/pci-0000:18:00.     |
      |                           | 0-scsi-0:2:0:0-part6                  |
      | lvm_pv_name               | /dev/sda6                             |
      | lvm_vg_name               | cgts-vg                               |
      | lvm_pv_uuid               | None                                  |
      | lvm_pv_size_gib           | 0.0                                   |
      | lvm_pe_total              | 0                                     |
      | lvm_pe_alloced            | 0                                     |
      | ihost_uuid                | e579a4af-108b-4dc9-9975-0aa089d530d7  |
      | created_at                | 2020-12-09T17:22:19.666250+00:00      |
      | updated_at                | None                                  |
      +---------------------------+---------------------------------------+

#. To assign the unused partition on **controller-1** as a physical volume to
   the **cgts-vg** volume group, **swact** the hosts and repeat the procedure
   on **controller-1**.

   .. code-block:: none

      ~(keystone_admin)$ system host-pv-add controller-1 cgts-vg /dev/sda

.. rubric:: |postreq|

After increasing the **cgts-vg** volume size, you can provision the filesystem
storage. For more information about increasing filesystem allotments using the
|CLI|, or the Horizon Web interface, see:

.. _increase-the-size-for-lvm-local-volumes-on-controller-filesystems-ul-mxm-f1c-nmb:
@ -11,9 +11,8 @@ host-disk-partition-modify` command.

.. rubric:: |context|

You can modify only the last partition on a disk \(indicated by **part** in the
device path; for example,
``/dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part6``\).

You cannot decrease the size of a partition.
@ -34,29 +33,29 @@ where:

**<partition>**
   is the partition device path or UUID.

#. For example, to change the size of a partition on compute-1 to 726 MiB, do
   the following:

   .. code-block:: none

      ~(keystone_admin)$ system host-disk-partition-modify -s 726 compute-1 a259e898-6390-44ba-a750-e0cb1579d8e0
      +-------------+--------------------------------------------------+
      | Property    | Value                                            |
      +-------------+--------------------------------------------------+
      | device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part6 |
      | device_node | /dev/sdb6                                        |
      | type_guid   | ba5eba11-0000-1111-2222-000000000001             |
      | type_name   | LVM Physical Volume                              |
      | start_mib   | 512                                              |
      | end_mib     | 12545                                            |
      | size_mib    | 726                                              |
      | uuid        | a259e898-6390-44ba-a750-e0cb1579d8e0             |
      | ihost_uuid  | 3b315241-d54f-499b-8566-a6ed7d2d6b39             |
      | idisk_uuid  | fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc             |
      | ipv_uuid    | None                                             |
      | status      | Modifying                                        |
      | created_at  | 2017-09-08T19:10:27.506768+00:00                 |
      | updated_at  | 2017-09-08T19:15:06.016996+00:00                 |
      +-------------+--------------------------------------------------+
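   The status changes from **Modifying** back to **Ready** when the resize
   completes; you can check it with a sketch like the following, using the same
   partition UUID as above:

   .. code-block:: none

      ~(keystone_admin)$ system host-disk-partition-show compute-1 a259e898-6390-44ba-a750-e0cb1579d8e0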
@ -22,7 +22,7 @@ Disks, Partitions, Volumes, and Volume Groups

   work-with-local-volume-groups
   local-volume-groups-cli-commands
   increase-the-size-for-lvm-local-volumes-on-controller-filesystems

*************************
Work with Disk Partitions
*************************

@ -60,6 +60,7 @@ Storage Backends

   storage-backends
   configure-the-internal-ceph-storage-backend
   configure-ceph-file-system-for-internal-ceph-storage-backend
   configure-an-external-netapp-deployment-as-the-storage-backend
   configure-netapps-using-a-private-docker-registry
   uninstall-the-netapp-backend
@ -6,8 +6,7 @@

List Partitions
===============

To list partitions, use the :command:`system host-disk-partition-list` command.

.. rubric:: |context|

@ -17,23 +16,21 @@ The command has the following format:

   system host-disk-partition-list [--nowrap] [--disk [disk_uuid]] <host>

where:
   <host> is the hostname or ID.

For example, run the following command to list the partitions on a compute-1
disk.

.. code-block:: none

   ~(keystone_admin)$ system host-disk-partition-list --disk 84b1ba35-addb-4fb7-9495-c47c3cb10377 compute-1
   +--------------------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+----------+--------+
   | uuid                                 | device_path                          | device_node | type_guid                            | type_name         | size_gib | status |
   +--------------------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+----------+--------+
   | 921c07dc-a79d-4104-a6a8-34691120514e | /dev/disk/by-path/pci-0000:04:00.0   | /dev/sda5   | ba5eba11-0000-1111-2222-000000000001 | LVM Physical      | 22.0     | In-Use |
   |                                      | -sas-0x5001e6768017d000-lun-0-part5  |             |                                      | Volume            |          |        |
   +--------------------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+----------+--------+
@ -16,7 +16,7 @@ The syntax of the command is:

   system host-pv-list <hostname>

where <hostname> is the name or ID of the host.

For example, to list physical volumes on compute-1, do the following:
@ -2,11 +2,11 @@

.. rtm1590585833668
.. _local-volume-groups-cli-commands:

================================
Local Volume Groups CLI Commands
================================

You can use |CLI| commands to manage local volume groups.

.. _local-volume-groups-cli-commands-simpletable-kfn-qwk-nx:
@ -42,11 +42,9 @@ You can use CLI commands to manage local volume groups.

   |                                                       |                                                       |
   +-------------------------------------------------------+-------------------------------------------------------+

where:

**<instance\_backing>**
   is the storage method for the local volume group \(image or remote\). The
   remote option is valid only for systems with dedicated storage.

**<concurrent\_disk\_operations>**
   is the number of I/O intensive disk operations, such as glance image

@ -61,3 +59,4 @@ where:

**<groupname>**
   is the name or ID of the local volume group.
@ -6,12 +6,12 @@

Provision Storage on a Storage Host Using the CLI
=================================================

You can use the command line to configure the object storage devices \(|OSDs|\)
on storage hosts.

.. rubric:: |context|

For more about |OSDs|, see |stor-doc|: :ref:`Storage on Storage Hosts
<storage-hosts-storage-on-storage-hosts>`.

.. xbooklink

@ -21,7 +21,7 @@ For more about OSDs, see |stor-doc|: :ref:`Storage on Storage Hosts

.. rubric:: |prereq|

To create or edit an |OSD|, you must lock the storage host. The system must
have at least two other unlocked hosts with Ceph monitors. \(Ceph monitors
run on **controller-0**, **controller-1**, and **storage-0** only\).
@ -33,20 +33,20 @@ To use a custom storage tier, you must create the tier first.

   .. code-block:: none

      ~(keystone_admin)$ system host-disk-list storage-3
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------------------------------------+
      | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | device_path                                |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------------------------------------+
      | ba751efe-33sd-as34-7u78-df3416875896 | /dev/sda    | 2048       | HDD         | 51.2     | 0             | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
      | e8751efe-6101-4d1c-a9d3-7b1a16871791 | /dev/sdb    | 2064       | HDD         | 10.2     | 10.1          | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
      | ae851efe-87hg-67gv-9ouj-sd3s16877658 | /dev/sdc    | 2080       | SSD         | 8.1      | 8.0           | /dev/disk/by-path/pci-0000:00:0d.0-ata-4.0 |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------------------------------------+
#. List the available storage tiers.

   .. code-block:: none

      ~(keystone_admin)$ system storage-tier-list ceph_cluster
      +--------------------------------------+---------+--------+----------------+
      | uuid                                 | name    | status | backend_using  |
      +--------------------------------------+---------+--------+----------------+

@ -54,7 +54,7 @@ To use a custom storage tier, you must create the tier first.

      | e9ddc040-7d5e-4e28-86be-f8c80f5c0c42 | storage | in-use | f1151da5-bd... |
      +--------------------------------------+---------+--------+----------------+

#. Create a storage function \(an |OSD|\).
   .. note::
      You cannot add a storage function to the root disk \(/dev/sda in this

@ -62,17 +62,17 @@ To use a custom storage tier, you must create the tier first.

   .. code-block:: none

      ~(keystone_admin)$ system host-stor-add
      usage: system host-stor-add [--journal-location [<journal_location>]]
                                  [--journal-size[<size of the journal MiB>]]
                                  [--tier-uuid[<storage tier uuid>]]
                                  <hostname or id> [<function>] <idisk_uuid>

   where <idisk\_uuid> identifies an |OSD|. For example:

   .. code-block:: none

      ~(keystone_admin)$ system host-stor-add storage-3 e8751efe-6101-4d1c-a9d3-7b1a16871791

      +------------------+--------------------------------------------------+
      | Property         | Value                                            |
@ -101,35 +101,34 @@ To use a custom storage tier, you must create the tier first.

      specify a different size in GiB.

      If multiple journal functions exist \(corresponding to multiple
      dedicated |SSDs|\), then you must include the ``--journal-location``
      option and specify the journal function to use for the |OSD|. You can
      obtain the UUIDs for journal functions using the :command:`system
      host-stor-list` command:

      .. code-block:: none

         ~(keystone_admin)$ system host-stor-list storage-3

         +--------------------------------------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+
         | uuid                                 | function | osdid | capabilities | idisk_uuid    | journal_path             | journal_size_gib | tier_name |
         +--------------------------------------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+
         | e6391e2-8564-4f4d-8665-681f73d13dfb  | journal  | None  | {}           | ae8b1434-d... | None                     | 0                |           |
         | fc7bdc40-7d5e-4e28-86be-f8c80f5c0c42 | osd      | 3     | {}           | e8751efe-6... | /dev/disk/by-path/pci... | 1.0              | storage   |
         +--------------------------------------+----------+-------+--------------+---------------+--------------------------+------------------+-----------+

      If no journal function exists when the storage function is created, the
      Ceph journal for the |OSD| is collocated on the |OSD|.

      If an |SSD| or |NVMe| drive is available on the host, you can add a
      journal function. For more information, see :ref:`Add SSD-Backed
      Journals Using the CLI <add-ssd-backed-journals-using-the-cli>`. You
      can update the |OSD| to use a journal on the |SSD| by referencing the
      journal function |UUID|, as follows:

      .. code-block:: none

         ~(keystone_admin)$ system host-stor-update <osd_uuid> \
         --journal-location <journal_function_uuid> [--journal-size <size>]

.. rubric:: |postreq|
@ -57,19 +57,19 @@ obtain information about replication groups:

.. code-block:: none

   ~(keystone_admin)$ system cluster-list
   +--------------------------------------+--------------+------+----------+------------------+
   | uuid                                 | cluster_uuid | type | name     | deployment_model |
   +--------------------------------------+--------------+------+----------+------------------+
   | 335766eb-8564-4f4d-8665-681f73d13dfb | None         | ceph | ceph_clu | controller-nodes |
   |                                      |              |      | ster     |                  |
   |                                      |              |      |          |                  |
   +--------------------------------------+--------------+------+----------+------------------+

.. code-block:: none

   ~(keystone_admin)$ system cluster-show 335766eb-968e-44fc-9ca7-907f93c772a1

   +--------------------+----------------------------------------+
   | Property           | Value                                  |
@ -116,6 +116,9 @@ For more information about Trident, see

- :ref:`Configure the Internal Ceph Storage Backend
  <configure-the-internal-ceph-storage-backend>`

- :ref:`Configure Ceph File System for Internal Ceph Storage Backend
  <configure-ceph-file-system-for-internal-ceph-storage-backend>`

- :ref:`Configure an External Netapp Deployment as the Storage Backend
  <configure-an-external-netapp-deployment-as-the-storage-backend>`
@ -29,7 +29,7 @@ The following steps create two 1Gb persistent volume claims.

      .. code-block:: none

         ~(keystone_admin)$ cat <<EOF > claim1.yaml
         kind: PersistentVolumeClaim
         apiVersion: v1
         metadata:

@ -47,11 +47,10 @@ The following steps create two 1Gb persistent volume claims.

      .. code-block:: none

         ~(keystone_admin)$ kubectl apply -f claim1.yaml
         persistentvolumeclaim/test-claim1 created

#. Create the **test-claim2** persistent volume claim.

@ -61,7 +60,7 @@ The following steps create two 1Gb persistent volume claims.

      .. code-block:: none

         ~(keystone_admin)$ cat <<EOF > claim2.yaml
         kind: PersistentVolumeClaim
         apiVersion: v1
         metadata:

@ -80,12 +79,9 @@ The following steps create two 1Gb persistent volume claims.

      .. code-block:: none

         ~(keystone_admin)$ kubectl apply -f claim2.yaml
         persistentvolumeclaim/test-claim2 created

.. rubric:: |result|

Two 1Gb persistent volume claims have been created. You can view them with

@ -93,8 +89,10 @@ the following command.

.. code-block:: none

   ~(keystone_admin)$ kubectl get persistentvolumeclaims
   NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   test-claim1   Bound    pvc-aaca..   1Gi        RWO            general        2m56s
   test-claim2   Bound    pvc-e93f..   1Gi        RWO            general        68s

For more information on using CephFS for internal Ceph backends, see
:ref:`Using CephFS for Internal Ceph Storage Backend <configure-ceph-file-system-for-internal-ceph-storage-backend>`.
@ -22,24 +22,24 @@ You can change the space allotted for the Ceph monitor, if required.

.. code-block:: none

~(keystone_admin)$ system ceph-mon-modify <controller> ceph_mon_gib=<size>
~(keystone_admin)]$ system ceph-mon-modify <controller> ceph_mon_gib=<size>

where ``<size>`` is the size in GiB to use for the Ceph monitor.
The value must be between 21 and 40 GiB.

.. code-block:: none

~(keystone_admin)$ system ceph-mon-modify controller-0 ceph_mon_gib=21
~(keystone_admin)]$ system ceph-mon-modify controller-0 ceph_mon_gib=21

+-------------+-------+--------------+------------+------+
| uuid | ceph_ | hostname | state | task |
| | mon_g | | | |
| | ib | | | |
+-------------+-------+--------------+------------+------+
| 069f106a... | 21 | compute-0 | configured | None |
| 4763139e... | 21 | controller-1 | configured | None |
| e39970e5... | 21 | controller-0 | configured | None |
+-------------+-------+--------------+------------+------+
+--------------------------------+-------+--------------+------------+------+
| uuid | ceph_ | hostname | state | task |
| | mon_g | | | |
| | ib | | | |
+--------------------------------+-------+--------------+------------+------+
| 069f106-4f4d-8665-681f73d13dfb | 21 | compute-0 | configured | None |
| 4763139-4f4d-8665-681f73d13dfb | 21 | controller-1 | configured | None |
| e39970e-4f4d-8665-681f73d13dfb | 21 | controller-0 | configured | None |
+--------------------------------+-------+--------------+------------+------+

NOTE: ceph_mon_gib for both controllers are changed.

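To confirm the new space allotment afterwards, the Ceph monitors can be listed again. This verification step is an addition here, assuming the :command:`system ceph-mon-list` command is available in your release:

.. code-block:: none

~(keystone_admin)]$ system ceph-mon-list
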
@ -61,30 +61,30 @@ To list the storage backend types installed on a system:

.. code-block:: none

~(keystone_admin)$ system storage-backend-list
~(keystone_admin)]$ system storage-backend-list

+--------+-----------+----------+-------+--------------+---------+-----------------+
| uuid |name | backend | state | task | services| capabilities |
+--------+-----------+----------+-------+--------------+---------+-----------------+
| 248a...|ceph-store | ceph | config| resize-ceph..| None |min_replication:1|
| | | | | | |replication: 2 |
| 76dd...|shared_serv| external | config| None | glance | |
| |ices | | | | | |
+--------+-----------+----------+-------+--------------+---------+-----------------+
+-------------------------------+------------+----------+-------+--------------+---------+-----------------+
| uuid | name | backend | state | task | services| capabilities |
+-------------------------------+------------+----------+-------+--------------+---------+-----------------+
| 248a106-4r54-3324-681f73d13dfb| ceph-store | ceph | config| resize-ceph..| None |min_replication:1|
| | | | | | |replication: 2 |
| 76dd106-6yth-4356-681f73d13dfb| shared_serv| external | config| None | glance | |
| | ices | | | | | |
+-------------------------------+------------+----------+-------+--------------+---------+-----------------+

To show details for a storage backend:

.. code-block:: none

~(keystone_admin)$ system storage-backend-show <name>
~(keystone_admin)]$ system storage-backend-show <name>

For example:

.. code-block:: none

~(keystone_admin)$ system storage-backend-show ceph-store
~(keystone_admin)]$ system storage-backend-show ceph-store

+----------------------+--------------------------------------+
| Property | Value |
+----------------------+--------------------------------------+

@ -114,7 +114,7 @@ To add a backend:

.. code-block:: none

~(keystone_admin)$ system storage-backend-add \
~(keystone_admin)]$ system storage-backend-add \
[-s <services>] [-n <name>] [-t <tier_uuid>] \
[-c <ceph_conf>] [--confirmed] [--ceph-mon-gib <ceph-mon-gib>] \
<backend> [<parameter>=<value> [<parameter>=<value> ...]]

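As a minimal, illustrative sketch of this syntax, the internal Ceph backend might be enabled as shown below; the backend name and the use of ``--confirmed`` are assumptions, so verify the parameters for your release before running it:

.. code-block:: none

~(keystone_admin)]$ system storage-backend-add ceph --confirmed
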
@ -162,7 +162,7 @@ To modify a backend:

.. code-block:: none

~(keystone_admin)$ system storage-backend-modify [-s <services>] [-c <ceph_conf>] \
~(keystone_admin)]$ system storage-backend-modify [-s <services>] [-c <ceph_conf>] \
<backend_name_or_uuid> [<parameter>=<value> [<parameter>=<value> ...]]

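As an illustrative sketch only, the replication settings reported under ``capabilities`` in the listing above could be adjusted with a command of this form; the backend name and values are assumptions, not a prescribed change:

.. code-block:: none

~(keystone_admin)]$ system storage-backend-modify ceph-store replication=2 min_replication=1
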
@ -170,7 +170,7 @@ To delete a failed backend configuration:

.. code-block:: none

~(keystone_admin)$ system storage-backend-delete <backend>
~(keystone_admin)]$ system storage-backend-delete <backend>

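For example, assuming a failed backend configuration named ``ceph-ext-store`` (the name is hypothetical):

.. code-block:: none

~(keystone_admin)]$ system storage-backend-delete ceph-ext-store
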
@ -190,26 +190,26 @@ To list storage tiers:

.. code-block:: none

~(keystone_admin)$ system storage-tier-list ceph_cluster
~(keystone_admin)]$ system storage-tier-list ceph_cluster

+---------+---------+--------+--------------------------------------+
| uuid | name | status | backend_using |
+---------+---------+--------+--------------------------------------+
| acc8... | storage | in-use | 649830bf-b628-4170-b275-1f0b01cfc859 |
+---------+---------+--------+--------------------------------------+
+--------------------------------+---------+--------+--------------------------------------+
| uuid | name | status | backend_using |
+--------------------------------+---------+--------+--------------------------------------+
| acc8706-6yth-4356-681f73d13dfb | storage | in-use | 649830bf-b628-4170-b275-1f0b01cfc859 |
+--------------------------------+---------+--------+--------------------------------------+

To display information for a storage tier:

.. code-block:: none

~(keystone_admin)$ system storage-tier-show ceph_cluster <tier_name>
~(keystone_admin)]$ system storage-tier-show ceph_cluster <tier_name>

For example:

.. code-block:: none

~(keystone_admin)$ system storage-tier-show ceph_cluster storage
~(keystone_admin)]$ system storage-tier-show ceph_cluster storage

+--------------+--------------------------------------+
| Property | Value |

@ -230,7 +230,7 @@ To add a storage tier:

.. code-block:: none

~(keystone_admin)$ system storage-tier-add ceph_cluster <tier_name>
~(keystone_admin)]$ system storage-tier-add ceph_cluster <tier_name>

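For example, to add a hypothetical second tier named ``gold`` to the cluster (the tier name is illustrative):

.. code-block:: none

~(keystone_admin)]$ system storage-tier-add ceph_cluster gold
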
To delete a tier that is not in use by a storage backend and does not have
@ -238,7 +238,7 @@ OSDs assigned to it:

.. code-block:: none

~(keystone_admin)$ system storage-tier-delete <tier_name>
~(keystone_admin)]$ system storage-tier-delete <tier_name>

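Continuing the illustrative example above, the unused ``gold`` tier could later be removed with:

.. code-block:: none

~(keystone_admin)]$ system storage-tier-delete gold
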
@ -253,26 +253,26 @@ storage space allotments on a host.

.. code-block:: none

~(keystone_admin)$ system controllerfs-list
~(keystone_admin)]$ system controllerfs-list

+-------+------------+-----+-----------------------+-------+-----------+
| UUID | FS Name | Size| Logical Volume | Rep.. | State |
| | | in | | | |
| | | GiB | | | |
+-------+------------+-----+-----------------------+-------+-----------+
| d0e...| database | 10 | pgsql-lv | True | available |
| 40d...| docker-dist| 16 | dockerdistribution-lv | True | available |
| 20e...| etcd | 5 | etcd-lv | True | available |
| 9e5...| extension | 1 | extension-lv | True | available |
| 55b...| platform | 10 | platform-lv | True | available |
+-------+------------+-----+-----------------------+-------+-----------+
+--------------------------------+------------+-----+-----------------------+-------+-----------+
| UUID | FS Name | Size| Logical Volume | Rep.. | State |
| | | in | | | |
| | | GiB | | | |
+--------------------------------+------------+-----+-----------------------+-------+-----------+
| d0e8706-6yth-4356-681f73d13dfb | database | 10 | pgsql-lv | True | available |
| 40d8706-ssf4-4356-6814356145tf | docker-dist| 16 | dockerdistribution-lv | True | available |
| 20e8706-87gf-4356-681f73d13dfb | etcd | 5 | etcd-lv | True | available |
| 9e58706-sd42-4356-435673d1sd3b | extension | 1 | extension-lv | True | available |
| 55b8706-sd13-4356-681f73d16yth | platform | 10 | platform-lv | True | available |
+--------------------------------+------------+-----+-----------------------+-------+-----------+

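A filesystem in this list can be resized when more space is required. The following is a sketch only, assuming the :command:`system controllerfs-modify` command and an illustrative target size; the controller must have enough free disk space to accommodate the increase:

.. code-block:: none

~(keystone_admin)]$ system controllerfs-modify database=20
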
For a system with dedicated storage:

.. code-block:: none

~(keystone_admin)$ system storage-backend-show ceph-store
~(keystone_admin)]$ system storage-backend-show ceph-store

+----------------------+--------------------------------------+
| Property | Value |

@ -6,8 +6,8 @@

View Details for a Partition
============================

You can view details for a partition with the **system
host-disk-partition-show** command.
To view details for a partition, use the **system host-disk-partition-show**
command.

.. rubric:: |context|

@ -20,33 +20,33 @@ The syntax of the command is:

Make the following substitutions:

**<host>**
The host name or ID.
The host name or ID

**<partition>**
The partition device path or UUID.

This example displays details for a particular partition on compute-1.
#. This example displays details for a particular partition on compute-1.

.. code-block:: none
.. code-block:: none

~(keystone_admin)$ system host-disk-partition-show compute-1 a4aa3f66-ff3c-49a0-a43f-bc30012f8361
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part3 |
| device_node | /dev/sdb3 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | LVM Physical Volume |
| start_mib | 10240 |
| end_mib | 21505 |
| size_mib | 10240 |
| uuid | a4aa3f66-ff3c-49a0-a43f-bc30012f8361 |
| ihost_uuid | 3b315241-d54f-499b-8566-a6ed7d2d6b39 |
| idisk_uuid | fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc |
| ipv_uuid | c571653b-1d91-4299-adea-1b24f86cb898 |
| status | In-Use |
| created_at | 2017-09-07T19:53:23.743734+00:00 |
| updated_at | 2017-09-07T20:06:06.914404+00:00 |
+-------------+--------------------------------------------------+
~(keystone_admin)]$ system host-disk-partition-show compute-1 a4aa3f66-ff3c-49a0-a43f-bc30012f8361
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part3 |
| device_node | /dev/sdb3 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | LVM Physical Volume |
| start_mib | 10240 |
| end_mib | 21505 |
| size_mib | 10240 |
| uuid | a4aa3f66-ff3c-49a0-a43f-bc30012f8361 |
| ihost_uuid | 3b315241-d54f-499b-8566-a6ed7d2d6b39 |
| idisk_uuid | fcd2f59d-c9ee-4423-9f57-e2c55d5b97dc |
| ipv_uuid | c571653b-1d91-4299-adea-1b24f86cb898 |
| status | In-Use |
| created_at | 2017-09-07T19:53:23.743734+00:00 |
| updated_at | 2017-09-07T20:06:06.914404+00:00 |
+-------------+--------------------------------------------------+

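The partition |UUID| passed to :command:`system host-disk-partition-show` can be found by first listing the partitions on the host, as an optional preliminary step (the host name below is illustrative):

.. code-block:: none

~(keystone_admin)]$ system host-disk-partition-list compute-1
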
@ -25,11 +25,11 @@ where:

**<uuid>**
is the uuid of the physical volume.

For example, to view details for a physical volume on compute-1, do the
following:
For example, to view details for a physical volume on compute-1, do the
following:

.. code-block:: none
.. code-block:: none

~(keystone_admin)$ system host-pv-show compute-1 9f93c549-e26c-4d4c-af71-fb84e3fcae63
~(keystone_admin)]$ system host-pv-show compute-1 9f93c549-e26c-4d4c-af71-fb84e3fcae63

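Similarly, the physical volume |UUID| used above can be obtained by listing the physical volumes on the host first, as an optional preliminary step (the host name below is illustrative):

.. code-block:: none

~(keystone_admin)]$ system host-pv-list compute-1
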
@ -8,8 +8,8 @@ Work with Disk Partitions

You can use disk partitions to provide space for local volume groups.

You can create, modify, and delete partitions from the Horizon Web
interface or the |CLI|.
You can create, modify, and delete partitions from the Horizon Web interface or
the |CLI|.

To use |prod-os|, select **Admin** \> **Platform** \> **Host Inventory**,
and then click the host name to open the Host Details page. On the Host

@ -20,12 +20,9 @@ To manage the physical volumes that support local volume groups, see

.. code-block:: none

~(keystone_admin)$ system host-lock <hostname>
~(keystone_admin)]$ system host-lock <hostname>

where:

**<hostname>**
is the name or ID of the host.
<hostname> is the name or ID of the host.

#. Open the Storage page for the host.
