From 7a3d9a664efc75b7f3e8a7397ca06ba2e4bb474f Mon Sep 17 00:00:00 2001
From: Dmitry Tantsur
Date: Wed, 7 Aug 2019 18:26:22 +0200
Subject: [PATCH] Clean up RAID documentation

* Use more copy-paste friendly indentation in the examples
* Use subheadings for properties
* Render JSON examples as JSON
* Remove explicit API version from CLI, we've been defaulting to latest
  for several releases.
* Small fixes

Change-Id: I1cae6e9b4ff124e3404bd55638bc77bdf3465fe0
---
 doc/source/admin/raid.rst | 291 ++++++++++++++++++++------------
 1 file changed, 154 insertions(+), 137 deletions(-)

diff --git a/doc/source/admin/raid.rst b/doc/source/admin/raid.rst
index 06b1d54022..015ba59e92 100644
--- a/doc/source/admin/raid.rst
+++ b/doc/source/admin/raid.rst
@@ -60,13 +60,11 @@ as the key. The value for the ``logical_disks`` is a list of JSON
 dictionaries. It looks like::
 
     {
-      "logical_disks": [
-        {},
-        {},
-        .
-        .
-        .
-      ]
+        "logical_disks": [
+            {},
+            {},
+            ...
+        ]
     }
 
 If the ``target_raid_config`` is an empty dictionary, it unsets the value of
@@ -76,76 +74,74 @@ done on the node.
 
 Each dictionary of logical disk contains the desired properties of logical
 disk supported by the hardware type. These properties are discoverable by::
 
-  openstack baremetal --os-baremetal-api-version 1.15 driver raid property list
+    openstack baremetal driver raid property list
 
-The RAID feature is available in ironic API version 1.15 and above.
-If ``--os-baremetal-api-version`` is not used in the CLI, it will error out
-with the following message::
+Mandatory properties
+^^^^^^^^^^^^^^^^^^^^
 
-    No API version was specified and the requested operation was not
-    supported by the client's negotiated API version 1.9. Supported
-    version range is: 1.1 to ...
+These properties must be specified for each logical
+disk and have no default values:
 
-    where the "..." in above error message would be the maximum version
-    supported by the service.
+- ``size_gb`` - Size (Integer) of the logical disk to be created in GiB.
+  ``MAX`` may be specified if the logical disk should use all of the
+  remaining space available. This can be used only when backing physical
+  disks are specified (see below).
 
-The RAID properties can be split into 4 different types:
+- ``raid_level`` - RAID level for the logical disk. Ironic supports the
+  following RAID levels: 0, 1, 2, 5, 6, 1+0, 5+0, 6+0.
 
-#. Mandatory properties. These properties must be specified for each logical
-   disk and have no default values.
+Optional properties
+^^^^^^^^^^^^^^^^^^^
 
-   - ``size_gb`` - Size (Integer) of the logical disk to be created in GiB.
-     ``MAX`` may be specified if the logical disk should use all of the
-     remaining space available. This can be used only when backing physical
-     disks are specified (see below).
+These properties have default values and they may be overridden in the
+specification of any logical disk.
 
-   - ``raid_level`` - RAID level for the logical disk. Ironic supports the
-     following RAID levels: 0, 1, 2, 5, 6, 1+0, 5+0, 6+0.
+- ``volume_name`` - Name of the volume. Should be unique within the Node.
+  If not specified, volume name will be auto-generated.
 
-#. Optional properties. These properties have default values and
-   they may be overridden in the specification of any logical disk.
+- ``is_root_volume`` - Set to ``true`` if this is the root volume. At
+  most one logical disk can have this set to ``true``; the other
+  logical disks must have this set to ``false``. The
+  ``root device hint`` will be saved, if the RAID interface is capable of
+  retrieving it. This is ``false`` by default.
 
-   - ``volume_name`` - Name of the volume. Should be unique within the Node.
-     If not specified, volume name will be auto-generated.
+Backing physical disk hints
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-   - ``is_root_volume`` - Set to ``true`` if this is the root volume. At
-     most one logical disk can have this set to ``true``; the other
-     logical disks must have this set to ``false``. The
-     ``root device hint`` will be saved, if the RAID interface is capable of
-     retrieving it. This is ``false`` by default.
+These hints are specified for each logical disk to let Ironic find the desired
+disks for RAID configuration. This is machine-independent information. This
+serves the use-case where the operator doesn't want to provide individual
+details for each bare metal node.
 
-#. Backing physical disk hints. These hints are specified for each logical
-   disk to let Ironic find the desired disks for RAID configuration. This is
-   machine-independent information. This serves the use-case where the
-   operator doesn't want to provide individual details for each bare metal
-   node.
+- ``share_physical_disks`` - Set to ``true`` if this logical disk can
+  share physical disks with other logical disks. The default value is
+  ``false``.
 
-   - ``share_physical_disks`` - Set to ``true`` if this logical disk can
-     share physical disks with other logical disks. The default value is
-     ``false``.
+- ``disk_type`` - ``hdd`` or ``ssd``. If this is not specified, disk type
+  will not be a criterion to find backing physical disks.
 
-   - ``disk_type`` - ``hdd`` or ``ssd``. If this is not specified, disk type
-     will not be a criterion to find backing physical disks.
+- ``interface_type`` - ``sata`` or ``scsi`` or ``sas``. If this is not
+  specified, interface type will not be a criterion to
+  find backing physical disks.
 
-   - ``interface_type`` - ``sata`` or ``scsi`` or ``sas``. If this is not
-     specified, interface type will not be a criterion to
-     find backing physical disks.
+- ``number_of_physical_disks`` - Integer, number of disks to use for the
+  logical disk. Defaults to minimum number of disks required for the
+  particular RAID level.
 
-   - ``number_of_physical_disks`` - Integer, number of disks to use for the
-     logical disk. Defaults to minimum number of disks required for the
-     particular RAID level.
+Backing physical disks
+^^^^^^^^^^^^^^^^^^^^^^
 
-#. Backing physical disks. These are the actual machine-dependent
-   information. This is suitable for environments where the operator wants
-   to automate the selection of physical disks with a 3rd-party tool based
-   on a wider range of attributes (eg. S.M.A.R.T. status, physical location).
-   The values for these properties are hardware dependent.
+These are the actual machine-dependent information. This is suitable for
+environments where the operator wants to automate the selection of physical
+disks with a 3rd-party tool based on a wider range of attributes
+(eg. S.M.A.R.T. status, physical location). The values for these properties
+are hardware dependent.
 
-   - ``controller`` - The name of the controller as read by the RAID interface.
-     In order to trigger the setup of a Software RAID via the Ironic Python
-     Agent, the value of this property needs to be set to ``software``.
-   - ``physical_disks`` - A list of physical disks to use as read by the
-     RAID interface.
+- ``controller`` - The name of the controller as read by the RAID interface.
+  In order to trigger the setup of a Software RAID via the Ironic Python
+  Agent, the value of this property needs to be set to ``software``.
+- ``physical_disks`` - A list of physical disks to use as read by the
+  RAID interface.
 
 .. note::
    If properties from both "Backing physical disk hints" or
@@ -160,97 +156,106 @@ Examples for ``target_raid_config``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 *Example 1*. Single RAID disk of RAID level 5 with all of the space
-available. Make this the root volume to which Ironic deploys the image::
+available. Make this the root volume to which Ironic deploys the image:
+
+.. code-block:: json
 
     {
-      "logical_disks": [
-       {
-        "size_gb": "MAX",
-        "raid_level": "5",
-        "is_root_volume": true
-       }
-      ]
+        "logical_disks": [
+            {
+                "size_gb": "MAX",
+                "raid_level": "5",
+                "is_root_volume": true
+            }
+        ]
     }
 
 *Example 2*. Two RAID disks. One with RAID level 5 of 100 GiB and make it
 root volume and use SSD. Another with RAID level 1 of 500 GiB and use
-HDD::
+HDD:
+
+.. code-block:: json
 
     {
-      "logical_disks": [
-       {
-        "size_gb": 100,
-        "raid_level": "5",
-        "is_root_volume": true,
-        "disk_type": "ssd"
-       },
-       {
-        "size_gb": 500,
-        "raid_level": "1",
-        "disk_type": "hdd"
-       }
-      ]
+        "logical_disks": [
+            {
+                "size_gb": 100,
+                "raid_level": "5",
+                "is_root_volume": true,
+                "disk_type": "ssd"
+            },
+            {
+                "size_gb": 500,
+                "raid_level": "1",
+                "disk_type": "hdd"
+            }
+        ]
     }
 
-*Example 3*. Single RAID disk. I know which disks and controller to use::
+*Example 3*. Single RAID disk. I know which disks and controller to use:
+
+.. code-block:: json
 
     {
-      "logical_disks": [
-       {
-        "size_gb": 100,
-        "raid_level": "5",
-        "controller": "Smart Array P822 in Slot 3",
-        "physical_disks": ["6I:1:5", "6I:1:6", "6I:1:7"],
-        "is_root_volume": true
-       }
-      ]
+        "logical_disks": [
+            {
+                "size_gb": 100,
+                "raid_level": "5",
+                "controller": "Smart Array P822 in Slot 3",
+                "physical_disks": ["6I:1:5", "6I:1:6", "6I:1:7"],
+                "is_root_volume": true
+            }
+        ]
    }
 
-*Example 4*. Using backing physical disks::
+*Example 4*. Using backing physical disks:
+
+.. code-block:: json
 
     {
-      "logical_disks":
-      [
-       {
-        "size_gb": 50,
-        "raid_level": "1+0",
-        "controller": "RAID.Integrated.1-1",
-        "volume_name": "root_volume",
-        "is_root_volume": true,
-        "physical_disks": [
-         "Disk.Bay.0:Encl.Int.0-1:RAID.Integrated.1-1",
-         "Disk.Bay.1:Encl.Int.0-1:RAID.Integrated.1-1"
-        ]
-       },
-       {
-        "size_gb": 100,
-        "raid_level": "5",
-        "controller": "RAID.Integrated.1-1",
-        "volume_name": "data_volume",
-        "physical_disks": [
-         "Disk.Bay.2:Encl.Int.0-1:RAID.Integrated.1-1",
-         "Disk.Bay.3:Encl.Int.0-1:RAID.Integrated.1-1",
-         "Disk.Bay.4:Encl.Int.0-1:RAID.Integrated.1-1"
-        ]
-       }
-      ]
+        "logical_disks": [
+            {
+                "size_gb": 50,
+                "raid_level": "1+0",
+                "controller": "RAID.Integrated.1-1",
+                "volume_name": "root_volume",
+                "is_root_volume": true,
+                "physical_disks": [
+                    "Disk.Bay.0:Encl.Int.0-1:RAID.Integrated.1-1",
+                    "Disk.Bay.1:Encl.Int.0-1:RAID.Integrated.1-1"
+                ]
+            },
+            {
+                "size_gb": 100,
+                "raid_level": "5",
+                "controller": "RAID.Integrated.1-1",
+                "volume_name": "data_volume",
+                "physical_disks": [
+                    "Disk.Bay.2:Encl.Int.0-1:RAID.Integrated.1-1",
+                    "Disk.Bay.3:Encl.Int.0-1:RAID.Integrated.1-1",
+                    "Disk.Bay.4:Encl.Int.0-1:RAID.Integrated.1-1"
+                ]
+            }
+        ]
     }
 
-*Example 5*. Software RAID with two RAID devices::
+*Example 5*. Software RAID with two RAID devices:
+
+.. code-block:: json
 
     {
-      "logical_disks": [
-       {
-        "size_gb": 100,
-        "raid_level": "1",
-        "controller": "software"
-       },
-       {
-        "size_gb": "MAX",
-        "raid_level": "0",
-        "controller": "software"
-       }
-      ]
+        "logical_disks": [
+            {
+                "size_gb": 100,
+                "raid_level": "1",
+                "controller": "software"
+            },
+            {
+                "size_gb": "MAX",
+                "raid_level": "0",
+                "controller": "software"
+            }
+        ]
     }
 
 Current RAID configuration
 ==========================
@@ -265,7 +270,7 @@ physical disk found on the bare metal node.
 
 To get the current RAID configuration::
 
-  openstack baremetal --os-baremetal-api-version 1.15 node show
+    openstack baremetal node show
 
 Workflow
 ========
@@ -286,14 +291,14 @@ Workflow
     openstack baremetal node set \
         --target-raid-config
 
-  The CLI command can accept the input from standard input also:
+  The CLI command can accept the input from standard input also::
+
     openstack baremetal node set \
        --target-raid-config -
 
 * Create a JSON file with the RAID clean steps for manual cleaning. Add
   other clean steps as desired::
 
-
     [{
         "interface": "raid",
         "step": "delete_configuration"
@@ -347,8 +352,21 @@ There are certain limitations to be aware of:
   in case of a disk failure.
 
 * Building RAID will fail if the target disks are already partitioned. Wipe the
-  disks using e.g. the ``erase_device_metadata`` clean step before building
-  RAID.
+  disks using e.g. the ``erase_devices_metadata`` clean step before building
+  RAID::
+
+    [{
+        "interface": "raid",
+        "step": "delete_configuration"
+    },
+    {
+        "interface": "deploy",
+        "step": "erase_devices_metadata"
+    },
+    {
+        "interface": "raid",
+        "step": "create_configuration"
+    }]
 
 * If local boot is going to be used, the final instance image must have the
   ``mdadm`` utility installed and needs to be able to detect software RAID
@@ -367,7 +385,7 @@ Using RAID in nova flavor for scheduling
 The operator can specify the `raid_level` capability in nova flavor for node
 to be selected for scheduling::
 
-  nova flavor-key my-baremetal-flavor set capabilities:raid_level="1+0"
+    openstack flavor set my-baremetal-flavor --property capabilities:raid_level="1+0"
 
 Developer documentation
 =======================
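
Note for reviewers: the rules the patched document states for ``target_raid_config`` (the mandatory ``size_gb`` and ``raid_level`` properties, the supported RAID levels, and at most one ``is_root_volume``) can be sanity-checked before calling the API. A minimal sketch under those stated rules; the helper name is hypothetical and this is not part of Ironic or its client:

```python
SUPPORTED_RAID_LEVELS = ("0", "1", "2", "5", "6", "1+0", "5+0", "6+0")


def validate_target_raid_config(config):
    """Hypothetical client-side check mirroring the documented rules."""
    disks = config.get("logical_disks")
    if not isinstance(disks, list) or not disks:
        raise ValueError("'logical_disks' must be a non-empty list")
    root_volumes = 0
    for disk in disks:
        # Mandatory properties: they have no defaults, so every
        # logical disk must specify both of them.
        for prop in ("size_gb", "raid_level"):
            if prop not in disk:
                raise ValueError("missing mandatory property: %s" % prop)
        if disk["raid_level"] not in SUPPORTED_RAID_LEVELS:
            raise ValueError("unsupported raid_level: %s" % disk["raid_level"])
        if disk.get("is_root_volume"):
            root_volumes += 1
    # At most one logical disk may be marked as the root volume.
    if root_volumes > 1:
        raise ValueError("at most one logical disk may set 'is_root_volume'")
```

Running this over the patch's Example 1 and Example 2 payloads succeeds, while a disk missing ``raid_level`` raises ``ValueError``.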