From 00942d9bed8e42b747934f0e9453e9fdecfa38a1 Mon Sep 17 00:00:00 2001 From: Adriano Rosso Date: Tue, 7 Apr 2015 13:54:33 -0300 Subject: [PATCH] Updates HNAS driver documentation for Kilo Inserts information for the configuration of the new features, updates the documentation format and changes some descriptions of HNAS driver for Kilo version. Change-Id: I3afe4ade5030c8771e5f5749b9d50f7d5af7f39f --- .../block-storage/drivers/hds-hnas-driver.xml | 1073 +++++++++-------- 1 file changed, 603 insertions(+), 470 deletions(-) diff --git a/doc/config-reference/block-storage/drivers/hds-hnas-driver.xml b/doc/config-reference/block-storage/drivers/hds-hnas-driver.xml index d1460e8eef..0007e916d7 100644 --- a/doc/config-reference/block-storage/drivers/hds-hnas-driver.xml +++ b/doc/config-reference/block-storage/drivers/hds-hnas-driver.xml @@ -6,481 +6,614 @@ xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> - HDS HNAS iSCSI and NFS driver - - This Block Storage volume driver provides iSCSI and NFS support for HNAS (Hitachi Network-attached Storage) arrays such as, - HNAS 3000 and 4000 family. -
- System requirements - Use the HDS ssc command to - communicate with an HNAS array. This utility package is available in the - physical media distributed with the hardware or it can be copied from - the SMU (/usr/local/bin/ssc). - Platform: Ubuntu 12.04 LTS or newer. -
-
- Supported operations - - The base NFS driver combined with the HNAS driver - extensions support these operations: - - - - Create, delete, attach, and detach volumes. - - - Create, list, and delete volume snapshots. - - - Create a volume from a snapshot. - - - Copy an image to a volume. - - - Copy a volume to an image. - - - Clone a volume. - - - Extend a volume. - - - Get volume statistics. - - -
-
- Configuration - The HDS driver supports the concept of differentiated services - (also referred to as quality of service) by mapping volume types - to services provided through HNAS. HNAS supports a variety of - storage options and file system capabilities which are selected - through volume typing and the use of multiple back-ends. The HDS driver - maps up to 4 volume types into separate exports/filesystems, and can - support any number using multiple back-ends. - Configuration is read from an XML-formatted file (one per backend). Examples - are shown for single and multi back-end cases. - - - - Configuration is read from an XML file. This - example shows the configuration for single - back-end and for multi-back-end cases. - - - - - The default volume type - needs to be set in configuration file. If there is no - default volume type, - only matching volume types will work. - - - - - - HNAS setup - Before using iSCSI and NFS services, use the HNAS Web Interface to create storage pool(s), - filesystem(s), and assign an EVS. For NFS, NFS exports should be created. - For iSCSI, a SCSI Domain needs to be set. - - - Single back-end - In a single back-end deployment, only one OpenStack Block Storage - instance runs on the OpenStack Block Storage server and controls one - HNAS array: this deployment requires these configuration - files: - - - Set the - - option in the - /etc/cinder/cinder.conf - file to use the HNAS iSCSI volume driver. Or - - to use HNAS NFS driver. This option - points to a configuration file. - The configuration file location - may differ. - - For HNAS iSCSI driver: - volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver -hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi_conf.xml - For HNAS NFS driver: - volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver -hds_hnas_nfs_config_file = /opt/hds/hnas/cinder_nfs_conf.xml - - - For HNAS iSCSI, configure - at - the location specified previously. 
For - example, - /opt/hds/hnas/cinder_iscsi_conf.xml: - <?xml version="1.0" encoding="UTF-8" ?> -<config> - <mgmt_ip0>172.17.44.16</mgmt_ip0> - <hnas_cmd>ssc</hnas_cmd> - <chap_enabled>True</chap_enabled> - <username>supervisor</username> - <password>supervisor</password> - <svc_0> - <volume_type>default</volume_type> - <iscsi_ip>172.17.39.132</iscsi_ip> - <hdp>fs-01</hdp> - </svc_0> -</config> - For HNAS NFS, configure - at - the location specified previously. For - example, - /opt/hds/hnas/cinder_nfs_conf.xml: - <?xml version="1.0" encoding="UTF-8" ?> -<config> - <mgmt_ip0>172.17.44.16</mgmt_ip0> - <hnas_cmd>ssc</hnas_cmd> - <username>supervisor</username> - <password>supervisor</password> - <chap_enabled>False</chap_enabled> - <svc_0> - <volume_type>default</volume_type> - <hdp>172.17.44.100:/virtual-01</hdp> - </svc_0> -</config> - - - Up to 4 service stanzas can be included in the XML file; named - svc_0, svc_1, - svc_2 and svc_3. - Additional services can be enabled using multi-backend - as described below. - - - Multi back-end - In a multi back-end deployment, more than one OpenStack Block Storage - instance runs on the same server. In this example, two - HNAS arrays are used, possibly providing different - storage performance: - - - For HNAS iSCSI, configure - /etc/cinder/cinder.conf: - the hnas1 and - hnas2 configuration blocks are - created. Set the - - option to point to an unique configuration - file for each block. Set the - option for - each back-end to - cinder.volume.drivers.hds.iscsi.HDSISCSIDriver. - enabled_backends=hnas1,hnas2 - -[hnas1] + HDS HNAS iSCSI and NFS driver + + This Block Storage volume driver provides iSCSI and NFS support for + HNAS (Hitachi Network-attached Storage) + arrays such as, HNAS 3000 and 4000 family. +
+ Supported operations + The NFS and iSCSI drivers support these operations: + + + Create, delete, attach, and detach volumes. + + + Create, list, and delete volume snapshots. + + + Create a volume from a snapshot. + + + Copy an image to a volume. + + + Copy a volume to an image. + + + Clone a volume. + + + Extend a volume. + + + Get volume statistics. + + +
+
 HNAS storage requirements
 Before using the iSCSI and NFS services, use the HNAS Web Interface to create storage pool(s) and file system(s), and assign an EVS. Make sure that the file systems used are not created as replication targets. Additionally:
 For NFS:
 Create NFS exports, choose a path for them (it must be different from "/"), and set the Show snapshots option to hide and disable access.
 Also, configure the norootsquash option so that the cinder services can change the permissions of their volumes.
 To use the hardware-accelerated features of HNAS NFS, we recommend setting max-nfs-version to 3. Refer to the HNAS command line reference to see how to configure this option.
 For iSCSI:
 You need to set an iSCSI domain.
+
 Block storage host requirements
 All versions:
 nfs-utils for RPM packages
 nfs-common, libc6-i386 for DEB packages (libc6-i386 is only required on Ubuntu 12.04)
 HDS SSC package (hds-ssc-v1.0-1) to communicate with an HNAS array using the SSC command. This utility package is available in the RPM package distributed with the hardware through physical media, or it can be manually copied from the SMU to the Block Storage host.
 Version 2.2-1:
 Icehouse OpenStack deployment for RHOSP 5 (RH 7.0), SUSE Cloud 4, and Mirantis Fuel (Ubuntu or CentOS hosts)
 hds-ssc-v1.0-1 package if not using SSH authentication
+
 Package installation
 If you are installing the driver from an RPM or DEB package, follow the steps below:
 Install SSC:
 $ rpm -i hds-ssc-v1.0-1.rpm
 Or in Ubuntu:
 $ dpkg -i hds-ssc_1.0-1_all.deb
 Install the dependencies:
 # yum install nfs-utils nfs-utils-lib
 Or in Ubuntu:
 # apt-get install nfs-common
 Or in openSUSE and SUSE Linux Enterprise Server:
 # zypper install nfs-client
 If you are using Ubuntu 12.04, you also need to install libc6-i386.
 Configure the driver as described in the "Driver configuration" section.
 Restart all cinder services (volume, scheduler, and backup).
+
+ Driver configuration + The HDS driver supports the concept of differentiated + services (also referred to as quality of service) by mapping + volume types to services provided through HNAS. + HNAS supports a variety of storage options and file + system capabilities, which are selected through the definition + of volume types and the use of multiple back ends. The driver + maps up to four volume types into separated exports or file + systems, and can support any number if using multiple back + ends. + The configuration for the driver is read from an + XML-formatted file (one per back end), which you need to create + and set its path in the cinder.conf configuration + file. Below are the configuration needed in + the cinder.conf configuration file + + The configuration file location may differ. + : + + [DEFAULT] +enabled_backends = hnas_iscsi1, hnas_nfs1 + For HNAS iSCSI driver create this section: + [hnas_iscsi1] volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver -hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi1_conf.xml -volume_backend_name=hnas-1 - -[hnas2] -volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver -hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi2_conf.xml -volume_backend_name=hnas-2 - - - Configure the - /opt/hds/hnas/cinder_iscsi1_conf.xml - file: - <?xml version="1.0" encoding="UTF-8" ?> -<config> - <mgmt_ip0>172.17.44.16</mgmt_ip0> - <hnas_cmd>ssc</hnas_cmd> - <chap_enabled>True</chap_enabled> - <username>supervisor</username> - <password>supervisor</password> - <svc_0> - <volume_type>regular</volume_type> - <iscsi_ip>172.17.39.132</iscsi_ip> - <hdp>fs-01</hdp> - </svc_0> -</config> - - - Configure the - /opt/hds/hnas/cinder_iscsi2_conf.xml - file: - <?xml version="1.0" encoding="UTF-8" ?> -<config> - <mgmt_ip0>172.17.44.20</mgmt_ip0> - <hnas_cmd>ssc</hnas_cmd> - <chap_enabled>True</chap_enabled> - <username>supervisor</username> - <password>supervisor</password> - <svc_0> - 
<volume_type>platinum</volume_type> - <iscsi_ip>172.17.30.130</iscsi_ip> - <hdp>fs-02</hdp> - </svc_0> -</config> - - - - - For NFS, configure - /etc/cinder/cinder.conf: - the hnas1 and - hnas2 configuration blocks are - created. Set the - - option to point to an unique configuration - file for each block. Set the - option for - each back-end to - cinder.volume.drivers.hds.nfs.HDSNFSDriver. - enabled_backends=hnas1,hnas2 - -[hnas1] +hds_hnas_iscsi_config_file = /path/to/config/hnas_config_file.xml +volume_backend_name = HNAS-ISCSI + For HNAS NFS driver create this section: + [hnas_nfs1] volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver -hds_hnas_nfs_config_file = /opt/hds/hnas/cinder_nfs1_conf.xml -volume_backend_name=hnas-1 - -[hnas2] -volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver -hds_hnas_nfs_config_file = /opt/hds/hnas/cinder_nfs2_conf.xml -volume_backend_name=hnas-2 - - - Configure the - /opt/hds/hnas/cinder_nfs1_conf.xml - file: - <?xml version="1.0" encoding="UTF-8" ?> -<config> - <mgmt_ip0>172.17.44.16</mgmt_ip0> +hds_hnas_nfs_config_file = /path/to/config/hnas_config_file.xml +volume_backend_name = HNAS-NFS + The XML file has the following format: + + <?xml version = "1.0" encoding = "UTF-8" ?> + <config> + <mgmt_ip0>172.24.44.15</mgmt_ip0> <hnas_cmd>ssc</hnas_cmd> + <chap_enabled>False</chap_enabled> + <ssh_enabled>False</ssh_enabled> + <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0> <username>supervisor</username> <password>supervisor</password> - <chap_enabled>False</chap_enabled> <svc_0> - <volume_type>regular</volume_type> - <hdp>172.17.44.100:/virtual-01</hdp> + <volume_type>default</volume_type> + <iscsi_ip>172.24.44.20</iscsi_ip> + <hdp>fs01-husvm</hdp> </svc_0> -</config> - - - Configure the - /opt/hds/hnas/cinder_nfs2_conf.xml - file: - <?xml version="1.0" encoding="UTF-8" ?> -<config> - <mgmt_ip0>172.17.44.20</mgmt_ip0> - <hnas_cmd>ssc</hnas_cmd> - <username>supervisor</username> - <password>supervisor</password> - 
<chap_enabled>False</chap_enabled> - <svc_0> - <volume_type>platinum</volume_type> - <hdp>172.17.44.100:/virtual-02</hdp> - </svc_0> -</config> - - - - - Type extra specs: <option>volume_backend</option> - and volume type - If you use volume types, you must configure them in - the configuration file and set the - option to the - appropriate back-end. In the previous multi back-end - example, the platinum volume type - is served by hnas-2, and the regular - volume type is served by hnas-1. - cinder type-key regular set volume_backend_name=hnas-1 -cinder type-key platinum set volume_backend_name=hnas-2 - - - Non-differentiated deployment of HNAS arrays - You can deploy multiple OpenStack HNAS drivers instances that each - control a separate HNAS array. Each instance does not need to have a - volume type associated with it. The OpenStack Block Storage filtering - algorithm selects the HNAS array with the largest - available free space. In each configuration file, you - must define the default - volume type in the service labels. - -
-
- HDS HNAS volume driver configuration options - These details apply to the XML format configuration file - that is read by HDS volume driver. These differentiated - service labels are predefined: svc_0, - svc_1, svc_2 - and svc_3 - There is no relative precedence or weight among - these four labels. - . Each respective service label associates with - these parameters and tags: - - volume_type - A create_volume - call with a certain volume type shall be matched - up with this tag. The value default - is special in that any service associated with this - type is used to create volume when no other labels - match. Other labels are case sensitive and should - exactly match. If no configured volume types match - the incoming requested type, an error occurs in - volume creation. - - hdp - - (iSCSI only) Virtual filesystem label associated - with the service. - (NFS only) Path to the volume - <ip_address>:/<path> associated with - the service. Additionally, this entry must be added - in the file used to list available NFS shares. This file is located, - by default, in /etc/cinder/nfs_shares - or you can specify the location in the - option in the cinder configuration file. - - - iscsi_ip - (iSCSI only) An iSCSI IP address dedicated - to the service. - - - Typically an OpenStack Block Storage volume instance has only one such - service label. For example, any svc_0, - svc_1, svc_2 or - svc_3 can be associated with it. - But any mix of these service labels can be used in the - same instance - The get_volume_stats() function always provides the available - capacity based on the combined sum of all the HDPs - that are used in these services labels. - . - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Configuration options
OptionTypeDefaultDescription
- RequiredManagement Port 0 IP address. Should be the IP - address of the 'Admin' EVS. -
- Optionalssc - is a command to - communicate to HNAS array. -
- OptionalTrue - (iSCSI only) is a boolean tag used - to enable CHAP authentication protocol. -
- Requiredsupervisor - Username is always required on HNAS. -
- Requiredsupervisor - Password is always required on HNAS. -
- - Optional(at least one label has to be - defined) - Service labels: these four predefined - names help four different sets of - configuration options. Each can specify - HDP and a unique volume type. -
- - Requireddefault - volume_type tag is used - to match volume type. - default meets any - type of volume type, or - if it is not specified. Any other - volume type is selected if exactly matched - during volume creation. -
- - Required - (iSCSI only) iSCSI IP address where volume - attaches for this volume type. -
- - Required - HDP, for HNAS iSCSI is the virtual filesystem label - or the path (for HNAS NFS) where volume, or - snapshot should be created. -
-
+ <svc_1>
+ <volume_type>platinum</volume_type>
+ <iscsi_ip>172.24.44.20</iscsi_ip>
+ <hdp>fs01-platinum</hdp>
+ </svc_1>
+ </config>
+
+
 HNAS volume driver XML configuration options
 An OpenStack Block Storage node using the HNAS drivers can have up to four services. Each service is defined by a svc_n tag (svc_0, svc_1, svc_2, or svc_3, for example). There is no relative precedence or weight among these four labels. These are the configuration options available for each service label:
Configuration options for service labels
 Option | Type | Default | Description
 volume_type | Required | default | When a create_volume call with a certain volume type happens, Block Storage tries to match that volume type against this tag. In each configuration file you must define the default volume type in the service labels; if no volume type is specified, default is used. Other labels are case sensitive and should match exactly. If no configured volume type matches the incoming requested type, an error occurs in the volume creation.
 iscsi_ip | Required only for iSCSI | | An iSCSI IP address dedicated to the service.
 hdp | Required | | For the iSCSI driver: virtual file system label associated with the service. For the NFS driver: path to the volume (<ip_address>:/<path>) associated with the service. Additionally, this entry must be added to the file used to list available NFS shares. This file is located, by default, in /etc/cinder/nfs_shares, or you can specify the location in the nfs_shares_config option in the cinder.conf configuration file.
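As an illustration, the NFS shares file referenced above is a plain list of exports, one per line, each matching the hdp value of an NFS service. A minimal sketch (the host address and export path here are hypothetical, not values from a real deployment):

```
172.24.44.20:/virtual-01
```

Each line in this file must correspond to an export that the Block Storage host can actually mount.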
+ + These are the configuration options available to the + config section of the XML config file: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Configuration options
 Option | Type | Default | Description
 mgmt_ip0 | Required | | Management Port 0 IP address. Should be the IP address of the "Admin" EVS.
 hnas_cmd | Optional | ssc | Command used to communicate with the HNAS array.
 chap_enabled | Optional (iSCSI only) | True | Boolean tag used to enable the CHAP authentication protocol.
 username | Required | supervisor | Username is always required on HNAS.
 password | Required | supervisor | Password is always required on HNAS.
 svc_0, svc_1, svc_2, svc_3 | Optional (at least one label has to be defined) | | Service labels: these four predefined names identify four different sets of configuration options. Each can specify an HDP and a unique volume type.
 cluster_admin_ip0 | Optional if ssh_enabled is True | | The address of the HNAS cluster admin.
 ssh_enabled | Optional | False | Enables SSH authentication between the Block Storage host and the SMU.
 ssh_private_key | Required if ssh_enabled is True | False | Path to the SSH private key used to authenticate to the HNAS SMU. The public key must be uploaded to the HNAS SMU using ssh-register-public-key (an SSH subcommand). Note that copying the public key to HNAS with ssh-copy-id does not work properly, as the SMU periodically wipes out those keys.
+
+
 Service labels
 The HNAS driver supports differentiated types of service using service labels. It is possible to create up to four of them, for example, gold, platinum, silver, and ssd.
 Each service is treated by OpenStack Block Storage as a unit of scheduling (pool). After creating the services in the XML configuration file, you must configure one volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the <volume_type> section of that service. If this is not set, OpenStack Block Storage will schedule the volume creation to the pool with the largest available free space or other criteria configured in volume filters.
 $ cinder type-create 'default'
$ cinder type-key 'default' set service_label='default'
$ cinder type-create 'platinum-tier'
$ cinder type-key 'platinum-tier' set service_label='platinum'
+
 Multi-back-end configuration
 If you use multiple back ends and intend to enable the creation of a volume in a specific back end, you must configure volume types to set the volume_backend_name option to the appropriate back end. Then, create volume_type configurations with the same volume_backend_name.
 $ cinder type-create 'iscsi'
$ cinder type-key 'iscsi' set volume_backend_name='HNAS-ISCSI'
$ cinder type-create 'nfs'
$ cinder type-key 'nfs' set volume_backend_name='HNAS-NFS'
 You can deploy multiple OpenStack HNAS driver instances, each controlling a separate HNAS array. Each service (svc_0, svc_1, svc_2, svc_3) on the instances needs to have a volume_type and service_label metadata associated with it. If no metadata is associated with a pool, the OpenStack Block Storage filtering algorithm selects the pool with the largest available free space.
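With the volume types above in place, a volume can be directed to a specific back end by passing the type at creation time. A sketch using the example type names from this section (the size and display name are arbitrary):

```console
$ cinder create --volume-type nfs --display-name volume-on-hnas-nfs 10
```

The scheduler then only considers back ends whose volume_backend_name matches the extra spec of the requested type.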
+
 SSH configuration
 Instead of using SSC on the Block Storage host and storing its credentials in the XML configuration file, the HNAS driver supports SSH authentication. To configure that:
 Create an SSH key pair on the Block Storage host (leave the passphrase empty):
 $ mkdir -p /opt/hds/ssh
$ ssh-keygen -f /opt/hds/ssh/hnaskey
 Change the owner of the key to cinder (or the user the volume service will be run as):
 # chown -R cinder.cinder /opt/hds/ssh
 Export your public key to the SMU (HNAS):
 $ ssh-copy-id -i /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>
 Check the communication with HNAS:
 $ ssh [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
 <cluster_admin_ip0> is "localhost" for single-node deployments. This should return a list of available file systems on HNAS.
+
 Editing the XML config file:
 Set the "username".
 Enable SSH by adding the line "<ssh_enabled>True</ssh_enabled>" under the "<config>" section.
 Set the private key path: "<ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>" under the "<config>" section.
 If the HNAS is in a multi-cluster configuration, set "<cluster_admin_ip0>" to the cluster node admin IP. In a single-node HNAS, leave it empty.
 Restart the cinder service.
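Putting the steps above together, a sketch of a config section with SSH authentication enabled. The addresses, user, and service values are reused from the earlier XML example, and the key path follows the SSH configuration section; the password tag is omitted here on the assumption that SSH authentication replaces the stored credentials, as this section's introduction describes:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.24.44.15</mgmt_ip0>
  <hnas_cmd>ssc</hnas_cmd>
  <ssh_enabled>True</ssh_enabled>
  <ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>
  <!-- set only in multi-cluster deployments; leave empty on a single node -->
  <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0>
  <username>supervisor</username>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.24.44.20</iscsi_ip>
    <hdp>fs01-husvm</hdp>
  </svc_0>
</config>
```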
+
 Additional notes
 The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.
 After changing the configuration on the storage, the OpenStack Block Storage driver must be restarted.
 Due to an HNAS limitation, the HNAS iSCSI driver allows only 32 volumes per target.
 On Red Hat, if the system is configured to use SELinux, you need to set the appropriate option for the NFS driver to work properly.