HDS HNAS iSCSI and NFS driver
- This Block Storage volume driver provides iSCSI and NFS support for
+ This OpenStack Block Storage volume driver provides iSCSI and NFS
+ support for
HNAS (Hitachi Network-attached Storage)
- arrays such as, HNAS 3000 and 4000 family.
+ Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080
+ and 4100.

Supported operations

The NFS and iSCSI drivers support these operations:
@@ -45,10 +46,10 @@
HNAS storage requirements
- Before using iSCSI and NFS services, use the HNAS Web
- Interface to create storage pool(s), file system(s), and assign
- an EVS. Make sure that the file system used is not created as
- replication targets. Additionally:
+ Before using iSCSI and NFS services, use the HNAS configuration and
+ management GUI (SMU) or SSC CLI to create storage pool(s), file system(s),
+ and assign an EVS. Make sure that the file system used is not
+ created as replication targets. Additionally:
For NFS:
@@ -83,22 +84,27 @@
Block storage host requirements
- All versions:
+ The HNAS driver is supported on Red Hat, SUSE Cloud,
+ and Ubuntu Cloud. The following packages must be installed:
- nfs-utils for RPM packages
+ nfs-utils for Red Hat
+
+
+ nfs-client for SUSE
+
+ nfs-common, libc6-i386
- for DEB packages (libc6-i386 only required on
+ for Ubuntu (libc6-i386 only required on
Ubuntu 12.04)
- HDS SSC package (hds-ssc-v1.0-1) to
+ If you are not using SSH, you need the HDS SSC package
+ (hds-ssc-v1.0-1) to
communicate with an HNAS array using the SSC
command. This utility package is available
in the RPM package distributed with the hardware
@@ -109,43 +115,31 @@
- Version: 2.2-1:
-
-
-
-
- Icehouse OpenStack deployment for RHOSP 5
- (RH 7.0), SUSE Cloud 4, and Mirantis Fuel (Ubuntu or
- CentOS hosts)
-
-
-
- hds-ssc-v1.0-1 package if not using SSH auth
-
-
-
-
- Package installation
If you are installing the driver from an RPM or DEB package,
follow the steps below:

Install SSC:
- $rpm -i hds-ssc-v1.0-1.rpm
+ In Red Hat:
+ #rpm -i hds-ssc-v1.0-1.rpm
+ Or in SUSE:
+ #zypper install hds-ssc-v1.0-1.rpm
+ Or in Ubuntu:
- $dpkg -i hds-ssc_1.0-1_all.deb
+ #dpkg -i hds-ssc_1.0-1_all.deb

Install the dependencies:
+ In Red Hat:
+ #yum install nfs-utils nfs-utils-lib
+ Or in Ubuntu:
+ #apt-get install nfs-common
- Or in openSUSE and SUSE Linux Enterprise Server:
+ Or in SUSE:
+ #zypper install nfs-client

If you are using Ubuntu 12.04, you also need to install
- libc6-i386
+ libc6-i386
+ #apt-get install libc6-i386

Configure the driver as described in the "Driver
Configuration" section.
@@ -156,7 +150,7 @@
Driver configuration

The HDS driver supports the concept of differentiated
services (also referred to as quality of service) by mapping
volume types to services provided through HNAS.

HNAS supports a variety of storage options and file
system capabilities, which are selected through the definition
@@ -461,22 +455,20 @@ volume_backend_name = HNAS-NFS

Service labels
-
- HNAS driver supports differentiated types of service using the service
- labels. It is possible to create up to four types of them, as gold,
- platinun, silver and ssd, for example.
-
-
- Each service is treated by OpenStack Block Storage as a unit of
- scheduling (pool). After creating the services in the XML
- configuration file, you must configure one
- volume_type per service. Each
- volume_type must have the metadata
- service_label with the same name configured in the
- <volume_type> section of that
- service. If this is not set, OpenStack Block Storage will
- schedule the volume creation to the pool with largest available
- free space or other criteria configured in volume filters.
+
+ The HNAS driver supports differentiated types of service using service
+ labels. It is possible to create up to four of them, such as gold,
+ platinum, silver and ssd, for example.
+
+
+ After creating the services in the XML configuration file, you
+ must configure one volume_type per service.
+ Each volume_type must have the metadata
+ service_label with the same name configured in the
+ <volume_type> section of that
+ service. If this is not set, OpenStack Block Storage will
+ schedule the volume creation to the pool with the largest available
+ free space or other criteria configured in volume filters.
$cinder type-create 'default'
$cinder type-key 'default' set service_label = 'default'
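The per-label pattern above can be scripted. A sketch, assuming the four example labels from this section (use the labels actually defined in your XML configuration file):

```shell
# Sketch: one volume_type per service label defined in the XML file.
# The labels below are the examples from the text, not required names.
for label in gold platinum silver ssd; do
    cinder type-create "$label"
    cinder type-key "$label" set service_label="$label"
done
```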
@@ -516,8 +508,9 @@ volume_backend_name = HNAS-NFS
- Create a pair of public keys in the Block Storage host
- (leave the pass-phrase empty):
+ If you don't have an SSH key pair already generated,
+ create one in the Block Storage host (leave the pass-phrase
+ empty):
$mkdir -p /opt/hds/ssh
$ssh-keygen -f /opt/hds/ssh/hnaskey
@@ -531,15 +524,33 @@ volume_backend_name = HNAS-NFS
- Export your pubkey to SMU (HNAS):
+ Create the directory "ssh_keys" in the SMU server:
- $ssh-copy-id -i /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>
+ $ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
- Check the communication with HNAS:
+ Copy the public key to the "ssh_keys" directory:
+
+ $scp /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
+
+
+
+ Access the SMU server:
+
+ $ssh [manager|supervisor]@<smu-ip>
+
+
+
+ Run the command to register the SSH keys:
+
+ $ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
+
+
+
+ Check the communication with HNAS in the Block Storage host:
- $ssh [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
+ $ssh -i /opt/hds/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
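Once the passwordless login works, the driver must be told to use SSH and where the private key lives. A sketch of the relevant elements of the XML configuration file, with element names as they appeared in the Juno-era HNAS driver (the SMU address is an example; verify the element names against your release):

```xml
<!-- Sketch only: element names from the Juno-era HNAS driver -->
<config>
  <mgmt_ip0>172.17.44.15</mgmt_ip0>        <!-- example SMU address -->
  <username>supervisor</username>
  <ssh_enabled>True</ssh_enabled>
  <ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>
  <!-- remaining elements (svc_* services, etc.) unchanged -->
</config>
```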