Merge "[config-ref] Improvements in shared file systems drivers" (commit dfa23b7631)
parameters described in these sections.

The Shared File Systems service can handle multiple drivers at once.
The configuration for all of them follows a common paradigm:

#. In the configuration file ``manila.conf``, configure the option
   ``enabled_backends`` with the list of names for your configuration.

For example, if you want to enable two drivers and name them
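The paradigm above can be sketched as a configuration fragment. The back-end names ``backend1`` and ``backend2`` and their options below are illustrative assumptions, not values from this document:

.. code-block:: ini

   [DEFAULT]
   # Names of the back-end sections to enable (assumed names)
   enabled_backends = backend1,backend2

   [backend1]
   share_backend_name = BACKEND1

   [backend2]
   share_backend_name = BACKEND2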
plug-ins to manage different EMC storage products.

The Isilon driver is a plug-in for the EMC framework which allows the
Shared File Systems service to interface with an Isilon back end to
provide a shared filesystem. The EMC driver framework with the Isilon
plug-in is referred to as the ``Isilon Driver`` in this document.

This Isilon Driver interfaces with an Isilon cluster via the REST Isilon
Platform API (PAPI) and the RESTful Access to Namespace API (RAN).
Systems service configuration file for the Isilon driver:

.. code-block:: ini

   share_driver = manila.share.drivers.emc.driver.EMCShareDriver
   emc_share_backend = isilon
   emc_nas_server = <IP address of Isilon cluster>
   emc_nas_login = <username>
   emc_nas_password = <password>

Restrictions
~~~~~~~~~~~~
Pre-configurations on VNX
~~~~~~~~~~~~~~~~~~~~~~~~~

the first network device (physical port on NIC) of Data Mover to
access the network.

Go to :guilabel:`Unisphere` to check the device list:
:menuselection:`Settings > Network > Settings for File (Unified system
only) > Device`.

The following parameters need to be configured in the
``/etc/manila/manila.conf`` file for the VNX driver:

.. code-block:: ini

   emc_nas_pool_name = <pool name>
   share_driver = manila.share.drivers.emc.driver.EMCShareDriver

emc_share_backend
    The plug-in name. Set it to ``vnx`` for the VNX driver.

emc_nas_server
    The control station IP address of the VNX system to be managed.

emc_nas_password and emc_nas_login
    The fields that are used to provide credentials to the VNX
    system. Only local users of VNX File are supported.

emc_nas_server_container
    The name of the Data Mover that serves the share service.

emc_nas_pool_name
    The name of the pool from which the user wants to create volumes.
    The pools can be created using Unisphere for VNX.

Restart of the ``manila-share`` service is needed for the configuration
changes to take effect.

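Putting these options together, a minimal back-end section for the VNX driver might look like the following sketch. The section name ``vnx1`` and the placeholder values are illustrative assumptions:

.. code-block:: ini

   [vnx1]
   share_driver = manila.share.drivers.emc.driver.EMCShareDriver
   emc_share_backend = vnx
   emc_nas_server = <control station IP address>
   emc_nas_login = <username>
   emc_nas_password = <password>
   emc_nas_server_container = <Data Mover name>
   emc_nas_pool_name = <pool name>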
The VNX driver has the following restrictions:

  communicate with the hosts in the VLANs. To create shares for
  different VLANs with the same subnet address, use different Data Movers.

- The ``Active Directory`` security service is the only supported
  security service type and it is required to create CIFS shares.

- Only one security service can be configured for each share network.
and Block Storage service volumes. There are two modules that handle
them in the Shared File Systems service:

- The ``service_instance`` module creates VMs in Compute with a
  predefined image called ``service image``. This module can be used by
  any driver for provisioning of service VMs to be able to separate
  share resources among tenants.

- The ``generic`` module operates with Block Storage service volumes
  and VMs created by the ``service_instance`` module, then creates
  shared filesystems based on volumes attached to VMs.

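As a sketch, a back-end section using the generic module together with ``service_instance`` could look like the following. The section name, image name, and credentials are illustrative assumptions, not values from this document:

.. code-block:: ini

   [generic1]
   share_backend_name = GENERIC1
   share_driver = manila.share.drivers.generic.GenericShareDriver
   driver_handles_share_servers = True
   # Service image and credentials used by the service_instance
   # module (assumed values)
   service_image_name = manila-service-image
   service_instance_user = manila
   service_instance_password = manila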
Network configurations
~~~~~~~~~~~~~~~~~~~~~~

Each driver can handle networking in its own way, see:
https://wiki.openstack.org/wiki/manila/Networking.

One of the two possible configurations can be chosen for share provisioning
using the ``service_instance`` module:

- Service VM has one network interface from a network that is
Systems service clients.

A Shared File Systems service share is a GlusterFS volume. This driver
uses the flat-network (share-server-less) model. Instances directly talk
with the GlusterFS back end storage pool. The instances use the
``glusterfs`` protocol to mount the GlusterFS shares. Access to each
share is allowed via TLS certificates. Only the instance which has the
TLS trust established with the GlusterFS back end can mount and hence
use the share. Currently only ``read-write (rw)`` access is supported.
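Certificate-based access is granted with the :command:`manila access-allow` command using the ``cert`` access type; a usage sketch, where the share name ``demo_share`` and the certificate common name are assumed values:

.. code-block:: console

   $ manila access-allow demo_share cert client.example.com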

Network approach
~~~~~~~~~~~~~~~~
steps:

#. Install and configure an OpenStack environment with default Shared File
   System parameters and services. Refer to OpenStack Manila configuration
   reference.
#. Configure HNAS parameters in the ``manila.conf`` file.
#. Prepare the network.
#. Configure and create the share type.
#. Restart the services.
#. Configure the network.

The first two steps are not in the scope of this document. We cover all the
remaining steps in the following sections.

Step 3 - HNAS parameter configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Below is an example of a minimal configuration of the HNAS driver:

.. code-block:: ini

   [DEFAULT]
   enabled_share_backends = hnas1
   enabled_share_protocols = NFS

   [hnas1]
   share_backend_name = HNAS1
   share_driver = manila.share.drivers.hitachi.hds_hnas.HDSHNASDriver
   driver_handles_share_servers = False
   hds_hnas_ip = 172.24.44.15
   hds_hnas_user = supervisor
   hds_hnas_password = supervisor
   hds_hnas_evs_id = 1
   hds_hnas_evs_ip = 10.0.1.20
   hds_hnas_file_system_name = FS-Manila

The following table contains the configuration options specific to the
share driver.
Step 4 - prepare the network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the driver mode used by the HNAS Driver (DHSS = ``False``), the driver
does not handle network configuration; it is up to the administrator to
configure it. It is mandatory that the HNAS management interface is
reachable from the Shared File Systems node through the admin network,
while the selected EVS data interface is
Run in **Networking node**:

.. code-block:: console

   # ifconfig eth1 0
   # ovs-vsctl add-br br-eth1
   # ovs-vsctl add-port br-eth1 eth1
   # ifconfig eth1 up

Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file (default directory)
and change the following settings in their respective tags:

.. code-block:: ini

   [ml2]
   type_drivers = flat,vlan,vxlan,gre
   mechanism_drivers = openvswitch

   [ml2_type_flat]
   flat_networks = physnet1,physnet2

   [ml2_type_vlan]
   network_vlan_ranges = physnet1:1000:1500,physnet2:2000:2500

   [ovs]
   bridge_mappings = physnet1:br-ex,physnet2:br-eth1

You may have to repeat the last line above in another file on the Compute
node; if it exists, it is located in:
be the ID of EVS in use, such as in the following example:

.. code-block:: console

   $ console-context --evs 3 route-net-add --gateway 192.168.1.1 \
     10.0.0.0/24

Step 5 - share type configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
System driver, this must be set to ``False``.

.. code-block:: console

   $ manila type-create hitachi False

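Once the share type exists, it can be referenced when creating shares; a usage sketch, where the protocol, size, and share name are assumed values:

.. code-block:: console

   $ manila create NFS 1 --name my_share --share-type hitachi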
Step 6 - restart the services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restart all Shared File Systems services
(``manila-share``, ``manila-scheduler`` and ``manila-api``) and
Networking services (``neutron-*``).

Step 7 - configure the network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the Networking node it is necessary to create a network, a subnet, and
to add this subnet interface to a router.

Create a network for the given tenant (demo), providing the DEMO_ID (this can
be fetched using the :command:`openstack project list` command), a name for the
network, the name of the physical network over which the virtual network is
implemented, and the type of the physical mechanism by which the virtual
network is implemented:

.. code-block:: console

   $ neutron net-create --tenant-id <DEMO_ID> hnas_network \
     --provider:physical_network=physnet2 --provider:network_type=flat

Create a subnet for the same tenant (demo), providing the DEMO_ID (this can be
fetched using the :command:`openstack project list` command), the gateway IP of
this subnet, a name for the subnet, the network ID created in the previous
step (this can be fetched using the :command:`neutron net-list` command), and
the CIDR of the subnet:

.. code-block:: console

   $ neutron subnet-create --tenant-id <DEMO_ID> --gateway <GATEWAY> \
     --name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>

Finally, add the subnet interface to a router, providing the router ID and
subnet ID created in the previous step (these can be fetched using the
:command:`neutron subnet-list` command):

.. code-block:: console

   $ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>

Manage and unmanage shares
~~~~~~~~~~~~~~~~~~~~~~~~~~

Shared File Systems has the ability to manage and unmanage shares. If there is
a share in the storage and it is not in OpenStack, you can manage that share
and use it as a Shared File Systems share. HNAS drivers use virtual-volumes
Requirements
~~~~~~~~~~~~

- Enable quotas for the GPFS file system, using :command:`mmchfs -Q yes`.

- Establish a network connection between the Shared File Systems service
  host and the storage back end.

Shared File Systems service driver configuration setting
Known restrictions
~~~~~~~~~~~~~~~~~~

network.

- While using a remote GPFS node with Ganesha NFS,
  ``gpfs_ssh_private_key`` for remote login to the GPFS node must be
  specified and there must be passwordless authentication already
  set up between the ``manila-share`` service and the remote GPFS node.

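As an illustration, the remote-login options might appear in a back-end section as follows; the section name, login, and key path are assumptions, not values from this document:

.. code-block:: ini

   [gpfs1]
   # Assumed values for remote GPFS access over SSH
   gpfs_ssh_login = root
   gpfs_ssh_private_key = /etc/manila/ssh/gpfs_id_rsa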
Driver options
~~~~~~~~~~~~~~

Configuration
~~~~~~~~~~~~~

To configure Quobyte access for the Shared File Systems service, a back end
configuration section has to be added in the ``manila.conf`` file. Add the
name of the configuration section to ``enabled_share_backends`` in the
``manila.conf`` file. For example, if the section is named ``Quobyte``:
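A sketch of such a configuration, under the assumption that the driver path is ``manila.share.drivers.quobyte.quobyte.QuobyteShareDriver``:

.. code-block:: ini

   [DEFAULT]
   enabled_share_backends = Quobyte

   [Quobyte]
   # Assumed driver path for the Quobyte back end
   share_driver = manila.share.drivers.quobyte.quobyte.QuobyteShareDriver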

Compute instances can consume.

The Shared File Systems service provides:

manila-api
    A WSGI app that authenticates and routes requests throughout the
    Shared File Systems service. It supports the OpenStack APIs.

manila-scheduler
    Schedules and routes requests to the appropriate share service. The
    scheduler uses configurable filters and weighers to route requests.
    The Filter Scheduler is the default and enables filters on things
    like Capacity, Availability Zone, Share Types, and Capabilities as
    well as custom filters.

manila-share
    Manages back-end devices that provide shared file systems. A
    ``manila-share`` service can run in one of two modes, with or
    without handling of share servers. Share servers export file shares
    via share networks. When share servers are not used, the networking