glusterfs_native: negotiate volumes with glusterd
So far glusterfs_native was able to create shares from volumes listed in
the 'glusterfs_targets' config option. New behavior: a regexp pattern is
to be provided through the 'glusterfs_volume_pattern' config option. Upon
share creation, grep the Gluster servers' volumes with this pattern and
create the new share from one among those.

Change-Id: I12ba0dbad0b1174c57e94acd5e7f6653f5bfaae8
Closes-Bug: #1437176
parent 1029277388
commit 482d304e38
@@ -21,7 +21,7 @@ GlusterFS Native driver uses GlusterFS, an open source distributed file system,
 as the storage backend for serving file shares to Manila clients.
 
 A Manila share is a GlusterFS volume. This driver uses flat-network
-(share-server less) model. Instances directly talk with the GlusterFS backend
+(share-server-less) model. Instances directly talk with the GlusterFS backend
 storage pool. The instances use 'glusterfs' protocol to mount the GlusterFS
 shares. Access to each share is allowed via TLS Certificates. Only the instance
 which has the TLS trust established with the GlusterFS backend can mount and
@@ -51,6 +51,8 @@ Supported Operations
 - Delete GlusterFS Share
 - Allow GlusterFS Share access (rw)
 - Deny GlusterFS Share access
+- Create GlusterFS Snapshot
+- Delete GlusterFS Snapshot
 
 Requirements
 ------------
@@ -68,17 +70,24 @@ The following parameters in Manila's configuration file need to be set:
 
 - `share_driver` =
   manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver
-- `glusterfs_targets` = List of GlusterFS volumes that can be used to create
-  shares. Each GlusterFS volume should be of the form
-  ``[remoteuser@]<glustervolserver>:/<glustervolid>``
+- `glusterfs_servers` = List of GlusterFS servers which provide volumes
+  that can be used to create shares. The servers are expected to be of
+  distinct Gluster clusters (ie. should not be gluster peers). Each server
+  should be of the form ``[<remoteuser>@]<glustervolserver>``.
 
-If the backend GlusterFS server runs on the host running the Manila share
-service, each member of the `glusterfs_targets` list can be of the form
-``<glustervolserver>:/<glustervolid>``
-
-If the backend GlusterFS server runs remotely, each member of the
-`glusterfs_targets` list can be of the form
-``<remoteuser>@<glustervolserver>:/<glustervolid>``
+  The optional ``<remoteuser>@`` part of the server URI indicates SSH
+  access for cluster management (see related optional parameters below).
+  If it is not given, direct command line management is performed (ie.
+  Manila host is assumed to be part of the GlusterFS cluster the server
+  belongs to).
+- `glusterfs_volume_pattern` = Regular expression template
+  used to filter GlusterFS volumes for share creation. The regex template can
+  contain the #{size} parameter which matches a number (sequence of digits)
+  and the value shall be interpreted as size of the volume in GB. Examples:
+  ``manila-share-volume-\d+$``, ``manila-share-volume-#{size}G-\d+$``; with
+  matching volume names, respectively: *manila-share-volume-12*,
+  *manila-share-volume-3G-13*. In the latter example, the number that matches
+  ``#{size}``, that is, 3, indicates that the size of the volume is 3G.
 
 The following configuration parameters are optional:
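The ``#{size}`` templating described above can be illustrated with plain Python. This is a sketch, not the driver's actual code: the placeholder is expanded into a named regex group, and the size is read back from a matching volume name.

```python
import re

# Hypothetical pattern, as in the example above: '#{size}' stands for a
# digit run that is interpreted as the volume size in GB.
template = r'manila-share-volume-#{size}G-\d+$'
pattern = re.compile(template.replace('#{size}', r'(?P<size>\d+)'))

# 'manila-share-volume-3G-13' carries a 3 GB size indication.
m = pattern.match('manila-share-volume-3G-13')
size_gb = int(m.group('size'))
```

A name without the ``G`` segment, such as *manila-share-volume-12*, only matches the first template from the example, which carries no size information.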
@@ -91,11 +100,18 @@ The following configuration parameters are optional:
 Known Restrictions
 ------------------
 
-- GlusterFS volumes are not created on the fly. A pre-existing list of
-  GlusterFS volumes must be supplied in `glusterfs_targets`.
+- GlusterFS volumes are not created on demand. A pre-existing set of
+  GlusterFS volumes should be supplied by the GlusterFS cluster(s), conforming
+  to the naming convention encoded by ``glusterfs_volume_pattern``. However,
+  the GlusterFS endpoint is allowed to extend this set any time (so Manila
+  and GlusterFS endpoints are expected to communicate volume supply/demand
+  out-of-band). ``glusterfs_volume_pattern`` can include a size hint (with
+  ``#{size}`` syntax), which, if present, requires the GlusterFS end to
+  indicate the size of the shares in GB in the name. (On share creation,
+  Manila picks volumes *at least* as big as the requested one.)
 - Certificate setup (aka trust setup) between instance and storage backend is
   out of band of Manila.
-- Support for 'create_snapshot' and 'create_share_from_snapshot' is planned for Liberty release.
+- Support for 'create_share_from_snapshot' is planned for Liberty release.
 
 The :mod:`manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver` Module
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -90,27 +90,38 @@ class GlusterManager(object):
     """Interface with a GlusterFS volume."""
 
     scheme = re.compile('\A(?:(?P<user>[^:@/]+)@)?'
-                        '(?P<host>[^:@/]+):'
-                        '/(?P<vol>.+)')
+                        '(?P<host>[^:@/]+)'
+                        '(?::/(?P<vol>.+))?')
 
     def __init__(self, address, execf, path_to_private_key=None,
-                 remote_server_password=None):
-        """Initialize a GaneshaManager instance.
+                 remote_server_password=None, has_volume=True):
+        """Initialize a GlusterManager instance.
 
         :param address: the Gluster URI (in [<user>@]<host>:/<vol> format).
         :param execf: executor function for management commands.
         :param path_to_private_key: path to private ssh key of remote server.
         :param remote_server_password: ssh password for remote server.
+        :param has_volume: instruction to uri parser regarding how to deal
+                           with the optional volume part (True: require its
+                           presence, False: require its absence, None: don't
+                           require anything about volume).
         """
         m = self.scheme.search(address)
+        if m:
+            self.volume = m.group('vol')
+            if (has_volume is True and not self.volume) or (
+                    has_volume is False and self.volume):
+                m = None
         if not m:
-            raise exception.GlusterfsException('invalid gluster address ' +
-                                               address)
+            raise exception.GlusterfsException(
+                _('Invalid gluster address %s.') % address)
         self.remote_user = m.group('user')
         self.host = m.group('host')
-        self.volume = m.group('vol')
         self.qualified = address
-        self.export = ':/'.join([self.host, self.volume])
+        if self.volume:
+            self.export = ':/'.join([self.host, self.volume])
+        else:
+            self.export = None
         self.path_to_private_key = path_to_private_key
         self.remote_server_password = remote_server_password
         self.gluster_call = self.make_gluster_call(execf)
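The reworked ``scheme`` regex makes the ``:/<vol>`` part optional. A quick standalone check of the same expression (copied from the hunk above) shows how both address forms parse, and why the new ``has_volume`` flag can simply inspect the ``vol`` group:

```python
import re

# The GlusterManager address scheme after this change: the volume part
# '(?::/(?P<vol>.+))?' is now optional.
scheme = re.compile(r'\A(?:(?P<user>[^:@/]+)@)?'
                    r'(?P<host>[^:@/]+)'
                    r'(?::/(?P<vol>.+))?')

with_vol = scheme.search('testuser@127.0.0.1:/testvol')
without_vol = scheme.search('testuser@127.0.0.1')
# With a volume, all three groups are populated; without one, 'vol' is
# None -- which is exactly what the has_volume check in __init__ inspects.
```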
@@ -27,7 +27,10 @@ Supports working with multiple glusterfs volumes.
 
 import errno
 import pipes
+import random
+import re
 import shutil
+import string
 import tempfile
 import xml.etree.cElementTree as etree
@@ -46,11 +49,13 @@ from manila import utils
 LOG = log.getLogger(__name__)
 
 glusterfs_native_manila_share_opts = [
-    cfg.ListOpt('glusterfs_targets',
+    cfg.ListOpt('glusterfs_servers',
                 default=[],
-                help='List of GlusterFS volumes that can be used to create '
-                     'shares. Each GlusterFS volume should be of the form '
-                     '[remoteuser@]<volserver>:/<volid>'),
+                deprecated_name='glusterfs_targets',
+                help='List of GlusterFS servers that can be used to create '
+                     'shares. Each GlusterFS server should be of the form '
+                     '[remoteuser@]<volserver>, and they are assumed to '
+                     'belong to distinct Gluster clusters.'),
     cfg.StrOpt('glusterfs_native_server_password',
                default=None,
                secret=True,
@@ -61,6 +66,22 @@ glusterfs_native_manila_share_opts = [
     cfg.StrOpt('glusterfs_native_path_to_private_key',
                default=None,
                help='Path of Manila host\'s private SSH key file.'),
+    cfg.StrOpt('glusterfs_volume_pattern',
+               default=None,
+               help='Regular expression template used to filter '
+                    'GlusterFS volumes for share creation. '
+                    'The regex template can optionally (ie. with support '
+                    'of the GlusterFS backend) contain the #{size} '
+                    'parameter which matches an integer (sequence of '
+                    'digits) in which case the value shall be interpreted '
+                    'as size of the volume in GB. Examples: '
+                    '"manila-share-volume-\d+$", '
+                    '"manila-share-volume-#{size}G-\d+$"; '
+                    'with matching volume names, respectively: '
+                    '"manila-share-volume-12", "manila-share-volume-3G-13". '
+                    'In latter example, the number that matches "#{size}", '
+                    'that is, 3, is an indication that the size of volume '
+                    'is 3G.'),
 ]
 
 CONF = cfg.CONF
@@ -71,6 +92,14 @@ AUTH_SSL_ALLOW = 'auth.ssl-allow'
 CLIENT_SSL = 'client.ssl'
 NFS_EXPORT_VOL = 'nfs.export-volumes'
 SERVER_SSL = 'server.ssl'
+# The dict specifying named parameters
+# that can be used with glusterfs_volume_pattern
+# in #{<param>} format.
+# For each of them we give the regex pattern it matches
+# and a transformer function ('trans') for the matched
+# string value.
+# Currently we handle only #{size}.
+PATTERN_DICT = {'size': {'pattern': '(?P<size>\d+)', 'trans': int}}
 
 
 class GlusterfsNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
@@ -90,13 +119,38 @@ class GlusterfsNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
             False, *args, **kwargs)
         self.db = db
         self._helpers = None
-        self.gluster_unused_vols_dict = {}
         self.gluster_used_vols_dict = {}
         self.configuration.append_config_values(
             glusterfs_native_manila_share_opts)
+        self.gluster_nosnap_vols_dict = {}
         self.backend_name = self.configuration.safe_get(
             'share_backend_name') or 'GlusterFS-Native'
+        self.volume_pattern = self._compile_volume_pattern()
+        self.volume_pattern_keys = self.volume_pattern.groupindex.keys()
+        glusterfs_servers = {}
+        for srvaddr in self.configuration.glusterfs_servers:
+            glusterfs_servers[srvaddr] = self._glustermanager(
+                srvaddr, has_volume=False)
+        self.glusterfs_servers = glusterfs_servers
+
+    def _compile_volume_pattern(self):
+        """Compile a RegexObject from the config specified regex template.
+
+        (cfg.glusterfs_volume_pattern)
+        """
+
+        subdict = {}
+        for key, val in six.iteritems(PATTERN_DICT):
+            subdict[key] = val['pattern']
+
+        # Using templates with placeholder syntax #{<var>}
+        class CustomTemplate(string.Template):
+            delimiter = '#'
+
+        volume_pattern = CustomTemplate(
+            self.configuration.glusterfs_volume_pattern).substitute(
+                subdict)
+        return re.compile(volume_pattern)
 
     def do_setup(self, context):
         """Setup the GlusterFS volumes."""
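`_compile_volume_pattern` relies on `string.Template` accepting a custom delimiter in a subclass. The mechanism can be exercised on its own, using the `'size'` entry of `PATTERN_DICT` shown earlier and a hypothetical config value:

```python
import re
import string


# Same trick as in _compile_volume_pattern: '#' as template delimiter,
# so '#{size}' is the placeholder rather than '${size}'.
class CustomTemplate(string.Template):
    delimiter = '#'


# Substitute the '#{size}' placeholder with its regex from PATTERN_DICT.
regex = CustomTemplate(r'manila-share-\d+-#{size}G$').substitute(
    {'size': r'(?P<size>\d+)'})
compiled = re.compile(regex)
```

Because only the delimiter changes, backslashes and the trailing `$` anchor in the template pass through untouched; `groupindex` on the compiled object then exposes `size` as a named group, which is how the driver builds `volume_pattern_keys`.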
@@ -104,18 +158,18 @@ class GlusterfsNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
 
         # We don't use a service mount as its not necessary for us.
         # Do some sanity checks.
-        if len(self.configuration.glusterfs_targets) == 0:
-            # No volumes specified in the config file. Raise exception.
-            msg = (_("glusterfs_targets list seems to be empty! "
-                     "Add one or more gluster volumes to work "
-                     "with in the glusterfs_targets configuration "
-                     "parameter."))
+        gluster_volumes_initial = set(self._fetch_gluster_volumes())
+        if not gluster_volumes_initial:
+            # No suitable volumes are found on the Gluster end.
+            # Raise exception.
+            msg = (_("Gluster backend does not provide any volume "
+                     "matching pattern %s"
+                     ) % self.configuration.glusterfs_volume_pattern)
             LOG.error(msg)
             raise exception.GlusterfsException(msg)
 
-        LOG.info(_LI("Number of gluster volumes read from config: "
-                     "%(numvols)s"),
-                 {'numvols': len(self.configuration.glusterfs_targets)})
+        LOG.info(_LI("Found %d Gluster volumes allocated for Manila."
+                     ), len(gluster_volumes_initial))
 
         try:
             self._execute('mount.glusterfs', check_exit_code=False)
@@ -129,24 +183,70 @@ class GlusterfsNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
             LOG.error(msg)
             raise
 
-        # Update gluster_unused_vols_dict, gluster_used_vols_dict by walking
-        # through the DB.
+        # Update gluster_used_vols_dict by walking through the DB.
         self._update_gluster_vols_dict(context)
-        if len(self.gluster_unused_vols_dict) == 0:
+        unused_vols = gluster_volumes_initial - set(
+            self.gluster_used_vols_dict)
+        if not unused_vols:
             # No volumes available for use as share. Warn user.
             msg = (_("No unused gluster volumes available for use as share! "
                      "Create share won't be supported unless existing shares "
-                     "are deleted or add one or more gluster volumes to work "
-                     "with in the glusterfs_targets configuration parameter."))
+                     "are deleted or some gluster volumes are created with "
+                     "names matching 'glusterfs_volume_pattern'."))
             LOG.warn(msg)
         else:
             LOG.info(_LI("Number of gluster volumes in use: "
                          "%(inuse-numvols)s. Number of gluster volumes "
                          "available for use as share: %(unused-numvols)s"),
                      {'inuse-numvols': len(self.gluster_used_vols_dict),
-                      'unused-numvols': len(self.gluster_unused_vols_dict)})
+                      'unused-numvols': len(unused_vols)})
 
-        self._setup_gluster_vols()
+    def _glustermanager(self, gluster_address, has_volume=True):
+        """Create GlusterManager object for gluster_address."""
+
+        return glusterfs.GlusterManager(
+            gluster_address, self._execute,
+            self.configuration.glusterfs_native_path_to_private_key,
+            self.configuration.glusterfs_native_server_password,
+            has_volume=has_volume)
+
+    def _fetch_gluster_volumes(self):
+        """Do a 'gluster volume list | grep <volume pattern>'.
+
+        Aggregate the results from all servers.
+        Extract the named groups from the matching volume names
+        using the specs given in PATTERN_DICT.
+        Return a dict with keys of the form <server>:/<volname>
+        and values being dicts that map names of named groups
+        to their extracted value.
+        """
+
+        volumes_dict = {}
+        for gsrv, gluster_mgr in six.iteritems(self.glusterfs_servers):
+            try:
+                out, err = gluster_mgr.gluster_call('volume', 'list')
+            except exception.ProcessExecutionError as exc:
+                msgdict = {'err': exc.stderr, 'hostinfo': ''}
+                if gluster_mgr.remote_user:
+                    msgdict['hostinfo'] = ' on host %s' % gluster_mgr.host
+                LOG.error(_LE("Error retrieving volume list%(hostinfo)s: "
+                              "%(err)s") % msgdict)
+                raise exception.GlusterfsException(
+                    _('gluster volume list failed'))
+            for volname in out.split("\n"):
+                patmatch = self.volume_pattern.match(volname)
+                if not patmatch:
+                    continue
+                pattern_dict = {}
+                for key in self.volume_pattern_keys:
+                    keymatch = patmatch.group(key)
+                    if keymatch is None:
+                        pattern_dict[key] = None
+                    else:
+                        trans = PATTERN_DICT[key].get('trans', lambda x: x)
+                        pattern_dict[key] = trans(keymatch)
+                volumes_dict[gsrv + ':/' + volname] = pattern_dict
+        return volumes_dict
 
     @utils.synchronized("glusterfs_native", external=False)
     def _update_gluster_vols_dict(self, context):
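The matching-and-extraction loop of `_fetch_gluster_volumes` can be sketched standalone, with a hypothetical server name and canned `gluster volume list` output instead of the real `gluster_call`:

```python
import re

# Same shape as the driver's PATTERN_DICT: per-placeholder regex + 'trans'.
PATTERN_DICT = {'size': {'pattern': r'(?P<size>\d+)', 'trans': int}}
# A compiled pattern as produced from 'manila-share-\d+-#{size}G$'.
volume_pattern = re.compile(r'manila-share-\d+-(?P<size>\d+)G$')


def collect(server, listing):
    """Keep only volumes matching the pattern; apply each group's 'trans'."""
    volumes_dict = {}
    for volname in listing.split('\n'):
        m = volume_pattern.match(volname)
        if not m:
            continue
        volumes_dict[server + ':/' + volname] = {
            key: PATTERN_DICT[key].get('trans', lambda x: x)(m.group(key))
            for key in volume_pattern.groupindex}
    return volumes_dict


# 'share1' does not match the naming convention and is filtered out.
vols = collect('root@host1', 'manila-share-1-1G\nshare1')
```

This mirrors the test fixture later in the commit, where `'manila-share-1-1G\nshare1'` yields `{'root@host1:/manila-share-1-1G': {'size': 1}}`.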
@@ -154,82 +254,42 @@ class GlusterfsNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
 
         shares = self.db.share_get_all(context)
 
-        # Store the gluster volumes in dict thats helpful to track
-        # (push and pop) in future. {gluster_export: gluster_mgr, ...}
-        # gluster_export is of form hostname:/volname which is unique
-        # enough to be used as a key.
-        self.gluster_unused_vols_dict = {}
-        self.gluster_used_vols_dict = {}
+        for s in shares:
+            if (s['status'].lower() == 'available'):
+                vol = s['export_location']
+                gluster_mgr = self._glustermanager(vol)
+                self.gluster_used_vols_dict[vol] = gluster_mgr
 
-        for gv in self.configuration.glusterfs_targets:
-            gmgr = glusterfs.GlusterManager(
-                gv, self._execute,
-                self.configuration.glusterfs_native_path_to_private_key,
-                self.configuration.glusterfs_native_server_password)
-            exp_locn_gv = gmgr.export
-
-            # Assume its unused to begin with.
-            self.gluster_unused_vols_dict.update({exp_locn_gv: gmgr})
-
-            for s in shares:
-                exp_locn_share = s.get('export_location', None)
-                if exp_locn_share == exp_locn_gv:
-                    # gluster volume is in use, move it to used list.
-                    self.gluster_used_vols_dict.update({exp_locn_gv: gmgr})
-                    self.gluster_unused_vols_dict.pop(exp_locn_gv)
-                    break
-
-    @utils.synchronized("glusterfs_native", external=False)
-    def _setup_gluster_vols(self):
+    def _setup_gluster_vol(self, vol):
         # Enable gluster volumes for SSL access only.
 
-        for gluster_mgr in six.itervalues(self.gluster_unused_vols_dict):
+        gluster_mgr = self._glustermanager(vol)
 
+        for option, value in six.iteritems(
+            {NFS_EXPORT_VOL: 'off', CLIENT_SSL: 'on', SERVER_SSL: 'on'}
+        ):
             try:
                 gluster_mgr.gluster_call(
                     'volume', 'set', gluster_mgr.volume,
-                    NFS_EXPORT_VOL, 'off')
+                    option, value)
             except exception.ProcessExecutionError as exc:
                 msg = (_("Error in gluster volume set during volume setup. "
-                         "Volume: %(volname)s, Option: %(option)s, "
-                         "Error: %(error)s"),
+                         "volume: %(volname)s, option: %(option)s, "
+                         "value: %(value)s, error: %(error)s") %
                        {'volname': gluster_mgr.volume,
-                        'option': NFS_EXPORT_VOL, 'error': exc.stderr})
+                        'option': option, 'value': value,
+                        'error': exc.stderr})
                 LOG.error(msg)
                 raise exception.GlusterfsException(msg)
 
-            try:
-                gluster_mgr.gluster_call(
-                    'volume', 'set', gluster_mgr.volume,
-                    CLIENT_SSL, 'on')
-            except exception.ProcessExecutionError as exc:
-                msg = (_("Error in gluster volume set during volume setup. "
-                         "Volume: %(volname)s, Option: %(option)s, "
-                         "Error: %(error)s"),
-                       {'volname': gluster_mgr.volume,
-                        'option': CLIENT_SSL, 'error': exc.stderr})
-                LOG.error(msg)
-                raise exception.GlusterfsException(msg)
+        # TODO(deepakcs) Remove this once ssl options can be
+        # set dynamically.
+        self._restart_gluster_vol(gluster_mgr)
+        return gluster_mgr
 
-            try:
-                gluster_mgr.gluster_call(
-                    'volume', 'set', gluster_mgr.volume,
-                    SERVER_SSL, 'on')
-            except exception.ProcessExecutionError as exc:
-                msg = (_("Error in gluster volume set during volume setup. "
-                         "Volume: %(volname)s, Option: %(option)s, "
-                         "Error: %(error)s"),
-                       {'volname': gluster_mgr.volume,
-                        'option': SERVER_SSL, 'error': exc.stderr})
-                LOG.error(msg)
-                raise exception.GlusterfsException(msg)
-
-            # TODO(deepakcs) Remove this once ssl options can be
-            # set dynamically.
-            self._restart_gluster_vol(gluster_mgr)
-
-    def _restart_gluster_vol(self, gluster_mgr):
+    @staticmethod
+    def _restart_gluster_vol(gluster_mgr):
         try:
+            # TODO(csaba): eradicate '--mode=script' as it's unnecessary.
             gluster_mgr.gluster_call(
                 'volume', 'stop', gluster_mgr.volume, '--mode=script')
         except exception.ProcessExecutionError as exc:
@@ -250,28 +310,95 @@ class GlusterfsNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
             raise exception.GlusterfsException(msg)
 
     @utils.synchronized("glusterfs_native", external=False)
-    def _pop_gluster_vol(self):
-        try:
-            exp_locn, gmgr = self.gluster_unused_vols_dict.popitem()
-        except KeyError:
+    def _pop_gluster_vol(self, size=None):
+        """Pick an unbound volume.
+
+        Do a _fetch_gluster_volumes() first to get the complete
+        list of usable volumes.
+        Keep only the unbound ones (ones that are not yet used to
+        back a share).
+        If size is given, try to pick one which has a size specification
+        (according to the 'size' named group of the volume pattern),
+        and its size is greater-than-or-equal to the given size.
+        Return the volume chosen (in <host>:/<volname> format).
+        """
+
+        voldict = self._fetch_gluster_volumes()
+        # calculate the set of unused volumes
+        set1, set2 = (
+            set(d) for d in (voldict, self.gluster_used_vols_dict)
+        )
+        unused_vols = set1 - set2
+        # volmap is the data structure used to categorize and sort
+        # the unused volumes. It's a nested dictionary of structure
+        # {<size>: <hostmap>}
+        # where <size> is either an integer or None,
+        # <hostmap> is a dictionary of structure {<host>: <vols>}
+        # where <host> is a host name (IP address), <vols> is a list
+        # of volumes (gluster addresses).
+        volmap = {None: {}}
+        # if both caller has specified size and 'size' occurs as
+        # a parameter in the volume pattern...
+        if size and 'size' in self.volume_pattern_keys:
+            # then this function is used to extract the
+            # size value for a given volume from the voldict...
+            get_volsize = lambda vol: voldict[vol]['size']
+        else:
+            # else just use a stub.
+            get_volsize = lambda vol: None
+        for vol in unused_vols:
+            # For each unused volume, we extract the <size>
+            # and <host> values with which it can be inserted
+            # into the volmap, and conditionally perform
+            # the insertion (with the condition being: once
+            # caller specified size and a size indication was
+            # found in the volume name, we require that the
+            # indicated size adheres to caller's spec).
+            volsize = get_volsize(vol)
+            if not volsize or volsize >= size:
+                hostmap = volmap.get(volsize)
+                if not hostmap:
+                    hostmap = {}
+                    volmap[volsize] = hostmap
+                host = self._glustermanager(vol).host
+                hostvols = hostmap.get(host)
+                if not hostvols:
+                    hostvols = []
+                    hostmap[host] = hostvols
+                hostvols.append(vol)
+        if len(volmap) > 1:
+            # volmap has keys apart from the default None,
+            # ie. volumes with sensible and adherent size
+            # indication have been found. Then pick the smallest
+            # of the size values.
+            chosen_size = sorted(n for n in volmap.keys() if n)[0]
+        else:
+            chosen_size = None
+        chosen_hostmap = volmap[chosen_size]
+        if not chosen_hostmap:
             msg = (_("Couldn't find a free gluster volume to use."))
             LOG.error(msg)
             raise exception.GlusterfsException(msg)
 
-        self.gluster_used_vols_dict.update({exp_locn: gmgr})
-        return exp_locn
+        # From the hosts we choose randomly to tend towards
+        # even distribution of share backing volumes among
+        # Gluster clusters.
+        chosen_host = random.choice(list(chosen_hostmap.keys()))
+        # Within a host's volumes, choose alphabetically first,
+        # to make it predictable.
+        vol = sorted(chosen_hostmap[chosen_host])[0]
+        self.gluster_used_vols_dict[vol] = self._setup_gluster_vol(vol)
+        return vol
 
     @utils.synchronized("glusterfs_native", external=False)
     def _push_gluster_vol(self, exp_locn):
         try:
-            gmgr = self.gluster_used_vols_dict.pop(exp_locn)
+            self.gluster_used_vols_dict.pop(exp_locn)
         except KeyError:
             msg = (_("Couldn't find the share in used list."))
             LOG.error(msg)
             raise exception.GlusterfsException(msg)
 
-        self.gluster_unused_vols_dict.update({exp_locn: gmgr})
-
     def _do_mount(self, gluster_export, mntdir):
 
         cmd = ['mount', '-t', 'glusterfs', gluster_export, mntdir]
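The volmap-based selection in `_pop_gluster_vol` can be condensed into a standalone sketch (hypothetical volume names; `setdefault` replaces the driver's explicit get-or-create branches, and the single-host cases here make the random host choice deterministic):

```python
import random

# Unused volumes with their parsed size hints (None = no size in the name).
unused = {'host1:/manila-a-1G': 1, 'host1:/manila-b-3G': 3,
          'host2:/manila-c-2G': 2, 'host2:/manila-d': None}


def pick(requested_size):
    # Categorize by size, keeping volumes at least as big as requested
    # (and those carrying no size indication, under the None bucket).
    volmap = {None: {}}
    for vol, size in unused.items():
        if not size or size >= requested_size:
            host = vol.split(':/')[0]
            volmap.setdefault(size, {}).setdefault(host, []).append(vol)
    # Smallest adequate size wins; fall back to unsized volumes.
    sized = sorted(s for s in volmap if s)
    chosen_size = sized[0] if sized else None
    hostmap = volmap[chosen_size]
    # Random host for even spread, alphabetically first volume within it.
    host = random.choice(sorted(hostmap))
    return sorted(hostmap[host])[0]


vol = pick(2)  # smallest size hint >= 2 is the 2G volume on host2
```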
@@ -390,7 +517,7 @@ class GlusterfsNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
         GlusterFS volume for use as a share.
         """
         try:
-            export_location = self._pop_gluster_vol()
+            export_location = self._pop_gluster_vol(share['size'])
         except exception.GlusterfsException:
             msg = (_("Error creating share %(share_id)s"),
                    {'share_id': share['id']})
@@ -17,6 +17,7 @@ import copy
 import errno
 import os
 
+import ddt
 import mock
 from oslo_config import cfg
 
@@ -53,6 +54,7 @@ NFS_EXPORT_DIR = 'nfs.export-dir'
 NFS_EXPORT_VOL = 'nfs.export-volumes'
 
 
+@ddt.ddt
 class GlusterManagerTestCase(test.TestCase):
     """Tests GlusterManager."""
 
@@ -84,6 +86,30 @@ class GlusterManagerTestCase(test.TestCase):
         self.assertEqual(self.fake_executor,
                          self._gluster_manager.gluster_call)
 
+    @ddt.data(None, True)
+    def test_gluster_manager_init_has_vol(self, has_volume):
+        test_gluster_manager = glusterfs.GlusterManager(
+            'testuser@127.0.0.1:/testvol', self.fake_execf,
+            has_volume=has_volume)
+        self.assertEqual('testvol', test_gluster_manager.volume)
+
+    @ddt.data(None, False)
+    def test_gluster_manager_init_no_vol(self, has_volume):
+        test_gluster_manager = glusterfs.GlusterManager(
+            'testuser@127.0.0.1', self.fake_execf, has_volume=has_volume)
+        self.assertEqual(None, test_gluster_manager.volume)
+
+    def test_gluster_manager_init_has_shouldnt_have_vol(self):
+        self.assertRaises(exception.GlusterfsException,
+                          glusterfs.GlusterManager,
+                          'testuser@127.0.0.1:/testvol',
+                          self.fake_execf, has_volume=False)
+
+    def test_gluster_manager_hasnt_should_have_vol(self):
+        self.assertRaises(exception.GlusterfsException,
+                          glusterfs.GlusterManager, 'testuser@127.0.0.1',
+                          self.fake_execf, has_volume=True)
+
     def test_gluster_manager_invalid(self):
         self.assertRaises(exception.GlusterfsException,
                           glusterfs.GlusterManager, '127.0.0.1:vol',
@@ -18,10 +18,11 @@
 Test cases for GlusterFS native protocol driver.
 """
 
-
+import re
 import shutil
 import tempfile
 
+import ddt
 import mock
 from oslo_config import cfg
 
@@ -37,30 +38,6 @@ from manila.tests import fake_utils
 CONF = cfg.CONF
 
 
-def fake_db_share1(**kwargs):
-    share = {
-        'id': 'fakeid',
-        'name': 'fakename',
-        'size': 1,
-        'share_proto': 'glusterfs',
-        'export_location': 'host1:/gv1',
-    }
-    share.update(kwargs)
-    return [share]
-
-
-def fake_db_share2(**kwargs):
-    share = {
-        'id': 'fakeid',
-        'name': 'fakename',
-        'size': 1,
-        'share_proto': 'glusterfs',
-        'export_location': 'host2:/gv2',
-    }
-    share.update(kwargs)
-    return [share]
-
-
 def new_share(**kwargs):
     share = {
         'id': 'fakeid',
@@ -88,6 +65,7 @@ class GlusterXMLOut(object):
         return self.template % self.params, ''
 
 
+@ddt.ddt
 class GlusterfsNativeShareDriverTestCase(test.TestCase):
     """Tests GlusterfsNativeShareDriver."""
 
@ -97,180 +75,190 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
|
|||
self._execute = fake_utils.fake_execute
|
||||
self._context = context.get_admin_context()
|
||||
|
||||
self.gluster_target1 = 'root@host1:/gv1'
|
||||
self.gluster_target2 = 'root@host2:/gv2'
|
||||
CONF.set_default('glusterfs_targets',
|
||||
[self.gluster_target1, self.gluster_target2])
|
||||
CONF.set_default('glusterfs_server_password',
|
||||
self.glusterfs_target1 = 'root@host1:/gv1'
|
||||
self.glusterfs_target2 = 'root@host2:/gv2'
|
||||
self.glusterfs_server1 = 'root@host1'
|
||||
self.glusterfs_server2 = 'root@host2'
|
||||
self.glusterfs_server1_volumes = 'manila-share-1-1G\nshare1'
|
||||
self.glusterfs_server2_volumes = 'manila-share-2-2G\nshare2'
|
||||
self.share1 = new_share(
|
||||
export_location=self.glusterfs_target1,
|
||||
status="available")
|
||||
self.share2 = new_share(
|
||||
export_location=self.glusterfs_target2,
|
||||
status="available")
|
||||
gmgr = glusterfs.GlusterManager
|
||||
self.gmgr1 = gmgr(self.glusterfs_server1, self._execute, None, None,
|
||||
has_volume=False)
|
||||
self.gmgr2 = gmgr(self.glusterfs_server2, self._execute, None, None,
|
||||
has_volume=False)
|
||||
self.glusterfs_volumes_dict = (
|
||||
{'root@host1:/manila-share-1-1G': {'size': 1},
|
||||
'root@host2:/manila-share-2-2G': {'size': 2}})
|
||||
|
||||
CONF.set_default('glusterfs_servers',
|
||||
[self.glusterfs_server1, self.glusterfs_server2])
|
||||
CONF.set_default('glusterfs_native_server_password',
|
||||
'fake_password')
|
||||
CONF.set_default('glusterfs_path_to_private_key',
|
||||
CONF.set_default('glusterfs_native_path_to_private_key',
|
||||
'/fakepath/to/privatekey')
|
||||
CONF.set_default('glusterfs_volume_pattern',
|
||||
'manila-share-\d+-#{size}G$')
|
||||
CONF.set_default('driver_handles_share_servers', False)
|
||||
|
||||
self.fake_conf = config.Configuration(None)
|
||||
self._db = mock.Mock()
|
||||
self._driver = glusterfs_native.GlusterfsNativeShareDriver(
|
||||
self._db, execute=self._execute,
|
||||
configuration=self.fake_conf)
|
||||
self.mock_object(tempfile, 'mkdtemp',
|
||||
mock.Mock(return_value='/tmp/tmpKGHKJ'))
|
||||
self.mock_object(glusterfs.GlusterManager, 'make_gluster_call')
|
||||
|
||||
with mock.patch.object(glusterfs_native.GlusterfsNativeShareDriver,
|
||||
'_glustermanager',
|
||||
side_effect=[self.gmgr1, self.gmgr2]):
|
||||
self._driver = glusterfs_native.GlusterfsNativeShareDriver(
|
||||
self._db, execute=self._execute,
|
||||
configuration=self.fake_conf)
|
||||
|
||||
self.addCleanup(fake_utils.fake_execute_set_repliers, [])
|
||||
self.addCleanup(fake_utils.fake_execute_clear_log)
|
||||
|
||||
def test_do_setup(self):
|
||||
self._driver._setup_gluster_vols = mock.Mock()
|
||||
self._db.share_get_all = mock.Mock(return_value=[])
|
||||
expected_exec = ['mount.glusterfs']
|
||||
gmgr = glusterfs.GlusterManager
|
||||
@ddt.data({"test_kwargs": {}, "has_volume": True},
|
||||
{"test_kwargs": {'has_volume': False}, "has_volume": False})
|
||||
@ddt.unpack
|
||||
def test_glustermanager(self, test_kwargs, has_volume):
|
||||
fake_obj = mock.Mock()
|
||||
self.mock_object(glusterfs, 'GlusterManager',
|
||||
mock.Mock(return_value=fake_obj))
|
||||
ret = self._driver._glustermanager(self.glusterfs_target1,
|
||||
**test_kwargs)
|
||||
glusterfs.GlusterManager.assert_called_once_with(
|
||||
self.glusterfs_target1, self._execute,
|
||||
self._driver.configuration.glusterfs_native_path_to_private_key,
|
||||
self._driver.configuration.glusterfs_native_server_password,
|
||||
has_volume=has_volume)
|
||||
self.assertEqual(fake_obj, ret)
|
||||
|
||||
self._driver.do_setup(self._context)
|
||||
def test_compile_volume_pattern(self):
|
||||
volume_pattern = 'manila-share-\d+-(?P<size>\d+)G$'
|
||||
ret = self._driver._compile_volume_pattern()
|
||||
self.assertEqual(re.compile(volume_pattern), ret)
|
||||
|
||||
        self.assertEqual(2, len(self._driver.gluster_unused_vols_dict))
        self.assertTrue(
            gmgr(self.gluster_target1, self._execute, None, None).export in
            self._driver.gluster_unused_vols_dict)
        self.assertTrue(
            gmgr(self.gluster_target2, self._execute, None, None).export in
            self._driver.gluster_unused_vols_dict)
        self.assertTrue(self._driver._setup_gluster_vols.called)
        self.assertTrue(self._db.share_get_all.called)
        self.assertEqual(expected_exec, fake_utils.fake_execute_get_log())
    def test_fetch_gluster_volumes(self):
        test_args = ('volume', 'list')
        self.mock_object(
            self.gmgr1, 'gluster_call',
            mock.Mock(return_value=(self.glusterfs_server1_volumes, '')))
        self.mock_object(
            self.gmgr2, 'gluster_call',
            mock.Mock(return_value=(self.glusterfs_server2_volumes, '')))
        expected_output = self.glusterfs_volumes_dict
        ret = self._driver._fetch_gluster_volumes()
        self.gmgr1.gluster_call.assert_called_once_with(*test_args)
        self.gmgr2.gluster_call.assert_called_once_with(*test_args)
        self.assertEqual(expected_output, ret)

    def test_do_setup_glusterfs_targets_empty(self):
        self._driver.configuration.glusterfs_targets = []
        self.assertRaises(exception.GlusterfsException, self._driver.do_setup,
                          self._context)

    def test_update_gluster_vols_dict(self):
        self._db.share_get_all = mock.Mock(return_value=fake_db_share1())

        self._driver._update_gluster_vols_dict(self._context)

        self.assertEqual(1, len(self._driver.gluster_used_vols_dict))
        self.assertEqual(1, len(self._driver.gluster_unused_vols_dict))
        self.assertTrue(self._db.share_get_all.called)

        share_in_use = fake_db_share1()[0]
        share_not_in_use = fake_db_share2()[0]

        self.assertTrue(
            share_in_use['export_location'] in
            self._driver.gluster_used_vols_dict)
        self.assertFalse(
            share_not_in_use['export_location'] in
            self._driver.gluster_used_vols_dict)
        self.assertTrue(
            share_not_in_use['export_location'] in
            self._driver.gluster_unused_vols_dict)
        self.assertFalse(
            share_in_use['export_location'] in
            self._driver.gluster_unused_vols_dict)

    def test_setup_gluster_vols(self):
        test_args = [
            ('volume', 'set', 'gv2', 'nfs.export-volumes', 'off'),
            ('volume', 'set', 'gv2', 'client.ssl', 'on'),
            ('volume', 'set', 'gv2', 'server.ssl', 'on')]
        self._driver._restart_gluster_vol = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}

        self._driver._setup_gluster_vols()
        gmgr2.gluster_call.has_calls(
            mock.call(*test_args[0]),
            mock.call(*test_args[1]),
            mock.call(*test_args[2]))
        self.assertTrue(self._driver._restart_gluster_vol.called)

    def test_setup_gluster_vols_excp1(self):
        test_args = ('volume', 'set', 'gv2', 'nfs.export-volumes', 'off')
    def test_fetch_gluster_volumes_error(self):
        test_args = ('volume', 'list')

        def raise_exception(*args, **kwargs):
            if(args == test_args):
                raise exception.ProcessExecutionError()

        self._driver.glusterfs_servers = {self.glusterfs_server1: self.gmgr1}
        self.mock_object(self.gmgr1, 'gluster_call',
                         mock.Mock(side_effect=raise_exception))
        self.mock_object(glusterfs_native.LOG, 'error')
        self.assertRaises(exception.GlusterfsException,
                          self._driver._fetch_gluster_volumes)
        self.gmgr1.gluster_call.assert_called_once_with(*test_args)
        self.assertTrue(glusterfs_native.LOG.error.called)

    def test_do_setup(self):
        self.mock_object(self._driver, '_fetch_gluster_volumes',
                         mock.Mock(return_value=self.glusterfs_volumes_dict))
        self.mock_object(self._driver, '_update_gluster_vols_dict')
        self._driver.gluster_used_vols_dict = self.glusterfs_volumes_dict
        self.mock_object(glusterfs_native.LOG, 'warn')
        expected_exec = ['mount.glusterfs']

        self._driver.do_setup(self._context)

        self._driver._fetch_gluster_volumes.assert_called_once_with()
        self.assertEqual(expected_exec, fake_utils.fake_execute_get_log())
        self._driver._update_gluster_vols_dict.assert_called_once_with(
            self._context)
        glusterfs_native.LOG.warn.assert_called_once_with(mock.ANY)

    def test_do_setup_glusterfs_no_volumes_provided_by_backend(self):
        self.mock_object(self._driver, '_fetch_gluster_volumes',
                         mock.Mock(return_value={}))
        self.assertRaises(exception.GlusterfsException,
                          self._driver.do_setup, self._context)
        self._driver._fetch_gluster_volumes.assert_called_once_with()

    def test_update_gluster_vols_dict(self):
        share_in_error = new_share(status="error")
        self._db.share_get_all = mock.Mock(
            return_value=[self.share1, share_in_error])

        self._driver._update_gluster_vols_dict(self._context)

        self.assertEqual(1, len(self._driver.gluster_used_vols_dict))
        self._db.share_get_all.assert_called_once_with(self._context)

        share_in_use = self.share1

        self.assertTrue(
            share_in_use['export_location'] in
            self._driver.gluster_used_vols_dict)

    def test_setup_gluster_vol(self):
        test_args = [
            ('volume', 'set', 'gv1', 'nfs.export-volumes', 'off'),
            ('volume', 'set', 'gv1', 'client.ssl', 'on'),
            ('volume', 'set', 'gv1', 'server.ssl', 'on')]
        self._driver._restart_gluster_vol = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        self._driver._glustermanager = mock.Mock(return_value=gmgr1)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
        self.mock_object(gmgr2, 'gluster_call',
                         mock.Mock(side_effect=raise_exception))
        ret = self._driver._setup_gluster_vol(gmgr1.volume)
        gmgr1.gluster_call.has_calls(
            mock.call(*test_args[0]),
            mock.call(*test_args[1]),
            mock.call(*test_args[2]))
        self.assertEqual(gmgr1, ret)
        self.assertTrue(self._driver._restart_gluster_vol.called)

        self.assertRaises(exception.GlusterfsException,
                          self._driver._setup_gluster_vols)
        gmgr2.gluster_call.assert_called_once_with(*test_args)
        self.assertFalse(self._driver._restart_gluster_vol.called)

    def test_setup_gluster_vols_excp2(self):
    @ddt.data(0, 1, 2)
    def test_setup_gluster_vols_excp(self, idx):
        test_args = [
            ('volume', 'set', 'gv2', 'nfs.export-volumes', 'off'),
            ('volume', 'set', 'gv2', 'client.ssl', 'on'),
            ('volume', 'set', 'gv2', 'server.ssl', 'off')]
            ('volume', 'set', 'gv1', 'nfs.export-volumes', 'off'),
            ('volume', 'set', 'gv1', 'client.ssl', 'on'),
            ('volume', 'set', 'gv1', 'server.ssl', 'on')]

        def raise_exception(*args, **kwargs):
            if(args == test_args[1]):
            if(args == test_args[idx]):
                raise exception.ProcessExecutionError()

        self._driver._restart_gluster_vol = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
        self.mock_object(gmgr2, 'gluster_call',
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        self._driver._glustermanager = mock.Mock(return_value=gmgr1)
        self.mock_object(gmgr1, 'gluster_call',
                         mock.Mock(side_effect=raise_exception))

        self.assertRaises(exception.GlusterfsException,
                          self._driver._setup_gluster_vols)
        self.assertEqual(
            [mock.call(*test_args[0]), mock.call(*test_args[1])],
            gmgr2.gluster_call.call_args_list)
        self.assertFalse(self._driver._restart_gluster_vol.called)

    def test_setup_gluster_vols_excp3(self):
        test_args = [
            ('volume', 'set', 'gv2', 'nfs.export-volumes', 'off'),
            ('volume', 'set', 'gv2', 'client.ssl', 'on'),
            ('volume', 'set', 'gv2', 'server.ssl', 'on')]

        def raise_exception(*args, **kwargs):
            if(args == test_args[2]):
                raise exception.ProcessExecutionError()

        self._driver._restart_gluster_vol = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
        self.mock_object(gmgr2, 'gluster_call',
                         mock.Mock(side_effect=raise_exception))

        self.assertRaises(exception.GlusterfsException,
                          self._driver._setup_gluster_vols)
        self.assertEqual(
            [mock.call(*test_args[0]), mock.call(*test_args[1]),
             mock.call(*test_args[2])],
            gmgr2.gluster_call.call_args_list)
                          self._driver._setup_gluster_vol, gmgr1.volume)
        self.assertTrue(
            mock.call(*test_args[idx]) in gmgr1.gluster_call.call_args_list)
        self.assertFalse(self._driver._restart_gluster_vol.called)

    def test_restart_gluster_vol(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        test_args = [('volume', 'stop', 'gv1', '--mode=script'),
                     ('volume', 'start', 'gv1')]

@@ -281,7 +269,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):

    def test_restart_gluster_vol_excp1(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        test_args = ('volume', 'stop', 'gv1', '--mode=script')

        def raise_exception(*args, **kwargs):

@@ -299,7 +287,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):

    def test_restart_gluster_vol_excp2(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        test_args = [('volume', 'stop', 'gv1', '--mode=script'),
                     ('volume', 'start', 'gv1')]

@@ -316,68 +304,90 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
            [mock.call(*test_args[0]), mock.call(*test_args[1])],
            gmgr1.gluster_call.call_args_list)

    def test_pop_gluster_vol(self):
    @ddt.data({"voldict": {"host:/share2G": {"size": 2}}, "used_vols": {},
               "size": 1, "expected": "host:/share2G"},
              {"voldict": {"host:/share2G": {"size": 2}}, "used_vols": {},
               "size": 2, "expected": "host:/share2G"},
              {"voldict": {"host:/share2G": {"size": 2}}, "used_vols": {},
               "size": None, "expected": "host:/share2G"},
              {"voldict": {"host:/share2G": {"size": 2},
                           "host:/share": {"size": None}},
               "used_vols": {"host:/share2G": "fake_mgr"}, "size": 1,
               "expected": "host:/share"},
              {"voldict": {"host:/share2G": {"size": 2},
                           "host:/share": {"size": None}},
               "used_vols": {"host:/share2G": "fake_mgr"}, "size": 2,
               "expected": "host:/share"},
              {"voldict": {"host:/share2G": {"size": 2},
                           "host:/share": {"size": None}},
               "used_vols": {"host:/share2G": "fake_mgr"}, "size": 3,
               "expected": "host:/share"},
              {"voldict": {"host:/share2G": {"size": 2},
                           "host:/share": {"size": None}},
               "used_vols": {"host:/share2G": "fake_mgr"}, "size": None,
               "expected": "host:/share"},
              {"voldict": {"host:/share": {}}, "used_vols": {}, "size": 1,
               "expected": "host:/share"},
              {"voldict": {"host:/share": {}}, "used_vols": {}, "size": None,
               "expected": "host:/share"})
    @ddt.unpack
    def test_pop_gluster_vol(self, voldict, used_vols, size, expected):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)
        gmgr1 = gmgr(expected, self._execute, None, None)
        self._driver._fetch_gluster_volumes = mock.Mock(return_value=voldict)
        self._driver.gluster_used_vols_dict = used_vols
        self._driver._setup_gluster_vol = mock.Mock(return_value=gmgr1)
        self._driver.volume_pattern_keys = voldict.values()[0].keys()

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
        result = self._driver._pop_gluster_vol(size=size)

        exp_locn = self._driver._pop_gluster_vol()
        self.assertEqual(expected, result)
        self.assertEqual(used_vols[expected].export, result)
        self._driver._setup_gluster_vol.assert_called_once_with(result)

        self.assertEqual(0, len(self._driver.gluster_unused_vols_dict))
        self.assertFalse(
            gmgr2.export in self._driver.gluster_unused_vols_dict)
        self.assertEqual(2, len(self._driver.gluster_used_vols_dict))
        self.assertTrue(
            gmgr2.export in self._driver.gluster_used_vols_dict)
        self.assertEqual(exp_locn, gmgr2.export)

    def test_pop_gluster_vol_excp(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {
            gmgr2.export: gmgr2, gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {}
    @ddt.data({"voldict": {"share2G": {"size": 2}},
               "used_vols": {}, "size": 3},
              {"voldict": {"share2G": {"size": 2}},
               "used_vols": {"share2G": "fake_mgr"}, "size": None})
    @ddt.unpack
    def test_pop_gluster_vol_excp(self, voldict, used_vols, size):
        self._driver._fetch_gluster_volumes = mock.Mock(return_value=voldict)
        self._driver.gluster_used_vols_dict = used_vols
        self._driver.volume_pattern_keys = voldict.values()[0].keys()
        self._driver._setup_gluster_vol = mock.Mock()

        self.assertRaises(exception.GlusterfsException,
                          self._driver._pop_gluster_vol)
                          self._driver._pop_gluster_vol, size=size)
        self.assertFalse(self._driver._setup_gluster_vol.called)

    def test_push_gluster_vol(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        gmgr2 = gmgr(self.glusterfs_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {
            gmgr1.export: gmgr1, gmgr2.export: gmgr2}
        self._driver.gluster_unused_vols_dict = {}
            self.glusterfs_target1: gmgr1, self.glusterfs_target2: gmgr2}

        self._driver._push_gluster_vol(gmgr2.export)
        self._driver._push_gluster_vol(self.glusterfs_target2)

        self.assertEqual(1, len(self._driver.gluster_unused_vols_dict))
        self.assertTrue(
            gmgr2.export in self._driver.gluster_unused_vols_dict)
        self.assertEqual(1, len(self._driver.gluster_used_vols_dict))
        self.assertFalse(
            gmgr2.export in self._driver.gluster_used_vols_dict)
            self.glusterfs_target2 in self._driver.gluster_used_vols_dict)

    def test_push_gluster_vol_excp(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
        self._driver.gluster_unused_vols_dict = {}

        self.assertRaises(exception.GlusterfsException,
                          self._driver._push_gluster_vol, gmgr2.export)
                          self._driver._push_gluster_vol,
                          self.glusterfs_target2)

    def test_do_mount(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        tmpdir = '/tmp/tmpKGHKJ'
        expected_exec = ['mount -t glusterfs host1:/gv1 /tmp/tmpKGHKJ']

@@ -390,7 +400,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
            raise exception.ProcessExecutionError

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        tmpdir = '/tmp/tmpKGHKJ'
        expected_exec = ['mount -t glusterfs host1:/gv1 /tmp/tmpKGHKJ']

@@ -431,7 +441,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
            ('volume', 'set', 'gv1', 'server.ssl', 'on')]

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)

        expected_exec = ['find /tmp/tmpKGHKJ -mindepth 1 -delete']

@@ -460,7 +470,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
            raise exception.ProcessExecutionError()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        self.mock_object(gmgr1, 'gluster_call',
                         mock.Mock(side_effect=raise_exception))

@@ -488,7 +498,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
            raise exception.ProcessExecutionError()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        self.mock_object(gmgr1, 'gluster_call',
                         mock.Mock(side_effect=raise_exception))

@@ -518,7 +528,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
            raise exception.ProcessExecutionError()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        self.mock_object(gmgr1, 'gluster_call',
                         mock.Mock(side_effect=raise_exception))

@@ -553,7 +563,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
            raise exception.ProcessExecutionError()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        self.mock_object(gmgr1, 'gluster_call',
                         mock.Mock(side_effect=raise_exception))

@@ -579,7 +589,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        shutil.rmtree = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)

        test_args = [
            ('volume', 'set', 'gv1', 'client.ssl', 'off'),

@@ -612,7 +622,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        shutil.rmtree = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)

        test_args = [
            ('volume', 'set', 'gv1', 'client.ssl', 'off'),

@@ -637,7 +647,7 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        shutil.rmtree = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)

        test_args = [
            ('volume', 'set', 'gv1', 'client.ssl', 'off'),

@@ -658,69 +668,44 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        self.assertFalse(shutil.rmtree.called)

    def test_create_share(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
        self._driver._pop_gluster_vol = mock.Mock(
            return_value=self.glusterfs_target1)

        share = new_share()

        exp_locn = self._driver.create_share(self._context, share)

        self.assertEqual(exp_locn, gmgr2.export)
        self.assertEqual(exp_locn, self.glusterfs_target1)
        self._driver._pop_gluster_vol.assert_called_once_with(share['size'])

    def test_create_share_excp(self):
        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {
            gmgr2.export: gmgr2, gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {}
        self._driver._pop_gluster_vol = mock.Mock(
            side_effect=exception.GlusterfsException)

        share = new_share()

        self.assertRaises(exception.GlusterfsException,
                          self._driver.create_share, self._context, share)
        self._driver._pop_gluster_vol.assert_called_once_with(
            share['size'])

    def test_delete_share(self):
        self._driver._push_gluster_vol = mock.Mock()
        self._driver._wipe_gluster_vol = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {
            gmgr2.export: gmgr2, gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {}

        share = fake_db_share2()[0]

        self._driver.delete_share(self._context, share)

        self.assertEqual(1, len(self._driver.gluster_used_vols_dict))
        self.assertEqual(1, len(self._driver.gluster_unused_vols_dict))
        self.assertTrue(self._driver._wipe_gluster_vol.called)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
        self._driver.delete_share(self._context, self.share1)
        self._driver._wipe_gluster_vol.assert_called_once_with(gmgr1)
        self._driver._push_gluster_vol.assert_called_once_with(
            self.glusterfs_target1)

    def test_delete_share_warn(self):
        glusterfs_native.LOG.warn = mock.Mock()
        self._driver._wipe_gluster_vol = mock.Mock()
        self._driver._push_gluster_vol = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {}
        self._driver.gluster_unused_vols_dict = {
            gmgr2.export: gmgr2, gmgr1.export: gmgr1}

        share = fake_db_share2()[0]

        self._driver.delete_share(self._context, share)

        self._driver.delete_share(self._context, self.share1)
        self.assertTrue(glusterfs_native.LOG.warn.called)
        self.assertFalse(self._driver._wipe_gluster_vol.called)
        self.assertFalse(self._driver._push_gluster_vol.called)

@@ -730,33 +715,24 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        self._driver._wipe_gluster_vol.side_effect = (
            exception.GlusterfsException)
        self._driver._push_gluster_vol = mock.Mock()

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {
            gmgr2.export: gmgr2, gmgr1.export: gmgr1}
        self._driver.gluster_unused_vols_dict = {}

        share = fake_db_share2()[0]

        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
        self.assertRaises(exception.GlusterfsException,
                          self._driver.delete_share, self._context, share)

        self.assertTrue(self._driver._wipe_gluster_vol.called)
                          self._driver.delete_share, self._context,
                          self.share1)
        self._driver._wipe_gluster_vol.assert_called_once_with(gmgr1)
        self.assertFalse(self._driver._push_gluster_vol.called)

    def test_create_snapshot(self):
        share = fake_db_share1()[0]
        self._db.share_get = mock.Mock(return_value=share)
        self._db.share_get = mock.Mock(return_value=self.share1)
        self._driver.gluster_nosnap_vols_dict = {}

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(share['export_location'], self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': share['id']}
        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': self.share1['id']}

        args = ('--xml', 'snapshot', 'create', 'fake_snap_id',
                gmgr1.volume)

@@ -767,15 +743,14 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        gmgr1.gluster_call.assert_called_once_with(*args)

    def test_create_snapshot_error(self):
        share = fake_db_share1()[0]
        self._db.share_get = mock.Mock(return_value=share)
        self._db.share_get = mock.Mock(return_value=self.share1)
        self._driver.gluster_nosnap_vols_dict = {}

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(share['export_location'], self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': share['id']}
        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': self.share1['id']}

        args = ('--xml', 'snapshot', 'create', 'fake_snap_id',
                gmgr1.volume)

@@ -787,15 +762,14 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        gmgr1.gluster_call.assert_called_once_with(*args)

    def test_create_snapshot_no_snap(self):
        share = fake_db_share1()[0]
        self._db.share_get = mock.Mock(return_value=share)
        self._db.share_get = mock.Mock(return_value=self.share1)
        self._driver.gluster_nosnap_vols_dict = {}

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(share['export_location'], self._execute, None, None)
        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': share['id']}
        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': self.share1['id']}

        args = ('--xml', 'snapshot', 'create', 'fake_snap_id',
                gmgr1.volume)

@@ -807,27 +781,25 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        gmgr1.gluster_call.assert_called_once_with(*args)

    def test_create_snapshot_no_snap_cached(self):
        share = fake_db_share1()[0]
        self._db.share_get = mock.Mock(return_value=share)
        self._driver.gluster_nosnap_vols_dict = {share['export_location']:
                                                 'fake error'}
        self._db.share_get = mock.Mock(return_value=self.share1)
        self._driver.gluster_nosnap_vols_dict = {
            self.share1['export_location']: 'fake error'}

        snapshot = {'id': 'fake_snap_id', 'share_id': share['id']}
        snapshot = {'id': 'fake_snap_id', 'share_id': self.share1['id']}

        self.assertRaises(exception.ShareSnapshotNotSupported,
                          self._driver.create_snapshot, self._context,
                          snapshot)

    def test_delete_snapshot(self):
        share = fake_db_share1()[0]
        self._db.share_get = mock.Mock(return_value=share)
        self._db.share_get = mock.Mock(return_value=self.share1)
        self._driver.gluster_nosnap_vols_dict = {}

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(share['export_location'], self._execute, None, None)
        gmgr1 = gmgr(self.share1['export_location'], self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': share['id']}
        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': self.share1['id']}

        args = ('--xml', 'snapshot', 'delete', 'fake_snap_id')
        self.mock_object(gmgr1, 'gluster_call',

@@ -837,15 +809,14 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
        gmgr1.gluster_call.assert_called_once_with(*args)

    def test_delete_snapshot_error(self):
        share = fake_db_share1()[0]
        self._db.share_get = mock.Mock(return_value=share)
        self._db.share_get = mock.Mock(return_value=self.share1)
        self._driver.gluster_nosnap_vols_dict = {}

        gmgr = glusterfs.GlusterManager
        gmgr1 = gmgr(share['export_location'], self._execute, None, None)
        gmgr1 = gmgr(self.share1['export_location'], self._execute, None, None)

        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': share['id']}
        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
        snapshot = {'id': 'fake_snap_id', 'share_id': self.share1['id']}

        args = ('--xml', 'snapshot', 'delete', 'fake_snap_id')
        self.mock_object(gmgr1, 'gluster_call',

@@ -859,29 +830,23 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
         self._driver._restart_gluster_vol = mock.Mock()
         access = {'access_type': 'cert', 'access_to': 'client.example.com'}
         gmgr = glusterfs.GlusterManager
-        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
-        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)
+        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
         test_args = ('volume', 'set', 'gv1', 'auth.ssl-allow',
                      access['access_to'])
 
-        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
-        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
+        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
 
-        share = fake_db_share1()[0]
-
-        self._driver.allow_access(self._context, share, access)
+        self._driver.allow_access(self._context, self.share1, access)
         gmgr1.gluster_call.assert_called_once_with(*test_args)
         self.assertTrue(self._driver._restart_gluster_vol.called)
 
     def test_allow_access_invalid_access_type(self):
         self._driver._restart_gluster_vol = mock.Mock()
         access = {'access_type': 'invalid', 'access_to': 'client.example.com'}
-        share = fake_db_share1()[0]
         expected_exec = []
 
         self.assertRaises(exception.InvalidShareAccess,
                           self._driver.allow_access,
-                          self._context, share, access)
+                          self._context, self.share1, access)
         self.assertFalse(self._driver._restart_gluster_vol.called)
         self.assertEqual(expected_exec, fake_utils.fake_execute_get_log())
@@ -897,18 +862,15 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
         self._driver._restart_gluster_vol = mock.Mock()
 
         gmgr = glusterfs.GlusterManager
-        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
-        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)
-        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
-        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
+        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
+        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
 
         self.mock_object(gmgr1, 'gluster_call',
                          mock.Mock(side_effect=raise_exception))
-        share = fake_db_share1()[0]
 
         self.assertRaises(exception.GlusterfsException,
                           self._driver.allow_access,
-                          self._context, share, access)
+                          self._context, self.share1, access)
         gmgr1.gluster_call.assert_called_once_with(*test_args)
         self.assertFalse(self._driver._restart_gluster_vol.called)
@@ -916,30 +878,23 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
         self._driver._restart_gluster_vol = mock.Mock()
         access = {'access_type': 'cert', 'access_to': 'NotApplicable'}
         gmgr = glusterfs.GlusterManager
-        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
-        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)
+        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
         test_args = ('volume', 'reset', 'gv1', 'auth.ssl-allow')
 
-        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
-        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
+        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
 
-        share = fake_db_share1()[0]
-
-        self._driver.deny_access(self._context, share, access)
+        self._driver.deny_access(self._context, self.share1, access)
         gmgr1.gluster_call.assert_called_once_with(*test_args)
         self.assertTrue(self._driver._restart_gluster_vol.called)
         self._driver._restart_gluster_vol.assert_called_once_with(gmgr1)
 
     def test_deny_access_invalid_access_type(self):
         self._driver._restart_gluster_vol = mock.Mock()
         access = {'access_type': 'invalid', 'access_to': 'NotApplicable'}
-        share = fake_db_share1()[0]
         expected_exec = []
 
         self.assertRaises(exception.InvalidShareAccess,
                           self._driver.deny_access,
-                          self._context, share, access)
+                          self._context, self.share1, access)
         self.assertFalse(self._driver._restart_gluster_vol.called)
         self.assertEqual(expected_exec, fake_utils.fake_execute_get_log())
 
     def test_deny_access_excp(self):
         access = {'access_type': 'cert', 'access_to': 'NotApplicable'}
@@ -952,19 +907,15 @@ class GlusterfsNativeShareDriverTestCase(test.TestCase):
         self._driver._restart_gluster_vol = mock.Mock()
 
         gmgr = glusterfs.GlusterManager
-        gmgr1 = gmgr(self.gluster_target1, self._execute, None, None)
-        gmgr2 = gmgr(self.gluster_target2, self._execute, None, None)
-        self._driver.gluster_used_vols_dict = {gmgr1.export: gmgr1}
-        self._driver.gluster_unused_vols_dict = {gmgr2.export: gmgr2}
+        gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None)
+        self._driver.gluster_used_vols_dict = {self.glusterfs_target1: gmgr1}
 
         self.mock_object(gmgr1, 'gluster_call',
                          mock.Mock(side_effect=raise_exception))
 
-        share = fake_db_share1()[0]
-
         self.assertRaises(exception.GlusterfsException,
                           self._driver.deny_access,
-                          self._context, share, access)
+                          self._context, self.share1, access)
         gmgr1.gluster_call.assert_called_once_with(*test_args)
         self.assertFalse(self._driver._restart_gluster_vol.called)