Refactor and rename CephFSNativeDriver

Refactor CephFSNativeDriver into a driver class that uses protocol
helper classes. The helper classes handle protocol-specific driver
actions, such as controlling access and fetching a share's export
locations. For now, the driver uses a single protocol helper that
supports CephFS's native protocol; other NAS protocols can be
supported later by adding further protocol helper classes.

Since the driver will no longer be limited to the native protocol,
rename its module and driver class to `driver` and `CephFSDriver`,
respectively. The driver still supports the native protocol by
default, and it can still be referred to by its previous module and
class names.
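
In outline, the driver now resolves a protocol helper during do_setup()
and delegates protocol-specific work to it. A condensed sketch of the
pattern as introduced by this change (unrelated methods omitted; see the
full diff below):

    import sys

    class CephFSDriver(driver.ShareDriver):

        def do_setup(self, context):
            # Only the native protocol helper exists so far; an NFS helper
            # can be added later without changing the driver core.
            helper_class = getattr(sys.modules[__name__],
                                   'NativeProtocolHelper')
            self.protocol_helper = helper_class(
                None, self.configuration, volume_client=self.volume_client)
            self.protocol_helper.init_helper()

        def update_access(self, context, share, access_rules, add_rules,
                          delete_rules, share_server=None):
            # Access rules are protocol-specific, so hand them to the helper.
            return self.protocol_helper.update_access(
                context, share, access_rules, add_rules, delete_rules,
                share_server=share_server)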

DocImpact

Partially-implements: blueprint cephfs-nfs-support

Change-Id: I8a33be1df4864131435b794e791cc2d651fbe741
Ramana Raja 2017-01-12 17:02:26 +05:30
parent 4e25ce5a0c
commit 23075e6c0b
8 changed files with 411 additions and 304 deletions


@@ -14,10 +14,10 @@
License for the specific language governing permissions and limitations
under the License.
CephFS Native driver
====================
CephFS driver
=============
The CephFS Native driver enables manila to export shared filesystems to guests
The CephFS driver enables manila to export shared filesystems to guests
using the Ceph network protocol. Guests require a Ceph client in order to
mount the filesystem.
@@ -140,7 +140,7 @@ Create a section like this to define a CephFS backend:
[cephfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFS1
share_driver = manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
@@ -155,6 +155,12 @@ using the section name. In this example we are also including another backend
("generic1"), you would include whatever other backends you have configured.
.. note::
For Mitaka, Newton, and Ocata releases, the ``share_driver`` path
was ``manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver``.
.. code-block:: ini
enabled_share_backends = generic1, cephfs1
@@ -312,10 +318,10 @@ Security
public network.
The :mod:`manila.share.drivers.cephfs.cephfs_native` Module
The :mod:`manila.share.drivers.cephfs.driver` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: manila.share.drivers.cephfs.cephfs_native
.. automodule:: manila.share.drivers.cephfs.driver
:noindex:
:members:
:undoc-members:


@@ -115,7 +115,7 @@ Share backends
generic_driver
glusterfs_driver
glusterfs_native_driver
cephfs_native_driver
cephfs_driver
gpfs_driver
huawei_nas_driver
hdfs_native_driver


@@ -73,7 +73,7 @@ Mapping of share drivers and share features support
+----------------------------------------+-----------------------+-----------------------+--------------+--------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+
| Oracle ZFSSA | K | N | M | M | K | K | \- | \- | \- |
+----------------------------------------+-----------------------+-----------------------+--------------+--------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+
| CephFS Native | M | \- | M | M | M | \- | \- | \- | \- |
| CephFS | M | \- | M | M | M | \- | \- | \- | \- |
+----------------------------------------+-----------------------+-----------------------+--------------+--------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+
| Tegile | M | \- | M | M | M | M | \- | \- | \- |
+----------------------------------------+-----------------------+-----------------------+--------------+--------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+
@@ -134,7 +134,7 @@ Mapping of share drivers and share access rules support
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Oracle ZFSSA | NFS,CIFS(K) | \- | \- | \- | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| CephFS Native | \- | \- | \- | CEPHFS (M) | \- | \- | \- | CEPHFS (N) |
| CephFS | \- | \- | \- | CEPHFS (M) | \- | \- | \- | CEPHFS (N) |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Tegile | NFS (M) |NFS (M),CIFS (M)| \- | \- | NFS (M) |NFS (M),CIFS (M)| \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
@@ -193,7 +193,7 @@ Mapping of share drivers and security services support
+----------------------------------------+------------------+-----------------+------------------+
| Oracle ZFSSA | \- | \- | \- |
+----------------------------------------+------------------+-----------------+------------------+
| CephFS Native | \- | \- | \- |
| CephFS | \- | \- | \- |
+----------------------------------------+------------------+-----------------+------------------+
| Tegile | \- | \- | \- |
+----------------------------------------+------------------+-----------------+------------------+
@@ -254,7 +254,7 @@ More information: :ref:`capabilities_and_extra_specs`
+----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+
| Oracle ZFSSA | \- | K | \- | \- | \- | L | \- | K | \- | \- |
+----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+
| CephFS Native | \- | M | \- | \- | \- | M | \- | \- | \- | \- |
| CephFS | \- | M | \- | \- | \- | M | \- | \- | \- | \- |
+----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+
| Tegile | \- | M | M | M | M | \- | \- | M | \- | \- |
+----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+


@@ -50,7 +50,7 @@ import manila.scheduler.weighers.pool
import manila.service
import manila.share.api
import manila.share.driver
import manila.share.drivers.cephfs.cephfs_native
import manila.share.drivers.cephfs.driver
import manila.share.drivers.container.driver
import manila.share.drivers.container.storage_helper
import manila.share.drivers.dell_emc.driver
@@ -123,7 +123,7 @@ _global_opt_lists = [
manila.share.driver.share_opts,
manila.share.driver.ssh_opts,
manila.share.drivers_private_data.private_data_opts,
manila.share.drivers.cephfs.cephfs_native.cephfs_native_opts,
manila.share.drivers.cephfs.driver.cephfs_opts,
manila.share.drivers.container.driver.container_opts,
manila.share.drivers.container.storage_helper.lv_opts,
manila.share.drivers.dell_emc.driver.EMC_NAS_OPTS,


@@ -14,6 +14,8 @@
# under the License.
import sys
from oslo_config import cfg
from oslo_log import log
from oslo_utils import units
@@ -22,6 +24,7 @@ from manila.common import constants
from manila import exception
from manila.i18n import _
from manila.share import driver
from manila.share.drivers import ganesha
from manila.share import share_types
@@ -41,7 +44,7 @@ CEPH_DEFAULT_AUTH_ID = "admin"
LOG = log.getLogger(__name__)
cephfs_native_opts = [
cephfs_opts = [
cfg.StrOpt('cephfs_conf_path',
default="",
help="Fully qualified path to the ceph.conf file."),
@@ -57,35 +60,50 @@ cephfs_native_opts = [
default=False,
help="Whether to enable snapshots in this driver."
),
cfg.StrOpt('cephfs_protocol_helper_type',
default="CEPHFS",
# TODO(rraja): Add 'NFS' once CephFS/Ganesha support is
# available in manila.
choices=['CEPHFS', ],
ignore_case=True,
help="The type of protocol helper to use. Default is "
"CEPHFS."
),
]
CONF = cfg.CONF
CONF.register_opts(cephfs_native_opts)
CONF.register_opts(cephfs_opts)
class CephFSNativeDriver(driver.ShareDriver,):
"""Driver for the Ceph Filesystem.
def cephfs_share_path(share):
"""Get VolumePath from Share."""
return ceph_volume_client.VolumePath(
share['share_group_id'], share['id'])
This driver is 'native' in the sense that it exposes a CephFS filesystem
for use directly by guests, with no intermediate layer like NFS.
"""
supported_protocols = ('CEPHFS',)
class CephFSDriver(driver.ShareDriver,):
"""Driver for the Ceph Filesystem."""
def __init__(self, *args, **kwargs):
super(CephFSNativeDriver, self).__init__(False, *args, **kwargs)
super(CephFSDriver, self).__init__(False, *args, **kwargs)
self.backend_name = self.configuration.safe_get(
'share_backend_name') or 'CephFS-Native'
'share_backend_name') or 'CephFS'
self._volume_client = None
self.configuration.append_config_values(cephfs_native_opts)
self.configuration.append_config_values(cephfs_opts)
def check_for_setup_error(self):
# NOTE: make sure that we can really connect to the ceph,
# otherwise an exception is raised
self.volume_client
def do_setup(self, context):
protocol_helper_class = getattr(sys.modules[__name__],
'NativeProtocolHelper')
self.protocol_helper = protocol_helper_class(
None,
self.configuration,
volume_client=self.volume_client)
self.protocol_helper.init_helper()
def _update_share_stats(self):
stats = self.volume_client.rados.get_cluster_stats()
@@ -97,7 +115,8 @@ class CephFSNativeDriver(driver.ShareDriver,):
'vendor_name': 'Ceph',
'driver_version': '1.0',
'share_backend_name': self.backend_name,
'storage_protocol': "CEPHFS",
'storage_protocol': self.configuration.safe_get(
'cephfs_protocol_helper_type'),
'pools': [
{
'pool_name': 'cephfs',
@@ -115,7 +134,7 @@ class CephFSNativeDriver(driver.ShareDriver,):
'snapshot_support': self.configuration.safe_get(
'cephfs_enable_snapshots'),
}
super(CephFSNativeDriver, self)._update_share_stats(data)
super(CephFSDriver, self)._update_share_stats(data)
def _to_bytes(self, gigs):
"""Convert a Manila size into bytes.
@@ -162,11 +181,6 @@ class CephFSNativeDriver(driver.ShareDriver,):
return self._volume_client
def _share_path(self, share):
"""Get VolumePath from Share."""
return ceph_volume_client.VolumePath(
share['share_group_id'], share['id'])
def create_share(self, context, share, share_server=None):
"""Create a CephFS volume.
@@ -175,6 +189,12 @@ class CephFSNativeDriver(driver.ShareDriver,):
:param share_server: Always None for CephFS native.
:return: The export locations dictionary.
"""
requested_proto = share['share_proto'].upper()
supported_proto = (
self.configuration.cephfs_protocol_helper_type.upper())
if (requested_proto != supported_proto):
msg = _("Share protocol %s is not supported.") % requested_proto
raise exception.ShareBackendException(msg=msg)
# `share` is a Share
msg = _("create_share {be} name={id} size={size}"
@@ -189,15 +209,111 @@ class CephFSNativeDriver(driver.ShareDriver,):
size = self._to_bytes(share['size'])
# Create the CephFS volume
volume = self.volume_client.create_volume(
self._share_path(share), size=size, data_isolated=data_isolated)
cephfs_volume = self.volume_client.create_volume(
cephfs_share_path(share), size=size, data_isolated=data_isolated)
return self.protocol_helper.get_export_locations(share, cephfs_volume)
def delete_share(self, context, share, share_server=None):
extra_specs = share_types.get_extra_specs_from_share(share)
data_isolated = extra_specs.get("cephfs:data_isolated", False)
self.volume_client.delete_volume(cephfs_share_path(share),
data_isolated=data_isolated)
self.volume_client.purge_volume(cephfs_share_path(share),
data_isolated=data_isolated)
def update_access(self, context, share, access_rules, add_rules,
delete_rules, share_server=None):
return self.protocol_helper.update_access(
context, share, access_rules, add_rules, delete_rules,
share_server=share_server)
def ensure_share(self, context, share, share_server=None):
# Creation is idempotent
return self.create_share(context, share, share_server)
def extend_share(self, share, new_size, share_server=None):
LOG.debug("extend_share {id} {size}".format(
id=share['id'], size=new_size))
self.volume_client.set_max_bytes(cephfs_share_path(share),
self._to_bytes(new_size))
def shrink_share(self, share, new_size, share_server=None):
LOG.debug("shrink_share {id} {size}".format(
id=share['id'], size=new_size))
new_bytes = self._to_bytes(new_size)
used = self.volume_client.get_used_bytes(cephfs_share_path(share))
if used > new_bytes:
# While in fact we can "shrink" our volumes to less than their
# used bytes (it's just a quota), raise error anyway to avoid
# confusing API consumers that might depend on typical shrink
# behaviour.
raise exception.ShareShrinkingPossibleDataLoss(
share_id=share['id'])
self.volume_client.set_max_bytes(cephfs_share_path(share), new_bytes)
def create_snapshot(self, context, snapshot, share_server=None):
self.volume_client.create_snapshot_volume(
cephfs_share_path(snapshot['share']),
'_'.join([snapshot['snapshot_id'], snapshot['id']]))
def delete_snapshot(self, context, snapshot, share_server=None):
self.volume_client.destroy_snapshot_volume(
cephfs_share_path(snapshot['share']),
'_'.join([snapshot['snapshot_id'], snapshot['id']]))
def create_share_group(self, context, sg_dict, share_server=None):
self.volume_client.create_group(sg_dict['id'])
def delete_share_group(self, context, sg_dict, share_server=None):
self.volume_client.destroy_group(sg_dict['id'])
def delete_share_group_snapshot(self, context, snap_dict,
share_server=None):
self.volume_client.destroy_snapshot_group(
snap_dict['share_group_id'],
snap_dict['id'])
return None, []
def create_share_group_snapshot(self, context, snap_dict,
share_server=None):
self.volume_client.create_snapshot_group(
snap_dict['share_group_id'],
snap_dict['id'])
return None, []
def __del__(self):
if self._volume_client:
self._volume_client.disconnect()
self._volume_client = None
class NativeProtocolHelper(ganesha.NASHelperBase):
"""Helper class for native CephFS protocol"""
supported_access_types = (CEPHX_ACCESS_TYPE, )
supported_access_levels = (constants.ACCESS_LEVEL_RW,
constants.ACCESS_LEVEL_RO)
def __init__(self, execute, config, **kwargs):
self.volume_client = kwargs.pop('volume_client')
super(NativeProtocolHelper, self).__init__(execute, config,
**kwargs)
def _init_helper(self):
pass
def get_export_locations(self, share, cephfs_volume):
# To mount this you need to know the mon IPs and the path to the volume
mon_addrs = self.volume_client.get_mon_addrs()
export_location = "{addrs}:{path}".format(
addrs=",".join(mon_addrs),
path=volume['mount_path'])
path=cephfs_volume['mount_path'])
LOG.info("Calculated export location for share %(id)s: %(loc)s",
{"id": share['id'], "loc": export_location})
@@ -233,11 +349,11 @@ class CephFSNativeDriver(driver.ShareDriver,):
raise exception.InvalidShareAccessLevel(
level=constants.ACCESS_LEVEL_RO)
auth_result = self.volume_client.authorize(
self._share_path(share), ceph_auth_id)
cephfs_share_path(share), ceph_auth_id)
else:
readonly = access['access_level'] == constants.ACCESS_LEVEL_RO
auth_result = self.volume_client.authorize(
self._share_path(share), ceph_auth_id, readonly=readonly,
cephfs_share_path(share), ceph_auth_id, readonly=readonly,
tenant_id=share['project_id'])
return auth_result['auth_key']
@@ -249,11 +365,11 @@ class CephFSNativeDriver(driver.ShareDriver,):
{"type": access['access_type']})
return
self.volume_client.deauthorize(self._share_path(share),
self.volume_client.deauthorize(cephfs_share_path(share),
access['access_to'])
self.volume_client.evict(
access['access_to'],
volume_path=self._share_path(share))
volume_path=cephfs_share_path(share))
def update_access(self, context, share, access_rules, add_rules,
delete_rules, share_server=None):
@@ -268,7 +384,7 @@ class CephFSNativeDriver(driver.ShareDriver,):
# the list of auth IDs that have share access.
if getattr(self.volume_client, 'version', None):
existing_auths = self.volume_client.get_authorized_ids(
self._share_path(share))
cephfs_share_path(share))
if existing_auths:
existing_auth_ids = set(
@@ -296,74 +412,3 @@ class CephFSNativeDriver(driver.ShareDriver,):
self._deny_access(context, share, rule)
return access_keys
def delete_share(self, context, share, share_server=None):
extra_specs = share_types.get_extra_specs_from_share(share)
data_isolated = extra_specs.get("cephfs:data_isolated", False)
self.volume_client.delete_volume(self._share_path(share),
data_isolated=data_isolated)
self.volume_client.purge_volume(self._share_path(share),
data_isolated=data_isolated)
def ensure_share(self, context, share, share_server=None):
# Creation is idempotent
return self.create_share(context, share, share_server)
def extend_share(self, share, new_size, share_server=None):
LOG.debug("extend_share {id} {size}".format(
id=share['id'], size=new_size))
self.volume_client.set_max_bytes(self._share_path(share),
self._to_bytes(new_size))
def shrink_share(self, share, new_size, share_server=None):
LOG.debug("shrink_share {id} {size}".format(
id=share['id'], size=new_size))
new_bytes = self._to_bytes(new_size)
used = self.volume_client.get_used_bytes(self._share_path(share))
if used > new_bytes:
# While in fact we can "shrink" our volumes to less than their
# used bytes (it's just a quota), raise error anyway to avoid
# confusing API consumers that might depend on typical shrink
# behaviour.
raise exception.ShareShrinkingPossibleDataLoss(
share_id=share['id'])
self.volume_client.set_max_bytes(self._share_path(share), new_bytes)
def create_snapshot(self, context, snapshot, share_server=None):
self.volume_client.create_snapshot_volume(
self._share_path(snapshot['share']),
'_'.join([snapshot['snapshot_id'], snapshot['id']]))
def delete_snapshot(self, context, snapshot, share_server=None):
self.volume_client.destroy_snapshot_volume(
self._share_path(snapshot['share']),
'_'.join([snapshot['snapshot_id'], snapshot['id']]))
def create_share_group(self, context, sg_dict, share_server=None):
self.volume_client.create_group(sg_dict['id'])
def delete_share_group(self, context, sg_dict, share_server=None):
self.volume_client.destroy_group(sg_dict['id'])
def delete_share_group_snapshot(self, context, snap_dict,
share_server=None):
self.volume_client.destroy_snapshot_group(
snap_dict['share_group_id'],
snap_dict['id'])
return None, []
def create_share_group_snapshot(self, context, snap_dict,
share_server=None):
self.volume_client.create_snapshot_group(
snap_dict['share_group_id'],
snap_dict['id'])
return None, []
def __del__(self):
if self._volume_client:
self._volume_client.disconnect()
self._volume_client = None


@@ -125,6 +125,8 @@ MAPPING = {
'GlusterfsNativeShareDriver',
'manila.share.drivers.emc.driver.EMCShareDriver':
'manila.share.drivers.dell_emc.driver.EMCShareDriver',
'manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver':
'manila.share.drivers.cephfs.driver.CephFSDriver',
}
QUOTAS = quota.QUOTAS
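
For background, manila's share manager consults MAPPING when it loads a
backend driver, so configurations that still reference the old path keep
working. A hypothetical sketch of that translation (the real logic lives
in manila/share/manager.py and is not part of this diff; MAPPING and LOG
are assumed to be in scope):

    from oslo_utils import importutils

    def _load_share_driver(configured_path, *args, **kwargs):
        # Hypothetical helper: translate a deprecated driver path to its
        # new location before importing it, warning the operator once.
        actual_path = MAPPING.get(configured_path, configured_path)
        if actual_path != configured_path:
            LOG.warning("Driver path %(old)s is deprecated; use %(new)s.",
                        {'old': configured_path, 'new': actual_path})
        return importutils.import_object(actual_path, *args, **kwargs)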


@@ -23,7 +23,7 @@ from manila.common import constants
from manila import context
import manila.exception as exception
from manila.share import configuration
from manila.share.drivers.cephfs import cephfs_native
from manila.share.drivers.cephfs import driver
from manila.share import share_types
from manila import test
from manila.tests import fake_share
@@ -76,8 +76,8 @@ class MockVolumeClientModule(object):
@ddt.ddt
class CephFSNativeDriverTestCase(test.TestCase):
"""Test the CephFS native driver.
class CephFSDriverTestCase(test.TestCase):
"""Test the CephFS driver.
This is a very simple driver that mainly
calls through to the CephFSVolumeClient interface, so the tests validate
@@ -86,7 +86,7 @@ class CephFSNativeDriverTestCase(test.TestCase):
"""
def setUp(self):
super(CephFSNativeDriverTestCase, self).setUp()
super(CephFSDriverTestCase, self).setUp()
self.fake_conf = configuration.Configuration(None)
self._context = context.get_admin_context()
self._share = fake_share.fake_share(share_proto='CEPHFS')
@@ -94,38 +94,72 @@ class CephFSNativeDriverTestCase(test.TestCase):
self.fake_conf.set_default('driver_handles_share_servers', False)
self.fake_conf.set_default('cephfs_auth_id', 'manila')
self.mock_object(cephfs_native, "ceph_volume_client",
self.mock_object(driver, "ceph_volume_client",
MockVolumeClientModule)
self.mock_object(cephfs_native, "ceph_module_found", True)
self.mock_object(driver, "ceph_module_found", True)
self.mock_object(driver, "cephfs_share_path")
self.mock_object(driver, 'NativeProtocolHelper')
self._driver = (
cephfs_native.CephFSNativeDriver(configuration=self.fake_conf))
driver.CephFSDriver(configuration=self.fake_conf))
self._driver.protocol_helper = mock.Mock()
self.mock_object(share_types, 'get_share_type_extra_specs',
mock.Mock(return_value={}))
def test_do_setup(self):
self._driver.do_setup(self._context)
driver.NativeProtocolHelper.assert_called_once_with(
None, self._driver.configuration,
volume_client=self._driver._volume_client)
self._driver.protocol_helper.init_helper.assert_called_once_with()
def test_create_share(self):
expected_export_locations = {
'path': '1.2.3.4,5.6.7.8:/foo/bar',
'is_admin_only': False,
'metadata': {},
}
cephfs_volume = {"mount_path": "/foo/bar"}
export_locations = self._driver.create_share(self._context,
self._share)
self._driver.create_share(self._context, self._share)
self.assertEqual(expected_export_locations, export_locations)
self._driver._volume_client.create_volume.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
size=self._share['size'] * units.Gi,
data_isolated=False)
(self._driver.protocol_helper.get_export_locations.
assert_called_once_with(self._share, cephfs_volume))
def test_create_share_error(self):
share = fake_share.fake_share(share_proto='NFS')
self.assertRaises(exception.ShareBackendException,
self._driver.create_share,
self._context,
share)
def test_update_access(self):
alice = {
'id': 'instance_mapping_id1',
'access_id': 'accessid1',
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
}
add_rules = access_rules = [alice, ]
delete_rules = []
self._driver.update_access(
self._context, self._share, access_rules, add_rules, delete_rules,
None)
self._driver.protocol_helper.update_access.assert_called_once_with(
self._context, self._share, access_rules, add_rules, delete_rules,
share_server=None)
def test_ensure_share(self):
self._driver.ensure_share(self._context,
self._share)
self._driver._volume_client.create_volume.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
size=self._share['size'] * units.Gi,
data_isolated=False)
@@ -137,7 +171,7 @@ class CephFSNativeDriverTestCase(test.TestCase):
self._driver.create_share(self._context, self._share)
self._driver._volume_client.create_volume.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
size=self._share['size'] * units.Gi,
data_isolated=True)
@@ -145,10 +179,10 @@ class CephFSNativeDriverTestCase(test.TestCase):
self._driver.delete_share(self._context, self._share)
self._driver._volume_client.delete_volume.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
data_isolated=False)
self._driver._volume_client.purge_volume.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
data_isolated=False)
def test_delete_data_isolated(self):
@@ -159,164 +193,12 @@ class CephFSNativeDriverTestCase(test.TestCase):
self._driver.delete_share(self._context, self._share)
self._driver._volume_client.delete_volume.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
data_isolated=True)
self._driver._volume_client.purge_volume.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
data_isolated=True)
@ddt.data(None, 1)
def test_allow_access_rw(self, volume_client_version):
rule = {
'access_level': constants.ACCESS_LEVEL_RW,
'access_to': 'alice',
'access_type': 'cephx',
}
self._driver.volume_client.version = volume_client_version
auth_key = self._driver._allow_access(
self._context, self._share, rule)
self.assertEqual("abc123", auth_key)
if not volume_client_version:
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice")
else:
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice",
readonly=False,
tenant_id=self._share['project_id'])
@ddt.data(None, 1)
def test_allow_access_ro(self, volume_client_version):
rule = {
'access_level': constants.ACCESS_LEVEL_RO,
'access_to': 'alice',
'access_type': 'cephx',
}
self._driver.volume_client.version = volume_client_version
if not volume_client_version:
self.assertRaises(exception.InvalidShareAccessLevel,
self._driver._allow_access,
self._context, self._share, rule)
else:
auth_key = self._driver._allow_access(self._context, self._share,
rule)
self.assertEqual("abc123", auth_key)
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice",
readonly=True,
tenant_id=self._share['project_id'],
)
def test_allow_access_wrong_type(self):
self.assertRaises(exception.InvalidShareAccess,
self._driver._allow_access,
self._context, self._share, {
'access_level': constants.ACCESS_LEVEL_RW,
'access_type': 'RHUBARB',
'access_to': 'alice'
})
def test_allow_access_same_cephx_id_as_manila_service(self):
self.assertRaises(exception.InvalidInput,
self._driver._allow_access,
self._context, self._share, {
'access_level': constants.ACCESS_LEVEL_RW,
'access_type': 'cephx',
'access_to': 'manila',
})
def test_deny_access(self):
self._driver._deny_access(self._context, self._share, {
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
})
self._driver._volume_client.deauthorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice")
self._driver._volume_client.evict.assert_called_once_with(
"alice",
volume_path=self._driver._share_path(self._share))
def test_update_access_add_rm(self):
alice = {
'id': 'instance_mapping_id1',
'access_id': 'accessid1',
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
}
bob = {
'id': 'instance_mapping_id2',
'access_id': 'accessid2',
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'bob'
}
access_updates = self._driver.update_access(
self._context, self._share, access_rules=[alice],
add_rules=[alice], delete_rules=[bob])
self.assertEqual(
{'accessid1': {'access_key': 'abc123'}}, access_updates)
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice",
readonly=False,
tenant_id=self._share['project_id'])
self._driver._volume_client.deauthorize.assert_called_once_with(
self._driver._share_path(self._share),
"bob")
@ddt.data(None, 1)
def test_update_access_all(self, volume_client_version):
alice = {
'id': 'instance_mapping_id1',
'access_id': 'accessid1',
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
}
self._driver.volume_client.version = volume_client_version
access_updates = self._driver.update_access(self._context, self._share,
access_rules=[alice],
add_rules=[],
delete_rules=[])
self.assertEqual(
{'accessid1': {'access_key': 'abc123'}}, access_updates)
if volume_client_version:
(self._driver._volume_client.get_authorized_ids.
assert_called_once_with(self._driver._share_path(self._share)))
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice",
readonly=False,
tenant_id=self._share['project_id']
)
self._driver._volume_client.deauthorize.assert_called_once_with(
self._driver._share_path(self._share),
"eve",
)
else:
self.assertFalse(
self._driver._volume_client.get_authorized_ids.called)
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice",
)
def test_extend_share(self):
new_size_gb = self._share['size'] * 2
new_size = new_size_gb * units.Gi
@@ -324,7 +206,7 @@ class CephFSNativeDriverTestCase(test.TestCase):
self._driver.extend_share(self._share, new_size_gb, None)
self._driver._volume_client.set_max_bytes.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
new_size)
def test_shrink_share(self):
@@ -334,9 +216,9 @@ class CephFSNativeDriverTestCase(test.TestCase):
self._driver.shrink_share(self._share, new_size_gb, None)
self._driver._volume_client.get_used_bytes.assert_called_once_with(
self._driver._share_path(self._share))
driver.cephfs_share_path(self._share))
self._driver._volume_client.set_max_bytes.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
new_size)
def test_shrink_share_full(self):
@@ -363,7 +245,7 @@ class CephFSNativeDriverTestCase(test.TestCase):
(self._driver._volume_client.create_snapshot_volume
.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
"snappy1_instance1"))
def test_delete_snapshot(self):
@@ -377,7 +259,7 @@ class CephFSNativeDriverTestCase(test.TestCase):
(self._driver._volume_client.destroy_snapshot_volume
.assert_called_once_with(
self._driver._share_path(self._share),
driver.cephfs_share_path(self._share),
"snappy1_instance1"))
def test_create_share_group(self):
@@ -443,22 +325,185 @@ class CephFSNativeDriverTestCase(test.TestCase):
self.assertEqual("CEPHFS", result['storage_protocol'])
def test_module_missing(self):
cephfs_native.ceph_module_found = False
cephfs_native.ceph_volume_client = None
driver.ceph_module_found = False
driver.ceph_volume_client = None
self.assertRaises(exception.ManilaException,
self._driver.create_share,
self._context,
self._share)
def test_check_for_setup_error(self):
self._driver.check_for_setup_error()
self._driver._volume_client.connect.assert_called_once_with(
premount_evict='manila')
def test_check_for_setup_error_with_connection_error(self):
cephfs_native.ceph_module_found = False
cephfs_native.ceph_volume_client = None
@ddt.ddt
class NativeProtocolHelperTestCase(test.TestCase):
self.assertRaises(exception.ManilaException,
self._driver.check_for_setup_error)
def setUp(self):
super(NativeProtocolHelperTestCase, self).setUp()
self.fake_conf = configuration.Configuration(None)
self._context = context.get_admin_context()
self._share = fake_share.fake_share(share_proto='CEPHFS')
self.fake_conf.set_default('driver_handles_share_servers', False)
self.mock_object(driver, "cephfs_share_path")
self._native_protocol_helper = driver.NativeProtocolHelper(
None,
self.fake_conf,
volume_client=MockVolumeClientModule.CephFSVolumeClient()
)
def test_get_export_locations(self):
vc = self._native_protocol_helper.volume_client
fake_cephfs_volume = {'mount_path': '/foo/bar'}
expected_export_locations = {
'path': '1.2.3.4,5.6.7.8:/foo/bar',
'is_admin_only': False,
'metadata': {},
}
export_locations = self._native_protocol_helper.get_export_locations(
self._share, fake_cephfs_volume)
self.assertEqual(expected_export_locations, export_locations)
vc.get_mon_addrs.assert_called_once_with()
@ddt.data(None, 1)
def test_allow_access_rw(self, volume_client_version):
vc = self._native_protocol_helper.volume_client
rule = {
'access_level': constants.ACCESS_LEVEL_RW,
'access_to': 'alice',
'access_type': 'cephx',
}
vc.version = volume_client_version
auth_key = self._native_protocol_helper._allow_access(
self._context, self._share, rule)
self.assertEqual("abc123", auth_key)
if not volume_client_version:
vc.authorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "alice")
else:
vc.authorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "alice",
readonly=False, tenant_id=self._share['project_id'])
@ddt.data(None, 1)
def test_allow_access_ro(self, volume_client_version):
vc = self._native_protocol_helper.volume_client
rule = {
'access_level': constants.ACCESS_LEVEL_RO,
'access_to': 'alice',
'access_type': 'cephx',
}
vc.version = volume_client_version
if not volume_client_version:
self.assertRaises(exception.InvalidShareAccessLevel,
self._native_protocol_helper._allow_access,
self._context, self._share, rule)
else:
auth_key = (
self._native_protocol_helper._allow_access(
self._context, self._share, rule)
)
self.assertEqual("abc123", auth_key)
vc.authorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "alice", readonly=True,
tenant_id=self._share['project_id'])
def test_allow_access_wrong_type(self):
self.assertRaises(exception.InvalidShareAccess,
self._native_protocol_helper._allow_access,
self._context, self._share, {
'access_level': constants.ACCESS_LEVEL_RW,
'access_type': 'RHUBARB',
'access_to': 'alice'
})
def test_allow_access_same_cephx_id_as_manila_service(self):
self.assertRaises(exception.InvalidInput,
self._native_protocol_helper._allow_access,
self._context, self._share, {
'access_level': constants.ACCESS_LEVEL_RW,
'access_type': 'cephx',
'access_to': 'manila',
})
def test_deny_access(self):
vc = self._native_protocol_helper.volume_client
self._native_protocol_helper._deny_access(self._context, self._share, {
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
})
vc.deauthorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "alice")
vc.evict.assert_called_once_with(
"alice", volume_path=driver.cephfs_share_path(self._share))
def test_update_access_add_rm(self):
vc = self._native_protocol_helper.volume_client
alice = {
'id': 'instance_mapping_id1',
'access_id': 'accessid1',
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
}
bob = {
'id': 'instance_mapping_id2',
'access_id': 'accessid2',
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'bob'
}
access_updates = self._native_protocol_helper.update_access(
self._context, self._share, access_rules=[alice],
add_rules=[alice], delete_rules=[bob])
self.assertEqual(
{'accessid1': {'access_key': 'abc123'}}, access_updates)
vc.authorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "alice", readonly=False,
tenant_id=self._share['project_id'])
vc.deauthorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "bob")
@ddt.data(None, 1)
def test_update_access_all(self, volume_client_version):
vc = self._native_protocol_helper.volume_client
alice = {
'id': 'instance_mapping_id1',
'access_id': 'accessid1',
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
}
vc.version = volume_client_version
access_updates = self._native_protocol_helper.update_access(
self._context, self._share, access_rules=[alice], add_rules=[],
delete_rules=[])
self.assertEqual(
{'accessid1': {'access_key': 'abc123'}}, access_updates)
if volume_client_version:
vc.get_authorized_ids.assert_called_once_with(
driver.cephfs_share_path(self._share))
vc.authorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "alice", readonly=False,
tenant_id=self._share['project_id'])
vc.deauthorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "eve")
else:
self.assertFalse(vc.get_authorized_ids.called)
vc.authorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "alice")


@@ -0,0 +1,9 @@
---
upgrade:
- To use the CephFS driver, which enables CephFS access via the native Ceph
protocol, set `share_driver` in the backend section of the config file to
`manila.share.drivers.cephfs.driver.CephFSDriver`. The previous
`share_driver` setting from the Mitaka, Newton, and Ocata releases,
`manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver`, still works
(typically for one more release, Queens, per the standard deprecation
process), but its use is deprecated.
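
For example, an existing backend section needs only its `share_driver`
line updated; the backend name below is illustrative::

  [cephfs1]
  driver_handles_share_servers = False
  share_driver = manila.share.drivers.cephfs.driver.CephFSDriver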