
Add Ceph Native driver

This driver enables use of the Ceph filesystem for
Manila shares.  Clients require a native CephFS
client to access the share.  The interface
to Ceph is CephFSVolumeClient, included in
the 'Jewel' Ceph release and later.

APIImpact
The API microversion is bumped to 2.13 to add a
new access type, cephx, supported by the driver.

Co-Authored-By: Ramana Raja <rraja@redhat.com>

Implements: blueprint cephfs-driver
Change-Id: I33801215f64eacb9dade4d92f11f659173abb7f5
changes/11/270211/42
John Spray, 6 years ago (committed by John Spray)
parent commit 9ff0d1312e
  1. 136  doc/source/devref/cephfs_native_driver.rst
  2.   1  doc/source/devref/index.rst
  3.  81  doc/source/devref/share_back_ends_feature_support_mapping.rst
  4.   3  manila/api/openstack/api_version_request.py
  5.   4  manila/api/openstack/rest_api_version_history.rst
  6.  34  manila/api/v1/shares.py
  7.   7  manila/api/v2/shares.py
  8.   2  manila/common/constants.py
  9.   2  manila/opts.py
 10.   0  manila/share/drivers/cephfs/__init__.py
 11. 319  manila/share/drivers/cephfs/cephfs_native.py
 12.  15  manila/tests/api/v1/test_shares.py
 13.  44  manila/tests/api/v2/test_shares.py
 14.  21  manila/tests/fake_share.py
 15.   0  manila/tests/share/drivers/cephfs/__init__.py
 16. 374  manila/tests/share/drivers/cephfs/test_cephfs_native.py
 17.   5  manila_tempest_tests/config.py
 18.   4  manila_tempest_tests/tests/api/admin/test_share_manage.py
 19.   2  manila_tempest_tests/tests/api/base.py
 20.  51  manila_tempest_tests/tests/api/test_rules.py
 21.  60  manila_tempest_tests/tests/api/test_rules_negative.py
 22.   5  manila_tempest_tests/tests/api/test_shares.py

136
doc/source/devref/cephfs_native_driver.rst

@@ -0,0 +1,136 @@
CephFS Native driver
====================

The CephFS Native driver enables manila to export shared filesystems to guests
using the Ceph network protocol. Guests require a Ceph client in order to
mount the filesystem.

Access is controlled via Ceph's cephx authentication system. Each share has
a distinct authentication key that must be passed to clients for them to use
it.

To learn more about configuring Ceph clients to access the shares created
using this driver, please see the Ceph documentation
(http://docs.ceph.com/docs/master/cephfs/). If you choose to use the kernel
client rather than the FUSE client, the share size limits set in Manila
may not be obeyed.

Prerequisites
-------------

- A Ceph cluster with a filesystem configured
  (http://docs.ceph.com/docs/master/cephfs/createfs/)
- Network connectivity between your Ceph cluster's public network and the
  server running the :term:`manila-share` service.
- Network connectivity between your Ceph cluster's public network and guests.

.. important:: A manila share backed by CephFS is only as good as the
               underlying filesystem. Take care when configuring your Ceph
               cluster, and consult the latest guidance on the use of
               CephFS in the Ceph documentation
               (http://docs.ceph.com/docs/master/cephfs/).
Authorize the driver to communicate with Ceph
---------------------------------------------

Run the following command to create a Ceph identity for manila to use:

.. code-block:: console

    ceph auth get-or-create client.manila mon 'allow r; allow command "auth del" with entity prefix client.manila.; allow command "auth caps" with entity prefix client.manila.; allow command "auth get" with entity prefix client.manila., allow command "auth get-or-create" with entity prefix client.manila.' mds 'allow *' osd 'allow rw' > keyring.manila

keyring.manila, along with your ceph.conf file, will then need to be placed
on the server where the :term:`manila-share` service runs, and the paths to
these configured in your manila.conf.

Enable snapshots in Ceph if you want to use them in manila:

.. code-block:: console

    ceph mds set allow_new_snaps true --yes-i-really-mean-it
Configure CephFS backend in manila.conf
---------------------------------------

Add CephFS to ``enabled_share_protocols`` (enforced at the manila api layer).
In this example we leave NFS and CIFS enabled, although you can remove these
if you will only use CephFS:

.. code-block:: ini

    enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section like this to define a CephFS backend:

.. code-block:: ini

    [cephfs1]
    driver_handles_share_servers = False
    share_backend_name = CEPHFS1
    share_driver = manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila

Then edit ``enabled_share_backends`` to point to it, using the same
name that you used for the backend section. In this example we are
also including another backend ("generic1"); you would include
whatever other backends you have configured:

.. code-block:: ini

    enabled_share_backends = generic1, cephfs1
Creating shares
---------------

The default share type may have ``driver_handles_share_servers`` set to True.
Configure a share type suitable for CephFS:

.. code-block:: console

    manila type-create cephfstype false

Then create yourself a share:

.. code-block:: console

    manila create --share-type cephfstype --name cephshare1 cephfs 1
Mounting a client with FUSE
---------------------------

Using the key from your export location, and the share ID, create a keyring
file like:

.. code-block:: ini

    [client.share-4c55ad20-9c55-4a5e-9233-8ac64566b98c]
        key = AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA==

Using the mon IP addresses from your export location, create a ceph.conf file
like:

.. code-block:: ini

    [client]
    client quota = true

    [mon.a]
    mon addr = 192.168.1.7:6789

    [mon.b]
    mon addr = 192.168.1.8:6789

    [mon.c]
    mon addr = 192.168.1.9:6789

Finally, mount the filesystem, substituting the filenames of the keyring and
configuration files you just created:

.. code-block:: console

    ceph-fuse --id=share-4c55ad20-9c55-4a5e-9233-8ac64566b98c -c ./client.conf --keyring=./client.keyring --client-mountpoint=/volumes/share-4c55ad20-9c55-4a5e-9233-8ac64566b98c ~/mnt
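The key and mon addresses used in the files above come from the share's export location, which the driver builds as comma-joined monitor addresses followed by the volume path. As an illustrative sketch (a hypothetical helper, not part of manila or Ceph), a client-side script could split an export location into those pieces like this:

```python
def parse_export_location(export_location):
    """Return (mon_addrs, path) from a CephFS export location string."""
    # In this simple sketch the path starts at the first ":/"; everything
    # before it is the comma-separated monitor address list.
    addrs_part, _, path = export_location.partition(":/")
    return addrs_part.split(","), "/" + path


def client_ceph_conf(mon_addrs):
    """Render a minimal client-side ceph.conf for the given monitors."""
    lines = ["[client]", "client quota = true", ""]
    for i, addr in enumerate(mon_addrs):
        lines += ["[mon.%s]" % chr(ord("a") + i), "mon addr = %s" % addr, ""]
    return "\n".join(lines)


export = "192.168.1.7:6789,192.168.1.8:6789:/volumes/share-4c55ad20"
addrs, path = parse_export_location(export)
print(addrs)  # ['192.168.1.7:6789', '192.168.1.8:6789']
print(path)   # /volumes/share-4c55ad20
```

The same parsed values could then feed the keyring file and the `ceph-fuse --client-mountpoint` argument shown above.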

1
doc/source/devref/index.rst

@@ -106,6 +106,7 @@ Share backends
generic_driver
glusterfs_driver
glusterfs_native_driver
cephfs_native_driver
gpfs_driver
huawei_nas_driver
hdfs_native_driver

81
doc/source/devref/share_back_ends_feature_support_mapping.rst

@@ -65,6 +65,8 @@ Mapping of share drivers and share features support
+----------------------------------------+-----------------------------+-----------------------+--------------+--------------+------------------------+----------------------------+--------------------------+
| Oracle ZFSSA | DHSS = False (K) | \- | M | M | K | K | \- |
+----------------------------------------+-----------------------------+-----------------------+--------------+--------------+------------------------+----------------------------+--------------------------+
| CephFS Native | DHSS = False (M) | \- | M | M | M | \- | \- |
+----------------------------------------+-----------------------------+-----------------------+--------------+--------------+------------------------+----------------------------+--------------------------+
.. note::
@@ -73,43 +75,45 @@ Mapping of share drivers and share features support
Mapping of share drivers and share access rules support
-------------------------------------------------------
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| | Read & Write | Read Only |
+ Driver name +--------------+----------------+------------+--------------+----------------+------------+
| | IP | USER | Cert | IP | USER | Cert |
+========================================+==============+================+============+==============+================+============+
| ZFSonLinux | NFS (M) | \- | \- | NFS (M) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| Generic (Cinder as back-end) | NFS,CIFS (J) | \- | \- | NFS (K) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| NetApp Clustered Data ONTAP | NFS (J) | CIFS (J) | \- | NFS (K) | CIFS (M) | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| EMC VNX | NFS (J) | CIFS (J) | \- | NFS (L) | CIFS (L) | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| EMC Isilon | NFS,CIFS (K) | CIFS (M) | \- | NFS (M) | CIFS (M) | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| Red Hat GlusterFS | NFS (J) | \- | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| Red Hat GlusterFS-Native | \- | \- | J | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| HDFS | \- | HDFS(K) | \- | \- | HDFS(K) | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| Hitachi HNAS | NFS (L) | \- | \- | NFS (L) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| HPE 3PAR | NFS,CIFS (K) | CIFS (K) | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| Huawei | NFS (K) |NFS (M),CIFS (K)| \- | NFS (K) |NFS (M),CIFS (K)| \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| LVM | NFS (M) | CIFS (M) | \- | NFS (M) | CIFS (M) | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| Quobyte | NFS (K) | \- | \- | NFS (K) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| Windows SMB | \- | CIFS (L) | \- | \- | CIFS (L) | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| IBM GPFS | NFS (K) | \- | \- | NFS (K) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
| Oracle ZFSSA | NFS,CIFS(K) | \- | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+----------------+------------+
+----------------------------------------+-----------------------------------------------------------+---------------------------------------------------------+
| | Read & Write | Read Only |
+ Driver name +--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| | IP | USER | Cert | CephX | IP | USER | Cert | CephX |
+========================================+==============+================+============+==============+==============+================+============+============+
| ZFSonLinux | NFS (M) | \- | \- | \- | NFS (M) | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Generic (Cinder as back-end) | NFS,CIFS (J) | \- | \- | \- | NFS (K) | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| NetApp Clustered Data ONTAP | NFS (J) | CIFS (J) | \- | \- | NFS (K) | CIFS (M) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| EMC VNX | NFS (J) | CIFS (J) | \- | \- | NFS (L) | CIFS (L) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| EMC Isilon | NFS,CIFS (K) | CIFS (M) | \- | \- | NFS (M) | CIFS (M) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Red Hat GlusterFS | NFS (J) | \- | \- | \- | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Red Hat GlusterFS-Native | \- | \- | J | \- | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| HDFS | \- | HDFS(K) | \- | \- | \- | HDFS(K) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Hitachi HNAS | NFS (L) | \- | \- | \- | NFS (L) | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| HPE 3PAR | NFS,CIFS (K) | CIFS (K) | \- | \- | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Huawei | NFS (K) |NFS (M),CIFS (K)| \- | \- | NFS (K) |NFS (M),CIFS (K)| \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| LVM | NFS (M) | CIFS (M) | \- | \- | NFS (M) | CIFS (M) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Quobyte | NFS (K) | \- | \- | \- | NFS (K) | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Windows SMB | \- | CIFS (L) | \- | \- | \- | CIFS (L) | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| IBM GPFS | NFS (K) | \- | \- | \- | NFS (K) | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Oracle ZFSSA | NFS,CIFS(K) | \- | \- | \- | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| CephFS Native | \- | \- | \- | CEPH(M) | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
Mapping of share drivers and security services support
------------------------------------------------------
@@ -149,3 +153,6 @@ Mapping of share drivers and security services support
+----------------------------------------+------------------+-----------------+------------------+
| Oracle ZFSSA | \- | \- | \- |
+----------------------------------------+------------------+-----------------+------------------+
| CephFS Native | \- | \- | \- |
+----------------------------------------+------------------+-----------------+------------------+

3
manila/api/openstack/api_version_request.py

@@ -59,13 +59,14 @@ REST_API_VERSION_HISTORY = """
instances.
* 2.11 - Share Replication support
* 2.12 - Manage/unmanage snapshot API.
* 2.13 - Add "cephx" auth type to allow_access
"""
# The minimum and maximum versions of the API supported
# The default api version request is defined to be the
# the minimum version of the API supported.
_MIN_API_VERSION = "2.0"
_MAX_API_VERSION = "2.12"
_MAX_API_VERSION = "2.13"
DEFAULT_API_VERSION = _MIN_API_VERSION

4
manila/api/openstack/rest_api_version_history.rst

@@ -89,3 +89,7 @@ user documentation.
2.12
----
Share snapshot manage and unmanage API.
2.13
----
Add 'cephx' authentication type for the CephFS Native driver.

34
manila/api/v1/shares.py

@@ -17,6 +17,7 @@
import ast
import re
import string
from oslo_log import log
from oslo_utils import strutils
@@ -383,7 +384,27 @@ class ShareMixin(object):
except ValueError:
raise webob.exc.HTTPBadRequest(explanation=exc_str)
def _allow_access(self, req, id, body):
@staticmethod
def _validate_cephx_id(cephx_id):
if not cephx_id:
raise webob.exc.HTTPBadRequest(explanation=_(
'Ceph IDs may not be empty'))
# This restriction may be lifted in Ceph in the future:
# http://tracker.ceph.com/issues/14626
if not set(cephx_id) <= set(string.printable):
raise webob.exc.HTTPBadRequest(explanation=_(
'Ceph IDs must consist of ASCII printable characters'))
# Periods are technically permitted, but we restrict them here
# to avoid confusion where users are unsure whether they should
# include the "client." prefix: otherwise they could accidentally
# create "client.client.foobar".
if '.' in cephx_id:
raise webob.exc.HTTPBadRequest(explanation=_(
'Ceph IDs may not contain periods'))
def _allow_access(self, req, id, body, enable_ceph=False):
"""Add share access rule."""
context = req.environ['manila.context']
access_data = body.get('allow_access', body.get('os-allow_access'))
@@ -397,9 +418,16 @@ class ShareMixin(object):
self._validate_username(access_to)
elif access_type == 'cert':
self._validate_common_name(access_to.strip())
elif access_type == "cephx" and enable_ceph:
self._validate_cephx_id(access_to)
else:
exc_str = _("Only 'ip','user',or'cert' access types "
"are supported.")
if enable_ceph:
exc_str = _("Only 'ip', 'user', 'cert' or 'cephx' access "
"types are supported.")
else:
exc_str = _("Only 'ip', 'user' or 'cert' access types "
"are supported.")
raise webob.exc.HTTPBadRequest(explanation=exc_str)
try:
access = self.share_api.allow_access(
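The restrictions enforced by `_validate_cephx_id` above can be sketched as a standalone function (an illustrative re-implementation for clarity, not manila's actual code): a cephx ID must be non-empty, printable ASCII only, and free of periods so users cannot accidentally create names like "client.client.foobar".

```python
import string


def validate_cephx_id(cephx_id):
    """Raise ValueError if cephx_id breaks any of the driver's rules."""
    if not cephx_id:
        raise ValueError("Ceph IDs may not be empty")
    # Non-ASCII characters are rejected; see http://tracker.ceph.com/issues/14626
    if not set(cephx_id) <= set(string.printable):
        raise ValueError("Ceph IDs must consist of ASCII printable characters")
    # Periods are rejected to avoid confusion with the "client." prefix.
    if "." in cephx_id:
        raise ValueError("Ceph IDs may not contain periods")


validate_cephx_id("alice")      # OK
validate_cephx_id("alice bob")  # spaces are printable ASCII, so OK
```

These rules mirror the test cases added to test_shares.py further below: "alice", "alice_bob" and "alice bob" pass, while "client.manila" and non-ASCII names are rejected.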

7
manila/api/v2/shares.py

@@ -13,6 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
from manila.api.openstack import api_version_request as api_version
from manila.api.openstack import wsgi
from manila.api.v1 import share_manage
from manila.api.v1 import share_unmanage
@@ -27,7 +28,6 @@ class ShareController(shares.ShareMixin,
wsgi.Controller,
wsgi.AdminActionsMixin):
"""The Shares API v2 controller for the OpenStack API."""
resource_name = 'share'
_view_builder_class = share_views.ViewBuilder
@@ -86,7 +86,10 @@ class ShareController(shares.ShareMixin,
@wsgi.action('allow_access')
def allow_access(self, req, id, body):
"""Add share access rule."""
return self._allow_access(req, id, body)
if req.api_version_request < api_version.APIVersionRequest("2.13"):
return self._allow_access(req, id, body)
else:
return self._allow_access(req, id, body, enable_ceph=True)
@wsgi.Controller.api_version('2.0', '2.6')
@wsgi.action('os-deny_access')
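The gate above compares the request's microversion against 2.13 and only enables cephx handling at or beyond it. A minimal sketch of how such a microversion comparison can work, using (major, minor) tuples (illustrative only, not manila's APIVersionRequest implementation):

```python
def parse_version(ver):
    """Parse an 'X.Y' microversion string into a comparable tuple."""
    major, minor = ver.split(".")
    return (int(major), int(minor))


def cephx_enabled(request_version):
    """cephx access type is only honoured from microversion 2.13 onward."""
    return parse_version(request_version) >= parse_version("2.13")


print(cephx_enabled("2.12"))  # False
print(cephx_enabled("2.13"))  # True
```

Tuple comparison is what makes "2.9" sort below "2.13" correctly, which naive string comparison would get wrong.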

2
manila/common/constants.py

@@ -55,7 +55,7 @@ TRANSITIONAL_STATUSES = (
)
SUPPORTED_SHARE_PROTOCOLS = (
'NFS', 'CIFS', 'GLUSTERFS', 'HDFS')
'NFS', 'CIFS', 'GLUSTERFS', 'HDFS', 'CEPHFS')
SECURITY_SERVICES_ALLOWED_TYPES = ['active_directory', 'ldap', 'kerberos']

2
manila/opts.py

@@ -50,6 +50,7 @@ import manila.scheduler.weighers.pool
import manila.service
import manila.share.api
import manila.share.driver
import manila.share.drivers.cephfs.cephfs_native
import manila.share.drivers.emc.driver
import manila.share.drivers.emc.plugins.isilon.isilon
import manila.share.drivers.generic
@@ -113,6 +114,7 @@ _global_opt_lists = [
manila.share.driver.share_opts,
manila.share.driver.ssh_opts,
manila.share.drivers_private_data.private_data_opts,
manila.share.drivers.cephfs.cephfs_native.cephfs_native_opts,
manila.share.drivers.emc.driver.EMC_NAS_OPTS,
manila.share.drivers.generic.share_opts,
manila.share.drivers.glusterfs.common.glusterfs_common_opts,

0
manila/share/drivers/cephfs/__init__.py

319
manila/share/drivers/cephfs/cephfs_native.py

@@ -0,0 +1,319 @@
# Copyright (c) 2016 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log
from oslo_utils import units
from manila.common import constants
from manila import exception
from manila.i18n import _, _LI, _LW
from manila.share import driver
from manila.share import share_types
try:
import ceph_volume_client
ceph_module_found = True
except ImportError as e:
ceph_volume_client = None
ceph_module_found = False
CEPHX_ACCESS_TYPE = "cephx"
# The default Ceph administrative identity
CEPH_DEFAULT_AUTH_ID = "admin"
LOG = log.getLogger(__name__)
cephfs_native_opts = [
cfg.StrOpt('cephfs_conf_path',
default="",
help="Fully qualified path to the ceph.conf file."),
cfg.StrOpt('cephfs_cluster_name',
help="The name of the cluster in use, if it is not "
"the default ('ceph')."
),
cfg.StrOpt('cephfs_auth_id',
default="manila",
help="The name of the ceph auth identity to use."
),
cfg.BoolOpt('cephfs_enable_snapshots',
default=False,
help="Whether to enable snapshots in this driver."
),
]
CONF = cfg.CONF
CONF.register_opts(cephfs_native_opts)
class CephFSNativeDriver(driver.ShareDriver,):
"""Driver for the Ceph Filesystem.
This driver is 'native' in the sense that it exposes a CephFS filesystem
for use directly by guests, with no intermediate layer like NFS.
"""
supported_protocols = ('CEPHFS',)
def __init__(self, *args, **kwargs):
super(CephFSNativeDriver, self).__init__(False, *args, **kwargs)
self.backend_name = self.configuration.safe_get(
'share_backend_name') or 'CephFS-Native'
self._volume_client = None
self.configuration.append_config_values(cephfs_native_opts)
def _update_share_stats(self):
stats = self.volume_client.rados.get_cluster_stats()
total_capacity_gb = stats['kb'] / units.Mi
free_capacity_gb = stats['kb_avail'] / units.Mi
data = {
'consistency_group_support': 'pool',
'vendor_name': 'Ceph',
'driver_version': '1.0',
'share_backend_name': self.backend_name,
'storage_protocol': "CEPHFS",
'pools': [
{
'pool_name': 'cephfs',
'total_capacity_gb': total_capacity_gb,
'free_capacity_gb': free_capacity_gb,
'qos': 'False',
'reserved_percentage': 0,
'dedupe': [False],
'compression': [False],
'thin_provisioning': [False]
}
],
'total_capacity_gb': total_capacity_gb,
'free_capacity_gb': free_capacity_gb,
'snapshot_support': self.configuration.safe_get(
'cephfs_enable_snapshots'),
}
super(CephFSNativeDriver, self)._update_share_stats(data)
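As a sanity check on the stats conversion above: rados `get_cluster_stats()` reports sizes in kB, and manila reports capacities in GiB, so the intended conversion is division by 2**20 (the value of `oslo_utils.units.Mi`, since 1 GiB = 2**20 kiB). A small illustrative helper:

```python
KIB_PER_GIB = 2 ** 20  # the value oslo_utils.units.Mi evaluates to


def kb_to_gib(kb):
    """Convert a kB figure from Ceph cluster stats into GiB."""
    return kb / KIB_PER_GIB


print(kb_to_gib(10 * 2 ** 20))  # 10.0, i.e. a 10 GiB cluster
```

Multiplying by `units.Mi` instead of dividing would inflate the reported capacity by a factor of 2**40, so the direction of the conversion matters.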
def _to_bytes(self, gigs):
"""Convert a Manila size into bytes.
Manila uses gibibytes everywhere.
:param gigs: integer number of gibibytes.
:return: integer number of bytes.
"""
return gigs * units.Gi
@property
def volume_client(self):
if self._volume_client:
return self._volume_client
if not ceph_module_found:
raise exception.ManilaException(
_("Ceph client libraries not found.")
)
conf_path = self.configuration.safe_get('cephfs_conf_path')
cluster_name = self.configuration.safe_get('cephfs_cluster_name')
auth_id = self.configuration.safe_get('cephfs_auth_id')
self._volume_client = ceph_volume_client.CephFSVolumeClient(
auth_id, conf_path, cluster_name)
LOG.info(_LI("[%(be)s] Ceph client found, connecting..."),
{"be": self.backend_name})
if auth_id != CEPH_DEFAULT_AUTH_ID:
# Evict any other manila sessions. Only do this if we're
# using a client ID that isn't the default admin ID, to avoid
# rudely disrupting anyone else.
premount_evict = auth_id
else:
premount_evict = None
try:
self._volume_client.connect(premount_evict=premount_evict)
except Exception:
self._volume_client = None
raise
else:
LOG.info(_LI("[%(be)s] Ceph client connection complete."),
{"be": self.backend_name})
return self._volume_client
def _share_path(self, share):
"""Get VolumePath from Share."""
return ceph_volume_client.VolumePath(
share['consistency_group_id'], share['id'])
def create_share(self, context, share, share_server=None):
"""Create a CephFS volume.
:param context: A RequestContext.
:param share: A Share.
:param share_server: Always None for CephFS native.
:return: The export locations dictionary.
"""
# `share` is a Share
LOG.debug("create_share {be} name={id} size={size} cg_id={cg}".format(
be=self.backend_name, id=share['id'], size=share['size'],
cg=share['consistency_group_id']))
extra_specs = share_types.get_extra_specs_from_share(share)
data_isolated = extra_specs.get("cephfs:data_isolated", False)
size = self._to_bytes(share['size'])
# Create the CephFS volume
volume = self.volume_client.create_volume(
self._share_path(share), size=size, data_isolated=data_isolated)
# To mount this you need to know the mon IPs and the path to the volume
mon_addrs = self.volume_client.get_mon_addrs()
export_location = "{addrs}:{path}".format(
addrs=",".join(mon_addrs),
path=volume['mount_path'])
LOG.info(_LI("Calculated export location for share %(id)s: %(loc)s"),
{"id": share['id'], "loc": export_location})
return {
'path': export_location,
'is_admin_only': False,
'metadata': {},
}
def _allow_access(self, context, share, access, share_server=None):
if access['access_type'] != CEPHX_ACCESS_TYPE:
raise exception.InvalidShareAccess(
reason=_("Only 'cephx' access type allowed."))
if access['access_level'] == constants.ACCESS_LEVEL_RO:
raise exception.InvalidShareAccessLevel(
level=constants.ACCESS_LEVEL_RO)
ceph_auth_id = access['access_to']
auth_result = self.volume_client.authorize(self._share_path(share),
ceph_auth_id)
return auth_result['auth_key']
def _deny_access(self, context, share, access, share_server=None):
if access['access_type'] != CEPHX_ACCESS_TYPE:
LOG.warning(_LW("Invalid access type '%(type)s', "
"ignoring in deny."),
{"type": access['access_type']})
return
self.volume_client.deauthorize(self._share_path(share),
access['access_to'])
self.volume_client.evict(access['access_to'])
def update_access(self, context, share, access_rules, add_rules=None,
delete_rules=None, share_server=None):
# The interface to Ceph just provides add/remove methods, since it
# was created at start of mitaka cycle when there was no requirement
# to be able to list access rules or set them en masse. Therefore
# we implement update_access as best we can. In future ceph's
# interface should be extended to enable a full implementation
# of update_access.
for rule in add_rules:
self._allow_access(context, share, rule)
for rule in delete_rules:
self._deny_access(context, share, rule)
# This is where we would list all permitted clients and remove
# those that are not in `access_rules` if the ceph interface
# enabled it.
if not (add_rules or delete_rules):
for rule in access_rules:
self._allow_access(context, share, rule)
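The add/remove-only semantics described in the comment above can be sketched against a toy backend (an illustrative sketch, not the driver's actual code): incremental updates apply `add_rules`/`delete_rules`, while a recalculation request (both lists empty) re-applies every rule in `access_rules` without being able to revoke stale grants.

```python
def update_access(authorized, access_rules, add_rules, delete_rules):
    """Mutate `authorized` (a set of auth IDs) to reflect the rule lists."""
    if not (add_rules or delete_rules):
        # Recalculation: re-grant everything. Stale entries cannot be
        # revoked here without a way to list existing grants, which is
        # exactly the Ceph interface limitation noted above.
        for rule in access_rules:
            authorized.add(rule["access_to"])
        return
    for rule in add_rules:
        authorized.add(rule["access_to"])
    for rule in delete_rules:
        authorized.discard(rule["access_to"])


grants = set()
update_access(grants, [], [{"access_to": "alice"}], [])
print(grants)  # {'alice'}
```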
def delete_share(self, context, share, share_server=None):
extra_specs = share_types.get_extra_specs_from_share(share)
data_isolated = extra_specs.get("cephfs:data_isolated", False)
self.volume_client.delete_volume(self._share_path(share),
data_isolated=data_isolated)
self.volume_client.purge_volume(self._share_path(share),
data_isolated=data_isolated)
def ensure_share(self, context, share, share_server=None):
# Creation is idempotent
return self.create_share(context, share, share_server)
def extend_share(self, share, new_size, share_server=None):
LOG.debug("extend_share {id} {size}".format(
id=share['id'], size=new_size))
self.volume_client.set_max_bytes(self._share_path(share),
self._to_bytes(new_size))
def shrink_share(self, share, new_size, share_server=None):
LOG.debug("shrink_share {id} {size}".format(
id=share['id'], size=new_size))
new_bytes = self._to_bytes(new_size)
used = self.volume_client.get_used_bytes(self._share_path(share))
if used > new_bytes:
# While in fact we can "shrink" our volumes to less than their
# used bytes (it's just a quota), raise error anyway to avoid
# confusing API consumers that might depend on typical shrink
# behaviour.
raise exception.ShareShrinkingPossibleDataLoss(
share_id=share['id'])
self.volume_client.set_max_bytes(self._share_path(share), new_bytes)
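The shrink guard above can be shown in isolation (an illustrative sketch, not the driver's code): although a CephFS "shrink" is only a quota change, the driver refuses to go below current usage so API consumers keep the usual no-data-loss guarantee on shrink.

```python
GIB = 2 ** 30  # bytes per gibibyte, matching oslo_utils.units.Gi


def can_shrink(used_bytes, new_size_gib):
    """Return True when a share may safely shrink to new_size_gib."""
    return used_bytes <= new_size_gib * GIB


print(can_shrink(5 * GIB, 10))  # True: plenty of headroom
print(can_shrink(5 * GIB, 4))   # False: would drop below current usage
```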
def create_snapshot(self, context, snapshot, share_server=None):
self.volume_client.create_snapshot_volume(
self._share_path(snapshot['share']), snapshot['name'])
def delete_snapshot(self, context, snapshot, share_server=None):
self.volume_client.destroy_snapshot_volume(
self._share_path(snapshot['share']), snapshot['name'])
def create_consistency_group(self, context, cg_dict, share_server=None):
self.volume_client.create_group(cg_dict['id'])
def delete_consistency_group(self, context, cg_dict, share_server=None):
self.volume_client.destroy_group(cg_dict['id'])
def delete_cgsnapshot(self, context, snap_dict, share_server=None):
self.volume_client.destroy_snapshot_group(
snap_dict['consistency_group_id'],
snap_dict['id'])
return None, []
def create_cgsnapshot(self, context, snap_dict, share_server=None):
self.volume_client.create_snapshot_group(
snap_dict['consistency_group_id'],
snap_dict['id'])
return None, []
def __del__(self):
if self._volume_client:
self._volume_client.disconnect()
self._volume_client = None

15
manila/tests/api/v1/test_shares.py

@@ -753,6 +753,20 @@ class ShareAPITest(test.TestCase):
common.remove_invalid_options(ctx, search_opts, allowed_opts)
self.assertEqual(expected_opts, search_opts)
def test_validate_cephx_id_invalid_with_period(self):
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller._validate_cephx_id,
"client.manila")
def test_validate_cephx_id_invalid_with_non_ascii(self):
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller._validate_cephx_id,
u"bj\u00F6rn")
@ddt.data("alice", "alice_bob", "alice bob")
def test_validate_cephx_id_valid(self, test_id):
self.controller._validate_cephx_id(test_id)
def _fake_access_get(self, ctxt, access_id):
@@ -816,6 +830,7 @@ class ShareActionsTest(test.TestCase):
{'access_type': 'cert', 'access_to': ''},
{'access_type': 'cert', 'access_to': ' '},
{'access_type': 'cert', 'access_to': 'x' * 65},
{'access_type': 'cephx', 'access_to': 'alice'}
)
def test_allow_access_error(self, access):
id = 'fake_share_id'

44
manila/tests/api/v2/test_shares.py

@@ -1017,10 +1017,11 @@ class ShareActionsTest(test.TestCase):
mock.Mock(return_value={'fake': 'fake'}))
id = 'fake_share_id'
body = {'os-allow_access': access}
body = {'allow_access': access}
expected = {'access': {'fake': 'fake'}}
req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id)
res = self.controller._allow_access(req, id, body)
req = fakes.HTTPRequest.blank(
'/v2/tenant1/shares/%s/action' % id, version="2.7")
res = self.controller.allow_access(req, id, body)
self.assertEqual(expected, res)
@ddt.data(
@@ -1039,10 +1040,41 @@
)
def test_allow_access_error(self, access):
id = 'fake_share_id'
body = {'os-allow_access': access}
req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id)
body = {'allow_access': access}
req = fakes.HTTPRequest.blank('/v2/tenant1/shares/%s/action' % id,
version="2.7")
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller._allow_access, req, id, body)
self.controller.allow_access, req, id, body)
@ddt.unpack
@ddt.data(
{'exc': None, 'access_to': 'alice', 'version': '2.13'},
{'exc': webob.exc.HTTPBadRequest, 'access_to': 'alice',
'version': '2.11'}
)
def test_allow_access_ceph(self, exc, access_to, version):
share_id = "fake_id"
self.mock_object(share_api.API,
'allow_access',
mock.Mock(return_value={'fake': 'fake'}))
req = fakes.HTTPRequest.blank(
'/v2/shares/%s/action' % share_id, version=version)
body = {'allow_access':
{
'access_type': 'cephx',
'access_to': access_to,
'access_level': 'rw'
}}
if exc:
self.assertRaises(exc, self.controller.allow_access, req, share_id,
body)
else:
expected = {'access': {'fake': 'fake'}}
res = self.controller.allow_access(req, share_id, body)
self.assertEqual(expected, res)
def test_deny_access(self):
def _stub_deny_access(*args, **kwargs):

21
manila/tests/fake_share.py

@@ -16,6 +16,7 @@
import datetime
import uuid
from manila.db.sqlalchemy import models
from manila.tests.db import fakes as db_fakes
@@ -27,17 +28,37 @@ def fake_share(**kwargs):
'share_proto': 'fake_proto',
'share_network_id': 'fake share network id',
'share_server_id': 'fake share server id',
'share_type_id': 'fake share type id',
'export_location': 'fake_location:/fake_share',
'project_id': 'fake_project_uuid',
'availability_zone': 'fake_az',
'snapshot_support': 'True',
'replication_type': None,
'is_busy': False,
'consistency_group_id': 'fakecgid',
}
share.update(kwargs)
return db_fakes.FakeModel(share)
def fake_share_instance(base_share=None, **kwargs):
if base_share is None:
share = fake_share()
else:
share = base_share
share_instance = {
'share_id': share['id'],
'id': "fakeinstanceid",
'status': "active",
}
for attr in models.ShareInstance._proxified_properties:
share_instance[attr] = getattr(share, attr, None)
return db_fakes.FakeModel(share_instance)
def fake_snapshot(**kwargs):
snapshot = {
'id': 'fakesnapshotid',

0
manila/tests/share/drivers/cephfs/__init__.py

374
manila/tests/share/drivers/cephfs/test_cephfs_native.py

@@ -0,0 +1,374 @@
# Copyright (c) 2016 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_utils import units
from manila.common import constants
from manila import context
import manila.exception as exception
from manila.share import configuration
from manila.share.drivers.cephfs import cephfs_native
from manila.share import share_types
from manila import test
from manila.tests import fake_share
class MockVolumeClientModule(object):
"""Mocked up version of ceph's VolumeClient interface."""
class VolumePath(object):
"""Copy of VolumePath from CephFSVolumeClient."""
def __init__(self, group_id, volume_id):
self.group_id = group_id
self.volume_id = volume_id
def __eq__(self, other):
return (self.group_id == other.group_id
and self.volume_id == other.volume_id)
def __str__(self):
return "{0}/{1}".format(self.group_id, self.volume_id)
class CephFSVolumeClient(mock.Mock):
mock_used_bytes = 0
def __init__(self, *args, **kwargs):
mock.Mock.__init__(self, spec=[
"connect", "disconnect",
"create_snapshot_volume", "destroy_snapshot_volume",
"create_group", "destroy_group",
"delete_volume", "purge_volume",
"deauthorize", "evict", "set_max_bytes",
"destroy_snapshot_group", "create_snapshot_group",
"disconnect"
])
self.create_volume = mock.Mock(return_value={
"mount_path": "/foo/bar"
})
self.get_mon_addrs = mock.Mock(return_value=["1.2.3.4", "5.6.7.8"])
self.authorize = mock.Mock(return_value={"auth_key": "abc123"})
self.get_used_bytes = mock.Mock(return_value=self.mock_used_bytes)
self.rados = mock.Mock()
self.rados.get_cluster_stats = mock.Mock(return_value={
"kb": 1000,
"kb_avail": 500
})
class CephFSNativeDriverTestCase(test.TestCase):
"""Test the CephFS native driver.
This is a very simple driver that mainly calls through to the
CephFSVolumeClient interface, so the tests validate that the Manila
driver calls map to the appropriate CephFSVolumeClient calls.
"""
def setUp(self):
super(CephFSNativeDriverTestCase, self).setUp()
self.fake_conf = configuration.Configuration(None)
self._context = context.get_admin_context()
self._share = fake_share.fake_share(share_proto='CEPHFS')
self.fake_conf.set_default('driver_handles_share_servers', False)
self.mock_object(cephfs_native, "ceph_volume_client",
MockVolumeClientModule)
self.mock_object(cephfs_native, "ceph_module_found", True)
self._driver = (
cephfs_native.CephFSNativeDriver(configuration=self.fake_conf))
self.mock_object(share_types, 'get_share_type_extra_specs',
mock.Mock(return_value={}))
def test_create_share(self):
expected_export_locations = {
'path': '1.2.3.4,5.6.7.8:/foo/bar',
'is_admin_only': False,
'metadata': {},
}
export_locations = self._driver.create_share(self._context,
self._share)
self.assertEqual(expected_export_locations, export_locations)
self._driver._volume_client.create_volume.assert_called_once_with(
self._driver._share_path(self._share),
size=self._share['size'] * units.Gi,
data_isolated=False)
def test_ensure_share(self):
self._driver.ensure_share(self._context,
self._share)
self._driver._volume_client.create_volume.assert_called_once_with(
self._driver._share_path(self._share),
size=self._share['size'] * units.Gi,
data_isolated=False)
def test_create_data_isolated(self):
self.mock_object(share_types, 'get_share_type_extra_specs',
mock.Mock(return_value={"cephfs:data_isolated": True})
)
self._driver.create_share(self._context, self._share)
self._driver._volume_client.create_volume.assert_called_once_with(
self._driver._share_path(self._share),
size=self._share['size'] * units.Gi,
data_isolated=True)
def test_delete_share(self):
self._driver.delete_share(self._context, self._share)
self._driver._volume_client.delete_volume.assert_called_once_with(
self._driver._share_path(self._share),
data_isolated=False)
self._driver._volume_client.purge_volume.assert_called_once_with(
self._driver._share_path(self._share),
data_isolated=False)
def test_delete_data_isolated(self):
self.mock_object(share_types, 'get_share_type_extra_specs',
mock.Mock(return_value={"cephfs:data_isolated": True})
)
self._driver.delete_share(self._context, self._share)
self._driver._volume_client.delete_volume.assert_called_once_with(
self._driver._share_path(self._share),
data_isolated=True)
self._driver._volume_client.purge_volume.assert_called_once_with(
self._driver._share_path(self._share),
data_isolated=True)
def test_allow_access(self):
access_rule = {
'access_level': constants.ACCESS_LEVEL_RW,
'access_type': 'cephx',
'access_to': 'alice'
}
self._driver._allow_access(self._context, self._share, access_rule)
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice")
def test_allow_access_wrong_type(self):
self.assertRaises(exception.InvalidShareAccess,
self._driver._allow_access,
self._context, self._share, {
'access_level': constants.ACCESS_LEVEL_RW,
'access_type': 'RHUBARB',
'access_to': 'alice'
})
def test_allow_access_ro(self):
self.assertRaises(exception.InvalidShareAccessLevel,
self._driver._allow_access,
self._context, self._share, {
'access_level': constants.ACCESS_LEVEL_RO,
'access_type': 'cephx',
'access_to': 'alice'
})
def test_deny_access(self):
self._driver._deny_access(self._context, self._share, {
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
})
self._driver._volume_client.deauthorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice")
def test_update_access_add_rm(self):
alice = {
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
}
bob = {
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'bob'
}
self._driver.update_access(self._context, self._share,
access_rules=[alice],
add_rules=[alice],
delete_rules=[bob])
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice")
self._driver._volume_client.deauthorize.assert_called_once_with(
self._driver._share_path(self._share),
"bob")
def test_update_access_all(self):
alice = {
'access_level': 'rw',
'access_type': 'cephx',
'access_to': 'alice'
}
self._driver.update_access(self._context, self._share,
access_rules=[alice], add_rules=[],
delete_rules=[])
self._driver._volume_client.authorize.assert_called_once_with(
self._driver._share_path(self._share),
"alice")
def test_extend_share(self):
new_size_gb = self._share['size'] * 2
new_size = new_size_gb * units.Gi
self._driver.extend_share(self._share, new_size_gb, None)
self._driver._volume_client.set_max_bytes.assert_called_once_with(
self._driver._share_path(self._share),
new_size)
def test_shrink_share(self):
new_size_gb = self._share['size'] * 0.5
new_size = new_size_gb * units.Gi
self._driver.shrink_share(self._share, new_size_gb, None)
self._driver._volume_client.get_used_bytes.assert_called_once_with(
self._driver._share_path(self._share))
self._driver._volume_client.set_max_bytes.assert_called_once_with(
self._driver._share_path(self._share),
new_size)
def test_shrink_share_full(self):
"""That shrink fails when share is too full."""
new_size_gb = self._share['size'] * 0.5
# Pretend to be full up
vc = MockVolumeClientModule.CephFSVolumeClient
vc.mock_used_bytes = (units.Gi * self._share['size'])
self.assertRaises(exception.ShareShrinkingPossibleDataLoss,
self._driver.shrink_share,
self._share, new_size_gb, None)
self._driver._volume_client.set_max_bytes.assert_not_called()
def test_create_snapshot(self):
self._driver.create_snapshot(self._context,
{
"share": self._share,
"name": "snappy1"
},
None)
(self._driver._volume_client.create_snapshot_volume
.assert_called_once_with(
self._driver._share_path(self._share),
"snappy1"))
def test_delete_snapshot(self):
self._driver.delete_snapshot(self._context,
{
"share": self._share,
"name": "snappy1"
},
None)
(self._driver._volume_client.destroy_snapshot_volume
.assert_called_once_with(
self._driver._share_path(self._share),
"snappy1"))
def test_create_consistency_group(self):
self._driver.create_consistency_group(self._context, {"id": "grp1"},
None)
self._driver._volume_client.create_group.assert_called_once_with(
"grp1")
def test_delete_consistency_group(self):
self._driver.delete_consistency_group(self._context, {"id": "grp1"},
None)
self._driver._volume_client.destroy_group.assert_called_once_with(
"grp1")
def test_create_cgsnapshot(self):
self._driver.create_cgsnapshot(self._context, {
'consistency_group_id': 'cgid',
'id': 'snapid'
})
(self._driver._volume_client.create_snapshot_group.
assert_called_once_with("cgid", "snapid"))
def test_delete_cgsnapshot(self):
self._driver.delete_cgsnapshot(self._context, {
'consistency_group_id': 'cgid',
'id': 'snapid'
})
(self._driver._volume_client.destroy_snapshot_group.
assert_called_once_with("cgid", "snapid"))
def test_delete_driver(self):
# Create share to prompt volume_client construction
self._driver.create_share(self._context,
self._share)
vc = self._driver._volume_client
del self._driver
vc.disconnect.assert_called_once_with()
def test_delete_driver_no_client(self):
self.assertIsNone(self._driver._volume_client)
del self._driver
def test_connect_noevict(self):
# When acting as "admin", driver should skip evicting
self._driver.configuration.local_conf.set_override('cephfs_auth_id',
"admin")
self._driver.create_share(self._context,
self._share)
vc = self._driver._volume_client
vc.connect.assert_called_once_with(premount_evict=None)
def test_update_share_stats(self):
# Touch the lazy property so the volume client is instantiated
self._driver._volume_client
self._driver._update_share_stats()
result = self._driver._stats
self.assertEqual("CEPHFS", result['storage_protocol'])
def test_module_missing(self):
# Patch via mock_object so module state is restored after the test
self.mock_object(cephfs_native, "ceph_module_found", False)
self.mock_object(cephfs_native, "ceph_volume_client", None)
self.assertRaises(exception.ManilaException,
self._driver.create_share,
self._context,
self._share)

5
manila_tempest_tests/config.py

@@ -36,7 +36,7 @@ ShareGroup = [
help="The minimum api microversion is configured to be the "
"value of the minimum microversion supported by Manila."),
cfg.StrOpt("max_api_microversion",
default="2.12",
default="2.13",
help="The maximum api microversion is configured to be the "
"value of the latest microversion supported by Manila."),
cfg.StrOpt("region",
@@ -73,6 +73,9 @@ ShareGroup = [
cfg.ListOpt("enable_cert_rules_for_protocols",
default=["glusterfs", ],
help="Protocols that should be covered with cert rule tests."),
cfg.ListOpt("enable_cephx_rules_for_protocols",
default=["cephfs", ],
help="Protocols to be covered with cephx rule tests."),
cfg.StrOpt("username_for_user_rules",
default="Administrator",
help="Username, that will be used in user tests."),

4
manila_tempest_tests/tests/api/admin/test_share_manage.py

@@ -196,3 +196,7 @@ class ManageGLUSTERFSShareTest(ManageNFSShareTest):
class ManageHDFSShareTest(ManageNFSShareTest):
protocol = 'hdfs'
class ManageCephFSShareTest(ManageNFSShareTest):
protocol = 'cephfs'

2
manila_tempest_tests/tests/api/base.py

@@ -86,7 +86,7 @@ class BaseSharesTest(test.BaseTestCase):
"""Base test case class for all Manila API tests."""
force_tenant_isolation = False
protocols = ["nfs", "cifs", "glusterfs", "hdfs"]
protocols = ["nfs", "cifs", "glusterfs", "hdfs", "cephfs"]
# Will be cleaned up in resource_cleanup
class_resources = []

51
manila_tempest_tests/tests/api/test_rules.py

@@ -358,6 +358,41 @@ class ShareCertRulesForGLUSTERFSTest(base.BaseSharesTest):
rule_id=rule["id"], share_id=self.share['id'], version=version)
@ddt.ddt
class ShareCephxRulesForCephFSTest(base.BaseSharesTest):
protocol = "cephfs"
@classmethod
def resource_setup(cls):
super(ShareCephxRulesForCephFSTest, cls).resource_setup()
if (cls.protocol not in CONF.share.enable_protocols or
cls.protocol not in
CONF.share.enable_cephx_rules_for_protocols):
msg = ("Cephx rule tests for %s protocol are disabled." %
cls.protocol)
raise cls.skipException(msg)
cls.share = cls.create_share(cls.protocol)
cls.access_type = "cephx"
# Provide access to a client identified by a cephx auth id.
cls.access_to = "bob"
@test.attr(type=["gate", ])
@ddt.data("alice", "alice_bob", "alice bob")
def test_create_delete_cephx_rule(self, access_to):
rule = self.shares_v2_client.create_access_rule(
self.share["id"], self.access_type, access_to)
self.assertEqual('rw', rule['access_level'])
for key in ('deleted', 'deleted_at', 'instance_mappings'):
self.assertNotIn(key, rule.keys())
self.shares_v2_client.wait_for_access_rule_status(
self.share["id"], rule["id"], "active")
self.shares_v2_client.delete_access_rule(self.share["id"], rule["id"])
self.shares_v2_client.wait_for_resource_deletion(
rule_id=rule["id"], share_id=self.share['id'])
@ddt.ddt
class ShareRulesTest(base.BaseSharesTest):
@@ -369,6 +404,8 @@ class ShareRulesTest(base.BaseSharesTest):
any(p in CONF.share.enable_user_rules_for_protocols
for p in cls.protocols) or
any(p in CONF.share.enable_cert_rules_for_protocols
for p in cls.protocols) or
any(p in CONF.share.enable_cephx_rules_for_protocols
for p in cls.protocols)):
cls.message = "Rule tests are disabled"
raise cls.skipException(cls.message)
@@ -384,12 +421,21 @@ class ShareRulesTest(base.BaseSharesTest):
cls.protocol = CONF.share.enable_cert_rules_for_protocols[0]
cls.access_type = "cert"
cls.access_to = "client3.com"
elif CONF.share.enable_cephx_rules_for_protocols:
cls.protocol = CONF.share.enable_cephx_rules_for_protocols[0]
cls.access_type = "cephx"
cls.access_to = "alice"
cls.shares_v2_client.share_protocol = cls.protocol
cls.share = cls.create_share()
@test.attr(type=["gate", ])
@ddt.data('1.0', '2.9', LATEST_MICROVERSION)
def test_list_access_rules(self, version):
if (utils.is_microversion_lt(version, '2.13') and
CONF.share.enable_cephx_rules_for_protocols):
msg = ("API version %s does not support cephx access type, "
"need version greater than 2.13." % version)
raise self.skipException(msg)
# create rule
if utils.is_microversion_eq(version, '1.0'):
@@ -447,6 +493,11 @@ class ShareRulesTest(base.BaseSharesTest):
@test.attr(type=["gate", ])
@ddt.data('1.0', '2.9', LATEST_MICROVERSION)
def test_access_rules_deleted_if_share_deleted(self, version):
if (utils.is_microversion_lt(version, '2.13') and
CONF.share.enable_cephx_rules_for_protocols):
msg = ("API version %s does not support cephx access type, "
"need version greater than 2.13." % version)
raise self.skipException(msg)
# create share
share = self.create_share()

60
manila_tempest_tests/tests/api/test_rules_negative.py

@@ -19,6 +19,7 @@ from tempest import test
from tempest_lib import exceptions as lib_exc
import testtools
from manila_tempest_tests import share_exceptions
from manila_tempest_tests.tests.api import base
from manila_tempest_tests import utils
@@ -316,6 +317,48 @@ class ShareCertRulesForGLUSTERFSNegativeTest(base.BaseSharesTest):
access_to="fakeclient2.com")
@ddt.ddt
class ShareCephxRulesForCephFSNegativeTest(base.BaseSharesTest):
protocol = "cephfs"
@classmethod
def resource_setup(cls):
super(ShareCephxRulesForCephFSNegativeTest, cls).resource_setup()
if not (cls.protocol in CONF.share.enable_protocols and
cls.protocol in CONF.share.enable_cephx_rules_for_protocols):
msg = ("CEPHX rule tests for %s protocol are disabled" %
cls.protocol)
raise cls.skipException(msg)
# create share
cls.share = cls.create_share(cls.protocol)
cls.access_type = "cephx"
cls.access_to = "david"
@test.attr(type=["negative", "gate", ])
@ddt.data('jane.doe', u"bj\u00F6rn")
def test_create_access_rule_cephx_with_invalid_cephx_id(self, access_to):
self.assertRaises(lib_exc.BadRequest,
self.shares_v2_client.create_access_rule,
self.share["id"], self.access_type, access_to)
@test.attr(type=["negative", "gate", ])
def test_create_access_rule_cephx_with_wrong_level(self):
self.assertRaises(lib_exc.BadRequest,
self.shares_v2_client.create_access_rule,
self.share["id"], self.access_type, self.access_to,
access_level="su")
@test.attr(type=["negative", "gate", ])
def test_create_access_rule_cephx_with_unsupported_access_level_ro(self):
rule = self.shares_v2_client.create_access_rule(
self.share["id"], self.access_type, self.access_to,
access_level="ro")
self.assertRaises(
share_exceptions.AccessRuleBuildErrorException,
self.shares_client.wait_for_access_rule_status,
self.share['id'], rule['id'], "active")
@ddt.ddt
class ShareRulesNegativeTest(base.BaseSharesTest):
# Tests independent from rule type and share protocol
@@ -328,6 +371,8 @@ class ShareRulesNegativeTest(base.BaseSharesTest):
any(p in CONF.share.enable_user_rules_for_protocols
for p in cls.protocols) or
any(p in CONF.share.enable_cert_rules_for_protocols
for p in cls.protocols) or
any(p in CONF.share.enable_cephx_rules_for_protocols
for p in cls.protocols)):
cls.message = "Rule tests are disabled"
raise cls.skipException(cls.message)
@@ -337,9 +382,21 @@ class ShareRulesNegativeTest(base.BaseSharesTest):
# create snapshot
cls.snap = cls.create_snapshot_wait_for_active(cls.share["id"])
def skip_if_cephx_access_type_not_supported_by_client(self, client):
if client == 'shares_client':
version = '1.0'
else:
version = LATEST_MICROVERSION
if (CONF.share.enable_cephx_rules_for_protocols and
utils.is_microversion_lt(version, '2.13')):
msg = ("API version %s does not support cephx access type, "
"need version greater than 2.13." % version)
raise self.skipException(msg)
@test.attr(type=["negative", "gate", ])
@ddt.data('shares_client', 'shares_v2_client')
def test_delete_access_rule_with_wrong_id(self, client_name):
self.skip_if_cephx_access_type_not_supported_by_client(client_name)
self.assertRaises(lib_exc.NotFound,
getattr(self, client_name).delete_access_rule,
self.share["id"], "wrong_rule_id")
@@ -347,6 +404,7 @@ class ShareRulesNegativeTest(base.BaseSharesTest):
@test.attr(type=["negative", "gate", ])
@ddt.data('shares_client', 'shares_v2_client')
def test_create_access_rule_ip_with_wrong_type(self, client_name):
self.skip_if_cephx_access_type_not_supported_by_client(client_name)
self.assertRaises(lib_exc.BadRequest,
getattr(self, client_name).create_access_rule,
self.share["id"], "wrong_type", "1.2.3.4")
@@ -354,6 +412,7 @@ class ShareRulesNegativeTest(base.BaseSharesTest):
@test.attr(type=["negative",