When we delete a share instance, in addition to erasing the data
in the share, we should disconnect the client's mount point to
prevent further data from being written to it.
Closes-Bug: #1886010
Change-Id: I7a334fb895669cc807a288e6aefe62154a89a7e4
(cherry picked from commit 9d44ba0b6a)
This patch enables the creation of a share from a snapshot,
specifying another pool or backend. In the scheduler, a
new filter and weigher were implemented to consider
this operation when the backend supports it. Also, a new
field called 'progress' was added to the share and share
instance. The 'progress' field indicates the completion of
the create-share-from-snapshot operation, as a percentage.
Finally, a new periodic task was added to regularly
check the status of shares created this way.
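As a rough sketch of the periodic check only -- the get_share_status hook and the helpers here are assumptions for illustration, not the exact manila API:
    def check_creating_from_snapshot_shares(driver, pending_shares, update_db):
        """Poll the driver and record per-share progress (sketch only)."""
        for share in pending_shares:
            result = driver.get_share_status(share)   # assumed driver hook
            update_db(share['id'],
                      {'progress': result.get('progress', '0%')})
            if result.get('status') == 'available':
                update_db(share['id'], {'status': 'available'})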
Partially-implements: bp create-share-from-snapshot-in-another-pool-or-backend
DocImpact
Change-Id: Iab13a0961eb4a387a502246e5d4b79bc9046e04b
Co-authored-by: carloss <ces.eduardo98@gmail.com>
Co-authored-by: dviroel <viroel@gmail.com>
Fix:
W605 invalid escape sequence
This is the final change I plan for hacking; the remaining problems
need further investigation by the manila team and a decision on
whether and how to solve them.
Change-Id: I73d73e044eaaf412bf7ace358a3f07c8d269d6cf
Fix:
E731 do not assign a lambda expression, use a def
I just marked the lambdas with noqa.
Also fix other problems found by hacking in the changed files.
Change-Id: I4e47670f5a96e61fba617e4cb9478958f7089711
Some of the available checks are disabled by default, like:
[H106] Don't put vim configuration in source files
[H203] Use assertIs(Not)None to check for None
[H904] Use ',' instead of '%': string interpolation should be
       delayed and handled by the logging code, rather than
       being done at the point of the logging call.
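For illustration, a minimal sketch of what the H203 and H904 checks ask for (the test and log message are made up):
    import logging
    import unittest

    LOG = logging.getLogger(__name__)

    class ExampleTest(unittest.TestCase):
        def test_h203_and_h904(self):
            result = None
            # H203: use the dedicated assertion instead of
            # assertEqual(None, result).
            self.assertIsNone(result)
            # H904: pass the argument to the logger instead of
            # interpolating with '%', so formatting is deferred to the
            # logging code.
            LOG.debug("got result %s", result)

    if __name__ == '__main__':
        unittest.main()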
Change-Id: Ie985fcf78997a86d41e40eacbb4a5ace8592a348
Some configuration options were accepting both IP addresses
and hostnames. Since there was no specific oslo opt type to
support this, we were using ``StrOpt``. The change [1] that
added support for the ``HostAddressOpt`` type was merged in Ocata
and became available for use with oslo.config version 3.22.
This patch changes the opt type of these configuration options
to the more appropriate ``HostAddressOpt``.
[1] I77bdb64b7e6e56ce761d76696bc4448a9bd325eb
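A minimal sketch of the conversion, assuming an illustrative option name (backend_host is not necessarily an option touched by this patch):
    from oslo_config import cfg

    # Before: cfg.StrOpt('backend_host', ...) accepted anything, since
    # StrOpt performs no validation at all.
    opts = [
        cfg.HostAddressOpt('backend_host',
                           default='localhost',
                           help='Hostname or IP address of the backend.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts)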
TrivialFix
Change-Id: I44ba478ff14a6184434dd030efd9b7fa92458c7a
Remove redundant 'error' parameter in LOG.exception,
and replace some LOG.error with LOG.exception.
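A hedged before/after illustration (the function and message are made up):
    import logging

    LOG = logging.getLogger(__name__)

    def remove_export(path):
        try:
            raise OSError('stand-in for the real failure')
        except OSError:
            # Before: LOG.error('Failed to remove export %s: %s', path, e)
            # After: LOG.exception logs the traceback itself, so the caught
            # exception need not be passed in as an extra argument.
            LOG.exception('Failed to remove export %s', path)

    remove_export('/shares/share-1234')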
Change-Id: I46c14014c9dc38da9ea3b8ae98c9bd2aafe478d7
Change http://review.gluster.org/14931,
debuting in GlusterFS 3.7.14, has changed the
XML output emitted by the gluster command
line interface. Here we implement parsing
for the new variant as well.
Change-Id: Ia9f340f1d56c95d5ebf5577df6aae9d708a026c0
Closes-Bug: 1609858
As Ganesha has no direct way to
bulk-process a set of rules, in update_access()
we just call down to the old allow/deny methods
iteratively.
However, they got underscore prefixed:
{allow,deny}_access -> _{allow,deny}_access
The helper's update_access method has the
update_access(base_path, share, add_rules, delete_rules, recovery_mode)
interface. Drivers using ganesha.NASHelperBase derived
helpers and implementing the
update_access(..., access_rules, add_rules, delete_rules, ...)
interface should decide on recovery mode based on the
access_rules content and pass down either access_rules or
add_rules to the helper's update_access as add_rules
(respectively in recovery and normal mode), also setting the
recovery_mode flag appropriately. The driver is also
responsible for checking the validity of the rules, for which
we add support through the NASHelperBase
supported_access_types
supported_access_levels
attributes and the utils._get_valid_access_rules utility
method.
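A hedged sketch of the recovery-mode decision described above; the class and its wiring are illustrative, not the verbatim driver code:
    class ExampleGaneshaDriverMixin(object):
        supported_access_types = ('ip',)
        supported_access_levels = ('rw',)

        def __init__(self, helper):
            self._helper = helper

        def update_access(self, base_path, share, access_rules, add_rules,
                          delete_rules):
            # Recovery mode: no incremental changes were given, so the
            # full rule set is replayed as additions.
            recovery_mode = not (add_rules or delete_rules)
            rules_to_add = access_rules if recovery_mode else add_rules
            self._helper.update_access(base_path, share,
                                       add_rules=rules_to_add,
                                       delete_rules=delete_rules,
                                       recovery_mode=recovery_mode)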
Co-Authored-By: Ramana Raja <rraja@redhat.com>
Implements bp ganesha-update-access
Change-Id: Iea3a3ce3db44df792b5cf516979ff79c61d5b182
This patch removes unused global LOG variable
and logging imports from various manila modules,
and adds a script to be run as part of pep8 that
will ensure that these do not creep back into
the codebase.
Change-Id: I162c4b2478df45aaf6ea8009b102d6de1a4e309e
In delete_share, if the private_storage entry of the share is
missing, recognize this as an indication of a botched
creation and return immediately so that the runtime
can go on with evicting the dangling share entry.
Change-Id: I76dabe0acc0b67ea2b03e77eb0743772ef25579d
Closes-bug: #1554290
Show the proper execution failure summary
rendered by ProcessExecutionError.__str__
instead of extracting partial information
from the ProcessExecutionError instance by
ad hoc means.
Change-Id: I8dcbfd301752c24686cb6ca7bd2505dc2d5c0464
Closes-bug: #1554607
The glusterfs.GlusterNFSVolHelper class does not
customize instantiation, so it does not need to
maintain an __init__ function.
Change-Id: I6549cb3c27cfaa046f6685d4774a425d99b8fe6b
Closes-bug: #1555157
GlusterManager can be instantiated in two ways:
- with an address string like "johndoe@example.com:/vol/dir"
- with a component dict like
{'user': 'johndoe',
'host': 'example.com',
'volume': 'vol',
'path': 'dir'}
If the instance is created with an address string, it will be parsed
with a regexp, which facilitates the validation of certain component
values.
Change the code such that if the GlusterManager is created from a
component dict, the dict is converted into the address-string format
and then parsed.
Running it through the parser ensures it meets the same criteria as
string addresses, without a partial reimplementation of the parsing.
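A hedged sketch of the idea; the regexp below is a simplification, not the driver's actual pattern:
    import re

    # Roughly the shape of a GlusterFS address:
    #   [<user>@]<host>:/<volume>[/<path>]
    ADDRESS_RE = re.compile(
        r'^((?P<user>[^@]+)@)?(?P<host>[^:]+):/(?P<volume>[^/]+)'
        r'(/(?P<path>.+))?$')

    def compose_address(components):
        """Turn a component dict into the canonical address string."""
        user = components.get('user')
        prefix = '%s@' % user if user else ''
        address = '%s%s:/%s' % (prefix, components['host'],
                                components['volume'])
        if components.get('path'):
            address += '/' + components['path']
        return address

    # Both construction paths now go through the same parser:
    address = compose_address({'user': 'johndoe', 'host': 'example.com',
                               'volume': 'vol', 'path': 'dir'})
    assert ADDRESS_RE.match(address).group('volume') == 'vol'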
Closes-Bug: #1496733
Change-Id: I9128ab086d63b72f33a6e879cb038a6759b56ef9
This function is not specific to GlusterFS interaction.
Partially implements bp gluster-code-cleanup
Change-Id: I96ef68f13287d6654b65744df67880ab9deccb3f
Having a driver in the top level of the drivers
directory was against source tree placement
conventions. It was especially weird because
all other GlusterFS related code lived in
a dedicated subdirectory.
Partially implements bp gluster-code-cleanup
Change-Id: I52d717ab6396454c2e7a9b62aa00ea443b26d5cc
With Ganesha, the actual location where the storage backend entities
are exported includes the id of the access rule on behalf of which
the export is made; see
http://docs.openstack.org/developer/manila/devref/ganesha.html#known-issues
https://review.openstack.org/#/c/249998/2/doc/source/devref/ganesha.rst
Thus the export location reported through the Manila API/UI
is inaccurate, and the current Manila interface is not capable of
a faithful representation of the actual export mechanism
(as the exposed export location values depend only on the share but
not on the access rules).
We can at least try to find partial representations that give
more of a clue than an outright false export location value.
With this patch we make such an attempt, using export locations
of the format
example.com:/share-6c0262e8-f989-4df1-9f01-c758b0d72428--<access-id>
Change-Id: I029a28341225087101920c3792e9b2fb4beed081
Closes-Bug: #1501178
GlusterFS has two kinds of options:
- regular ones, which form a hardcoded set, and whose names
  are verified by "gluster volume get"
- user ones, whose names match user.* -- these are
  arbitrarily named, are ignored by "gluster volume get" and
  are listed in "gluster volume info" output
So far we used "gluster volume info" universally, but that,
apart from being cumbersome for regular options, is also
incorrect, as it cannot distinguish an unset option name from
an undefined one (querying the former should be treated as OK,
querying the latter as an error).
- implement querying of regular options with "gluster volume
get" (accepting empty response)
- implement querying of user options with searching "gluster vol
info" data
- verify operations on the XML tree, make tacit XML layout
assumptions explicit
- implement optional Boolean coercion of values
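An illustrative sketch of querying a regular option from --xml output and coercing a Boolean value; the XML shape shown is a simplification of the real CLI output:
    from xml.etree import ElementTree

    SAMPLE_XML = """<cliOutput>
      <volGetopts>
        <Opt><Name>nfs.export-volumes</Name><Value>on</Value></Opt>
      </volGetopts>
    </cliOutput>"""

    BOOLEAN_MAP = {'on': True, 'true': True, 'off': False, 'false': False}

    def get_vol_option_from_xml(xml_text, option, boolean=False):
        """Extract an option value, verifying the XML layout we rely on."""
        root = ElementTree.fromstring(xml_text)
        opts = root.find('volGetopts')
        if opts is None:
            raise ValueError('unexpected XML layout: no <volGetopts>')
        for opt in opts.findall('Opt'):
            if opt.findtext('Name') == option:
                value = opt.findtext('Value')
                if boolean and value is not None:
                    return BOOLEAN_MAP.get(value.lower(), value)
                return value
        # An empty result is acceptable: the regular option is simply unset.
        return None

    print(get_vol_option_from_xml(SAMPLE_XML, 'nfs.export-volumes',
                                  boolean=True))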
Partially implements bp gluster-code-cleanup
Change-Id: I9e0843b88cd1a1668fe48c6979029c012dcbaa13
GlusterManager:
- add various error policies to gluster_call
- add set_vol_option method, with optional error tolerance
(making use of the respective gluster call error policy),
with support for Boolean options
- rename the get_gluster_vol_option method to get_vol_option
  for uniform nomenclature and simplicity (the "gluster" in
  the method name was redundant, as the class name already
  indicates the Gluster scope)
Partially implements bp gluster-code-cleanup
Change-Id: I02a1d591d36c6a64eea55ed64cf715f94c1fd1c8
Replacing dict.iteritems()/.itervalues() with
six.iteritems(dict)/six.itervalues(dict) was preferred in the past,
but a later discussion suggested avoiding six for this [1].
The overhead of creating a temporary list on Python 2 is negligible.
[1]http://lists.openstack.org/pipermail/openstack-dev/2015-June/066391.html
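Illustratively, the preferred pattern:
    share_states = {'share-a': 'available', 'share-b': 'error'}

    # Plain .items()/.values() work on both Python 2 and 3; on Python 2
    # the temporary list they create is a negligible cost.
    for share_id, state in share_states.items():
        print(share_id, state)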
Partially-implements blueprint py3-compatibility
Change-Id: Ia2298733188b3d964d43a547504ede2ebeaba9bd
So far, GlusterManager.gluster_call was directly calling
into the given execution function, whose expected
error type is ProcessExecutionError; yet in practically
all use cases we wanted to raise a GlusterfsException upwards,
so every use case was individually coercing the
ProcessExecutionError into a GlusterfsException. This produced
a huge amount of excess boilerplate code.
Here we include the coercion in the definition of gluster_call
and clean out the excess code.
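A hedged sketch of the pattern; the exception classes below are stand-ins for manila's own, and the wrapper is simplified:
    class ProcessExecutionError(Exception):
        """Stand-in for the error raised by the execution function."""

    class GlusterfsException(Exception):
        """Stand-in for the driver-level exception."""

    def make_gluster_call(execute):
        """Wrap an execute function so callers only see GlusterfsException."""
        def gluster_call(*args, **kwargs):
            try:
                return execute('gluster', *args, **kwargs)
            except ProcessExecutionError as exc:
                raise GlusterfsException('gluster call failed: %s' % exc)
        return gluster_call

    def fake_execute(*args, **kwargs):
        raise ProcessExecutionError('volume does not exist')

    # The per-call-site try/except boilerplate disappears:
    try:
        make_gluster_call(fake_execute)('volume', 'info', 'nonexistent')
    except GlusterfsException as exc:
        print(exc)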
Partially implements bp gluster-code-cleanup
Change-Id: I0ad0478393df2cbb6d077363ebd6b91ceed21679
With the volume layout, the volume we use to back a share can
be pre-created (part of the volume pool provided for Manila),
or can be created by Manila (that happens when a share is created
from a snapshot, in which case the volume is obtained by performing
a 'snapshot clone' gluster operation).
In terms of resource management, pre-created volumes are owned
by the pool, and Manila-cloned ones are owned by Manila. So
far we kept all the volumes upon giving up their use (i.e. deleting
the share they belonged to) -- we only ran a cleanup routine on them.
However, that's the appropriate action only for the pool-owned ones;
the ones we own should instead be removed to avoid
a resource leak. This patch implements this practice by marking
Manila-owned volumes with a gluster user option.
Closes-Bug: #1506298
Change-Id: I165cc225cb7aca44785ed9ef60f459b8d46af564
Also remove the unnecessary fake _ function
from the test suite so that other possible
occurrences of this kind of bug will be
detected by unit tests.
Change-Id: I8292fceda02442f83210e98cee4619c81bc181e5
Closes-Bug: #1506297
With the volume layout, the share-volume association was kept solely
in the manila DB. That is not robust enough, first and
foremost because upon starting the service, the manager
will indicate existing share associations only for those
volumes whose shares are in the 'available' state.
We need to know, though, whether a volume is in a pristine state or
not, regardless of the state of its share. To this end,
we introduce the 'user.manila-share' GlusterFS volume option
to indicate the manila share association -- made possible by
GlusterFS allowing any user-defined option to exist in the
'user' option namespace -- and this indicator remains there
until we explicitly drop it in `delete_share`. (The value
of 'user.manila-share' is the id of the share owning the
volume.)
As a beneficial side effect, this change will also provide
insight to the Gluster storage admin about usage of the
Manila volume pool.
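A hedged sketch of the marker's life cycle; the manager class below is a stand-in, only the set_vol_option/get_vol_option names mirror methods mentioned elsewhere in this log:
    class FakeGlusterManager(object):
        """Stand-in for the real GlusterManager."""

        def __init__(self):
            self.options = {}

        def set_vol_option(self, option, value):
            if value is None:
                self.options.pop(option, None)
            else:
                self.options[option] = value

        def get_vol_option(self, option):
            return self.options.get(option)

    gmgr = FakeGlusterManager()
    # create_share claims the volume for the share ...
    gmgr.set_vol_option('user.manila-share', 'share-6c0262e8')
    # ... so a volume is pristine exactly when the marker is absent ...
    assert gmgr.get_vol_option('user.manila-share') == 'share-6c0262e8'
    # ... and delete_share drops the marker, returning the volume to the pool.
    gmgr.set_vol_option('user.manila-share', None)
    assert gmgr.get_vol_option('user.manila-share') is None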
Change-Id: Icb388fd31fb6a992bee7e731f5e84403d5fd1a85
Partial-Bug: #1501670
Actually, all uses of export_location are incorrect -- from the
layout code's point of view, export_location is an arbitrary
opaque value, obtained from self.driver._setup_via_manager, with
which the only legitimate action is to return it from create_share*.
That we use export_location as a dict key in ensure_share is just
a pre-layout relic that survived by virtue of remaining
unnoticed. Now the referred bug forced it out of the dark.
Change-Id: I965dae99486002f00145daff0cd2a848777b5b81
Partial-Bug: #1501670
nfs.export-volumes is required to be set to 'on' on the backing
GlusterFS cluster (nb. it's a cluster-wide setting) if the driver's
setup is
glusterfs_nfs_server_type = Gluster
glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout
Change-Id: I472eb5534110e8b216275ad3f4295f27a81f9815
Closes-Bug: #1499124
(cherry picked from commit d7ff0f4314)
GlusterNFSVolHelper._get_vol_exports should return an
empty list if nfs.rpc-auth-allow is not set on the
backing volume.
Change-Id: I481bcef66efb7f49da7ea9dab27300825388d6d8
Closes-Bug: #1498835
(cherry picked from commit 42eb36b745)
When handling create_share_from_snapshot with the glusterfs
volume layout, we perform a 'snapshot clone' gluster operation
that gives us a new volume (which will be used to back
the new share). 'snapshot clone' does not start the
resultant volume; we have to start it explicitly from Manila.
So far the volume layout code did not take care of this;
rather, 'vol start' was called from the glusterfs-native
driver. That, however, broke all other volume-layout-based
configs (i.e. the glusterfs driver with the volume layout).
Fix this now by doing the 'vol start' call in the volume
layout code.
Change-Id: I63c13ce468a3227f09e381814f55e8c914fbef95
Closes-Bug: #1499347
(cherry picked from commit 4e4c8759a2)
glusterfs and glusterfs_native had distinct
sets of options for specifying ssh credentials
(glusterfs_server_password vs glusterfs_native_server_password
and glusterfs_path_to_private_key vs glusterfs_native_path_to_private_key).
There is no reason to keep these separate; worse, these
options have been moved to layouts in an ad-hoc manner,
breaking certain driver/layout combos whereby the credential
option used by the driver is not provided by the chosen layout
and thus remains undefined.
Fix all the mess by defining glusterfs_server_password and
glusterfs_path_to_private_key in glusterfs.common, and
providing the native variants as deprecated aliases.
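A hedged sketch of the alias mechanism with oslo.config (help texts are paraphrased):
    from oslo_config import cfg

    glusterfs_common_opts = [
        cfg.StrOpt('glusterfs_server_password',
                   secret=True,
                   deprecated_name='glusterfs_native_server_password',
                   help='Remote GlusterFS server node root password.'),
        cfg.StrOpt('glusterfs_path_to_private_key',
                   deprecated_name='glusterfs_native_path_to_private_key',
                   help='Path of Manila host SSH private key file.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(glusterfs_common_opts)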
Change-Id: I48f8673858d2bff95e66bb7e72911e87030fdc0e
Closes-Bug: #1497212
The nfs.export-dir option is not suitable for specifying
whole-volume exports (i.e. exports of the root directory).
Instead, we have to have nfs.export-volumes = on (contrary
to all other scenarios), and control the export via
the nfs.rpc-auth-{allow,reject} options.
So we subclassed GlusterNFSHelper into GlusterNFSVolHelper,
a new helper class that operates with a logic similar to
its parent's, but works with nfs.rpc-auth-{allow,reject}
instead of nfs.export-dir.
The driver code detects whether {allow,deny}_access is performed
with a whole-volume backend; if so, and the helper given in the
configuration is GlusterNFSHelper, then for the handling of
this call it switches over to GlusterNFSVolHelper.
NOTE: What we *don't* do: we don't set nfs.export-volumes to "on";
it's expected to be done by the admin beforehand. The reason
is that nfs.export-volumes is not a per-volume option, but a
per-cluster one, and we don't want to mess up the access control
of the cluster by chance in an over-permissive way. The per-cluster
scope of nfs.export-volumes also implies that using a GlusterFS
backend with the gluster-nfs export mechanism and the volume mapped
layout is an exclusive choice: the cluster can't host it along with
other export / layout schemes.
Change-Id: Ie4e4d03608f7a380cae790d429f88a5482d88ac8
Closes-Bug: #1495910
The basic problem is that determining the
export location of a share should happen in
driver scope (as it depends on the available
export mechanisms, which are implemented by
the driver), while the code did it in layout
scope in ad-hoc ways. Also, in the native driver
the export location was abused to store the
address of the backing GlusterFS resource.
Fix these by
- layout:
- GlusterfsShareDriverBase._setup_via_manager
(the layout -> driver reverse callback) will
provide the export location as return value
- the share object is also passed to
GlusterfsShareDriverBase._setup_via_manager
(besides the gluster manager), because some
driver configs will need it to specify the
export location
- glusterfs-native:
- free the code from using export location
(apart from composing it in _setup_via_manager);
store the address of backing resource in
private storage instead of the export location
field
- glusterfs:
- define the `get_export` method for the export
helpers that provide the export location
- _setup_via_manager determines export location
by calling the helper's get_export
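A hedged sketch of the resulting call flow; class names and signatures are simplified:
    class NFSHelperSketch(object):
        """Illustrative export helper exposing the new get_export method."""

        def get_export(self, share):
            return 'example.com:/%s' % share['name']

    class GlusterfsDriverSketch(object):
        """Illustrative driver: _setup_via_manager returns the export
        location, which the layout simply passes back from create_share*."""

        def __init__(self, helper):
            self.nfs_helper = helper

        def _setup_via_manager(self, gluster_manager, share):
            # ... configure the backing GlusterFS resource here ...
            return self.nfs_helper.get_export(share)

    driver = GlusterfsDriverSketch(NFSHelperSketch())
    print(driver._setup_via_manager(None, {'name': 'share-1234'}))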
Change-Id: Id02e4908a3e8e435c4c51ecacb6576785ac8afb6
Closes-Bug: #1476774
Closes-Bug: #1493080
Previously, a 'ShareSnapshot' object was passed to the driver's API
method by share's manager.py during a create_share_from_snapshot call.
Now, a 'ShareSnapshotInstance' object is passed to the driver during
the same call. The object no longer has the attribute 'share' used by
the driver code, and in its place has the attribute 'share_instance'.
So replace use of 'share' attribute with 'share_instance'.
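Illustratively (the values are made up):
    # Before (ShareSnapshot):          parent = snapshot['share']
    # After (ShareSnapshotInstance):   parent = snapshot['share_instance']
    snapshot = {'id': 'snap-1',
                'share_instance': {'id': 'si-1', 'size': 1}}
    parent_size = snapshot['share_instance']['size']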
Change-Id: Ibea11b33772f24609f9cd3180d61ab7f6307c1b8
Closes-Bug: #1495382
The directory mapped layout has been isolated from
the glusterfs driver's management logic and captured
in a layout class.
This enables the glusterfs driver to work with
several layouts -- at the time of writing, with
both layouts implemented so far:
- directory mapped layout (the old way, also the
default);
- volume mapped layout.
Partially implements bp modular-glusterfs-share-layouts
Change-Id: I832f8ee3defcb9b76b3e7db0e1ce64a52abe09b7
The volume management done by gluster_native has
been isolated and captured in a separate layout class.
gluster_native implements only {allow,deny}_access;
for the rest it uses the layout code.
The semantics are preserved with one difference:
the Manila host is now assumed to be set up so that
it can mount the GlusterFS volumes without any
complications. Earlier we assumed that there was no
cert-based access from the Manila host, and therefore
turned SSL off and on on the GlusterFS side. This does not
make sense for the separate, layout-agnostic logic.
(N.b. we already wanted to make the move to set this
assumption, regardless of the layout work.)
Partially implements bp modular-glusterfs-share-layouts
Change-Id: I3cbc55eed0f61fe4808873f78811b6c3fd1c66aa
Added:
class GlusterfsShareDriver(driver.ShareDriver)
@six.add_metaclass(abc.ABCMeta)
class GlusterfsShareLayoutBase(object)
Partially implements bp modular-glusterfs-share-layouts
Change-Id: I32d028afd736c5e93fc64c83dc0ab345a6335438