When we delete a share instance, in addition to erasing the data
in the share, we should disconnect the client's mount point to
prevent data from being written to it.
Closes-Bug: #1886010
Change-Id: I7a334fb895669cc807a288e6aefe62154a89a7e4
(cherry picked from commit 9d44ba0b6a)
This patch enables the creation of a share from snapshot
specifying another pool or backend. In the scheduler, a
new filter and weigher were implemented in order to consider
this operation if the backend supports it. Also, a new
field called 'progress' was added to the share and share
instance. The 'progress' field indicates the completion
(as a percentage) of the create-share-from-snapshot operation.
Finally, a new periodic task was added in order to constantly
check the share status.
Partially-implements: bp create-share-from-snapshot-in-another-pool-or-backend
DOCImpact
Change-Id: Iab13a0961eb4a387a502246e5d4b79bc9046e04b
Co-authored-by: carloss <ces.eduardo98@gmail.com>
Co-authored-by: dviroel <viroel@gmail.com>
Fix:
E731 do not assign a lambda expression, use a def
I just marked the lambdas with noqa.
Also fix other problems found by hacking in the changed files.
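For illustration, the shape of the workaround (not the actual Manila code):

    # E731 flags assigning a lambda; 'noqa' silences the check in place
    double = lambda x: x * 2  # noqa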
Change-Id: I4e47670f5a96e61fba617e4cb9478958f7089711
Change http://review.gluster.org/14931,
debuting in GlusterFS 3.7.14, has changed the
XML output emitted by the gluster command
line interface. Here we implement parsing
for the new variant as well.
Change-Id: Ia9f340f1d56c95d5ebf5577df6aae9d708a026c0
Closes-Bug: #1609858
In delete_share, if private_storage entry of share is
missing, recognize this as an indication of a botched
creation and return immediately so that the runtime
can go on with evicting the dangling share entry.
Change-Id: I76dabe0acc0b67ea2b03e77eb0743772ef25579d
Closes-Bug: #1554290
This function is not specific to GlusterFS interaction.
Partially implements bp gluster-code-cleanup
Change-Id: I96ef68f13287d6654b65744df67880ab9deccb3f
GlusterFS has two kinds of options:
- regular ones, which are a hardcoded set, and whose names
are verified by "gluster volume get"
- user ones, whose names match user.* -- these are
arbitrarily named, are ignored by "gluster volume get" and
are listed in "gluster volume info" output
So far we used "gluster volume info" universally, but that,
apart from being cumbersome for regular options, is also
incorrect, as it can't distinguish an unset option from an
undefined one (querying the former should succeed, querying
the latter should be treated as an error).
- implement querying of regular options with "gluster volume
get" (accepting empty response)
- implement querying of user options with searching "gluster vol
info" data
- verify operations on the XML tree, make tacit XML layout
assumptions explicit
- implement optional Boolean coercion of values
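A minimal sketch of the resulting dispatch, assuming a `gluster_call`
helper and hypothetical `_parse_*` helpers for the two output formats:

    def get_vol_option(self, option, boolean=False):
        if option.startswith('user.'):
            # user options appear only in "gluster volume info" output
            out, err = self.gluster_call('volume', 'info', self.volume)
            value = _parse_info_xml(out, option)
        else:
            # regular options: "gluster volume get" verifies the name;
            # an empty response just means the option is unset
            out, err = self.gluster_call('volume', 'get', self.volume,
                                         option)
            value = _parse_get_output(out)
        if boolean and value is not None:
            value = value.lower() in ('on', 'true', 'yes', '1')
        return value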
Partially implements bp gluster-code-cleanup
Change-Id: I9e0843b88cd1a1668fe48c6979029c012dcbaa13
GlusterManager:
- add various error policies to gluster_call
- add set_vol_option method, with optional error tolerance
(making use of the respective gluster call error policy),
with support for Boolean options
- rename get_gluster_vol_option method to get_vol_option
for uniform nomenclature and simplicity (the "gluster" in
the method name was redundant, as the class name already
indicates the Gluster scope)
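A hedged sketch of the resulting interface (the keyword name is
illustrative):

    # Boolean values are translated to 'on'/'off' on the wire
    gmgr.set_vol_option('nfs.export-volumes', False)
    # with error tolerance, a failed set is tolerated rather than raised
    gmgr.set_vol_option('auth.ssl-allow', cert_cn, ignore_failure=True)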
Partially implements bp gluster-code-cleanup
Change-Id: I02a1d591d36c6a64eea55ed64cf715f94c1fd1c8
Replacing dict.iteritems()/.itervalues() with
six.iteritems(dict)/six.itervalues(dict) was preferred in the past,
but there was a discussion suggesting to avoid six for this[1].
The overhead of creating a temporary list on Python 2 is negligible.
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066391.html
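The mechanical shape of the change (illustrative):

    d = {'a': 1, 'b': 2}
    # before, Python 2 only:  for key, value in d.iteritems(): ...
    # after, Python 2 and 3 (on 2 this builds a small temporary list):
    for key, value in d.items():
        print(key, value)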
Partially-implements blueprint py3-compatibility
Change-Id: Ia2298733188b3d964d43a547504ede2ebeaba9bd
So far, GlusterManager.gluster_call was directly calling
into the given execution function, of which the expected
error type is ProcessExecutionError; while practically in
all use cases we wanted to raise upwards a GlusterfsException,
so all use cases were individually coercing the
ProcessExecutionError into a GlusterfsException. This produced
a huge amount of excess boilerplate code.
Here we include the coercion in the definition of gluster_call
and clean out the excess code.
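A simplified sketch of the centralized coercion:

    def gluster_call(self, *args, **kwargs):
        try:
            return self._execute(*args, **kwargs)
        except exception.ProcessExecutionError as exc:
            # coerce the low-level error into the driver-level one
            raise exception.GlusterfsException(exc)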
Partially implements bp gluster-code-cleanup
Change-Id: I0ad0478393df2cbb6d077363ebd6b91ceed21679
With volume layout, the volume we use to back a share can
be pre-created (part of the volume pool provided for Manila),
or can be created by Manila (that happens if the share is
created from a snapshot, in which case the volume is obtained
by performing a 'snapshot clone' gluster operation).
In terms of resource management, pre-created volumes are owned
by the pool, and Manila-cloned ones are owned by Manila. So
far we kept all volumes upon giving up their use (ie. upon
deleting the share they belonged to) -- we only ran a cleanup
routine on them. However, that's the appropriate action only
for the pool-owned ones; the ones we own should rather be
extinguished to avoid a resource leak. This patch implements
this practice by marking Manila-owned volumes with a gluster
user option.
Closes-Bug: #1506298
Change-Id: I165cc225cb7aca44785ed9ef60f459b8d46af564
With volume layout, the share-volume association was kept
solely in the manila DB. That is not robust enough, first and
foremost because upon starting the service, the manager will
indicate existing share associations only for those volumes
whose shares are in 'available' state.
We need to know, though, whether a volume is in a pristine
state or not, regardless of the state of its share. To this
end, we introduce the 'user.manila-share' GlusterFS volume
option to indicate manila share association -- made possible
by GlusterFS allowing any user-defined option to exist in the
'user' option namespace --, which indicator remains there
until we explicitly drop it in `delete_share`. (The value
of 'user.manila-share' is the id of the share owning the
volume.)
As a beneficial side effect, this change will also provide
insight to the Gluster storage admin about usage of the
Manila volume pool.
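The bookkeeping boils down to gluster calls of roughly this shape:

    # on associating a volume with a share:
    gmgr.gluster_call('volume', 'set', gmgr.volume,
                      'user.manila-share', share['id'])
    # on delete_share, drop the indicator:
    gmgr.gluster_call('volume', 'reset', gmgr.volume,
                      'user.manila-share')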
Change-Id: Icb388fd31fb6a992bee7e731f5e84403d5fd1a85
Partial-Bug: #1501670
Actually, all uses of export_location are incorrect -- from the
layout code's point of view, export_location is an arbitrary
opaque value, obtained from self.driver._setup_via_manager with
which the only legit action is to return it from create_share*.
That we use export_location as a dict key in ensure_share is
just a pre-layout relic that survived by virtue of remaining
unnoticed. Now the referred bug forced it out of the dark.
Change-Id: I965dae99486002f00145daff0cd2a848777b5b81
Partial-Bug: #1501670
When handling create_share_from_snapshot with glusterfs
volume layout, we do a snapshot clone gluster operation
that gives us a new volume (which will be used to back
the new share). 'snapshot clone' does not start the
resultant volume; we have to start it explicitly from Manila.
So far the volume layout code did not bother about it,
rather the 'vol start' was called from glusterfs-native
driver. That however broke all other volume layout based
configs (ie. glusterfs driver with vol layout).
Fix this now by doing the 'vol start' call in the vol
layout code.
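The fix amounts to adding the start step right after cloning, roughly:

    gmgr.gluster_call('snapshot', 'clone', clone_vol_name, snap_name)
    # 'snapshot clone' leaves the new volume stopped; start it here in
    # the layout code so that every layout-based driver benefits
    gmgr.gluster_call('volume', 'start', clone_vol_name)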
Change-Id: I63c13ce468a3227f09e381814f55e8c914fbef95
Closes-Bug: #1499347
(cherry picked from commit 4e4c8759a2)
glusterfs and glusterfs_native had distinct
sets of options to specify ssh credentials
(glusterfs_server_password vs glusterfs_native_server_password
and glusterfs_path_to_private_key vs glusterfs_native_path_to_private_key).
There is no reason to keep these separate; worsening the
situation, these options have been moved to layouts in an
ad-hoc manner, breaking certain driver/layout combos whereby
the credential option used by the driver is not provided by
the chosen layout and was thus left undefined.
Fix all the mess by defining glusterfs_server_password and
glusterfs_path_to_private_key in glusterfs.common, and
providing the native variants as deprecated aliases.
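With oslo.config this is expressible via deprecated aliases, roughly:

    from oslo_config import cfg

    glusterfs_common_opts = [
        cfg.StrOpt('glusterfs_server_password',
                   secret=True,
                   deprecated_name='glusterfs_native_server_password'),
        cfg.StrOpt('glusterfs_path_to_private_key',
                   deprecated_name='glusterfs_native_path_to_private_key'),
    ]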
Change-Id: I48f8673858d2bff95e66bb7e72911e87030fdc0e
Closes-Bug: #1497212
The basic problem is that determining the
export location of a share should happen in
driver scope (as it depends on available
export mechanisms, which are implemented by
the driver) while the code did it in layout
scope in ad-hoc ways. Also in native driver
the export location was abused to store the
address of the backing GlusterFS resource.
Fix these by
- layout:
- GlusterfsShareDriverBase._setup_via_manager
(the layer -> driver reverse callback) will
provide the export location as return value
- the share object is also passed to
GlusterfsShareDriverBase._setup_via_manager
(besides the gluster manager), because some
driver configs will need it to specify the
export location
- glusterfs-native:
- free the code from using export location
(apart from composing it in _setup_via_manager);
store the address of backing resource in
private storage instead of the export location
field
- glusterfs:
- define the `get_export` method for the export
helpers that provide the export location
- _setup_via_manager determines export location
by calling the helper's get_export
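A hedged sketch of the new helper contract (names illustrative):

    class NFSHelper(object):
        def __init__(self, gluster_manager):
            self.gluster_manager = gluster_manager

        def get_export(self, share):
            # export location for a Gluster-NFS served share
            return ':/'.join([self.gluster_manager.host,
                              self.gluster_manager.volume])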
Change-Id: Id02e4908a3e8e435c4c51ecacb6576785ac8afb6
Closes-Bug: #1476774
Closes-Bug: #1493080
Previously, a 'ShareSnapshot' object was passed to the driver's API
method by share's manager.py during a create_share_from_snapshot call.
Now, a 'ShareSnapshotInstance' object is passed to the driver during
the same call. The object no longer has the attribute 'share' used by
the driver code, and in its place has the attribute 'share_instance'.
So replace use of 'share' attribute with 'share_instance'.
Change-Id: Ibea11b33772f24609f9cd3180d61ab7f6307c1b8
Closes-Bug: #1495382
The volume management done by gluster_native has
been isolated and captured in a separate layout class.
gluster_native implements only {allow,deny}_access,
for the rest it uses the layout code.
Semantics are preserved with one difference:
the Manila host is now assumed to be set up so that
it can mount the GlusterFS volumes without any
complications. Earlier we assumed not to have cert-
based access from the Manila host, and therefore
turned SSL off and on on the GlusterFS side. This
does not make sense for the separate, layout-agnostic logic.
(Nb. we already wanted to make the move to set this
assumption, regardless of the layout work.)
Partially implements bp modular-glusterfs-share-layouts
Change-Id: I3cbc55eed0f61fe4808873f78811b6c3fd1c66aa
Do not store superfluous mappings of URLs to
GlusterManager instances (GlusterManager
instances are stateless and cheap, so they
are better produced on the fly).
Thus drop the `gluster_used_vols_dict` and
`glusterfs_servers` instance attributes of
GlusterfsNativeShareDriver; use instead,
respectively, the reduced set `glusterfs_used_vols`
and the CONF-provided `glusterfs_servers` list.
Also avoid direct dispatch on
`share['export_location']`; hide it behind the utility
function `_share_manager`.
Partially implements bp modular-glusterfs-share-layouts
Change-Id: I9d9e76915d6fb6b56e1ef81a7de9e80eebbb5006
- separate basic uri attributes (components)
from derived ones
- components are directly taken from regex
match groupdict
- components are accessible either as
`gmgr.components["<comp>"]` or in sugared/backward compatible
format `gmgr.<comp>`
- derived attributes implemented as properties
- add a "path" component
- initialization can happen with either a uri string or a
component dict
This allows easy and correct cloning and changing
of GlusterManager objects.
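For illustration, the two access styles and component-based cloning
(values hypothetical):

    gmgr = GlusterManager('root@gluster-host:/share-vol')
    gmgr.components['user']   # 'root'
    gmgr.volume               # 'share-vol', the sugared access
    # clone with one component swapped out:
    clone = GlusterManager(dict(gmgr.components, volume='other-vol'))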
Furthermore,
- the `has_volume` keyword argument has been replaced
by the `requires` keyword argument which can take
a component keyed dict as value; `has_volume=<value>`
is equivalent with `requires={'volume': <value>}`
- `remote_user` component has been renamed to `user`
- `management_address` derived attribute has been renamed to
`host_access`
- execf keyword argument is made optional
- relax default volume requirement
Partially implements bp modular-glusterfs-share-layouts
Change-Id: I5ce088c4ec39a403c9ded2aea1d3ca172e514a01
GlusterManager and a few standalone
management routines have been moved
to the newly created glusterfs/common.py;
both glusterfs and glusterfs_native drivers
import these from there. The obnoxious
import of glusterfs into glusterfs_native
has been eliminated.
Partially implements bp modular-glusterfs-share-layouts
Change-Id: I6a94f1056bd45187c0268d75fa854b127b2b562d
As was discussed at the Liberty Mid-Cycle meetup,
instances of "infinite" capacity should be replaced
with "unknown", and this capacity should be sorted
to the bottom in the capacity weigher.
Change-Id: I9b37a5ff9cceb35a964b2a0d243688eb73e5cadc
Closes-Bug: #1487478
Add the missing create share from snapshot feature in gluster_native
driver. The snapshot of a share/GlusterFS volume is cloned to create
a new GlusterFS volume that is served as a new share.
Change-Id: I93208a844a18423e3ade6cfed0f34c8da3c1c598
Wrap iterators and 'dict_items' objects in 'list()'
to avoid version incompatibility.
Partially-Implements: bp py3-compatibility
Change-Id: Ic6b437cd0d57ca3d4b343b590d11c4e53492d223
We need the so-called management address of a GlusterFS
volume (ssh address of its management node) at a few places
for bookkeeping reasons (certain data like the GlusterFS version
is stored in management address-keyed dicts).
So far it was available only through ad-hoc mangling of
various attributes of a GlusterManager instance that
represents the volume. Now we make it available
directly from GlusterManager as an attribute.
Also fix an occurrence of ad-hoc mangling going wrong,
ie. where a wrongly constructed address ended up being
used as a dict key.
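Usage then becomes plain attribute access, e.g. (sketch):

    versions = {}
    gmgr = GlusterManager('root@gluster-host:/share-vol')
    # management address-keyed bookkeeping, without URI mangling:
    versions[gmgr.management_address] = ('3', '7')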
Change-Id: Ic5a96bc99943dda3592372512916257d53f61b80
Closes-Bug: #1476710
Due to change I40873208c7431e42885bee4db06d6229a202bad6,
we are free to change snapshot naming on the GlusterFS
side, as long as the Manila snap id occurs in it.
This is part of an effort to provide better
indication to the GlusterFS admin about the
origin of Manila created entities.
Partially implements bp manila-prefix-for-gluster-entities
Change-Id: Ie15e89cd49ab8450921ab08c78eb382096b57266
So far the delete_snapshot() logic assumed that a GlusterFS snapshot
exists with exactly the same name as the Manila snapshot id.
This was a fragile assumption -- both GlusterFS and Manila can
potentially change how the GlusterFS snapshots are named.
(And currently we have a problem, as GlusterFS has actually changed
this -- see the referred bug for details.)
Therefore from now on we'll use the weaker assumption that the
Manila snapshot id is a substring of the backing GlusterFS snapshot
name. (That's a robust assumption, unlikely to break in the future.)
The actual GlusterFS snapshot name is found by doing a
`gluster snapshot list` and grepping for the Manila snapshot id in
it.
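A minimal sketch of the lookup, assuming a `gluster_call` helper:

    out, err = gmgr.gluster_call('snapshot', 'list', gmgr.volume,
                                 '--mode=script')
    matches = [line for line in out.split('\n')
               if snapshot['id'] in line]
    if len(matches) != 1:
        raise exception.GlusterfsException(
            'no unique snapshot found for %s' % snapshot['id'])
    backend_snap_name = matches[0]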
Change-Id: I40873208c7431e42885bee4db06d6229a202bad6
Closes-Bug: #1473044
With GlusterFS 3.7.x versions, the delete share operation
fails when deleting the contents of the GlusterFS volume
backing the share. This is because two directories are
auto-created within a GlusterFS volume when it's started, and
GlusterFS refuses to unlink their paths. Fix this issue by
not trying to remove the two directory paths, but removing
their contents along with the rest of the contents of the
volume.
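A simplified sketch of the cleanup, with the auto-created paths assumed
here to be '.trashcan' and its contents (the actual change also clears
the insides of the kept directories):

    import os
    import shutil

    def clear_volume_contents(mount_path, keep=('.trashcan',)):
        # remove everything except the GlusterFS-internal directories,
        # whose unlinking the backend refuses
        for entry in os.listdir(mount_path):
            if entry in keep:
                continue
            full = os.path.join(mount_path, entry)
            if os.path.isdir(full):
                shutil.rmtree(full)
            else:
                os.remove(full)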
Change-Id: I1675bbf593bf578301d6899ee3f9860320080956
Closes-Bug: #1473324
- Remove passing DB reference to drivers in __init__() method
- Remove db reference from Generic driver and service_instance
- Remove db reference from Netapp share driver
- Remove db reference from Glusterfs share driver
- Remove db reference from Glusterfs_Native share driver
- Remove db reference from Quobyte share driver
- Remove db reference from IBM GPFS driver
- Remove db reference from HDS_SOP driver
- Remove db reference from HDFSNative driver
- Remove db reference from fake driver
- Remove db reference from unit tests.
Change-Id: I74a636a8897caa6fc4af833c1568471fe1cb0987
- Remove direct DB calls from methods create_snapshot() and
delete_snapshot().
- Move code from _update_gluster_vols_dict() method to
ensure_share() method.
Change-Id: I6c95f6d9361093d832a536971a460c3cdda44dcb
Partial-Bug: #1444914
In the delete_snapshot API, the Gluster command issued to delete the
snapshot ends up being run in the interactive mode of the
Gluster-CLI. So instead of an XML output, the command results in a
Gluster-CLI read error. Fix this by forcing the Gluster command to be
run in the script mode.
Change-Id: Ic09f59732bf08942a9d216f70f1fc969d9ae0a2d
Closes-Bug: #1442339
- add gluster_version method to GlusterManager class
- gluster: check if version of the GlusterFS server is at least 3.5
- gluster_native: check if version of the GlusterFS server is at least 3.6
- gluster_native: on snapshot creation failure, interpret errno only
for GlusterFS versions strictly later than 3.6
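A minimal sketch of such a check, assuming gluster_version() returns a
dotted version string:

    def check_minimum_version(gmgr, minvers=(3, 6)):
        vers = tuple(int(x) for x in gmgr.gluster_version().split('.'))
        if vers < minvers:
            raise exception.GlusterfsException(
                'GlusterFS %s or higher is required' %
                '.'.join(str(x) for x in minvers))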
Change-Id: I242ea83c3a31670eb6a13c11e39d0c2228170c50
Closes-Bug: #1417352
- Use '%' for variable substitution in the message used for logging a
call and throwing an exception.
- Use log marker functions (_LI(), _LW(), _LE(), and _LC()) to send
messages directly to the log.
- Use ',' for variable substitution in the message in a log marker
function.
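For illustration, the two conventions side by side:

    from manila.i18n import _
    from manila.i18n import _LI

    # ',' substitution: the logging call interpolates lazily
    LOG.info(_LI("Mounted GlusterFS volume %s."), vol)
    # '%' substitution: the message is built up front for the exception
    raise exception.GlusterfsException(
        _("Error mounting GlusterFS volume %s.") % vol)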
Change-Id: Ib925014e79d2c380954d952f5dbc835971a0b320
Closes-Bug: #1439762
With this patch the allow_access and deny_access methods of
glusterfs_native will keep preexisting common names in the
affected GlusterFS volume 'auth.ssl-allow' option.
Also check in _setup_gluster_vol that 'auth.ssl-allow' is
set to a non-empty value, to avoid semantically problematic
(and, from Manila's POV, useless) edge cases.
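A hedged sketch of an allow_access that preserves preexisting common
names (helper names illustrative):

    def allow_access_sketch(gmgr, access):
        ssl_allow = gmgr.get_vol_option('auth.ssl-allow')
        if not ssl_allow:
            # unset/empty: refuse to manage access on this volume
            raise exception.GlusterfsException('auth.ssl-allow is empty')
        entries = [cn.strip() for cn in ssl_allow.split(',')]
        if access['access_to'] not in entries:
            entries.append(access['access_to'])
            gmgr.set_vol_option('auth.ssl-allow', ','.join(entries))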
Change-Id: I952049d694509a338c7f56b45c5ef0872c3e7d70
Closes-Bug: #1439198
So far glusterfs_native was able to create shares
only from volumes listed in the 'glusterfs_targets' config
option.
New behavior: a regexp pattern is to be provided
through the glusterfs_volume_pattern config option.
Upon share creation, we grep the gluster server's volumes
with this pattern and create the new share from
one of them.
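Roughly, volume selection works like this (the CONF access and the
volume bookkeeping names are assumed):

    import re

    compiled = re.compile(CONF.glusterfs_volume_pattern)
    candidates = [v for v in server_volumes
                  if compiled.match(v) and v not in used_volumes]
    # create the new share on one of the matching, unused volumes
    target_volume = candidates[0]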
Change-Id: I12ba0dbad0b1174c57e94acd5e7f6653f5bfaae8
Closes-Bug: #1437176
Module 'log' from oslo-incubator was removed after the release of the
oslo_log library. So, start using oslo_log, but keep the
oslo-incubator code while other common modules within the Manila
codebase still use it.
Implements bp use-oslo-log-lib
Change-Id: I88224f7c2bd99adb78140dfc3fa73cea437f29cd
The oslo team is recommending everyone to switch to the
non-namespaced versions of libraries. Updating the hacking
rule to include a check to prevent oslo.* imports from
creeping back in.
oslo.messaging is the only exception because this package doesn't
currently support non-namespaced imports.
Change-Id: I3987e651bc880c8ffa7c0105df0298679dcd3a43
- Refactor the main driver class, GlusterFSShareDriver, to make it
pluggable with different NAS helpers.
- Add GlusterNFS helper class to manage shares served by Gluster-NFS
server. The management of these shares was earlier done within the
main driver class.
- Enhance the methods of the GlusterAddress class that would be used
by the main driver class and the helper classes. Rename the
GlusterAddress class to GlusterManager class. This class would
contain the methods to interface with the backend GlusterFS volumes.
- Retire GlusterAddress.make_gluster_args() in favor of
GlusterManager.make_gluster_call(). The make_gluster_call() method
implements a more correct approach to remote execution. Its
remote execution is based on processutils.ssh_execute. The
make_gluster_args() method that it replaces facilitated remote
execution by prefixing arguments with 'ssh'.
- Move the interface used to fetch the value of an option set on a
GlusterFS volume, from the main driver class to the GlusterManager
class.
Partially implements blueprint gateway-mediated-with-ganesha
Change-Id: I3cbeb49c26f5f24152b42b649ce3bc75451964ef
Due to the unclear meaning of the existing share driver mode names,
it was decided to replace the string driver modes with a boolean
value, since we have only two possible values and the name of the opt
clearly says what it is used for.
This replacement includes the following changes:
- String opt 'share_driver_mode' is replaced with
bool opt 'driver_handles_share_servers'. The new bool opt does not
have a default value and should be defined explicitly.
- Class ShareDriver (parent class for share drivers) now expects an
additional argument 'driver_handles_share_servers', which is mandatory
and should be provided by child classes. Expected values are a boolean
or a tuple/list/set of booleans saying which modes are supported. The
new config opt above will be compared to these (see the sketch after
this list).
- Update interfaces 'setup_server' and 'teardown_server' of class
ShareDriver. These interfaces now consider driver modes and call new
additional private methods only when the driver is enabled in the mode
with share server handling. These new private methods are
'_setup_server' and '_teardown_server'; they should be redefined by
child classes when share servers should be handled by Manila.
- To learn the current driver mode within child classes, just call the
property 'driver_handles_share_servers'. It cannot be changed by child
classes and returns the value that is set by the config opt with the
same name.
- Remove methods 'setup_server' and 'teardown_server' from all share
drivers that do not support handling of share servers.
- Rename methods 'setup_server' and 'teardown_server' to the
appropriate private methods for drivers that do support handling of
share servers.
- Update unit tests related to all changed places.
- Make Devstack set the new mandatory bool opt.
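For a driver that does not handle share servers, the new contract
looks roughly like:

    class ExampleShareDriver(driver.ShareDriver):
        def __init__(self, *args, **kwargs):
            # declare the supported mode(s); this is compared against
            # the 'driver_handles_share_servers' config opt at load time
            super(ExampleShareDriver, self).__init__(
                False, *args, **kwargs)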
Implements bp rename-driver-modes
Change-Id: I33402959bc4bbc34cfd4d7308ad0a7bcff3669b5
Driver mode functionality was implemented to make it possible to
specify how a driver should work, and to filter backends based on
this when scheduling share creation.
Add to all drivers an update of the attr 'mode' based on their
current behavior.
Set the 'share_driver_mode' extra spec on a volume/share type with one
of the available values. The scheduler will use it for host filtering.
Implements blueprint driver-modes-for-scheduler
Change-Id: Ida644f630ee07c51c02aea5d6280980b5d704c2f
Several things are implemented:
- Allocation/deallocation is now handled by drivers instead of the
share manager. This provides flexibility for drivers.
- The network plugin interface was updated to support a new approach
to setting configuration options.
Config opts for a network plugin can be defined via three sources:
a) using a separate config group
b) using the config group of the back end
c) using the DEFAULT config group
Variants (a) and (b) are mutually exclusive; they are switched by
the opt 'network_config_group' that belongs to the share driver
interface.
Implements bp network-helper
Change-Id: I3b05369f01777675c1b834af5ee076d8b7219a0f