Keystone v3 requires domain info to be supplied when
making calls to Keystone. Not providing this means
that Cinder can't work with deployments that only
support Keystone v3.
(Specifically, this fails when trying to communicate
with Keystone/Barbican from Cinder.)
The domain information is retrieved from HTTP headers
provided by Keystone and stored in our context object.
We store both "<x>_domain" and "<x>_domain_id" attributes,
since castellan depends on project_domain_id, while oslo.context
has deprecated project_domain and user_domain.
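Rough sketch of the header-to-context mapping; the X-*-Domain headers
are the ones keystonemiddleware sets, but the class below is a
simplified stand-in for Cinder's actual RequestContext:

    class DomainAwareContext(object):
        """Illustrative stand-in for the real context object."""

        def __init__(self, headers):
            # Keep both the name and id variants: castellan needs
            # project_domain_id, while the bare project_domain and
            # user_domain attributes are deprecated in oslo.context.
            self.project_domain_id = headers.get('X-Project-Domain-Id')
            self.project_domain = headers.get('X-Project-Domain-Name')
            self.user_domain_id = headers.get('X-User-Domain-Id')
            self.user_domain = headers.get('X-User-Domain-Name')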
Closes-Bug: #1765766
Change-Id: If389788f06a3cee75b30485e90e05745d559e2ed
Change https://review.openstack.org/#/c/486734 moved the try-except
statement to the wrong place. We need to open the image in Ceph inside
the try-except block.
This patch also fixes a race condition between volume deletion and
_get_usage_info calls.
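Sketch of the intended structure using the python-rbd bindings (the
helper name is illustrative):

    import rbd


    def _safe_image_size(ioctx, volume_name):
        """Return an image's provisioned size, or 0 if it was deleted."""
        try:
            image = rbd.Image(ioctx, volume_name, read_only=True)
        except rbd.ImageNotFound:
            # The volume was deleted between listing the pool and
            # opening the image, so skip it in the usage accounting.
            return 0
        try:
            return image.size()
        finally:
            image.close()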
Closes-Bug: #1765845
Change-Id: I7d3d006b023ca4b7963c4c684e4c036399d1295c
Co-Authored-By: Melanie Witt <melwittt@gmail.com>
When the NAS is busy or the network is slow, volume or snapshot
creation could fail. This fixes the logic that waits for the NAS to
finish creating the LUN or snapshot.
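The waiting logic follows the usual poll-until-ready pattern, roughly
as sketched below (check_ready and the timeout values are placeholders,
not the driver's real API):

    import time

    from cinder import exception


    def _wait_for_creation(check_ready, name, timeout=300, interval=3):
        """Poll the NAS until the LUN or snapshot exists, or give up."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if check_ready(name):
                return
            time.sleep(interval)
        raise exception.VolumeBackendAPIException(
            data='timed out waiting for %s to be created on the NAS' % name)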
Change-Id: Ieb524d9b192e2a222f7d25a0df80cf52f1423e81
Closes-Bug: #1765610
VolumeAttachStatus has already been defined. This change replaces
the remaining volume attach status strings with the volume attach
enum field.
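For illustration, a replacement looks like this (the helper is
hypothetical; only the fields.VolumeAttachStatus enum is the real
interface):

    from cinder.objects import fields


    def _mark_attached(attachment):
        # Previously assigned as a bare string:
        #     attachment.attach_status = 'attached'
        attachment.attach_status = fields.VolumeAttachStatus.ATTACHED
        attachment.save()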
Change-Id: I1e873d66138b88745bdb325f124b6bd781c7d621
Partial-Implements: bp cinder-object-fields
The api-ref stated that new volumes created from a source volume or
snapshot would have the same size as the original, but that is not
the case if a larger size is requested.
Change-Id: Id2e0d53b56d5879026c182521a512dc2cfcd28f7
We have tracing in the wsgi code to log the method being called,
but since we are just formatting the function object into a string,
we end up with the less than friendly output of:
Calling method '<bound method VolumeController.delete of
<cinder.api.v3.volumes.VolumeController object at 0x7f9f14f86dd0>>'
This changes the logging to extract the actual name of the method
being called. It also removes some unnecessary string coercion.
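The friendlier trace is roughly the following (simplified; the real
code uses Cinder's wsgi logging):

    import logging

    LOG = logging.getLogger(__name__)


    def _trace_method_call(method):
        # 'method' is a bound method such as VolumeController.delete;
        # log its owning class and name instead of repr() of the object.
        LOG.debug("Calling method '%s.%s'",
                  method.__self__.__class__.__name__, method.__name__)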
Change-Id: I3a0a5973a1798a7fcf25c4288b9cbef1195f7015
Until ONTAP v8.3, the driver performed sub-lun clones of up to 64GB
per operation, but as of v9.1, ONTAP handles only 128MB per operation.
This new limit causes errors on volume extend operations, which
trigger sub-lun cloning when the resize exceeds max-resize-geometry.
For instance, if a volume is extended from 1GB to 64GB, the operation
fails because it triggers a single sub-lun clone of 1GB.
This fix sets the limit used by sub-lun cloning to 128MB, so that
extending a volume from 1GB to 64GB no longer fails and instead
triggers 8 sub-lun clone operations of 128MB each.
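The chunking boils down to the sketch below (the constant and helper
name are illustrative, not the driver's actual code):

    ONTAP_SUB_LUN_CLONE_LIMIT_MB = 128


    def _sub_clone_ranges(total_mb):
        """Yield (offset_mb, length_mb) chunks of at most 128MB.

        Extending a 1GB (1024MB) volume yields 1024 / 128 = 8 clone
        operations instead of one oversized 1GB clone.
        """
        offset = 0
        while offset < total_mb:
            length = min(ONTAP_SUB_LUN_CLONE_LIMIT_MB, total_mb - offset)
            yield offset, length
            offset += length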
Change-Id: Ib294cbdbdfe19d34a7702dafec2c8d29136a2f25
Closes-Bug: #1762424
This patch changes the Fibre Channel Zone Manager
utility decorators to functions. Those functions
now have to be called explicitly. The intention is to unify how FC
drivers are declared and used with how iSCSI drivers are: no more
magic decorators that apply only to FC drivers.
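Hedged sketch of the new calling convention (the driver class and its
_build_conn_info helper are illustrative; the explicit fczm_utils
calls are the point of the change):

    from cinder.zonemanager import utils as fczm_utils


    class ExampleFCDriver(object):
        """Illustrative skeleton of an FC driver."""

        def _build_conn_info(self, volume, connector):
            # Placeholder for the driver's real mapping logic.
            return {'driver_volume_type': 'fibre_channel', 'data': {}}

        def initialize_connection(self, volume, connector):
            conn_info = self._build_conn_info(volume, connector)
            # Previously handled by a decorator on this method.
            fczm_utils.add_fc_zone(conn_info)
            return conn_info

        def terminate_connection(self, volume, connector, **kwargs):
            conn_info = self._build_conn_info(volume, connector)
            fczm_utils.remove_fc_zone(conn_info)
            return conn_info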
Change-Id: I8e6e964e3694654b8ba93fe432a0dd49fa5e1df0
A typical ZFSSA iSCSI deployment involves a ZFSSA cluster with two
control heads, two storage pools, and two network interfaces. Under
normal operation, one of the pools and one of the network interfaces
would be owned by one control head, and the others by the other control
head. This provides a form of "active/active" operation.
Cinder would be configured with two backends: one pointing to the pool
and iSCSI target (group) normally served by the first head, and the
other to those served by the second head.
In the event of a failure of one control head, the other head will
"takeover" its storage pool and network interface, allowing service to
continue (perhaps with reduced performance) until the problem head is
restored, at which time the resources would be "given back".
In the takeover scenario, a sendtargets iSCSI discovery to the portal
address normally associated with the failed head will be serviced by the
other head, and it will return BOTH targets. os-brick, when configured
to use multipath, interprets this as both targets being basically
equivalent, and will use either one of them to determine the
host/controller/target/LUN ID to use to find SCSI devices. When the same
LU number is assigned to LUNs in both pools that are presented to the
same compute node, this can cause a LUN belonging to one VM to be
connected to a different VM.
Whilst full multipath support is not implemented in the ZFSSA driver,
using multipath on compute nodes provides some utility even with only a
single path (notably, I/O can be queued during a transient interruption
such as a cluster takeover or a network switch reboot). To support this
scenario while avoiding the above problem, this change implements
"enhanced multipath support" [1] to provide explicit target info to
os-brick (but still only supporting a single path). This could be
expanded to full multipath support in the future.
[1] https://blueprints.launchpad.net/cinder/+spec/iscsi-multipath-enhancement
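The connection properties handed back to os-brick now carry explicit
(single-path) target lists, roughly as below (values illustrative):

    def _build_connection_properties(target_iqn, target_portal, target_lun):
        # Plural target_* keys give os-brick explicit per-target details,
        # so a sendtargets discovery answered by the surviving head cannot
        # cause LUNs from the two pools to be confused.
        return {
            'driver_volume_type': 'iscsi',
            'data': {
                'target_discovered': False,
                'target_iqns': [target_iqn],
                'target_portals': [target_portal],
                'target_luns': [target_lun],
            },
        }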
Change-Id: I6402a6bc2a0d51efd3b60d93c11c1a81e8a73269
Closes-Bug: 1762584
In the ONTAP NFS driver, the export path was being used as the volume
name. If the export path differs from the volume name, the API call
to delete files would fail and the driver would fall back to a method
that deletes the files manually. This patch fixes that by looking up
the correct volume name.
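A hedged sketch of the idea; get_flexvol_name_by_junction_path() is a
placeholder name for the driver's actual lookup, not an existing API:

    def _volume_name_for_export(client, export_path):
        # Resolve the flexvol name from the export (junction) path
        # rather than assuming the two are identical.
        junction = '/' + export_path.strip('/')
        name = client.get_flexvol_name_by_junction_path(junction)
        if name is None:
            raise ValueError('no flexvol found for junction path %s'
                             % junction)
        return name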
Change-Id: Ice78889573a36ff5e8873a0d316ddcf180d0263f
Closes-Bug: #1690954
In the supported operations section, add the newly supported features.
In the preparation section, improve the description of the access
control configuration.
Change-Id: I0a5bbc7b91035c55491c37a73bd23edc3a01b808
Closes-Bug: #1745549
When a node in a ZFSSA cluster fails, storage pools and network
interfaces associated with it get "taken over" by the other node. In
this scenario, the identity of the node does not match the "owner" of
the pool. Currently, the Cinder volume drivers consider this to be a
failure. With this change, it will not be considered a failure so long
as the cluster node is "stripped". If the cluster is operating normally,
the node responding to API requests must be the node that owns the pool.
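Roughly, the relaxed check looks like the sketch below; the "stripped"
state string and the parameter names are illustrative, not the exact
ZFSSA REST fields:

    def verify_pool_owner(pool_owner, local_node, peer_state):
        """Sketch of the relaxed ownership check."""
        if pool_owner == local_node:
            return
        if peer_state == 'stripped':
            # Takeover in progress: the peer has been stripped of its
            # resources, so an ownership mismatch is expected.
            return
        raise ValueError('Pool is owned by %s but API requests are being '
                         'served by %s' % (pool_owner, local_node))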
Change-Id: I073cf97bc65f93acd5732c9b7072c7130e6b248f
Closes-Bug: 1762777
We should only be able to create more than one attachment to the
same volume if it's (1) multiattach=True or (2) it's multiattach=False
AND the attachments are to the same instance. It is not valid to
attach more than one instance to the same multiattach=False volume.
The _attachment_reserve method is checking for this if the
conditional update to the volume status fails, but it does
not refresh the volume before checking the attachments. Since
requests could be racing to create attachments concurrently, the
volume that the losing request originally pulled out of the DB might
not have had any attachments yet, so we need to refresh it before
checking its attachment list to see whether the instance we're trying
to attach (i.e. reserve the volume for) is the same as the one
already attached to the volume.
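Hedged sketch of the refreshed check inside _attachment_reserve's
failure path (the helper name is illustrative and the surrounding
conditional-update code is elided):

    from cinder import exception


    def _check_attachments_after_refresh(volume, instance_uuid):
        # Re-read the volume: a racing request may have added an
        # attachment after this request first loaded it from the DB.
        volume.refresh()
        for attachment in volume.volume_attachment:
            if attachment.instance_uuid != instance_uuid:
                raise exception.InvalidVolume(
                    reason='volume is already attached to another '
                           'instance and is not multiattach')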
Change-Id: Iee78555163bbcbb5ff3e0ba008d7c87a3aedfb0f
Closes-Bug: #1762687
- update the configuration to include multipath settings
- note that use with the HPE Nimble Linux Toolkit is unsupported
Change-Id: I6f6c8063f6995d485e7db63c27cdb19bcc5125fc
Closes-Bug: 1755546