This reverts commit 5edc77a18c505e22ee56e3ebda6d400cded6fa32.
Reason for revert: Bug: #2019190
The reverted change used the optimized migration path in more cases,
which revealed a bug in Cinder's optimized migration code that can lead
to data loss with RBD and potentially other drivers. Once those issues
are fixed, this should be re-introduced.
Change-Id: I893105cbd270300be9ec48b3127e66022f739314
When doing a retype of a volume that requires a migration, the manager
only uses driver assisted migration when the source and the destination
belong to the same backend (different pools).
Driver assisted migration should also be tried for other cases, just
like when we do a normal migration.
One case where this would be beneficial is when migrating from one
pool to another on the same storage system with single-pool drivers
(such as RBD/Ceph).
This patch inspects the differences between the two volume types to
determine whether driver assisted migration is safe (from the
perspective of keeping the resulting volume consistent with the volume
type) and, when it is, tries to use it.
If driver assisted migration indicates that it couldn't move the volume,
then we go with the generic volume migration like we used to.
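As a rough illustration, the safety check can be sketched like this (a
hypothetical helper; the diff layout is an assumption modeled on what
volume_types.volume_types_diff() returns, and the actual patch's logic
may differ):

```python
def safe_for_driver_migration(diff, ignore_keys=('volume_backend_name',)):
    """Hypothetical sketch: decide whether a retype's type differences
    allow driver assisted migration.

    'diff' is assumed to map sections ('extra_specs', 'qos_specs',
    'encryption') to dicts of key -> (old_value, new_value).  Only
    backend-placement keys may differ; any other change means the
    driver would not keep the volume consistent with the new type.
    """
    for section, changes in (diff or {}).items():
        for key, (old, new) in (changes or {}).items():
            if old == new:
                continue
            if section == 'extra_specs' and key in ignore_keys:
                continue  # placement-only difference: still safe
            return False
    return True
```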
Closes-Bug: #1886543
Change-Id: I2532cfc9b98788a1a1e765f07d0c9f8c98bc77f6
Added support for authenticity verification through
self-signed certificates for JovianDSS data storage.
Added support for revert-to-snapshot functionality.
Expanded unit-test coverage for JovianDSS driver.
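A hedged sketch of how such a backend might be configured; the stanza
name is made up, and the option names shown are Cinder's generic TLS
settings (`driver_ssl_cert_verify`, `driver_ssl_cert_path`) — the
driver's actual option names may differ:

```ini
[jovian-iscsi]
# hypothetical backend stanza: verify the array's certificate against
# a locally provided self-signed CA bundle
driver_ssl_cert_verify = True
driver_ssl_cert_path = /etc/cinder/jovian-ca.pem
```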
Change-Id: If0444fe479750dd79f3d3c3eb83b9d5c3e14053c
Implements: bp jdss-add-cert-and-snapshot-revert
This patch changes the following items of the Zadara driver:
- Changing the code layout of the Zadara driver.
- Using JSON format
This patch adds some missing features to the Zadara driver:
- Volume manage and unmanage
- Snapshot manage and unmanage
- List manageable volumes and snapshots
- Multiattach
- IPv6
Change-Id: I787e9e40c882e6ab252e10c239778019acb2e6c6
Nova defaults to using the QEMU decryption layer
instead of cryptsetup now, and cryptsetup was never
needed on the compute for RBD encrypted volumes, so
this is not very useful advice.
Ref: https://review.opendev.org/c/openstack/nova/+/523958
Change-Id: Ib158ffa9543fcbf5bf1cc8bfbd42e1ca766bfa30
[Spectrum Virtualize Family] During terminate_connection,
volume_name is passed as input to ensure that the volume
mapping for the host is removed.
Currently, get_host_by_connector ignores the volume_name validation
if a host is found via the connector's WWPNs. This causes issues in
some scenarios where WWPNs from a different host entry are passed.
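The fixed lookup can be sketched as follows (a hypothetical
stand-alone version; the data shapes are assumptions, not the driver's
actual structures):

```python
def get_host_by_connector(hosts, wwpns, volume_name=None):
    """Hypothetical sketch of the fixed lookup: a WWPN match alone is
    not enough; when volume_name is given, also require that the
    volume is mapped to the candidate host.

    'hosts' is assumed to be a list of dicts with 'name', 'wwpns' and
    'mapped_volumes' keys.
    """
    for host in hosts:
        if not set(wwpns) & set(host['wwpns']):
            continue
        if volume_name is not None and volume_name not in host['mapped_volumes']:
            continue  # WWPN matched, but the volume lives on another host entry
        return host['name']
    return None
```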
Closes-Bug: #1892034
Change-Id: I55f7dd92a4a1bab4a6b00d1b42707aa98b4b2eae
[Spectrum Virtualize Family] During the hyperswap volume delete
operation, check_vdisk_fc_mappings tries to delete the
remote-copy-controlled fcmaps that are created for hyperswap volumes.
This patch fixes the issue by ignoring the remote-copy-controlled fcmaps.
Closes-Bug: #1912564
Change-Id: I98f1c60810d675a51ce1cae8bcce51c6d88e6cd2
The joinedload_all method was deprecated in SQLAlchemy 0.9.0 and
has been removed in 1.4.0.
The standard fix is to replace joinedload_all with a custom
method that chains joinedload calls for each dot-separated attribute.
In Cinder's case that is not even necessary, since all our
joinedload_all calls should have been joinedload in the first place.
This patch replaces joinedload_all with joinedload to resolve the
SADeprecationWarning issue and failure in newer releases.
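The chaining workaround mentioned above can be sketched like this (the
loader factory is passed in so the sketch stays self-contained; in
practice it would be sqlalchemy.orm.joinedload):

```python
def chained_joinedload(joinedload, path):
    """Sketch of the standard workaround: emulate the removed
    joinedload_all('a.b.c') by chaining joinedload() calls for each
    dot-separated attribute in the path.
    """
    first, *rest = path.split('.')
    option = joinedload(first)
    for attr in rest:
        option = option.joinedload(attr)
    return option
```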
Closes-Bug: #1832164
Change-Id: I50dc67b12764e6baa0ef05983242029b1f3d765b
Most of our tox jobs use usedevelop, which means that tox
will run pip install -e on our source code on its own.
As a result, tox first installs deps (which include the constraints)
and then installs cinder without constraints, since the constraints are
only defined in deps, so cinder's dependencies are pulled in while
ignoring the constraints.
This patch changes how we install dependencies for all tox jobs except
the lower-constraints one, and will make sure that they are always
properly constrained by changing our install_command to include the -c
option.
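The resulting tox.ini pattern looks roughly like this (a sketch; the
constraints file location shown is an assumption):

```ini
[testenv]
usedevelop = True
# constrain the cinder install itself, not only the deps install
install_command = pip install -c{env:TOX_CONSTRAINTS_FILE:upper-constraints.txt} {opts} {packages}
```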
Change-Id: Ic9a6ac412a334710eb5e45935cd301ca80a5edb9
Recently we merged change I43036837274a7c8dba612db53b34a6ce2cfb2f07,
which adds format info support for *fs type volumes.
This seems to cause issues when a volume belongs to groups or when we
are cloning from a bootable volume.
This patch disables the format info support for volumes with groups
and fixes the issue with cloning.
Change-Id: I41ffedfa1abd0708c84494a0aaaa3815810102b8
In Python 3, all strings are Unicode strings.
This patch drops the now-redundant explicit unicode literal prefixes
(u'...' or u"...") from string literals.
Change-Id: I5c4b0eb24ecade37c22e7777640466116a893a89
We're getting divergence between local test runs and the gate based
on what version of pip is being used by tox, which depends on what
version of virtualenv (which specifies the versions of pip,
setuptools, and wheel) that tox uses in your local environment. To
eliminate guesswork, tell tox what version of virtualenv it should
use. Also specify a minversion of tox that can do this.
Change-Id: I01451ec27d3ad6e759636902e497272a89cf6d16
Adds support for FlexGroup pools using the NFS storage mode.
A FlexGroup pool has a different view of aggregate capabilities,
reporting them as a list of elements instead of a single element.
These capabilities are: `netapp_aggregate`, `netapp_raid_type`,
`netapp_disk_type` and `netapp_hybrid_aggregate`. The
`netapp_aggregate_used_percent` capability is an average of the used
percent of all the FlexGroup's aggregates.
The `utilization` capability is not calculated for FlexGroup pools;
it is always set to the default value.
The driver cannot support consistency groups with volumes that reside
on FlexGroup pools.
ONTAP does not support FlexClone for files inside a FlexGroup pool,
so the clone volume, create snapshot and create volume from image
operations are implemented as in the generic NFS driver.
With FlexGroup pools the driver has snapshot support disabled,
requiring that `nfs_snapshot_support` be set to true in the backend
definition. This config is the same as for the generic NFS driver.
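A sketch of the backend definition this implies (the stanza name is
made up for illustration):

```ini
[netapp-flexgroup]
# FlexGroup pools need snapshot support enabled explicitly,
# just like the generic NFS driver
nfs_snapshot_support = True
```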
The driver's image cache relies on FlexClone for files, so it is not
applied to volumes on FlexGroup pools. The core image cache can still
be used, though.
The QoS minimum is only enabled for a FlexGroup pool if all nodes of
the FlexGroup support it.
Implements: blueprint netapp-flexgroup-support
Change-Id: I507083c3e34e5a5cf1db9a3d1f6bef47bd51a9f8
Apply the new cinder_host validation spec to the
host field in the enable_and_disable, failover_host,
and freeze_and_thaw API calls.
Related-Bug: #1904892
Change-Id: I593ff70172de0d5b42d483d0c4bd2a463001155f
This feature adds format info to filesystem-type
drivers with the following changes:
1) Store format info in admin_metadata while creating/cloning
volumes
2) Use format info while extending volumes
3) Modify volume format when performing snapshot delete
(blockRebase) operation on attached volume.
4) Return format in connection_info
blueprint add-support-store-volume-format-info
Change-Id: I43036837274a7c8dba612db53b34a6ce2cfb2f07
The Cinder driver for PowerStore now supports volumes/snapshots with
replication enabled, in accordance with the OpenStack volume
replication specification.
Implements: blueprint powerstore-replication-support
Change-Id: I94d089374dee76d401dc6cf83a9c594779e7eb3e
This will fail due to overly restrictive regex validation:
cinder manage hostgroup@cloud#[dead:beef::cafe]:/cinder01 abcd
This is because validation for this call checks for a valid
hostname but needs to check for any string that would be a valid
"host" in Cinder, which is not the same thing.
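The distinction can be illustrated with two assumed patterns (neither
is the actual validation spec; they only show why a hostname check is
too strict for a Cinder "host"):

```python
import re

# A strict hostname pattern rejects Cinder's 'backend@pool#pool_name'
# host form, while a permissive "any non-whitespace string" pattern,
# closer to what a valid Cinder "host" needs, accepts it.
HOSTNAME_RE = re.compile(r'^[A-Za-z0-9.-]+$')
CINDER_HOST_RE = re.compile(r'^\S+$')

host = 'hostgroup@cloud#[dead:beef::cafe]:/cinder01'
assert HOSTNAME_RE.match(host) is None
assert CINDER_HOST_RE.match(host) is not None
```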
Closes-Bug: #1904892
Change-Id: I1349e8d3eb422f9dcd533c54f922f7ab8133b753