Incremental backups only work if there's a previous backup to base
it on. With the posix driver, this means there needs to be a previous
backup on the same host where the incremental backup is created.
This patch ensures that an incremental backup is scheduled on the
host that contains the base backup being used for the increment.
Previously we were relying on luck for this, and much of the time an
incremental backup would fail for want of a base backup.
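A minimal sketch of the scheduling idea, assuming hypothetical helper
names (get_latest_backup, base.host); it is not the actual Cinder
scheduler code:

    def schedule_incremental_backup(volume_id, backup_db):
        """Pick the host that must run an incremental backup."""
        base = backup_db.get_latest_backup(volume_id)
        if base is None:
            raise ValueError('an incremental backup needs a base backup')
        # With the posix driver the base backup only exists on one host,
        # so the increment has to be created on that same host.
        return base.host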
Closes-bug: 1952805
Change-Id: Id239b4150b1c8e9f4bf32f2ef867fdffbe84f96d
When uploading a volume to an image, send the new-style
cinder://<store-id>/<volume-id> URL to the Glance API if
image_service:store_id is present in the volume type extra specs.
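A hedged sketch of how the location URL could be built; the extra-spec
key image_service:store_id comes from this change, while the helper
name is illustrative:

    def build_image_location(volume_id, extra_specs):
        store_id = extra_specs.get('image_service:store_id')
        if store_id:
            # New-style URL telling Glance which store holds the data.
            return 'cinder://%s/%s' % (store_id, volume_id)
        # Legacy URL without a store identifier.
        return 'cinder://%s' % volume_id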
Closes-Bug: #1978020
Co-Authored-By: Rajat Dhasmana <rajatdhasmana@gmail.com>
Change-Id: I815706f691a7d1e5a0c54eb15222417008ef1f34
[Spectrum Virtualize family] As part of the FlashCopy 2.0
implementation, added support for creation and deletion of
volumegroup snapshots.
Implements: blueprint ibm-svf-volumegroup
Change-Id: Ibb21dc473115ffc2e61fced0a6a7b2da8852fd58
The attach operation fails when the volume and the back-end host are
in different I/O groups. The assignment of the node IP addresses was
done outside the try block and the "for ip_data in lsip_resp" loop;
because of this the loop was not consistent, and only the latest node
was detected instead of all nodes.
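An illustrative sketch of the per-node collection pattern, not the
Storwize driver code itself; the field names in the sample data are
assumptions:

    lsip_resp = [
        {'node_id': '1', 'IP_address': '10.0.0.1'},
        {'node_id': '2', 'IP_address': '10.0.0.2'},
    ]

    node_ips = {}
    for ip_data in lsip_resp:
        # Collect the address of every node inside the loop instead of
        # keeping only the value left over from the last iteration.
        node_ips.setdefault(ip_data['node_id'], []).append(
            ip_data['IP_address'])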
Closes-Bug: #1985065
Change-Id: I72074357140e9817eee704b6e9ebfd7f50343235
[Spectrum Virtualize family] During creation of replicated volumes,
lsmdiskgrp calls were made at L919 in the storwize_svc_common.py file
because the secondary pool information was not available in stats.
The lsmdiskgrp SSH calls are now optimized to reduce the computational
time by storing the secondary pool information in stats.
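A rough sketch of the caching idea, assuming a callable that issues
the lsmdiskgrp SSH command; the real driver keeps the data in its
stats structure:

    class SecondaryPoolCache(object):
        def __init__(self, ssh_lsmdiskgrp):
            self._lsmdiskgrp = ssh_lsmdiskgrp  # issues the SSH command
            self._pools = {}

        def get(self, pool_name):
            # Go over SSH only on a cache miss; later calls reuse the
            # stored information.
            if pool_name not in self._pools:
                self._pools[pool_name] = self._lsmdiskgrp(pool_name)
            return self._pools[pool_name]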
Closes-Bug: #1978290
Change-Id: Iea7a7b7d7ce226b3c29aba9641b052ee7fe8614f
Fixed the Infinidat driver to take the group identifier property into
account when creating a new volume and to add the volume to the
consistency group.
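A minimal sketch of the intent, with illustrative helper names rather
than the Infinidat driver API:

    def create_volume(volume, backend):
        backend.create_volume(volume.name, volume.size)
        # Honour the group identifier: if the volume belongs to a
        # consistency group, add it to that group on the backend.
        if volume.group_id and backend.is_consistency_group(volume.group_id):
            backend.add_volume_to_group(volume.name, volume.group_id)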
Closes-bug: #1984000
Change-Id: Ie5b19330c4d2b7018bde615494b58b9f89945e02
Signed-off-by: Alexander Deiter <adeiter@infinidat.com>
Also add a more general check to convert_image that the image format
reported by qemu-img matches what the caller says it is.
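A minimal sketch of such a check, assuming nothing about Cinder's
image_utils beyond what the message states; "qemu-img info
--output=json" is a standard qemu-img invocation:

    import json
    import subprocess

    def check_image_format(path, claimed_format):
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', path])
        reported = json.loads(out).get('format')
        if reported != claimed_format:
            raise ValueError(
                'caller claims %s but qemu-img reports %s'
                % (claimed_format, reported))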
Change-Id: I3c60ee4c0795aadf03108ed9b5a46ecd116894af
Partial-bug: #1996188
Added a check for multiple attachments to the volume from the same
connector, and terminate the connection only for the last attachment
from the corresponding host.
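A hedged sketch of the check; the attachment records and fields are
simplified stand-ins, not Cinder's actual attachment objects:

    def is_last_attachment_from_host(attachments, connector):
        same_host = [a for a in attachments
                     if a.get('attached_host') == connector.get('host')]
        # Unmap on the backend only when this is the last attachment
        # from that host.
        return len(same_host) <= 1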
Closes-bug: #1982350
Change-Id: Ibda3a09a181160b8ee9129795429a7f1795e907d
Signed-off-by: Alexander Deiter <adeiter@infinidat.com>
- Use volume name_id to resolve Infinidat volume name.
- Add update_migrated_volume function to fix generic volume
migration between two pools within the same storage cluster.
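An illustrative sketch only; the naming scheme shown is an assumption,
not the Infinidat driver's actual convention:

    def backend_volume_name(volume):
        # name_id points at the original backend object after a generic
        # migration and falls back to id for volumes never migrated.
        return 'openstack-vol-%s' % (volume.name_id or volume.id)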
Closes-bug: #1982405
Change-Id: I80923f9968de31738c554b77c0942a6182f0e67c
Signed-off-by: Alexander Deiter <adeiter@infinidat.com>
The nvmet target was using the nvmetcli command to manage the NVMe-oF
targets, but the command has a big limitation when it comes to
controlling its behaviour through the command line: it can only
restore its entire configuration, and small parts, such as ports or
subsystems, cannot be directly modified without entering interactive
mode.
Due to this limitation the nvmet target would:
- Save current nvmet configuration
- Make changes to the json data
- Restore the updated configuration
The problem with this process, besides being slow because it runs a
CLI command and uses temporary files, is that the restoration
completely deletes EVERYTHING before recreating it again. This means
that all hosts that are connected to volumes suddenly lose access to
them (because the namespaces and subsystems have disappeared) and keep
retrying to connect. The reconnect succeeds after the configuration
has been restored by nvmet, but that is 10 to 20 seconds during which
hosts cannot access the volumes (which may block things in VMs) and
nvme kernel error messages are logged on the hosts.
To fix all of these issues, both speed and the disconnects, this patch
stops using nvmetcli as a CLI and uses it as a Python library instead,
since that is where most of its functionality lives.
Querying the nvmet system can be done directly with the library, but
making changes (creating/destroying ports, subsystems, namespaces)
requires privileges, so this patch adds a privsep wrapper for the
operations that we use and that cannot be done as a normal user.
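A hedged sketch of the query side only, which needs no privileges; the
nvmet package shipped with nvmetcli is assumed to expose the running
configuration as objects (Root, subsystems, nqn), so treat the exact
attribute names as assumptions:

    import nvmet

    def list_subsystem_nqns():
        # Read the running nvmet configuration through the library
        # instead of shelling out to nvmetcli and parsing JSON dumps.
        root = nvmet.Root()
        return [subsys.nqn for subsys in root.subsystems]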
The nvmet wrapper doesn't provide privsep support for ALL operations,
only for those that we currently use.
Due to the client-server architecture of privsep and the fact that
nvmet uses non-primitive instances as parameters, the privsep wrapper
needs custom serialization code to pass these instances.
As a side effect of the refactoring we also fix a bug where we tried
to create the port over and over again on each create_export call,
which resulted in nvme kernel warning logs.
Closes-Bug: #1964391
Closes-Bug: #1964394
Change-Id: Icae9802713867fa148bc041c86beb010086dacc9
When a backup creation fails, we try to update the status of the
source volume. If that volume no longer exists, the status update
itself fails and the backup gets stuck in the 'creating' status, which
means a cloud admin has to help by resetting it.
This change catches the VolumeNotFound exception and skips the volume
status update, so the backup is set to 'error' and can be deleted by
the user instead.
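A minimal sketch of the pattern, assuming cinder.exception.VolumeNotFound
and a db.volume_update call; the surrounding helper and the status value
passed in are illustrative:

    from cinder import exception

    def restore_source_volume_status(db, context, volume_id,
                                     previous_status):
        try:
            db.volume_update(context, volume_id,
                             {'status': previous_status})
        except exception.VolumeNotFound:
            # The volume is gone; skip the update so the backup itself
            # can still be set to 'error' and deleted by the user.
            pass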
Closes-Bug: #1996049
Change-Id: Iafa92ece7f83323af257c1702df5029469b11739
This patch adds multi-pool support for the Hitachi VSP driver and the
NEC V driver.
Implements: blueprint hitachi-vsp-add-multi-pool
Change-Id: I49ac061011293900b04a7a5b90ff5b840521993d
Currently snapshot delete requires access to the source volume and
the operation fails if the source volume doesn't exist in the backend.
This prevents some snapshots from being deleted when the source volume
image is deleted from the backend for some reason (for example, after
cluster format).
This change makes the rbd driver skip updating the source volume if
it does not exist. A warning is logged so that operators are aware of
any skipped update.
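A hedged sketch of the pattern with the python rbd bindings;
rbd.ImageNotFound and the rbd.Image context manager come from that
library, while the helper wrapped around them is illustrative:

    import logging

    import rbd

    LOG = logging.getLogger(__name__)

    def update_source_volume_if_present(ioctx, volume_name, update_fn):
        try:
            with rbd.Image(ioctx, volume_name) as image:
                update_fn(image)
        except rbd.ImageNotFound:
            LOG.warning('Source volume %s not found in the backend; '
                        'skipping its update during snapshot delete.',
                        volume_name)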
Closes-Bug: #1957073
Change-Id: Icd9dad9ad7b3ad71b3962b078e5b94670ac41c87
Use `driver_use_ssl` under the backend section to enable or disable
TLS/SSL communication between the Cinder volume service and the
storage backend, and `suppress_requests_ssl_warnings` under the
backend section to suppress requests library SSL certificate warnings.
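An illustrative cinder.conf backend section; the backend name and the
volume_driver line are examples, only the two options named above come
from this change:

    [infinidat-iscsi]
    volume_driver = cinder.volume.drivers.infinidat.InfiniboxVolumeDriver
    driver_use_ssl = true
    suppress_requests_ssl_warnings = false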
Closes-bug: #1981982
Change-Id: I4b302ffd1d0bef673143cd7db427bb9aac27ba33
Signed-off-by: Alexander Deiter <adeiter@infinidat.com>