Introduces microversion (MV) 3.63, which adds the volume type ID
to the volume details response.
This change fixes a problem we found in Gnocchi concerning volume
monitoring data. When a volume event is created, Cinder uses the
volume type ID as the volume type. On the other hand, Ceilometer
pollsters are always updating the volume_type value in Gnocchi with
the volume type from the volume details API (which uses the volume
type name value).
That situation creates a massive number of resource revisions in
Gnocchi. This MV, along with the dynamic pollster system in
Ceilometer, enables operators to overcome the issue.
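A minimal sketch of what such a microversion-gated field could look like (illustrative names only; this is not the actual Cinder view-builder code, and the constant below is a hypothetical stand-in for the 3.63 threshold):

```python
# Hedged sketch: expose the volume type ID in the detail view only
# when the request microversion is at least 3.63.
MV_VOLUME_TYPE_ID = (3, 63)  # hypothetical constant for the new MV

def build_volume_detail(volume, request_mv):
    """Build a minimal detail dict for a volume."""
    detail = {
        'id': volume['id'],
        'volume_type': volume['volume_type_name'],  # name, as before
    }
    if request_mv >= MV_VOLUME_TYPE_ID:
        # New in MV 3.63: expose the stable ID so pollsters can match
        # the value Cinder uses in volume events.
        detail['volume_type_id'] = volume['volume_type_id']
    return detail
```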
Closes-Bug: https://bugs.launchpad.net/cinder/+bug/1911660
Change-Id: Icb88faeb00040250a25a5630aeb312a8434ed3c8
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
When returning volume details, hide the host name if the user is a
non-admin.
Change-Id: Iaf0ac52d9227f9a0efbf32b1faca78c8456a84ca
Closes-Bug: #1740950
With the new v3 Cinder attach flow, Nova calls attachment_update
and then attachment_complete. During that time the volume has
'attaching' status. Because of that status, showing the volume
returns an empty attachment list, even if the volume has
attachments. Currently this only happens during migration, but once
multi-attach is working it will be a general issue.
The change is to the show-volume flow: it no longer looks at the
volume's attach status, but only at the individual attachment
statuses. Any attachment with a status of 'attached' will be
returned by show.
(This will only be an issue in Queens when the new multi-attach flow
is used. There is no need for this change in Pike.)
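The new behavior can be sketched as a small filter over each attachment's own status (illustrative only, not the actual Cinder code):

```python
# Hedged sketch: derive the visible attachment list from each
# attachment's own status instead of the volume's top-level
# attach_status, as described above.
def visible_attachments(attachments):
    """Return only attachments whose own status is 'attached'."""
    return [a for a in attachments if a.get('attach_status') == 'attached']
```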
Partially Implements: blueprint cinder-new-attach-apis
Change-Id: I8a2bff5a668ec58ee80c192cb72326f2b3599c39
Closes-Bug: 1713521
CG APIs work as follows:
* Create CG - Create only in groups table
* Modify CG - Modify in CG table if CG in CG table, otherwise modify
in groups table.
* Delete CG - Delete from CG or groups table depending on where it is
* List CG - Check both CG and groups tables
* List CG snapshots - Check both CG and groups tables
* Show CG - Check both tables
* Show CG snapshot - Check both tables
* Create CG snapshot - Create either in CG or groups table depending on
the CG.
* Create CG from source - Create in either CG or groups table
depending on the source.
* Create volume - Add volume either to CG or group
Additional notes:
* default_cgsnapshot_type is reserved for migrating CGs.
* Group APIs will only write/read in/from the groups table.
* Group APIs won't work on groups with default_cgsnapshot_type.
* Groups with default_cgsnapshot_type can only be operated by CG APIs.
* After CG tables are removed, we'll allow default_cgsnapshot_type
to be used by group APIs.
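The dual-table routing described above can be sketched as a lookup helper (illustrative only; plain dicts stand in for the legacy consistencygroups table and the new groups table):

```python
# Hedged sketch of the routing rule: modify/delete/show consult the
# legacy CG store first, then fall back to the groups store.
def find_cg(cg_id, cg_table, groups_table):
    """Locate a CG, preferring the legacy consistencygroups table."""
    if cg_id in cg_table:
        return 'consistencygroups', cg_table[cg_id]
    if cg_id in groups_table:
        return 'groups', groups_table[cg_id]
    raise KeyError(cg_id)
```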
Partial-Implements: blueprint generic-volume-group
Change-Id: Idd88a5c9587023a56231de42ce59d672e9600770
This change adds a new enum and field, VolumeAttachStatus and
VolumeAttachStatusField, that hold the constants for the
'attach_status' field of the Volume object. The enum and field are
based on the base oslo.versionedobjects enum and field.
This also switches the volume object over to the new field. Finally,
all uses of strings for comparison and assignment to this field are
changed to use the constants defined within the enum.
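A stdlib-only sketch of the pattern (the real code subclasses the oslo.versionedobjects base Enum and Field classes, which are omitted here; status values shown are illustrative):

```python
# Hedged sketch: named constants replace bare string literals for
# the attach_status field.
class VolumeAttachStatus(object):
    ATTACHED = 'attached'
    ATTACHING = 'attaching'
    DETACHED = 'detached'
    ERROR_ATTACHING = 'error_attaching'
    ERROR_DETACHING = 'error_detaching'

    ALL = (ATTACHED, ATTACHING, DETACHED,
           ERROR_ATTACHING, ERROR_DETACHING)

def is_attached(volume):
    # Compare against the constant rather than a string literal.
    return volume['attach_status'] == VolumeAttachStatus.ATTACHED
```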
Partial-Implements: bp cinder-object-fields
Change-Id: Ie727348daf425bd988425767f9dfb82da4c3baa8
This feature introduces a 'managing' and an 'error_managing'
status into the managing process, and an 'error_managing_deleting'
status into the deleting process, to fix the quota-decrease
issue when an exception is raised in c-vol. If a volume is
in 'error_managing', quota will not be decreased when deleting
the volume. We still expose the 'creating', 'error' and 'deleting'
statuses to users for API compatibility.
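The quota rule above can be sketched as follows (a hedged sketch, not the actual Cinder quota code): a volume that failed while being managed never consumed quota, so deleting it must not decrease quota:

```python
# Hedged sketch of the described rule; status names follow the text.
def should_decrease_quota(volume_status):
    # Volumes that failed to be managed never consumed quota, so
    # their deletion must not release any.
    return volume_status != 'error_managing'
```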
Change-Id: I5887c5f2ded6d6a18f497a018d5bf6105bc5afd7
Closes-Bug: #1504007
Implements: blueprint refactor-volume-status-in-managing-vol
In some modules the global LOG is no longer used, and the import
of logging is unused. This patch removes the unused logging imports
and LOG variables.
Co-Authored-By: ChangBo Guo(gcb) <eric.guo@easystack.cn>
Change-Id: Ia72f99688ce00aeecca9239f9ef123611259a2fa
This patch updates the get_volume and delete_volume APIs to use
volume versionedobjects. Changes were made to be backwards
compatible with older RPC clients. It only includes changes to the
core Cinder code; changes in the drivers are left to each driver
maintainer.
Note that this patch DOES NOT try to use
object dot notation everywhere, since it would
increase the size of the patch. Instead, it
will be done in subsequent patches.
Co-Authored-By: Michal Dulko <michal.dulko@intel.com>
Co-Authored-By: Szymon Wroblewski <szymon.wroblewski@intel.com>
Change-Id: Ifb36726f8372e21d1d9825d6ab04a072e5db4a6a
Partial-Implements: blueprint cinder-objects
Currently, when querying Cinder resource (volume/snapshot/backup)
details, the response only shows created_at to the end user and does
not contain the updated_at property.
The updated_at time is important information for end users; for
example, they can delete unused volumes by checking whether the
volumes have been updated within a given period. Users will also be
able to filter resources by changes-since across all projects in the
future. Although Cinder does not support that yet, this is the first
step toward letting users know the time of the last update.
Additionally, if a volume is 'in-use', the attached_at time should
also be returned.
APIImpact
Add "updated_at: XXX" to the response body when querying resource
details.
Add "attached_at: XXX" to each attachment in the resource detail
response.
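A minimal sketch of the resulting response shape (illustrative field and key names, not the actual Cinder view builder):

```python
# Hedged sketch: always echo updated_at, and include attached_at per
# attachment when the volume is in-use, as described above.
def build_detail(volume):
    detail = {
        'id': volume['id'],
        'created_at': volume['created_at'],
        'updated_at': volume.get('updated_at'),  # newly exposed
    }
    if volume['status'] == 'in-use':
        detail['attachments'] = [
            {'server_id': a['server_id'], 'attached_at': a['attach_time']}
            for a in volume.get('attachments', [])
        ]
    return detail
```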
Change-Id: I8353e95f5a962f3b89132a65c31999218cbe79dc
Implements: blueprint improve-timestamp-in-response-when-querying-cinder-resources
This patch proposes a new implementation of the status and the
migration_status for volumes.
* The initial migration_status is None, meaning no migration has been
done. Migration_status 'error' means the previous migration failed;
migration_status 'success' means the previous migration succeeded.
* If the key 'lock_volume' is set to True in the request, the volume
status is set to 'maintenance' during migration and goes back to
its original status after migration. Otherwise, if 'lock_volume' is
set to False, the volume status remains the same as its original
status. The default value for lock_volume is False, and it applies
to available volumes.
* From the REST API's perspective, all create, update and delete
actions are disallowed while the volume is in 'maintenance',
because that means the volume is out of service. If it is not in
maintenance mode, the migration can be interrupted if other
requests are issued, e.g. attach. Termination of a migration will
be addressed by another patch.
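The status rules above can be sketched in a few lines (a hedged sketch, not the actual Cinder logic; function names are illustrative):

```python
# Hedged sketch: with lock_volume=True the volume is placed in
# 'maintenance' for the duration of the migration, otherwise it
# keeps its original status.
def status_during_migration(current_status, lock_volume):
    return 'maintenance' if lock_volume else current_status

def is_request_allowed(volume_status):
    # Create, update and delete actions are rejected while the
    # volume is out of service in maintenance mode.
    return volume_status != 'maintenance'
```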
DocImpact
APIImpact The key 'lock_volume' has been added into the API,
telling the volume to change the status to 'maintenance' or not.
The migration_status has been added into results returned
from volume list command, if the request is from an admin.
Change-Id: Ia86421f2d6fce61dcfeb073f8e7b9c9dde517373
Partial-implements: blueprint migration-improvement
Currently, when no limit is set on a volume list query, we retrieve
all volumes and then limit them locally using osapi_max_limit. A
similar thing happens when using a marker for subsequent pages: we
get all volumes from that marker to the last volume and then limit
locally.
We should be limiting on the DB side so we only retrieve the data we
are actually going to return to the API caller.
This patch always limits the data retrieved from the DB; for the
offset to keep working as before, the offset is applied on the DB
side as well.
For reference, some tests were performed on a deployment with 60,000
volumes, 370,000 volume_metadata items and 240,000
volume_glance_metadata items in the Cinder DB. Before the patch this
used nearly 10 GB of memory; with the patch it uses only about
500 MB.
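The idea can be sketched with stdlib sqlite3 (illustrative schema; the real code builds SQLAlchemy queries): push LIMIT and OFFSET into the query instead of fetching every row and slicing in Python:

```python
import sqlite3

# Hedged sketch: the database, not the API layer, truncates the
# result set, so only the rows to be returned are ever fetched.
def list_volumes(conn, limit, offset):
    cur = conn.execute(
        "SELECT id FROM volumes ORDER BY id LIMIT ? OFFSET ?",
        (limit, offset))
    return [row[0] for row in cur.fetchall()]
```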
Co-Authored-By: wangxiyuan <wangxiyuan@huawei.com>
Closes-bug: #1483165
Change-Id: Ie903e546074fe118299e8e1acfb9c88c8a10d78c
PEP-0274 introduced dict comprehensions to replace the dict
constructor with a sequence of key/value pairs [1]. These are the
benefits:
First, it makes the code look neater.
Second, it gains a micro-optimization.
Cinder dropped Python 2.6 support in Kilo, so we can leverage this now.
Note: This commit doesn't handle dict constructors with kwargs.
This commit also adds a hacking rule.
[1]http://legacy.python.org/dev/peps/pep-0274/
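In miniature, the rewrite this commit applies:

```python
# Before: dict constructor over a sequence of key/value pairs.
items = ['a', 'bb', 'ccc']
lengths_old = dict((s, len(s)) for s in items)

# After: dict comprehension (PEP 274) - neater and slightly faster.
lengths_new = {s: len(s) for s in items}
```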
Change-Id: Ie65796438160b1aa1f47c8519fb9943e81bff3e9
This patch resolves two cases:
1) Currently, if the number of items equals osapi_max_limit, or
   the last page of items equals osapi_max_limit without the
   limit parameter set in the user request, a next link is generated
   in the response, even though that link returns an empty volume
   list. It is unnecessary to generate the next link in this case.
2) If the number of items equals osapi_max_limit and limit is
   greater than osapi_max_limit, a next link is generated. The
   next link does not need to be generated, because it is certain
   that no more volumes are left in the database.
The method _get_collection_links is called for volumes,
volume_transfers and backups, so this patch only affects next-link
generation for those three. Other lists, such as consistency groups,
qos specs and cgsnapshots, have not implemented next-link
generation; that is potentially a wishlist item for them.
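One common way to implement the intent (a hedged sketch, not the exact _get_collection_links change) is to fetch one item beyond the page size, so a next link is produced only when more items truly exist:

```python
# Hedged sketch: fetch limit + 1 items; the extra item, if present,
# proves there is a next page, so an empty "next" page can never be
# advertised.
def paginate(all_items, limit):
    page = all_items[:limit + 1]
    has_next = len(page) > limit
    return page[:limit], has_next
```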
Change-Id: I0f1f449c73d51675281497a095d869c1e72c889f
closes-bug: #1350558
This patch includes the Cinder changes needed
to support volume multiple attaches. Nova and
python-cinderclient also need patches associated
to provide support for multiple attachments.
This adds the multiattach flag to volumes. When a volume is
created, the multiattach flag can be set, which allows the volume
to be attached to more than one Nova instance or host. If the
multiattach flag is not set on a volume, it cannot be attached to
more than one Nova instance or host.
Each volume attachment is tracked in a
new volume_attachment table. The attachment id is
the unique identifier for each attachment to an
instance or host.
When a volume is to be detached, the attachment UUID must be
passed to the detach call in order to determine which attachment
should be removed. Since a volume can be attached to both an
instance and a host, the attachment id is used as the attachment
identifier.
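Detach-by-attachment-id can be sketched as follows (illustrative only; the real flow goes through the volume_attachment table and the volume manager):

```python
# Hedged sketch: with multiattach, the volume alone no longer
# identifies what to remove, so the caller passes the attachment id.
def detach(volume, attachment_id):
    before = len(volume['attachments'])
    volume['attachments'] = [
        a for a in volume['attachments'] if a['id'] != attachment_id]
    if len(volume['attachments']) == before:
        raise KeyError(attachment_id)
    if not volume['attachments']:
        # Only when the last attachment is gone does the volume
        # become available again.
        volume['status'] = 'available'
    return volume
```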
Nova:
https://review.openstack.org/#/c/153033/
https://review.openstack.org/#/c/153038/
python-cinderclient:
https://review.openstack.org/#/c/85856/
Change-Id: I950fa00ed5a30e7758245d5b0557f6df42dc58a3
Implements: blueprint multi-attach-volume
APIImpact
This patch enables Consistency Groups support in Cinder.
Snapshot support for CGs will be implemented in phase 1.
Design
------------------------------------------------
The workflow is as follows:
1) Create a CG, specifying all volume types that the CG must support.
The scheduler chooses a backend that supports all specified volume types.
The CG is empty when it is first created. The backend needs to report
consistencygroup_support = True. A volume type can have the following
extra spec: {'capabilities:consistencygroup_support': '<is> True'}.
If consistencygroup_support is not in the volume type extra specs, the
scheduler adds it to filter_properties to make sure it selects a
backend that reports the consistency group support capability.
Create CG CLI:
cinder consisgroup-create --volume-type type1,type2 mycg1
This will add a CG entry in the new consistencygroups table.
2) After the CG is created, create a new volume and add to the CG.
Repeat until all volumes are created for the CG.
Create volume CLI (with CG):
cinder create --volume-type type1 --consisgroup-id <CG uuid> 10
This will add a consistencygroup_id foreign key in the new volume
entry in the db.
3) Create a snapshot of the CG (cgsnapshot).
Create cgsnapshot CLI:
cinder cgsnapshot-create <CG uuid>
This will add a cgsnapshot entry in the new cgsnapshots table, create
snapshot for each volume in the CG, and add a cgsnapshot_id foreign key
in each newly created snapshot entry in the db.
DocImpact
Implements: blueprint consistency-groups
Change-Id: Ic105698aaad86ee30ef57ecf5107c224fdadf724
This is take #2 for managing replication in Cinder.
This patch provides the foundation in Cinder to make volume
replication available to the cloud admin. It makes Cinder aware
of volume replicas, and allows the cloud admin to define storage
policies (volume types) that will enable replication.
In this version, Cinder delegates most of the work on replication
to the driver itself.
This includes:
1. The driver exposes replication capabilities via a volume type
   convention.
2. Extend the volume table with columns to support replication.
3. Create replicas in the driver, making it transparent to Cinder.
4. Volume manager code to handle the API, and updates to
   create_volume to support creating test replicas.
5. Driver methods to expose per-replication functions.
Cinder-specs available at https://review.openstack.org/#/c/98308/
Volume replication use case: simplified disaster recovery.
The OpenStack cloud is deployed across two metro-distance data
centers. Storage backends are available in both data centers. The
backends are managed by either a single Cinder host or two,
depending on the storage backend requirements.
The storage admin configures the Cinder volume driver to support
replication.
The cloud admin creates a volume type "replicated" with the extra spec:
capabilities:replication="<is> True"
Every volume created with type "replicated" has a copy on both
backends.
In case of a failure in the first data center, the cloud admin
promotes the replica and redeploys the VMs; they will then run on
a host in the secondary data center, using that data center's
storage.
Implements: blueprint volume-replication
DocImpact
Change-Id: I964852f08b500400a27bff99e5200386e00643c9
When executing a pagination query, a "next" link is included in the
API reply when there are more items than the specified limit.
See the pagination documentation for more information:
http://docs.openstack.org/api/openstack-compute/2/content/Paginated_Collections-d1e664.html
The caller should be able to invoke the "next" link (without
having to re-format it) in order to get the next page of data.
The documentation states "Subsequent links will honor the
initial page size. Thus, a client may follow links to traverse
a paginated collection without having to input the marker parameter."
The problem is that the "next" link is always scoped to the non-
detailed query.
For example, if you execute "/v2/<tenant>/volumes/detail?limit=1",
the "next" link does not contain the URL for a detailed query and is
formatted as "/v2/<tenant>/volumes?limit=1&marker=<marker>". In this
case the "next" link needs to be scoped to
"/v2/<tenant>/volumes/detail".
The user could work around this issue by manually inserting
'/detail' into the "next" link URL.
Test code is included to verify that the '/detail' URL is correctly
added when the "next" link is included in a detailed pagination
query. Also, existing tests were changed to ensure that the correct
controller function ('index' vs. 'detail') is invoked for the
appropriate query (non-detailed or detailed); 'index' was previously
always invoked for detailed URL requests.
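The URL fix can be sketched in a few lines (illustrative helper, not the actual view-builder code):

```python
# Hedged sketch: when the original request was a detailed list, the
# generated "next" href must keep the /detail suffix so the
# follow-up call also returns detailed records.
def next_href(base_path, detailed, limit, marker):
    path = base_path + ('/detail' if detailed else '')
    return '%s?limit=%d&marker=%s' % (path, limit, marker)
```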
Change-Id: Ib00d6deb25255fac1db0f7bf4ecd3c8d30e1c39d
Closes-bug: 1299247
Currently, the only way to determine whether a volume is encrypted
is to retrieve the encryption metadata for the volume (through
cinder.api.contrib.volume_encryption_metadata) and check the
encryption_key_id value. This patch adds an "encrypted" flag to
the basic volume API to enable other services (like Horizon: see
patch https://review.openstack.org/#/c/71125) to easily tell
whether a volume is encrypted using a basic get call, instead of
requiring an additional call.
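The flag boils down to a one-line predicate (a sketch of the described behavior; the real code sets the flag in the volume view):

```python
# Hedged sketch: a volume is reported as encrypted exactly when it
# carries an encryption key reference, sparing callers the extra
# encryption-metadata request.
def is_encrypted(volume):
    return volume.get('encryption_key_id') is not None
```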
Implements blueprint encrypt-cinder-volumes
https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes
Change-Id: Id8e422135f17795de06589930afd0309fde28fd1
Add bootable to the list of values returned by display_list. This
was returned in the v1 API, and is still in the v2 header, but was
missed in the new implementation.
Closes-Bug: #1232287
Change-Id: If7460b1c8ab4af417117c4bf6cfdccc5fcf21f46
Change the 'os-attach' API interface to allow a client to mark a
volume as attached to a host.
Implements bp: volume-host-attaching
DocImpact
Change-Id: Iaf442ad0fb37ce369d838f3a512724f830071763
Signed-off-by: Zhi Yan Liu <zhiyanl@cn.ibm.com>
This was left over from the UUID switch. Looking up volume_types by name
is deprecated in v1 and removed in v2.
Fixes: bug #1087817
Change-Id: I02cc035565f9496cd5af228c55ced5cafef2ad81
In REST v2 API volume responses, the key display_description is
changed to description, for consistency with other modules and
projects.
Change-Id: Ibafa2865c6c4a73fc1cf4694ae77a6709fd2e6e5
This implements the capability to create usable volume clones in
Cinder. For the LVM case we create a temporary snapshot to copy
from, so that volumes can remain attached during cloning. This works
by passing a source-volume-id to the create command (similar to
create-from-snapshot).
Currently we limit cloning to the same Cinder node, and only the
base LVM driver implements it. All other drivers should raise
NotImplementedError; most inherit from SanISCSIDriver, so the
function is moved there and raises until we have a general
implementation for SanISCSI-based drivers.
Drivers that inherit from ISCSI directly instead of SanISCSI add the
function explicitly and raise NotImplementedError there as well.
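The driver layout described above can be sketched as follows (class and method bodies are simplified stand-ins, not the actual Cinder driver code):

```python
# Hedged sketch: the base SAN iSCSI driver refuses cloning; only the
# LVM driver implements it, copying via a temporary snapshot so the
# source volume can remain attached.
class SanISCSIDriver(object):
    def create_cloned_volume(self, volume, src_vref):
        raise NotImplementedError()

class LVMDriver(SanISCSIDriver):
    def create_cloned_volume(self, volume, src_vref):
        # Snapshot the source first, then copy from the snapshot.
        snapshot = self._temp_snapshot(src_vref)
        return {'id': volume['id'], 'copied_from': snapshot['of']}

    def _temp_snapshot(self, src_vref):
        # Placeholder for creating (and later deleting) a temporary
        # LVM snapshot of the source volume.
        return {'of': src_vref['id']}
```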
Implements blueprint add-cloning-support-to-cinder
Change-Id: I72bf90baf22bec2d4806d00e2b827a594ed213f4
This allows the v2 API responses to be more consistent with other
projects. This approach makes the idea compatible in v2, rather
than changing the model.
blueprint name-attr-consistency
Change-Id: I74e46b4204353d5712be788fe2138784b2a72305
This introduces the v2 volume view builder to make response changes
easier.
blueprint vol-api-consistency
Change-Id: If061e069d3b09ee5de15f1cbc7a46fa29c95a4cd