Quotas are required to exist in the API database as we need to enforce
quotas across cells.
blueprint cells-quota-api-db
Change-Id: I52fd680eaa4880b06f7f8d4bd1bb74920e73195d
We will store custom resource classes in the new resource_classes table.
These custom resource classes represent non-standardized resource
classes. Followup patches add the plumbing to handle existing
standardized classes currently using the fields.ResourceClass field type
and to perform CRUD operations against a /resource-classes REST API
endpoint.
Change-Id: I60ea0dcb392c1b82fead4b859fc7ed6b32d4bda0
blueprint: custom-resource-classes
The DeleteFromSelect function was added to sqla.utils, which is mostly
the home of two shadow-table functions that need to import api.py to
get constants related to shadow tables. We now use that function
exclusively in api.py for the archiving code, which causes some
oddities around module imports.
This moves DeleteFromSelect so we can get rid of these odd late
imports.
Change-Id: I59c3444c2258f59a09a9c885bd9490055e278998
This is something I expect has been very broken for a long time. We
have rows in tables such as instance_extra, instance_faults, etc that
pertain to a single instance, and thus have a foreign key on their
instance_uuid column that points to the instance. If any of those
records exist, an instance cannot be archived out of the main
instances table.
The archive routine currently "handles" this by skipping over said
instances, and eventually iterating over all the tables to pull out
any records that point to that instance, thus freeing up the instance
itself for archival. The problem is, this only happens if those extra
records are actually marked as deleted themselves. If we fail during
a cleanup routine and leave some of them not marked as deleted, but
where the instance they reference *is* marked as deleted, we will
never archive them.
This patch adds another phase of the archival process for any table
that has an "instance_uuid" column, which attempts to archive records
that point to these deleted instances. With this, using a very large
real world sample database, I was able to archive my way down to
zero deleted, un-archivable instances (from north of 100k).
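The extra pass can be sketched with an in-memory SQLite schema; the table and column names below are simplified stand-ins for Nova's real schema, not the actual archive code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (uuid TEXT PRIMARY KEY, deleted INTEGER);
    CREATE TABLE instance_extra (
        id INTEGER PRIMARY KEY, instance_uuid TEXT, deleted INTEGER);
    -- A deleted instance whose instance_extra row was never marked deleted:
    INSERT INTO instances VALUES ('abc', 1);
    INSERT INTO instance_extra VALUES (1, 'abc', 0);
""")

# Normal archival only moves rows already marked deleted, so the
# instance_extra row above blocks 'abc' from ever being archived.
blocked = conn.execute("""
    SELECT COUNT(*) FROM instance_extra
    WHERE deleted = 0
      AND instance_uuid IN (SELECT uuid FROM instances WHERE deleted != 0)
""").fetchone()[0]
print(blocked)  # 1

# The extra phase: archive any row whose instance_uuid points at a
# deleted instance, regardless of the row's own deleted flag.
conn.execute("""
    DELETE FROM instance_extra
    WHERE instance_uuid IN (SELECT uuid FROM instances WHERE deleted != 0)
""")
remaining = conn.execute("SELECT COUNT(*) FROM instance_extra").fetchone()[0]
print(remaining)  # 0
```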
Closes-Bug: #1622545
Change-Id: I77255c77780f0c2b99d59a9c20adecc85335bb18
Currently, UUIDs are generated using either uuid.uuid4() or
uuidutils.generate_uuid(). To maintain consistency, we propose
using uuidutils.generate_uuid() from oslo_utils throughout.
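For illustration, oslo's generate_uuid() essentially returns str(uuid.uuid4()) in its default form; this minimal stand-in (not the real oslo_utils code) shows why funnelling all callers through one helper keeps the representation consistent:

```python
import uuid

def generate_uuid():
    """Stand-in for oslo_utils.uuidutils.generate_uuid()."""
    return str(uuid.uuid4())

# uuid.uuid4() returns a UUID object; generate_uuid() always returns
# the canonical dashed string, so callers agree on one representation.
raw = uuid.uuid4()
val = generate_uuid()
print(type(raw).__name__)  # UUID
print(len(val))            # 36
```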
Change-Id: I69d4b979fc0e37bc351f9b5516dae899e7e7dba0
This migration was added in mitaka and should have been done either in
mitaka or newton. Newton had some migrations that came after it which
were higher-visibility and should have forced everyone to have done
this by now.
Because of the nature of the migration, there isn't really an easy or
efficient way to validate that they have done this like we have done
for some other migrations.
Change-Id: Idb033e9e52b149372308eabb19c5774e10c56156
Soft-delete the corresponding 'migrations' table records when a VM
instance is deleted, and also soft-delete them when archiving deleted
rows, to cover the upgrade case.
Change-Id: Ica35ce2628dfcf412eb097c2c61fdde8828e9d90
Closes-Bug: #1584702
When deploying with a very large number of volumes the
block_device_mappings column is not sufficient. The column
needs to be increased to MEDIUMTEXT size to support this use case.
Change-Id: Ia34d06429c1f8f0a8259616bcba0c349c4c9aa33
Closes-Bug: #1621138
We are hitting deadlocks in the gate when we are inserting the new
instance_extra row into the DB.
We should follow up on this fix and look at ways to avoid the deadlock
happening rather than retrying it. It currently doesn't happen too
often, so this should be enough to stop the problem while we work on a
better fix.
Closes-Bug: #1480305
Change-Id: Iba218bf28c7d1e6040c551fe836d6fa5e5e45f4d
This reverts commit 1b5f9f8203c90fe447d33c89f238104026052d1e.
On IRC we agreed that no migrations should be placement-specific; we
should just use the API table migrations to generate the schema for
both DBs.
There is also a separate debate around the alias for the aggregates
table, but that is not really a reason to revert; it's just that
things in here will need rework.
Change-Id: I275945aee9d9be8e35d6ddc05515df39d559457a
Add the ability to sync the cell0 database using nova-manage.
The `db sync` command will be used to sync the cell0 database.
This ensures that operators will only have two db sync commands to
perform in the single cell case.
blueprint cells-cell0
Change-Id: I21ae13a6c029e8ac89484faa212434911160fd51
BandwidthUsage model has no UniqueConstraints.
In the 'bw_usage_cache' table in the nova db there is a single
autoincrement primary key, so the duplicate entry problem is handled
by the db itself and db_exc.DBDuplicateEntry cannot be raised in Nova.
Ideally we should add a UniqueConstraint to prevent multiple bw usage
records existing for the same date range and UUID. With that fix in
place we should be able to remove the .first() call and use .one()
instead. The current code that uses .first() is not correct because
no order_by() is applied to the SQL query, so the returned
"first record" is indeterminate.
This workaround removes the misleading note and exception and adds an
order_by() to ensure that the same record is updated every time.
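The indeterminacy can be seen with any SQL backend: without ORDER BY, which row a "first record" query returns is unspecified. A sketch with SQLite (column names simplified from the real table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bw_usage_cache (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        uuid TEXT, start_period TEXT, bw_in INTEGER);
    -- Two rows for the same uuid/date range (no UniqueConstraint stops this):
    INSERT INTO bw_usage_cache (uuid, start_period, bw_in)
        VALUES ('abc', '2016-01-01', 10), ('abc', '2016-01-01', 20);
""")

# Without ORDER BY the backend may return either row; with ORDER BY id
# the "first" row is deterministic, so updates always hit the same record.
row = conn.execute("""
    SELECT id, bw_in FROM bw_usage_cache
    WHERE uuid = 'abc' AND start_period = '2016-01-01'
    ORDER BY id LIMIT 1
""").fetchone()
print(row)  # (1, 10)
```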
Co-Authored-By: Sergey Nikitin <snikitin@mirantis.com>
Closes-bug: #1614561
Change-Id: I408bc3a3e5623965a619d8c7241e4e77c8bf44f5
If 'connection' is set in the 'placement_database' conf group use
that as the connection URL for the placement database. Otherwise if
it is None, the default, then use the entire api_database conf group
to configure a database connection.
This works by:
* adding 'placement sync' and 'placement version' commands to
nova-manage
* adding placement_migrations that the sync command will run
* adding a placement_database config group with the relevant
database configuration settings
* adding a placement_context_manager. If
CONF.placement_database.connection is None this is the same as
the api_context_manager; otherwise it is a new one from its own config
* adjusting nova/tests/fixtures to be aware of a 'placement' database
and the placement_context_manager
This version of this change differs from others by not requiring
separate placement commands for migration, instead using existing
tooling, which makes the size of the change a bit smaller and also
addresses problems with the test fixtures needing to be too aware of
what migration code to run. Now it runs the same code.
This functionality is being provided to allow deployers to choose
between establishing a new database now or requiring a migration
later. The default is migration later.
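A deployment opting into a separate placement database now would set the new option; when it is unset, the api_database settings are used instead. The connection URL below is purely illustrative:

```ini
[placement_database]
# If unset (the default), the api_database configuration is used instead.
connection = mysql+pymysql://nova:secret@dbhost/nova_placement
```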
This is a modification of Id93cb93a0f4e8667c8e7848aa8cff1d994c2c364
and I3290e26d0a212911f8ef386418b9fa08c685380b.
Change-Id: Ice03144376c9868c064e4393d531510615fc6fc1
Co-Authored-By: Chris Dent <cdent@anticdent.org>
Partially-Implements: blueprint generic-resource-pools
This change enhances the help texts of the database-related config
options. They live in the "DEFAULT" section of "nova.conf" and are
used at the DB layer, but they describe usage at the "compute" level.
They should probably live in a section other than "DEFAULT".
bp centralize-config-options-newton
Change-Id: I5caeff8bcb5f0b63f5ec4e79ab5f6cde16ae018f
Make db.security_group_get only join rules if specified in
the columns_to_join. This works around a performance issue
with lots of instances and security groups.
Co-Authored-By: Dan Smith <dansmith@redhat.com>
Change-Id: Ie3daed133419c41ed22646f9a790570ff47f0eec
Closes-Bug: #1552971
This adds a new method to the service list object to get all compute
service records for a given hypervisor type. The query joins the
compute_nodes table, as that is the only place the hypervisor type is
stored.
Change-Id: Ic044362d232e340145fa30f100c3e5e37abb5e6e
Make Aggregate.create() and destroy() use the API database rather than
the cell database.
Also block aggregate creation until the main database is empty. This makes
Aggregate.create() fail until the main database has had all of its aggregates
migrated. Since we want to avoid any overlap or clashes in integer ids we
need to enforce this.
Note that this includes a change to a notification sample, which encodes
the function and module of a sample exception (which happens to be during
an aggregate operation). Since the notifications are encoding internal
function names, which can and will change over time, this is an expected
change.
blueprint cells-aggregate-api-db
Co-Authored-By: Dan Smith <dansmith@redhat.com>
Change-Id: Ida70e3c05f93d6044ddef4fcbc1af999ac1b1944
This adds a destroy() method for VirtualInterface which has not been
required before but is now.
Change-Id: Ie00f52153a816049f8efcc9aa8071371ce0b7e5a
Related-Bug: #1602357
Due to the os-extended-volumes API it is necessary to be able to
retrieve block device mapping info for an instance at any time. In order
to do so it needs to be stored with the build request.
This also makes it available during instance deletion where it may be
useful to look up whether delete_on_termination is set on a bdm so that
it can be cleaned up.
Change-Id: Ib774a43e49b7153b3f7b099a59483c62003ee7a8
Partially-Implements: bp add-buildrequest-obj
The keypair object and db_api now support 'limit' and 'marker'
parameters; these changes are required for keypairs pagination
in the Nova API.
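Marker-based pagination as used here means: return up to `limit` records after the record named by `marker`. A minimal sketch over an in-memory list (the function name and shape are illustrative, not Nova's actual db_api):

```python
def keypair_get_all(keypairs, limit=None, marker=None):
    """Return up to `limit` keypairs, starting after the one named `marker`."""
    ordered = sorted(keypairs, key=lambda kp: kp["name"])
    start = 0
    if marker is not None:
        names = [kp["name"] for kp in ordered]
        if marker not in names:
            raise ValueError("marker not found: %s" % marker)
        start = names.index(marker) + 1  # page begins after the marker row
    page = ordered[start:]
    if limit is not None:
        page = page[:limit]
    return page

kps = [{"name": n} for n in ("a", "b", "c", "d")]
page = keypair_get_all(kps, limit=2, marker="b")
print([kp["name"] for kp in page])  # ['c', 'd']
```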
Part of blueprint keypairs-pagination
Change-Id: I9776efc609f01d053824b31f64126cfcd6dadc18
Several places raising MarkerNotFound were overwriting the
error message with the marker key. This fixes that by passing
the marker as a kwarg so the message is formatted properly.
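The pattern mirrors Nova's templated exception classes; this self-contained sketch (a simplified stand-in, not the real NovaException) shows how passing the marker as a kwarg formats the template instead of replacing it:

```python
class MarkerNotFound(Exception):
    """Simplified version of a Nova templated exception."""
    msg_fmt = "Marker %(marker)s could not be found."

    def __init__(self, message=None, **kwargs):
        # A positional message overrides the template entirely;
        # kwargs are interpolated into the template.
        if message is None:
            message = self.msg_fmt % kwargs
        super().__init__(message)

# Buggy call: the marker value overwrites the whole message.
bad = MarkerNotFound("some-marker-uuid")
# Fixed call: the marker is formatted into the template.
good = MarkerNotFound(marker="some-marker-uuid")
print(str(bad))   # some-marker-uuid
print(str(good))  # Marker some-marker-uuid could not be found.
```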
Change-Id: I6d5c7048a5636e2cef15a6dab0d568c89e713a87
An Allocation represents a use of some class of resource from a
particular resource provider, by a consumer. They can be created
and destroyed.
It is also possible to get a list of all allocations associated with a
particular resource provider.
This also includes: a small correction to functional tests of Inventory
so that its structure (where resource_class is defined) is aligned
with the Allocation tests.
Co-Authored-By: Dan Smith <dansmith@redhat.com>
Partially-Implements: blueprint resource-providers-allocations
Change-Id: Iaec8a8e318adfd8feae6496ad962bc02d13adebd
When there are thousands of compute nodes it is slow to retrieve the
whole hypervisor list, and displaying thousands of items in a table in
horizon is a poor user experience. This patch adds pagination support
for hypervisors by adding `limit` and `marker` parameters to the
list API.
Implements blueprint: pagination-for-hypervisor
Change-Id: Ie7f8b5c733b383f3e69fa23188e56257e503b5f7
If there is a DBError raised during VIF creation it is nearly impossible
to debug since all exception information is lost. This logs the
exception to aid in debugging. Note that this exception should only be
raised if something is really wrong so this should not spam a normal
setup.
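The idea is simply to log the exception (with its traceback) before re-raising, so VIF-creation failures are debuggable. A generic sketch using stdlib logging; the function and session here are hypothetical stand-ins (Nova's actual code uses oslo.log and catches DBError):

```python
import logging

LOG = logging.getLogger(__name__)

def create_vif(session, vif):
    """Hypothetical wrapper: log any DB error with traceback, then re-raise."""
    try:
        session.add(vif)  # stand-in for the real insert
    except Exception:
        LOG.exception("VIF creation failed with a database error.")
        raise

class BrokenSession:
    def add(self, obj):
        raise RuntimeError("duplicate key")  # stand-in for DBError

# Capture log records so the demo can show what was logged.
records = []
handler = logging.Handler()
handler.emit = lambda rec: records.append(rec)
LOG.addHandler(handler)
LOG.setLevel(logging.ERROR)

try:
    create_vif(BrokenSession(), object())
except RuntimeError:
    pass
print(records[0].getMessage())  # VIF creation failed with a database error.
```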
Change-Id: Ifed88c981cc9d564a7443744bf298677a0782cee
Adding a save method to the virtual_interface object
and an update method to its database model.
Partially implements blueprint virt-device-role-tagging
Co-authored-by: Artom Lifshitz <alifshit@redhat.com>
Change-Id: I52673fc297cb578995be5c7a075c5693b0793bf5
Console auth tokens will be saved in the database
instead of in memory in a console auth server.
This patch adds the db api methods to create token records,
get them, and delete all tokens for an instance.
The following patch in the series will add the console
auth token object.
Change-Id: I881faa62f3be4986b38d11c4ac059672ae45c11f
Co-Authored-By: Eli Qiao <qiaoliyong@gmail.com>
partially-implements: blueprint convert-consoles-to-objects
CellsV2 requires that instance groups be available in the API database.
Create the 'instance_groups', 'instance_group_policy', and
'instance_group_member' tables in the API database.
The instance_id column of instance_group_member has been renamed to
instance_uuid.
Part of blueprint cells-instance-groups-api-db
Change-Id: Id8efd701c9a8e142688ecb276ade41e92ae1b936
Console auth tokens will be saved in the database
instead of in memory in a console auth server.
Adding the table and models is the first step to
do this.
Co-Authored-By: Eli Qiao <qiaoliyong@gmail.com>
Change-Id: I42c25b465efb7e03769c9557f66feaa92cfa207c
partially-implements: blueprint convert-consoles-to-objects
The DB call get_all_by_resource_provider_uuid queries all Inventories
and filters them by the provider's uuid from ResourceProvider, but the
query result is not valid (it contains inventories for all providers).
As a result, the last-updated provider overwrites all providers'
inventories with wrong data. The correct way to load ResourceProvider
is with a query join; replace joinedload with contains_eager to avoid
redundant joins.
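The contains_eager pattern can be sketched with a minimal SQLAlchemy model (names and columns simplified from Nova's real schema): join to the related table, filter on the join, and populate the relationship from the rows of that same join rather than a separate eager load.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (Session, contains_eager, declarative_base,
                            relationship)

Base = declarative_base()

class ResourceProvider(Base):
    __tablename__ = "resource_providers"
    id = Column(Integer, primary_key=True)
    uuid = Column(String(36))
    inventories = relationship("Inventory")

class Inventory(Base):
    __tablename__ = "inventories"
    id = Column(Integer, primary_key=True)
    resource_provider_id = Column(Integer, ForeignKey("resource_providers.id"))
    total = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([
    ResourceProvider(uuid="rp1", inventories=[Inventory(total=8)]),
    ResourceProvider(uuid="rp2", inventories=[Inventory(total=4)]),
])
session.commit()

# Join to inventories, filter on the provider, and fill the relationship
# from the rows of that filtered join via contains_eager.
rows = (session.query(ResourceProvider)
        .join(ResourceProvider.inventories)
        .filter(ResourceProvider.uuid == "rp1")
        .options(contains_eager(ResourceProvider.inventories))
        .all())
totals = [inv.total for inv in rows[0].inventories]
print(totals)  # only rp1's inventory, not all providers'
```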
Closes-Bug: #1572555
Change-Id: I49fb1bf63400280635c202dea8f870d727a91e81
db.api.aggregate_get_by_uuid and objects.Aggregate.get_by_uuid
methods are added to make it possible to work with Aggregates by
their newly added uuid attribute.
Partially-Implements: blueprint generic-resource-pools
Change-Id: I7c46fd1ffebb6907c949cfa302131bfbfcd433de
This reverts the code paths (now obsolete since the inventories and
allocations tables have moved to the API database) that joined
compute_nodes to the inventories and allocations tables when doing the
compute_node_get() DB API calls.
Change-Id: I7912f3664ecdce7bc149d8c51b2c350c7be74bf2
Since we introduced the new API for aborting a running live migration,
a new state called "cancelled" is applied to all aborted live
migration jobs in the libvirt driver.
This new status is not filtered by the sqlalchemy query used to get
the list of all in-progress migrations for a host and node.
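The fix amounts to adding "cancelled" to the set of statuses the in-progress query excludes. A minimal sketch over plain dicts (the status list here is illustrative; the real code filters inside the SQLAlchemy query):

```python
# Statuses that mean a migration row is no longer in progress.
DONE_STATUSES = ("confirmed", "reverted", "error", "failed",
                 "completed", "cancelled")

def get_migrations_in_progress(migrations, host):
    """Return migrations on `host` whose status is still in progress."""
    return [m for m in migrations
            if m["host"] == host and m["status"] not in DONE_STATUSES]

migrations = [
    {"id": 1, "host": "cmp1", "status": "running"},
    {"id": 2, "host": "cmp1", "status": "cancelled"},  # aborted live migration
    {"id": 3, "host": "cmp1", "status": "completed"},
]
in_progress = get_migrations_in_progress(migrations, "cmp1")
print([m["id"] for m in in_progress])  # [1]
```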
Change-Id: I219591297f73c4bb8b1d97aaf298681c0421d1ae
Closes-bug: #1588378