As the comments in the code suggest, we can remove this
code now. There was a blocking database migration which
enforced that the online data migrations to generate
uuids for aggregates have been completed:
4d0915568a1011ddee7317ddfb237be0803e7790
So this code is safe to remove now. As a result, we
also remove the online data migration routine which
relies on the object code.
Change-Id: I2050b5bdf906d2d2f8a87c79ca0804e0fc955755
The existing unit test for this is a bit hard to follow
given the setup, so this change simply ensures that the
reservations created during the setup are actually processed
during reservation_expire in the DB API.
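A minimal sketch of the intended assertion (the setup helper name here
is hypothetical, standing in for the existing test fixture code):

    # Create reservations that are already past their expiry, then
    # verify reservation_expire() actually removes them.
    reservation_uuids = _create_expired_reservations(self.ctxt)
    db.reservation_expire(self.ctxt)
    for uuid in reservation_uuids:
        self.assertRaises(exception.ReservationNotFound,
                          db.reservation_get, self.ctxt, uuid)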
Change-Id: I21fc6a441090b86f89b52c92976a81ba3c28f7d7
The name of the model object should be singular. This patch corrects it.
Partially implements blueprint resource-provider-traits
Change-Id: I7600d7e3775d237e0593ec52fbb474e1dad079c1
The instance_actions and instance_actions_events tables are handled
differently than most when we soft-delete an instance.
It's not immediately obvious why this is, so let's add a comment
explaining what's going on.
Change-Id: Ie353650861c2911e4f55628c55f049fc1756e591
This patch adds the DB tables `traits` and `resource_provider_traits` to
the api database.
The traits table is used to represent traits. The resource_provider_traits
table is used to represent the relationship between a trait and a
resource provider.
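Roughly, as SQLAlchemy models (a sketch of the shape; column details are
not the final schema):

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Trait(Base):
        __tablename__ = 'traits'
        id = Column(Integer, primary_key=True, autoincrement=True)
        name = Column(String(255), nullable=False, unique=True)

    class ResourceProviderTrait(Base):
        # Pure join table between a trait and a resource provider.
        __tablename__ = 'resource_provider_traits'
        trait_id = Column(Integer, ForeignKey('traits.id'),
                          primary_key=True)
        resource_provider_id = Column(
            Integer, ForeignKey('resource_providers.id'),
            primary_key=True)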
Part of the implementation of blueprint shared-resources-pike
Change-Id: I5c34bdd1423beab53cc4af45e016d9a9bba5ffda
When we go to detect the minimum version for a given service, we
should ignore any deleted services. Without this, we will return
the minimum version of all records, including those that have been
deleted with "nova service-delete". This patch filters deleted
services from the query.
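The fix is essentially one extra filter on the query (a sketch, not the
exact Nova code; `models` is the usual DB models module):

    from sqlalchemy import func

    def service_get_minimum_version(context, binary):
        # Ignore soft-deleted rows; their stale version values would
        # otherwise drag the computed minimum down.
        return context.session.query(
            func.min(models.Service.version)).filter_by(
            binary=binary).filter_by(deleted=0).scalar()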
Closes-Bug: #1668310
Change-Id: Ic96a5eb3728f97a3c35d2c5121e6fdcd4fd1c70b
Normally we reserve five slots for the API database instead of the default
ten. However, given all the stuff that merged in Ocata related to API-level
services, I'm going to reserve ten here just so we have space if we need
it.
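Each reserved slot is just a no-op placeholder migration:

    def upgrade(migrate_engine):
        # Placeholder: reserved in case an Ocata backport needs a
        # schema migration slot.
        pass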
Change-Id: I57c2edcf1fb80e24017cb1b4be00065aa20b342c
The check and subsequent hard failure for HostMapping records in
API migration 30 is inconvenient at times during a new setup where
we have flavors in place but no hosts yet. Since we can now check
for and warn about missing HostMapping records in our upgrade
status command, this patch lowers the missing-host-mappings check
from a failure to a warning. This migration was really just to make
sure you ran the simple setup command, and the cell mapping check
does that for us.
Change-Id: I8b757fa7c805ec6f4d578ecb6f33d3f1ceff29fc
* Add osprofiler wsgi middleware. This middleware is used for 2 things:
1) It checks that the person who wants to trace is trusted and knows
the secret HMAC key.
2) It starts tracing if proper trace headers are present
and adds the first WSGI trace point with info about the HTTP request
* Add initialization of osprofiler at service start
Currently that includes creating an oslo.messaging notifier instance
to send notifications to the Ceilometer backend.
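Conceptually, the middleware wiring looks like this (a sketch; the real
hook-up goes through the service setup and configuration):

    from osprofiler import web

    # Requests are only traced when they carry trace headers signed
    # with one of the trusted HMAC keys.
    application = web.WsgiMiddleware(
        application, hmac_keys='SECRET_KEY', enabled=True)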
oslo-spec: https://review.openstack.org/#/c/103825/
python-novaclient change: https://review.openstack.org/#/c/254699/
based on: https://review.openstack.org/#/c/105096/
Co-Authored-By: Boris Pavlovic <boris@pavlovic.me>
Co-Authored-By: Munoz, Obed N <obed.n.munoz@intel.com>
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
Co-Authored-By: Tovin Seven <vinhnt@vn.fujitsu.com>
Implements: blueprint osprofiler-support-in-nova
Change-Id: I82d2badc8c1fcec27c3fce7c3c20e0f3b76414f1
1. As mentioned in [1], we should avoid using
six.iteritems to achieve iterators. We can
use dict.items instead, as it will return
iterators in PY3 as well, and dict.items/keys
is more readable.
2. In PY2, the performance impact of the extra
list should be negligible; see the link [2].
[1] https://wiki.openstack.org/wiki/Python3
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066391.html
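For example:

    # Before: py2-specific helper from six.
    for key, value in six.iteritems(metadata):
        print(key, value)

    # After: dict.items() returns a lazy view on py3 and a cheap list
    # copy on py2.
    for key, value in metadata.items():
        print(key, value)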
The patch list:
1. cells.
2. compute api.
3. image.
4. network.
5. objects.
6. scheduler.
7. virt.
8. other resources.
Partial-Implements: blueprint replace-iteritems-with-items
Change-Id: Ic6e469eb80ee1774de1374bb36f38b5134b6b311
As suggested by John Garbutt in I7e1986c5f11356060cc9db12605b1322c39e79c0,
move the quota config options into a config group of their own.
Change-Id: Ie06a370868f01fb9a1cc246a77d7823fac83e70e
This patch addresses slowness that can occur when doing a list servers
API operation when there are many thousands of records in the
instance_faults table.
Previously, in the Instance.fill_faults() method, we were getting all
instance fault records for a set of instances having one of a set of
supplied instance UUIDs and then iterating over those faults and
returning a dict of instance UUID to the first fault returned (which
happened to be the latest fault because of ordering the SQL query by
created_at).
This patch adds a new InstanceFaultList.get_latest_by_instance_uuids()
method that does some SQL-fu to only return the latest fault records for
each instance being inspected.
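The "SQL-fu" amounts to grouping by instance and joining back on the
newest fault id, along these lines (a sketch, not the exact query):

    from sqlalchemy import func

    latest = session.query(
        models.InstanceFault.instance_uuid,
        func.max(models.InstanceFault.id).label('max_id')).filter(
        models.InstanceFault.instance_uuid.in_(
            instance_uuids)).group_by(
        models.InstanceFault.instance_uuid).subquery()
    faults = session.query(models.InstanceFault).join(
        latest, models.InstanceFault.id == latest.c.max_id).all()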
Closes-Bug: #1632247
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
Change-Id: I8f2227b3969791ebb2d04d74a316b9d97a4b1571
Add optional parameters 'limit' and 'marker' to the
os-simple-tenant-usage endpoints for pagination.
/os-simple-tenant-usage?limit={limit}&marker={instance_uuid}
/os-simple-tenant-usage/{tenant}?limit={limit}&marker={instance_uuid}
The aggregate usage totals may no longer reflect all instances for a
tenant, but rather just the instances for a given page. API consumers
will need to stitch the aggregate data back together (add the totals)
if a tenant's instances span several pages.
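A consumer-side sketch of adding the totals back up (the client method
name is hypothetical; the response keys are from the usage API):

    total_hours = 0.0
    marker = None
    while True:
        # get_tenant_usage is a stand-in for whatever client call is
        # used; pages are keyed by the last instance uuid seen.
        usage = client.get_tenant_usage(tenant, limit=100, marker=marker)
        servers = usage.get('server_usages', [])
        if not servers:
            break
        total_hours += usage['total_hours']
        marker = servers[-1]['instance_id']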
Implements blueprint paginate-simple-tenant-usage
Change-Id: Ic8e9f869f1b855f968967bedbf77542f287f26c0
There is code for tag search for the EC2 API in the db layer.
That code was added in commit 24582abf82842cbb922e38c78e281eda56f981e2.
The EC2-compatible API has already been removed from Nova, and this
parameter did not work for the Nova REST API either, so that code
should be removed.
Change-Id: Ic7843e6b90f5c8ba204eb3183bbb21bea9e0f907
Change ff6b9998bb977421a5cbc94878ced8542d910c9e enforces in
a database migration that you've run the simple_cell_setup
command for cells v2, but the instructions in the error and
in the release note said to use 'nova-manage db' when it should
be 'nova-manage cell_v2'.
Change-Id: I8e71d1c7022d1000f26b7c16ed1c56f6e87ab8ac
Closes-Bug: #1649341
You can currently create a 500 error on mysql by passing | as the name
filter because mysql assumes regex values are well crafted by the
application layer.
This puts in facilities to provide a safe regex filter per db engine.
It also refactors some of the inline code from _regex_instance_filter
into slightly more logical blocks, which will make it a little more
straightforward about where we need to do something smarter about
determining the dbtype in a cellsv2 world.
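Sketched, the facility is a per-engine lookup plus a sanitizer for
engines that pass the pattern through raw (the escaping rule here is an
illustration, not the exact production code):

    import re

    def _get_regexp_ops(db_url):
        # Sketch; the real helper lives next to _regex_instance_filter.
        engine = db_url.split(':')[0].split('+')[0]
        op_map = {'postgresql': '~', 'mysql': 'REGEXP',
                  'sqlite': 'REGEXP'}

        def escape_for_mysql(pattern):
            # MySQL hands the pattern to the server as-is, so a bare
            # '|' surfaces as a 500; escape regex metacharacters.
            return re.sub(r'([\W])', r'\\\1', pattern)

        sanitizers = {'mysql': escape_for_mysql}
        return (sanitizers.get(engine, lambda p: p),
                op_map.get(engine, 'LIKE'))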
Change-Id: Ice2e21666905fdb76c001195e8fca21b427ea737
Closes-Bug: 1546396
There are a couple of places in nova/db/sqlalchemy/api.py where the
context argument is passed as a positional arg instead of a kwarg,
causing it to be erroneously mapped to the use_slave kwarg:
def get_engine(use_slave=False, context=None):
This corrects the calls to pass context=context instead.
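In other words:

    engine = get_engine(context)          # wrong: context becomes use_slave
    engine = get_engine(context=context)  # right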
Change-Id: I8fb7f04a54d9f7f645df8287cdda0ae665a22368
We have code going into Ocata that needs to be sure that cell and
host mappings are in place. Since this was required homework in
Newton, we can land a migration to intentionally fail if this was
not completed.
This is, however, a little difficult to require because a first-time
deployment will be initialized schema-wise with none of these records,
which is also sane. So, we look to see if any flavors are defined as
a sentinel to indicate that this is an upgrade of an existing
deployment instead of a first-time event. Not perfect, but since this
is really just a helper for the user, it seems like a reasonable
risk.
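Sketch of the sentinel check inside the migration (not the exact code):

    from sqlalchemy import MetaData, Table, func, select

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        flavors = Table('flavors', meta, autoload=True)
        count = migrate_engine.execute(
            select([func.count()]).select_from(flavors)).scalar()
        if count == 0:
            # Fresh deployment: nothing to check yet.
            return
        # Existing deployment: fail unless the cell/host mappings from
        # the required Newton homework are present.
        ...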
Depends-On: If1af9c478e8ea2420f2523a9bb8b70fafddc86b7
Change-Id: I72fb724dc13e1a5f4e97c58915b538ba761c582d
The pick_context_manager method will use a connection to a cell
database if one is present in the RequestContext, else it falls
back on the global main_context_manager in the DB API.
Currently, there are several places in our DB API code where
pick_context_manager isn't used because in a real scenario, each
cell is in a separate process where main_context_manager points
to its local database. This causes problems for testing though,
because we are unable to patch the DB API to simulate switching
between multiple 'main' databases in our functional tests because
of the global nature of main_context_manager.
This replaces all uses of main_context_manager with
pick_context_manager to:
1. Make switching between multiple databases able to work in
functional tests
2. Fix any possible cases where pick_context_manager is not
used for a DB API method that could be called from the
API using target_cell
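The helper itself, roughly:

    def pick_context_manager(context):
        # Prefer the per-cell connection set on the RequestContext by
        # target_cell; otherwise use the global main_context_manager.
        if context is not None and getattr(context, 'db_connection', None):
            return context.db_connection
        return main_context_manager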
Change-Id: I31e3170e0953cefbf49bfc84b29edab514c90cb5
In order to facilitate a future extraction of the placement service
we want to record the association between a resource provider and an
arbitrary aggregate uuid in its own table.
A PlacementAggregate model is joined from ResourceProvider via
ResourceProviderAggregate. Note that this structure is used so we can
join on ids instead of strings (the uuids). A direct mapping between
ResourceProvider uuid and Aggregate uuid was mooted earlier in the year
but was determined to be suboptimal.
The name 'placement_aggregates' is used as the least problematic of
several choices after discussion amongst several parties.
The data will be used by the forthcoming get_ and set_aggregates
methods on the ResourceProvider object.
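Sketched as models (details may differ from the final schema):

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class PlacementAggregate(Base):
        # Just an id and the externally-supplied uuid.
        __tablename__ = 'placement_aggregates'
        id = Column(Integer, primary_key=True, autoincrement=True)
        uuid = Column(String(36), index=True)

    class ResourceProviderAggregate(Base):
        # Join table on integer ids, cheaper than joining uuid strings.
        __tablename__ = 'resource_provider_aggregates'
        resource_provider_id = Column(Integer, primary_key=True)
        aggregate_id = Column(Integer, primary_key=True)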
Change-Id: Id0355cb022f68e962af306ff04cf724d22b68d19
Partially-Implements: blueprint generic-resource-pools-ocata
The build_requests.instance column is a serialized
instance object, and the instances.user_data column
is MediumText, so the build_requests.instance column
itself needs to be at least MediumText in size for MySQL.
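The migration is a single column type change, roughly:

    from sqlalchemy import MetaData, Table
    from sqlalchemy.dialects.mysql import MEDIUMTEXT

    def upgrade(migrate_engine):
        if migrate_engine.name != 'mysql':
            # Other backends use unbounded TEXT types already.
            return
        meta = MetaData(bind=migrate_engine)
        build_requests = Table('build_requests', meta, autoload=True)
        # TEXT caps at 64KB on MySQL; a serialized instance carrying
        # MEDIUMTEXT user_data needs MEDIUMTEXT (16MB) to fit.
        build_requests.c.instance.alter(type=MEDIUMTEXT())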
Change-Id: I7d65df37c02750593037744543ad15e5bc64e913
Closes-Bug: #1635446
Quotas are required to exist in the API database as we need to enforce
quotas across cells.
blueprint cells-quota-api-db
Change-Id: I52fd680eaa4880b06f7f8d4bd1bb74920e73195d
We will store custom resource classes in the new resource_classes table.
These custom resource classes represent non-standardized resource
classes. Followup patches add the plumbing to handle existing
standardized classes currently using the fields.ResourceClass field type
and to perform CRUD operations against a /resource-classes REST API
endpoint.
Change-Id: I60ea0dcb392c1b82fead4b859fc7ed6b32d4bda0
blueprint: custom-resource-classes
The DeleteFromSelect function was added to sqla.utils, which is mostly
the home of two shadow table functions that need to import api.py to get
constants related to shadow tables. We now exclusively use that
function in api.py for archiving code. That causes some oddities
around module import.
This moves DeleteFromSelect so we can get rid of these odd late
imports.
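For reference, the archive pattern that uses it (names sketched;
DeleteFromSelect compiles to DELETE FROM table WHERE id IN (<select>)):

    # Copy one batch of soft-deleted rows into the shadow table, then
    # delete the same batch from the live table in one statement.
    batch = select([table]).where(table.c.deleted != 0).order_by(
        table.c.id).limit(max_rows)
    conn.execute(shadow_table.insert(inline=True).from_select(
        [c.name for c in table.c], batch))
    batch_ids = select([table.c.id]).where(table.c.deleted != 0).order_by(
        table.c.id).limit(max_rows)
    conn.execute(DeleteFromSelect(table, batch_ids, table.c.id))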
Change-Id: I59c3444c2258f59a09a9c885bd9490055e278998
This is something I expect has been very broken for a long time. We
have rows in tables such as instance_extra, instance_faults, etc that
pertain to a single instance, and thus have a foreign key on their
instance_uuid column that points to the instance. If any of those
records exist, an instance can not be archived out of the main
instances table.
The archive routine currently "handles" this by skipping over said
instances, and eventually iterating over all the tables to pull out
any records that point to that instance, thus freeing up the instance
itself for archival. The problem is, this only happens if those extra
records are actually marked as deleted themselves. If we fail during
a cleanup routine and leave some of them not marked as deleted, but
where the instance they reference *is* marked as deleted, we will
never archive them.
This patch adds another phase of the archival process for any table
that has an "instance_uuid" column, which attempts to archive records
that point to these deleted instances. With this, using a very large
real world sample database, I was able to archive my way down to
zero deleted, un-archivable instances (from north of 100k).
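Per qualifying table, the new phase is essentially (a sketch):

    # Archive rows that still reference an already-deleted instance,
    # even when the rows themselves were never marked deleted.
    orphaned = select([table.c.id]).where(
        table.c.instance_uuid.in_(
            select([instances.c.uuid]).where(instances.c.deleted != 0)))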
Closes-Bug: #1622545
Change-Id: I77255c77780f0c2b99d59a9c20adecc85335bb18
As of now, UUID is being generated using either uuid.uuid4()
or uuidutils.generate_uuid(). In order to maintain consistency,
we propose to use uuidutils.generate_uuid() from oslo_utils.
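That is:

    import uuid
    from oslo_utils import uuidutils

    uuid.uuid4()               # before (sometimes): returns a UUID object
    uuidutils.generate_uuid()  # after (everywhere): returns a string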
Change-Id: I69d4b979fc0e37bc351f9b5516dae899e7e7dba0
This migration was added in mitaka and should have been done either in
mitaka or newton. Newton had some migrations that came after it which
were higher-visibility and should have forced everyone to have done
this by now.
Because of the nature of the migration, there isn't really an easy or
efficient way to validate that they have done this like we have done
for some other migrations.
Change-Id: Idb033e9e52b149372308eabb19c5774e10c56156
Soft-delete rows in the 'migrations' table when the VM instance is
deleted, and also soft-delete them when archiving deleted rows, to
cover the upgrade case.
Change-Id: Ica35ce2628dfcf412eb097c2c61fdde8828e9d90
Closes-Bug: #1584702
When deploying with a very large number of volumes the
block_device_mappings column is not sufficient. The column
needs to be increased to MEDIUMTEXT size to support this use case.
Change-Id: Ia34d06429c1f8f0a8259616bcba0c349c4c9aa33
Closes-Bug: #1621138
We are hitting deadlocks in the gate when we are inserting the new
instance_extra row into the DB.
We should follow up this fix and look at ways to avoid the deadlock
happening rather than retrying it. It currently doesn't happen too
often, so this should be enough to stop the problem while we work on a
better fix.
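The retry is the stock oslo.db decorator on the affected DB API call
(the function name below is sketched, not the exact one patched):

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def _instance_extra_create(context, values):
        ...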
Closes-Bug: #1480305
Change-Id: Iba218bf28c7d1e6040c551fe836d6fa5e5e45f4d
This reverts commit 1b5f9f8203c90fe447d33c89f238104026052d1e.
On IRC we agreed that no migrations should be placement
specific; we should just use the API table migrations to generate the
schema for both DBs.
There is also a separate debate around the alias for the aggregates
table, but that is not really a reason to revert; it's just that things
in here will need rework.
Change-Id: I275945aee9d9be8e35d6ddc05515df39d559457a
Add the ability to sync the cell0 database using nova-manage.
The `db sync` command will be used to sync the cell0 database.
This ensures that operators will only have two db sync commands to
perform in the single cell case.
blueprint cells-cell0
Change-Id: I21ae13a6c029e8ac89484faa212434911160fd51
BandwidthUsage model has no UniqueConstraints.
In the 'bw_usage_cache' table in the nova db there is a single
autoincrement primary key, so the duplicate entry problem is handled by
the db itself and db_exc.DBDuplicateEntry cannot be raised in Nova.
Ideally we should add a UniqueConstraint to prevent multiple bw usage
records existing for the same date range and UUID. That fix would
mean we could remove the .first() call and instead
use .one(). The current code that uses .first() is not correct
because there is no order_by() applied on the SQL query and
therefore the returned "first record" is indeterminate.
This workaround removes the misleading note and exception and
adds order_by() to ensure that the same record is updated every time.
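That is, roughly:

    # Before: no ORDER BY, so "first" is whichever row the engine
    # happens to return.
    row = query.first()

    # After: deterministic; the same record is picked every time
    # (ordering column sketched).
    row = query.order_by(models.BandwidthUsage.id.asc()).first()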
Co-Authored-By: Sergey Nikitin <snikitin@mirantis.com>
Closes-bug: #1614561
Change-Id: I408bc3a3e5623965a619d8c7241e4e77c8bf44f5
If 'connection' is set in the 'placement_database' conf group use
that as the connection URL for the placement database. Otherwise if
it is None, the default, then use the entire api_database conf group
to configure a database connection.
This works by:
* adding a 'placement sync' and 'placement version' command to
nova-manage
* adding placement_migrations that the sync will run
* adding a placement_database config group with the relevant
database configuration settings
* adding a placement_context_manager. If
CONF.placement_database.connection is None this is the same as
the api_context_manager, otherwise it is a new one from its own config
* adjust nova/tests/fixtures to be aware of a 'placement' database
and the placement_context_manager
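The context manager selection reduces to (a sketch under those config
assumptions):

    from oslo_db.sqlalchemy import enginefacade

    if CONF.placement_database.connection is None:
        # Same database (and engine facade) as the api db.
        placement_context_manager = api_context_manager
    else:
        placement_context_manager = enginefacade.transaction_context()
        placement_context_manager.configure(
            connection=CONF.placement_database.connection)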
This version of this change differs from others by not requiring
separate placement commands for migration, instead using existing
tooling, which makes the size of the change a bit smaller and also
addresses problems with the test fixtures needing to be too aware of
what migration code to run. Now it runs the same code.
This functionality is being provided to allow deployers to choose
between establishing a new database now or requiring a migration
later. The default is migration later.
This is a modification of Id93cb93a0f4e8667c8e7848aa8cff1d994c2c364
and I3290e26d0a212911f8ef386418b9fa08c685380b.
Change-Id: Ice03144376c9868c064e4393d531510615fc6fc1
Co-Authored-By: Chris Dent <cdent@anticdent.org>
Partially-Implements: blueprint generic-resource-pools
This change enhances the help texts of the database-related config
options. They are in the "DEFAULT" section of "nova.conf" and
get used on the DB layer but describe usage at the "compute" level.
They should probably live in a section other than "DEFAULT".
bp centralize-config-options-newton
Change-Id: I5caeff8bcb5f0b63f5ec4e79ab5f6cde16ae018f
Make db.security_group_get only join rules if specified in
the columns_to_join. This works around a performance issue
with lots of instances and security groups.
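Sketch of the opt-in join (not the exact Nova query helper):

    from sqlalchemy.orm import joinedload

    def security_group_get(context, security_group_id,
                           columns_to_join=None):
        query = context.session.query(models.SecurityGroup).filter_by(
            id=security_group_id)
        if columns_to_join and 'rules' in columns_to_join:
            # Only pay for the rules join when the caller asked for it.
            query = query.options(joinedload('rules'))
        return query.first()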
Co-Authored-By: Dan Smith <dansmith@redhat.com>
Change-Id: Ie3daed133419c41ed22646f9a790570ff47f0eec
Closes-Bug: #1552971