This changes floating IPs from a ReservableResource to a
CountableResource and replaces quota reserve/commit/rollback with
check_deltas accordingly. Note that floating IP quota is only
relevant to nova-network and will be obsolete when nova-network is
removed.
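As a rough illustration of the counting approach (a minimal,
hypothetical sketch, not Nova's actual quota code): count what already
exists, add the requested delta, and compare against the limit before
allocating anything.
    class OverQuota(Exception):
        pass

    def check_deltas(counted, deltas, limits):
        """Raise OverQuota if usage plus the requested delta exceeds a limit."""
        for resource, delta in deltas.items():
            if counted.get(resource, 0) + delta > limits[resource]:
                raise OverQuota(resource)

    # The project already has 9 floating IPs, the limit is 10 and the
    # request asks for 2 more, so the check fails before allocation.
    try:
        check_deltas({'floating_ips': 9}, {'floating_ips': 2},
                     {'floating_ips': 10})
    except OverQuota as exc:
        print('over quota for %s' % exc)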
Part of blueprint cells-count-resources-to-check-quota-in-api
Change-Id: I9e6c16ebe73f2af11bcc47899f25289f08c1204a
We were a bit naughty in Iecbe0eb5717afb0b13ca90d4868a3ca5f9e8902b in
that we didn't normalize the storage of Keystone project and user
identifiers in the API DB's new consumers table. This means that we
will use a whole lot more storage for what ends up being very
repetitive data.
This patch changes the consumers DB table schema's project_id and
user_id columns from VARCHAR(255) to an INT data type. This should
result in significantly faster queries for usage information since
roughly 9x as many index records can fit into a single index block in
memory (36-byte UUIDs stored in VARCHAR(255) versus 4-byte integers).
The more index records we can fit into a single page of memory, the
faster both scans and seeks will be.
Let's address this now before anything uses the consumers table.
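For illustration only, the change amounts to something like the
following simplified SQLAlchemy sketch (assumed column shapes, not the
actual consumers schema or its migration):
    from sqlalchemy import Column, Integer, MetaData, String, Table

    metadata = MetaData()

    # Before: 36-char UUID strings stored verbatim in VARCHAR(255) columns.
    consumers_old = Table(
        'consumers_old', metadata,
        Column('id', Integer, primary_key=True),
        Column('project_id', String(255), nullable=False),
        Column('user_id', String(255), nullable=False),
    )

    # After: small integer identifiers, so far more index records fit
    # into a single index block in memory.
    consumers_new = Table(
        'consumers_new', metadata,
        Column('id', Integer, primary_key=True),
        Column('project_id', Integer, nullable=False),
        Column('user_id', Integer, nullable=False),
    )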
Change-Id: I1b7357739f2a7e55c55d3acb9bd604731c4a2b32
blueprint: placement-project-user
Hypervisor statistics could be incorrect if deleted service
records are not excluded from the DB query.
A user may stop the 'nova-compute' service on some compute
nodes and then delete the service from nova.
Deleting the 'nova-compute' service soft-deletes the
corresponding db records in both the 'services' and
'compute_nodes' tables if the compute_nodes record is old,
i.e. it is linked to the services record. Modern
compute_nodes records aren't linked to the services table,
so deleting the services record will not delete the
compute_nodes record, and the ResourceTracker won't
recreate the compute_nodes record if the host and
hypervisor_hostname still match the existing record;
restarting the process after deleting the service will,
however, create a new services table record with the same
host/binary/topic.
If the 'nova-compute' service on that host restarts, it
will automatically add a record to the 'compute_nodes'
table (assuming it was deleted because it was an old-style
record) and a corresponding record to the 'services' table,
and if the host name of the compute node did not change,
the newly created 'services' and 'compute_nodes' records
will be identical to the previously soft-deleted records
except for the 'deleted' column.
When hypervisor statistics are requested, the DB layer
joins records across the whole deployment by comparing the
host field from the services table with the host field
from the compute_nodes table. The calculated results can
be multiplied when multiple services records share the
same host value, which is exactly what happens after the
actions above.
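To make the double counting concrete, here is a small illustrative
sketch in plain Python (not the actual SQL query):
    # Joining services to compute_nodes on the host field multiplies the
    # per-node statistics when a soft-deleted and a live services row
    # share the same host, so deleted rows must be filtered out first.
    services = [
        {'host': 'node1', 'deleted': 1},  # soft-deleted old record
        {'host': 'node1', 'deleted': 0},  # recreated record after restart
    ]
    compute_nodes = [{'host': 'node1', 'vcpus': 8}]

    def total_vcpus(services, compute_nodes, exclude_deleted):
        rows = [s for s in services
                if not (exclude_deleted and s['deleted'])]
        return sum(cn['vcpus'] for s in rows for cn in compute_nodes
                   if cn['host'] == s['host'])

    print(total_vcpus(services, compute_nodes, exclude_deleted=False))  # 16
    print(total_vcpus(services, compute_nodes, exclude_deleted=True))   # 8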
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Change-Id: I9dfa15f69f8ef9c6cb36b2734a8601bd73e9d6b3
Closes-Bug: #1692397
Originally it was felt that we would need this column to distinguish
between compute nodes and non-compute providers. With the advent of
traits, though, this column is no longer used or needed.
Closes-Bug: #1648197
Change-Id: I614db98727f4737deb6728ee874ab0f68024ebe5
This is going to be used by the Service.get_by_uuid method
which will later be used by the HostAPI to uniquely lookup
a service within a cell.
Part of blueprint service-hyper-uuid-in-api
Change-Id: Iff58296d5b05670116d4e0dc7846a260c48d84ed
This adds the online data migration for populating the
services.uuid column on older records.
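The routine presumably follows the usual online data migration shape; a
hedged plain-Python sketch of the idea (assumed names, not the real
code):
    import uuid

    def migrate_service_uuids(records, max_count):
        """Return (found, done), like Nova's online data migrations."""
        missing = [r for r in records if r.get('uuid') is None]
        done = 0
        for record in missing[:max_count]:
            record['uuid'] = str(uuid.uuid4())
            done += 1
        return len(missing), done

    services = [{'id': 1, 'uuid': None}, {'id': 2, 'uuid': 'already-set'}]
    print(migrate_service_uuids(services, max_count=50))  # (1, 1)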
Part of blueprint service-hyper-uuid-in-api
Change-Id: I6baf546d7075f7671df0abeb1e7d68372fd177ed
This is the first patch of the series; it adds tags to the
build_requests table along with the corresponding db update
script.
Change-Id: I01e1973449a572ecd647ede0a70d274eac1583bd
Part of blueprint support-tag-instance-when-boot
In the db API, when we process filters we didn't use
deepcopy. For "tags" and "not-tags" we used pop() to get
the first tag, filtered the results on it, and then used
the remaining tags for later filtering. pop() removed the
value from the original filter dict while the
"tags"/"not-tags" key remained.
In the cells scenario, both single cell (we query cell0
and the one real cell) and multi-cell, we have to query
all the cells in a loop, so the tags list in the shared
filter keeps shrinking on every pop. This leads either to
an HTTP 500 error (popping from an empty list) or to
incorrect results (when the number of tags in the list is
larger than the number of cells no HTTP 500 shows up, but
the filter results differ per cell because each loop
iteration pops one more tag).
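A small self-contained reproduction of the failure mode (illustrative
only, not the actual Nova code):
    import copy

    filters = {'tags': ['web', 'prod']}
    cells = ['cell0', 'cell1', 'cell2']

    def query_cell(cell, filters):
        # mimics the old behaviour: pop the first tag, keep the rest
        first_tag = filters['tags'].pop(0)   # mutates the caller's list!
        return (cell, first_tag, list(filters['tags']))

    # Without a copy the third cell pops from an empty list, i.e. the
    # HTTP 500 described above.
    for cell in cells:
        try:
            print(query_cell(cell, filters))
        except IndexError:
            print('%s: popping from an empty list' % cell)

    # The fix: give each cell query its own copy of the filters.
    filters = {'tags': ['web', 'prod']}
    for cell in cells:
        print(query_cell(cell, copy.deepcopy(filters)))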
Closes-Bug: #1682693
Change-Id: Ia2738dd0c7d1842b68c83d0a9e75e26b2f8d492a
As the comments in the code suggest, we can remove this
code now. There was a blocking database migration which
enforced that the online data migrations to generate
uuids for aggregates had been completed:
4d0915568a1011ddee7317ddfb237be0803e7790
So this code is safe to remove now. As a result, we
also remove the online data migration routine which
relies on the object code.
Change-Id: I2050b5bdf906d2d2f8a87c79ca0804e0fc955755
Some callers of instance_get_all_by_host are passing
in columns_to_join=[], like the _sync_scheduler_instance_info
periodic task in the compute manager, to avoid unnecessary
joins with other tables.
The problem was that columns_to_join wasn't being passed through
to _instance_get_all_query, which builds the actual query and
defaults to joining on info_cache and security_groups.
This fixes the problem by passing through columns_to_join and
provides tests to show it working both with and without the joins.
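A minimal sketch of the bug shape with hypothetical stand-in functions:
    DEFAULT_JOINS = ['info_cache', 'security_groups']

    def _instance_get_all_query(columns_to_join=None):
        if columns_to_join is None:
            columns_to_join = DEFAULT_JOINS
        return {'joins': columns_to_join}

    def get_all_by_host_broken(host, columns_to_join=None):
        return _instance_get_all_query()              # argument dropped

    def get_all_by_host_fixed(host, columns_to_join=None):
        return _instance_get_all_query(columns_to_join=columns_to_join)

    print(get_all_by_host_broken('node1', columns_to_join=[]))
    # {'joins': ['info_cache', 'security_groups']}  <- unwanted joins
    print(get_all_by_host_fixed('node1', columns_to_join=[]))
    # {'joins': []}                                  <- no joins, as asked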
Change-Id: I69f2ddca8fb0935e03b0f426891d01360940a85a
Closes-Bug: #1680616
The existing unit test for this is a bit hard to follow
given the setup, so this change simply ensures that the
reservations created during the setup are actually processed
during reservation_expire in the DB API.
Change-Id: I21fc6a441090b86f89b52c92976a81ba3c28f7d7
The name of the model object should be singular. This patch corrects it.
Partially implements blueprint resource-provider-traits
Change-Id: I7600d7e3775d237e0593ec52fbb474e1dad079c1
The instance_actions and instance_actions_events tables are handled
differently than most when we soft-delete an instance.
It's not immediately obvious why this is, so let's add a comment
explaining what's going on.
Change-Id: Ie353650861c2911e4f55628c55f049fc1756e591
This patch adds the `traits` and `resource_provider_traits` tables to
the api database. The traits table stores the traits themselves. The
resource_provider_traits table represents the relationship between a
trait and a resource provider.
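A hedged SQLAlchemy sketch of the intended shape (assumed column names;
the authoritative definitions live in the API DB models):
    from sqlalchemy import (Column, ForeignKey, Integer, MetaData, String,
                            Table)

    metadata = MetaData()

    traits = Table(
        'traits', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(255), nullable=False, unique=True),
    )

    # Association table: which resource provider has which trait; the
    # resource_provider_id column references the resource providers table.
    resource_provider_traits = Table(
        'resource_provider_traits', metadata,
        Column('resource_provider_id', Integer, primary_key=True),
        Column('trait_id', Integer, ForeignKey('traits.id'),
               primary_key=True),
    )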
Part of implementation blueprint shared-resources-pike
Change-Id: I5c34bdd1423beab53cc4af45e016d9a9bba5ffda
When we go to detect the minimum version for a given service, we
should ignore any deleted services. Without this, we will return
the minimum version of all records, including those that have been
deleted with "nova service-delete". This patch filters deleted
services from the query.
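An illustrative sketch of the fix in plain Python (the real change
filters the SQL query itself):
    services = [
        {'binary': 'nova-compute', 'version': 9, 'deleted': 1},   # removed
        {'binary': 'nova-compute', 'version': 16, 'deleted': 0},
        {'binary': 'nova-compute', 'version': 17, 'deleted': 0},
    ]

    def minimum_version(services, binary):
        versions = [s['version'] for s in services
                    if s['binary'] == binary and not s['deleted']]
        return min(versions) if versions else None

    print(minimum_version(services, 'nova-compute'))  # 16, not 9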
Closes-Bug: #1668310
Change-Id: Ic96a5eb3728f97a3c35d2c5121e6fdcd4fd1c70b
Normally we reserve five slots for the API database instead of the default
ten. However, given all the stuff that merged in Ocata related to API-level
services, I'm going to reserve ten here just so we have space if we need
it.
Change-Id: I57c2edcf1fb80e24017cb1b4be00065aa20b342c
The check and subsequent hard failure for HostMapping records in
API migration 30 is inconvenient at times during a new setup where
we have flavors in place but no hosts yet. Since we can now check
for and warn about missing HostMapping records in our upgrade
status command, this patch lowers the lack of host mappings check
from a failure to a warning. This migration was really just to make
sure you ran the simple setup command, and the cell mapping check
does that for us.
Change-Id: I8b757fa7c805ec6f4d578ecb6f33d3f1ceff29fc
* Add osprofiler wsgi middleware. This middleware is used for 2 things:
1) It checks that the person who wants to trace is trusted and knows
the secret HMAC key (see the example config below).
2) It starts tracing when proper trace headers are present
and adds the first wsgi trace point with info about the HTTP request.
* Add initialization of osprofiler on start of a service.
Currently that includes creating an oslo.messaging notifier instance
to send Ceilometer backend notifications.
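For reference, enabling tracing for a service then looks roughly like
the nova.conf snippet below; the option names come from osprofiler's
standard [profiler] options and are an assumption here, not part of
this patch:
    [profiler]
    # Enable the osprofiler middleware/notifications for this service.
    enabled = true
    # Secret HMAC key(s) a caller must present to be allowed to trace.
    hmac_keys = SECRET_KEY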
oslo-spec: https://review.openstack.org/#/c/103825/
python-novaclient change: https://review.openstack.org/#/c/254699/
based on: https://review.openstack.org/#/c/105096/
Co-Authored-By: Boris Pavlovic <boris@pavlovic.me>
Co-Authored-By: Munoz, Obed N <obed.n.munoz@intel.com>
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
Co-Authored-By: Tovin Seven <vinhnt@vn.fujitsu.com>
Implements: blueprint osprofiler-support-in-nova
Change-Id: I82d2badc8c1fcec27c3fce7c3c20e0f3b76414f1
1. As mentioned in [1], we should avoid using
six.iteritems to get iterators. We can
use dict.items instead, as it returns an
iterable view in PY3 as well, and dict.items/keys
is more readable.
2. In PY2, the performance impact of getting
a list instead should be negligible, see the link [2].
[1] https://wiki.openstack.org/wiki/Python3
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066391.html
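A tiny illustration of the replacement (standard Python, no six needed):
    d = {'a': 1, 'b': 2}

    # old: six.iteritems(d) -- iterator on PY2, needed six for PY2/PY3
    # new: d.items() -- list on PY2 (cheap for small dicts), view on PY3
    for key, value in d.items():
        print(key, value)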
The patch list:
1. cells.
2. compute api.
3. image.
4. network.
5. objects.
6. scheduler.
7. virt.
8. other resources.
Partial-Implements: blueprint replace-iteritems-with-items
Change-Id: Ic6e469eb80ee1774de1374bb36f38b5134b6b311
As suggested by John Garbutt in I7e1986c5f11356060cc9db12605b1322c39e79c0,
move the quota config options into a config group of their own.
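A hedged oslo.config sketch of what moving options into their own group
means (illustrative option names, not Nova's actual quota options):
    from oslo_config import cfg

    quota_group = cfg.OptGroup(name='quota', title='Quota Options')
    quota_opts = [
        cfg.IntOpt('instances', default=10,
                   help='Number of instances allowed per project.'),
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_group(quota_group)
    CONF.register_opts(quota_opts, group=quota_group)
    CONF([])

    # Values are then read as CONF.quota.instances rather than from a
    # flat, ungrouped option.
    print(CONF.quota.instances)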
Change-Id: Ie06a370868f01fb9a1cc246a77d7823fac83e70e
This patch addresses slowness that can occur when doing a list servers
API operation when there are many thousands of records in the
instance_faults table.
Previously, in the Instance.fill_faults() method, we were getting all
instance fault records for a set of instances having one of a set of
supplied instance UUIDs and then iterating over those faults and
returning a dict of instance UUID to the first fault returned (which
happened to be the latest fault because of ordering the SQL query by
created_at).
This patch adds a new InstanceFaultList.get_latest_by_instance_uuids()
method that does some SQL-fu to only return the latest fault records for
each instance being inspected.
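An illustrative sketch of the intended result in plain Python, using
the autoincrementing id as a stand-in for created_at ordering (the real
change does this in SQL):
    faults = [
        {'instance_uuid': 'uuid-1', 'id': 1, 'message': 'old failure'},
        {'instance_uuid': 'uuid-1', 'id': 7, 'message': 'newest failure'},
        {'instance_uuid': 'uuid-2', 'id': 3, 'message': 'only failure'},
    ]

    def latest_fault_by_instance(faults):
        latest = {}
        for fault in faults:
            current = latest.get(fault['instance_uuid'])
            if current is None or fault['id'] > current['id']:
                latest[fault['instance_uuid']] = fault
        return latest

    print(latest_fault_by_instance(faults))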
Closes-Bug: #1632247
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
Change-Id: I8f2227b3969791ebb2d04d74a316b9d97a4b1571
Add optional parameters 'limit' and 'marker' to the
os-simple-tenant-usage endpoints for pagination.
/os-simple-tenant-usage?limit={limit}&marker={instance_uuid}
/os-simple-tenant-usage/{tenant}?limit={limit}&marker={instance_uuid}
The aggregate usage totals may no longer reflect all instances for a
tenant, but rather just the instances for a given page. API consumers
will need to stitch the aggregate data back together (add the totals)
if a tenant's instances span several pages.
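A hedged consumer-side sketch of adding the per-page totals back
together (the field names follow the simple-tenant-usage response
shape and should be treated as an assumption):
    pages = [
        {'total_hours': 120.0, 'total_vcpus_usage': 240.0},  # page 1
        {'total_hours': 30.5, 'total_vcpus_usage': 61.0},    # page 2
    ]

    overall = {
        key: sum(page[key] for page in pages)
        for key in ('total_hours', 'total_vcpus_usage')
    }
    print(overall)  # {'total_hours': 150.5, 'total_vcpus_usage': 301.0}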
Implements blueprint paginate-simple-tenant-usage
Change-Id: Ic8e9f869f1b855f968967bedbf77542f287f26c0
There is code for tag search for the EC2 API in the db layer.
That code was added in commit 24582abf82842cbb922e38c78e281eda56f981e2.
The EC2-compatible API has already been removed from Nova, and this
parameter doesn't work for the Nova REST API either, so that code
should be removed.
Change-Id: Ic7843e6b90f5c8ba204eb3183bbb21bea9e0f907
Change ff6b9998bb977421a5cbc94878ced8542d910c9e enforces in
a database migration that you've run the simple_cell_setup
command for cells v2 but the instructions in the error and
in the release note said to use 'nova-manage db' when it should
be 'nova-manage cell_v2'.
Change-Id: I8e71d1c7022d1000f26b7c16ed1c56f6e87ab8ac
Closes-Bug: #1649341
You can currently create a 500 error on MySQL by passing | as the name
filter because MySQL assumes regex values are well crafted by the
application layer.
This puts in facilities to provide a safe regex filter per db engine.
It also refactors some of the inline code from _regex_instance_filter
into slightly more logical blocks, which makes it a little more
straightforward to see where we need to do something smarter about
determining the db type in a cells v2 world.
Change-Id: Ice2e21666905fdb76c001195e8fca21b427ea737
Closes-Bug: 1546396
There are a couple of places in nova/db/sqlalchemy/api.py where the
context argument is passed as a positional arg instead of a kwarg,
causing it to be erroneously mapped to the use_slave kwarg:
def get_engine(use_slave=False, context=None):
This corrects the calls to pass context=context instead.
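A minimal reproduction of the mixup (hypothetical caller; get_engine's
signature is the one quoted above):
    def get_engine(use_slave=False, context=None):
        return {'use_slave': use_slave, 'context': context}

    ctxt = object()
    print(get_engine(ctxt))           # context lands in use_slave: wrong
    print(get_engine(context=ctxt))   # correct keyword usage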
Change-Id: I8fb7f04a54d9f7f645df8287cdda0ae665a22368
We have code going into Ocata that needs to be sure that cell and
host mappings are in place. Since this was required homework in
Newton, we can land a migration to intentionally fail if this was
not completed.
This is, however, a little difficult to require because a first-time
deployment will be initialized schema-wise with none of these records,
which is also sane. So, we look to see if any flavors are defined as
a sentinel to indicate that this is an upgrade of an existing
deployment instead of a first-time event. Not perfect, but since this
is really just a helper for the user, it seems like a reasonable
risk.
Depends-On: If1af9c478e8ea2420f2523a9bb8b70fafddc86b7
Change-Id: I72fb724dc13e1a5f4e97c58915b538ba761c582d
The pick_context_manager method will use a connection to a cell
database if one is present in the RequestContext, else it falls
back on the global main_context_manager in the DB API.
Currently, there are several places in our DB API code where
pick_context_manager isn't used because in a real scenario, each
cell is in a separate process where main_context_manager points
to its local database. This causes problems for testing though,
because we are unable to patch the DB API to simulate switching
between multiple 'main' databases in our functional tests because
of the global nature of main_context_manager.
This replaces all uses of main_context_manager with
pick_context_manager to:
1. Make switching between multiple databases able to work in
functional tests
2. Fix any possible cases where pick_context_manager is not
used for a DB API method that could be called from the
API using target_cell
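A hedged, simplified sketch of the selection logic (the real code
returns an oslo.db enginefacade transaction context manager):
    main_context_manager = 'main database engine facade'

    def pick_context_manager(context):
        """Prefer the cell database attached to the RequestContext."""
        if context is not None and getattr(context, 'db_connection', None):
            return context.db_connection
        return main_context_manager

    class RequestContext(object):
        def __init__(self, db_connection=None):
            self.db_connection = db_connection

    print(pick_context_manager(RequestContext()))         # falls back
    print(pick_context_manager(RequestContext('cell1 engine facade')))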
Change-Id: I31e3170e0953cefbf49bfc84b29edab514c90cb5
In order to facilitate a future extraction of the placement service
we want to record the association between a resource provider and an
arbitrary aggregate uuid in its own table.
A PlacementAggregate model is joined from ResourceProvider via
ResourceProviderAggregate. Note that this structure is used so we can
join on ids instead of strings (the uuids). A direct mapping between
ResourceProvider uuid and Aggregate uuid was mooted earlier in the year
but was determined to be suboptimal.
The name 'placement_aggregates' is used as the least problematic of
several choices after discussion amongst several parties.
The data will be used by the forthcoming get_ and set_aggregates
methods on the ResourceProvider object.
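A hedged SQLAlchemy sketch of that structure (assumed column names, not
the actual models):
    from sqlalchemy import (Column, ForeignKey, Integer, MetaData, String,
                            Table)

    metadata = MetaData()

    resource_providers = Table(
        'resource_providers', metadata,
        Column('id', Integer, primary_key=True),
        Column('uuid', String(36), nullable=False, unique=True),
    )

    placement_aggregates = Table(
        'placement_aggregates', metadata,
        Column('id', Integer, primary_key=True),
        Column('uuid', String(36), nullable=False, unique=True),
    )

    # Association table joining on integer ids rather than uuid strings.
    resource_provider_aggregates = Table(
        'resource_provider_aggregates', metadata,
        Column('resource_provider_id', Integer,
               ForeignKey('resource_providers.id'), primary_key=True),
        Column('aggregate_id', Integer,
               ForeignKey('placement_aggregates.id'), primary_key=True),
    )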
Change-Id: Id0355cb022f68e962af306ff04cf724d22b68d19
Partially-Implements: blueprint generic-resource-pools-ocata
The build_requests.instance column is a serialized
instance object, and the instances.user_data column
is MediumText, so the build_requests.instance column
itself needs to be at least MediumText in size for MySQL.
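A hedged sketch of the needed column type in plain SQLAlchemy (the
actual model may use a project-specific MediumText helper); MEDIUMTEXT
on MySQL allows values beyond the 64KB TEXT limit:
    from sqlalchemy import Column, Integer, MetaData, Table, Text
    from sqlalchemy.dialects.mysql import MEDIUMTEXT

    metadata = MetaData()

    build_requests = Table(
        'build_requests', metadata,
        Column('id', Integer, primary_key=True),
        Column('instance', Text().with_variant(MEDIUMTEXT(), 'mysql')),
    )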
Change-Id: I7d65df37c02750593037744543ad15e5bc64e913
Closes-Bug: #1635446