os-traits 0.14.0 hit upper-constraints [1] so placement's lower bounds
and canary test need to be updated accordingly.
[1] https://review.opendev.org/663507
Change-Id: Ib6f08a681162101f91c0de4ce1d2eed35975ef96
Because os-traits and os-resource-classes are libraries that
provide enumerations, we usually want whatever the latest
version is, even in lower-constraints.txt. We haven't, however,
been good about keeping those up to date.
This updates both requirements.txt and lower-constraints.txt to
the latest versions of both os-traits and os-resource-classes.
No change is required in global upper-constraints, which already
has the latest.
Change-Id: I0bd0e8e071d6275c3e21556018d476c97e8533ae
Story: 2005651
Task: 30939
Turn ResourceProvider and ResourceProviderList into
classical Python objects.
There are two changes here which deserve particular
attention because they are more involved than the other
OVO removals:
* There were two deepcopy calls in the allocation request
handling. When using OVO, the __deepcopy__ handling
built into it prevents deep recursion. I changed them
to copy and things still work as expected, presumably
because sharing the nested objects by reference is
acceptable.
* The removal of OVO removes the detection of changed fields.
This was being used when creating and saving resource
providers (at the object level) to automagically detect
and prevent writing fields we don't want to.
This change removes that functionality. Instead, if bad
data has made it as far as the create or save calls, we
simply don't write it. The HTTP layer continues to
maintain the guards it already had in place to prevent
badness. Tests which were testing the object layer
are removed.
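The copy-versus-deepcopy distinction in the first point can be illustrated with plain Python (the nested dict shape here is hypothetical, not placement's actual allocation request structure):

```python
import copy

# A hypothetical allocation request as nested containers.
request = {'allocations': [{'resource_provider': {'uuid': 'abc'},
                            'resources': {'VCPU': 2}}]}

shallow = copy.copy(request)   # new top-level dict only
deep = copy.deepcopy(request)  # every nested object is new

# The shallow copy shares the nested objects by reference.
assert shallow['allocations'][0] is request['allocations'][0]
# The deep copy does not.
assert deep['allocations'][0] is not request['allocations'][0]
```

As long as nothing mutates the shared nested objects, the shallow copy behaves the same while avoiding deep recursion.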
The create_provider function in functional/db/test_base
allowed (an attempt at) creating a provider with a root,
but that functionality was only called from one place.
Both that caller and the functionality in create_provider
are removed.
Other things to note:
* As with the other changes, where context is actually used
by the object it is required in the constructor. This
cascades some changes to tests.
* A test that checks to see if adding traits resets the
changes on a rp has been removed, because we don't
track that any more if we haven't got OVO.
* The last_modified handling in placement/util no longer
  needs NotImplemented handling; that was an artifact of
  OVO.
oslo.versionedobjects is removed from requirements. This
doesn't have a huge impact on the size of the virtualenv:
114M -> 107M [1] but it does take us from 132 python
dependencies (via pip list) to 119, removing plenty of
stuff that was only being pulled in because stuff that
we don't use depended on it.
lower-constraints.txt is updated to reflect the removal
of dependencies that are no longer needed.
[1] Of note, 26M of that is babel. Do we need to translate
exceptions? See the email discussion at
http://lists.openstack.org/pipermail/openstack-discuss/2019-February/thread.html#3002
Change-Id: Ie0a9351e0d7c762c9c96c45cbe50132a0fbd1486
Because of a conflict between how the oslo.upgradecheck library uses
oslo config and how we want to use it in placement, two different
ConfigOpts were needed to avoid an "args already parsed" error. The
new release of the oslo.upgradecheck library is changed to allow
the two steps in main (registering the CLI opts and running the
command) to be called separately if desired. Doing so allows us
to use just one ConfigOpts.
Requirements and constraints are updated.
Change-Id: I792df18cb17da95659628bfe7f7a69897c6f37ab
The 0.2.0 release of os-resource-classes has happened, adding the
'PCPU' class. It's already updated in global upper-constraints,
so update it in requirements.txt and lower-constraints and change
the gabbi tests which count resource classes.
Change-Id: I4a189aca59485d65ad8a7c9bfbeca7ac995ed336
os-resource-classes is a python library in which the standardized
resource classes are maintained. It is done as a library so that
multiple services (e.g., placement and nova) can use the same stuff.
It is used and managed here in the same way the os-traits library is
used: At system start up we compare the contents of the
resource_classes table with the classes in the library and add any
that are missing. CUSTOM resource classes are added with a high id
(and always were, even before this change). Because we need to
insert standard resource classes with an id of zero, we need to
protect against mysql treating 0 on a primary key id as "generate
the next one". We don't need a similar thing in os-traits because
we don't care about the ids there. And we don't need to guard
against postgresql or sqlite at this point because they do not have
the same behavior.
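The sync idea can be sketched in plain Python (illustrative only: the class names are a stand-in for the os_resource_classes library and the dict stands in for the resource_classes table):

```python
# Stand-ins for the os_resource_classes library and the DB table.
STANDARD_CLASSES = ['VCPU', 'MEMORY_MB', 'DISK_GB']

def sync_resource_classes(table):
    """Add any library classes missing from the table.

    Standard classes use their position in the library as the id, so
    the first one gets id 0 (with MySQL this is where a guard is needed
    so 0 isn't treated as "generate the next auto-increment id").
    """
    for rc_id, name in enumerate(STANDARD_CLASSES):
        if name not in table:
            table[name] = rc_id

table = {'VCPU': 0}           # table already has one standard class
sync_resource_classes(table)  # the missing ones are filled in
```

Custom classes would be inserted starting from a high base id instead, keeping the low id space reserved for the library.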
The resource_class_cache of id to string and string to id mappings
continues to be maintained, but now it looks solely in the database.
As part of confirming that code, it was discovered that the reader
context manager was being entered twice; this has been fixed.
Locking around every access to the resource class cache is fairly
expensive (changes the perfload job from <2s to >5s). Prior to this
change we would only go to cache if the resource classes in the
query were not standards. Now we always look at the cache so rather
than locking around reads and writes we only lock around writes.
This should be okay, because as long as we do a get (instead of the
previous two separate accesses) on the cache's dict that operation
is safe and if it misses (because something else destroyed the
cache) the fall through is to refresh the cache, which still has the
lock.
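The locking strategy can be sketched as follows (an illustrative sketch, not the actual placement code; the loader callable stands in for the database query):

```python
import threading

class ResourceClassCache(object):
    """Reads are a single dict.get (atomic under CPython); only the
    refresh path, which writes the cache, takes the lock."""

    def __init__(self, loader):
        self._lock = threading.Lock()
        self._cache = {}
        self._loader = loader  # callable returning {name: id} from the DB

    def id_from_string(self, name):
        rc_id = self._cache.get(name)  # one unlocked read
        if rc_id is None:              # miss: refresh under the lock
            with self._lock:
                self._cache = self._loader()
            rc_id = self._cache[name]
        return rc_id

cache = ResourceClassCache(lambda: {'VCPU': 0, 'MEMORY_MB': 1})
```

If something destroys the cache between the get and the use, the miss simply falls through to the locked refresh.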
While updating the database fixture to ensure that the resource
classes are synched properly, it was discovered that addCleanup was
being called twice with the same args. That has been fixed.
In objects/resource_provider.py the ResourceClass field is changed to a
StringField. The field definition was in rc_field and we simply don't
need it anymore. This is satisfactory because we don't do any validation
on the field internal to the objects (but do elsewhere).
Based on initial feedback, 'os_resource_classes' is imported as 'orc'
throughout to avoid unwieldy length.
Change-Id: Ib7e8081519c3b310cd526284db28c623c8410fbe
With oslo.config 6.7.0 configuration options may be found in environment
variables. Because of the small number of options that placement
requires to run it is straightforward to source all config from the
environment, useful in container-based environments.
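As I understand oslo.config's environment source, the variable name is derived from the group and option names; a sketch of that convention (the helper here is hypothetical, written only to show the naming):

```python
def env_var_name(group, option):
    # OS_<GROUP>__<OPTION>, upper-cased, double underscore between
    # the group and the option name.
    return 'OS_{}__{}'.format(group.upper(), option.upper())

# So [placement_database]/connection could be supplied as:
env_var_name('placement_database', 'connection')
# -> 'OS_PLACEMENT_DATABASE__CONNECTION'
```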
However, prior to this change, placement-the-wsgi-app required that a
configuration file, placement.conf, exist either in /etc/placement or
in a custom directory.
This change makes it so that the placement.conf can be any of:
* in the custom directory
* in the default project locations (including /etc/placement)
* nowhere
The change maintains the behavior that if
[placement_database]/connection is not set the application will fail to
start.
Though this change is orthogonal to the oslo.config change, requirements
and lower-constraints are updated to ensure the desired behavior.
Closes-Bug: #1802925
Change-Id: Iefa8ad22dcb6a128293ea71ab77c377db56e8d70
While exploring removing unused packages from lower-constraints.txt it
became clear that the lower-constraints job was not working as expected:
Because our tox config has usedevelop=True 'setup.py develop' is called
to install the placement package after the install command is called.
This means that the lower-constraints are clobbered.
I had mistakenly assumed that turning off 'usedevelop', which causes
'setup.py install' to be used instead, would not make any difference,
because that usually installs dependencies too. It turns out, however,
that when using pbr within a git working dir, it does not. That took
some time to figure out. Oh well.
This change makes it so that we create the tox environment using
usedevelop=False and with our own install_command, to avoid upper
constraints conflicting with lower constraints.
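The resulting environment might look roughly like this (an illustrative sketch following the common OpenStack pattern; the exact deps layout and install_command in the real tox.ini may differ):

```ini
[testenv:lower-constraints]
usedevelop = False
install_command = pip install {opts} {packages}
deps =
  -c{toxinidir}/lower-constraints.txt
  -r{toxinidir}/test-requirements.txt
```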
This flagged up a few changes, the main one being that we did not have
a new enough version of keystonemiddleware in order to require use of
www_authenticate_uri. requirements.txt is updated for this as well.
And PasteDeploy needed to be updated to work with Python 3's notion
of namespace packages.
psycopg2 needs a newer version to work with PostgreSQL 10.
oslotest needs to be raised to 3.4.0 because the tests in
cmd.test_manage use features to control what is captured by the Output
fixture from oslotest. Note that the lower-constraints job found
this problem and also demonstrates why we must run the lower-constraints
job without upper-constraints being involved. upper-constraints will
"win" and we don't want that. The point of the job is find packages
where lower-constraints are wrong, so it must "win".
The end result here is a lower-constraints.txt file that starts from
the lower-constraints.txt defined by nova, and then is adapted to
update the versions of packages that were not up to date, remove
those packages which are no longer present, and add some that are
now required.
Change-Id: Id66a28f7ace6fc2adf0e1201d9de5f901234d870
This adds the basic framework for the
placement-status upgrade check command which
is a community-wide goal for the Stein release.
A simple placeholder check is added which should
be replaced later when we have a real upgrade
check.
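The framework's shape can be sketched with a stdlib-only stand-in (illustrative; the real command is built on the oslo.upgradecheck library, and these names are not its API):

```python
# Result codes, worst wins.
SUCCESS, WARNING, FAILURE = 0, 1, 2

def check_placeholder():
    # To be replaced with a real upgrade check later.
    return SUCCESS

# Named checks the command will run.
CHECKS = (('Placeholder', check_placeholder),)

def run_checks():
    # The command's exit code is the worst result across all checks.
    return max(check() for _name, check in CHECKS)
```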
Change-Id: I9291386fe7fcbfc035c104ea9fdbe5eb875c4776
Story: 2003657
Task: 27518
With the migrations now gone, we can safely remove sqlalchemy-migrate
from requirements.txt and lower-constraints.txt.
Change-Id: Id8d03d9532bcf4c4f7ad1c1039fa6a42990f8d5d
This reverts commit bd7d991309ea2bea5d175cb1f2710519936fd0c2 and bumps
the minimum version of oslo.db to 4.40.0, as that is the first version
of the library to include the renamed attribute.
Change-Id: Ic9e7864be3af7ef362cad5648dfc7bdecd104465
Related-Bug: #1788833
This changes the max_concurrent_live_migrations handling
to use a ThreadPoolExecutor so that we can control a bounded
pool of Futures in order to cancel queued live migrations
later in this series.
There is a slight functional difference in the unlimited
case since, starting in python 3.5, ThreadPoolExecutor will
default to ncpu * 5 concurrently running threads. However,
max_concurrent_live_migrations defaults to 1 and assuming
compute hosts run with 32 physical CPUs on average, you'd
be looking at a maximum of 160 concurrently running live
migrations, which is probably way above what anyone would
consider sane.
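The bounded-pool pattern looks roughly like this (illustrative values and a stand-in task, not nova's actual code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

max_concurrent_live_migrations = 1  # the config option's default

def live_migrate(instance):
    time.sleep(0.1)  # stand-in for the actual migration work
    return instance

with ThreadPoolExecutor(max_workers=max_concurrent_live_migrations) as pool:
    futures = [pool.submit(live_migrate, i) for i in range(3)]
    # Futures still sitting in the queue can be cancelled; this is
    # what enables aborting a live migration in "queued" status.
    cancelled = futures[-1].cancel()
```

With one worker, the third submission is still queued when cancel() is called, so it never runs.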
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Part of blueprint abort-live-migration-in-queued-status
Change-Id: Ia9ea1e164fb3b4a386405538eed58d94ad115172
This is the first change that implements basic virt.driver methods
to allow the nova-compute process to start successfully.
A set of subsequent changes will implement spawn, snapshot, destroy
and instance power actions.
Change-Id: Ica6117c2c64f7518b78b7fb02487622250638e88
blueprint: add-zvm-driver-rocky
Bump the minimum version of oslo.config to 6.1.0, which adds proper
support for parsing Opt.help as rST [1]. This in turn allows us to
revert commit 75fc30090133c31316e6ae790568f1b622807d0c, which is a
temporary fix relying on deprecated features of Sphinx.
[1] https://review.openstack.org/#/c/553860/
Change-Id: I8f56bdce37cfc538348490052a24e463164c86a3
In Ie4d81fa178b3ed6b2a7b450b4978009486f07810 we started using a new WebOb API
for introspecting headers, but since this new API isn't supported by versions
older than 1.8, we need to only accept 1.8.1 or 1.8.2 for Nova
(because 1.8.0 had a bug that was fixed by 1.8.1 at least).
Change-Id: I345f372815aef5ac0fb6fc607812ce81587734bf
Closes-Bug: #1773225
The new image handler creates an image proxy which
uses the vdi streaming function from os-xenapi to
remotely export VHDs from XenServer (image upload) or import
VHDs to XenServer (image download).
The existing GlanceStore uses custom functionality to directly
manipulate files on-disk, so it has the restriction that SR's
type must be file system based: e.g. ext or nfs. The new
image handler invokes APIs formally supported by XenServer
to export/import VDIs remotely, so it can also support other SR
types, e.g. lvm, iscsi, etc.
Note:
vdi streaming is supported by XenServer 6.5 or above.
The image handler function depends on os-xenapi 0.3.3 or
above, so bump os-xenapi's version to 0.3.3 and also declare a
dependency on the patch which bumps the version in openstack/requirements.
Blueprint: xenapi-image-handler-option-improvement
Change-Id: I0ad8e34808401ace9b85e1b937a542f4c4e61690
Depends-On: Ib8bc0f837c55839dc85df1d1f0c76b320b9d97b8
This change makes nova configure oslo.messaging's active call monitoring
feature if the operator increases the rpc_response_timeout configuration
option beyond the default of 60 seconds. If this happens, oslo.messaging will
heartbeat actively-running calls to indicate that they are still running,
avoiding a false timeout at the shorter interval, while still detecting
actual dead-service failures before the longer timeout value.
In addition, this adds a long_rpc_timeout configuration option that we
can use for known-to-run-long operations separately from the base
rpc_response_timeout value, and pre_live_migration() is changed to use
this, as it is known to suffer from early false timeouts.
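The decision amounts to something like the following (an illustrative sketch of the logic, not nova's code; call_monitor_timeout is the oslo.messaging knob for the heartbeat interval):

```python
def call_timeouts(rpc_response_timeout, default=60):
    """If the operator raised the timeout past the default, heartbeat
    at the old default interval while letting the call run longer."""
    if rpc_response_timeout > default:
        return {'call_monitor_timeout': default,
                'timeout': rpc_response_timeout}
    return {'timeout': rpc_response_timeout}

call_timeouts(300)  # heartbeat every 60s, fail only after 300s
call_timeouts(60)   # default behavior, no call monitor
```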
Depends-On: Iecb7bef61b3b8145126ead1f74dbaadd7d97b407
Change-Id: Icb0bdc6d4ce4524341e70e737eafcb25f346d197
Add an NVMeoF libvirt driver supporting the NVMeoF initiator CLI.
A libvirt NVMe volume driver is added to handle the calls required for
attaching and detaching volumes from instances by calling
os-brick's NVMe Connector.
Implements: blueprint nvme-over-fabirc-nova
Co-Authored-By: Ivan Kolodyazhny <e0ne@e0ne.info>
Change-Id: I67a72c4226e54c18b3a6e4a13b5055fa6e85af09
This adds a granular policy checking framework for
placement based on nova.policy but with a lot of
the legacy cruft removed, like the is_admin and
context_is_admin rules.
A new PlacementPolicyFixture is added along with
a new configuration option, [placement]/policy_file,
which is needed because the default policy file
that gets used in config is from [oslo_policy]/policy_file
which is being used as the nova policy file. As
far as I can tell, oslo.policy doesn't allow for
multiple policy files with different names unless
I'm misunderstanding how the policy_dirs option works.
With these changes, we can have something like:
/etc/nova/policy.json - for nova policy rules
/etc/nova/placement-policy.yaml - for placement rules
The docs are also updated to include the placement
policy sample along with a tox builder for the sample.
This starts by adding granular rules for CRUD operations
on the /resource_providers and /resource_providers/{uuid}
routes which use the same descriptions from the placement
API reference. Subsequent patches will add new granular
rules for the other routes.
Part of blueprint granular-placement-policy
Change-Id: I17573f5210314341c332fdcb1ce462a989c21940
With a fix in wsgi-intercept 1.7.0 we can properly
use the PlacementFixture as a context manager to
simulate when placement is configured for a given
operation. This allows us to remove the ugly stub
that one of the tests in here had to rely on.
While in here, the CastAsCall fixture is removed
since we shouldn't rely on that in these tests
where we're trying to simulate the user experience.
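The context-manager usage pattern can be sketched with a stdlib stand-in (illustrative only; the real PlacementFixture wires up wsgi-intercept and a running placement app):

```python
import contextlib

@contextlib.contextmanager
def placement_fixture(state):
    # Scope "placement is configured" to the duration of the block,
    # rather than the whole test.
    state['placement'] = True
    try:
        yield
    finally:
        state['placement'] = False

state = {'placement': False}
with placement_fixture(state):
    assert state['placement']       # placement available here
assert not state['placement']       # and gone again afterwards
```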
Change-Id: I2074b45126b839ea6307a8740364393e9dddd50b
Create a tox environment for running the unit tests against the lower
bounds of the dependencies.
Create a lower-constraints.txt to be used to enforce the lower bounds
in those tests.
Add openstack-tox-lower-constraints job to the zuul configuration.
See http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html
for more details.
Change-Id: Ic28558ee6481d49a9b4e5dc2c4182504e330448f
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
Co-Authored-by: Eric Fried <efried@us.ibm.com>
Co-Authored-by: Jim Rollenhagen <jim@jimrollenhagen.com>