Opportunistic db test cases create schemas on demand, so that each
test case which inherits from the base test case class gets its own
db schema (i.e. races between tests are not possible).
In order to do schema provisioning we have to connect to RDBMS server
first. So far we've been connecting to the openstack_citest database,
which is guaranteed to exist on CI nodes. It turns out, there are a
few test cases in Nova (maybe in other projects as well), that drop
and recreate the openstack_citest database. If they happen to do that
when the opportunistic db fixture is in the middle of provisioning a
schema, those tests will fail (as there is an open session to the
database and thus it can't be dropped).
This can be solved easily by changing the way we provision new
schemas in opportunistic db test cases, as we don't actually have
to connect to the openstack_citest database at all:
- for MySQL we can use an empty db name to connect to the MySQL
server, but not to a particular database
- PostgreSQL requires us to specify the database name. We can use
the service postgres database here (PostgreSQL shell utils such
as createdb, createuser, etc. use it for the very same reason)
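The two rules above can be sketched as follows; the helper name and the default credentials here are illustrative, not part of the oslo.db API:

```python
# Sketch of deriving a provisioning URL that avoids openstack_citest;
# provisioning_url() and its defaults are hypothetical helpers.

def provisioning_url(backend, user="openstack_citest",
                     passwd="openstack_citest", host="localhost"):
    """Return a URL suitable for connecting *before* a schema exists."""
    if backend == "mysql":
        # MySQL allows connecting with no database selected at all.
        return "mysql://%s:%s@%s/" % (user, passwd, host)
    elif backend == "postgresql":
        # PostgreSQL requires a database name, so use the service
        # 'postgres' database, just like createdb/createuser do.
        return "postgresql://%s:%s@%s/postgres" % (user, passwd, host)
    raise ValueError("unsupported backend: %s" % backend)
```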
Make it possible to use an additional (a.k.a. 'slave') database
connection. This might be useful for offloading read operations
to reduce the load on the RDBMS. Currently, this is only used in
Nova, but can be ported to other projects easily.
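A minimal sketch of the idea, with a hypothetical class standing in for the real engine handling (the names below are illustrative, not the oslo.db API):

```python
# Hypothetical sketch: a second 'slave' connection that read-only
# callers can opt into with a use_slave flag.

class DatabaseConnections(object):
    def __init__(self, connection, slave_connection=None):
        self.connection = connection
        # Fall back to the master when no slave is configured.
        self.slave_connection = slave_connection or connection

    def get_connection(self, use_slave=False):
        # use_slave=True offloads reads to reduce load on the master.
        return self.slave_connection if use_slave else self.connection
```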
Currently, the model_query() function from oslo.db.sqlalchemy.utils uses
the is_user_context() function in order to determine whether a normal or
an admin user request is being processed. But usage of RequestContext is
not unified across OpenStack projects, so there is no way for oslo.db to
guarantee it processes project_id/deleted filters correctly here.
To remove this ambiguity, project_id/deleted filters should be passed
explicitly instead. At the same time, developers of OpenStack projects
are encouraged to provide wrappers for the oslo.db model_query() function,
so that they can determine the values of the project_id/deleted filters
depending on how they use RequestContext in their project.
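A hedged sketch of what such a project-side wrapper could look like; model_query() below is a stand-in that only collects the filters (a real implementation would build a session query), and the context layout is hypothetical:

```python
# model_query() no longer inspects the context; filters are explicit.
def model_query(model, session, project_only=False, deleted=False,
                project_id=None):
    """Stand-in for oslo.db model_query(): filters are explicit args."""
    filters = {"deleted": deleted}
    if project_only:
        filters["project_id"] = project_id
    return filters

# The project-side wrapper knows how its RequestContext is structured,
# so *it* derives the filter values and passes them explicitly.
def project_model_query(context, model, session, project_only=False):
    return model_query(model, session,
                       project_only=project_only,
                       deleted=context.get("read_deleted", False),
                       project_id=context.get("project_id"))
```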
There are files containing string format arguments inside
logging messages. Using logging function parameters should
be preferred instead, so that string interpolation is deferred
until a message is actually emitted.
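A minimal sketch of the preferred pattern: pass the format arguments to the logging call itself, so interpolation only happens when a handler actually emits the record.

```python
import logging

LOG = logging.getLogger("demo")
value = 42

# Eager interpolation: the string is built even if DEBUG is disabled.
#     LOG.debug("value is %s" % value)

# Deferred interpolation: the arguments are only merged into the
# message by LogRecord.getMessage() when the record is emitted.
LOG.debug("value is %s", value)
```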
get_session() method is used to construct a new SQLAlchemy session
instance. Currently, it takes two keyword arguments which are used
to configure the Session instance to be created. Actually, there are
more arguments which can be passed to a sessionmaker instance. oslo.db
should not stand in the way of a developer who wants to use specific
features of SQLAlchemy sessions: it should handle oslo.db-specific
arguments and pass the remaining keyword arguments as-is when calling
the sessionmaker instance.
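A sketch of that pass-through behavior, with make_session() standing in for a sessionmaker; the argument names mirror the idea rather than the exact oslo.db signature:

```python
# Handle the oslo.db-specific keyword arguments, forward the rest.
def get_session(autocommit=True, expire_on_commit=False, **kwargs):
    kwargs.setdefault("autocommit", autocommit)
    kwargs.setdefault("expire_on_commit", expire_on_commit)
    # Any other SQLAlchemy Session option (e.g. 'twophase') is passed
    # through untouched.
    return make_session(**kwargs)

def make_session(**kwargs):
    """Stand-in for a sessionmaker instance; echoes its options."""
    return kwargs
```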
When greenthreads aren't being used, this call is not
very useful, and since oslo.db is a library, it can't be
guaranteed that those using it will also be using
eventlet (or a similar library).
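One way a library can avoid a hard eventlet dependency is a guarded import; this is an illustrative sketch of the general pattern, not the change itself:

```python
# Only yield to other greenthreads when eventlet is actually present;
# a plain thread has nothing to cooperatively yield to.
try:
    from eventlet import greenthread
except ImportError:
    greenthread = None

def maybe_yield():
    if greenthread is not None:
        greenthread.sleep(0)  # give other greenthreads a chance to run
```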
Add methods for modifying indexes to
openstack/common/db/sqlalchemy/utils.py.
Add tests for these methods to the TestUtils class.
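The helpers could look roughly like this; the sketch below works on a plain sqlite3 connection, while the real utils.py helpers operate on SQLAlchemy objects, and the function names are illustrative:

```python
import sqlite3

def add_index(conn, table, index_name, columns):
    """Create an index on the given columns."""
    conn.execute("CREATE INDEX %s ON %s (%s)"
                 % (index_name, table, ", ".join(columns)))

def drop_index(conn, table, index_name):
    """Drop an existing index."""
    conn.execute("DROP INDEX %s" % index_name)

def change_index_columns(conn, table, index_name, new_columns):
    """Recreate an index over a different set of columns."""
    drop_index(conn, table, index_name)
    add_index(conn, table, index_name, new_columns)
```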
If an opportunistic test case (i.e. its fixture) can't connect to a test
database, this happens silently, without any logs. Most of the time we
need to know the cause of the failure.
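A hedged sketch of the fix: log the reason a backend is unavailable instead of failing silently. try_connect() and its connect callable are illustrative stand-ins for the fixture's real connect attempt.

```python
import logging

LOG = logging.getLogger(__name__)

def try_connect(connect, url):
    """Attempt a backend connection, logging the failure reason."""
    try:
        return connect(url)
    except Exception:
        # Previously the fixture just gave up silently; now the cause
        # of the failure (with traceback) ends up in the logs.
        LOG.exception("Opportunistic backend %s is unavailable", url)
        return None
```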
Instead of messing with another library's logger, by default
only adjust the levels of that logger if the connection debug
is greater than or equal to zero. This allows those who have
configuration files or other ways to adjust logging to not be
affected by this code's level manipulation (which is likely
a side-effect users would not expect).
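A sketch of that gating; the thresholds below are illustrative rather than the exact oslo.db mapping:

```python
import logging

def setup_sqlalchemy_logging(connection_debug):
    """Adjust the sqlalchemy logger only when explicitly asked to."""
    if connection_debug < 0:
        # Leave the sqlalchemy logger alone: users who configure
        # logging themselves (e.g. via config files) keep full control.
        return None
    logger = logging.getLogger("sqlalchemy.engine")
    # Illustrative thresholds, not the exact oslo.db mapping.
    if connection_debug >= 100:
        logger.setLevel(logging.DEBUG)
    elif connection_debug >= 50:
        logger.setLevel(logging.INFO)
    else:
        logger.setLevel(logging.WARNING)
    return logger
```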
- make it possible to create a DBAPI instance given a config instance
- remove usage of the global cfg.CONF instance (it's up to the user to
pass a ConfigOpts instance to oslo.db)
- ensure end applications don't need to import/register/use config
options directly - instead they pass a config instance to oslo.db
helpers (the EngineFacade.from_config() and DBAPI.from_config() class
methods)
At the same time, usage of oslo.config remains completely optional as
we provide an API to pass those parameters programmatically.
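A hypothetical sketch of the from_config() pattern: the application hands its own config object to oslo.db instead of oslo.db reading a global cfg.CONF. The Config class below is a stand-in for oslo.config's ConfigOpts, and the option names are illustrative.

```python
class Config(object):
    """Stand-in for a ConfigOpts instance."""
    def __init__(self, **opts):
        self._opts = opts

    def get(self, name, default=None):
        return self._opts.get(name, default)

class EngineFacade(object):
    def __init__(self, connection, slave_connection=None):
        self.connection = connection
        self.slave_connection = slave_connection

    @classmethod
    def from_config(cls, connection, conf):
        # All remaining options come from the passed-in config
        # instance, never from a global one.
        return cls(connection,
                   slave_connection=conf.get("slave_connection"))
```

Note that the plain constructor still works without any config object at all, which is what keeps oslo.config optional.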