1. The database name is a random string, so it shouldn't already exist.
2. If it *does* already exist, we quite likely want to throw an error
rather than just delete it.
3. "drop database if exists non_existant_database" throws an error with
mysqlconnector and a noisy-but-harmless warning with others.
(We could fix (3) if it were important.)
For consistency, this also makes drop_database raise an error if the
database doesn't exist (or has already been deleted).
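As a rough sketch of the intended behavior (the helper name and the
DDL here are illustrative, not the exact oslo.db provisioning code):

    import sqlalchemy

    def drop_database(engine, db_name):
        """Drop the named database; a missing database raises an error."""
        with engine.connect() as conn:
            # No "IF EXISTS": if the database has already been deleted,
            # the backend raises instead of silently succeeding.
            conn.execute("DROP DATABASE %s" % db_name)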
Closes-Bug: #1341906
Change-Id: I0c4a460088ffb7eb79cbf8594d2ff37fd70cf157
The last part where we are handling DB exceptions is the
_is_db_connection_error() function, which is called within a retry
loop inside of session.create_engine() to perform an initial
connection test. SQLAlchemy currently does not run "connect"
errors through the exception handling system. In order to have
these exceptions participate in the filtering system, add
a new function handle_connect_error(engine) which runs
engine.connect() and feeds the exception into the
handler(context) system directly, producing a compatible context
that the filters can use. Refactor the looping mechanism within
session.create_engine() into a separate function _test_connection()
so that the logic is encapsulated and can be tested.
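A condensed, self-contained sketch of that shape (not the oslo.db
source; DBConnectionError stands in for the real oslo.db exception,
and "handler" is whatever translates raw driver errors):

    import time

    class DBConnectionError(Exception):
        pass

    def handle_connect_error(engine, handler):
        """Run engine.connect() and feed any failure into the handler."""
        try:
            return engine.connect()
        except Exception as exc:
            # The handler inspects a context built from exc and may
            # re-raise a translated error such as DBConnectionError.
            handler(exc)
            raise

    def _test_connection(engine, handler, max_retries=10, retry_interval=1):
        """The retry loop formerly inlined in session.create_engine()."""
        attempt = 0
        while True:
            try:
                return handle_connect_error(engine, handler)
            except DBConnectionError:
                attempt += 1
                if attempt >= max_retries:
                    raise
                time.sleep(retry_interval)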
partially implement bp: use-events-for-error-wrapping
Change-Id: Iad1bac9d0f9202b21e4c9c170aa84494b770728d
Replace the use of the connection pool "checkout" event
for "pinging" a connection with the use of the Connection.begin()
event. This event corresponds to the start of a transaction, so it
will "ping" the connection for every transaction rather than only
on checkout. By running at the level of Connection rather than
the pool, we have access to the whole range of services we need:
we can emit a Core "select([1])" that works on all backends, we
get the use of the handle_error() event, and we get the built-in
"is disconnect" checking, with the engine/pool invalidated or
disposed as appropriate for the SQLAlchemy version in use. The
begin() event is a safe place to recycle the connection from an
invalid state back to a ready state.
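A minimal sketch of the listener (not the exact oslo.db code; the
error translation that invalidates a stale connection on the first
failed ping is assumed to be provided by the handle_error machinery
described elsewhere in this series):

    from sqlalchemy import event, select

    def register_ping_listener(engine):
        @event.listens_for(engine, "begin")
        def _ping(conn):
            try:
                # Core "select 1" that works on every supported backend.
                conn.scalar(select([1]))
            except Exception:
                # The failed ping invalidated the stale connection;
                # retrying on the same Connection procures a fresh one.
                conn.scalar(select([1]))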
partially implement bp: use-events-for-error-wrapping
Change-Id: Ic29e12f1288f084a5e727101686dd71b12a5787b
This patch backports an additional feature from the
SQLAlchemy 0.9.7 handle_error() event, the ability to alter the
"is disconnect" status of an exception. In this case, the
supported case is only to take an exception that's not a disconnect,
and to graduate it to a disconnect. Going in the other direction
isn't possible with the approach here because the connection invalidation
would already have taken place, as we are wrapping the _handle_dbapi_error()
method.
We may want to be able to promote an exception to "is disconnect" in
cases where special database configurations or backends such as that of
Percona have complex detection scenarios not covered by SQLAlchemy's
provided backends.
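For example, a filter enabled by this backport could look roughly
like the following; the function name and regex are hypothetical,
while original_exception and is_disconnect are the attributes the
exception context exposes:

    import re

    GONE_AWAY_RE = re.compile(r"gone away|Lost connection", re.IGNORECASE)

    def maybe_promote_to_disconnect(context):
        """Graduate a non-disconnect error to "is disconnect"; the
        reverse direction is not supported by this backport."""
        if not context.is_disconnect and \
                GONE_AWAY_RE.search(str(context.original_exception)):
            context.is_disconnect = True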
partially implement bp: use-events-for-error-wrapping
Change-Id: I23a4da2dc39af015a01d6d587c57c5e1c4b95f31
Replace @_wrap_db_error() with an event-based system
that leverages the new SQLAlchemy handle_error() event.
The internal logic of @_wrap_db_error() is replaced by a
system of declarative filters, which is also expressed
on the testing side using a new test framework for these
conditions.
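The flavor of the declarative filters, condensed into a
self-contained sketch (the shipped exc_filters module differs in
detail; DBDeadlock here stands in for the oslo.db exception):

    import collections
    import re

    _registry = collections.defaultdict(list)

    def filters(dbname, exception_type, regex):
        """Declare a translation rule for one backend / error pattern."""
        def decorate(fn):
            _registry[dbname].append((exception_type, re.compile(regex), fn))
            return fn
        return decorate

    class DBDeadlock(Exception):
        pass

    @filters("mysql", Exception, r"Deadlock found")
    def _deadlock_error(error, match):
        raise DBDeadlock(error)

    def translate(dbname, error):
        """Run a raw driver error through the registered filters."""
        for exc_type, pattern, fn in _registry[dbname]:
            match = pattern.search(str(error))
            if isinstance(error, exc_type) and match:
                fn(error, match)
        raise error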
partially implement bp: use-events-for-error-wrapping
Closes-Bug: #1214341
Change-Id: Ib8c7f8d5c6f44c49a53dd6411adbf8aaaecb74be
This event hook is to be released as of SQLAlchemy 0.9.7. It is
called upon interception of all DBAPI errors and other exceptions
raised within core Engine operations, and supersedes the existing
SQLAlchemy ``dbapi_error()`` event. The new event offers
the ability to intercept all errors associated with statement execution
and result reception, not just DBAPI-level errors, and also supports
re-raising of the given exception with a replacement exception.
Here, we implement a compatible system that works in SQLAlchemy 0.8
and forward.
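On 0.9.7 and later the hook is consumed directly, and the
compatibility layer emulates the same contract on 0.8. A minimal
illustration of the replacement-exception behavior (the wrapper
function passed in is an assumption):

    from sqlalchemy import event

    def add_error_wrapper(engine, wrap_exception):
        @event.listens_for(engine, "handle_error")
        def _wrap(context):
            # Returning a new exception from the listener re-raises it
            # in place of the original DBAPI or statement error.
            return wrap_exception(context.original_exception,
                                  context.statement)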
partially implement bp: use-events-for-error-wrapping
Change-Id: Ife52be4ae870739dc296742fa4aefb3e31c36be4
Currently, deadlock is only checked for in mysql and postgresql.
This adds a check for db2 deadlocks.
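Illustratively, the deadlock patterns end up keyed per backend along
these lines (SQL0911N is DB2's "rolled back because of a deadlock or
timeout" message; treat the exact keys and regexes as assumptions):

    import re

    _DEADLOCK_RE_DB = {
        "mysql": re.compile(r"\(1213, 'Deadlock"),
        "postgresql": re.compile(r"deadlock detected"),
        "ibm_db_sa": re.compile(r"SQL0911N"),
    }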
Fixes bug #1340793
Change-Id: Icef725ed75929ff13234ea58026e6bf0f2e2c852
Migrations should be tested with real database backends. For this
purpose a number of base test cases are provided in the current
implementation.
Previously there were two ways to run migration tests: using
opportunistic test cases (default way we've been using on CI) and
by using database connection strings, specified in test_migrations.conf,
for every particular database test case. For the sake of simplicity and
consistency we are moving to the use of opportunistic db test cases here.
With this change we are free from locking, so we don't need `lockfile`
anymore.
Closes-Bug: #1327397
Closes-Bug: #1328997
Change-Id: I92b1dcd830c4755f429a0f6529911e607c2c7de7
Move InvalidSortKey and ColumnError from oslo/db/sqlalchemy/utils.py to
oslo/db/exception.py for ease of use.
Add references in utils.py to exception.py for backwards compatibility.
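The shape of the move, with the compatibility references kept in
utils.py (docstrings paraphrased, not copied):

    # oslo/db/exception.py
    class ColumnError(Exception):
        """Error raised when no column or an invalid column is requested."""

    class InvalidSortKey(Exception):
        """A sort key destined for database query usage is invalid."""

    # oslo/db/sqlalchemy/utils.py -- backwards-compatible aliases
    # from oslo.db import exception
    # ColumnError = exception.ColumnError
    # InvalidSortKey = exception.InvalidSortKey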
Change-Id: I38595d6bcceee84ef0ca7d88dae5617c006ba8c1
Enable test_modelbase_iteritems and test_modelbase_iter which were
always skipped. These tests need a mapped object instead of a
ModelBase instance.
Refactor ModelBaseTest a bit - move creation of ModelBase and
ExtraKeysModel instances to setUp().
Refactor test_modelbase_iteritems and test_modelbase_iter
to check if we can iterate over columns and extra_keys.
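Roughly, the mapped model the refactored setUp() needs looks like the
following; the class, table, and column names are illustrative, and
the import path reflects the oslo.db layout as of this series:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    from oslo.db.sqlalchemy import models

    BASE = declarative_base(cls=models.ModelBase)

    class ExtraKeysModel(BASE):
        __tablename__ = 'test_model'
        id = Column(Integer, primary_key=True)
        smth = Column(String(255))

    obj = ExtraKeysModel(id=1, smth='test')
    # Iteration only works on a mapped instance like this, not on a
    # bare ModelBase(); here we just confirm we can walk the columns
    # (and any extra keys the model declares).
    for item in obj:
        print(item)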
Co-Authored-By: Oleksii Chuprykov <ochuprykov@mirantis.com>
Closes-Bug: #1312220
Change-Id: If19137cc81959031764c1b78f990b67a8912a304
Change only the places where list comprehensions were used in place
of loops and their return values were not used.
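The pattern of each change, as a generic example rather than a
specific call site from the tree:

    # Before: builds and discards a list purely for side effects.
    #     [session.delete(row) for row in rows]

    # After: a plain loop, with no unused return value.
    def delete_all(session, rows):
        for row in rows:
            session.delete(row)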
Change-Id: Ib518e77b26cf61cac4acb6a7e9b851aafffcd855
Storing the iterator object as an attribute of the model doesn't
allow several iterators to be used at the same time.
Now the new Iterator object created by iter() stores the model
object, so several iterators can be created and used without
interfering with each other.
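A condensed sketch of the new shape (the helper that lists the
column names is assumed):

    class ModelIterator(object):
        """Independent iterator over a model's attributes."""

        def __init__(self, model, columns):
            self.model = model
            self.i = iter(columns)

        def __iter__(self):
            return self

        def __next__(self):
            name = next(self.i)            # StopIteration ends the loop
            return name, getattr(self.model, name)

        next = __next__                    # Python 2 compatibility

    # In ModelBase, __iter__() now returns a fresh ModelIterator
    # instead of storing the iterator on the model itself:
    #     def __iter__(self):
    #         return ModelIterator(self, self._column_names())  # assumed helper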
Change-Id: I5806877a2d15d449b948307a522bf2fb56bf8110
Actually, adjust the mysql regex to match on the "Duplicate entry 'foo'
for key 'bar'" error string that is presumably buried deep within mysqld
itself, with less regard to specific punctuation.
Hopefully this will be portable across all mysql drivers.
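For instance, a pattern of roughly this shape matches the
server-generated message regardless of how a given driver quotes or
wraps it (the exact regex shipped in exc_filters may differ):

    import re

    DUP_ENTRY_RE = re.compile(r"Duplicate entry '([^']*)' for key '([^']*)'")

    # MySQLdb-style wrapping ...
    assert DUP_ENTRY_RE.search(
        "(1062, \"Duplicate entry 'foo' for key 'bar'\")")
    # ... and a differently punctuated rendering of the same error.
    assert DUP_ENTRY_RE.search(
        "1062 (23000): Duplicate entry 'foo' for key 'bar'")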
Change-Id: I241c5cb61a6335857942bf79ee2301ae0c94ff6e
The check for mysql_sql_mode uses a plain truthiness check,
which misses the case where sql_mode is passed as the blank string.
As a result, the blank string value is not passed to SET SESSION,
causing tests to break where a blank value is expected.
The conditional is changed to "is not None".
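The fix, in essence (function and argument names follow the session
module's sql_mode handling but are best read as a sketch):

    def _set_session_sql_mode(dbapi_con, sql_mode=None):
        # "if sql_mode:" silently dropped the blank string, which is a
        # valid value that clears all modes; only None means "unset".
        if sql_mode is not None:
            cursor = dbapi_con.cursor()
            cursor.execute("SET SESSION sql_mode = %s", [sql_mode])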
Change-Id: I919ab7c10aadf4b8778bd1e3425b9221fcdaa2f8
Closes-Bug: 1340395
Add a base test case that allows checking whether the DB schema
obtained by applying migration scripts is equal to the one produced
from the model definitions. It's very important that those two stay
in sync to be able to use the model definitions for generation of
the initial DB schema. However, because of the use of
sqlalchemy-migrate (we had to write migration scripts manually,
whereas alembic can generate them automatically), projects have many
divergences right now, which must be detected and fixed.
Fortunately, we can now rely on the alembic + sqlalchemy
implementation of schema comparison instead of writing our own tools.
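The comparison itself reduces to something like the following; the
helper name is illustrative, while compare_metadata() and
MigrationContext are the alembic APIs involved:

    from alembic.autogenerate import compare_metadata
    from alembic.migration import MigrationContext

    def schema_diff(engine, models_metadata):
        """Return alembic's diff list; an empty list means the schema
        built by migrations matches the model definitions."""
        with engine.connect() as conn:
            mc = MigrationContext.configure(conn)
            return compare_metadata(mc, models_metadata)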
Blueprint: sync-models-with-migrations
Change-Id: I2d2c35987426dacb1f566569d23a80eee3575a58
This change presents one way we might include test support
for oslo.db against specific SQLAlchemy major releases, currently
including the 0.7, 0.8, and 0.9 series. As we will want to
begin including features within oslo.db that target advanced
and in some cases semi-public APIs within SQLAlchemy, it will
be important that we test these features against each major release,
as there may be variances between major revs as well as
version-specific approaches within oslo.
To accomplish this, I was not able to override "deps" alone,
as the SQLAlchemy revision within requirements.txt conflicts
with a hand-entered requirement, and due to pip's lack of
a dependency resolver (see https://github.com/pypa/pip/issues/988
and https://github.com/pypa/pip/issues/56) I instead overrode
"commands". I don't know that this is the best approach, nor
do I know how the tox.ini file is accommodated by CI servers,
whether these CI servers would need their tox invocation altered, or
how that works.
This patch may or may not be the way to go, but in any case
I'd like to get input on how we can ensure that more SQLAlchemy-specific
oslo.db features can be tested against multiple SQLAlchemy versions.
Note that even with this change, running the "sqla_07" environment
does in fact produce test failures, see http://paste.openstack.org/show/85263/;
so already oslo.db expects behaviors that are not present in
all SQLAlchemy versions listed in the common requirements.txt.
Change-Id: I4128272ce15b9e576d7b97b1adab4d5027108c7c
Oslo Db ModelBase implements __getitem__ and __setitem__ but does not
implement __contains__. This behaves misleadingly on "in" operations,
silently returning False even if such a key exists. Add the missing
method.
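The added method roughly amounts to delegating to hasattr(), shown
here alongside the existing accessors (other ModelBase methods
trimmed):

    class ModelBase(object):
        """Base class for models (trimmed to the dict-style accessors)."""

        def __setitem__(self, key, value):
            setattr(self, key, value)

        def __getitem__(self, key):
            return getattr(self, key)

        def __contains__(self, key):
            return hasattr(self, key)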
Change-Id: I1ddea1312424da8c372bb599371481ab76c19a3a
Closes-Bug: #1333337
The test suite TpoolDbapiWrapperTestCase makes a blanket assumption
that the test environment does not have eventlet installed; it
places a mock object into sys.modules in order to take the place
of eventlet. The test_db_api_without_installed_eventlet test, however,
deletes this mock, assuming that subsequent access will raise an
ImportError. The change is to instead place None in this dictionary
so that an ImportError is guaranteed regardless of whether or not
eventlet is actually installed in the environment.
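The mechanism in isolation: a None entry in sys.modules makes the
import machinery raise ImportError even when the real package is
installed.

    import sys

    sys.modules['eventlet'] = None
    try:
        import eventlet  # noqa
    except ImportError:
        print("eventlet import blocked, as the test requires")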
Change-Id: I7b5b7c0d20581f8ecfa319f3cfa0f48112bc86b9
Closes-Bug: #1334427
Passing mutable objects as default args is a known Python pitfall.
We'd better avoid this. This commit replaces mutable default args with
None, then uses 'arg = arg or []'.
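The pattern being applied, as a generic example:

    # Before: the default list is shared across calls.
    #     def add_tags(item, tags=[]):
    #         ...

    # After: use None and substitute inside the body.
    def add_tags(item, tags=None):
        tags = tags or []
        return {"item": item, "tags": tags}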
Change-Id: Id91b53e03c17a19e94830b43fe0c4970c824ff70
The slave_connection keyword argument is passed to the EngineFacade
constructor twice: the first time explicitly in code, and the second
time when the CONF.database options are unpacked. It's better to pass
all the arguments explicitly rather than rely on the fact that
argument names correspond to database option names (currently they
are tightly coupled).
Closes-Bug: #1330426
Change-Id: Ic0aaf26e76f68adc48f7ed58fd4c7e9e87648005
We have a few opportunistic test cases which are skipped in the gate
now because psycopg2 is missing.
Change-Id: Ia5e96ed4a6af7a311d8dc89b00a1fb4cc02f56ac
Opportunistic db test cases create schemas on demand, so that each
test case which inherits the base test case class will get its own
db schema (i.e. races between tests are not possible).
In order to do schema provisioning we have to connect to the RDBMS
server first. So far we've been connecting to the openstack_citest
database, which is guaranteed to exist on CI nodes. It turns out
there are a few test cases in Nova (and maybe in other projects as
well) that drop and recreate the openstack_citest database. If they
happen to do that when the opportunistic db fixture is in the middle
of provisioning a schema, those tests will fail (as there is an open
session to the database and thus it can't be dropped).
This can be solved easily by changing the way we provision new
schemas in opportunistic db test cases, because we actually don't
have to connect to the openstack_citest database at all (see the
sketch after this list):
- for MySQL we can use an empty db name to connect to the MySQL
  server, but not to a particular database
- PostgreSQL requires us to specify the database name. We can use
  the service postgres database here (PostgreSQL shell utilities
  such as createdb, createuser, etc. use it for the very same reason)
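Example server-level URLs for provisioning, per the list above
(credentials and hosts are illustrative only):

    # MySQL: an empty database name connects to the server only.
    MYSQL_URL = "mysql://openstack_citest:openstack_citest@localhost/"

    # PostgreSQL: connect to the service "postgres" database.
    POSTGRES_URL = ("postgresql://openstack_citest:openstack_citest"
                    "@localhost/postgres")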
Closes-Bug: #1328997
Change-Id: I0dc0becc5cb40d3dab3289c865a96113522a0b9a