Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that notes where to find
ongoing work and how to recover the repo if needed at some
future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I452ed203c74dc83ca328116d6111a05cf7342c53
Tony Breeds 2017-09-12 16:05:48 -06:00
parent 2af0348d26
commit 0053017d18
104 changed files with 14 additions and 19937 deletions

@@ -1,8 +0,0 @@
[run]
branch = True
source = oslo_db
omit = oslo_db/tests/*
[report]
ignore_errors = True
precision = 2

.gitignore

@@ -1,21 +0,0 @@
*~
*.swp
*.pyc
*.log
.coverage
.venv
.tox
cover/
.openstack-common-venv/
skeleton.egg-info/
build/
dist/
AUTHORS
.update-venv/
ChangeLog
*.egg
.testrepository/
.project
.pydevproject
oslo.db.egg-info/
doc/source/reference/api

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/oslo.db.git

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
             ${PYTHON:-python} -m subunit.run discover -t ./ ./oslo_db/tests $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,93 +0,0 @@
=================
How to contribute
=================
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:

   http://docs.openstack.org/infra/manual/developers.html

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:

   http://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/oslo.db
How to run unit tests
=====================

oslo.db (like all OpenStack projects) uses tox to run unit tests. You can find
general information about OpenStack unit tests and testing with tox in wiki_.

oslo.db tests use PyMySQL as the default MySQL DB API driver (as OpenStack
does generally), and psycopg2 for PostgreSQL. pip will build these libraries
in your venv, so you must ensure that you have the required system packages
installed for psycopg2 (PyMySQL is a pure-Python implementation and so needs
no additional system packages). On Ubuntu/Debian these are python-dev and
libpq-dev; on Fedora/CentOS, gcc, python-devel and postgresql-devel.

There is also a separate env for testing with MySQL-python. If you want to
run those tests as well, you need to install libmysqlclient-dev on
Ubuntu/Debian or mysql-devel on Fedora/CentOS.

The oslo.db unit test system can also run the unit tests against real
databases. At the moment it supports MySQL, PostgreSQL and SQLite.

For testing on a real database backend you need to set up a user
``openstack_citest`` with password ``openstack_citest`` on localhost (some
OpenStack projects require a database named 'openstack_citest' too).
Note that this user must have permissions to create and drop databases.
If the testing system is not able to connect to the backend, tests for it
will be skipped.
For PostgreSQL on Ubuntu you can create a user in the following way::

    sudo -u postgres psql
    postgres=# create user openstack_citest with createdb login password
          'openstack_citest';

For MySQL you can use the following commands::

    mysql -u root
    mysql> CREATE USER 'openstack_citest'@'localhost' IDENTIFIED BY
           'openstack_citest';
    mysql> GRANT ALL PRIVILEGES ON * . * TO 'openstack_citest'@'localhost';
    mysql> FLUSH PRIVILEGES;

See the script ``tools/test-setup.sh`` for how the databases are set up
exactly in the OpenStack CI infrastructure, and use that for your own
setup.
Alternatively, you can use `pifpaf`_ to run the unit tests directly without
setting up the database yourself. You still need to have the database software
installed on your system. The following tox environments can be used::

    tox -e py27-mysql
    tox -e py27-postgresql
    tox -e py34-mysql
    tox -e py34-postgresql
    tox -e py27-all
    tox -e py34-all

The database will be set up for you locally and temporarily on each run.

Another way is to start `pifpaf` manually and use it to run the tests as you
wish::

    $ eval `pifpaf -g OS_TEST_DBAPI_ADMIN_CONNECTION run postgresql`
    $ echo $OS_TEST_DBAPI_ADMIN_CONNECTION
    postgresql://localhost/postgres?host=/var/folders/7k/pwdhb_mj2cv4zyr0kyrlzjx40000gq/T/tmpMGqN8C&port=9824
    $ tox -e py27
    […]
    $ tox -e py34
    […]
    # Kill pifpaf once you're done
    $ kill $PIFPAF_PID
.. _wiki: https://wiki.openstack.org/wiki/Testing#Unit_Tests
.. _pifpaf: https://github.com/jd/pifpaf

@@ -1,4 +0,0 @@
Style Commandments
==================

Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/

LICENSE

@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README

@@ -0,0 +1,14 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,29 +0,0 @@
========================
Team and repository tags
========================
.. image:: http://governance.openstack.org/badges/oslo.db.svg
    :target: http://governance.openstack.org/reference/tags/index.html

.. Change things from this point on

===============================================
oslo.db -- OpenStack Database Pattern Library
===============================================

.. image:: https://img.shields.io/pypi/v/oslo.db.svg
    :target: https://pypi.python.org/pypi/oslo.db/
    :alt: Latest Version

.. image:: https://img.shields.io/pypi/dm/oslo.db.svg
    :target: https://pypi.python.org/pypi/oslo.db/
    :alt: Downloads

The oslo.db (database) handling library provides database
connectivity to different database backends and various other helper
utils.

* Free software: Apache license
* Documentation: https://docs.openstack.org/oslo.db/latest
* Source: https://git.openstack.org/cgit/openstack/oslo.db
* Bugs: https://bugs.launchpad.net/oslo.db

@@ -1 +0,0 @@
[python: **.py]

@@ -1,93 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    #'sphinx.ext.intersphinx',
    'oslo_config.sphinxext',
    'openstackdocstheme',
    'stevedore.sphinxext'
]
# openstackdocstheme options
repository_name = 'openstack/oslo.db'
bug_project = 'oslo.db'
bug_tag = ''
# Must set this variable to include year, month, day, hours, and minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# A list of glob-style patterns that should be excluded when looking for source
# files.
exclude_patterns = [
    'api/setup.rst',  # workaround for https://launchpad.net/bugs/1260495
    'api/tests.*',    # avoid generating docs from the tests
]
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'oslo.db'
copyright = u'2014, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['oslo_db.']
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
html_theme = 'openstackdocs'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1 +0,0 @@
.. include:: ../../../CONTRIBUTING.rst

@@ -1,22 +0,0 @@
===============================================
oslo.db -- OpenStack Database Pattern Library
===============================================
The oslo.db (database) handling library provides database
connectivity to different database backends and various other helper
utils.

.. toctree::
   :maxdepth: 2

   install/index
   contributor/index
   user/index
   reference/index

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,49 +0,0 @@
============
Installation
============
At the command line::

    $ pip install oslo.db

You will also need to install at least one SQL backend::

    $ pip install psycopg2

Or::

    $ pip install PyMySQL

Or::

    $ pip install pysqlite
Using with PostgreSQL
---------------------

If you are using PostgreSQL, make sure to install the PostgreSQL client
development package for your distro. On Ubuntu this is done as follows::

    $ sudo apt-get install libpq-dev
    $ pip install psycopg2

The installation of psycopg2 will fail if libpq-dev is not installed first.
Note that even in a virtual environment, libpq-dev will be installed
system wide.
Using with MySQL-python
-----------------------

PyMySQL is the default MySQL DB API driver for oslo.db, as well as for
OpenStack as a whole, but you can still use MySQL-python as an alternative
DB API driver. For MySQL-python you must install the MySQL client
development package for your distro. On Ubuntu this is done as follows::

    $ sudo apt-get install libmysqlclient-dev
    $ pip install MySQL-python

The installation of MySQL-python will fail if libmysqlclient-dev is not
installed first. Note that even in a virtual environment, the MySQL package
will be installed system wide.
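
As a quick sanity check after installation, you can create an engine against
an in-memory SQLite database. This is an illustrative sketch only (it assumes
the ``oslo_db.sqlalchemy.engines`` helper module and the pre-2.0 SQLAlchemy
implicit-execution API of this era):

.. code:: python

    from oslo_db.sqlalchemy import engines

    # Create an engine against an in-memory SQLite database and run a
    # trivial query to confirm the stack imports and works.
    engine = engines.create_engine('sqlite://')
    print(engine.execute('SELECT 1').scalar())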

@@ -1,18 +0,0 @@
.. _using:

=========
Reference
=========

.. toctree::
   :maxdepth: 2

   opts

API
===

.. toctree::
   :maxdepth: 1

   api/autoindex

@@ -1,9 +0,0 @@
=====================
Configuration Options
=====================

oslo.db uses oslo.config to define and manage configuration
options to allow the deployer to control how an application uses the
underlying database.

.. show-options:: oslo.db
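
For illustration (this sketch is not part of the generated options listing),
an application can override the library defaults before its configuration is
parsed, using the ``set_defaults`` helper from ``oslo_db.options``; the
connection URL below is only an example value:

.. code:: python

    from oslo_config import cfg

    from oslo_db import options

    # Override selected oslo.db defaults on this config object before
    # any database use begins.
    options.set_defaults(
        cfg.CONF,
        connection='sqlite:///:memory:',
        max_pool_size=10)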

@@ -1 +0,0 @@
.. include:: ../../../ChangeLog

@@ -1,9 +0,0 @@
==============
Using oslo.db
==============

.. toctree::
   :maxdepth: 2

   usage
   history

@@ -1,184 +0,0 @@
=======
Usage
=======
To use oslo.db in a project:

Session Handling
================

Session handling is achieved using the :mod:`oslo_db.sqlalchemy.enginefacade`
system. This module presents a function decorator as well as a
context manager approach to delivering :class:`.session.Session` as well as
:class:`.Connection` objects to a function or block.

Both calling styles require the use of a context object. This object may
be of any class, though when used with the decorator form, it requires
special instrumentation.

The context manager form is as follows:
.. code:: python

    from oslo_db.sqlalchemy import enginefacade


    class MyContext(object):
        "User-defined context class."


    def some_reader_api_function(context):
        with enginefacade.reader.using(context) as session:
            return session.query(SomeClass).all()


    def some_writer_api_function(context, x, y):
        with enginefacade.writer.using(context) as session:
            session.add(SomeClass(x, y))


    def run_some_database_calls():
        context = MyContext()

        results = some_reader_api_function(context)
        some_writer_api_function(context, 5, 10)
The decorator form accesses attributes off the user-defined context
directly; the context must be decorated with the
:func:`oslo_db.sqlalchemy.enginefacade.transaction_context_provider`
decorator. Each function must receive the context argument:
.. code:: python

    from oslo_db.sqlalchemy import enginefacade


    @enginefacade.transaction_context_provider
    class MyContext(object):
        "User-defined context class."


    @enginefacade.reader
    def some_reader_api_function(context):
        return context.session.query(SomeClass).all()


    @enginefacade.writer
    def some_writer_api_function(context, x, y):
        context.session.add(SomeClass(x, y))


    def run_some_database_calls():
        context = MyContext()

        results = some_reader_api_function(context)
        some_writer_api_function(context, 5, 10)
The ``connection`` modifier can be used when a :class:`.session.Session`
object is not needed, e.g. when `SQLAlchemy Core
<http://docs.sqlalchemy.org/en/latest/core/>`_ is preferred:
.. code:: python

    # Assumes ``import sqlalchemy as sa`` and a Table object named
    # ``table`` in scope.
    @enginefacade.reader.connection
    def _refresh_from_db(context, cache):
        sel = sa.select([table.c.id, table.c.name])
        res = context.connection.execute(sel).fetchall()
        cache.id_cache = {r[1]: r[0] for r in res}
        cache.str_cache = {r[0]: r[1] for r in res}
.. note::  The ``context.session`` and ``context.connection`` attributes
   must be accessed within the scope of an appropriate writer/reader block
   (either the decorator or contextmanager approach). An AttributeError is
   raised otherwise.
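
For example, here is a minimal sketch of that failure mode, reusing the
``MyContext`` class from above (the exception raised is oslo.db's
``NoEngineContextEstablished``, an ``AttributeError`` subclass):

.. code:: python

    context = MyContext()

    # Outside of any reader/writer block this raises an AttributeError
    # (oslo_db.exception.NoEngineContextEstablished).
    try:
        context.session
    except AttributeError:
        pass

    # Normal missing-attribute behaviors are preserved, so a default
    # can still be supplied:
    session = getattr(context, 'session', None)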
The decorator form can also be used with class and instance methods which
implicitly receive the first positional argument:

.. code:: python

    class DatabaseAccessLayer(object):

        @classmethod
        @enginefacade.reader
        def some_reader_api_function(cls, context):
            return context.session.query(SomeClass).all()

        @enginefacade.writer
        def some_writer_api_function(self, context, x, y):
            context.session.add(SomeClass(x, y))

.. note:: Note that enginefacade decorators must be applied **before**
   `classmethod`, otherwise you will get a ``TypeError`` at import time
   (as enginefacade will try to use ``inspect.getargspec()`` on a descriptor,
   not on a bound method, please refer to the `Data Model
   <https://docs.python.org/3/reference/datamodel.html#data-model>`_ section
   of the Python Language Reference for details).
The scope of transaction and connectivity for both approaches is managed
transparently. The configuration for the connection comes from the standard
:obj:`oslo_config.cfg.CONF` collection. Additional configurations can be
established for the enginefacade using the
:func:`oslo_db.sqlalchemy.enginefacade.configure` function, before any use of
the database begins:

.. code:: python

    from oslo_db.sqlalchemy import enginefacade

    enginefacade.configure(
        sqlite_fk=True,
        max_retries=5,
        mysql_sql_mode='ANSI'
    )
Base class for models usage
===========================

.. code:: python

    from oslo_db.sqlalchemy import models


    class ProjectSomething(models.TimestampMixin,
                           models.ModelBase):
        id = Column(Integer, primary_key=True)
        ...
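
``models.TimestampMixin`` maintains ``created_at`` and ``updated_at``
columns on the model. As an illustrative sketch (it assumes a configured
``session``; ``Column`` and ``Integer`` above come from SQLAlchemy):

.. code:: python

    thing = ProjectSomething(id=1)
    session.add(thing)
    session.flush()

    # created_at is populated on insert; updated_at is populated when
    # the row is subsequently updated.
    print(thing.created_at)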
DB API backend support
======================

.. code:: python

    from oslo_config import cfg

    from oslo_db import api as db_api


    _BACKEND_MAPPING = {'sqlalchemy': 'project.db.sqlalchemy.api'}

    IMPL = db_api.DBAPI.from_config(cfg.CONF,
                                    backend_mapping=_BACKEND_MAPPING)


    def get_engine():
        return IMPL.get_engine()


    def get_session():
        return IMPL.get_session()


    # DB-API method
    def do_something(something_id):
        return IMPL.do_something(something_id)
DB migration extensions
=======================

Available extensions for `oslo_db.migration`.

.. list-plugins:: oslo_db.sqlalchemy.migration
   :detailed:

@@ -1,25 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""oslo.i18n integration module.
See http://docs.openstack.org/developer/oslo.i18n/usage.html .
"""
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='oslo_db')
# The primary translation function using the well-known name "_"
_ = _translators.primary

@@ -1,289 +0,0 @@
# Copyright (c) 2013 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
=================================
Multiple DB API backend support.
=================================
A DB backend module should implement a method named 'get_backend' which
takes no arguments. The method can return any object that implements DB
API methods.
"""
import logging
import threading
import time
from debtcollector import removals
from oslo_utils import excutils
from oslo_utils import importutils
from oslo_utils import reflection
import six
from oslo_db import exception
from oslo_db import options
LOG = logging.getLogger(__name__)
def safe_for_db_retry(f):
    """Indicate api method as safe for re-connection to database.

    Database connection retries will be enabled for the decorated api method.
    Database connection failure can have many causes, which can be temporary.
    In such cases retry may increase the likelihood of connection.

    Usage::

        @safe_for_db_retry
        def api_method(self):
            self.engine.connect()

    :param f: database api method.
    :type f: function.
    """
    f.__dict__['enable_retry_on_disconnect'] = True
    return f
def retry_on_deadlock(f):
    """Retry a DB API call if Deadlock was received.

    wrap_db_entry will be applied to all db.api functions marked with this
    decorator.
    """
    f.__dict__['enable_retry_on_deadlock'] = True
    return f


def retry_on_request(f):
    """Retry a DB API call if RetryRequest exception was received.

    wrap_db_entry will be applied to all db.api functions marked with this
    decorator.
    """
    f.__dict__['enable_retry_on_request'] = True
    return f
class wrap_db_retry(object):
    """Retry db.api methods, if db_error raised.

    Retry decorated db.api methods. This decorator catches db_error and
    retries the function in a loop until it succeeds, or until the maximum
    retry count is reached.

    Keyword arguments:

    :param retry_interval: seconds between transaction retries
    :type retry_interval: int or float

    :param max_retries: max number of retries before an error is raised
    :type max_retries: int

    :param inc_retry_interval: determine increase retry interval or not
    :type inc_retry_interval: bool

    :param max_retry_interval: max interval value between retries
    :type max_retry_interval: int or float

    :param exception_checker: checks if an exception should trigger a retry
    :type exception_checker: callable
    """

    @removals.removed_kwarg("retry_on_request",
                            "Retry on request is always enabled")
    def __init__(self, retry_interval=1, max_retries=20,
                 inc_retry_interval=True,
                 max_retry_interval=10, retry_on_disconnect=False,
                 retry_on_deadlock=False, retry_on_request=False,
                 exception_checker=lambda exc: False):
        super(wrap_db_retry, self).__init__()

        self.db_error = (exception.RetryRequest, )
        # default is that we re-raise anything unexpected
        self.exception_checker = exception_checker
        if retry_on_disconnect:
            self.db_error += (exception.DBConnectionError, )
        if retry_on_deadlock:
            self.db_error += (exception.DBDeadlock, )
        self.retry_interval = retry_interval
        self.max_retries = max_retries
        self.inc_retry_interval = inc_retry_interval
        self.max_retry_interval = max_retry_interval

    def __call__(self, f):
        @six.wraps(f)
        def wrapper(*args, **kwargs):
            next_interval = self.retry_interval
            remaining = self.max_retries

            while True:
                try:
                    return f(*args, **kwargs)
                except Exception as e:
                    with excutils.save_and_reraise_exception() as ectxt:
                        expected = self._is_exception_expected(e)
                        if remaining > 0:
                            ectxt.reraise = not expected
                        else:
                            if expected:
                                LOG.exception('DB exceeded retry limit.')
                            # if it's a RetryRequest, we need to unpack it
                            if isinstance(e, exception.RetryRequest):
                                ectxt.type_ = type(e.inner_exc)
                                ectxt.value = e.inner_exc

                    LOG.debug("Performing DB retry for function %s",
                              reflection.get_callable_name(f))

                    # NOTE(vsergeyev): We are using patched time module, so
                    #                  this effectively yields the execution
                    #                  context to another green thread.
                    time.sleep(next_interval)
                    if self.inc_retry_interval:
                        next_interval = min(
                            next_interval * 2,
                            self.max_retry_interval
                        )
                    remaining -= 1

        return wrapper

    def _is_exception_expected(self, exc):
        if isinstance(exc, self.db_error):
            # RetryRequest is an application-initiated exception
            # and not an error condition in case retries are
            # not exceeded
            if not isinstance(exc, exception.RetryRequest):
                LOG.debug('DB error: %s', exc)
            return True
        return self.exception_checker(exc)
class DBAPI(object):
    """Initialize the chosen DB API backend.

    After initialization, API methods are available as normal attributes of
    the ``DBAPI`` subclass. Database API methods are supposed to be called as
    DBAPI instance methods.

    :param backend_name: name of the backend to load
    :type backend_name: str

    :param backend_mapping: backend name -> module/class to load mapping
    :type backend_mapping: dict
    :default backend_mapping: None

    :param lazy: load the DB backend lazily on the first DB API method call
    :type lazy: bool
    :default lazy: False

    :keyword use_db_reconnect: retry DB transactions on disconnect or not
    :type use_db_reconnect: bool

    :keyword retry_interval: seconds between transaction retries
    :type retry_interval: int

    :keyword inc_retry_interval: increase retry interval or not
    :type inc_retry_interval: bool

    :keyword max_retry_interval: max interval value between retries
    :type max_retry_interval: int

    :keyword max_retries: max number of retries before an error is raised
    :type max_retries: int
    """

    def __init__(self, backend_name, backend_mapping=None, lazy=False,
                 **kwargs):

        self._backend = None
        self._backend_name = backend_name
        self._backend_mapping = backend_mapping or {}
        self._lock = threading.Lock()

        if not lazy:
            self._load_backend()

        self.use_db_reconnect = kwargs.get('use_db_reconnect', False)
        self._wrap_db_kwargs = {k: v for k, v in kwargs.items()
                                if k in ('retry_interval',
                                         'inc_retry_interval',
                                         'max_retry_interval',
                                         'max_retries')}

    def _load_backend(self):
        with self._lock:
            if not self._backend:
                # Import the untranslated name if we don't have a mapping
                backend_path = self._backend_mapping.get(self._backend_name,
                                                         self._backend_name)
                LOG.debug('Loading backend %(name)r from %(path)r',
                          {'name': self._backend_name,
                           'path': backend_path})
                backend_mod = importutils.import_module(backend_path)
                self._backend = backend_mod.get_backend()

    def __getattr__(self, key):
        if not self._backend:
            self._load_backend()

        attr = getattr(self._backend, key)
        if not hasattr(attr, '__call__'):
            return attr

        # NOTE(vsergeyev): If `use_db_reconnect` option is set to True, retry
        #                  DB API methods, decorated with @safe_for_db_retry
        #                  on disconnect.
        retry_on_disconnect = self.use_db_reconnect and attr.__dict__.get(
            'enable_retry_on_disconnect', False)
        retry_on_deadlock = attr.__dict__.get('enable_retry_on_deadlock',
                                              False)
        retry_on_request = attr.__dict__.get('enable_retry_on_request', False)

        if retry_on_disconnect or retry_on_deadlock or retry_on_request:
            attr = wrap_db_retry(
                retry_on_disconnect=retry_on_disconnect,
                retry_on_deadlock=retry_on_deadlock,
                **self._wrap_db_kwargs)(attr)

        return attr

    @classmethod
    def from_config(cls, conf, backend_mapping=None, lazy=False):
        """Initialize DBAPI instance given a config instance.

        :param conf: oslo.config config instance
        :type conf: oslo.config.cfg.ConfigOpts

        :param backend_mapping: backend name -> module/class to load mapping
        :type backend_mapping: dict

        :param lazy: load the DB backend lazily on the first DB API method
            call
        :type lazy: bool
        """
        conf.register_opts(options.database_opts, 'database')

        return cls(backend_name=conf.database.backend,
                   backend_mapping=backend_mapping,
                   lazy=lazy,
                   use_db_reconnect=conf.database.use_db_reconnect,
                   retry_interval=conf.database.db_retry_interval,
                   inc_retry_interval=conf.database.db_inc_retry_interval,
                   max_retry_interval=conf.database.db_max_retry_interval,
                   max_retries=conf.database.db_max_retries)

@@ -1,80 +0,0 @@
# Copyright 2014 Mirantis.inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import logging
import threading
from oslo_config import cfg
from oslo_db import api
LOG = logging.getLogger(__name__)
tpool_opts = [
    cfg.BoolOpt('use_tpool',
                default=False,
                deprecated_name='dbapi_use_tpool',
                deprecated_group='DEFAULT',
                help='Enable the experimental use of thread pooling for '
                     'all DB API calls'),
]


class TpoolDbapiWrapper(object):
    """DB API wrapper class.

    This wraps the oslo DB API with an option to be able to use eventlet's
    thread pooling. Since the CONF variable may not be loaded at the time
    this class is instantiated, we must look at it on the first DB API call.
    """

    def __init__(self, conf, backend_mapping):
        self._db_api = None
        self._backend_mapping = backend_mapping
        self._conf = conf
        self._conf.register_opts(tpool_opts, 'database')
        self._lock = threading.Lock()

    @property
    def _api(self):
        if not self._db_api:
            with self._lock:
                if not self._db_api:
                    db_api = api.DBAPI.from_config(
                        conf=self._conf,
                        backend_mapping=self._backend_mapping)
                    if self._conf.database.use_tpool:
                        try:
                            from eventlet import tpool
                        except ImportError:
                            LOG.exception("'eventlet' is required for "
                                          "TpoolDbapiWrapper.")
                            raise
                        self._db_api = tpool.Proxy(db_api)
                    else:
                        self._db_api = db_api
        return self._db_api

    def __getattr__(self, key):
        return getattr(self._api, key)


def list_opts():
    """Returns a list of oslo.config options available in this module.

    :returns: a list of (group_name, opts) tuples
    """
    return [('database', copy.deepcopy(tpool_opts))]
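

# Illustrative usage sketch (module and mapping names are hypothetical):
#
#     from oslo_config import cfg
#     from oslo_db import concurrency
#
#     _BACKEND_MAPPING = {'sqlalchemy': 'myproject.db.sqlalchemy.api'}
#     IMPL = concurrency.TpoolDbapiWrapper(cfg.CONF, _BACKEND_MAPPING)
#
# IMPL then proxies attribute access to the loaded backend, optionally
# through eventlet's thread pool when [database] use_tpool is True.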

@@ -1,344 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""DB related custom exceptions.
Custom exceptions intended to determine the causes of specific database
errors. This module provides more generic exceptions than the database-specific
driver libraries, and so users of oslo.db can catch these no matter which
database the application is using. Most of the exceptions are wrappers. Wrapper
exceptions take an original exception as positional argument and keep it for
purposes of deeper debug.
Example::
try:
statement(arg)
except sqlalchemy.exc.OperationalError as e:
raise DBDuplicateEntry(e)
This is useful to determine more specific error cases further at execution,
when you need to add some extra information to an error message. Wrapper
exceptions takes care about original error message displaying to not to loose
low level cause of an error. All the database api exceptions wrapped into
the specific exceptions provided belove.
Please use only database related custom exceptions with database manipulations
with `try/except` statement. This is required for consistent handling of
database errors.
"""
import debtcollector.removals
import six

from oslo_db._i18n import _
from oslo_utils.excutils import CausedByException


class DBError(CausedByException):

    """Base exception for all custom database exceptions.

    :kwarg inner_exception: an original exception which was wrapped with
        DBError or its subclasses.
    """

    def __init__(self, inner_exception=None, cause=None):
        self.inner_exception = inner_exception
        super(DBError, self).__init__(six.text_type(inner_exception), cause)
class DBDuplicateEntry(DBError):
    """Duplicate entry at unique column error.

    Raised when an attempt is made to write to a unique column the same
    entry as an existing one. The :attr:`columns` attribute is available on
    an instance of the exception and can be used for error handling::

       try:
           instance_type_ref.save()
       except DBDuplicateEntry as e:
           if 'colname' in e.columns:
               # Handle error.

    :kwarg columns: a list of unique columns for which a duplicate entry
        write was attempted.
    :type columns: list
    :kwarg value: the value which was attempted to be written. The value will
        be None if we can't extract it for a particular database backend. Only
        MySQL and PostgreSQL 9.x are supported right now.
    """

    def __init__(self, columns=None, inner_exception=None, value=None):
        self.columns = columns or []
        self.value = value
        super(DBDuplicateEntry, self).__init__(inner_exception)
class DBConstraintError(DBError):
    """Check constraint fails for column error.

    Raised when an attempt is made to write to a column a value that does
    not satisfy a CHECK constraint.

    :kwarg table: the table name for which the check fails
    :type table: str
    :kwarg check_name: the name of the check that failed to be satisfied
    :type check_name: str
    """

    def __init__(self, table, check_name, inner_exception=None):
        self.table = table
        self.check_name = check_name
        super(DBConstraintError, self).__init__(inner_exception)


class DBReferenceError(DBError):
    """Foreign key violation error.

    :param table: a table name in which the reference is directed.
    :type table: str
    :param constraint: a problematic constraint name.
    :type constraint: str
    :param key: a broken reference key name.
    :type key: str
    :param key_table: a table name which contains the key.
    :type key_table: str
    """

    def __init__(self, table, constraint, key, key_table,
                 inner_exception=None):
        self.table = table
        self.constraint = constraint
        self.key = key
        self.key_table = key_table
        super(DBReferenceError, self).__init__(inner_exception)


class DBNonExistentConstraint(DBError):
    """Constraint does not exist.

    :param table: table name
    :type table: str
    :param constraint: constraint name
    :type constraint: str
    """

    def __init__(self, table, constraint, inner_exception=None):
        self.table = table
        self.constraint = constraint
        super(DBNonExistentConstraint, self).__init__(inner_exception)


class DBNonExistentTable(DBError):
    """Table does not exist.

    :param table: table name
    :type table: str
    """

    def __init__(self, table, inner_exception=None):
        self.table = table
        super(DBNonExistentTable, self).__init__(inner_exception)


class DBNonExistentDatabase(DBError):
    """Database does not exist.

    :param database: database name
    :type database: str
    """

    def __init__(self, database, inner_exception=None):
        self.database = database
        super(DBNonExistentDatabase, self).__init__(inner_exception)


class DBDeadlock(DBError):

    """Database dead lock error.

    Deadlock is a situation that occurs when two or more different database
    sessions have some data locked, and each database session requests a lock
    on the data that another, different, session has already locked.
    """

    def __init__(self, inner_exception=None):
        super(DBDeadlock, self).__init__(inner_exception)
class DBInvalidUnicodeParameter(Exception):

    """Database unicode error.

    Raised when a unicode parameter is passed to a database
    without an encoding directive.
    """

    @debtcollector.removals.removed_property
    def message(self):
        # NOTE(rpodolyaka): provided for compatibility with python 3k, where
        # exceptions do not have .message attribute, while we used to have one
        # in this particular exception class. See LP #1542961 for details.
        return str(self)

    def __init__(self):
        super(DBInvalidUnicodeParameter, self).__init__(
            _("Invalid Parameter: Encoding directive wasn't provided."))


class DbMigrationError(DBError):

    """Wrapped migration specific exception.

    Raised when migrations couldn't be completed successfully.
    """

    def __init__(self, message=None):
        super(DbMigrationError, self).__init__(message)


class DBMigrationError(DbMigrationError):

    """Wrapped migration specific exception.

    Raised when migrations couldn't be completed successfully.
    """

    def __init__(self, message):
        super(DBMigrationError, self).__init__(message)


debtcollector.removals.removed_class(DbMigrationError,
                                     replacement=DBMigrationError)


class DBConnectionError(DBError):

    """Wrapped connection specific exception.

    Raised when a database connection fails.
    """

    pass


class DBDataError(DBError):
    """Raised for errors that are due to problems with the processed data.

    E.g. division by zero, numeric value out of range, incorrect data type,
    etc.
    """


class DBNotSupportedError(DBError):
    """Raised when a database backend has raised sqla.exc.NotSupportedError"""


class InvalidSortKey(Exception):
    """A sort key destined for database query usage is invalid."""

    @debtcollector.removals.removed_property
    def message(self):
        # NOTE(rpodolyaka): provided for compatibility with python 3k, where
        # exceptions do not have .message attribute, while we used to have one
        # in this particular exception class. See LP #1542961 for details.
        return str(self)

    def __init__(self, key=None):
        super(InvalidSortKey, self).__init__(
            _("Sort key supplied is invalid: %s") % key)
        self.key = key


class ColumnError(Exception):
    """Error raised when no column or an invalid column is found."""


class BackendNotAvailable(Exception):
    """Error raised when a particular database backend is not available
    within a test suite.
    """


class RetryRequest(Exception):
    """Error raised when a DB operation needs to be retried.

    This could be intentionally raised by the code without any real DB errors.
    """

    def __init__(self, inner_exc):
        self.inner_exc = inner_exc


class NoEngineContextEstablished(AttributeError):
    """Error raised for enginefacade attribute access with no context.

    This applies to the ``session`` and ``connection`` attributes
    of a user-defined context and/or RequestContext object, when they
    are accessed *outside* of the scope of an enginefacade decorator
    or context manager.

    The exception is a subclass of AttributeError so that
    normal Python missing attribute behaviors are maintained, such
    as support for ``getattr(context, 'session', None)``.
    """


class ContextNotRequestedError(AttributeError):
    """Error raised when requesting a not-setup enginefacade attribute.

    This applies to the ``session`` and ``connection`` attributes
    of a user-defined context and/or RequestContext object, when they
    are accessed *within* the scope of an enginefacade decorator
    or context manager, but the context has not requested that
    attribute (e.g. like "with enginefacade.connection.using(context)"
    and "context.session" is requested).
    """


class CantStartEngineError(Exception):
    """Error raised when the enginefacade cannot start up correctly."""


class NotSupportedWarning(Warning):
    """Warn that an argument or call that was passed is not supported.

    This subclasses Warning so that it can be filtered as a distinct
    category.

    .. seealso::

        https://docs.python.org/2/library/warnings.html
    """


class OsloDBDeprecationWarning(DeprecationWarning):
    """Issued per usage of a deprecated API.

    This subclasses DeprecationWarning so that it can be filtered as a
    distinct category.

    .. seealso::

        https://docs.python.org/2/library/warnings.html
    """

@@ -1,85 +0,0 @@
# Translations template for oslo.db.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.db project.
#
# Translators:
# Andi Chandler <andi@gowling.com>, 2014-2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.db 4.6.1.dev46\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-06-15 11:18+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-20 06:31+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language: en-GB\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: English (United Kingdom)\n"
msgid "Invalid Parameter: Encoding directive wasn't provided."
msgstr "Invalid Parameter: Encoding directive wasn't provided."
#, python-format
msgid ""
"Please specify column %s in col_name_col_instance param. It is required "
"because column has unsupported type by SQLite."
msgstr ""
"Please specify column %s in col_name_col_instance param. It is required "
"because column has unsupported type by SQLite."
#, python-format
msgid "Sort key supplied is invalid: %s"
msgstr "Sort key supplied is invalid: %s"
#, python-format
msgid ""
"Tables \"%s\" have non utf8 collation, please make sure all tables are "
"CHARSET=utf8"
msgstr ""
"Tables \"%s\" have non utf8 collation, please make sure all tables are "
"CHARSET=utf8"
msgid ""
"The database is not under version control, but has tables. Please stamp the "
"current version of the schema manually."
msgstr ""
"The database is not under version control, but has tables. Please stamp the "
"current version of the schema manually."
#, python-format
msgid ""
"There is no `deleted` column in `%s` table. Project doesn't use soft-deleted "
"feature."
msgstr ""
"There is no `deleted` column in `%s` table. Project doesn't use soft-deleted "
"feature."
#, python-format
msgid "There is no `project_id` column in `%s` table."
msgstr "There is no `project_id` column in `%s` table."
#, python-format
msgid "Unknown sort direction, must be one of: %s"
msgstr "Unknown sort direction, must be one of: %s"
msgid "Unsupported id columns type"
msgstr "Unsupported id columns type"
#, python-format
msgid ""
"col_name_col_instance param has wrong type of column instance for column %s "
"It should be instance of sqlalchemy.Column."
msgstr ""
"col_name_col_instance param has wrong type of column instance for column %s "
"It should be instance of sqlalchemy.Column."
msgid "model should be a subclass of ModelBase"
msgstr "model should be a subclass of ModelBase"
msgid "version should be an integer"
msgstr "version should be an integer"

@@ -1,82 +0,0 @@
# Translations template for oslo.db.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.db project.
#
# Translators:
# Adriana Chisco Landazábal <achisco94@gmail.com>, 2015
# Miriam Godinez <miriamgc@hotmail.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.db 4.6.1.dev19\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 04:28+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-09-07 10:45+0000\n"
"Last-Translator: Miriam Godinez <miriamgc@hotmail.com>\n"
"Language: es\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Spanish\n"
msgid "Invalid Parameter: Encoding directive wasn't provided."
msgstr "Parámetro no válido: No se proporcionó directiva de codificación."
#, python-format
msgid ""
"Please specify column %s in col_name_col_instance param. It is required "
"because column has unsupported type by SQLite."
msgstr ""
"Por favor especifique la columna %s en el parámetro col_name_col_instance. "
"Es necesario porque la columna tiene un tipo no soportado por SQLite."
#, python-format
msgid ""
"Tables \"%s\" have non utf8 collation, please make sure all tables are "
"CHARSET=utf8"
msgstr ""
"Las tablas \"%s\" no tienen una colación utf8, por favor asegúrese de que "
"todas las tablas sean CHARSET=utf8"
msgid ""
"The database is not under version control, but has tables. Please stamp the "
"current version of the schema manually."
msgstr ""
"La base de datos no está bajo el control de la versión, pero tiene tablas. "
"Por favor indique manualmente la versión actual del esquema."
#, python-format
msgid ""
"There is no `deleted` column in `%s` table. Project doesn't use soft-deleted "
"feature."
msgstr ""
"No existe la columna `deleted` en la tabla `%s`. El projecto no utiliza la "
"característica de eliminación suave."
#, python-format
msgid "There is no `project_id` column in `%s` table."
msgstr "No existe la columna `project_id` en la tabla `%s`."
#, python-format
msgid "Unknown sort direction, must be one of: %s"
msgstr "Clase de dirección desconocida, debe ser una de: %s"
msgid "Unsupported id columns type"
msgstr "Tipo de identificador de columnas no soportado"
#, python-format
msgid ""
"col_name_col_instance param has wrong type of column instance for column %s "
"It should be instance of sqlalchemy.Column."
msgstr ""
"El parámetro col_name_col_instance contiene el tipo incorrecto de instancia "
"de columna para la columna %s. Debe ser una instancia de sqlalchemy.Column."
msgid "model should be a subclass of ModelBase"
msgstr "el modelo debe ser una subclase del ModelBase"
msgid "version should be an integer"
msgstr "la versión debe ser un entero"

@@ -1,83 +0,0 @@
# Translations template for oslo.db.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.db project.
#
# Translators:
# Lucas Mascaro <mascaro.lucas@yahoo.fr>, 2015
# Maxime COQUEREL <max.coquerel@gmail.com>, 2014-2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.db 4.6.1.dev19\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 04:28+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-08-07 04:24+0000\n"
"Last-Translator: Lucas Mascaro <mascaro.lucas@yahoo.fr>\n"
"Language: fr\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: French\n"
msgid "Invalid Parameter: Encoding directive wasn't provided."
msgstr "Paramètre non valide : La directive encodage n'a pas été fourni."
#, python-format
msgid ""
"Please specify column %s in col_name_col_instance param. It is required "
"because column has unsupported type by SQLite."
msgstr ""
"Spécifiez la colonne %s dans le paramètre col_name_col_instance. Ceci est "
"obligatoire car la colonne a un type non pris en charge dans SQLite."
#, python-format
msgid ""
"Tables \"%s\" have non utf8 collation, please make sure all tables are "
"CHARSET=utf8"
msgstr ""
"Les tables \"%s\" ont une collation non utf8, assurez-vous que pour toutes "
"les tables CHARSET=utf8."
msgid ""
"The database is not under version control, but has tables. Please stamp the "
"current version of the schema manually."
msgstr ""
"La base de données n'est pas versionnée, mais contient des tables. Veuillez "
"indiquer manuellement la version courante du schéma."
#, python-format
msgid ""
"There is no `deleted` column in `%s` table. Project doesn't use soft-deleted "
"feature."
msgstr ""
"Il n'y a aucune colonne `deleted` dans la table `%s`. Le projet ne peut pas "
"utiliser cette fonctionnalité."
#, python-format
msgid "There is no `project_id` column in `%s` table."
msgstr "Il n'y a pas de colonne `project_id` dans la table `%s`."
#, python-format
msgid "Unknown sort direction, must be one of: %s"
msgstr "Ordre de tris inconnu, il doit être un de: %s"
msgid "Unsupported id columns type"
msgstr "Type de colonnes id non pris en charge"
#, python-format
msgid ""
"col_name_col_instance param has wrong type of column instance for column %s "
"It should be instance of sqlalchemy.Column."
msgstr ""
"Le paramètre col_name_col_instance contient un type d'instance de colonne "
"incorrect pour la colonne %s. Il devrait être une instance de sqlalchemy."
"Column."
msgid "model should be a subclass of ModelBase"
msgstr "model doit etre une sous-classe de ModelBase"
msgid "version should be an integer"
msgstr "version doit être un entier"

@@ -1,216 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
database_opts = [
    cfg.BoolOpt('sqlite_synchronous',
                deprecated_group='DEFAULT',
                default=True,
                help='If True, SQLite uses synchronous mode.'),
    cfg.StrOpt('backend',
               default='sqlalchemy',
               deprecated_name='db_backend',
               deprecated_group='DEFAULT',
               help='The back end to use for the database.'),
    cfg.StrOpt('connection',
               help='The SQLAlchemy connection string to use to connect to '
                    'the database.',
               secret=True,
               deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
                                                  group='DEFAULT'),
                                cfg.DeprecatedOpt('sql_connection',
                                                  group='DATABASE'),
                                cfg.DeprecatedOpt('connection',
                                                  group='sql'), ]),
    cfg.StrOpt('slave_connection',
               secret=True,
               help='The SQLAlchemy connection string to use to connect to '
                    'the slave database.'),
    cfg.StrOpt('mysql_sql_mode',
               default='TRADITIONAL',
               help='The SQL mode to be used for MySQL sessions. '
                    'This option, including the default, overrides any '
                    'server-set SQL mode. To use whatever SQL mode '
                    'is set by the server configuration, '
                    'set this to no value. Example: mysql_sql_mode='),
    cfg.BoolOpt('mysql_enable_ndb',
                default=False,
                help='If True, transparently enables support for handling '
                     'MySQL Cluster (NDB).'),
    cfg.IntOpt('idle_timeout',
               default=3600,
               deprecated_opts=[cfg.DeprecatedOpt('sql_idle_timeout',
                                                  group='DEFAULT'),
                                cfg.DeprecatedOpt('sql_idle_timeout',
                                                  group='DATABASE'),
                                cfg.DeprecatedOpt('idle_timeout',
                                                  group='sql')],
               help='Timeout before idle SQL connections are reaped.'),
    cfg.IntOpt('min_pool_size',
               default=1,
               deprecated_opts=[cfg.DeprecatedOpt('sql_min_pool_size',
                                                  group='DEFAULT'),
                                cfg.DeprecatedOpt('sql_min_pool_size',
                                                  group='DATABASE')],
               help='Minimum number of SQL connections to keep open in a '
                    'pool.'),
    cfg.IntOpt('max_pool_size',
               default=5,
               deprecated_opts=[cfg.DeprecatedOpt('sql_max_pool_size',
                                                  group='DEFAULT'),
                                cfg.DeprecatedOpt('sql_max_pool_size',
                                                  group='DATABASE')],
               help='Maximum number of SQL connections to keep open in a '
                    'pool. Setting a value of 0 indicates no limit.'),
    cfg.IntOpt('max_retries',
               default=10,
               deprecated_opts=[cfg.DeprecatedOpt('sql_max_retries',
                                                  group='DEFAULT'),
                                cfg.DeprecatedOpt('sql_max_retries',
                                                  group='DATABASE')],
               help='Maximum number of database connection retries '
                    'during startup. Set to -1 to specify an infinite '
                    'retry count.'),
    cfg.IntOpt('retry_interval',
               default=10,
               deprecated_opts=[cfg.DeprecatedOpt('sql_retry_interval',
                                                  group='DEFAULT'),
                                cfg.DeprecatedOpt('reconnect_interval',
                                                  group='DATABASE')],
               help='Interval between retries of opening a SQL connection.'),
    cfg.IntOpt('max_overflow',
               default=50,
               deprecated_opts=[cfg.DeprecatedOpt('sql_max_overflow',
                                                  group='DEFAULT'),
                                cfg.DeprecatedOpt('sqlalchemy_max_overflow',
                                                  group='DATABASE')],
               help='If set, use this value for max_overflow with '
                    'SQLAlchemy.'),
    cfg.IntOpt('connection_debug',
               default=0,
               min=0, max=100,
               deprecated_opts=[cfg.DeprecatedOpt('sql_connection_debug',
                                                  group='DEFAULT')],
               help='Verbosity of SQL debugging information: 0=None, '
                    '100=Everything.'),
    cfg.BoolOpt('connection_trace',
                default=False,
                deprecated_opts=[cfg.DeprecatedOpt('sql_connection_trace',
                                                   group='DEFAULT')],
                help='Add Python stack traces to SQL as comment strings.'),
    cfg.IntOpt('pool_timeout',
               deprecated_opts=[cfg.DeprecatedOpt('sqlalchemy_pool_timeout',
                                                  group='DATABASE')],
               help='If set, use this value for pool_timeout with '
                    'SQLAlchemy.'),
    cfg.BoolOpt('use_db_reconnect',
                default=False,
                help='Enable the experimental use of database reconnect '
                     'on connection lost.'),
    cfg.IntOpt('db_retry_interval',
               default=1,
               help='Seconds between retries of a database transaction.'),
    cfg.BoolOpt('db_inc_retry_interval',
                default=True,
                help='If True, increases the interval between retries '
                     'of a database operation up to db_max_retry_interval.'),
    cfg.IntOpt('db_max_retry_interval',
               default=10,
               help='If db_inc_retry_interval is set, the '
                    'maximum seconds between retries of a '
                    'database operation.'),
    cfg.IntOpt('db_max_retries',
               default=20,
               help='Maximum retries in case of connection error or deadlock '
                    'error before error is '
                    'raised. Set to -1 to specify an infinite retry '
                    'count.'),
]
def set_defaults(conf, connection=None, max_pool_size=None,
max_overflow=None, pool_timeout=None):
"""Set defaults for configuration variables.
Overrides default option values.
:param conf: Config instance specified to set default options in it. Using
instances instead of a global config object prevents conflicts between
option declarations.
:type conf: oslo.config.cfg.ConfigOpts instance.
:keyword connection: SQL connection string.
Valid SQLite URL forms are:
* sqlite:///:memory: (or, sqlite://)
* sqlite:///relative/path/to/file.db
* sqlite:////absolute/path/to/file.db
:type connection: str
:keyword max_pool_size: maximum connections pool size. The size of the pool
to be maintained, defaults to 5. This is the largest number of connections
that will be kept persistently in the pool. Note that the pool begins with
no connections; once this number of connections is requested, that number
of connections will remain.
:type max_pool_size: int
:default max_pool_size: 5
:keyword max_overflow: The maximum overflow size of the pool. When the
number of checked-out connections reaches the size set in pool_size,
additional connections will be returned up to this limit. When those
additional connections are returned to the pool, they are disconnected and
discarded. It follows then that the total number of simultaneous
connections the pool will allow is pool_size + max_overflow, and the total
number of "sleeping" connections the pool will allow is pool_size.
max_overflow can be set to -1 to indicate no overflow limit; no limit will
be placed on the total number of concurrent connections. Defaults to 10,
which will be used if the value of the parameter is `None`.
:type max_overflow: int
:default max_overflow: None
:keyword pool_timeout: The number of seconds to wait before giving up on
returning a connection. Defaults to 30, which will be used if the value of
the parameter is `None`.
:type pool_timeout: int
:default pool_timeout: None
"""
conf.register_opts(database_opts, group='database')
if connection is not None:
conf.set_default('connection', connection, group='database')
if max_pool_size is not None:
conf.set_default('max_pool_size', max_pool_size, group='database')
if max_overflow is not None:
conf.set_default('max_overflow', max_overflow, group='database')
if pool_timeout is not None:
conf.set_default('pool_timeout', pool_timeout, group='database')
def list_opts():
"""Returns a list of oslo.config options available in the library.
The returned list includes all oslo.config options which may be registered
at runtime by the library.
Each element of the list is a tuple. The first element is the name of the
group under which the list of elements in the second element will be
registered. A group name of None corresponds to the [DEFAULT] group in
config files.
The purpose of this is to allow tools like the Oslo sample config file
generator to discover the options exposed to users by this library.
:returns: a list of (group_name, opts) tuples
"""
return [('database', database_opts)]
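For context, a minimal sketch of how a consumer would use ``set_defaults()``
and the resulting ``[database]`` group; the connection URL and pool size are
illustrative values, not the library defaults::

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    # set_defaults() registers database_opts under the [database] group
    # and overrides the chosen defaults before any config file is parsed.
    set_defaults(conf, connection='sqlite://', max_pool_size=10)
    conf([])  # parse an empty command line so options become readable
    assert conf.database.connection == 'sqlite://'
    assert conf.database.max_pool_size == 10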

View File

@ -1,45 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
import sqlalchemy
SQLA_VERSION = tuple(
int(num) if re.match(r'^\d+$', num) else num
for num in sqlalchemy.__version__.split(".")
)
sqla_110 = SQLA_VERSION >= (1, 1, 0)
sqla_100 = SQLA_VERSION >= (1, 0, 0)
sqla_097 = SQLA_VERSION >= (0, 9, 7)
sqla_094 = SQLA_VERSION >= (0, 9, 4)
sqla_090 = SQLA_VERSION >= (0, 9, 0)
sqla_08 = SQLA_VERSION >= (0, 8)
def get_postgresql_enums(conn):
"""Return a list of ENUM type names on a Postgresql backend.
For SQLAlchemy 0.9 and lower, makes use of the semi-private
_load_enums() method of the Postgresql dialect. In SQLAlchemy
1.0 this feature is supported using get_enums().
This function may only be called when the given connection
is against the Postgresql backend. It will fail for other
kinds of backends.
"""
if sqla_100:
return [e['name'] for e in sqlalchemy.inspect(conn).get_enums()]
else:
return conn.dialect._load_enums(conn).keys()
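As a quick illustration of the version-tuple comparison above (a sketch,
using a made-up version string)::

    import re

    version = "1.0.12"
    parsed = tuple(
        int(num) if re.match(r'^\d+$', num) else num
        for num in version.split(".")
    )
    # Mirrors how sqla_100 and friends are computed above.
    assert parsed == (1, 0, 12)
    assert parsed >= (1, 0, 0)   # sqla_100 would be True for this version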

File diff suppressed because it is too large

View File

@ -1,452 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Core SQLAlchemy connectivity routines.
"""
import itertools
import logging
import os
import re
import time
import six
import sqlalchemy
from sqlalchemy import event
from sqlalchemy import exc
from sqlalchemy import pool
from sqlalchemy.sql.expression import select
from oslo_db import exception
from oslo_db.sqlalchemy import exc_filters
from oslo_db.sqlalchemy import ndb
from oslo_db.sqlalchemy import utils
LOG = logging.getLogger(__name__)
def _thread_yield(dbapi_con, con_record):
"""Ensure other greenthreads get a chance to be executed.
If we use eventlet.monkey_patch(), eventlet.greenthread.sleep(0) will
execute instead of time.sleep(0).
Force a context switch. With common database backends (eg MySQLdb and
sqlite), there is no implicit yield caused by network I/O since they are
implemented by C libraries that eventlet cannot monkey patch.
"""
time.sleep(0)
def _connect_ping_listener(connection, branch):
"""Ping the server at connection startup.
Ping the server at transaction begin and transparently reconnect
if a disconnect exception occurs.
"""
if branch:
return
# turn off "close with result". This can also be accomplished
# by branching the connection, however just setting the flag is
# more performant and also doesn't get involved with some
# connection-invalidation awkwardness that occurs (see
# https://bitbucket.org/zzzeek/sqlalchemy/issue/3215/)
save_should_close_with_result = connection.should_close_with_result
connection.should_close_with_result = False
try:
# run a SELECT 1. use a core select() so that
# any details needed by Oracle, DB2, etc. are handled.
connection.scalar(select([1]))
except exception.DBConnectionError:
# catch DBConnectionError, which is raised by the filter
# system.
# disconnect detected. The connection is now
# "invalid", but the pool should be ready to return
# new connections assuming they are good now.
# run the select again to re-validate the Connection.
LOG.exception(
'Database connection was found disconnected; reconnecting')
connection.scalar(select([1]))
finally:
connection.should_close_with_result = save_should_close_with_result
def _setup_logging(connection_debug=0):
"""setup_logging function maps SQL debug level to Python log level.
connection_debug is the verbosity of SQL debugging information:
0 = None (default),
1-49 = only messages with WARNING level or higher are processed,
50-99 = only messages with INFO level or higher are processed,
100 = messages with DEBUG level are processed.
"""
if connection_debug >= 0:
logger = logging.getLogger('sqlalchemy.engine')
if connection_debug == 100:
logger.setLevel(logging.DEBUG)
elif connection_debug >= 50:
logger.setLevel(logging.INFO)
else:
logger.setLevel(logging.WARNING)
def _vet_url(url):
if "+" not in url.drivername and not url.drivername.startswith("sqlite"):
if url.drivername.startswith("mysql"):
LOG.warning(
"URL %r does not contain a '+drivername' portion, "
"and will make use of a default driver. "
"A full dbname+drivername:// protocol is recommended. "
"For MySQL, it is strongly recommended that mysql+pymysql:// "
"be specified for maximum service compatibility", url
)
else:
LOG.warning(
"URL %r does not contain a '+drivername' portion, "
"and will make use of a default driver. "
"A full dbname+drivername:// protocol is recommended.", url
)
def create_engine(sql_connection, sqlite_fk=False, mysql_sql_mode=None,
mysql_enable_ndb=False,
idle_timeout=3600,
connection_debug=0, max_pool_size=None, max_overflow=None,
pool_timeout=None, sqlite_synchronous=True,
connection_trace=False, max_retries=10, retry_interval=10,
thread_checkin=True, logging_name=None,
json_serializer=None,
json_deserializer=None):
"""Return a new SQLAlchemy engine."""
url = sqlalchemy.engine.url.make_url(sql_connection)
_vet_url(url)
engine_args = {
"pool_recycle": idle_timeout,
'convert_unicode': True,
'connect_args': {},
'logging_name': logging_name
}
_setup_logging(connection_debug)
_init_connection_args(
url, engine_args,
max_pool_size=max_pool_size,
max_overflow=max_overflow,
pool_timeout=pool_timeout,
json_serializer=json_serializer,
json_deserializer=json_deserializer,
)
engine = sqlalchemy.create_engine(url, **engine_args)
if mysql_enable_ndb:
ndb.enable_ndb_support(engine)
_init_events(
engine,
mysql_sql_mode=mysql_sql_mode,
sqlite_synchronous=sqlite_synchronous,
sqlite_fk=sqlite_fk,
thread_checkin=thread_checkin,
connection_trace=connection_trace
)
# register alternate exception handler
exc_filters.register_engine(engine)
# register engine connect handler
event.listen(engine, "engine_connect", _connect_ping_listener)
# initial connect + test
# NOTE(viktors): the current implementation of _test_connection()
# does nothing if max_retries == 0, so we can skip it
if max_retries:
test_conn = _test_connection(engine, max_retries, retry_interval)
test_conn.close()
return engine
@utils.dispatch_for_dialect('*', multiple=True)
def _init_connection_args(
url, engine_args,
max_pool_size=None, max_overflow=None, pool_timeout=None, **kw):
pool_class = url.get_dialect().get_pool_class(url)
if issubclass(pool_class, pool.QueuePool):
if max_pool_size is not None:
engine_args['pool_size'] = max_pool_size
if max_overflow is not None:
engine_args['max_overflow'] = max_overflow
if pool_timeout is not None:
engine_args['pool_timeout'] = pool_timeout
@_init_connection_args.dispatch_for("sqlite")
def _init_connection_args(url, engine_args, **kw):
pool_class = url.get_dialect().get_pool_class(url)
# singletonthreadpool is used for :memory: connections;
# replace it with StaticPool.
if issubclass(pool_class, pool.SingletonThreadPool):
engine_args["poolclass"] = pool.StaticPool
engine_args['connect_args']['check_same_thread'] = False
@_init_connection_args.dispatch_for("postgresql")
def _init_connection_args(url, engine_args, **kw):
if 'client_encoding' not in url.query:
# Set encoding using engine_args instead of connect_args since
# it's supported for PostgreSQL 8.*. More details at:
# http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html
engine_args['client_encoding'] = 'utf8'
engine_args['json_serializer'] = kw.get('json_serializer')
engine_args['json_deserializer'] = kw.get('json_deserializer')
@_init_connection_args.dispatch_for("mysql")
def _init_connection_args(url, engine_args, **kw):
if 'charset' not in url.query:
engine_args['connect_args']['charset'] = 'utf8'
@_init_connection_args.dispatch_for("mysql+mysqlconnector")
def _init_connection_args(url, engine_args, **kw):
# mysqlconnector engine (<1.0) incorrectly defaults to
# raise_on_warnings=True
# https://bitbucket.org/zzzeek/sqlalchemy/issue/2515
if 'raise_on_warnings' not in url.query:
engine_args['connect_args']['raise_on_warnings'] = False
@_init_connection_args.dispatch_for("mysql+mysqldb")
@_init_connection_args.dispatch_for("mysql+oursql")
def _init_connection_args(url, engine_args, **kw):
# Those drivers require use_unicode=0 to avoid a performance drop due
# to the internal usage of Python unicode objects in the driver
# http://docs.sqlalchemy.org/en/rel_0_9/dialects/mysql.html
if 'use_unicode' not in url.query:
if six.PY3:
engine_args['connect_args']['use_unicode'] = 1
else:
engine_args['connect_args']['use_unicode'] = 0
@utils.dispatch_for_dialect('*', multiple=True)
def _init_events(engine, thread_checkin=True, connection_trace=False, **kw):
"""Set up event listeners for all database backends."""
_add_process_guards(engine)
if connection_trace:
_add_trace_comments(engine)
if thread_checkin:
sqlalchemy.event.listen(engine, 'checkin', _thread_yield)
@_init_events.dispatch_for("mysql")
def _init_events(engine, mysql_sql_mode=None, **kw):
"""Set up event listeners for MySQL."""
if mysql_sql_mode is not None:
@sqlalchemy.event.listens_for(engine, "connect")
def _set_session_sql_mode(dbapi_con, connection_rec):
cursor = dbapi_con.cursor()
cursor.execute("SET SESSION sql_mode = %s", [mysql_sql_mode])
@sqlalchemy.event.listens_for(engine, "first_connect")
def _check_effective_sql_mode(dbapi_con, connection_rec):
if mysql_sql_mode is not None:
_set_session_sql_mode(dbapi_con, connection_rec)
cursor = dbapi_con.cursor()
cursor.execute("SHOW VARIABLES LIKE 'sql_mode'")
realmode = cursor.fetchone()
if realmode is None:
LOG.warning('Unable to detect effective SQL mode')
else:
realmode = realmode[1]
LOG.debug('MySQL server mode set to %s', realmode)
if 'TRADITIONAL' not in realmode.upper() and \
'STRICT_ALL_TABLES' not in realmode.upper():
LOG.warning(
"MySQL SQL mode is '%s', "
"consider enabling TRADITIONAL or STRICT_ALL_TABLES",
realmode)
if ndb.ndb_status(engine):
ndb.init_ndb_events(engine)
@_init_events.dispatch_for("sqlite")
def _init_events(engine, sqlite_synchronous=True, sqlite_fk=False, **kw):
"""Set up event listeners for SQLite.
This includes several settings made on connections as they are
created, as well as transactional control extensions.
"""
def regexp(expr, item):
reg = re.compile(expr)
return reg.search(six.text_type(item)) is not None
@sqlalchemy.event.listens_for(engine, "connect")
def _sqlite_connect_events(dbapi_con, con_record):
# Add REGEXP functionality on SQLite connections
dbapi_con.create_function('regexp', 2, regexp)
if not sqlite_synchronous:
# Switch sqlite connections to non-synchronous mode
dbapi_con.execute("PRAGMA synchronous = OFF")
# Disable pysqlite's emitting of the BEGIN statement entirely.
# Also stops it from emitting COMMIT before any DDL.
# Below, we emit BEGIN ourselves.
# see http://docs.sqlalchemy.org/en/rel_0_9/dialects/\
# sqlite.html#serializable-isolation-savepoints-transactional-ddl
dbapi_con.isolation_level = None
if sqlite_fk:
# Ensures that the foreign key constraints are enforced in SQLite.
dbapi_con.execute('pragma foreign_keys=ON')
@sqlalchemy.event.listens_for(engine, "begin")
def _sqlite_emit_begin(conn):
# emit our own BEGIN, checking for existing
# transactional state
if 'in_transaction' not in conn.info:
conn.execute("BEGIN")
conn.info['in_transaction'] = True
@sqlalchemy.event.listens_for(engine, "rollback")
@sqlalchemy.event.listens_for(engine, "commit")
def _sqlite_end_transaction(conn):
# remove transactional marker
conn.info.pop('in_transaction', None)
def _test_connection(engine, max_retries, retry_interval):
if max_retries == -1:
attempts = itertools.count()
else:
attempts = six.moves.range(max_retries)
# See: http://legacy.python.org/dev/peps/pep-3110/#semantic-changes for
# why we are not using 'de' directly (it can be removed from the local
# scope).
de_ref = None
for attempt in attempts:
try:
return engine.connect()
except exception.DBConnectionError as de:
msg = 'SQL connection failed. %s attempts left.'
LOG.warning(msg, max_retries - attempt)
time.sleep(retry_interval)
de_ref = de
else:
if de_ref is not None:
six.reraise(type(de_ref), de_ref)
def _add_process_guards(engine):
"""Add multiprocessing guards.
Forces a connection to be reconnected if it is detected
as having been shared to a sub-process.
"""
@sqlalchemy.event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
connection_record.info['pid'] = os.getpid()
@sqlalchemy.event.listens_for(engine, "checkout")
def checkout(dbapi_connection, connection_record, connection_proxy):
pid = os.getpid()
if connection_record.info['pid'] != pid:
LOG.debug(
"Parent process %(orig)s forked (%(newproc)s) with an open "
"database connection, "
"which is being discarded and recreated.",
{"newproc": pid, "orig": connection_record.info['pid']})
connection_record.connection = connection_proxy.connection = None
raise exc.DisconnectionError(
"Connection record belongs to pid %s, "
"attempting to check out in pid %s" %
(connection_record.info['pid'], pid)
)
def _add_trace_comments(engine):
"""Add trace comments.
Augment statements with a trace of the immediate calling code
for a given statement.
"""
import os
import sys
import traceback
target_paths = set([
os.path.dirname(sys.modules['oslo_db'].__file__),
os.path.dirname(sys.modules['sqlalchemy'].__file__)
])
try:
skip_paths = set([
os.path.dirname(sys.modules['oslo_db.tests'].__file__),
])
except KeyError:
skip_paths = set()
@sqlalchemy.event.listens_for(engine, "before_cursor_execute", retval=True)
def before_cursor_execute(conn, cursor, statement, parameters, context,
executemany):
# NOTE(zzzeek) - if different steps per DB dialect are desirable
# here, switch out on engine.name for now.
stack = traceback.extract_stack()
our_line = None
for idx, (filename, line, method, function) in enumerate(stack):
for tgt in skip_paths:
if filename.startswith(tgt):
break
else:
for tgt in target_paths:
if filename.startswith(tgt):
our_line = idx
break
if our_line:
break
if our_line:
trace = "; ".join(
"File: %s (%s) %s" % (
line[0], line[1], line[2]
)
# include three lines of context.
for line in stack[our_line - 3:our_line]
)
statement = "%s -- %s" % (statement, trace)
return statement, parameters
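A minimal usage sketch for the ``create_engine()`` wrapper above, assuming a
throwaway in-memory SQLite database; the keyword values are illustrative::

    engine = create_engine(
        'sqlite://',              # in-memory database
        sqlite_synchronous=False,
        connection_debug=50,      # log SQL at INFO level
        max_retries=3,
        retry_interval=1,
    )
    with engine.connect() as conn:
        print(conn.scalar(select([1])))   # -> 1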

View File

@ -1,526 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Define exception redefinitions for SQLAlchemy DBAPI exceptions."""
import collections
import logging
import re
import sys
from sqlalchemy import event
from sqlalchemy import exc as sqla_exc
from oslo_db import exception
LOG = logging.getLogger(__name__)
_registry = collections.defaultdict(
lambda: collections.defaultdict(
list
)
)
def filters(dbname, exception_type, regex):
"""Mark a function as receiving a filtered exception.
:param dbname: string database name, e.g. 'mysql'
:param exception_type: a SQLAlchemy database exception class, which
extends from :class:`sqlalchemy.exc.DBAPIError`.
:param regex: a string, or a tuple of strings, that will be processed
as matching regular expressions.
"""
def _receive(fn):
_registry[dbname][exception_type].extend(
(fn, re.compile(reg))
for reg in
((regex,) if not isinstance(regex, tuple) else regex)
)
return fn
return _receive
# NOTE(zzzeek) - for Postgresql, catch both OperationalError, as the
# actual error is
# psycopg2.extensions.TransactionRollbackError(OperationalError),
# as well as sqlalchemy.exc.DBAPIError, as SQLAlchemy will reraise it
# as this until issue #3075 is fixed.
@filters("mysql", sqla_exc.OperationalError, r"^.*\b1213\b.*Deadlock found.*")
@filters("mysql", sqla_exc.DatabaseError,
r"^.*\b1205\b.*Lock wait timeout exceeded.*")
@filters("mysql", sqla_exc.InternalError, r"^.*\b1213\b.*Deadlock found.*")
@filters("mysql", sqla_exc.InternalError,
r"^.*\b1213\b.*detected deadlock/conflict.*")
@filters("postgresql", sqla_exc.OperationalError, r"^.*deadlock detected.*")
@filters("postgresql", sqla_exc.DBAPIError, r"^.*deadlock detected.*")
@filters("ibm_db_sa", sqla_exc.DBAPIError, r"^.*SQL0911N.*")
def _deadlock_error(operational_error, match, engine_name, is_disconnect):
"""Filter for MySQL or Postgresql deadlock error.
NOTE(comstud): In current versions of DB backends, Deadlock violation
messages follow the structure:
mysql+mysqldb:
(OperationalError) (1213, 'Deadlock found when trying to get lock; try '
'restarting transaction') <query_str> <query_args>
mysql+mysqlconnector:
(InternalError) 1213 (40001): Deadlock found when trying to get lock; try
restarting transaction
postgresql:
(TransactionRollbackError) deadlock detected <deadlock_details>
ibm_db_sa:
SQL0911N The current transaction has been rolled back because of a
deadlock or timeout <deadlock details>
"""
raise exception.DBDeadlock(operational_error)
@filters("mysql", sqla_exc.IntegrityError,
r"^.*\b1062\b.*Duplicate entry '(?P<value>.*)'"
r" for key '(?P<columns>[^']+)'.*$")
# NOTE(jd) For binary types
@filters("mysql", sqla_exc.IntegrityError,
r"^.*\b1062\b.*Duplicate entry \\'(?P<value>.*)\\'"
r" for key \\'(?P<columns>.+)\\'.*$")
# NOTE(pkholkin): the first regex is suitable only for PostgreSQL 9.x versions
# the second regex is suitable for PostgreSQL 8.x versions
@filters("postgresql", sqla_exc.IntegrityError,
(r'^.*duplicate\s+key.*"(?P<columns>[^"]+)"\s*\n.*'
r'Key\s+\((?P<key>.*)\)=\((?P<value>.*)\)\s+already\s+exists.*$',
r"^.*duplicate\s+key.*\"(?P<columns>[^\"]+)\"\s*\n.*$"))
def _default_dupe_key_error(integrity_error, match, engine_name,
is_disconnect):
"""Filter for MySQL or Postgresql duplicate key error.
note(boris-42): In current versions of DB backends unique constraint
violation messages follow the structure:
postgres:
1 column - (IntegrityError) duplicate key value violates unique
constraint "users_c1_key"
N columns - (IntegrityError) duplicate key value violates unique
constraint "name_of_our_constraint"
mysql+mysqldb:
1 column - (IntegrityError) (1062, "Duplicate entry 'value_of_c1' for key
'c1'")
N columns - (IntegrityError) (1062, "Duplicate entry 'values joined
with -' for key 'name_of_our_constraint'")
mysql+mysqlconnector:
1 column - (IntegrityError) 1062 (23000): Duplicate entry 'value_of_c1' for
key 'c1'
N columns - (IntegrityError) 1062 (23000): Duplicate entry 'values
joined with -' for key 'name_of_our_constraint'
"""
columns = match.group('columns')
# note(vsergeyev): UniqueConstraint name convention: "uniq_t0c10c2"
# where `t` it is table name and columns `c1`, `c2`
# are in UniqueConstraint.
uniqbase = "uniq_"
if not columns.startswith(uniqbase):
if engine_name == "postgresql":
columns = [columns[columns.index("_") + 1:columns.rindex("_")]]
else:
columns = [columns]
else:
columns = columns[len(uniqbase):].split("0")[1:]
value = match.groupdict().get('value')
raise exception.DBDuplicateEntry(columns, integrity_error, value)
@filters("sqlite", sqla_exc.IntegrityError,
(r"^.*columns?(?P<columns>[^)]+)(is|are)\s+not\s+unique$",
r"^.*UNIQUE\s+constraint\s+failed:\s+(?P<columns>.+)$",
r"^.*PRIMARY\s+KEY\s+must\s+be\s+unique.*$"))
def _sqlite_dupe_key_error(integrity_error, match, engine_name, is_disconnect):
"""Filter for SQLite duplicate key error.
note(boris-42): In current versions of DB backends unique constraint
violation messages follow the structure:
sqlite:
1 column - (IntegrityError) column c1 is not unique
N columns - (IntegrityError) column c1, c2, ..., N are not unique
sqlite since 3.7.16:
1 column - (IntegrityError) UNIQUE constraint failed: tbl.k1
N columns - (IntegrityError) UNIQUE constraint failed: tbl.k1, tbl.k2
sqlite since 3.8.2:
(IntegrityError) PRIMARY KEY must be unique
"""
columns = []
# NOTE(ochuprykov): We can get here by last filter in which there are no
# groups. Trying to access the substring that matched by
# the group will lead to IndexError. In this case just
# pass empty list to exception.DBDuplicateEntry
try:
columns = match.group('columns')
columns = [c.split('.')[-1] for c in columns.strip().split(", ")]
except IndexError:
pass
raise exception.DBDuplicateEntry(columns, integrity_error)
@filters("sqlite", sqla_exc.IntegrityError,
r"(?i).*foreign key constraint failed")
@filters("postgresql", sqla_exc.IntegrityError,
r".*on table \"(?P<table>[^\"]+)\" violates "
"foreign key constraint \"(?P<constraint>[^\"]+)\".*\n"
"DETAIL: Key \((?P<key>.+)\)=\(.+\) "
"is (not present in|still referenced from) table "
"\"(?P<key_table>[^\"]+)\".")
@filters("mysql", sqla_exc.IntegrityError,
r".*Cannot (add|delete) or update a (child|parent) row: "
'a foreign key constraint fails \([`"].+[`"]\.[`"](?P<table>.+)[`"], '
'CONSTRAINT [`"](?P<constraint>.+)[`"] FOREIGN KEY '
'\([`"](?P<key>.+)[`"]\) REFERENCES [`"](?P<key_table>.+)[`"] ')
def _foreign_key_error(integrity_error, match, engine_name, is_disconnect):
"""Filter for foreign key errors."""
try:
table = match.group("table")
except IndexError:
table = None
try:
constraint = match.group("constraint")
except IndexError:
constraint = None
try:
key = match.group("key")
except IndexError:
key = None
try:
key_table = match.group("key_table")
except IndexError:
key_table = None
raise exception.DBReferenceError(table, constraint, key, key_table,
integrity_error)
@filters("postgresql", sqla_exc.IntegrityError,
r".*new row for relation \"(?P<table>.+)\" "
"violates check constraint "
"\"(?P<check_name>.+)\"")
def _check_constraint_error(
integrity_error, match, engine_name, is_disconnect):
"""Filter for check constraint errors."""
try:
table = match.group("table")
except IndexError:
table = None
try:
check_name = match.group("check_name")
except IndexError:
check_name = None
raise exception.DBConstraintError(table, check_name, integrity_error)
@filters("postgresql", sqla_exc.ProgrammingError,
r".* constraint \"(?P<constraint>.+)\" "
"of relation "
"\"(?P<relation>.+)\" does not exist")
@filters("mysql", sqla_exc.InternalError,
r".*1091,.*Can't DROP '(?P<constraint>.+)'; "
"check that column/key exists")
@filters("mysql", sqla_exc.OperationalError,
r".*1091,.*Can't DROP '(?P<constraint>.+)'; "
"check that column/key exists")
@filters("mysql", sqla_exc.InternalError,
r".*1025,.*Error on rename of '.+/(?P<relation>.+)' to ")
def _check_constraint_non_existing(
programming_error, match, engine_name, is_disconnect):
"""Filter for constraint non existing errors."""
try:
relation = match.group("relation")
except IndexError:
relation = None
try:
constraint = match.group("constraint")
except IndexError:
constraint = None
raise exception.DBNonExistentConstraint(relation,
constraint,
programming_error)
@filters("sqlite", sqla_exc.OperationalError,
r".* no such table: (?P<table>.+)")
@filters("mysql", sqla_exc.InternalError,
r".*1051,.*Unknown table '(.+\.)?(?P<table>.+)'\"")
@filters("mysql", sqla_exc.OperationalError,
r".*1051,.*Unknown table '(.+\.)?(?P<table>.+)'\"")
@filters("postgresql", sqla_exc.ProgrammingError,
r".* table \"(?P<table>.+)\" does not exist")
def _check_table_non_existing(
programming_error, match, engine_name, is_disconnect):
"""Filter for table non existing errors."""
raise exception.DBNonExistentTable(match.group("table"), programming_error)
@filters("mysql", sqla_exc.InternalError,
r".*1049,.*Unknown database '(?P<database>.+)'\"")
@filters("mysql", sqla_exc.OperationalError,
r".*1049,.*Unknown database '(?P<database>.+)'\"")
@filters("postgresql", sqla_exc.OperationalError,
r".*database \"(?P<database>.+)\" does not exist")
@filters("sqlite", sqla_exc.OperationalError,
".*unable to open database file.*")
def _check_database_non_existing(
error, match, engine_name, is_disconnect):
try:
database = match.group("database")
except IndexError:
database = None
raise exception.DBNonExistentDatabase(database, error)
@filters("ibm_db_sa", sqla_exc.IntegrityError, r"^.*SQL0803N.*$")
def _db2_dupe_key_error(integrity_error, match, engine_name, is_disconnect):
"""Filter for DB2 duplicate key errors.
N columns - (IntegrityError) SQL0803N One or more values in the INSERT
statement, UPDATE statement, or foreign key update caused by a
DELETE statement are not valid because the primary key, unique
constraint or unique index identified by "2" constrains table
"NOVA.KEY_PAIRS" from having duplicate values for the index
key.
"""
# NOTE(mriedem): The ibm_db_sa integrity error message doesn't provide the
# columns so we have to omit that from the DBDuplicateEntry error.
raise exception.DBDuplicateEntry([], integrity_error)
@filters("mysql", sqla_exc.DBAPIError, r".*\b1146\b")
def _raise_mysql_table_doesnt_exist_asis(
error, match, engine_name, is_disconnect):
"""Raise MySQL error 1146 as is.
Raise MySQL error 1146 as is, so that it does not conflict with
the MySQL dialect's check for a non-existent table.
raise error
@filters("mysql", sqla_exc.OperationalError,
r".*(1292|1366).*Incorrect \w+ value.*")
@filters("mysql", sqla_exc.DataError,
r".*1265.*Data truncated for column.*")
@filters("mysql", sqla_exc.DataError,
r".*1264.*Out of range value for column.*")
@filters("mysql", sqla_exc.InternalError,
r"^.*1366.*Incorrect string value:*")
@filters("sqlite", sqla_exc.ProgrammingError,
r"(?i).*You must not use 8-bit bytestrings*")
@filters("mysql", sqla_exc.DataError,
r".*1406.*Data too long for column.*")
def _raise_data_error(error, match, engine_name, is_disconnect):
"""Raise DBDataError exception for different data errors."""
raise exception.DBDataError(error)
@filters("mysql", sqla_exc.OperationalError,
r".*\(1305,\s+\'SAVEPOINT\s+(.+)\s+does not exist\'\)")
def _raise_savepoints_as_dberrors(error, match, engine_name, is_disconnect):
# NOTE(rpodolyaka): this is a special case of an OperationalError that used
# to be an InternalError. It is expected to be wrapped into an oslo.db error.
raise exception.DBError(error)
@filters("*", sqla_exc.OperationalError, r".*")
def _raise_operational_errors_directly_filter(operational_error,
match, engine_name,
is_disconnect):
"""Filter for all remaining OperationalError classes and apply.
Filter for all remaining OperationalError classes and apply
special rules.
"""
if is_disconnect:
# operational errors that represent disconnect
# should be wrapped
raise exception.DBConnectionError(operational_error)
else:
# NOTE(comstud): A lot of code is checking for OperationalError
# so let's not wrap it for now.
raise operational_error
@filters("mysql", sqla_exc.OperationalError, r".*\(.*(?:2002|2003|2006|2013|1047)") # noqa
@filters("mysql", sqla_exc.InternalError, r".*\(.*(?:1927)") # noqa
@filters("mysql", sqla_exc.InternalError, r".*Packet sequence number wrong") # noqa
@filters("postgresql", sqla_exc.OperationalError, r".*could not connect to server") # noqa
@filters("ibm_db_sa", sqla_exc.OperationalError, r".*(?:30081)")
def _is_db_connection_error(operational_error, match, engine_name,
is_disconnect):
"""Detect the exception as indicating a recoverable error on connect."""
raise exception.DBConnectionError(operational_error)
@filters("*", sqla_exc.NotSupportedError, r".*")
def _raise_for_NotSupportedError(error, match, engine_name, is_disconnect):
raise exception.DBNotSupportedError(error)
@filters("*", sqla_exc.DBAPIError, r".*")
def _raise_for_remaining_DBAPIError(error, match, engine_name, is_disconnect):
"""Filter for remaining DBAPIErrors.
Filter for remaining DBAPIErrors and wrap if they represent
a disconnect error.
"""
if is_disconnect:
raise exception.DBConnectionError(error)
else:
LOG.exception(
'DBAPIError exception wrapped from %s' % error)
raise exception.DBError(error)
@filters('*', UnicodeEncodeError, r".*")
def _raise_for_unicode_encode(error, match, engine_name, is_disconnect):
raise exception.DBInvalidUnicodeParameter()
@filters("*", Exception, r".*")
def _raise_for_all_others(error, match, engine_name, is_disconnect):
LOG.exception('DB exception wrapped.')
raise exception.DBError(error)
ROLLBACK_CAUSE_KEY = 'oslo.db.sp_rollback_cause'
def handler(context):
"""Iterate through available filters and invoke those which match.
The first one which raises wins. The order in which the filters
are attempted is sorted by specificity - dialect name or "*",
exception class per method resolution order (``__mro__``).
Method resolution order is used so that filter rules indicating a
more specific exception class are attempted first.
"""
def _dialect_registries(engine):
if engine.dialect.name in _registry:
yield _registry[engine.dialect.name]
if '*' in _registry:
yield _registry['*']
for per_dialect in _dialect_registries(context.engine):
for exc in (
context.sqlalchemy_exception,
context.original_exception):
for super_ in exc.__class__.__mro__:
if super_ in per_dialect:
regexp_reg = per_dialect[super_]
for fn, regexp in regexp_reg:
match = regexp.match(exc.args[0])
if match:
try:
fn(
exc,
match,
context.engine.dialect.name,
context.is_disconnect)
except exception.DBError as dbe:
if (
context.connection is not None and
not context.connection.closed and
not context.connection.invalidated and
ROLLBACK_CAUSE_KEY
in context.connection.info
):
dbe.cause = \
context.connection.info.pop(
ROLLBACK_CAUSE_KEY)
if isinstance(
dbe, exception.DBConnectionError):
context.is_disconnect = True
raise
def register_engine(engine):
event.listen(engine, "handle_error", handler)
@event.listens_for(engine, "rollback_savepoint")
def rollback_savepoint(conn, name, context):
exc_info = sys.exc_info()
if exc_info[1]:
# NOTE(zzzeek) accessing conn.info on an invalidated
# connection causes it to reconnect, which we don't
# want to do inside a rollback handler
if not conn.invalidated:
conn.info[ROLLBACK_CAUSE_KEY] = exc_info[1]
# NOTE(zzzeek) this eliminates a reference cycle between tracebacks
# that would occur in Python 3 only, which has been shown to occur if
# this function were in fact part of the traceback. That's not the
# case here however this is left as a defensive measure.
del exc_info
# try to clear the "cause" ASAP outside of savepoints,
# by grabbing the end of transaction events...
@event.listens_for(engine, "rollback")
@event.listens_for(engine, "commit")
def pop_exc_tx(conn):
# NOTE(zzzeek) accessing conn.info on an invalidated
# connection causes it to reconnect, which we don't
# want to do inside a rollback handler
if not conn.invalidated:
conn.info.pop(ROLLBACK_CAUSE_KEY, None)
# .. as well as connection pool checkin (just in case).
# the .info dictionary lasts as long as the DBAPI connection itself
# and is cleared out when the connection is recycled or closed
# due to invalidate etc.
@event.listens_for(engine, "checkin")
def pop_exc_checkin(dbapi_conn, connection_record):
connection_record.info.pop(ROLLBACK_CAUSE_KEY, None)
def handle_connect_error(engine):
"""Connect to the engine, including handle_error handlers.
The compat library now builds this into the engine.connect()
system as per SQLAlchemy 1.0's behavior.
"""
return engine.connect()
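A usage sketch: once ``register_engine()`` is called, backend-specific errors
surface as oslo.db exceptions. SQLite example; the table name is made up::

    import sqlalchemy

    engine = sqlalchemy.create_engine('sqlite://')
    register_engine(engine)
    try:
        engine.execute('SELECT * FROM no_such_table')
    except exception.DBNonExistentTable as e:
        # Matched by the sqlite "no such table" filter above.
        print(e.table)   # -> no_such_table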

View File

@ -1,173 +0,0 @@
# coding=utf-8
# Copyright (c) 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Base on code in migrate/changeset/databases/sqlite.py which is under
# the following license:
#
# The MIT License
#
# Copyright (c) 2009 Evan Rosson, Jan Dittberner, Domen Kožar
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import os
from migrate import exceptions as versioning_exceptions
from migrate.versioning import api as versioning_api
from migrate.versioning.repository import Repository
import sqlalchemy
from oslo_db._i18n import _
from oslo_db import exception
def db_sync(engine, abs_path, version=None, init_version=0, sanity_check=True):
"""Upgrade or downgrade a database.
Function runs the upgrade() or downgrade() functions in change scripts.
:param engine: SQLAlchemy engine instance for a given database
:param abs_path: Absolute path to migrate repository.
:param version: Database will upgrade/downgrade until this version.
If None, the database will update to the latest
available version.
:param init_version: Initial database version
:param sanity_check: Require schema sanity checking for all tables
"""
if version is not None:
try:
version = int(version)
except ValueError:
raise exception.DBMigrationError(_("version should be an integer"))
current_version = db_version(engine, abs_path, init_version)
repository = _find_migrate_repo(abs_path)
if sanity_check:
_db_schema_sanity_check(engine)
if version is None or version > current_version:
try:
migration = versioning_api.upgrade(engine, repository, version)
except Exception as ex:
raise exception.DBMigrationError(ex)
else:
migration = versioning_api.downgrade(engine, repository,
version)
if sanity_check:
_db_schema_sanity_check(engine)
return migration
def _db_schema_sanity_check(engine):
"""Ensure all database tables were created with required parameters.
:param engine: SQLAlchemy engine instance for a given database
"""
if engine.name == 'mysql':
onlyutf8_sql = ('SELECT TABLE_NAME,TABLE_COLLATION '
'from information_schema.TABLES '
'where TABLE_SCHEMA=%s and '
'TABLE_COLLATION NOT LIKE \'%%utf8%%\'')
# NOTE(morganfainberg): exclude the sqlalchemy-migrate and alembic
# versioning tables from the tables we need to verify utf8 status on.
# Non-standard table names are not supported.
EXCLUDED_TABLES = ['migrate_version', 'alembic_version']
table_names = [res[0] for res in
engine.execute(onlyutf8_sql, engine.url.database) if
res[0].lower() not in EXCLUDED_TABLES]
if len(table_names) > 0:
raise ValueError(_('Tables "%s" have non utf8 collation, '
'please make sure all tables are CHARSET=utf8'
) % ','.join(table_names))
def db_version(engine, abs_path, init_version):
"""Show the current version of the repository.
:param engine: SQLAlchemy engine instance for a given database
:param abs_path: Absolute path to migrate repository
:param init_version: Initial database version
"""
repository = _find_migrate_repo(abs_path)
try:
return versioning_api.db_version(engine, repository)
except versioning_exceptions.DatabaseNotControlledError:
meta = sqlalchemy.MetaData()
meta.reflect(bind=engine)
tables = meta.tables
if (len(tables) == 0 or 'alembic_version' in tables or
'migrate_version' in tables):
db_version_control(engine, abs_path, version=init_version)
return versioning_api.db_version(engine, repository)
else:
raise exception.DBMigrationError(
_("The database is not under version control, but has "
"tables. Please stamp the current version of the schema "
"manually."))
def db_version_control(engine, abs_path, version=None):
"""Mark a database as under this repository's version control.
Once a database is under version control, schema changes should
only be done via change scripts in this repository.
:param engine: SQLAlchemy engine instance for a given database
:param abs_path: Absolute path to migrate repository
:param version: Initial database version
"""
repository = _find_migrate_repo(abs_path)
try:
versioning_api.version_control(engine, repository, version)
except versioning_exceptions.InvalidVersionError as ex:
raise exception.DBMigrationError("Invalid version : %s" % ex)
except versioning_exceptions.DatabaseAlreadyControlledError:
raise exception.DBMigrationError("Database is already controlled.")
return version
def _find_migrate_repo(abs_path):
"""Get the project's change script repository
:param abs_path: Absolute path to migrate repository
"""
if not os.path.exists(abs_path):
raise exception.DBMigrationError("Path %s not found" % abs_path)
return Repository(abs_path)
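A sketch of the typical call sequence, assuming ``/path/to/migrate_repo`` is
a placeholder for a real sqlalchemy-migrate repository::

    import sqlalchemy

    engine = sqlalchemy.create_engine('sqlite:///example.db')
    repo = '/path/to/migrate_repo'   # placeholder path
    # Stamp a fresh database, then upgrade it to the latest version.
    db_version_control(engine, repo, version=0)
    db_sync(engine, repo)
    print(db_version(engine, repo, init_version=0))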

View File

@ -1,9 +0,0 @@
This module can be used:
1. For a smooth transition from the migrate tool to alembic
2. As a standalone alembic tool
Core points:
1. Upgrade/downgrade a database using alembic migrations, migrate
migrations, or both
2. Compatibility with oslo.config
3. A way to autogenerate new revisions or stamps

View File

@ -1,112 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import alembic
from alembic import config as alembic_config
import alembic.migration as alembic_migration
from alembic import script as alembic_script
from oslo_db.sqlalchemy.migration_cli import ext_base
class AlembicExtension(ext_base.MigrationExtensionBase):
"""Extension to provide alembic features.
:param engine: SQLAlchemy engine instance for a given database
:type engine: sqlalchemy.engine.Engine
:param migration_config: Stores specific configuration for migrations
:type migration_config: dict
"""
order = 2
@property
def enabled(self):
return os.path.exists(self.alembic_ini_path)
def __init__(self, engine, migration_config):
self.alembic_ini_path = migration_config.get('alembic_ini_path', '')
self.config = alembic_config.Config(self.alembic_ini_path)
# TODO(viktors): Remove this when we move to Alembic 0.7.5 or
#                higher, because the ``attributes`` dictionary was
#                added to Alembic in version 0.7.5.
if not hasattr(self.config, 'attributes'):
self.config.attributes = {}
# option should be used if script is not in default directory
repo_path = migration_config.get('alembic_repo_path')
if repo_path:
self.config.set_main_option('script_location', repo_path)
self.engine = engine
def upgrade(self, version):
with self.engine.begin() as connection:
self.config.attributes['connection'] = connection
return alembic.command.upgrade(self.config, version or 'head')
def downgrade(self, version):
if isinstance(version, int) or version is None or version.isdigit():
version = 'base'
with self.engine.begin() as connection:
self.config.attributes['connection'] = connection
return alembic.command.downgrade(self.config, version)
def version(self):
with self.engine.connect() as conn:
context = alembic_migration.MigrationContext.configure(conn)
return context.get_current_revision()
def revision(self, message='', autogenerate=False):
"""Creates template for migration.
:param message: Text that will be used for migration title
:type message: string
:param autogenerate: If True, generates a diff based on the current
database state
:type autogenerate: bool
"""
with self.engine.begin() as connection:
self.config.attributes['connection'] = connection
return alembic.command.revision(self.config, message=message,
autogenerate=autogenerate)
def stamp(self, revision):
"""Stamps database with provided revision.
:param revision: Should match a revision from the repository, or 'head'
to stamp the database with the most recent revision
:type revision: string
"""
with self.engine.begin() as connection:
self.config.attributes['connection'] = connection
return alembic.command.stamp(self.config, revision=revision)
def has_revision(self, rev_id):
if rev_id in ['base', 'head']:
return True
# Although alembic supports relative upgrades and downgrades,
# get_revision always returns False for relative revisions.
# Since only alembic supports relative revisions, assume the
# revision belongs to this plugin.
if rev_id: # rev_id can be None, so the check is required
if '-' in rev_id or '+' in rev_id:
return True
script = alembic_script.ScriptDirectory(
self.config.get_main_option('script_location'))
try:
script.get_revision(rev_id)
return True
except alembic.util.CommandError:
return False
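A usage sketch for the plugin, assuming an existing alembic.ini and migration
scripts (the paths are placeholders)::

    import sqlalchemy

    engine = sqlalchemy.create_engine('sqlite:///example.db')
    ext = AlembicExtension(engine, {'alembic_ini_path': 'alembic.ini'})
    if ext.enabled:           # True only if alembic.ini exists
        ext.upgrade('head')
        print(ext.version())  # current alembic revision id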

View File

@ -1,88 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class MigrationExtensionBase(object):
# used to sort migration in logical order
order = 0
@property
def enabled(self):
"""Used for availability verification of a plugin.
:rtype: bool
"""
return False
@abc.abstractmethod
def upgrade(self, version):
"""Used for upgrading database.
:param version: Desired database version
:type version: string
"""
@abc.abstractmethod
def downgrade(self, version):
"""Used for downgrading database.
:param version: Desired database version
:type version: string
"""
@abc.abstractmethod
def version(self):
"""Current database version.
:returns: Database version
:rtype: string
"""
def revision(self, *args, **kwargs):
"""Used to generate migration script.
In migration engines that support this feature, it should generate
a new migration script.
Accepts an arbitrary set of arguments.
"""
raise NotImplementedError()
def stamp(self, *args, **kwargs):
"""Stamps database based on plugin features.
Accepts an arbitrary set of arguments.
"""
raise NotImplementedError()
def has_revision(self, rev_id):
"""Checks whether the repo contains a revision
:param rev_id: Revision to check
:returns: Whether the revision is in the repo
:rtype: bool
"""
raise NotImplementedError()
def __cmp__(self, other):
"""Used for definition of plugin order.
:param other: MigrationExtensionBase instance
:rtype: bool
"""
return self.order > other.order
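A minimal, hypothetical plugin illustrating the interface a subclass must
provide::

    class NoopExtension(MigrationExtensionBase):
        """Do-nothing plugin; upgrade/downgrade just echo the version."""

        order = 3

        @property
        def enabled(self):
            return True

        def upgrade(self, version):
            return version

        def downgrade(self, version):
            return version

        def version(self):
            return None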

View File

@ -1,79 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
from migrate.versioning import version as migrate_version
from oslo_db.sqlalchemy import migration
from oslo_db.sqlalchemy.migration_cli import ext_base
LOG = logging.getLogger(__name__)
class MigrateExtension(ext_base.MigrationExtensionBase):
"""Extension to provide sqlalchemy-migrate features.
:param migration_config: Stores specific configuration for migrations
:type migration_config: dict
"""
order = 1
def __init__(self, engine, migration_config):
self.engine = engine
self.repository = migration_config.get('migration_repo_path', '')
self.init_version = migration_config.get('init_version', 0)
@property
def enabled(self):
return os.path.exists(self.repository)
def upgrade(self, version):
version = None if version == 'head' else version
return migration.db_sync(
self.engine, self.repository, version,
init_version=self.init_version)
def downgrade(self, version):
try:
# version for migrate should be valid int - else skip
if version in ('base', None):
version = self.init_version
version = int(version)
return migration.db_sync(
self.engine, self.repository, version,
init_version=self.init_version)
except ValueError:
LOG.error(
'Migration number for the migrate plugin must be a valid '
'integer, or empty if you want to downgrade '
'to the initial state'
)
raise
def version(self):
return migration.db_version(
self.engine, self.repository, init_version=self.init_version)
def has_revision(self, rev_id):
collection = migrate_version.Collection(self.repository)
try:
collection.version(rev_id)
return True
except (KeyError, ValueError):
# NOTE(breton): migrate raises KeyError if an int is passed but not
# found in the list of revisions and ValueError if non-int is
# passed. Both mean there is no requested revision.
return False
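Used directly, the plugin looks like this (a sketch; the repository path is
a placeholder)::

    import sqlalchemy

    engine = sqlalchemy.create_engine('sqlite:///example.db')
    ext = MigrateExtension(
        engine, {'migration_repo_path': '/path/to/migrate_repo',
                 'init_version': 0})
    if ext.enabled:
        ext.upgrade('head')   # 'head' maps to the latest version
        print(ext.version())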

View File

@ -1,107 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from stevedore import enabled
from oslo_db import exception
MIGRATION_NAMESPACE = 'oslo.db.migration'
def check_plugin_enabled(ext):
"""Used for EnabledExtensionManager."""
return ext.obj.enabled
class MigrationManager(object):
def __init__(self, migration_config, engine=None):
if engine is None:
if migration_config.get('db_url'):
engine = sqlalchemy.create_engine(
migration_config['db_url'],
poolclass=sqlalchemy.pool.NullPool,
)
else:
raise ValueError('Either database url or engine'
' must be provided.')
self._manager = enabled.EnabledExtensionManager(
MIGRATION_NAMESPACE,
check_plugin_enabled,
invoke_args=(engine, migration_config),
invoke_on_load=True
)
if not self._plugins:
raise ValueError('There must be at least one plugin active.')
@property
def _plugins(self):
return sorted(ext.obj for ext in self._manager.extensions)
def upgrade(self, revision):
"""Upgrade database with all available backends."""
# a revision exists only in a single plugin. Until we reach it, we
# should upgrade each earlier plugin to its head.
# revision=None is a special case meaning latest revision.
rev_in_plugins = [p.has_revision(revision) for p in self._plugins]
if not any(rev_in_plugins) and revision is not None:
raise exception.DBMigrationError('Revision does not exist')
results = []
for plugin, has_revision in zip(self._plugins, rev_in_plugins):
if not has_revision or revision is None:
results.append(plugin.upgrade(None))
else:
results.append(plugin.upgrade(revision))
break
return results
def downgrade(self, revision):
"""Downgrade database with available backends."""
# a revision exists only in a single plugin. Until we reach it, we
# should downgrade each later plugin to its first revision.
# revision=None is a special case meaning initial revision.
rev_in_plugins = [p.has_revision(revision) for p in self._plugins]
if not any(rev_in_plugins) and revision is not None:
raise exception.DBMigrationError('Revision does not exist')
# downgrading should be performed in reversed order
results = []
for plugin, has_revision in zip(reversed(self._plugins),
reversed(rev_in_plugins)):
if not has_revision or revision is None:
results.append(plugin.downgrade(None))
else:
results.append(plugin.downgrade(revision))
break
return results
def version(self):
"""Return last version of db."""
last = None
for plugin in self._plugins:
version = plugin.version()
if version is not None:
last = version
return last
def revision(self, message, autogenerate):
"""Generate template or autogenerated revision."""
# revision should be done only by last plugin
return self._plugins[-1].revision(message, autogenerate)
def stamp(self, revision):
"""Create stamp for a given revision."""
return self._plugins[-1].stamp(revision)
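Putting it together, a sketch of driving both plugins through the manager;
the paths are placeholders, and the plugins must be installed under the
'oslo.db.migration' entry point namespace for stevedore to discover them::

    config = {
        'db_url': 'sqlite:///example.db',
        'alembic_ini_path': '/path/to/alembic.ini',
        'migration_repo_path': '/path/to/migrate_repo',
        'init_version': 0,
    }
    manager = MigrationManager(config)
    manager.upgrade(None)    # None means: run every plugin to its head
    print(manager.version())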

View File

@ -1,150 +0,0 @@
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Piston Cloud Computing, Inc.
# Copyright 2012 Cloudscaling Group, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
SQLAlchemy models.
"""
import six
from oslo_utils import timeutils
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy.orm import object_mapper
from oslo_db.sqlalchemy import types
class ModelBase(six.Iterator):
"""Base class for models."""
__table_initialized__ = False
def save(self, session):
"""Save this object."""
# NOTE(boris-42): This part of the code should look like:
#                 session.add(self)
#                 session.flush()
#                 But there is a bug in sqlalchemy and eventlet that
#                 raises a NoneType exception if there is no running
#                 transaction and rollback is called. As long as
#                 sqlalchemy has this bug we have to create the
#                 transaction explicitly.
with session.begin(subtransactions=True):
session.add(self)
session.flush()
def __setitem__(self, key, value):
setattr(self, key, value)
def __getitem__(self, key):
return getattr(self, key)
def __contains__(self, key):
# Don't use hasattr() because hasattr() catches any exception, not only
# AttributeError. We want to pass through SQLAlchemy exceptions
# (ex: sqlalchemy.orm.exc.DetachedInstanceError).
try:
getattr(self, key)
except AttributeError:
return False
else:
return True
def get(self, key, default=None):
return getattr(self, key, default)
@property
def _extra_keys(self):
"""Specifies custom fields
Subclasses can override this property to return a list
of custom fields that should be included in their dict
representation.
For reference check tests/db/sqlalchemy/test_models.py
"""
return []
def __iter__(self):
columns = list(dict(object_mapper(self).columns).keys())
# NOTE(russellb): Allow models to specify other keys that can be looked
# up, beyond the actual db columns. An example would be the 'name'
# property for an Instance.
columns.extend(self._extra_keys)
return ModelIterator(self, iter(columns))
def update(self, values):
"""Make the model object behave like a dict."""
for k, v in six.iteritems(values):
setattr(self, k, v)
def _as_dict(self):
"""Make the model object behave like a dict.
Includes attributes from joins.
"""
local = dict((key, value) for key, value in self)
joined = dict([(k, v) for k, v in six.iteritems(self.__dict__)
if not k[0] == '_'])
local.update(joined)
return local
def iteritems(self):
"""Make the model object behave like a dict."""
return six.iteritems(self._as_dict())
def items(self):
"""Make the model object behave like a dict."""
return self._as_dict().items()
def keys(self):
"""Make the model object behave like a dict."""
return [key for key, value in self.iteritems()]
class ModelIterator(six.Iterator):
def __init__(self, model, columns):
self.model = model
self.i = columns
def __iter__(self):
return self
# In Python 3, __next__() has replaced next().
def __next__(self):
n = six.advance_iterator(self.i)
return n, getattr(self.model, n)
class TimestampMixin(object):
created_at = Column(DateTime, default=lambda: timeutils.utcnow())
updated_at = Column(DateTime, onupdate=lambda: timeutils.utcnow())
class SoftDeleteMixin(object):
deleted_at = Column(DateTime)
deleted = Column(types.SoftDeleteInteger, default=0)
def soft_delete(self, session):
"""Mark this object as deleted."""
self.deleted = self.id
self.deleted_at = timeutils.utcnow()
self.save(session=session)
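A minimal sketch, not part of the original module, showing how the mixins above combine in a declarative model; the Bar model and its columns are hypothetical:
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

from oslo_db.sqlalchemy import models

Base = declarative_base()


class Bar(Base, models.ModelBase, models.TimestampMixin):
    __tablename__ = 'bar'          # hypothetical table/model
    id = Column(Integer, primary_key=True)
    name = Column(String(255))


bar = Bar()
bar['name'] = 'example'            # __setitem__ delegates to setattr()
assert 'name' in bar               # __contains__ probes with getattr()
assert bar.get('name') == 'example'
print(dict(bar.items()))           # dict-like view of the mapped columns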


@ -1,137 +0,0 @@
# Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Core functions for MySQL Cluster (NDB) Support."""
import re
from sqlalchemy import String, event, schema
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import VARCHAR
engine_regex = re.compile("engine=innodb", re.IGNORECASE)
trans_regex = re.compile("savepoint|rollback|release savepoint", re.IGNORECASE)
def enable_ndb_support(engine):
"""Enable NDB Support.
Function to flag the MySQL engine dialect to support features specific
to MySQL Cluster (NDB).
"""
engine.dialect._oslodb_enable_ndb_support = True
def ndb_status(engine_or_compiler):
"""Test if NDB Support is enabled.
Function to test if NDB support is enabled or not.
"""
return getattr(engine_or_compiler.dialect,
'_oslodb_enable_ndb_support',
False)
def init_ndb_events(engine):
"""Initialize NDB Events.
Function starts NDB specific events.
"""
@event.listens_for(engine, "before_cursor_execute", retval=True)
def before_cursor_execute(conn, cursor, statement, parameters, context,
executemany):
"""Listen for specific SQL strings and replace automatically.
Function will intercept any raw execute calls and automatically
convert InnoDB to NDBCLUSTER, drop SAVEPOINT requests, drop
ROLLBACK requests, and drop RELEASE SAVEPOINT requests.
"""
if ndb_status(engine):
statement = engine_regex.sub("ENGINE=NDBCLUSTER", statement)
if re.match(trans_regex, statement):
statement = "SET @oslo_db_ndb_savepoint_rollback_disabled = 0;"
return statement, parameters
@compiles(schema.CreateTable, "mysql")
def prefix_inserts(create_table, compiler, **kw):
"""Replace InnoDB with NDBCLUSTER automatically.
Function will intercept CreateTable() calls and automatically
convert InnoDB to NDBCLUSTER. Targets compiler events.
"""
existing = compiler.visit_create_table(create_table, **kw)
if ndb_status(compiler):
existing = engine_regex.sub("ENGINE=NDBCLUSTER", existing)
return existing
class AutoStringTinyText(String):
"""Class definition for AutoStringTinyText.
Class is used by compiler function _auto_string_tiny_text().
"""
pass
@compiles(AutoStringTinyText, 'mysql')
def _auto_string_tiny_text(element, compiler, **kw):
if ndb_status(compiler):
return "TINYTEXT"
else:
return compiler.visit_string(element, **kw)
class AutoStringText(String):
"""Class definition for AutoStringText.
Class is used by compiler function _auto_string_text().
"""
pass
@compiles(AutoStringText, 'mysql')
def _auto_string_text(element, compiler, **kw):
if ndb_status(compiler):
return "TEXT"
else:
return compiler.visit_string(element, **kw)
class AutoStringSize(String):
"""Class definition for AutoStringSize.
Class is used by the compiler function _auto_string_size().
"""
def __init__(self, length, ndb_size, **kw):
"""Initialize and extend the String arguments.
Function adds the ndb_size argument to the arguments of
String().
"""
super(AutoStringSize, self).__init__(length=length, **kw)
self.ndb_size = ndb_size
self.length = length
@compiles(AutoStringSize, 'mysql')
def _auto_string_size(element, compiler, **kw):
if ndb_status(compiler):
return compiler.process(VARCHAR(element.ndb_size), **kw)
else:
return compiler.visit_string(element, **kw)
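A hedged sketch of how these adaptive types might appear in a table definition; the table name and sizes are made up:
from sqlalchemy import Column, Integer, MetaData, Table

from oslo_db.sqlalchemy.ndb import (AutoStringSize, AutoStringText,
                                    AutoStringTinyText)

metadata = MetaData()
example = Table(
    'example', metadata,               # hypothetical table
    Column('id', Integer, primary_key=True),
    # VARCHAR(255) on stock MySQL; VARCHAR(64) once enable_ndb_support()
    # has flagged the engine's dialect.
    Column('name', AutoStringSize(255, ndb_size=64)),
    # VARCHAR(255) on stock MySQL; TINYTEXT / TEXT respectively under NDB.
    Column('summary', AutoStringTinyText(255)),
    Column('body', AutoStringText(1000)),
)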


@ -1,66 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""SQLAlchemy ORM connectivity and query structures.
"""
from oslo_utils import timeutils
import sqlalchemy.orm
from sqlalchemy.sql.expression import literal_column
from oslo_db.sqlalchemy import update_match
class Query(sqlalchemy.orm.query.Query):
"""Subclass of sqlalchemy.query with soft_delete() method."""
def soft_delete(self, synchronize_session='evaluate'):
return self.update({'deleted': literal_column('id'),
'updated_at': literal_column('updated_at'),
'deleted_at': timeutils.utcnow()},
synchronize_session=synchronize_session)
def update_returning_pk(self, values, surrogate_key):
"""Perform an UPDATE, returning the primary key of the matched row.
This is a method-version of
oslo_db.sqlalchemy.update_match.update_returning_pk(); see that
function for usage details.
"""
return update_match.update_returning_pk(self, values, surrogate_key)
def update_on_match(self, specimen, surrogate_key, values, **kw):
"""Emit an UPDATE statement matching the given specimen.
This is a method-version of
oslo_db.sqlalchemy.update_match.update_on_match(); see that function
for usage details.
"""
return update_match.update_on_match(
self, specimen, surrogate_key, values, **kw)
class Session(sqlalchemy.orm.session.Session):
"""oslo.db-specific Session subclass."""
def get_maker(engine, autocommit=True, expire_on_commit=False):
"""Return a SQLAlchemy sessionmaker using the given engine."""
return sqlalchemy.orm.sessionmaker(bind=engine,
class_=Session,
autocommit=autocommit,
expire_on_commit=expire_on_commit,
query_cls=Query)
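A minimal sketch of wiring this session maker to an engine; the URL is illustrative and MyModel (with the soft-delete columns) is hypothetical, so the soft_delete() call is left commented:
import sqlalchemy

from oslo_db.sqlalchemy import orm

engine = sqlalchemy.create_engine('sqlite://')   # illustrative URL
maker = orm.get_maker(engine)
session = maker()
# Queries from this session are oslo.db Query objects, so soft_delete()
# is available on them:
# session.query(MyModel).filter_by(id=1).soft_delete()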


@ -1,698 +0,0 @@
# Copyright 2014 Red Hat
# Copyright 2013 Mirantis.inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Provision test environment for specific DB backends"""
import abc
import debtcollector
import logging
import os
import random
import re
import string
import six
from six import moves
import sqlalchemy
from sqlalchemy.engine import url as sa_url
from sqlalchemy import schema
import testresources
from oslo_db import exception
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import session
from oslo_db.sqlalchemy import utils
LOG = logging.getLogger(__name__)
class ProvisionedDatabase(object):
"""Represents a database engine pointing to a DB ready to run tests.
backend: an instance of :class:`.Backend`
enginefacade: an instance of :class:`._TransactionFactory`
engine: a SQLAlchemy :class:`.Engine`
db_token: if provision_new_database were used, this is the randomly
generated name of the database. Note that with SQLite memory
connections, this token is ignored. For a database that
wasn't actually created, will be None.
"""
__slots__ = 'backend', 'enginefacade', 'engine', 'db_token'
def __init__(self, backend, enginefacade, engine, db_token):
self.backend = backend
self.enginefacade = enginefacade
self.engine = engine
self.db_token = db_token
class Schema(object):
""""Represents a database schema that has or will be populated.
This is a marker object as required by testresources but otherwise
serves no purpose.
"""
__slots__ = 'database',
class BackendResource(testresources.TestResourceManager):
def __init__(self, database_type, ad_hoc_url=None):
super(BackendResource, self).__init__()
self.database_type = database_type
self.backend = Backend.backend_for_database_type(self.database_type)
self.ad_hoc_url = ad_hoc_url
if ad_hoc_url is None:
self.backend = Backend.backend_for_database_type(
self.database_type)
else:
self.backend = Backend(self.database_type, ad_hoc_url)
self.backend._verify()
def make(self, dependency_resources):
return self.backend
def clean(self, resource):
self.backend._dispose()
def isDirty(self):
return False
class DatabaseResource(testresources.TestResourceManager):
"""Database resource which connects and disconnects to a URL.
For SQLite, this means the database is created implicitly, as a result
of SQLite's usual behavior. If the database is a file-based URL,
it will remain after the resource has been torn down.
For all other kinds of databases, the resource indicates to connect
and disconnect from that database.
"""
def __init__(self, database_type, _enginefacade=None,
provision_new_database=True, ad_hoc_url=None):
super(DatabaseResource, self).__init__()
self.database_type = database_type
self.provision_new_database = provision_new_database
# NOTE(zzzeek) the _enginefacade is an optional argument
# here in order to accommodate Neutron's current direct use
# of the DatabaseResource object. Within oslo_db's use,
# the "enginefacade" will always be passed in from the
# test and/or fixture.
if _enginefacade:
self._enginefacade = _enginefacade
else:
self._enginefacade = enginefacade._context_manager
self.resources = [
('backend', BackendResource(database_type, ad_hoc_url))
]
def make(self, dependency_resources):
backend = dependency_resources['backend']
_enginefacade = self._enginefacade.make_new_manager()
if self.provision_new_database:
db_token = _random_ident()
url = backend.provisioned_database_url(db_token)
LOG.info(
"CREATE BACKEND %s TOKEN %s", backend.engine.url, db_token)
backend.create_named_database(db_token, conditional=True)
else:
db_token = None
url = backend.url
_enginefacade.configure(
logging_name="%s@%s" % (self.database_type, db_token))
_enginefacade._factory._start(connection=url)
engine = _enginefacade._factory._writer_engine
return ProvisionedDatabase(backend, _enginefacade, engine, db_token)
def clean(self, resource):
if self.provision_new_database:
LOG.info(
"DROP BACKEND %s TOKEN %s",
resource.backend.engine, resource.db_token)
resource.backend.drop_named_database(resource.db_token)
def isDirty(self):
return False
@debtcollector.removals.removed_class("TransactionResource")
class TransactionResource(testresources.TestResourceManager):
def __init__(self, database_resource, schema_resource):
super(TransactionResource, self).__init__()
self.resources = [
('database', database_resource),
('schema', schema_resource)
]
def clean(self, resource):
resource._dispose()
def make(self, dependency_resources):
conn = dependency_resources['database'].engine.connect()
return utils.NonCommittingEngine(conn)
def isDirty(self):
return True
class SchemaResource(testresources.TestResourceManager):
def __init__(self, database_resource, generate_schema, teardown=False):
super(SchemaResource, self).__init__()
self.generate_schema = generate_schema
self.teardown = teardown
self.resources = [
('database', database_resource)
]
def clean(self, resource):
LOG.info(
"DROP ALL OBJECTS, BACKEND %s",
resource.database.engine.url)
resource.database.backend.drop_all_objects(
resource.database.engine)
def make(self, dependency_resources):
if self.generate_schema:
self.generate_schema(dependency_resources['database'].engine)
return Schema()
def isDirty(self):
if self.teardown:
return True
else:
return False
class Backend(object):
"""Represent a particular database backend that may be provisionable.
The ``Backend`` object maintains a database type (e.g. database without
specific driver type, such as "sqlite", "postgresql", etc.),
a target URL, a base ``Engine`` for that URL object that can be used
to provision databases and a ``BackendImpl`` which knows how to perform
operations against this type of ``Engine``.
"""
backends_by_database_type = {}
def __init__(self, database_type, url):
self.database_type = database_type
self.url = url
self.verified = False
self.engine = None
self.impl = BackendImpl.impl(database_type)
self.current_dbs = set()
@classmethod
def backend_for_database_type(cls, database_type):
"""Return the ``Backend`` for the given database type.
"""
try:
backend = cls.backends_by_database_type[database_type]
except KeyError:
raise exception.BackendNotAvailable(
"Backend '%s' is unavailable: No such backend" % database_type)
else:
return backend._verify()
@classmethod
def all_viable_backends(cls):
"""Return an iterator of all ``Backend`` objects that are present
and provisionable.
"""
for backend in cls.backends_by_database_type.values():
try:
yield backend._verify()
except exception.BackendNotAvailable:
pass
def _verify(self):
"""Verify that this ``Backend`` is available and provisionable.
:return: this ``Backend``
:raises: ``BackendNotAvailable`` if the backend is not available.
"""
if not self.verified:
try:
eng = self._ensure_backend_available(self.url)
except exception.BackendNotAvailable as bne:
self._no_engine_reason = str(bne)
raise
else:
self.engine = eng
finally:
self.verified = True
if self.engine is None:
raise exception.BackendNotAvailable(self._no_engine_reason)
return self
@classmethod
def _ensure_backend_available(cls, url):
url = sa_url.make_url(str(url))
try:
eng = sqlalchemy.create_engine(url)
except ImportError as i_e:
# SQLAlchemy performs an "import" of the DBAPI module
# within create_engine(). So if ibm_db_sa, cx_oracle etc.
# isn't installed, we get an ImportError here.
LOG.info(
"The %(dbapi)s backend is unavailable: %(err)s",
dict(dbapi=url.drivername, err=i_e))
raise exception.BackendNotAvailable(
"Backend '%s' is unavailable: No DBAPI installed" %
url.drivername)
else:
try:
conn = eng.connect()
except sqlalchemy.exc.DBAPIError as d_e:
# upon connect, SQLAlchemy calls dbapi.connect(). This
# usually raises OperationalError and should always at
# least raise a SQLAlchemy-wrapped DBAPI Error.
LOG.info(
"The %(dbapi)s backend is unavailable: %(err)s",
dict(dbapi=url.drivername, err=d_e)
)
raise exception.BackendNotAvailable(
"Backend '%s' is unavailable: Could not connect" %
url.drivername)
else:
conn.close()
return eng
def _dispose(self):
"""Dispose main resources of this backend."""
self.impl.dispose(self.engine)
def create_named_database(self, ident, conditional=False):
"""Create a database with the given name."""
if not conditional or ident not in self.current_dbs:
self.current_dbs.add(ident)
self.impl.create_named_database(
self.engine, ident, conditional=conditional)
def drop_named_database(self, ident, conditional=False):
"""Drop a database with the given name."""
self.impl.drop_named_database(
self.engine, ident,
conditional=conditional)
self.current_dbs.discard(ident)
def drop_all_objects(self, engine):
"""Drop all database objects.
Drops all database objects remaining on the default schema of the
given engine.
"""
self.impl.drop_all_objects(engine)
def database_exists(self, ident):
"""Return True if a database of the given name exists."""
return self.impl.database_exists(self.engine, ident)
def provisioned_database_url(self, ident):
"""Given the identifier of an anoymous database, return a URL.
For hostname-based URLs, this typically involves switching just the
'database' portion of the URL with the given name and creating
a URL.
For SQLite URLs, the identifier may be used to create a filename
or may be ignored in the case of a memory database.
"""
return self.impl.provisioned_database_url(self.url, ident)
@debtcollector.removals.remove()
def provisioned_engine(self, ident):
"""Given the URL of a particular database backend and the string
name of a particular 'database' within that backend, return
an Engine instance whose connections will refer directly to the
named database.
"""
return self.impl.provisioned_engine(self.url, ident)
@classmethod
def _setup(cls):
"""Initial startup feature will scan the environment for configured
URLs and place them into the list of URLs we will use for provisioning.
This searches through OS_TEST_DBAPI_ADMIN_CONNECTION for URLs. If
not present, we set up URLs based on the "opportunistic" convention,
e.g. username+password = "openstack_citest".
The provisioning system will then use or discard these URLs as they
are requested, based on whether or not the target database is actually
found to be available.
"""
configured_urls = os.getenv('OS_TEST_DBAPI_ADMIN_CONNECTION', None)
if configured_urls:
configured_urls = configured_urls.split(";")
else:
configured_urls = [
impl.create_opportunistic_driver_url()
for impl in BackendImpl.all_impls()
]
for url_str in configured_urls:
url = sa_url.make_url(url_str)
m = re.match(r'([^+]+?)(?:\+(.+))?$', url.drivername)
database_type = m.group(1)
Backend.backends_by_database_type[database_type] = \
Backend(database_type, url)
@six.add_metaclass(abc.ABCMeta)
class BackendImpl(object):
"""Provide database-specific implementations of key provisioning
functions.
``BackendImpl`` is owned by a ``Backend`` instance which delegates
to it for all database-specific features.
"""
default_engine_kwargs = {}
supports_drop_fk = True
def dispose(self, engine):
LOG.info("DISPOSE ENGINE %s", engine)
engine.dispose()
@classmethod
def all_impls(cls):
"""Return an iterator of all possible BackendImpl objects.
These are BackendImpls that are implemented, but not
necessarily provisionable.
"""
for database_type in cls.impl.reg:
if database_type == '*':
continue
yield BackendImpl.impl(database_type)
@utils.dispatch_for_dialect("*")
def impl(drivername):
"""Return a ``BackendImpl`` instance corresponding to the
given driver name.
This is a dispatched method which will refer to the constructor
of implementing subclasses.
"""
raise NotImplementedError(
"No provision impl available for driver: %s" % drivername)
def __init__(self, drivername):
self.drivername = drivername
@abc.abstractmethod
def create_opportunistic_driver_url(self):
"""Produce a string url known as the 'opportunistic' URL.
This URL is one that corresponds to an established OpenStack
convention for a pre-established database login, which, when
detected as available in the local environment, is automatically
used as a test platform for a specific type of driver.
"""
@abc.abstractmethod
def create_named_database(self, engine, ident, conditional=False):
"""Create a database with the given name."""
@abc.abstractmethod
def drop_named_database(self, engine, ident, conditional=False):
"""Drop a database with the given name."""
def drop_all_objects(self, engine):
"""Drop all database objects.
Drops all database objects remaining on the default schema of the
given engine.
Per-db implementations will also need to drop items specific to those
systems, such as sequences, custom types (e.g. pg ENUM), etc.
"""
with engine.begin() as conn:
inspector = sqlalchemy.inspect(engine)
metadata = schema.MetaData()
tbs = []
all_fks = []
for table_name in inspector.get_table_names():
fks = []
for fk in inspector.get_foreign_keys(table_name):
# note that SQLite reflection does not have names
# for foreign keys until SQLAlchemy 1.0
if not fk['name']:
continue
fks.append(
schema.ForeignKeyConstraint((), (), name=fk['name'])
)
table = schema.Table(table_name, metadata, *fks)
tbs.append(table)
all_fks.extend(fks)
if self.supports_drop_fk:
for fkc in all_fks:
conn.execute(schema.DropConstraint(fkc))
for table in tbs:
conn.execute(schema.DropTable(table))
self.drop_additional_objects(conn)
def drop_additional_objects(self, conn):
pass
def provisioned_database_url(self, base_url, ident):
"""Return a provisioned database URL.
Given the URL of a particular database backend and the string
name of a particular 'database' within that backend, return
a URL which refers directly to the named database.
For hostname-based URLs, this typically involves switching just the
'database' portion of the URL with the given name and creating
an engine.
For URLs that instead deal with DSNs, the rules may be more custom;
for example, the engine may need to connect to the root URL and
then emit a command to switch to the named database.
"""
url = sa_url.make_url(str(base_url))
url.database = ident
return url
@debtcollector.removals.remove()
def provisioned_engine(self, base_url, ident):
"""Return a provisioned engine.
Given the URL of a particular database backend and the string
name of a particular 'database' within that backend, return
an Engine instance whose connections will refer directly to the
named database.
For hostname-based URLs, this typically involves switching just the
'database' portion of the URL with the given name and creating
an engine.
For URLs that instead deal with DSNs, the rules may be more custom;
for example, the engine may need to connect to the root URL and
then emit a command to switch to the named database.
"""
url = self.provisioned_database_url(base_url, ident)
return session.create_engine(
url,
logging_name="%s@%s" % (self.drivername, ident),
**self.default_engine_kwargs
)
@BackendImpl.impl.dispatch_for("mysql")
class MySQLBackendImpl(BackendImpl):
# only used for deprecated provisioned_engine() function.
default_engine_kwargs = {'mysql_sql_mode': 'TRADITIONAL'}
def create_opportunistic_driver_url(self):
return "mysql+pymysql://openstack_citest:openstack_citest@localhost/"
def create_named_database(self, engine, ident, conditional=False):
with engine.connect() as conn:
if not conditional or not self.database_exists(conn, ident):
conn.execute("CREATE DATABASE %s" % ident)
def drop_named_database(self, engine, ident, conditional=False):
with engine.connect() as conn:
if not conditional or self.database_exists(conn, ident):
conn.execute("DROP DATABASE %s" % ident)
def database_exists(self, engine, ident):
return bool(engine.scalar("SHOW DATABASES LIKE '%s'" % ident))
@BackendImpl.impl.dispatch_for("sqlite")
class SQLiteBackendImpl(BackendImpl):
supports_drop_fk = False
def dispose(self, engine):
LOG.info("DISPOSE ENGINE %s", engine)
engine.dispose()
url = engine.url
self._drop_url_file(url, True)
def _drop_url_file(self, url, conditional):
filename = url.database
if filename and (not conditional or os.access(filename, os.F_OK)):
os.remove(filename)
def create_opportunistic_driver_url(self):
return "sqlite://"
def create_named_database(self, engine, ident, conditional=False):
url = self.provisioned_database_url(engine.url, ident)
filename = url.database
if filename and (not conditional or not os.access(filename, os.F_OK)):
eng = sqlalchemy.create_engine(url)
eng.connect().close()
def drop_named_database(self, engine, ident, conditional=False):
url = self.provisioned_database_url(engine.url, ident)
filename = url.database
if filename and (not conditional or os.access(filename, os.F_OK)):
os.remove(filename)
def database_exists(self, engine, ident):
url = self.provisioned_database_url(engine.url, ident)
filename = url.database
return not filename or os.access(filename, os.F_OK)
def provisioned_database_url(self, base_url, ident):
if base_url.database:
return sa_url.make_url("sqlite:////tmp/%s.db" % ident)
else:
return base_url
@BackendImpl.impl.dispatch_for("postgresql")
class PostgresqlBackendImpl(BackendImpl):
def create_opportunistic_driver_url(self):
return "postgresql://openstack_citest:openstack_citest"\
"@localhost/postgres"
def create_named_database(self, engine, ident, conditional=False):
with engine.connect().execution_options(
isolation_level="AUTOCOMMIT") as conn:
if not conditional or not self.database_exists(conn, ident):
conn.execute("CREATE DATABASE %s" % ident)
def drop_named_database(self, engine, ident, conditional=False):
with engine.connect().execution_options(
isolation_level="AUTOCOMMIT") as conn:
self._close_out_database_users(conn, ident)
if conditional:
conn.execute("DROP DATABASE IF EXISTS %s" % ident)
else:
conn.execute("DROP DATABASE %s" % ident)
def drop_additional_objects(self, conn):
enums = [e['name'] for e in sqlalchemy.inspect(conn).get_enums()]
for e in enums:
conn.execute("DROP TYPE %s" % e)
def database_exists(self, engine, ident):
return bool(
engine.scalar(
sqlalchemy.text(
"select datname from pg_database "
"where datname=:name"), name=ident)
)
def _close_out_database_users(self, conn, ident):
"""Attempt to guarantee a database can be dropped.
Optional feature which guarantees no connections with our
username are attached to the DB we're going to drop.
This method has caveats; for one, the 'pid' column was named
'procpid' prior to Postgresql 9.2. But more critically,
prior to 9.2 this operation required superuser permissions,
even if the connections we're closing are under the same username
as us. In more recent versions this restriction has been
lifted for same-user connections.
"""
if conn.dialect.server_version_info >= (9, 2):
conn.execute(
sqlalchemy.text(
"select pg_terminate_backend(pid) "
"from pg_stat_activity "
"where usename=current_user and "
"pid != pg_backend_pid() "
"and datname=:dname"
), dname=ident)
def _random_ident():
return ''.join(
random.choice(string.ascii_lowercase)
for i in moves.range(10))
Backend._setup()
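A hedged sketch of driving this provisioning API directly; most consumers use the test fixtures instead. The sqlite backend is chosen because it needs no external server or openstack_citest credentials:
from oslo_db.sqlalchemy import provision

backend = provision.Backend.backend_for_database_type('sqlite')
token = provision._random_ident()   # reusing the private helper above
backend.create_named_database(token)
try:
    url = backend.provisioned_database_url(token)
    # ... point an engine at `url` and run tests ...
finally:
    backend.drop_named_database(token)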


@ -1,196 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Session Handling for SQLAlchemy backend.
Recommended ways to use sessions within this framework:
* Use the ``enginefacade`` system for connectivity, session and
transaction management:
.. code-block:: python
from oslo_db.sqlalchemy import enginefacade
@enginefacade.reader
def get_foo(context, foo):
return (model_query(models.Foo, context.session).
filter_by(foo=foo).
first())
@enginefacade.writer
def update_foo(context, id, newfoo):
(model_query(models.Foo, context.session).
filter_by(id=id).
update({'foo': newfoo}))
@enginefacade.writer
def create_foo(context, values):
foo_ref = models.Foo()
foo_ref.update(values)
foo_ref.save(context.session)
return foo_ref
In the above system, transactions are committed automatically, and
are shared among all dependent database methods. Ensure
that methods which "write" data are enclosed within @writer blocks.
.. note:: Statements in the session scope will not be automatically retried.
* If you create models within the session, they need to be added, but you
do not need to call `model.save()`:
.. code-block:: python
@enginefacade.writer
def create_many_foo(context, foos):
for foo in foos:
foo_ref = models.Foo()
foo_ref.update(foo)
context.session.add(foo_ref)
@enginefacade.writer
def update_bar(context, foo_id, newbar):
foo_ref = (model_query(models.Foo, context.session).
filter_by(id=foo_id).
first())
(model_query(models.Bar, context.session).
filter_by(id=foo_ref['bar_id']).
update({'bar': newbar}))
The two queries in `update_bar` can alternatively be expressed using
a single query, which may be more efficient depending on scenario:
.. code-block:: python
@enginefacade.writer
def update_bar(context, foo_id, newbar):
subq = (model_query(models.Foo.bar_id, context.session).
filter_by(id=foo_id).
limit(1).
subquery())
(model_query(models.Bar, context.session).
filter_by(id=subq.as_scalar()).
update({'bar': newbar}))
For reference, this emits approximately the following SQL statement:
.. code-block:: sql
UPDATE bar SET bar = '${newbar}'
WHERE id=(SELECT bar_id FROM foo WHERE id = '${foo_id}' LIMIT 1);
.. note:: `create_duplicate_foo` is a trivially simple example of catching an
exception while using a savepoint. Here we create two duplicate
instances with the same primary key, and must catch the exception outside
of the context managed by a single session:
.. code-block:: python
@enginefacade.writer
def create_duplicate_foo(context):
foo1 = models.Foo()
foo2 = models.Foo()
foo1.id = foo2.id = 1
try:
with context.session.begin_nested():
context.session.add(foo1)
context.session.add(foo2)
except exception.DBDuplicateEntry as e:
handle_error(e)
* The enginefacade system eliminates the need to decide when sessions need
to be passed between methods. All methods should instead share a common
context object; the enginefacade system will maintain the transaction
across method calls.
.. code-block:: python
@enginefacade.writer
def myfunc(context, foo):
# do some database things
bar = _private_func(context, foo)
return bar
def _private_func(context, foo):
with enginefacade.writer.using(context) as session:
# do some other database things
session.add(SomeObject())
return bar
* Avoid ``with_lockmode('UPDATE')`` when possible.
FOR UPDATE is not compatible with MySQL/Galera. Instead, an "opportunistic"
approach should be used, such that if an UPDATE fails, the entire
transaction should be retried. The @wrap_db_retry decorator is one
such system that can be used to achieve this.
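One hedged sketch of that retry approach; the decorator parameters here
are illustrative, not prescriptive.
.. code-block:: python
    from oslo_db import api as oslo_db_api
    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    @enginefacade.writer
    def update_foo_opportunistically(context, id, newfoo):
        # illustrative retry policy: the whole transaction is retried
        # if the UPDATE hits a deadlock
        (model_query(models.Foo, context.session).
         filter_by(id=id).
         update({'foo': newfoo}))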
Enabling soft deletes:
* To use/enable soft-deletes, `SoftDeleteMixin` may be used. For example:
.. code-block:: python
class NovaBase(models.SoftDeleteMixin, models.ModelBase):
pass
Efficient use of soft deletes:
* While there is a ``model.soft_delete()`` method, prefer
``query.soft_delete()``. Some examples:
.. code-block:: python
@enginefacade.writer
def soft_delete_bar(context):
# synchronize_session=False will prevent the ORM from attempting
# to search the Session for instances matching the DELETE;
# this is typically not necessary for small operations.
count = model_query(BarModel, context.session).\\
find(some_condition).soft_delete(synchronize_session=False)
if count == 0:
raise Exception("0 entries were soft deleted")
@enginefacade.writer
def complex_soft_delete_with_synchronization_bar(context):
# use synchronize_session='evaluate' when you'd like to attempt
# to update the state of the Session to match that of the DELETE.
# This is potentially helpful if the operation is complex and
# continues to work with instances that were loaded, though
# not usually needed.
count = (model_query(BarModel, context.session).
find(some_condition).
soft_delete(synchronize_session='evaluate'))
if count == 0:
raise Exception("0 entries were soft deleted")
"""
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import engines
from oslo_db.sqlalchemy import orm
EngineFacade = enginefacade.LegacyEngineFacade
create_engine = engines.create_engine
get_maker = orm.get_maker
Query = orm.Query
Session = orm.Session
__all__ = ["EngineFacade", "create_engine", "get_maker", "Query", "Session"]


@ -1,247 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import debtcollector
import debtcollector.moves
import fixtures
import testresources
try:
from oslotest import base as test_base
except ImportError:
raise NameError('Oslotest is not installed. Please add oslotest in your'
' test-requirements')
from oslo_utils import reflection
import six
from oslo_db import exception
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import provision
from oslo_db.sqlalchemy import session
from oslo_db.sqlalchemy.test_fixtures import optimize_package_test_loader
@debtcollector.removals.removed_class("DbFixture")
class DbFixture(fixtures.Fixture):
"""Basic database fixture.
Allows tests to run against various database backends, such as SQLite,
MySQL and PostgreSQL. The SQLite backend is used by default. To override
the default backend URI, set the env variable
OS_TEST_DBAPI_ADMIN_CONNECTION with database admin credentials for the
specific backend.
"""
DRIVER = "sqlite"
# these names are deprecated, and are not used by DbFixture.
# they are here for backwards compatibility with test suites that
# are referring to them directly.
DBNAME = PASSWORD = USERNAME = 'openstack_citest'
def __init__(self, test, skip_on_unavailable_db=True):
super(DbFixture, self).__init__()
self.test = test
self.skip_on_unavailable_db = skip_on_unavailable_db
def setUp(self):
super(DbFixture, self).setUp()
testresources.setUpResources(
self.test, self.test.resources, testresources._get_result())
self.addCleanup(
testresources.tearDownResources,
self.test, self.test.resources, testresources._get_result()
)
if not self.test._has_db_resource():
msg = self.test._get_db_resource_not_available_reason()
if self.test.SKIP_ON_UNAVAILABLE_DB:
self.test.skip(msg)
else:
self.test.fail(msg)
if self.test.SCHEMA_SCOPE:
self.test.engine = self.test.transaction_engine
self.test.sessionmaker = session.get_maker(
self.test.transaction_engine)
else:
self.test.engine = self.test.db.engine
self.test.sessionmaker = session.get_maker(self.test.engine)
self.addCleanup(setattr, self.test, 'sessionmaker', None)
self.addCleanup(setattr, self.test, 'engine', None)
self.test.enginefacade = enginefacade._TestTransactionFactory(
self.test.engine, self.test.sessionmaker, apply_global=True)
self.addCleanup(self.test.enginefacade.dispose_global)
@debtcollector.removals.removed_class("DbTestCase")
class DbTestCase(test_base.BaseTestCase):
"""Base class for testing of DB code.
"""
FIXTURE = DbFixture
SCHEMA_SCOPE = None
SKIP_ON_UNAVAILABLE_DB = True
_db_not_available = {}
_schema_resources = {}
_database_resources = {}
def _get_db_resource_not_available_reason(self):
return self._db_not_available.get(self.FIXTURE.DRIVER, None)
def _has_db_resource(self):
return self._database_resources.get(
self.FIXTURE.DRIVER, None) is not None
def _resources_for_driver(self, driver, schema_scope, generate_schema):
# testresources relies on the identity and state of the
# TestResourceManager objects in play to correctly manage
# resources, and it also hardcodes to looking at the
# ".resources" attribute on the test object, even though the
# setUpResources() function passes the list of resources in,
# so we have to code the TestResourceManager logic into the
# .resources attribute and ensure that the same set of test
# variables always produces the same TestResourceManager objects.
if driver not in self._database_resources:
try:
self._database_resources[driver] = \
provision.DatabaseResource(driver,
provision_new_database=True)
except exception.BackendNotAvailable as bne:
self._database_resources[driver] = None
self._db_not_available[driver] = str(bne)
database_resource = self._database_resources[driver]
if database_resource is None:
return []
if schema_scope:
key = (driver, schema_scope)
if key not in self._schema_resources:
schema_resource = provision.SchemaResource(
database_resource, generate_schema)
transaction_resource = provision.TransactionResource(
database_resource, schema_resource)
self._schema_resources[key] = \
transaction_resource
transaction_resource = self._schema_resources[key]
return [
('transaction_engine', transaction_resource),
('db', database_resource),
]
else:
key = (driver, None)
if key not in self._schema_resources:
self._schema_resources[key] = provision.SchemaResource(
database_resource, generate_schema, teardown=True)
schema_resource = self._schema_resources[key]
return [
('schema', schema_resource),
('db', database_resource)
]
@property
def resources(self):
return self._resources_for_driver(
self.FIXTURE.DRIVER, self.SCHEMA_SCOPE, self.generate_schema)
def setUp(self):
super(DbTestCase, self).setUp()
self.useFixture(
self.FIXTURE(
self, skip_on_unavailable_db=self.SKIP_ON_UNAVAILABLE_DB))
def generate_schema(self, engine):
"""Generate schema objects to be used within a test.
The function is separate from the setUp() case as the scope
of this method is controlled by the provisioning system. A
test that specifies SCHEMA_SCOPE may not call this method
for each test, as the schema may be maintained from a previous run.
"""
if self.SCHEMA_SCOPE:
# if SCHEMA_SCOPE is set, then this method definitely
# has to be implemented. This is a guard against a test
# that inadvertently does schema setup within setUp().
raise NotImplementedError(
"This test requires schema-level setup to be "
"implemented within generate_schema().")
@debtcollector.removals.removed_class("OpportunisticTestCase")
class OpportunisticTestCase(DbTestCase):
"""Placeholder for backwards compatibility."""
ALLOWED_DIALECTS = ['sqlite', 'mysql', 'postgresql']
def backend_specific(*dialects):
"""Decorator to skip backend specific tests on inappropriate engines.
::dialects: list of dialects names under which the test will be launched.
"""
def wrap(f):
@six.wraps(f)
def ins_wrap(self):
if not set(dialects).issubset(ALLOWED_DIALECTS):
raise ValueError(
"Please use allowed dialects: %s" % ALLOWED_DIALECTS)
if self.engine.name not in dialects:
msg = ('The test "%s" can be run '
'only on %s. Current engine is %s.')
args = (reflection.get_callable_name(f), ', '.join(dialects),
self.engine.name)
self.skip(msg % args)
else:
return f(self)
return ins_wrap
return wrap
@debtcollector.removals.removed_class("MySQLOpportunisticFixture")
class MySQLOpportunisticFixture(DbFixture):
DRIVER = 'mysql'
@debtcollector.removals.removed_class("PostgreSQLOpportunisticFixture")
class PostgreSQLOpportunisticFixture(DbFixture):
DRIVER = 'postgresql'
@debtcollector.removals.removed_class("MySQLOpportunisticTestCase")
class MySQLOpportunisticTestCase(OpportunisticTestCase):
FIXTURE = MySQLOpportunisticFixture
@debtcollector.removals.removed_class("PostgreSQLOpportunisticTestCase")
class PostgreSQLOpportunisticTestCase(OpportunisticTestCase):
FIXTURE = PostgreSQLOpportunisticFixture
optimize_db_test_loader = debtcollector.moves.moved_function(
optimize_package_test_loader, "optimize_db_test_loader", __name__)
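A hedged sketch of the backend_specific decorator defined above; the test class and method are hypothetical (and DbTestCase itself is deprecated, as this module notes):
from oslo_db.sqlalchemy import test_base


class MyBackendTest(test_base.DbTestCase):       # hypothetical test case
    @test_base.backend_specific('mysql', 'postgresql')
    def test_server_side_feature(self):
        # Runs only when self.engine is MySQL or PostgreSQL; skipped
        # when the active engine is SQLite.
        pass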


@ -1,622 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import os
import testresources
import testscenarios
from oslo_db import exception
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import provision
from oslo_db.sqlalchemy import utils
class ReplaceEngineFacadeFixture(fixtures.Fixture):
"""A fixture that will plug the engine of one enginefacade into another.
This fixture can be used by test suites that already have their own non-
oslo_db database setup / teardown schemes, to plug any URL or test-oriented
enginefacade as-is into an enginefacade-oriented API.
For applications that use oslo.db's testing fixtures, the
ReplaceEngineFacade fixture is used internally.
E.g.::
class MyDBTest(TestCase):
def setUp(self):
from myapplication.api import main_enginefacade
my_test_enginefacade = enginefacade.transaction_context()
my_test_enginefacade.configure(connection=my_test_url)
self.useFixture(
ReplaceEngineFacadeFixture(
main_enginefacade, my_test_enginefacade))
Above, the main_enginefacade object is the normal application level
one, and my_test_enginefacade is a local one that we've created to
refer to some testing database. Throughout the fixture's setup,
the application level enginefacade will use the engine factory and
engines of the testing enginefacade, and at fixture teardown will be
replaced back.
"""
def __init__(self, enginefacade, replace_with_enginefacade):
super(ReplaceEngineFacadeFixture, self).__init__()
self.enginefacade = enginefacade
self.replace_with_enginefacade = replace_with_enginefacade
def _setUp(self):
_reset_facade = self.enginefacade.patch_factory(
self.replace_with_enginefacade._factory
)
self.addCleanup(_reset_facade)
class BaseDbFixture(fixtures.Fixture):
"""Base database provisioning fixture.
This serves as the base class for the other fixtures, but by itself
does not implement _setUp(). It provides the basis for the flags
implemented by the various capability mixins (GenerateSchema,
DeletesFromSchema, etc.) as well as providing an abstraction over
the provisioning objects, which are specific to testresources.
Overall, consumers of this fixture just need to use the right classes
and the testresources mechanics are taken care of.
"""
DRIVER = "sqlite"
_DROP_SCHEMA_PER_TEST = True
_BUILD_SCHEMA = False
_BUILD_WITH_MIGRATIONS = False
_database_resources = {}
_db_not_available = {}
_schema_resources = {}
def __init__(self, driver=None, ident=None):
super(BaseDbFixture, self).__init__()
self.driver = driver or self.DRIVER
self.ident = ident or "default"
self.resource_key = (self.driver, self.__class__, self.ident)
def get_enginefacade(self):
"""Return an enginefacade._TransactionContextManager.
This is typically a global variable like "context_manager" declared
in the db/api.py module and is the object returned by
enginefacade.transaction_context().
If left not implemented, the global enginefacade manager is used.
For the case where a project uses per-object or per-test enginefacades
like Gnocchi, the get_per_test_enginefacade()
method should also be implemented.
"""
return enginefacade._context_manager
def get_per_test_enginefacade(self):
"""Return an enginefacade._TransactionContextManager per test.
This facade should be the one that the test expects the code to
use. Usually this is the same one returned by get_enginefacade()
which is the default. For special applications like Gnocchi,
this can be overridden to provide an instance-level facade.
"""
return self.get_enginefacade()
def _get_db_resource_not_available_reason(self):
return self._db_not_available.get(self.resource_key, None)
def _has_db_resource(self):
return self._database_resources.get(
self.resource_key, None) is not None
def _generate_schema_resource(self, database_resource):
return provision.SchemaResource(
database_resource,
None if not self._BUILD_SCHEMA
else self.generate_schema_create_all
if not self._BUILD_WITH_MIGRATIONS
else self.generate_schema_migrations,
self._DROP_SCHEMA_PER_TEST
)
def _get_resources(self):
key = self.resource_key
# the DatabaseResource and SchemaResource provision objects
# can be used by testresources as a marker outside of an individual
# test to indicate that this database / schema can be used across
# multiple tests. To make this work, many instances of this
# fixture have to return the *same* resource object given the same
# inputs. so we cache these in class-level dictionaries.
if key not in self._database_resources:
_enginefacade = self.get_enginefacade()
try:
self._database_resources[key] = \
self._generate_database_resource(_enginefacade)
except exception.BackendNotAvailable as bne:
self._database_resources[key] = None
self._db_not_available[key] = str(bne)
database_resource = self._database_resources[key]
if database_resource is None:
return []
else:
if key in self._schema_resources:
schema_resource = self._schema_resources[key]
else:
schema_resource = self._schema_resources[key] = \
self._generate_schema_resource(database_resource)
return [
('_schema_%s' % self.ident, schema_resource),
('_db_%s' % self.ident, database_resource)
]
class GeneratesSchema(object):
"""Mixin defining a fixture as generating a schema using create_all().
This is a "capability" mixin that works in conjunction with classes
that include BaseDbFixture as a base.
"""
_BUILD_SCHEMA = True
_BUILD_WITH_MIGRATIONS = False
def generate_schema_create_all(self, engine):
"""A hook which should generate the model schema using create_all().
This hook is called within the scope of creating the database
assuming BUILD_WITH_MIGRATIONS is False.
"""
class GeneratesSchemaFromMigrations(GeneratesSchema):
"""Mixin defining a fixture as generating a schema using migrations.
This is a "capability" mixin that works in conjunction with classes
that include BaseDbFixture as a base.
"""
_BUILD_WITH_MIGRATIONS = True
def generate_schema_migrations(self, engine):
"""A hook which should generate the model schema using migrations.
This hook is called within the scope of creating the database
assuming BUILD_WITH_MIGRATIONS is True.
"""
class ResetsData(object):
"""Mixin defining a fixture that resets schema data without dropping."""
_DROP_SCHEMA_PER_TEST = False
def setup_for_reset(self, engine, enginefacade):
""""Perform setup that may be needed before the test runs."""
def reset_schema_data(self, engine, enginefacade):
"""Reset the data in the schema."""
class DeletesFromSchema(ResetsData):
"""Mixin defining a fixture that can delete from all tables in place.
When DeletesFromSchema is present in a fixture,
_DROP_SCHEMA_PER_TEST is now False; this means that the
"teardown" flag of provision.SchemaResource will be False, which
prevents SchemaResource from dropping all objects within the schema
after each test.
This is a "capability" mixin that works in conjunction with classes
that include BaseDbFixture as a base.
"""
def reset_schema_data(self, engine, facade):
self.delete_from_schema(engine)
def delete_from_schema(self, engine):
"""A hook which should delete all data from an existing schema.
Should *not* drop any objects, just remove data from tables
that needs to be reset between tests.
"""
class RollsBackTransaction(ResetsData):
"""Fixture class that maintains a database transaction per test.
"""
def setup_for_reset(self, engine, facade):
conn = engine.connect()
engine = utils.NonCommittingEngine(conn)
self._reset_engine = enginefacade._TestTransactionFactory.apply_engine(
engine, facade)
def reset_schema_data(self, engine, facade):
self._reset_engine()
engine._dispose()
class SimpleDbFixture(BaseDbFixture):
"""Fixture which provides an engine from a fixed URL.
The SimpleDbFixture is generally appropriate only for a SQLite memory
database, as this database is naturally isolated from other processes and
does not require management of schemas. For tests that need to
run specifically against MySQL or Postgresql, the OpportunisticDbFixture
is more appropriate.
The database connection information itself comes from the provisioning
system, matching the desired driver (typically sqlite) to the default URL
that provisioning provides for this driver (in the case of sqlite, it's
the SQLite memory URL, e.g. sqlite://. For MySQL and Postgresql, it's
the familiar "openstack_citest" URL on localhost).
There are a variety of create/drop schemes that can take place:
* The default is to procure a database connection on setup,
and at teardown, an instruction is issued to "drop" all
objects in the schema (e.g. tables, indexes). The SQLAlchemy
engine itself remains referenced at the class level for subsequent
re-use.
* When the GeneratesSchema or GeneratesSchemaFromMigrations mixins
are implemented, the appropriate generate_schema method is also
called when the fixture is set up, by default this is per test.
* When the DeletesFromSchema mixin is implemented, the generate_schema
method is now only called **once**, and the "drop all objects"
system is replaced with the delete_from_schema method. This
allows the same database to remain set up with all schema objects
intact, so that expensive migrations need not be run on every test.
* The fixture does **not** dispose the engine at the end of a test.
It is assumed the same engine will be re-used many times across
many tests. The AdHocDbFixture extends this one to provide
engine.dispose() at the end of a test.
This fixture is intended to work without needing a reference to
the test itself, and therefore cannot take advantage of the
OptimisingTestSuite.
"""
_dependency_resources = {}
def _get_provisioned_db(self):
return self._dependency_resources["_db_%s" % self.ident]
def _generate_database_resource(self, _enginefacade):
return provision.DatabaseResource(self.driver, _enginefacade,
provision_new_database=False)
def _setUp(self):
super(SimpleDbFixture, self)._setUp()
cls = self.__class__
if "_db_%s" % self.ident not in cls._dependency_resources:
resources = self._get_resources()
# initialize resources the same way that testresources does.
for name, resource in resources:
cls._dependency_resources[name] = resource.getResource()
provisioned_db = self._get_provisioned_db()
if not self._DROP_SCHEMA_PER_TEST:
self.setup_for_reset(
provisioned_db.engine, provisioned_db.enginefacade)
self.useFixture(ReplaceEngineFacadeFixture(
self.get_per_test_enginefacade(),
provisioned_db.enginefacade
))
if not self._DROP_SCHEMA_PER_TEST:
self.addCleanup(
self.reset_schema_data,
provisioned_db.engine, provisioned_db.enginefacade)
self.addCleanup(self._cleanup)
def _teardown_resources(self):
for name, resource in self._get_resources():
dep = self._dependency_resources.pop(name)
resource.finishedWith(dep)
def _cleanup(self):
pass
class AdHocDbFixture(SimpleDbFixture):
""""Fixture which creates and disposes a database engine per test.
Also allows a specific URL to be passed, meaning the fixture can
be hardcoded to a specific SQLite file.
For a SQLite, this fixture will create the named database upon setup
and tear it down upon teardown. For other databases, the
database is assumed to exist already and will remain after teardown.
"""
def __init__(self, url=None):
if url:
self.url = provision.sa_url.make_url(str(url))
driver = self.url.get_backend_name()
else:
driver = None
self.url = None
BaseDbFixture.__init__(
self, driver=driver,
ident=provision._random_ident())
self.url = url
def _generate_database_resource(self, _enginefacade):
return provision.DatabaseResource(
self.driver, _enginefacade, ad_hoc_url=self.url,
provision_new_database=False)
def _cleanup(self):
self._teardown_resources()
class OpportunisticDbFixture(BaseDbFixture):
"""Fixture which uses testresources fully for optimised runs.
This fixture relies upon the use of the OpportunisticDBTestMixin to supply
a test.resources attribute, and also works much more effectively when
combined with the testresources.OptimisingTestSuite. The
optimize_db_test_loader() function should be used at the module and package
levels to optimize database provisioning across many tests.
"""
def __init__(self, test, driver=None, ident=None):
super(OpportunisticDbFixture, self).__init__(
driver=driver, ident=ident)
self.test = test
def _get_provisioned_db(self):
return getattr(self.test, "_db_%s" % self.ident)
def _generate_database_resource(self, _enginefacade):
return provision.DatabaseResource(
self.driver, _enginefacade, provision_new_database=True)
def _setUp(self):
super(OpportunisticDbFixture, self)._setUp()
if not self._has_db_resource():
return
provisioned_db = self._get_provisioned_db()
if not self._DROP_SCHEMA_PER_TEST:
self.setup_for_reset(
provisioned_db.engine, provisioned_db.enginefacade)
self.useFixture(ReplaceEngineFacadeFixture(
self.get_per_test_enginefacade(),
provisioned_db.enginefacade
))
if not self._DROP_SCHEMA_PER_TEST:
self.addCleanup(
self.reset_schema_data,
provisioned_db.engine, provisioned_db.enginefacade)
class OpportunisticDBTestMixin(object):
"""Test mixin that integrates the test suite with testresources.
There are three goals to this system:
1. Allow creation of "stub" test suites that will run all the tests in a
parent suite against a specific kind of database (e.g. Mysql,
Postgresql), where the entire suite will be skipped if that target
kind of database is not available to the suite.
2. provide a test with a process-local, anonymously named schema within a
target database, so that the test can run concurrently with other tests
without conflicting data
3. provide compatibility with the testresources.OptimisingTestSuite, which
organizes TestCase instances ahead of time into groups that all
make use of the same type of database, setting up and tearing down
a database schema once for the scope of any number of tests within.
This technique is essential when testing against a non-SQLite database
because building a schema is expensive, and is ideally
accomplished using the application's schema migrations, which are
vastly slower still than a straight create_all().
This mixin provides the .resources attribute required by testresources when
using the OptimisingTestSuite. The .resources attribute then provides a
collection of testresources.TestResourceManager objects, which are defined
here in oslo_db.sqlalchemy.provision. These objects know how to find
available database backends, build up temporary databases, and invoke
schema generation and teardown instructions. The actual "build the schema
objects" part of the equation, and optionally a "delete from all the
tables" step, is provided by the implementing application itself.
"""
SKIP_ON_UNAVAILABLE_DB = True
FIXTURE = OpportunisticDbFixture
_collected_resources = None
_instantiated_fixtures = None
@property
def resources(self):
"""Provide a collection of TestResourceManager objects.
The collection here is memoized, both at the level of the test
case itself, as well as in the fixture object(s) which provide
those resources.
"""
if self._collected_resources is not None:
return self._collected_resources
fixtures = self._instantiate_fixtures()
self._collected_resources = []
for fixture in fixtures:
self._collected_resources.extend(fixture._get_resources())
return self._collected_resources
def setUp(self):
self._setup_fixtures()
super(OpportunisticDBTestMixin, self).setUp()
def _get_default_provisioned_db(self):
return self._db_default
def _instantiate_fixtures(self):
if self._instantiated_fixtures:
return self._instantiated_fixtures
self._instantiated_fixtures = utils.to_list(self.generate_fixtures())
return self._instantiated_fixtures
def generate_fixtures(self):
return self.FIXTURE(test=self)
def _setup_fixtures(self):
testresources.setUpResources(
self, self.resources, testresources._get_result())
self.addCleanup(
testresources.tearDownResources,
self, self.resources, testresources._get_result()
)
fixtures = self._instantiate_fixtures()
for fixture in fixtures:
self.useFixture(fixture)
if not fixture._has_db_resource():
msg = fixture._get_db_resource_not_available_reason()
if self.SKIP_ON_UNAVAILABLE_DB:
self.skip(msg)
else:
self.fail(msg)
class MySQLOpportunisticFixture(OpportunisticDbFixture):
DRIVER = 'mysql'
class PostgresqlOpportunisticFixture(OpportunisticDbFixture):
DRIVER = 'postgresql'
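For illustration, a minimal sketch of how a consuming project might combine the mixin with a backend-specific fixture; the test class and its body here are hypothetical, but the pattern follows the mixin docstring and the fixtures above:
from oslo_db.sqlalchemy import enginefacade
from oslotest import base as oslo_test_base

class MyMySQLBackedTest(OpportunisticDBTestMixin, oslo_test_base.BaseTestCase):
    # the whole test is skipped when the MySQL backend is unavailable,
    # per SKIP_ON_UNAVAILABLE_DB above
    FIXTURE = MySQLOpportunisticFixture

    def test_can_connect(self):
        # by the time setUp() completes, the per-test enginefacade points
        # at a freshly provisioned, anonymously named schema
        engine = enginefacade.writer.get_engine()
        self.assertIsNotNone(engine.connect())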
def optimize_package_test_loader(file_):
"""Organize package-level tests into a testresources.OptimizingTestSuite.
This function provides a unittest-compatible load_tests hook
for a given package; for per-module, use the
:func:`.optimize_module_test_loader` function.
When a unittest or subunit style
test runner is used, the function will be called in order to
return a TestSuite containing the tests to run; this function
ensures that this suite is an OptimisingTestSuite, which will organize
the production of test resources across groups of tests at once.
The function is invoked as::
from oslo_db.sqlalchemy import test_base
load_tests = test_base.optimize_package_test_loader(__file__)
The loader *must* be present in the package-level __init__.py.
The function also applies testscenarios expansion to all test collections.
This is so that an existing test suite that already needs to build
TestScenarios from a load_tests call can still have this take place when
replaced with this function.
"""
this_dir = os.path.dirname(file_)
def load_tests(loader, found_tests, pattern):
result = testresources.OptimisingTestSuite()
result.addTests(found_tests)
pkg_tests = loader.discover(start_dir=this_dir, pattern=pattern)
result.addTests(testscenarios.generate_scenarios(pkg_tests))
return result
return load_tests
def optimize_module_test_loader():
"""Organize module-level tests into a testresources.OptimizingTestSuite.
This function provides a unittest-compatible load_tests hook
for a given module; for per-package, use the
:func:`.optimize_package_test_loader` function.
When a unittest or subunit style
test runner is used, the function will be called in order to
return a TestSuite containing the tests to run; this function
ensures that this suite is an OptimisingTestSuite, which will organize
the production of test resources across groups of tests at once.
The function is invoked as::
from oslo_db.sqlalchemy import test_base
load_tests = test_base.optimize_module_test_loader()
The loader *must* be present in an individual module, and *not* the
package-level __init__.py.
The function also applies testscenarios expansion to all test collections.
This is so that an existing test suite that already needs to build
TestScenarios from a load_tests call can still have this take place when
replaced with this function.
"""
def load_tests(loader, found_tests, pattern):
result = testresources.OptimisingTestSuite()
result.addTests(testscenarios.generate_scenarios(found_tests))
return result
return load_tests

View File

@ -1,619 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# Copyright 2012-2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import collections
import functools
import logging
import pprint
import alembic
import alembic.autogenerate
import alembic.migration
import pkg_resources as pkg
import six
import sqlalchemy
import sqlalchemy.exc
import sqlalchemy.sql.expression as expr
import sqlalchemy.types as types
from oslo_db import exception as exc
from oslo_db.sqlalchemy import provision
from oslo_db.sqlalchemy import utils
LOG = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class WalkVersionsMixin(object):
"""Test mixin to check upgrade and downgrade ability of migration.
This is only suitable for testing of migrate_ migration scripts. An
abstract class mixin. `INIT_VERSION`, `REPOSITORY` and `migration_api`
attributes must be implemented in subclasses.
.. _auxiliary-dynamic-methods:
Auxiliary Methods:
The `migrate_up` and `migrate_down` instance methods of the class can be
used with auxiliary methods named `_pre_upgrade_<revision_id>`,
`_check_<revision_id>`, `_post_downgrade_<revision_id>`. These methods
are intended to check the correctness of data operations for the
applied changes. They should be implemented for every particular
revision which you want to check with data. Recommendations for
implementing `_pre_upgrade_<revision_id>`, `_check_<revision_id>` and
`_post_downgrade_<revision_id>`:
* `_pre_upgrade_<revision_id>`: provide data appropriate to
the next revision. The id of the revision about to be applied
should be used.
* `_check_<revision_id>`: insert, select and delete operations
with newly applied changes. The data provided by
`_pre_upgrade_<revision_id>` will be used.
* `_post_downgrade_<revision_id>`: check for the absence
(inability to use) of changes provided by the reverted revision.
Execution order of auxiliary methods when a revision is upgraded:
`_pre_upgrade_###` => `upgrade` => `_check_###`
Execution order of auxiliary methods when a revision is downgraded:
`downgrade` => `_post_downgrade_###`
.. _migrate: https://sqlalchemy-migrate.readthedocs.org/en/latest/
"""
@abc.abstractproperty
def INIT_VERSION(self):
"""Initial version of a migration repository.
Can be different from 0 if migrations were squashed.
:rtype: int
"""
pass
@abc.abstractproperty
def REPOSITORY(self):
"""Allows basic manipulation with migration repository.
:returns: `migrate.versioning.repository.Repository` subclass.
"""
pass
@abc.abstractproperty
def migration_api(self):
"""Provides API for upgrading, downgrading and version manipulations.
:returns: `migrate.api` or overloaded analog.
"""
pass
@abc.abstractproperty
def migrate_engine(self):
"""Provides engine instance.
Should be the same instance as used when migrations are applied. In
most cases, the `engine` attribute provided by the test class in a
`setUp` method will work.
Example of implementation:
def migrate_engine(self):
return self.engine
:returns: sqlalchemy engine instance
"""
pass
def _walk_versions(self, snake_walk=False, downgrade=True):
"""Check if migration upgrades and downgrades successfully.
DEPRECATED: this function is deprecated and will be removed from
oslo.db in a few releases. Please use walk_versions() method instead.
"""
self.walk_versions(snake_walk, downgrade)
def _migrate_down(self, version, with_data=False):
"""Migrate down to a previous version of the db.
DEPRECATED: this function is deprecated and will be removed from
oslo.db in a few releases. Please use migrate_down() method instead.
"""
return self.migrate_down(version, with_data)
def _migrate_up(self, version, with_data=False):
"""Migrate up to a new version of the db.
DEPRECATED: this function is deprecated and will be removed from
oslo.db in a few releases. Please use migrate_up() method instead.
"""
self.migrate_up(version, with_data)
def walk_versions(self, snake_walk=False, downgrade=True):
"""Check if migration upgrades and downgrades successfully.
Determine the latest version script from the repo, then
upgrade from 1 through to the latest, with no data
in the databases. This just checks that the schema itself
upgrades successfully.
`walk_versions` calls `migrate_up` and `migrate_down` with
`with_data` argument to check changes with data, but these methods
can be called without any extra check outside of `walk_versions`
method.
:param snake_walk: enables checking that each individual migration can
be upgraded/downgraded by itself.
If we have ordered migrations 123abc, 456def, 789ghi and we run
upgrading with the `snake_walk` argument set to `True`, the
migrations will be applied in the following order::
`123abc => 456def => 123abc =>
456def => 789ghi => 456def => 789ghi`
:type snake_walk: bool
:param downgrade: Check downgrade behavior if True.
:type downgrade: bool
"""
# Place the database under version control
self.migration_api.version_control(self.migrate_engine,
self.REPOSITORY,
self.INIT_VERSION)
self.assertEqual(self.INIT_VERSION,
self.migration_api.db_version(self.migrate_engine,
self.REPOSITORY))
LOG.debug('latest version is %s', self.REPOSITORY.latest)
versions = range(int(self.INIT_VERSION) + 1,
int(self.REPOSITORY.latest) + 1)
for version in versions:
# upgrade -> downgrade -> upgrade
self.migrate_up(version, with_data=True)
if snake_walk:
downgraded = self.migrate_down(version - 1, with_data=True)
if downgraded:
self.migrate_up(version)
if downgrade:
# Now walk it back down to 0 from the latest, testing
# the downgrade paths.
for version in reversed(versions):
# downgrade -> upgrade -> downgrade
downgraded = self.migrate_down(version - 1)
if snake_walk and downgraded:
self.migrate_up(version)
self.migrate_down(version - 1)
def migrate_down(self, version, with_data=False):
"""Migrate down to a previous version of the db.
:param version: id of revision to downgrade.
:type version: str
:keyword with_data: Whether to verify the absence of changes from
migration(s) being downgraded, see
:ref:`Auxiliary Methods <auxiliary-dynamic-methods>`.
:type with_data: Bool
"""
try:
self.migration_api.downgrade(self.migrate_engine,
self.REPOSITORY, version)
except NotImplementedError:
# NOTE(sirp): some migrations, namely release-level
# migrations, don't support a downgrade.
return False
self.assertEqual(version, self.migration_api.db_version(
self.migrate_engine, self.REPOSITORY))
# NOTE(sirp): `version` is what we're downgrading to (i.e. the 'target'
# version). So if we have any downgrade checks, they need to be run for
# the previous (higher numbered) migration.
if with_data:
post_downgrade = getattr(
self, "_post_downgrade_%03d" % (version + 1), None)
if post_downgrade:
post_downgrade(self.migrate_engine)
return True
def migrate_up(self, version, with_data=False):
"""Migrate up to a new version of the db.
:param version: id of revision to upgrade.
:type version: str
:keyword with_data: Whether to verify the applied changes with data,
see :ref:`Auxiliary Methods <auxiliary-dynamic-methods>`.
:type with_data: Bool
"""
# NOTE(sdague): try block is here because it's impossible to debug
# where a failed data migration happens otherwise
try:
if with_data:
data = None
pre_upgrade = getattr(
self, "_pre_upgrade_%03d" % version, None)
if pre_upgrade:
data = pre_upgrade(self.migrate_engine)
self.migration_api.upgrade(self.migrate_engine,
self.REPOSITORY, version)
self.assertEqual(version,
self.migration_api.db_version(self.migrate_engine,
self.REPOSITORY))
if with_data:
check = getattr(self, "_check_%03d" % version, None)
if check:
check(self.migrate_engine, data)
except exc.DBMigrationError:
msg = "Failed to migrate to version %(ver)s on engine %(eng)s"
LOG.error(msg, {"ver": version, "eng": self.migrate_engine})
raise
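As a hedged sketch of subclassing this mixin (the repository path, the base test class and the revision number below are hypothetical), a project supplies the abstract properties plus the optional per-revision hooks:
from migrate.versioning import api as versioning_api
from migrate.versioning import repository

class TestWalkMyMigrations(WalkVersionsMixin, MyDBTestCase):  # hypothetical base
    INIT_VERSION = 0

    @property
    def REPOSITORY(self):
        return repository.Repository('myproject/db/migrate_repo')  # hypothetical

    @property
    def migration_api(self):
        return versioning_api

    @property
    def migrate_engine(self):
        return self.engine

    def _pre_upgrade_002(self, engine):
        # seed data needed to exercise revision 002
        return {'name': 'fake'}

    def _check_002(self, engine, data):
        # insert/select/delete against the newly applied schema using `data`
        pass

    def test_walk_versions(self):
        self.walk_versions(snake_walk=True, downgrade=True)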
@six.add_metaclass(abc.ABCMeta)
class ModelsMigrationsSync(object):
"""A helper class for comparison of DB migration scripts and models.
It's intended to be inherited by test cases in target projects. They have
to provide implementations for methods used internally in the test (as
we have no way to implement them here).
test_model_sync() will run migration scripts for the engine provided and
then compare the given metadata to the one reflected from the database.
The difference between MODELS and MIGRATION scripts will be printed and
the test will fail if the difference is not empty. The return value is
really a list of actions that should be performed in order to make the
current database schema state (i.e. migration scripts) consistent with
the models definitions. It's left up to developers to analyze the output
and decide whether the models definitions or the migration scripts should
be modified to make them consistent.
Output::
[(
'add_table',
description of the table from models
),
(
'remove_table',
description of the table from database
),
(
'add_column',
schema,
table name,
column description from models
),
(
'remove_column',
schema,
table name,
column description from database
),
(
'add_index',
description of the index from models
),
(
'remove_index',
description of the index from database
),
(
'add_constraint',
description of constraint from models
),
(
'remove_constraint',
description of constraint from database
),
(
'modify_nullable',
schema,
table name,
column name,
{
'existing_type': type of the column from database,
'existing_server_default': default value from database
},
nullable from database,
nullable from models
),
(
'modify_type',
schema,
table name,
column name,
{
'existing_nullable': database nullable,
'existing_server_default': default value from database
},
database column type,
type of the column from models
),
(
'modify_default',
schema,
table name,
column name,
{
'existing_nullable': database nullable,
'existing_type': type of the column from database
},
connection column default value,
default from models
)]
Method include_object() can be overridden to exclude some tables from
comparison (e.g. migrate_repo).
"""
@abc.abstractmethod
def db_sync(self, engine):
"""Run migration scripts with the given engine instance.
This method must be implemented in subclasses and run migration scripts
for a DB the given engine is connected to.
"""
@abc.abstractmethod
def get_engine(self):
"""Return the engine instance to be used when running tests.
This method must be implemented in subclasses and return an engine
instance to be used when running tests.
"""
@abc.abstractmethod
def get_metadata(self):
"""Return the metadata instance to be used for schema comparison.
This method must be implemented in subclasses and return the metadata
instance attached to the BASE model.
"""
def include_object(self, object_, name, type_, reflected, compare_to):
"""Return True for objects that should be compared.
:param object_: a SchemaItem object such as a Table or Column object
:param name: the name of the object
:param type_: a string describing the type of object (e.g. "table")
:param reflected: True if the given object was produced based on
table reflection, False if it's from a local
MetaData object
:param compare_to: the object being compared against, if available,
else None
"""
return True
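A brief sketch of the override suggested in the class docstring; the table name is illustrative (sqlalchemy-migrate's bookkeeping table is a common exclusion):
# in an implementing subclass:
def include_object(self, object_, name, type_, reflected, compare_to):
    # skip the migration bookkeeping table during comparison
    if type_ == 'table' and name == 'migrate_version':
        return False
    return True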
def compare_type(self, ctxt, insp_col, meta_col, insp_type, meta_type):
"""Return True if types are different, False if not.
Return None to allow the default implementation to compare these types.
:param ctxt: alembic MigrationContext instance
:param insp_col: reflected column
:param meta_col: column from model
:param insp_type: reflected column type
:param meta_type: column type from model
"""
# some backends (e.g. mysql) don't provide native boolean type
BOOLEAN_METADATA = (types.BOOLEAN, types.Boolean)
BOOLEAN_SQL = BOOLEAN_METADATA + (types.INTEGER, types.Integer)
if issubclass(type(meta_type), BOOLEAN_METADATA):
return not issubclass(type(insp_type), BOOLEAN_SQL)
# Alembic <=0.8.4 does not contain logic for comparing the Variant type
# with others.
if isinstance(meta_type, types.Variant):
orig_type = meta_col.type
impl_type = meta_type.load_dialect_impl(ctxt.dialect)
meta_col.type = impl_type
try:
return self.compare_type(ctxt, insp_col, meta_col, insp_type,
impl_type)
finally:
meta_col.type = orig_type
return ctxt.impl.compare_type(insp_col, meta_col)
def compare_server_default(self, ctxt, insp_col, meta_col,
insp_def, meta_def, rendered_meta_def):
"""Compare default values between model and db table.
Return True if the defaults are different, False if not, or None to
allow the default implementation to compare these defaults.
:param ctxt: alembic MigrationContext instance
:param insp_col: reflected column
:param meta_col: column from model
:param insp_def: reflected column default value
:param meta_def: column default value from model
:param rendered_meta_def: rendered column default value (from model)
"""
return self._compare_server_default(ctxt.bind, meta_col, insp_def,
meta_def)
@utils.DialectFunctionDispatcher.dispatch_for_dialect("*")
def _compare_server_default(bind, meta_col, insp_def, meta_def):
pass
@_compare_server_default.dispatch_for('mysql')
def _compare_server_default(bind, meta_col, insp_def, meta_def):
if isinstance(meta_col.type, sqlalchemy.Boolean):
if meta_def is None or insp_def is None:
return meta_def != insp_def
return not (
isinstance(meta_def.arg, expr.True_) and insp_def == "'1'" or
isinstance(meta_def.arg, expr.False_) and insp_def == "'0'"
)
impl_type = meta_col.type
if isinstance(impl_type, types.Variant):
impl_type = impl_type.load_dialect_impl(bind.dialect)
if isinstance(impl_type, (sqlalchemy.Integer, sqlalchemy.BigInteger)):
if meta_def is None or insp_def is None:
return meta_def != insp_def
return meta_def.arg != insp_def.split("'")[1]
@_compare_server_default.dispatch_for('postgresql')
def _compare_server_default(bind, meta_col, insp_def, meta_def):
if isinstance(meta_col.type, sqlalchemy.Enum):
if meta_def is None or insp_def is None:
return meta_def != insp_def
return insp_def != "'%s'::%s" % (meta_def.arg, meta_col.type.name)
elif isinstance(meta_col.type, sqlalchemy.String):
if meta_def is None or insp_def is None:
return meta_def != insp_def
return insp_def != "'%s'::character varying" % meta_def.arg
FKInfo = collections.namedtuple('fk_info', ['constrained_columns',
'referred_table',
'referred_columns'])
def check_foreign_keys(self, metadata, bind):
"""Compare foreign keys between model and db table.
:returns: a list that contains information about:
* whether a new key should be added or an existing one removed,
* name of that key,
* source table,
* referred table,
* constrained columns,
* referred columns
Output::
[('drop_key',
'testtbl_fk_check_fkey',
'testtbl',
fk_info(constrained_columns=(u'fk_check',),
referred_table=u'table',
referred_columns=(u'fk_check',)))]
DEPRECATED: this function is deprecated and will be removed from
oslo.db in a few releases. Alembic autogenerate.compare_metadata()
now includes foreign key comparison directly.
"""
diff = []
insp = sqlalchemy.engine.reflection.Inspector.from_engine(bind)
# Get all tables from db
db_tables = insp.get_table_names()
# Get all tables from models
model_tables = metadata.tables
for table in db_tables:
if table not in model_tables:
continue
# Get all necessary information about key of current table from db
fk_db = dict((self._get_fk_info_from_db(i), i['name'])
for i in insp.get_foreign_keys(table))
fk_db_set = set(fk_db.keys())
# Get all necessary information about key of current table from
# models
fk_models = dict((self._get_fk_info_from_model(fk), fk)
for fk in model_tables[table].foreign_keys)
fk_models_set = set(fk_models.keys())
for key in (fk_db_set - fk_models_set):
diff.append(('drop_key', fk_db[key], table, key))
LOG.info(("Detected removed foreign key %(fk)r on "
"table %(table)r"), {'fk': fk_db[key],
'table': table})
for key in (fk_models_set - fk_db_set):
diff.append(('add_key', fk_models[key], table, key))
LOG.info((
"Detected added foreign key for column %(fk)r on table "
"%(table)r"), {'fk': fk_models[key].column.name,
'table': table})
return diff
def _get_fk_info_from_db(self, fk):
return self.FKInfo(tuple(fk['constrained_columns']),
fk['referred_table'],
tuple(fk['referred_columns']))
def _get_fk_info_from_model(self, fk):
return self.FKInfo((fk.parent.name,), fk.column.table.name,
(fk.column.name,))
def filter_metadata_diff(self, diff):
"""Filter changes before assert in test_models_sync().
Allow subclasses to whitelist/blacklist changes. By default, no
filtering is performed, changes are returned as is.
:param diff: a list of differences (see `compare_metadata()` docs for
details on format)
:returns: a list of differences
"""
return diff
def test_models_sync(self):
# recent versions of sqlalchemy and alembic are needed for running of
# this test, but we already have them in requirements
try:
pkg.require('sqlalchemy>=0.8.4', 'alembic>=0.6.2')
except (pkg.VersionConflict, pkg.DistributionNotFound) as e:
self.skipTest('sqlalchemy>=0.8.4 and alembic>=0.6.2 are required'
' for running of this test: %s' % e)
# drop all objects after a test run
engine = self.get_engine()
backend = provision.Backend(engine.name, engine.url)
self.addCleanup(functools.partial(backend.drop_all_objects, engine))
# run migration scripts
self.db_sync(self.get_engine())
with self.get_engine().connect() as conn:
opts = {
'include_object': self.include_object,
'compare_type': self.compare_type,
'compare_server_default': self.compare_server_default,
}
mc = alembic.migration.MigrationContext.configure(conn, opts=opts)
# compare schemas and fail with diff, if it's not empty
diff = self.filter_metadata_diff(
alembic.autogenerate.compare_metadata(mc, self.get_metadata()))
if diff:
msg = pprint.pformat(diff, indent=2, width=20)
self.fail(
"Models and migration scripts aren't in sync:\n%s" % msg)

View File

@ -1,103 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from sqlalchemy.types import Integer, TypeDecorator, Text
from sqlalchemy.dialects import mysql
class JsonEncodedType(TypeDecorator):
"""Base column type for data serialized as JSON-encoded string in db."""
type = None
impl = Text
def __init__(self, mysql_as_long=False, mysql_as_medium=False):
super(JsonEncodedType, self).__init__()
if mysql_as_long and mysql_as_medium:
raise TypeError("mysql_as_long and mysql_as_medium are mutually "
"exclusive")
if mysql_as_long:
self.impl = Text().with_variant(mysql.LONGTEXT(), 'mysql')
elif mysql_as_medium:
self.impl = Text().with_variant(mysql.MEDIUMTEXT(), 'mysql')
def process_bind_param(self, value, dialect):
if value is None:
if self.type is not None:
# Save default value according to current type to keep the
# interface consistent.
value = self.type()
elif self.type is not None and not isinstance(value, self.type):
raise TypeError("%s supposes to store %s objects, but %s given"
% (self.__class__.__name__,
self.type.__name__,
type(value).__name__))
serialized_value = json.dumps(value)
return serialized_value
def process_result_value(self, value, dialect):
if value is not None:
value = json.loads(value)
return value
class JsonEncodedDict(JsonEncodedType):
"""Represents dict serialized as json-encoded string in db.
Note that this type does NOT track mutations. If you want to update it,
you have to assign the existing value to a temporary variable, update it,
then assign it back. See this page for a more robust workaround:
http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/mutable.html
"""
type = dict
class JsonEncodedList(JsonEncodedType):
"""Represents list serialized as json-encoded string in db.
Note that this type does NOT track mutations. If you want to update it,
you have to assign the existing value to a temporary variable, update it,
then assign it back. See this page for a more robust workaround:
http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/mutable.html
"""
type = list
class SoftDeleteInteger(TypeDecorator):
"""Coerce a bound param to be a proper integer before passing it to DBAPI.
Some backends like PostgreSQL are very strict about types and do not
perform automatic type casts, e.g. when trying to INSERT a boolean value
like ``false`` into an integer column. Coercing the bound param in the DB
layer by means of a custom SQLAlchemy type decorator makes sure we
always pass a proper integer value to a DBAPI implementation.
This is not a general purpose boolean integer type as it specifically
allows for arbitrary positive integers outside of the boolean int range
(0, 1, False, True), so that it's possible to have compound unique
constraints over multiple columns including ``deleted`` (e.g. to
soft-delete flavors with the same name in Nova without triggering
a constraint violation): ``deleted`` is set to be equal to a PK
int value on deletion, 0 denotes a non-deleted row.
"""
impl = Integer
def process_bind_param(self, value, dialect):
if value is None:
return None
else:
return int(value)
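To illustrate the two decorators above, a minimal model sketch (the table and columns are hypothetical; the soft-delete convention mirrors the SoftDeleteInteger docstring):
import sqlalchemy as sa
from sqlalchemy.ext import declarative

BASE = declarative.declarative_base()

class Flavor(BASE):  # hypothetical model
    __tablename__ = 'flavors'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(255))
    # dict serialized as a JSON string, stored as MEDIUMTEXT on MySQL
    extra_specs = sa.Column(JsonEncodedDict(mysql_as_medium=True))
    # 0 for live rows; set to the PK value on soft-deletion so the unique
    # constraint still allows re-creating a soft-deleted name
    deleted = sa.Column(SoftDeleteInteger, default=0)
    __table_args__ = (sa.UniqueConstraint('name', 'deleted'),)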

View File

@ -1,508 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
from sqlalchemy import inspect
from sqlalchemy import orm
from sqlalchemy import sql
from sqlalchemy import types as sqltypes
from oslo_db.sqlalchemy import utils
def update_on_match(
query,
specimen,
surrogate_key,
values=None,
attempts=3,
include_only=None,
process_query=None,
handle_failure=None
):
"""Emit an UPDATE statement matching the given specimen.
E.g.::
with enginefacade.writer() as session:
specimen = MyInstance(
uuid='ccea54f',
interface_id='ad33fea',
vm_state='SOME_VM_STATE',
)
values = {
'vm_state': 'SOME_NEW_VM_STATE'
}
base_query = model_query(
context, models.Instance,
project_only=True, session=session)
hostname_query = model_query(
context, models.Instance, session=session,
read_deleted='no').
filter(func.lower(models.Instance.hostname) == 'SOMEHOSTNAME')
surrogate_key = ('uuid', )
def process_query(query):
return query.where(~exists(hostname_query))
def handle_failure(query):
try:
instance = base_query.one()
except NoResultFound:
raise exception.InstanceNotFound(instance_id=instance_uuid)
if session.query(hostname_query.exists()).scalar():
raise exception.InstanceExists(
name=values['hostname'].lower())
# try again
return False
persistent_instance = base_query.update_on_match(
specimen,
surrogate_key,
values=values,
process_query=process_query,
handle_failure=handle_failure
)
The UPDATE statement is constructed against the given specimen
using those values which are present to construct a WHERE clause.
If the specimen contains additional values to be ignored, the
``include_only`` parameter may be passed which indicates a sequence
of attributes to use when constructing the WHERE.
The UPDATE is performed against an ORM Query, which is created from
the given ``Session``, or alternatively by passing the ``query``
parameter referring to an existing query.
Before the query is invoked, it is also passed through the callable
sent as ``process_query``, if present. This hook allows additional
criteria to be added to the query after it is created but before
invocation.
The function will then invoke the UPDATE statement and check for
"success" one or more times, up to a maximum of that passed as
``attempts``.
The initial check for "success" from the UPDATE statement is that the
number of rows returned matches 1. If zero rows are matched, then
the UPDATE statement is assumed to have "failed", and the failure handling
phase begins.
The failure handling phase involves invoking the given ``handle_failure``
function, if any. This handler can perform additional queries to attempt
to figure out why the UPDATE didn't match any rows. The handler,
upon detection of the exact failure condition, should throw an exception
to exit; if it doesn't, it has the option of returning True or False,
where False means the error was not handled, and True means that there
was not in fact an error, and the function should return successfully.
If the failure handler is not present, or returns False after ``attempts``
number of attempts, then the function overall raises CantUpdateException.
If the handler returns True, then the function returns with no error.
The return value of the function is a persistent version of the given
specimen; this may be the specimen itself, if no matching object was
already present in the session; otherwise, the existing object is
returned, with the state of the specimen merged into it. The returned
persistent object will have the given values populated into the object.
The object is returned as "persistent", meaning that it is
associated with the given
Session and has an identity key (that is, a real primary key
value).
In order to produce this identity key, a strategy must be used to
determine it as efficiently and safely as possible:
1. If the given specimen already contained its primary key attributes
fully populated, then these attributes were used as criteria in the
UPDATE, so we have the primary key value; it is populated directly.
2. If the target backend supports RETURNING, then the update() query
is performed with a RETURNING clause so that the matching primary key
is returned atomically. This currently includes PostgreSQL, Oracle
and others (notably not MySQL or SQLite).
3. If the target backend is MySQL, and the given model uses a
single-column, AUTO_INCREMENT integer primary key value (as is
the case for Nova), MySQL's recommended approach of making use
of ``LAST_INSERT_ID(expr)`` is used to atomically acquire the
matching primary key value within the scope of the UPDATE
statement; it is then fetched immediately afterwards using
``SELECT LAST_INSERT_ID()``. See
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_last-insert-id
4. Otherwise, for composite keys on MySQL or other backends such
as SQLite, the row as UPDATED must be re-fetched in order to
acquire the primary key value. The ``surrogate_key``
parameter is used for this in order to re-fetch the row; this
is a column name with a known, unique value where
the object can be fetched.
"""
if values is None:
values = {}
entity = inspect(specimen)
mapper = entity.mapper
assert \
[desc['type'] for desc in query.column_descriptions] == \
[mapper.class_], "Query does not match given specimen"
criteria = manufacture_entity_criteria(
specimen, include_only=include_only, exclude=[surrogate_key])
query = query.filter(criteria)
if process_query:
query = process_query(query)
surrogate_key_arg = (
surrogate_key, entity.attrs[surrogate_key].loaded_value)
pk_value = None
for attempt in range(attempts):
try:
pk_value = query.update_returning_pk(values, surrogate_key_arg)
except MultiRowsMatched:
raise
except NoRowsMatched:
if handle_failure and handle_failure(query):
break
else:
break
else:
raise NoRowsMatched("Zero rows matched for %d attempts" % attempts)
if pk_value is None:
pk_value = entity.mapper.primary_key_from_instance(specimen)
# NOTE(mdbooth): Can't pass the original specimen object here as it might
# have lists of multiple potential values rather than actual values.
values = copy.copy(values)
values[surrogate_key] = surrogate_key_arg[1]
persistent_obj = manufacture_persistent_object(
query.session, specimen.__class__(), values, pk_value)
return persistent_obj
def manufacture_persistent_object(
session, specimen, values=None, primary_key=None):
"""Make an ORM-mapped object persistent in a Session without SQL.
The persistent object is returned.
If a matching object is already present in the given session, the specimen
is merged into it and the persistent object returned. Otherwise, the
specimen itself is made persistent and is returned.
The object must contain a full primary key, or provide it via the values or
primary_key parameters. The object is persisted to the Session in a "clean"
state with no pending changes.
:param session: A Session object.
:param specimen: a mapped object which is typically transient.
:param values: a dictionary of values to be applied to the specimen,
in addition to the state that's already on it. The attributes will be
set such that no history is created; the object remains clean.
:param primary_key: optional tuple-based primary key. This will also
be applied to the instance if present.
"""
state = inspect(specimen)
mapper = state.mapper
# values defaults to None; guard so that iteration is safe
for k, v in (values or {}).items():
orm.attributes.set_committed_value(specimen, k, v)
pk_attrs = [
mapper.get_property_by_column(col).key
for col in mapper.primary_key
]
if primary_key is not None:
for key, value in zip(pk_attrs, primary_key):
orm.attributes.set_committed_value(
specimen,
key,
value
)
for key in pk_attrs:
if state.attrs[key].loaded_value is orm.attributes.NO_VALUE:
raise ValueError("full primary key must be present")
orm.make_transient_to_detached(specimen)
if state.key not in session.identity_map:
session.add(specimen)
return specimen
else:
return session.merge(specimen, load=False)
def manufacture_entity_criteria(entity, include_only=None, exclude=None):
"""Given a mapped instance, produce a WHERE clause.
The attributes set upon the instance will be combined to produce
a SQL expression using the mapped SQL expressions as the base
of comparison.
Values on the instance may be set as tuples in which case the
criteria will produce an IN clause. None is also acceptable as a
scalar or tuple entry, which will produce IS NULL that is properly
joined with an OR against an IN expression if appropriate.
:param entity: a mapped entity.
:param include_only: optional sequence of keys to limit which
keys are included.
:param exclude: sequence of keys to exclude
"""
state = inspect(entity)
exclude = set(exclude) if exclude is not None else set()
existing = dict(
(attr.key, attr.loaded_value)
for attr in state.attrs
if attr.loaded_value is not orm.attributes.NO_VALUE
and attr.key not in exclude
)
if include_only:
existing = dict(
(k, existing[k])
for k in set(existing).intersection(include_only)
)
return manufacture_criteria(state.mapper, existing)
def manufacture_criteria(mapped, values):
"""Given a mapper/class and a namespace of values, produce a WHERE clause.
The class should be a mapped class and the entries in the dictionary
correspond to mapped attribute names on the class.
A value may also be a tuple, in which case that particular attribute
will be compared to a tuple using IN. The scalar value or
tuple can also contain None, which translates to an IS NULL that is
properly joined with OR against an IN expression if appropriate.
:param mapped: a mapped class, or actual :class:`.Mapper` object.
:param values: dictionary of values.
"""
mapper = inspect(mapped)
# organize keys using mapped attribute ordering, which is deterministic
value_keys = set(values)
keys = [k for k in mapper.column_attrs.keys() if k in value_keys]
return sql.and_(*[
_sql_crit(mapper.column_attrs[key].expression, values[key])
for key in keys
])
def _sql_crit(expression, value):
"""Produce an equality expression against the given value.
This takes into account a value that is actually a collection
of values, as well as a value of None or collection that contains
None.
"""
values = utils.to_list(value, default=(None, ))
if len(values) == 1:
if values[0] is None:
return expression == sql.null()
else:
return expression == values[0]
elif _none_set.intersection(values):
return sql.or_(
expression == sql.null(),
_sql_crit(expression, set(values).difference(_none_set))
)
else:
return expression.in_(values)
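A short sketch of the criteria helpers in action (the mapped class is hypothetical); per the docstrings, a tuple value containing None renders as an IS NULL OR-joined with an IN:
import sqlalchemy as sa
from sqlalchemy.ext import declarative

Base = declarative.declarative_base()

class Instance(Base):  # hypothetical model for illustration
    __tablename__ = 'instances'
    id = sa.Column(sa.Integer, primary_key=True)
    host = sa.Column(sa.String(64))
    vm_state = sa.Column(sa.String(64))

crit = manufacture_criteria(
    Instance, {'host': 'h1', 'vm_state': ('stopped', 'error', None)})
# renders roughly as:
#   instances.host = :host_1 AND (instances.vm_state IS NULL
#   OR instances.vm_state IN (:vm_state_1, :vm_state_2))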
def update_returning_pk(query, values, surrogate_key):
"""Perform an UPDATE, returning the primary key of the matched row.
The primary key is returned using a selection of strategies:
* if the database supports RETURNING, RETURNING is used to retrieve
the primary key values inline.
* If the database is MySQL and the entity is mapped to a single integer
primary key column, MySQL's last_insert_id() function is used
inline within the UPDATE and then upon a second SELECT to get the
value.
* Otherwise, a "refetch" strategy is used, where a given "surrogate"
key value (typically a UUID column on the entity) is used to run
a new SELECT against that UUID. This UUID is also placed into
the UPDATE query to ensure the row matches.
:param query: a Query object with existing criterion, against a single
entity.
:param values: a dictionary of values to be updated on the row.
:param surrogate_key: a tuple of (attrname, value), referring to a
UNIQUE attribute that will also match the row. This attribute is used
to retrieve the row via a SELECT when no optimized strategy exists.
:return: the primary key, returned as a tuple.
Is only returned if rows matched is one. Otherwise, CantUpdateException
is raised.
"""
entity = query.column_descriptions[0]['type']
mapper = inspect(entity).mapper
session = query.session
bind = session.connection(mapper=mapper)
if bind.dialect.implicit_returning:
pk_strategy = _pk_strategy_returning
elif bind.dialect.name == 'mysql' and \
len(mapper.primary_key) == 1 and \
isinstance(
mapper.primary_key[0].type, sqltypes.Integer):
pk_strategy = _pk_strategy_mysql_last_insert_id
else:
pk_strategy = _pk_strategy_refetch
return pk_strategy(query, mapper, values, surrogate_key)
def _assert_single_row(rows_updated):
if rows_updated == 1:
return rows_updated
elif rows_updated > 1:
raise MultiRowsMatched("%d rows matched; expected one" % rows_updated)
else:
raise NoRowsMatched("No rows matched the UPDATE")
def _pk_strategy_refetch(query, mapper, values, surrogate_key):
surrogate_key_name, surrogate_key_value = surrogate_key
surrogate_key_col = mapper.attrs[surrogate_key_name].expression
rowcount = query.\
filter(surrogate_key_col == surrogate_key_value).\
update(values, synchronize_session=False)
_assert_single_row(rowcount)
# SELECT my_table.id AS my_table_id FROM my_table
# WHERE my_table.y = ? AND my_table.z = ?
# LIMIT ? OFFSET ?
fetch_query = query.session.query(
*mapper.primary_key).filter(
surrogate_key_col == surrogate_key_value)
primary_key = fetch_query.one()
return primary_key
def _pk_strategy_returning(query, mapper, values, surrogate_key):
surrogate_key_name, surrogate_key_value = surrogate_key
surrogate_key_col = mapper.attrs[surrogate_key_name].expression
update_stmt = _update_stmt_from_query(mapper, query, values)
update_stmt = update_stmt.where(surrogate_key_col == surrogate_key_value)
update_stmt = update_stmt.returning(*mapper.primary_key)
# UPDATE my_table SET x=%(x)s, z=%(z)s WHERE my_table.y = %(y_1)s
# AND my_table.z = %(z_1)s RETURNING my_table.id
result = query.session.execute(update_stmt)
rowcount = result.rowcount
_assert_single_row(rowcount)
primary_key = tuple(result.first())
return primary_key
def _pk_strategy_mysql_last_insert_id(query, mapper, values, surrogate_key):
surrogate_key_name, surrogate_key_value = surrogate_key
surrogate_key_col = mapper.attrs[surrogate_key_name].expression
surrogate_pk_col = mapper.primary_key[0]
update_stmt = _update_stmt_from_query(mapper, query, values)
update_stmt = update_stmt.where(surrogate_key_col == surrogate_key_value)
update_stmt = update_stmt.values(
{surrogate_pk_col: sql.func.last_insert_id(surrogate_pk_col)})
# UPDATE my_table SET id=last_insert_id(my_table.id),
# x=%s, z=%s WHERE my_table.y = %s AND my_table.z = %s
result = query.session.execute(update_stmt)
rowcount = result.rowcount
_assert_single_row(rowcount)
# SELECT last_insert_id() AS last_insert_id_1
# NOTE: the trailing comma intentionally wraps the scalar in a one-element
# tuple, matching the tuple-based return contract of the pk strategies.
primary_key = query.session.scalar(sql.func.last_insert_id()),
return primary_key
def _update_stmt_from_query(mapper, query, values):
upd_values = dict(
(
mapper.column_attrs[key], value
) for key, value in values.items()
)
query = query.enable_eagerloads(False)
context = query._compile_context()
primary_table = context.statement.froms[0]
update_stmt = sql.update(primary_table,
context.whereclause,
upd_values)
return update_stmt
_none_set = frozenset([None])
class CantUpdateException(Exception):
pass
class NoRowsMatched(CantUpdateException):
pass
class MultiRowsMatched(CantUpdateException):
pass

File diff suppressed because it is too large

View File

@ -1,25 +0,0 @@
# Copyright 2014 Rackspace
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
def should_run_eventlet_tests():
return bool(int(os.environ.get('TEST_EVENTLET') or '0'))
if should_run_eventlet_tests():
import eventlet
eventlet.monkey_patch()

View File

@ -1,53 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import fixtures
import testtools
_TRUE_VALUES = ('true', '1', 'yes')
# FIXME(dhellmann) Update this to use oslo.test library
class TestCase(testtools.TestCase):
"""Test case base class for all unit tests."""
def setUp(self):
"""Run before each test method to initialize test environment."""
super(TestCase, self).setUp()
test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0)
try:
test_timeout = int(test_timeout)
except ValueError:
# If timeout value is invalid do not set a timeout.
test_timeout = 0
if test_timeout > 0:
self.useFixture(fixtures.Timeout(test_timeout, gentle=True))
self.useFixture(fixtures.NestedTempfile())
self.useFixture(fixtures.TempHomeDir())
if os.environ.get('OS_STDOUT_CAPTURE') in _TRUE_VALUES:
stdout = self.useFixture(fixtures.StringStream('stdout')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
if os.environ.get('OS_STDERR_CAPTURE') in _TRUE_VALUES:
stderr = self.useFixture(fixtures.StringStream('stderr')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
self.log_fixture = self.useFixture(fixtures.FakeLogger())

View File

@ -1,18 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import test_base
load_tests = test_base.optimize_db_test_loader(__file__)

View File

@ -1,43 +0,0 @@
# Copyright (c) 2016 Openstack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy.test_base import backend_specific # noqa
from oslo_db.sqlalchemy import test_fixtures as db_fixtures
from oslotest import base as test_base
@enginefacade.transaction_context_provider
class Context(object):
pass
context = Context()
class DbTestCase(db_fixtures.OpportunisticDBTestMixin, test_base.BaseTestCase):
def setUp(self):
super(DbTestCase, self).setUp()
self.engine = enginefacade.writer.get_engine()
self.sessionmaker = enginefacade.writer.get_sessionmaker()
class MySQLOpportunisticTestCase(DbTestCase):
FIXTURE = db_fixtures.MySQLOpportunisticFixture
class PostgreSQLOpportunisticTestCase(DbTestCase):
FIXTURE = db_fixtures.PostgresqlOpportunisticFixture

View File

@ -1,127 +0,0 @@
# Copyright (c) 2014 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Unit tests for SQLAlchemy and eventlet interaction."""
import logging
import unittest2
from oslo_utils import importutils
import sqlalchemy as sa
from sqlalchemy.ext import declarative as sa_decl
from oslo_db import exception as db_exc
from oslo_db.sqlalchemy import models
from oslo_db import tests
from oslo_db.tests.sqlalchemy import base as test_base
class EventletTestMixin(object):
def setUp(self):
super(EventletTestMixin, self).setUp()
BASE = sa_decl.declarative_base()
class TmpTable(BASE, models.ModelBase):
__tablename__ = 'test_async_eventlet'
id = sa.Column('id', sa.Integer, primary_key=True, nullable=False)
foo = sa.Column('foo', sa.Integer)
__table_args__ = (
sa.UniqueConstraint('foo', name='uniq_foo'),
)
self.test_table = TmpTable
TmpTable.__table__.create(self.engine)
self.addCleanup(lambda: TmpTable.__table__.drop(self.engine))
@unittest2.skipIf(not tests.should_run_eventlet_tests(),
'eventlet tests disabled unless TEST_EVENTLET=1')
def test_concurrent_transaction(self):
# Cause sqlalchemy to log executed SQL statements. Useful to
# determine exactly what and when was sent to DB.
sqla_logger = logging.getLogger('sqlalchemy.engine')
sqla_logger.setLevel(logging.INFO)
self.addCleanup(sqla_logger.setLevel, logging.NOTSET)
def operate_on_row(name, ready=None, proceed=None):
logging.debug('%s starting', name)
_session = self.sessionmaker()
with _session.begin():
logging.debug('%s ready', name)
# Modify the same row, inside transaction
tbl = self.test_table()
tbl.update({'foo': 10})
tbl.save(_session)
if ready is not None:
ready.send()
if proceed is not None:
logging.debug('%s waiting to proceed', name)
proceed.wait()
logging.debug('%s exiting transaction', name)
logging.debug('%s terminating', name)
return True
eventlet = importutils.try_import('eventlet')
if eventlet is None:
return self.skip('eventlet is required for this test')
a_ready = eventlet.event.Event()
a_proceed = eventlet.event.Event()
b_proceed = eventlet.event.Event()
# thread A opens transaction
logging.debug('spawning A')
a = eventlet.spawn(operate_on_row, 'A',
ready=a_ready, proceed=a_proceed)
logging.debug('waiting for A to enter transaction')
a_ready.wait()
# thread B opens transaction on same row
logging.debug('spawning B')
b = eventlet.spawn(operate_on_row, 'B',
proceed=b_proceed)
logging.debug('waiting for B to (attempt to) enter transaction')
eventlet.sleep(1) # should(?) advance B to blocking on transaction
# While B is still blocked, A should be able to proceed
a_proceed.send()
# Will block forever(*) if DB library isn't reentrant.
# (*) Until some form of timeout/deadlock detection kicks in.
# This is the key test that async is working. If this hangs
# (or raises a timeout/deadlock exception), then you have failed
# this test.
self.assertTrue(a.wait())
b_proceed.send()
# If everything proceeded without blocking, B will throw a
# "duplicate entry" exception when it tries to insert the same row
self.assertRaises(db_exc.DBDuplicateEntry, b.wait)
# Note that sqlite fails the above concurrency tests, and is not
# mentioned below.
# i.e. this file performs no tests by default.
class MySQLEventletTestCase(EventletTestMixin,
test_base.MySQLOpportunisticTestCase):
pass
class PostgreSQLEventletTestCase(EventletTestMixin,
test_base.PostgreSQLOpportunisticTestCase):
pass

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,310 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os
import testresources
import testscenarios
import unittest
from oslo_db import exception
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import provision
from oslo_db.sqlalchemy import test_base as legacy_test_base
from oslo_db.sqlalchemy import test_fixtures
from oslotest import base as oslo_test_base
start_dir = os.path.dirname(__file__)
class BackendSkipTest(oslo_test_base.BaseTestCase):
def test_skip_no_dbapi(self):
class FakeDatabaseOpportunisticFixture(
test_fixtures.OpportunisticDbFixture):
DRIVER = 'postgresql'
class SomeTest(test_fixtures.OpportunisticDBTestMixin,
oslo_test_base.BaseTestCase):
FIXTURE = FakeDatabaseOpportunisticFixture
def runTest(self):
pass
st = SomeTest()
# patch in replacement lookup dictionaries to avoid
# leaking from/to other tests
with mock.patch(
"oslo_db.sqlalchemy.provision."
"Backend.backends_by_database_type", {
"postgresql":
provision.Backend("postgresql", "postgresql://")}):
st._database_resources = {}
st._db_not_available = {}
st._schema_resources = {}
with mock.patch(
"sqlalchemy.create_engine",
mock.Mock(side_effect=ImportError())):
self.assertEqual([], st.resources)
ex = self.assertRaises(
self.skipException,
st.setUp
)
self.assertEqual(
"Backend 'postgresql' is unavailable: No DBAPI installed",
str(ex)
)
def test_skip_no_such_backend(self):
class FakeDatabaseOpportunisticFixture(
test_fixtures.OpportunisticDbFixture):
DRIVER = 'postgresql+nosuchdbapi'
class SomeTest(test_fixtures.OpportunisticDBTestMixin,
oslo_test_base.BaseTestCase):
FIXTURE = FakeDatabaseOpportunisticFixture
def runTest(self):
pass
st = SomeTest()
ex = self.assertRaises(
self.skipException,
st.setUp
)
self.assertEqual(
"Backend 'postgresql+nosuchdbapi' is unavailable: No such backend",
str(ex)
)
def test_skip_no_dbapi_legacy(self):
class FakeDatabaseOpportunisticFixture(
legacy_test_base.DbFixture):
DRIVER = 'postgresql'
class SomeTest(legacy_test_base.DbTestCase):
FIXTURE = FakeDatabaseOpportunisticFixture
def runTest(self):
pass
st = SomeTest()
# patch in replacement lookup dictionaries to avoid
# leaking from/to other tests
with mock.patch(
"oslo_db.sqlalchemy.provision."
"Backend.backends_by_database_type", {
"postgresql":
provision.Backend("postgresql", "postgresql://")}):
st._database_resources = {}
st._db_not_available = {}
st._schema_resources = {}
with mock.patch(
"sqlalchemy.create_engine",
mock.Mock(side_effect=ImportError())):
self.assertEqual([], st.resources)
ex = self.assertRaises(
self.skipException,
st.setUp
)
self.assertEqual(
"Backend 'postgresql' is unavailable: No DBAPI installed",
str(ex)
)
def test_skip_no_such_backend_legacy(self):
class FakeDatabaseOpportunisticFixture(
legacy_test_base.DbFixture):
DRIVER = 'postgresql+nosuchdbapi'
class SomeTest(legacy_test_base.DbTestCase):
FIXTURE = FakeDatabaseOpportunisticFixture
def runTest(self):
pass
st = SomeTest()
ex = self.assertRaises(
self.skipException,
st.setUp
)
self.assertEqual(
"Backend 'postgresql+nosuchdbapi' is unavailable: No such backend",
str(ex)
)
class EnginefacadeIntegrationTest(oslo_test_base.BaseTestCase):
def test_db_fixture(self):
normal_mgr = enginefacade.transaction_context()
normal_mgr.configure(
connection="sqlite://",
sqlite_fk=True,
mysql_sql_mode="FOOBAR",
max_overflow=38
)
class MyFixture(test_fixtures.OpportunisticDbFixture):
def get_enginefacade(self):
return normal_mgr
test = mock.Mock(SCHEMA_SCOPE=None)
fixture = MyFixture(test=test)
resources = fixture._get_resources()
testresources.setUpResources(test, resources, None)
self.addCleanup(
testresources.tearDownResources,
test, resources, None
)
fixture.setUp()
self.addCleanup(fixture.cleanUp)
self.assertTrue(normal_mgr._factory._started)
test.engine = normal_mgr.writer.get_engine()
self.assertEqual("sqlite://", str(test.engine.url))
self.assertIs(test.engine, normal_mgr._factory._writer_engine)
engine_args = normal_mgr._factory._engine_args_for_conf(None)
self.assertTrue(engine_args['sqlite_fk'])
self.assertEqual("FOOBAR", engine_args["mysql_sql_mode"])
self.assertEqual(38, engine_args["max_overflow"])
fixture.cleanUp()
fixture._clear_cleanups() # so the real cleanUp works
self.assertFalse(normal_mgr._factory._started)
class LegacyBaseClassTest(oslo_test_base.BaseTestCase):
def test_new_db_is_provisioned_by_default_pg(self):
self._test_new_db_is_provisioned_by_default(
legacy_test_base.PostgreSQLOpportunisticTestCase
)
def test_new_db_is_provisioned_by_default_mysql(self):
self._test_new_db_is_provisioned_by_default(
legacy_test_base.MySQLOpportunisticTestCase
)
def _test_new_db_is_provisioned_by_default(self, base_cls):
try:
provision.DatabaseResource(base_cls.FIXTURE.DRIVER)
except exception.BackendNotAvailable:
self.skip("Backend %s is not available" %
base_cls.FIXTURE.DRIVER)
class SomeTest(base_cls):
def runTest(self):
pass
st = SomeTest()
db_resource = dict(st.resources)['db']
self.assertTrue(db_resource.provision_new_database)
class TestLoadHook(unittest.TestCase):
"""Test the 'load_tests' hook supplied by test_base.
The purpose of this loader is to organize tests into an
OptimisingTestSuite using the standard unittest load_tests hook.
The hook needs to detect if it is being invoked at the module
level or at the package level. It has to behave completely differently
in these two cases.
"""
def test_module_level(self):
load_tests = test_fixtures.optimize_module_test_loader()
loader = unittest.TestLoader()
found_tests = loader.discover(start_dir, pattern="test_fixtures.py")
new_loader = load_tests(loader, found_tests, "test_fixtures.py")
self.assertTrue(
isinstance(new_loader, testresources.OptimisingTestSuite)
)
actual_tests = unittest.TestSuite(
testscenarios.generate_scenarios(found_tests)
)
self.assertEqual(
new_loader.countTestCases(), actual_tests.countTestCases()
)
def test_package_level(self):
self._test_package_level(test_fixtures.optimize_package_test_loader)
def test_package_level_legacy(self):
self._test_package_level(legacy_test_base.optimize_db_test_loader)
def _test_package_level(self, fn):
load_tests = fn(
os.path.join(start_dir, "__init__.py"))
loader = unittest.TestLoader()
new_loader = load_tests(
loader, unittest.suite.TestSuite(), "test_fixtures.py")
self.assertTrue(
isinstance(new_loader, testresources.OptimisingTestSuite)
)
actual_tests = unittest.TestSuite(
testscenarios.generate_scenarios(
loader.discover(start_dir, pattern="test_fixtures.py"))
)
self.assertEqual(
new_loader.countTestCases(), actual_tests.countTestCases()
)
class TestWScenarios(unittest.TestCase):
"""a 'do nothing' test suite.
Should generate exactly four tests when testscenarios is used.
"""
def test_one(self):
pass
def test_two(self):
pass
scenarios = [
('scenario1', dict(scenario='scenario 1')),
('scenario2', dict(scenario='scenario 2'))
]
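# Where the "exactly four" comes from: two test methods times two
# scenarios. A quick, self-contained check using only imports already
# present in this module:
suite = testscenarios.generate_scenarios(
    unittest.TestLoader().loadTestsFromTestCase(TestWScenarios))
assert unittest.TestSuite(suite).countTestCases() == 4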


@ -1,348 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import alembic
import mock
from oslotest import base as test_base
import sqlalchemy
from oslo_db import exception
from oslo_db.sqlalchemy.migration_cli import ext_alembic
from oslo_db.sqlalchemy.migration_cli import ext_migrate
from oslo_db.sqlalchemy.migration_cli import manager
class MockWithCmp(mock.MagicMock):
order = 0
def __init__(self, *args, **kwargs):
super(MockWithCmp, self).__init__(*args, **kwargs)
self.__lt__ = lambda self, other: self.order < other.order
@mock.patch(('oslo_db.sqlalchemy.migration_cli.'
'ext_alembic.alembic.command'))
class TestAlembicExtension(test_base.BaseTestCase):
def setUp(self):
self.migration_config = {'alembic_ini_path': '.',
'db_url': 'sqlite://'}
self.engine = sqlalchemy.create_engine(self.migration_config['db_url'])
self.alembic = ext_alembic.AlembicExtension(
self.engine, self.migration_config)
super(TestAlembicExtension, self).setUp()
def test_check_enabled_true(self, command):
"""Check enabled returns True
Verifies that enabled returns True on a non-empty
alembic_ini_path conf variable
"""
self.assertTrue(self.alembic.enabled)
def test_check_enabled_false(self, command):
"""Check enabled returns False
Verifies that enabled returns False on an empty alembic_ini_path variable
"""
self.migration_config['alembic_ini_path'] = ''
alembic = ext_alembic.AlembicExtension(
self.engine, self.migration_config)
self.assertFalse(alembic.enabled)
def test_upgrade_none(self, command):
self.alembic.upgrade(None)
command.upgrade.assert_called_once_with(self.alembic.config, 'head')
def test_upgrade_normal(self, command):
self.alembic.upgrade('131daa')
command.upgrade.assert_called_once_with(self.alembic.config, '131daa')
def test_downgrade_none(self, command):
self.alembic.downgrade(None)
command.downgrade.assert_called_once_with(self.alembic.config, 'base')
def test_downgrade_int(self, command):
self.alembic.downgrade(111)
command.downgrade.assert_called_once_with(self.alembic.config, 'base')
def test_downgrade_normal(self, command):
self.alembic.downgrade('131daa')
command.downgrade.assert_called_once_with(
self.alembic.config, '131daa')
def test_revision(self, command):
self.alembic.revision(message='test', autogenerate=True)
command.revision.assert_called_once_with(
self.alembic.config, message='test', autogenerate=True)
def test_stamp(self, command):
self.alembic.stamp('stamp')
command.stamp.assert_called_once_with(
self.alembic.config, revision='stamp')
def test_version(self, command):
version = self.alembic.version()
self.assertIsNone(version)
def test_has_revision(self, command):
with mock.patch(('oslo_db.sqlalchemy.migration_cli.'
'ext_alembic.alembic_script')) as mocked:
self.alembic.config.get_main_option = mock.Mock()
# since alembic_script is mocked and no exception is raised, the
# call will succeed
self.assertIs(True, self.alembic.has_revision('test'))
self.alembic.config.get_main_option.assert_called_once_with(
'script_location')
mocked.ScriptDirectory().get_revision.assert_called_once_with(
'test')
self.assertIs(True, self.alembic.has_revision(None))
self.assertIs(True, self.alembic.has_revision('head'))
# relative revision, should be True for alembic
self.assertIs(True, self.alembic.has_revision('+1'))
def test_has_revision_negative(self, command):
with mock.patch(('oslo_db.sqlalchemy.migration_cli.'
'ext_alembic.alembic_script')) as mocked:
mocked.ScriptDirectory().get_revision.side_effect = (
alembic.util.CommandError)
self.alembic.config.get_main_option = mock.Mock()
# an exception is raised, so the call should return False
self.assertIs(False, self.alembic.has_revision('test'))
self.alembic.config.get_main_option.assert_called_once_with(
'script_location')
mocked.ScriptDirectory().get_revision.assert_called_once_with(
'test')
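# Outside the mocked tests, the extension is driven the same way its
# setUp suggests. A hedged usage sketch; the ini path is a placeholder
# for a real alembic.ini.
engine = sqlalchemy.create_engine('sqlite://')
ext = ext_alembic.AlembicExtension(
    engine, {'alembic_ini_path': '/path/to/alembic.ini',
             'db_url': 'sqlite://'})
if ext.enabled:
    ext.upgrade(None)      # per the tests above, None upgrades to 'head'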
@mock.patch(('oslo_db.sqlalchemy.migration_cli.'
'ext_migrate.migration'))
class TestMigrateExtension(test_base.BaseTestCase):
def setUp(self):
self.migration_config = {'migration_repo_path': '.',
'db_url': 'sqlite://'}
self.engine = sqlalchemy.create_engine(self.migration_config['db_url'])
self.migrate = ext_migrate.MigrateExtension(
self.engine, self.migration_config)
super(TestMigrateExtension, self).setUp()
def test_check_enabled_true(self, migration):
self.assertTrue(self.migrate.enabled)
def test_check_enabled_false(self, migration):
self.migration_config['migration_repo_path'] = ''
migrate = ext_migrate.MigrateExtension(
self.engine, self.migration_config)
self.assertFalse(migrate.enabled)
def test_upgrade_head(self, migration):
self.migrate.upgrade('head')
migration.db_sync.assert_called_once_with(
self.migrate.engine, self.migrate.repository, None, init_version=0)
def test_upgrade_normal(self, migration):
self.migrate.upgrade(111)
migration.db_sync.assert_called_once_with(
mock.ANY, self.migrate.repository, 111, init_version=0)
def test_downgrade_init_version_from_base(self, migration):
self.migrate.downgrade('base')
migration.db_sync.assert_called_once_with(
self.migrate.engine, self.migrate.repository, mock.ANY,
init_version=mock.ANY)
def test_downgrade_init_version_from_none(self, migration):
self.migrate.downgrade(None)
migration.db_sync.assert_called_once_with(
self.migrate.engine, self.migrate.repository, mock.ANY,
init_version=mock.ANY)
def test_downgrade_normal(self, migration):
self.migrate.downgrade(101)
migration.db_sync.assert_called_once_with(
self.migrate.engine, self.migrate.repository, 101, init_version=0)
def test_version(self, migration):
self.migrate.version()
migration.db_version.assert_called_once_with(
self.migrate.engine, self.migrate.repository, init_version=0)
def test_change_init_version(self, migration):
self.migration_config['init_version'] = 101
migrate = ext_migrate.MigrateExtension(
self.engine, self.migration_config)
migrate.downgrade(None)
migration.db_sync.assert_called_once_with(
migrate.engine,
self.migrate.repository,
self.migration_config['init_version'],
init_version=self.migration_config['init_version'])
def test_has_revision(self, command):
with mock.patch(('oslo_db.sqlalchemy.migration_cli.'
'ext_migrate.migrate_version')) as mocked:
self.migrate.has_revision('test')
mocked.Collection().version.assert_called_once_with('test')
# tip of the branch should always be True
self.assertIs(True, self.migrate.has_revision(None))
def test_has_revision_negative(self, command):
with mock.patch(('oslo_db.sqlalchemy.migration_cli.'
'ext_migrate.migrate_version')) as mocked:
mocked.Collection().version.side_effect = ValueError
self.assertIs(False, self.migrate.has_revision('test'))
mocked.Collection().version.assert_called_once_with('test')
# relative revision, should be False for migrate
self.assertIs(False, self.migrate.has_revision('+1'))
class TestMigrationManager(test_base.BaseTestCase):
def setUp(self):
self.migration_config = {'alembic_ini_path': '.',
'migrate_repo_path': '.',
'db_url': 'sqlite://'}
engine = sqlalchemy.create_engine(self.migration_config['db_url'])
self.migration_manager = manager.MigrationManager(
self.migration_config, engine)
self.ext = mock.Mock()
self.ext.obj.version = mock.Mock(return_value=0)
self.migration_manager._manager.extensions = [self.ext]
super(TestMigrationManager, self).setUp()
def test_manager_update(self):
self.migration_manager.upgrade('head')
self.ext.obj.upgrade.assert_called_once_with('head')
def test_manager_update_revision_none(self):
self.migration_manager.upgrade(None)
self.ext.obj.upgrade.assert_called_once_with(None)
def test_downgrade_normal_revision(self):
self.migration_manager.downgrade('111abcd')
self.ext.obj.downgrade.assert_called_once_with('111abcd')
def test_version(self):
self.migration_manager.version()
self.ext.obj.version.assert_called_once_with()
def test_version_return_value(self):
version = self.migration_manager.version()
self.assertEqual(0, version)
def test_revision_message_autogenerate(self):
self.migration_manager.revision('test', True)
self.ext.obj.revision.assert_called_once_with('test', True)
def test_revision_only_message(self):
self.migration_manager.revision('test', False)
self.ext.obj.revision.assert_called_once_with('test', False)
def test_stamp(self):
self.migration_manager.stamp('stamp')
self.ext.obj.stamp.assert_called_once_with('stamp')
def test_wrong_config(self):
err = self.assertRaises(ValueError,
manager.MigrationManager,
{'wrong_key': 'sqlite://'})
self.assertEqual('Either database url or engine must be provided.',
err.args[0])
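# Assembled from the config keys in setUp: a minimal sketch of driving
# the manager directly. Both paths are placeholders; per
# test_wrong_config, either a db_url or an engine must be supplied.
config = {'alembic_ini_path': '/path/to/alembic.ini',
          'migrate_repo_path': '/path/to/migrate_repo',
          'db_url': 'sqlite://'}
mgr = manager.MigrationManager(config)
mgr.upgrade(None)          # run every enabled extension up to its latest
current = mgr.version()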
class TestMigrationMultipleExtensions(test_base.BaseTestCase):
def setUp(self):
self.migration_config = {'alembic_ini_path': '.',
'migrate_repo_path': '.',
'db_url': 'sqlite://'}
engine = sqlalchemy.create_engine(self.migration_config['db_url'])
self.migration_manager = manager.MigrationManager(
self.migration_config, engine)
self.first_ext = MockWithCmp()
self.first_ext.obj.order = 1
self.first_ext.obj.upgrade.return_value = 100
self.first_ext.obj.downgrade.return_value = 0
self.second_ext = MockWithCmp()
self.second_ext.obj.order = 2
self.second_ext.obj.upgrade.return_value = 200
self.second_ext.obj.downgrade.return_value = 100
self.migration_manager._manager.extensions = [self.first_ext,
self.second_ext]
super(TestMigrationMultipleExtensions, self).setUp()
def test_upgrade_right_order(self):
results = self.migration_manager.upgrade(None)
self.assertEqual([100, 200], results)
def test_downgrade_right_order(self):
results = self.migration_manager.downgrade(None)
self.assertEqual([100, 0], results)
def test_upgrade_does_not_go_too_far(self):
self.first_ext.obj.has_revision.return_value = True
self.second_ext.obj.has_revision.return_value = False
self.second_ext.obj.upgrade.side_effect = AssertionError(
'this method should not have been called')
results = self.migration_manager.upgrade(100)
self.assertEqual([100], results)
def test_downgrade_does_not_go_too_far(self):
self.second_ext.obj.has_revision.return_value = True
self.first_ext.obj.has_revision.return_value = False
self.first_ext.obj.downgrade.side_effect = AssertionError(
'this method should not have been called')
results = self.migration_manager.downgrade(100)
self.assertEqual([100], results)
def test_upgrade_checks_rev_existence(self):
self.first_ext.obj.has_revision.return_value = False
self.second_ext.obj.has_revision.return_value = False
# upgrade to a specific non-existent revision should fail
self.assertRaises(exception.DBMigrationError,
self.migration_manager.upgrade, 100)
# upgrade to the "head" should succeed
self.assertEqual([100, 200], self.migration_manager.upgrade(None))
# let's assume the second ext has the revision, upgrade should succeed
self.second_ext.obj.has_revision.return_value = True
self.assertEqual([100, 200], self.migration_manager.upgrade(200))
# upgrade to the "head" should still succeed
self.assertEqual([100, 200], self.migration_manager.upgrade(None))
def test_downgrade_checks_rev_existence(self):
self.first_ext.obj.has_revision.return_value = False
self.second_ext.obj.has_revision.return_value = False
# downgrade to a specific non-existent revision should fail
self.assertRaises(exception.DBMigrationError,
self.migration_manager.downgrade, 100)
# downgrade to the "base" should succeed
self.assertEqual([100, 0], self.migration_manager.downgrade(None))
# let's assume the first ext has the revision, downgrade should
# succeed
self.first_ext.obj.has_revision.return_value = True
self.assertEqual([100, 0], self.migration_manager.downgrade(200))
# downgrade to the "base" should still succeed
self.assertEqual([100, 0], self.migration_manager.downgrade(None))
self.assertEqual([100, 0], self.migration_manager.downgrade('base'))


@ -1,287 +0,0 @@
# Copyright 2013 Mirantis Inc.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import os
import tempfile
from migrate import exceptions as migrate_exception
from migrate.versioning import api as versioning_api
import mock
import sqlalchemy
from oslo_db import exception as db_exception
from oslo_db.sqlalchemy import migration
from oslo_db.tests.sqlalchemy import base as test_base
from oslo_db.tests import utils as test_utils
class TestMigrationCommon(test_base.DbTestCase):
def setUp(self):
super(TestMigrationCommon, self).setUp()
migration._REPOSITORY = None
self.path = tempfile.mkdtemp('test_migration')
self.path1 = tempfile.mkdtemp('test_migration')
self.return_value = '/home/openstack/migrations'
self.return_value1 = '/home/extension/migrations'
self.init_version = 1
self.test_version = 123
self.patcher_repo = mock.patch.object(migration, 'Repository')
self.repository = self.patcher_repo.start()
self.repository.side_effect = [self.return_value, self.return_value1]
self.mock_api_db = mock.patch.object(versioning_api, 'db_version')
self.mock_api_db_version = self.mock_api_db.start()
self.mock_api_db_version.return_value = self.test_version
def tearDown(self):
os.rmdir(self.path)
self.mock_api_db.stop()
self.patcher_repo.stop()
super(TestMigrationCommon, self).tearDown()
def test_find_migrate_repo_path_not_found(self):
self.assertRaises(
db_exception.DBMigrationError,
migration._find_migrate_repo,
"/foo/bar/",
)
self.assertIsNone(migration._REPOSITORY)
def test_find_migrate_repo_called_once(self):
my_repository = migration._find_migrate_repo(self.path)
self.repository.assert_called_once_with(self.path)
self.assertEqual(self.return_value, my_repository)
def test_find_migrate_repo_called_few_times(self):
repo1 = migration._find_migrate_repo(self.path)
repo2 = migration._find_migrate_repo(self.path1)
self.assertNotEqual(repo1, repo2)
def test_db_version_control(self):
with test_utils.nested(
mock.patch.object(migration, '_find_migrate_repo'),
mock.patch.object(versioning_api, 'version_control'),
) as (mock_find_repo, mock_version_control):
mock_find_repo.return_value = self.return_value
version = migration.db_version_control(
self.engine, self.path, self.test_version)
self.assertEqual(self.test_version, version)
mock_version_control.assert_called_once_with(
self.engine, self.return_value, self.test_version)
@mock.patch.object(migration, '_find_migrate_repo')
@mock.patch.object(versioning_api, 'version_control')
def test_db_version_control_version_less_than_actual_version(
self, mock_version_control, mock_find_repo):
mock_find_repo.return_value = self.return_value
mock_version_control.side_effect = (migrate_exception.
DatabaseAlreadyControlledError)
self.assertRaises(db_exception.DBMigrationError,
migration.db_version_control, self.engine,
self.path, self.test_version - 1)
@mock.patch.object(migration, '_find_migrate_repo')
@mock.patch.object(versioning_api, 'version_control')
def test_db_version_control_version_greater_than_actual_version(
self, mock_version_control, mock_find_repo):
mock_find_repo.return_value = self.return_value
mock_version_control.side_effect = (migrate_exception.
InvalidVersionError)
self.assertRaises(db_exception.DBMigrationError,
migration.db_version_control, self.engine,
self.path, self.test_version + 1)
def test_db_version_return(self):
ret_val = migration.db_version(self.engine, self.path,
self.init_version)
self.assertEqual(self.test_version, ret_val)
def test_db_version_raise_not_controlled_error_first(self):
with mock.patch.object(migration, 'db_version_control') as mock_ver:
self.mock_api_db_version.side_effect = [
migrate_exception.DatabaseNotControlledError('oups'),
self.test_version]
ret_val = migration.db_version(self.engine, self.path,
self.init_version)
self.assertEqual(self.test_version, ret_val)
mock_ver.assert_called_once_with(self.engine, self.path,
version=self.init_version)
def test_db_version_raise_not_controlled_error_tables(self):
with mock.patch.object(sqlalchemy, 'MetaData') as mock_meta:
self.mock_api_db_version.side_effect = \
migrate_exception.DatabaseNotControlledError('oups')
my_meta = mock.MagicMock()
my_meta.tables = {'a': 1, 'b': 2}
mock_meta.return_value = my_meta
self.assertRaises(
db_exception.DBMigrationError, migration.db_version,
self.engine, self.path, self.init_version)
@mock.patch.object(versioning_api, 'version_control')
def test_db_version_raise_not_controlled_error_no_tables(self, mock_vc):
with mock.patch.object(sqlalchemy, 'MetaData') as mock_meta:
self.mock_api_db_version.side_effect = (
migrate_exception.DatabaseNotControlledError('oups'),
self.init_version)
my_meta = mock.MagicMock()
my_meta.tables = {}
mock_meta.return_value = my_meta
migration.db_version(self.engine, self.path, self.init_version)
mock_vc.assert_called_once_with(self.engine, self.return_value1,
self.init_version)
@mock.patch.object(versioning_api, 'version_control')
def test_db_version_raise_not_controlled_alembic_tables(self, mock_vc):
# When tables exist and the alembic control table
# (alembic_version) is among them, attempt to version the db.
# This simulates the case where there are multiple repos (different
# abs_paths) and a different path has been versioned already.
with mock.patch.object(sqlalchemy, 'MetaData') as mock_meta:
self.mock_api_db_version.side_effect = [
migrate_exception.DatabaseNotControlledError('oups'), None]
my_meta = mock.MagicMock()
my_meta.tables = {'alembic_version': 1, 'b': 2}
mock_meta.return_value = my_meta
migration.db_version(self.engine, self.path, self.init_version)
mock_vc.assert_called_once_with(self.engine, self.return_value1,
self.init_version)
@mock.patch.object(versioning_api, 'version_control')
def test_db_version_raise_not_controlled_migrate_tables(self, mock_vc):
# When tables exist and the sqlalchemy-migrate control table
# (migrate_version) is among them, attempt to version the db.
# This simulates the case where there are multiple repos (different
# abs_paths) and a different path has been versioned already.
with mock.patch.object(sqlalchemy, 'MetaData') as mock_meta:
self.mock_api_db_version.side_effect = [
migrate_exception.DatabaseNotControlledError('oups'), None]
my_meta = mock.MagicMock()
my_meta.tables = {'migrate_version': 1, 'b': 2}
mock_meta.return_value = my_meta
migration.db_version(self.engine, self.path, self.init_version)
mock_vc.assert_called_once_with(self.engine, self.return_value1,
self.init_version)
def test_db_sync_wrong_version(self):
self.assertRaises(db_exception.DBMigrationError,
migration.db_sync, self.engine, self.path, 'foo')
@mock.patch.object(versioning_api, 'upgrade')
def test_db_sync_script_not_present(self, upgrade):
# For a non-existent migration script file, sqlalchemy-migrate will
# raise VersionNotFoundError, which will be wrapped in DbMigrationError.
upgrade.side_effect = migrate_exception.VersionNotFoundError
self.assertRaises(db_exception.DbMigrationError,
migration.db_sync, self.engine, self.path,
self.test_version + 1)
@mock.patch.object(versioning_api, 'upgrade')
def test_db_sync_known_error_raised(self, upgrade):
upgrade.side_effect = migrate_exception.KnownError
self.assertRaises(db_exception.DbMigrationError,
migration.db_sync, self.engine, self.path,
self.test_version + 1)
def test_db_sync_upgrade(self):
init_ver = 55
with test_utils.nested(
mock.patch.object(migration, '_find_migrate_repo'),
mock.patch.object(versioning_api, 'upgrade')
) as (mock_find_repo, mock_upgrade):
mock_find_repo.return_value = self.return_value
self.mock_api_db_version.return_value = self.test_version - 1
migration.db_sync(self.engine, self.path, self.test_version,
init_ver)
mock_upgrade.assert_called_once_with(
self.engine, self.return_value, self.test_version)
def test_db_sync_downgrade(self):
with test_utils.nested(
mock.patch.object(migration, '_find_migrate_repo'),
mock.patch.object(versioning_api, 'downgrade')
) as (mock_find_repo, mock_downgrade):
mock_find_repo.return_value = self.return_value
self.mock_api_db_version.return_value = self.test_version + 1
migration.db_sync(self.engine, self.path, self.test_version)
mock_downgrade.assert_called_once_with(
self.engine, self.return_value, self.test_version)
def test_db_sync_sanity_called(self):
with test_utils.nested(
mock.patch.object(migration, '_find_migrate_repo'),
mock.patch.object(migration, '_db_schema_sanity_check'),
mock.patch.object(versioning_api, 'downgrade')
) as (mock_find_repo, mock_sanity, mock_downgrade):
mock_find_repo.return_value = self.return_value
migration.db_sync(self.engine, self.path, self.test_version)
self.assertEqual([mock.call(self.engine), mock.call(self.engine)],
mock_sanity.call_args_list)
def test_db_sync_sanity_skipped(self):
with test_utils.nested(
mock.patch.object(migration, '_find_migrate_repo'),
mock.patch.object(migration, '_db_schema_sanity_check'),
mock.patch.object(versioning_api, 'downgrade')
) as (mock_find_repo, mock_sanity, mock_downgrade):
mock_find_repo.return_value = self.return_value
migration.db_sync(self.engine, self.path, self.test_version,
sanity_check=False)
self.assertFalse(mock_sanity.called)
def test_db_sanity_table_not_utf8(self):
with mock.patch.object(self, 'engine') as mock_eng:
type(mock_eng).name = mock.PropertyMock(return_value='mysql')
mock_eng.execute.return_value = [['table_A', 'latin1'],
['table_B', 'latin1']]
self.assertRaises(ValueError, migration._db_schema_sanity_check,
mock_eng)
def test_db_sanity_table_not_utf8_exclude_migrate_tables(self):
with mock.patch.object(self, 'engine') as mock_eng:
type(mock_eng).name = mock.PropertyMock(return_value='mysql')
# NOTE(morganfainberg): Check both lower and upper case versions
# of the migration table names (validate case insensitivity in
# the sanity check).
mock_eng.execute.return_value = [['migrate_version', 'latin1'],
['alembic_version', 'latin1'],
['MIGRATE_VERSION', 'latin1'],
['ALEMBIC_VERSION', 'latin1']]
migration._db_schema_sanity_check(mock_eng)
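# For orientation, the public helpers exercised above compose as
# follows; a hedged sketch, with the repository path a placeholder for a
# sqlalchemy-migrate repository on disk.
engine = sqlalchemy.create_engine('sqlite://')
repo = '/path/to/migrate_repo'
current = migration.db_version(engine, repo, init_version=0)
migration.db_sync(engine, repo)                   # upgrade to latest
migration.db_sync(engine, repo, version=current)  # or pin a version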


@ -1,566 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# Copyright 2012-2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
from migrate.versioning import api as versioning_api
import mock
from oslotest import base as test
import six
import sqlalchemy as sa
import sqlalchemy.ext.declarative as sa_decl
from oslo_db import exception as exc
from oslo_db.sqlalchemy import test_migrations as migrate
from oslo_db.tests.sqlalchemy import base as test_base
class TestWalkVersions(test.BaseTestCase, migrate.WalkVersionsMixin):
migration_api = mock.MagicMock()
REPOSITORY = mock.MagicMock()
engine = mock.MagicMock()
INIT_VERSION = versioning_api.VerNum(4)
@property
def migrate_engine(self):
return self.engine
def test_migrate_up(self):
self.migration_api.db_version.return_value = 141
self.migrate_up(141)
self.migration_api.upgrade.assert_called_with(
self.engine, self.REPOSITORY, 141)
self.migration_api.db_version.assert_called_with(
self.engine, self.REPOSITORY)
@staticmethod
def _fake_upgrade_boom(*args, **kwargs):
raise exc.DBMigrationError("boom")
def test_migrate_up_fail(self):
version = 141
self.migration_api.db_version.return_value = version
expected_output = (u"Failed to migrate to version %(version)s on "
"engine %(engine)s\n" %
{'version': version, 'engine': self.engine})
with mock.patch.object(self.migration_api,
'upgrade',
side_effect=self._fake_upgrade_boom):
log = self.useFixture(fixtures.FakeLogger())
self.assertRaises(exc.DBMigrationError, self.migrate_up, version)
self.assertEqual(expected_output, log.output)
def test_migrate_up_with_data(self):
test_value = {"a": 1, "b": 2}
self.migration_api.db_version.return_value = 141
self._pre_upgrade_141 = mock.MagicMock()
self._pre_upgrade_141.return_value = test_value
self._check_141 = mock.MagicMock()
self.migrate_up(141, True)
self._pre_upgrade_141.assert_called_with(self.engine)
self._check_141.assert_called_with(self.engine, test_value)
def test_migrate_down(self):
self.migration_api.db_version.return_value = 42
self.assertTrue(self.migrate_down(42))
self.migration_api.db_version.assert_called_with(
self.engine, self.REPOSITORY)
def test_migrate_down_not_implemented(self):
with mock.patch.object(self.migration_api,
'downgrade',
side_effect=NotImplementedError):
self.assertFalse(self.migrate_down(self.engine, 42))
def test_migrate_down_with_data(self):
self._post_downgrade_043 = mock.MagicMock()
self.migration_api.db_version.return_value = 42
self.migrate_down(42, True)
self._post_downgrade_043.assert_called_with(self.engine)
@mock.patch.object(migrate.WalkVersionsMixin, 'migrate_up')
@mock.patch.object(migrate.WalkVersionsMixin, 'migrate_down')
def test_walk_versions_all_default(self, migrate_up, migrate_down):
self.REPOSITORY.latest = versioning_api.VerNum(20)
self.migration_api.db_version.return_value = self.INIT_VERSION
self.walk_versions()
self.migration_api.version_control.assert_called_with(
self.engine, self.REPOSITORY, self.INIT_VERSION)
self.migration_api.db_version.assert_called_with(
self.engine, self.REPOSITORY)
versions = range(int(self.INIT_VERSION) + 1,
int(self.REPOSITORY.latest) + 1)
upgraded = [mock.call(v, with_data=True)
for v in versions]
self.assertEqual(upgraded, self.migrate_up.call_args_list)
downgraded = [mock.call(v - 1) for v in reversed(versions)]
self.assertEqual(downgraded, self.migrate_down.call_args_list)
@mock.patch.object(migrate.WalkVersionsMixin, 'migrate_up')
@mock.patch.object(migrate.WalkVersionsMixin, 'migrate_down')
def test_walk_versions_all_true(self, migrate_up, migrate_down):
self.REPOSITORY.latest = versioning_api.VerNum(20)
self.migration_api.db_version.return_value = self.INIT_VERSION
self.walk_versions(snake_walk=True, downgrade=True)
versions = range(int(self.INIT_VERSION) + 1,
int(self.REPOSITORY.latest) + 1)
upgraded = []
for v in versions:
upgraded.append(mock.call(v, with_data=True))
upgraded.append(mock.call(v))
upgraded.extend([mock.call(v) for v in reversed(versions)])
self.assertEqual(upgraded, self.migrate_up.call_args_list)
downgraded_1 = [mock.call(v - 1, with_data=True) for v in versions]
downgraded_2 = []
for v in reversed(versions):
downgraded_2.append(mock.call(v - 1))
downgraded_2.append(mock.call(v - 1))
downgraded = downgraded_1 + downgraded_2
self.assertEqual(downgraded, self.migrate_down.call_args_list)
@mock.patch.object(migrate.WalkVersionsMixin, 'migrate_up')
@mock.patch.object(migrate.WalkVersionsMixin, 'migrate_down')
def test_walk_versions_true_false(self, migrate_up, migrate_down):
self.REPOSITORY.latest = versioning_api.VerNum(20)
self.migration_api.db_version.return_value = self.INIT_VERSION
self.walk_versions(snake_walk=True, downgrade=False)
versions = range(int(self.INIT_VERSION) + 1,
int(self.REPOSITORY.latest) + 1)
upgraded = []
for v in versions:
upgraded.append(mock.call(v, with_data=True))
upgraded.append(mock.call(v))
self.assertEqual(upgraded, self.migrate_up.call_args_list)
downgraded = [mock.call(v - 1, with_data=True) for v in versions]
self.assertEqual(downgraded, self.migrate_down.call_args_list)
@mock.patch.object(migrate.WalkVersionsMixin, 'migrate_up')
@mock.patch.object(migrate.WalkVersionsMixin, 'migrate_down')
def test_walk_versions_all_false(self, migrate_up, migrate_down):
self.REPOSITORY.latest = versioning_api.VerNum(20)
self.migration_api.db_version.return_value = self.INIT_VERSION
self.walk_versions(snake_walk=False, downgrade=False)
versions = range(int(self.INIT_VERSION) + 1,
int(self.REPOSITORY.latest) + 1)
upgraded = [mock.call(v, with_data=True) for v in versions]
self.assertEqual(upgraded, self.migrate_up.call_args_list)
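# The with_data protocol the tests above rely on, spelled out; a sketch
# following the zero-padded naming convention visible in
# _post_downgrade_043. The class and version numbers are illustrative.
class ExampleWalk(migrate.WalkVersionsMixin):
    def _pre_upgrade_141(self, engine):
        # runs before upgrading to 141; returns seed data
        return {'a': 1, 'b': 2}

    def _check_141(self, engine, data):
        # runs after the upgrade, with the seed data, to verify it survived
        assert data == {'a': 1, 'b': 2}

    def _post_downgrade_043(self, engine):
        # runs after downgrading from 43 back to 42
        pass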
class ModelsMigrationSyncMixin(test_base.DbTestCase):
def setUp(self):
super(ModelsMigrationSyncMixin, self).setUp()
self.metadata = sa.MetaData()
self.metadata_migrations = sa.MetaData()
sa.Table(
'testtbl', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('spam', sa.String(10), nullable=False),
sa.Column('eggs', sa.DateTime),
sa.Column('foo', sa.Boolean,
server_default=sa.sql.expression.true()),
sa.Column('bool_wo_default', sa.Boolean),
sa.Column('bar', sa.Numeric(10, 5)),
sa.Column('defaulttest', sa.Integer, server_default='5'),
sa.Column('defaulttest2', sa.String(8), server_default=''),
sa.Column('defaulttest3', sa.String(5), server_default="test"),
sa.Column('defaulttest4', sa.Enum('first', 'second',
name='testenum'),
server_default="first"),
sa.Column('variant', sa.BigInteger()),
sa.Column('variant2', sa.BigInteger(), server_default='0'),
sa.Column('fk_check', sa.String(36), nullable=False),
sa.UniqueConstraint('spam', 'eggs', name='uniq_cons'),
)
BASE = sa_decl.declarative_base(metadata=self.metadata)
class TestModel(BASE):
__tablename__ = 'testtbl'
__table_args__ = (
sa.UniqueConstraint('spam', 'eggs', name='uniq_cons'),
)
id = sa.Column('id', sa.Integer, primary_key=True)
spam = sa.Column('spam', sa.String(10), nullable=False)
eggs = sa.Column('eggs', sa.DateTime)
foo = sa.Column('foo', sa.Boolean,
server_default=sa.sql.expression.true())
fk_check = sa.Column('fk_check', sa.String(36), nullable=False)
bool_wo_default = sa.Column('bool_wo_default', sa.Boolean)
defaulttest = sa.Column('defaulttest',
sa.Integer, server_default='5')
defaulttest2 = sa.Column('defaulttest2', sa.String(8),
server_default='')
defaulttest3 = sa.Column('defaulttest3', sa.String(5),
server_default="test")
defaulttest4 = sa.Column('defaulttest4', sa.Enum('first', 'second',
name='testenum'),
server_default="first")
variant = sa.Column(sa.BigInteger().with_variant(
sa.Integer(), 'sqlite'))
variant2 = sa.Column(sa.BigInteger().with_variant(
sa.Integer(), 'sqlite'), server_default='0')
bar = sa.Column('bar', sa.Numeric(10, 5))
class ModelThatShouldNotBeCompared(BASE):
__tablename__ = 'testtbl2'
id = sa.Column('id', sa.Integer, primary_key=True)
spam = sa.Column('spam', sa.String(10), nullable=False)
def get_metadata(self):
return self.metadata
def get_engine(self):
return self.engine
def db_sync(self, engine):
self.metadata_migrations.create_all(bind=engine)
def include_object(self, object_, name, type_, reflected, compare_to):
if type_ == 'table':
return name == 'testtbl'
else:
return True
def _test_models_not_sync_filtered(self):
self.metadata_migrations.clear()
sa.Table(
'table', self.metadata_migrations,
sa.Column('fk_check', sa.String(36), nullable=False),
sa.PrimaryKeyConstraint('fk_check'),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('spam', sa.String(8), nullable=True),
sa.Column('eggs', sa.DateTime),
sa.Column('foo', sa.Boolean,
server_default=sa.sql.expression.false()),
sa.Column('bool_wo_default', sa.Boolean, unique=True),
sa.Column('bar', sa.BigInteger),
sa.Column('defaulttest', sa.Integer, server_default='7'),
sa.Column('defaulttest2', sa.String(8), server_default=''),
sa.Column('defaulttest3', sa.String(5), server_default="fake"),
sa.Column('defaulttest4',
sa.Enum('first', 'second', name='testenum'),
server_default="first"),
sa.Column('fk_check', sa.String(36), nullable=False),
sa.UniqueConstraint('spam', 'foo', name='uniq_cons'),
sa.ForeignKeyConstraint(['fk_check'], ['table.fk_check']),
mysql_engine='InnoDB'
)
with mock.patch.object(self, 'filter_metadata_diff') as filter_mock:
def filter_diffs(diffs):
# test filter returning only constraint related diffs
return [
diff
for diff in diffs
if 'constraint' in diff[0]
]
filter_mock.side_effect = filter_diffs
msg = six.text_type(self.assertRaises(AssertionError,
self.test_models_sync))
self.assertNotIn('defaulttest', msg)
self.assertNotIn('defaulttest3', msg)
self.assertNotIn('remove_fk', msg)
self.assertIn('constraint', msg)
def _test_models_not_sync(self):
self.metadata_migrations.clear()
sa.Table(
'table', self.metadata_migrations,
sa.Column('fk_check', sa.String(36), nullable=False),
sa.PrimaryKeyConstraint('fk_check'),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('spam', sa.String(8), nullable=True),
sa.Column('eggs', sa.DateTime),
sa.Column('foo', sa.Boolean,
server_default=sa.sql.expression.false()),
sa.Column('bool_wo_default', sa.Boolean, unique=True),
sa.Column('bar', sa.BigInteger),
sa.Column('defaulttest', sa.Integer, server_default='7'),
sa.Column('defaulttest2', sa.String(8), server_default=''),
sa.Column('defaulttest3', sa.String(5), server_default="fake"),
sa.Column('defaulttest4',
sa.Enum('first', 'second', name='testenum'),
server_default="first"),
sa.Column('variant', sa.String(10)),
sa.Column('fk_check', sa.String(36), nullable=False),
sa.UniqueConstraint('spam', 'foo', name='uniq_cons'),
sa.ForeignKeyConstraint(['fk_check'], ['table.fk_check']),
mysql_engine='InnoDB'
)
msg = six.text_type(self.assertRaises(AssertionError,
self.test_models_sync))
# NOTE(I159): Check that the table and columns are mentioned.
# The log is invalid json, so we can't parse it and check it for
# full compliance. We have no guarantee of the log items ordering,
# so we can't use a regexp.
self.assertTrue(msg.startswith(
'Models and migration scripts aren\'t in sync:'))
self.assertIn('testtbl', msg)
self.assertIn('spam', msg)
self.assertIn('eggs', msg) # test that the unique constraint is added
self.assertIn('foo', msg)
self.assertIn('bar', msg)
self.assertIn('bool_wo_default', msg)
self.assertIn('defaulttest', msg)
self.assertIn('defaulttest3', msg)
self.assertIn('remove_fk', msg)
self.assertIn('variant', msg)
class ModelsMigrationsSyncMysql(ModelsMigrationSyncMixin,
migrate.ModelsMigrationsSync,
test_base.MySQLOpportunisticTestCase):
def test_models_not_sync(self):
self._test_models_not_sync()
def test_models_not_sync_filtered(self):
self._test_models_not_sync_filtered()
class ModelsMigrationsSyncPsql(ModelsMigrationSyncMixin,
migrate.ModelsMigrationsSync,
test_base.PostgreSQLOpportunisticTestCase):
def test_models_not_sync(self):
self._test_models_not_sync()
def test_models_not_sync_filtered(self):
self._test_models_not_sync_filtered()
class TestOldCheckForeignKeys(test_base.DbTestCase):
def setUp(self):
super(TestOldCheckForeignKeys, self).setUp()
test = self
class MigrateSync(migrate.ModelsMigrationsSync):
def get_engine(self):
return test.engine
def get_metadata(self):
return test.metadata
def db_sync(self):
raise NotImplementedError()
self.migration_sync = MigrateSync()
def _fk_added_fixture(self):
self.metadata = sa.MetaData()
self.metadata_migrations = sa.MetaData()
sa.Table(
'testtbl_one', self.metadata,
sa.Column('id', sa.Integer, primary_key=True),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_two', self.metadata,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('tone_id', sa.Integer),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_one', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_two', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column(
'tone_id', sa.Integer,
sa.ForeignKey('testtbl_one.id', name="tone_id_fk")),
mysql_engine='InnoDB'
)
def _fk_removed_fixture(self):
self.metadata = sa.MetaData()
self.metadata_migrations = sa.MetaData()
sa.Table(
'testtbl_one', self.metadata,
sa.Column('id', sa.Integer, primary_key=True),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_two', self.metadata,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column(
'tone_id', sa.Integer,
sa.ForeignKey('testtbl_one.id', name="tone_id_fk")),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_one', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_two', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('tone_id', sa.Integer),
mysql_engine='InnoDB'
)
def _fk_no_change_fixture(self):
self.metadata = sa.MetaData()
self.metadata_migrations = sa.MetaData()
sa.Table(
'testtbl_one', self.metadata,
sa.Column('id', sa.Integer, primary_key=True),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_two', self.metadata,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column(
'tone_id', sa.Integer,
sa.ForeignKey('testtbl_one.id', name="tone_id_fk")),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_one', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
mysql_engine='InnoDB'
)
sa.Table(
'testtbl_two', self.metadata_migrations,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column(
'tone_id', sa.Integer,
sa.ForeignKey('testtbl_one.id', name="tone_id_fk")),
mysql_engine='InnoDB'
)
def _run_test(self):
self.metadata.create_all(bind=self.engine)
return self.migration_sync.check_foreign_keys(
self.metadata_migrations, self.engine)
def _compare_diffs(self, diffs, compare_to):
diffs = [
(
cmd,
fk._get_colspec() if isinstance(fk, sa.ForeignKey)
else "tone_id_fk" if fk is None # sqlite workaround
else fk,
tname, fk_info
)
for cmd, fk, tname, fk_info in diffs
]
self.assertEqual(compare_to, diffs)
def test_fk_added(self):
self._fk_added_fixture()
diffs = self._run_test()
self._compare_diffs(
diffs,
[(
'add_key',
'testtbl_one.id',
'testtbl_two',
self.migration_sync.FKInfo(
constrained_columns=('tone_id',),
referred_table='testtbl_one',
referred_columns=('id',))
)]
)
def test_fk_removed(self):
self._fk_removed_fixture()
diffs = self._run_test()
self._compare_diffs(
diffs,
[(
'drop_key',
"tone_id_fk",
'testtbl_two',
self.migration_sync.FKInfo(
constrained_columns=('tone_id',),
referred_table='testtbl_one',
referred_columns=('id',))
)]
)
def test_fk_no_change(self):
self._fk_no_change_fixture()
diffs = self._run_test()
self._compare_diffs(
diffs,
[])
class PGTestOldCheckForeignKeys(
TestOldCheckForeignKeys, test_base.PostgreSQLOpportunisticTestCase):
pass
class MySQLTestOldCheckForeignKeys(
TestOldCheckForeignKeys, test_base.MySQLOpportunisticTestCase):
pass


@ -1,241 +0,0 @@
# Copyright 2012 Cloudscaling Group, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import datetime
import mock
from oslotest import base as oslo_test
from sqlalchemy import Column
from sqlalchemy import Integer, String
from sqlalchemy import event
from sqlalchemy.ext.declarative import declarative_base
from oslo_db.sqlalchemy import models
from oslo_db.tests.sqlalchemy import base as test_base
BASE = declarative_base()
class ModelBaseTest(test_base.DbTestCase):
def setUp(self):
super(ModelBaseTest, self).setUp()
self.mb = models.ModelBase()
self.ekm = ExtraKeysModel()
def test_modelbase_has_dict_methods(self):
dict_methods = ('__getitem__',
'__setitem__',
'__contains__',
'get',
'update',
'save',
'items',
'iteritems',
'keys')
for method in dict_methods:
self.assertTrue(hasattr(models.ModelBase, method),
"Method %s() is not found" % method)
def test_modelbase_is_iterable(self):
self.assertTrue(issubclass(models.ModelBase, collections.Iterable))
def test_modelbase_set(self):
self.mb['world'] = 'hello'
self.assertEqual('hello', self.mb['world'])
def test_modelbase_update(self):
h = {'a': '1', 'b': '2'}
self.mb.update(h)
for key in h.keys():
self.assertEqual(h[key], self.mb[key])
def test_modelbase_contains(self):
mb = models.ModelBase()
h = {'a': '1', 'b': '2'}
mb.update(h)
for key in h.keys():
# membership checks behave like a plain dict's 'in'
self.assertIn(key, mb)
self.assertNotIn('non-existent-key', mb)
def test_modelbase_contains_exc(self):
class ErrorModel(models.ModelBase):
@property
def bug(self):
raise ValueError
model = ErrorModel()
model.update({'attr': 5})
self.assertIn('attr', model)
self.assertRaises(ValueError, lambda: 'bug' in model)
def test_modelbase_items_iteritems(self):
h = {'a': '1', 'b': '2'}
expected = {
'id': None,
'smth': None,
'name': 'NAME',
'a': '1',
'b': '2',
}
self.ekm.update(h)
self.assertEqual(expected, dict(self.ekm.items()))
self.assertEqual(expected, dict(self.ekm.iteritems()))
def test_modelbase_dict(self):
h = {'a': '1', 'b': '2'}
expected = {
'id': None,
'smth': None,
'name': 'NAME',
'a': '1',
'b': '2',
}
self.ekm.update(h)
self.assertEqual(expected, dict(self.ekm))
def test_modelbase_iter(self):
expected = {
'id': None,
'smth': None,
'name': 'NAME',
}
i = iter(self.ekm)
found_items = 0
while True:
r = next(i, None)
if r is None:
break
self.assertEqual(expected[r[0]], r[1])
found_items += 1
self.assertEqual(len(expected), found_items)
def test_modelbase_keys(self):
self.assertEqual(set(('id', 'smth', 'name')), set(self.ekm.keys()))
self.ekm.update({'a': '1', 'b': '2'})
self.assertEqual(set(('a', 'b', 'id', 'smth', 'name')),
set(self.ekm.keys()))
def test_modelbase_several_iters(self):
mb = ExtraKeysModel()
it1 = iter(mb)
it2 = iter(mb)
self.assertFalse(it1 is it2)
self.assertEqual(dict(mb), dict(it1))
self.assertEqual(dict(mb), dict(it2))
def test_extra_keys_empty(self):
"""Test verifies that by default extra_keys return empty list."""
self.assertEqual([], self.mb._extra_keys)
def test_extra_keys_defined(self):
"""Property _extra_keys will return list with attributes names."""
self.assertEqual(['name'], self.ekm._extra_keys)
def test_model_with_extra_keys(self):
data = dict(self.ekm)
self.assertEqual({'smth': None,
'id': None,
'name': 'NAME'},
data)
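# The dict-style contract verified above, condensed; a minimal sketch.
# Note that iteration and keys() additionally need a mapped model, which
# is why the tests use ExtraKeysModel below for those.
mb = models.ModelBase()
mb['world'] = 'hello'              # __setitem__ / __getitem__
mb.update({'a': '1', 'b': '2'})
assert 'world' in mb and mb['world'] == 'hello'
assert mb.get('missing', 'default') == 'default'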
class ExtraKeysModel(BASE, models.ModelBase):
__tablename__ = 'test_model'
id = Column(Integer, primary_key=True)
smth = Column(String(255))
@property
def name(self):
return 'NAME'
@property
def _extra_keys(self):
return ['name']
class TimestampMixinTest(oslo_test.BaseTestCase):
def test_timestampmixin_attr(self):
methods = ('created_at',
'updated_at')
for method in methods:
self.assertTrue(hasattr(models.TimestampMixin, method),
"Method %s() is not found" % method)
class SoftDeletedModel(BASE, models.ModelBase, models.SoftDeleteMixin):
__tablename__ = 'test_model_soft_deletes'
id = Column('id', Integer, primary_key=True)
smth = Column('smth', String(255))
class SoftDeleteMixinTest(test_base.DbTestCase):
def setUp(self):
super(SoftDeleteMixinTest, self).setUp()
t = BASE.metadata.tables['test_model_soft_deletes']
t.create(self.engine)
self.addCleanup(t.drop, self.engine)
self.session = self.sessionmaker(autocommit=False)
self.addCleanup(self.session.close)
@mock.patch('oslo_utils.timeutils.utcnow')
def test_soft_delete(self, mock_utcnow):
dt = datetime.datetime.utcnow().replace(microsecond=0)
mock_utcnow.return_value = dt
m = SoftDeletedModel(id=123456, smth='test')
self.session.add(m)
self.session.commit()
self.assertEqual(0, m.deleted)
self.assertIsNone(m.deleted_at)
m.soft_delete(self.session)
self.assertEqual(123456, m.deleted)
self.assertIs(dt, m.deleted_at)
def test_soft_delete_coerce_deleted_to_integer(self):
def listener(conn, cur, stmt, params, context, executemany):
if 'insert' in stmt.lower(): # ignore SELECT 1 and BEGIN
self.assertNotIn('False', str(params))
event.listen(self.engine, 'before_cursor_execute', listener)
self.addCleanup(event.remove,
self.engine, 'before_cursor_execute', listener)
m = SoftDeletedModel(id=1, smth='test', deleted=False)
self.session.add(m)
self.session.commit()
def test_deleted_set_to_null(self):
m = SoftDeletedModel(id=123456, smth='test')
self.session.add(m)
self.session.commit()
m.deleted = None
self.session.commit()
self.assertIsNone(m.deleted)
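# The soft-delete contract pinned down above, in one short sketch; it
# assumes a session and table created the same way as in setUp, and the
# id value is arbitrary.
m = SoftDeletedModel(id=123, smth='payload')
session.add(m)
session.commit()
assert m.deleted == 0 and m.deleted_at is None   # live row
m.soft_delete(session)
assert m.deleted == 123                          # stamped with the row id
assert m.deleted_at is not None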


@ -1,176 +0,0 @@
# Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for MySQL Cluster (NDB) Support."""
import logging
import mock
from oslo_db import exception
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import engines
from oslo_db.sqlalchemy import ndb
from oslo_db.sqlalchemy import test_fixtures
from oslo_db.sqlalchemy import utils
from oslotest import base as test_base
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import MetaData
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import Text
from sqlalchemy import create_engine
from sqlalchemy import schema
from sqlalchemy.dialects.mysql import TINYTEXT
LOG = logging.getLogger(__name__)
_MOCK_CONNECTION = 'mysql+pymysql://'
_TEST_TABLE = Table("test_ndb", MetaData(),
Column('id', Integer, primary_key=True),
Column('test1', ndb.AutoStringTinyText(255)),
Column('test2', ndb.AutoStringText(4096)),
Column('test3', ndb.AutoStringSize(255, 64)),
mysql_engine='InnoDB')
class NDBMockTestBase(test_base.BaseTestCase):
def setUp(self):
super(NDBMockTestBase, self).setUp()
mock_dbapi = mock.Mock()
self.test_engine = test_engine = create_engine(
_MOCK_CONNECTION, module=mock_dbapi)
test_engine.dialect._oslodb_enable_ndb_support = True
ndb.init_ndb_events(test_engine)
class NDBEventTestCase(NDBMockTestBase):
def test_ndb_createtable_override(self):
test_engine = self.test_engine
self.assertRegex(
str(schema.CreateTable(_TEST_TABLE).compile(
dialect=test_engine.dialect)),
"ENGINE=NDBCLUSTER")
test_engine.dialect._oslodb_enable_ndb_support = False
def test_ndb_engine_override(self):
test_engine = self.test_engine
statement = "ENGINE=InnoDB"
for fn in test_engine.dispatch.before_cursor_execute:
statement, dialect = fn(
mock.Mock(), mock.Mock(), statement, {}, mock.Mock(), False)
self.assertEqual(statement, "ENGINE=NDBCLUSTER")
test_engine.dialect._oslodb_enable_ndb_support = False
def test_ndb_savepoint_override(self):
test_engine = self.test_engine
statement = "SAVEPOINT xyx"
for fn in test_engine.dispatch.before_cursor_execute:
statement, dialect = fn(
mock.Mock(), mock.Mock(), statement, {}, mock.Mock(), False)
self.assertEqual(statement,
"SET @oslo_db_ndb_savepoint_rollback_disabled = 0;")
test_engine.dialect._oslodb_enable_ndb_support = False
def test_ndb_rollback_override(self):
test_engine = self.test_engine
statement = "ROLLBACK TO SAVEPOINT xyz"
for fn in test_engine.dispatch.before_cursor_execute:
statement, dialect = fn(
mock.Mock(), mock.Mock(), statement, {}, mock.Mock(), False)
self.assertEqual(statement,
"SET @oslo_db_ndb_savepoint_rollback_disabled = 0;")
test_engine.dialect._oslodb_enable_ndb_support = False
def test_ndb_rollback_release_override(self):
test_engine = self.test_engine
statement = "RELEASE SAVEPOINT xyz"
for fn in test_engine.dispatch.before_cursor_execute:
statement, dialect = fn(
mock.Mock(), mock.Mock(), statement, {}, mock.Mock(), False)
self.assertEqual(statement,
"SET @oslo_db_ndb_savepoint_rollback_disabled = 0;")
test_engine.dialect._oslodb_enable_ndb_support = False
class NDBDatatypesTestCase(NDBMockTestBase):
def test_ndb_autostringtinytext(self):
test_engine = self.test_engine
self.assertEqual("TINYTEXT",
str(ndb.AutoStringTinyText(255).compile(
dialect=test_engine.dialect)))
test_engine.dialect._oslodb_enable_ndb_support = False
def test_ndb_autostringtext(self):
test_engine = self.test_engine
self.assertEqual("TEXT",
str(ndb.AutoStringText(4096).compile(
dialect=test_engine.dialect)))
test_engine.dialect._oslodb_enable_ndb_support = False
def test_ndb_autostringsize(self):
test_engine = self.test_engine
self.assertEqual('VARCHAR(64)',
str(ndb.AutoStringSize(255, 64).compile(
dialect=test_engine.dialect)))
test_engine.dialect._oslodb_enable_ndb_support = False
class NDBOpportunisticTestCase(
test_fixtures.OpportunisticDBTestMixin, test_base.BaseTestCase):
FIXTURE = test_fixtures.MySQLOpportunisticFixture
def init_db(self, use_ndb):
# get the MySQL engine created by the opportunistic
# provisioning system
self.engine = enginefacade.writer.get_engine()
if use_ndb:
# if we want NDB, make a new local engine that uses the
# URL / database / schema etc. of the provisioned engine,
# since NDB-ness is a per-table thing
self.engine = engines.create_engine(
self.engine.url, mysql_enable_ndb=True
)
self.addCleanup(self.engine.dispose)
self.test_table = _TEST_TABLE
try:
self.test_table.create(self.engine)
except exception.DBNotSupportedError:
self.skip("MySQL NDB Cluster not available")
def test_ndb_enabled(self):
self.init_db(True)
self.assertTrue(ndb.ndb_status(self.engine))
self.assertIsInstance(self.test_table.c.test1.type, TINYTEXT)
self.assertIsInstance(self.test_table.c.test2.type, Text)
self.assertIsInstance(self.test_table.c.test3.type, String)
self.assertEqual(64, self.test_table.c.test3.type.length)
self.assertEqual([], utils.get_non_ndbcluster_tables(self.engine))
def test_ndb_disabled(self):
self.init_db(False)
self.assertFalse(ndb.ndb_status(self.engine))
self.assertIsInstance(self.test_table.c.test1.type, String)
self.assertEqual(255, self.test_table.c.test1.type.length)
self.assertIsInstance(self.test_table.c.test2.type, String)
self.assertEqual(4096, self.test_table.c.test2.type.length)
self.assertIsInstance(self.test_table.c.test3.type, String)
self.assertEqual(255, self.test_table.c.test3.type.length)
self.assertEqual([], utils.get_non_innodb_tables(self.engine))


@ -1,127 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_config import fixture as config
from oslo_db import options
from oslo_db.tests import utils as test_utils
class DbApiOptionsTestCase(test_utils.BaseTestCase):
def setUp(self):
super(DbApiOptionsTestCase, self).setUp()
config_fixture = self.useFixture(config.Config())
self.conf = config_fixture.conf
self.conf.register_opts(options.database_opts, group='database')
self.config = config_fixture.config
def test_deprecated_session_parameters(self):
path = self.create_tempfiles([["tmp", b"""[DEFAULT]
sql_connection=x://y.z
sql_min_pool_size=10
sql_max_pool_size=20
sql_max_retries=30
sql_retry_interval=40
sql_max_overflow=50
sql_connection_debug=60
sql_connection_trace=True
"""]])[0]
self.conf(['--config-file', path])
self.assertEqual('x://y.z', self.conf.database.connection)
self.assertEqual(10, self.conf.database.min_pool_size)
self.assertEqual(20, self.conf.database.max_pool_size)
self.assertEqual(30, self.conf.database.max_retries)
self.assertEqual(40, self.conf.database.retry_interval)
self.assertEqual(50, self.conf.database.max_overflow)
self.assertEqual(60, self.conf.database.connection_debug)
self.assertEqual(True, self.conf.database.connection_trace)
def test_session_parameters(self):
path = self.create_tempfiles([["tmp", b"""[database]
connection=x://y.z
min_pool_size=10
max_pool_size=20
max_retries=30
retry_interval=40
max_overflow=50
connection_debug=60
connection_trace=True
pool_timeout=7
"""]])[0]
self.conf(['--config-file', path])
self.assertEqual('x://y.z', self.conf.database.connection)
self.assertEqual(10, self.conf.database.min_pool_size)
self.assertEqual(20, self.conf.database.max_pool_size)
self.assertEqual(30, self.conf.database.max_retries)
self.assertEqual(40, self.conf.database.retry_interval)
self.assertEqual(50, self.conf.database.max_overflow)
self.assertEqual(60, self.conf.database.connection_debug)
self.assertEqual(True, self.conf.database.connection_trace)
self.assertEqual(7, self.conf.database.pool_timeout)
def test_dbapi_database_deprecated_parameters(self):
path = self.create_tempfiles([['tmp', b'[DATABASE]\n'
b'sql_connection=fake_connection\n'
b'sql_idle_timeout=100\n'
b'sql_min_pool_size=99\n'
b'sql_max_pool_size=199\n'
b'sql_max_retries=22\n'
b'reconnect_interval=17\n'
b'sqlalchemy_max_overflow=101\n'
b'sqlalchemy_pool_timeout=5\n'
]])[0]
self.conf(['--config-file', path])
self.assertEqual('fake_connection', self.conf.database.connection)
self.assertEqual(100, self.conf.database.idle_timeout)
self.assertEqual(99, self.conf.database.min_pool_size)
self.assertEqual(199, self.conf.database.max_pool_size)
self.assertEqual(22, self.conf.database.max_retries)
self.assertEqual(17, self.conf.database.retry_interval)
self.assertEqual(101, self.conf.database.max_overflow)
self.assertEqual(5, self.conf.database.pool_timeout)
def test_dbapi_database_deprecated_parameters_sql(self):
path = self.create_tempfiles([['tmp', b'[sql]\n'
b'connection=test_sql_connection\n'
b'idle_timeout=99\n'
]])[0]
self.conf(['--config-file', path])
self.assertEqual('test_sql_connection', self.conf.database.connection)
self.assertEqual(99, self.conf.database.idle_timeout)
def test_deprecated_dbapi_parameters(self):
path = self.create_tempfiles([['tmp', b'[DEFAULT]\n'
b'db_backend=test_123\n'
]])[0]
self.conf(['--config-file', path])
self.assertEqual('test_123', self.conf.database.backend)
def test_dbapi_parameters(self):
path = self.create_tempfiles([['tmp', b'[database]\n'
b'backend=test_123\n'
]])[0]
self.conf(['--config-file', path])
self.assertEqual('test_123', self.conf.database.backend)
def test_set_defaults(self):
conf = cfg.ConfigOpts()
options.set_defaults(conf,
connection='sqlite:///:memory:')
self.assertTrue(len(conf.database.items()) > 1)
self.assertEqual('sqlite:///:memory:', conf.database.connection)
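# The registration pattern these tests rely on also works outside of
# tests; a minimal sketch using only APIs exercised above.
conf = cfg.ConfigOpts()
options.set_defaults(conf, connection='sqlite:///:memory:')
# set_defaults registers the [database] option group, so the options are
# immediately readable:
assert conf.database.connection == 'sqlite:///:memory:'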


@ -1,264 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os
from oslotest import base as oslo_test_base
from sqlalchemy import exc as sa_exc
from sqlalchemy import inspect
from sqlalchemy import schema
from sqlalchemy import types
from oslo_db import exception
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import provision
from oslo_db.sqlalchemy import test_fixtures
from oslo_db.sqlalchemy import utils
from oslo_db.tests.sqlalchemy import base as test_base
class DropAllObjectsTest(test_base.DbTestCase):
def setUp(self):
super(DropAllObjectsTest, self).setUp()
self.metadata = metadata = schema.MetaData()
schema.Table(
'a', metadata,
schema.Column('id', types.Integer, primary_key=True),
mysql_engine='InnoDB'
)
schema.Table(
'b', metadata,
schema.Column('id', types.Integer, primary_key=True),
schema.Column('a_id', types.Integer, schema.ForeignKey('a.id')),
mysql_engine='InnoDB'
)
schema.Table(
'c', metadata,
schema.Column('id', types.Integer, primary_key=True),
schema.Column('b_id', types.Integer, schema.ForeignKey('b.id')),
schema.Column(
'd_id', types.Integer,
schema.ForeignKey('d.id', use_alter=True, name='c_d_fk')),
mysql_engine='InnoDB'
)
schema.Table(
'd', metadata,
schema.Column('id', types.Integer, primary_key=True),
schema.Column('c_id', types.Integer, schema.ForeignKey('c.id')),
mysql_engine='InnoDB'
)
metadata.create_all(self.engine, checkfirst=False)
# will drop nothing if the test worked
self.addCleanup(metadata.drop_all, self.engine, checkfirst=True)
def test_drop_all(self):
insp = inspect(self.engine)
self.assertEqual(
set(['a', 'b', 'c', 'd']),
set(insp.get_table_names())
)
self._get_default_provisioned_db().\
backend.drop_all_objects(self.engine)
insp = inspect(self.engine)
self.assertEqual(
[],
insp.get_table_names()
)
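# NOTE: the a -> b -> c <-> d schema above contains a foreign key cycle
# (c and d reference each other via ``use_alter``), so tables cannot be
# dropped in plain dependency order. A simplified sketch of the approach
# drop_all_objects() needs -- drop foreign keys first, then tables
# (illustrative only; assumes a backend that supports ALTER TABLE ...
# DROP CONSTRAINT, e.g. MySQL or PostgreSQL):
def drop_all_with_fk_cycle(engine):
    from sqlalchemy import MetaData
    from sqlalchemy.schema import DropConstraint

    meta = MetaData()
    meta.reflect(bind=engine)
    with engine.begin() as conn:
        for table in meta.tables.values():
            for fk in list(table.foreign_key_constraints):
                conn.execute(DropConstraint(fk))
    # re-reflect: with the cycle gone, drop order is unconstrained
    remaining = MetaData()
    remaining.reflect(bind=engine)
    remaining.drop_all(bind=engine)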
class BackendNotAvailableTest(oslo_test_base.BaseTestCase):
def test_no_dbapi(self):
backend = provision.Backend(
"postgresql", "postgresql+nosuchdbapi://hostname/dsn")
with mock.patch(
"sqlalchemy.create_engine",
mock.Mock(side_effect=ImportError("nosuchdbapi"))):
# NOTE(zzzeek): Call and test the _verify function twice, as it
# exercises a different code path on subsequent runs vs.
# the first run
ex = self.assertRaises(
exception.BackendNotAvailable,
backend._verify)
self.assertEqual(
"Backend 'postgresql+nosuchdbapi' is unavailable: "
"No DBAPI installed", str(ex))
ex = self.assertRaises(
exception.BackendNotAvailable,
backend._verify)
self.assertEqual(
"Backend 'postgresql+nosuchdbapi' is unavailable: "
"No DBAPI installed", str(ex))
def test_cant_connect(self):
backend = provision.Backend(
"postgresql", "postgresql+nosuchdbapi://hostname/dsn")
with mock.patch(
"sqlalchemy.create_engine",
mock.Mock(return_value=mock.Mock(connect=mock.Mock(
side_effect=sa_exc.OperationalError(
"can't connect", None, None))
))
):
# NOTE(zzzeek): Call and test the _verify function twice, as it
# exercises a different code path on subsequent runs vs.
# the first run
ex = self.assertRaises(
exception.BackendNotAvailable,
backend._verify)
self.assertEqual(
"Backend 'postgresql+nosuchdbapi' is unavailable: "
"Could not connect", str(ex))
ex = self.assertRaises(
exception.BackendNotAvailable,
backend._verify)
self.assertEqual(
"Backend 'postgresql+nosuchdbapi' is unavailable: "
"Could not connect", str(ex))
class MySQLDropAllObjectsTest(
DropAllObjectsTest, test_base.MySQLOpportunisticTestCase):
pass
class PostgreSQLDropAllObjectsTest(
DropAllObjectsTest, test_base.PostgreSQLOpportunisticTestCase):
pass
class RetainSchemaTest(oslo_test_base.BaseTestCase):
DRIVER = "sqlite"
def setUp(self):
super(RetainSchemaTest, self).setUp()
metadata = schema.MetaData()
self.test_table = schema.Table(
'test_table', metadata,
schema.Column('x', types.Integer),
schema.Column('y', types.Integer),
mysql_engine='InnoDB'
)
def gen_schema(engine):
metadata.create_all(engine, checkfirst=False)
self._gen_schema = gen_schema
def test_once(self):
self._run_test()
def test_twice(self):
self._run_test()
def _run_test(self):
try:
database_resource = provision.DatabaseResource(
self.DRIVER, provision_new_database=True)
except exception.BackendNotAvailable:
self.skip("database not available")
schema_resource = provision.SchemaResource(
database_resource, self._gen_schema)
schema = schema_resource.getResource()
conn = schema.database.engine.connect()
engine = utils.NonCommittingEngine(conn)
with engine.connect() as conn:
rows = conn.execute(self.test_table.select())
self.assertEqual([], rows.fetchall())
trans = conn.begin()
conn.execute(
self.test_table.insert(),
{"x": 1, "y": 2}
)
trans.rollback()
rows = conn.execute(self.test_table.select())
self.assertEqual([], rows.fetchall())
trans = conn.begin()
conn.execute(
self.test_table.insert(),
{"x": 2, "y": 3}
)
trans.commit()
rows = conn.execute(self.test_table.select())
self.assertEqual([(2, 3)], rows.fetchall())
engine._dispose()
schema_resource.finishedWith(schema)
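# NOTE: the NonCommittingEngine used above applies a common
# test-isolation pattern: run the test on a single connection inside an
# outer transaction and roll everything back at the end, so the
# provisioned schema survives for the next test. In miniature (a
# sketch, not the real utils.NonCommittingEngine):
def run_isolated(engine, test_body):
    conn = engine.connect()
    outer = conn.begin()
    try:
        test_body(conn)        # the body may begin/commit nested work
    finally:
        outer.rollback()       # all changes made by the test disappear
        conn.close()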
class MySQLRetainSchemaTest(RetainSchemaTest):
DRIVER = "mysql"
class PostgresqlRetainSchemaTest(RetainSchemaTest):
DRIVER = "postgresql"
class AdHocURLTest(oslo_test_base.BaseTestCase):
def test_sqlite_setup_teardown(self):
fixture = test_fixtures.AdHocDbFixture("sqlite:///foo.db")
fixture.setUp()
self.assertEqual(
str(enginefacade._context_manager._factory._writer_engine.url),
"sqlite:///foo.db"
)
self.assertTrue(os.path.exists("foo.db"))
fixture.cleanUp()
self.assertFalse(os.path.exists("foo.db"))
def test_mysql_setup_teardown(self):
try:
mysql_backend = provision.Backend.backend_for_database_type(
"mysql")
except exception.BackendNotAvailable:
self.skip("mysql backend not available")
mysql_backend.create_named_database("adhoc_test")
self.addCleanup(
mysql_backend.drop_named_database, "adhoc_test"
)
url = str(mysql_backend.provisioned_database_url("adhoc_test"))
fixture = test_fixtures.AdHocDbFixture(url)
fixture.setUp()
self.assertEqual(
str(enginefacade._context_manager._factory._writer_engine.url),
url
)
fixture.cleanUp()

View File

@ -1,772 +0,0 @@
# coding=utf-8
# Copyright (c) 2012 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Unit tests for SQLAlchemy specific code."""
import logging
import os
import fixtures
import mock
from oslo_config import cfg
from oslotest import base as oslo_test
import six
import sqlalchemy
from sqlalchemy import Column, MetaData, Table
from sqlalchemy.engine import url
from sqlalchemy import Integer, String
from sqlalchemy.ext.declarative import declarative_base
from oslo_db import exception
from oslo_db import options as db_options
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import engines
from oslo_db.sqlalchemy import models
from oslo_db.sqlalchemy import session
from oslo_db.tests.sqlalchemy import base as test_base
BASE = declarative_base()
_TABLE_NAME = '__tmp__test__tmp__'
_REGEXP_TABLE_NAME = _TABLE_NAME + "regexp"
class RegexpTable(BASE, models.ModelBase):
__tablename__ = _REGEXP_TABLE_NAME
id = Column(Integer, primary_key=True)
bar = Column(String(255))
class RegexpFilterTestCase(test_base.DbTestCase):
def setUp(self):
super(RegexpFilterTestCase, self).setUp()
meta = MetaData()
meta.bind = self.engine
test_table = Table(_REGEXP_TABLE_NAME, meta,
Column('id', Integer, primary_key=True,
nullable=False),
Column('bar', String(255)))
test_table.create()
self.addCleanup(test_table.drop)
def _test_regexp_filter(self, regexp, expected):
with enginefacade.writer.using(test_base.context):
_session = test_base.context.session
for i in ['10', '20', u'♥']:
tbl = RegexpTable()
tbl.update({'bar': i})
tbl.save(session=_session)
regexp_op = RegexpTable.bar.op('REGEXP')(regexp)
result = _session.query(RegexpTable).filter(regexp_op).all()
self.assertEqual(expected, [r.bar for r in result])
def test_regexp_filter(self):
self._test_regexp_filter('10', ['10'])
def test_regexp_filter_nomatch(self):
self._test_regexp_filter('11', [])
def test_regexp_filter_unicode(self):
self._test_regexp_filter(u'♥', [u'♥'])
def test_regexp_filter_unicode_nomatch(self):
self._test_regexp_filter(u'♦', [])
class SQLiteSavepointTest(test_base.DbTestCase):
def setUp(self):
super(SQLiteSavepointTest, self).setUp()
meta = MetaData()
self.test_table = Table(
"test_table", meta,
Column('id', Integer, primary_key=True),
Column('data', String(10)))
self.test_table.create(self.engine)
self.addCleanup(self.test_table.drop, self.engine)
def test_plain_transaction(self):
conn = self.engine.connect()
trans = conn.begin()
conn.execute(
self.test_table.insert(),
{'data': 'data 1'}
)
self.assertEqual(
[(1, 'data 1')],
self.engine.execute(
self.test_table.select().
order_by(self.test_table.c.id)
).fetchall()
)
trans.rollback()
self.assertEqual(
0,
self.engine.scalar(self.test_table.count())
)
def test_savepoint_middle(self):
with self.engine.begin() as conn:
conn.execute(
self.test_table.insert(),
{'data': 'data 1'}
)
savepoint = conn.begin_nested()
conn.execute(
self.test_table.insert(),
{'data': 'data 2'}
)
savepoint.rollback()
conn.execute(
self.test_table.insert(),
{'data': 'data 3'}
)
self.assertEqual(
[(1, 'data 1'), (2, 'data 3')],
self.engine.execute(
self.test_table.select().
order_by(self.test_table.c.id)
).fetchall()
)
def test_savepoint_beginning(self):
with self.engine.begin() as conn:
savepoint = conn.begin_nested()
conn.execute(
self.test_table.insert(),
{'data': 'data 1'}
)
savepoint.rollback()
conn.execute(
self.test_table.insert(),
{'data': 'data 2'}
)
self.assertEqual(
[(1, 'data 2')],
self.engine.execute(
self.test_table.select().
order_by(self.test_table.c.id)
).fetchall()
)
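# NOTE: begin_nested() above emits SAVEPOINT statements; the
# "savepoint in the middle" test corresponds roughly to this SQL
# (the savepoint name is auto-generated, "sa_1" is just illustrative):
#
#   BEGIN;
#   INSERT INTO test_table (data) VALUES ('data 1');
#   SAVEPOINT sa_1;
#   INSERT INTO test_table (data) VALUES ('data 2');
#   ROLLBACK TO SAVEPOINT sa_1;
#   INSERT INTO test_table (data) VALUES ('data 3');
#   COMMIT;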
class FakeDBAPIConnection(object):
def cursor(self):
return FakeCursor()
class FakeCursor(object):
def execute(self, sql):
pass
class FakeConnectionProxy(object):
pass
class FakeConnectionRec(object):
pass
class OperationalError(Exception):
pass
class ProgrammingError(Exception):
pass
class FakeDB2Engine(object):
class Dialect(object):
def is_disconnect(self, e, *args):
expected_error = ('SQL30081N: DB2 Server connection is no longer '
'active')
return (str(e) == expected_error)
dialect = Dialect()
name = 'ibm_db_sa'
def dispose(self):
pass
class MySQLDefaultModeTestCase(test_base.MySQLOpportunisticTestCase):
def test_default_is_traditional(self):
with self.engine.connect() as conn:
sql_mode = conn.execute(
"SHOW VARIABLES LIKE 'sql_mode'"
).first()[1]
self.assertIn("TRADITIONAL", sql_mode)
class MySQLModeTestCase(test_base.MySQLOpportunisticTestCase):
def __init__(self, *args, **kwargs):
super(MySQLModeTestCase, self).__init__(*args, **kwargs)
# By default, run in empty SQL mode.
# Subclasses override this with specific modes.
self.mysql_mode = ''
def setUp(self):
super(MySQLModeTestCase, self).setUp()
mode_engine = session.create_engine(
self.engine.url,
mysql_sql_mode=self.mysql_mode)
self.connection = mode_engine.connect()
meta = MetaData()
self.test_table = Table(_TABLE_NAME + "mode", meta,
Column('id', Integer, primary_key=True),
Column('bar', String(255)))
self.test_table.create(self.connection)
def cleanup():
self.test_table.drop(self.connection)
self.connection.close()
mode_engine.dispose()
self.addCleanup(cleanup)
def _test_string_too_long(self, value):
with self.connection.begin():
self.connection.execute(self.test_table.insert(),
bar=value)
result = self.connection.execute(self.test_table.select())
return result.fetchone()['bar']
def test_string_too_long(self):
value = 'a' * 512
# String is too long.
# With no SQL mode set, this gets truncated.
self.assertNotEqual(value,
self._test_string_too_long(value))
class MySQLStrictAllTablesModeTestCase(MySQLModeTestCase):
"Test data integrity enforcement in MySQL STRICT_ALL_TABLES mode."
def __init__(self, *args, **kwargs):
super(MySQLStrictAllTablesModeTestCase, self).__init__(*args, **kwargs)
self.mysql_mode = 'STRICT_ALL_TABLES'
def test_string_too_long(self):
value = 'a' * 512
# String is too long.
# With STRICT_ALL_TABLES or TRADITIONAL mode set, this is an error.
self.assertRaises(exception.DBError,
self._test_string_too_long, value)
class MySQLTraditionalModeTestCase(MySQLStrictAllTablesModeTestCase):
"""Test data integrity enforcement in MySQL TRADITIONAL mode.
Since TRADITIONAL includes STRICT_ALL_TABLES, this inherits all
STRICT_ALL_TABLES mode tests.
"""
def __init__(self, *args, **kwargs):
super(MySQLTraditionalModeTestCase, self).__init__(*args, **kwargs)
self.mysql_mode = 'TRADITIONAL'
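# NOTE: the three mode test cases above map directly onto MySQL
# behavior: with an empty sql_mode an oversized string is silently
# truncated, while STRICT_ALL_TABLES (and TRADITIONAL, which includes
# it) turns the truncation into an error that oslo.db surfaces as
# DBError.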
class EngineFacadeTestCase(oslo_test.BaseTestCase):
def setUp(self):
super(EngineFacadeTestCase, self).setUp()
self.facade = session.EngineFacade('sqlite://')
def test_get_engine(self):
eng1 = self.facade.get_engine()
eng2 = self.facade.get_engine()
self.assertIs(eng1, eng2)
def test_get_session(self):
ses1 = self.facade.get_session()
ses2 = self.facade.get_session()
self.assertIsNot(ses1, ses2)
def test_get_session_arguments_override_default_settings(self):
ses = self.facade.get_session(autocommit=False, expire_on_commit=True)
self.assertFalse(ses.autocommit)
self.assertTrue(ses.expire_on_commit)
@mock.patch('oslo_db.sqlalchemy.orm.get_maker')
@mock.patch('oslo_db.sqlalchemy.engines.create_engine')
def test_creation_from_config(self, create_engine, get_maker):
conf = cfg.ConfigOpts()
conf.register_opts(db_options.database_opts, group='database')
overrides = {
'connection': 'sqlite:///:memory:',
'slave_connection': None,
'connection_debug': 100,
'max_pool_size': 10,
'mysql_sql_mode': 'TRADITIONAL',
}
for optname, optvalue in overrides.items():
conf.set_override(optname, optvalue, group='database')
session.EngineFacade.from_config(conf,
autocommit=False,
expire_on_commit=True)
create_engine.assert_called_once_with(
sql_connection='sqlite:///:memory:',
connection_debug=100,
max_pool_size=10,
mysql_sql_mode='TRADITIONAL',
mysql_enable_ndb=False,
sqlite_fk=False,
idle_timeout=mock.ANY,
retry_interval=mock.ANY,
max_retries=mock.ANY,
max_overflow=mock.ANY,
connection_trace=mock.ANY,
sqlite_synchronous=mock.ANY,
pool_timeout=mock.ANY,
thread_checkin=mock.ANY,
json_serializer=None,
json_deserializer=None,
logging_name=mock.ANY,
)
get_maker.assert_called_once_with(engine=create_engine(),
autocommit=False,
expire_on_commit=True)
def test_slave_connection(self):
paths = self.create_tempfiles([('db.master', ''), ('db.slave', '')],
ext='')
master_path = 'sqlite:///' + paths[0]
slave_path = 'sqlite:///' + paths[1]
facade = session.EngineFacade(
sql_connection=master_path,
slave_connection=slave_path
)
master = facade.get_engine()
self.assertEqual(master_path, str(master.url))
slave = facade.get_engine(use_slave=True)
self.assertEqual(slave_path, str(slave.url))
master_session = facade.get_session()
self.assertEqual(master_path, str(master_session.bind.url))
slave_session = facade.get_session(use_slave=True)
self.assertEqual(slave_path, str(slave_session.bind.url))
def test_slave_connection_string_not_provided(self):
master_path = 'sqlite:///' + self.create_tempfiles(
[('db.master', '')], ext='')[0]
facade = session.EngineFacade(sql_connection=master_path)
master = facade.get_engine()
slave = facade.get_engine(use_slave=True)
self.assertIs(master, slave)
self.assertEqual(master_path, str(master.url))
master_session = facade.get_session()
self.assertEqual(master_path, str(master_session.bind.url))
slave_session = facade.get_session(use_slave=True)
self.assertEqual(master_path, str(slave_session.bind.url))
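# NOTE: typical application-side use of the facade exercised above
# (a sketch; ``conf`` is assumed to be a ConfigOpts instance with the
# [database] options registered):
#
#     facade = session.EngineFacade.from_config(conf)
#     engine = facade.get_engine()    # cached; same object on every call
#     sess = facade.get_session()     # a new Session on every call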
class SQLiteConnectTest(oslo_test.BaseTestCase):
def _fixture(self, **kw):
return session.create_engine("sqlite://", **kw)
def test_sqlite_fk_listener(self):
engine = self._fixture(sqlite_fk=True)
self.assertEqual(
1,
engine.scalar("pragma foreign_keys")
)
engine = self._fixture(sqlite_fk=False)
self.assertEqual(
0,
engine.scalar("pragma foreign_keys")
)
def test_sqlite_synchronous_listener(self):
engine = self._fixture()
# "The default setting is synchronous=FULL." (e.g. 2)
# http://www.sqlite.org/pragma.html#pragma_synchronous
self.assertEqual(
2,
engine.scalar("pragma synchronous")
)
engine = self._fixture(sqlite_synchronous=False)
self.assertEqual(
0,
engine.scalar("pragma synchronous")
)
class MysqlConnectTest(test_base.MySQLOpportunisticTestCase):
def _fixture(self, sql_mode):
return session.create_engine(self.engine.url, mysql_sql_mode=sql_mode)
def _assert_sql_mode(self, engine, sql_mode_present, sql_mode_non_present):
mode = engine.execute("SHOW VARIABLES LIKE 'sql_mode'").fetchone()[1]
self.assertIn(
sql_mode_present, mode
)
if sql_mode_non_present:
self.assertNotIn(
sql_mode_non_present, mode
)
def test_set_mode_traditional(self):
engine = self._fixture(sql_mode='TRADITIONAL')
self._assert_sql_mode(engine, "TRADITIONAL", "ANSI")
def test_set_mode_ansi(self):
engine = self._fixture(sql_mode='ANSI')
self._assert_sql_mode(engine, "ANSI", "TRADITIONAL")
def test_set_mode_no_mode(self):
# If _mysql_set_mode_callback is called with sql_mode=None, then
# the SQL mode is NOT set on the connection.
# get the GLOBAL sql_mode, not the @@SESSION, so that
# we get what is configured for the MySQL database, as opposed
# to what our own session.create_engine() has set it to.
expected = self.engine.execute(
"SELECT @@GLOBAL.sql_mode").scalar()
engine = self._fixture(sql_mode=None)
self._assert_sql_mode(engine, expected, None)
def test_fail_detect_mode(self):
# If "SHOW VARIABLES LIKE 'sql_mode'" results in no row, then
# we get a log indicating can't detect the mode.
log = self.useFixture(fixtures.FakeLogger(level=logging.WARN))
mysql_conn = self.engine.raw_connection()
self.addCleanup(mysql_conn.close)
mysql_conn.detach()
mysql_cursor = mysql_conn.cursor()
def execute(statement, parameters=()):
if "SHOW VARIABLES LIKE 'sql_mode'" in statement:
statement = "SHOW VARIABLES LIKE 'i_dont_exist'"
return mysql_cursor.execute(statement, parameters)
test_engine = sqlalchemy.create_engine(self.engine.url,
_initialize=False)
with mock.patch.object(
test_engine.pool, '_creator',
mock.Mock(
return_value=mock.Mock(
cursor=mock.Mock(
return_value=mock.Mock(
execute=execute,
fetchone=mysql_cursor.fetchone,
fetchall=mysql_cursor.fetchall
)
)
)
)
):
engines._init_events.dispatch_on_drivername("mysql")(test_engine)
test_engine.raw_connection()
self.assertIn('Unable to detect effective SQL mode',
log.output)
def test_logs_real_mode(self):
# If "SHOW VARIABLES LIKE 'sql_mode'" results in a value, then
# we get a log with the value.
log = self.useFixture(fixtures.FakeLogger(level=logging.DEBUG))
engine = self._fixture(sql_mode='TRADITIONAL')
actual_mode = engine.execute(
"SHOW VARIABLES LIKE 'sql_mode'").fetchone()[1]
self.assertIn('MySQL server mode set to %s' % actual_mode,
log.output)
def test_warning_when_not_traditional(self):
# If "SHOW VARIABLES LIKE 'sql_mode'" results in a value that doesn't
# include 'TRADITIONAL', then a warning is logged.
log = self.useFixture(fixtures.FakeLogger(level=logging.WARN))
self._fixture(sql_mode='ANSI')
self.assertIn("consider enabling TRADITIONAL or STRICT_ALL_TABLES",
log.output)
def test_no_warning_when_traditional(self):
# If "SHOW VARIABLES LIKE 'sql_mode'" results in a value that includes
# 'TRADITIONAL', then no warning is logged.
log = self.useFixture(fixtures.FakeLogger(level=logging.WARN))
self._fixture(sql_mode='TRADITIONAL')
self.assertNotIn("consider enabling TRADITIONAL or STRICT_ALL_TABLES",
log.output)
def test_no_warning_when_strict_all_tables(self):
# If "SHOW VARIABLES LIKE 'sql_mode'" results in a value that includes
# 'STRICT_ALL_TABLES', then no warning is logged.
log = self.useFixture(fixtures.FakeLogger(level=logging.WARN))
self._fixture(sql_mode='TRADITIONAL')
self.assertNotIn("consider enabling TRADITIONAL or STRICT_ALL_TABLES",
log.output)
class CreateEngineTest(oslo_test.BaseTestCase):
"""Test that dialect-specific arguments/ listeners are set up correctly.
"""
def setUp(self):
super(CreateEngineTest, self).setUp()
self.args = {'connect_args': {}}
def test_queuepool_args(self):
engines._init_connection_args(
url.make_url("mysql+pymysql://u:p@host/test"), self.args,
max_pool_size=10, max_overflow=10)
self.assertEqual(10, self.args['pool_size'])
self.assertEqual(10, self.args['max_overflow'])
def test_sqlite_memory_pool_args(self):
for _url in ("sqlite://", "sqlite:///:memory:"):
engines._init_connection_args(
url.make_url(_url), self.args,
max_pool_size=10, max_overflow=10)
# queuepool arguments are not present
self.assertNotIn(
'pool_size', self.args)
self.assertNotIn(
'max_overflow', self.args)
self.assertEqual(False,
self.args['connect_args']['check_same_thread'])
# due to memory connection
self.assertIn('poolclass', self.args)
def test_sqlite_file_pool_args(self):
engines._init_connection_args(
url.make_url("sqlite:///somefile.db"), self.args,
max_pool_size=10, max_overflow=10)
# queuepool arguments are not present
self.assertNotIn('pool_size', self.args)
self.assertNotIn(
'max_overflow', self.args)
self.assertFalse(self.args['connect_args'])
# NullPool is the default for file based connections,
# no need to specify this
self.assertNotIn('poolclass', self.args)
def _test_mysql_connect_args_default(self, connect_args):
if six.PY3:
self.assertEqual({'charset': 'utf8', 'use_unicode': 1},
connect_args)
else:
self.assertEqual({'charset': 'utf8', 'use_unicode': 0},
connect_args)
def test_mysql_connect_args_default(self):
engines._init_connection_args(
url.make_url("mysql://u:p@host/test"), self.args)
self._test_mysql_connect_args_default(self.args['connect_args'])
def test_mysql_oursql_connect_args_default(self):
engines._init_connection_args(
url.make_url("mysql+oursql://u:p@host/test"), self.args)
self._test_mysql_connect_args_default(self.args['connect_args'])
def test_mysql_pymysql_connect_args_default(self):
engines._init_connection_args(
url.make_url("mysql+pymysql://u:p@host/test"), self.args)
self.assertEqual({'charset': 'utf8'}, self.args['connect_args'])
def test_mysql_mysqldb_connect_args_default(self):
engines._init_connection_args(
url.make_url("mysql+mysqldb://u:p@host/test"), self.args)
self._test_mysql_connect_args_default(self.args['connect_args'])
def test_postgresql_connect_args_default(self):
engines._init_connection_args(
url.make_url("postgresql://u:p@host/test"), self.args)
self.assertEqual('utf8', self.args['client_encoding'])
self.assertFalse(self.args['connect_args'])
def test_mysqlconnector_raise_on_warnings_default(self):
engines._init_connection_args(
url.make_url("mysql+mysqlconnector://u:p@host/test"),
self.args)
self.assertEqual(False, self.args['connect_args']['raise_on_warnings'])
def test_mysqlconnector_raise_on_warnings_override(self):
engines._init_connection_args(
url.make_url(
"mysql+mysqlconnector://u:p@host/test"
"?raise_on_warnings=true"),
self.args
)
self.assertNotIn('raise_on_warnings', self.args['connect_args'])
def test_thread_checkin(self):
with mock.patch("sqlalchemy.event.listens_for"):
with mock.patch("sqlalchemy.event.listen") as listen_evt:
engines._init_events.dispatch_on_drivername(
"sqlite")(mock.Mock())
self.assertEqual(
listen_evt.mock_calls[0][1][-1],
engines._thread_yield
)
def test_warn_on_missing_driver(self):
warnings = mock.Mock()
def warn_interpolate(msg, args):
# test the interpolation itself to ensure the password
# is concealed
warnings.warning(msg % args)
with mock.patch(
"oslo_db.sqlalchemy.engines.LOG.warning",
warn_interpolate):
engines._vet_url(
url.make_url("mysql://scott:tiger@some_host/some_db"))
engines._vet_url(url.make_url(
"mysql+mysqldb://scott:tiger@some_host/some_db"))
engines._vet_url(url.make_url(
"mysql+pymysql://scott:tiger@some_host/some_db"))
engines._vet_url(url.make_url(
"postgresql+psycopg2://scott:tiger@some_host/some_db"))
engines._vet_url(url.make_url(
"postgresql://scott:tiger@some_host/some_db"))
self.assertEqual(
[
mock.call.warning(
"URL mysql://scott:***@some_host/some_db does not contain "
"a '+drivername' portion, "
"and will make use of a default driver. "
"A full dbname+drivername:// protocol is recommended. "
"For MySQL, it is strongly recommended that "
"mysql+pymysql:// "
"be specified for maximum service compatibility",
),
mock.call.warning(
"URL postgresql://scott:***@some_host/some_db does not "
"contain a '+drivername' portion, "
"and will make use of a default driver. "
"A full dbname+drivername:// protocol is recommended."
)
],
warnings.mock_calls
)
class ProcessGuardTest(test_base.DbTestCase):
def test_process_guard(self):
self.engine.dispose()
def get_parent_pid():
return 4
def get_child_pid():
return 5
with mock.patch("os.getpid", get_parent_pid):
with self.engine.connect() as conn:
dbapi_id = id(conn.connection.connection)
with mock.patch("os.getpid", get_child_pid):
with self.engine.connect() as conn:
new_dbapi_id = id(conn.connection.connection)
self.assertNotEqual(dbapi_id, new_dbapi_id)
# ensure it doesn't trip again
with mock.patch("os.getpid", get_child_pid):
with self.engine.connect() as conn:
newer_dbapi_id = id(conn.connection.connection)
self.assertEqual(new_dbapi_id, newer_dbapi_id)
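# NOTE: the fork-safety behavior verified above follows the well-known
# SQLAlchemy pool-event recipe -- tag each connection with the pid that
# created it and refuse checkout from a different pid, so the pool
# replaces the connection (a sketch, not oslo.db's exact implementation):
def add_process_guard(engine):
    import os

    from sqlalchemy import event
    from sqlalchemy import exc

    @event.listens_for(engine, "connect")
    def _connect(dbapi_connection, connection_record):
        connection_record.info['pid'] = os.getpid()

    @event.listens_for(engine, "checkout")
    def _checkout(dbapi_connection, connection_record, connection_proxy):
        pid = os.getpid()
        if connection_record.info['pid'] != pid:
            # invalidate: the pool will transparently open a fresh one
            connection_record.connection = connection_proxy.connection = None
            raise exc.DisconnectionError(
                "Connection belongs to pid %s, checkout attempted in "
                "pid %s" % (connection_record.info['pid'], pid))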
class PatchStacktraceTest(test_base.DbTestCase):
def test_trace(self):
engine = self.engine
# NOTE(viktors): The code in oslo_db.sqlalchemy.session filters out
# lines from modules under oslo_db, so we should remove
# "oslo_db/" from file path in traceback.
import traceback
orig_extract_stack = traceback.extract_stack
def extract_stack():
return [(row[0].replace("oslo_db/", ""), row[1], row[2], row[3])
for row in orig_extract_stack()]
with mock.patch("traceback.extract_stack", side_effect=extract_stack):
engines._add_trace_comments(engine)
conn = engine.connect()
orig_do_exec = engine.dialect.do_execute
with mock.patch.object(engine.dialect, "do_execute") as mock_exec:
mock_exec.side_effect = orig_do_exec
conn.execute("select 1;")
call = mock_exec.mock_calls[0]
# we're the caller, see that we're in there
caller = os.path.join("tests", "sqlalchemy", "test_sqlalchemy.py")
self.assertIn(caller, call[1][1])

View File

@ -1,111 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for JSON SQLAlchemy types."""
from sqlalchemy import Column, Integer
from sqlalchemy.dialects import mysql
from sqlalchemy.ext.declarative import declarative_base
from oslo_db import exception as db_exc
from oslo_db.sqlalchemy import models
from oslo_db.sqlalchemy import types
from oslo_db.tests.sqlalchemy import base as test_base
BASE = declarative_base()
class JsonTable(BASE, models.ModelBase):
__tablename__ = 'test_json_types'
id = Column(Integer, primary_key=True)
jdict = Column(types.JsonEncodedDict)
jlist = Column(types.JsonEncodedList)
json = Column(types.JsonEncodedType)
class JsonTypesTestCase(test_base.DbTestCase):
def setUp(self):
super(JsonTypesTestCase, self).setUp()
JsonTable.__table__.create(self.engine)
self.addCleanup(JsonTable.__table__.drop, self.engine)
self.session = self.sessionmaker()
self.addCleanup(self.session.close)
def test_default_value(self):
with self.session.begin():
JsonTable(id=1).save(self.session)
obj = self.session.query(JsonTable).filter_by(id=1).one()
self.assertEqual([], obj.jlist)
self.assertEqual({}, obj.jdict)
self.assertIsNone(obj.json)
def test_dict(self):
test = {'a': 42, 'b': [1, 2, 3]}
with self.session.begin():
JsonTable(id=1, jdict=test).save(self.session)
obj = self.session.query(JsonTable).filter_by(id=1).one()
self.assertEqual(test, obj.jdict)
def test_list(self):
test = [1, True, "hello", {}]
with self.session.begin():
JsonTable(id=1, jlist=test).save(self.session)
obj = self.session.query(JsonTable).filter_by(id=1).one()
self.assertEqual(test, obj.jlist)
def test_dict_type_check(self):
self.assertRaises(db_exc.DBError,
JsonTable(id=1, jdict=[]).save, self.session)
def test_list_type_check(self):
self.assertRaises(db_exc.DBError,
JsonTable(id=1, jlist={}).save, self.session)
def test_generic(self):
tested = [
"string",
42,
True,
None,
[1, 2, 3],
{'a': 'b'}
]
for i, test in enumerate(tested):
with self.session.begin():
JsonTable(id=i, json=test).save(self.session)
obj = self.session.query(JsonTable).filter_by(id=i).one()
self.assertEqual(test, obj.json)
def test_mysql_variants(self):
self.assertEqual(
"LONGTEXT",
str(
types.JsonEncodedDict(mysql_as_long=True).compile(
dialect=mysql.dialect())
)
)
self.assertEqual(
"MEDIUMTEXT",
str(
types.JsonEncodedDict(mysql_as_medium=True).compile(
dialect=mysql.dialect())
)
)
self.assertRaises(
TypeError,
lambda: types.JsonEncodedDict(
mysql_as_long=True,
mysql_as_medium=True)
)
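# NOTE: the mechanism under test, reduced to its core: a TypeDecorator
# that serializes on bind and deserializes on fetch (a sketch; the real
# classes in oslo_db/sqlalchemy/types.py also add type checking and the
# MySQL LONGTEXT/MEDIUMTEXT variants asserted above):
import json

from sqlalchemy import Text
from sqlalchemy.types import TypeDecorator


class JsonEncodedSketch(TypeDecorator):
    """Stores a Python structure as a JSON-encoded text column."""
    impl = Text

    def process_bind_param(self, value, dialect):
        return None if value is None else json.dumps(value)

    def process_result_value(self, value, dialect):
        return None if value is None else json.loads(value)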

View File

@ -1,445 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base as oslo_test_base
from sqlalchemy.ext import declarative
from sqlalchemy import schema
from sqlalchemy import sql
from sqlalchemy import types as sqltypes
from oslo_db.sqlalchemy import update_match
from oslo_db.tests.sqlalchemy import base as test_base
Base = declarative.declarative_base()
class MyModel(Base):
__tablename__ = 'my_table'
id = schema.Column(sqltypes.Integer, primary_key=True)
uuid = schema.Column(sqltypes.String(36), nullable=False, unique=True)
x = schema.Column(sqltypes.Integer)
y = schema.Column(sqltypes.String(40))
z = schema.Column(sqltypes.String(40))
class ManufactureCriteriaTest(oslo_test_base.BaseTestCase):
def test_instance_criteria_basic(self):
specimen = MyModel(
y='y1', z='z3',
uuid='136254d5-3869-408f-9da7-190e0072641a'
)
self.assertEqual(
"my_table.uuid = :uuid_1 AND my_table.y = :y_1 "
"AND my_table.z = :z_1",
str(update_match.manufacture_entity_criteria(specimen).compile())
)
def test_instance_criteria_basic_wnone(self):
specimen = MyModel(
y='y1', z=None,
uuid='136254d5-3869-408f-9da7-190e0072641a'
)
self.assertEqual(
"my_table.uuid = :uuid_1 AND my_table.y = :y_1 "
"AND my_table.z IS NULL",
str(update_match.manufacture_entity_criteria(specimen).compile())
)
def test_instance_criteria_tuples(self):
specimen = MyModel(
y='y1', z=('z1', 'z2'),
)
self.assertEqual(
"my_table.y = :y_1 AND my_table.z IN (:z_1, :z_2)",
str(update_match.manufacture_entity_criteria(specimen).compile())
)
def test_instance_criteria_tuples_wnone(self):
specimen = MyModel(
y='y1', z=('z1', 'z2', None),
)
self.assertEqual(
"my_table.y = :y_1 AND (my_table.z IS NULL OR "
"my_table.z IN (:z_1, :z_2))",
str(update_match.manufacture_entity_criteria(specimen).compile())
)
def test_instance_criteria_none_list(self):
specimen = MyModel(
y='y1', z=[None],
)
self.assertEqual(
"my_table.y = :y_1 AND my_table.z IS NULL",
str(update_match.manufacture_entity_criteria(specimen).compile())
)
class UpdateMatchTest(test_base.DbTestCase):
def setUp(self):
super(UpdateMatchTest, self).setUp()
Base.metadata.create_all(self.engine)
self.addCleanup(Base.metadata.drop_all, self.engine)
# self.engine.echo = 'debug'
self.session = self.sessionmaker(autocommit=False)
self.addCleanup(self.session.close)
self.session.add_all([
MyModel(
id=1,
uuid='23cb9224-9f8e-40fe-bd3c-e7577b7af37d',
x=5, y='y1', z='z1'),
MyModel(
id=2,
uuid='136254d5-3869-408f-9da7-190e0072641a',
x=6, y='y1', z='z2'),
MyModel(
id=3,
uuid='094eb162-d5df-494b-a458-a91a1b2d2c65',
x=7, y='y1', z='z1'),
MyModel(
id=4,
uuid='94659b3f-ea1f-4ffd-998d-93b28f7f5b70',
x=8, y='y2', z='z2'),
MyModel(
id=5,
uuid='bdf3893c-ee3c-40a0-bc79-960adb6cd1d4',
x=8, y='y2', z=None),
])
self.session.commit()
def _assert_row(self, pk, values):
row = self.session.execute(
sql.select([MyModel.__table__]).where(MyModel.__table__.c.id == pk)
).first()
values['id'] = pk
self.assertEqual(values, dict(row))
def test_update_specimen_successful(self):
uuid = '136254d5-3869-408f-9da7-190e0072641a'
specimen = MyModel(
y='y1', z='z2', uuid=uuid
)
result = self.session.query(MyModel).update_on_match(
specimen,
'uuid',
values={'x': 9, 'z': 'z3'}
)
self.assertEqual(uuid, result.uuid)
self.assertEqual(2, result.id)
self.assertEqual('z3', result.z)
self.assertIn(result, self.session)
self._assert_row(
2,
{
'uuid': '136254d5-3869-408f-9da7-190e0072641a',
'x': 9, 'y': 'y1', 'z': 'z3'
}
)
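# NOTE: update_on_match above emits, roughly, a single optimistic
# UPDATE using the specimen as the WHERE criteria:
#
#   UPDATE my_table SET x=9, z='z3'
#   WHERE uuid = :uuid AND y = 'y1' AND z = 'z2'
#
# and then fetches the row and merges it back into the session once
# exactly one row matched.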
def test_update_specimen_include_only(self):
uuid = '136254d5-3869-408f-9da7-190e0072641a'
specimen = MyModel(
y='y9', z='z5', x=6, uuid=uuid
)
# Query the object first to test that we merge when the object is
# already cached in the session.
self.session.query(MyModel).filter(MyModel.uuid == uuid).one()
result = self.session.query(MyModel).update_on_match(
specimen,
'uuid',
values={'x': 9, 'z': 'z3'},
include_only=('x', )
)
self.assertEqual(uuid, result.uuid)
self.assertEqual(2, result.id)
self.assertEqual('z3', result.z)
self.assertIn(result, self.session)
self.assertNotIn(result, self.session.dirty)
self._assert_row(
2,
{
'uuid': '136254d5-3869-408f-9da7-190e0072641a',
'x': 9, 'y': 'y1', 'z': 'z3'
}
)
def test_update_specimen_no_rows(self):
specimen = MyModel(
y='y1', z='z3',
uuid='136254d5-3869-408f-9da7-190e0072641a'
)
exc = self.assertRaises(
update_match.NoRowsMatched,
self.session.query(MyModel).update_on_match,
specimen, 'uuid', values={'x': 9, 'z': 'z3'}
)
self.assertEqual("Zero rows matched for 3 attempts", exc.args[0])
def test_update_specimen_process_query_no_rows(self):
specimen = MyModel(
y='y1', z='z2',
uuid='136254d5-3869-408f-9da7-190e0072641a'
)
def process_query(query):
return query.filter_by(x=10)
exc = self.assertRaises(
update_match.NoRowsMatched,
self.session.query(MyModel).update_on_match,
specimen, 'uuid', values={'x': 9, 'z': 'z3'},
process_query=process_query
)
self.assertEqual("Zero rows matched for 3 attempts", exc.args[0])
def test_update_specimen_given_query_no_rows(self):
specimen = MyModel(
y='y1', z='z2',
uuid='136254d5-3869-408f-9da7-190e0072641a'
)
query = self.session.query(MyModel).filter_by(x=10)
exc = self.assertRaises(
update_match.NoRowsMatched,
query.update_on_match,
specimen, 'uuid', values={'x': 9, 'z': 'z3'},
)
self.assertEqual("Zero rows matched for 3 attempts", exc.args[0])
def test_update_specimen_multi_rows(self):
specimen = MyModel(
y='y1', z='z1',
)
exc = self.assertRaises(
update_match.MultiRowsMatched,
self.session.query(MyModel).update_on_match,
specimen, 'y', values={'x': 9, 'z': 'z3'}
)
self.assertEqual("2 rows matched; expected one", exc.args[0])
def test_update_specimen_query_mismatch_error(self):
specimen = MyModel(
y='y1'
)
q = self.session.query(MyModel.x, MyModel.y)
exc = self.assertRaises(
AssertionError,
q.update_on_match,
specimen, 'y', values={'x': 9, 'z': 'z3'},
)
self.assertEqual("Query does not match given specimen", exc.args[0])
def test_custom_handle_failure_raise_new(self):
class MyException(Exception):
pass
def handle_failure(query):
# ensure the query is usable
result = query.count()
self.assertEqual(0, result)
raise MyException("test: %d" % result)
specimen = MyModel(
y='y1', z='z3',
uuid='136254d5-3869-408f-9da7-190e0072641a'
)
exc = self.assertRaises(
MyException,
self.session.query(MyModel).update_on_match,
specimen, 'uuid', values={'x': 9, 'z': 'z3'},
handle_failure=handle_failure
)
self.assertEqual("test: 0", exc.args[0])
def test_custom_handle_failure_cancel_raise(self):
uuid = '136254d5-3869-408f-9da7-190e0072641a'
class MyException(Exception):
pass
def handle_failure(query):
# ensure the query is usable
result = query.count()
self.assertEqual(0, result)
return True
specimen = MyModel(
id=2, y='y1', z='z3', uuid=uuid
)
result = self.session.query(MyModel).update_on_match(
specimen, 'uuid', values={'x': 9, 'z': 'z3'},
handle_failure=handle_failure
)
self.assertEqual(uuid, result.uuid)
self.assertEqual(2, result.id)
self.assertEqual('z3', result.z)
self.assertEqual(9, result.x)
self.assertIn(result, self.session)
def test_update_specimen_on_none_successful(self):
uuid = 'bdf3893c-ee3c-40a0-bc79-960adb6cd1d4'
specimen = MyModel(
y='y2', z=None, uuid=uuid
)
result = self.session.query(MyModel).update_on_match(
specimen,
'uuid',
values={'x': 9, 'z': 'z3'},
)
self.assertIn(result, self.session)
self.assertEqual(uuid, result.uuid)
self.assertEqual(5, result.id)
self.assertEqual('z3', result.z)
self._assert_row(
5,
{
'uuid': 'bdf3893c-ee3c-40a0-bc79-960adb6cd1d4',
'x': 9, 'y': 'y2', 'z': 'z3'
}
)
def test_update_specimen_on_multiple_nonnone_successful(self):
uuid = '094eb162-d5df-494b-a458-a91a1b2d2c65'
specimen = MyModel(
y=('y1', 'y2'), x=(5, 7), uuid=uuid
)
result = self.session.query(MyModel).update_on_match(
specimen,
'uuid',
values={'x': 9, 'z': 'z3'},
)
self.assertIn(result, self.session)
self.assertEqual(uuid, result.uuid)
self.assertEqual(3, result.id)
self.assertEqual('z3', result.z)
self._assert_row(
3,
{
'uuid': '094eb162-d5df-494b-a458-a91a1b2d2c65',
'x': 9, 'y': 'y1', 'z': 'z3'
}
)
def test_update_specimen_on_multiple_wnone_successful(self):
uuid = 'bdf3893c-ee3c-40a0-bc79-960adb6cd1d4'
specimen = MyModel(
y=('y1', 'y2'), x=(8, 7), z=('z1', 'z2', None), uuid=uuid
)
result = self.session.query(MyModel).update_on_match(
specimen,
'uuid',
values={'x': 9, 'z': 'z3'},
)
self.assertIn(result, self.session)
self.assertEqual(uuid, result.uuid)
self.assertEqual(5, result.id)
self.assertEqual('z3', result.z)
self._assert_row(
5,
{
'uuid': 'bdf3893c-ee3c-40a0-bc79-960adb6cd1d4',
'x': 9, 'y': 'y2', 'z': 'z3'
}
)
def test_update_returning_pk_matched(self):
pk = self.session.query(MyModel).\
filter_by(y='y1', z='z2').update_returning_pk(
{'x': 9, 'z': 'z3'},
('uuid', '136254d5-3869-408f-9da7-190e0072641a')
)
self.assertEqual((2,), pk)
self._assert_row(
2,
{
'uuid': '136254d5-3869-408f-9da7-190e0072641a',
'x': 9, 'y': 'y1', 'z': 'z3'
}
)
def test_update_returning_wrong_uuid(self):
exc = self.assertRaises(
update_match.NoRowsMatched,
self.session.query(MyModel).
filter_by(y='y1', z='z2').update_returning_pk,
{'x': 9, 'z': 'z3'},
('uuid', '23cb9224-9f8e-40fe-bd3c-e7577b7af37d')
)
self.assertEqual("No rows matched the UPDATE", exc.args[0])
def test_update_returning_no_rows(self):
exc = self.assertRaises(
update_match.NoRowsMatched,
self.session.query(MyModel).
filter_by(y='y1', z='z3').update_returning_pk,
{'x': 9, 'z': 'z3'},
('uuid', '136254d5-3869-408f-9da7-190e0072641a')
)
self.assertEqual("No rows matched the UPDATE", exc.args[0])
def test_update_multiple_rows(self):
exc = self.assertRaises(
update_match.MultiRowsMatched,
self.session.query(MyModel).
filter_by(y='y1', z='z1').update_returning_pk,
{'x': 9, 'z': 'z3'},
('y', 'y1')
)
self.assertEqual("2 rows matched; expected one", exc.args[0])
class PGUpdateMatchTest(
UpdateMatchTest,
test_base.PostgreSQLOpportunisticTestCase):
pass
class MySQLUpdateMatchTest(
UpdateMatchTest,
test_base.MySQLOpportunisticTestCase):
pass

File diff suppressed because it is too large

View File

@ -1,255 +0,0 @@
# Copyright (c) 2013 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Unit tests for DB API."""
import mock
from oslo_config import cfg
from oslo_utils import importutils
from oslo_db import api
from oslo_db import exception
from oslo_db.tests import utils as test_utils
sqla = importutils.try_import('sqlalchemy')
if not sqla:
raise ImportError("Unable to import module 'sqlalchemy'.")
def get_backend():
return DBAPI()
class DBAPI(object):
def _api_raise(self, *args, **kwargs):
"""Simulate raising a database-has-gone-away error
This method creates a fake OperationalError with an ID matching
a valid MySQL "database has gone away" situation. It also decrements
the error_counter so that we can artificially keep track of
how many times this function is called by the wrapper. When
error_counter reaches zero, this function returns True, simulating
the database becoming available again and the query succeeding.
"""
if self.error_counter > 0:
self.error_counter -= 1
orig = sqla.exc.DBAPIError(False, False, False)
orig.args = [2006, 'Test raise operational error']
e = exception.DBConnectionError(orig)
raise e
else:
return True
def api_raise_default(self, *args, **kwargs):
return self._api_raise(*args, **kwargs)
@api.safe_for_db_retry
def api_raise_enable_retry(self, *args, **kwargs):
return self._api_raise(*args, **kwargs)
def api_class_call1(_self, *args, **kwargs):
return args, kwargs
class DBAPITestCase(test_utils.BaseTestCase):
def test_dbapi_full_path_module_method(self):
dbapi = api.DBAPI('oslo_db.tests.test_api')
result = dbapi.api_class_call1(1, 2, kwarg1='meow')
expected = ((1, 2), {'kwarg1': 'meow'})
self.assertEqual(expected, result)
def test_dbapi_unknown_invalid_backend(self):
self.assertRaises(ImportError, api.DBAPI, 'tests.unit.db.not_existent')
def test_dbapi_lazy_loading(self):
dbapi = api.DBAPI('oslo_db.tests.test_api', lazy=True)
self.assertIsNone(dbapi._backend)
dbapi.api_class_call1(1, 'abc')
self.assertIsNotNone(dbapi._backend)
def test_dbapi_from_config(self):
conf = cfg.ConfigOpts()
dbapi = api.DBAPI.from_config(conf,
backend_mapping={'sqlalchemy': __name__})
self.assertIsNotNone(dbapi._backend)
class DBReconnectTestCase(DBAPITestCase):
def setUp(self):
super(DBReconnectTestCase, self).setUp()
self.test_db_api = DBAPI()
patcher = mock.patch(__name__ + '.get_backend',
return_value=self.test_db_api)
patcher.start()
self.addCleanup(patcher.stop)
def test_raise_connection_error(self):
self.dbapi = api.DBAPI('sqlalchemy', {'sqlalchemy': __name__})
self.test_db_api.error_counter = 5
self.assertRaises(exception.DBConnectionError, self.dbapi._api_raise)
def test_raise_connection_error_decorated(self):
self.dbapi = api.DBAPI('sqlalchemy', {'sqlalchemy': __name__})
self.test_db_api.error_counter = 5
self.assertRaises(exception.DBConnectionError,
self.dbapi.api_raise_enable_retry)
self.assertEqual(4, self.test_db_api.error_counter, 'Unexpected retry')
def test_raise_connection_error_enabled(self):
self.dbapi = api.DBAPI('sqlalchemy',
{'sqlalchemy': __name__},
use_db_reconnect=True)
self.test_db_api.error_counter = 5
self.assertRaises(exception.DBConnectionError,
self.dbapi.api_raise_default)
self.assertEqual(4, self.test_db_api.error_counter, 'Unexpected retry')
@mock.patch('oslo_db.api.time.sleep', return_value=None)
def test_retry_one(self, p_time_sleep):
self.dbapi = api.DBAPI('sqlalchemy',
{'sqlalchemy': __name__},
use_db_reconnect=True,
retry_interval=1)
try:
func = self.dbapi.api_raise_enable_retry
self.test_db_api.error_counter = 1
self.assertTrue(func(), 'Single retry did not succeed.')
except Exception:
self.fail('Single retry raised an un-wrapped error.')
p_time_sleep.assert_called_with(1)
self.assertEqual(
0, self.test_db_api.error_counter,
'Counter not decremented, retry logic probably failed.')
@mock.patch('oslo_db.api.time.sleep', return_value=None)
def test_retry_two(self, p_time_sleep):
self.dbapi = api.DBAPI('sqlalchemy',
{'sqlalchemy': __name__},
use_db_reconnect=True,
retry_interval=1,
inc_retry_interval=False)
try:
func = self.dbapi.api_raise_enable_retry
self.test_db_api.error_counter = 2
self.assertTrue(func(), 'Multiple retry did not succeed.')
except Exception:
self.fail('Multiple retry raised an un-wrapped error.')
p_time_sleep.assert_called_with(1)
self.assertEqual(
0, self.test_db_api.error_counter,
'Counter not decremented, retry logic probably failed.')
@mock.patch('oslo_db.api.time.sleep', return_value=None)
def test_retry_float_interval(self, p_time_sleep):
self.dbapi = api.DBAPI('sqlalchemy',
{'sqlalchemy': __name__},
use_db_reconnect=True,
retry_interval=0.5)
try:
func = self.dbapi.api_raise_enable_retry
self.test_db_api.error_counter = 1
self.assertTrue(func(), 'Single retry did not succeed.')
except Exception:
self.fail('Single retry raised an un-wrapped error.')
p_time_sleep.assert_called_with(0.5)
self.assertEqual(
0, self.test_db_api.error_counter,
'Counter not decremented, retry logic probably failed.')
@mock.patch('oslo_db.api.time.sleep', return_value=None)
def test_retry_until_failure(self, p_time_sleep):
self.dbapi = api.DBAPI('sqlalchemy',
{'sqlalchemy': __name__},
use_db_reconnect=True,
retry_interval=1,
inc_retry_interval=False,
max_retries=3)
func = self.dbapi.api_raise_enable_retry
self.test_db_api.error_counter = 5
# NOTE: assertRaises passes extra positional arguments through to the
# callable, so a failure message cannot be supplied here; a retry of a
# permanent failure must simply raise DBError.
self.assertRaises(exception.DBError, func)
p_time_sleep.assert_called_with(1)
self.assertNotEqual(
0, self.test_db_api.error_counter,
'Retry did not stop after sql_max_retries iterations.')
class DBRetryRequestCase(DBAPITestCase):
def test_retry_wrapper_succeeds(self):
@api.wrap_db_retry(max_retries=10)
def some_method():
pass
some_method()
def test_retry_wrapper_reaches_limit(self):
max_retries = 2
@api.wrap_db_retry(max_retries=max_retries)
def some_method(res):
res['result'] += 1
raise exception.RetryRequest(ValueError())
res = {'result': 0}
self.assertRaises(ValueError, some_method, res)
self.assertEqual(max_retries + 1, res['result'])
def test_retry_wrapper_exception_checker(self):
def exception_checker(exc):
return isinstance(exc, ValueError) and exc.args[0] < 5
@api.wrap_db_retry(max_retries=10,
exception_checker=exception_checker)
def some_method(res):
res['result'] += 1
raise ValueError(res['result'])
res = {'result': 0}
self.assertRaises(ValueError, some_method, res)
# our exception checker should have stopped returning True after 5
self.assertEqual(5, res['result'])
@mock.patch.object(DBAPI, 'api_class_call1')
@mock.patch.object(api, 'wrap_db_retry')
def test_mocked_methods_are_not_wrapped(self, mocked_wrap, mocked_method):
dbapi = api.DBAPI('oslo_db.tests.test_api')
dbapi.api_class_call1()
self.assertFalse(mocked_wrap.called)
@mock.patch('oslo_db.api.LOG')
def test_retry_wrapper_non_db_error_not_logged(self, mock_log):
# Tests that if the retry wrapper hits a non-db error (raised from the
# wrapped function), then that exception is reraised but not logged.
@api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def some_method():
raise AttributeError('test')
self.assertRaises(AttributeError, some_method)
self.assertFalse(mock_log.called)
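# NOTE: typical service-side use of the decorator exercised above
# (a sketch; ``get_instance_by_uuid`` and ``Instance`` are hypothetical
# names, not part of oslo.db):
from oslo_db import api as oslo_db_api


@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def get_instance_by_uuid(session, uuid):
    # transient DBDeadlock / DBConnectionError failures raised in here
    # are retried with exponential backoff before being re-raised
    return session.query(Instance).filter_by(uuid=uuid).one()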

View File

@ -1,108 +0,0 @@
# Copyright 2014 Mirantis.inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
import mock
from oslo_db import concurrency
from oslo_db.tests import utils as test_utils
FAKE_BACKEND_MAPPING = {'sqlalchemy': 'fake.db.sqlalchemy.api'}
class TpoolDbapiWrapperTestCase(test_utils.BaseTestCase):
def setUp(self):
super(TpoolDbapiWrapperTestCase, self).setUp()
self.db_api = concurrency.TpoolDbapiWrapper(
conf=self.conf, backend_mapping=FAKE_BACKEND_MAPPING)
# NOTE(akurilin): We are not going to add `eventlet` to `oslo_db` in
# requirements (`requirements.txt` and `test-requirements.txt`) due to
# the following reasons:
# - supporting of eventlet's thread pooling is totally optional;
# - we don't need to test `tpool.Proxy` functionality itself,
# because it's a tool from the third party library;
# - `eventlet` would prevent us from running unit tests on Python 3.x
# versions, because it doesn't support them yet.
#
# As we don't test `tpool.Proxy`, we can safely mock it in tests.
self.proxy = mock.MagicMock()
self.eventlet = mock.MagicMock()
self.eventlet.tpool.Proxy.return_value = self.proxy
sys.modules['eventlet'] = self.eventlet
self.addCleanup(sys.modules.pop, 'eventlet', None)
@mock.patch('oslo_db.api.DBAPI')
def test_db_api_common(self, mock_db_api):
# test context:
# CONF.database.use_tpool == False
# eventlet is installed
# expected result:
# TpoolDbapiWrapper should wrap DBAPI
fake_db_api = mock.MagicMock()
mock_db_api.from_config.return_value = fake_db_api
# get access to some db-api method
self.db_api.fake_call_1
mock_db_api.from_config.assert_called_once_with(
conf=self.conf, backend_mapping=FAKE_BACKEND_MAPPING)
self.assertEqual(fake_db_api, self.db_api._db_api)
self.assertFalse(self.eventlet.tpool.Proxy.called)
# get access to other db-api method to be sure that api didn't changed
self.db_api.fake_call_2
self.assertEqual(fake_db_api, self.db_api._db_api)
self.assertFalse(self.eventlet.tpool.Proxy.called)
self.assertEqual(1, mock_db_api.from_config.call_count)
@mock.patch('oslo_db.api.DBAPI')
def test_db_api_config_change(self, mock_db_api):
# test context:
# CONF.database.use_tpool == True
# eventlet is installed
# expected result:
# TpoolDbapiWrapper should wrap tpool proxy
fake_db_api = mock.MagicMock()
mock_db_api.from_config.return_value = fake_db_api
self.conf.set_override('use_tpool', True, group='database')
# get access to some db-api method
self.db_api.fake_call
# CONF.database.use_tpool is True, so we get tpool proxy in this case
mock_db_api.from_config.assert_called_once_with(
conf=self.conf, backend_mapping=FAKE_BACKEND_MAPPING)
self.eventlet.tpool.Proxy.assert_called_once_with(fake_db_api)
self.assertEqual(self.proxy, self.db_api._db_api)
@mock.patch('oslo_db.api.DBAPI')
def test_db_api_without_installed_eventlet(self, mock_db_api):
# test context:
# CONF.database.use_tpool == True
# eventlet is not installed
# expected result:
# raise ImportError
self.conf.set_override('use_tpool', True, group='database')
sys.modules['eventlet'] = None
self.assertRaises(ImportError, getattr, self.db_api, 'fake')
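# NOTE: when CONF.database.use_tpool is enabled, every DB API call is
# dispatched through eventlet.tpool.Proxy, i.e. executed in a real OS
# thread pool so blocking DB drivers don't stall the eventlet hub
# (a sketch of the effective call path):
#
#     db_api = concurrency.TpoolDbapiWrapper(conf, backend_mapping)
#     db_api.instance_get(ctx)  # -> tpool.Proxy(real_api).instance_get(ctx)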

View File

@ -1,40 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
from oslo_config import cfg
from oslotest import base as test_base
from oslotest import moxstubout
import six
if six.PY3:
@contextlib.contextmanager
def nested(*contexts):
with contextlib.ExitStack() as stack:
yield [stack.enter_context(c) for c in contexts]
else:
nested = contextlib.nested
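# NOTE: this shim lets Python 2 call sites keep their shape under
# Python 3, e.g. (a sketch):
#
#     with nested(mock.patch('a.b'), mock.patch('c.d')) as (m1, m2):
#         ...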
class BaseTestCase(test_base.BaseTestCase):
def setUp(self, conf=cfg.CONF):
super(BaseTestCase, self).setUp()
moxfixture = self.useFixture(moxstubout.MoxStubout())
self.mox = moxfixture.mox
self.stubs = moxfixture.stubs
self.conf = conf
self.addCleanup(self.conf.reset)

View File

@ -1,3 +0,0 @@
---
other:
- Introduce reno for deployer release notes.

View File

@ -1,7 +0,0 @@
---
upgrade:
- The allowed values for the ``connection_debug`` option are now restricted to
the range between 0 and 100 (inclusive). Previously a number lower than 0
or higher than 100 could be given without error; now a
``ConfigFileValueError`` is raised when the option value is outside this
range.

View File

@ -1,6 +0,0 @@
---
deprecations:
- The ``InsertFromSelect`` class in the ``oslo_db.sqlalchemy.utils`` module is
deprecated in favor of the ``sqlalchemy.sql.expression.Insert.from_select()``
method of the Insert expression, which is available in SQLAlchemy versions
1.0.0 and newer.

View File

@ -1,7 +0,0 @@
---
deprecations:
- |
The configuration option ``sqlite_db`` is now deprecated and
will be removed in the future. Please use configuration
option ``connection`` or ``slave_connection`` to connect to the database.

View File

@ -1,6 +0,0 @@
---
features:
- enginefacade decorators can now be used for class and instance methods,
which implicitly receive ``self`` or ``cls`` as the first positional
argument. Previously, all decorated functions were required to receive a
context value as the first argument.

View File

@ -1,25 +0,0 @@
---
upgrade:
- |
The default value of the ``max_overflow`` config option
has been increased from 10 to 50 in order to allow
OpenStack services that use databases heavily to better handle
spikes of concurrent requests and to lower the probability
of pool timeout issues.
This change potentially increases the number
of open connections to an RDBMS server. Depending on the
configuration, you may see "too many connections" errors
in the logs of OpenStack services / the RDBMS server. The maximum
number of connections can be set by means of these config options:
http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_connections
http://www.postgresql.org/docs/current/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS
For details, please see the following LP:
https://bugs.launchpad.net/oslo.db/+bug/1535375
and the ML thread:
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html
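Operators who prefer the previous behaviour can pin the old default in the
service configuration, for example (a sketch; the option lives in the
``[database]`` group):

    [database]
    max_overflow = 10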

View File

@ -1,5 +0,0 @@
---
deprecations:
- Base test classes from ``oslo_db.sqlalchemy.test_base`` are deprecated in
favor of the new fixtures introduced in the
``oslo_db.sqlalchemy.test_fixtures`` module.

View File

@ -1,5 +0,0 @@
---
upgrade:
- The configuration option ``sqlite_db`` is removed. Please use
configuration option ``connection`` or ``slave_connection``
to connect to the database.

View File

@ -1,14 +0,0 @@
---
upgrade:
- |
oslo.db now logs a warning when the connection URL does not
explicitly mention a driver. The default driver is still used, but
in some cases, such as MySQL, the default is incompatible with the
concurrency library eventlet.
- |
It is strongly recommended to use the `PyMySQL
<https://pypi.python.org/pypi/PyMySQL>`__ driver when connecting
to a MySQL-compatible database to ensure the best compatibility
with the concurrency library eventlet. To use PyMySQL, ensure the
connection URL is specified with ``mysql+pymysql://`` as the
scheme.
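For example (the hostname and credentials below are placeholders):

    [database]
    connection = mysql+pymysql://user:password@dbhost/mydb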

View File

@ -1,6 +0,0 @@
---
fixes:
- Decorator ``oslo_db.api.wrap_db_retry`` now defaults to 10 retries.
Previously the number of attempts was 0, and users had to explicitly
pass a ``max_retries`` value greater than 0 to actually enable
retries on errors.

View File

@ -1,284 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# oslo.db Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'openstackdocstheme',
'reno.sphinxext',
]
# openstackdocstheme options
repository_name = 'openstack/oslo.db'
bug_project = 'oslo.db'
bug_tag = ''
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'oslo.db Release Notes'
copyright = u'2016, oslo.db Developers'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# The full version, including alpha/beta/rc tags.
import pkg_resources
release = pkg_resources.get_distribution('oslo.db').version
# The short X.Y version.
version = release
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'oslo.dbReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'oslo.dbReleaseNotes.tex',
u'oslo.db Release Notes Documentation',
u'oslo.db Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'oslo.dbreleasenotes',
u'oslo.db Release Notes Documentation',
[u'oslo.db Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'oslo.dbReleaseNotes',
u'oslo.db Release Notes Documentation',
u'oslo.db Developers', 'oslo.dbReleaseNotes',
'Oslo Database library.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']


@@ -1,12 +0,0 @@
=======================
oslo.db Release Notes
=======================
.. toctree::
:maxdepth: 1
unreleased
ocata
newton
mitaka
liberty


@@ -1,6 +0,0 @@
==============================
Liberty Series Release Notes
==============================
.. release-notes::
:branch: origin/stable/liberty


@@ -1,86 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.db Release Notes 4.18.1.dev1\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-03-14 11:56+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-28 05:55+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "4.6.0"
msgstr "4.6.0"
msgid "For details, please see the following LP:"
msgstr "For details, please see the following LP:"
msgid "Introduce reno for deployer release notes."
msgstr "Introduce reno for deployer release notes."
msgid "Liberty Series Release Notes"
msgstr "Liberty Series Release Notes"
msgid "Mitaka Series Release Notes"
msgstr "Mitaka Series Release Notes"
msgid "Other Notes"
msgstr "Other Notes"
msgid ""
"The default value of ``max_overflow`` config option has been increased from "
"10 to 50 in order to allow OpenStack services heavily using DBs to better "
"handle spikes of concurrent requests and lower the probability of getting a "
"pool timeout issue."
msgstr ""
"The default value of ``max_overflow`` config option has been increased from "
"10 to 50 in order to allow OpenStack services heavily using DBs to better "
"handle spikes of concurrent requests and lower the probability of getting a "
"pool timeout issue."
msgid ""
"This change potentially leads to increasing of the number of open "
"connections to an RDBMS server. Depending on the configuration, you may see "
"\"too many connections\" errors in logs of OpenStack services / RDBMS "
"server. The max limit of connections can be set by the means of these config "
"options:"
msgstr ""
"This change potentially leads to increasing of the number of open "
"connections to an RDBMS server. Depending on the configuration, you may see "
"\"too many connections\" errors in logs of OpenStack services / RDBMS "
"server. The max limit of connections can be set by the means of these config "
"options:"
msgid "Unreleased Release Notes"
msgstr "Unreleased Release Notes"
msgid "Upgrade Notes"
msgstr "Upgrade Notes"
msgid "and the ML thread:"
msgstr "and the ML thread:"
msgid ""
"http://dev.mysql.com/doc/refman/5.7/en/server-system-variables."
"html#sysvar_max_connections http://www.postgresql.org/docs/current/static/"
"runtime-config-connection.html#GUC-MAX-CONNECTIONS"
msgstr ""
"http://dev.mysql.com/doc/refman/5.7/en/server-system-variables."
"html#sysvar_max_connections http://www.postgresql.org/docs/current/static/"
"runtime-config-connection.html#GUC-MAX-CONNECTIONS"
msgid ""
"http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html"
msgstr ""
"http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html"
msgid "https://bugs.launchpad.net/oslo.db/+bug/1535375"
msgstr "https://bugs.launchpad.net/oslo.db/+bug/1535375"
msgid "oslo.db Release Notes"
msgstr "oslo.db Release Notes"


@@ -1,68 +0,0 @@
# Gérald LONLAS <g.lonlas@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.db Release Notes 4.18.1.dev1\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-03-14 11:56+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-10-22 05:59+0000\n"
"Last-Translator: Gérald LONLAS <g.lonlas@gmail.com>\n"
"Language-Team: French\n"
"Language: fr\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n > 1)\n"
msgid "2.6.0-9"
msgstr "2.6.0-9"
msgid "4.12.0"
msgstr "4.12.0"
msgid "4.6.0"
msgstr "4.6.0"
msgid "4.8.0"
msgstr "4.8.0"
msgid "4.9.0"
msgstr "4.9.0"
msgid "Bug Fixes"
msgstr "Corrections de bugs"
msgid "Deprecation Notes"
msgstr "Notes dépréciées "
msgid "Liberty Series Release Notes"
msgstr "Note de release pour Liberty"
msgid "Mitaka Series Release Notes"
msgstr "Note de release pour Mitaka"
msgid "New Features"
msgstr "Nouvelles fonctionnalités"
msgid "Newton Series Release Notes"
msgstr "Note de release pour Newton"
msgid "Other Notes"
msgstr "Autres notes"
msgid "Unreleased Release Notes"
msgstr "Note de release pour les changements non déployées"
msgid "Upgrade Notes"
msgstr "Notes de mises à jours"
msgid ""
"http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html"
msgstr ""
"http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html"
msgid "https://bugs.launchpad.net/oslo.db/+bug/1535375"
msgstr "https://bugs.launchpad.net/oslo.db/+bug/1535375"
msgid "oslo.db Release Notes"
msgstr "Note de release pour oslo.db"


@@ -1,6 +0,0 @@
===================================
Mitaka Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/mitaka


@@ -1,6 +0,0 @@
=============================
Newton Series Release Notes
=============================
.. release-notes::
:branch: origin/stable/newton


@@ -1,6 +0,0 @@
===================================
Ocata Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/ocata


@@ -1,5 +0,0 @@
==========================
Unreleased Release Notes
==========================
.. release-notes::


@@ -1,14 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
alembic>=0.8.10 # MIT
debtcollector>=1.2.0 # Apache-2.0
oslo.i18n!=3.15.2,>=2.1.0 # Apache-2.0
oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
sqlalchemy-migrate>=0.11.0 # Apache-2.0
stevedore>=1.20.0 # Apache-2.0
six>=1.9.0 # MIT


@@ -1,97 +0,0 @@
[metadata]
name = oslo.db
summary = Oslo Database library
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = https://docs.openstack.org/oslo.db/latest
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.5
[extras]
# So e.g. nova can test-depend on oslo.db[mysql]
mysql =
PyMySQL>=0.7.6 # MIT License
# or oslo.db[mysql-c]
mysql-c =
MySQL-python:python_version=='2.7' # GPL with FOSS exception
# or oslo.db[postgresql]
postgresql =
psycopg2>=2.5 # LGPL/ZPL
# Dependencies for testing oslo.db itself.
test =
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
doc8 # Apache-2.0
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
mock>=2.0 # BSD
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx>=1.6.2 # BSD
openstackdocstheme>=1.11.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
oslo.context>=2.14.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
os-testr>=0.8.0 # Apache-2.0
reno!=2.3.1,>=1.8.0 # Apache-2.0
fixtures =
testresources>=0.2.4 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
pifpaf =
pifpaf>=0.10.0 # Apache-2.0
[files]
packages =
oslo_db
[entry_points]
oslo.config.opts =
oslo.db = oslo_db.options:list_opts
oslo.db.concurrency = oslo_db.concurrency:list_opts
oslo.db.migration =
alembic = oslo_db.sqlalchemy.migration_cli.ext_alembic:AlembicExtension
migrate = oslo_db.sqlalchemy.migration_cli.ext_migrate:MigrateExtension
[wheel]
universal = 1
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
warning-is-error = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = oslo_db/locale
domain = oslo_db
[update_catalog]
domain = oslo_db
output_dir = oslo_db/locale
input_file = oslo_db/locale/oslo_db.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = oslo_db/locale/oslo_db.pot
[pbr]
autodoc_index_modules = True
api_doc_dir = reference/api
autodoc_exclude_modules =
oslo_db.tests.*
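
The ``[extras]`` section above is what lets a consumer pull in the optional
driver or test packages together with the library, for example with
``pip install 'oslo.db[mysql]'`` or by listing ``oslo.db[fixtures]`` as a
test dependency.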


@@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=2.0.0'],
pbr=True)


@@ -1,16 +0,0 @@
#!/usr/bin/env bash
# return nonzero exit status of rightmost command, so that we
# get nonzero exit on test failure without halting subunit-trace
set -o pipefail
TESTRARGS=$1
python setup.py testr --testr-args="--subunit $TESTRARGS" | subunit-trace -f
retval=$?
# NOTE(mtreinish) The pipe above would eat the slowest display from pbr's testr
# wrapper so just manually print the slowest tests
echo -e "\nSlowest Tests:\n"
testr slowest
exit $retval

Some files were not shown because too many files have changed in this diff.