convert svn to hg

iElectric 2009-06-30 22:50:18 +02:00
commit 9cedba8b8b
99 changed files with 9941 additions and 0 deletions

3
MANIFEST.in Normal file

@ -0,0 +1,3 @@
include CHANGELOG
include docs/*.rst
include docs/conf.py

35
README Normal file

@ -0,0 +1,35 @@
Inspired by Ruby on Rails' migrations, Migrate provides a way to deal with database schema changes in `SQLAlchemy <http://sqlalchemy.org>`_ projects.
Migrate extends SQLAlchemy to have database changeset handling. It provides a database change repository mechanism which can be used from the command line as well as from inside python code.
Help
----
Sphinx documentation is available at the project page `packages.python.org <http://packages.python.org/sqlalchemy-migrate/>`_.
Users and developers can be found in the #sqlalchemy-migrate channel on the Freenode IRC network
and at the public users mailing list `migrate-users <http://groups.google.com/group/migrate-users>`_.
New releases and major changes are announced at the public announce
mailing list `migrate-announce <http://groups.google.com/group/migrate-announce>`_
and at the Python package index `sqlalchemy-migrate <http://pypi.python.org/pypi/sqlalchemy-migrate>`_.
The project homepage is located at `code.google.com <http://code.google.com/p/sqlalchemy-migrate/>`_.
You can also download the `development version <http://sqlalchemy-migrate.googlecode.com/svn/trunk/#egg=sqlalchemy-migrate-dev>`_ from SVN trunk.
Tests and Bugs
--------------
To run automated tests:
- Copy test_db.cfg.tmpl to test_db.cfg
- Edit test_db.cfg with database connection strings suitable for running tests. (Use empty databases.)
- python setup.py test
Note that `nose <http://somethingaboutorange.com/mrl/projects/nose/>`_ is required to run migrate's tests. It should be
installed automatically; if not, try "easy_install nose".
Please report any issues with sqlalchemy-migrate to the issue tracker
at `code.google.com issues <http://code.google.com/p/sqlalchemy-migrate/issues/list>`_.

8
TODO Normal file

@ -0,0 +1,8 @@
- better SQL scripts support (testing, source viewing)
make_update_script_for_model:
- calculated differences between models are actually differences between metas
- columns are not compared?
- even when two "models" are equal, the comparison does not report them as equal
- controlledschema.drop() drops the whole migrate table; other repositories may still be bound to it!

75
docs/Makefile Normal file

@ -0,0 +1,75 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d _build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html web pickle htmlhelp latex changes linkcheck
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " changes to make an overview over all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
clean:
-rm -rf _build/*
html:
mkdir -p _build/html _build/doctrees
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) _build/html
@echo
@echo "Build finished. The HTML pages are in _build/html."
pickle:
mkdir -p _build/pickle _build/doctrees
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) _build/pickle
@echo
@echo "Build finished; now you can process the pickle files."
web: pickle
json:
mkdir -p _build/json _build/doctrees
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) _build/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
mkdir -p _build/htmlhelp _build/doctrees
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) _build/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in _build/htmlhelp."
latex:
mkdir -p _build/latex _build/doctrees
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex
@echo
@echo "Build finished; the LaTeX files are in _build/latex."
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
"run these through (pdf)latex."
changes:
mkdir -p _build/changes _build/doctrees
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) _build/changes
@echo
@echo "The overview file is in _build/changes."
linkcheck:
mkdir -p _build/linkcheck _build/doctrees
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) _build/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in _build/linkcheck/output.txt."

199
docs/api.rst Normal file

@ -0,0 +1,199 @@
Module :mod:`migrate.changeset`
===============================
.. automodule:: migrate.changeset
:members:
:synopsis: Database changeset management
Module :mod:`ansisql <migrate.changeset.ansisql>`
-------------------------------------------------
.. automodule:: migrate.changeset.ansisql
:members:
:member-order: groupwise
:synopsis: Standard SQL implementation for altering database schemas
Module :mod:`constraint <migrate.changeset.constraint>`
-------------------------------------------------------
.. automodule:: migrate.changeset.constraint
:members:
:show-inheritance:
:member-order: groupwise
:synopsis: Standalone schema constraint objects
Module :mod:`databases <migrate.changeset.databases>`
-----------------------------------------------------
.. automodule:: migrate.changeset.databases
:members:
:synopsis: Database specific changeset implementations
.. _mysql-d:
Module :mod:`mysql <migrate.changeset.databases.mysql>`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: migrate.changeset.databases.mysql
:members:
:synopsis: MySQL database specific changeset implementations
.. _firebird-d:
Module :mod:`firebird <migrate.changeset.databases.firebird>`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: migrate.changeset.databases.firebird
:members:
:synopsis: Firebird database specific changeset implementations
.. _oracle-d:
Module :mod:`oracle <migrate.changeset.databases.oracle>`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: migrate.changeset.databases.oracle
:members:
:synopsis: Oracle database specific changeset implementations
.. _postgres-d:
Module :mod:`postgres <migrate.changeset.databases.postgres>`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: migrate.changeset.databases.postgres
:members:
:synopsis: PostgreSQL database specific changeset implementations
.. _sqlite-d:
Module :mod:`sqlite <migrate.changeset.databases.sqlite>`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: migrate.changeset.databases.sqlite
:members:
:synopsis: SQLite database specific changeset implementations
Module :mod:`visitor <migrate.changeset.databases.visitor>`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: migrate.changeset.databases.visitor
:members:
Module :mod:`exceptions <migrate.changeset.exceptions>`
-------------------------------------------------------
.. automodule:: migrate.changeset.exceptions
:members:
:synopsis: Changeset exception classes
Module :mod:`schema <migrate.changeset.schema>`
-----------------------------------------------
.. automodule:: migrate.changeset.schema
:members:
:synopsis: Schema changeset handling functions
Module :mod:`migrate.versioning`
================================
.. automodule:: migrate.versioning
:members:
:synopsis: Database version and repository management
Module :mod:`api <migrate.versioning.api>`
------------------------------------------
.. automodule:: migrate.versioning.api
:members:
:synopsis: External API for :mod:`migrate.versioning`
Module :mod:`exceptions <migrate.versioning.exceptions>`
--------------------------------------------------------
.. automodule:: migrate.versioning.exceptions
:members:
:synopsis: Exception classes for :mod:`migrate.versioning`
Module :mod:`genmodel <migrate.versioning.genmodel>`
----------------------------------------------------
.. automodule:: migrate.versioning.genmodel
:members:
:synopsis: Python database model generator and differencer
Module :mod:`pathed <migrate.versioning.pathed>`
------------------------------------------------
.. automodule:: migrate.versioning.pathed
:members:
:synopsis: File/Directory handling class
Module :mod:`repository <migrate.versioning.repository>`
--------------------------------------------------------
.. automodule:: migrate.versioning.repository
:members:
:synopsis: SQLAlchemy migrate repository management
:member-order: groupwise
Module :mod:`schema <migrate.versioning.schema>`
------------------------------------------------
.. automodule:: migrate.versioning.schema
:members:
:member-order: groupwise
:synopsis: Database schema management
Module :mod:`schemadiff <migrate.versioning.schemadiff>`
--------------------------------------------------------
.. automodule:: migrate.versioning.schemadiff
:members:
:synopsis: Database schema and model differencing
Module :mod:`script <migrate.versioning.script>`
------------------------------------------------
.. automodule:: migrate.versioning.script.base
:synopsis: Script utilities
:member-order: groupwise
:members:
.. automodule:: migrate.versioning.script.py
:members:
:member-order: groupwise
:inherited-members:
:show-inheritance:
.. automodule:: migrate.versioning.script.sql
:members:
:member-order: groupwise
:show-inheritance:
:inherited-members:
Module :mod:`shell <migrate.versioning.shell>`
----------------------------------------------
.. automodule:: migrate.versioning.shell
:members:
:synopsis: Shell commands
Module :mod:`util <migrate.versioning.util>`
--------------------------------------------
.. automodule:: migrate.versioning.util
:members:
:synopsis: Utility functions
Module :mod:`version <migrate.versioning.version>`
--------------------------------------------------
.. automodule:: migrate.versioning.version
:members:
:member-order: groupwise
:synopsis: Version management

152
docs/changelog.rst Normal file

@ -0,0 +1,152 @@
0.5.5
-----
- url parameter can also be an Engine instance (this usage is discouraged though sometimes necessary)
- added support for SQLAlchemy 0.6 (missing oracle and firebird) by Michael Bayer
- alter, create, drop column / rename table / rename index constructs now accept `alter_metadata` parameter. If True, it will modify Column/Table objects according to changes. Otherwise, everything will be untouched.
- complete refactoring of :class:`~migrate.changeset.schema.ColumnDelta` (fixes issue 23)
- added support for :ref:`firebird <firebird-d>`
- fixed bug when :meth:`Column.alter <migrate.changeset.schema.ChangesetColumn.alter>`\(server_default='string') was not properly set
- `server_defaults` passed to :meth:`Column.create <migrate.changeset.schema.ChangesetColumn.create>` are now issued correctly
- constraints passed to :meth:`Column.create <migrate.changeset.schema.ChangesetColumn.create>` are correctly interpreted (``ALTER TABLE ADD CONSTRAINT`` is issued after ``ALTER TABLE ADD COLUMN``)
- :meth:`Column.create <migrate.changeset.schema.ChangesetColumn.create>` accepts `primary_key_name`, `unique_name` and `index_name` as string values, which are used as the constraint name when adding a column
- Constraint classes have `cascade=True` keyword argument to issue ``DROP CASCADE`` where supported
- added :class:`~migrate.changeset.constraint.UniqueConstraint`/:class:`~migrate.changeset.constraint.CheckConstraint` and corresponding create/drop methods
- use SQLAlchemy quoting system to avoid name conflicts (for issue 32)
- code coverage is up to 99% with more than 100 tests
- partial refactoring of :mod:`changeset` package
- major update to the documentation
- :ref:`dialect support <dialect-support>` table was added to documentation
.. _backwards-055:
**Backward incompatible changes**:
- python upgrade/downgrade scripts no longer import `migrate_engine` magically; the engine is received as the only parameter to the function (e.g. ``def upgrade(migrate_engine):``)
- :meth:`Column.alter <migrate.changeset.schema.ChangesetColumn.alter>` does not accept `current_name` anymore, it extracts name from the old column.
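The new script signature can be sketched as follows (a minimal illustration; the table and column names are made up, not from the real project, and a real script would normally use SQLAlchemy constructs rather than raw DDL):

```python
# Sketch of a post-0.5.5 migration script: the engine arrives as an
# explicit parameter instead of being injected as a magic global.

def upgrade(migrate_engine):
    # raw DDL keeps this sketch self-contained; real scripts typically
    # build Table/Column objects bound to migrate_engine instead
    migrate_engine.execute("ALTER TABLE account ADD COLUMN points INTEGER")

def downgrade(migrate_engine):
    # downgrade should undo exactly what upgrade did
    migrate_engine.execute("ALTER TABLE account DROP COLUMN points")
```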
0.5.4
-----
- fixed preview_sql parameter for downgrade/upgrade. It now prints the SQL directly if the step is a SQL script, and runs the step with a mocked engine that only prints SQL statements if the ORM is used. [Domen Kozar]
- use entrypoints terminology to specify dotted model names (module.model:User) [Domen Kozar]
- added engine_dict and engine_arg_* parameters to all api functions (deprecated echo) [Domen Kozar]
- made the --echo parameter more forgiving (better Python API support) [Domen Kozar]
- apply patch to refactor cmd line parsing for Issue 54 by Domen Kozar
0.5.3
-----
- apply patch for Issue 29 by Jonathan Ellis
- fix Issue 52 by removing needless parameters from object.__new__ calls
0.5.2
-----
- move sphinx and nose dependencies to extras_require and tests_require
- integrate patch for Issue 36 by Kumar McMillan
- fix unit tests
- mark ALTER TABLE ADD COLUMN with FOREIGN KEY as not supported by SQLite
0.5.1.2
-------
- corrected build
0.5.1.1
-------
- add documentation in tarball
- add a MANIFEST.in
0.5.1
-----
- SA 0.5.x support. SQLAlchemy < 0.5.1 not supported anymore.
- use nose instead of py.test for testing
- Added --echo=True option for all commands, which will make the sqlalchemy connection echo SQL statements.
- Better PostgreSQL support, especially for schemas.
- modification to the downgrade command to simplify the calling (old way still works just fine)
- improved support for SQLite
- add support for check constraints (EXPERIMENTAL)
- print statements removed from APIs
- improved sphinx based documentation
- removal of old commented code
- PEP-8 clean code
0.4.5
-----
- work by Christian Simms to compare metadata against databases
- new repository format
- a repository format migration tool is in migrate/versioning/migrate_repository.py
- support for default SQL scripts
- EXPERIMENTAL support for dumping database to model
0.4.4
-----
- patch by pwannygoodness for Issue #15
- fixed unit tests to work with py.test 0.9.1
- fix for a SQLAlchemy deprecation warning
0.4.3
-----
- patch by Kevin Dangoor to handle database versions as packages and ignore their __init__.py files in version.py
- fixed unit tests and Oracle changeset support by Christian Simms
0.4.2
-----
- package name is sqlalchemy-migrate again to make pypi work
- make import of sqlalchemy's SchemaGenerator work regardless of previous imports
0.4.1
-----
- setuptools patch by Kevin Dangoor
- re-rename module to migrate
0.4.0
-----
- SA 0.4.0 compatibility thanks to Christian Simms
- all unit tests are working now (with sqlalchemy >= 0.3.10)
0.3
---
- SA 0.3.10 compatibility
0.2.3
-----
- Removed lots of SA monkeypatching in Migrate's internals
- SA 0.3.3 compatibility
- Removed logsql (#75)
- Updated py.test version from 0.8 to 0.9; added a download link to setup.py
- Fixed incorrect "function not defined" error (#88)
- Fixed SQLite and .sql scripts (#87)
0.2.2
-----
- Deprecated driver(engine) in favor of engine.name (#80)
- Deprecated logsql (#75)
- Comments in .sql scripts don't make things fail silently now (#74)
- Errors while downgrading (and probably other places) are shown on their own line
- Created mailing list and announcements list, updated documentation accordingly
- Automated tests now require py.test (#66)
- Documentation fix to .sql script commits (#72)
- Fixed a pretty major bug involving logengine, dealing with commits/tests (#64)
- Fixes to the online docs - default DB versioning table name (#68)
- Fixed the engine name in the scripts created by the command 'migrate script' (#69)
- Added Evan's email to the online docs
0.2.1
-----
- Created this changelog
- Now requires (and is now compatible with) SA 0.3
- Commits across filesystems now allowed (shutil.move instead of os.rename) (#62)

195
docs/changeset.rst Normal file

@ -0,0 +1,195 @@
.. _changeset-system:
.. highlight:: python
******************
Database changeset
******************
.. currentmodule:: migrate.changeset.schema
Importing :mod:`migrate.changeset` adds some new methods to existing
SA objects, as well as creating functions of its own. Most operations
can be done either by a method or a function. Methods match
SQLAlchemy's existing API and are more intuitive when the object is
available; functions allow one to make changes when only the name of
an object is available (for example, adding a column to a table in the
database without having to load that table into Python).
Changeset operations can be used independently of SQLAlchemy Migrate's
:ref:`versioning <versioning-system>`.
For more information, see the generated documentation for
:mod:`migrate.changeset`.
.. note::
The `alter_metadata` keyword argument defaults to ``True``.
Column
======
Given a standard SQLAlchemy table::
table = Table('mytable', meta,
Column('id', Integer, primary_key=True),
)
table.create()
.. _column-create:
:meth:`Create a column <ChangesetColumn.create>`::
col = Column('col1', String)
col.create(table)
# Column is added to table based on its name
assert col is table.c.col1
.. _column-drop:
:meth:`Drop a column <ChangesetColumn.drop>`::
col.drop()
.. _column-alter:
:meth:`Alter a column <ChangesetColumn.alter>`::
col.alter(name='col2')
# Renaming a column affects how it's accessed by the table object
assert col is table.c.col2
# Other properties can be modified as well
col.alter(type=String(42), default="life, the universe, and everything", nullable=False)
# Given another column object, col1.alter(col2), col1 will be changed to match col2
col.alter(Column('col3', String(77), nullable=True))
assert col.nullable
assert table.c.col3 is col
.. note::
Since version ``0.5.5`` you can pass `primary_key_name`, `index_name` and `unique_name` to the :meth:`column.create <ChangesetColumn.create>` method to issue ``ALTER TABLE ADD CONSTRAINT`` after changing the column. For multi-column constraints and other advanced configuration, see the :ref:`constraint tutorial <constraint-tutorial>`.
.. _table-rename:
Table
=====
SQLAlchemy supports `table create/drop`_.
:meth:`Rename a table <ChangesetTable.rename>`::
table.rename('newtablename')
.. _`table create/drop`: http://www.sqlalchemy.org/docs/05/metadata.html#creating-and-dropping-database-tables
.. currentmodule:: migrate.changeset.constraint
.. _index-rename:
Index
=====
SQLAlchemy supports `index create/drop`_.
:meth:`Rename an index <migrate.changeset.schema.ChangesetIndex.rename>`, given an SQLAlchemy ``Index`` object::
index.rename('newindexname')
.. _`index create/drop`: http://www.sqlalchemy.org/docs/05/metadata.html#indexes
.. _constraint-tutorial:
Constraint
==========
SQLAlchemy supports creating and dropping constraints at the same time a table is created or dropped. SQLAlchemy Migrate adds support for creating and dropping :class:`PrimaryKeyConstraint`/:class:`ForeignKeyConstraint`/:class:`CheckConstraint`/:class:`UniqueConstraint` constraints independently, as ``ALTER TABLE`` statements.
The following notes apply to all constraint classes:
1. Make sure you do ``from migrate.changeset import *`` after SQLAlchemy imports since `migrate` does not patch SA's Constraints.
2. You can also use Constraints as in SQLAlchemy. In this case passing table argument explicitly is required::
cons = PrimaryKeyConstraint('id', 'num', table=self.table)
# Create the constraint
cons.create()
# Drop the constraint
cons.drop()
or you can pass in column objects (and table argument can be left out)::
cons = PrimaryKeyConstraint(col1, col2)
3. Some dialects support CASCADE option when dropping constraints::
cons = PrimaryKeyConstraint(col1, col2)
# Create the constraint
cons.create()
# Drop the constraint
cons.drop(cascade=True)
.. note::
SQLAlchemy Migrate will try to guess the constraint name, but if it is anything other than the default, you'll need to supply the name explicitly. Best practice is to always name your constraints. Note that Oracle requires the name of the constraint to be stated when it is created or dropped.
Examples
---------
Primary key constraints::
from migrate.changeset import *
cons = PrimaryKeyConstraint(col1, col2)
# Create the constraint
cons.create()
# Drop the constraint
cons.drop()
Foreign key constraints::
from migrate.changeset import *
cons = ForeignKeyConstraint([table.c.fkey], [othertable.c.id])
# Create the constraint
cons.create()
# Drop the constraint
cons.drop()
Check constraints::
from migrate.changeset import *
cons = CheckConstraint('id > 3', columns=[table.c.id])
# Create the constraint
cons.create()
# Drop the constraint
cons.drop()
Unique constraints::
from migrate.changeset import *
cons = UniqueConstraint('id', 'age', table=self.table)
# Create the constraint
cons.create()
# Drop the constraint
cons.drop()

195
docs/conf.py Normal file

@ -0,0 +1,195 @@
# -*- coding: utf-8 -*-
#
# SQLAlchemy Migrate documentation build configuration file, created by
# sphinx-quickstart on Fri Feb 13 12:58:57 2009.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically).
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
#sys.path.append(os.path.abspath('.'))
# Allow module docs to build without having sqlalchemy-migrate installed:
sys.path.append(os.path.dirname(os.path.abspath('.')))
# General configuration
# ---------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx']
# link to sqlalchemy docs
intersphinx_mapping = {'http://www.sqlalchemy.org/docs/05/': None}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'SQLAlchemy Migrate'
copyright = u'2009, Evan Rosson, Jan Dittberner'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.5.4'
# The full version, including alpha/beta/rc tags.
release = '0.5.4'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# Options for HTML output
# -----------------------
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'default.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_use_modindex = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'SQLAlchemyMigratedoc'
# Options for LaTeX output
# ------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
latex_documents = [
('index', 'SQLAlchemyMigrate.tex', ur'SQLAlchemy Migrate Documentation',
ur'Evan Rosson, Jan Dittberner', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True

42
docs/download.rst Normal file

@ -0,0 +1,42 @@
Download
--------
You can get the latest version of SQLAlchemy Migrate from the
`project's download page`_, the `cheese shop`_, or via easy_install_::
easy_install sqlalchemy-migrate
You should now be able to use the *migrate* command from the command
line::
migrate
This should list all available commands. ``migrate help COMMAND`` will
display more information about each command.
If you'd like to be notified when new versions of SQLAlchemy Migrate
are released, subscribe to `migrate-announce`_.
.. _easy_install: http://peak.telecommunity.com/DevCenter/EasyInstall#installing-easy-install
.. _sqlalchemy: http://www.sqlalchemy.org/download.html
.. _`project's download page`: http://code.google.com/p/sqlalchemy-migrate/downloads/list
.. _`cheese shop`: http://pypi.python.org/pypi/sqlalchemy-migrate
.. _`migrate-announce`: http://groups.google.com/group/migrate-announce
Development
-----------
Migrate's Subversion_ repository is at
http://sqlalchemy-migrate.googlecode.com/svn/
To get the latest trunk::
svn co http://sqlalchemy-migrate.googlecode.com/svn/trunk
Patches should be submitted to the `issue tracker`_.
We use `buildbot`_ to help us run tests on all databases that migrate supports.
.. _subversion: http://subversion.tigris.org/
.. _issue tracker: http://code.google.com/p/sqlalchemy-migrate/issues/list
.. _buildbot: http://buildbot.fubar.si


@ -0,0 +1,26 @@
There are many migrations that don't require a lot of thought - for example, if we add a column to a table definition, we probably want to have an "ALTER TABLE...ADD COLUMN" statement show up in our migration.
The difficulty lies in the automation of changes where the requirements aren't obvious. What happens when you add a unique constraint to a column whose data is not already unique? What happens when we split an existing table in two? Completely automating database migrations is not possible.
That said - we shouldn't have to hunt down and handwrite the ALTER TABLE statements for every new column; this is often just tedious. Many other common migration tasks require little serious thought; such tasks are ripe for automation. Any automation attempted, however, should not interfere with our ability to write scripts by hand if we so choose; our tool should ''not'' be centered around automation.
Automatically generating the code for this sort of task seems like a good solution:
* It does not obstruct us from writing changes by hand; if we don't like the autogenerated code, delete it or don't generate it to begin with
* We can easily add other migration tasks to the autogenerated code
* We can see right away if the code is what we're expecting, or if it's wrong
* If the generated code is wrong, it is easily modified; we can use parts of the generated code, rather than being required to use either 100% or 0%
* Maintenance, usually a problem with auto-generated code, is not an issue here: old database migration scripts are not subject to ongoing maintenance; the correct solution is usually a new migration script.
Implementation is a problem: finding the 'diff' of two databases to determine what columns to add is not trivial. Fortunately, there exist tools that claim to do this for us: [http://sqlfairy.sourceforge.net/ SQL::Translator] and [http://xml2ddl.berlios.de/ XML to DDL] both claim to have this capability.
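As a toy illustration of the 'diff' idea (entirely hypothetical; real differs must also handle type changes, constraints, and renames, which is where the difficulty lies):

```python
# Toy sketch of schema diffing: represent each table as a mapping of
# column name -> type string and compute which columns an ALTER TABLE
# would need to ADD or DROP to bring the database in line with the model.
def diff_columns(db_cols, model_cols):
    to_add = {name: typ for name, typ in model_cols.items() if name not in db_cols}
    to_drop = {name: typ for name, typ in db_cols.items() if name not in model_cols}
    return to_add, to_drop

db = {'id': 'INTEGER', 'name': 'VARCHAR(50)'}
model = {'id': 'INTEGER', 'name': 'VARCHAR(50)', 'email': 'VARCHAR(100)'}
to_add, to_drop = diff_columns(db, model)
# to_add -> {'email': 'VARCHAR(100)'}; to_drop -> {}
```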
...
All that said, this is ''not'' something I'm going to attempt during the Summer of Code.
* I'd have to rely tremendously on a tool I'm not at all familiar with
* Creates a risk of the project itself relying too much on the automation, a Bad Thing
* The project has a deadline and I have plenty else to do already
* Lots of people with more experience than me say this would take more time than it's worth
It's something that might be considered for future work if this project is successful, though.


@ -0,0 +1,147 @@
Important to our system is the API used for making database changes.
=== Raw SQL; .sql script ===
Require users to write raw SQL. Migration scripts are .sql scripts (with database version information in a header comment).
+ Familiar interface for experienced DBAs.
+ No new API to learn[[br]]
SQL is used elsewhere; many people know SQL already. Those who are still learning SQL will gain expertise not in the API of a specific tool, but in a language which will help them elsewhere. (On the other hand, those who are familiar with Python with no desire to learn SQL might find a Python API more intuitive.)
- Difficult to extend when necessary[[br]]
.sql scripts mean that we can't write new functions specific to our migration system when necessary. (We can't always assume that the DBMS supports functions/procedures.)
- Lose the power of Python[[br]]
Some things are possible in Python that aren't in SQL - for example, suppose we want to use some functions from our application in a migration script. (The user might also simply prefer Python.)
- Loss of database independence.[[br]]
There isn't much we can do to specify different actions for a particular DBMS besides copying the .sql file, which is obviously bad form.
=== Raw SQL; Python script ===
Require users to write raw SQL. Migration scripts are python scripts whose API does little beyond specifying what DBMS(es) a particular statement should apply to.
For example,
{{{
run("CREATE TABLE test[...]") # runs for all databases
run("ALTER TABLE test ADD COLUMN varchar2[...]",oracle) # runs for Oracle only
run("ALTER TABLE test ADD COLUMN varchar[...]",postgres|mysql) # runs for Postgres or MySQL only
}}}
We could also allow parts of a single statement to apply to a specific DBMS:
{{{
run("ALTER TABLE test ADD COLUMN "+sql("varchar",postgres|mysql)+sql("varchar2",oracle))
}}}
or, the same thing:
{{{
run("ALTER TABLE test ADD COLUMN "+sql("varchar",postgres|mysql,"varchar2",oracle))
}}}
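One way to sketch such a `run()` helper: model the engine flags as sets so they combine with `|`. All names here are hypothetical, not a committed API; a real implementation would execute against a live connection rather than collect statements in a list.

```python
# Hypothetical sketch: dispatch raw SQL to the current engine only.
# Engine "flags" are modeled as frozensets so they combine with `|`.
postgres = frozenset(["postgres"])
mysql = frozenset(["mysql"])
oracle = frozenset(["oracle"])
ALL = postgres | mysql | oracle

current_engine = "postgres"  # set by the migration tool at run time
executed = []                # stand-in for a real DB-API cursor

def run(statement, engines=ALL):
    """Execute `statement` only if it applies to the current engine."""
    if current_engine in engines:
        executed.append(statement)

run("CREATE TABLE test (id integer)")                            # all databases
run("ALTER TABLE test ADD COLUMN name varchar2(20)", oracle)     # Oracle only
run("ALTER TABLE test ADD COLUMN name varchar(20)", postgres | mysql)
```

Only the statements applicable to the current engine are executed; the Oracle-only statement above is skipped when running against Postgres.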
+ Allows the user to write migration scripts for multiple DBMSes.
- The user must manage the conflicts between different databases themselves. [[br]]
The user can write scripts to deal with conflicts between databases, but they're not really database-independent: the user has to deal with conflicts between databases; our system doesn't help them.
+ Minimal new API to learn. [[br]]
There is a new API to learn, but it is extremely small, depending mostly on SQL DDL. This has the advantages of "no new API" in our first solution.
- More verbose than .sql scripts.
=== Raw SQL; automatic translation between each dialect ===
Same as the above suggestion, but allow the user to specify a 'default' dialect of SQL that we'll interpret and whose quirks we'll deal with.
That is, write everything in SQL and try to automatically resolve the conflicts of different DBMSes.
For example, take the following script:
{{{
engine=postgres
run("""
CREATE TABLE test (
id serial
)
""")
}}}
Running this on a Postgres database, surprisingly enough, would generate exactly what we typed:
{{{
CREATE TABLE test (
id serial
)
}}}
Running it on a MySQL database, however, would generate something like
{{{
CREATE TABLE test (
id integer auto_increment
)
}}}
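Even a toy translator conveys the idea, though a real implementation would need a full SQL parser rather than string substitution; the type mapping below is illustrative only.

```python
# Naive sketch: rewrite Postgres type names into their MySQL
# equivalents with plain string substitution. A real tool would need
# a proper SQL parser to avoid rewriting identifiers by accident.
PG_TO_MYSQL = {"serial": "integer auto_increment"}

def translate_pg_to_mysql(sql):
    for pg_type, mysql_type in PG_TO_MYSQL.items():
        sql = sql.replace(pg_type, mysql_type)
    return sql

ddl = "CREATE TABLE test (\n  id serial\n)"
print(translate_pg_to_mysql(ddl))
```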
+ Database-independence issues of the above SQL solutions are resolved.[[br]]
Ideally, this solution would be as database-independent as a Python API for database changes (discussed next), but with all the advantages of writing SQL (no new API).
- Difficult implementation[[br]]
Obviously, this is not easy to implement - there is a great deal of parsing logic and a great many things that need to be accounted for. In addition, this is a complex operation; any implementation will likely have errors somewhere.
It seems tools for this already exist; an effective tool would trivialize this implementation. I experimented a bit with [http://sqlfairy.sourceforge.net/ SQL::Translator] and [http://xml2ddl.berlios.de/ XML to DDL]; however, I had difficulties with both.
- Database-specific features ensure that this cannot possibly be "complete". [[br]]
For example, Postgres has an 'interval' type to represent times and (AFAIK) MySQL does not.
=== Database-independent Python API ===
Create a Python API through which we may manage database changes. Scripts would be based on the existing SQLAlchemy API when possible.
Scripts would look something like
{{{
# Create a table
test_table = Table('test',
    Column('id', Integer, notNull=True)
)
test_table.create()
# Add a column to an existing table
test_table.add_column('id',Integer,notNull=True)
# Or, use a column object instead of its parameters
test_table.add_column(Column('id',Integer,notNull=True))
# Or, don't use a table object at all
add_column('test','id',Integer,notNull=True)
}}}
This would use engines, similar to SQLAlchemy's, to deal with database-independence issues.
We would, of course, allow users to write raw SQL if they wish. This would be done in the manner outlined in the second solution above; this allows us to write our entire script in SQL and ignore the Python API if we wish, or write parts of our solution in SQL to deal with specific databases.
+ Deals with database-independence thoroughly and with minimal user effort.[[br]]
SQLAlchemy-style engines would be used for this; issues of different DBMS syntax are resolved with minimal user effort. (Database-specific features would still need handwritten SQL.)
+ Familiar interface for SQLAlchemy users.[[br]]
In addition, we can often cut-and-paste column definitions from SQLAlchemy tables, easing one particular task.
- Requires that the user learn a new API. [[br]]
SQL already exists; people know it. SQL newbies might be more comfortable with a Python interface, but folks who already know SQL must learn a whole new API. (On the other hand, the user *can* write things in SQL if they wish, learning only the most minimal of APIs, if they are willing to resolve issues of database-independence themselves.)
- More difficult to implement than pure SQL solutions. [[br]]
SQL already exists/has been tested. A new Python API does not/has not, and much of the work seems to consist of little more than reinventing the wheel.
- Script behavior might change under different versions of the project.[[br]]
...where .sql scripts behave the same regardless of the project's version.
=== Generate .sql scripts from a Python API ===
Attempts to take the best of the first and last solutions. An API similar to the previous solution would be used, but rather than immediately being applied to the database, .sql scripts are generated for each type of database we're interested in. These .sql scripts are what's actually applied to the database.
This would essentially allow users to skip the Python script step entirely if they wished, and write migration scripts in SQL instead, as in solution 1.
+ Database-independence is an option, when needed.
+ A familiar interface/an interface that can interact with other tools is an option, when needed.
+ Easy to inspect the SQL generated by a script, to ensure it's what we're expecting.
+ Migration scripts won't change behavior across different versions of the project. [[br]]
Once a Python script is translated to a .sql script, its behavior is consistent across different versions of the project, unlike a pure Python solution.
- Multiple ways to do a single task: not Pythonic.[[br]]
I never really liked that word - "Pythonic" - but it does apply here. Multiple ways to do a single task has the potential to cause confusion, especially in a large project if many people do the same task different ways. We have to support both ways of doing things, as well.
----
'''Conclusion''': The last solution, generating .sql scripts from a Python API, seems to be best.
The first solution (.sql scripts) suffers from a lack of database-independence, but is familiar to experienced database developers, useful with other tools, and shows exactly what will be done to the database. The Python API solution has no trouble with database-independence, but suffers from other problems that the .sql solution doesn't. The last solution resolves both reasonably well. Multiple ways to do a single task might be called "not Pythonic", but IMO, the trade-off is worth this cost.
Automatic translation between different dialects of SQL might have potential for use in a solution, but existing tools for this aren't reliable enough, as far as I can tell.


@ -0,0 +1,56 @@
An important aspect of this project is database versioning. For migration scripts to be most useful, we need to know what version the database is: that is, has a particular migration script already been run?
An option not discussed below is "no versioning"; that is, simply apply any script we're given, and rely on the user to ensure it's valid. This is entirely too error-prone to seriously consider, and takes a lot of the usefulness out of the proposed tool.
=== Database-wide version numbers ===
A single integer version number would specify the version of each database. This is stored in the database in a table, let's call it "schema"; each migration script is associated with a certain database version number.
+ Simple implementation[[br]]
Of the 3 solutions presented here, this one is by far the simplest.
+ Past success[[br]]
Used in [http://www.rubyonrails.org/ Ruby on Rails' migrations].
~ Can detect corrupt schemas, but requires some extra work and a *complete* set of migrations.[[br]]
If we have a set of database migration scripts that build the database from the ground up, we can apply them in sequence to a 'dummy' database, dump a diff of the real and dummy schemas, and expect a valid schema to match the dummy schema.
- Requires changes to the database schema.[[br]]
Not a tremendous change - a single table with a single column and a single row - but a change nonetheless.
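As an illustration, the whole mechanism fits in a few lines of DB-API code; the table name "schema" and the helper names are placeholders, not a final design.

```python
import sqlite3

# Sketch of a database-wide version table: a single table with a
# single integer column holding a single row, as described above.
# Quoting "schema" avoids clashes with reserved words on some DBMSes.
db = sqlite3.connect(":memory:")
db.execute('CREATE TABLE "schema" (version integer NOT NULL)')
db.execute('INSERT INTO "schema" (version) VALUES (0)')

def get_version(db):
    return db.execute('SELECT version FROM "schema"').fetchone()[0]

def set_version(db, version):
    db.execute('UPDATE "schema" SET version = ?', (version,))

set_version(db, get_version(db) + 1)   # a migration script just ran
```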
=== Table/object-specific version numbers ===
Each database "object" - usually tables, though we might also deal with other database objects, such as stored procedures or Postgres' sequences - would have a version associated with it, initially 1. These versions are stored in a table, let's call it "schema". This table has two columns: the name of the database object and its current version number.
+ Allows us to write migration scripts for a subset of the database.[[br]]
If we have multiple people working on a very large database, we may want to write migration scripts for a section of the database without stepping on another person's work. This allows unrelated changes to be versioned and applied independently.
- Requires changes to the database schema.[[br]]
Similar to the database-wide version number; the contents of the new table are more complex, but still shouldn't conflict with anything.
- More difficult to implement than a database-wide version number.
- Determining the version of database-specific objects (ie. stored procedures, functions) is difficult.
- Ultimately gains nothing over the previous solution.[[br]]
The intent here was to allow multiple people to write scripts for a single database, but if database-wide version numbers aren't assigned until the script is placed in the repository, we could already do this.
=== Version determined via introspection ===
Each script has a schema associated with it, rather than a version number. The database schema is loaded, analyzed, and compared to the schema expected by the script.
+ No modifications to the database are necessary for this versioning system.[[br]]
The primary advantage here is that no changes to the database are required.
- Most difficult solution to implement, by far.[[br]]
Comparing the state of every schema object in the database is much more complex than simply comparing a version number, especially since we need to do it in a database-independent way (ie. we can't just diff the dump of each schema). SQLAlchemy's reflection would certainly be very helpful, but this remains the most complex solution.
+ "Automatically" detects corrupt schemas.[[br]]
A corrupt schema won't match any migration script.
- Difficult to deal with corrupt schemas.[[br]]
When version numbers are stored in the database, you have some idea of where an error occurred. Without this, we have no idea what version the database was in before corruption.
- Potential ambiguity: what if two database migration scripts expect the same schema?
----
'''Conclusion''': database-wide version numbers are the best way to go.


@ -0,0 +1,29 @@
This is very much a draft/brainstorm right now. It should be made prettier and thought about in more detail later, but it at least gives some idea of the direction we're headed right now.
----
* Two distinct tools; should not be coupled (can work independently):
* Versioning tool
* Command line tool; let's call it "samigrate"
* Organizes old migration scripts into repositories
* Runs groups of migration scripts on a database, updating it to a specified version/latest version
* Helps run various tests
* usage
* "samigrate create PATH": Create project migration-script repository
* We shouldn't have to enter the path for every other command. Use a hidden file
* (This means we can't move the repository after it's created. Oh well)
* "samigrate add SCRIPT [VERSION]": Add script to this project's repository; latest version
* If a .sql script: how to determine engine, operation (up/down)? Options:
* specify at the command line: "samigrate add SCRIPT UP_OR_DOWN ENGINE"
* naming convention: SCRIPT is named something like NAME.postgres.up.sql
* "samigrate upgrade CONNECTION_STRING [VERSION] [SCRIPT...]": connect to the specified database and upgrade (or downgrade) it to the specified version (default latest)
* If SCRIPT... specified: act like these scripts are in the repository (useful for testing?)
* "samigrate dump CONNECTION_STRING [VERSION] [SCRIPT...]": like upgrade, but sends all sql to stdout instead of the db
* (Later: some more commands, to be used for script testing tools)
* Alchemy API extensions for altering schema
* Operations here are DB-independent
* Each database modification is a script that may use this API
* Can handwrite SQL for all databases or a single database
* upgrade()/downgrade() functions: need only one file for both operations
* sql scripts require either two files (*.up.sql, *.down.sql) or forgoing downgrade support
* usage
* "python NAME.py ENGINE up": upgrade sql > stdout
* "python NAME.py ENGINE down": downgrade sql > stdout


@ -0,0 +1,50 @@
== Goals ==
=== DBMS-independent schema changes ===
Many projects need to run on more than one DBMS. Similar changes need to be applied to both types of databases upon a schema change. The usual solution to database changes - .sql scripts with ALTER statements - runs into problems since different DBMSes have different dialects of SQL; we end up having to create a different script for each DBMS. This project will simplify this by providing an API, similar to the table definition API that already exists in SQLAlchemy, to alter a table independent of the DBMS being used, where possible.
This project will support all DBMSes currently supported by SQLAlchemy: SQLite, Postgres, MySQL, Oracle, and MS SQL. Adding support for more should be no more difficult than it is in SQLAlchemy.
Many are already used to writing .sql scripts for database changes, aren't interested in learning a new API, and have projects where DBMS-independence isn't an issue. Writing SQL statements as part of a (Python) change script must be an option, of course. Writing change scripts as .sql scripts, eliminating Python scripts from the picture entirely, would be nice too, although this is a lower-priority goal.
=== Database versioning and change script organization ===
Once we've accumulated a set of change scripts, it's important to know which ones have been applied/need to be applied to a particular database: suppose we need to upgrade a database that's extremely out-of-date; figuring out the scripts to run by hand is tedious. Applying changes in the wrong order, or applying changes when they shouldn't be applied, is bad; attempting to manage all of this by hand inevitably leads to an accident. This project will be able to detect the version of a particular database and apply the scripts required to bring it up to the latest version, or up to any specified version number (given all change scripts required to reach that version number).
Sometimes we need to be able to revert a schema to an older version. There's no automatic way to do this without rebuilding the database from scratch, so our project will allow one to write scripts to downgrade the database as well as upgrade it. If such scripts have been written, we should be able to apply them in the correct order, just like upgrading.
Large projects inevitably accumulate a large number of database change scripts; it's important that we have a place to keep them. Once a script has been written, this project will deal with organizing it among existing change scripts, and the user will never have to look at it again.
=== Change testing ===
It's important to test one's database changes before applying them to a production database (unless you happen to like disasters). Much testing is up to the user and can't be automated, but there's a few places we can help ensure at least a minimal level of schema integrity. A few examples are below; we could add more later.
Given an obsolete schema, a database change script, and an up-to-date schema known to be correct, this project will be able to ensure that applying the
change script to the obsolete schema will result in an up-to-date schema - all without actually changing the obsolete database. Folks who have SQLAlchemy create their database using table.create() might find this useful; this is also useful for ensuring database downgrade scripts are correct.
Given a schema of a known version and a complete set of change scripts up to that version, this project will be able to detect if the schema matches its version. If a schema has gone through changes not present in migration scripts, this test will fail; if applying all scripts in sequence up to the specified version creates an identical schema, this test will succeed. Identifying that a schema is corrupt is sufficient; it would be nice if we could give a clue as to what's wrong, but this is lower priority. (Implementation: we'll probably show a diff of two schema dumps; this should be enough to tell the user what's gone wrong.)
== Non-Goals ==
ie. things we will '''not''' try to do (at least, during the Summer of Code)
=== Automatic generation of schema changes ===
For example, one might define a table:
{{{
CREATE TABLE person (
id integer,
name varchar(80)
);
}}}
Later, we might add additional columns to the definition:
{{{
CREATE TABLE person (
id integer,
name varchar(80),
profile text
);
}}}
It might be nice if a tool could look at both table definitions and spit out a change script; something like
{{{
ALTER TABLE person ADD COLUMN profile text;
}}}
This is a difficult problem for a number of reasons. I have no intention of tackling this problem as part of the Summer of Code. This project aims to give you a better way to write that ALTER statement and make sure it's applied correctly, not to write it for you.
(Using an [http://sqlfairy.sourceforge.net/ existing] [http://xml2ddl.berlios.de/ tool] to add this sort of thing later might be worth looking into, but it will not be done during the Summer of Code. Among other reasons, methinks it's best to start with a system that isn't dependent on this sort of automation.)


@ -0,0 +1,73 @@
Evan Rosson
Project
---
SQLAlchemy Schema Migration
Synopsis
---
SQLAlchemy is an excellent object-relational database mapper for Python projects. Currently, it does a fine job of creating a database from scratch, but provides no tool to assist the user in modifying an existing database. This project aims to provide such a tool.
Benefits
---
Application requirements change; a database schema must be able to change with them. It's possible to write SQL scripts that make the proper modifications without any special tools, but this setup quickly becomes difficult to manage - when we need to apply multiple updates to a database, organize old migration scripts, or have a single application support more than one DBMS, a tool to support database changes becomes necessary. This tool will aid in organizing migration scripts, applying multiple updates, removing updates to revert to an old version, and creating DBMS-independent migration scripts.
Writing one's schema migration scripts by hand often results in problems when dealing with multiple obsolete database instances - we must figure out what scripts are necessary to bring the database up-to-date. Database versioning tools are helpful for this task; this project will track the version of a particular database to determine what scripts are necessary to update an old schema.
Description
---
The migration system used by Ruby on Rails has had much success, and for good reason - the system is easy to understand, generally database-independent, as powerful as the application itself, and capable of dealing nicely with a schema with multiple instances of different versions. A migration system similar to that of Rails is a fine place to begin this project.
Each instance of the schema will have a version associated with it; this version is tracked using a single table with a single row and a single integer column. A set of changes to the database schema will increment the schema's version number; each migration script will be associated with a schema version.
A migration script will be written by the user, and consist of two functions:
- upgrade(): brings an old database up-to-date, from version n-1 to version n
- downgrade(): reverts an up-to-date database to the previous schema; an 'undo' for upgrade()
When applying multiple updates to an old schema instance, migration scripts are applied in sequence: when updating a schema to version n from version n-2, two migration scripts are run; n-2 => n-1 => n.
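The sequencing logic itself is straightforward; a sketch with stubbed-in scripts follows (a real tool would load the upgrade()/downgrade() pairs from the repository and update the version table after each step).

```python
# Sketch of applying migration scripts in sequence, n-2 => n-1 => n.
# `scripts` maps a version number to its (upgrade, downgrade) pair;
# real scripts would be loaded from the repository instead.
applied = []

scripts = {
    1: (lambda: applied.append("up 1"), lambda: applied.append("down 1")),
    2: (lambda: applied.append("up 2"), lambda: applied.append("down 2")),
    3: (lambda: applied.append("up 3"), lambda: applied.append("down 3")),
}

def migrate(current, target):
    """Run upgrade()/downgrade() functions in order to reach `target`."""
    while current < target:
        current += 1
        scripts[current][0]()          # upgrade to version `current`
    while current > target:
        scripts[current][1]()          # downgrade from version `current`
        current -= 1
    return current

migrate(1, 3)   # runs the upgrades for versions 2 and 3, in order
```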
A command-line tool will create empty migration scripts (empty upgrade()/downgrade() functions), display the SQL that will be generated by a migration script for a particular DBMS, and apply migration scripts to a specified database.
This project will implement the command-line tool that manages the above functionality. This project will also extend SQLAlchemy with the functions necessary to construct DBMS-independent migration scripts: in particular, column creation/deletion/alteration and the ability to rename existing tables/indexes/columns will be implemented. We'll also need a way to write raw SQL for a specific DBMS/set of DBMSes for situations where our abstraction doesn't fit a script's requirements. The creation/deletion of existing tables and indexes are operations already provided by SQLAlchemy.
On DBMS support - I intend to support MySQL, Postgres, SQLite, Oracle, and MS-SQL by the end of the project. (Update: I previously omitted support for Oracle and MS-SQL because I don't have access to the full version of each; I wasn't aware Oracle Lite and MS-SQL Express were available for free.) The system will be abstracted in such a way that adding support for other databases will not be any more difficult than adding support for them in SQLAlchemy.
Schedule
---
This project will be my primary activity this summer. Unfortunately, I am in school when things begin, until June 9, but I can still begin the project during that period. I have no other commitments this summer - I can easily make up any lost time.
I will be spending my spare time this summer further developing my online game (discussed below), but this has no deadline and will not interfere with the project proposed here.
I'll begin by familiarizing myself with the internals of SQLAlchemy and creating a detailed plan for the project. This plan will be reviewed by the current SQLAlchemy developers and other potential users, and will be modified based on their feedback. This will be completed no later than May 30, one week after SoC begins.
Development will follow, in this order:
- The database versioning system. This will manage the creation and application of (initially empty) migration scripts. Complete by June 16.
- Access the database; read/update the schema's version number
- Apply a single (empty) script to the database
- Apply a set of (empty) scripts to upgrade/downgrade the database to a specified version; examine all migration scripts and apply all to update the database to the latest version available
- An API for table/column alterations, to make the above system useful. Complete by August 11.
- Implement an empty API - does nothing at this point, but written in such a way that syntax for each supported DBMS may be added as a module. Completed June 26-30, the mid-project review deadline.
- Implement/test the above API for a single DBMS (probably Postgres, as I'm familiar with it). Users should be able to test the 'complete' application with this DBMS.
- Implement the database modification API for other supported databases
All development will have unit tests written where appropriate. Unit testing the SQL generated for each DBMS will be particularly important.
The project will finish with various wrap-up activities, documentation, and some final tests, to be completed by the project deadline.
About me
---
I am a 3rd year BS Computer Science student; Cal Poly, San Luis Obispo, California, USA; currently applying for a Master's degree in CS from the same school. I've taken several classes dealing with databases, though much of what I know on the subject is self-taught. Outside of class, I've developed a browser-based online game, Zeal, at http://zealgame.com ; it has been running for well over a year and gone through many changes. It has taught me firsthand the importance of using appropriate tools and designing one's application well early on (largely through the pain that follows when you don't); I've learned a great many other things from the experience as well.
One recurring problem I've had with this project is dealing with changes to the database schema. I've thought much about how I'd like to see this solved, but hadn't done much to implement it.
I'm now working on another project that will be making use of SQLAlchemy: it fits many of my project's requirements, but lacks a migration tool that will be much needed. This presents an opportunity for me to make my first contribution to open source - I've long been interested in open source software and use it regularly, but haven't contributed to any until now. I'm particularly interested in the application of this tool with the TurboGears framework, as this project was inspired by a suggestion on the TurboGears mailing list and I'm working on a project using TurboGears - but there is no reason to couple an SQLAlchemy enhancement with TurboGears; this project may be used by anyone who uses SQLAlchemy.
Further information:
http://evan.zealgame.com/soc


@ -0,0 +1,56 @@
This plan has several problems and has been modified; new plan is discussed in wiki:RepositoryFormat2
----
One problem with [http://www.rubyonrails.org/ Ruby on Rails'] (very good) schema migration system is the behavior of scripts that depend on outside sources; ie. the application. If those change, there's no guarantee that such scripts will behave as they did before, and you'll get strange results.
For example, suppose one defines a SQLAlchemy table:
{{{
users = Table('users', metadata,
Column('user_id', Integer, primary_key = True),
Column('user_name', String(16), nullable = False),
Column('password', String(20), nullable = False)
)
}}}
and creates it in a change script:
{{{
from project import table
def upgrade():
table.users.create()
}}}
Suppose we later add a column to this table. We write an appropriate change script:
{{{
from project import table
def upgrade():
# This syntax isn't set in stone yet
table.users.add_column('email_address', String(60), key='email')
}}}
...and change our application's table definition:
{{{
users = Table('users', metadata,
Column('user_id', Integer, primary_key = True),
Column('user_name', String(16), nullable = False),
Column('password', String(20), nullable = False),
Column('email_address', String(60), key='email') #new column
)
}}}
Modifying the table definition changes how our first script behaves - it will create the table with the new column. This might work if we only apply change scripts to a few databases which are always kept up to date (or very close), but we'll run into errors eventually if our migration scripts' behavior isn't consistent.
----
One solution is to generate .sql files from a Python change script at the time it's added to a repository. The sql generated by the script for each database is set in stone at this point; changes to outside files won't affect it.
This limits what change scripts are capable of - we can't write dynamic SQL; ie., we can't do something like this:
{{{
for row in db.execute("select id from table1"):
db.execute("insert into table2 (table1_id, value) values (:id,42)",**row)
}}}
But SQL is usually powerful enough to where the above is rarely necessary in a migration script:
{{{
db.execute("insert into table2 select id,42 from table1")
}}}
This is a reasonable solution. The limitations aren't serious (everything possible in a traditional .sql script is still possible), and change scripts are much less prone to error.


@ -0,0 +1,28 @@
My original plan for Migrate's RepositoryFormat had several problems:
* Bind parameters: We needed to bind parameters into statements to get something suitable for an .sql file. For some types of parameters, there's no clean way to do this without writing an entire parser - too great a cost for this project. There's a reason why SQLAlchemy's logs display the statement and its parameters separately: the binding is done at a lower level than we have access to.
* Failure: Discussed in #17, the old format had no easy way to find the Python statements associated with an SQL error. This makes it difficult to debug scripts.
A new format will be used to solve this problem instead.
Similar to our previous solution, where one .sql file was created per version/operation/DBMS (version_1.upgrade.postgres.sql, for example), one file will be created per version/operation/DBMS here.
These files will contain the following information:
* The dialect used to perform the logging. Particularly,
* The paramstyle expected by the dbapi
* The DBMS this log applies to
* Information on each logged SQL statement, each of which contains:
* The text of the statement
* Parameters to be bound to the statement
* A Python stack trace at the point the statement was logged - this allows us to tell what Python statements are associated with an SQL statement when there's an error
These files will be created by pickling a Python object with the above information.
Such files may be executed by loading the log and having SQLAlchemy execute them as it might have before.
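The log format might be sketched as a pickled structure of records like the following; the field names are illustrative, since the real format was still being designed.

```python
import pickle
import traceback

# Illustrative sketch of one logged-statement record: statement text,
# bind parameters, and the stack trace captured at logging time.
def log_statement(log, text, params):
    log.append({
        "statement": text,
        "parameters": params,
        "stack": traceback.format_stack(),  # ties SQL back to Python source
    })

log = []
log_statement(log, "INSERT INTO test (id) VALUES (:id)", {"id": 1})

# The whole log, plus dialect/paramstyle metadata, is pickled to a file;
# executing the log means replaying statements with their parameters.
data = pickle.dumps({"dialect": "postgres", "paramstyle": "named",
                     "statements": log})
restored = pickle.loads(data)
```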
Good:
* Since the statements and bind parameters are stored separately and executed as SQLAlchemy would normally execute them, one problem discussed above is eliminated.
* Storing the stack trace at the point each statement was logged allows us to identify what Python statements are responsible for an SQL error. This makes it much easier for users to debug their scripts.
Bad:
* It's less trivial to commit .sql scripts to our repository, since they're no longer used internally. This isn't a huge loss, and .sql commits can still be implemented later if need be.
* There's some danger of script behavior changing if changes are made to the dbapi the script is associated with. The primary place where problems would occur is during parameter binding, but the chance of this changing significantly isn't large. The danger of changes in behavior due to changes in the user's application is not affected.

docs/index.rst Normal file

@ -0,0 +1,120 @@
:mod:`migrate` - SQLAlchemy Migrate (schema change management)
==============================================================
.. module:: migrate
.. moduleauthor:: Evan Rosson
:Author: Evan Rosson
:Maintainer: Domen Kozar <domenNO@SPAMdev.si>
:Source code: http://code.google.com/p/sqlalchemy-migrate/
:Issues: http://code.google.com/p/sqlalchemy-migrate/issues/list
:Version: |release|
.. topic:: Overview
Inspired by Ruby on Rails' migrations, SQLAlchemy Migrate provides a
way to deal with database schema changes in SQLAlchemy_ projects.
Migrate was started as part of `Google's Summer of Code`_ by Evan
Rosson, mentored by Jonathan LaCour.
The project was taken over by a small group of volunteers when Evan
had no free time for the project. It is now hosted as a `Google Code
project`_. During the hosting change the project was renamed to
SQLAlchemy Migrate.
Currently, sqlalchemy-migrate supports Python versions from 2.4 to 2.6.
Only SQLAlchemy >= 0.5 is supported.
.. warning::
Version **0.5.5** breaks backward compatibility; please read the :ref:`changelog <backwards-055>` for more info.
Download and Development
------------------------
.. toctree::
download
.. _dialect-support:
Dialect support
---------------
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
| Operation / Dialect | :ref:`sqlite <sqlite-d>` | :ref:`postgres <postgres-d>` | :ref:`mysql <mysql-d>` | :ref:`oracle <oracle-d>` | :ref:`firebird <firebird-d>` | mssql |
| | | | | | | |
+=========================================================+==========================+==============================+========================+===========================+===============================+=======+
| :ref:`ALTER TABLE RENAME TABLE <table-rename>` | yes | yes | yes | yes | no | |
| | | | | | | |
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
| :ref:`ALTER TABLE RENAME COLUMN <column-alter>` | yes | yes | yes | yes | yes | |
| | (workaround) [#1]_ | | | | | |
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
| :ref:`ALTER TABLE ADD COLUMN <column-create>` | yes | yes | yes | yes | yes | |
| | (with limitations) [#2]_ | | | | | |
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
| :ref:`ALTER TABLE DROP COLUMN <column-drop>` | yes | yes | yes | yes | yes | |
| | (workaround) [#1]_ | | | | | |
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
| :ref:`ALTER TABLE ALTER COLUMN <column-alter>` | yes | yes | yes | yes | yes [#4]_ | |
| | (workaround) [#1]_ | | | (with limitations) [#3]_ | | |
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
| :ref:`ALTER TABLE ADD CONSTRAINT <constraint-tutorial>` | no | yes | yes | yes | yes | |
| | | | | | | |
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
| :ref:`ALTER TABLE DROP CONSTRAINT <constraint-tutorial>`| no | yes | yes | yes | yes | |
| | | | | | | |
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
| :ref:`RENAME INDEX <index-rename>` | no | yes | no | yes | yes | |
| | | | | | | |
+---------------------------------------------------------+--------------------------+------------------------------+------------------------+---------------------------+-------------------------------+-------+
.. [#1] Table is renamed to temporary table, new table is created followed by INSERT statements.
.. [#2] Visit http://www.sqlite.org/lang_altertable.html for more information.
.. [#3] You cannot change the datatype or rename a column if the table contains NOT NULL data; see http://blogs.x2line.com/al/archive/2005/08/30/1231.aspx for more information.
.. [#4] Changing nullable is not supported.
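The SQLite workaround described in [#1]_ can be sketched with the standard :mod:`sqlite3` module. This is an illustration of the rename-and-copy technique only (table and column names are invented for the example); here we emulate ``ALTER TABLE account DROP COLUMN login``:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE account (id INTEGER PRIMARY KEY, login TEXT)')
cur.execute("INSERT INTO account (login) VALUES ('alice')")

# Emulate ALTER TABLE account DROP COLUMN login:
# rename the old table out of the way, create the new definition,
# copy the surviving columns over, then drop the temporary table.
cur.execute('ALTER TABLE account RENAME TO migration_tmp')
cur.execute('CREATE TABLE account (id INTEGER PRIMARY KEY)')
cur.execute('INSERT INTO account (id) SELECT id FROM migration_tmp')
cur.execute('DROP TABLE migration_tmp')
conn.commit()

cols = [row[1] for row in cur.execute('PRAGMA table_info(account)')]
rows = cur.execute('SELECT id FROM account').fetchall()
```

Existing rows survive the rebuild, which is why the workaround is listed as "yes (workaround)" rather than "no" in the table above.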
Documentation
-------------
SQLAlchemy Migrate is split into two parts, database schema versioning and
database changeset management. These are represented by two Python
packages, :mod:`migrate.versioning` and :mod:`migrate.changeset`. The
versioning API is also available as the :ref:`migrate <command-line-usage>` command.
.. toctree::
versioning
changeset
tools
.. _`google's summer of code`: http://code.google.com/soc
.. _`Google Code project`: http://code.google.com/p/sqlalchemy-migrate
.. _sqlalchemy: http://www.sqlalchemy.org
API Documentation
------------------
.. toctree::
api
Changelog
---------
.. toctree::
changelog
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

288
docs/theme/almodovar.css vendored Normal file

@ -0,0 +1,288 @@
/*
* Original theme modified by Evan Rosson
* http://erosson.com/migrate
* ---
*
* Theme Name: Almodovar
* Theme URI: http://blog.ratterobert.com/archiv/2005/03/09/almodovar/
* Description: The theme is originally based on Michael Heilemann's <a href="http://binarybonsai.com/kubrick/">Kubrick</a> template and was inspired by the odd gimmick from various other very good templates.
* Version: 0.7
* Author: ratte / robert
* Author URI: http://blog.ratterobert.com/
* */
/* Begin Typography & Colors */
body {
font-size: 75%;
font-family: 'Lucida Grande', 'Trebuchet MS', 'Bitstream Vera Sans', Sans-Serif;
background-color: #CCF;
color: #333;
text-align: center;
}
#page {
background-color: #fff;
border: 1px solid #88f;
text-align: left;
}
#content {
font-size: 1.2em;
margin: 1em;
}
#content p,
#content ul,
#content blockquote {
line-height: 1.6em;
}
#footer {
border-top: 1px solid #006;
margin-top: 2em;
}
small {
font-family: 'Trebuchet MS', Arial, Helvetica, Sans-Serif;
font-size: 0.9em;
line-height: 1.5em;
}
h1, h2, h3 {
font-family: 'Trebuchet MS', 'Lucida Grande', Verdana, Arial, Sans-Serif;
font-weight: bold;
margin-top: .7em;
margin-bottom: .7em;
}
h1 {
font-size: 2.5em;
}
h2 {
font-size: 2em;
}
h3 {
font-size: 1.5em;
}
h1, h2, h3 {
color: #33a;
}
h1 a, h2 a, h3 a {
color: #33a;
}
h1, h1 a, h1 a:hover, h1 a:visited,
h2, h2 a, h2 a:hover, h2 a:visited,
h3, h3 a, h3 a:hover, h3 a:visited,
cite {
text-decoration: none;
}
#content p a:visited {
color: #004099;
/*font-weight: normal;*/
}
small, blockquote, strike {
color: #33a;
}
#links ul ul li, #links li {
list-style: none;
}
code {
font: 1.1em 'Courier', 'Courier New', Fixed;
}
acronym, abbr, span.caps {
font-size: 0.9em;
letter-spacing: .07em;
}
a {
color: #0050FF;
/*text-decoration: none;*/
text-decoration:underline;
/*font-weight: bold;*/
}
a:hover {
color: #0080FF;
}
/* Special case doc-title */
h1.doc-title {
text-transform: lowercase;
font-size: 4em;
margin: 0;
}
h1.doc-title a {
display: block;
padding-left: 0.8em;
padding-bottom: .5em;
padding-top: .5em;
margin: 0;
border-bottom: 1px #fff solid;
}
h1.doc-title,
h1.doc-title a,
h1.doc-title a:visited,
h1.doc-title a:hover {
text-decoration: none;
color: #0050FF;
}
/* End Typography & Colors */
/* Begin Structure */
body {
margin: 0;
padding: 0;
}
#page {
background-color: white;
margin: 0 auto 0 9em;
padding: 0;
max-width: 60em;
border: 1px solid #555596;
}
/*
* html #page {
*   width: 60em;
* }
*
* #content {
*   margin: 0 1em 0 3em;
* }
*
* #content h1 {
*   margin-left: 0;
* }
*
* #footer {
*   padding: 0 0 0 1px;
*   margin: 0;
*   margin-top: 1.5em;
*   clear: both;
* }
*
* #footer p {
*   margin: 1em;
* }
*
*/
/* End Structure */
/* Begin Headers */
.description {
text-align: center;
}
/* End Headers */
/* Begin Form Elements */
#searchform {
margin: 1em auto;
text-align: right;
}
#searchform #s {
width: 100px;
padding: 2px;
}
#searchsubmit {
padding: 1px;
}
/* End Form Elements */
/* Begin Various Tags & Classes */
acronym, abbr, span.caps {
cursor: help;
}
acronym, abbr {
border-bottom: 1px dashed #999;
}
blockquote {
margin: 15px 30px 0 10px;
padding-left: 20px;
border-left: 5px solid #CCC;
}
blockquote cite {
margin: 5px 0 0;
display: block;
}
hr {
display: none;
}
a img {
border: none;
}
.navigation {
display: block;
text-align: center;
margin-top: 10px;
margin-bottom: 60px;
}
/* End Various Tags & Classes*/
span a { color: #CCC; }
span a:hover { color: #0050FF; }
#navcontainer {
margin-top: 0px;
padding-top: 0px;
width: 100%;
background-color: #AAF;
text-align: right;
}
#navlist ul {
margin-left: 0;
margin-right: 5px;
padding-left: 0;
white-space: nowrap;
}
#navlist li {
display: inline;
list-style-type: none;
}
#navlist a {
padding: 3px 10px;
color: #fff;
background-color: #339;
text-decoration: none;
border: 1px solid #44F;
font-weight: normal;
}
#navlist a:hover {
color: #000;
background-color: #FFF;
text-decoration: none;
font-weight: normal;
}
#navlist a:active, #navlist a.selected {
padding: 3px 10px;
color: #000;
background-color: #EEF;
text-decoration: none;
border: 1px solid #CCF;
font-weight: normal;
}

123
docs/theme/layout.css vendored Normal file

@ -0,0 +1,123 @@
@import url("pudge.css");
@import url("almodovar.css");
/* Basic Style
----------------------------------- */
h1.pudge-member-page-heading {
font-size: 300%;
}
h4.pudge-member-page-subheading {
font-size: 130%;
font-style: italic;
margin-top: -2.0em;
margin-left: 2em;
margin-bottom: .3em;
color: #0050CC;
}
p.pudge-member-blurb {
font-style: italic;
font-weight: bold;
font-size: 120%;
margin-top: 0.2em;
color: #999;
}
p.pudge-member-parent-link {
margin-top: 0;
}
/*div.pudge-module-doc {
max-width: 45em;
}*/
div.pudge-section {
margin-left: 2em;
max-width: 45em;
}
/* Section Navigation
----------------------------------- */
div#pudge-section-nav
{
margin: 1em 0 1.5em 0;
padding: 0;
height: 20px;
}
div#pudge-section-nav ul {
border: 0;
margin: 0;
padding: 0;
list-style-type: none;
text-align: center;
border-right: 1px solid #aaa;
}
div#pudge-section-nav ul li
{
display: block;
float: left;
text-align: center;
padding: 0;
margin: 0;
}
div#pudge-section-nav ul li .pudge-section-link,
div#pudge-section-nav ul li .pudge-missing-section-link
{
background: #aaa;
width: 9em;
height: 1.8em;
border: 1px solid #bbb;
padding: 0;
margin: 0 0 10px 0;
color: #ddd;
text-decoration: none;
display: block;
text-align: center;
font: 11px/20px "Verdana", "Lucida Grande";
cursor: hand;
text-transform: lowercase;
}
div#pudge-section-nav ul li a:hover {
color: #000;
background: #fff;
}
div#pudge-section-nav ul li .pudge-section-link {
background: #888;
color: #eee;
border: 1px solid #bbb;
}
/* Module Lists
----------------------------------- */
dl.pudge-module-list dt {
font-style: normal;
font-size: 110%;
}
dl.pudge-module-list dd {
color: #555;
}
/* Misc Overrides */
.rst-doc p.topic-title a {
color: #777;
}
.rst-doc ul.auto-toc a,
.rst-doc div.contents a {
color: #333;
}
pre { background: #eee; }
.rst-doc dl dt {
color: #444;
margin-top: 1em;
font-weight: bold;
}
.rst-doc dl dd {
margin-top: .2em;
}
.rst-doc hr {
display: block;
margin: 2em 0;
}

90
docs/theme/layout.html vendored Normal file

@ -0,0 +1,90 @@
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<?python
import pudge
def initialize(t):
    g = t.generator
    if not hasattr(t, 'title'):
        t.title = 'Untitled'
    t.doc_title = g.index_document['title']
    t.home_url = g.organization_url or g.blog_url or g.trac_url
    t.home_title = g.organization
?>
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:py="http://purl.org/kid/ns#"
py:def="layout">
<head>
<title>${title}</title>
<link rel="stylesheet" type="text/css" href="layout.css"/>
<link py:if="generator.syndication_url"
rel="alternate"
type="application/rss+xml"
title="RSS 2.0" href="${generator.syndication_url}"/>
</head>
<body>
<div id="page">
<h1 class="doc-title"><a href="${home_url}">${home_title}</a></h1>
<div id="navcontainer">
<ul id="navlist">
<li class="pagenav">
<ul>
<li class="page_item">
<a href="index.html"
class="${'index.html'== destfile and 'selected' or ''}"
title="Project Home / Index">${doc_title}</a>
</li>
<li class="page_item">
<a href="module-index.html"
class="${'module-index.html'== destfile and 'selected' or ''}"
title="${doc_title.lower()} package and module reference">Modules</a>
</li>
<?python
trac_url = generator.trac_url
mailing_list_url = generator.mailing_list_url
?>
<li py:if="trac_url">
<a href="${trac_url}"
title="Wiki / Subversion / Roadmap / Bug Tracker"
>Trac</a>
</li>
<li py:if="generator.blog_url">
<a href="${generator.blog_url}">Blog</a>
</li>
<li py:if="mailing_list_url">
<a href="${mailing_list_url}"
class="${mailing_list_url == destfile and 'selected' or ''}"
title="Mailing List">Discuss</a>
</li>
</ul>
</li>
</ul>
</div>
<hr />
<div id="content" py:content="content()"/>
<div id="footer">
<?python license = generator.get_document('doc-license') ?>
<p style="float: left;">
built with
<a href="http://lesscode.org/projects/pudge/"
>pudge/${pudge.__version__}</a><br />
original design by
<a href="http://blog.ratterobert.com/"
>ratter / robert</a><br />
</p>
<p style="float:right;">
evan.rosson (at) gmail.com
</p>
</div>
</div>
</body>
</html>

17
docs/tools.rst Normal file

@ -0,0 +1,17 @@
SQLAlchemy migrate tools
========================
The most commonly used tool is the :ref:`migrate <command-line-usage>` command.
.. index:: repository migration
There is a second tool, :command:`migrate_repository.py`, that may be
used to migrate your repository from a version of SQLAlchemy migrate
before 0.4.5 to the current version.
.. module:: migrate.versioning.migrate_repository
:synopsis: Tool for migrating pre 0.4.5 repositories to current layout
Running :command:`migrate_repository.py` is as easy as:
:samp:`migrate_repository.py {repository_directory}`

495
docs/versioning.rst Normal file

@ -0,0 +1,495 @@
.. _versioning-system:
.. currentmodule:: migrate.versioning
.. highlight:: bash
***********************************
Database schema versioning workflow
***********************************
SQLAlchemy migrate provides the :mod:`migrate.versioning` API that is
also available as the :ref:`migrate <command-line-usage>` command.
The purpose of this package is to serve as a frontend for migrations. It provides commands
to manage the migrate repository and database selection, as well as script versioning.
Project Setup
=============
.. _create_change_repository:
Create a change repository
--------------------------
To begin, we'll need to create a *repository* for our
project. Repositories are associated with a single database schema,
and store collections of change scripts to manage that schema. The
scripts in a repository may be applied to any number of databases.
Repositories each have a name. This name is used to identify the
repository we're working with.
All work with repositories is done using the migrate command. Let's
create our project's repository::
$ migrate create my_repository "Example project"
This creates an initially empty repository named `Example project` at
my_repository/, relative to the current directory. The repository directory
contains a subdirectory versions that will store the schema versions,
a configuration file :file:`migrate.cfg` that contains
:ref:`repository configuration <repository_configuration>`, a
:file:`README` file noting that the directory is a
sqlalchemy-migrate repository, and a script :ref:`manage.py <project_management_script>`
that has the same functionality as the :ref:`migrate <command-line-usage>` command but is
preconfigured with the repository.
Version-control a database
--------------------------
Next, we need to create a database and declare it to be under version
control. Information on a database's version is stored in the database
itself; declaring a database to be under version control creates a
table, named 'migrate_version' by default, and associates it with your
repository.
The database is specified as a `SQLAlchemy database url`_.
.. _`sqlalchemy database url`:
http://www.sqlalchemy.org/docs/05/dbengine.html#create-engine-url-arguments
::
$ python my_repository/manage.py version_control sqlite:///project.db
We can have any number of databases under this repository's version
control.
Each schema has a version that SQLAlchemy Migrate manages. Each change
script applied to the database increments this version number. You can
see a database's current version::
$ python my_repository/manage.py db_version sqlite:///project.db
0
A freshly versioned database begins at version 0 by default. This
assumes the database is empty. (If this is a bad assumption, you can
specify the version at the time the database is declared under version
control, with the "version_control" command.) We'll see that creating
and applying change scripts changes the database's version number.
Similarly, we can also see the latest version available in a
repository with the command::
$ python my_repository/manage.py version
0
We've entered no changes so far, so our repository cannot upgrade a
database past version 0.
Project management script
-------------------------
.. _project_management_script:
Many commands need to know our project's database url and repository
path - typing them each time is tedious. We can create a script for
our project that remembers the database and repository we're using,
and use it to perform commands::
$ migrate manage manage.py --repository=my_repository --url=sqlite:///project.db
$ python manage.py db_version
0
The script manage.py was created. All commands we perform with it are
the same as those performed with the 'migrate' tool, using the
repository and database connection entered above. The difference
between the script :file:`manage.py` in the current directory and the
script inside the repository is that the one in the current directory
has the database URL preconfigured.
Making schema changes
=====================
All changes to a database schema under version control should be done
via change scripts - you should avoid schema modifications (creating
tables, etc.) outside of change scripts. This allows you to determine
what the schema looks like based on the version number alone, and
helps ensure multiple databases you're working with are consistent.
Create a change script
----------------------
Our first change script will create a simple table
.. code-block:: python
account = Table('account', meta,
    Column('id', Integer, primary_key=True),
    Column('login', String(40)),
    Column('passwd', String(40)),
)
This table should be created in a change script. Let's create one::
$ python manage.py script "Add account table"
This creates an empty change script at
:file:`my_repository/versions/001_Add_account_table.py`. Next, we'll
edit this script to create our table.
Edit the change script
----------------------
Our change script defines two functions, currently empty:
:func:`upgrade` and :func:`downgrade`. We'll fill those in:
.. code-block:: python
from sqlalchemy import *
from migrate import *

meta = MetaData()

account = Table('account', meta,
    Column('id', Integer, primary_key=True),
    Column('login', String(40)),
    Column('passwd', String(40)),
)

def upgrade(migrate_engine):
    meta.bind = migrate_engine
    account.create()

def downgrade(migrate_engine):
    meta.bind = migrate_engine
    account.drop()
As you might have guessed, :func:`upgrade` upgrades the database to the next
version. This function should contain the changes we want to perform;
here, we're creating a table. :func:`downgrade` should reverse changes made
by :func:`upgrade`. You'll need to write both functions for every change
script. (Well, you don't *have* to write downgrade, but you won't be
able to revert to an older version of the database or test your
scripts without it.)
As you can see, **migrate_engine** is passed to both functions.
You should use this in your change scripts, rather
than creating your own engine.
You should be very careful about importing files from the rest of your
application, as your change scripts might break when your application
changes. More about `writing scripts with consistent behavior`_.
Test the change script
------------------------
Change scripts should be tested before they are committed. Testing a
script will run its :func:`upgrade` and :func:`downgrade` functions on a specified
database; you can ensure the script runs without error. You should be
testing on a test database - if something goes wrong here, you'll need
to correct it by hand. If the test is successful, the database should
appear unchanged after upgrade() and downgrade() run.
To test the script::
$ python manage.py test
Upgrading... done
Downgrading... done
Success
Our script runs on our database (``sqlite:///project.db``, as
specified in manage.py) without any errors.
Our repository's version now is::
$ python manage.py version
1
.. warning::
The test command executes the actual script; be sure you are NOT doing this on a production database.
Upgrade the database
--------------------
Now, we can apply this change script to our database::
$ python manage.py upgrade
0 -> 1... done
This upgrades the database (``sqlite:///project.db``, as specified
when we created manage.py above) to the latest available version. (We
could also specify a version number if we wished, using the ``--version``
option.) We can see the database's version number has changed, and our
table has been created::
$ python manage.py db_version
1
$ sqlite3 project.db
sqlite> .tables
account migrate_version
Our account table was created - success! As our application evolves,
we can create more change scripts using a similar process.
Writing change scripts
======================
By default, change scripts may do anything any other SQLAlchemy
program can do.
SQLAlchemy Migrate extends SQLAlchemy with several operations used to
change existing schemas - ie. ``ALTER TABLE`` stuff. See
:ref:`changeset <changeset-system>` documentation for details.
Writing scripts with consistent behavior
----------------------------------------
Normally, it's important to write change scripts in a way that's
independent of your application - the same SQL should be generated
every time, despite any changes to your app's source code. You don't
want your change scripts' behavior changing when your source code
does.
**Consider the following example of what can go wrong (i.e. what NOT to
do)**:
Your application defines a table in the model.py file:
.. code-block:: python
from sqlalchemy import *
meta = MetaData()
table = Table('mytable', meta,
    Column('id', Integer, primary_key=True),
)
... and uses this file to create a table in a change script:
.. code-block:: python
from sqlalchemy import *
from migrate import *
import model
def upgrade(migrate_engine):
    model.meta.bind = migrate_engine
    model.table.create()

def downgrade(migrate_engine):
    model.meta.bind = migrate_engine
    model.table.drop()
This runs successfully the first time. But what happens if we change
the table definition?
.. code-block:: python
table = Table('mytable', meta,
    Column('id', Integer, primary_key=True),
    Column('data', String(42)),
)
We'll create a new column with a matching change script
.. code-block:: python
from sqlalchemy import *
from migrate import *
import model
def upgrade(migrate_engine):
    model.meta.bind = migrate_engine
    model.table.c.data.create()

def downgrade(migrate_engine):
    model.meta.bind = migrate_engine
    model.table.c.data.drop()
This appears to run fine when upgrading an existing database - but the
first script's behavior changed! Running all our change scripts on a
new database will result in an error - the first script creates the
table based on the new definition, with both columns; the second
cannot add the column because it already exists.
To avoid the above problem, you should copy-paste your table
definition into each change script rather than importing parts of your
application.
Writing for a specific database
-------------------------------
Sometimes you need to write code for a specific database. Migrate
scripts can run under any database, however - the engine you're given
might belong to any database. Use ``engine.name`` to get the name of the
database you're working with:
.. code-block:: python
>>> from sqlalchemy import *
>>> from migrate import *
>>>
>>> engine = create_engine('sqlite:///:memory:')
>>> engine.name
'sqlite'
Writing .sql scripts
---------------------
You might prefer to write your change scripts in SQL, as .sql files,
rather than as Python scripts. SQLAlchemy-migrate can work with that::
$ python manage.py version
1
$ python manage.py script_sql postgres
This creates two scripts
:file:`my_repository/versions/002_postgresql_upgrade.sql` and
:file:`my_repository/versions/002_postgresql_downgrade.sql`, one for
each *operation*, or function defined in a Python change script -
upgrade and downgrade. Both are specified to run with Postgres
databases - we can add more for different databases if we like. Any
database defined by SQLAlchemy may be used here - ex. sqlite,
postgres, oracle, mysql...
.. _command-line-usage:
Command line usage
==================
.. currentmodule:: migrate.versioning.shell
The :command:`migrate` command serves as the frontend to the API. For a list of commands and help, use::
$ migrate --help
The :program:`migrate` command executes the :func:`main` function.
For ease of use, generate your own :ref:`project management script <project_management_script>`,
which calls the :func:`main` function with keyword arguments.
You may want to preset the `url` and `repository` arguments, which almost all API functions require.
If an API command looks like::
$ migrate downgrade URL REPOSITORY VERSION [--preview_sql|--preview_py]
and you have a project management script that looks like
.. code-block:: python
from migrate.versioning.shell import main
main(url='sqlite://', repository='./project/migrations/')
you have the first two slots filled, and command line usage would look like::
# preview Python script
migrate downgrade 2 --preview_py
# downgrade to version 2
migrate downgrade 2
.. versionchanged:: 0.5.4
Command line parsing refactored: positional parameters usage
The whole command line parsing was rewritten from scratch using OptionParser.
Options passed as kwargs to :func:`~migrate.versioning.shell.main` are now parsed correctly.
Options are passed to commands in the following priority (starting from highest):
- optional (given by ``--some_option`` in commandline)
- positional arguments
- kwargs passed to migrate.versioning.shell.main
Python API
==========
.. currentmodule:: migrate.versioning.api
All commands available from the command line are also available for
your Python scripts by importing :mod:`migrate.versioning.api`. See the
:mod:`migrate.versioning.api` documentation for a list of functions;
function names match equivalent shell commands. You can use this to
help integrate SQLAlchemy Migrate with your existing update process.
For example, the following commands are similar:
*From the command line*::
$ migrate help help
/usr/bin/migrate help COMMAND
Displays help on a given command.
*From Python*
.. code-block:: python
import migrate.versioning.api
migrate.versioning.api.help('help')
# Output:
# %prog help COMMAND
#
# Displays help on a given command.
.. _migrate.versioning.api: module-migrate.versioning.api.html
.. _repository_configuration:
Experimental commands
=====================
Some interesting new features to create SQLAlchemy db models from
existing databases and vice versa were developed by Christian Simms
during the development of SQLAlchemy-migrate 0.4.5. These features are
roughly documented in a `thread in migrate-users`_.
.. _`thread in migrate-users`:
http://groups.google.com/group/migrate-users/browse_thread/thread/a5605184e08abf33#msg_85c803b71b29993f
Here are the commands' descriptions as given by ``migrate help <command>``:
- ``compare_model_to_db``: Compare the current model (assumed to be a
module level variable of type sqlalchemy.MetaData) against the
current database.
- ``create_model``: Dump the current database as a Python model to
stdout.
- ``make_update_script_for_model``: Create a script changing the old
Python model to the new (current) Python model, sending to stdout.
- ``upgrade_db_from_model``: Modify the database to match the
structure of the current Python model. This also sets the db_version
number to the latest in the repository.
As this section's headline says: these features are EXPERIMENTAL. Take
the necessary arguments to the commands from the output of ``migrate
help <command>``.
Repository configuration
========================
SQLAlchemy-migrate repositories can be configured in their migrate.cfg
files. The initial configuration is performed by the `migrate create`
call explained in :ref:`Create a change repository
<create_change_repository>`. The following options are available
currently:
- `repository_id` Used to identify which repository this database is
versioned under. You can use the name of your project.
- `version_table` The name of the database table used to track the
schema version. This name shouldn't already be used by your
project. If this is changed once a database is under version
control, you'll need to change the table name in each database too.
- `required_dbs` When committing a change script, SQLAlchemy-migrate
will attempt to generate the sql for all supported databases;
normally, if one of them fails - probably because you don't have
that database installed - it is ignored and the commit continues,
perhaps ending successfully. Databases in this list MUST compile
successfully during a commit, or the entire commit will fail. List
the databases your application will actually be using to ensure your
updates to that database work properly. This must be a list;
example: `['postgres', 'sqlite']`
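Putting these options together, a repository's :file:`migrate.cfg` might look like the following sketch. The section name and values here are illustrative; check the file generated by ``migrate create`` for the authoritative layout.

```ini
[db_settings]
# Identifies which repository this database is versioned under;
# the name of your project is a reasonable choice.
repository_id=Example project

# Table used to track the schema version; must not clash with
# any table your application already uses.
version_table=migrate_version

# Databases whose SQL MUST compile successfully during a commit.
required_dbs=['postgres', 'sqlite']
```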

6
migrate/__init__.py Normal file

@ -0,0 +1,6 @@
"""
SQLAlchemy migrate provides two APIs :mod:`migrate.versioning` for
database schema version and repository management and
:mod:`migrate.changeset` that allows to define database schema changes
using Python.
"""

BIN
migrate/__init__.pyc Normal file

Binary file not shown.

View File

@ -0,0 +1,28 @@
"""
This module extends SQLAlchemy and provides additional DDL [#]_
support.
.. [#] SQL Data Definition Language
"""
import sqlalchemy
from sqlalchemy import __version__ as _sa_version
import re
_sa_version = tuple(int(re.match(r"\d+", x).group(0)) for x in _sa_version.split("."))
SQLA_06 = _sa_version >= (0, 6)
del re
del _sa_version
from migrate.changeset.schema import *
from migrate.changeset.constraint import *
sqlalchemy.schema.Table.__bases__ += (ChangesetTable, )
sqlalchemy.schema.Column.__bases__ += (ChangesetColumn, )
sqlalchemy.schema.Index.__bases__ += (ChangesetIndex, )
sqlalchemy.schema.DefaultClause.__bases__ += (ChangesetDefaultClause, )
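The four ``__bases__`` assignments above retrofit the changeset mixins onto SQLAlchemy's own schema classes, so even Table and Column instances created before the import gain the new methods. The mechanism can be sketched with plain stand-in classes (`SchemaItem`, `Table`, and `ChangesetTable` below are illustrative stand-ins, not the real SQLAlchemy or migrate classes):

```python
class SchemaItem(object):
    """Stand-in for sqlalchemy.schema.SchemaItem."""

class Table(SchemaItem):
    """Stand-in for sqlalchemy.schema.Table."""
    def __init__(self, name):
        self.name = name

t = Table('account')          # instance created *before* patching

class ChangesetTable(object):
    """Stand-in mixin supplying schema-change operations."""
    def rename(self, new_name):
        self.name = new_name

# Appending the mixin to __bases__ retrofits it onto the existing
# class, so instances created beforehand also gain the new methods.
Table.__bases__ += (ChangesetTable,)

t.rename('accounts')
```

Note that this works because the patched class does not inherit directly from ``object``; ``object`` stays last in the method resolution order, just as it does for SQLAlchemy's schema classes.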

View File

@ -0,0 +1,340 @@
"""
Extensions to SQLAlchemy for altering existing tables.
At the moment, this isn't based on ANSI so much as on
things that just happen to work with multiple databases.
"""
import sqlalchemy as sa
from sqlalchemy.schema import SchemaVisitor
from sqlalchemy.engine.default import DefaultDialect
from sqlalchemy.schema import (ForeignKeyConstraint,
                               PrimaryKeyConstraint,
                               CheckConstraint,
                               UniqueConstraint,
                               Index)
from migrate.changeset import exceptions, constraint, SQLA_06
import StringIO
if not SQLA_06:
    from sqlalchemy.sql.compiler import SchemaGenerator, SchemaDropper
else:
    from sqlalchemy.schema import AddConstraint, DropConstraint
    from sqlalchemy.sql.compiler import DDLCompiler
    SchemaGenerator = SchemaDropper = DDLCompiler
class AlterTableVisitor(SchemaVisitor):
    """Common operations for ``ALTER TABLE`` statements."""

    def append(self, s):
        """Append content to the SchemaIterator's query buffer."""
        self.buffer.write(s)

    def execute(self):
        """Execute the contents of the SchemaIterator's buffer."""
        try:
            return self.connection.execute(self.buffer.getvalue())
        finally:
            self.buffer.truncate(0)

    def __init__(self, dialect, connection, **kw):
        self.connection = connection
        self.buffer = StringIO.StringIO()
        self.preparer = dialect.identifier_preparer
        self.dialect = dialect

    def traverse_single(self, elem):
        ret = super(AlterTableVisitor, self).traverse_single(elem)
        if ret:
            # adapt to 0.6 which uses a string-returning
            # object
            self.append(ret)

    def _to_table(self, param):
        """Returns the table object for the given param object."""
        if isinstance(param, (sa.Column, sa.Index, sa.schema.Constraint)):
            ret = param.table
        else:
            ret = param
        return ret

    def start_alter_table(self, param):
        """Returns the start of an ``ALTER TABLE`` SQL-Statement.

        Use the param object to determine the table name and use it
        for building the SQL statement.

        :param param: object to determine the table from
        :type param: :class:`sqlalchemy.Column`, :class:`sqlalchemy.Index`,
          :class:`sqlalchemy.schema.Constraint`, :class:`sqlalchemy.Table`,
          or string (table name)
        """
        table = self._to_table(param)
        self.append('\nALTER TABLE %s ' % self.preparer.format_table(table))
        return table
class ANSIColumnGenerator(AlterTableVisitor, SchemaGenerator):
"""Extends ansisql generator for column creation (alter table add col)"""
def visit_column(self, column):
"""Create a column (table already exists).
:param column: column object
:type column: :class:`sqlalchemy.Column` instance
"""
if column.default is not None:
self.traverse_single(column.default)
table = self.start_alter_table(column)
self.append("ADD ")
self.append(self.get_column_specification(column))
for cons in column.constraints:
self.traverse_single(cons)
self.execute()
# ALTER TABLE STATEMENTS
# add indexes and unique constraints
if column.index_name:
ix = Index(column.index_name,
column,
unique=bool(column.index_name or column.index))
ix.create()
elif column.unique_name:
constraint.UniqueConstraint(column,
name=column.unique_name).create()
# SA binds FK constraints to the table; add them manually
for fk in column.foreign_keys:
self.add_foreignkey(fk.constraint)
# add primary key constraint if needed
if column.primary_key_name:
cons = constraint.PrimaryKeyConstraint(column,
name=column.primary_key_name)
cons.create()
if SQLA_06:
def add_foreignkey(self, fk):
self.connection.execute(AddConstraint(fk))
class ANSIColumnDropper(AlterTableVisitor, SchemaDropper):
"""Extends ANSI SQL dropper for column dropping (``ALTER TABLE
DROP COLUMN``).
"""
def visit_column(self, column):
"""Drop a column from its table.
:param column: the column object
:type column: :class:`sqlalchemy.Column`
"""
table = self.start_alter_table(column)
self.append('DROP COLUMN %s' % self.preparer.format_column(column))
self.execute()
class ANSISchemaChanger(AlterTableVisitor, SchemaGenerator):
"""Manages changes to existing schema elements.
Note that columns are schema elements; ``ALTER TABLE ADD COLUMN``
is in SchemaGenerator.
All items may be renamed. Columns can also have many of their
properties (type, for example) changed.
Each function is passed a tuple ``(object, name)``, where object
is the kind of object you'd expect for that function
(e.g. a table for visit_table) and name is the object's new
name. ``None`` means the name is unchanged.
"""
def visit_table(self, table):
"""Rename a table. Other ops aren't supported."""
self.start_alter_table(table)
self.append("RENAME TO %s" % self.preparer.quote(table.new_name,
table.quote))
self.execute()
def visit_index(self, index):
"""Rename an index"""
self.append("ALTER INDEX %s RENAME TO %s" %
(self.preparer.quote(self._validate_identifier(index.name,
True), index.quote),
self.preparer.quote(self._validate_identifier(index.new_name,
True), index.quote)))
self.execute()
def visit_column(self, delta):
"""Rename/change a column."""
# ALTER COLUMN is implemented as several ALTER statements
keys = delta.keys()
if 'type' in keys:
self._run_subvisit(delta, self._visit_column_type)
if 'nullable' in keys:
self._run_subvisit(delta, self._visit_column_nullable)
if 'server_default' in keys:
# Skip 'default': only handle server-side defaults, others
# are managed by the app, not the db.
self._run_subvisit(delta, self._visit_column_default)
if 'name' in keys:
self._run_subvisit(delta, self._visit_column_name, start_alter=False)
def _run_subvisit(self, delta, func, start_alter=True):
"""Runs visit method based on what needs to be changed on column"""
table = self._to_table(delta.table)
col_name = delta.current_name
if start_alter:
self.start_alter_column(table, col_name)
ret = func(table, delta.result_column, delta)
self.execute()
def start_alter_column(self, table, col_name):
"""Starts ALTER COLUMN"""
self.start_alter_table(table)
self.append("ALTER COLUMN %s " % self.preparer.quote(col_name, table.quote))
def _visit_column_nullable(self, table, column, delta):
nullable = delta['nullable']
if nullable:
self.append("DROP NOT NULL")
else:
self.append("SET NOT NULL")
def _visit_column_default(self, table, column, delta):
default_text = self.get_column_default_string(column)
if default_text is not None:
self.append("SET DEFAULT %s" % default_text)
else:
self.append("DROP DEFAULT")
def _visit_column_type(self, table, column, delta):
type_ = delta['type']
if SQLA_06:
type_text = str(type_.compile(dialect=self.dialect))
else:
type_text = type_.dialect_impl(self.dialect).get_col_spec()
self.append("TYPE %s" % type_text)
def _visit_column_name(self, table, column, delta):
self.start_alter_table(table)
col_name = self.preparer.quote(delta.current_name, table.quote)
new_name = self.preparer.format_column(delta.result_column)
self.append('RENAME COLUMN %s TO %s' % (col_name, new_name))
class ANSIConstraintCommon(AlterTableVisitor):
"""
Migrate's constraints require a separate creation function from
SA's: Migrate's constraints are created independently of a table;
SA's are created at the same time as the table.
"""
def get_constraint_name(self, cons):
"""Gets a name for the given constraint.
If the name is already set it will be used otherwise the
constraint's :meth:`autoname <migrate.changeset.constraint.ConstraintChangeset.autoname>`
method is used.
:param cons: constraint object
"""
if cons.name is not None:
ret = cons.name
else:
ret = cons.name = cons.autoname()
return self.preparer.quote(ret, cons.quote)
def visit_migrate_primary_key_constraint(self, *p, **k):
self._visit_constraint(*p, **k)
def visit_migrate_foreign_key_constraint(self, *p, **k):
self._visit_constraint(*p, **k)
def visit_migrate_check_constraint(self, *p, **k):
self._visit_constraint(*p, **k)
def visit_migrate_unique_constraint(self, *p, **k):
self._visit_constraint(*p, **k)
if SQLA_06:
class ANSIConstraintGenerator(ANSIConstraintCommon, SchemaGenerator):
def _visit_constraint(self, constraint):
constraint.name = self.get_constraint_name(constraint)
self.append(self.process(AddConstraint(constraint)))
self.execute()
class ANSIConstraintDropper(ANSIConstraintCommon, SchemaDropper):
def _visit_constraint(self, constraint):
constraint.name = self.get_constraint_name(constraint)
self.append(self.process(DropConstraint(constraint, cascade=constraint.cascade)))
self.execute()
else:
class ANSIConstraintGenerator(ANSIConstraintCommon, SchemaGenerator):
def get_constraint_specification(self, cons, **kwargs):
"""Constaint SQL generators.
We cannot use SA visitors because they append comma.
"""
if isinstance(cons, PrimaryKeyConstraint):
if cons.name is not None:
self.append("CONSTRAINT %s " % self.preparer.format_constraint(cons))
self.append("PRIMARY KEY ")
self.append("(%s)" % ', '.join(self.preparer.quote(c.name, c.quote)
for c in cons))
self.define_constraint_deferrability(cons)
elif isinstance(cons, ForeignKeyConstraint):
self.define_foreign_key(cons)
elif isinstance(cons, CheckConstraint):
if cons.name is not None:
self.append("CONSTRAINT %s " %
self.preparer.format_constraint(cons))
self.append("CHECK (%s)" % cons.sqltext)
self.define_constraint_deferrability(cons)
elif isinstance(cons, UniqueConstraint):
if cons.name is not None:
self.append("CONSTRAINT %s " %
self.preparer.format_constraint(cons))
self.append("UNIQUE (%s)" % \
(', '.join(self.preparer.quote(c.name, c.quote) for c in cons)))
self.define_constraint_deferrability(cons)
else:
raise exceptions.InvalidConstraintError(cons)
def _visit_constraint(self, constraint):
table = self.start_alter_table(constraint)
constraint.name = self.get_constraint_name(constraint)
self.append("ADD ")
self.get_constraint_specification(constraint)
self.execute()
class ANSIConstraintDropper(ANSIConstraintCommon, SchemaDropper):
def _visit_constraint(self, constraint):
self.start_alter_table(constraint)
self.append("DROP CONSTRAINT ")
constraint.name = self.get_constraint_name(constraint)
self.append(self.preparer.format_constraint(constraint))
if constraint.cascade:
self.cascade_constraint(constraint)
self.execute()
def cascade_constraint(self, constraint):
self.append(" CASCADE")
class ANSIDialect(DefaultDialect):
columngenerator = ANSIColumnGenerator
columndropper = ANSIColumnDropper
schemachanger = ANSISchemaChanger
constraintgenerator = ANSIConstraintGenerator
constraintdropper = ANSIConstraintDropper
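All of the ``ALTER TABLE`` visitors above share the append/execute buffer pattern: SQL fragments are accumulated in a ``StringIO`` buffer and flushed to the connection as one statement. A minimal self-contained sketch (``FakeConnection`` is hypothetical, standing in for a SQLAlchemy connection; the original Python 2 code uses the ``StringIO`` module rather than ``io``):

```python
from io import StringIO

class FakeConnection:
    """Hypothetical stand-in that just records executed SQL."""
    def __init__(self):
        self.statements = []
    def execute(self, sql):
        self.statements.append(sql)

class BufferedVisitor:
    def __init__(self, connection):
        self.connection = connection
        self.buffer = StringIO()
    def append(self, s):
        # accumulate SQL fragments
        self.buffer.write(s)
    def execute(self):
        # flush the whole buffer as one statement, then reset it
        try:
            return self.connection.execute(self.buffer.getvalue())
        finally:
            self.buffer.truncate(0)
            self.buffer.seek(0)

v = BufferedVisitor(FakeConnection())
v.append('\nALTER TABLE "user" ')
v.append('DROP COLUMN "nickname"')
v.execute()
```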


@ -0,0 +1,190 @@
"""
This module defines standalone schema constraint classes.
"""
from sqlalchemy import schema
from migrate.changeset.exceptions import *
from migrate.changeset import SQLA_06
class ConstraintChangeset(object):
"""Base class for Constraint classes."""
def _normalize_columns(self, cols, table_name=False):
"""Given: column objects or names; return col names and
(maybe) a table"""
colnames = []
table = None
for col in cols:
if isinstance(col, schema.Column):
if col.table is not None and table is None:
table = col.table
if table_name:
col = '.'.join((col.table.name, col.name))
else:
col = col.name
colnames.append(col)
return colnames, table
def __do_imports(self, visitor_name, *a, **kw):
engine = kw.pop('engine', self.table.bind)
from migrate.changeset.databases.visitor import (get_engine_visitor,
run_single_visitor)
visitorcallable = get_engine_visitor(engine, visitor_name)
run_single_visitor(engine, visitorcallable, self, *a, **kw)
def create(self, *a, **kw):
"""Create the constraint in the database.
:param engine: the database engine to use. If this is \
:keyword:`None` the instance's engine will be used
:type engine: :class:`sqlalchemy.engine.base.Engine`
"""
# TODO: set the parent here instead of in __init__
self.__do_imports('constraintgenerator', *a, **kw)
def drop(self, *a, **kw):
"""Drop the constraint from the database.
:param engine: the database engine to use. If this is
:keyword:`None` the instance's engine will be used
:param cascade: Issue CASCADE drop if database supports it
:type engine: :class:`sqlalchemy.engine.base.Engine`
:type cascade: bool
:returns: Instance with cleared columns
"""
self.cascade = kw.pop('cascade', False)
self.__do_imports('constraintdropper', *a, **kw)
# the spirit of Constraint objects is that they
# are immutable (just like in a DB. they're only ADDed
# or DROPped).
#self.columns.clear()
return self
class PrimaryKeyConstraint(ConstraintChangeset, schema.PrimaryKeyConstraint):
"""Construct PrimaryKeyConstraint
Migrate's additional parameters:
:param cols: Columns in constraint.
:param table: If columns are passed as strings, this kw is required
:type table: Table instance
:type cols: strings or Column instances
"""
__migrate_visit_name__ = 'migrate_primary_key_constraint'
def __init__(self, *cols, **kwargs):
colnames, table = self._normalize_columns(cols)
table = kwargs.pop('table', table)
super(PrimaryKeyConstraint, self).__init__(*colnames, **kwargs)
if table is not None:
self._set_parent(table)
def autoname(self):
"""Mimic the database's automatic constraint names"""
return "%s_pkey" % self.table.name
class ForeignKeyConstraint(ConstraintChangeset, schema.ForeignKeyConstraint):
"""Construct ForeignKeyConstraint
Migrate's additional parameters:
:param columns: Columns in constraint
:param refcolumns: Columns that this FK refers to in another table.
:param table: If columns are passed as strings, this kw is required
:type table: Table instance
:type columns: list of strings or Column instances
:type refcolumns: list of strings or Column instances
"""
__migrate_visit_name__ = 'migrate_foreign_key_constraint'
def __init__(self, columns, refcolumns, *args, **kwargs):
colnames, table = self._normalize_columns(columns)
table = kwargs.pop('table', table)
refcolnames, reftable = self._normalize_columns(refcolumns,
table_name=True)
super(ForeignKeyConstraint, self).__init__(colnames, refcolnames, *args,
**kwargs)
if table is not None:
self._set_parent(table)
@property
def referenced(self):
return [e.column for e in self.elements]
@property
def reftable(self):
return self.referenced[0].table
def autoname(self):
"""Mimic the database's automatic constraint names"""
ret = "%(table)s_%(reftable)s_fkey" % dict(
table=self.table.name,
reftable=self.reftable.name,)
return ret
class CheckConstraint(ConstraintChangeset, schema.CheckConstraint):
"""Construct CheckConstraint
Migrate's additional parameters:
:param sqltext: Plain SQL text to check condition
:param columns: If no name is given, you must supply this kw\
to autoname the constraint
:param table: If columns are passed as strings, this kw is required
:type table: Table instance
:type columns: list of Columns instances
:type sqltext: string
"""
__migrate_visit_name__ = 'migrate_check_constraint'
def __init__(self, sqltext, *args, **kwargs):
cols = kwargs.pop('columns', [])
if not cols and not kwargs.get('name', False):
raise InvalidConstraintError('You must either set the "name" '
'parameter or "columns" to autogenerate it.')
colnames, table = self._normalize_columns(cols)
table = kwargs.pop('table', table)
schema.CheckConstraint.__init__(self, sqltext, *args, **kwargs)
if table is not None:
if not SQLA_06:
self.table = table
self._set_parent(table)
self.colnames = colnames
def autoname(self):
return "%(table)s_%(cols)s_check" % \
dict(table=self.table.name, cols="_".join(self.colnames))
class UniqueConstraint(ConstraintChangeset, schema.UniqueConstraint):
"""Construct UniqueConstraint
Migrate's additional parameters:
:param cols: Columns in constraint.
:param table: If columns are passed as strings, this kw is required
:type table: Table instance
:type cols: strings or Column instances
.. versionadded:: 0.5.5
"""
__migrate_visit_name__ = 'migrate_unique_constraint'
def __init__(self, *cols, **kwargs):
self.colnames, table = self._normalize_columns(cols)
table = kwargs.pop('table', table)
super(UniqueConstraint, self).__init__(*self.colnames, **kwargs)
if table is not None:
self._set_parent(table)
def autoname(self):
"""Mimic the database's automatic constraint names"""
return "%s_%s_key" % (self.table.name, self.colnames[0])


@ -0,0 +1,10 @@
"""
This module contains database dialect specific changeset
implementations.
"""
__all__ = [
'postgres',
'sqlite',
'mysql',
'oracle',
]


@ -0,0 +1,63 @@
"""
Firebird database specific implementations of changeset classes.
"""
from migrate.changeset import ansisql, exceptions
# TODO: SQLA 0.6 has not migrated the FB dialect over yet
from sqlalchemy.databases import firebird as sa_base
FBSchemaGenerator = sa_base.FBSchemaGenerator
class FBColumnGenerator(FBSchemaGenerator, ansisql.ANSIColumnGenerator):
"""Firebird column generator implementation."""
class FBColumnDropper(ansisql.ANSIColumnDropper):
"""Firebird column dropper implementation."""
def visit_column(self, column):
table = self.start_alter_table(column)
self.append('DROP %s' % self.preparer.format_column(column))
self.execute()
class FBSchemaChanger(ansisql.ANSISchemaChanger):
"""Firebird schema changer implementation."""
def visit_table(self, table):
"""Rename table not supported"""
raise exceptions.NotSupportedError(
"Firebird does not support renaming tables.")
def _visit_column_name(self, table, column, delta):
self.start_alter_table(table)
col_name = self.preparer.quote(delta.current_name, table.quote)
new_name = self.preparer.format_column(delta.result_column)
self.append('ALTER COLUMN %s TO %s' % (col_name, new_name))
def _visit_column_nullable(self, table, column, delta):
"""Changing NULL is not supported"""
# TODO: http://www.firebirdfaq.org/faq103/
raise exceptions.NotSupportedError(
"Firebird does not support altering NULL bevahior.")
class FBConstraintGenerator(ansisql.ANSIConstraintGenerator):
"""Firebird constraint generator implementation."""
class FBConstraintDropper(ansisql.ANSIConstraintDropper):
"""Firebird constaint dropper implementation."""
def cascade_constraint(self, constraint):
"""Cascading constraints is not supported"""
raise exceptions.NotSupportedError(
"Firebird does not support cascading constraints")
class FBDialect(ansisql.ANSIDialect):
columngenerator = FBColumnGenerator
columndropper = FBColumnDropper
schemachanger = FBSchemaChanger
constraintgenerator = FBConstraintGenerator
constraintdropper = FBConstraintDropper
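Firebird's deviations from the ANSI visitors come down to two statement shapes, sketched here as plain string builders (the helper names are ours):

```python
def fb_drop_column(table, column):
    # Firebird omits the COLUMN keyword used by the ANSI DROP COLUMN form
    return 'ALTER TABLE %s DROP %s' % (table, column)

def fb_rename_column(table, old_name, new_name):
    # Firebird renames with ALTER COLUMN ... TO ..., not RENAME COLUMN
    return 'ALTER TABLE %s ALTER COLUMN %s TO %s' % (table, old_name, new_name)
```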


@ -0,0 +1,80 @@
"""
MySQL database specific implementations of changeset classes.
"""
from migrate.changeset import ansisql, exceptions, SQLA_06
from sqlalchemy.databases import mysql as sa_base
if not SQLA_06:
MySQLSchemaGenerator = sa_base.MySQLSchemaGenerator
else:
MySQLSchemaGenerator = sa_base.MySQLDDLCompiler
class MySQLColumnGenerator(MySQLSchemaGenerator, ansisql.ANSIColumnGenerator):
pass
class MySQLColumnDropper(ansisql.ANSIColumnDropper):
pass
class MySQLSchemaChanger(MySQLSchemaGenerator, ansisql.ANSISchemaChanger):
def visit_column(self, delta):
table = delta.table
colspec = self.get_column_specification(delta.result_column)
old_col_name = self.preparer.quote(delta.current_name, table.quote)
self.start_alter_table(table)
self.append("CHANGE COLUMN %s " % old_col_name)
self.append(colspec)
self.execute()
def visit_index(self, param):
# If MySQL can do this, I can't find how
raise exceptions.NotSupportedError("MySQL cannot rename indexes")
class MySQLConstraintGenerator(ansisql.ANSIConstraintGenerator):
pass
if SQLA_06:
class MySQLConstraintDropper(MySQLSchemaGenerator, ansisql.ANSIConstraintDropper):
def visit_migrate_check_constraint(self, *p, **k):
raise exceptions.NotSupportedError("MySQL does not support CHECK"
" constraints, use triggers instead.")
else:
class MySQLConstraintDropper(ansisql.ANSIConstraintDropper):
def visit_migrate_primary_key_constraint(self, constraint):
self.start_alter_table(constraint)
self.append("DROP PRIMARY KEY")
self.execute()
def visit_migrate_foreign_key_constraint(self, constraint):
self.start_alter_table(constraint)
self.append("DROP FOREIGN KEY ")
constraint.name = self.get_constraint_name(constraint)
self.append(self.preparer.format_constraint(constraint))
self.execute()
def visit_migrate_check_constraint(self, *p, **k):
raise exceptions.NotSupportedError("MySQL does not support CHECK"
" constraints, use triggers instead.")
def visit_migrate_unique_constraint(self, constraint, *p, **k):
self.start_alter_table(constraint)
self.append('DROP INDEX ')
constraint.name = self.get_constraint_name(constraint)
self.append(self.preparer.format_constraint(constraint))
self.execute()
class MySQLDialect(ansisql.ANSIDialect):
columngenerator = MySQLColumnGenerator
columndropper = MySQLColumnDropper
schemachanger = MySQLSchemaChanger
constraintgenerator = MySQLConstraintGenerator
constraintdropper = MySQLConstraintDropper
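``MySQLSchemaChanger.visit_column`` above folds rename, type, and default changes into a single ``CHANGE COLUMN`` clause that restates the complete column specification. The generated statement has roughly this shape (a sketch; the table and column names are illustrative):

```python
def mysql_change_column(table, old_name, colspec):
    # CHANGE COLUMN takes the old name followed by a full new column spec,
    # so one statement can rename a column and alter its type at once.
    return '\nALTER TABLE %s CHANGE COLUMN %s %s' % (table, old_name, colspec)
```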


@ -0,0 +1,112 @@
"""
Oracle database specific implementations of changeset classes.
"""
import sqlalchemy as sa
from sqlalchemy.databases import oracle as sa_base
from migrate.changeset import ansisql, exceptions, SQLA_06
if not SQLA_06:
OracleSchemaGenerator = sa_base.OracleSchemaGenerator
else:
OracleSchemaGenerator = sa_base.OracleDDLCompiler
class OracleColumnGenerator(OracleSchemaGenerator, ansisql.ANSIColumnGenerator):
pass
class OracleColumnDropper(ansisql.ANSIColumnDropper):
pass
class OracleSchemaChanger(OracleSchemaGenerator, ansisql.ANSISchemaChanger):
def get_column_specification(self, column, **kwargs):
# Ignore the NOT NULL generated
override_nullable = kwargs.pop('override_nullable', None)
if override_nullable:
orig = column.nullable
column.nullable = True
ret = super(OracleSchemaChanger, self).get_column_specification(
column, **kwargs)
if override_nullable:
column.nullable = orig
return ret
def visit_column(self, delta):
keys = delta.keys()
if 'name' in keys:
self._run_subvisit(delta,
self._visit_column_name,
start_alter=False)
if len(set(('type', 'nullable', 'server_default')).intersection(keys)):
self._run_subvisit(delta,
self._visit_column_change,
start_alter=False)
def _visit_column_change(self, table, column, delta):
# Oracle cannot drop a default once created, but it can set it
# to null. We'll do that if default=None
# http://forums.oracle.com/forums/message.jspa?messageID=1273234#1273234
dropdefault_hack = (column.server_default is None \
and 'server_default' in delta.keys())
# Oracle apparently doesn't like it when we say "not null" if
# the column's already not null. Fudge it, so we don't need a
# new function
notnull_hack = ((not column.nullable) \
and ('nullable' not in delta.keys()))
# We need to specify NULL if we're removing a NOT NULL
# constraint
null_hack = (column.nullable and ('nullable' in delta.keys()))
if dropdefault_hack:
column.server_default = sa.PassiveDefault(sa.sql.null())
if notnull_hack:
column.nullable = True
colspec = self.get_column_specification(column,
override_nullable=null_hack)
if null_hack:
colspec += ' NULL'
if notnull_hack:
column.nullable = False
if dropdefault_hack:
column.server_default = None
self.start_alter_table(table)
self.append("MODIFY (")
self.append(colspec)
self.append(")")
class OracleConstraintCommon(object):
def get_constraint_name(self, cons):
# Oracle constraints can't guess their name like other DBs
if not cons.name:
raise exceptions.NotSupportedError(
"Oracle constraint names must be explicitly stated")
return cons.name
class OracleConstraintGenerator(OracleConstraintCommon,
ansisql.ANSIConstraintGenerator):
pass
class OracleConstraintDropper(OracleConstraintCommon,
ansisql.ANSIConstraintDropper):
pass
class OracleDialect(ansisql.ANSIDialect):
columngenerator = OracleColumnGenerator
columndropper = OracleColumnDropper
schemachanger = OracleSchemaChanger
constraintgenerator = OracleConstraintGenerator
constraintdropper = OracleConstraintDropper
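The three Oracle workarounds in ``_visit_column_change`` reduce to predicates over the column's state and the set of changed keys. Isolated for illustration (the function name is ours, not part of the module):

```python
def oracle_modify_flags(nullable, server_default, delta_keys):
    # Oracle cannot DROP a default once created; set it to NULL instead
    dropdefault_hack = server_default is None and 'server_default' in delta_keys
    # Avoid repeating NOT NULL when the column is already NOT NULL
    notnull_hack = (not nullable) and ('nullable' not in delta_keys)
    # Spell out NULL explicitly when removing a NOT NULL constraint
    null_hack = nullable and ('nullable' in delta_keys)
    return dropdefault_hack, notnull_hack, null_hack
```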


@ -0,0 +1,45 @@
"""
`PostgreSQL`_ database specific implementations of changeset classes.
.. _`PostgreSQL`: http://www.postgresql.org/
"""
from migrate.changeset import ansisql, SQLA_06
from sqlalchemy.databases import postgres as sa_base
if not SQLA_06:
PGSchemaGenerator = sa_base.PGSchemaGenerator
else:
PGSchemaGenerator = sa_base.PGDDLCompiler
class PGColumnGenerator(PGSchemaGenerator, ansisql.ANSIColumnGenerator):
"""PostgreSQL column generator implementation."""
pass
class PGColumnDropper(ansisql.ANSIColumnDropper):
"""PostgreSQL column dropper implementation."""
pass
class PGSchemaChanger(ansisql.ANSISchemaChanger):
"""PostgreSQL schema changer implementation."""
pass
class PGConstraintGenerator(ansisql.ANSIConstraintGenerator):
"""PostgreSQL constraint generator implementation."""
pass
class PGConstraintDropper(ansisql.ANSIConstraintDropper):
"""PostgreSQL constaint dropper implementation."""
pass
class PGDialect(ansisql.ANSIDialect):
columngenerator = PGColumnGenerator
columndropper = PGColumnDropper
schemachanger = PGSchemaChanger
constraintgenerator = PGConstraintGenerator
constraintdropper = PGConstraintDropper


@ -0,0 +1,132 @@
"""
`SQLite`_ database specific implementations of changeset classes.
.. _`SQLite`: http://www.sqlite.org/
"""
from UserDict import DictMixin
from copy import copy
from sqlalchemy.databases import sqlite as sa_base
from migrate.changeset import ansisql, exceptions, SQLA_06
if not SQLA_06:
SQLiteSchemaGenerator = sa_base.SQLiteSchemaGenerator
else:
SQLiteSchemaGenerator = sa_base.SQLiteDDLCompiler
class SQLiteCommon(object):
def _not_supported(self, op):
raise exceptions.NotSupportedError("SQLite does not support "
"%s; see http://www.sqlite.org/lang_altertable.html" % op)
class SQLiteHelper(SQLiteCommon):
def visit_column(self, delta):
if isinstance(delta, DictMixin):
column = delta.result_column
table = self._to_table(delta.table)
else:
column = delta
table = self._to_table(column.table)
table_name = self.preparer.format_table(table)
# remove all constraints and indexes so table.create() doesn't recreate them
ixbackup = copy(table.indexes)
consbackup = copy(table.constraints)
table.indexes = set()
table.constraints = set()
self.append('ALTER TABLE %s RENAME TO migration_tmp' % table_name)
self.execute()
insertion_string = self._modify_table(table, column, delta)
table.create()
self.append(insertion_string % {'table_name': table_name})
self.execute()
self.append('DROP TABLE migration_tmp')
self.execute()
# restore indexes, constraints
table.indexes = ixbackup
table.constraints = consbackup
class SQLiteColumnGenerator(SQLiteSchemaGenerator, SQLiteCommon,
ansisql.ANSIColumnGenerator):
"""SQLite ColumnGenerator"""
def add_foreignkey(self, constraint):
"""Does not support ALTER TABLE ADD FOREIGN KEY"""
self._not_supported("ALTER TABLE ADD CONSTRAINT")
class SQLiteColumnDropper(SQLiteHelper, ansisql.ANSIColumnDropper):
"""SQLite ColumnDropper"""
def _modify_table(self, table, column, delta):
columns = ', '.join(map(self.preparer.format_column, table.columns))
return 'INSERT INTO %(table_name)s SELECT ' + columns + \
' from migration_tmp'
class SQLiteSchemaChanger(SQLiteHelper, ansisql.ANSISchemaChanger):
"""SQLite SchemaChanger"""
def _modify_table(self, table, column, delta):
column = table.columns[delta.current_name]
return 'INSERT INTO %(table_name)s SELECT * from migration_tmp'
def visit_index(self, index):
"""Does not support ALTER INDEX"""
self._not_supported('ALTER INDEX')
class SQLiteConstraintGenerator(ansisql.ANSIConstraintGenerator, SQLiteCommon):
def visit_migrate_primary_key_constraint(self, constraint):
tmpl = "CREATE UNIQUE INDEX %s ON %s ( %s )"
cols = ', '.join(map(self.preparer.format_column, constraint.columns))
tname = self.preparer.format_table(constraint.table)
name = self.get_constraint_name(constraint)
msg = tmpl % (name, tname, cols)
self.append(msg)
self.execute()
def visit_migrate_foreign_key_constraint(self, *p, **k):
self._not_supported('ALTER TABLE ADD CONSTRAINT')
def visit_migrate_unique_constraint(self, *p, **k):
self._not_supported('ALTER TABLE ADD CONSTRAINT')
class SQLiteConstraintDropper(ansisql.ANSIColumnDropper,
SQLiteCommon,
ansisql.ANSIConstraintCommon):
def visit_migrate_primary_key_constraint(self, constraint):
tmpl = "DROP INDEX %s "
name = self.get_constraint_name(constraint)
msg = tmpl % (name)
self.append(msg)
self.execute()
def visit_migrate_foreign_key_constraint(self, *p, **k):
self._not_supported('ALTER TABLE DROP CONSTRAINT')
def visit_migrate_check_constraint(self, *p, **k):
self._not_supported('ALTER TABLE DROP CONSTRAINT')
def visit_migrate_unique_constraint(self, *p, **k):
self._not_supported('ALTER TABLE DROP CONSTRAINT')
# TODO: technically primary key is a NOT NULL + UNIQUE constraint, should add NOT NULL to index
class SQLiteDialect(ansisql.ANSIDialect):
columngenerator = SQLiteColumnGenerator
columndropper = SQLiteColumnDropper
schemachanger = SQLiteSchemaChanger
constraintgenerator = SQLiteConstraintGenerator
constraintdropper = SQLiteConstraintDropper
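The rename-and-recreate strategy in ``SQLiteHelper`` can be exercised directly with the stdlib ``sqlite3`` module, since SQLite has no ``ALTER COLUMN``/``DROP COLUMN``. This sketch drops a column the same way (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER, login TEXT, obsolete TEXT)")
conn.execute("INSERT INTO account VALUES (1, 'alice', 'x')")

# 1. move the real table out of the way
conn.execute("ALTER TABLE account RENAME TO migration_tmp")
# 2. recreate it without the dropped column
conn.execute("CREATE TABLE account (id INTEGER, login TEXT)")
# 3. copy the surviving columns back
conn.execute("INSERT INTO account SELECT id, login FROM migration_tmp")
# 4. throw the temporary table away
conn.execute("DROP TABLE migration_tmp")
```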

View File

@ -0,0 +1,70 @@
"""
Module for visitor class mapping.
"""
import sqlalchemy as sa
from migrate.changeset import ansisql
from migrate.changeset.databases import (sqlite,
postgres,
mysql,
oracle,
firebird)
# Map SA dialects to the corresponding Migrate extensions
DIALECTS = {
"default": ansisql.ANSIDialect,
"sqlite": sqlite.SQLiteDialect,
"postgres": postgres.PGDialect,
"mysql": mysql.MySQLDialect,
"oracle": oracle.OracleDialect,
"firebird": firebird.FBDialect,
}
def get_engine_visitor(engine, name):
"""
Get the visitor implementation for the given database engine.
:param engine: SQLAlchemy Engine
:param name: Name of the visitor
:type name: string
:type engine: Engine
:returns: visitor
"""
# TODO: link to supported visitors
return get_dialect_visitor(engine.dialect, name)
def get_dialect_visitor(sa_dialect, name):
"""
Get the visitor implementation for the given dialect.
Finds the visitor implementation based on the dialect class and
returns an instance initialized with the given name.
Binds dialect specific preparer to visitor.
"""
# map sa dialect to migrate dialect and return visitor
sa_dialect_name = getattr(sa_dialect, 'name', 'default')
migrate_dialect_cls = DIALECTS[sa_dialect_name]
visitor = getattr(migrate_dialect_cls, name)
# bind preparer
visitor.preparer = sa_dialect.preparer(sa_dialect)
return visitor
def run_single_visitor(engine, visitorcallable, element, **kwargs):
"""Runs only one method on the visitor"""
conn = engine.contextual_connect(close_with_result=False)
try:
visitor = visitorcallable(engine.dialect, conn)
if hasattr(element, '__migrate_visit_name__'):
fn = getattr(visitor, 'visit_' + element.__migrate_visit_name__)
else:
fn = getattr(visitor, 'visit_' + element.__visit_name__)
fn(element, **kwargs)
finally:
conn.close()
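``run_single_visitor`` above dispatches on ``__migrate_visit_name__`` when present, falling back to SQLAlchemy's ``__visit_name__``. The lookup itself is just ``getattr`` composition; a self-contained sketch (both classes here are hypothetical):

```python
def dispatch(visitor, element, **kwargs):
    # prefer Migrate's visit name, fall back to SQLAlchemy's
    name = getattr(element, '__migrate_visit_name__',
                   getattr(element, '__visit_name__', None))
    return getattr(visitor, 'visit_' + name)(element, **kwargs)

class DemoVisitor:
    def visit_migrate_unique_constraint(self, element):
        return 'dropped %s' % element.name

class DemoConstraint:
    __migrate_visit_name__ = 'migrate_unique_constraint'
    name = 'user_email_key'

result = dispatch(DemoVisitor(), DemoConstraint())
```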


@ -0,0 +1,21 @@
"""
This module provides exception classes.
"""
class Error(Exception):
"""
Changeset error.
"""
class NotSupportedError(Error):
"""
Not supported error.
"""
class InvalidConstraintError(Error):
"""
Invalid constraint error.
"""

597
migrate/changeset/schema.py Normal file

@ -0,0 +1,597 @@
"""
Schema module providing common schema operations.
"""
from UserDict import DictMixin
import sqlalchemy
from migrate.changeset import SQLA_06
from migrate.changeset.exceptions import *
from migrate.changeset.databases.visitor import (get_engine_visitor,
run_single_visitor)
__all__ = [
'create_column',
'drop_column',
'alter_column',
'rename_table',
'rename_index',
'ChangesetTable',
'ChangesetColumn',
'ChangesetIndex',
'ChangesetDefaultClause',
'ColumnDelta',
]
DEFAULT_ALTER_METADATA = True
def create_column(column, table=None, *p, **kw):
"""Create a column, given the table.
API to :meth:`ChangesetColumn.create`.
"""
if table is not None:
return table.create_column(column, *p, **kw)
return column.create(*p, **kw)
def drop_column(column, table=None, *p, **kw):
"""Drop a column, given the table.
API to :meth:`ChangesetColumn.drop`.
"""
if table is not None:
return table.drop_column(column, *p, **kw)
return column.drop(*p, **kw)
def rename_table(table, name, engine=None, **kw):
"""Rename a table.
If Table instance is given, engine is not used.
API to :meth:`ChangesetTable.rename`.
:param table: Table to be renamed.
:param name: New name for Table.
:param engine: Engine instance.
:type table: string or Table instance
:type name: string
:type engine: obj
"""
table = _to_table(table, engine)
table.rename(name, **kw)
def rename_index(index, name, table=None, engine=None, **kw):
"""Rename an index.
If Index instance is given,
table and engine are not used.
API to :meth:`ChangesetIndex.rename`.
:param index: Index to be renamed.
:param name: New name for index.
:param table: Table to which the Index belongs.
:param engine: Engine instance.
:type index: string or Index instance
:type name: string
:type table: string or Table instance
:type engine: obj
"""
index = _to_index(index, table, engine)
index.rename(name, **kw)
def alter_column(*p, **k):
"""Alter a column.
Direct API to :class:`ColumnDelta`.
:param table: Table or table name (will issue reflection).
:param engine: Will be used for reflection.
:param alter_metadata: Defaults to True. If True, changes are also applied to the schema objects.
:returns: :class:`ColumnDelta` instance
"""
k.setdefault('alter_metadata', DEFAULT_ALTER_METADATA)
if 'table' not in k and isinstance(p[0], sqlalchemy.Column):
k['table'] = p[0].table
if 'engine' not in k:
k['engine'] = k['table'].bind
engine = k['engine']
delta = ColumnDelta(*p, **k)
visitorcallable = get_engine_visitor(engine, 'schemachanger')
engine._run_visitor(visitorcallable, delta)
return delta
def _to_table(table, engine=None):
"""Return if instance of Table, else construct new with metadata"""
if isinstance(table, sqlalchemy.Table):
return table
# Given: table name, maybe an engine
meta = sqlalchemy.MetaData()
if engine is not None:
meta.bind = engine
return sqlalchemy.Table(table, meta)
def _to_index(index, table=None, engine=None):
"""Return if instance of Index, else construct new with metadata"""
if isinstance(index, sqlalchemy.Index):
return index
# Given: index name; table name required
table = _to_table(table, engine)
ret = sqlalchemy.Index(index)
ret.table = table
return ret
class ColumnDelta(DictMixin, sqlalchemy.schema.SchemaItem):
"""Extracts the differences between two columns/column-parameters
May receive parameters arranged in several different ways:
* **current_column, new_column, \*p, \*\*kw**
Additional parameters can be specified to override column
differences.
* **current_column, \*p, \*\*kw**
Additional parameters alter current_column. Table name is extracted
from current_column object.
If current_name is specified, the column is renamed
from current_name to current_column.name.
* **current_col_name, \*p, \*\*kw**
The table kw must be specified.
:param table: Table to which the current Column should be bound.\
If a table name is given, reflection will be used.
:type table: string or Table instance
:param alter_metadata: If True, changes are also applied to the metadata.
:type alter_metadata: bool
:param metadata: If `alter_metadata` is true, \
this metadata is used when reflecting tables
:type metadata: :class:`MetaData` instance
:param engine: When reflecting tables, either engine or metadata must \
be specified to acquire engine object.
:type engine: :class:`Engine` instance
:returns: :class:`ColumnDelta` instance, which exposes the altered attributes of \
`result_column` through a :func:`dict`-like object.
* :class:`ColumnDelta`.result_column is the altered column with new attributes
* :class:`ColumnDelta`.current_name is the current name of the column in the db
"""
# Column attributes that can be altered
diff_keys = ('name', 'type', 'primary_key', 'nullable',
'server_onupdate', 'server_default')
diffs = dict()
__visit_name__ = 'column'
def __init__(self, *p, **kw):
self.alter_metadata = kw.pop("alter_metadata", False)
self.meta = kw.pop("metadata", None)
self.engine = kw.pop("engine", None)
# Things are initialized differently depending on how many column
# parameters are given. Figure out how many and call the appropriate
# method.
if len(p) >= 1 and isinstance(p[0], sqlalchemy.Column):
# At least one column specified
if len(p) >= 2 and isinstance(p[1], sqlalchemy.Column):
# Two columns specified
diffs = self.compare_2_columns(*p, **kw)
else:
# Exactly one column specified
diffs = self.compare_1_column(*p, **kw)
else:
# Zero columns specified
if not len(p) or not isinstance(p[0], basestring):
raise ValueError("First argument must be column name")
diffs = self.compare_parameters(*p, **kw)
self.apply_diffs(diffs)
def __repr__(self):
return '<ColumnDelta altermetadata=%r, %s>' % (self.alter_metadata,
super(ColumnDelta, self).__repr__())
def __getitem__(self, key):
if key not in self.keys():
raise KeyError("No such diff key, available: %s" % self.diffs )
return getattr(self.result_column, key)
def __setitem__(self, key, value):
if key not in self.keys():
raise KeyError("No such diff key, available: %s" % self.diffs )
setattr(self.result_column, key, value)
def __delitem__(self, key):
raise NotImplementedError
def keys(self):
return self.diffs.keys()
def compare_parameters(self, current_name, *p, **k):
"""Compares Column objects with reflection"""
self.table = k.pop('table')
self.result_column = self._table.c.get(current_name)
if len(p):
k = self._extract_parameters(p, k, self.result_column)
return k
def compare_1_column(self, col, *p, **k):
"""Compares one Column object"""
self.table = k.pop('table', None) or col.table
self.result_column = col
if len(p):
k = self._extract_parameters(p, k, self.result_column)
return k
def compare_2_columns(self, old_col, new_col, *p, **k):
"""Compares two Column objects"""
self.process_column(new_col)
self.table = k.pop('table', None) or old_col.table or new_col.table
self.result_column = old_col
# set differences
# leave out some stuff for later comp
for key in (set(self.diff_keys) - set(('type',))):
val = getattr(new_col, key, None)
if getattr(self.result_column, key, None) != val:
k.setdefault(key, val)
# inspect types
if not self.are_column_types_eq(self.result_column.type, new_col.type):
k.setdefault('type', new_col.type)
if len(p):
k = self._extract_parameters(p, k, self.result_column)
return k
def apply_diffs(self, diffs):
"""Populate dict and column object with new values"""
self.diffs = diffs
for key in self.diff_keys:
if key in diffs:
setattr(self.result_column, key, diffs[key])
self.process_column(self.result_column)
# instantiate the type if a type class (not an instance) was given
if 'type' in diffs and callable(self.result_column.type):
self.result_column.type = self.result_column.type()
# add column to the table
if self.table and self.alter_metadata:
self.result_column.add_to_table(self.table)
def are_column_types_eq(self, old_type, new_type):
"""Compares two types to be equal"""
ret = old_type.__class__ == new_type.__class__
# String length is a special case
if ret and isinstance(new_type, sqlalchemy.types.String):
ret = (getattr(old_type, 'length', None) == \
getattr(new_type, 'length', None))
return ret
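The length-aware type comparison above can be sketched without SQLAlchemy. The stand-in classes below are illustrative only, not part of the migrate API; they mirror the same rule: the classes must match, and string-like types must also agree on length.

```python
# SQLAlchemy-free sketch of the rule in are_column_types_eq: types are
# equal when their classes match, with string length as a special case.
# StringSketch and IntegerSketch are illustrative stand-ins only.
class StringSketch(object):
    def __init__(self, length=None):
        self.length = length

class IntegerSketch(object):
    pass

def types_eq(old_type, new_type):
    ret = old_type.__class__ == new_type.__class__
    # String length is a special case: VARCHAR(40) differs from VARCHAR(80)
    if ret and isinstance(new_type, StringSketch):
        ret = (getattr(old_type, 'length', None) ==
               getattr(new_type, 'length', None))
    return ret
```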
def _extract_parameters(self, p, k, column):
"""Extracts data from p and modifies diffs"""
p = list(p)
while len(p):
if isinstance(p[0], basestring):
k.setdefault('name', p.pop(0))
elif isinstance(p[0], sqlalchemy.types.AbstractType):
k.setdefault('type', p.pop(0))
elif callable(p[0]):
p[0] = p[0]()
else:
break
if len(p):
new_col = column.copy_fixed()
new_col._init_items(*p)
k = self.compare_2_columns(column, new_col, **k)
return k
def process_column(self, column):
"""Processes default values for column"""
# XXX: this is a snippet from SA processing of positional parameters
if not SQLA_06 and column.args:
toinit = list(column.args)
else:
toinit = list()
if column.server_default is not None:
if isinstance(column.server_default, sqlalchemy.FetchedValue):
toinit.append(column.server_default)
else:
toinit.append(sqlalchemy.DefaultClause(column.server_default))
if column.server_onupdate is not None:
if isinstance(column.server_onupdate, sqlalchemy.FetchedValue):
toinit.append(column.server_onupdate)
else:
toinit.append(sqlalchemy.DefaultClause(column.server_onupdate,
for_update=True))
if toinit:
column._init_items(*toinit)
if not SQLA_06:
column.args = []
def _get_table(self):
return getattr(self, '_table', None)
def _set_table(self, table):
if isinstance(table, basestring):
if self.alter_metadata:
if not self.meta:
raise ValueError("metadata must be specified for table"
" reflection when using alter_metadata")
meta = self.meta
if self.engine:
meta.bind = self.engine
else:
if not self.engine and not self.meta:
raise ValueError("engine or metadata must be specified"
" to reflect tables")
if not self.engine:
self.engine = self.meta.bind
meta = sqlalchemy.MetaData(bind=self.engine)
self._table = sqlalchemy.Table(table, meta, autoload=True)
elif isinstance(table, sqlalchemy.Table):
self._table = table
if not self.alter_metadata:
self._table.meta = sqlalchemy.MetaData(bind=self._table.bind)
def _get_result_column(self):
return getattr(self, '_result_column', None)
def _set_result_column(self, column):
"""Set Column to Table based on alter_metadata evaluation."""
self.process_column(column)
if not hasattr(self, 'current_name'):
self.current_name = column.name
if self.alter_metadata:
self._result_column = column
else:
self._result_column = column.copy_fixed()
table = property(_get_table, _set_table)
result_column = property(_get_result_column, _set_result_column)
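ColumnDelta's constructor reduces to a small dispatch on how many Column objects lead the positional arguments. A toy mirror of that dispatch, where FakeColumn and the returned method names are illustrative stand-ins, not migrate API:

```python
# Toy mirror of ColumnDelta.__init__'s dispatch: two leading Column
# objects -> compare_2_columns, one -> compare_1_column, and a bare
# string -> compare_parameters (column looked up by name/reflection).
class FakeColumn(object):
    def __init__(self, name):
        self.name = name

def dispatch(*p):
    if len(p) >= 1 and isinstance(p[0], FakeColumn):
        if len(p) >= 2 and isinstance(p[1], FakeColumn):
            return 'compare_2_columns'
        return 'compare_1_column'
    if not len(p) or not isinstance(p[0], str):
        raise ValueError("First argument must be column name")
    return 'compare_parameters'
```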
class ChangesetTable(object):
"""Changeset extensions to SQLAlchemy tables."""
def create_column(self, column, *p, **kw):
"""Creates a column.
The column parameter may be a column definition or the name of
a column in this table.
API to :meth:`ChangesetColumn.create`
:param column: Column to be created
:type column: Column instance or string
"""
if not isinstance(column, sqlalchemy.Column):
# It's a column name
column = getattr(self.c, str(column))
column.create(table=self, *p, **kw)
def drop_column(self, column, *p, **kw):
"""Drop a column, given its name or definition.
API to :meth:`ChangesetColumn.drop`
:param column: Column to be dropped
:type column: Column instance or string
"""
if not isinstance(column, sqlalchemy.Column):
# It's a column name
try:
column = getattr(self.c, str(column))
except AttributeError:
# That column isn't part of the table. We don't need
# its entire definition to drop the column, just its
# name, so create a dummy column with the same name.
column = sqlalchemy.Column(str(column))
column.drop(table=self, *p, **kw)
def rename(self, name, *args, **kwargs):
"""Rename this table.
:param name: New name of the table.
:type name: string
:param alter_metadata: If True, table will be removed from metadata
:type alter_metadata: bool
"""
self.alter_metadata = kwargs.pop('alter_metadata', DEFAULT_ALTER_METADATA)
engine = self.bind
self.new_name = name
visitorcallable = get_engine_visitor(engine, 'schemachanger')
run_single_visitor(engine, visitorcallable, self, *args, **kwargs)
# Fix metadata registration
if self.alter_metadata:
self.name = name
self.deregister()
self._set_parent(self.metadata)
def _meta_key(self):
return sqlalchemy.schema._get_table_key(self.name, self.schema)
def deregister(self):
"""Remove this table from its metadata"""
key = self._meta_key()
meta = self.metadata
if key in meta.tables:
del meta.tables[key]
class ChangesetColumn(object):
"""Changeset extensions to SQLAlchemy columns."""
def alter(self, *p, **k):
"""Alter a column's definition: ``ALTER TABLE ALTER COLUMN``.
May supply a new column object, or a list of properties to
change.
For example, the following are equivalent::
col.alter(Column('myint', Integer, DefaultClause('foobar')))
col.alter('myint', Integer, server_default='foobar', nullable=False)
col.alter(DefaultClause('foobar'), name='myint', type=Integer,\
nullable=False)
Column name, type, server_default, and nullable may be changed
here.
Direct API to :func:`alter_column`
"""
if 'table' not in k:
k['table'] = self.table
if 'engine' not in k:
k['engine'] = k['table'].bind
return alter_column(self, *p, **k)
def create(self, table=None, index_name=None, unique_name=None,
primary_key_name=None, *args, **kwargs):
"""Create this column in the database.
Assumes the given table exists. ``ALTER TABLE ADD COLUMN``,
for most databases.
:param table: Table instance to create on.
:param index_name: Creates :class:`ChangesetIndex` on this column.
:param unique_name: Creates :class:\
`~migrate.changeset.constraint.UniqueConstraint` on this column.
:param primary_key_name: Creates :class:\
`~migrate.changeset.constraint.PrimaryKeyConstraint` on this column.
:param alter_metadata: If True, column will be added to table object.
:type table: Table instance
:type index_name: string
:type unique_name: string
:type primary_key_name: string
:type alter_metadata: bool
"""
self.alter_metadata = kwargs.pop('alter_metadata', DEFAULT_ALTER_METADATA)
self.index_name = index_name
self.unique_name = unique_name
self.primary_key_name = primary_key_name
for cons in ('index_name', 'unique_name', 'primary_key_name'):
self._check_sanity_constraints(cons)
if self.alter_metadata:
self.add_to_table(table)
engine = self.table.bind
visitorcallable = get_engine_visitor(engine, 'columngenerator')
engine._run_visitor(visitorcallable, self, *args, **kwargs)
return self
def drop(self, table=None, *args, **kwargs):
"""Drop this column from the database, leaving its table intact.
``ALTER TABLE DROP COLUMN``, for most databases.
:param alter_metadata: If True, column will be removed from table object.
:type alter_metadata: bool
"""
self.alter_metadata = kwargs.pop('alter_metadata', DEFAULT_ALTER_METADATA)
if table is not None:
self.table = table
engine = self.table.bind
if self.alter_metadata:
self.remove_from_table(self.table, unset_table=False)
visitorcallable = get_engine_visitor(engine, 'columndropper')
engine._run_visitor(visitorcallable, self, *args, **kwargs)
if self.alter_metadata:
self.table = None
return self
def add_to_table(self, table):
if table and not self.table:
self._set_parent(table)
def remove_from_table(self, table, unset_table=True):
# TODO: remove indexes, primary keys, constraints, etc
if unset_table:
self.table = None
if table.c.contains_column(self):
table.c.remove(self)
# TODO: this is fixed in 0.6
def copy_fixed(self, **kw):
"""Create a copy of this ``Column``, with all attributes."""
return sqlalchemy.Column(self.name, self.type, self.default,
key=self.key,
primary_key=self.primary_key,
nullable=self.nullable,
quote=self.quote,
index=self.index,
unique=self.unique,
onupdate=self.onupdate,
autoincrement=self.autoincrement,
server_default=self.server_default,
server_onupdate=self.server_onupdate,
*[c.copy(**kw) for c in self.constraints])
def _check_sanity_constraints(self, name):
"""Check if constraints names are correct"""
obj = getattr(self, name)
if (getattr(self, name[:-5]) and not obj):
raise InvalidConstraintError("Column.create() accepts index_name,"
" primary_key_name and unique_name to generate constraints")
if not isinstance(obj, basestring) and obj is not None:
raise InvalidConstraintError(
"%s argument for column must be constraint name" % name)
class ChangesetIndex(object):
"""Changeset extensions to SQLAlchemy Indexes."""
__visit_name__ = 'index'
def rename(self, name, *args, **kwargs):
"""Change the name of an index.
:param name: New name of the Index.
:type name: string
:param alter_metadata: If True, Index object will be altered.
:type alter_metadata: bool
"""
self.alter_metadata = kwargs.pop('alter_metadata', DEFAULT_ALTER_METADATA)
engine = self.table.bind
self.new_name = name
visitorcallable = get_engine_visitor(engine, 'schemachanger')
engine._run_visitor(visitorcallable, self, *args, **kwargs)
if self.alter_metadata:
self.name = name
class ChangesetDefaultClause(object):
"""Implements comparison between :class:`DefaultClause` instances"""
def __eq__(self, other):
if isinstance(other, self.__class__):
return self.arg == other.arg
return False
def __ne__(self, other):
return not self.__eq__(other)


@ -0,0 +1,5 @@
"""
This package provides functionality to create and manage
repositories of database schema changesets and to apply these
changesets to databases.
"""

368
migrate/versioning/api.py Normal file

@ -0,0 +1,368 @@
"""
This module provides an external API to the versioning system.
.. versionchanged:: 0.4.5
``--preview_sql`` displays source file when using SQL scripts.
If Python script is used, it runs the action with mocked engine and
returns captured SQL statements.
.. versionchanged:: 0.4.5
Deprecated ``--echo`` parameter in favour of new
:func:`migrate.versioning.util.construct_engine` behavior.
"""
# Dear migrate developers,
#
# please do not comment this module using sphinx syntax because its
# docstrings are presented as user help and most users cannot
# interpret sphinx annotated ReStructuredText.
#
# Thanks,
# Jan Dittberner
import sys
import inspect
import warnings
from migrate.versioning import (exceptions, repository, schema, version,
script as script_) # command name conflict
from migrate.versioning.util import catch_known_errors, construct_engine
__all__ = [
'help',
'create',
'script',
'script_sql',
'make_update_script_for_model',
'version',
'source',
'version_control',
'db_version',
'upgrade',
'downgrade',
'drop_version_control',
'manage',
'test',
'compare_model_to_db',
'create_model',
'update_db_from_model',
]
Repository = repository.Repository
ControlledSchema = schema.ControlledSchema
VerNum = version.VerNum
PythonScript = script_.PythonScript
SqlScript = script_.SqlScript
# deprecated
def help(cmd=None, **opts):
"""%prog help COMMAND
Displays help on a given command.
"""
if cmd is None:
raise exceptions.UsageError(None)
try:
func = globals()[cmd]
except KeyError:
raise exceptions.UsageError(
"'%s' isn't a valid command. Try 'help COMMAND'" % cmd)
ret = func.__doc__
if sys.argv[0]:
ret = ret.replace('%prog', sys.argv[0])
return ret
@catch_known_errors
def create(repository, name, **opts):
"""%prog create REPOSITORY_PATH NAME [--table=TABLE]
Create an empty repository at the specified path.
You can specify the version_table to be used; by default, it is
'migrate_version'. This table is created in all version-controlled
databases.
"""
repo_path = Repository.create(repository, name, **opts)
@catch_known_errors
def script(description, repository, **opts):
"""%prog script DESCRIPTION REPOSITORY_PATH
Create an empty change script using the next unused version number
appended with the given description.
For instance, manage.py script "Add initial tables" creates:
repository/versions/001_Add_initial_tables.py
"""
repo = Repository(repository)
repo.create_script(description, **opts)
@catch_known_errors
def script_sql(database, repository, **opts):
"""%prog script_sql DATABASE REPOSITORY_PATH
Create empty change SQL scripts for given DATABASE, where DATABASE
is either specific ('postgres', 'mysql', 'oracle', 'sqlite', etc.)
or generic ('default').
For instance, manage.py script_sql postgres creates:
repository/versions/001_postgres_upgrade.sql and
repository/versions/001_postgres_downgrade.sql
"""
repo = Repository(repository)
repo.create_script_sql(database, **opts)
def version(repository, **opts):
"""%prog version REPOSITORY_PATH
Display the latest version available in a repository.
"""
repo = Repository(repository)
return repo.latest
def db_version(url, repository, **opts):
"""%prog db_version URL REPOSITORY_PATH
Show the current version of the repository with the given
connection string, under version control of the specified
repository.
The url should be any valid SQLAlchemy connection string.
"""
engine = construct_engine(url, **opts)
schema = ControlledSchema(engine, repository)
return schema.version
def source(version, dest=None, repository=None, **opts):
"""%prog source VERSION [DESTINATION] --repository=REPOSITORY_PATH
Display the Python code for a particular version in this
repository. Save it to the file at DESTINATION or, if omitted,
send to stdout.
"""
if repository is None:
raise exceptions.UsageError("A repository must be specified")
repo = Repository(repository)
ret = repo.version(version).script().source()
if dest is not None:
dest = open(dest, 'w')
dest.write(ret)
dest.close()
ret = None
return ret
def upgrade(url, repository, version=None, **opts):
"""%prog upgrade URL REPOSITORY_PATH [VERSION] [--preview_py|--preview_sql]
Upgrade a database to a later version.
This runs the upgrade() function defined in your change scripts.
By default, the database is updated to the latest available
version. You may specify a version instead, if you wish.
You may preview the Python or SQL code to be executed, rather than
actually executing it, using the appropriate 'preview' option.
"""
err = "Cannot upgrade a database of version %s to version %s. "\
"Try 'downgrade' instead."
return _migrate(url, repository, version, upgrade=True, err=err, **opts)
def downgrade(url, repository, version, **opts):
"""%prog downgrade URL REPOSITORY_PATH VERSION [--preview_py|--preview_sql]
Downgrade a database to an earlier version.
This is the reverse of upgrade; this runs the downgrade() function
defined in your change scripts.
You may preview the Python or SQL code to be executed, rather than
actually executing it, using the appropriate 'preview' option.
"""
err = "Cannot downgrade a database of version %s to version %s. "\
"Try 'upgrade' instead."
return _migrate(url, repository, version, upgrade=False, err=err, **opts)
def test(repository, url, **opts):
"""%prog test REPOSITORY_PATH URL [VERSION]
Performs the upgrade and downgrade operations on the given
database. This is not a real test and may leave the database in a
bad state, so run it only against a copy of your database.
"""
engine = construct_engine(url, **opts)
repos = Repository(repository)
script = repos.version(None).script()
# Upgrade
print "Upgrading...",
script.run(engine, 1)
print "done"
print "Downgrading...",
script.run(engine, -1)
print "done"
print "Success"
def version_control(url, repository, version=None, **opts):
"""%prog version_control URL REPOSITORY_PATH [VERSION]
Mark a database as under this repository's version control.
Once a database is under version control, schema changes should
only be done via change scripts in this repository.
This creates the table version_table in the database.
The url should be any valid SQLAlchemy connection string.
By default, the database begins at version 0 and is assumed to be
empty. If the database is not empty, you may specify a version at
which to begin instead. No attempt is made to verify this
version's correctness - the database schema is expected to be
identical to what it would be if the database were created from
scratch.
"""
engine = construct_engine(url, **opts)
ControlledSchema.create(engine, repository, version)
def drop_version_control(url, repository, **opts):
"""%prog drop_version_control URL REPOSITORY_PATH
Removes version control from a database.
"""
engine = construct_engine(url, **opts)
schema = ControlledSchema(engine, repository)
schema.drop()
def manage(file, **opts):
"""%prog manage FILENAME [VARIABLES...]
Creates a script that runs Migrate with a set of default values.
For example::
%prog manage manage.py --repository=/path/to/repository \
--url=sqlite:///project.db
would create the script manage.py. The following two commands
would then have exactly the same results::
python manage.py version
%prog version --repository=/path/to/repository
"""
return Repository.create_manage_file(file, **opts)
def compare_model_to_db(url, model, repository, **opts):
"""%prog compare_model_to_db URL MODEL REPOSITORY_PATH
Compare the current model (assumed to be a module level variable
of type sqlalchemy.MetaData) against the current database.
NOTE: This is EXPERIMENTAL.
""" # TODO: get rid of EXPERIMENTAL label
engine = construct_engine(url, **opts)
print ControlledSchema.compare_model_to_db(engine, model, repository)
def create_model(url, repository, **opts):
"""%prog create_model URL REPOSITORY_PATH [DECLERATIVE=True]
Dump the current database as a Python model to stdout.
NOTE: This is EXPERIMENTAL.
""" # TODO: get rid of EXPERIMENTAL label
engine = construct_engine(url, **opts)
declarative = opts.get('declarative', False)
print ControlledSchema.create_model(engine, repository, declarative)
# TODO: get rid of this? if we don't add back path param
@catch_known_errors
def make_update_script_for_model(url, oldmodel, model, repository, **opts):
"""%prog make_update_script_for_model URL OLDMODEL MODEL REPOSITORY_PATH
Create a script changing the old Python model to the new (current)
Python model, sending to stdout.
NOTE: This is EXPERIMENTAL.
""" # TODO: get rid of EXPERIMENTAL label
engine = construct_engine(url, **opts)
print PythonScript.make_update_script_for_model(
engine, oldmodel, model, repository, **opts)
def update_db_from_model(url, model, repository, **opts):
"""%prog update_db_from_model URL MODEL REPOSITORY_PATH
Modify the database to match the structure of the current Python
model. This also sets the db_version number to the latest in the
repository.
NOTE: This is EXPERIMENTAL.
""" # TODO: get rid of EXPERIMENTAL label
engine = construct_engine(url, **opts)
schema = ControlledSchema(engine, repository)
schema.update_db_from_model(model)
def _migrate(url, repository, version, upgrade, err, **opts):
engine = construct_engine(url, **opts)
schema = ControlledSchema(engine, repository)
version = _migrate_version(schema, version, upgrade, err)
changeset = schema.changeset(version)
for ver, change in changeset:
nextver = ver + changeset.step
print '%s -> %s... ' % (ver, nextver)
if opts.get('preview_sql'):
if isinstance(change, PythonScript):
print change.preview_sql(url, changeset.step, **opts)
elif isinstance(change, SqlScript):
print change.source()
elif opts.get('preview_py'):
source_ver = max(ver, nextver)
module = schema.repository.version(source_ver).script().module
funcname = upgrade and "upgrade" or "downgrade"
func = getattr(module, funcname)
if isinstance(change, PythonScript):
print inspect.getsource(func)
else:
raise UsageError("Python source can be only displayed"
" for python migration files")
else:
schema.runchange(ver, change, changeset.step)
print 'done'
def _migrate_version(schema, version, upgrade, err):
if version is None:
return version
# Version is specified: ensure we're upgrading in the right direction
# (current version < target version for upgrading; reverse for down)
version = VerNum(version)
cur = schema.version
if upgrade is not None:
if upgrade:
direction = cur <= version
else:
direction = cur >= version
if not direction:
raise exceptions.KnownError(err % (cur, version))
return version
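The direction guard in _migrate_version reduces to a simple predicate: upgrades require current <= target, downgrades the reverse. A standalone sketch, with plain ints standing in for VerNum and an illustrative exception class:

```python
# Standalone sketch of the upgrade/downgrade direction guard in
# _migrate_version. KnownErrorSketch stands in for the real KnownError.
class KnownErrorSketch(Exception):
    pass

def check_direction(current, target, upgrade, err):
    # Upgrading must move forward, downgrading must move backward.
    ok = current <= target if upgrade else current >= target
    if not ok:
        raise KnownErrorSketch(err % (current, target))
    return target
```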


@ -0,0 +1,5 @@
"""Things that should be imported by all migrate packages"""
#__all__ = ['logging','log','databases','operations']
from logger import logging, log
from const import databases, operations


@ -0,0 +1,11 @@
from sqlalchemy.util import OrderedDict
__all__ = ['databases', 'operations']
databases = ('sqlite', 'postgres', 'mysql', 'oracle', 'mssql', 'firebird')
# Map operation names to function names
operations = OrderedDict()
operations['upgrade'] = 'upgrade'
operations['downgrade'] = 'downgrade'


@ -0,0 +1,9 @@
"""Manages logging (to stdout) for our versioning system.
"""
import logging
log=logging.getLogger('migrate.versioning')
log.setLevel(logging.WARNING)
log.addHandler(logging.StreamHandler())
__all__ = ['log','logging']


@ -0,0 +1,27 @@
"""
Configuration parser module.
"""
from ConfigParser import ConfigParser
from migrate.versioning.base import *
from migrate.versioning import pathed
class Parser(ConfigParser):
"""A project configuration file."""
def to_dict(self, sections=None):
"""It's easier to access config values like dictionaries"""
return self._sections
class Config(pathed.Pathed, Parser):
"""Configuration class."""
def __init__(self, path, *p, **k):
"""Confirm the config file exists; read it."""
self.require_found(path)
pathed.Pathed.__init__(self, path)
Parser.__init__(self, *p, **k)
self.read(path)


@ -0,0 +1,75 @@
"""
Provide exception classes for :mod:`migrate.versioning`
"""
class Error(Exception):
"""Error base class."""
class ApiError(Error):
"""Base class for API errors."""
class KnownError(ApiError):
"""A known error condition."""
class UsageError(ApiError):
"""A known error condition where help should be displayed."""
class ControlledSchemaError(Error):
"""Base class for controlled schema errors."""
class InvalidVersionError(ControlledSchemaError):
"""Invalid version number."""
class DatabaseNotControlledError(ControlledSchemaError):
"""Database should be under version control, but it's not."""
class DatabaseAlreadyControlledError(ControlledSchemaError):
"""Database shouldn't be under version control, but it is"""
class WrongRepositoryError(ControlledSchemaError):
"""This database is under version control by another repository."""
class NoSuchTableError(ControlledSchemaError):
"""The table does not exist."""
class PathError(Error):
"""Base class for path errors."""
class PathNotFoundError(PathError):
"""A path with a file was required; found no file."""
class PathFoundError(PathError):
"""A path with no file was required; found a file."""
class RepositoryError(Error):
"""Base class for repository errors."""
class InvalidRepositoryError(RepositoryError):
"""Invalid repository error."""
class ScriptError(Error):
"""Base class for script errors."""
class InvalidScriptError(ScriptError):
"""Invalid script error."""


@ -0,0 +1,220 @@
"""
Code to generate a Python model from a database or differences
between a model and database.
Some of this is borrowed heavily from the AutoCode project at:
http://code.google.com/p/sqlautocode/
"""
import sys
import migrate
import sqlalchemy
HEADER = """
## File autogenerated by genmodel.py
from sqlalchemy import *
meta = MetaData()
"""
DECLARATIVE_HEADER = """
## File autogenerated by genmodel.py
from sqlalchemy import *
from sqlalchemy.ext import declarative
Base = declarative.declarative_base()
"""
class ModelGenerator(object):
def __init__(self, diff, declarative=False):
self.diff = diff
self.declarative = declarative
def column_repr(self, col):
kwarg = []
if col.key != col.name:
kwarg.append('key')
if col.primary_key:
col.primary_key = True # otherwise it dumps it as 1
kwarg.append('primary_key')
if not col.nullable:
kwarg.append('nullable')
if col.onupdate:
kwarg.append('onupdate')
if col.default:
if col.primary_key:
# I found that PostgreSQL automatically creates a
# default value for the sequence, but let's not show
# that.
pass
else:
kwarg.append('default')
ks = ', '.join('%s=%r' % (k, getattr(col, k)) for k in kwarg)
# crs: not sure if this is a good idea, but it gets rid of the extra
# u''
name = col.name.encode('utf8')
type_ = col.type
for cls in col.type.__class__.__mro__:
if cls.__module__ == 'sqlalchemy.types' and \
not cls.__name__.isupper():
if cls is not type_.__class__:
type_ = cls()
break
data = {
'name': name,
'type': type_,
'constraints': ', '.join([repr(cn) for cn in col.constraints]),
'args': ks and ks or ''}
if data['constraints']:
if data['args']:
data['args'] = ',' + data['args']
if data['constraints'] or data['args']:
data['maybeComma'] = ','
else:
data['maybeComma'] = ''
commonStuff = """ %(maybeComma)s %(constraints)s %(args)s)""" % data
commonStuff = commonStuff.strip()
data['commonStuff'] = commonStuff
if self.declarative:
return """%(name)s = Column(%(type)r%(commonStuff)s""" % data
else:
return """Column(%(name)r, %(type)r%(commonStuff)s""" % data
def getTableDefn(self, table):
out = []
tableName = table.name
if self.declarative:
out.append("class %(table)s(Base):" % {'table': tableName})
out.append(" __tablename__ = '%(table)s'" % {'table': tableName})
for col in table.columns:
out.append(" %s" % self.column_repr(col))
else:
out.append("%(table)s = Table('%(table)s', meta," % \
{'table': tableName})
for col in table.columns:
out.append(" %s," % self.column_repr(col))
out.append(")")
return out
def toPython(self):
"""Assume database is current and model is empty."""
out = []
if self.declarative:
out.append(DECLARATIVE_HEADER)
else:
out.append(HEADER)
out.append("")
for table in self.diff.tablesMissingInModel:
out.extend(self.getTableDefn(table))
out.append("")
return '\n'.join(out)
def toUpgradeDowngradePython(self, indent=' '):
"""Assume model is most current and database is out-of-date."""
decls = ['meta = MetaData()']
for table in self.diff.tablesMissingInModel + \
self.diff.tablesMissingInDatabase:
decls.extend(self.getTableDefn(table))
upgradeCommands, downgradeCommands = [], []
for table in self.diff.tablesMissingInModel:
tableName = table.name
upgradeCommands.append("%(table)s.drop()" % {'table': tableName})
downgradeCommands.append("%(table)s.create()" % \
{'table': tableName})
for table in self.diff.tablesMissingInDatabase:
tableName = table.name
upgradeCommands.append("%(table)s.create()" % {'table': tableName})
downgradeCommands.append("%(table)s.drop()" % {'table': tableName})
pre_command = 'meta.bind = migrate_engine'
return (
'\n'.join(decls),
'\n'.join([pre_command] + ['%s%s' % (indent, line) for line in upgradeCommands]),
'\n'.join([pre_command] + ['%s%s' % (indent, line) for line in downgradeCommands]))
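The create/drop pairing used above can be stated on its own: each table missing from the model is dropped on upgrade and recreated on downgrade, and the reverse for tables missing from the database. A sketch with plain table names (function name illustrative):

```python
# Sketch of the command pairing in toUpgradeDowngradePython: tables
# missing from the model get dropped on upgrade and recreated on
# downgrade; tables missing from the database get the reverse.
def migration_commands(missing_in_model, missing_in_database):
    upgrade, downgrade = [], []
    for name in missing_in_model:
        upgrade.append('%s.drop()' % name)
        downgrade.append('%s.create()' % name)
    for name in missing_in_database:
        upgrade.append('%s.create()' % name)
        downgrade.append('%s.drop()' % name)
    return upgrade, downgrade
```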
def applyModel(self):
"""Apply model to current database."""
# Yuck! We have to import from changeset to apply the
# monkey-patch to allow column adding/dropping.
from migrate.changeset import schema
def dbCanHandleThisChange(missingInDatabase, missingInModel, diffDecl):
if missingInDatabase and not missingInModel and not diffDecl:
# Even sqlite can handle this.
return True
else:
return not self.diff.conn.url.drivername.startswith('sqlite')
meta = sqlalchemy.MetaData(self.diff.conn.engine)
for table in self.diff.tablesMissingInModel:
table = table.tometadata(meta)
table.drop()
for table in self.diff.tablesMissingInDatabase:
table = table.tometadata(meta)
table.create()
for modelTable in self.diff.tablesWithDiff:
modelTable = modelTable.tometadata(meta)
dbTable = self.diff.reflected_model.tables[modelTable.name]
tableName = modelTable.name
missingInDatabase, missingInModel, diffDecl = \
self.diff.colDiffs[tableName]
if dbCanHandleThisChange(missingInDatabase, missingInModel,
diffDecl):
for col in missingInDatabase:
modelTable.columns[col.name].create()
for col in missingInModel:
dbTable.columns[col.name].drop()
for modelCol, databaseCol, modelDecl, databaseDecl in diffDecl:
databaseCol.alter(modelCol)
else:
# Sqlite doesn't support drop column, so you have to
# do more: create temp table, copy data to it, drop
# old table, create new table, copy data back.
#
# I wonder if this is guaranteed to be unique?
tempName = '_temp_%s' % modelTable.name
def getCopyStatement():
preparer = self.diff.conn.engine.dialect.preparer
commonCols = []
for modelCol in modelTable.columns:
if modelCol.name in dbTable.columns:
commonCols.append(modelCol.name)
commonColsStr = ', '.join(commonCols)
return 'INSERT INTO %s (%s) SELECT %s FROM %s' % \
(tableName, commonColsStr, commonColsStr, tempName)
# Move the data in one transaction, so that we don't
# leave the database in a nasty state.
connection = self.diff.conn.connect()
trans = connection.begin()
try:
connection.execute(
'CREATE TEMPORARY TABLE %s as SELECT * from %s' % \
(tempName, modelTable.name))
# make sure the drop takes place inside our
# transaction with the bind parameter
modelTable.drop(bind=connection)
modelTable.create(bind=connection)
connection.execute(getCopyStatement())
connection.execute('DROP TABLE %s' % tempName)
trans.commit()
except:
trans.rollback()
raise
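The temp-table dance above can be exercised directly against SQLite's DB-API driver. A minimal, self-contained sketch (hypothetical table `t`, dropping column `b`; not migrate's actual code path):

```python
import sqlite3

# Hypothetical schema: table "t" exists with columns (a, b); the model
# no longer has "b", and SQLite cannot DROP COLUMN directly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a INTEGER, b INTEGER);
    INSERT INTO t VALUES (1, 10), (2, 20);
""")

# The workaround described above: copy the data aside, rebuild the
# table with the new definition, copy the common columns back.
conn.executescript("""
    CREATE TEMPORARY TABLE _temp_t AS SELECT * FROM t;
    DROP TABLE t;
    CREATE TABLE t (a INTEGER);
    INSERT INTO t (a) SELECT a FROM _temp_t;
    DROP TABLE _temp_t;
""")

rows = list(conn.execute("SELECT a FROM t ORDER BY a"))
print(rows)  # the data survives without column "b": [(1,), (2,)]
```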


@ -0,0 +1,97 @@
"""
Script to migrate repository from sqlalchemy <= 0.4.4 to the new
repository schema. This shouldn't use any other migrate modules, so
that it can work in any version.
"""
import os
import sys
def usage():
"""Gives usage information."""
print """Usage: %(prog)s repository-to-migrate
Upgrade your repository to the new flat format.
NOTE: You should probably make a backup before running this.
""" % {'prog': sys.argv[0]}
sys.exit(1)
def delete_file(filepath):
"""Deletes a file and prints a message."""
print ' Deleting file: %s' % filepath
os.remove(filepath)
def move_file(src, tgt):
"""Moves a file and prints a message."""
print ' Moving file %s to %s' % (src, tgt)
if os.path.exists(tgt):
raise Exception(
'Cannot move file %s because target %s already exists' % \
(src, tgt))
os.rename(src, tgt)
def delete_directory(dirpath):
"""Delete a directory and print a message."""
print ' Deleting directory: %s' % dirpath
os.rmdir(dirpath)
def migrate_repository(repos):
"""Does the actual migration to the new repository format."""
print 'Migrating repository at: %s to new format' % repos
versions = '%s/versions' % repos
dirs = os.listdir(versions)
# Keep only the numeric version directories.
numdirs = [int(dirname) for dirname in dirs if dirname.isdigit()]
numdirs.sort()
for dirname in numdirs:
origdir = '%s/%s' % (versions, dirname)
print ' Working on directory: %s' % origdir
files = os.listdir(origdir)
files.sort()
for filename in files:
# Delete compiled Python files.
if filename.endswith('.pyc') or filename.endswith('.pyo'):
delete_file('%s/%s' % (origdir, filename))
# Delete empty __init__.py files.
origfile = '%s/__init__.py' % origdir
if os.path.exists(origfile) and len(open(origfile).read()) == 0:
delete_file(origfile)
# Move sql upgrade scripts.
if filename.endswith('.sql'):
version, dbms, operation = filename.split('.', 3)[0:3]
origfile = '%s/%s' % (origdir, filename)
# For instance: 2.postgres.upgrade.sql ->
# 002_postgres_upgrade.sql
tgtfile = '%s/%03d_%s_%s.sql' % (
versions, int(version), dbms, operation)
move_file(origfile, tgtfile)
# Move Python upgrade script.
pyfile = '%s.py' % dirname
pyfilepath = '%s/%s' % (origdir, pyfile)
if os.path.exists(pyfilepath):
tgtfile = '%s/%03d.py' % (versions, int(dirname))
move_file(pyfilepath, tgtfile)
# Try to remove directory. Will fail if it's not empty.
delete_directory(origdir)
def main():
"""Main function to be called when using this script."""
if len(sys.argv) != 2:
usage()
migrate_repository(sys.argv[1])
if __name__ == '__main__':
main()
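The renaming scheme above is easy to check in isolation; a sketch of the pure filename transforms (no filesystem access, hypothetical inputs):

```python
def sql_target(filename, versions='versions'):
    # For instance: 2.postgres.upgrade.sql -> versions/002_postgres_upgrade.sql
    version, dbms, operation = filename.split('.', 3)[0:3]
    return '%s/%03d_%s_%s.sql' % (versions, int(version), dbms, operation)

def py_target(dirname, versions='versions'):
    # For instance: directory "2" containing 2.py -> versions/002.py
    return '%s/%03d.py' % (versions, int(dirname))

print(sql_target('2.postgres.upgrade.sql'))  # versions/002_postgres_upgrade.sql
print(py_target('2'))                        # versions/002.py
```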


@ -0,0 +1,72 @@
"""
A path/directory class.
"""
import os
import shutil
from migrate.versioning import exceptions
from migrate.versioning.base import *
from migrate.versioning.util import KeyedInstance
class Pathed(KeyedInstance):
"""
A class associated with a path/directory tree.
Only one instance of this class may exist for a particular file;
__new__ will return an existing instance if possible
"""
parent = None
@classmethod
def _key(cls, path):
return str(path)
def __init__(self, path):
self.path = path
if self.__class__.parent is not None:
self._init_parent(path)
def _init_parent(self, path):
"""Try to initialize this object's parent, if it has one"""
parent_path = self.__class__._parent_path(path)
self.parent = self.__class__.parent(parent_path)
log.info("Getting parent %r:%r" % (self.__class__.parent, parent_path))
self.parent._init_child(path, self)
def _init_child(self, path, child):
"""Run when a child of this object is initialized.
Parameters: the path to the child; the child object itself
"""
@classmethod
def _parent_path(cls, path):
"""
Fetch the path of this object's parent from this object's path.
"""
# os.path.dirname(), but strip directories like files (like
# unix basename)
#
# Treat directories like files...
if path[-1] == '/':
path = path[:-1]
ret = os.path.dirname(path)
return ret
@classmethod
def require_notfound(cls, path):
"""Ensures a given path does not already exist"""
if os.path.exists(path):
raise exceptions.PathFoundError(path)
@classmethod
def require_found(cls, path):
"""Ensures a given path already exists"""
if not os.path.exists(path):
raise exceptions.PathNotFoundError(path)
def __str__(self):
return self.path
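The one-instance-per-path behaviour comes from `KeyedInstance.__new__`, which lives elsewhere in the package; a minimal sketch of the pattern (illustrative, not migrate's actual implementation):

```python
class KeyedSketch(object):
    """One instance per key; __new__ hands back the cached instance."""
    _instances = {}

    def __new__(cls, path):
        key = cls._key(path)
        if key not in cls._instances:
            cls._instances[key] = super(KeyedSketch, cls).__new__(cls)
        return cls._instances[key]

    @classmethod
    def _key(cls, path):
        return str(path)

    def __init__(self, path):
        # Note: __init__ still runs on every construction, even when a
        # cached instance is returned (as with Pathed above).
        self.path = path

a = KeyedSketch('/tmp/repo')
b = KeyedSketch('/tmp/repo')
print(a is b)  # True: same object for the same path
```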


@ -0,0 +1,216 @@
"""
SQLAlchemy migrate repository management.
"""
import os
import shutil
import string
from pkg_resources import resource_string, resource_filename
from migrate.versioning import exceptions, script, version, pathed, cfgparse
from migrate.versioning.template import template
from migrate.versioning.base import *
class Changeset(dict):
"""A collection of changes to be applied to a database.
Changesets are bound to a repository and manage a set of
scripts from that repository.
Behaves like a dict, for the most part. Keys are ordered based on step value.
"""
def __init__(self, start, *changes, **k):
"""
Give a start version; step must be explicitly stated.
"""
self.step = k.pop('step', 1)
self.start = version.VerNum(start)
self.end = self.start
for change in changes:
self.add(change)
def __iter__(self):
return iter(self.items())
def keys(self):
"""
In a series of upgrades x -> y, keys are version x. Sorted.
"""
ret = super(Changeset, self).keys()
# Reverse order if downgrading
ret.sort(reverse=(self.step < 1))
return ret
def values(self):
return [self[k] for k in self.keys()]
def items(self):
return zip(self.keys(), self.values())
def add(self, change):
"""Add new change to changeset"""
key = self.end
self.end += self.step
self[key] = change
def run(self, *p, **k):
"""Run the changeset scripts"""
for version, script in self:
script.run(*p, **k)
class Repository(pathed.Pathed):
"""A project's change script repository"""
_config = 'migrate.cfg'
_versions = 'versions'
def __init__(self, path):
log.info('Loading repository %s...' % path)
self.verify(path)
super(Repository, self).__init__(path)
self.config = cfgparse.Config(os.path.join(self.path, self._config))
self.versions = version.Collection(os.path.join(self.path,
self._versions))
log.info('Repository %s loaded successfully' % path)
log.debug('Config: %r' % self.config.to_dict())
@classmethod
def verify(cls, path):
"""
Ensure the target path is a valid repository.
:raises: :exc:`InvalidRepositoryError <migrate.versioning.exceptions.InvalidRepositoryError>`
"""
# Ensure the existence of required files
try:
cls.require_found(path)
cls.require_found(os.path.join(path, cls._config))
cls.require_found(os.path.join(path, cls._versions))
except exceptions.PathNotFoundError, e:
raise exceptions.InvalidRepositoryError(path)
# TODO: what are those options?
@classmethod
def prepare_config(cls, pkg, rsrc, name, **opts):
"""
Prepare a project configuration file for a new project.
"""
# Prepare opts
defaults = dict(
version_table = 'migrate_version',
repository_id = name,
required_dbs = [])
defaults.update(opts)
tmpl = resource_string(pkg, rsrc)
ret = string.Template(tmpl).substitute(defaults)
return ret
@classmethod
def create(cls, path, name, **opts):
"""Create a repository at a specified path"""
cls.require_notfound(path)
pkg, rsrc = template.get_repository(as_pkg=True)
tmplpkg = '.'.join((pkg, rsrc))
tmplfile = resource_filename(pkg, rsrc)
config_text = cls.prepare_config(tmplpkg, cls._config, name, **opts)
# Create repository
shutil.copytree(tmplfile, path)
# Edit config defaults
fd = open(os.path.join(path, cls._config), 'w')
fd.write(config_text)
fd.close()
# Create a management script
manager = os.path.join(path, 'manage.py')
Repository.create_manage_file(manager, repository=path)
return cls(path)
def create_script(self, description, **k):
"""API to :meth:`migrate.versioning.version.Collection.create_new_python_version`"""
self.versions.create_new_python_version(description, **k)
def create_script_sql(self, database, **k):
"""API to :meth:`migrate.versioning.version.Collection.create_new_sql_version`"""
self.versions.create_new_sql_version(database, **k)
@property
def latest(self):
"""API to :attr:`migrate.versioning.version.Collection.latest`"""
return self.versions.latest
@property
def version_table(self):
"""Returns version_table name specified in config"""
return self.config.get('db_settings', 'version_table')
@property
def id(self):
"""Returns repository id specified in config"""
return self.config.get('db_settings', 'repository_id')
def version(self, *p, **k):
"""API to :attr:`migrate.versioning.version.Collection.version`"""
return self.versions.version(*p, **k)
@classmethod
def clear(cls):
# TODO: deletes repo
super(Repository, cls).clear()
version.Collection.clear()
def changeset(self, database, start, end=None):
"""Create a changeset to migrate this database from ver. start to end/latest.
:param database: name of database to generate changeset
:param start: version to start at
:param end: version to end at (latest if None given)
:type database: string
:type start: int
:type end: int
:returns: :class:`Changeset instance <migrate.versioning.repository.Changeset>`
"""
start = version.VerNum(start)
if end is None:
end = self.latest
else:
end = version.VerNum(end)
if start <= end:
step = 1
range_mod = 1
op = 'upgrade'
else:
step = -1
range_mod = 0
op = 'downgrade'
versions = range(start + range_mod, end + range_mod, step)
changes = [self.version(v).script(database, op) for v in versions]
ret = Changeset(start, step=step, *changes)
return ret
@classmethod
def create_manage_file(cls, file_, **opts):
"""Create a project management script (manage.py)
:param file_: Destination file to be written
:param opts: Options that are passed to template
"""
vars_ = ",".join(["%s='%s'" % var for var in opts.iteritems()])
pkg, rsrc = template.manage(as_pkg=True)
tmpl = resource_string(pkg, rsrc)
result = tmpl % dict(defaults=vars_)
fd = open(file_, 'w')
fd.write(result)
fd.close()
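The key bookkeeping in `Changeset.add` and `Changeset.keys` can be reproduced with plain ints (`VerNum` behaves like an int here); an illustrative sketch:

```python
class MiniChangeset(dict):
    """Bare restatement of the Changeset key logic above, using ints."""

    def __init__(self, start, *changes, **k):
        self.step = k.pop('step', 1)
        self.start = self.end = start
        for change in changes:
            self.add(change)

    def add(self, change):
        # Each change is keyed by the version it starts from.
        self[self.end] = change
        self.end += self.step

    def ordered_keys(self):
        # Reverse order when downgrading, as in Changeset.keys()
        return sorted(dict.keys(self), reverse=(self.step < 1))

up = MiniChangeset(1, 'v1->v2', 'v2->v3', step=1)
down = MiniChangeset(3, 'v3->v2', 'v2->v1', step=-1)
print(up.ordered_keys())    # [1, 2] -- keys are the start version of each step
print(down.ordered_keys())  # [3, 2]
```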


@ -0,0 +1,210 @@
"""
Database schema version management.
"""
from sqlalchemy import (Table, Column, MetaData, String, Text, Integer,
create_engine)
from sqlalchemy.sql import and_
from sqlalchemy import exceptions as sa_exceptions
from sqlalchemy.sql import bindparam
from migrate.versioning import exceptions, genmodel, schemadiff
from migrate.versioning.repository import Repository
from migrate.versioning.util import load_model
from migrate.versioning.version import VerNum
class ControlledSchema(object):
"""A database under version control"""
def __init__(self, engine, repository):
if isinstance(repository, str):
repository = Repository(repository)
self.engine = engine
self.repository = repository
self.meta = MetaData(engine)
self.load()
def __eq__(self, other):
"""Compare two schemas by repositories and versions"""
return (self.repository is other.repository \
and self.version == other.version)
def load(self):
"""Load controlled schema version info from DB"""
tname = self.repository.version_table
if not hasattr(self, 'table') or self.table is None:
try:
self.table = Table(tname, self.meta, autoload=True)
except (sa_exceptions.NoSuchTableError,
AssertionError):
# AssertionError is raised when no table is found in an Oracle db
raise exceptions.DatabaseNotControlledError(tname)
# TODO?: verify that the table is correct (# cols, etc.)
result = self.engine.execute(self.table.select(
self.table.c.repository_id == str(self.repository.id)))
try:
data = list(result)[0]
except IndexError:
raise exceptions.DatabaseNotControlledError(tname)
self.version = data['version']
return data
def drop(self):
"""
Remove version control from a database.
"""
try:
self.table.drop()
except (sa_exceptions.SQLError):
raise exceptions.DatabaseNotControlledError(str(self.table))
def changeset(self, version=None):
"""API to Changeset creation.
Uses self.version for start version and engine.name
to get database name.
"""
database = self.engine.name
start_ver = self.version
changeset = self.repository.changeset(database, start_ver, version)
return changeset
def runchange(self, ver, change, step):
startver = ver
endver = ver + step
# Current database version must be correct! Don't run if corrupt!
if self.version != startver:
raise exceptions.InvalidVersionError("%s is not %s" % \
(self.version, startver))
# Run the change
change.run(self.engine, step)
# Update/refresh database version
self.update_repository_table(startver, endver)
self.load()
def update_repository_table(self, startver, endver):
"""Update version_table with new information"""
update = self.table.update(and_(self.table.c.version == int(startver),
self.table.c.repository_id == str(self.repository.id)))
self.engine.execute(update, version=int(endver))
def upgrade(self, version=None):
"""
Upgrade (or downgrade) to a specified version, or latest version.
"""
changeset = self.changeset(version)
for ver, change in changeset:
self.runchange(ver, change, changeset.step)
def update_db_from_model(self, model):
"""
Modify the database to match the structure of the current Python model.
"""
model = load_model(model)
diff = schemadiff.getDiffOfModelAgainstDatabase(
model, self.engine, excludeTables=[self.repository.version_table])
genmodel.ModelGenerator(diff).applyModel()
self.update_repository_table(self.version, int(self.repository.latest))
self.load()
@classmethod
def create(cls, engine, repository, version=None):
"""
Declare a database to be under a repository's version control.
:raises: :exc:`DatabaseAlreadyControlledError`
:returns: :class:`ControlledSchema`
"""
# Confirm that the version # is valid: positive, integer,
# exists in repos
if isinstance(repository, basestring):
repository = Repository(repository)
version = cls._validate_version(repository, version)
table = cls._create_table_version(engine, repository, version)
# TODO: history table
# Load repository information and return
return cls(engine, repository)
@classmethod
def _validate_version(cls, repository, version):
"""
Ensures this is a valid version number for this repository.
:raises: :exc:`InvalidVersionError` if invalid
:return: valid version number
"""
if version is None:
version = 0
try:
version = VerNum(version) # raises valueerror
if version < 0 or version > repository.latest:
raise ValueError()
except ValueError:
raise exceptions.InvalidVersionError(version)
return version
@classmethod
def _create_table_version(cls, engine, repository, version):
"""
Creates the versioning table in a database.
:raises: :exc:`DatabaseAlreadyControlledError`
"""
# Create tables
tname = repository.version_table
meta = MetaData(engine)
table = Table(
tname, meta,
Column('repository_id', String(250), primary_key=True),
Column('repository_path', Text),
Column('version', Integer), )
# there can be multiple repositories/schemas in the same db
if not table.exists():
table.create()
# test for existing repository_id
s = table.select(table.c.repository_id == bindparam("repository_id"))
result = engine.execute(s, repository_id=repository.id)
if result.fetchone():
raise exceptions.DatabaseAlreadyControlledError
# Insert data
engine.execute(table.insert().values(
repository_id=repository.id,
repository_path=repository.path,
version=int(version)))
return table
@classmethod
def compare_model_to_db(cls, engine, model, repository):
"""
Compare the current model against the current database.
"""
if isinstance(repository, basestring):
repository = Repository(repository)
model = load_model(model)
diff = schemadiff.getDiffOfModelAgainstDatabase(
model, engine, excludeTables=[repository.version_table])
return diff
@classmethod
def create_model(cls, engine, repository, declarative=False):
"""
Dump the current database as a Python model.
"""
if isinstance(repository, basestring):
repository = Repository(repository)
diff = schemadiff.getDiffOfModelAgainstDatabase(
MetaData(), engine, excludeTables=[repository.version_table])
return genmodel.ModelGenerator(diff, declarative).toPython()


@ -0,0 +1,214 @@
"""
Schema differencing support.
"""
import sqlalchemy
from migrate.changeset import SQLA_06
def getDiffOfModelAgainstDatabase(model, conn, excludeTables=None):
"""
Return differences of model against database.
:return: object which will evaluate to :keyword:`True` if there \
are differences else :keyword:`False`.
"""
return SchemaDiff(model, conn, excludeTables)
def getDiffOfModelAgainstModel(oldmodel, model, conn, excludeTables=None):
"""
Return differences of model against another model.
:return: object which will evaluate to :keyword:`True` if there \
are differences else :keyword:`False`.
"""
return SchemaDiff(model, conn, excludeTables, oldmodel=oldmodel)
class SchemaDiff(object):
"""
Differences of model against database.
"""
def __init__(self, model, conn, excludeTables=None, oldmodel=None):
"""
:param model: Python model's metadata
:param conn: active database connection.
"""
self.model = model
self.conn = conn
if not excludeTables:
# avoid a shared mutable [] as a default argument value
excludeTables = []
self.excludeTables = excludeTables
if oldmodel:
self.reflected_model = oldmodel
else:
self.reflected_model = sqlalchemy.MetaData(conn, reflect=True)
self.tablesMissingInDatabase, self.tablesMissingInModel, \
self.tablesWithDiff = [], [], []
self.colDiffs = {}
self.compareModelToDatabase()
def compareModelToDatabase(self):
"""
Do actual comparison.
"""
# Setup common variables.
cc = self.conn.contextual_connect()
if SQLA_06:
from sqlalchemy.ext import compiler
from sqlalchemy.schema import DDLElement
class DefineColumn(DDLElement):
def __init__(self, col):
self.col = col
@compiler.compiles(DefineColumn)
def compile(elem, compiler, **kw):
return compiler.get_column_specification(elem.col)
def get_column_specification(col):
return str(DefineColumn(col).compile(dialect=self.conn.dialect))
else:
schemagenerator = self.conn.dialect.schemagenerator(
self.conn.dialect, cc)
def get_column_specification(col):
return schemagenerator.get_column_specification(col)
# For each in model, find missing in database.
for modelName, modelTable in self.model.tables.items():
if modelName in self.excludeTables:
continue
reflectedTable = self.reflected_model.tables.get(modelName, None)
if reflectedTable:
# Table exists.
pass
else:
self.tablesMissingInDatabase.append(modelTable)
# For each in database, find missing in model.
for reflectedName, reflectedTable in \
self.reflected_model.tables.items():
if reflectedName in self.excludeTables:
continue
modelTable = self.model.tables.get(reflectedName, None)
if modelTable:
# Table exists.
# Find missing columns in database.
for modelCol in modelTable.columns:
databaseCol = reflectedTable.columns.get(modelCol.name,
None)
if databaseCol:
pass
else:
self.storeColumnMissingInDatabase(modelTable, modelCol)
# Find missing columns in model.
for databaseCol in reflectedTable.columns:
# TODO: no test coverage here? (mrb)
modelCol = modelTable.columns.get(databaseCol.name, None)
if modelCol:
# Compare attributes of column.
modelDecl = \
get_column_specification(modelCol)
databaseDecl = \
get_column_specification(databaseCol)
if modelDecl != databaseDecl:
# Unfortunately, sometimes the database
# decl won't quite match the model, even
# though they're the same.
mc, dc = modelCol.type.__class__, \
databaseCol.type.__class__
if (issubclass(mc, dc) \
or issubclass(dc, mc)) \
and modelCol.nullable == \
databaseCol.nullable:
# Types and nullable are the same.
pass
else:
self.storeColumnDiff(
modelTable, modelCol, databaseCol,
modelDecl, databaseDecl)
else:
self.storeColumnMissingInModel(modelTable, databaseCol)
else:
self.tablesMissingInModel.append(reflectedTable)
def __str__(self):
''' Summarize differences. '''
def colDiffDetails():
colout = []
for table in self.tablesWithDiff:
tableName = table.name
missingInDatabase, missingInModel, diffDecl = \
self.colDiffs[tableName]
if missingInDatabase:
colout.append(
' %s missing columns in database: %s' % \
(tableName, ', '.join(
[col.name for col in missingInDatabase])))
if missingInModel:
colout.append(
' %s missing columns in model: %s' % \
(tableName, ', '.join(
[col.name for col in missingInModel])))
if diffDecl:
colout.append(
' %s with different declaration of columns in database: %s' % (
tableName, str(diffDecl)))
return colout
out = []
if self.tablesMissingInDatabase:
out.append(
' tables missing in database: %s' % \
', '.join(
[table.name for table in self.tablesMissingInDatabase]))
if self.tablesMissingInModel:
out.append(
' tables missing in model: %s' % \
', '.join(
[table.name for table in self.tablesMissingInModel]))
if self.tablesWithDiff:
out.append(
' tables with differences: %s' % \
', '.join([table.name for table in self.tablesWithDiff]))
if out:
out.insert(0, 'Schema diffs:')
out.extend(colDiffDetails())
return '\n'.join(out)
else:
return 'No schema diffs'
def __len__(self):
"""
Used in bool evaluation, return of 0 means no diffs.
"""
return len(self.tablesMissingInDatabase) + \
len(self.tablesMissingInModel) + len(self.tablesWithDiff)
def storeColumnMissingInDatabase(self, table, col):
if table not in self.tablesWithDiff:
self.tablesWithDiff.append(table)
missingInDatabase, missingInModel, diffDecl = \
self.colDiffs.setdefault(table.name, ([], [], []))
missingInDatabase.append(col)
def storeColumnMissingInModel(self, table, col):
if table not in self.tablesWithDiff:
self.tablesWithDiff.append(table)
missingInDatabase, missingInModel, diffDecl = \
self.colDiffs.setdefault(table.name, ([], [], []))
missingInModel.append(col)
def storeColumnDiff(self, table, modelCol, databaseCol, modelDecl,
databaseDecl):
if table not in self.tablesWithDiff:
self.tablesWithDiff.append(table)
missingInDatabase, missingInModel, diffDecl = \
self.colDiffs.setdefault(table.name, ([], [], []))
diffDecl.append((modelCol, databaseCol, modelDecl, databaseDecl))
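Per table, the comparison above reduces to set arithmetic on column names plus a declaration check; a plain-Python sketch with hypothetical column declarations:

```python
# Hypothetical model vs. reflected declarations for one table.
model_cols = {'id': 'INTEGER', 'name': 'VARCHAR(50)', 'email': 'VARCHAR(100)'}
db_cols = {'id': 'INTEGER', 'name': 'VARCHAR(40)'}

# Columns present only on one side.
missing_in_database = sorted(set(model_cols) - set(db_cols))
missing_in_model = sorted(set(db_cols) - set(model_cols))

# Columns present on both sides whose declarations differ.
diff_decl = sorted(name for name in set(model_cols) & set(db_cols)
                   if model_cols[name] != db_cols[name])

print(missing_in_database, missing_in_model, diff_decl)
# ['email'] [] ['name']
```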


@ -0,0 +1,6 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from migrate.versioning.script.base import BaseScript
from migrate.versioning.script.py import PythonScript
from migrate.versioning.script.sql import SqlScript


@ -0,0 +1,53 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from migrate.versioning.base import log, operations
from migrate.versioning import pathed, exceptions
class BaseScript(pathed.Pathed):
"""Base class for other types of scripts.
All scripts have the following properties:
source (script.source())
The source code of the script
version (script.version())
The version number of the script
operations (script.operations())
The operations defined by the script: upgrade(), downgrade() or both.
Returns a tuple of operations.
Can also check for an operation with ex. script.operation(Script.ops.up)
""" # TODO: sphinxfy this and implement it correctly
def __init__(self, path):
log.info('Loading script %s...' % path)
self.verify(path)
super(BaseScript, self).__init__(path)
log.info('Script %s loaded successfully' % path)
@classmethod
def verify(cls, path):
"""Ensure this is a valid script
This version simply ensures the script file's existence
:raises: :exc:`InvalidScriptError <migrate.versioning.exceptions.InvalidScriptError>`
"""
try:
cls.require_found(path)
except:
raise exceptions.InvalidScriptError(path)
def source(self):
""":returns: source code of the script.
:rtype: string
"""
fd = open(self.path)
ret = fd.read()
fd.close()
return ret
def run(self, engine):
"""Core of each BaseScript subclass.
This method executes the script.
"""
raise NotImplementedError()


@ -0,0 +1,160 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import shutil
from StringIO import StringIO
import migrate
from migrate.versioning import exceptions, genmodel, schemadiff
from migrate.versioning.base import operations
from migrate.versioning.template import template
from migrate.versioning.script import base
from migrate.versioning.util import import_path, load_model, construct_engine
class PythonScript(base.BaseScript):
"""Base for Python scripts"""
@classmethod
def create(cls, path, **opts):
"""Create an empty migration script at specified path
:returns: :class:`PythonScript instance <migrate.versioning.script.py.PythonScript>`"""
cls.require_notfound(path)
# TODO: Use the default script template (defined in the template
# module) for now, but we might want to allow people to specify a
# different one later.
template_file = None
src = template.get_script(template_file)
shutil.copy(src, path)
return cls(path)
@classmethod
def make_update_script_for_model(cls, engine, oldmodel,
model, repository, **opts):
"""Create a migration script based on difference between two SA models.
:param repository: path to migrate repository
:param oldmodel: dotted.module.name:SAClass or SAClass object
:param model: dotted.module.name:SAClass or SAClass object
:param engine: SQLAlchemy engine
:type repository: string or :class:`Repository instance <migrate.versioning.repository.Repository>`
:type oldmodel: string or Class
:type model: string or Class
:type engine: Engine instance
:returns: Upgrade / Downgrade script
:rtype: string
"""
if isinstance(repository, basestring):
# oh dear, an import cycle!
from migrate.versioning.repository import Repository
repository = Repository(repository)
oldmodel = load_model(oldmodel)
model = load_model(model)
# Compute differences.
diff = schemadiff.getDiffOfModelAgainstModel(
oldmodel,
model,
engine,
excludeTables=[repository.version_table])
# TODO: diff can be False (there is no difference?)
decls, upgradeCommands, downgradeCommands = \
genmodel.ModelGenerator(diff).toUpgradeDowngradePython()
# Store differences into file.
# TODO: add custom templates
src = template.get_script(None)
f = open(src)
contents = f.read()
f.close()
# generate source
search = 'def upgrade(migrate_engine):'
contents = contents.replace(search, '\n\n'.join((decls, search)), 1)
if upgradeCommands:
contents = contents.replace('    pass', upgradeCommands, 1)
if downgradeCommands:
contents = contents.replace('    pass', downgradeCommands, 1)
return contents
@classmethod
def verify_module(cls, path):
"""Ensure path is a valid script
:param path: Script location
:type path: string
:raises: :exc:`InvalidScriptError <migrate.versioning.exceptions.InvalidScriptError>`
:returns: Python module
"""
# Try to import and get the upgrade() func
try:
module = import_path(path)
except:
# If the script itself has errors, that's not our problem
raise
try:
assert callable(module.upgrade)
except Exception, e:
raise exceptions.InvalidScriptError(path + ': %s' % str(e))
return module
def preview_sql(self, url, step, **args):
"""Mocks SQLAlchemy Engine to store all executed calls in a string
and runs :meth:`PythonScript.run <migrate.versioning.script.py.PythonScript.run>`
:returns: SQL file
"""
buf = StringIO()
args['engine_arg_strategy'] = 'mock'
args['engine_arg_executor'] = lambda s, p = '': buf.write(str(s) + p)
engine = construct_engine(url, **args)
self.run(engine, step)
return buf.getvalue()
def run(self, engine, step):
"""Core method of Script file.
Executes :func:`upgrade` or :func:`downgrade` functions
:param engine: SQLAlchemy Engine
:param step: Operation to run
:type engine: Engine instance
:type step: int
"""
if step > 0:
op = 'upgrade'
elif step < 0:
op = 'downgrade'
else:
raise exceptions.ScriptError("%d is not a valid step" % step)
funcname = base.operations[op]
func = self._func(funcname)
try:
func(engine)
except TypeError:
print "upgrade/downgrade functions must accept engine parameter (since ver 0.5.5)"
raise
@property
def module(self):
"""Calls :meth:`migrate.versioning.script.py.PythonScript.verify_module`
and returns it.
"""
if not hasattr(self, '_module'):
self._module = self.verify_module(self.path)
return self._module
def _func(self, funcname):
try:
return getattr(self.module, funcname)
except AttributeError:
msg = "The function %s is not defined in this script"
raise exceptions.ScriptError(msg % funcname)
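The `contents.replace` splicing in `make_update_script_for_model` works purely on the template text; a sketch against a stub template (the table `t` and the commands here are made up):

```python
# Stub of the default script template.
contents = (
    "from sqlalchemy import *\n"
    "from migrate import *\n\n"
    "def upgrade(migrate_engine):\n"
    "    pass\n\n"
    "def downgrade(migrate_engine):\n"
    "    pass\n")

decls = "meta = MetaData()\nt = Table('t', meta, Column('id', Integer))"
upgrade_cmds = "    meta.bind = migrate_engine\n    t.create()"
downgrade_cmds = "    meta.bind = migrate_engine\n    t.drop()"

# Declarations go just before upgrade(); the first "pass" becomes the
# upgrade body, the remaining one the downgrade body.
search = 'def upgrade(migrate_engine):'
contents = contents.replace(search, '\n\n'.join((decls, search)), 1)
contents = contents.replace('    pass', upgrade_cmds, 1)
contents = contents.replace('    pass', downgrade_cmds, 1)
print(contents)
```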


@ -0,0 +1,33 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from migrate.versioning.script import base
class SqlScript(base.BaseScript):
"""A file containing plain SQL statements."""
# TODO: why is step parameter even here?
def run(self, engine, step=None):
"""Runs SQL script through raw dbapi execute call"""
text = self.source()
# Don't rely on SA's autocommit here
# (SA uses .startswith to check if a commit is needed. What if script
# starts with a comment?)
conn = engine.connect()
try:
trans = conn.begin()
try:
# HACK: SQLite doesn't allow multiple statements through
# its execute() method, but it provides executescript() instead
dbapi = conn.engine.raw_connection()
if getattr(dbapi, 'executescript', None):
dbapi.executescript(text)
else:
conn.execute(text)
trans.commit()
except:
trans.rollback()
raise
finally:
conn.close()
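The `executescript` feature test above can be seen with the stdlib `sqlite3` driver directly; a small sketch:

```python
import sqlite3

text = "CREATE TABLE a (x INTEGER); CREATE TABLE b (y INTEGER);"
dbapi = sqlite3.connect(":memory:")  # raw DB-API connection

# Same duck-typing as above: use executescript when the driver has one,
# since sqlite3's execute() refuses multiple statements.
if getattr(dbapi, 'executescript', None):
    dbapi.executescript(text)
else:
    dbapi.execute(text)

tables = [r[0] for r in dbapi.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['a', 'b']
```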

164
migrate/versioning/shell.py Normal file

@ -0,0 +1,164 @@
"""The migrate command-line tool."""
import sys
import inspect
from optparse import OptionParser, BadOptionError
from migrate.versioning.base import *
from migrate.versioning import api, exceptions
alias = dict(
s=api.script,
vc=api.version_control,
dbv=api.db_version,
v=api.version,
)
def alias_setup():
global alias
for key, val in alias.iteritems():
setattr(api, key, val)
alias_setup()
class PassiveOptionParser(OptionParser):
def _process_args(self, largs, rargs, values):
"""little hack to support all --some_option=value parameters"""
while rargs:
arg = rargs[0]
if arg == "--":
del rargs[0]
return
elif arg[0:2] == "--":
# if parser does not know about the option
# pass it along (make it anonymous)
try:
opt = arg.split('=', 1)[0]
self._match_long_opt(opt)
except BadOptionError:
largs.append(arg)
del rargs[0]
else:
self._process_long_opt(rargs, values)
elif arg[:1] == "-" and len(arg) > 1:
self._process_short_opts(rargs, values)
elif self.allow_interspersed_args:
largs.append(arg)
del rargs[0]
def main(argv=None, **kwargs):
"""kwargs are default options that can be overridden by passing
--some_option on the command line
"""
argv = argv or list(sys.argv[1:])
commands = list(api.__all__)
commands.sort()
usage = """%%prog COMMAND ...
Available commands:
%s
Enter "%%prog help COMMAND" for information on a particular command.
""" % '\n\t'.join(commands)
parser = PassiveOptionParser(usage=usage)
parser.add_option("-v", "--verbose", action="store_true", dest="verbose")
parser.add_option("-d", "--debug", action="store_true", dest="debug")
parser.add_option("-f", "--force", action="store_true", dest="force")
help_commands = ['help', '-h', '--help']
HELP = False
try:
command = argv.pop(0)
if command in help_commands:
HELP = True
command = argv.pop(0)
except IndexError:
parser.print_help()
return
command_func = getattr(api, command, None)
if command_func is None or command.startswith('_'):
parser.error("Invalid command %s" % command)
parser.set_usage(inspect.getdoc(command_func))
f_args, f_varargs, f_kwargs, f_defaults = inspect.getargspec(command_func)
for arg in f_args:
parser.add_option(
"--%s" % arg,
dest=arg,
action='store',
type="string")
# display help of the current command
if HELP:
parser.print_help()
return
options, args = parser.parse_args(argv)
# override kwargs with anonymous parameters
override_kwargs = dict()
for arg in list(args):
if arg.startswith('--'):
args.remove(arg)
if '=' in arg:
opt, value = arg[2:].split('=', 1)
else:
opt = arg[2:]
value = True
override_kwargs[opt] = value
# override kwargs with options if user is overwriting
for key, value in options.__dict__.iteritems():
if value is not None:
override_kwargs[key] = value
# arguments that function accepts without passed kwargs
f_required = list(f_args)
candidates = dict(kwargs)
candidates.update(override_kwargs)
for key, value in candidates.iteritems():
if key in f_args:
f_required.remove(key)
# map function arguments to parsed arguments
for arg in args:
try:
kw = f_required.pop(0)
except IndexError:
parser.error("Too many arguments for command %s: %s" % (command,
arg))
kwargs[kw] = arg
# apply overrides
kwargs.update(override_kwargs)
# check if all args are given
try:
num_defaults = len(f_defaults)
except TypeError:
num_defaults = 0
f_args_default = f_args[len(f_args) - num_defaults:]
required = list(set(f_required) - set(f_args_default))
if required:
parser.error("Not enough arguments for command %s: %s not specified" \
% (command, ', '.join(required)))
# handle command
try:
ret = command_func(**kwargs)
if ret is not None:
print ret
except (exceptions.UsageError, exceptions.KnownError), e:
if e.args[0] is None:
parser.print_help()
parser.error(e.args[0])
if __name__ == "__main__":
main()

View File

@ -0,0 +1,84 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import shutil
import sys
from pkg_resources import resource_filename
from migrate.versioning.base import *
from migrate.versioning import pathed
class Packaged(pathed.Pathed):
"""An object assoc'ed with a Python package"""
def __init__(self, pkg):
self.pkg = pkg
path = self._find_path(pkg)
super(Packaged, self).__init__(path)
@classmethod
def _find_path(cls, pkg):
pkg_name, resource_name = pkg.rsplit('.', 1)
ret = resource_filename(pkg_name, resource_name)
return ret
class Collection(Packaged):
"""A collection of templates of a specific type"""
_default = None
def get_path(self, file):
return os.path.join(self.path, str(file))
def get_pkg(self, file):
return (self.pkg, str(file))
class RepositoryCollection(Collection):
_default = 'default'
class ScriptCollection(Collection):
_default = 'default.py_tmpl'
class Template(Packaged):
"""Finds the paths/packages of various Migrate templates"""
_repository = 'repository'
_script = 'script'
_manage = 'manage.py_tmpl'
def __init__(self, pkg):
super(Template, self).__init__(pkg)
self.repository = RepositoryCollection('.'.join((self.pkg,
self._repository)))
self.script = ScriptCollection('.'.join((self.pkg, self._script)))
def get_item(self, attr, filename=None, as_pkg=None, as_str=None):
item = getattr(self, attr)
if filename is None:
filename = getattr(item, '_default')
if as_pkg:
ret = item.get_pkg(filename)
if as_str:
ret = '.'.join(ret)
else:
ret = item.get_path(filename)
return ret
def get_repository(self, filename=None, as_pkg=None, as_str=None):
return self.get_item('repository', filename, as_pkg, as_str)
def get_script(self, filename=None, as_pkg=None, as_str=None):
return self.get_item('script', filename, as_pkg, as_str)
def manage(self, **k):
return (self.pkg, self._manage)
template_pkg = 'migrate.versioning.templates'
template = Template(template_pkg)

View File

View File

@ -0,0 +1,4 @@
#!/usr/bin/env python
from migrate.versioning.shell import main
main(%(defaults)s)

View File

@ -0,0 +1,4 @@
This is a database migration repository.
More information at
http://code.google.com/p/sqlalchemy-migrate/

View File

@ -0,0 +1,20 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=${repository_id}
# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=${version_table}
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=${required_dbs}

View File

@ -0,0 +1,11 @@
from sqlalchemy import *
from migrate import *
def upgrade(migrate_engine):
# Upgrade operations go here. Don't create your own engine; bind migrate_engine
# to your metadata
pass
def downgrade(migrate_engine):
# Operations to reverse the above upgrade go here.
pass

View File

@ -0,0 +1,132 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import warnings
from decorator import decorator
from pkg_resources import EntryPoint
from sqlalchemy import create_engine
from sqlalchemy.engine import Engine
from migrate.versioning import exceptions
from migrate.versioning.util.keyedinstance import KeyedInstance
from migrate.versioning.util.importpath import import_path
def load_model(dotted_name):
"""Import module and use module-level variable".
:param dotted_name: path to model in form of string: ``some.python.module:Class``
.. versionchanged:: 0.5.4
"""
if isinstance(dotted_name, basestring):
if ':' not in dotted_name:
# backwards compatibility
warnings.warn('model should be in form of module.model:User '
'and not module.model.User', DeprecationWarning)
dotted_name = ':'.join(dotted_name.rsplit('.', 1))
return EntryPoint.parse('x=%s' % dotted_name).load(False)
else:
# Assume it's already loaded.
return dotted_name
def asbool(obj):
"""Do everything to use object as bool"""
if isinstance(obj, basestring):
obj = obj.strip().lower()
if obj in ['true', 'yes', 'on', 'y', 't', '1']:
return True
elif obj in ['false', 'no', 'off', 'n', 'f', '0']:
return False
else:
raise ValueError("String is not true/false: %r" % obj)
if obj in (True, False):
return bool(obj)
else:
raise ValueError("String is not true/false: %r" % obj)
def guess_obj_type(obj):
"""Do everything to guess object type from string
Tries to convert to `int`, `bool` and finally returns if not succeded.
.. versionadded: 0.5.4
"""
result = None
try:
result = int(obj)
except:
pass
if result is None:
try:
result = asbool(obj)
except:
pass
if result is not None:
return result
else:
return obj
@decorator
def catch_known_errors(f, *a, **kw):
"""Decorator that catches known api errors
.. versionadded: 0.5.4
"""
try:
f(*a, **kw)
except exceptions.PathFoundError, e:
raise exceptions.KnownError("The path %s already exists" % e.args[0])
def construct_engine(engine, **opts):
""".. versionadded:: 0.5.4
Constructs and returns SQLAlchemy engine.
Currently, there are 2 ways to pass create_engine options to :mod:`migrate.versioning.api` functions:
:param engine: connection string or a existing engine
:param engine_dict: python dictionary of options to pass to `create_engine`
:param engine_arg_*: keyword parameters to pass to `create_engine` (evaluated with :func:`migrate.versioning.util.guess_obj_type`)
:type engine_dict: dict
:type engine: string or Engine instance
:type engine_arg_*: string
:returns: SQLAlchemy Engine
.. note::
keyword parameters override ``engine_dict`` values.
"""
if isinstance(engine, Engine):
return engine
elif not isinstance(engine, basestring):
raise ValueError("you need to pass either an existing engine or a database uri")
# get options for create_engine
if opts.get('engine_dict') and isinstance(opts['engine_dict'], dict):
kwargs = opts['engine_dict']
else:
kwargs = dict()
# DEPRECATED: handle echo the old way
echo = asbool(opts.get('echo', False))
if echo:
warnings.warn('echo=True parameter is deprecated, pass '
'engine_arg_echo=True or engine_dict={"echo": True}',
DeprecationWarning)
kwargs['echo'] = echo
# parse keyword arguments
for key, value in opts.iteritems():
if key.startswith('engine_arg_'):
kwargs[key[11:]] = guess_obj_type(value)
return create_engine(engine, **kwargs)

View File

@ -0,0 +1,16 @@
import os
import sys
def import_path(fullpath):
""" Import a file with full path specification. Allows one to
import from anywhere, something __import__ does not do.
"""
# http://zephyrfalcon.org/weblog/arch_d7_2002_08_31.html
path, filename = os.path.split(fullpath)
filename, ext = os.path.splitext(filename)
sys.path.append(path)
module = __import__(filename)
reload(module) # Might be out of date during tests
del sys.path[-1]
return module

View File

@ -0,0 +1,36 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
class KeyedInstance(object):
"""A class whose instances have a unique identifier of some sort
No two instances with the same unique ID should exist - if we try to create
a second instance, the first should be returned.
"""
_instances = dict()
def __new__(cls, *p, **k):
instances = cls._instances
clskey = str(cls)
if clskey not in instances:
instances[clskey] = dict()
instances = instances[clskey]
key = cls._key(*p, **k)
if key not in instances:
instances[key] = super(KeyedInstance, cls).__new__(cls)
return instances[key]
@classmethod
def _key(cls, *p, **k):
"""Given a unique identifier, return a dictionary key
This should be overridden by child classes, to specify which parameters
should determine an object's uniqueness
"""
raise NotImplementedError()
@classmethod
def clear(cls):
# Allow cls.clear() as well as uniqueInstance.clear(cls)
if str(cls) in cls._instances:
del cls._instances[str(cls)]

View File

@ -0,0 +1,223 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import re
import shutil
from migrate.versioning import exceptions, pathed, script
class VerNum(object):
"""A version number that behaves like a string and int at the same time"""
_instances = dict()
def __new__(cls, value):
val = str(value)
if val not in cls._instances:
cls._instances[val] = super(VerNum, cls).__new__(cls)
ret = cls._instances[val]
return ret
def __init__(self,value):
self.value = str(int(value))
if self < 0:
raise ValueError("Version number cannot be negative")
def __add__(self, value):
ret = int(self) + int(value)
return VerNum(ret)
def __sub__(self, value):
return self + (int(value) * -1)
def __cmp__(self, value):
return int(self) - int(value)
def __repr__(self):
return "<VerNum(%s)>" % self.value
def __str__(self):
return str(self.value)
def __int__(self):
return int(self.value)
class Collection(pathed.Pathed):
"""A collection of versioning scripts in a repository"""
FILENAME_WITH_VERSION = re.compile(r'^(\d{3,}).*')
def __init__(self, path):
"""Collect current version scripts in repository
and store them in self.versions
"""
super(Collection, self).__init__(path)
# Create temporary list of files, allowing skipped version numbers.
files = os.listdir(path)
if '1' in files:
# deprecation
raise Exception('It looks like you have a repository in the old '
'format (with directories for each version). '
'Please convert repository before proceeding.')
tempVersions = dict()
for filename in files:
match = self.FILENAME_WITH_VERSION.match(filename)
if match:
num = int(match.group(1))
tempVersions.setdefault(num, []).append(filename)
else:
pass # Must be a helper file or something, let's ignore it.
# Create the versions member where the keys
# are VerNum's and the values are Version's.
self.versions = dict()
for num, files in tempVersions.items():
self.versions[VerNum(num)] = Version(num, path, files)
@property
def latest(self):
""":returns: Latest version in Collection"""
return max([VerNum(0)] + self.versions.keys())
def create_new_python_version(self, description, **k):
"""Create Python files for new version"""
ver = self.latest + 1
extra = str_to_filename(description)
if extra:
if extra == '_':
extra = ''
elif not extra.startswith('_'):
extra = '_%s' % extra
filename = '%03d%s.py' % (ver, extra)
filepath = self._version_path(filename)
if os.path.exists(filepath):
raise Exception('Script already exists: %s' % filepath)
else:
script.PythonScript.create(filepath)
self.versions[ver] = Version(ver, self.path, [filename])
def create_new_sql_version(self, database, **k):
"""Create SQL files for new version"""
ver = self.latest + 1
self.versions[ver] = Version(ver, self.path, [])
# Create new files.
for op in ('upgrade', 'downgrade'):
filename = '%03d_%s_%s.sql' % (ver, database, op)
filepath = self._version_path(filename)
if os.path.exists(filepath):
raise Exception('Script already exists: %s' % filepath)
else:
open(filepath, "w").close()
self.versions[ver].add_script(filepath)
def version(self, vernum=None):
"""Returns latest Version if vernum is not given.
Otherwise, returns wanted version"""
if vernum is None:
vernum = self.latest
return self.versions[VerNum(vernum)]
@classmethod
def clear(cls):
super(Collection, cls).clear()
def _version_path(self, ver):
"""Returns path of file in versions repository"""
return os.path.join(self.path, str(ver))
class Version(object):
"""A single version in a collection """
def __init__(self, vernum, path, filelist):
self.version = VerNum(vernum)
# Collect scripts in this folder
self.sql = dict()
self.python = None
for script in filelist:
self.add_script(os.path.join(path, script))
def script(self, database=None, operation=None):
"""Returns SQL or Python Script"""
for db in (database, 'default'):
# Try to return a .sql script first
try:
return self.sql[db][operation]
except KeyError:
continue # No .sql script exists
# TODO: maybe add force Python parameter?
ret = self.python
assert ret is not None, \
"There is no script for %d version" % self.version
return ret
# deprecated?
@classmethod
def create(cls, path):
os.mkdir(path)
# create the version as a proper Python package
initfile = os.path.join(path, "__init__.py")
if not os.path.exists(initfile):
# just touch the file
open(initfile, "w").close()
try:
ret = cls(path)
except:
os.rmdir(path)
raise
return ret
def add_script(self, path):
"""Add script to Collection/Version"""
if path.endswith(Extensions.py):
self._add_script_py(path)
elif path.endswith(Extensions.sql):
self._add_script_sql(path)
SQL_FILENAME = re.compile(r'^(\d+)_([^_]+)_([^_]+).sql')
def _add_script_sql(self, path):
match = self.SQL_FILENAME.match(os.path.basename(path))
if match:
version, dbms, op = match.group(1), match.group(2), match.group(3)
else:
raise exceptions.ScriptError("Invalid SQL script name %s" % path)
# File the script into a dictionary
self.sql.setdefault(dbms, {})[op] = script.SqlScript(path)
def _add_script_py(self, path):
if self.python is not None:
raise Exception('You can only have one Python script per version,'
' but you have: %s and %s' % (self.python, path))
self.python = script.PythonScript(path)
class Extensions:
"""A namespace for file extensions"""
py = 'py'
sql = 'sql'
def str_to_filename(s):
"""Replaces spaces, (double and single) quotes
and double underscores to underscores
"""
s = s.replace(' ', '_').replace('"', '_').replace("'", '_')
while '__' in s:
s = s.replace('__', '_')
return s

15
setup.cfg Normal file
View File

@ -0,0 +1,15 @@
[build_sphinx]
source-dir = docs
build-dir = docs/_build
[egg_info]
tag_svn_revision = 1
tag_build = .dev
[nosetests]
#pdb = true
#pdb-failures = true
#stop = true
[aliases]
release = egg_info -RDb ''

45
setup.py Normal file
View File

@ -0,0 +1,45 @@
#!/usr/bin/python
import os
try:
from setuptools import setup, find_packages
except ImportError:
from ez_setup import use_setuptools
use_setuptools()
from setuptools import setup, find_packages
try:
import buildutils
except ImportError:
pass
test_requirements = ['nose >= 0.10']
required_deps = ['sqlalchemy >= 0.5', 'decorator']
readme_file = open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'README'))
setup(
name = "sqlalchemy-migrate",
version = "0.5.5",
packages = find_packages(exclude=['test*']),
include_package_data = True,
description = "Database schema migration for SQLAlchemy",
long_description = readme_file.read(),
install_requires = required_deps,
tests_require = test_requirements,
extras_require = {
'docs' : ['sphinx >= 0.5'],
},
author = "Evan Rosson",
author_email = "evan.rosson@gmail.com",
url = "http://code.google.com/p/sqlalchemy-migrate/",
maintainer = "Jan Dittberner",
maintainer_email = "jan@dittberner.info",
license = "MIT",
entry_points = """
[console_scripts]
migrate = migrate.versioning.shell:main
migrate-repository = migrate.versioning.migrate_repository:main
""",
test_suite = "nose.collector",
)

0
test/__init__.py Normal file
View File

View File

View File

@ -0,0 +1,759 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sqlalchemy
from sqlalchemy import *
from migrate import changeset
from migrate.changeset import *
from migrate.changeset.schema import ColumnDelta
from test import fixture
class TestAddDropColumn(fixture.DB):
"""Test add/drop column through all possible interfaces
also test for constraints
"""
level = fixture.DB.CONNECT
table_name = 'tmp_adddropcol'
table_int = 0
def _setup(self, url):
super(TestAddDropColumn, self)._setup(url)
self.meta = MetaData()
self.table = Table(self.table_name, self.meta,
Column('id', Integer, unique=True),
)
self.meta.bind = self.engine
if self.engine.has_table(self.table.name):
self.table.drop()
self.table.create()
def _teardown(self):
if self.engine.has_table(self.table.name):
self.table.drop()
self.meta.clear()
super(TestAddDropColumn,self)._teardown()
def run_(self, create_column_func, drop_column_func, *col_p, **col_k):
col_name = 'data'
def assert_numcols(num_of_expected_cols):
# number of cols should be correct in table object and in database
self.refresh_table(self.table_name)
result = len(self.table.c)
self.assertEquals(result, num_of_expected_cols),
if col_k.get('primary_key', None):
# new primary key: check its length too
result = len(self.table.primary_key)
self.assertEquals(result, num_of_expected_cols)
assert_numcols(1)
if len(col_p) == 0:
col_p = [String(40)]
col = Column(col_name, *col_p, **col_k)
create_column_func(col)
assert_numcols(2)
col2 = getattr(self.table.c, col_name)
self.assertEquals(col2, col)
drop_column_func(col2)
assert_numcols(1)
@fixture.usedb()
def test_undefined(self):
"""Add/drop columns not yet defined in the table"""
def add_func(col):
return create_column(col, self.table)
def drop_func(col):
return drop_column(col, self.table)
return self.run_(add_func, drop_func)
@fixture.usedb()
def test_defined(self):
"""Add/drop columns already defined in the table"""
def add_func(col):
self.meta.clear()
self.table = Table(self.table_name, self.meta,
Column('id', Integer, primary_key=True),
col,
)
return create_column(col)
def drop_func(col):
return drop_column(col)
return self.run_(add_func, drop_func)
@fixture.usedb()
def test_method_bound(self):
"""Add/drop columns via column methods; columns bound to a table
ie. no table parameter passed to function
"""
def add_func(col):
self.assert_(col.table is None, col.table)
self.table.append_column(col)
return col.create()
def drop_func(col):
#self.assert_(col.table is None,col.table)
#self.table.append_column(col)
return col.drop()
return self.run_(add_func, drop_func)
@fixture.usedb()
def test_method_notbound(self):
"""Add/drop columns via column methods; columns not bound to a table"""
def add_func(col):
return col.create(self.table)
def drop_func(col):
return col.drop(self.table)
return self.run_(add_func, drop_func)
@fixture.usedb()
def test_tablemethod_obj(self):
"""Add/drop columns via table methods; by column object"""
def add_func(col):
return self.table.create_column(col)
def drop_func(col):
return self.table.drop_column(col)
return self.run_(add_func, drop_func)
@fixture.usedb()
def test_tablemethod_name(self):
"""Add/drop columns via table methods; by column name"""
def add_func(col):
# must be bound to table
self.table.append_column(col)
return self.table.create_column(col.name)
def drop_func(col):
# Not necessarily bound to table
return self.table.drop_column(col.name)
return self.run_(add_func, drop_func)
@fixture.usedb()
def test_byname(self):
"""Add/drop columns via functions; by table object and column name"""
def add_func(col):
self.table.append_column(col)
return create_column(col.name, self.table)
def drop_func(col):
return drop_column(col.name, self.table)
return self.run_(add_func, drop_func)
@fixture.usedb()
def test_drop_column_not_in_table(self):
"""Drop column by name"""
def add_func(col):
return self.table.create_column(col)
def drop_func(col):
self.table.c.remove(col)
return self.table.drop_column(col.name)
self.run_(add_func, drop_func)
@fixture.usedb()
def test_fk(self):
"""Can create columns with foreign keys"""
# create FK's target
reftable = Table('tmp_ref', self.meta,
Column('id', Integer, primary_key=True),
)
if self.engine.has_table(reftable.name):
reftable.drop()
reftable.create()
# create column with fk
col = Column('data', Integer, ForeignKey(reftable.c.id))
if self.url.startswith('sqlite'):
self.assertRaises(changeset.exceptions.NotSupportedError,
col.create, self.table)
else:
col.create(self.table)
# check if constraint is added
for cons in self.table.constraints:
if isinstance(cons, sqlalchemy.schema.ForeignKeyConstraint):
break
else:
self.fail('No constraint found')
# TODO: test on db level if constraints work
self.assertEqual(reftable.c.id.name, col.foreign_keys[0].column.name)
col.drop(self.table)
if self.engine.has_table(reftable.name):
reftable.drop()
@fixture.usedb(not_supported='sqlite')
def test_pk(self):
"""Can create columns with primary key"""
col = Column('data', Integer, nullable=False)
self.assertRaises(changeset.exceptions.InvalidConstraintError,
col.create, self.table, primary_key_name=True)
col.create(self.table, primary_key_name='data_pkey')
# check if constraint was added (cannot test on objects)
self.table.insert(values={'data': 4}).execute()
try:
self.table.insert(values={'data': 4}).execute()
except (sqlalchemy.exc.IntegrityError,
sqlalchemy.exc.ProgrammingError):
pass
else:
self.fail()
col.drop()
@fixture.usedb(not_supported='mysql')
def test_check(self):
"""Can create columns with check constraint"""
col = Column('data',
Integer,
sqlalchemy.schema.CheckConstraint('data > 4'))
col.create(self.table)
# check if constraint was added (cannot test on objects)
self.table.insert(values={'data': 5}).execute()
try:
self.table.insert(values={'data': 3}).execute()
except (sqlalchemy.exc.IntegrityError,
sqlalchemy.exc.ProgrammingError):
pass
else:
self.fail()
col.drop()
@fixture.usedb(not_supported='sqlite')
def test_unique(self):
"""Can create columns with unique constraint"""
self.assertRaises(changeset.exceptions.InvalidConstraintError,
Column('data', Integer, unique=True).create, self.table)
col = Column('data', Integer)
col.create(self.table, unique_name='data_unique')
# check if constraint was added (cannot test on objects)
self.table.insert(values={'data': 5}).execute()
try:
self.table.insert(values={'data': 5}).execute()
except (sqlalchemy.exc.IntegrityError,
sqlalchemy.exc.ProgrammingError):
pass
else:
self.fail()
col.drop(self.table)
@fixture.usedb()
def test_index(self):
"""Can create columns with indexes"""
self.assertRaises(changeset.exceptions.InvalidConstraintError,
Column('data', Integer).create, self.table, index_name=True)
col = Column('data', Integer)
col.create(self.table, index_name='ix_data')
# check if index was added
self.table.insert(values={'data': 5}).execute()
try:
self.table.insert(values={'data': 5}).execute()
except (sqlalchemy.exc.IntegrityError,
sqlalchemy.exc.ProgrammingError):
pass
else:
self.fail()
Index('ix_data', col).drop(bind=self.engine)
col.drop()
@fixture.usedb()
def test_server_defaults(self):
"""Can create columns with server_default values"""
col = Column('data', String(244), server_default='foobar')
col.create(self.table)
self.table.insert(values={'id': 10}).execute()
row = self.table.select(autocommit=True).execute().fetchone()
self.assertEqual(u'foobar', row['data'])
col.drop()
# TODO: test sequence
# TODO: test quoting
# TODO: test non-autoname constraints
class TestRename(fixture.DB):
"""Tests for table and index rename methods"""
level = fixture.DB.CONNECT
meta = MetaData()
def _setup(self, url):
super(TestRename, self)._setup(url)
self.meta.bind = self.engine
@fixture.usedb(not_supported='firebird')
def test_rename_table(self):
"""Tables can be renamed"""
c_name = 'col_1'
table_name1 = 'name_one'
table_name2 = 'name_two'
index_name1 = 'x' + table_name1
index_name2 = 'x' + table_name2
self.meta.clear()
self.column = Column(c_name, Integer)
self.table = Table(table_name1, self.meta, self.column)
self.index = Index(index_name1, self.column, unique=False)
if self.engine.has_table(self.table.name):
self.table.drop()
if self.engine.has_table(table_name2):
tmp = Table(table_name2, self.meta, autoload=True)
tmp.drop()
tmp.deregister()
del tmp
self.table.create()
def assert_table_name(expected, skip_object_check=False):
"""Refresh a table via autoload
SA has changed some since this test was written; we now need to do
meta.clear() upon reloading a table - clear all rather than a
select few. So, this works only if we're working with one table at
a time (else, others will vanish too).
"""
if not skip_object_check:
# Table object check
self.assertEquals(self.table.name,expected)
newname = self.table.name
else:
# we know the object's name isn't consistent: just assign it
newname = expected
# Table DB check
self.meta.clear()
self.table = Table(newname, self.meta, autoload=True)
self.assertEquals(self.table.name, expected)
def assert_index_name(expected, skip_object_check=False):
if not skip_object_check:
# Index object check
self.assertEquals(self.index.name, expected)
else:
# object is inconsistent
self.index.name = expected
# TODO: Index DB check
try:
# Table renames
assert_table_name(table_name1)
rename_table(self.table, table_name2)
assert_table_name(table_name2)
self.table.rename(table_name1)
assert_table_name(table_name1)
# test by just the string
rename_table(table_name1, table_name2, engine=self.engine)
assert_table_name(table_name2, True) # object not updated
# Index renames
if self.url.startswith('sqlite') or self.url.startswith('mysql'):
self.assertRaises(changeset.exceptions.NotSupportedError,
self.index.rename, index_name2)
else:
assert_index_name(index_name1)
rename_index(self.index, index_name2, engine=self.engine)
assert_index_name(index_name2)
self.index.rename(index_name1)
assert_index_name(index_name1)
# test by just the string
rename_index(index_name1, index_name2, engine=self.engine)
assert_index_name(index_name2, True)
finally:
if self.table.exists():
self.table.drop()
class TestColumnChange(fixture.DB):
level = fixture.DB.CONNECT
table_name = 'tmp_colchange'
def _setup(self, url):
super(TestColumnChange, self)._setup(url)
self.meta = MetaData(self.engine)
self.table = Table(self.table_name, self.meta,
Column('id', Integer, primary_key=True),
Column('data', String(40), server_default=DefaultClause("tluafed"),
nullable=True),
)
if self.table.exists():
self.table.drop()
try:
self.table.create()
except sqlalchemy.exceptions.SQLError, e:
# SQLite: database schema has changed
if not self.url.startswith('sqlite://'):
raise
def _teardown(self):
if self.table.exists():
try:
self.table.drop(self.engine)
except sqlalchemy.exceptions.SQLError,e:
# SQLite: database schema has changed
if not self.url.startswith('sqlite://'):
raise
super(TestColumnChange, self)._teardown()
@fixture.usedb()
def test_rename(self):
"""Can rename a column"""
def num_rows(col, content):
return len(list(self.table.select(col == content).execute()))
# Table content should be preserved in changed columns
content = "fgsfds"
self.engine.execute(self.table.insert(), data=content, id=42)
self.assertEquals(num_rows(self.table.c.data, content), 1)
# ...as a function, given a column object and the new name
alter_column('data', name='data2', table=self.table)
self.refresh_table()
alter_column(self.table.c.data2, name='atad')
self.refresh_table(self.table.name)
self.assert_('data' not in self.table.c.keys())
self.assert_('atad' in self.table.c.keys())
self.assertEquals(num_rows(self.table.c.atad, content), 1)
# ...as a method, given a new name
self.table.c.atad.alter(name='data')
self.refresh_table(self.table.name)
self.assert_('atad' not in self.table.c.keys())
self.table.c.data # Should not raise exception
self.assertEquals(num_rows(self.table.c.data, content), 1)
# ...as a function, given a new object
col = Column('atad', String(40), server_default=self.table.c.data.server_default)
alter_column(self.table.c.data, col)
self.refresh_table(self.table.name)
self.assert_('data' not in self.table.c.keys())
self.table.c.atad # Should not raise exception
self.assertEquals(num_rows(self.table.c.atad, content), 1)
# ...as a method, given a new object
col = Column('data', String(40), server_default=self.table.c.atad.server_default)
self.table.c.atad.alter(col)
self.refresh_table(self.table.name)
self.assert_('atad' not in self.table.c.keys())
self.table.c.data # Should not raise exception
self.assertEquals(num_rows(self.table.c.data,content), 1)
@fixture.usedb()
def test_type(self):
"""Can change a column's type"""
# Entire column definition given
self.table.c.data.alter(Column('data', String(42)))
self.refresh_table(self.table.name)
self.assert_(isinstance(self.table.c.data.type, String))
if self.engine.name == 'firebird':
self.assertEquals(self.table.c.data.type.length, 42 * 4)
else:
self.assertEquals(self.table.c.data.type.length, 42)
# Just the new type
self.table.c.data.alter(type=String(43))
self.refresh_table(self.table.name)
self.assert_(isinstance(self.table.c.data.type, String))
if self.engine.name == 'firebird':
self.assertEquals(self.table.c.data.type.length, 43 * 4)
else:
self.assertEquals(self.table.c.data.type.length, 43)
# Different type
self.assert_(isinstance(self.table.c.id.type, Integer))
self.assertEquals(self.table.c.id.nullable, False)
if not self.engine.name == 'firebird':
self.table.c.id.alter(type=String(20))
self.assertEquals(self.table.c.id.nullable, False)
self.refresh_table(self.table.name)
self.assert_(isinstance(self.table.c.id.type, String))
@fixture.usedb()
def test_default(self):
"""Can change a column's server_default value (DefaultClauses only)
Only DefaultClauses are changed here: others are managed by the
application / by SA
"""
self.assertEquals(self.table.c.data.server_default.arg, 'tluafed')
# Just the new default
default = 'my_default'
self.table.c.data.alter(server_default=DefaultClause(default))
self.refresh_table(self.table.name)
#self.assertEquals(self.table.c.data.server_default.arg,default)
# TextClause returned by autoload
self.assert_(default in str(self.table.c.data.server_default.arg))
self.engine.execute(self.table.insert(), id=12)
row = self.table.select(autocommit=True).execute().fetchone()
self.assertEqual(row['data'], default)
# Column object
default = 'your_default'
self.table.c.data.alter(Column('data', String(40), server_default=DefaultClause(default)))
self.refresh_table(self.table.name)
self.assert_(default in str(self.table.c.data.server_default.arg))
# Drop/remove default
self.table.c.data.alter(server_default=None)
self.assertEqual(self.table.c.data.server_default, None)
self.refresh_table(self.table.name)
# server_default isn't necessarily None for Oracle
#self.assert_(self.table.c.data.server_default is None,self.table.c.data.server_default)
self.engine.execute(self.table.insert(), id=11)
row = self.table.select(self.table.c.id == 11, autocommit=True).execute().fetchone()
self.assert_(row['data'] is None, row['data'])
@fixture.usedb(not_supported='firebird')
def test_null(self):
"""Can change a column's null constraint"""
self.assertEquals(self.table.c.data.nullable, True)
# Column object
self.table.c.data.alter(Column('data', String(40), nullable=False))
self.table.nullable = None
self.refresh_table(self.table.name)
self.assertEquals(self.table.c.data.nullable, False)
# Just the new status
self.table.c.data.alter(nullable=True)
self.refresh_table(self.table.name)
self.assertEquals(self.table.c.data.nullable, True)
@fixture.usedb()
def test_alter_metadata(self):
"""Test if alter_metadata is respected"""
self.table.c.data.alter(Column('data', String(100)))
self.assert_(isinstance(self.table.c.data.type, String))
self.assertEqual(self.table.c.data.type.length, 100)
# nothing should change
self.table.c.data.alter(Column('data', String(200)), alter_metadata=False)
self.assert_(isinstance(self.table.c.data.type, String))
self.assertEqual(self.table.c.data.type.length, 100)
@fixture.usedb()
def test_alter_returns_delta(self):
"""Test if alter constructs return delta"""
delta = self.table.c.data.alter(Column('data', String(100)))
self.assert_('type' in delta)
@fixture.usedb()
def test_alter_all(self):
"""Tests all alter changes at one time"""
# test for each db separately
# since currently some don't support everything
# test pre settings
self.assertEqual(self.table.c.data.nullable, True)
self.assertEqual(self.table.c.data.server_default.arg, 'tluafed')
self.assertEqual(self.table.c.data.name, 'data')
self.assertTrue(isinstance(self.table.c.data.type, String))
self.assertEqual(self.table.c.data.type.length, 40)
kw = dict(nullable=False,
server_default='foobar',
name='data_new',
type=String(50),
alter_metadata=True)
if self.engine.name == 'firebird':
del kw['nullable']
self.table.c.data.alter(**kw)
# test altered objects
self.assertEqual(self.table.c.data.server_default.arg, 'foobar')
if not self.engine.name == 'firebird':
self.assertEqual(self.table.c.data.nullable, False)
self.assertEqual(self.table.c.data.name, 'data_new')
self.assertEqual(self.table.c.data.type.length, 50)
self.refresh_table(self.table.name)
# test post settings
if not self.engine.name == 'firebird':
self.assertEqual(self.table.c.data_new.nullable, False)
self.assertEqual(self.table.c.data_new.name, 'data_new')
self.assertTrue(isinstance(self.table.c.data_new.type, String))
self.assertEqual(self.table.c.data_new.type.length, 50)
# insert data and assert default
self.table.insert(values={'id': 10}).execute()
row = self.table.select(autocommit=True).execute().fetchone()
self.assertEqual(u'foobar', row['data_new'])
class TestColumnDelta(fixture.DB):
"""Tests ColumnDelta class"""
level = fixture.DB.CONNECT
table_name = 'tmp_coldelta'
table_int = 0
def _setup(self, url):
super(TestColumnDelta, self)._setup(url)
self.meta = MetaData()
self.table = Table(self.table_name, self.meta,
Column('ids', String(10)),
)
self.meta.bind = self.engine
if self.engine.has_table(self.table.name):
self.table.drop()
self.table.create()
def _teardown(self):
if self.engine.has_table(self.table.name):
self.table.drop()
self.meta.clear()
super(TestColumnDelta,self)._teardown()
def mkcol(self, name='id', type=String, *p, **k):
return Column(name, type, *p, **k)
def verify(self, expected, original, *p, **k):
self.delta = ColumnDelta(original, *p, **k)
result = self.delta.keys()
result.sort()
self.assertEquals(expected, result)
return self.delta
def test_deltas_two_columns(self):
"""Testing ColumnDelta with two columns"""
col_orig = self.mkcol(primary_key=True)
col_new = self.mkcol(name='ids', primary_key=True)
self.verify([], col_orig, col_orig)
self.verify(['name'], col_orig, col_orig, 'ids')
self.verify(['name'], col_orig, col_orig, name='ids')
self.verify(['name'], col_orig, col_new)
self.verify(['name', 'type'], col_orig, col_new, type=String)
# Type comparisons
self.verify([], self.mkcol(type=String), self.mkcol(type=String))
self.verify(['type'], self.mkcol(type=String), self.mkcol(type=Integer))
self.verify(['type'], self.mkcol(type=String), self.mkcol(type=String(42)))
self.verify([], self.mkcol(type=String(42)), self.mkcol(type=String(42)))
self.verify(['type'], self.mkcol(type=String(24)), self.mkcol(type=String(42)))
self.verify(['type'], self.mkcol(type=String(24)), self.mkcol(type=Text(24)))
# Other comparisons
self.verify(['primary_key'], self.mkcol(nullable=False), self.mkcol(primary_key=True))
# PK implies nullable=False
self.verify(['nullable', 'primary_key'], self.mkcol(nullable=True), self.mkcol(primary_key=True))
self.verify([], self.mkcol(primary_key=True), self.mkcol(primary_key=True))
self.verify(['nullable'], self.mkcol(nullable=True), self.mkcol(nullable=False))
self.verify([], self.mkcol(nullable=True), self.mkcol(nullable=True))
self.verify([], self.mkcol(server_default=None), self.mkcol(server_default=None))
self.verify([], self.mkcol(server_default='42'), self.mkcol(server_default='42'))
# test server default
delta = self.verify(['server_default'], self.mkcol(), self.mkcol('id', String, DefaultClause('foobar')))
self.assertEqual(delta['server_default'].arg, 'foobar')
self.verify([], self.mkcol(server_default='foobar'), self.mkcol('id', String, DefaultClause('foobar')))
self.verify(['type'], self.mkcol(server_default='foobar'), self.mkcol('id', Text, DefaultClause('foobar')))
# test alter_metadata
col = self.mkcol(server_default='foobar')
self.verify(['type'], col, self.mkcol('id', Text, DefaultClause('foobar')), alter_metadata=True)
self.assert_(isinstance(col.type, Text))
col = self.mkcol()
self.verify(['name', 'server_default', 'type'], col, self.mkcol('beep', Text, DefaultClause('foobar')), alter_metadata=True)
self.assert_(isinstance(col.type, Text))
self.assertEqual(col.name, 'beep')
self.assertEqual(col.server_default.arg, 'foobar')
col = self.mkcol()
self.verify(['name', 'server_default', 'type'], col, self.mkcol('beep', Text, DefaultClause('foobar')), alter_metadata=False)
self.assertFalse(isinstance(col.type, Text))
self.assertNotEqual(col.name, 'beep')
self.assertFalse(col.server_default)
@fixture.usedb()
def test_deltas_zero_columns(self):
"""Testing ColumnDelta with zero columns"""
self.verify(['name'], 'ids', table=self.table, name='hey')
# test reflection
self.verify(['type'], 'ids', table=self.table.name, type=String(80), engine=self.engine)
self.verify(['type'], 'ids', table=self.table.name, type=String(80), metadata=self.meta)
# check if alter_metadata is respected
self.meta.clear()
delta = self.verify(['type'], 'ids', table=self.table.name, type=String(80), alter_metadata=True, metadata=self.meta)
self.assert_(self.table.name in self.meta)
self.assertEqual(delta.result_column.type.length, 80)
self.assertEqual(self.meta.tables.get(self.table.name).c.ids.type.length, 80)
self.meta.clear()
self.verify(['type'], 'ids', table=self.table.name, type=String(80), alter_metadata=False, engine=self.engine)
self.assert_(self.table.name not in self.meta)
self.meta.clear()
self.verify(['type'], 'ids', table=self.table.name, type=String(80), alter_metadata=False, metadata=self.meta)
self.assert_(self.table.name not in self.meta)
# test defaults
self.meta.clear()
self.verify(['server_default'], 'ids', table=self.table.name, server_default='foobar', alter_metadata=True, metadata=self.meta)
self.assertEqual(self.meta.tables.get(self.table.name).c.ids.server_default.arg, 'foobar')
# test missing parameters
self.assertRaises(ValueError, ColumnDelta, table=self.table.name)
self.assertRaises(ValueError, ColumnDelta, 'ids', table=self.table.name, alter_metadata=True)
self.assertRaises(ValueError, ColumnDelta, 'ids', table=self.table.name, alter_metadata=False)
def test_deltas_one_column(self):
"""Testing ColumnDelta with one column"""
col_orig = self.mkcol(primary_key=True)
self.verify([], col_orig)
self.verify(['name'], col_orig, 'ids')
# Parameters are always executed, even if they're 'unchanged'
# (We can't assume given column is up-to-date)
self.verify(['name', 'primary_key', 'type'], col_orig, 'id', Integer, primary_key=True)
self.verify(['name', 'primary_key', 'type'], col_orig, name='id', type=Integer, primary_key=True)
# Change name, given an up-to-date definition and the current name
delta = self.verify(['name'], col_orig, name='blah')
self.assertEquals(delta.get('name'), 'blah')
self.assertEquals(delta.current_name, 'id')
# check if alter_metadata is respected
col_orig = self.mkcol(primary_key=True)
self.verify(['name', 'type'], col_orig, name='id12', type=Text, alter_metadata=True)
self.assert_(isinstance(col_orig.type, Text))
self.assertEqual(col_orig.name, 'id12')
col_orig = self.mkcol(primary_key=True)
self.verify(['name', 'type'], col_orig, name='id12', type=Text, alter_metadata=False)
self.assert_(isinstance(col_orig.type, String))
self.assertEqual(col_orig.name, 'id')
# test server default
col_orig = self.mkcol(primary_key=True)
delta = self.verify(['server_default'], col_orig, DefaultClause('foobar'))
self.assertEqual(delta['server_default'].arg, 'foobar')
delta = self.verify(['server_default'], col_orig, server_default=DefaultClause('foobar'))
self.assertEqual(delta['server_default'].arg, 'foobar')
# no change
col_orig = self.mkcol(server_default=DefaultClause('foobar'))
delta = self.verify(['type'], col_orig, DefaultClause('foobar'), type=PickleType)
self.assert_(isinstance(delta.result_column.type, PickleType))
# TODO: test server on update
# TODO: test bind metadata
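The class under test reduces two column definitions to a mapping of the attributes that changed; conceptually it behaves like a dict diff. A toy illustration of that idea (plain dicts, not the real ColumnDelta API):

```python
# Toy version: compare attribute dicts and keep only what changed.
old = {'name': 'id',  'type': 'String(40)', 'nullable': True}
new = {'name': 'ids', 'type': 'String(40)', 'nullable': True}
delta = dict((k, v) for k, v in new.items() if old.get(k) != v)
assert delta == {'name': 'ids'}  # only the rename is reported
```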


@ -0,0 +1,278 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from sqlalchemy import *
from sqlalchemy.util import *
from sqlalchemy.exc import *
from migrate.changeset import *
from migrate.changeset.exceptions import *
from test import fixture
class CommonTestConstraint(fixture.DB):
"""helper functions to test constraints.
we just create a fresh new table and make sure everything is
as required.
"""
def _setup(self, url):
super(CommonTestConstraint, self)._setup(url)
self._create_table()
def _teardown(self):
if hasattr(self, 'table') and self.engine.has_table(self.table.name):
self.table.drop()
super(CommonTestConstraint, self)._teardown()
def _create_table(self):
self._connect(self.url)
self.meta = MetaData(self.engine)
self.tablename = 'mytable'
self.table = Table(self.tablename, self.meta,
Column('id', Integer, nullable=False),
Column('fkey', Integer, nullable=False),
mysql_engine='InnoDB')
if self.engine.has_table(self.table.name):
self.table.drop()
self.table.create()
# make sure we start at zero
self.assertEquals(len(self.table.primary_key), 0)
self.assert_(isinstance(self.table.primary_key,
schema.PrimaryKeyConstraint), self.table.primary_key.__class__)
class TestConstraint(CommonTestConstraint):
level = fixture.DB.CONNECT
def _define_pk(self, *cols):
# Add a pk by creating a PK constraint
if (self.engine.name in ('oracle', 'firebird')):
# Can't drop Oracle PKs without an explicit name
pk = PrimaryKeyConstraint(table=self.table, name='temp_pk_key', *cols)
else:
pk = PrimaryKeyConstraint(table=self.table, *cols)
self.assertEquals(list(pk.columns), list(cols))
pk.create()
self.refresh_table()
if not self.url.startswith('sqlite'):
self.assertEquals(list(self.table.primary_key), list(cols))
# Drop the PK constraint
#if (self.engine.name in ('oracle', 'firebird')):
# # Apparently Oracle PK names aren't introspected
# pk.name = self.table.primary_key.name
pk.drop()
self.refresh_table()
self.assertEquals(len(self.table.primary_key), 0)
self.assert_(isinstance(self.table.primary_key, schema.PrimaryKeyConstraint))
return pk
@fixture.usedb(not_supported='sqlite')
def test_define_fk(self):
"""FK constraints can be defined, created, and dropped"""
# FK target must be unique
pk = PrimaryKeyConstraint(self.table.c.id, table=self.table, name="pkid")
pk.create()
# Add a FK by creating a FK constraint
self.assertEquals(self.table.c.fkey.foreign_keys._list, [])
fk = ForeignKeyConstraint([self.table.c.fkey],
[self.table.c.id],
name="fk_id_fkey",
ondelete="CASCADE")
self.assertNotEquals(self.table.c.fkey.foreign_keys._list, [])
self.assertEquals(list(fk.columns), [self.table.c.fkey])
self.assertEquals([e.column for e in fk.elements], [self.table.c.id])
self.assertEquals(list(fk.referenced), [self.table.c.id])
if self.url.startswith('mysql'):
# MySQL FKs need an index
index = Index('index_name', self.table.c.fkey)
index.create()
fk.create()
# test for ondelete/onupdate
fkey = self.table.c.fkey.foreign_keys._list[0]
self.assertEquals(fkey.ondelete, "CASCADE")
# TODO: test on real db if it was set
self.refresh_table()
self.assertNotEquals(self.table.c.fkey.foreign_keys._list, [])
fk.drop()
self.refresh_table()
self.assertEquals(self.table.c.fkey.foreign_keys._list, [])
@fixture.usedb()
def test_define_pk(self):
"""PK constraints can be defined, created, and dropped"""
self._define_pk(self.table.c.fkey)
@fixture.usedb()
def test_define_pk_multi(self):
"""Multicolumn PK constraints can be defined, created, and dropped"""
self._define_pk(self.table.c.id, self.table.c.fkey)
@fixture.usedb()
def test_drop_cascade(self):
"""Drop constraint cascaded"""
pk = PrimaryKeyConstraint('fkey', table=self.table, name="id_pkey")
pk.create()
self.refresh_table()
# Drop the PK constraint forcing cascade
try:
pk.drop(cascade=True)
except NotSupportedError:
if self.engine.name == 'firebird':
pass
# TODO: add real assertion if it was added
@fixture.usedb(supported=['mysql'])
def test_fail_mysql_check_constraints(self):
"""Check constraints raise NotSupported for mysql on drop"""
cons = CheckConstraint('id > 3', name="id_check", table=self.table)
cons.create()
self.refresh_table()
try:
cons.drop()
except NotSupportedError:
pass
else:
self.fail()
@fixture.usedb(not_supported=['sqlite', 'mysql'])
def test_named_check_constraints(self):
"""Check constraints can be defined, created, and dropped"""
self.assertRaises(InvalidConstraintError, CheckConstraint, 'id > 3')
cons = CheckConstraint('id > 3', name="id_check", table=self.table)
cons.create()
self.refresh_table()
self.table.insert(values={'id': 4, 'fkey': 1}).execute()
try:
self.table.insert(values={'id': 1, 'fkey': 1}).execute()
except (IntegrityError, ProgrammingError):
pass
else:
self.fail()
# Remove the name, drop the constraint; it should succeed
cons.drop()
self.refresh_table()
self.table.insert(values={'id': 2, 'fkey': 2}).execute()
self.table.insert(values={'id': 1, 'fkey': 2}).execute()
class TestAutoname(CommonTestConstraint):
"""Every method tests for a type of constraint wether it can autoname
itself and if you can pass object instance and names to classes.
"""
level = fixture.DB.CONNECT
@fixture.usedb(not_supported=['oracle', 'firebird'])
def test_autoname_pk(self):
"""PrimaryKeyConstraints can guess their name if None is given"""
# Don't supply a name; it should create one
cons = PrimaryKeyConstraint(self.table.c.id)
cons.create()
self.refresh_table()
if not self.url.startswith('sqlite'):
# TODO: test for index for sqlite
self.assertEquals(list(cons.columns), list(self.table.primary_key))
# Remove the name, drop the constraint; it should succeed
cons.name = None
cons.drop()
self.refresh_table()
self.assertEquals(list(), list(self.table.primary_key))
# test string names
cons = PrimaryKeyConstraint('id', table=self.table)
cons.create()
self.refresh_table()
if not self.url.startswith('sqlite'):
# TODO: test for index for sqlite
self.assertEquals(list(cons.columns), list(self.table.primary_key))
cons.name = None
cons.drop()
@fixture.usedb(not_supported=['oracle', 'sqlite', 'firebird'])
def test_autoname_fk(self):
"""ForeignKeyConstraints can guess their name if None is given"""
cons = PrimaryKeyConstraint(self.table.c.id)
cons.create()
cons = ForeignKeyConstraint([self.table.c.fkey], [self.table.c.id])
cons.create()
self.refresh_table()
self.assert_(self.table.c.fkey.foreign_keys._list[0].column is self.table.c.id)
# Remove the name, drop the constraint; it should succeed
cons.name = None
cons.drop()
self.refresh_table()
self.assertEquals(self.table.c.fkey.foreign_keys._list, list())
# test string names
cons = ForeignKeyConstraint(['fkey'], ['%s.id' % self.tablename], table=self.table)
cons.create()
self.refresh_table()
self.assert_(self.table.c.fkey.foreign_keys._list[0].column is self.table.c.id)
# Remove the name, drop the constraint; it should succeed
cons.name = None
cons.drop()
@fixture.usedb(not_supported=['oracle', 'sqlite', 'mysql'])
def test_autoname_check(self):
"""CheckConstraints can guess their name if None is given"""
cons = CheckConstraint('id > 3', columns=[self.table.c.id])
cons.create()
self.refresh_table()
if not self.engine.name == 'mysql':
self.table.insert(values={'id': 4, 'fkey': 1}).execute()
try:
self.table.insert(values={'id': 1, 'fkey': 2}).execute()
except (IntegrityError, ProgrammingError):
pass
else:
self.fail()
# Remove the name, drop the constraint; it should succeed
cons.name = None
cons.drop()
self.refresh_table()
self.table.insert(values={'id': 2, 'fkey': 2}).execute()
self.table.insert(values={'id': 1, 'fkey': 3}).execute()
@fixture.usedb(not_supported=['oracle', 'sqlite'])
def test_autoname_unique(self):
"""UniqueConstraints can guess their name if None is given"""
cons = UniqueConstraint(self.table.c.fkey)
cons.create()
self.refresh_table()
self.table.insert(values={'fkey': 4, 'id': 1}).execute()
try:
self.table.insert(values={'fkey': 4, 'id': 2}).execute()
except (IntegrityError, ProgrammingError):
pass
else:
self.fail()
# Remove the name, drop the constraint; it should succeed
cons.name = None
cons.drop()
self.refresh_table()
self.table.insert(values={'fkey': 4, 'id': 2}).execute()
self.table.insert(values={'fkey': 4, 'id': 1}).execute()

test/fixture/__init__.py Normal file

@ -0,0 +1,66 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
import sys
# Append test method name, etc. to descriptions automatically.
# Yes, this is ugly, but it's the simplest way...
def getDescription(self, test):
ret = str(test)
if self.descriptions:
return test.shortDescription() or ret
return ret
unittest._TextTestResult.getDescription = getDescription
class Result(unittest._TextTestResult):
# test description may be changed as we go; store the description at
# exception-time and print later
def __init__(self,*p,**k):
super(Result,self).__init__(*p,**k)
self.desc=dict()
def _addError(self,test,err,errs):
test,err=errs.pop()
errdata=(test,err,self.getDescription(test))
errs.append(errdata)
def addFailure(self,test,err):
super(Result,self).addFailure(test,err)
self._addError(test,err,self.failures)
def addError(self,test,err):
super(Result,self).addError(test,err)
self._addError(test,err,self.errors)
def printErrorList(self, flavour, errors):
# Copied from unittest.py
#for test, err in errors:
for errdata in errors:
test,err,desc=errdata
self.stream.writeln(self.separator1)
#self.stream.writeln("%s: %s" % (flavour,self.getDescription(test)))
self.stream.writeln("%s: %s" % (flavour,desc or self.getDescription(test)))
self.stream.writeln(self.separator2)
self.stream.writeln("%s" % err)
class Runner(unittest.TextTestRunner):
def _makeResult(self):
return Result(self.stream,self.descriptions,self.verbosity)
def suite(imports):
return unittest.TestLoader().loadTestsFromNames(imports)
def main(imports=None):
if imports:
global suite
suite = suite(imports)
defaultTest='fixture.suite'
else:
defaultTest=None
return unittest.TestProgram(defaultTest=defaultTest,\
testRunner=Runner(verbosity=1))
from base import Base
from pathed import Pathed
from shell import Shell
from database import DB,usedb

test/fixture/base.py Normal file

@ -0,0 +1,35 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import unittest
from nose.tools import raises, eq_
class Base(unittest.TestCase):
def setup_method(self,func=None):
self.setUp()
def teardown_method(self,func=None):
self.tearDown()
def assertEqualsIgnoreWhitespace(self, v1, v2):
"""Compares two strings that should be\
identical except for whitespace
"""
def strip_whitespace(s):
return re.sub(r'\s', '', s)
line1 = strip_whitespace(v1)
line2 = strip_whitespace(v2)
self.assertEquals(line1, line2, "%s != %s" % (v1, v2))
def ignoreErrors(self, func, *p,**k):
"""Call a function, ignoring any exceptions"""
try:
func(*p,**k)
except Exception:
pass

test/fixture/database.py Normal file

@ -0,0 +1,144 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from decorator import decorator
from sqlalchemy import create_engine, Table, MetaData
from sqlalchemy.orm import create_session
from test.fixture.base import Base
from test.fixture.pathed import Pathed
def readurls():
"""read URLs from config file return a list"""
filename = 'test_db.cfg'
ret = list()
# TODO: remove tmpfile since sqlite can store db in memory
tmpfile = Pathed.tmp()
fullpath = os.path.join(os.curdir, filename)
try:
fd = open(fullpath)
except IOError:
raise IOError("""You must specify the databases to use for testing!
Copy %(filename)s.tmpl to %(filename)s and edit your database URLs.""" % locals())
for line in fd:
if line.startswith('#'):
continue
line = line.replace('__tmp__', tmpfile).strip()
ret.append(line)
fd.close()
return ret
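The loop above amounts to a small line filter; its behaviour on a sample config (with a placeholder standing in for `Pathed.tmp()`) looks like:

```python
# Sample test_db.cfg contents: lines starting with '#' are comments,
# every other line is a database URL, and '__tmp__' marks a temp-file slot.
lines = [
    '# databases used for the test run',
    'sqlite:///__tmp__',
    'postgresql://user:pass@localhost/test',
]
tmpfile = '/tmp/migrate_test.db'  # placeholder for Pathed.tmp()
urls = [line.replace('__tmp__', tmpfile).strip()
        for line in lines if not line.startswith('#')]
# urls[0] == 'sqlite:////tmp/migrate_test.db'
```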
def is_supported(url, supported, not_supported):
db = url.split(':', 1)[0]
if supported is not None:
if isinstance(supported, basestring):
return supported == db
else:
return db in supported
elif not_supported is not None:
if isinstance(not_supported, basestring):
return not_supported != db
else:
return not (db in not_supported)
return True
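The scheme check above is easy to exercise in isolation; a minimal sketch, rendered for Python 3 (the original targets Python 2's `basestring`):

```python
def is_supported(url, supported=None, not_supported=None):
    """Match a database URL's scheme against an allow/deny spec.

    Each spec may be a single scheme name or a sequence of names;
    with neither given, every database is allowed.
    """
    db = url.split(':', 1)[0]  # 'postgresql://...' -> 'postgresql'
    if supported is not None:
        return db == supported if isinstance(supported, str) else db in supported
    if not_supported is not None:
        return db != not_supported if isinstance(not_supported, str) else db not in not_supported
    return True
```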
# we make the engines global, which should make the tests run a bit faster
urls = readurls()
engines = dict([(url, create_engine(url, echo=True)) for url in urls])
def usedb(supported=None, not_supported=None):
"""Decorates tests to be run with a database connection
These tests are run once for each available database
@param supported: run tests for ONLY these databases
@param not_supported: run tests for all databases EXCEPT these
If both supported and not_supported are empty, all dbs are assumed
to be supported
"""
if supported is not None and not_supported is not None:
raise AssertionError("Can't specify both supported and not_supported in fixture.db()")
my_urls = [url for url in urls if is_supported(url, supported, not_supported)]
@decorator
def dec(f, self, *a, **kw):
for url in my_urls:
self._setup(url)
f(self, *a, **kw)
self._teardown()
return dec
class DB(Base):
# Constants: connection level
NONE = 0 # No connection; just set self.url
CONNECT = 1 # Connect; no transaction
TXN = 2 # Everything in a transaction
level = TXN
def _engineInfo(self, url=None):
if url is None:
url = self.url
return url
def _setup(self, url):
self._connect(url)
def _teardown(self):
self._disconnect()
def _connect(self, url):
self.url = url
self.engine = engines[url]
self.meta = MetaData(bind=self.engine)
if self.level < self.CONNECT:
return
#self.conn = self.engine.connect()
self.session = create_session(bind=self.engine)
if self.level < self.TXN:
return
self.txn = self.session.begin()
#self.txn.add(self.engine)
def _disconnect(self):
if hasattr(self, 'txn'):
self.txn.rollback()
if hasattr(self, 'session'):
self.session.close()
#if hasattr(self,'conn'):
# self.conn.close()
def _supported(self, url):
db = url.split(':',1)[0]
func = getattr(self, self._TestCase__testMethodName)
if hasattr(func, 'supported'):
return db in func.supported
if hasattr(func, 'not_supported'):
return not (db in func.not_supported)
# Neither list assigned; assume all are supported
return True
def _not_supported(self, url):
return not self._supported(url)
def refresh_table(self, name=None):
"""Reload the table from the database
Assumes we're working with only a single table, self.table, and
metadata self.meta
Working w/ multiple tables is not possible, as tables can only be
reloaded with meta.clear()
"""
if name is None:
name = self.table.name
self.meta.clear()
self.table = Table(name, self.meta, autoload=True)

test/fixture/pathed.py Normal file

@ -0,0 +1,77 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import shutil
import tempfile
from test.fixture import base
class Pathed(base.Base):
# Temporary files
_tmpdir = tempfile.mkdtemp()
def setUp(self):
super(Pathed, self).setUp()
self.temp_usable_dir = tempfile.mkdtemp()
sys.path.append(self.temp_usable_dir)
def tearDown(self):
super(Pathed, self).tearDown()
try:
sys.path.remove(self.temp_usable_dir)
except ValueError:
pass # path may already have been removed
Pathed.purge(self.temp_usable_dir)
@classmethod
def _tmp(cls, prefix='', suffix=''):
"""Generate a temporary file name that doesn't exist
All filenames are generated inside a temporary directory created by
tempfile.mkdtemp(); only the creating user has access to this directory.
It should be safe to return a nonexistent temp filename in this
directory, unless the user is messing with their own files.
"""
file, ret = tempfile.mkstemp(suffix,prefix,cls._tmpdir)
os.close(file)
os.remove(ret)
return ret
@classmethod
def tmp(cls, *p, **k):
return cls._tmp(*p, **k)
@classmethod
def tmp_py(cls, *p, **k):
return cls._tmp(suffix='.py', *p, **k)
@classmethod
def tmp_sql(cls, *p, **k):
return cls._tmp(suffix='.sql', *p, **k)
@classmethod
def tmp_named(cls, name):
return os.path.join(cls._tmpdir, name)
@classmethod
def tmp_repos(cls, *p, **k):
return cls._tmp(*p, **k)
@classmethod
def purge(cls, path):
"""Removes this path if it exists, in preparation for tests
Careful - all tests should take place in /tmp.
We don't want to accidentally wipe stuff out...
"""
if os.path.exists(path):
if os.path.isdir(path):
shutil.rmtree(path)
else:
os.remove(path)
if path.endswith('.py'):
pyc = path + 'c'
if os.path.exists(pyc):
os.remove(pyc)
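The `_tmp` helper above relies on a small trick: `tempfile.mkstemp` atomically creates a unique file, and deleting it immediately leaves behind a name that is known not to exist inside the private temp directory. The pattern in isolation:

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()  # private directory, accessible only to the user
fd, path = tempfile.mkstemp(suffix='.sql', prefix='test_', dir=tmpdir)
os.close(fd)                 # mkstemp hands back an open file descriptor
os.remove(path)              # keep only the unique name
# `path` is now a fresh, nonexistent filename the test can create later
assert not os.path.exists(path)
```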

test/fixture/shell.py Normal file

@ -0,0 +1,56 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import shutil
import sys
import types
from test.fixture.pathed import *
class Shell(Pathed):
"""Base class for command line tests"""
def execute(self, command, *p, **k):
"""Return the fd of a command; can get output (stdout/err) and exitcode"""
# We might be passed a file descriptor for some reason; if so, just return it
if isinstance(command, types.FileType):
return command
# Redirect stderr to stdout
# This is a bit of a hack, but I've not found a better way
py_path = os.environ.get('PYTHONPATH', '')
py_path_list = py_path.split(':')
py_path_list.append(os.path.abspath('.'))
os.environ['PYTHONPATH'] = ':'.join(py_path_list)
fd = os.popen(command + ' 2>&1')
if py_path:
os.environ['PYTHONPATH'] = py_path
else:
del os.environ['PYTHONPATH']
return fd
def output_and_exitcode(self, *p, **k):
emit = k.pop('emit', False)
fd = self.execute(*p, **k)
output = fd.read().strip()
exitcode = fd.close()
if emit:
print output
return (output, exitcode)
def exitcode(self, *p, **k):
"""Execute a command and return its exit code
...without printing its output/errors
"""
ret = self.output_and_exitcode(*p, **k)
return ret[1]
def assertFailure(self, *p, **k):
output,exitcode = self.output_and_exitcode(*p, **k)
assert (exitcode), output
def assertSuccess(self, *p, **k):
output,exitcode = self.output_and_exitcode(*p, **k)
#self.assert_(not exitcode, output)
assert (not exitcode), output
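`execute` above temporarily widens `PYTHONPATH` so the spawned command can import the working copy, then restores the environment afterwards. The save/extend/restore pattern in isolation:

```python
import os

saved = os.environ.get('PYTHONPATH')  # None if the variable is unset
extra = os.path.abspath('.')
os.environ['PYTHONPATH'] = extra if saved is None else saved + ':' + extra
try:
    pass  # spawn the child process here; it inherits the widened path
finally:
    if saved is None:
        del os.environ['PYTHONPATH']  # was unset before; unset it again
    else:
        os.environ['PYTHONPATH'] = saved
```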


@ -0,0 +1,16 @@
from test import fixture
import doctest
import os
# Collect tests for all handwritten docs: doc/*.rst
dir = ('..','..','docs')
absdir = (os.path.dirname(os.path.abspath(__file__)),)+dir
dirpath = os.path.join(*absdir)
files = [f for f in os.listdir(dirpath) if f.endswith('.rst')]
paths = [os.path.join(*(dir+(f,))) for f in files]
assert len(paths) > 0
suite = doctest.DocFileSuite(*paths)
def test_docs():
suite.debug()


@ -0,0 +1,21 @@
from test import fixture
from migrate.versioning import cfgparse
from migrate.versioning.repository import *
class TestConfigParser(fixture.Base):
def test_to_dict(self):
"""Correctly interpret config results as dictionaries"""
parser = cfgparse.Parser(dict(default_value=42))
self.assert_(len(parser.sections())==0)
parser.add_section('section')
parser.set('section','option','value')
self.assert_(parser.get('section','option')=='value')
self.assert_(parser.to_dict()['section']['option']=='value')
def test_table_config(self):
"""We should be able to specify the table to be used with a repository"""
default_text=Repository.prepare_config(template.get_repository(as_pkg=True,as_str=True),
Repository._config,'repository_name')
specified_text=Repository.prepare_config(template.get_repository(as_pkg=True,as_str=True),
Repository._config,'repository_name',version_table='_other_table')
self.assertNotEquals(default_text,specified_text)


@ -0,0 +1,11 @@
from sqlalchemy import select
from test import fixture
class TestConnect(fixture.DB):
level=fixture.DB.TXN
@fixture.usedb()
def test_connect(self):
"""Connect to the database successfully"""
# Connection is done in fixture.DB setup; make sure we can do stuff
select(['42'],bind=self.engine).execute()


@ -0,0 +1,13 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from migrate.versioning.genmodel import *
from migrate.versioning.exceptions import *
from test import fixture
class TestModelGenerator(fixture.Pathed, fixture.DB):
level = fixture.DB.TXN


@ -0,0 +1,40 @@
from test import fixture
from migrate.versioning.util.keyedinstance import *
class TestKeydInstance(fixture.Base):
def test_unique(self):
"""UniqueInstance should produce unique object instances"""
class Uniq1(KeyedInstance):
@classmethod
def _key(cls,key):
return str(key)
def __init__(self,value):
self.value=value
class Uniq2(KeyedInstance):
@classmethod
def _key(cls,key):
return str(key)
def __init__(self,value):
self.value=value
a10 = Uniq1('a')
# Different key: different instance
b10 = Uniq1('b')
self.assert_(a10 is not b10)
# Different class: different instance
a20 = Uniq2('a')
self.assert_(a10 is not a20)
# Same key/class: same instance
a11 = Uniq1('a')
self.assert_(a10 is a11)
# __init__ is called
self.assertEquals(a10.value,'a')
# clear() causes us to forget all existing instances
Uniq1.clear()
a12 = Uniq1('a')
self.assert_(a10 is not a12)


@ -0,0 +1,51 @@
from test import fixture
from migrate.versioning.pathed import *
class TestPathed(fixture.Base):
def test_parent_path(self):
"""Default parent_path should behave correctly"""
filepath='/fgsfds/moot.py'
dirpath='/fgsfds/moot'
sdirpath='/fgsfds/moot/'
result='/fgsfds'
self.assert_(result==Pathed._parent_path(filepath))
self.assert_(result==Pathed._parent_path(dirpath))
self.assert_(result==Pathed._parent_path(sdirpath))
def test_new(self):
"""Pathed(path) shouldn't create duplicate objects of the same path"""
path='/fgsfds'
class Test(Pathed):
attr=None
o1=Test(path)
o2=Test(path)
self.assert_(isinstance(o1,Test))
self.assert_(o1.path==path)
self.assert_(o1 is o2)
o1.attr='herring'
self.assert_(o2.attr=='herring')
o2.attr='shrubbery'
self.assert_(o1.attr=='shrubbery')
def test_parent(self):
"""Parents should be fetched correctly"""
class Parent(Pathed):
parent=None
children=0
def _init_child(self,child,path):
"""Keep a tally of children.
(A real class might do something more interesting here)
"""
self.__class__.children+=1
class Child(Pathed):
parent=Parent
path='/fgsfds/moot.py'
parent_path='/fgsfds'
object=Child(path)
self.assert_(isinstance(object,Child))
self.assert_(isinstance(object.parent,Parent))
self.assert_(object.path==path)
self.assert_(object.parent.path==parent_path)


@ -0,0 +1,196 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import shutil

from migrate.versioning import exceptions
from migrate.versioning.repository import *
from migrate.versioning.script import *
from nose.tools import raises

from test import fixture


class TestRepository(fixture.Pathed):
    def test_create(self):
        """Repositories are created successfully"""
        path = self.tmp_repos()
        name = 'repository_name'

        # Creating a repository that doesn't exist should succeed
        repo = Repository.create(path, name)
        config_path = repo.config.path
        manage_path = os.path.join(repo.path, 'manage.py')
        self.assert_(repo)

        # Files should actually be created
        self.assert_(os.path.exists(path))
        self.assert_(os.path.exists(config_path))
        self.assert_(os.path.exists(manage_path))

        # Can't create it again: it already exists
        self.assertRaises(exceptions.PathFoundError, Repository.create, path, name)
        return path

    def test_load(self):
        """We should be able to load information about an existing repository"""
        # Create a repository to load
        path = self.test_create()
        repos = Repository(path)

        self.assert_(repos)
        self.assert_(repos.config)
        self.assert_(repos.config.get('db_settings', 'version_table'))

        # version_table's default isn't None
        self.assertNotEquals(repos.config.get('db_settings', 'version_table'), 'None')

    def test_load_notfound(self):
        """Nonexistent repositories shouldn't be loaded"""
        path = self.tmp_repos()
        self.assert_(not os.path.exists(path))
        self.assertRaises(exceptions.InvalidRepositoryError, Repository, path)

    def test_load_invalid(self):
        """Invalid repos shouldn't be loaded"""
        # Here, invalid = empty directory. There may be other conditions too,
        # but we shouldn't need to test all of them
        path = self.tmp_repos()
        os.mkdir(path)
        self.assertRaises(exceptions.InvalidRepositoryError, Repository, path)


class TestVersionedRepository(fixture.Pathed):
    """Tests on an existing repository with a single python script"""

    def setUp(self):
        super(TestVersionedRepository, self).setUp()
        Repository.clear()
        self.path_repos = self.tmp_repos()
        Repository.create(self.path_repos, 'repository_name')

    def test_version(self):
        """We should correctly detect the version of a repository"""
        repos = Repository(self.path_repos)

        # Get latest version, or detect if a specified version exists
        self.assertEquals(repos.latest, 0)
        # repos.latest isn't an integer, but a VerNum
        # (so we can't just assume the following tests are correct)
        self.assert_(repos.latest >= 0)
        self.assert_(repos.latest < 1)

        # Create a script and test again
        repos.create_script('')
        self.assertEquals(repos.latest, 1)
        self.assert_(repos.latest >= 0)
        self.assert_(repos.latest >= 1)
        self.assert_(repos.latest < 2)

        # Create a new script and test again
        repos.create_script('')
        self.assertEquals(repos.latest, 2)
        self.assert_(repos.latest >= 0)
        self.assert_(repos.latest >= 1)
        self.assert_(repos.latest >= 2)
        self.assert_(repos.latest < 3)

    def test_source(self):
        """Get a script object by version number and view its source"""
        # Load repository and commit script
        repo = Repository(self.path_repos)
        repo.create_script('')

        # Get script object
        source = repo.version(1).script().source()

        # Source is valid: script must have an upgrade function
        # (not a very thorough test, but should be plenty)
        self.assert_(source.find('def upgrade') >= 0)

    def test_latestversion(self):
        """Repository.version() (no params) returns the latest version"""
        repos = Repository(self.path_repos)
        repos.create_script('')
        self.assert_(repos.version(repos.latest) is repos.version())
        self.assert_(repos.version() is not None)

    def test_changeset(self):
        """Repositories can create changesets properly"""
        # Create a nonzero-version repository of empty scripts
        repos = Repository(self.path_repos)
        for i in range(10):
            repos.create_script('')

        def check_changeset(params, length):
            """Creates and verifies a changeset"""
            changeset = repos.changeset('postgres', *params)
            self.assertEquals(len(changeset), length)
            self.assertTrue(isinstance(changeset, Changeset))
            uniq = list()
            # Changesets are iterable
            for version, change in changeset:
                self.assert_(isinstance(change, script.BaseScript))
                # Changes aren't identical
                self.assert_(id(change) not in uniq)
                uniq.append(id(change))
            return changeset

        # Upgrade to a specified version...
        cs = check_changeset((0, 10), 10)
        self.assertEquals(cs.keys().pop(0), 0)  # 0 -> 1: index is starting version
        self.assertEquals(cs.keys().pop(), 9)   # 9 -> 10: index is starting version
        self.assertEquals(cs.start, 0)          # starting version
        self.assertEquals(cs.end, 10)           # ending version
        check_changeset((0, 1), 1)
        check_changeset((0, 5), 5)
        check_changeset((0, 0), 0)
        check_changeset((5, 5), 0)
        check_changeset((10, 10), 0)
        check_changeset((5, 10), 5)

        # Can't request a changeset of higher version than this repository
        self.assertRaises(Exception, repos.changeset, 'postgres', 5, 11)
        self.assertRaises(Exception, repos.changeset, 'postgres', -1, 5)

        # Upgrade to the latest version...
        cs = check_changeset((0,), 10)
        self.assertEquals(cs.keys().pop(0), 0)
        self.assertEquals(cs.keys().pop(), 9)
        self.assertEquals(cs.start, 0)
        self.assertEquals(cs.end, 10)
        check_changeset((1,), 9)
        check_changeset((5,), 5)
        check_changeset((9,), 1)
        check_changeset((10,), 0)

        # run changes
        cs.run('postgres', 'upgrade')

        # Can't request a changeset of higher/lower version than this repository
        self.assertRaises(Exception, repos.changeset, 'postgres', 11)
        self.assertRaises(Exception, repos.changeset, 'postgres', -1)

        # Downgrade
        cs = check_changeset((10, 0), 10)
        self.assertEquals(cs.keys().pop(0), 10)  # 10 -> 9
        self.assertEquals(cs.keys().pop(), 1)    # 1 -> 0
        self.assertEquals(cs.start, 10)
        self.assertEquals(cs.end, 0)
        check_changeset((10, 5), 5)
        check_changeset((5, 0), 5)

    def test_many_versions(self):
        """Test what happens when lots of versions are created"""
        repos = Repository(self.path_repos)
        for i in range(1001):
            repos.create_script('')

        # since we normally create 3 digit ones, let's see if we blow up
        self.assert_(os.path.exists('%s/versions/1000.py' % self.path_repos))
        self.assert_(os.path.exists('%s/versions/1001.py' % self.path_repos))

# TODO: test manage file
# TODO: test changeset


@@ -0,0 +1,47 @@
from test import fixture
from migrate.versioning.schema import *
from migrate.versioning import script
import os, shutil


class TestRunChangeset(fixture.Pathed, fixture.DB):
    level = fixture.DB.CONNECT

    def _setup(self, url):
        super(TestRunChangeset, self)._setup(url)
        Repository.clear()
        self.path_repos = self.tmp_repos()
        # Create repository, script
        Repository.create(self.path_repos, 'repository_name')

    @fixture.usedb()
    def test_changeset_run(self):
        """Running a changeset against a repository gives expected results"""
        repos = Repository(self.path_repos)
        for i in range(10):
            repos.create_script('')
        try:
            ControlledSchema(self.engine, repos).drop()
        except:
            pass
        db = ControlledSchema.create(self.engine, repos)

        # Scripts are empty; we'll check version # correctness.
        # (Correct application of their content is checked elsewhere)
        self.assertEquals(db.version, 0)
        db.upgrade(1)
        self.assertEquals(db.version, 1)
        db.upgrade(5)
        self.assertEquals(db.version, 5)
        db.upgrade(5)
        self.assertEquals(db.version, 5)
        db.upgrade(None)  # Latest is implied
        self.assertEquals(db.version, 10)
        self.assertRaises(Exception, db.upgrade, 11)
        self.assertEquals(db.version, 10)
        db.upgrade(9)
        self.assertEquals(db.version, 9)
        db.upgrade(0)
        self.assertEquals(db.version, 0)
        self.assertRaises(Exception, db.upgrade, -1)
        self.assertEquals(db.version, 0)
        #changeset = repos.changeset(self.url, 0)
        db.drop()


@@ -0,0 +1,203 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import shutil

from migrate.versioning.schema import *
from migrate.versioning import script, exceptions, schemadiff
from sqlalchemy import *

from test import fixture


class TestControlledSchema(fixture.Pathed, fixture.DB):
    # Transactions break postgres in this test; we'll clean up after ourselves
    level = fixture.DB.CONNECT

    def setUp(self):
        super(TestControlledSchema, self).setUp()
        path_repos = self.temp_usable_dir + '/repo/'
        self.repos = Repository.create(path_repos, 'repo_name')

    def _setup(self, url):
        self.setUp()
        super(TestControlledSchema, self)._setup(url)
        self.cleanup()

    def _teardown(self):
        super(TestControlledSchema, self)._teardown()
        self.cleanup()
        self.tearDown()

    def cleanup(self):
        # drop existing version table if necessary
        try:
            ControlledSchema(self.engine, self.repos).drop()
        except:
            # No table to drop; that's fine, be silent
            pass

    def tearDown(self):
        self.cleanup()
        super(TestControlledSchema, self).tearDown()

    @fixture.usedb()
    def test_version_control(self):
        """Establish version control on a particular database"""
        # Establish version control on this database
        dbcontrol = ControlledSchema.create(self.engine, self.repos)

        # Trying to create another DB this way fails: table exists
        self.assertRaises(exceptions.DatabaseAlreadyControlledError,
                          ControlledSchema.create, self.engine, self.repos)

        # We can load a controlled DB this way, too
        dbcontrol0 = ControlledSchema(self.engine, self.repos)
        self.assertEquals(dbcontrol, dbcontrol0)

        # We can also use a repository path, instead of a repository
        dbcontrol0 = ControlledSchema(self.engine, self.repos.path)
        self.assertEquals(dbcontrol, dbcontrol0)

        # We don't have to use the same connection
        engine = create_engine(self.url)
        dbcontrol0 = ControlledSchema(engine, self.repos.path)
        self.assertEquals(dbcontrol, dbcontrol0)

        # Clean up:
        dbcontrol.drop()

        # Attempting to drop vc from a db without it should fail
        self.assertRaises(exceptions.DatabaseNotControlledError, dbcontrol.drop)

        # No table defined should raise error
        self.assertRaises(exceptions.DatabaseNotControlledError,
                          ControlledSchema, self.engine, self.repos)

    @fixture.usedb()
    def test_version_control_specified(self):
        """Establish version control with a specified version"""
        # Establish version control on this database
        version = 0
        dbcontrol = ControlledSchema.create(self.engine, self.repos, version)
        self.assertEquals(dbcontrol.version, version)

        # Correct when we load it, too
        dbcontrol = ControlledSchema(self.engine, self.repos)
        self.assertEquals(dbcontrol.version, version)

        dbcontrol.drop()

        # Now try it with a nonzero value
        version = 10
        for i in range(version):
            self.repos.create_script('')
        self.assertEquals(self.repos.latest, version)

        # Test with some mid-range value
        dbcontrol = ControlledSchema.create(self.engine, self.repos, 5)
        self.assertEquals(dbcontrol.version, 5)
        dbcontrol.drop()

        # Test with max value
        dbcontrol = ControlledSchema.create(self.engine, self.repos, version)
        self.assertEquals(dbcontrol.version, version)
        dbcontrol.drop()

    @fixture.usedb()
    def test_version_control_invalid(self):
        """Try to establish version control with an invalid version"""
        versions = ('Thirteen', '-1', -1, '', 13)
        # A fresh repository doesn't go up to version 13 yet
        for version in versions:
            #self.assertRaises(ControlledSchema.InvalidVersionError,
            # Can't have custom errors with assertRaises...
            try:
                ControlledSchema.create(self.engine, self.repos, version)
                self.assert_(False, repr(version))
            except exceptions.InvalidVersionError:
                pass

    @fixture.usedb()
    def test_changeset(self):
        """Create changeset from controlled schema"""
        dbschema = ControlledSchema.create(self.engine, self.repos)

        # empty schema doesn't have changesets
        cs = dbschema.changeset()
        self.assertEqual(cs, {})

        for i in range(5):
            self.repos.create_script('')
        self.assertEquals(self.repos.latest, 5)

        cs = dbschema.changeset(5)
        self.assertEqual(len(cs), 5)

        # cleanup
        dbschema.drop()

    @fixture.usedb()
    def test_upgrade_runchange(self):
        dbschema = ControlledSchema.create(self.engine, self.repos)

        for i in range(10):
            self.repos.create_script('')
        self.assertEquals(self.repos.latest, 10)

        dbschema.upgrade(10)

        self.assertRaises(ValueError, dbschema.upgrade, 'a')
        self.assertRaises(exceptions.InvalidVersionError, dbschema.runchange, 20, '', 1)

        # TODO: test for table version in db

        # cleanup
        dbschema.drop()

    @fixture.usedb()
    def test_create_model(self):
        """Test workflow to generate create_model"""
        model = ControlledSchema.create_model(self.engine, self.repos, declarative=False)
        self.assertTrue(isinstance(model, basestring))

        model = ControlledSchema.create_model(self.engine, self.repos.path, declarative=True)
        self.assertTrue(isinstance(model, basestring))

    @fixture.usedb()
    def test_compare_model_to_db(self):
        meta = self.construct_model()

        diff = ControlledSchema.compare_model_to_db(self.engine, meta, self.repos)
        self.assertTrue(isinstance(diff, schemadiff.SchemaDiff))

        diff = ControlledSchema.compare_model_to_db(self.engine, meta, self.repos.path)
        self.assertTrue(isinstance(diff, schemadiff.SchemaDiff))

        meta.drop_all(self.engine)

    @fixture.usedb()
    def test_update_db_from_model(self):
        dbschema = ControlledSchema.create(self.engine, self.repos)

        meta = self.construct_model()

        dbschema.update_db_from_model(meta)

        # TODO: test for table version in db

        # cleanup
        dbschema.drop()
        meta.drop_all(self.engine)

    def construct_model(self):
        meta = MetaData()

        user = Table('temp_model_schema', meta, Column('id', Integer), Column('user', String(245)))

        return meta

    # TODO: test how tables are populated in db


@@ -0,0 +1,158 @@
import os

import sqlalchemy
from sqlalchemy import *
from nose.tools import eq_

from migrate.versioning import genmodel, schemadiff
from test import fixture


class TestSchemaDiff(fixture.DB):
    level = fixture.DB.CONNECT
    table_name = 'tmp_schemadiff'

    def _setup(self, url):
        super(TestSchemaDiff, self)._setup(url)
        self.meta = MetaData(self.engine, reflect=True)
        self.meta.drop_all()  # in case junk tables are lying around in the test database
        self.meta = MetaData(self.engine, reflect=True)  # needed if we just deleted some tables
        self.table = Table(self.table_name, self.meta,
            Column('id', Integer(), primary_key=True),
            Column('name', UnicodeText()),
            Column('data', UnicodeText()),
        )
        # to get debugging: set WANT_ENGINE_ECHO to T and run py.test with --pdb
        WANT_ENGINE_ECHO = os.environ.get('WANT_ENGINE_ECHO', 'F')
        if WANT_ENGINE_ECHO == 'T':
            self.engine.echo = True

    def _teardown(self):
        if self.table.exists():
            #self.table.drop()  # bummer, this doesn't work because the list of
            # tables is out of date, but calling reflect didn't work
            self.meta = MetaData(self.engine, reflect=True)
            self.meta.drop_all()
        super(TestSchemaDiff, self)._teardown()

    def _applyLatestModel(self):
        diff = schemadiff.getDiffOfModelAgainstDatabase(self.meta, self.engine, excludeTables=['migrate_version'])
        genmodel.ModelGenerator(diff).applyModel()

    @fixture.usedb()
    def test_rundiffs(self):
        # Yuck! We have to import from changeset to apply the monkey-patch to allow column adding/dropping.
        from migrate.changeset import schema

        def assertDiff(isDiff, tablesMissingInDatabase, tablesMissingInModel, tablesWithDiff):
            diff = schemadiff.getDiffOfModelAgainstDatabase(self.meta, self.engine, excludeTables=['migrate_version'])
            eq_(bool(diff), isDiff)
            eq_(([t.name for t in diff.tablesMissingInDatabase],
                 [t.name for t in diff.tablesMissingInModel],
                 [t.name for t in diff.tablesWithDiff]),
                (tablesMissingInDatabase, tablesMissingInModel, tablesWithDiff))

        # Model is defined but database is empty.
        assertDiff(True, [self.table_name], [], [])

        # Check Python upgrade and downgrade of database from updated model.
        diff = schemadiff.getDiffOfModelAgainstDatabase(self.meta, self.engine, excludeTables=['migrate_version'])
        decls, upgradeCommands, downgradeCommands = genmodel.ModelGenerator(diff).toUpgradeDowngradePython()
        self.assertEqualsIgnoreWhitespace(decls, '''
        meta = MetaData()
        tmp_schemadiff = Table('tmp_schemadiff',meta,
            Column('id',Integer(),primary_key=True,nullable=False),
            Column('name',UnicodeText(length=None)),
            Column('data',UnicodeText(length=None)),
        )
        ''')
        self.assertEqualsIgnoreWhitespace(upgradeCommands,
            '''meta.bind(migrate_engine)
            tmp_schemadiff.create()''')
        self.assertEqualsIgnoreWhitespace(downgradeCommands,
            '''meta.bind(migrate_engine)
            tmp_schemadiff.drop()''')

        # Create table in database, now model should match database.
        self._applyLatestModel()
        assertDiff(False, [], [], [])

        # Check Python code gen from database.
        diff = schemadiff.getDiffOfModelAgainstDatabase(MetaData(), self.engine, excludeTables=['migrate_version'])
        src = genmodel.ModelGenerator(diff).toPython()
        src = src.replace(genmodel.HEADER, '')
        self.assertEqualsIgnoreWhitespace(src, '''
        tmp_schemadiff = Table('tmp_schemadiff',meta,
            Column('id',Integer(),primary_key=True,nullable=False),
            Column('name',Text(length=None,convert_unicode=False,assert_unicode=None)),
            Column('data',Text(length=None,convert_unicode=False,assert_unicode=None)),
        )
        ''')

        if not self.engine.name == 'oracle':
            # Add data, later we'll make sure it's still present.
            result = self.engine.execute(self.table.insert(), id=1, name=u'mydata')
            dataId = result.last_inserted_ids()[0]

        # Modify table in model (by removing it and adding it back to model)
        # -- drop column data and add column data2.
        self.meta.remove(self.table)
        self.table = Table(self.table_name, self.meta,
            Column('id', Integer(), primary_key=True),
            Column('name', UnicodeText(length=None)),
            Column('data2', Integer(), nullable=True),
        )
        assertDiff(True, [], [], [self.table_name])

        # Apply latest model changes and find no more diffs.
        self._applyLatestModel()
        assertDiff(False, [], [], [])

        if not self.engine.name == 'oracle':
            # Make sure data is still present.
            result = self.engine.execute(self.table.select(self.table.c.id == dataId))
            rows = result.fetchall()
            eq_(len(rows), 1)
            eq_(rows[0].name, 'mydata')

            # Add data, later we'll make sure it's still present.
            result = self.engine.execute(self.table.insert(), id=2, name=u'mydata2', data2=123)
            dataId2 = result.last_inserted_ids()[0]

        # Change column type in model.
        self.meta.remove(self.table)
        self.table = Table(self.table_name, self.meta,
            Column('id', Integer(), primary_key=True),
            Column('name', UnicodeText(length=None)),
            Column('data2', String(255), nullable=True),
        )
        assertDiff(True, [], [], [self.table_name])  # TODO test type diff

        # Apply latest model changes and find no more diffs.
        self._applyLatestModel()
        assertDiff(False, [], [], [])

        if not self.engine.name == 'oracle':
            # Make sure data is still present.
            result = self.engine.execute(self.table.select(self.table.c.id == dataId2))
            rows = result.fetchall()
            self.assertEquals(len(rows), 1)
            self.assertEquals(rows[0].name, 'mydata2')
            self.assertEquals(rows[0].data2, '123')

            # Delete data, since we're about to make a required column.
            # Not even using sqlalchemy.PassiveDefault helps because we're doing explicit column select.
            self.engine.execute(self.table.delete(), id=dataId)

        if not self.engine.name == 'firebird':
            # Change column nullable in model.
            self.meta.remove(self.table)
            self.table = Table(self.table_name, self.meta,
                Column('id', Integer(), primary_key=True),
                Column('name', UnicodeText(length=None)),
                Column('data2', String(255), nullable=False),
            )
            assertDiff(True, [], [], [self.table_name])  # TODO test nullable diff

            # Apply latest model changes and find no more diffs.
            self._applyLatestModel()
            assertDiff(False, [], [], [])

        # Remove table from model.
        self.meta.remove(self.table)
        assertDiff(True, [], [self.table_name], [])


@@ -0,0 +1,198 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import sys
import shutil

from migrate.versioning import exceptions, version, repository
from migrate.versioning.script import *
from migrate.versioning.util import *

from test import fixture


class TestBaseScript(fixture.Pathed):
    def test_all(self):
        """Testing all basic BaseScript operations"""
        # verify / source / run
        src = self.tmp()
        open(src, 'w').close()
        bscript = BaseScript(src)
        BaseScript.verify(src)
        self.assertEqual(bscript.source(), '')
        self.assertRaises(NotImplementedError, bscript.run, 'foobar')


class TestPyScript(fixture.Pathed, fixture.DB):
    cls = PythonScript

    def test_create(self):
        """We can create a migration script"""
        path = self.tmp_py()
        # Creating a file that doesn't exist should succeed
        self.cls.create(path)
        self.assert_(os.path.exists(path))
        # Created file should be a valid script (If not, raises an error)
        self.cls.verify(path)
        # Can't create it again: it already exists
        self.assertRaises(exceptions.PathFoundError, self.cls.create, path)

    @fixture.usedb(supported='sqlite')
    def test_run(self):
        script_path = self.tmp_py()
        pyscript = PythonScript.create(script_path)
        pyscript.run(self.engine, 1)
        pyscript.run(self.engine, -1)

        self.assertRaises(exceptions.ScriptError, pyscript.run, self.engine, 0)
        self.assertRaises(exceptions.ScriptError, pyscript._func, 'foobar')

    def test_verify_notfound(self):
        """Correctly verify a python migration script: nonexistent file"""
        path = self.tmp_py()
        self.assertFalse(os.path.exists(path))
        # Fails on empty path
        self.assertRaises(exceptions.InvalidScriptError, self.cls.verify, path)
        self.assertRaises(exceptions.InvalidScriptError, self.cls, path)

    def test_verify_invalidpy(self):
        """Correctly verify a python migration script: invalid python file"""
        path = self.tmp_py()
        # Create a file that doesn't compile
        f = open(path, 'w')
        f.write("def fail")
        f.close()
        self.assertRaises(Exception, self.cls.verify_module, path)
        # script isn't verified on creation, but on module reference
        py = self.cls(path)
        self.assertRaises(Exception, (lambda x: x.module), py)

    def test_verify_nofuncs(self):
        """Correctly verify a python migration script: valid python file; no upgrade func"""
        path = self.tmp_py()
        # Create a file without an upgrade function
        f = open(path, 'w')
        f.write("def zergling():\n\tprint 'rush'")
        f.close()
        self.assertRaises(exceptions.InvalidScriptError, self.cls.verify_module, path)
        # script isn't verified on creation, but on module reference
        py = self.cls(path)
        self.assertRaises(exceptions.InvalidScriptError, (lambda x: x.module), py)

    @fixture.usedb(supported='sqlite')
    def test_preview_sql(self):
        """Preview SQL abstract from ORM layer (sqlite)"""
        path = self.tmp_py()

        f = open(path, 'w')
        content = """
from migrate import *
from sqlalchemy import *

metadata = MetaData()

UserGroup = Table('Link', metadata,
    Column('link1ID', Integer),
    Column('link2ID', Integer),
    UniqueConstraint('link1ID', 'link2ID'))

def upgrade(migrate_engine):
    metadata.create_all(migrate_engine)
"""
        f.write(content)
        f.close()

        pyscript = self.cls(path)
        SQL = pyscript.preview_sql(self.url, 1)
        self.assertEqualsIgnoreWhitespace("""
        CREATE TABLE "Link"
        ("link1ID" INTEGER,
        "link2ID" INTEGER,
        UNIQUE ("link1ID", "link2ID"))
        """, SQL)
        # TODO: test: No SQL should be executed!

    def test_verify_success(self):
        """Correctly verify a python migration script: success"""
        path = self.tmp_py()
        # Succeeds after creating
        self.cls.create(path)
        self.cls.verify(path)

    # test for PythonScript.make_update_script_for_model
    @fixture.usedb()
    def test_make_update_script_for_model(self):
        """Construct script source from differences of two models"""
        self.setup_model_params()
        self.write_file(self.first_model_path, self.base_source)
        self.write_file(self.second_model_path, self.base_source + self.model_source)
        source_script = self.pyscript.make_update_script_for_model(
            engine=self.engine,
            oldmodel=load_model('testmodel_first:meta'),
            model=load_model('testmodel_second:meta'),
            repository=self.repo_path,
        )
        self.assertTrue('User.create()' in source_script)
        self.assertTrue('User.drop()' in source_script)

    #@fixture.usedb()
    #def test_make_update_script_for_model_equals(self):
    #    """Try to make update script from two identical models"""
    #    self.setup_model_params()
    #    self.write_file(self.first_model_path, self.base_source + self.model_source)
    #    self.write_file(self.second_model_path, self.base_source + self.model_source)
    #    source_script = self.pyscript.make_update_script_for_model(
    #        engine=self.engine,
    #        oldmodel=load_model('testmodel_first:meta'),
    #        model=load_model('testmodel_second:meta'),
    #        repository=self.repo_path,
    #    )
    #    self.assertFalse('User.create()' in source_script)
    #    self.assertFalse('User.drop()' in source_script)

    def setup_model_params(self):
        self.script_path = self.tmp_py()
        self.repo_path = self.tmp()
        self.first_model_path = os.path.join(self.temp_usable_dir, 'testmodel_first.py')
        self.second_model_path = os.path.join(self.temp_usable_dir, 'testmodel_second.py')

        self.base_source = """from sqlalchemy import *\nmeta = MetaData()\n"""
        self.model_source = """
User = Table('User', meta,
    Column('id', Integer, primary_key=True),
    Column('login', Unicode(40)),
    Column('passwd', String(40)),
)"""

        self.repo = repository.Repository.create(self.repo_path, 'repo')
        self.pyscript = PythonScript.create(self.script_path)

    def write_file(self, path, contents):
        f = open(path, 'w')
        f.write(contents)
        f.close()


class TestSqlScript(fixture.Pathed, fixture.DB):

    @fixture.usedb()
    def test_error(self):
        """Test if an exception is raised on invalid script source"""
        src = self.tmp()

        f = open(src, 'w')
        f.write("""foobar""")
        f.close()

        sqls = SqlScript(src)
        self.assertRaises(Exception, sqls.run, self.engine)


@@ -0,0 +1,568 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import shutil
import traceback
from types import FileType
from StringIO import StringIO
from sqlalchemy import MetaData,Table
from migrate.versioning.repository import Repository
from migrate.versioning import genmodel, shell, api
from migrate.versioning.exceptions import *
from test import fixture
class Shell(fixture.Shell):
_cmd = os.path.join(sys.executable + ' migrate', 'versioning', 'shell.py')
@classmethod
def cmd(cls, *args):
safe_parameters = map(lambda arg: str(arg), args)
return ' '.join([cls._cmd] + safe_parameters)
def execute(self, shell_cmd, runshell=None, **kwargs):
"""A crude simulation of a shell command, to speed things up"""
# If we get an fd, the command is already done
if isinstance(shell_cmd, (FileType, StringIO)):
return shell_cmd
# Analyze the command; see if we can 'fake' the shell
try:
# Forced to run in shell?
# if runshell or '--runshell' in sys.argv:
if runshell:
raise Exception
# Remove the command prefix
if not shell_cmd.startswith(self._cmd):
raise Exception
cmd = shell_cmd[(len(self._cmd) + 1):]
params = cmd.split(' ')
command = params[0]
except:
return super(Shell, self).execute(shell_cmd)
# Redirect stdout to an object; redirect stderr to stdout
fd = StringIO()
orig_stdout = sys.stdout
orig_stderr = sys.stderr
sys.stdout = fd
sys.stderr = fd
# Execute this command
try:
try:
shell.main(params, **kwargs)
except SystemExit, e:
# Simulate the exit status
fd_close = fd.close
def close_():
fd_close()
return e.args[0]
fd.close = close_
except Exception, e:
# Print the exception, but don't re-raise it
traceback.print_exc()
# Simulate a nonzero exit status
fd_close = fd.close
def close_():
fd_close()
return 2
fd.close = close_
finally:
# Clean up
sys.stdout = orig_stdout
sys.stderr = orig_stderr
fd.seek(0)
return fd
def cmd_version(self, repos_path):
fd = self.execute(self.cmd('version', repos_path))
result = int(fd.read().strip())
self.assertSuccess(fd)
return result
def cmd_db_version(self, url, repos_path):
fd = self.execute(self.cmd('db_version', url, repos_path))
txt = fd.read()
#print txt
ret = int(txt.strip())
self.assertSuccess(fd)
return ret
class TestShellCommands(Shell):
"""Tests migrate.py commands"""
def test_help(self):
"""Displays default help dialog"""
self.assertSuccess(self.cmd('-h'), runshell=True)
self.assertSuccess(self.cmd('--help'), runshell=True)
self.assertSuccess(self.cmd('help'), runshell=True)
self.assertSuccess(self.cmd('help'))
self.assertRaises(UsageError, api.help)
self.assertRaises(UsageError, api.help, 'foobar')
self.assert_(isinstance(api.help('create'), str))
def test_help_commands(self):
"""Display help on a specific command"""
for cmd in shell.api.__all__:
fd = self.execute(self.cmd('help', cmd))
# Description may change, so best we can do is ensure it shows up
output = fd.read()
self.assertNotEquals(output, '')
self.assertSuccess(fd)
def test_create(self):
"""Repositories are created successfully"""
repos = self.tmp_repos()
# Creating a file that doesn't exist should succeed
cmd = self.cmd('create', repos, 'repository_name')
self.assertSuccess(cmd)
# Files should actually be created
self.assert_(os.path.exists(repos))
# The default table should not be None
repos_ = Repository(repos)
self.assertNotEquals(repos_.config.get('db_settings', 'version_table'), 'None')
# Can't create it again: it already exists
self.assertFailure(cmd)
def test_script(self):
"""We can create a migration script via the command line"""
repos = self.tmp_repos()
self.assertSuccess(self.cmd('create', repos, 'repository_name'))
self.assertSuccess(self.cmd('script', '--repository=%s' % repos, 'Desc'))
self.assert_(os.path.exists('%s/versions/001_Desc.py' % repos))
self.assertSuccess(self.cmd('script', '--repository=%s' % repos, 'More'))
self.assert_(os.path.exists('%s/versions/002_More.py' % repos))
self.assertSuccess(self.cmd('script', '--repository=%s' % repos, '"Some Random name"'), runshell=True)
self.assert_(os.path.exists('%s/versions/003_Some_Random_name.py' % repos))
def test_script_sql(self):
"""We can create a migration sql script via the command line"""
repos = self.tmp_repos()
self.assertSuccess(self.cmd('create', repos, 'repository_name'))
self.assertSuccess(self.cmd('script_sql', '--repository=%s' % repos, 'mydb'))
self.assert_(os.path.exists('%s/versions/001_mydb_upgrade.sql' % repos))
self.assert_(os.path.exists('%s/versions/001_mydb_downgrade.sql' % repos))
# Test creating a second
self.assertSuccess(self.cmd('script_sql', '--repository=%s' % repos, 'postgres'))
self.assert_(os.path.exists('%s/versions/002_postgres_upgrade.sql' % repos))
self.assert_(os.path.exists('%s/versions/002_postgres_downgrade.sql' % repos))
def test_manage(self):
"""Create a project management script"""
script = self.tmp_py()
self.assert_(not os.path.exists(script))
# No attempt is made to verify correctness of the repository path here
self.assertSuccess(self.cmd('manage', script, '--repository=/path/to/repository'))
self.assert_(os.path.exists(script))
class TestShellRepository(Shell):
"""Shell commands on an existing repository/python script"""
def setUp(self):
"""Create repository, python change script"""
super(TestShellRepository, self).setUp()
self.path_repos = repos = self.tmp_repos()
self.assertSuccess(self.cmd('create', repos, 'repository_name'))
def test_version(self):
"""Correctly detect repository version"""
# Version: 0 (no scripts yet); successful execution
fd = self.execute(self.cmd('version','--repository=%s' % self.path_repos))
self.assertEquals(fd.read().strip(), "0")
self.assertSuccess(fd)
# Also works as a positional param
fd = self.execute(self.cmd('version', self.path_repos))
self.assertEquals(fd.read().strip(), "0")
self.assertSuccess(fd)
# Create a script and version should increment
self.assertSuccess(self.cmd('script', '--repository=%s' % self.path_repos, 'Desc'))
fd = self.execute(self.cmd('version',self.path_repos))
self.assertEquals(fd.read().strip(), "1")
self.assertSuccess(fd)
def test_source(self):
"""Correctly fetch a script's source"""
self.assertSuccess(self.cmd('script', '--repository=%s' % self.path_repos, 'Desc'))
filename = '%s/versions/001_Desc.py' % self.path_repos
source = open(filename).read()
self.assert_(source.find('def upgrade') >= 0)
# Version is now 1
fd = self.execute(self.cmd('version', self.path_repos))
self.assert_(fd.read().strip() == "1")
self.assertSuccess(fd)
# Output/verify the source of version 1
fd = self.execute(self.cmd('source', 1, '--repository=%s' % self.path_repos))
result = fd.read()
self.assertSuccess(fd)
self.assert_(result.strip() == source.strip())
# We can also send the source to a file... test that too
self.assertSuccess(self.cmd('source', 1, filename, '--repository=%s'%self.path_repos))
self.assert_(os.path.exists(filename))
fd = open(filename)
result = fd.read()
self.assert_(result.strip() == source.strip())
class TestShellDatabase(Shell, fixture.DB):
"""Commands associated with a particular database"""
# We'll need to clean up after ourself, since the shell creates its own txn;
# we need to connect to the DB to see if things worked
level = fixture.DB.CONNECT
@fixture.usedb()
def test_version_control(self):
"""Ensure we can set version control on a database"""
path_repos = repos = self.tmp_repos()
self.assertSuccess(self.cmd('create', path_repos, 'repository_name'))
self.exitcode(self.cmd('drop_version_control', self.url, path_repos))
self.assertSuccess(self.cmd('version_control', self.url, path_repos))
# Clean up
self.assertSuccess(self.cmd('drop_version_control',self.url,path_repos))
# Attempting to drop vc from a database without it should fail
self.assertFailure(self.cmd('drop_version_control',self.url,path_repos))
@fixture.usedb()
def test_wrapped_kwargs(self):
"""Commands with default arguments set by manage.py"""
        path_repos = self.tmp_repos()
self.assertSuccess(self.cmd('create', '--', '--name=repository_name'), repository=path_repos)
self.exitcode(self.cmd('drop_version_control'), url=self.url, repository=path_repos)
self.assertSuccess(self.cmd('version_control'), url=self.url, repository=path_repos)
# Clean up
self.assertSuccess(self.cmd('drop_version_control'), url=self.url, repository=path_repos)
# Attempting to drop vc from a database without it should fail
self.assertFailure(self.cmd('drop_version_control'), url=self.url, repository=path_repos)
@fixture.usedb()
def test_version_control_specified(self):
"""Ensure we can set version control to a particular version"""
path_repos = self.tmp_repos()
self.assertSuccess(self.cmd('create', path_repos, 'repository_name'))
self.exitcode(self.cmd('drop_version_control', self.url, path_repos))
# Fill the repository
path_script = self.tmp_py()
version = 1
for i in range(version):
self.assertSuccess(self.cmd('script', '--repository=%s' % path_repos, 'Desc'))
# Repository version is correct
fd = self.execute(self.cmd('version', path_repos))
self.assertEquals(fd.read().strip(), str(version))
self.assertSuccess(fd)
# Apply versioning to DB
self.assertSuccess(self.cmd('version_control', self.url, path_repos, version))
# Test version number
fd = self.execute(self.cmd('db_version', self.url, path_repos))
self.assertEquals(fd.read().strip(), str(version))
self.assertSuccess(fd)
# Clean up
self.assertSuccess(self.cmd('drop_version_control', self.url, path_repos))
@fixture.usedb()
def test_upgrade(self):
"""Can upgrade a versioned database"""
# Create a repository
repos_name = 'repos_name'
repos_path = self.tmp()
        self.assertSuccess(self.cmd('create', repos_path, repos_name))
self.assertEquals(self.cmd_version(repos_path), 0)
# Version the DB
self.exitcode(self.cmd('drop_version_control', self.url, repos_path))
self.assertSuccess(self.cmd('version_control', self.url, repos_path))
# Upgrades with latest version == 0
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
self.assertSuccess(self.cmd('upgrade', self.url, repos_path))
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
self.assertSuccess(self.cmd('upgrade', self.url, repos_path, 0))
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
self.assertFailure(self.cmd('upgrade', self.url, repos_path, 1))
self.assertFailure(self.cmd('upgrade', self.url, repos_path, -1))
# Add a script to the repository; upgrade the db
self.assertSuccess(self.cmd('script', '--repository=%s' % repos_path, 'Desc'))
self.assertEquals(self.cmd_version(repos_path), 1)
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
# Test preview
self.assertSuccess(self.cmd('upgrade', self.url, repos_path, 0, "--preview_sql"))
self.assertSuccess(self.cmd('upgrade', self.url, repos_path, 0, "--preview_py"))
self.assertSuccess(self.cmd('upgrade', self.url, repos_path))
self.assertEquals(self.cmd_db_version(self.url, repos_path), 1)
# Downgrade must have a valid version specified
self.assertFailure(self.cmd('downgrade', self.url, repos_path))
self.assertFailure(self.cmd('downgrade', self.url, repos_path, '-1', 2))
#self.assertFailure(self.cmd('downgrade', self.url, repos_path, '1', 2))
self.assertEquals(self.cmd_db_version(self.url, repos_path), 1)
self.assertSuccess(self.cmd('downgrade', self.url, repos_path, 0))
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
        self.assertFailure(self.cmd('downgrade', self.url, repos_path, 1))
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
self.assertSuccess(self.cmd('drop_version_control', self.url, repos_path))
def _run_test_sqlfile(self, upgrade_script, downgrade_script):
# TODO: add test script that checks if db really changed
repos_path = self.tmp()
repos_name = 'repos'
self.assertSuccess(self.cmd('create', repos_path, repos_name))
self.exitcode(self.cmd('drop_version_control', self.url, repos_path))
self.assertSuccess(self.cmd('version_control', self.url, repos_path))
self.assertEquals(self.cmd_version(repos_path), 0)
        self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
beforeCount = len(os.listdir(os.path.join(repos_path, 'versions'))) # hmm, this number changes sometimes based on running from svn
self.assertSuccess(self.cmd('script_sql', '--repository=%s' % repos_path, 'postgres'))
self.assertEquals(self.cmd_version(repos_path), 1)
        self.assertEquals(len(os.listdir(os.path.join(repos_path, 'versions'))), beforeCount + 2)
open('%s/versions/001_postgres_upgrade.sql' % repos_path, 'a').write(upgrade_script)
open('%s/versions/001_postgres_downgrade.sql' % repos_path, 'a').write(downgrade_script)
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
self.assertRaises(Exception, self.engine.text('select * from t_table').execute)
        self.assertSuccess(self.cmd('upgrade', self.url, repos_path))
        self.assertEquals(self.cmd_db_version(self.url, repos_path), 1)
self.engine.text('select * from t_table').execute()
self.assertSuccess(self.cmd('downgrade', self.url, repos_path, 0))
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
self.assertRaises(Exception, self.engine.text('select * from t_table').execute)
# The tests below are written with some postgres syntax, but the stuff
# being tested (.sql files) ought to work with any db.
@fixture.usedb(supported='postgres')
def test_sqlfile(self):
upgrade_script = """
create table t_table (
id serial,
primary key(id)
);
"""
downgrade_script = """
drop table t_table;
"""
self.meta.drop_all()
self._run_test_sqlfile(upgrade_script, downgrade_script)
@fixture.usedb(supported='postgres')
def test_sqlfile_comment(self):
upgrade_script = """
-- Comments in SQL break postgres autocommit
create table t_table (
id serial,
primary key(id)
);
"""
downgrade_script = """
-- Comments in SQL break postgres autocommit
drop table t_table;
"""
        self._run_test_sqlfile(upgrade_script, downgrade_script)
@fixture.usedb()
def test_command_test(self):
repos_name = 'repos_name'
repos_path = self.tmp()
self.assertSuccess(self.cmd('create', repos_path, repos_name))
self.exitcode(self.cmd('drop_version_control', self.url, repos_path))
self.assertSuccess(self.cmd('version_control', self.url, repos_path))
self.assertEquals(self.cmd_version(repos_path), 0)
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
# Empty script should succeed
self.assertSuccess(self.cmd('script', '--repository=%s' % repos_path, 'Desc'))
self.assertSuccess(self.cmd('test', repos_path, self.url))
self.assertEquals(self.cmd_version(repos_path), 1)
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
# Error script should fail
script_path = self.tmp_py()
        script_text = """
from sqlalchemy import *
from migrate import *
def upgrade():
print 'fgsfds'
raise Exception()
def downgrade():
print 'sdfsgf'
raise Exception()
""".replace("\n ","\n")
        f = open(script_path, 'w')
        f.write(script_text)
        f.close()
self.assertFailure(self.cmd('test', repos_path, self.url, 'blah blah'))
self.assertEquals(self.cmd_version(repos_path), 1)
        self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
# Nonempty script using migrate_engine should succeed
script_path = self.tmp_py()
        script_text = """
from sqlalchemy import *
from migrate import *
meta = MetaData(migrate_engine)
account = Table('account',meta,
Column('id',Integer,primary_key=True),
Column('login',String(40)),
Column('passwd',String(40)),
)
def upgrade():
# Upgrade operations go here. Don't create your own engine; use the engine
# named 'migrate_engine' imported from migrate.
meta.create_all()
def downgrade():
# Operations to reverse the above upgrade go here.
meta.drop_all()
""".replace("\n ","\n")
        f = open(script_path, 'w')
        f.write(script_text)
        f.close()
self.assertSuccess(self.cmd('test', repos_path, self.url))
self.assertEquals(self.cmd_version(repos_path), 1)
self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
@fixture.usedb()
def test_rundiffs_in_shell(self):
# This is a variant of the test_schemadiff tests but run through the shell level.
# These shell tests are hard to debug (since they keep forking processes), so they shouldn't replace the lower-level tests.
repos_name = 'repos_name'
repos_path = self.tmp()
script_path = self.tmp_py()
old_model_path = self.tmp_named('oldtestmodel.py')
model_path = self.tmp_named('testmodel.py')
# Create empty repository.
self.meta = MetaData(self.engine, reflect=True)
self.meta.drop_all() # in case junk tables are lying around in the test database
        self.assertSuccess(self.cmd('create', repos_path, repos_name))
        self.exitcode(self.cmd('drop_version_control', self.url, repos_path))
        self.assertSuccess(self.cmd('version_control', self.url, repos_path))
        self.assertEquals(self.cmd_version(repos_path), 0)
        self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)
# Setup helper script.
model_module = 'testmodel:meta'
        self.assertSuccess(self.cmd('manage', script_path, '--repository=%s --url=%s --model=%s' % (repos_path, self.url, model_module)))
self.assert_(os.path.exists(script_path))
# Write old and new model to disk - old model is empty!
        script_preamble = """
from sqlalchemy import *
meta = MetaData()
""".replace("\n ","\n")
        script_text = """
""".replace("\n ","\n")
open(old_model_path, 'w').write(script_preamble + script_text)
        script_text = """
tmp_account_rundiffs = Table('tmp_account_rundiffs',meta,
Column('id',Integer,primary_key=True),
Column('login',String(40)),
Column('passwd',String(40)),
)
""".replace("\n ","\n")
open(model_path, 'w').write(script_preamble + script_text)
# Model is defined but database is empty.
output, exitcode = self.output_and_exitcode('%s %s compare_model_to_db' % (sys.executable, script_path))
assert "tables missing in database: tmp_account_rundiffs" in output, output
# Test Deprecation
output, exitcode = self.output_and_exitcode('%s %s compare_model_to_db --model=testmodel.meta' % (sys.executable, script_path))
assert "tables missing in database: tmp_account_rundiffs" in output, output
# Update db to latest model.
output, exitcode = self.output_and_exitcode('%s %s update_db_from_model' % (sys.executable, script_path))
self.assertEquals(exitcode, None)
        self.assertEquals(self.cmd_version(repos_path), 0)
        self.assertEquals(self.cmd_db_version(self.url, repos_path), 0)  # version did not get bumped yet because new version not yet created
output, exitcode = self.output_and_exitcode('%s %s compare_model_to_db' % (sys.executable, script_path))
assert "No schema diffs" in output, output
output, exitcode = self.output_and_exitcode('%s %s create_model' % (sys.executable, script_path))
output = output.replace(genmodel.HEADER.strip(), '') # need strip b/c output_and_exitcode called strip
assert """tmp_account_rundiffs = Table('tmp_account_rundiffs', meta,
Column('id', Integer(), primary_key=True, nullable=False),
Column('login', String(length=None, convert_unicode=False, assert_unicode=None)),
Column('passwd', String(length=None, convert_unicode=False, assert_unicode=None)),""" in output.strip(), output
# We're happy with db changes, make first db upgrade script to go from version 0 -> 1.
output, exitcode = self.output_and_exitcode('%s %s make_update_script_for_model' % (sys.executable, script_path)) # intentionally omit a parameter
self.assertEquals('Not enough arguments' in output, True)
output, exitcode = self.output_and_exitcode('%s %s make_update_script_for_model --oldmodel=oldtestmodel:meta' % (sys.executable, script_path))
self.assertEqualsIgnoreWhitespace(output,
"""from sqlalchemy import *
from migrate import *
meta = MetaData()
tmp_account_rundiffs = Table('tmp_account_rundiffs', meta,
Column('id', Integer(), primary_key=True, nullable=False),
Column('login', String(length=40, convert_unicode=False, assert_unicode=None)),
Column('passwd', String(length=40, convert_unicode=False, assert_unicode=None)),
)
def upgrade(migrate_engine):
# Upgrade operations go here. Don't create your own engine; bind migrate_engine
# to your metadata
meta.bind(migrate_engine)
tmp_account_rundiffs.create()
def downgrade(migrate_engine):
# Operations to reverse the above upgrade go here.
meta.bind(migrate_engine)
tmp_account_rundiffs.drop()""")
# Save the upgrade script.
self.assertSuccess(self.cmd('script', '--repository=%s' % repos_path, 'Desc'))
upgrade_script_path = '%s/versions/001_Desc.py' % repos_path
open(upgrade_script_path, 'w').write(output)
#output, exitcode = self.output_and_exitcode('%s %s test %s' % (sys.executable, script_path, upgrade_script_path)) # no, we already upgraded the db above
#self.assertEquals(output, "")
output, exitcode = self.output_and_exitcode('%s %s update_db_from_model' % (sys.executable, script_path)) # bump the db_version
self.assertEquals(exitcode, None)
        self.assertEquals(self.cmd_version(repos_path), 1)
        self.assertEquals(self.cmd_db_version(self.url, repos_path), 1)


@ -0,0 +1,17 @@
from test import fixture
from migrate.versioning.repository import *
import os
class TestPathed(fixture.Base):
def test_templates(self):
"""We can find the path to all repository templates"""
path = str(template)
self.assert_(os.path.exists(path))
def test_repository(self):
"""We can find the path to the default repository"""
path = template.get_repository()
self.assert_(os.path.exists(path))
def test_script(self):
"""We can find the path to the default migration script"""
path = template.get_script()
self.assert_(os.path.exists(path))


@ -0,0 +1,87 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from sqlalchemy import *
from test import fixture
from migrate.versioning.util import *
class TestUtil(fixture.Pathed):
def test_construct_engine(self):
"""Construct engine the smart way"""
url = 'sqlite://'
engine = construct_engine(url)
self.assert_(engine.name == 'sqlite')
# keyword arg
engine = construct_engine(url, engine_arg_assert_unicode=True)
self.assertTrue(engine.dialect.assert_unicode)
# dict
engine = construct_engine(url, engine_dict={'assert_unicode': True})
self.assertTrue(engine.dialect.assert_unicode)
# engine parameter
engine_orig = create_engine('sqlite://')
engine = construct_engine(engine_orig)
self.assertEqual(engine, engine_orig)
        # test precedence
engine = construct_engine(url, engine_dict={'assert_unicode': False},
engine_arg_assert_unicode=True)
self.assertTrue(engine.dialect.assert_unicode)
# deprecated echo= parameter
engine = construct_engine(url, echo='True')
self.assertTrue(engine.echo)
def test_asbool(self):
"""test asbool parsing"""
result = asbool(True)
self.assertEqual(result, True)
result = asbool(False)
self.assertEqual(result, False)
result = asbool('y')
self.assertEqual(result, True)
result = asbool('n')
self.assertEqual(result, False)
self.assertRaises(ValueError, asbool, 'test')
self.assertRaises(ValueError, asbool, object)
def test_load_model(self):
"""load model from dotted name"""
model_path = os.path.join(self.temp_usable_dir, 'test_load_model.py')
f = open(model_path, 'w')
f.write("class FakeFloat(int): pass")
f.close()
FakeFloat = load_model('test_load_model.FakeFloat')
self.assert_(isinstance(FakeFloat(), int))
FakeFloat = load_model('test_load_model:FakeFloat')
self.assert_(isinstance(FakeFloat(), int))
FakeFloat = load_model(FakeFloat)
self.assert_(isinstance(FakeFloat(), int))
def test_guess_obj_type(self):
"""guess object type from string"""
result = guess_obj_type('7')
self.assertEqual(result, 7)
result = guess_obj_type('y')
self.assertEqual(result, True)
result = guess_obj_type('test')
self.assertEqual(result, 'test')


@ -0,0 +1,142 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from test import fixture
from migrate.versioning.version import *
class TestVerNum(fixture.Base):
def test_invalid(self):
"""Disallow invalid version numbers"""
versions = ('-1', -1, 'Thirteen', '')
for version in versions:
self.assertRaises(ValueError, VerNum, version)
def test_is(self):
        """Two versions with the same number should be the same object"""
a = VerNum(1)
b = VerNum(1)
self.assert_(a is b)
self.assertEqual(VerNum(VerNum(2)), VerNum(2))
def test_add(self):
self.assertEqual(VerNum(1) + VerNum(1), VerNum(2))
self.assertEqual(VerNum(1) + 1, 2)
self.assertEqual(VerNum(1) + 1, '2')
self.assert_(isinstance(VerNum(1) + 1, VerNum))
def test_sub(self):
self.assertEqual(VerNum(1) - 1, 0)
self.assert_(isinstance(VerNum(1) - 1, VerNum))
self.assertRaises(ValueError, lambda: VerNum(0) - 1)
def test_eq(self):
"""Two versions are equal"""
self.assertEqual(VerNum(1), VerNum('1'))
self.assertEqual(VerNum(1), 1)
self.assertEqual(VerNum(1), '1')
self.assertNotEqual(VerNum(1), 2)
def test_ne(self):
self.assert_(VerNum(1) != 2)
self.assertFalse(VerNum(1) != 1)
def test_lt(self):
self.assertFalse(VerNum(1) < 1)
self.assert_(VerNum(1) < 2)
self.assertFalse(VerNum(2) < 1)
def test_le(self):
self.assert_(VerNum(1) <= 1)
self.assert_(VerNum(1) <= 2)
self.assertFalse(VerNum(2) <= 1)
def test_gt(self):
self.assertFalse(VerNum(1) > 1)
self.assertFalse(VerNum(1) > 2)
self.assert_(VerNum(2) > 1)
def test_ge(self):
self.assert_(VerNum(1) >= 1)
self.assert_(VerNum(2) >= 1)
self.assertFalse(VerNum(1) >= 2)
class TestVersion(fixture.Pathed):
def setUp(self):
super(TestVersion, self).setUp()
def test_str_to_filename(self):
self.assertEquals(str_to_filename(''), '')
self.assertEquals(str_to_filename('__'), '_')
self.assertEquals(str_to_filename('a'), 'a')
self.assertEquals(str_to_filename('Abc Def'), 'Abc_Def')
self.assertEquals(str_to_filename('Abc "D" Ef'), 'Abc_D_Ef')
self.assertEquals(str_to_filename("Abc's Stuff"), 'Abc_s_Stuff')
self.assertEquals(str_to_filename("a b"), 'a_b')
def test_collection(self):
"""Let's see how we handle versions collection"""
coll = Collection(self.temp_usable_dir)
coll.create_new_python_version("foo bar")
coll.create_new_sql_version("postgres")
coll.create_new_sql_version("sqlite")
coll.create_new_python_version("")
self.assertEqual(coll.latest, 4)
self.assertEqual(len(coll.versions), 4)
self.assertEqual(coll.version(4), coll.version(coll.latest))
coll2 = Collection(self.temp_usable_dir)
self.assertEqual(coll.versions, coll2.versions)
#def test_collection_unicode(self):
# pass
def test_create_new_python_version(self):
coll = Collection(self.temp_usable_dir)
coll.create_new_python_version("foo bar")
ver = coll.version()
self.assert_(ver.script().source())
def test_create_new_sql_version(self):
coll = Collection(self.temp_usable_dir)
coll.create_new_sql_version("sqlite")
ver = coll.version()
ver_up = ver.script('sqlite', 'upgrade')
ver_down = ver.script('sqlite', 'downgrade')
ver_up.source()
ver_down.source()
def test_selection(self):
"""Verify right sql script is selected"""
# Create empty directory.
path = self.tmp_repos()
os.mkdir(path)
# Create files -- files must be present or you'll get an exception later.
python_file = '001_initial_.py'
sqlite_upgrade_file = '001_sqlite_upgrade.sql'
default_upgrade_file = '001_default_upgrade.sql'
for file_ in [sqlite_upgrade_file, default_upgrade_file, python_file]:
filepath = '%s/%s' % (path, file_)
open(filepath, 'w').close()
ver = Version(1, path, [sqlite_upgrade_file])
self.assertEquals(os.path.basename(ver.script('sqlite', 'upgrade').path), sqlite_upgrade_file)
ver = Version(1, path, [default_upgrade_file])
self.assertEquals(os.path.basename(ver.script('default', 'upgrade').path), default_upgrade_file)
ver = Version(1, path, [sqlite_upgrade_file, default_upgrade_file])
self.assertEquals(os.path.basename(ver.script('sqlite', 'upgrade').path), sqlite_upgrade_file)
ver = Version(1, path, [sqlite_upgrade_file, default_upgrade_file, python_file])
self.assertEquals(os.path.basename(ver.script('postgres', 'upgrade').path), default_upgrade_file)
ver = Version(1, path, [sqlite_upgrade_file, python_file])
self.assertEquals(os.path.basename(ver.script('postgres', 'upgrade').path), python_file)

14
test_db.cfg Normal file

@ -0,0 +1,14 @@
# test_db.cfg
#
# This file contains a list of connection strings which will be used by
# database tests. Tests will be executed once for each string in this file.
# You should be sure that the database used for the test doesn't contain any
# important data. See README for more information.
#
# The string '__tmp__' is substituted for a temporary file in each connection
# string. This is useful for sqlite tests.
sqlite:///__tmp__
postgres://migrate:UPd2icyw@localhost/migrate_test
mysql://migrate:fTP82sjf@localhost/migrate_test
oracle://migrate:FdnjJK8s@localhost
firebird://migrate:BowV7EEm@localhost//var/db/migrate.gdb

13
test_db.cfg.tmpl Normal file

@ -0,0 +1,13 @@
# test_db.cfg
#
# This file contains a list of connection strings which will be used by
# database tests. Tests will be executed once for each string in this file.
# You should be sure that the database used for the test doesn't contain any
# important data. See README for more information.
#
# The string '__tmp__' is substituted for a temporary file in each connection
# string. This is useful for sqlite tests.
sqlite:///__tmp__
postgres://scott:tiger@localhost/test
mysql://scott:tiger@localhost/test
oracle://scott:tiger@localhost