Added alembic environment

Added alembic and created an initial database revision.

For the sake of testing with real databases, tools/test-setup.sh is added
so that CI can start up a mysql and/or postgresql database on which to
test migrations. The OpportunisticDBMixin from oslo_db was originally
used for this, but because of the way placement configures the database
this proved hard to debug and get correct, so we use a custom approach
that leverages the available oslo_db tools but with more visibility.

Because these tests mix sqlite, mysql and postgresql settings in
potentially the same process, we need a way to ensure that global
settings for databases do not leak into other tests. This is done with
a reset() on the placement db fixture, called by the mysql and
postgresql tests before and after they run. We also need careful
management of that cleanup when these tests are skipped (because the db
server or database is not available).

Those tests will confirm that the models match the migrations, so we
also need to remove model files that no longer matter.

Since we no longer need to distinguish among multiple database files, we
can simplify the naming of these files.

Co-Authored-By: Chris Dent <cdent@anticdent.org>
Change-Id: I51ed1e4e7dbb76a3eab23af7d0d106f716059112
EdLeafe 2018-10-15 20:39:46 +00:00
parent 6bb6c0bbdd
commit c8ab9ff906
17 changed files with 1025 additions and 751 deletions


@@ -0,0 +1,76 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = %(here)s/alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# timezone to use when rendering the date
# within the migration file as well as the filename.
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; this defaults
# to alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat alembic/versions
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
# NOTE: this next line is commented out because it is set in
# CONF.placement_database.connection
#sqlalchemy.url = driver://user:pass@localhost/dbname
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
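Note the unset sqlalchemy.url above: this ini is deliberately only half of the configuration. A minimal sketch of loading it programmatically (mirroring the _alembic_config() helper added to migration.py later in this commit; env.py then injects the URL from CONF.placement_database.connection):

import os
from alembic import config as alembic_config

# alembic.ini sits next to the migration module; __file__ here stands in
# for that module's path.
path = os.path.join(os.path.dirname(__file__), "alembic.ini")
cfg = alembic_config.Config(path)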


@@ -0,0 +1,93 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import with_statement
from logging.config import fileConfig
from alembic import context
from placement import conf
from placement.db.sqlalchemy import models
from placement import db_api as placement_db
CONF = conf.CONF
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = models.BASE.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = CONF.placement_database.connection
context.configure(
url=url, target_metadata=target_metadata, literal_binds=True)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
# If CONF and the database are not already configured, set them up. This
# can happen when using the alembic command line tool.
if not CONF.placement_database.connection:
CONF([], project="placement", default_config_files=None)
placement_db.configure(CONF)
connectable = placement_db.get_placement_engine()
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
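For orientation, a short hedged sketch of what drives each branch above, using stock Alembic commands and assuming placement's configuration is already loadable: a plain upgrade goes through run_migrations_online(), while asking Alembic for SQL output (offline mode) goes through run_migrations_offline() and needs no DBAPI at all.

from alembic import command
from placement.db.sqlalchemy import migration

cfg = migration._alembic_config()
command.upgrade(cfg, "head")            # online: engine + live connection
command.upgrade(cfg, "head", sql=True)  # offline: emits migration SQL instead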


@@ -0,0 +1,24 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}


@@ -0,0 +1,188 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Initial
Revision ID: b4ed3a175331
Revises:
Create Date: 2018-10-19 18:27:55.950383
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'b4ed3a175331'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
op.create_table('allocations',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('resource_provider_id', sa.Integer(), nullable=False),
sa.Column('consumer_id', sa.String(length=36), nullable=False),
sa.Column('resource_class_id', sa.Integer(), nullable=False),
sa.Column('used', sa.Integer(), nullable=False),
sa.PrimaryKeyConstraint('id'),
)
op.create_index('allocations_resource_provider_class_used_idx',
'allocations', ['resource_provider_id', 'resource_class_id',
'used'], unique=False)
op.create_index('allocations_resource_class_id_idx', 'allocations',
['resource_class_id'], unique=False)
op.create_index('allocations_consumer_id_idx', 'allocations',
['consumer_id'], unique=False)
op.create_table('consumers',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('uuid', sa.String(length=36), nullable=False),
sa.Column('project_id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=False),
sa.Column('generation', sa.Integer(), server_default=sa.text('0'),
nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('uuid', name='uniq_consumers0uuid'),
)
op.create_index('consumers_project_id_user_id_uuid_idx', 'consumers',
['project_id', 'user_id', 'uuid'], unique=False)
op.create_index('consumers_project_id_uuid_idx', 'consumers',
['project_id', 'uuid'], unique=False)
op.create_table('inventories',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('resource_provider_id', sa.Integer(), nullable=False),
sa.Column('resource_class_id', sa.Integer(), nullable=False),
sa.Column('total', sa.Integer(), nullable=False),
sa.Column('reserved', sa.Integer(), nullable=False),
sa.Column('min_unit', sa.Integer(), nullable=False),
sa.Column('max_unit', sa.Integer(), nullable=False),
sa.Column('step_size', sa.Integer(), nullable=False),
sa.Column('allocation_ratio', sa.Float(), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('resource_provider_id', 'resource_class_id',
name='uniq_inventories0resource_provider_resource_class'),
)
op.create_index('inventories_resource_class_id_idx', 'inventories',
['resource_class_id'], unique=False)
op.create_index('inventories_resource_provider_id_idx', 'inventories',
['resource_provider_id'], unique=False)
op.create_index('inventories_resource_provider_resource_class_idx',
'inventories', ['resource_provider_id', 'resource_class_id'],
unique=False)
op.create_table('placement_aggregates',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('uuid', sa.String(length=36), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('uuid', name='uniq_placement_aggregates0uuid')
)
op.create_index(op.f('ix_placement_aggregates_uuid'),
'placement_aggregates', ['uuid'], unique=False)
op.create_table('projects',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('external_id', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('external_id',
name='uniq_projects0external_id'),
)
op.create_table('resource_classes',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name', name='uniq_resource_classes0name'),
)
op.create_table('resource_provider_aggregates',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('resource_provider_id', sa.Integer(), nullable=False),
sa.Column('aggregate_id', sa.Integer(), nullable=False),
sa.PrimaryKeyConstraint('resource_provider_id', 'aggregate_id'),
)
op.create_index('resource_provider_aggregates_aggregate_id_idx',
'resource_provider_aggregates', ['aggregate_id'], unique=False)
op.create_table('resource_providers',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('uuid', sa.String(length=36), nullable=False),
sa.Column('name', sa.Unicode(length=200), nullable=True),
sa.Column('generation', sa.Integer(), nullable=True),
sa.Column('root_provider_id', sa.Integer(), nullable=True),
sa.Column('parent_provider_id', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['parent_provider_id'],
['resource_providers.id']),
sa.ForeignKeyConstraint(['root_provider_id'],
['resource_providers.id']),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name', name='uniq_resource_providers0name'),
sa.UniqueConstraint('uuid', name='uniq_resource_providers0uuid'),
)
op.create_index('resource_providers_name_idx', 'resource_providers',
['name'], unique=False)
op.create_index('resource_providers_parent_provider_id_idx',
'resource_providers', ['parent_provider_id'], unique=False)
op.create_index('resource_providers_root_provider_id_idx',
'resource_providers', ['root_provider_id'], unique=False)
op.create_index('resource_providers_uuid_idx', 'resource_providers',
['uuid'], unique=False)
op.create_table('traits',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('name', sa.Unicode(length=255), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name', name='uniq_traits0name'),
)
op.create_table('users',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('external_id', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('external_id', name='uniq_users0external_id'),
)
op.create_table('resource_provider_traits',
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('trait_id', sa.Integer(), nullable=False),
sa.Column('resource_provider_id', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['resource_provider_id'],
['resource_providers.id'], ),
sa.ForeignKeyConstraint(['trait_id'], ['traits.id'], ),
sa.PrimaryKeyConstraint('trait_id', 'resource_provider_id'),
)
op.create_index('resource_provider_traits_resource_provider_trait_idx',
'resource_provider_traits', ['resource_provider_id', 'trait_id'],
unique=False)
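A quick way to sanity-check what this revision produced, using plain SQLAlchemy inspection (a sketch; the engine is whatever the environment configured):

import sqlalchemy as sa

def created_tables(engine):
    # Reflect the table names after running the upgrade() above.
    return sorted(sa.inspect(engine).get_table_names())

# Expected to include (besides alembic_version): allocations, consumers,
# inventories, placement_aggregates, projects, resource_classes,
# resource_provider_aggregates, resource_provider_traits,
# resource_providers, traits, users.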


@@ -1,658 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import models
from oslo_log import log as logging
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy.dialects.mysql import MEDIUMTEXT
from sqlalchemy import Enum
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Float
from sqlalchemy import ForeignKey
from sqlalchemy import Index
from sqlalchemy import Integer
from sqlalchemy import orm
from sqlalchemy.orm import backref
from sqlalchemy import schema
from sqlalchemy import String
from sqlalchemy import Text
from sqlalchemy import Unicode
LOG = logging.getLogger(__name__)
def MediumText():
return Text().with_variant(MEDIUMTEXT(), 'mysql')
class _NovaAPIBase(models.ModelBase, models.TimestampMixin):
pass
API_BASE = declarative_base(cls=_NovaAPIBase)
class AggregateHost(API_BASE):
"""Represents a host that is member of an aggregate."""
__tablename__ = 'aggregate_hosts'
__table_args__ = (schema.UniqueConstraint(
"host", "aggregate_id",
name="uniq_aggregate_hosts0host0aggregate_id"
),
)
id = Column(Integer, primary_key=True, autoincrement=True)
host = Column(String(255))
aggregate_id = Column(Integer, ForeignKey('aggregates.id'), nullable=False)
class AggregateMetadata(API_BASE):
"""Represents a metadata key/value pair for an aggregate."""
__tablename__ = 'aggregate_metadata'
__table_args__ = (
schema.UniqueConstraint("aggregate_id", "key",
name="uniq_aggregate_metadata0aggregate_id0key"
),
Index('aggregate_metadata_key_idx', 'key'),
)
id = Column(Integer, primary_key=True)
key = Column(String(255), nullable=False)
value = Column(String(255), nullable=False)
aggregate_id = Column(Integer, ForeignKey('aggregates.id'), nullable=False)
class Aggregate(API_BASE):
"""Represents a cluster of hosts that exists in this zone."""
__tablename__ = 'aggregates'
__table_args__ = (Index('aggregate_uuid_idx', 'uuid'),
schema.UniqueConstraint(
"name", name="uniq_aggregate0name")
)
id = Column(Integer, primary_key=True, autoincrement=True)
uuid = Column(String(36))
name = Column(String(255))
_hosts = orm.relationship(AggregateHost,
primaryjoin='Aggregate.id == AggregateHost.aggregate_id',
cascade='delete')
_metadata = orm.relationship(AggregateMetadata,
primaryjoin='Aggregate.id == AggregateMetadata.aggregate_id',
cascade='delete')
@property
def _extra_keys(self):
return ['hosts', 'metadetails', 'availability_zone']
@property
def hosts(self):
return [h.host for h in self._hosts]
@property
def metadetails(self):
return {m.key: m.value for m in self._metadata}
@property
def availability_zone(self):
if 'availability_zone' not in self.metadetails:
return None
return self.metadetails['availability_zone']
class CellMapping(API_BASE):
"""Contains information on communicating with a cell"""
__tablename__ = 'cell_mappings'
__table_args__ = (Index('uuid_idx', 'uuid'),
schema.UniqueConstraint('uuid',
name='uniq_cell_mappings0uuid'))
id = Column(Integer, primary_key=True)
uuid = Column(String(36), nullable=False)
name = Column(String(255))
transport_url = Column(Text())
database_connection = Column(Text())
disabled = Column(Boolean, default=False)
host_mapping = orm.relationship('HostMapping',
backref=backref('cell_mapping', uselist=False),
foreign_keys=id,
primaryjoin=(
'CellMapping.id == HostMapping.cell_id'))
class InstanceMapping(API_BASE):
"""Contains the mapping of an instance to which cell it is in"""
__tablename__ = 'instance_mappings'
__table_args__ = (Index('project_id_idx', 'project_id'),
Index('instance_uuid_idx', 'instance_uuid'),
schema.UniqueConstraint('instance_uuid',
name='uniq_instance_mappings0instance_uuid'))
id = Column(Integer, primary_key=True)
instance_uuid = Column(String(36), nullable=False)
cell_id = Column(Integer, ForeignKey('cell_mappings.id'),
nullable=True)
project_id = Column(String(255), nullable=False)
queued_for_delete = Column(Boolean)
cell_mapping = orm.relationship('CellMapping',
backref=backref('instance_mapping', uselist=False),
foreign_keys=cell_id,
primaryjoin=('InstanceMapping.cell_id == CellMapping.id'))
class HostMapping(API_BASE):
"""Contains mapping of a compute host to which cell it is in"""
__tablename__ = "host_mappings"
__table_args__ = (Index('host_idx', 'host'),
schema.UniqueConstraint('host',
name='uniq_host_mappings0host'))
id = Column(Integer, primary_key=True)
cell_id = Column(Integer, ForeignKey('cell_mappings.id'),
nullable=False)
host = Column(String(255), nullable=False)
class RequestSpec(API_BASE):
"""Represents the information passed to the scheduler."""
__tablename__ = 'request_specs'
__table_args__ = (
Index('request_spec_instance_uuid_idx', 'instance_uuid'),
schema.UniqueConstraint('instance_uuid',
name='uniq_request_specs0instance_uuid'),
)
id = Column(Integer, primary_key=True)
instance_uuid = Column(String(36), nullable=False)
spec = Column(MediumText(), nullable=False)
class Flavors(API_BASE):
"""Represents possible flavors for instances"""
__tablename__ = 'flavors'
__table_args__ = (
schema.UniqueConstraint("flavorid", name="uniq_flavors0flavorid"),
schema.UniqueConstraint("name", name="uniq_flavors0name"))
id = Column(Integer, primary_key=True)
name = Column(String(255), nullable=False)
memory_mb = Column(Integer, nullable=False)
vcpus = Column(Integer, nullable=False)
root_gb = Column(Integer)
ephemeral_gb = Column(Integer)
flavorid = Column(String(255), nullable=False)
swap = Column(Integer, nullable=False, default=0)
rxtx_factor = Column(Float, default=1)
vcpu_weight = Column(Integer)
disabled = Column(Boolean, default=False)
is_public = Column(Boolean, default=True)
description = Column(Text)
class FlavorExtraSpecs(API_BASE):
"""Represents additional specs as key/value pairs for a flavor"""
__tablename__ = 'flavor_extra_specs'
__table_args__ = (
Index('flavor_extra_specs_flavor_id_key_idx', 'flavor_id', 'key'),
schema.UniqueConstraint('flavor_id', 'key',
name='uniq_flavor_extra_specs0flavor_id0key'),
{'mysql_collate': 'utf8_bin'},
)
id = Column(Integer, primary_key=True)
key = Column(String(255), nullable=False)
value = Column(String(255))
flavor_id = Column(Integer, ForeignKey('flavors.id'), nullable=False)
flavor = orm.relationship(Flavors, backref='extra_specs',
foreign_keys=flavor_id,
primaryjoin=(
'FlavorExtraSpecs.flavor_id == Flavors.id'))
class FlavorProjects(API_BASE):
"""Represents projects associated with flavors"""
__tablename__ = 'flavor_projects'
__table_args__ = (schema.UniqueConstraint('flavor_id', 'project_id',
name='uniq_flavor_projects0flavor_id0project_id'),)
id = Column(Integer, primary_key=True)
flavor_id = Column(Integer, ForeignKey('flavors.id'), nullable=False)
project_id = Column(String(255), nullable=False)
flavor = orm.relationship(Flavors, backref='projects',
foreign_keys=flavor_id,
primaryjoin=(
'FlavorProjects.flavor_id == Flavors.id'))
class BuildRequest(API_BASE):
"""Represents the information passed to the scheduler."""
__tablename__ = 'build_requests'
__table_args__ = (
Index('build_requests_instance_uuid_idx', 'instance_uuid'),
Index('build_requests_project_id_idx', 'project_id'),
schema.UniqueConstraint('instance_uuid',
name='uniq_build_requests0instance_uuid'),
)
id = Column(Integer, primary_key=True)
instance_uuid = Column(String(36))
project_id = Column(String(255), nullable=False)
instance = Column(MediumText())
block_device_mappings = Column(MediumText())
tags = Column(Text())
# TODO(alaski): Drop these from the db in Ocata
# columns_to_drop = ['request_spec_id', 'user_id', 'display_name',
# 'instance_metadata', 'progress', 'vm_state', 'task_state',
# 'image_ref', 'access_ip_v4', 'access_ip_v6', 'info_cache',
# 'security_groups', 'config_drive', 'key_name', 'locked_by',
# 'reservation_id', 'launch_index', 'hostname', 'kernel_id',
# 'ramdisk_id', 'root_device_name', 'user_data']
class KeyPair(API_BASE):
"""Represents a public key pair for ssh / WinRM."""
__tablename__ = 'key_pairs'
__table_args__ = (
schema.UniqueConstraint("user_id", "name",
name="uniq_key_pairs0user_id0name"),
)
id = Column(Integer, primary_key=True, nullable=False)
name = Column(String(255), nullable=False)
user_id = Column(String(255), nullable=False)
fingerprint = Column(String(255))
public_key = Column(Text())
type = Column(Enum('ssh', 'x509', name='keypair_types'),
nullable=False, server_default='ssh')
class ResourceClass(API_BASE):
"""Represents the type of resource for an inventory or allocation."""
__tablename__ = 'resource_classes'
__table_args__ = (
schema.UniqueConstraint("name", name="uniq_resource_classes0name"),
)
id = Column(Integer, primary_key=True, nullable=False)
name = Column(String(255), nullable=False)
class ResourceProvider(API_BASE):
"""Represents a mapping to a providers of resources."""
__tablename__ = "resource_providers"
__table_args__ = (
Index('resource_providers_uuid_idx', 'uuid'),
schema.UniqueConstraint('uuid',
name='uniq_resource_providers0uuid'),
Index('resource_providers_name_idx', 'name'),
Index('resource_providers_root_provider_id_idx',
'root_provider_id'),
Index('resource_providers_parent_provider_id_idx',
'parent_provider_id'),
schema.UniqueConstraint('name',
name='uniq_resource_providers0name')
)
id = Column(Integer, primary_key=True, nullable=False)
uuid = Column(String(36), nullable=False)
name = Column(Unicode(200), nullable=True)
generation = Column(Integer, default=0)
# Represents the root of the "tree" that the provider belongs to
root_provider_id = Column(Integer, ForeignKey('resource_providers.id'),
nullable=True)
# The immediate parent provider of this provider, or NULL if there is no
# parent. If parent_provider_id == NULL then root_provider_id == id
parent_provider_id = Column(Integer, ForeignKey('resource_providers.id'),
nullable=True)
class Inventory(API_BASE):
"""Represents a quantity of available resource."""
__tablename__ = "inventories"
__table_args__ = (
Index('inventories_resource_provider_id_idx',
'resource_provider_id'),
Index('inventories_resource_class_id_idx',
'resource_class_id'),
Index('inventories_resource_provider_resource_class_idx',
'resource_provider_id', 'resource_class_id'),
schema.UniqueConstraint('resource_provider_id', 'resource_class_id',
name='uniq_inventories0resource_provider_resource_class')
)
id = Column(Integer, primary_key=True, nullable=False)
resource_provider_id = Column(Integer, nullable=False)
resource_class_id = Column(Integer, nullable=False)
total = Column(Integer, nullable=False)
reserved = Column(Integer, nullable=False)
min_unit = Column(Integer, nullable=False)
max_unit = Column(Integer, nullable=False)
step_size = Column(Integer, nullable=False)
allocation_ratio = Column(Float, nullable=False)
resource_provider = orm.relationship(
"ResourceProvider",
primaryjoin=('Inventory.resource_provider_id == '
'ResourceProvider.id'),
foreign_keys=resource_provider_id)
class Allocation(API_BASE):
"""A use of inventory."""
__tablename__ = "allocations"
__table_args__ = (
Index('allocations_resource_provider_class_used_idx',
'resource_provider_id', 'resource_class_id',
'used'),
Index('allocations_resource_class_id_idx',
'resource_class_id'),
Index('allocations_consumer_id_idx', 'consumer_id')
)
id = Column(Integer, primary_key=True, nullable=False)
resource_provider_id = Column(Integer, nullable=False)
consumer_id = Column(String(36), nullable=False)
resource_class_id = Column(Integer, nullable=False)
used = Column(Integer, nullable=False)
resource_provider = orm.relationship(
"ResourceProvider",
primaryjoin=('Allocation.resource_provider_id == '
'ResourceProvider.id'),
foreign_keys=resource_provider_id)
class ResourceProviderAggregate(API_BASE):
"""Associate a resource provider with an aggregate."""
__tablename__ = 'resource_provider_aggregates'
__table_args__ = (
Index('resource_provider_aggregates_aggregate_id_idx',
'aggregate_id'),
)
resource_provider_id = Column(Integer, primary_key=True, nullable=False)
aggregate_id = Column(Integer, primary_key=True, nullable=False)
class PlacementAggregate(API_BASE):
"""A grouping of resource providers."""
__tablename__ = 'placement_aggregates'
__table_args__ = (
schema.UniqueConstraint("uuid", name="uniq_placement_aggregates0uuid"),
)
id = Column(Integer, primary_key=True, autoincrement=True)
uuid = Column(String(36), index=True)
class InstanceGroupMember(API_BASE):
"""Represents the members for an instance group."""
__tablename__ = 'instance_group_member'
__table_args__ = (
Index('instance_group_member_instance_idx', 'instance_uuid'),
)
id = Column(Integer, primary_key=True, nullable=False)
instance_uuid = Column(String(255))
group_id = Column(Integer, ForeignKey('instance_groups.id'),
nullable=False)
class InstanceGroupPolicy(API_BASE):
"""Represents the policy type for an instance group."""
__tablename__ = 'instance_group_policy'
__table_args__ = (
Index('instance_group_policy_policy_idx', 'policy'),
)
id = Column(Integer, primary_key=True, nullable=False)
policy = Column(String(255))
group_id = Column(Integer, ForeignKey('instance_groups.id'),
nullable=False)
rules = Column(Text)
class InstanceGroup(API_BASE):
"""Represents an instance group.
A group will maintain a collection of instances and the relationship
between them.
"""
__tablename__ = 'instance_groups'
__table_args__ = (
schema.UniqueConstraint('uuid', name='uniq_instance_groups0uuid'),
)
id = Column(Integer, primary_key=True, autoincrement=True)
user_id = Column(String(255))
project_id = Column(String(255))
uuid = Column(String(36), nullable=False)
name = Column(String(255))
_policies = orm.relationship(InstanceGroupPolicy,
primaryjoin='InstanceGroup.id == InstanceGroupPolicy.group_id')
_members = orm.relationship(InstanceGroupMember,
primaryjoin='InstanceGroup.id == InstanceGroupMember.group_id')
@property
def policy(self):
if len(self._policies) > 1:
msg = ("More than one policy (%(policies)s) is associated with "
"group %(group_name)s, only the first one in the list "
"would be returned.")
LOG.warning(msg, {"policies": [p.policy for p in self._policies],
"group_name": self.name})
return self._policies[0] if self._policies else None
@property
def members(self):
return [m.instance_uuid for m in self._members]
class Quota(API_BASE):
"""Represents a single quota override for a project.
If there is no row for a given project id and resource, then the
default for the quota class is used. If there is no row for a
given quota class and resource, then the default for the
deployment is used. If the row is present but the hard limit is
Null, then the resource is unlimited.
"""
__tablename__ = 'quotas'
__table_args__ = (
schema.UniqueConstraint("project_id", "resource",
name="uniq_quotas0project_id0resource"
),
)
id = Column(Integer, primary_key=True)
project_id = Column(String(255))
resource = Column(String(255), nullable=False)
hard_limit = Column(Integer)
class ProjectUserQuota(API_BASE):
"""Represents a single quota override for a user with in a project."""
__tablename__ = 'project_user_quotas'
uniq_name = "uniq_project_user_quotas0user_id0project_id0resource"
__table_args__ = (
schema.UniqueConstraint("user_id", "project_id", "resource",
name=uniq_name),
Index('project_user_quotas_project_id_idx',
'project_id'),
Index('project_user_quotas_user_id_idx',
'user_id',)
)
id = Column(Integer, primary_key=True, nullable=False)
project_id = Column(String(255), nullable=False)
user_id = Column(String(255), nullable=False)
resource = Column(String(255), nullable=False)
hard_limit = Column(Integer)
class QuotaClass(API_BASE):
"""Represents a single quota override for a quota class.
If there is no row for a given quota class and resource, then the
default for the deployment is used. If the row is present but the
hard limit is Null, then the resource is unlimited.
"""
__tablename__ = 'quota_classes'
__table_args__ = (
Index('quota_classes_class_name_idx', 'class_name'),
)
id = Column(Integer, primary_key=True)
class_name = Column(String(255))
resource = Column(String(255))
hard_limit = Column(Integer)
class QuotaUsage(API_BASE):
"""Represents the current usage for a given resource."""
__tablename__ = 'quota_usages'
__table_args__ = (
Index('quota_usages_project_id_idx', 'project_id'),
Index('quota_usages_user_id_idx', 'user_id'),
)
id = Column(Integer, primary_key=True)
project_id = Column(String(255))
user_id = Column(String(255))
resource = Column(String(255), nullable=False)
in_use = Column(Integer, nullable=False)
reserved = Column(Integer, nullable=False)
@property
def total(self):
return self.in_use + self.reserved
until_refresh = Column(Integer)
class Reservation(API_BASE):
"""Represents a resource reservation for quotas."""
__tablename__ = 'reservations'
__table_args__ = (
Index('reservations_project_id_idx', 'project_id'),
Index('reservations_uuid_idx', 'uuid'),
Index('reservations_expire_idx', 'expire'),
Index('reservations_user_id_idx', 'user_id'),
)
id = Column(Integer, primary_key=True, nullable=False)
uuid = Column(String(36), nullable=False)
usage_id = Column(Integer, ForeignKey('quota_usages.id'), nullable=False)
project_id = Column(String(255))
user_id = Column(String(255))
resource = Column(String(255))
delta = Column(Integer, nullable=False)
expire = Column(DateTime)
usage = orm.relationship(
"QuotaUsage",
foreign_keys=usage_id,
primaryjoin='Reservation.usage_id == QuotaUsage.id')
class Trait(API_BASE):
"""Represents a trait."""
__tablename__ = "traits"
__table_args__ = (
schema.UniqueConstraint('name', name='uniq_traits0name'),
)
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
name = Column(Unicode(255), nullable=False)
class ResourceProviderTrait(API_BASE):
"""Represents the relationship between traits and resource provider"""
__tablename__ = "resource_provider_traits"
__table_args__ = (
Index('resource_provider_traits_resource_provider_trait_idx',
'resource_provider_id', 'trait_id'),
)
trait_id = Column(Integer, ForeignKey('traits.id'), primary_key=True,
nullable=False)
resource_provider_id = Column(Integer,
ForeignKey('resource_providers.id'),
primary_key=True,
nullable=False)
class Project(API_BASE):
"""The project is the Keystone project."""
__tablename__ = 'projects'
__table_args__ = (
schema.UniqueConstraint(
'external_id',
name='uniq_projects0external_id',
),
)
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
external_id = Column(String(255), nullable=False)
class User(API_BASE):
"""The user is the Keystone user."""
__tablename__ = 'users'
__table_args__ = (
schema.UniqueConstraint(
'external_id',
name='uniq_users0external_id',
),
)
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
external_id = Column(String(255), nullable=False)
class Consumer(API_BASE):
"""Represents a resource consumer."""
__tablename__ = 'consumers'
__table_args__ = (
Index('consumers_project_id_uuid_idx', 'project_id', 'uuid'),
Index('consumers_project_id_user_id_uuid_idx', 'project_id', 'user_id',
'uuid'),
schema.UniqueConstraint('uuid', name='uniq_consumers0uuid'),
)
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
uuid = Column(String(36), nullable=False)
project_id = Column(Integer, nullable=False)
user_id = Column(Integer, nullable=False)
# FIXME(mriedem): Change this to server_default=text("0") to match the
# 059_add_consumer_generation script once bug 1776527 is fixed.
generation = Column(Integer, nullable=False, server_default="0", default=0)


@@ -16,77 +16,42 @@
 import os
-from migrate import exceptions as versioning_exceptions
-from migrate.versioning import api as versioning_api
-from migrate.versioning.repository import Repository
-from oslo_log import log as logging
-import sqlalchemy
+import alembic
+from alembic import config as alembic_config
+from alembic import migration as alembic_migration
 from placement import db_api as placement_db
-INIT_VERSION = 0
-_REPOSITORY = None
-LOG = logging.getLogger(__name__)
-def get_engine(context=None):
+def get_engine():
     return placement_db.get_placement_engine()
-def db_sync(version=None, context=None):
-    if version is not None:
-        # Let ValueError raise
-        version = int(version)
-    current_version = db_version(context=context)
-    repository = _find_migrate_repo()
-    if version is None or version > current_version:
-        return versioning_api.upgrade(get_engine(context=context),
-                                      repository, version)
-    else:
-        return versioning_api.downgrade(get_engine(context=context),
-                                        repository, version)
+def _alembic_config():
+    path = os.path.join(os.path.dirname(__file__), "alembic.ini")
+    config = alembic_config.Config(path)
+    return config
-def db_version(context=None):
-    repository = _find_migrate_repo()
-    try:
-        return versioning_api.db_version(get_engine(context=context),
-                                         repository)
-    except versioning_exceptions.DatabaseNotControlledError as exc:
-        meta = sqlalchemy.MetaData()
-        engine = get_engine(context=context)
-        meta.reflect(bind=engine)
-        tables = meta.tables
-        if len(tables) == 0:
-            db_version_control(INIT_VERSION, context=context)
-            return versioning_api.db_version(
-                get_engine(context=context), repository)
-        else:
-            LOG.exception(exc)
-            raise exc
+def version(config=None, engine=None):
+    """Current database version.
+    :returns: Database version
+    :rtype: string
+    """
+    if engine is None:
+        engine = get_engine()
+    with engine.connect() as conn:
+        context = alembic_migration.MigrationContext.configure(conn)
+        return context.get_current_revision()
-def db_initial_version():
-    return INIT_VERSION
+def upgrade(revision, config=None):
+    """Used for upgrading database.
-def db_version_control(version=None, context=None):
-    repository = _find_migrate_repo()
-    versioning_api.version_control(get_engine(context=context),
-                                   repository,
-                                   version)
-    return version
-def _find_migrate_repo():
-    """Get the path for the migrate repository."""
-    global _REPOSITORY
-    rel_path = os.path.join('api_migrations', 'migrate_repo')
-    path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
-                        rel_path)
-    assert os.path.exists(path)
-    if _REPOSITORY is None:
-        _REPOSITORY = Repository(path)
-    return _REPOSITORY
+    :param version: Desired database version
+    :type version: string
+    """
+    revision = revision or "head"
+    config = config or _alembic_config()
+    alembic.command.upgrade(config, revision)
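Driving migrations programmatically is then a two-call affair. A minimal sketch, assuming the placement database has already been configured (e.g. via db_api.configure(CONF)):

from placement.db.sqlalchemy import migration

migration.upgrade("head")   # apply all alembic revisions up to head
print(migration.version())  # current revision, e.g. 'b4ed3a175331'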


@@ -0,0 +1,232 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import models
from oslo_log import log as logging
from sqlalchemy import Column
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Float
from sqlalchemy import ForeignKey
from sqlalchemy import Index
from sqlalchemy import Integer
from sqlalchemy import orm
from sqlalchemy import schema
from sqlalchemy import String
from sqlalchemy import Unicode
LOG = logging.getLogger(__name__)
class _Base(models.ModelBase, models.TimestampMixin):
pass
BASE = declarative_base(cls=_Base)
class ResourceClass(BASE):
"""Represents the type of resource for an inventory or allocation."""
__tablename__ = 'resource_classes'
__table_args__ = (
schema.UniqueConstraint("name", name="uniq_resource_classes0name"),
)
id = Column(Integer, primary_key=True, nullable=False)
name = Column(String(255), nullable=False)
class ResourceProvider(BASE):
"""Represents a mapping to a providers of resources."""
__tablename__ = "resource_providers"
__table_args__ = (
Index('resource_providers_uuid_idx', 'uuid'),
schema.UniqueConstraint('uuid',
name='uniq_resource_providers0uuid'),
Index('resource_providers_name_idx', 'name'),
Index('resource_providers_root_provider_id_idx',
'root_provider_id'),
Index('resource_providers_parent_provider_id_idx',
'parent_provider_id'),
schema.UniqueConstraint('name',
name='uniq_resource_providers0name')
)
id = Column(Integer, primary_key=True, nullable=False)
uuid = Column(String(36), nullable=False)
name = Column(Unicode(200), nullable=True)
generation = Column(Integer, default=0)
# Represents the root of the "tree" that the provider belongs to
root_provider_id = Column(Integer, ForeignKey('resource_providers.id'),
nullable=True)
# The immediate parent provider of this provider, or NULL if there is no
# parent. If parent_provider_id == NULL then root_provider_id == id
parent_provider_id = Column(Integer, ForeignKey('resource_providers.id'),
nullable=True)
class Inventory(BASE):
"""Represents a quantity of available resource."""
__tablename__ = "inventories"
__table_args__ = (
Index('inventories_resource_provider_id_idx',
'resource_provider_id'),
Index('inventories_resource_class_id_idx',
'resource_class_id'),
Index('inventories_resource_provider_resource_class_idx',
'resource_provider_id', 'resource_class_id'),
schema.UniqueConstraint('resource_provider_id', 'resource_class_id',
name='uniq_inventories0resource_provider_resource_class')
)
id = Column(Integer, primary_key=True, nullable=False)
resource_provider_id = Column(Integer, nullable=False)
resource_class_id = Column(Integer, nullable=False)
total = Column(Integer, nullable=False)
reserved = Column(Integer, nullable=False)
min_unit = Column(Integer, nullable=False)
max_unit = Column(Integer, nullable=False)
step_size = Column(Integer, nullable=False)
allocation_ratio = Column(Float, nullable=False)
resource_provider = orm.relationship(
"ResourceProvider",
primaryjoin=('Inventory.resource_provider_id == '
'ResourceProvider.id'),
foreign_keys=resource_provider_id)
class Allocation(BASE):
"""A use of inventory."""
__tablename__ = "allocations"
__table_args__ = (
Index('allocations_resource_provider_class_used_idx',
'resource_provider_id', 'resource_class_id',
'used'),
Index('allocations_resource_class_id_idx',
'resource_class_id'),
Index('allocations_consumer_id_idx', 'consumer_id')
)
id = Column(Integer, primary_key=True, nullable=False)
resource_provider_id = Column(Integer, nullable=False)
consumer_id = Column(String(36), nullable=False)
resource_class_id = Column(Integer, nullable=False)
used = Column(Integer, nullable=False)
resource_provider = orm.relationship(
"ResourceProvider",
primaryjoin=('Allocation.resource_provider_id == '
'ResourceProvider.id'),
foreign_keys=resource_provider_id)
class ResourceProviderAggregate(BASE):
"""Associate a resource provider with an aggregate."""
__tablename__ = 'resource_provider_aggregates'
__table_args__ = (
Index('resource_provider_aggregates_aggregate_id_idx',
'aggregate_id'),
)
resource_provider_id = Column(Integer, primary_key=True, nullable=False)
aggregate_id = Column(Integer, primary_key=True, nullable=False)
class PlacementAggregate(BASE):
"""A grouping of resource providers."""
__tablename__ = 'placement_aggregates'
__table_args__ = (
schema.UniqueConstraint("uuid", name="uniq_placement_aggregates0uuid"),
)
id = Column(Integer, primary_key=True, autoincrement=True)
uuid = Column(String(36), index=True)
class Trait(BASE):
"""Represents a trait."""
__tablename__ = "traits"
__table_args__ = (
schema.UniqueConstraint('name', name='uniq_traits0name'),
)
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
name = Column(Unicode(255), nullable=False)
class ResourceProviderTrait(BASE):
"""Represents the relationship between traits and resource provider"""
__tablename__ = "resource_provider_traits"
__table_args__ = (
Index('resource_provider_traits_resource_provider_trait_idx',
'resource_provider_id', 'trait_id'),
)
trait_id = Column(Integer, ForeignKey('traits.id'), primary_key=True,
nullable=False)
resource_provider_id = Column(Integer,
ForeignKey('resource_providers.id'),
primary_key=True,
nullable=False)
class Project(BASE):
"""The project is the Keystone project."""
__tablename__ = 'projects'
__table_args__ = (
schema.UniqueConstraint(
'external_id',
name='uniq_projects0external_id',
),
)
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
external_id = Column(String(255), nullable=False)
class User(BASE):
"""The user is the Keystone user."""
__tablename__ = 'users'
__table_args__ = (
schema.UniqueConstraint(
'external_id',
name='uniq_users0external_id',
),
)
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
external_id = Column(String(255), nullable=False)
class Consumer(BASE):
"""Represents a resource consumer."""
__tablename__ = 'consumers'
__table_args__ = (
Index('consumers_project_id_uuid_idx', 'project_id', 'uuid'),
Index('consumers_project_id_user_id_uuid_idx', 'project_id', 'user_id',
'uuid'),
schema.UniqueConstraint('uuid', name='uniq_consumers0uuid'),
)
id = Column(Integer, primary_key=True, nullable=False, autoincrement=True)
uuid = Column(String(36), nullable=False)
project_id = Column(Integer, nullable=False)
user_id = Column(Integer, nullable=False)
generation = Column(Integer, nullable=False, server_default="0", default=0)


@@ -9,9 +9,7 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Database context manager for placement database connection, kept in its
own file so the nova db_api (which has cascading imports) is not imported.
"""
"""Database context manager for placement database connection."""
from oslo_db.sqlalchemy import enginefacade
from oslo_log import log as logging


@@ -15,7 +15,7 @@ from oslo_versionedobjects import base
from oslo_versionedobjects import fields
import sqlalchemy as sa
-from placement.db.sqlalchemy import api_models as models
+from placement.db.sqlalchemy import models
from placement import db_api
from placement import exception
from placement.objects import project as project_obj


@@ -16,7 +16,7 @@ from oslo_versionedobjects import base
from oslo_versionedobjects import fields
import sqlalchemy as sa
-from placement.db.sqlalchemy import api_models as models
+from placement.db.sqlalchemy import models
from placement import db_api
from placement import exception


@@ -37,7 +37,7 @@ from sqlalchemy import func
from sqlalchemy import sql
from sqlalchemy.sql import null
-from placement.db.sqlalchemy import api_models as models
+from placement.db.sqlalchemy import models
from placement import db_api
from placement import exception
from placement.i18n import _


@@ -16,7 +16,7 @@ from oslo_versionedobjects import base
from oslo_versionedobjects import fields
import sqlalchemy as sa
-from placement.db.sqlalchemy import api_models as models
+from placement.db.sqlalchemy import models
from placement import db_api
from placement import exception


@@ -13,7 +13,7 @@
from oslo_concurrency import lockutils
import sqlalchemy as sa
-from placement.db.sqlalchemy import api_models as models
+from placement.db.sqlalchemy import models
from placement import db_api
from placement import exception
from placement import rc_fields as fields


@@ -26,43 +26,43 @@ from placement import db_api as placement_db
CONF = cfg.CONF
db_schema = None
session_configured = False
def reset():
"""Call this to allow the placement db fixture to be reconfigured
in the same process.
"""
global session_configured
session_configured = False
placement_db.placement_context_manager.dispose_pool()
# TODO(cdent): Future handling in sqlalchemy may allow doing this
# in a less hacky way.
placement_db.placement_context_manager._factory._started = False
# Reset the run once decorator.
placement_db.configure.reset()
class Database(fixtures.Fixture):
def __init__(self):
"""Create a database fixture."""
super(Database, self).__init__()
# NOTE(pkholkin): oslo_db.enginefacade is configured in tests the same
# way as it is done for any other service that uses db
global session_configured
if not session_configured:
placement_db.configure(CONF)
session_configured = True
self.get_engine = placement_db.get_placement_engine
def _cache_schema(self):
global db_schema
if not db_schema:
engine = self.get_engine()
conn = engine.connect()
migration.db_sync()
db_schema = "".join(line for line in conn.connection.iterdump())
engine.dispose()
def setUp(self):
super(Database, self).setUp()
self.reset()
# NOTE(cdent): Instead of upgrade here we could also do create_schema
# here (assuming it works). That would be faster than running
# migrations (once we actually have some migrations).
# The migration commands will look in alembic's env.py which itself
# has access to the oslo config for things like the database
# connection string.
migration.upgrade("head")
self.addCleanup(self.cleanup)
def cleanup(self):
engine = self.get_engine()
engine.dispose()
def reset(self):
self._cache_schema()
engine = self.get_engine()
engine.dispose()
conn = engine.connect()
conn.connection.executescript(db_schema)
reset()
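A hedged sketch of how a test consumes this fixture (the test class is hypothetical, not part of this change):

import testtools

from placement.tests import fixtures as db_fixture

class ExampleDBTest(testtools.TestCase):
    def setUp(self):
        super(ExampleDBTest, self).setUp()
        # setUp() on the fixture runs migration.upgrade("head") and
        # registers cleanup of the engine.
        self.db = self.useFixture(db_fixture.Database())
        self.engine = self.db.get_engine()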


@@ -0,0 +1,302 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for database migrations. There are "opportunistic" tests for sqlite in
memory, mysql and postgresql in here, which allow testing against these
databases in a properly configured unit test environment.
For the opportunistic testing you need to set up a db named 'openstack_citest'
with user 'openstack_citest' and password 'openstack_citest' on localhost. This
can be accomplished by running the `test-setup.sh` script in the `tools`
subdirectory. The tests will then use that DB and username/password combo.
"""
import contextlib
import functools
import tempfile
from alembic import script
import mock
from oslo_concurrency.fixture import lockutils as concurrency
from oslo_config import fixture as config_fixture
from oslo_db import exception as db_exc
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import provision
from oslo_db.sqlalchemy import test_migrations
from oslo_log import log as logging
from oslotest import base as test_base
import testtools
from placement import conf
from placement.db.sqlalchemy import migration
from placement.db.sqlalchemy import models
from placement import db_api
from placement.tests import fixtures as db_fixture
CONF = conf.CONF
DB_NAME = 'openstack_citest'
LOG = logging.getLogger(__name__)
@contextlib.contextmanager
def patch_with_engine(engine):
with mock.patch.object(enginefacade.writer,
'get_engine') as patch_engine:
patch_engine.return_value = engine
yield
def configure(conf_fixture, db_url):
"""Set database and lockfile configuration. Aggregate configure setting
here, not done as a base class as the mess of mixins makes that
inscrutable. So instead we create a nice simple function.
"""
conf_fixture.config(lock_path=tempfile.gettempdir(),
group='oslo_concurrency')
conf_fixture.config(group='placement_database', connection=db_url)
# We need to retry at least once (and quickly) otherwise the connection
# test routines in oslo_db do not run, and the exception handling for
# determining if an opportunistic database is present gets more
# complicated.
conf_fixture.config(group='placement_database', max_retries=1)
conf_fixture.config(group='placement_database', retry_interval=0)
def generate_url(driver):
"""Make a database URL to be used with the opportunistic tests.
NOTE(cdent): Because of the way we need to configure the
[placement_database]/connection, we need to have a predictable database
URL.
"""
backend = provision.BackendImpl.impl(driver)
db_url = backend.create_opportunistic_driver_url()
if driver == 'sqlite':
# For sqlite this is all we want since it's in memory.
return db_url
    # If a dbname is present, or the db_url ends with '/', strip it off
db_url = db_url[:db_url.rindex('/')]
db_url = db_url + '/' + DB_NAME
return db_url
class WalkVersionsMixin(object):
def _walk_versions(self, engine=None, alembic_cfg=None):
"""Determine latest version script from the repo, then upgrade from 1
through to the latest, with no data in the databases. This just checks
that the schema itself upgrades successfully.
"""
# Place the database under version control
with patch_with_engine(engine):
script_directory = script.ScriptDirectory.from_config(alembic_cfg)
self.assertIsNone(self.migration_api.version(alembic_cfg))
versions = [ver for ver in script_directory.walk_revisions()]
for version in reversed(versions):
self._migrate_up(engine, alembic_cfg,
version.revision, with_data=True)
def _migrate_up(self, engine, config, version, with_data=False):
"""Migrate up to a new version of the db.
We allow for data insertion and post checks at every
migration version with special _pre_upgrade_### and
_check_### functions in the main test.
"""
# NOTE(sdague): try block is here because it's impossible to debug
# where a failed data migration happens otherwise
try:
if with_data:
data = None
pre_upgrade = getattr(
self, "_pre_upgrade_%s" % version, None)
if pre_upgrade:
data = pre_upgrade(engine)
self.migration_api.upgrade(version, config=config)
self.assertEqual(version, self.migration_api.version(config))
if with_data:
check = getattr(self, "_check_%s" % version, None)
if check:
check(engine, data)
except Exception:
LOG.error("Failed to migrate to version %(version)s on engine "
"%(engine)s",
{'version': version, 'engine': engine})
raise
class TestWalkVersions(testtools.TestCase, WalkVersionsMixin):
def setUp(self):
super(TestWalkVersions, self).setUp()
self.migration_api = mock.MagicMock()
self.engine = mock.MagicMock()
self.config = mock.MagicMock()
self.versions = [mock.Mock(revision='2b2'), mock.Mock(revision='1a1')]
def test_migrate_up(self):
self.migration_api.version.return_value = 'dsa123'
self._migrate_up(self.engine, self.config, 'dsa123')
self.migration_api.upgrade.assert_called_with('dsa123',
config=self.config)
self.migration_api.version.assert_called_with(self.config)
def test_migrate_up_with_data(self):
test_value = {"a": 1, "b": 2}
self.migration_api.version.return_value = '141'
self._pre_upgrade_141 = mock.MagicMock()
self._pre_upgrade_141.return_value = test_value
self._check_141 = mock.MagicMock()
self._migrate_up(self.engine, self.config, '141', True)
self._pre_upgrade_141.assert_called_with(self.engine)
self._check_141.assert_called_with(self.engine, test_value)
@mock.patch.object(script, 'ScriptDirectory')
@mock.patch.object(WalkVersionsMixin, '_migrate_up')
def test_walk_versions_all_default(self, _migrate_up, script_directory):
fc = script_directory.from_config()
fc.walk_revisions.return_value = self.versions
self.migration_api.version.return_value = None
self._walk_versions(self.engine, self.config)
self.migration_api.version.assert_called_with(self.config)
upgraded = [mock.call(self.engine, self.config, v.revision,
with_data=True) for v in reversed(self.versions)]
self.assertEqual(self._migrate_up.call_args_list, upgraded)
@mock.patch.object(script, 'ScriptDirectory')
@mock.patch.object(WalkVersionsMixin, '_migrate_up')
def test_walk_versions_all_false(self, _migrate_up, script_directory):
fc = script_directory.from_config()
fc.walk_revisions.return_value = self.versions
self.migration_api.version.return_value = None
self._walk_versions(self.engine, self.config)
upgraded = [mock.call(self.engine, self.config, v.revision,
with_data=True) for v in reversed(self.versions)]
self.assertEqual(upgraded, self._migrate_up.call_args_list)
class MigrationCheckersMixin(object):
def setUp(self):
self.addCleanup(db_fixture.reset)
db_url = generate_url(self.DRIVER)
conf_fixture = self.useFixture(config_fixture.Config(CONF))
configure(conf_fixture, db_url)
self.useFixture(concurrency.LockFixture('test_mig'))
db_fixture.reset()
db_api.configure(CONF)
try:
self.engine = db_api.get_placement_engine()
except (db_exc.DBNonExistentDatabase, db_exc.DBConnectionError):
self.skipTest('%s not available' % self.DRIVER)
self.config = migration._alembic_config()
self.migration_api = migration
super(MigrationCheckersMixin, self).setUp()
# The following is done here instead of in the fixture because it is
# much slower for the RAM-based DB tests, and isn't needed. But it is
# needed for the migration tests, so we do the complete drop/rebuild
# here.
backend = provision.Backend(self.engine.name, self.engine.url)
self.addCleanup(functools.partial(
backend.drop_all_objects, self.engine))
# This is required to prevent the global opportunistic db settings
# leaking into other tests.
self.addCleanup(self.engine.dispose)
def test_walk_versions(self):
self._walk_versions(self.engine, self.config)
# # Leaving this here as a sort of template for when we do migration tests.
# def _check_fb3f10dd262e(self, engine, data):
# nodes_tbl = db_utils.get_table(engine, 'nodes')
# col_names = [column.name for column in nodes_tbl.c]
# self.assertIn('fault', col_names)
# self.assertIsInstance(nodes_tbl.c.fault.type,
# sqlalchemy.types.String)
def test_upgrade_and_version(self):
self.migration_api.upgrade('head')
self.assertIsNotNone(self.migration_api.version())
def test_upgrade_twice(self):
# Start with the empty version
self.migration_api.upgrade('base')
v1 = self.migration_api.version()
# Now upgrade to head
self.migration_api.upgrade('head')
v2 = self.migration_api.version()
self.assertNotEqual(v1, v2)
class TestMigrationsSQLite(MigrationCheckersMixin,
WalkVersionsMixin,
test_base.BaseTestCase):
DRIVER = "sqlite"
class TestMigrationsMySQL(MigrationCheckersMixin,
WalkVersionsMixin,
test_base.BaseTestCase):
DRIVER = 'mysql'
class TestMigrationsPostgresql(MigrationCheckersMixin,
WalkVersionsMixin,
test_base.BaseTestCase):
DRIVER = 'postgresql'
class ModelsMigrationSyncMixin(object):
def setUp(self):
url = generate_url(self.DRIVER)
conf_fixture = self.useFixture(config_fixture.Config(CONF))
configure(conf_fixture, url)
self.useFixture(concurrency.LockFixture('test_mig'))
db_fixture.reset()
db_api.configure(CONF)
super(ModelsMigrationSyncMixin, self).setUp()
# This is required to prevent the global opportunistic db settings
# leaking into other tests.
self.addCleanup(db_fixture.reset)
def get_metadata(self):
return models.BASE.metadata
def get_engine(self):
try:
return db_api.get_placement_engine()
except (db_exc.DBNonExistentDatabase, db_exc.DBConnectionError):
self.skipTest('%s not available' % self.DRIVER)
def db_sync(self, engine):
migration.upgrade('head')
class ModelsMigrationsSyncSqlite(ModelsMigrationSyncMixin,
test_migrations.ModelsMigrationsSync,
test_base.BaseTestCase):
DRIVER = 'sqlite'
class ModelsMigrationsSyncMysql(ModelsMigrationSyncMixin,
test_migrations.ModelsMigrationsSync,
test_base.BaseTestCase):
DRIVER = 'mysql'
class ModelsMigrationsSyncPostgresql(ModelsMigrationSyncMixin,
test_migrations.ModelsMigrationsSync,
test_base.BaseTestCase):
DRIVER = 'postgresql'
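For concreteness, the URLs that generate_url() hands to these tests look roughly like this (a sketch; the exact driver and credential defaults come from oslo_db's opportunistic provisioning, matching tools/test-setup.sh below):

generate_url('sqlite')
# -> 'sqlite://'  (in-memory; returned unmodified)
generate_url('mysql')
# -> 'mysql+pymysql://openstack_citest:openstack_citest@localhost/openstack_citest'
generate_url('postgresql')
# -> 'postgresql+psycopg2://openstack_citest:openstack_citest@localhost/openstack_citest'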


@@ -19,7 +19,7 @@ from oslo_utils.fixture import uuidsentinel
import sqlalchemy as sa
import placement
from placement.db.sqlalchemy import api_models as models
from placement.db.sqlalchemy import models
from placement import exception
from placement.objects import consumer as consumer_obj
from placement.objects import resource_provider as rp_obj

tools/test-setup.sh (new executable file)

@@ -0,0 +1,54 @@
#!/bin/bash -xe
# This script will be run by OpenStack CI before unit tests are run;
# it sets up the test system as needed.
# Developers should set up their test systems in a similar way.
# This setup needs to be run as a user that can run sudo.
# The root password for the MySQL database; pass it in via
# MYSQL_ROOT_PW.
DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave}
# This user and its password are used by the tests, if you change it,
# your tests might fail.
DB_USER=openstack_citest
DB_PW=openstack_citest
sudo -H mysqladmin -u root password $DB_ROOT_PW
# It's best practice to remove anonymous users from the database. If
# an anonymous user exists, then it matches first for connections and
# other connections from that host will not work.
sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e "
DELETE FROM mysql.user WHERE User='';
FLUSH PRIVILEGES;
GRANT ALL PRIVILEGES ON *.*
TO '$DB_USER'@'%' identified by '$DB_PW' WITH GRANT OPTION;"
# Now create our database.
mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e "
SET default_storage_engine=MYISAM;
DROP DATABASE IF EXISTS openstack_citest;
CREATE DATABASE openstack_citest CHARACTER SET utf8;"
# Same for PostgreSQL
# Setup user
root_roles=$(sudo -H -u postgres psql -t -c "
SELECT 'HERE' from pg_roles where rolname='$DB_USER'")
if [[ ${root_roles} == *HERE ]];then
sudo -H -u postgres psql -c "ALTER ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'"
else
sudo -H -u postgres psql -c "CREATE ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'"
fi
# Store password for tests
cat << EOF > $HOME/.pgpass
*:*:*:$DB_USER:$DB_PW
EOF
chmod 0600 $HOME/.pgpass
# Now create our database
psql -h 127.0.0.1 -U $DB_USER -d template1 -c "DROP DATABASE IF EXISTS openstack_citest"
createdb -h 127.0.0.1 -U $DB_USER -l C -T template0 -E utf8 openstack_citest