Added Pike directory for pike specs

Created Pike directory and required files for Pike release.
Moved un-completed blueprints and specs to pike

Change-Id: I98506ce6f047accea7820f729f250431bceff298
This commit is contained in:
Saad Zaher 2017-03-16 15:08:27 +00:00
parent 444da46d60
commit d0f312f15d
6 changed files with 1192 additions and 0 deletions

.. _pike-priorities:

=======================
Pike Project Priorities
=======================
List of themes (in the form of use cases) the freezer development team will
prioritize in Pike.

specs/pike-template.rst Normal file
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Example Spec - The title of your blueprint
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/freezer/+spec/example
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand. The title and this first paragraph
should be used as the subject line and body of the commit message
respectively.
Some notes about the freezer-spec and blueprint process:
* Not all blueprints need a spec. For more information see
http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html#when-is-a-blueprint-needed
* The aim of this document is first to define the problem we need to solve,
and second agree the overall approach to solve that problem.
* This is not intended to be extensive documentation for a new feature.
For example, there is no need to specify the exact configuration changes,
nor the exact details of any DB model changes. But you should still define
that such changes are required, and be clear on how that will affect
upgrades.
* You should aim to get your spec approved before writing your code.
While you are free to write prototypes and code before getting your spec
approved, it's possible that the outcome of the spec review process leads
you towards a fundamentally different solution than you first envisaged.
* But, API changes are held to a much higher level of scrutiny.
As soon as an API change merges, we must assume it could be in production
somewhere, and as such, we then need to support that API change forever.
To avoid getting that wrong, we do want lots of details about API changes
upfront.
Some notes about using this template:
* Your spec should be in reStructuredText, like this template.
* Please wrap text at 79 columns.
* The filename in the git repository should match the launchpad URL, for
example a URL of: https://blueprints.launchpad.net/freezer/+spec/awesome-thing
should be named awesome-thing.rst
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox and see the generated
HTML file in doc/build/html/specs/<path_of_your_file>
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which can not be viewed in gerrit. It
will also allow inline feedback on the diagram itself.
Problem description
===================
A detailed description of the problem. What problem is this blueprint
addressing?
Use Cases
---------
What use cases does this address? What impact on actors does this change have?
Ensure you are clear about the actors in each use case: Developer, End User,
Deployer etc.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
At this point, if you would like to just get feedback on if the problem and
proposed change fit in freezer, you can stop here and post this for review to get
preliminary feedback. If so please say:
Posting to get preliminary feedback on the scope of this spec.
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change?
* How will the initial set of new data objects be generated, for example if you
need to take into account existing backups/jobs/... , or modify other
existing data describe how that will work.
REST API impact
---------------
Each API method which is either added or changed should have the following
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* URL should not include underscores, and use hyphens instead.
* Parameters which can be passed via the url
* JSON schema definition for the request body data if allowed
* Field names should use snake_case style, not CamelCase or MixedCase
style.
* JSON schema definition for the response body data if any
* Field names should use snake_case style, not CamelCase or MixedCase
style.
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any policy changes, and discuss what things a deployer needs to
think about when defining their policy.
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
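To make the guidance above concrete, here is a hypothetical example of a
request-body JSON schema of the kind a spec following this template would
include. All field names (``backup_name``, ``source_path``, ``max_retries``)
are invented for illustration and use snake_case style, as required above:

```python
import json

# Hypothetical request-body schema for an imaginary "create backup" call.
# Not a real Freezer API -- purely an example of the level of detail a
# spec should provide for its API changes.
backup_create_schema = {
    "type": "object",
    "properties": {
        "backup_name": {"type": "string"},
        "source_path": {"type": "string"},
        "max_retries": {"type": "integer", "minimum": 0},
    },
    "required": ["backup_name", "source_path"],
    "additionalProperties": False,
}

# A sample request body that conforms to the schema, suitable for the
# "typical API samples" bullet above.
sample_request = {"backup_name": "nightly", "source_path": "/var/lib/data"}
```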
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines as
a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
Notifications impact
--------------------
Please specify any changes to notifications. Be that an extra notification,
changes to an existing notification, or removing a notification.
Other end user impact
---------------------
Aside from the API, are there other ways a user will interact with this
feature?
* Does this change have an impact on python-freezerclient? What does the user
interface there look like?
* Does this change have an impact on freezer-web-ui? What does the user
interface there look like?
* Does this change have an impact on freezer-dr?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A small change in a utility function or a commonly used decorator can
  have a large impact on performance.
* Calls which result in database queries can have a profound impact on
  performance when called in critical sections of the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Other deployer impact
---------------------
Discuss things that will affect how you deploy and configure OpenStack
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed? Are the default values ones which will work well in real
deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer impact
----------------
Discuss things that will affect other developers working on OpenStack.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in freezer, or in
other projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by Freezer (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss the important scenarios needed to test here, as well as
specific edge cases we should be ensuring work correctly.
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
Please describe how and where this change will be documented.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to
History
=======
Optional section for Pike, intended to be updated each time the spec
changes, to describe new design, API or database schema updates. Useful
to let readers understand what has happened over time.
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Pike
     - Introduced

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================================
Block based backup support (rsync)
==================================
https://blueprints.launchpad.net/freezer/+spec/rsync
Taking advantage of rsync to make it possible to create
space/bandwidth-efficient backups.
Problem description
===================
Currently Freezer checks only the ctime and mtime inode information to
verify whether files have changed (tar functionality). While this
approach gives speed (it is time efficient), it is not bandwidth and
storage efficient. Freezer needs to support both the rsync and tar
approaches to execute incremental backups and restores.
Since Freezer will provide two options for incremental backups, it will
be possible to choose the best approach to back up data for each
particular case (more speed, or more storage/bandwidth efficiency).
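The storage/bandwidth efficiency comes from rsync-style block matching,
which is built on a weak rolling checksum that can slide over the data one
byte at a time. The following is an illustrative sketch of that checksum
(an Adler-32-style variant), not the actual Freezer engine code; the block
size constant is an assumed value:

```python
# Sketch of an rsync-style weak rolling checksum. Illustrative only.
RSYNC_BLOCK_SIZE = 4096  # assumed block size, for illustration
MOD = 1 << 16

def weak_checksum(block: bytes) -> int:
    """Compute the weak checksum of a block from scratch."""
    a = sum(block) % MOD
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % MOD
    return (b << 16) | a

def roll(csum: int, out_byte: int, in_byte: int, block_len: int) -> int:
    """Slide the window one byte: an O(1) update instead of recomputing."""
    a, b = csum & 0xFFFF, csum >> 16
    a = (a - out_byte + in_byte) % MOD
    b = (b - block_len * out_byte + a) % MOD
    return (b << 16) | a
```

The O(1) ``roll`` update is what makes it cheap to find unchanged blocks at
arbitrary offsets, which tar's ctime/mtime check cannot do.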
Use Cases
---------
* For developers, this change will not have negative impacts, because
  this code will be cleanly integrated into the Freezer engine API and
  will not cause any major changes in the Freezer architecture.
* For deployers, there is no need to install any additional components;
  Freezer will use its own implementation of the rsync algorithm
  (written in Python).
* For end users, it will be easier to select the more efficient option
  for creating backups based on the dataset (e.g. a few big files or a
  lot of small files) and on speed/storage/bandwidth requirements, since
  Freezer will support both the rsync and tar approaches.
Proposed change
===============
Implementing the new engine classes for rsync (as well as for tar), and
providing a new engine (-e) choice in the config.
For this backup type, the following metadata structure will be created::

    files_meta = {
        'files': {},
        'directories': {},
        'meta': {
            'broken_links_tot': '',
            'total_files': '',
            'total_directories': '',
            'backup_size_on_disk': '',
            'backup_size_uncompressed': '',
            'backup_size_compressed': '',
            'platform': sys.platform
        },
        'abs_backup_path': os.getcwd(),
        'broken_links': [],
        'rsync_struct_ver': RSYNC_DATA_STRUCT_VERSION,
        'rsync_block_size': RSYNC_BLOCK_SIZE}

    file_meta = {'inode': {
        'inumber': os_stat.st_ino,
        'nlink': os_stat.st_nlink,
        'mode': file_mode,
        'uid': os_stat.st_uid,
        'gid': os_stat.st_gid,
        'size': os_stat.st_size,
        'devmajor': os.major(dev),
        'devminor': os.minor(dev),
        'mtime': mtime,
        'ctime': ctime,
        'uname': uname,
        'gname': gname,
        'ftype': file_type,
        'lname': lname,
        'rsync_block_size': rsync_block_size,
        'file_status': status
        }
    }
The current version of the implementation can always be found here [1].
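A simplified sketch of how the per-file inode metadata above could be
collected with the standard library (user/group name lookup and link
targets are omitted for portability, and the block size constant is an
assumed value, not the Freezer implementation):

```python
import os
import stat

RSYNC_BLOCK_SIZE = 4096  # assumed value, for illustration

def build_file_meta(path):
    """Build an inode metadata dict similar to the structure above."""
    os_stat = os.lstat(path)  # lstat: do not follow symlinks
    return {'inode': {
        'inumber': os_stat.st_ino,
        'nlink': os_stat.st_nlink,
        'mode': stat.S_IMODE(os_stat.st_mode),
        'uid': os_stat.st_uid,
        'gid': os_stat.st_gid,
        'size': os_stat.st_size,
        'devmajor': os.major(os_stat.st_dev),
        'devminor': os.minor(os_stat.st_dev),
        'mtime': int(os_stat.st_mtime),
        'ctime': int(os_stat.st_ctime),
        'ftype': 'r' if stat.S_ISREG(os_stat.st_mode) else 'd',
        'rsync_block_size': RSYNC_BLOCK_SIZE,
    }}
```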
Alternatives
------------
Because of the flexibility, speed, and scriptability of rsync, it has
become a standard Linux utility, included in all popular Linux distributions.
It has been ported to Windows (via Cygwin, Grsync, or SFU), FreeBSD, NetBSD,
OpenBSD, and Mac OS. De facto, rsync is the default fallback for most data
transfers. It has a well-understood algorithm, published about 20 years
ago, with different variations (e.g. acrosync, zsync, etc.); librsync is
used by Dropbox.
Using another alternative (like bbcp or lftp) would not be a more
effective or portable solution.
Data model impact
-----------------
Changes in the data model have already been described in the oslo.db
migration document. The Actions entity should contain an 'engine' field
so that the appropriate action can be performed using a particular type
of engine (tar, rsync or openstack).
From the new relational database schema::

    Actions
        action_id (uuid) [p_key]
        resource (varchar)
        type (varchar)
        name (varchar)
        application (varchar)
        engine (varchar)  <-- required by this change
        snapshot (varchar)
        storage (varchar)
        global_options (JSON)
        application_options (JSON)
        storage_options (JSON)
        snapshot_options (JSON)
        engine_options (JSON)
REST API impact
---------------
None.
Security impact
---------------
None.
Notifications impact
--------------------
No special logs will be added, just some info messages about the
start/stop of the backup process, backup metrics, etc.
Other end user impact
---------------------
* There are no additional changes to the python-freezerclient CLI. To
  choose the appropriate engine for an action, the end user should
  specify the 'engine' field in the provided JSON configuration when
  creating or updating an action.
* freezer-web-ui should provide an additional 'engine' field in the
  'Action Configuration' window. It has to be a drop-down list with the
  values 'tar', 'rsync' or 'openstack'.
Performance Impact
------------------
None.
Other deployer impact
---------------------
A new choice, 'rsync', will be added to the freezer-agent -e (engine)
option.
Developer impact
----------------
None.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Ruslan Aliev (raliev) <raliev@mirantis.com>
Other contributors:
Fausto Marzi (daemontool) <fausto.marzi@ericsson.com>
Work Items
----------
* implementing the new engine (rsync)
* integrating this engine into the Freezer code (API calls) and the
  mechanism for using it (the '-e rsync' option)
* implementing the new database schema for actions (oslo.db migration)
* updating freezer-web-ui 'Action Configuration' window
* updating documentation
Dependencies
============
* This spec depends on Freezer oslo.db migration [2].
* Pluggable engines described here [3].
* There are no additional library dependencies.
Testing
=======
There is a question: do we actually need a separate tempest test for
this change, or can we be satisfied with the existing ones?
Documentation Impact
====================
* freezer README doc
* freezer-api README doc
* freezer-web-ui README doc
References
==========
.. [1] https://review.openstack.org/#/c/409796/
.. [2] https://etherpad.openstack.org/p/freezer_mysql_migration
.. [3] https://etherpad.openstack.org/p/freezer_new_archi

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================================
Freezer Cinder Volumes backup using OS Bricks
=============================================
* https://blueprints.launchpad.net/freezer/+spec/cinder-osbrick
Provide an efficient way to back up Cinder Volumes leveraging os-brick.
Problem description
===================
Currently Freezer provides basic features to execute Cinder volume
backups. The current approach presents significant challenges, mainly
due to the difficulty of downloading Cinder Volumes without passing
through Glance. This can be an issue for time and scalability reasons
(i.e. volumes of a few hundred GB in size, increased error probability
as more services take part in the process, unavailability of
cinder-backup).
Use Cases
---------
* Users that want to backup cinder volumes.
* Store backed-up volumes on a different storage medium than Swift.
This is important for disaster recovery purpose, as it should be
possible to restore the volume even if the swift or other services are
down in the original OpenStack deployment.
* OpenStack distributions deployed without cinder-backup module.
* Provide a more efficient way of executing incremental backups.
* Avoid uploading volumes image to Glance to be processed.
* Volumes can be backed up while attached or detached (hot and cold).
  - A hot backup will provide a crash-consistent backup, and the data
    present in the volumes can be accessed at all times during backup
  - If a cold backup is executed, the Volume is detached first,
    then the backup is executed.
Proposed change
===============
Implement in the freezer-agent a new engine called cinder-osbrick.
The new cinder-osbrick engine executes backup and restore related
operations directly on the Volumes, without passing through the Glance
API.
The freezer-agent needs to be able to back up a single volume, all the
volumes owned by the tenant, or all volumes from all tenants (admin).
Volume backup and restore can happen in parallel (i.e. 10 Volumes can
be backed up or restored simultaneously).
Technical details
-----------------
OpenStack provides the os-brick library to attach volumes:
* https://github.com/openstack/os-brick
It mainly provides the following features:
* Volumes discovery
* Volumes attach
* Volumes removal
Related docs:
* http://docs.openstack.org/developer/os-brick/api/index.html
* http://docs.openstack.org/developer/os-brick/tutorial.html
The Python client module that could be used is brick-cinderclient-ext:
* https://github.com/openstack/python-brick-cinderclient-ext
It is preferable to implement the Volume-related operations from Cinder
in Python, rather than wrapping around any related os-brick command.
* The freezer-scheduler and the freezer-agent need to support
  engine-specific settings, in the json and ini config files
  respectively.
Backup workflow with osbrick:
-----------------------------
* freezer-agent workflow:
(Common steps)
single-vol-backup:
- Backup any available metadata of the Volume
- A Snapshot is executed on the volume (--force if the volume is attached)
- The Snapshot is converted to a Volume, in order to be mounted using
  os-brick
- The new Volume is attached using os-brick. The Volume can be attached
  using iSCSI, local or FC, according to the information provided by
  os-brick about the volume.
- The new Volume is mounted on the node where the freezer-agent is
  executing
- The freezer-agent will execute a backup of the volume content, starting
  from the volume mount (i.e. volume root /)
- Every single file in the volume is backed up.
- If the execution is part of an incremental backup, each file/block is
  compared against the previous execution.
- Data can be stored on any supported freezer storage backend
- When finished, the new Volume is detached
- Once detached, the new volume is removed
Backup of a Single Volume:
1) The freezer-agent takes the volume id as an input param (either from
   the ini file or the json file provided to the scheduler)
2) single-vol-backup from Common steps
Backup of all Volumes owned by a tenant:
1) freezer-agent discovers all the volumes owned by the tenant from the
   Cinder API
2) Iterate over each Volume
3) single-vol-backup from Common steps
Backup of all Volumes (admin):
1) freezer-agent gets the list of all Volumes available from the Cinder API
2) Iterate over each volume
3) single-vol-backup from Common steps
Backup of all volumes part of a Consistency Group:
1) get the list of all volumes from the Consistency Group. It can be
   provided as a single element id or a comma-separated list of elements
2) freezer-agent gets the list of all Volumes available from the Cinder API
3) Iterate over each volume
4) single-vol-backup from Common steps
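The single-volume backup steps above can be sketched as a high-level
control flow. All helper names below (``snapshot_volume``,
``attach_with_os_brick``, etc.) are hypothetical stand-ins implemented as
in-memory stubs, not real Freezer or os-brick APIs; the sketch only
demonstrates the order of operations and the cleanup guarantee:

```python
# Records the order of operations so the flow can be inspected.
calls = []

def _step(name):
    """Create a stub operation that just logs its name."""
    def op(*args, **kwargs):
        calls.append(name)
        return name
    return op

# Hypothetical helpers -- stand-ins for Cinder/os-brick interactions.
get_volume_metadata = _step('get_metadata')
backup_metadata = _step('backup_metadata')
snapshot_volume = _step('snapshot')
snapshot_to_volume = _step('snap_to_volume')
attach_with_os_brick = _step('attach')      # iSCSI / FC / local
mount = _step('mount')
backup_files = _step('backup_files')        # tar or rsync engine
unmount = _step('unmount')
detach = _step('detach')
delete_volume = _step('delete_volume')

def backup_single_volume(volume_id):
    meta = get_volume_metadata(volume_id)
    backup_metadata(meta)
    snap = snapshot_volume(volume_id, force=True)  # works while attached
    tmp_vol = snapshot_to_volume(snap)             # so it can be mounted
    conn = attach_with_os_brick(tmp_vol)
    mountpoint = mount(conn)
    try:
        backup_files(mountpoint)
    finally:
        # Always clean up the temporary volume, even on failure.
        unmount(mountpoint)
        detach(conn)
        delete_volume(tmp_vol)
```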
Restore workflow with osbrick
-----------------------------
* freezer-agent workflow:
(Common steps)
single-vol-restore:
- Get the original Volume metadata
- Check if the volume id exists
- If the same volume id exists
+ snapshot the volume
+ convert from snap to volume
+ attach the volume
+ mount the volume
+ restore the backup data in the volume filesystem
+ if meta-override option is provided, the volume metadata from backup
is applied to the current Volume meta
- If the volume id does not exist
+ Create a new Volume with the same metadata from backup
+ attach the volume with os-brick
+ mount the volume
+ restore the backup data in the volume filesystem
- unmount
- detach the volume
- if remove_old_vol is provided, any existing volume not matching the
  new ones will be removed (dangerous option)
Restore of a single volume:
1) The freezer-agent takes the volume id as an input param (either from
   the ini file or the json file provided to the scheduler)
2) single-vol-restore from Common steps
Restore of all volumes owned by a tenant:
1) freezer-agent discovers all the volumes owned by the tenant from the
   Cinder API
2) Iterate over each volume
3) single-vol-restore from Common steps
Restore of all volumes from all tenants (admin):
1) freezer-agent gets the list of all Volumes available from the Cinder API
2) Iterate over each volume
3) single-vol-restore from Common steps
Restore of all volumes part of a Consistency Group:
1) get the list of all volumes from the Consistency Group. It can be
   provided as a single element id or a comma-separated list of elements
2) Iterate over each volume
3) single-vol-restore from Common steps
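The key decision in the restore flow above, reusing the existing volume if
the id still exists and otherwise recreating it from the backed-up
metadata, can be sketched as follows. The function and parameter names are
hypothetical, not a real Freezer API; the attach/mount/restore steps are
collapsed into a single callback for brevity:

```python
def restore_single_volume(volume_id, existing_ids, create_volume,
                          attach_and_restore):
    """Sketch of the restore decision described above.

    existing_ids: ids of volumes currently known to Cinder (hypothetical)
    create_volume: callback that recreates a volume from backup metadata
    attach_and_restore: callback covering attach/mount/restore/detach
    """
    if volume_id in existing_ids:
        target = volume_id                 # restore into the existing volume
    else:
        target = create_volume(volume_id)  # recreate from backup metadata
    attach_and_restore(target)
    return target
```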
Data model impact
-----------------
* new engine in the db
* DB model for single, all tenant, tenant owned volumes and consistency groups
New Options to be added:
------------------------
* engine-os-brick
* recreate-vol-on-error
* meta-override
* consistency groups [id]
* all_tenants
* all_tenant_volumes
* single_volume_id
* remove_old_vol
Alternatives
------------
Impacts
-------
* freezer-agent
* freezer-api
* freezer-web-ui
REST API impact
-----------------------
* API needs to support this new engine
Security impact
---------------------
None
Notifications impact
---------------------------
TBD.
Other end user impact
------------------------------
None. TBD.
Performance Impact
------------------
None.
Other deployer impact
------------------------------
Developer impact
------------------------
Implementation
==============
Assignee(s)
-----------------
Primary assignee:
Other contributors:
daemontool
Work Items
----------
Dependencies
============
Testing
=======
TBD.
Documentation Impact
====================
* Freezer API installation doc
* Freezer agent doc
* Freezer web ui doc
References
==========
* https://etherpad.openstack.org/p/freezer_cinder-os-brick

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=================================
Relational DB Schema with OSLO.DB
=================================
https://blueprints.launchpad.net/freezer/+spec/oslo.db
Taking advantage of the oslo.db library to have a database backend
architecture more uniform with other OpenStack projects.
Problem description
===================
Currently Freezer uses Elasticsearch (ES) as a database backend, which
is a NoSQL database specialized for ranked query results. Elasticsearch
adds additional complexity to an OpenStack system. Most of the
components use a relational database management system (a DBMS like
MySQL or PostgreSQL), which is more common, and it is more familiar to
maintain, troubleshoot and develop on top of relational databases.
Since Freezer-related data turned out to be relational, it would be more
convenient to access it through the oslo.db pattern library. Using it,
the database mapping would be more uniform with other OpenStack
projects, and it would be less challenging for new developers to
contribute.
Use Cases
---------
* For new developers already familiar with OpenStack, it should be less
  challenging to get familiar with the backend code, since most
  OpenStack projects use a relational database backend through the
  oslo.db pattern library.
* For deployers, there would no longer be a need to set up a special
  DBMS just for Freezer, since it could share the relational DBMS used
  by the other (core) OpenStack projects, while staying well isolated in
  its own database.
* For end users, it would be easier to maintain, since Freezer would not
  add additional complexity with a less common component; instead it can
  take advantage of the DBMS that is already deployed for OpenStack.
Proposed change
===============
Implementing the entities using oslo.db and SQLAlchemy base classes,
and exposing the new entities through the REST API.
Alternatives
------------
Oslo.db with SQLAlchemy is the de facto standard for OpenStack projects
implementing database backends on top of a relational DBMS. It provides
high-level ORM mapping and abstracts the different database backends.
Therefore we gain compatibility with multiple relational DBMSs, just
like any other OpenStack component using oslo.db.
Using any other alternative would either be less uniform with other
OpenStack project tooling, or would require implementing low-level
access directly against a specific database driver (just like now with
ES).
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change?
* How will the initial set of new data objects be generated, for example if you
need to take into account existing backups/jobs/... , or modify other
existing data describe how that will work.
There will be a brand new relational database schema [MIG1]_::

    Clients
        id (varchar) [p_key]
        project_id (uuid)
        config_id (varchar)
        description (varchar)
        uuid (uuid)

    Actions
        id (uuid) [p_key]
        action (varchar)
        project_id (uuid)
        mode (varchar)
        src_file (varchar)
        backup_name (varchar)
        container (varchar)
        restore_abs_path (varchar)

    Action_reports
        id (uuid) [p_key]
        action_id (uuid) [f_key]
        action_attachment_id (uuid) [f_key]
        project_id (uuid)
        result (varchar)
        time_elapsed (varchar)
        metadata (JSON)
        report_date (timestamp)
        log (blob)  <-- only on failure

    Jobs
        id (uuid) [p_key]
        project_id (uuid)
        scheduling (JSON)
        description (varchar)

    Action_attachments
        id (uuid) [p_key]
        action_id (uuid) [f_key]
        job_id (uuid) [f_key]
        project_id (uuid)
        priority (int)
        retries (int)
        retry_interval (int)
        mandatory (bool)

    Sessions
        id (uuid) [p_key]
        project_id (uuid)
        scheduling (JSON)
        policy (varchar)

    Job_attachments
        id (uuid) [p_key]
        client_id (varchar) [f_key]
        job_id (uuid) [f_key]
        session_id (uuid) [f_key]
        project_id (uuid)
        event (varchar)
        status (varchar)
        current_pid (int)
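A fragment of the schema above can be sketched in SQL, here using the
stdlib sqlite3 module purely for illustration; the actual backend would
use oslo.db/SQLAlchemy models and migrations, not raw DDL:

```python
import sqlite3

# In-memory sketch of three of the tables above, showing the foreign-key
# relationships between clients, jobs and job_attachments.
conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE clients (
    id          VARCHAR PRIMARY KEY,
    project_id  VARCHAR,
    config_id   VARCHAR,
    description VARCHAR,
    uuid        VARCHAR
);
CREATE TABLE jobs (
    id          VARCHAR PRIMARY KEY,
    project_id  VARCHAR,
    scheduling  TEXT,      -- JSON stored as text
    description VARCHAR
);
CREATE TABLE job_attachments (
    id          VARCHAR PRIMARY KEY,
    client_id   VARCHAR REFERENCES clients(id),
    job_id      VARCHAR REFERENCES jobs(id),
    project_id  VARCHAR,
    event       VARCHAR,
    status      VARCHAR,
    current_pid INTEGER
);
""")
conn.execute("INSERT INTO clients VALUES ('c1', 'p1', 'cfg', 'demo', 'u1')")
row = conn.execute(
    "SELECT project_id FROM clients WHERE id = 'c1'").fetchone()
```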
REST API impact
---------------
There should be a new v2 API implemented. TBD.
Security impact
---------------
None
Notifications impact
--------------------
TBD.
Other end user impact
---------------------
None. TBD.
Performance Impact
------------------
None.
Other deployer impact
---------------------
* The Elasticsearch configuration should be replaced with oslo.db
  configuration
* When upgrading from a previous version there must be a data migration
  from ES to the oslo.db backend DB (this will be addressed by another
  spec - TBD).
Developer impact
----------------
It will no longer be necessary to deploy ES.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
neilus
Other contributors:
daemontool
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
* implementing the database models
* create adapter for API v1(?) and v2
* implementing the CRUD API
* updating the devStack plugin
* updating documentation
Dependencies
============
* Implementing the database migration script (TBD), which migrates data
  from ES to the oslo.db backend DB.
* We will be using the oslo.db library and SQLAlchemy for the
  implementation.
Testing
=======
TBD.
Documentation Impact
====================
TBD.
* Freezer API installation doc
References
==========
.. [MIG1] https://etherpad.openstack.org/p/freezer_mysql_migration
.. https://etherpad.openstack.org/p/freezer_db_switch

specs/pike/redirects Normal file