Format as a Cinder-related OpenStack project

Since we are going to be importing the project into OpenStack, we need it
to follow the same structure as the other projects under the Cinder
umbrella.
Gorka Eguileor
2019-02-18 12:53:57 +01:00
parent 49554c7386
commit 77f399fd96
79 changed files with 1329 additions and 2476 deletions

doc/.gitignore vendored Normal file

@@ -0,0 +1,3 @@
build/*
source/api/*
.autogenerated

doc/requirements.txt Normal file

@@ -0,0 +1,6 @@
openstackdocstheme>=1.18.1 # Apache-2.0
reno>=2.5.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
sphinx!=1.6.6,!=1.6.7,>=1.6.2 # BSD
os-api-ref>=1.4.0 # Apache-2.0
sphinxcontrib-apidoc>=0.2.0 # BSD

doc/source/Makefile Normal file

@@ -0,0 +1,177 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/cinderlib.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/cinderlib.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/cinderlib"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/cinderlib"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

doc/source/conf.py Executable file

@@ -0,0 +1,300 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# cinderlib documentation build configuration file, created by
# sphinx-quickstart on Tue Jul 9 22:26:36 2013.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
project_root = os.path.abspath('../../')
sys.path.insert(0, project_root)
# # Get the project root dir, which is the parent dir of this
# import pdb; pdb.set_trace()
# cwd = os.getcwd()
# project_root = os.path.dirname(cwd)
#
# # Insert the project root dir as the first element in the PYTHONPATH.
# # This lets us ensure that the source package is imported, and that its
# # version is used.
# sys.path.insert(0, project_root)
# -- General configuration ---------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.6.5'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinxcontrib.apidoc',
'openstackdocstheme']
# sphinxcontrib.apidoc options
apidoc_module_dir = '../../cinderlib'
apidoc_output_dir = 'api'
apidoc_excluded_paths = [
'tests/*',
'tests',
'persistence/dbms.py',
'persistence/memory.py',
]
apidoc_separate_modules = True
apidoc_toc_file = False
autodoc_mock_imports = ['cinder', 'os_brick', 'oslo_utils',
'oslo_versionedobjects', 'oslo_concurrency',
'oslo_log', 'stevedore', 'oslo_db', 'oslo_config',
'oslo_privsep', 'cinder.db.sqlalchemy']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = []
# General information about the project.
project = u'Cinder Library'
copyright = u"2017, Cinder Developers"
# openstackdocstheme options
repository_name = 'openstack/cinderlib'
bug_project = 'cinderlib'
bug_tag = ''
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to
# some non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['cinderlib.']
# If true, keep warnings as "system message" paragraphs in the built
# documents.
#keep_warnings = False
# -- Options for HTML output -------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a
# theme further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as
# html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the
# top of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon
# of the docs. This file should be a Windows icon file (.ico) being
# 16x16 or 32x32 pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets)
# here, relative to this directory. They are copied after the builtin
# static files, so a file named "default.css" will overwrite the builtin
# "default.css".
html_static_path = ['_static']
# Add any paths that contain "extra" files, such as .htaccess.
html_extra_path = ['_extra']
# If not '', a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names
# to template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer.
# Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer.
# Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages
# will contain a <link> tag referring to it. The value of this option
# must be the base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'cinderlibdoc'
# -- Options for LaTeX output ------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'cinderlib.tex',
u'Cinder Library Documentation',
u'Cinder Contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at
# the top of the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings
# are parts, not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'cinderlib',
u'Cinder Library Documentation',
[u'Cinder Contributors'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ----------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'cinderlib',
u'Cinder Library Documentation',
u'Cinder Contributors',
'cinderlib',
'Direct usage of Cinder Block Storage drivers without the services.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False


@@ -0,0 +1,4 @@
Contributing
============
.. include:: ../../CONTRIBUTING.rst

doc/source/index.rst Normal file

@@ -0,0 +1,102 @@
Welcome to Cinder Library's documentation!
==========================================
.. image:: https://img.shields.io/pypi/v/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/pypi/pyversions/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/:license-apache-blue.svg
:target: http://www.apache.org/licenses/LICENSE-2.0
|
The Cinder Library, also known as cinderlib, is a Python library that leverages
the Cinder project to provide an object-oriented abstraction around Cinder's
storage drivers to allow their usage directly, without running any of the Cinder
services or surrounding services, such as Keystone, MySQL, or RabbitMQ.
The library is intended for developers who only need the basic CRUD
functionality of the drivers and don't need all the additional features
Cinder provides, such as quotas, replication, multi-tenancy, migrations,
retyping, scheduling, backups, authorization, authentication, the REST API, etc.
The library was originally created as an external project, so it didn't have
the broad range of backend testing Cinder does, and only a limited number of
drivers were validated at the time. Drivers should work out of the box, and
we'll keep a list of drivers that have added the cinderlib functional tests to
the driver gates confirming they work and ensuring they will keep working.
Features
--------
* Use a Cinder driver without running a DBMS, message broker, or Cinder
service.
* Use multiple simultaneous drivers in the same application.
* Basic operations support:
- Create volume
- Delete volume
- Extend volume
- Clone volume
- Create snapshot
- Delete snapshot
- Create volume from snapshot
- Connect volume
- Disconnect volume
- Local attach
- Local detach
- Validate connector
- Extra Specs for specific backend functionality.
- Backend QoS
- Multi-pool support
* Metadata persistence plugins:
- Stateless: Caller stores JSON serialization.
- Database: Metadata is stored in a database: MySQL, PostgreSQL, SQLite...
- Custom plugin: Caller provides module to store Metadata and cinderlib calls
it when necessary.
Example
-------
The following code extract is a simple example to illustrate how cinderlib
works. The code will use the LVM backend to create a volume, attach it to the
local host via iSCSI, and finally snapshot it:
.. code-block:: python
import cinderlib as cl
# Initialize the LVM driver
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
# Create a 1GB volume
vol = lvm.create_volume(1, name='lvm-vol')
# Export, initialize, and do a local attach of the volume
attach = vol.attach()
print('Volume %s attached to %s' % (vol.id, attach.path))
# Snapshot it
snap = vol.create_snapshot('lvm-snap')
Table of Contents
-----------------
.. toctree::
:maxdepth: 2
installation
usage
contributing
limitations

doc/source/installation.rst Normal file

@@ -0,0 +1,114 @@
.. highlight:: shell
============
Installation
============
The Cinder Library is an interfacing library that doesn't have any storage
driver code, so it expects Cinder drivers to be installed in the system to run
properly.
We can use the latest stable release or the latest code from the master branch.
Stable release
--------------
Drivers
_______
For Red Hat distributions the recommendation is to use RPMs to install the
Cinder drivers instead of using `pip`. If we don't have access to the
`Red Hat OpenStack Platform packages
<https://www.redhat.com/en/technologies/linux-platforms/openstack-platform>`_
we can use the `RDO community packages <https://www.rdoproject.org/>`_.
On CentOS, the Extras repository provides the RPM that enables the OpenStack
repository. Extras is enabled by default on CentOS 7, so you can simply install
the RPM to set up the OpenStack repository:
.. code-block:: console
# yum install -y centos-release-openstack-rocky
# yum install -y openstack-cinder
On RHEL and Fedora, you'll need to download and install the RDO repository RPM
to set up the OpenStack repository:
.. code-block:: console
# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
# yum install -y openstack-cinder
We can also install directly from source, either on the system or in a virtual environment:
.. code-block:: console
$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install git+git://github.com/openstack/cinder.git@stable/rocky
Library
_______
To install Cinder Library we'll use PyPI, so we'll make sure to have the `pip`_
command available:
.. code-block:: console
# yum install -y python-pip
# pip install cinderlib
This is the preferred method to install Cinder Library, as it will always
install the most recent stable release.
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
Latest code
-----------
Drivers
_______
If we don't have a packaged version or if we want to use a virtual environment
we can install the drivers from source:
.. code-block:: console
$ virtualenv cinder
$ source cinder/bin/activate
$ pip install git+git://github.com/openstack/cinder.git
Library
_______
The sources for Cinder Library can be downloaded from the `Github repo`_ to use
the latest version of the library.
You can either clone the public repository:
.. code-block:: console
$ git clone git://github.com/openstack/cinderlib
Or download the `tarball`_:
.. code-block:: console
$ curl -OL https://github.com/openstack/cinderlib/tarball/master
Once you have a copy of the source, you can install it with:
.. code-block:: console
$ virtualenv cinder
$ python setup.py install
.. _Github repo: https://github.com/openstack/cinderlib
.. _tarball: https://github.com/openstack/cinderlib/tarball/master
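After the installation we can quickly check which drivers ship with the
installed Cinder release. This is a minimal sketch using the
`list_supported_drivers` method described later in the usage documentation:

.. code-block:: python

    import cinderlib

    # Print the class names of all the drivers bundled with this Cinder release.
    print(sorted(cinderlib.list_supported_drivers().keys()))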


@@ -0,0 +1,49 @@
Limitations
-----------
Cinderlib works around a number of issues that were preventing the usage of the
drivers by other Python applications; some of these are:
- *Oslo config* configuration loading.
- Cinder-volume dynamic configuration loading.
- Privileged helper service.
- DLM configuration.
- Disabling of cinder logging.
- Direct DB access within drivers.
- *Oslo Versioned Objects* DB access methods such as `refresh` and `save`.
- Circular references in *Oslo Versioned Objects* for serialization.
- Using multiple drivers in the same process.
Being in its early development stages, the library is in no way close to the
robustness or feature richness that the Cinder project provides. Some of the
more noticeable limitations one should be aware of are:
- Most methods don't perform argument validation, so it's a classic GIGO_
library.
- The logic has been kept to a minimum, and higher-level logic is expected
to be handled by the caller: quotas, tenant control, migration, etc.
- Limited test coverage.
- Only a subset of Cinder's available operations is supported by the library.
Besides *cinderlib's* own limitations, the library also inherits some from
*Cinder's* code and is bound by the same restrictions and behaviors of the
drivers as if they were running under the standard *Cinder* services. The most
notable ones are:
- Dependency on the *eventlet* library.
- Behavior inconsistency on some operations across drivers. For example, you
can find drivers where cloning is a cheap operation performed by the storage
array, whereas others will actually create a new volume, attach the source and
new volumes, and perform a full copy of the data.
- External dependencies must be handled manually, so users will have to take
care of any library, package, or CLI tool that is required by the driver.
- Relies on command execution via *sudo* for attach/detach operations as well
as some CLI tools.
.. _GIGO: https://en.wikipedia.org/wiki/Garbage_in,_garbage_out

doc/source/make.bat Normal file

@@ -0,0 +1,242 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\cinderlib.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\cinderlib.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end


@@ -0,0 +1,287 @@
========
Backends
========
The *Backend* class provides the abstraction to access a storage array with a
specific configuration, which usually constrains our ability to operate on the
backend to a single pool.
.. note::
While some drivers have been manually validated, most drivers have not, so
there's a good chance that using any untested driver will show unexpected
behavior.
If you are testing *cinderlib* with an unverified backend you should use
an exclusive pool for the validation, so you don't have to be so careful
when creating resources, as you know that everything within that pool is
related to *cinderlib* and can be deleted using the vendor's management
tool.
If you try the library with another storage array I would love to hear
about your results, the library version, and the configuration used (with
masked IPs, passwords, and users).
Initialization
--------------
Before we can access a storage array we have to initialize the *Backend*,
which has only one defined parameter, while all other parameters are left
undefined in the method prototype:
.. code-block:: python
class Backend(object):
def __init__(self, volume_backend_name, **driver_cfg):
There are two arguments that we'll always have to pass on initialization: one
is the `volume_backend_name`, the unique identifier that *cinderlib* will use
to identify this specific driver initialization (so we'll need to make sure not
to repeat the name), and the other is the `volume_driver`, which refers to the
Python namespace that points to the *Cinder* driver.
All other *Backend* configuration options are free-form keyword arguments.
Each driver and storage array requires different information to operate, some
require credentials to be passed as parameters, while others use a file, and
some require the control address as well as the data addresses. This behavior
is inherited from the *Cinder* project.
To find out what configuration options are available and which ones are
compulsory, the best approach is to check the vendor's documentation or the
`OpenStack's Cinder volume driver configuration documentation`_.
.. attention::
Some drivers have external dependencies that we must satisfy before
initializing the driver, or it may fail either on initialization or when
running specific operations. For example, Kaminario requires the *krest*
Python library, and Pure requires the *purestorage* Python library.
Python library dependencies are usually documented in the
`driver-requirements.txt file
<https://github.com/openstack/cinder/blob/master/driver-requirements.txt>`_;
as for the required CLI tools, we'll have to check the vendor's
documentation.
Cinder only supports using one driver at a time, as each process only handles
one backend, but *cinderlib* has overcome this limitation and supports having
multiple *Backends* simultaneously.
Let's now see initialization examples for some storage backends:
LVM
---
.. code-block:: python
import cinderlib
lvm = cinderlib.Backend(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi',
)
XtremIO
-------
.. code-block:: python
import cinderlib
xtremio = cinderlib.Backend(
volume_driver='cinder.volume.drivers.dell_emc.xtremio.XtremIOISCSIDriver',
san_ip='10.10.10.1',
xtremio_cluster_name='xtremio_cluster',
san_login='xtremio_user',
san_password='xtremio_password',
volume_backend_name='xtremio',
)
Kaminario
---------
.. code-block:: python
import cinderlib
kaminario = cinderlib.Backend(
volume_driver='cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver',
san_ip='10.10.10.2',
san_login='kaminario_user',
san_password='kaminario_password',
volume_backend_name='kaminario_iscsi',
)
Available Backends
------------------
The usual procedure is to initialize a *Backend* and store it in a variable at
the same time, so we can use it to manage our storage backend, but there are
cases where we may have lost the reference, or we are in a place in our code
where we don't have access to the original variable.
For these situations we can use *cinderlib's* tracking of *Backends* through
the `backends` class dictionary where all created *Backends* are stored using
the `volume_backend_name` as the key.
.. code-block:: python
for backend in cinderlib.Backend.backends.values():
initialized_msg = '' if backend.initialized else 'not '
print('Backend %s is %sinitialized with configuration: %s' %
(backend.id, initialized_msg, backend.config))
Installed Drivers
-----------------
Available drivers for *cinderlib* depend on the Cinder version installed, so
there is a method, `list_supported_drivers`, that lists information about the
drivers that are included with the Cinder release installed on the system.
.. code-block:: python
import cinderlib
drivers = cinderlib.list_supported_drivers()
The result is a dictionary with the class name of the driver, a description,
the version of the driver, etc.
Here's the entry for the LVM driver:
.. code-block:: python
{'LVMVolumeDriver':
{'ci_wiki_name': 'Cinder_Jenkins',
'class_fqn': 'cinder.volume.drivers.lvm.LVMVolumeDriver',
'class_name': 'LVMVolumeDriver',
'desc': 'Executes commands relating to Volumes.',
'supported': True,
'version': '3.0.0'}}
Stats
-----
In *Cinder* all cinder-volume services periodically report the stats of their
backend to the cinder-scheduler services so they can make informed placement
decisions on operations such as volume creation and volume migration.
Some of the keys provided in the stats dictionary include:
- `driver_version`
- `free_capacity_gb`
- `storage_protocol`
- `total_capacity_gb`
- `vendor_name`
- `volume_backend_name`
Additional information can be found in the `Volume Stats section
<https://docs.openstack.org/cinder/queens/contributor/drivers.html#volume-stats>`_
within the Developer's Documentation.
Gathering stats is a costly operation for many storage backends, so by default
the stats method will return cached values instead of collecting them again.
If the latest data is required, the `refresh=True` parameter should be passed
in the `stats` method call.
Here's an example of the output from the LVM *Backend* with refresh:
.. code-block:: python
>>> from pprint import pprint
>>> pprint(lvm.stats(refresh=True))
{'driver_version': '3.0.0',
'pools': [{'QoS_support': False,
'filter_function': None,
'free_capacity_gb': 20.9,
'goodness_function': None,
'location_info': 'LVMVolumeDriver:router:cinder-volumes:thin:0',
'max_over_subscription_ratio': 20.0,
'multiattach': False,
'pool_name': 'LVM',
'provisioned_capacity_gb': 0.0,
'reserved_percentage': 0,
'thick_provisioning_support': False,
'thin_provisioning_support': True,
'total_capacity_gb': '20.90',
'total_volumes': 1}],
'sparse_copy_volume': True,
'storage_protocol': 'iSCSI',
'vendor_name': 'Open Source',
'volume_backend_name': 'LVM'}
Available volumes
-----------------
The *Backend* class keeps track of all the *Backend* instances in the
`backends` class attribute, and each *Backend* instance has a `volumes`
property that will return a `list` of all the existing volumes in the specific
backend. Deleted volumes will no longer be present.
So, assuming that we have an `lvm` variable holding an initialized *Backend*
instance where we have created volumes, we could list them with:
.. code-block:: python
for vol in lvm.volumes:
print('Volume %s has %s GB' % (vol.id, vol.size))
The `volumes` attribute is a lazy-loadable property that will only update its
value on the first access. More information about lazy-loadable properties can
be found in the :doc:`tracking` section. For more information on data loading
please refer to the :doc:`metadata` section.
.. note::
The `volumes` property does not query the storage array for a list of
existing volumes. It queries the metadata storage to see what volumes
have been created using *cinderlib* and returns this list. This means that
we won't be able to manage pre-existing resources from the backend, and we
won't notice when a resource is removed directly on the backend.
Attributes
----------
The *Backend* class has no attributes of interest besides the `backends`
dictionary mentioned above and the `id`, `config`, and JSON-related properties
we'll see later in the :doc:`serialization` section.
The `id` property refers to the `volume_backend_name`, which is also the key
used in the `backends` class attribute.
The `config` property will return a dictionary with only the volume backend's
name by default to limit unintended exposure of backend credentials on
serialization. If we want it to return all the configuration options we need
to pass `output_all_backend_info=True` on *cinderlib* initialization.
If we try to access any non-existent attribute in the *Backend*, *cinderlib*
will understand we are trying to access a *Cinder* driver attribute and will
try to retrieve it from the driver's instance. This is the case with the
`initialized` property we accessed in the backends listing example.
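As a minimal sketch, assuming the `lvm` *Backend* from the earlier examples has
already been initialized, these properties can be inspected like this:

.. code-block:: python

    # `id` is the volume_backend_name used on initialization.
    print('Backend id: %s' % lvm.id)

    # By default `config` only contains the backend name, unless cinderlib was
    # set up with output_all_backend_info=True.
    print('Configuration: %s' % lvm.config)

    # Unknown attributes are forwarded to the Cinder driver instance.
    print('Driver initialized: %s' % lvm.initialized)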
Other methods
-------------
All other methods available in the *Backend* class will be explained in their
relevant sections:
- `load` and `load_backend` will be explained together with `json`, `jsons`,
`dump`, `dumps` properties and `to_dict` method in the :doc:`serialization`
section.
- `create_volume` method will be covered in the :doc:`volumes` section.
- `validate_connector` will be explained in the :doc:`connections` section.
- `global_setup` has been covered in the :doc:`initialization` section.
- `pool_names` is a tuple with all the pools available in the driver. Non-pool-aware
drivers will have only 1 pool and use the name of the backend as its
name. Pool-aware drivers may report multiple values, which can be passed to
the `create_volume` method in the `pool_name` parameter, as shown in the
sketch below.
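A minimal sketch of using `pool_names`, assuming the `lvm` *Backend* from the
earlier examples (a non-pool-aware driver will simply report a single pool):

.. code-block:: python

    for pool in lvm.pool_names:
        # Create a 1GB volume on each pool reported by the driver.
        vol = lvm.create_volume(1, pool_name=pool)
        print('Created volume %s on pool %s' % (vol.id, pool))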
.. _OpenStack's Cinder volume driver configuration documentation: https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-drivers.html


@@ -0,0 +1,275 @@
===========
Connections
===========
When talking about attaching a *Cinder* volume there are three steps that must
happen before the volume is available in the host:
1. Retrieve the connection information from the host where the volume is going
to be attached. Here we would be getting the iSCSI initiator name, IP address,
and similar information.
2. Use the connection information from step 1 to make the volume accessible to
the host in the storage backend, returning the volume connection information.
This step entails exporting the volume and initializing the connection.
3. Attach the volume to the host using the data retrieved in step 2.
If we are running *cinderlib* and doing the attach on the same host, then all
steps will be done on that host. But in many cases you may want to manage
the storage backend on one host and attach a volume on another. In such cases,
steps 1 and 3 will happen on the host that needs the attach and step 2 on the
node running *cinderlib*.
Projects in *OpenStack* use the *OS-Brick* library to manage the attaching and
detaching processes. The same thing happens in *cinderlib*. The only difference
is that there are some connection types that are handled by the hypervisors in
*OpenStack*, so we need some alternative code in *cinderlib* to manage them.
*Connection* objects' most interesting attributes are:
- `connected`: Boolean that reflects if the connection is complete.
- `volume`: The *Volume* to which this instance holds the connection
information.
- `protocol`: String with the connection protocol for this volume, e.g.
`iscsi`, `rbd`.
- `connector_info`: Dictionary with the connection information from the host
that is attaching, such as its hostname, IP address, initiator name, etc.
- `conn_info`: Dictionary with the connection information the host requires to
do the attachment, such as IP address, target name, credentials, etc.
- `device`: If we have done a local attachment this will hold a dictionary with
all the attachment information, such as the `path`, the `type`, the
`scsi_wwn`, etc.
- `path`: String with the path of the system device that has been created when
the volume was attached.
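As a minimal sketch, assuming a `vol` volume created as in the next section's
example, these attributes could be inspected after a local attach:

.. code-block:: python

    attach = vol.attach()

    print('Connected: %s' % attach.connected)
    print('Protocol: %s' % attach.protocol)
    print('Volume %s is available at %s' % (attach.volume.id, attach.path))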
Local attach
------------
Once we have created a volume with *cinderlib*, doing a local attachment is
really simple: we just have to call the `attach` method on the *Volume* and
we'll get the *Connection* information for the attached volume, and once we
are done we call the `detach` method on the *Volume*.
.. code-block:: python
vol = lvm.create_volume(size=1)
attach = vol.attach()
with open(attach.path, 'w') as f:
f.write('*' * 100)
vol.detach()
This `attach` method will take care of everything, from gathering our local
connection information, to exporting the volume, initializing the connection,
and finally doing the local attachment of the volume to our host.
The `detach` operation works in a similar way, but performing the steps in
reverse. It will detach the volume from our host, terminate the connection,
and, if there are no more connections to the volume, it will also remove the
export of the volume.
.. attention::
The *Connection* instance returned by the *Volume* `attach` method also has
a `detach` method, but this one behaves differently from the one we've seen
in the *Volume*, as it will just perform the local detach step and not the
terminate connection or the remove export steps.
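A minimal sketch of the difference, reusing the locally attached volume from
the example above:

.. code-block:: python

    attach = vol.attach()

    # attach.detach() would only perform the local detach step, while
    # vol.detach() also terminates the connection and removes the export.
    vol.detach()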
Remote connection
-----------------
For a remote connection, where you don't have the driver configuration or
access to the management storage network, attaching and detaching volumes is a
little more inconvenient, and how you do it will depend on whether you have
access to the metadata persistence storage or not.
In any case the general attach flow looks something like this:
- Consumer gets connector information from its host.
- Controller receives the connector information from the consumer.
- Controller exports and maps the volume using the connector information and
gets the connection information needed to attach the volume on the consumer.
- The consumer gets the connection information.
- The consumer attaches the volume using the connection information.
With access to the metadata persistence storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case things are easier, as you can use the persistence storage to pass
information between the consumer and the controller node.
Assuming you have the following variables:
- `persistence_config` configuration of your metadata persistence storage.
- `node_id` unique string identifier for your consumer nodes that doesn't
change between reboots.
- `cinderlib_driver_configuration` is a dictionary with the Cinder backend
configuration needed by cinderlib to connect to the storage.
- `volume_id` ID of the volume we want to attach.
The consumer node must store its connector properties on start using the
key-value storage provided by the persistence plugin:
.. code-block:: python
import json
import socket
import cinderlib as cl
cl.setup(persistence_config=persistence_config)
kv = cl.Backend.persistence.get_key_values(node_id)
if not kv:
storage_nw_ip = socket.gethostbyname(socket.gethostname())
connector_dict = cl.get_connector_properties('sudo', storage_nw_ip,
True, False)
value = json.dumps(connector_dict, separators=(',', ':'))
kv = cl.KeyValue(node_id, value)
cl.Backend.persistence.set_key_value(kv)
Then when we want to attach a volume to `node_id` the controller can retrieve
this information using the persistence plugin and export and map the volume for
the specific host.
.. code-block:: python
import json
import cinderlib as cl
cl.setup(persistence_config=persistence_config)
storage = cl.Backend(**cinderlib_driver_configuration)
kv = cl.Backend.persistence.get_key_values(node_id)
if not kv:
raise Exception('Unknown node')
connector_info = json.loads(kv[0].value)
vol = storage.Volume.get_by_id(volume_id)
vol.connect(connector_info, attached_host=node_id)
Once the volume has been exported and mapped, the connection information is
automatically stored by the persistence plugin and the consumer host can attach
the volume:
.. code-block:: python
vol = storage.Volume.get_by_id(volume_id)
connection = vol.connections[0]
connection.attach()
print('Volume %s attached to %s' % (vol.id, connection.path))
When attaching the volume, the metadata plugin will store the changes to the
*Connection* instance that are needed for the detach.
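When the volume is no longer needed on the consumer, the local detach can be
done in a similar way; a minimal sketch, assuming the same `storage` *Backend*
and `volume_id` as above (removing the export is left to the controller):

.. code-block:: python

    vol = storage.Volume.get_by_id(volume_id)
    connection = vol.connections[0]

    # Only performs the local detach on this host.
    connection.detach()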
No access to the metadata persistence storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is more inconvenient, as you'll have to handle the data exchange manually
as well as the *OS-Brick* library calls to do the attach/detach.
First we need to get the connection information on the host that is going to do
the attach:
.. code-block:: python
from os_brick.initiator import connector
connector_dict = connector.get_connector_properties('sudo', storage_nw_ip,
True, False)
Now we need to pass this connector information dictionary to the controller
node. This part will depend on your specific application/system.
In the controller node, once we have the contents of the `connector_dict`
variable we can export and map the volume and get the info needed by the
consumer:
.. code-block:: python
import cinderlib as cl
cl.setup(persistence_config=persistence_config)
storage = cl.Backend(**cinderlib_driver_configuration)
vol = storage.Volume.get_by_id(volume_id)
conn = vol.connect(connector_info, attached_host=node_id)
connection_info = conn.connection_info
We have to pass the contents of the `connection_info` variable to the consumer
node, and that node will use it to attach the volume:
.. code-block:: python
import os_brick
from os_brick.initiator import connector
connector_dict = connection_info['connector']
conn_info = connection_info['conn']
protocol = conn_info['driver_volume_type']
conn = connector.InitiatorConnector.factory(
protocol, 'sudo', use_multipath=True,
device_scan_attempts=3, conn=connector_dict)
device = conn.connect_volume(conn_info['data'])
print('Volume attached to %s' % device.get('path'))
At this point we have the `device` variable that needs to be stored for the
disconnection, so we have to either store it on the consumer node, or pass it
to the controller node so we can save it with the connector info.
Here's an example on how to save it on the controller node:
.. code-block:: python
conn = vol.connections[0]
conn.device = device
conn.save()
.. warning:: At the time of this writing this mechanism doesn't support RBD
connections, as this support is added by cinderlib itself.
Multipath
---------
If we want to use multipathing for local attachments we must let the *Backend*
know when instantiating the driver by passing
`use_multipath_for_image_xfer=True`:
.. code-block:: python
import cinderlib
lvm = cinderlib.Backend(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi',
use_multipath_for_image_xfer=True,
)
Multi attach
------------
Multi-attach support was added to *Cinder* in the Queens cycle, but it's not
currently supported by *cinderlib*.
Other methods
-------------
All other methods available in the *Connection* class will be explained in their
relevant sections:
- `load` will be explained together with `json`, `jsons`, `dump`, and `dumps`
properties, and the `to_dict` method in the :doc:`serialization` section.
- `refresh` will reload the volume from the metadata storage and reload any
lazy loadable property that has already been loaded. Covered in the
:doc:`serialization` and :doc:`tracking` sections.


@@ -0,0 +1,197 @@
==============
Initialization
==============
Cinderlib itself doesn't require an initialization, as it tries to provide
sensible settings, but in some cases we may want to modify these defaults to
fit a specific desired behavior, and the library provides a mechanism to
support this.
Library initialization should be done before making any other library call,
including *Backend* initialization and loading serialized data; if we try to do
it after other calls the library will raise an `Exception`.
The provided *setup* method is `cinderlib.Backend.global_setup`, but for
convenience the library provides a reference to this class method in
`cinderlib.setup`.
The method definition is as follows:
.. code-block:: python
@classmethod
def global_setup(cls, file_locks_path=None, root_helper='sudo',
suppress_requests_ssl_warnings=True, disable_logs=True,
non_uuid_ids=False, output_all_backend_info=False,
project_id=None, user_id=None, persistence_config=None,
fail_on_missing_backend=True, host=None,
**cinder_config_params):
The meanings of the library's configuration options are:
file_locks_path
---------------
Cinder is a complex system that can support Active-Active deployments, and each
driver and storage backend has different restrictions, so in order to
facilitate mutual exclusion it provides 3 different types of locks depending
on the scope the driver requires:
- Between threads of the same process.
- Between different processes on the same host.
- In all the OpenStack deployment.
Cinderlib doesn't currently support the third type of locks, but that should
not be an inconvenience for most cinderlib usage.
Cinder uses file locks for the between-process locking and cinderlib uses that
same kind of locking for the third type of locks, which is also what Cinder
uses when not deployed in an Active-Active fashion.
The parameter defaults to `None`, which will use the path indicated by the
`state_path` configuration option, which in turn defaults to the current
directory.
root_helper
-----------
There are some operations in *Cinder* drivers that require `sudo` privileges;
this could be because they are running Python code that requires it or because
they are running a command with `sudo`.
Attaching and detaching operations with *cinderlib* will also require `sudo`
privileges.
This configuration option allows us to define a custom root helper, or to
disable all `sudo` operations by passing an empty string when we know we don't
require them and we are running the process with a user that doesn't have
passwordless `sudo`.
Defaults to `sudo`.
suppress_requests_ssl_warnings
------------------------------
Controls the suppression of the *requests* library SSL certificate warnings.
Defaults to `True`.
non_uuid_ids
------------
As mentioned in the :doc:`volumes` section, we can provide resource IDs manually
at creation time, and some drivers even support non-UUID identifiers, but
since that's not a given, validation will reject any non-UUID value.
This configuration option allows us to disable the validation on the IDs, at
the user's risk.
Defaults to `False`.
output_all_backend_info
-----------------------
Whether to include the *Backend* configuration when serializing objects.
Detailed information can be found in the :doc:`serialization` section.
Defaults to `False`.
disable_logs
------------
*Cinder* drivers are meant to be run within a full-blown service, so they can
be quite verbose in terms of logging; that's why *cinderlib* disables their
logging by default.
Defaults to `True`.
project_id
----------
*Cinder* is a multi-tenant service, and when resources are created they belong
to a specific tenant/project. With this parameter we can define, using a
string, an identifier for our project that will be assigned to the resources we
create.
Defaults to `cinderlib`.
user_id
-------
Within each project/tenant the *Cinder* project supports multiple users, so
when it creates a resource a reference to the user that created it is stored
in the resource. Using this parameter we can define, using a string, an
identifier for the user of cinderlib to be recorded in the resources.
Defaults to `cinderlib`.
persistence_config
------------------
*Cinderlib* operation requires data persistence, which is achieved with a
metadata persistence plugin mechanism.
The project includes 2 types of plugins providing 3 different persistence
solutions, and more can be added via Python modules by passing custom plugins
in this parameter.
Users of the *cinderlib* library must decide which plugin best fits their needs
and pass the appropriate configuration in a dictionary as the
`persistence_config` parameter.
The parameter is optional, and defaults to the `memory` plugin, but if it's
passed it must always include the `storage` key specifying the plugin to be
used. All other key-value pairs must be valid parameters for the specific
plugin.
The value for the `storage` key can be a string identifying a plugin registered
using Python entrypoints, an instance of a class inheriting from
`PersistenceDriverBase`, or a `PersistenceDriverBase` class.
Information regarding available plugins, their description and parameters, and
different ways to initialize the persistence can be found in the
:doc:`metadata` section.
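As an illustrative sketch only (the plugin name `db` and the `connection`
option are assumptions here; the actual plugin names and their parameters are
covered in the :doc:`metadata` section), a database-backed configuration could
look like this:

.. code-block:: python

    import cinderlib as cl

    # Hypothetical database plugin configuration using a local SQLite file.
    persistence_config = {'storage': 'db',
                          'connection': 'sqlite:///cinderlib.sqlite'}
    cl.setup(persistence_config=persistence_config)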
fail_on_missing_backend
-----------------------
To facilitate operations on resources, *Cinderlib* stores a reference to the
instance of the *backend* in most of the in-memory objects.
When deserializing or retrieving objects from the metadata persistence storage
*cinderlib* tries to properly set this *backend* instance based on the
*backends* currently in memory.
Trying to load an object without having instantiated the *backend* will result
in an error, unless we set `fail_on_missing_backend` to `False` on
initialization.
This is useful if we are sharing the metadata persistence storage and we want
to load a volume that is already connected to do just the attachment.
host
----
Host configuration option used for all volumes created by this cinderlib
execution.
In cinderlib, volumes are selected based on the backend name, not on the
host@backend combination as Cinder does. Therefore backend names must be
unique across all cinderlib applications that are using the same persistence
storage backend.
A second application running cinderlib with a different host value will have
access to the same resources if it uses the same backend name.
Defaults to the host's hostname.
Other keyword arguments
-----------------------
Any other keyword argument passed to the initialization method will be
considered a *Cinder* configuration option in the `[DEFAULT]` section.
This can be useful to set additional logging configuration, like the debug log
level, the `state_path` used by default in many options, or other options like
the `ssh_hosts_key_file` required by drivers that use SSH.
For a list of the possible configuration options one should look into the
*Cinder* project's documentation.
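As a hedged example (the paths below are placeholders), enabling debug logging
and pointing SSH based drivers to a known hosts file could look like this:

.. code-block:: python

   import cinderlib as cl

   # Extra keyword arguments become Cinder [DEFAULT] configuration options
   cl.setup(disable_logs=False,
            debug=True,
            state_path='/var/lib/cinderlib',
            ssh_hosts_key_file='/etc/cinderlib/ssh_known_hosts')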

View File

@@ -0,0 +1,268 @@
====================
Metadata Persistence
====================
*Cinder* drivers are not stateless, and the interface between the *Cinder* core
code and the drivers allows them to return data that can be stored in the
database. Some drivers that have not been updated even access the database
directly.
Because *cinderlib* uses the *Cinder* drivers as they are, it cannot be
stateless either.
Originally *cinderlib* stored all the required metadata in RAM, and passed the
responsibility of persisting this information to the user of the library.
Library users would create or modify resources using *cinderlib*, and then
serialize the resources and manage the storage of this information themselves.
This allowed referencing those resources after exiting the application and in
case of a crash.
This solution would result in code duplication across projects, as many library
users would end up using the same storage types for the serialized data.
That's when the metadata persistence plugin was introduced in the code.
With the metadata plugin mechanism we can have plugins for different storages
and they can be shared between different projects.
*Cinderlib* includes 2 types of plugins providing 3 different persistence
solutions:
- Memory (the default)
- Database
- Database in memory
Using the memory mechanisms users can still rely on the JSON serialization
mechanism to store the metadata.
Besides the included memory and database plugins, users can store the data
wherever they want using the JSON serialization mechanism or with a custom
metadata plugin.
The persistence mechanism must be configured before initializing any *Backend*,
using the `persistence_config` parameter of the `setup` or `global_setup`
methods.
.. note:: When deserializing data using the `load` method on memory based
storage, the data will not be made available through the *Backend* unless we
pass `save=True` to the `load` call.
Memory plugin
-------------
The memory plugin is the fastest one, but it has its drawbacks. It doesn't
provide persistence across application restarts and it's more likely to have
issues than the database plugin.
Even though it's more likely to present issues with some untested drivers, it
is still the default plugin, because it is the plugin that exercises the raw
plugin mechanism and will surface any incompatibility issues that external
plugins may have with *Cinder* drivers.
This plugin is identified with the name `memory`, and here we can see a simple
example of how to use it and save everything to a file with the serialization
mechanism:
.. code-block:: python
import cinderlib as cl
cl.setup(persistence_config={'storage': 'memory'})
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
vol = lvm.create_volume(1)
with open('lvm.txt', 'w') as f:
f.write(lvm.dumps)
And how to load it back:
.. code-block:: python
import cinderlib as cl
cl.setup(persistence_config={'storage': 'memory'})
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
with open('lvm.txt', 'r') as f:
data = f.read()
backends = cl.load(data, save=True)
print(backends[0].volumes)
Database plugin
---------------
This metadata plugin is the most likely to be compatible with any *Cinder*
driver, as it's built on top of *Cinder's* actual database layer.
This plugin includes 2 storage options: memory and real database. They are
identified with the storage identifiers `memory_db` and `db` respectively.
The memory option stores the data in an in-memory SQLite database. This
option helps debugging issues on untested drivers: if a driver works with the
memory database plugin but doesn't with the `memory` one, then the issue is
most likely caused by the driver accessing the database. That database access
could be happening directly, by importing the database layer, or indirectly,
through versioned objects.
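A minimal sketch selecting this in-memory database option:

.. code-block:: python

   import cinderlib as cl

   # Same database code paths as the real DB plugin, but nothing is persisted
   cl.setup(persistence_config={'storage': 'memory_db'})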
The memory database doesn't require any additional configuration, but when
using a real database we must pass the connection information, using the
`SQLAlchemy database URLs format`_, as the value of the `connection` key.
.. code-block:: python
import cinderlib as cl
persistence_config = {'storage': 'db', 'connection': 'sqlite:///cl.sqlite'}
cl.setup(persistence_config=persistence_config)
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
vol = lvm.create_volume(1)
Using it later is exactly the same:
.. code-block:: python
import cinderlib as cl
persistence_config = {'storage': 'db', 'connection': 'sqlite:///cl.sqlite'}
cl.setup(persistence_config=persistence_config)
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
print(lvm.volumes)
Custom plugins
--------------
The plugin mechanism uses Python entrypoints to identify plugins present in the
system. So any module exposing the `cinderlib.persistence.storage` entrypoint
will be recognized as a *cinderlib* metadata persistence plugin.
As an example, the definition in `setup.py` of the entrypoints for the plugins
included in *cinderlib* is:
.. code-block:: python
entry_points={
'cinderlib.persistence.storage': [
'memory = cinderlib.persistence.memory:MemoryPersistence',
'db = cinderlib.persistence.dbms:DBPersistence',
'memory_db = cinderlib.persistence.dbms:MemoryDBPersistence',
],
},
But there may be cases where we don't want to create entry points available
system wide, and we want an application-only plugin mechanism. For this
purpose *cinderlib* supports passing a plugin instance or class as the value of
the `storage` key in the `persistence_config` parameter.
The instance and class must inherit from the `PersistenceDriverBase` in
`cinderlib/persistence/base.py` and implement all the following methods:
- `db`
- `get_volumes`
- `get_snapshots`
- `get_connections`
- `get_key_values`
- `set_volume`
- `set_snapshot`
- `set_connection`
- `set_key_value`
- `delete_volume`
- `delete_snapshot`
- `delete_connection`
- `delete_key_value`
The `__init__` method is usually needed as well; it will receive as keyword
arguments the parameters provided in the `persistence_config`. The `storage`
key-value pair is not included among those keyword arguments.
The invocation with a class plugin would look something like this:
.. code-block:: python
import cinderlib as cl
from cinderlib.persistence import base
class MyPlugin(base.PersistenceDriverBase):
def __init__(self, location, user, password):
...
persistence_config = {'storage': MyPlugin, 'location': '127.0.0.1',
'user': 'admin', 'password': 'nomoresecrets'}
cl.setup(persistence_config=persistence_config)
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
Migrating storage
-----------------
Metadata is crucial for the proper operation of *cinderlib*, as the *Cinder*
drivers cannot retrieve this information from the storage backend.
There may be cases where we want to stop using a metadata plugin and start
using another one, but we have metadata on the old plugin, so we need to
migrate this information from one backend to another.
To achieve a metadata migration we can use the `refresh`, `dump`, `load`, and
`set_persistence` methods.
An example code of how to migrate from SQLite to MySQL could look like this:
.. code-block:: python
import cinderlib as cl
# Setup the source persistence plugin
persistence_config = {'storage': 'db',
'connection': 'sqlite:///cinderlib.sqlite'}
cl.setup(persistence_config=persistence_config)
# Setup backends we want to migrate
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
# Get all the data into memory
data = cl.dump()
# Setup new persistence plugin
new_config = {
'storage': 'db',
'connection': 'mysql+pymysql://user:password@IP/cinder?charset=utf8'
}
cl.Backend.set_persistence(new_config)
# Load and save the data into the new plugin
backends = cl.load(data, save=True)
.. _SQLAlchemy database URLs format: http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls

View File

@@ -0,0 +1,210 @@
=============
Serialization
=============
A *Cinder* driver is stateless by itself, but it still requires the right data
to work, and that's why the cinder-volume service takes care of storing the
state in the DB. This means that *cinderlib* will have to simulate the DB for
the drivers, as some operations actually return additional data that needs to
be kept and provided in any future operation.
Originally *cinderlib* stored all the required metadata in RAM, and passed the
responsibility of persisting this information to the user of the library.
Library users would create or modify resources using *cinderlib*, and then
would have to serialize the resources and manage the storage of this
information. This allowed referencing those resources after exiting the
application and in case of a crash.
Now we support :doc:`metadata` plugins, but there are still cases where we'll
want to serialize the data:
- When logging or debugging resources.
- When using a metadata plugin that stores the data in memory.
- Over the wire transmission of the connection information to attach a volume
on a remote node.
We have multiple methods to satisfy these needs, to serialize the data (`json`,
`jsons`, `dump`, `dumps`), to deserialize it (`load`), and to convert to a user
friendly object (`to_dict`).
To JSON
-------
We can get a JSON representation of any *cinderlib* object - *Backend*,
*Volume*, *Snapshot*, and *Connection* - using their following properties:
- `json`: Returns a JSON representation of the current object information as a
Python dictionary. Lazy loadable objects that have not been loaded will not
be present in the resulting dictionary.
- `jsons`: Returns a string with the JSON representation. It's the equivalent
of converting to a string the dictionary from the `json` property.
- `dump`: Identical to the `json` property with the exception that it ensures
all lazy loadable attributes have been loaded. If an attribute had already
been loaded its contents will not be refreshed.
- `dumps`: Returns a string with the JSON representation of the fully loaded
object. It's the equivalent of converting to a string the dictionary from
the `dump` property.
Besides these resource specific properties, we also have their equivalent
methods at the library level that will operate on all the *Backends* present in
the application.
.. attention:: On the objects, these are properties (`volume.dumps`), but on
the library, these are methods (`cinderlib.dumps()`).
.. note::
We don't have to worry about circular references, such as a *Volume* with a
*Snapshot* that has a reference to its source *Volume*, since *cinderlib*
is prepared to handle them.
To demonstrate the serialization in *cinderlib* we can look at an easy way to
save all the *Backends'* resources information from an application that uses
*cinderlib* with the metadata stored in memory:
.. code-block:: python
with open('cinderlib.txt', 'w') as f:
f.write(cinderlib.dumps())
In a similar way we can also store a single *Backend* or a single *Volume*:
.. code-block:: python
vol = lvm.create_volume(size=1)
with open('lvm.txt', 'w') as f:
f.write(lvm.dumps)
with open('vol.txt', 'w') as f:
f.write(vol.dumps)
We must remember that `dump` and `dumps` trigger loading of properties that
are not already loaded. Any lazy loadable property that was already loaded
will not be updated. A good way to ensure we are using the latest data is to
trigger a `refresh` on the backends before doing the `dump` or `dumps`.
.. code-block:: python
for backend in cinderlib.Backend.backends:
backend.refresh()
with open('cinderlib.txt', 'w') as f:
f.write(cinderlib.dumps())
When serializing *cinderlib* resources we'll get all the data currently
present. This means that when serializing a volume that is attached and has
snapshots we'll get them all serialized.
There are some cases where we don't want this, such as when implementing a
persistence metadata plugin. We should use the `to_json` and `to_jsons`
methods for such cases, as they will return a simplified serialization of the
resource containing only the data from the resource itself.
From JSON
---------
Just like we had the `json`, `jsons`, `dump`, and `dumps` methods in all the
*cinderlib* objects to serialize data, we also have the `load` method to
deserialize this data back and recreate a *cinderlib* internal representation
from JSON, be it stored in a Python string or a Python dictionary.
The `load` method is present in *Backend*, *Volume*, *Snapshot*, and
*Connection* classes as well as in the library itself. The resource specific
`load` class method is the exact counterpart of the serialization methods, and
it will deserialize the specific resource from the class it's being called on.
The library's `load` method is capable of loading anything we have serialized.
Not only can it load the full list of *Backends* with their resources, but it
can also load individual resources. This makes it the recommended way to
deserialize any data in *cinderlib*. By default, serialization and the
metadata storage are disconnected, so loading serialized data will not ensure
that the data is present in the persistence storage. We can ensure that
deserialized data is present in the persistence storage passing `save=True` to
the loading method.
Considering the files we created in the earlier examples we can easily load our
whole configuration with:
.. code-block:: python
# We must have initialized the Backends before reaching this point
with open('cinderlib.txt', 'r') as f:
data = f.read()
backends = cinderlib.load(data, save=True)
And for a specific backend or an individual volume:
.. code-block:: python
# We must have initialized the Backends before reaching this point
with open('lvm.txt', 'r') as f:
data = f.read()
lvm = cinderlib.load(data, save=True)
with open('vol.txt', 'r') as f:
data = f.read()
vol = cinderlib.load(data)
This is the preferred way to deserialize objects, but we could also use the
specific object's `load` method.
.. code-block:: python
# We must have initialized the Backends before reaching this point
with open('lvm.txt', 'r') as f:
data = f.read()
lvm = cinderlib.Backend.load(data)
with open('vol.txt', 'r') as f:
data = f.read()
vol = cinderlib.Volume.load(data)
To dict
-------
The serialization properties and methods presented earlier are meant to store
all the data and allow reuse of that data when using drivers of different
releases, so they include all the information required to be backward
compatible when moving from release N *Cinder* drivers to release N+1 drivers.
There will be times when we'll just want to have a nice dictionary
representation of a resource, be it to log it, to display it while debugging,
or to send it from our controller application to the node where we are going to
be doing the attachment. For these specific cases all resources, except the
*Backend*, have a `to_dict` method (a method this time, not a property) that
will only return the relevant data from the resource.
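A small usage sketch, assuming `vol` is an existing *Volume* instance:

.. code-block:: python

   from pprint import pprint

   # Compact, human friendly view of the resource, handy for logs and debugging
   pprint(vol.to_dict())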
Backend configuration
---------------------
When *cinderlib* serializes any object it also stores the *Backend* this object
belongs to. For security reasons it only stores the identifier of the backend
by default, which is the `volume_backend_name`. Since we are only storing a
reference to the *Backend*, this means that when we are going through the
deserialization process the *Backend* the object belonged to must already be
present in *cinderlib*.
This should be OK for most *cinderlib* usages, since it's common practice to
store the storage backend connection information (credentials, addresses, etc.)
in a different location than the data; but there may be situations (for example
while testing) where we'll want to store everything in the same file, not only
the *cinderlib* representation of all the storage resources but also the
*Backend* configuration required to access the storage array.
To enable the serialization of the whole driver configuration we have to
specify `output_all_backend_info=True` on the *cinderlib* initialization,
resulting in a self-contained file with all the information required to manage
the resources.
This means that with this configuration option we won't need to configure the
*Backends* prior to loading the serialized JSON data, we can just load the data
and *cinderlib* will automatically setup the *Backends*.
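A sketch of that workflow, reusing the LVM backend configuration from the
other examples:

.. code-block:: python

   import cinderlib

   # Include the full backend configuration in the serialized output
   cinderlib.setup(output_all_backend_info=True)

   lvm = cinderlib.Backend(
       volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
       volume_group='cinder-volumes',
       target_protocol='iscsi',
       target_helper='lioadm',
       volume_backend_name='lvm_iscsi')

   with open('cinderlib.txt', 'w') as f:
       f.write(cinderlib.dumps())

   # Loading this file later with cinderlib.load() will set up the Backend
   # automatically, since its configuration is part of the serialized data.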

View File

@@ -0,0 +1,69 @@
=========
Snapshots
=========
The *Snapshot* class provides the abstraction layer required to perform all
operations on an existing snapshot, which means that the snapshot creation
operation must be invoked from another class instance, since the new snapshot we
want to create doesn't exist yet and we cannot use the *Snapshot* class to
manage it.
Create
------
Once we have a *Volume* instance we are ready to create snapshots from it, and
we can do it for attached as well as detached volumes.
.. note::
Some drivers, like the NFS one, require assistance from the Compute service
for attached volumes, so there is currently no way of doing this with
*cinderlib*.
Creating a snapshot can only be performed by the `create_snapshot` method from
our *Volume* instance, and once we have created a snapshot it will be tracked
in the *Volume* instance's `snapshots` set.
Here is a simple example that creates a snapshot and uses the `snapshots` set
to verify that the value returned by the call and the entry added to the
`snapshots` attribute reference the same object, and that the `volume`
attribute in the *Snapshot* references the source volume.
.. code-block:: python
vol = lvm.create_volume(size=1)
snap = vol.create_snapshot()
assert snap is list(vol.snapshots)[0]
assert vol is snap.volume
Delete
------
Once we have created a *Snapshot* we can use its `delete` method to permanently
remove it from the storage backend.
Deleting a snapshot will remove its reference from the source *Volume*'s
`snapshots` set.
.. code-block:: python
vol = lvm.create_volume(size=1)
snap = vol.create_snapshot()
assert 1 == len(vol.snapshots)
snap.delete()
assert 0 == len(vol.snapshots)
Other methods
-------------
All other methods available in the *Snapshot* class will be explained in their
relevant sections:
- `load` will be explained together with `json`, `jsons`, `dump`, and `dumps`
properties, and the `to_dict` method in the :doc:`serialization` section.
- `refresh` will reload the snapshot from the metadata storage and reload any
lazy loadable property that has already been loaded. Covered in the
:doc:`serialization` and :doc:`tracking` sections.
- `create_volume` method has been covered in the :doc:`volumes` section.

View File

@@ -0,0 +1,66 @@
Resource tracking
-----------------
*Cinderlib* users will surely have their own variables to keep track of the
*Backends*, *Volumes*, *Snapshots*, and *Connections*, but there may be cases
where this is not enough, be it because we are in a place in our code where we
don't have access to the original variables, because we want to iterate all
instances, or maybe we are running some manual tests and we have lost the
reference to a resource.
For these cases we can use *cinderlib's* various tracking systems to access the
resources. These tracking systems are also used by *cinderlib* in the
serialization process. They all used to be in memory, but some now reside
in the metadata persistence storage.
*Cinderlib* keeps track of all:
- Initialized *Backends*.
- Existing volumes in a *Backend*.
- Connections to a volume.
- Local attachment to a volume.
- Snapshots for a given volume.
Initialized *Backends* are stored in a dictionary in `Backend.backends` using
the `volume_backend_name` as key.
Existing volumes in a *Backend* are stored in the persistence storage, and can
be lazy loaded using the *Backend* instance's `volumes` property.
Existing *Snapshots* for a *Volume* are stored in the persistence storage, and
can be lazy loaded using the *Volume* instance's `snapshots` property.
Connections to a *Volume* are stored in the persistence storage, and can be
lazy loaded using the *Volume* instance's `connections` property.
.. note:: Lazy loadable properties will only load the value the first time we
access them. Successive accesses will just return the cached value. To
retrieve latest values for them as well as for the instance we can use the
`refresh` method.
The local attachment *Connection* of a volume is kept in the *Volume*
instance's `local_attach` attribute and only in memory, so unloading the
library will lose this information.
We can easily use all these properties to display the status of all the
resources we've created:
.. code-block:: python
# If volumes lazy loadable property was already loaded, refresh it
lvm_backend.refresh()
for vol in lvm_backend.volumes:
print('Volume %s is currently %s' % (vol.id, vol.status))
# Refresh volume's snapshots and connections if previously lazy loaded
vol.refresh()
for snap in vol.snapshots:
print('Snapshot %s for volume %s is currently %s' %
(snap.id, snap.volume.id, snap.status))
for conn in vol.connections:
print('Connection from %s with ip %s to volume %s is %s' %
(conn.connector_info['host'], conn.connector_info['ip'],
conn.volume.id, conn.status))

View File

@@ -0,0 +1,254 @@
=======
Volumes
=======
"The *Volume* class provides the abstraction layer required to perform all
operations on an existing volume. Volume creation operations are carried out
at the *Backend* level.
Create
------
The base resource in storage is the volume, and to create one *cinderlib*
provides three different mechanisms, each one with a different method that will
be called on the source of the new volume.
So we have:
- Empty volumes that have no resource source and will have to be created
directly on the *Backend* via the `create_volume` method.
- Cloned volumes that will be created from a source *Volume* using its `clone`
method.
- Volumes from a snapshot, where the creation is initiated by the
`create_volume` method from the *Snapshot* instance.
.. note::
*Cinder* NFS backends will create an image and not a directory to store
files, which falls in line with *Cinder* being a Block Storage provider and
not a filesystem provider like *Manila* is.
So assuming that we have an `lvm` variable holding an initialized *Backend*
instance we could create a new 1GB volume quite easily:
.. code-block:: python
from pprint import pprint
print('Stats before creating the volume are:')
pprint(lvm.stats())
vol = lvm.create_volume(1)
print('Stats after creating the volume are:')
pprint(lvm.stats())
Now, if we have a volume that already contains data and we want to create a new
volume that starts with the same contents we can use the source volume as the
cloning source:
.. code-block:: python
cloned_vol = vol.clone()
Some drivers support cloning to a bigger volume, so we could define the new
size in the call and the driver would take care of extending the volume after
cloning it. This is usually tightly linked to the `extend` operation supported
by the driver.
Cloning to a greater size would look like this:
.. code-block:: python
new_size = vol.size + 1
cloned_bigger_volume = vol.clone(size=new_size)
.. note::
Cloning efficiency is directly linked to the storage backend in use, so it
will not have the same performance in all backends. While some backends
like the Ceph/RBD will be extremely efficient others may range from slow to
being actually implemented as a `dd` operation performed by the driver
attaching source and destination volumes.
To create a volume from a snapshot we call the `create_volume` method on the
*Snapshot* instance:
.. code-block:: python
vol = snap.create_volume()
.. note::
Just like with the cloning functionality, not all storage backends can
efficiently handle creating a volume from a snapshot.
On volume creation we can pass additional parameters like a `name` or a
`description`, but these will be irrelevant for the actual volume creation and
will only be useful to us to easily identify our volumes or to store additional
information.
Available fields with their types can be found in `Cinder's Volume OVO
definition
<https://github.com/openstack/cinder/blob/stable/queens/cinder/objects/volume.py#L71-L131>`_,
but most of them are only relevant within the full *Cinder* service.
We can access these fields as if they were part of the *cinderlib* *Volume*
instance, since the class will try to retrieve any attribute that is not part
of the *cinderlib* *Volume* from *Cinder*'s internal OVO representation.
Some of the fields we could be interested in are:
- `id`: UUID-4 unique identifier for the volume.
- `user_id`: String identifier; in *Cinder* it's a UUID, but here we can use any string.
- `project_id`: String identifier; in *Cinder* it's a UUID, but here we can use any string.
- `snapshot_id`: ID of the source snapshot used to create the volume. This
will be filled by *cinderlib*.
- `host`: Used to store the backend name information together with the host
name where cinderlib is running. This information is stored as a string in the
form of *host@backend#pool*. This is an optional parameter, and passing it to
`create_volume` will override the default value, allowing the caller to request
a specific pool for multi-pool backends, though we recommend using the
`pool_name` parameter instead. Issues will arise if the parameter doesn't
contain the correct information.
- `pool_name`: Pool name to use when creating the volume. Default is to use
the first or only pool. To know possible values for a backend use the
`pool_names` property on the *Backend* instance.
- `size`: Volume size in GiB.
- `availability_zone`: In case we want to define AZs.
- `status`: This represents the status of the volume, and the most important
statuses are `available`, `error`, `deleted`, `in-use`, `creating`.
- `attach_status`: This can be `attached` or `detached`.
- `scheduled_at`: Date-time when the volume was scheduled to be created.
Currently not being used by *cinderlib*.
- `launched_at`: Date-time when the volume creation was completed. Currently
not being used by *cinderlib*.
- `deleted`: Boolean value indicating whether the volume has already been
deleted. It will be filled by *cinderlib*.
- `terminated_at`: When the volume delete was sent to the backend.
- `deleted_at`: When the volume delete was completed.
- `display_name`: Name identifier, this is passed as `name` to all *cinderlib*
volume creation methods.
- `display_description`: Long description of the volume, this is passed as
`description` to all *cinderlib* volume creation methods.
- `source_volid`: ID of the source volume used to create this volume. This
will be filled by *cinderlib*.
- `bootable`: Not relevant for *cinderlib*, but maybe useful for the
*cinderlib* user.
- `extra_specs`: Extra volume configuration used by some drivers to specify
additional information, such as compression, deduplication, etc. Key-Value
pairs are driver specific.
- `qos_specs`: Backend QoS configuration. Dictionary with driver specific
key-value pairs that are enforced by the backend.
.. note::
*Cinderlib* automatically generates a UUID for the `id` if one is not
provided at volume creation time, but the caller can actually provide a
specific `id`.
By default the `id` is limited to a valid UUID, and this is the only kind of
ID that is guaranteed to work on all drivers. For drivers that support
non-UUID IDs we can instruct *cinderlib* to modify *Cinder*'s behavior and
allow them. This is done at *cinderlib* initialization time by passing
`non_uuid_ids=True`.
.. note::
*Cinderlib* does not do scheduling on driver pools, so setting the
`extra_specs` for a volume on drivers that expect the scheduler to select a
specific pool using them will not behave the same way as in Cinder.
In that case the caller of *cinderlib* is expected to go through the stats,
find the pool that matches the criteria, and pass it to the Backend's
`create_volume` method in the `pool_name` parameter.
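A small sketch of that manual pool selection, assuming the `lvm` *Backend*
from the earlier examples:

.. code-block:: python

   from pprint import pprint

   # Inspect the reported capabilities to decide which pool fits our needs
   pprint(lvm.stats())

   # Available pool names for this backend
   print(lvm.pool_names)

   # Create the volume on a specific pool instead of the default (first) one
   vol = lvm.create_volume(1, pool_name=lvm.pool_names[0])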
Delete
------
Once we have created a *Volume* we can use its `delete` method to permanently
remove it from the storage backend.
In *Cinder* there are safeguards to prevent a delete operation from completing
if the volume has snapshots (unless the delete request comes with the `cascade`
option set to true), but *cinderlib* doesn't have them, so it's the caller's
responsibility to delete the snapshots first.
Deleting a volume with snapshots doesn't have a defined behavior for *Cinder*
drivers, since it's never meant to happen, so some storage backends delete the
snapshots, others leave them as they were, and others will fail the request.
Example of creating and deleting a volume:
.. code-block:: python
vol = lvm.create_volume(size=1)
vol.delete()
.. attention::
When deleting a volume that was the source of a cloning operation, some
backends cannot delete it (since it has copy-on-write clones) and will just
keep it as a silent volume that will be deleted when its snapshots and clones
are deleted.
Extend
------
Many storage backends and *Cinder* drivers support extending a volume to have
more space, and you can do this via the `extend` method present in your
*Volume* instance.
If the *Cinder* driver doesn't implement the extend operation it will raise a
`NotImplementedError`.
The only parameter received by the `extend` method is the new size, which must
always be greater than the current value; *cinderlib* is not validating this at
the moment, so it is the caller's responsibility.
Example of creating, extending, and deleting a volume:
.. code-block:: python
vol = lvm.create_volume(size=1)
print('Vol %s has %s GiB' % (vol.id, vol.size))
vol.extend(2)
print('Extended vol %s has %s GiB' % (vol.id, vol.size))
vol.delete()
Other methods
-------------
All other methods available in the *Volume* class will be explained in their
relevant sections:
- `load` will be explained together with `json`, `jsons`, `dump`, and `dumps`
properties, and the `to_dict` method in the :doc:`serialization` section.
- `refresh` will reload the volume from the metadata storage and reload any
lazy loadable property that has already been loaded. Covered in the
:doc:`serialization` and :doc:`tracking` sections.
- `create_snapshot` method will be covered in the :doc:`snapshots` section
together with the `snapshots` attribute.
- `attach`, `detach`, `connect`, and `disconnect` methods will be explained in
the :doc:`connections` section.

67
doc/source/usage.rst Normal file
View File

@@ -0,0 +1,67 @@
=====
Usage
=====
Thanks to the fully object oriented abstraction, where operations are methods
on the resources themselves instead of classic invocations receiving the
resources to work on as arguments, *cinderlib* makes it easy to hit the ground
running when managing storage resources.
Once the *Cinder* and *cinderlib* packages are installed we just have to import
the library to start using it:
.. code-block:: python
import cinderlib
.. note::
Installing the *Cinder* package does not require starting any of its
services (volume, scheduler, api) or auxiliary services (Keystone, MySQL,
RabbitMQ, etc.).
The usage documentation is not too long, and reading it all before using the
library is recommended, to be sure we have at least a high level view of the
different aspects of managing our storage with *cinderlib*.
Before going into too much detail there are some aspects we need to clarify to
make sure our terminology is in sync and we understand where each piece fits.
In *cinderlib* we have *Backends*, which refer to a storage array's specific
connection configuration, so a *Backend* usually doesn't refer to the whole
storage array. With a backend we'll usually have access to the configured pool.
Resources managed by *cinderlib* are *Volumes* and *Snapshots*, and a *Volume*
can be created from a *Backend*, another *Volume*, or from a *Snapshot*, and a
*Snapshot* can only be created from a *Volume*.
Once we have a volume we can create *Connections* so it can be accessed from
other hosts, or we can do a local *Attachment* of the volume, which will
retrieve the required local connection information of this host, create a
*Connection* on the storage to this host, and then do the local *Attachment*.
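As a minimal end-to-end sketch of these concepts, reusing the LVM backend
configuration and the default in-memory metadata persistence from the other
sections:

.. code-block:: python

   import cinderlib

   cinderlib.setup(persistence_config={'storage': 'memory'})

   lvm = cinderlib.Backend(
       volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
       volume_group='cinder-volumes',
       target_protocol='iscsi',
       target_helper='lioadm',
       volume_backend_name='lvm_iscsi')

   # Create a 1GB volume, snapshot it, and then clean everything up
   vol = lvm.create_volume(1)
   snap = vol.create_snapshot()
   snap.delete()
   vol.delete()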
Given that *Cinder* drivers are not stateless, *cinderlib* cannot be either.
That's why there is a metadata persistence plugin mechanism to provide
different ways to store resource states. Currently we have memory and database
plugins. Users can store the data wherever they want using the JSON
serialization mechanism or with a custom metadata plugin.
Each of the different topics are treated in detail on their specific sections:
.. toctree::
:maxdepth: 1
topics/initialization
topics/backends
topics/volumes
topics/snapshots
topics/connections
topics/serialization
topics/tracking
topics/metadata
Auto-generated documentation is also available:
.. toctree::
:maxdepth: 2
api/cinderlib