Format as a Cinder-related OpenStack project

Since we are going to be importing the project into OpenStack we need it
to follow the same structure as the other projects under the Cinder
umbrella.
Gorka Eguileor 2019-02-18 12:53:57 +01:00
parent 49554c7386
commit 77f399fd96
79 changed files with 1329 additions and 2476 deletions


@ -1,13 +0,0 @@
build/
dist/
docs/
.venv/
.tox/
tests/
tmp/
.git/
.github/
*.py[cod]
.*.sw?
Dockerfile
Dockerfile-master

8
.gitignore vendored

@ -47,6 +47,7 @@ htmlcov/
nosetests.xml
coverage.xml
*,cover
cover/
.hypothesis/
# Translations
@ -56,11 +57,6 @@ coverage.xml
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
docs/cinderlib.rst
docs/modules.rst
# PyBuilder
target/
@ -69,3 +65,5 @@ target/
# Temp directory, for example for the LVM file, our custom config, etc.
temp/
cinder-lioadm

4
.gitreview Normal file

@ -0,0 +1,4 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/cinderlib.git


@ -1,66 +0,0 @@
language: python
python: 2.7
git:
quiet: true
jobs:
include:
- name: "PEP8"
stage: phase1-tests
sudo: false
script: make lint
- name: "Unit tests"
stage: phase1-tests
sudo: false
install: sudo pip install tox
script: tox -epy27
- name: "LVM baremetal master"
stage: phase2-tests
sudo: required
install:
- sudo travis-scripts/setup-lvm.sh
- sudo apt-get install -y open-iscsi targetcli multipath-tools
- sudo pip install git+https://github.com/openstack/cinder.git
- sudo pip install -e ./
script:
- sudo make functional-tests
- name: "LVM baremetal latest"
stage: phase2-tests
sudo: required
install:
- sudo travis-scripts/setup-lvm.sh
- sudo apt-get install -y open-iscsi targetcli multipath-tools
- sudo pip install git+https://github.com/openstack/cinder.git@stable/rocky
- sudo pip install -e ./
script:
- sudo make functional-tests
- name: "Image build"
stage: build
sudo: required
script:
- echo "$DOCKER_PASSWORD" | docker login --password-stdin --username "$DOCKER_USERNAME"
- travis-scripts/build
# Travis-CI only supports Ubuntu, which is incompatible with our images
# - name: "LVM"
# stage: sanity-checks
# sudo: required
# script:
# - sudo travis-scripts/setup-lvm.sh
# - sudo make ubuntu-lvm
- name: "Tag and push images"
stage: push
sudo: required
script:
- echo "$DOCKER_PASSWORD" | docker login --password-stdin --username "$DOCKER_USERNAME"
- travis-scripts/push
# Noop, each job has its own requirements and we don't want to install
# requirements.txt
install: true


@ -1,13 +0,0 @@
=======
Credits
=======
Development Lead
----------------
* Gorka Eguileor <geguileo@redhat.com>
Contributors
------------
None yet. Why not be the first?


@ -1,233 +1,16 @@
============
Contributing
============
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:
Contributions are welcome, and they are greatly appreciated! Every
little bit helps, and credit will always be given.
https://docs.openstack.org/infra/manual/developers.html
You can contribute in many ways:
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
Types of Contributions
----------------------
https://docs.openstack.org/infra/manual/developers.html#development-workflow
Report Bugs
~~~~~~~~~~~
Pull requests submitted through GitHub will be ignored.
Report bugs at https://github.com/akrog/cinderlib/issues.
Bugs should be filed as stories on StoryBoard, not in GitHub's issue tracker:
If you are reporting a bug, please include:
* Your operating system name and version.
* Storage backend and configuration used (replacing sensitive information with
asterisks).
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.
Fix Bugs
~~~~~~~~
Look through the GitHub issues for bugs. Anything tagged with "bug"
and "help wanted" is open to whoever wants to implement it.
Implement Features
~~~~~~~~~~~~~~~~~~
Look through the GitHub issues and the :doc:`todo` file for features. Anything
tagged with "enhancement" and "help wanted" is open to whoever wants to
implement it.
Write tests
~~~~~~~~~~~
We currently lack decent test coverage, so feel free to look into our existing
tests and add any that are missing; any test that increases our coverage is
more than welcome.
Write Documentation
~~~~~~~~~~~~~~~~~~~
Cinder Library could always use more documentation, whether as part of the
official Cinder Library docs, in docstrings, or even on the web in blog posts,
articles, and such.
Submit Feedback
~~~~~~~~~~~~~~~
The best way to send feedback is to file an issue at https://github.com/akrog/cinderlib/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions
are welcome :)
Get Started!
------------
Ready to contribute? Here's how to set up `cinderlib` for local development.
1. Fork the `cinderlib` repo on GitHub.
2. Clone your fork locally:
.. code-block:: shell
$ git clone git@github.com:YOUR_NAME_HERE/cinderlib.git
3. Install tox:
.. code-block:: shell
$ sudo dnf install python2-tox
4. Generate a virtual environment, for example for Python 2.7:
.. code-block:: shell
$ tox --notest -epy27
5. Create a branch for local development:
.. code-block:: shell
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
6. When you're done making changes, you can check that your changes pass flake8
and unit tests with:
.. code-block:: shell
$ tox -eflake8
$ tox -epy27
Or if you don't want to create a specific environment for flake8 you can run
things directly without tox:
.. code-block:: shell
$ source .tox/py27/bin/activate
$ flake8 cinderlib tests
$ python setup.py test
7. Run functional tests at least with the default LVM configuration:
.. code-block:: shell
$ tox -efunctional
To run the LVM functional tests you'll need to have the expected LVM VG
ready. This can be done using the script we have for this purpose (assuming
we are in the *cinderlib* base directory):
.. code-block:: shell
$ mkdir temp
$ cd temp
$ sudo ../tools/lvm-prepare.sh
The default configuration for the functional tests can be found in the
`tests/functional/lvm.yaml` file. For additional information on this file
format and running functional tests please refer to the
:doc:`validating_backends` section.
And preferably with all the backends you have at your disposal:
.. code-block:: shell
$ CL_FTESTS_CFG=temp/my-test-config.yaml tox -efunctional
8. Commit your changes making sure the commit message is descriptive enough,
covering the patch changes as well as why the patch might be necessary. The
commit message should also conform to the `50/72 rule
<https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html>`_.
.. code-block:: shell
$ git add .
$ git commit
9. Push your branch to GitHub:
.. code-block:: shell
$ git push origin name-of-your-bugfix-or-feature
10. Submit a pull request through the GitHub website.
LVM Backend
-----------
You may not have a fancy storage array, but that doesn't mean that you cannot
use *cinderlib*, because you can always use the LVM driver. Here we are going
to see how to set up an LVM backend that we can use with *cinderlib*.
Before doing anything you need to make sure you have the required package: for
Fedora, CentOS, and RHEL this is the `targetcli` package, and for Ubuntu the
`lio-utils` package.
.. code-block:: shell
$ sudo yum install targetcli
Then we'll need to create our "storage backend", which is actually just a file
on the normal filesystem. We'll create a 22GB file with only 1MB currently
allocated (this is worse for performance, but better for space), and then we'll
set it up as a loopback device and create a PV and VG on it.
.. code-block:: shell
$ dd if=/dev/zero of=temp/cinder-volumes bs=1048576 seek=22527 count=1
$ lodevice=$(sudo losetup --show -f ./cinder-volumes)
$ sudo pvcreate $lodevice
$ sudo vgcreate cinder-volumes $lodevice
$ sudo vgscan --cache
There is a script included in the repository that will do all this for us, so
we can just call it from the location where we want to create the file:
.. code-block:: shell
$ sudo tools/lvm-prepare.sh
Now we can use this LVM backend in *cinderlib*:
.. code-block:: python
import cinderlib as cl
from pprint import pprint as pp
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
vol = lvm.create_volume(size=1)
attach = vol.attach()
pp('Volume %s attached to %s' % (vol.id, attach.path))
vol.detach()
vol.delete()
Pull Request Guidelines
-----------------------
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 2.7, 3.3, 3.4 and 3.5, and for PyPy.
Check https://travis-ci.org/akrog/cinderlib/pull_requests and make sure that
the tests pass for all supported Python versions.
Tips
----
To run a subset of tests:
.. code-block:: shell
$ source .tox/py27/bin/activate
$ python -m unittest tests.test_cinderlib.TestCinderlib.test_lib_setup
https://storyboard.openstack.org/#!/project/openstack/cinderlib


@ -1,29 +0,0 @@
# Based on centos this image builds cinderlib master with Cinder master branch
FROM centos:7
ARG VERSION
ARG RELEASE
LABEL maintainers="Gorka Eguileor <geguileo@redhat.com>" \
description="Cinderlib" \
version=${VERSION:-master}
RUN yum -y install targetcli iscsi-initiator-utils device-mapper-multipath epel-release lvm2 which && \
yum -y install python2-pip python-devel gcc openssl-devel pywbem && \
yum -y install python-rbd ceph-common git && \
# Need new setuptools version or we'll get "SyntaxError: '<' operator not allowed in environment markers" when installing Cinder
pip install 'setuptools>=38.6.0' && \
git clone 'https://github.com/openstack/cinder.git' && \
pip install --no-cache-dir cinder/ && \
pip install --no-cache-dir 'krest>=1.3.0' 'purestorage>=1.6.0' 'pyxcli>=1.1.5' 'pyOpenSSL>=1.0.0' && \
rm -rf cinder && \
yum -y remove git python-devel gcc openssl-devel && \
yum clean all && \
rm -rf /var/cache/yum
# Copy cinderlib
COPY . /cinderlib
RUN pip install --no-cache-dir /cinderlib/ && \
rm -rf /cinderlib
# Define default command
CMD ["bash"]


@ -1,23 +0,0 @@
# Based on centos this image builds cinderlib master with Cinder master branch
FROM centos:7
ARG VERSION
ARG RELEASE
LABEL maintainers="Gorka Eguileor <geguileo@redhat.com>" \
description="Cinderlib" \
version=${VERSION:-latest}
RUN yum -y install targetcli iscsi-initiator-utils device-mapper-multipath epel-release lvm2 which && \
yum -y install python2-pip centos-release-openstack-${RELEASE} pywbem && \
yum -y install openstack-cinder python-rbd ceph-common python2-pyOpenSSL && \
pip install --no-cache-dir 'krest>=1.3.0' 'purestorage>=1.6.0' 'pyxcli>=1.1.5' && \
yum clean all && \
rm -rf /var/cache/yum
# Copy cinderlib
COPY . /cinderlib
RUN pip install --no-cache-dir /cinderlib/ && \
rm -rf /cinderlib
# Define default command
CMD ["bash"]


@ -1,17 +0,0 @@
# Based on centos
FROM centos:7
ARG VERSION
ARG RELEASE
LABEL maintainers="Gorka Eguileor <geguileo@redhat.com>" \
description="Cinderlib" \
version=${VERSION:-latest}
RUN yum -y install targetcli iscsi-initiator-utils device-mapper-multipath epel-release lvm2 which && \
yum -y install python2-pip centos-release-openstack-$RELEASE pywbem && \
yum -y install openstack-cinder python-rbd ceph-common python2-pyOpenSSL && \
yum clean all && \
rm -rf /var/cache/yum && \
pip install --no-cache-dir --process-dependency-links cinderlib 'krest>=1.3.0' 'purestorage>=1.6.0' 'pyxcli>=1.1.5'
# Define default command
CMD ["bash"]

53
HACKING.rst Normal file

@ -0,0 +1,53 @@
Cinderlib Style Commandments
============================
- Step 1: Read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
- Step 2: Read on
Cinder Specific Commandments
----------------------------
- [N314] Check for vi editor configuration in source files.
- [N322] Ensure default arguments are not mutable.
- [N323] Add check for explicit import of _() to ensure proper translation.
- [N325] str() and unicode() cannot be used on an exception. Remove or use six.text_type().
- [N336] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs.
- [C301] timeutils.utcnow() from oslo_utils should be used instead of datetime.now().
- [C302] six.text_type should be used instead of unicode.
- [C303] Ensure that there are no 'print()' statements in code that is being committed.
- [C304] Enforce no use of LOG.audit messages. LOG.info should be used instead.
- [C305] Prevent use of deprecated contextlib.nested.
- [C306] timeutils.strtime() must not be used (deprecated).
- [C307] LOG.warn is deprecated. Enforce use of LOG.warning.
- [C308] timeutils.isotime() must not be used (deprecated).
- [C309] Unit tests should not perform logging.
- [C310] Check for improper use of logging format arguments.
- [C311] Check for proper naming and usage in option registration.
- [C312] Validate that logs are not translated.
- [C313] Check that assertTrue(value) is used and not assertEqual(True, value).
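A quick, illustrative sketch of how a couple of these checks read in practice
(the function and variable names below are made up, not taken from Cinder's
code)::

    import logging

    LOG = logging.getLogger(__name__)


    def update_volume(volume_id, metadata={}):        # N322: mutable default argument
        LOG.warn('Updating volume %s' % volume_id)     # C307 (LOG.warn) and C310 (eager %)


    def update_volume_ok(volume_id, metadata=None):
        metadata = metadata or {}                      # N322: default to None, build inside
        LOG.warning('Updating volume %s', volume_id)   # C307/C310: warning + lazy arguments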
General
-------
- Use 'raise' instead of 'raise e' to preserve the original traceback of the exception being reraised::
except Exception as e:
...
raise e # BAD
except Exception:
...
raise # OKAY
Creating Unit Tests
-------------------
For every new feature, unit tests should be created that both test and
(implicitly) document the usage of said feature. If submitting a patch for a
bug that had no unit test, a new passing unit test should be added. If a
submitted bug fix does have a unit test, be sure to add a new one that fails
without the patch and passes with the patch.
For more information on creating unit tests and utilizing the testing
infrastructure in OpenStack Cinder, please see
https://docs.openstack.org/cinder/latest/contributor/testing.html
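As a rough sketch of the kind of regression test described above (the module,
class, and attribute names are illustrative, not actual cinderlib test code)::

    import unittest
    from unittest import mock


    class TestVolumeDetach(unittest.TestCase):
        def test_detach_clears_connections(self):
            # Shape of a regression test: it should fail without the
            # submitted fix and pass with it.
            vol = mock.Mock()
            vol.connections = [mock.Mock()]
            del vol.connections[:]   # stand-in for the code path under test
            self.assertEqual([], vol.connections)


    if __name__ == '__main__':
        unittest.main()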


@ -1,147 +0,0 @@
=======
History
=======
0.3.9 (2019-02-18)
------------------
- Bug fixes:
- Incorrect raise on Snapshot initialization backend errors.
0.3.8 (2019-02-15)
------------------
- Bug fixes:
- Fix exception raising on failed attach
0.3.7 (2019-02-05)
------------------
- Bug fixes:
- Fix start with drivers returning legacy stats
0.3.6 (2019-02-04)
------------------
- Bug fixes:
- Replace stats workaround with our stats caching mechanism
0.3.5 (2019-02-03)
------------------
- Bug fixes:
- Support MultiOpt configuration parameters
- Workaround for Cinder cached stats bug
0.3.4 (2019-01-26)
------------------
- Features:
- RBD volumes in container without RBD installed on host
- Removals:
- RBD-NBD support was prematurely added, removed in this release
0.3.3 (2019-01-24)
------------------
- Features:
- List drivers available in current Cinder installation.
- Support RBD-NBD as well as RBD-KO
0.3.2 (2019-01-22)
------------------
- Bug fixes:
- Failure when the caller has arguments
0.3.1 (2019-01-16)
------------------
- Bug fixes:
- Translation of execute's OSError exceptions
0.3.0 (2019-01-14)
------------------
- Bug fixes:
- Detach a volume when it's unavailable.
- Features:
- Provide better message when device is not available.
- Backend name stored in host instead of in the AZ (backward incompatible).
- Support multi-pool drivers.
- Support QoS
- Support extra specs
0.2.2 (2018-07-24)
------------------
- Features:
- Use NOS-Brick to setup OS-Brick for non OpenStack usage.
- Can setup persistence directly to use key-value storage.
- Support loading objects without configured backend.
- Support for Cinder Queens, Rocky, and Master
- Serialization returns a compact string
- Bug fixes:
- Workaround for Python 2 getaddrinfo bug
- Compatibility with requests and requests-kerberos
- Fix key-value support set_key_value.
- Fix get_key_value to return KeyValue.
- Fix loading object without configured backend.
0.2.1 (2018-06-14)
------------------
- Features:
- Modify fields on connect method.
- Support setting custom root_helper.
- Setting default project_id and user_id.
- Metadata persistence plugin mechanism
- DB persistence plugin
- No longer dependent on Cinder's attach/detach code
- Add device_attached method to update volume on attaching node
- Support attaching/detaching RBD volumes
- Support changing persistence plugin after initialization
- Add saving and refreshing object's metadata
- Add dump, dumps methods
- Bug fixes:
- Serialization of non locally attached connections.
- Accept id field set to None on resource creation.
- Disabling of sudo command wasn't working.
- Fix volume cloning on XtremIO
- Fix iSCSI detach issue related to privsep
- Fix wrong size in volume from snapshot
- Fix name & description inconsistency
- Set created_at field on creation
- Connection fields not being set
- DeviceUnavailable exception
- Multipath settings after persistence retrieval
- Fix PyPi package created tests module
- Fix connector without multipath info
- Always call create_export and remove_export
- iSCSI unlinking on disconnect
0.1.0 (2017-11-03)
------------------
* First release on PyPI.

181
LICENSE

@ -1,17 +1,176 @@
Apache Software License 2.0
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Copyright (c) 2017, Red Hat, Inc.
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
1. Definitions.
http://www.apache.org/licenses/LICENSE-2.0
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@ -1,13 +0,0 @@
include AUTHORS.rst
include CONTRIBUTING.rst
include HISTORY.rst
include LICENSE
include README.rst
recursive-include tests *
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
recursive-include docs *.rst conf.py Makefile make.bat *.jpg *.png *.gif

105
Makefile

@ -1,105 +0,0 @@
.PHONY: clean clean-test clean-pyc clean-build docs help
.DEFAULT_GOAL := help
define BROWSER_PYSCRIPT
import os, webbrowser, sys
try:
from urllib import pathname2url
except:
from urllib.request import pathname2url
webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1])))
endef
export BROWSER_PYSCRIPT
define PRINT_HELP_PYSCRIPT
import re, sys
for line in sys.stdin:
match = re.match(r'^([a-zA-Z_-]+):.*?## (.*)$$', line)
if match:
target, help = match.groups()
print("%-20s %s" % (target, help))
endef
export PRINT_HELP_PYSCRIPT
BROWSER := python -c "$$BROWSER_PYSCRIPT"
help:
@python -c "$$PRINT_HELP_PYSCRIPT" < $(MAKEFILE_LIST)
clean: clean-build clean-pyc ## remove all build, coverage and Python artifacts
clean-build: ## remove build artifacts
rm -fr build/
rm -fr dist/
rm -fr .eggs/
find . -name '*.egg-info' -exec rm -fr {} +
find . -name '*.egg' -exec rm -f {} +
clean-pyc: ## remove Python file artifacts
find . -name '*.pyc' -exec rm -f {} +
find . -name '*.pyo' -exec rm -f {} +
find . -name '*~' -exec rm -f {} +
find . -name '__pycache__' -exec rm -fr {} +
clean-test: ## remove test and coverage artifacts
rm -fr .tox/
rm -f .coverage
rm -fr htmlcov/
python-requirements:
pip install -r requirements_dev.txt
pip install -e .
lint: python-requirements ## check style with flake8
flake8 cinderlib
unit-tests:
tox -epy27
functional-tests:
CL_FTEST_CFG=`pwd`/tools/lvm.yaml unit2 discover -v -s cinderlib/tests/functional
test-all: ## run tests on every Python version with tox
tox
coverage: ## check code coverage quickly with the default Python
coverage run --source cinderlib setup.py test
coverage report -m
coverage html
$(BROWSER) htmlcov/index.html
docs: ## generate Sphinx HTML documentation, including API docs
rm -f docs/cinderlib.rst
rm -f docs/modules.rst
sphinx-apidoc -o docs/ cinderlib
$(MAKE) -C docs clean
$(MAKE) -C docs html
$(BROWSER) docs/_build/html/index.html
servedocs: docs ## compile the docs watching for changes
watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D .
register: ## register package in pypi
python setup.py register --repository pypi
test-package:
python setup.py check -r -s
test-release: clean
python setup.py sdist upload --repository pypitest
python setup.py bdist_wheel upload --repository pypitest
release: clean ## package and upload a release
python setup.py sdist upload --repository pypi
python setup.py bdist_wheel upload --repository pypi
dist: clean ## builds source and wheel package
python setup.py sdist
python setup.py bdist_wheel
ls -l dist
install: clean ## install the package to the active Python's site-packages
python setup.py install

48
README

@ -1,48 +0,0 @@
# Cinder library
Cinder Library is a Python library that allows using storage drivers outside of
Cinder.
* Free software: Apache Software License 2.0
* Full Documentation: [https://cinderlib.readthedocs.io](https://cinderlib.readthedocs.io).
This library is currently in Alpha stage and is primarily intended as a proof
of concept at this stage. While some drivers have been manually validated most
drivers have not, so there's a good chance that they could experience issues.
When using this library one should be aware that this is in no way close to the
robustness or feature richness that the Cinder project provides, for detailed
information on the current limitations please refer to the documentation.
Due to the limited access to Cinder backends and time constraints the list of
drivers that have been manually tested are (I'll try to test more):
- LVM with LIO
- Dell EMC XtremIO
- Dell EMC VMAX
- Kaminario K2
- Ceph/RBD
- NetApp SolidFire
If you try the library with another storage array I would appreciate a note on
the library version, Cinder release, and results of your testing.
## Features
* Use a Cinder driver without running a DBMS, Message broker, or Cinder
service.
* Using multiple simultaneous drivers on the same program.
* Basic operations support:
- Create volume
- Delete volume
- Extend volume
- Clone volume
- Create snapshot
- Delete snapshot
- Create volume from snapshot
- Connect volume
- Disconnect volume
- Local attach
- Local detach
- Validate connector


@ -4,19 +4,9 @@ Cinder Library
.. image:: https://img.shields.io/pypi/v/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://readthedocs.org/projects/cinderlib/badge/?version=latest
:target: https://cinderlib.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://img.shields.io/pypi/pyversions/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/docker/build/akrog/cinderlib.svg
:target: https://hub.docker.com/r/akrog/cinderlib
.. image:: https://img.shields.io/docker/automated/akrog/cinderlib.svg
:target: https://hub.docker.com/r/akrog/cinderlib/builds
.. image:: https://img.shields.io/:license-apache-blue.svg
:target: http://www.apache.org/licenses/LICENSE-2.0
@ -24,39 +14,31 @@ Cinder Library
Introduction
------------
Cinder Library is a Python library that allows using storage drivers outside of
Cinder.
The Cinder Library, also known as cinderlib, is a Python library that leverages
the Cinder project to provide an object-oriented abstraction around Cinder's
storage drivers to allow their usage directly without running any of the Cinder
services or surrounding services, such as Keystone, MySQL or RabbitMQ.
* Free software: Apache Software License 2.0
* Documentation: https://cinderlib.readthedocs.io.
* Documentation: https://docs.openstack.org/cinderlib/latest/
This library is currently in Alpha stage and is primarily intended as a proof
of concept at this stage. While some drivers have been manually validated most
drivers have not, so there's a good chance that they could experience issues.
The library is intended for developers who only need the basic CRUD
functionality of the drivers and don't care for all the additional features
Cinder provides such as quotas, replication, multi-tenancy, migrations,
retyping, scheduling, backups, authorization, authentication, REST API, etc.
When using this library one should be aware that this is in no way close to the
robustness or feature richness that the Cinder project provides, for detailed
information on the current limitations please refer to the documentation.
Due to the limited access to Cinder backends and time constraints the list of
drivers that have been manually tested are (I'll try to test more):
- LVM with LIO
- Dell EMC XtremIO
- Dell EMC VMAX
- Kaminario K2
- Ceph/RBD
- NetApp SolidFire
If you try the library with another storage array I would appreciate a note on
the library version, Cinder release, and results of your testing.
The library was originally created as an external project, so it didn't have
the broad range of backend testing Cinder does, and only a limited number of
drivers were validated at the time. Drivers should work out of the box, and
we'll keep a list of drivers that have added the cinderlib functional tests to
the driver gates confirming they work and ensuring they will keep working.
Features
--------
* Use a Cinder driver without running a DBMS, Message broker, or Cinder
service.
* Using multiple simultaneous drivers on the same program.
* Using multiple simultaneous drivers on the same application.
* Basic operations support:
- Create volume
@ -71,14 +53,16 @@ Features
- Local attach
- Local detach
- Validate connector
- Extra Specs for specific backend functionality.
- Backend QoS
- Multi-pool support
* Code should support multiple concurrent connections to a volume, though this
has not yet been tested.
* Metadata persistence plugin:
* Metadata persistence plugins:
- Stateless: Caller stores JSON serialization.
- Database: Metadata is stored in a database: MySQL, PostgreSQL, SQLite...
- Custom plugin: Metadata is stored in another metadata storage.
- Custom plugin: Caller provides a module to store metadata and cinderlib
calls it when necessary.
Demo
----
@ -89,163 +73,6 @@ Demo
target="_blank"><img
src="https://asciinema.org/a/TcTR7Lu7jI0pEsd9ThEn01l7n.png"/></a>
Example
-------
The following example uses CentOS 7 and the Cinder LVM driver, which should be
the easiest to setup and test.
First you need to setup your system.
The easiest way to set things up is with Vagrant + libvirt and the provided
docker example, as it will create a small VM (1GB of RAM and 1 CPU) and
provision everything so we can run a Python interpreter in a cinderlib
container:
.. code-block:: shell
$ cd examples/docker
$ vagrant up
$ vagrant ssh -c 'sudo docker exec -it cinderlib python'
If we don't want to use the example we have to setup an LVM VG to use:
.. code-block:: shell
$ sudo dd if=/dev/zero of=cinder-volumes bs=1048576 seek=22527 count=1
$ lodevice=$(sudo losetup --show -f ./cinder-volumes)
$ sudo vgcreate cinder-volumes $lodevice
$ sudo vgscan --cache
Now we can install everything on baremetal:
$ sudo yum install -y centos-release-openstack-queens
$ test -f /etc/yum/vars/contentdir || echo centos >/etc/yum/vars/contentdir
$ sudo yum install -y openstack-cinder targetcli python-pip
$ sudo pip install cinderlib
Or run it in a container. To be able to run it in a container we need to
change our host's LVM configuration and set `udev_rules = 0` and
`udev_sync = 0` before we start the container:
.. code-block:: shell
$ sudo docker run --name=cinderlib --privileged --net=host \
-v /etc/iscsi:/etc/iscsi \
-v /dev:/dev \
-v /etc/lvm:/etc/lvm \
-v /var/lock/lvm:/var/lock/lvm \
-v /lib/modules:/lib/modules:ro \
-v /run:/run \
-v /var/lib/iscsi:/var/lib/iscsi \
-v /etc/localtime:/etc/localtime:ro \
-v /root/cinder:/var/lib/cinder \
-v /sys/kernel/config:/configfs \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-it akrog/cinderlib:latest python
Or install things on baremetal/VM:
.. code-block:: shell
$ sudo yum install -y centos-release-openstack-queens
$ test -f /etc/yum/vars/contentdir || echo centos >/etc/yum/vars/contentdir
$ sudo yum install -y openstack-cinder targetcli python-pip
$ sudo pip install cinderlib
$ sudo dd if=/dev/zero of=cinder-volumes bs=1048576 seek=22527 count=1
$ lodevice=$(sudo losetup --show -f ./cinder-volumes)
$ sudo pvcreate $lodevice
$ sudo vgcreate cinder-volumes $lodevice
$ sudo vgscan --cache
Then you need to run `python` with a passwordless sudo user (required to
control LVM and do the attach) and execute:
.. code-block:: python
import cinderlib as cl
from pprint import pprint as pp
# We setup the library to setup the driver configuration when serializing
cl.setup(output_all_backend_info=True)
# Initialize the LVM driver
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
# Show the LVM backend stats
pp(lvm.stats())
# Create a 1GB volume
vol = lvm.create_volume(1, name='lvm-vol')
# Export, initialize, and do a local attach of the volume
attach = vol.attach()
pp('Volume %s attached to %s' % (vol.id, attach.path))
# Snapshot it
snap = vol.create_snapshot('lvm-snap')
# Show the JSON string
pp(vol.jsons)
# Save the whole environment to a file
with open('cinderlib-test.txt', 'w') as f:
f.write(cl.dumps())
# Exit python
exit()
Now we can check that the logical volume is there, exported, and attached to
our system:
.. code-block:: shell
# lvdisplay
# targetcli ls
# iscsiadm -m session
# lsblk
And now let's run a new `python` interpreter and clean things up:
.. code-block:: python
import cinderlib as cl
# Get the whole environment up
with open('cinderlib-test.txt') as f:
backends = cl.load(f.read(), save=True)
# Get the volume reference we loaded from file and detach
vol = backends[0].volumes[0]
# Volume no longer knows that the attach is local, so we cannot do
# vol.detach(), but we can get the connection and use it.
conn = vol.connections[0]
# Physically detach the volume from the node
conn.detach()
# Unmap the volume and remove the export
conn.disconnect()
# Get the snapshot and delete it
snap = vol.snapshots[0]
snap.delete()
# Finally delete the volume
vol.delete()
We should confirm that the logical volume is no longer there and that there is
nothing exported or attached to our system:
.. code-block:: shell
# lvdisplay
# targetcli ls
# iscsiadm -m session
# lsblk
.. _GIGO: https://en.wikipedia.org/wiki/Garbage_in,_garbage_out
.. _official project documentation: https://readthedocs.org/projects/cinderlib/badge/?version=latest
.. _OpenStack's Cinder volume driver configuration documentation: https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-drivers.html


@ -1,26 +0,0 @@
====
TODO
====
There are many things that need improvement in *cinderlib*; this is a simple
list to keep track of the most relevant topics.
- Connect & attach snapshot for drivers that support it.
- Replication and failover support
- QoS
- Support custom features via extra specs
- Unit tests
- Complete functional tests
- Parameter validation
- Support using *cinderlib* without cinder to just handle the attach/detach
- Add .py examples
- Add support for new Attach/Detach mechanism
- Consistency Groups
- Encryption
- Support name and description attributes in Volume and Snapshot
- Verify multiattach support
- Revert to snapshot support.
- Add documentation to connect remote host. `use_multipath_for_image_xfer` and
the `enforce_multipath_for_image_xfer` options.
- Complete internals documentation.
- Document the code.

2
babel.cfg Normal file

@ -0,0 +1,2 @@
[python: **.py]


@ -13,13 +13,17 @@
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
import pkg_resources
from cinderlib import cinderlib
from cinderlib import objects
from cinderlib import serialization
from cinderlib import workarounds # noqa
__version__ = '0.3.9'
try:
__version__ = pkg_resources.get_distribution('cinderlib').version
except pkg_resources.DistributionNotFound:
__version__ = '0.0.0'
DEFAULT_PROJECT_ID = objects.DEFAULT_PROJECT_ID
DEFAULT_USER_ID = objects.DEFAULT_USER_ID


@ -71,7 +71,7 @@ def setup(config):
if inspect.isclass(storage) and issubclass(storage,
base.PersistenceDriverBase):
return storage(**config)
return storage(**config)
if not isinstance(storage, six.string_types):
raise exception.InvalidPersistence(storage)


@ -1,19 +0,0 @@
# For Fedora, CentOS, RHEL we require the targetcli package.
# For Ubuntu we require lio-utils or changing the target iscsi_helper
#
# Logs are way too verbose, so we disable them
logs: false
# LVM backend uses cinder-rtstool command that is installed by Cinder in the
# virtual environment, so we need the custom sudo command that inherits the
# virtualenv binaries PATH
venv_sudo: false
# We only define one backend
backends:
- volume_backend_name: lvm
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes
target_protocol: iscsi
target_helper: lioadm

3
doc/.gitignore vendored Normal file

@ -0,0 +1,3 @@
build/*
source/api/*
.autogenerated

6
doc/requirements.txt Normal file

@ -0,0 +1,6 @@
openstackdocstheme>=1.18.1 # Apache-2.0
reno>=2.5.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
sphinx!=1.6.6,!=1.6.7,>=1.6.2 # BSD
os-api-ref>=1.4.0 # Apache-2.0
sphinxcontrib-apidoc>=0.2.0 # BSD


@ -13,34 +13,26 @@
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import sys
# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# Get the project root dir, which is the parent dir of this
cwd = os.getcwd()
project_root = os.path.dirname(cwd)
# Insert the project root dir as the first element in the PYTHONPATH.
# This lets us ensure that the source package is imported, and that its
# version is used.
project_root = os.path.abspath('../../')
sys.path.insert(0, project_root)
# # Get the project root dir, which is the parent dir of this
# import pdb; pdb.set_trace()
# cwd = os.getcwd()
# project_root = os.path.dirname(cwd)
#
# # Insert the project root dir as the first element in the PYTHONPATH.
# # This lets us ensure that the source package is imported, and that its
# # version is used.
# sys.path.insert(0, project_root)
import modulefaker
for module in ('cinder', 'os_brick', 'oslo_utils', 'oslo_versionedobjects',
'oslo_concurrency', 'oslo_log', 'stevedore', 'oslo_db',
'cinder.db.sqlalchemy'):
modulefaker.fake_module(module)
import cinderlib
# -- General configuration ---------------------------------------------
@ -49,7 +41,27 @@ needs_sphinx = '1.6.5'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode']
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinxcontrib.apidoc',
'openstackdocstheme']
# sphinxcontrib.apidoc options
apidoc_module_dir = '../../cinderlib'
apidoc_output_dir = 'api'
apidoc_excluded_paths = [
'tests/*',
'tests',
'persistence/dbms.py',
'persistence/memory.py',
]
apidoc_separate_modules = True
apidoc_toc_file = False
autodoc_mock_imports = ['cinder', 'os_brick', 'oslo_utils',
'oslo_versionedobjects', 'oslo_concurrency',
'oslo_log', 'stevedore', 'oslo_db', 'oslo_config',
'oslo_privsep', 'cinder.db.sqlalchemy']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@ -63,18 +75,19 @@ source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = []
# General information about the project.
project = u'Cinder Library'
copyright = u"2017, Gorka Eguileor"
copyright = u"2017, Cinder Developers"
# The version info for the project you're documenting, acts as replacement
# for |version| and |release|, also used in various other places throughout
# the built documents.
#
# The short X.Y version.
version = cinderlib.__version__
# The full version, including alpha/beta/rc tags.
release = cinderlib.__version__
# openstackdocstheme options
repository_name = 'openstack/cinderlib'
bug_project = 'cinderlib'
bug_tag = ''
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@ -88,7 +101,7 @@ release = cinderlib.__version__
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
@ -99,17 +112,17 @@ exclude_patterns = ['_build']
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
modindex_common_prefix = ['cinderlib.']
# If true, keep warnings as "system message" paragraphs in the built
# documents.
@ -120,7 +133,7 @@ pygments_style = 'sphinx'
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a
# theme further. For a list of options available for each theme, see the
@ -153,6 +166,9 @@ html_theme = 'default'
# "default.css".
html_static_path = ['_static']
# Add any paths that contain "extra" files, such as .htaccess.
html_extra_path = ['_extra']
# If not '', a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
@ -219,7 +235,7 @@ latex_elements = {
latex_documents = [
('index', 'cinderlib.tex',
u'Cinder Library Documentation',
u'Gorka Eguileor', 'manual'),
u'Cinder Contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at
@ -250,7 +266,7 @@ latex_documents = [
man_pages = [
('index', 'cinderlib',
u'Cinder Library Documentation',
[u'Gorka Eguileor'], 1)
[u'Cinder Contributors'], 1)
]
# If true, show URL addresses after external links.
@ -265,9 +281,9 @@ man_pages = [
texinfo_documents = [
('index', 'cinderlib',
u'Cinder Library Documentation',
u'Gorka Eguileor',
u'Cinder Contributors',
'cinderlib',
'One line description of project.',
'Direct usage of Cinder Block Storage drivers without the services.',
'Miscellaneous'),
]


@ -0,0 +1,4 @@
Contributing
============
.. include:: ../../CONTRIBUTING.rst

102
doc/source/index.rst Normal file

@ -0,0 +1,102 @@
Welcome to Cinder Library's documentation!
==========================================
.. image:: https://img.shields.io/pypi/v/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/pypi/pyversions/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/:license-apache-blue.svg
:target: http://www.apache.org/licenses/LICENSE-2.0
|
The Cinder Library, also known as cinderlib, is a Python library that leverages
the Cinder project to provide an object-oriented abstraction around Cinder's
storage drivers to allow their usage directly without running any of the Cinder
services or surrounding services, such as Keystone, MySQL or RabbitMQ.
The library is intended for developers who only need the basic CRUD
functionality of the drivers and don't care for all the additional features
Cinder provides such as quotas, replication, multi-tenancy, migrations,
retyping, scheduling, backups, authorization, authentication, REST API, etc.
The library was originally created as an external project, so it didn't have
the broad range of backend testing Cinder does, and only a limited number of
drivers were validated at the time. Drivers should work out of the box, and
we'll keep a list of drivers that have added the cinderlib functional tests to
the driver gates confirming they work and ensuring they will keep working.
Features
--------
* Use a Cinder driver without running a DBMS, Message broker, or Cinder
service.
* Using multiple simultaneous drivers on the same application.
* Basic operations support:
- Create volume
- Delete volume
- Extend volume
- Clone volume
- Create snapshot
- Delete snapshot
- Create volume from snapshot
- Connect volume
- Disconnect volume
- Local attach
- Local detach
- Validate connector
- Extra Specs for specific backend functionality.
- Backend QoS
- Multi-pool support
* Metadata persistence plugins:
- Stateless: Caller stores JSON serialization.
- Database: Metadata is stored in a database: MySQL, PostgreSQL, SQLite...
- Custom plugin: Caller provides a module to store metadata and cinderlib
calls it when necessary.
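The plugins listed above are selected through `setup`. For instance, here is a
minimal sketch of enabling the database plugin; the exact `persistence_config`
keys shown are assumptions and may differ between releases:
.. code-block:: python

    import cinderlib as cl

    # Keep volume/snapshot metadata in a local SQLite file instead of having
    # the caller store JSON serializations itself.
    cl.setup(persistence_config={'storage': 'db',
                                 'connection': 'sqlite:///cinderlib.sqlite'})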
Example
-------
The following code extract is a simple example to illustrate how cinderlib
works. The code will use the LVM backend to create a volume, attach it to the
local host via iSCSI, and finally snapshot it:
.. code-block:: python
import cinderlib as cl
# Initialize the LVM driver
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
# Create a 1GB volume
vol = lvm.create_volume(1, name='lvm-vol')
# Export, initialize, and do a local attach of the volume
attach = vol.attach()
print('Volume %s attached to %s' % (vol.id, attach.path))
# Snapshot it
snap = vol.create_snapshot('lvm-snap')
Table of Contents
-----------------
.. toctree::
:maxdepth: 2
installation
usage
contributing
limitations


@ -4,14 +4,16 @@
Installation
============
The Cinder Library is an interfacing library that doesn't have any storage
driver code, so it expects Cinder drivers to be installed in the system to run
properly.
We can use the latest stable release or the latest code from master branch.
Stable release
--------------
The Cinder Library is an interfacing library that doesn't have any storage
driver and expects Cinder drivers to be properly installed in the system in
order to work.
Drivers
_______
@ -27,9 +29,7 @@ the RPM to set up the OpenStack repository:
.. code-block:: console
# yum install -y centos-release-openstack-queens
# yum-config-manager --enable openstack-queens
# yum update -y
# yum install -y centos-release-openstack-rocky
# yum install -y openstack-cinder
On RHEL and Fedora, you'll need to download and install the RDO repository RPM
@ -37,11 +37,18 @@ to set up the OpenStack repository:
.. code-block:: console
# yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-queens/rdo-release-queens-1.noarch.rpm
# yum-config-manager --enable openstack-queens
# sudo yum update -y
# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
# yum install -y openstack-cinder
We can also install it directly from source, either on the system or in a virtual environment:
.. code-block:: console
$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install git+git://github.com/openstack/cinder.git@stable/rocky
Library
_______
@ -62,33 +69,10 @@ you through the process.
.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
Container
_________
There is a docker image, in case you prefer trying the library without any
installation.
The image is called `akrog/cinderlib:stable`, and we can run Python directly
with:
.. code-block:: console
$ docker run --name=cinderlib --privileged --net=host -v /etc/iscsi:/etc/iscsi -v /dev:/dev -it akrog/cinderlib:stable python
Latest code
-----------
Container
_________
A Docker image is automatically built on every commit to the *master* branch.
Running a Python shell with the latest *cinderlib* code is as simple as:
.. code-block:: console
$ docker run --name=cinderlib --privileged --net=host -v /etc/iscsi:/etc/iscsi -v /dev:/dev -it akrog/cinderlib python
Drivers
_______
@ -99,7 +83,7 @@ we can install the drivers from source:
$ virtualenv cinder
$ source cinder/bin/activate
$ pip install git+https://github.com/openstack/cinder.git
$ pip install git+git://github.com/openstack/cinder.git
Library
_______
@ -123,8 +107,8 @@ Once you have a copy of the source, you can install it with:
.. code-block:: console
# python setup.py install
$ virtualenv cinder
$ python setup.py install
.. _Github repo: https://github.com/akrog/cinderlib
.. _tarball: https://github.com/akrog/cinderlib/tarball/master
.. _Github repo: https://github.com/openstack/cinderlib
.. _tarball: https://github.com/openstack/cinderlib/tarball/master


@ -0,0 +1,49 @@
Limitations
-----------
Cinderlib works around a number of issues that were preventing the use of the
drivers by other Python applications; some of these are:
- *Oslo config* configuration loading.
- Cinder-volume dynamic configuration loading.
- Privileged helper service.
- DLM configuration.
- Disabling of cinder logging.
- Direct DB access within drivers.
- *Oslo Versioned Objects* DB access methods such as `refresh` and `save`.
- Circular references in *Oslo Versioned Objects* for serialization.
- Using multiple drivers in the same process.
Being in its early development stages, the library is in no way close to the
robustness or feature richness that the Cinder project provides. Some of the
more noticeable limitations one should be aware of are:
- Most methods don't perform argument validation so it's a classic GIGO_
library.
- The logic has been kept to a minimum and higher-level logic is expected
to be handled by the caller: Quotas, tenant control, migration, etc.
- Limited test coverage.
- Only a subset of Cinder's available operations is supported by the library.
Besides *cinderlib's* own limitations the library also inherits some from
*Cinder's* code and will be bound by the same restrictions and behaviors of the
drivers as if they were running under the standard *Cinder* services. The most
notable ones are:
- Dependency on the *eventlet* library.
- Inconsistent behavior on some operations across drivers. For example, some
drivers implement cloning as a cheap operation performed by the storage
array, whereas others will actually create a new volume, attach both the
source and the new volume, and perform a full copy of the data.
- External dependencies must be handled manually, so users will have to take
care of any library, package, or CLI tool that is required by the driver.
- Relies on command execution via *sudo* for attach/detach operations as well
as some CLI tools.
.. _GIGO: https://en.wikipedia.org/wiki/Garbage_in,_garbage_out


@ -3,7 +3,7 @@ Backends
========
The *Backend* class provides the abstraction to access a storage array with an
specific configuration, which usually constraints our ability to operate on the
specific configuration, which usually constrains our ability to operate on the
backend to a single pool.
.. note::
@ -115,8 +115,6 @@ Kaminario
volume_backend_name='kaminario_iscsi',
)
For more configurations refer to the :doc:`../validated_backends` section.
Available Backends
------------------

View File

@ -0,0 +1,275 @@
===========
Connections
===========
When talking about attaching a *Cinder* volume there are three steps that must
happen before the volume is available in the host:
1. Retrieve connection information from the host where the volume is going to
be attached. Here we would be getting iSCSI initiator name, IP, and similar
information.
2. Use the connection information from step 1 and make the volume accessible to
it in the storage backend returning the volume connection information. This
step entails exporting the volume and initializing the connection.
3. Attaching the volume to the host using the data retrieved on step 2.
If we are running *cinderlib* and doing the attach in the same host, then all
steps will be done in the same host. But in many cases you may want to manage
the storage backend in one host and attach a volume in another. In such cases,
steps 1 and 3 will happen in the host that needs the attach and step 2 on the
node running *cinderlib*.
Projects in *OpenStack* use the *OS-Brick* library to manage the attaching and
detaching processes. Same thing happens in *cinderlib*. The only difference
is that there are some connection types that are handled by the hypervisors in
*OpenStack*, so we need some alternative code in *cinderlib* to manage them.
*Connection* objects' most interesting attributes are:
- `connected`: Boolean that reflects if the connection is complete.
- `volume`: The *Volume* to which this instance holds the connection
information.
- `protocol`: String with the connection protocol for this volume, e.g. `iscsi`,
`rbd`.
- `connector_info`: Dictionary with the connection information from the host
that is attaching, such as its hostname, IP address, initiator name, etc.
- `conn_info`: Dictionary with the connection information the host requires to
do the attachment, such as IP address, target name, credentials, etc.
- `device`: If we have done a local attachment this will hold a dictionary with
all the attachment information, such as the `path`, the `type`, the
`scsi_wwn`, etc.
- `path`: String with the path of the system device that has been created when
the volume was attached.
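For illustration only, here is a minimal sketch of inspecting these attributes,
assuming `vol` is an existing *Volume* instance that we attach locally (local
attachments are covered in the next section):

.. code-block:: python

    conn = vol.attach()

    print('Connected:', conn.connected)        # True once the attachment is done
    print('Protocol:', conn.protocol)          # e.g. 'iscsi' or 'rbd'
    print('Device path:', conn.path)           # e.g. '/dev/sdb'
    print('Attachment details:', conn.device)  # path, type, scsi_wwn, etc.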
Local attach
------------
Once we have created a volume with *cinderlib*, doing a local attachment is
really simple: we just call the `attach` method on the *Volume* to get the
*Connection* for the attached volume, and once we are done we call the
`detach` method on the *Volume*.
.. code-block:: python
vol = lvm.create_volume(size=1)
attach = vol.attach()
with open(attach.path, 'w') as f:
    f.write('*' * 100)
vol.detach()
This `attach` method will take care of everything, from gathering our local
connection information, to exporting the volume, initializing the connection,
and finally doing the local attachment of the volume to our host.
The `detach` operation works in a similar way, but performing the exact
opposite steps and in reverse. It will detach the volume from our host,
terminate the connection, and if there are no more connections to the volume it
will also remove the export of the volume.
.. attention::
The *Connection* instance returned by the *Volume* `attach` method also has
a `detach` method, but this one behaves differently from the one we've seen
in the *Volume*, as it will only perform the local detach step and not the
terminate connection or remove export steps.
Remote connection
-----------------
For a remote connection, where you don't have the driver configuration or
access to the management storage network, attaching and detaching volumes is a
little more inconvenient, and how you do it will depend on whether you have
access to the metadata persistence storage or not.
In any case the general attach flow looks something like this:
- Consumer gets connector information from its host.
- Controller receives the connector information from the consumer.
- Controller exports and maps the volume using the connector information and
gets the connection information needed to attach the volume on the consumer.
- The consumer gets the connection information.
- The consumer attaches the volume using the connection information.
With access to the metadata persistence storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case things are easier, as you can use the persistence storage to pass
information between the consumer and the controller node.
Assuming you have the following variables:
- `persistence_config` configuration of your metadata persistence storage.
- `node_id` unique string identifier for your consumer nodes that doesn't
change between reboots.
- `cinderlib_driver_configuration` is a dictionary with the Cinder backend
configuration needed by cinderlib to connect to the storage.
- `volume_id` ID of the volume we want to attach.
The consumer node must store its connector properties on start using the
key-value storage provided by the persistence plugin:
.. code-block:: python
import json
import socket

import cinderlib as cl

cl.setup(persistence_config=persistence_config)

kv = cl.Backend.persistence.get_key_values(node_id)
if not kv:
    storage_nw_ip = socket.gethostbyname(socket.gethostname())
    connector_dict = cl.get_connector_properties('sudo', storage_nw_ip,
                                                  True, False)
    value = json.dumps(connector_dict, separators=(',', ':'))
    kv = cl.KeyValue(node_id, value)
    cl.Backend.persistence.set_key_value(kv)
Then when we want to attach a volume to `node_id` the controller can retrieve
this information using the persistence plugin and export and map the volume for
the specific host.
.. code-block:: python
import json

import cinderlib as cl

cl.setup(persistence_config=persistence_config)
storage = cl.Backend(**cinderlib_driver_configuration)

kv = cl.Backend.persistence.get_key_values(node_id)
if not kv:
    raise Exception('Unknown node')
connector_info = json.loads(kv[0].value)
vol = storage.Volume.get_by_id(volume_id)
vol.connect(connector_info, attached_host=node_id)
Once the volume has been exported and mapped, the connection information is
automatically stored by the persistence plugin and the consumer host can attach
the volume:
.. code-block:: python
vol = storage.Volume.get_by_id(volume_id)
connection = vol.connections[0]
connection.attach()
print('Volume %s attached to %s' % (vol.id, connection.path))
When attaching the volume, the metadata plugin will store the changes to the
*Connection* instance that are needed for detaching.
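Detaching works the other way around: the consumer can do the local detach
through the same *Connection* object it used for the attach. A minimal sketch,
mirroring the attach example above (terminating the connection and removing
the export on the controller node is not shown):

.. code-block:: python

    vol = storage.Volume.get_by_id(volume_id)
    connection = vol.connections[0]

    # Local detach only; the volume is still exported and mapped on the backend
    connection.detach()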
No access to the metadata persistence storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is more inconvenient, as you'll have to handle the data exchange manually
as well as the *OS-Brick* library calls to do the attach/detach.
First we need to get the connection information on the host that is going to do
the attach:
.. code-block:: python
from os_brick.initiator import connector
connector_dict = connector.get_connector_properties('sudo', storage_nw_ip,
                                                     True, False)
Now we need to pass this connector information dictionary to the controller
node. This part will depend on your specific application/system.
In the controller node, once we have the contents of the `connector_dict`
variable we can export and map the volume and get the info needed by the
consumer:
.. code-block:: python
import cinderlib as cl
cl.setup(persistence_config=persistence_config)
storage = cl.Backend(**cinderlib_driver_configuration)
vol = storage.Volume.get_by_id(volume_id)
conn = vol.connect(connector_info, attached_host=node_id)
connection_info = conn.connection_info
We have to pass the contents of `connection_info` information to the consumer
node, and that node will use it to attach the volume:
.. code-block:: python
from os_brick.initiator import connector

connector_dict = connection_info['connector']
conn_info = connection_info['conn']
protocol = conn_info['driver_volume_type']

conn = connector.InitiatorConnector.factory(
    protocol, 'sudo', use_multipath=True,
    device_scan_attempts=3, conn=connector_dict)
device = conn.connect_volume(conn_info['data'])
print('Volume attached to %s' % device.get('path'))
At this point we have the `device` variable that needs to be stored for the
disconnection, so we have to either store it on the consumer node, or pass it
to the controller node so we can save it with the connector info.
Here's an example on how to save it on the controller node:
.. code-block:: python
conn = vol.connections[0]
conn.device = device
conn.save()
.. warning:: At the time of this writing this mechanism doesn't support RBD
connections, as this support is added by cinderlib itself.
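For completeness, here is a sketch of how the consumer could later use that
saved `device` dictionary to do the local disconnect with *OS-Brick*. It
assumes the consumer still has (or was given back) the `protocol`,
`connector_dict`, `conn_info`, and `device` variables from the attach step:

.. code-block:: python

    from os_brick.initiator import connector

    conn = connector.InitiatorConnector.factory(
        protocol, 'sudo', use_multipath=True,
        device_scan_attempts=3, conn=connector_dict)

    # `conn_info` and `device` are the dictionaries exchanged during the attach
    conn.disconnect_volume(conn_info['data'], device)
    print('Volume disconnected from this host')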
Multipath
---------
If we want to use multipathing for local attachments we must let the *Backend*
know when instantiating the driver by passing the
`use_multipath_for_image_xfer=True`:
.. code-block:: python
import cinderlib
lvm = cinderlib.Backend(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi',
use_multipath_for_image_xfer=True,
)
Multi attach
------------
Multi attach support has been added to *Cinder* in the Queens cycle, and it's
not currently supported by *cinderlib*.
Other methods
-------------
All other methods available in the *Connection* class will be explained in
their relevant sections:
- `load` will be explained together with the `json`, `jsons`, `dump`, and `dumps`
properties, and the `to_dict` method in the :doc:`serialization` section.
- `refresh` will reload the connection from the metadata storage and reload any
lazy loadable property that has already been loaded. Covered in the
:doc:`serialization` and :doc:`tracking` sections.

View File

@ -8,8 +8,8 @@ fit a specific desired behavior and the library provides a mechanism to support
this.
Library initialization should be done before making any other library call,
including *Backend* initialization and loading serialized data, if we try to
do it after other calls the library will raise and `Exception`.
including *Backend* initialization and loading serialized data, if we try to do
it after other calls the library will raise an `Exception`.
Provided *setup* method is `cinderlib.Backend.global_setup`, but for
convenience the library provides a reference to this class method in
@ -24,7 +24,8 @@ The method definition is as follows:
suppress_requests_ssl_warnings=True, disable_logs=True,
non_uuid_ids=False, output_all_backend_info=False,
project_id=None, user_id=None, persistence_config=None,
fail_on_missing_backend=True, host=None, **log_params):
fail_on_missing_backend=True, host=None,
**cinder_config_params):
The meaning of the library's configuration options are:
@ -37,7 +38,7 @@ facilitate mutual exclusion it provides 3 different types of locks depending
on the scope the driver requires:
- Between threads of the same process.
- Between different process on the same host.
- Between different processes on the same host.
- In all the OpenStack deployment.
Cinderlib doesn't currently support the third type of locks, but that should
@ -47,8 +48,8 @@ Cinder uses file locks for the between process locking and cinderlib uses that
same kind of locking for the third type of locks, which is also what Cinder
uses when not deployed in an Active-Active fashion.
Parameter defaults to `None`, which will use the current directory to store all
file locks required by the drivers.
Parameter defaults to `None`, which will use the path indicated by the
`state_path` configuration option. It defaults to the current directory.
root_helper
-----------
@ -182,15 +183,15 @@ access to the same resources if it uses the same backend name.
Defaults to the host's hostname.
other keyword arguments
Other keyword arguments
-----------------------
Any other keyword argument passed to the initialization method will be
considered a *Cinder* configuration option and passed directly to all the
drivers.
considered a *Cinder* configuration option in the `[DEFAULT]` section.
This can be useful to set additional logging configuration like debug log
level, or many other advanced features.
level, the `state_path` used by default in many options, or other options like
the `ssh_hosts_key_file` required by drivers that use SSH.
For a list of the possible configuration options one should look into the
*Cinder* project's documentation.
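As an illustration, and only as a sketch with made-up values, passing some of
these options on initialization could look like this:

.. code-block:: python

    import cinderlib as cl

    cl.setup(disable_logs=False,
             # Plain Cinder [DEFAULT] options passed through to the drivers
             debug=True,
             state_path='/var/lib/cinderlib',
             ssh_hosts_key_file='/etc/cinder/ssh_known_hosts')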

View File

@ -96,6 +96,15 @@ trigger a `refresh` on the backends before doing the `dump` or `dumps`.
with open('cinderlib.txt', 'w') as f:
f.write(cinderlib.dumps())
When serializing *cinderlib* resources we'll get all the data currently
present. This means that when serializing a volume that is attached and has
snapshots we'll get them all serialized.
There are some cases where we don't want this, such as when implementing a
persistence metadata plugin. We should use the `to_json` and `to_jsons`
methods for such cases, as they will return a simplified serialization of the
resource containing only the data from the resource itself.
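As a rough sketch of the difference, assuming `vol` is an attached volume that
has snapshots:

.. code-block:: python

    # Full serialization: includes the volume's snapshots and connections
    everything = vol.json

    # Simplified serialization: only the volume's own data, which is what a
    # metadata persistence plugin would typically store
    just_the_volume = vol.to_json()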
From JSON
---------
@ -178,18 +187,18 @@ Backend configuration
---------------------
When *cinderlib* serializes any object it also stores the *Backend* this object
belongs to. For security reasons by default it only stores the identifier of
the backend, which is the `volume_backend_name`. Since we are only storing a
reference to the *Backend*, this means that when you are going through the
deserialization process you require that the *Backend* the object belonged to
already present in *cinderlib*.
belongs to. For security reasons it only stores the identifier of the backend
by default, which is the `volume_backend_name`. Since we are only storing a
reference to the *Backend*, this means that when we are going through the
deserialization process the *Backend* the object belonged to must already be
present in *cinderlib*.
This should be OK for most *cinderlib* usages, since it's common practice to
store you storage backend connection information (credentials, addresses, etc.)
in a different location than your data, but there may be situations (for
example while testing) where we'll want to store everything in the same file,
not only the *cinderlib* representation of all the storage resources but also
the *Backend* configuration required to access the storage array.
store the storage backend connection information (credentials, addresses, etc.)
in a different location than the data; but there may be situations (for example
while testing) where we'll want to store everything in the same file, not only
the *cinderlib* representation of all the storage resources but also the
*Backend* configuration required to access the storage array.
To enable the serialization of the whole driver configuration we have to
specify `output_all_backend_info=True` on the *cinderlib* initialization
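For example, a sketch reusing the file dump from above:

.. code-block:: python

    import cinderlib

    cinderlib.setup(output_all_backend_info=True)

    # The dump now includes the Backend configuration as well
    with open('cinderlib.txt', 'w') as f:
        f.write(cinderlib.dumps())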

View File

@ -17,12 +17,12 @@ we can do it for attached as well as detached volumes.
.. note::
Some drivers, like the NFS, require assistance from the Compute service for
attached volumes, so they is currently no way of doing this with
attached volumes, so there is currently no way of doing this with
*cinderlib*
Creating a snapshot can only be performed by the `create_snapshot` method from
our *Volume* instance, and once we have have created a snapshot it will be
tracked in the *Volume* instance's `snapshots` set.
our *Volume* instance, and once we have created a snapshot it will be tracked
in the *Volume* instance's `snapshots` set.
Here is a simple code to create a snapshot and use the `snapshots` set to
verify that both, the returned value by the call as well as the entry added to

View File

@ -2,11 +2,9 @@
Volumes
=======
The *Volume* class provides the abstraction layer required to perform all
operations on an existing volume, which means that there will be volume
creation operations that will be invoked from other class instances, since the
new volume we want to create doesn't exist yet and we cannot use the *Volume*
class to manage it.
The *Volume* class provides the abstraction layer required to perform all
operations on an existing volume. Volume creation operations are carried out
at the *Backend* level.
Create
------
@ -28,9 +26,9 @@ So we have:
.. note::
*Cinder* NFS backends will create an image and not a directory where to
store files, which falls in line with *Cinder* being a Block Storage
provider and not filesystem provider like *Manila* is.
*Cinder* NFS backends will create an image and not a directory to store
files, which falls in line with *Cinder* being a Block Storage provider and
not filesystem provider like *Manila* is.
So assuming that we have an `lvm` variable holding an initialized *Backend*
instance we could create a new 1GB volume quite easily:
@ -40,6 +38,7 @@ instance we could create a new 1GB volume quite easily:
print('Stats before creating the volume are:')
pprint(lvm.stats())
vol = lvm.create_volume(1)
print('Stats after creating the volume are:')
pprint(lvm.stats())
@ -172,6 +171,16 @@ Some of the fields we could be interested in are:
allow them. This is done on *cinderlib* initialization time passing
`non_uuid_ids=True`.
.. note::
*Cinderlib* does not do scheduling on driver pools, so setting the
`extra_specs` for a volume on drivers that expect the scheduler to select
a specific pool using them will not have the same behavior as in Cinder.
In that case the caller of *cinderlib* is expected to go through the stats,
find the pool that matches the criteria, and pass it to the *Backend*'s
`create_volume` method via the `pool_name` parameter, as in the sketch below.
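A possible sketch of that flow, assuming the backend reports a `pools` list
with `pool_name` and `free_capacity_gb` entries in its stats (standard Cinder
stats fields, used here as an assumption):

.. code-block:: python

    stats = lvm.stats()

    # Pick the first pool with enough free space for a 10GB volume
    pool_name = next(pool['pool_name']
                     for pool in stats.get('pools', [])
                     if pool.get('free_capacity_gb', 0) >= 10)

    vol = lvm.create_volume(10, pool_name=pool_name)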
Delete
------

View File

@ -2,19 +2,25 @@
Usage
=====
Providing a fully Object Oriented abstraction, instead of a classic method
Thanks to the fully Object Oriented abstraction, instead of a classic method
invocation passing the resources to work on, *cinderlib* makes it easy to hit
the ground running when managing storage resources.
Once Cinder drivers and *cinderlib* are installed we just have to import the
library to start using it:
Once the *Cinder* and *cinderlib* packages are installed we just have to import
the library to start using it:
.. code-block:: python
import cinderlib
Usage documentation is not too long and it is recommended to read it all before
using the library to be sure we have at least a high level view of the
.. note::
Installing the *Cinder* package does not require starting any of its
services (volume, scheduler, api) or auxiliary services (KeyStone, MySQL,
RabbitMQ, etc.).
Usage documentation is not too long, and it is recommended to read it all
before using the library to be sure we have at least a high level view of the
different aspects related to managing our storage with *cinderlib*.
Before going into too much detail there are some aspects we need to clarify to
@ -34,13 +40,12 @@ required local connection information of this host, create a *Connection* on
the storage to this host, and then do the local *Attachment*.
Given that *Cinder* drivers are not stateless, *cinderlib* cannot be either.
That's why we have a metadata persistence plugin mechanism to provide different
ways to store resource states. Currently we have memory and database plugins.
Users can store the data wherever they want using the JSON serialization
mechanism or with a custom metadata plugin.
That's why there is a metadata persistence plugin mechanism to provide
different ways to store resource states. Currently we have memory and database
plugins. Users can store the data wherever they want using the JSON
serialization mechanism or with a custom metadata plugin.
For extended information on these topics please refer to their specific
sections.
Each of the different topics are treated in detail on their specific sections:
.. toctree::
:maxdepth: 1
@ -53,3 +58,10 @@ sections.
topics/serialization
topics/tracking
topics/metadata
Auto-generated documentation is also available:
.. toctree::
:maxdepth: 2
api/cinderlib

View File

@ -1 +0,0 @@
.. include:: ../AUTHORS.rst

View File

@ -1,38 +0,0 @@
cinderlib\.persistence package
==============================
Submodules
----------
cinderlib\.persistence\.base module
-----------------------------------
.. automodule:: cinderlib.persistence.base
:members:
:undoc-members:
:show-inheritance:
cinderlib\.persistence\.dbms module
-----------------------------------
.. automodule:: cinderlib.persistence.dbms
:members:
:undoc-members:
:show-inheritance:
cinderlib\.persistence\.memory module
-------------------------------------
.. automodule:: cinderlib.persistence.memory
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: cinderlib.persistence
:members:
:undoc-members:
:show-inheritance:

View File

@ -1 +0,0 @@
.. include:: ../CONTRIBUTING.rst

View File

@ -1 +0,0 @@
.. include:: ../HISTORY.rst

View File

@ -1,41 +0,0 @@
Welcome to Cinder Library's documentation!
==========================================
.. image:: https://img.shields.io/pypi/v/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://readthedocs.org/projects/cinderlib/badge/?version=latest
:target: https://cinderlib.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://img.shields.io/pypi/pyversions/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/:license-apache-blue.svg
:target: http://www.apache.org/licenses/LICENSE-2.0
Cinder Library is a Python library that allows using Cinder storage drivers not
only outside of OpenStack but also outside of Cinder, which means there's no
need to run MySQL, RabbitMQ, Cinder API, Scheduler, or Volume services to be
able to manage your storage.
.. toctree::
:maxdepth: 2
introduction
installation
validated_backends
usage
contributing
validating_backends
internals
authors
todo
history
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@ -1,19 +0,0 @@
=========
Internals
=========
Here we'll go over some of the implementation details within *cinderlib* as
well as explanations of how we've resolved the different issues that arise from
accessing the driver's directly from outside of the cinder-volume service.
Some of the issues *cinderlib* has had to resolve are:
- *Oslo config* configuration loading.
- Cinder-volume dynamic configuration loading.
- Privileged helper service.
- DLM configuration.
- Disabling of cinder logging.
- Direct DB access within drivers.
- *Oslo Versioned Objects* DB access methods such as `refresh` and `save`.
- Circular references in *Oslo Versioned Objects* for serialization.
- Using multiple drivers in the same process.

View File

@ -1,113 +0,0 @@
Cinder Library
==============
.. image:: https://img.shields.io/pypi/v/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://readthedocs.org/projects/cinderlib/badge/?version=latest
:target: https://cinderlib.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://img.shields.io/pypi/pyversions/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/:license-apache-blue.svg
:target: http://www.apache.org/licenses/LICENSE-2.0
Introduction
------------
Cinder Library is a Python library that allows using storage drivers provided
by Cinder outside of OpenStack and without needing to run the Cinder service,
so we don't need Keystone, MySQL, or RabbitMQ services to control our storage.
The library is currently in an early development stage and can be considered as
a proof of concept and not a finished product at this moment, so please
carefully go over the limitations section to avoid surprises.
Due to the limited access to Cinder backends and time constraints the list of
drivers that have been manually tested, and using the existing limited
functional tests, are:
- LVM with LIO
- Dell EMC XtremIO
- Dell EMC VMAX
- Kaminario K2
- Ceph/RBD
- NetApp SolidFire
Features
--------
* Use a Cinder driver without running a DBMS, Message broker, or Cinder
services.
* Using multiple simultaneous drivers on the same program.
* Stateless: Support full serialization of objects and context to JSON or
string so the state can be restored.
* Metadata persistence plugin mechanism.
* Basic operations support:
- Create volume
- Delete volume
- Extend volume
- Clone volume
- Create snapshot
- Delete snapshot
- Create volume from snapshot
- Connect volume
- Disconnect volume
- Local attach
- Local detach
- Validate connector
Demo
----
.. raw:: html
<script type="text/javascript" src="https://asciinema.org/a/TcTR7Lu7jI0pEsd9ThEn01l7n.js"
id="asciicast-TcTR7Lu7jI0pEsd9ThEn01l7n" async data-autoplay="false"
data-loop="false"></script>
Limitations
-----------
Being in its early development stages the library is in no way close to the
robustness or feature richness that the Cinder project provides. Some of the
more noticeable limitations one should be aware of are:
- Most methods don't perform argument validation so it's a classic GIGO_
library.
- The logic has been kept to a minimum and higher functioning logic is expected
to be handled by the caller.
- There is no CI, or unit tests for that matter, and certainly nothing so fancy
as third party vendor CIs, so things could be broken at any point. We only
have some automated, yet limited, functional tests.
- Only a subset of Cinder available operations are supported by the library.
- Access to a small number of storage arrays has limited the number of drivers
that have been verified to work with cinderlib.
Besides *cinderlib's* own limitations the library also inherits some from
*Cinder's* code and will be bound by the same restrictions and behaviors of the
drivers as if they were running under the standard *Cinder* services. The most
notorious ones are:
- Dependency on the *eventlet* library.
- Behavior inconsistency on some operations across drivers. For example you
can find drivers where cloning is a cheap operation performed by the storage
array whereas other will actually create a new volume, attach the source and
new volume and perform a full copy of the data.
- External dependencies must be handled manually. So we'll have to take care of
any library, package, or CLI tool that is required by the driver.
- Relies on command execution via *sudo* for attach/detach operations as well
as some CLI tools.
.. _GIGO: https://en.wikipedia.org/wiki/Garbage_in,_garbage_out

View File

@ -1 +0,0 @@
.. include:: ../TODO.rst

View File

@ -1,157 +0,0 @@
===========
Connections
===========
When talking about attaching a *Cinder* volume there are three steps that must
happen before the volume is available in the host:
1. Retrieve connection information from the host where the volume is going to
be attached. Here we would be getting iSCSI initiator name, IP, and similar
information.
2. Use the connection information from step 1 and make the volume accessible to
it in the storage backend returning the volume connection information. This
step entails exporting the volume and initializing the connection.
3. Attaching the volume to the host using the data retrieved on step 2.
If we are running *cinderlib* and doing the attach in the same host, then all
steps will be done in the same host. But in many cases you may want to manage
the storage backend in one host and attach a volume in another. In such cases,
steps 1 and 3 will happen in the host that needs the attach and step 2 on the
node running *cinderlib*.
Projects in *OpenStack* use the *OS-Brick* library to manage the attaching and
detaching processes. Same thing happens in *cinderlib*. The only difference
is that there are some connection types that are handled by the hypervisors in
*OpenStack*, so we need some alternative code in *cinderlib* to manage them.
*Connection* objects' most interesting attributes are:
- `connected`: Boolean that reflects if the connection is complete.
- `volume`: The *Volume* to which this instance holds the connection
information.
- `protocol`: String with the connection protocol for this volume, ie: `iscsi`,
`rbd`.
- `connector_info`: Dictionary with the connection information from the host
that is attaching. Such as it's hostname, IP address, initiator name, etc.
- `conn_info`: Dictionary with the connection information the host requires to
do the attachment, such as IP address, target name, credentials, etc.
- `device`: If we have done a local attachment this will hold a dictionary with
all the attachment information, such as the `path`, the `type`, the
`scsi_wwn`, etc.
- `path`: String with the path of the system device that has been created when
the volume was attached.
Local attach
------------
Once we have created a volume with *cinderlib* doing a local attachment is
really simple, we just have to call the `attach` method from the *Volume* and
we'll get the *Connection* information from the attached volume, and once we
are done we call the `detach` method on the *Volume*.
.. code-block:: python
vol = lvm.create_volume(size=1)
attach = vol.attach()
with open(attach.path, 'w') as f:
f.write('*' * 100)
vol.detach()
This `attach` method will take care of everything, from gathering our local
connection information, to exporting the volume, initializing the connection,
and finally doing the local attachment of the volume to our host.
The `detach` operation works in a similar way, but performing the exact
opposite steps and in reverse. It will detach the volume from our host,
terminate the connection, and if there are no more connections to the volume it
will also remove the export of the volume.
.. attention::
The *Connection* instance returned by the *Volume* `attach` method also has
a `detach` method, but this one behaves differently than the one we've seen
in the *Volume*, as it will just perform the local detach step and not the
termiante connection or the remove export method.
Remote connection
-----------------
For a remote connection it's a little more inconvenient at the moment, since
you'll have to manually use the *OS-Brick* library on the host that is going to
do the attachment.
.. note:: THIS SECTION IS INCOMPLETE
First we need to get the connection information on the host that is going to do
the attach:
.. code-block:: python
import os_brick
# Retrieve the connection information dictionary
Then we have to do the connection
.. code-block:: python
# Create the connection
attach = vol.connect(connector_dict)
# Return the volume connection information
.. code-block:: python
import os_brick
# Do the attachment
Multipath
---------
If we want to use multipathing for local attachments we must let the *Backend*
know when instantiating the driver by passing the
`use_multipath_for_image_xfer=True`:
.. code-block:: python
import cinderlib
lvm = cinderlib.Backend(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi',
use_multipath_for_image_xfer=True,
)
Multi attach
------------
Multi attach support has just been added to *Cinder* in the Queens cycle, and
it's not currently supported by *cinderlib*.
Other methods
-------------
All other methods available in the *Snapshot* class will be explained in their
relevant sections:
- `load` will be explained together with `json`, `jsons`, `dump`, and `dumps`
properties, and the `to_dict` method in the :doc:`serialization` section.
- `refresh` will reload the volume from the metadata storage and reload any
lazy loadable property that has already been loaded. Covered in the
:doc:`serialization` and :doc:`tracking` sections.

View File

@ -1,258 +0,0 @@
=================
Validated drivers
=================
The *Cinder* project has a large number of storage drivers, and all the drivers
have their own CI to validate that they are working as expected.
For *cinderlib* this is more complicated, as we don't have the resources of the
*Cinder* project. We rely on contributors who have access to the hardware to
test if the storage backend works with *cinderlib*.
.. note:: If you have access to storage hardware supported by *Cinder* not
present in here and you would like to test if *cinderlib* works, please
follow the :doc:`validating_backends` section and report your results.
Currently the following backends have been verified:
- `LVM`_ with LIO
- `Ceph`_
- Dell EMC `XtremIO`_
- Dell EMC `VMAX`_
- `Kaminario`_ K2
- NetApp `SolidFire`_
LVM
---
- *Cinderlib version*: v0.1.0, v0.2.0
- *Cinder release*: *Pike*, *Queens*, *Rocky*
- *Storage*: LVM with LIO
- *Connection type*: iSCSI
- *Requirements*: None
- *Tested by*: Gorka Eguileor (geguileo/akrog)
*Configuration*:
.. code-block:: YAML
logs: false
venv_sudo: true
backends:
- volume_backend_name: lvm
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes
target_protocol: iscsi
target_helper: lioadm
Ceph
----
- *Cinderlib version*: v0.2.0
- *Cinder release*: *Pike*
- *Storage*: Ceph/RBD
- *Versions*: Luminous v12.2.5
- *Connection type*: RBD
- *Requirements*:
- `ceph-common` package
- `ceph.conf` file
- Ceph keyring file
- *Tested by*: Gorka Eguileor (geguileo/akrog)
- *Notes*:
- If we don't define the `keyring` configuration parameter (must use an
absolute path) in our `rbd_ceph_conf` to point to our `rbd_keyring_conf`
file, we'll need the `rbd_keyring_conf` to be in `/etc/ceph/`.
- `rbd_keyring_confg` must always be present and must follow the naming
convention of `$cluster.client.$rbd_user.conf`.
- Current driver cannot delete a snapshot if there's a dependent (a volume
created from it exists).
*Configuration*:
.. code-block:: YAML
logs: false
venv_sudo: true
backends:
- volume_backend_name: ceph
volume_driver: cinder.volume.drivers.rbd.RBDDriver
rbd_user: cinder
rbd_pool: volumes
rbd_ceph_conf: tmp/ceph.conf
rbd_keyring_conf: /etc/ceph/ceph.client.cinder.keyring
XtremIO
-------
- *Cinderlib version*: v0.1.0, v0.2.0
- *Cinder release*: *Pike*, *Queens*, *Rocky*
- *Storage*: Dell EMC XtremIO
- *Versions*: v4.0.15-20_hotfix_3
- *Connection type*: iSCSI, FC
- *Requirements*: None
- *Tested by*: Gorka Eguileor (geguileo/akrog)
*Configuration* for iSCSI:
.. code-block:: YAML
logs: false
venv_sudo: true
backends:
- volume_backend_name: xtremio
volume_driver: cinder.volume.drivers.dell_emc.xtremio.XtremIOISCSIDriver
xtremio_cluster_name: CLUSTER_NAME
use_multipath_for_image_xfer: true
san_ip: w.x.y.z
san_login: user
san_password: toomanysecrets
*Configuration* for FC:
.. code-block:: YAML
logs: false
venv_sudo: false
backends:
- volume_backend_name: xtremio
volume_driver: cinder.volume.drivers.dell_emc.xtremio.XtremIOFCDriver
xtremio_cluster_name: CLUSTER_NAME
use_multipath_for_image_xfer: true
san_ip: w.x.y.z
san_login: user
san_password: toomanysecrets
Kaminario
---------
- *Cinderlib version*: v0.1.0, v0.2.0
- *Cinder release*: *Pike*, *Queens*, *Rocky*
- *Storage*: Kaminario K2
- *Versions*: VisionOS v6.0.72.10
- *Connection type*: iSCSI
- *Requirements*:
- `krest` Python package from PyPi
- *Tested by*: Gorka Eguileor (geguileo/akrog)
*Configuration*:
.. code-block:: YAML
logs: false
venv_sudo: true
backends:
- volume_backend_name: kaminario
volume_driver: cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver
san_ip: w.x.y.z
san_login: user
san_password: toomanysecrets
use_multipath_for_image_xfer: true
SolidFire
---------
- *Cinderlib version*: v0.1.0 with `later patch`_
- *Cinder release*: *Pike*
- *Storage*: NetApp SolidFire
- *Versions*: Unknown
- *Connection type*: iSCSI
- *Requirements*: None
- *Tested by*: John Griffith (jgriffith/j-griffith)
*Configuration*:
.. code-block:: YAML
logs: false
venv_sudo: true
backends:
- volume_backend_name: solidfire
volume_driver: cinder.volume.drivers.solidfire.SolidFireDriver
san_ip: 192.168.1.4
san_login: admin
san_password: admin_password
sf_allow_template_caching = false
image_volume_cache_enabled = True
volume_clear = zero
VMAX
----
- *Cinderlib version*: v0.1.0
- *Cinder release*: *Pike*, *Queens*, *Rocky*
- *Storage*: Dell EMC VMAX
- *Versions*: Unknown
- *Connection type*: iSCSI
- *Requirements*:
- On *Pike* we need file `/etc/cinder/cinder_dell_emc_config.xml`.
- *Tested by*: Helen Walsh (walshh)
*Configuration* for *Pike*:
- *Cinderlib* functional test configuration:
.. code-block:: YAML
logs: false
venv_sudo: false
size_precision: 2
backends:
- image_volume_cache_enabled: True
volume_clear: zero
volume_backend_name: VMAX_ISCSI_DIAMOND
volume_driver: cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDrive
- Contents of file `/etc/cinder/cinder_dell_emc_config.xml`:
.. code-block:: XML
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
<RestServerIp>w.x.y.z</RestServerIp>
<RestServerPort>8443</RestServerPort>
<RestUserName>username</RestUserName>
<RestPassword>toomanysecrets</RestPassword>
<Array>000197800128</Array>
<PortGroups>
<PortGroup>os-iscsi-pg</PortGroup>
</PortGroups>
<SRP>SRP_1</SRP>
<ServiceLevel>Diamond</ServiceLevel>
<Workload>none</Workload>
<SSLVerify>/opt/stack/localhost.domain.com.pem</SSLVerify>
</EMC>
*Configuration* for *Queens* and *Rocky*:
.. code-block:: YAML
venv_sudo: false
size_precision: 2
backends:
- image_volume_cache_enabled: True
volume_clear: zero
volume_backend_name: VMAX_ISCSI_DIAMOND
volume_driver: cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver
san_ip: w.x.y.z
san_rest_port: 8443
san_login: user
san_password: toomanysecrets
vmax_srp: SRP_1
vmax_array: 000197800128
vmax_port_groups: [os-iscsi-pg]
.. _later patch: https://github.com/Akrog/cinderlib/commit/7dde24e6ccdff19de330fe826b5d449831fff2a6

View File

@ -1,229 +0,0 @@
===================
Validating a driver
===================
OK, so you have seen the project and would like to check if the Cinder driver
for your storage backend will work with *cinderlib* or not, but don't want to
spend a lot of time to do it.
In that case the best way to do it is using our functional tests with a custom
configuration file that has your driver configuration.
The environment
---------------
Before we can test anything we'll need to get our environment ready, which will
be comprised of three steps:
- Clone the *cinderlib* project:
.. code-block:: shell
$ git clone git://github.com/akrog/cinderlib
- Create the testing environment which will include the required Cinder code:
.. code-block:: shell
$ cd cinderlib
$ tox -efunctional --notest
- Install any specific packages our driver requires. Some Cinder drivers have
external dependencies that need to be manually installed. These dependencies
can be Python package or Linux binaries. If it's the former we will need to
install them in the testing virtual environment we created in the previous
step.
For example, for the Kaminario backend we need the *krest* Python package, so
here's how we would install the dependency.
.. code-block:: shell
$ source .tox/py27/bin/active
(py27) $ pip install krest
(py27) $ deactivate
To see the Python dependencies for each backend we can check the
`driver-requirements.txt
<https://raw.githubusercontent.com/openstack/cinder/stable/queens/driver-requirements.txt>`_
file from the Cinder project, or in *cinderlib*'s `setup.py` file listed in
the `extras` dictionary.
If we have binary dependencies we can copy them in `.tox/py27/bin` or just
install them globally in our system.
The configuration
-----------------
Functional test use a YAML configuration file to get the driver configuration
as well as some additional parameters for running the tests, with the default
configuration living in the `tests/functiona/lvm.yaml` file.
The configuration file currently supports 3 key-value pairs, with only one
being mandatory.
- `logs`: Boolean value defining whether we want the Cinder code to log to
stdout during the testing. Defaults to `false`.
- `venv_sudo`: Boolean value that instructs the functional tests on whether we
want to run with normal `sudo` or with a custom command that ensure that the
virtual environment's binaries are also available. This is not usually
necessary, but there are some drivers that use binaries installed by a Python
package (like the LVM that requires the `cinder-rtstool` from Cinder). This
is also necessary if we've installed a binary in the `.tox/py27/bin`
directory.
- `size_precision`: Integer value describing how much precision we must use
when comparing volume sizes. Due to cylinder sizes some storage arrays don't
abide 100% to the requested size of the volume. With this option we can
define how many decimals will be correct when testing sizes. A value of 2
means that the backend could create a 1.0015869140625GB volume when we
request a 1GB volume and the tests wouldn't fail. Default is zero, which for
us means that it must be perfect or it will fail.
- `backends`: This is a list of dictionaries each with the configuration
parameters that are configured in the `cinder.conf` file in Cinder.
The contents of the default configuration, excluding the comments, are:
.. code-block:: yaml
logs: false
venv_sudo: true
backends:
- volume_backend_name: lvm
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes
target_protocol: iscsi
target_helper: lioadm
But like the name implies, `backends` can have multiple drivers configured, and
the functional tests will run the tests on them all.
For example a configuration file with LVM, Kaminario, and XtremIO backends
would look like this:
.. code-block:: yaml
logs: false
venv_sudo: true
backends:
- volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes
target_protocol: iscsi
target_helper: lioadm
volume_backend_name: lvm
- volume_backend_name: xtremio
volume_driver: cinder.volume.drivers.dell_emc.xtremio.XtremIOISCSIDriver
use_multipath_for_image_xfer: true
xtremio_cluster_name: CLUSTER
san_ip: x.x.x.x
san_login: user
san_password: password
- volume_backend_name: kaminario
volume_driver: cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver
use_multipath_for_image_xfer: true
san_ip: x.x.x.y
san_login: user
san_password: password
The validation
--------------
Now it's time to run the commands, for this we'll use the `tox` command passing
the location of our configuration file via environmental variable
`CL_FTESTS_CFG`:
.. code-block:: shell
$ CL_FTEST_CFG=temp/tests.yaml tox -efunctional
functional develop-inst-nodeps: /home/geguileo/code/cinderlib
functional installed: You are using pip version 8.1.2, ...
functional runtests: PYTHONHASHSEED='2093635202'
functional runtests: commands[0] | unit2 discover -v -s tests/functional
test_attach_detach_volume_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_attach_detach_volume_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_attach_detach_volume_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_attach_detach_volume_via_attachment_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_attach_detach_volume_via_attachment_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_attach_detach_volume_via_attachment_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_attach_volume_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_attach_volume_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_attach_volume_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_clone_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_clone_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_clone_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_multiple_times_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_multiple_times_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_multiple_times_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_multiple_volumes_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_multiple_volumes_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_multiple_volumes_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_volume_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_volume_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_connect_disconnect_volume_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_create_delete_snapshot_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_create_delete_snapshot_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_create_delete_snapshot_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_create_delete_volume_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_create_delete_volume_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_create_delete_volume_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_create_snapshot_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_create_snapshot_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_create_snapshot_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_create_volume_from_snapshot_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_create_volume_from_snapshot_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_create_volume_from_snapshot_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_create_volume_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_create_volume_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_create_volume_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_disk_io_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_disk_io_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_disk_io_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_extend_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_extend_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_extend_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_stats_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_stats_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_stats_on_xtremio (tests_basic.BackendFunctBasic) ... ok
test_stats_with_creation_on_kaminario (tests_basic.BackendFunctBasic) ... ok
test_stats_with_creation_on_lvm (tests_basic.BackendFunctBasic) ... ok
test_stats_with_creation_on_xtremio (tests_basic.BackendFunctBasic) ... ok
----------------------------------------------------------------------
Ran 48 tests in x.ys
OK
As can be seen each test will have a meaningful name ending in the name of the
backend we have provided via the `volume_backend_name` key in the YAML file.
Reporting results
-----------------
Once you have run the tests, it's time to report the results so they can be
included in the :doc:`validated_backends` section.
To help others use the same backend and help us track how each storage driver
was tested please include the following information in your report:
- *Cinderlib* version.
- Storage Array: What hardware and firmware version were used.
- Connection type tested: iSCSI, FC, RBD, etc.
- Dependencies/Requirements for the backend: Packages, Python libraries,
configuration files...
- Contents of the YAML file with usernames, passwords, and IPs appropriately
masked.
- *Cinder* releases: What cinder releases have been tested.
- Additional notes: Limitations or anything worth mentioning.
To report the results of the tests please create an `issue on the project`_
with the information mentioned above and include any errors you encountered if
you did encounter any.
.. _issue on the project: https://github.com/Akrog/cinderlib/issues/new

View File

@ -1,42 +0,0 @@
==============
Docker example
==============
This Vagrant file deploys a small VM (1GB and 1CPU) with cinderlib in a
container and with LVM properly configured to be used by the container.
This makes it really easy to use the containerized version of cinderlib:
.. code-block:: shell
$ vagrant up
$ vagrant ssh -c 'sudo docker exec -it cinderlib python'
Once we've run those two commands we are in a Python interpreter shell and can
run Python code to use the LVM backend:
.. code-block:: python
import cinderlib as cl
# Initialize the LVM driver
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
# Create a 1GB volume
vol = lvm.create_volume(1)
# Export, initialize, and do a local attach of the volume
attach = vol.attach()
print('Volume %s attached to %s' % (vol.id, attach.path))
# Detach it
vol.detach()
# Delete it
vol.delete()

View File

@ -1,45 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
MEMORY = 1048
CPUS = 1
Vagrant.configure("2") do |config|
config.ssh.insert_key = false
config.vm.box = "centos/7"
# Override
config.vm.provider :libvirt do |v,override|
override.vm.synced_folder '.', '/home/vagrant/sync', disabled: true
v.memory = MEMORY
v.cpus = CPUS
# Support remote libvirt
$libvirt_host = ENV.fetch('LIBVIRT_HOST', '')
$libvirt_user = ENV.fetch('LIBVIRT_USER', 'root')
v.host = $libvirt_host
if $libvirt_host.empty? || $libvirt_host.nil?
v.connect_via_ssh = false
else
v.username = $libvirt_user
v.connect_via_ssh = true
end
end
# Make kub master
config.vm.define :master do |master|
master.vm.provision :ansible do |ansible|
ansible.limit = "all"
ansible.playbook = "site.yml"
ansible.groups = {
"master_node" => ["master"],
}
# Workaround for issue #644 on Vagrant < v1.8.6
# Replace the ProxyCommand with the command specified by
# vagrant ssh-config
req = Gem::Requirement.new('<1.8.6')
if req.satisfied_by?(Gem::Version.new(Vagrant::VERSION)) and not $libvirt_host.empty?
ansible.raw_ssh_args = "-o 'ProxyCommand=ssh #{$libvirt_host} -l #{$libvirt_user} -i #{Dir.home}/.ssh/id_rsa nc %h %p'"
end
end
end
end

View File

@ -1,76 +0,0 @@
- hosts: all
become: yes
become_method: sudo
tasks:
# Accept loop devices for the LVM cinder-volumes VG and reject anything else
- name: Disable new LVM volumes
lineinfile:
path: /etc/lvm/lvm.conf
state: present
insertafter: '# filter ='
line: "\tfilter = [ \"a|loop|\", \"r|.*\\/|\" ]\n\tglobal_filter = [ \"a|loop|\", \"r|.*\\/|\" ]"
# Workaround for lvcreate hanging inside contatiner
# https://serverfault.com/questions/802766/calling-lvcreate-from-inside-the-container-hangs
- lineinfile:
path: /etc/lvm/lvm.conf
state: present
regexp: "^\tudev_sync = 1"
line: "\tudev_sync = 0"
- lineinfile:
path: /etc/lvm/lvm.conf
state: present
regexp: "^\tudev_rules = 1"
line: "\tudev_rules = 0"
- name: Install packages
yum: name={{ item }} state=present
with_items:
- iscsi-initiator-utils
- device-mapper-multipath
- docker
- name: Configure multipath
command: mpathconf --enable --with_multipathd y --user_friendly_names n --find_multipaths y
- name: Enable services
service: name={{ item }} state=restarted enabled=yes
with_items:
- iscsid
- multipathd
- docker
- name: Create LVM backing file
command: truncate -s 10G /root/cinder-volumes
args:
creates: /root/cinder-volumes
- name: Create LVM loopback device
command: losetup --show -f /root/cinder-volumes
register: loop_device
- name: Create PV and VG
shell: "vgcreate cinder-volumes {{loop_device.stdout}}"
- command: vgscan --cache
changed_when: false
- file:
path: /root/cinder
state: directory
- shell: >
docker run --name=cinderlib --privileged --net=host
-v /etc/iscsi:/etc/iscsi
-v /dev:/dev
-v /etc/lvm:/etc/lvm
-v /var/lock/lvm:/var/lock/lvm
-v /lib/modules:/lib/modules:ro
-v /run:/run
-v /var/lib/iscsi:/var/lib/iscsi
-v /etc/localtime:/etc/localtime:ro
-v /root/cinder:/var/lib/cinder
-v /sys/kernel/config:/configfs
-v /sys/fs/cgroup:/sys/fs/cgroup:ro
-d akrog/cinderlib:latest sleep 365d

View File

@ -1,23 +0,0 @@
#!/usr/bin/env bash
if [ "$SOURCE_BRANCH" == "master" ]
then
cl_release=`git tag --sort=-v:refname|head -1`
# Build cinderlib master with cinder master
echo "Building cinderlib master with Cinder master ..."
docker build --build-arg VERSION=$cl_release -t $DOCKER_REPO:master -f Dockerfile .
# Build cinderlib master with latest supported Cinder stable release
release=`tail -1 hooks/rdo-releases`
echo "Building cinderlib master with Cinder $release ..."
docker build --build-arg RELEASE=$release --build-arg VERSION=$cl_release -t $DOCKER_REPO:latest -f Dockerfile-latest .
else
# Build cinderlib latest release with cinder stable branches
releases=`cat hooks/rdo-releases`
while read -r release; do
echo "Building $SOURCE_BRANCH with Cinder $release ..."
docker build --build-arg RELEASE=$release -t $DOCKER_REPO:$release --build-arg VERSION=$SOURCE_BRANCH -f Dockerfile-release .
done <<< "$releases"
fi

View File

@ -1,29 +0,0 @@
#!/usr/bin/env bash
# Push cinderlib master branches
if [ "$SOURCE_BRANCH" == "master" ]
then
for tag in master latest; do
echo "Pushing $tag ..."
docker push $DOCKER_REPO:$tag
done
# Push cinderlib latest release with cinder stable branches
else
releases=`cat hooks/rdo-releases`
cl_release=`git tag --sort=-v:refname|head -1`
while read -r release; do
echo "Pushing $release ..."
docker push $DOCKER_REPO:$release
# Push it also with the cinderlib version tag
tag=${release}-cl_${SOURCE_BRANCH}
echo "Pushing $tag tag ..."
docker tag $DOCKER_REPO:$release $DOCKER_REPO:$tag
docker push $DOCKER_REPO:$tag
last_release=$release
done <<< "$releases"
docker tag $DOCKER_REPO:$last_release $DOCKER_REPO:stable
docker push $DOCKER_REPO:stable
fi

View File

@ -1,3 +0,0 @@
pike
queens
rocky

17
lower-constraints.txt Normal file
View File

@ -0,0 +1,17 @@
cinder==13.0.0
flake8==2.5.5
hacking==0.12.0
mock==2.0.0
openstackdocstheme==1.18.1
os-brick==2.7.0
pyflakes==0.8.1
pbr==2.0.0
pep8==1.5.7
reno==2.5.0
six==1.10.0
Sphinx==1.6.2
sphinxcontrib-websupport==1.0.1
stestr==1.0.0
stevedore==1.20.0
unittest2==1.1.0
urllib3==1.21.1

View File

@ -1,29 +1,27 @@
# Variables: devstack_base_dir, cinderlib_log_file, cinderlib_ignore_errors
- hosts: all
become: True
vars:
devstack_base_dir: "{{ devstack_base_dir|default('/opt/stack') }}"
cinderlib_dir: "{{ cinderlib_dir }}|default({{ devstack_base_dir }}/cinderlib)"
cl_log_file: "{{ devstack_base_dir }}/logs/cinderlib.txt"
cinderlib_ignore_errors: "{{ cinderlib_ignore_errors }}|default(no)"
base_dir: "{{ devstack_base_dir | default('/opt/stack/new') }}"
default_log_file: "{{ base_dir }}/logs/cinderlib.txt"
tasks:
- name: Create temporary config directory
tempfile:
state: directory
suffix: cinderlib
register: tempdir
- name: Convert Cinder's config to cinderlib functional test YAML
- name: Locate unit2 binary location
shell:
cmd: "{{ cinderlib_dir }}/tools/cinder-to-yaml.py /etc/cinder/cinder.conf {{ tempdir.path }}/cinderlib.yaml >{{ cl_log_file }} 2>&1"
ignore_errors: "{{ cinderlib_ignore_errors }}"
register: generate_config
cmd: which unit2
register: unit2_which
- name: Add sudoers role for cinderlib unit2
copy:
dest: /etc/sudoers.d/zuul-sudo-unit2
content: "zuul ALL = NOPASSWD:{{ unit2_which.stdout }} discover -v -s cinderlib/tests/functional\n"
mode: 0440
- name: Validate sudoers config after edits
command: "/usr/sbin/visudo -c"
- name: Run cinderlib functional tests
shell:
cmd: "unit2 discover -v -s tests/functional >>{{ cl_log_file }} 2>&1"
cmd: "set -o pipefail && {{ unit2_which.stdout }} discover -v -s cinderlib/tests/functional 2>&1 | tee {{ cinderlib_log_file | default(default_log_file)}}"
chdir: "{{ base_dir }}/cinderlib"
executable: /bin/bash
chdir: "{{ cinderlib_dir }}"
environment:
CL_FTEST_CFG: "{{ tempdir.path }}/cinderlib.yaml"
when: generate_config.rc != 0
ignore_errors: "{{ cinderlib_ignore_errors }}"
ignore_errors: "{{ cinderlib_ignore_errors | default(false) | bool}}"

View File

@ -0,0 +1,46 @@
---
prelude: >
The Cinder Library, also known as cinderlib, is a Python library that
leverages the Cinder project to provide an object oriented abstraction
around Cinder's storage drivers to allow their usage directly without
running any of the Cinder services or surrounding services, such as
KeyStone, MySQL or RabbitMQ.
This is the Tech Preview release of the library, and is intended for
developers who only need the basic CRUD functionality of the drivers and
don't care for all the additional features Cinder provides such as quotas,
replication, multi-tenancy, migrations, retyping, scheduling, backups,
authorization, authentication, REST API, etc.
features:
- Use a Cinder driver without running a DBMS, Message broker, or Cinder
service.
- Using multiple simultaneous drivers on the same application.
- |
Basic operations support.
* Create volume
* Delete volume
* Extend volume
* Clone volume
* Create snapshot
* Delete snapshot
* Create volume from snapshot
* Connect volume
* Disconnect volume
* Local attach
* Local detach
* Validate connector
* Extra Specs for specific backend functionality.
* Backend QoS
* Multi-pool support
- |
Metadata persistence plugins.
* Stateless: Caller stores JSON serialization.
* Database: Metadata is stored in a database: MySQL, PostgreSQL, SQLite...
* Custom plugin: Caller provides a module to store the metadata and
cinderlib calls it.
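
As a rough illustration of the basic operations listed above, a minimal cinderlib
session could look like the following sketch (the LVM backend options and the
one-GB size are assumptions for the example, not values mandated by this release):

    import cinderlib as cl

    # Initialize the library with the default in-memory metadata persistence.
    cl.setup()

    # Configure a backend directly with driver options; no Cinder services needed.
    lvm = cl.Backend(volume_backend_name='lvm',
                     volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
                     volume_group='cinder-volumes',
                     target_helper='lioadm')

    # Basic CRUD: create a volume, snapshot it, then clean up.
    vol = lvm.create_volume(size=1)
    snap = vol.create_snapshot()
    snap.delete()
    vol.delete()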

View File

@ -0,0 +1,58 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Cinderlib Release Notes documentation build configuration file
#
# Refer to the Sphinx documentation for advice on configuring this file:
#
# http://www.sphinx-doc.org/en/stable/config.html
# -- General configuration ------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'reno.sphinxext',
'openstackdocstheme',
]
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Cinderlib Release Notes'
copyright = u'2017, Cinder Developers'
# Release notes are unversioned, so we don't need to set version and release
version = ''
release = ''
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# -- Options for openstackdocstheme -------------------------------------------
repository_name = 'openstack/cinderlib'
bug_project = 'cinderlib'
bug_tag = ''

View File

@ -0,0 +1,8 @@
=========================
Cinderlib Release Notes
=========================
.. toctree::
:maxdepth: 1
unreleased

View File

@ -0,0 +1,5 @@
==============================
Current Series Release Notes
==============================
.. release-notes::

1
requirements.txt Normal file
View File

@ -0,0 +1 @@
cinder

View File

@ -1,3 +0,0 @@
Sphinx==1.6.5
git+https://github.com/akrog/modulefaker.git#egg=modulefaker
git+https://github.com/akrog/cindermock.git

View File

@ -1,32 +1,59 @@
[bumpversion]
current_version = 0.3.9
commit = True
tag = True
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(?:\.dev(?P<dev>\d+))?
serialize =
{major}.{minor}.{patch}.dev{dev}
{major}.{minor}.{patch}
[bumpversion:file:setup.py]
search = version='{current_version}'
replace = version='{new_version}'
[bumpversion:file:cinderlib/__init__.py]
search = __version__ = '{current_version}'
replace = __version__ = '{new_version}'
[bumpversion:part:dev]
values =
0
1
2
3
4
optional_value = 4
[bdist_wheel]
universal = 1
[flake8]
exclude = docs
[metadata]
name = cinderlib
summary = Direct usage of Cinder Block Storage drivers without the services
description-file =
README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = https://docs.openstack.org/cinderlib/latest/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
Intended Audience :: Developers
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.5
[global]
setup-hooks =
pbr.hooks.setup_hook
[files]
packages =
cinderlib
[entry_points]
cinderlib.persistence.storage =
memory = cinderlib.persistence.memory:MemoryPersistence
db = cinderlib.persistence.dbms:DBPersistence
memory_db = cinderlib.persistence.dbms:MemoryDBPersistence
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
[compile_catalog]
directory = cinderlib/locale
domain = cinderlib
[update_catalog]
domain = cinderlib
output_dir = cinderlib/locale
input_file = cinderlib/locale/cinderlib.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = cinderlib/locale/cinderlib.pot
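
The cinderlib.persistence.storage entry points above are how the metadata
persistence plugins mentioned in the release notes get registered. As a hedged
sketch (the exact setup() signature and the SQLite connection string are
assumptions), a caller would select one of them roughly like this:

    import cinderlib as cl

    # 'db', 'memory' and 'memory_db' are the plugin names registered above;
    # the connection URL is only an illustrative SQLite example.
    cl.setup(persistence_config={'storage': 'db',
                                 'connection': 'sqlite:///cinderlib.sqlite'})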

112
setup.py
View File

@ -1,93 +1,29 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2019, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
with open('README.rst') as readme_file:
readme = readme_file.read()
# Remove the demo for the PyPi package
start = readme.index('Demo\n----')
end = readme.index('Example\n-------')
readme = readme[:start] + readme[end:]
with open('HISTORY.rst') as history_file:
history = history_file.read()
requirements = [
'cinder>=11.0',
]
test_requirements = [
# TODO: put package test requirements here
]
extras = {
# DRBD
'drbd': ['dbus', 'drbdmanage'],
# HPE 3PAR
'3par': ['hpe3parclient>=4.1.0'],
# Kaminario
'kaminario': ['krest>=1.3.0'],
# Pure
'pure': ['purestorage>=1.6.0'],
# Dell EMC VMAX
'vmax': ['pyOpenSSL>=1.0.0'],
# IBM DS8K
'ds8k': ['pyOpenSSL>=1.0.0'],
# HPE Lefthand
'lefthand': ['python-lefthandclient>=2.0.0'],
# Fujitsu Eternus DX
'eternus': ['pywbem>=0.7.0'],
# IBM XIV
'xiv': ['pyxcli>=1.1.5'],
# RBD/Ceph
'rbd': ['rados', 'rbd'],
# Dell EMC VNX
'vnx': ['storops>=0.4.8'],
# Violin
'violin': ['vmemclient>=1.1.8'],
# INFINIDAT
'infinidat': ['infinisdk', 'capacity', 'infi.dtypes.wwn',
'infi.dtypes.iqn'],
}
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
name='cinderlib',
version='0.3.9',
description=("Cinder Library allows using storage drivers outside of "
"Cinder."),
long_description=readme + '\n\n' + history,
author="Gorka Eguileor",
author_email='geguileo@redhat.com',
url='https://github.com/akrog/cinderlib',
packages=setuptools.find_packages(exclude=['tmp', 'cinderlib/tests']),
include_package_data=False,
install_requires=requirements,
extras_require=extras,
license="Apache Software License 2.0",
zip_safe=False,
keywords='cinderlib',
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
'Natural Language :: English',
"Programming Language :: Python :: 2",
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
],
test_suite='unittest2.collector',
tests_require=test_requirements,
entry_points={
'cinderlib.persistence.storage': [
'memory = cinderlib.persistence.memory:MemoryPersistence',
'db = cinderlib.persistence.dbms:DBPersistence',
'memory_db = cinderlib.persistence.dbms:MemoryDBPersistence',
],
},
)
setup_requires=['pbr>=2.0.0'],
pbr=True)

13
test-requirements.txt Normal file
View File

@ -0,0 +1,13 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
ddt>=1.0.1 # MIT
oslotest>=3.2.0 # Apache-2.0
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
stestr>=1.0.0 # Apache-2.0
# There's no current Cinder PyPi package we can use
git+git://github.com/openstack/cinder.git

49
tools/cinder-cfg-to-python.py Executable file
View File

@ -0,0 +1,49 @@
#!/bin/env python
"""Generate Python code to initialize cinderlib based on Cinder config file
This tool generates Python code to instantiate backends using a cinder.conf
file.
It supports multiple backends as defined in enabled_backends.
This program uses the oslo.config module to load configuration options instead
of using configparser directly because drivers will need variables to have the
right type (string, list, integer...), and the types are defined in the code
using oslo.config.
cinder-cfg-to-python.py cinder.conf cinderlib-conf.py
If no output is provided it will use stdout, and if we also don't provide an
input file, it will default to /etc/cinder/cinder.conf.
"""
import sys
import six
from cinderlib.tests.functional import cinder_to_yaml
def _to_str(value):
if isinstance(value, six.string_types):
return '"' + value + '"'
return value
def main(source, dest):
config = cinder_to_yaml.convert(source)
result = ['import cinderlib as cl']
for backend in config['backends']:
name = backend['volume_backend_name']
name = name.replace(' ', '_').replace('-', '_')
cfg = ', '.join('%s=%s' % (k, _to_str(v)) for k, v in backend.items())
result.append('%s = cl.Backend(%s)' % (name, cfg))
with open(dest, 'w') as f:
f.write('\n\n'.join(result) + '\n')
if __name__ == '__main__':
source = '/etc/cinder/cinder.conf' if len(sys.argv) < 2 else sys.argv[1]
dest = '/dev/stdout' if len(sys.argv) < 3 else sys.argv[2]
main(source, dest)
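
For reference, with a single LVM backend enabled in cinder.conf the generated
file would look roughly like the snippet below (the option values are
illustrative, not taken from this change):

    import cinderlib as cl

    lvm = cl.Backend(volume_backend_name="lvm", volume_driver="cinder.volume.drivers.lvm.LVMVolumeDriver", volume_group="cinder-volumes", target_helper="lioadm")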

View File

@ -1,71 +0,0 @@
#!/bin/env python
"""Generate functional tests YAML configuration files from Cinder config file
Functional tests require a YAML file with the backend configuration parameters.
To facilitate running them on a deployment that already has Cinder configured
(ie: devstack) this program can translate from cinder.conf to a valid YAML
file that can be used to run cinderlib functional tests.
This program uses the oslo.config module to load configuration options instead
of using configparser directly because drivers will need variables to have the
right type (string, list, integer...), and the types are defined in the code
using oslo.config.
"""
import sys
import yaml
from six.moves import configparser
from cinder.cmd import volume
volume.objects.register_all() # noqa
from cinder.volume import configuration as config
from cinder.volume import manager
def convert(cinder_source, yaml_dest):
result_cfgs = []
# Manually parse the Cinder configuration file so we know which options are
# set.
parser = configparser.ConfigParser()
parser.read(cinder_source)
enabled_backends = parser.get('DEFAULT', 'enabled_backends')
backends = [name.strip() for name in enabled_backends.split(',') if name]
volume.CONF(('--config-file', cinder_source), project='cinder')
for backend in backends:
options_present = parser.options(backend)
# Dynamically loading the driver triggers adding the specific
# configuration options to the backend_defaults section
cfg = config.Configuration(manager.volume_backend_opts,
config_group=backend)
driver_ns = cfg.volume_driver.rsplit('.', 1)[0]
__import__(driver_ns)
# Use the backend_defaults section to extract the configuration for
# options that are present in the backend section and add them to
# the backend section.
opts = volume.CONF._groups['backend_defaults']._opts
known_present_options = [opt for opt in options_present if opt in opts]
volume_opts = [opts[option]['opt'] for option in known_present_options]
cfg.append_config_values(volume_opts)
# Now retrieve the options that are set in the configuration file.
result_cfgs.append({option: cfg.safe_get(option)
for option in known_present_options})
result = {'backends': result_cfgs}
# Write the YAML to the destination
with open(yaml_dest, 'w') as f:
yaml.dump(result, f)
if __name__ == '__main__':
if len(sys.argv) != 3:
sys.stderr.write('Incorrect number of arguments\n')
exit(1)
convert(sys.argv[1], sys.argv[2])
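
The resulting YAML has the same shape as the tools/lvm.yaml file used by the
functional tests; for a single LVM backend it would look roughly like this
(option values are illustrative):

    backends:
    - volume_backend_name: lvm
      volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
      volume_group: cinder-volumes
      target_helper: lioadm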

56
tools/coding-checks.sh Executable file
View File

@ -0,0 +1,56 @@
#!/bin/sh
set -eu
usage() {
echo "Usage: $0 [OPTION]..."
echo "Run Cinderlib's coding check(s)"
echo ""
echo " -Y, --pylint [<basecommit>] Run pylint check on the entire cinderlib module or just files changed in basecommit (e.g. HEAD~1)"
echo " -h, --help Print this usage message"
echo
exit 0
}
process_options() {
i=1
while [ $i -le $# ]; do
eval opt=\$$i
case $opt in
-h|--help) usage;;
-Y|--pylint) pylint=1;;
*) scriptargs="$scriptargs $opt"
esac
i=$((i+1))
done
}
run_pylint() {
local target="${scriptargs:-HEAD~1}"
if [[ "$target" = *"all"* ]]; then
files="cinderlib"
else
files=$(git diff --name-only --diff-filter=ACMRU $target "*.py")
fi
if [ -n "${files}" ]; then
echo "Running pylint against:"
printf "\t%s\n" "${files[@]}"
pylint --rcfile=.pylintrc --output-format=colorized ${files} -E \
-j `python -c 'import multiprocessing as mp; print(mp.cpu_count())'`
else
echo "No python changes in this commit, pylint check not required."
exit 0
fi
}
scriptargs=
pylint=1
process_options $@
if [ $pylint -eq 1 ]; then
run_pylint
exit 0
fi

25
tools/fast8.sh Executable file
View File

@ -0,0 +1,25 @@
#!/bin/bash
NUM_COMMITS=${FAST8_NUM_COMMITS:-1}
if [[ $NUM_COMMITS = "smart" ]]; then
# Run on all commits not submitted yet
# (sort of -- only checks vs. "master" since this is easy)
NUM_COMMITS=$(git cherry master | wc -l)
fi
echo "Checking last $NUM_COMMITS commits."
cd $(dirname "$0")/..
CHANGED=$(git diff --name-only HEAD~${NUM_COMMITS} | tr '\n' ' ')
# Skip files that don't exist
# (have been git rm'd)
CHECK=""
for FILE in $CHANGED; do
if [ -f "$FILE" ]; then
CHECK="$CHECK $FILE"
fi
done
diff -u --from-file /dev/null $CHECK | flake8 --diff

View File

@ -5,11 +5,6 @@
# Logs are way too verbose, so we disable them
logs: false
# LVM backend uses cinder-rtstool command that is installed by Cinder in the
# virtual environment, so we need the custom sudo command that inherits the
# virtualenv binaries PATH
venv_sudo: false
# We only define one backend
backends:
- volume_backend_name: lvm

119
tox.ini
View File

@ -1,33 +1,110 @@
[tox]
envlist = py27, py33, py34, py35, flake8
minversion = 2.0
envlist = py27, py36, flake8
skipsdist = True
setenv = VIRTUAL_ENV={envdir}
[testenv:flake8]
basepython=python
commands=flake8 cinderlib tests
deps=
flake8
-r{toxinidir}/requirements_docs.txt
usedevelop=True
[testenv]
usedevelop=True
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
setenv =
PYTHONPATH = {toxinidir}:{toxinidir}/cinderlib
deps= -r{toxinidir}/requirements_dev.txt
setenv = OS_STDOUT_CAPTURE=1
OS_STDERR_CAPTURE=1
OS_TEST_TIMEOUT=60
OS_TEST_PATH=./cinderlib/tests/unit
install_command = pip install {opts} {packages}
deps= -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
-r{toxinidir}/test-requirements.txt
commands =
unit2 discover -v -s cinderlib/tests/unit []
find . -ignore_readdir_race -type f -name "*.pyc" -delete
stestr run {posargs}
stestr slowest
whitelist_externals =
bash
find
passenv = *_proxy *_PROXY
[testenv:functional]
sitepackages = True
usedevelop=True
# Workaround for https://github.com/tox-dev/tox/issues/425
basepython=python2.7
envdir = {toxworkdir}/py27
setenv = OS_TEST_PATH=./cinderlib/tests/functional
CL_FTEST_CFG={env:CL_FTEST_CFG:{toxinidir}/tools/lvm.yaml}
CL_FTEST_ROOT_HELPER={env:CL_FTEST_ROOT_HELPER:{toxinidir}/tools/virtualenv-sudo.sh}
sitepackages = True
# Not reusing py27's env due to https://github.com/tox-dev/tox/issues/477
# envdir = {toxworkdir}/py27
# Pass on the location of the backend configuration to the tests
setenv = CL_FTEST_CFG = {env:CL_FTEST_CFG:tools/lvm.yaml}
# Must run serially or test_stats_with_creation may fail occasionally
commands =
unit2 discover -v -s cinderlib/tests/functional []
find . -ignore_readdir_race -type f -name "*.pyc" -delete
stestr run --serial {posargs}
stestr slowest
whitelist_externals =
bash
find
[testenv:functional-py35]
setenv =
{[testenv:functional]setenv}
sitepackages = True
basepython=python3.5
# Not reusing py35's env due to https://github.com/tox-dev/tox/issues/477
# envdir = {toxworkdir}/py35
commands = {[testenv:functional]commands}
whitelist_externals = {[testenv:functional]whitelist_externals}
[testenv:releasenotes]
# Not reusing doc's env due to https://github.com/tox-dev/tox/issues/477
# envdir = {toxworkdir}/docs
basepython = python3
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
-r{toxinidir}/doc/requirements.txt
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[testenv:docs]
basepython = python3
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
-r{toxinidir}/doc/requirements.txt
commands =
doc8 --ignore D001 --ignore-path .tox --ignore-path *.egg-info --ignore-path doc/build --ignore-path .eggs/*/EGG-INFO/*.txt -e txt -e rst
rm -rf doc/build .autogenerated doc/source/api
sphinx-build -W -b html doc/source doc/build/html
rm -rf api-ref/build
whitelist_externals = rm
[testenv:pylint]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
-r{toxinidir}/requirements.txt
pylint==2.1.1
commands =
bash ./tools/coding-checks.sh --pylint {posargs}
[testenv:cover]
# Also do not run test_coverage_ext tests while gathering coverage as those
# tests conflict with coverage.
basepython = python3
setenv =
{[testenv]setenv}
PYTHON=coverage run --source cinderlib --parallel-mode
commands =
stestr run {posargs}
coverage combine
coverage html -d cover
coverage xml -o cover/coverage.xml
[testenv:flake8]
basepython=python3
commands=flake8 cinderlib
deps=
flake8
-r{toxinidir}/test-requirements.txt
[testenv:fast8]
basepython=python3
# Not reusing Flake8's env due to https://github.com/tox-dev/tox/issues/477
# envdir = {toxworkdir}/flake8
commands={toxinidir}/tools/fast8.sh
passenv = FAST8_NUM_COMMITS
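
Assuming a working LVM backend like the one described by tools/lvm.yaml, the
environments above would typically be invoked as follows (the commands are
illustrative):

    # Unit tests on Python 2.7
    tox -e py27

    # Functional tests, pointing CL_FTEST_CFG at the backend definition
    CL_FTEST_CFG=tools/lvm.yaml tox -e functional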

View File

@ -1,13 +0,0 @@
#!/usr/bin/env bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
. "$DIR/set-tags"
set -ev
for tag_info_string in $TAGS; do
IFS=';' read -a tag_info <<< "$tag_info_string"
echo "Building ${tag_info[3]} using ${tag_info[0]} ..."
docker build --build-arg RELEASE=${tag_info[2]} --build-arg VERSION=${tag_info[1]} -t ${tag_info[3]} -f ${tag_info[0]} .
echo "Pusing ${tag_info[3]} ..."
docker push ${tag_info[3]}
done

View File

@ -1,11 +0,0 @@
ubuntu-bm-lvm:
	X_CSI_PERSISTENCE_CONFIG='{"storage":"memory"}' \
	X_CSI_BACKEND_CONFIG='{"target_protocol":"iscsi","iscsi_ip_address":"127.0.0.1","volume_backend_name":"lvm","volume_driver":"cinder.volume.drivers.lvm.LVMVolumeDriver","volume_group":"ember-volumes","target_helper":"lioadm"}' \
	X_CSI_EMBER_CONFIG='{"project_id":"io.ember-csi","user_id":"io.ember-csi","root_helper":"sudo","disable_logs":false,"debug":true,"request_multipath":false}' \
	travis-scripts/run-bm-sanity.sh

ubuntu-lvm:
	X_CSI_PERSISTENCE_CONFIG='{"storage":"memory"}' \
	X_CSI_BACKEND_CONFIG='{"target_protocol":"iscsi","iscsi_ip_address":"127.0.0.1","volume_backend_name":"lvm","volume_driver":"cinder.volume.drivers.lvm.LVMVolumeDriver","volume_group":"ember-volumes","target_helper":"lioadm"}' \
	X_CSI_EMBER_CONFIG='{"project_id":"io.ember-csi","user_id":"io.ember-csi","root_helper":"sudo","disable_logs":false,"debug":true,"request_multipath":false}' \
	travis-scripts/run-sanity.sh

View File

@ -1,27 +0,0 @@
#!/usr/bin/env bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
. "$DIR/set-tags"
set -ev
# Only push when tagging a release or making changes to master branch
if [[ "$TRAVIS_BRANCH" == "$TRAVIS_TAG" || ("$TRAVIS_BRANCH" == "master" && "$TRAVIS_PULL_REQUEST" == "false") ]]; then
for tag_info_string in $TAGS; do
IFS=';' read -a tag_info <<< "$tag_info_string"
echo "Pulling ${tag_info[3]} ..."
docker pull ${tag_info[3]}
echo "Retagging and pushing ${tag_info[4]} ..."
docker tag ${tag_info[3]} ${tag_info[4]}
docker push ${tag_info[4]}
if [ "${tag_info[5]}" == "stable" ]; then
echo "Setting stable tag ${tag_info[2]}"
docker tag ${tag_info[4]} ${FINAL_REPO}:${tag_info[2]}
docker push ${FINAL_REPO}:${tag_info[2]}
fi
done
# TODO: Trigger Ember-CSI jobs https://docs.travis-ci.com/user/triggering-builds/
else
echo "This is not a tag or a merge to master, skipping pushing to ember-csi"
fi

View File

@ -1 +0,0 @@
../hooks/rdo-releases

View File

@ -1,28 +0,0 @@
#!/usr/bin/env bash
TRAVIS_REPO='akrog/travis-ci'
export FINAL_REPO='akrog/cinderlib'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
# Not a tag (merge to master, PR, or branch)
if [ "$TRAVIS_BRANCH" != "$TRAVIS_TAG" ]; then
version=`git tag --sort=-v:refname|head -1`
release=`tail -1 hooks/rdo-releases`
sanitized_branch="${TRAVIS_BRANCH//\//_}"
travis_tag="${sanitized_branch}-PR${TRAVIS_PULL_REQUEST}"
TAGS=("Dockerfile;$version;$release;$TRAVIS_REPO:master-${travis_tag};$FINAL_REPO:master;unstable" \
"Dockerfile-latest;$version;$release;$TRAVIS_REPO:latest-${travis_tag};$FINAL_REPO:latest;unstable" )
else
releases=`cat $DIR/rdo-releases`
while read -r release; do
release_tag="${release}-cl_${TRAVIS_TAG}"
TAGS[$i]="Dockerfile-release;${TRAVIS_TAG};$release;$TRAVIS_REPO:$release_tag;$FINAL_REPO:$release_tag;stable"
i=$((i + 1))
done <<< "$releases"
fi
export TAGS="${TAGS[@]}"
echo "set-tags returns ${TAGS[@]}"

View File

@ -1,6 +0,0 @@
#!/usr/bin/env bash
set -ev
truncate -s 10G /root/cinder-volumes
lo_dev=`losetup --show -f /root/cinder-volumes`
vgcreate cinder-volumes $lo_dev
vgscan