Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that explains where to find
ongoing work and how to recover the repo if needed at some
future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I6371190302f621f04138927539f3fa9a7197df02
Tony Breeds 2017-09-12 16:04:25 -06:00
parent 13b0affae5
commit 79452f470c
163 changed files with 14 additions and 19900 deletions

@@ -1,7 +0,0 @@
[run]
branch = True
source = os_brick
omit = os_brick/tests/*
[report]
ignore_errors = True

.gitignore

@@ -1,65 +0,0 @@
*/.*
!.coveragerc
!.gitignore
!.mailmap
!.testr.conf
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
tools/lintstack.head.py
tools/pylint_exceptions
cover
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# Release notes
releasenotes/build/
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/os-brick.git

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/os-brick

@@ -1,4 +0,0 @@
brick Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README

@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,48 +0,0 @@
========================
Team and repository tags
========================
.. image:: http://governance.openstack.org/badges/os-brick.svg
    :target: http://governance.openstack.org/reference/tags/index.html

.. Change things from this point on

===============================
brick
===============================

.. image:: https://img.shields.io/pypi/v/os-brick.svg
    :target: https://pypi.python.org/pypi/os-brick/
    :alt: Latest Version

.. image:: https://img.shields.io/pypi/dm/os-brick.svg
    :target: https://pypi.python.org/pypi/os-brick/
    :alt: Downloads
OpenStack Cinder brick library for managing local volume attaches
Features
--------
* Discovery of volumes being attached to a host for many transport protocols.
* Removal of volumes from a host.
Hacking
-------
Hacking on brick requires python-gdbm (for Debian derived distributions),
Python 2.7 and Python 3.4. A recent tox is required, as is a recent virtualenv
(13.1.0 or newer).
If "tox -e py34" fails with the error "db type could not be determined", remove
the .testrepository/ directory and then run "tox -e py34".
For any other information, refer to the developer documents:
http://docs.openstack.org/developer/os-brick/index.html
OR refer to the parent project, Cinder:
http://docs.openstack.org/developer/cinder/
* License: Apache License, Version 2.0
* Source: http://git.openstack.org/cgit/openstack/os-brick
* Bugs: http://bugs.launchpad.net/os-brick

@@ -1,2 +0,0 @@
[python: **.py]

@@ -1,15 +0,0 @@
# This is a cross-platform list tracking distribution packages needed for
# install and tests
# see http://docs.openstack.org/infra/bindep/ for additional information.
curl
multipath-utils [platform:dpkg rpm]
sg3-utils [platform:dpkg]
sg3_utils [platform:rpm]
libxml2-devel [platform:rpm]
libxml2-dev [platform:dpkg]
libxslt-devel [platform:rpm]
libxslt1-dev [platform:dpkg]
libssl-dev [platform:dpkg]
openssl-devel [platform:rpm !platform:suse]
libopenssl-devel [platform:suse !platform:rpm]

@@ -1,10 +0,0 @@
API Documentation
=================
The **os-brick** package provides the ability to collect host initiator
information, as well as to discover volumes and remove them from a host.

.. toctree::
   :maxdepth: 2

   os_brick/index

@@ -1,14 +0,0 @@
:mod:`exception` -- Exceptions
==============================
.. automodule:: os_brick.exception
   :synopsis: Exceptions generated by os-brick
.. autoclass:: os_brick.exception.BrickException
.. autoclass:: os_brick.exception.NotFound
.. autoclass:: os_brick.exception.Invalid
.. autoclass:: os_brick.exception.InvalidParameterValue
.. autoclass:: os_brick.exception.NoFibreChannelHostsFound
.. autoclass:: os_brick.exception.NoFibreChannelVolumeDeviceFound
.. autoclass:: os_brick.exception.VolumeDeviceNotFound
.. autoclass:: os_brick.exception.ProtocolNotSupported

@@ -1,14 +0,0 @@
:mod:`os_brick` -- OpenStack Brick library
==========================================
.. automodule:: os_brick
   :synopsis: OpenStack Brick library

Sub-modules:

.. toctree::
   :maxdepth: 2

   initiator/index
   exception

@@ -1,34 +0,0 @@
:mod:`connector` -- Connector
=============================
.. automodule:: os_brick.initiator.connector
   :synopsis: Connector module for os-brick

.. autoclass:: os_brick.initiator.connector.InitiatorConnector

   .. automethod:: factory

.. autoclass:: os_brick.initiator.connector.ISCSIConnector

   .. automethod:: connect_volume
   .. automethod:: disconnect_volume

.. autoclass:: os_brick.initiator.connector.FibreChannelConnector

   .. automethod:: connect_volume
   .. automethod:: disconnect_volume

.. autoclass:: os_brick.initiator.connector.AoEConnector

   .. automethod:: connect_volume
   .. automethod:: disconnect_volume

.. autoclass:: os_brick.initiator.connector.LocalConnector

   .. automethod:: connect_volume
   .. automethod:: disconnect_volume

.. autoclass:: os_brick.initiator.connector.HuaweiStorHyperConnector

   .. automethod:: connect_volume
   .. automethod:: disconnect_volume

@@ -1,13 +0,0 @@
:mod:`initiator` -- Initiator
=============================
.. automodule:: os_brick.initiator
   :synopsis: Initiator module

Sub-modules:

.. toctree::
   :maxdepth: 2

   connector

@@ -1 +0,0 @@
.. include:: ../../ChangeLog

@@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'oslosphinx',
'reno.sphinxext',
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'os-brick'
copyright = u'2015, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

@@ -1,50 +0,0 @@
os-brick |release| Documentation
================================

Overview
--------
**os-brick** is a Python package containing classes that help
with volume discovery and removal from a host.

:doc:`installation`
   Instructions on how to get the distribution.

:doc:`tutorial`
   Start here for a quick overview.

:doc:`api/index`
   The complete API documentation, organized by module.

Changes
-------
See the :doc:`changelog` for a full list of changes to **os-brick**.

About This Documentation
------------------------
This documentation is generated using the `Sphinx
<http://sphinx.pocoo.org/>`_ documentation generator. The source files
for the documentation are located in the *doc/* directory of the
**os-brick** distribution. To generate the docs locally, run the
following command from the root directory of the **os-brick** source:

.. code-block:: bash

   $ python setup.py doc

.. toctree::
   :hidden:

   installation
   tutorial
   changelog
   api/index

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,18 +0,0 @@
============
Installation
============
At the command line::

    $ pip install os-brick

Or, if you have virtualenvwrapper installed::

    $ mkvirtualenv os-brick
    $ pip install os-brick

Or, from source::

    $ git clone https://github.com/openstack/os-brick
    $ cd os-brick
    $ python setup.py install

@@ -1 +0,0 @@
.. include:: ../../README.rst

@@ -1,36 +0,0 @@
Tutorial
========
This tutorial is intended as an introduction to working with
**os-brick**.

Prerequisites
-------------
Before we start, make sure that you have the **os-brick** distribution
:doc:`installed <installation>`. In the Python shell, the following
should run without raising an exception:

.. code-block:: python

   >>> import os_brick

Fetch all of the initiator information from the host
----------------------------------------------------
An example of how to collect the initiator information that is needed
to export a volume to this host:

.. code-block:: python

   from os_brick.initiator import connector

   # What helper do you want to use to get root access?
   root_helper = "sudo"
   # The IP address of the host you are running on.
   my_ip = "192.168.1.1"
   # Do you want to support multipath connections?
   multipath = True
   # Do you want to enforce that the multipath daemon is running?
   enforce_multipath = False
   initiator = connector.get_connector_properties(root_helper, my_ip,
                                                  multipath,
                                                  enforce_multipath)

@@ -1,8 +0,0 @@
# os-brick command filters
# This file should be owned by (and only-writeable by) the root user
[Filters]
# privileged/__init__.py: priv_context.PrivContext(default)
# This line ties the superuser privs with the config files, context name,
# and (implicitly) the actual python code invoked.
privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.*
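The interesting part of this filter is the `/etc/(?!\.\.).*` pattern for the `--config-file` argument: the negative lookahead rejects values that try to escape `/etc/` with a leading `..`. A minimal standalone sketch with Python's `re` module (not the actual oslo.rootwrap machinery; the `allowed` helper is hypothetical) illustrates the effect:

```python
import re

# The --config-file pattern from the filter line above: a path under
# /etc/ whose remainder must not begin with "..".
CONFIG_ARG = re.compile(r'/etc/(?!\.\.).*')

def allowed(path):
    """Hypothetical helper: would this path fill the config-file slot?"""
    return CONFIG_ARG.fullmatch(path) is not None

print(allowed('/etc/os-brick/rootwrap.conf'))  # True
print(allowed('/etc/../etc/passwd'))           # False
```

Note the lookahead only inspects the characters immediately after `/etc/`; it is a guard against the most direct traversal, not a full path canonicalizer.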

@@ -1,18 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`os_brick` -- OpenStack host based volume management
=========================================================
.. autmodule:: os_brick
:synopsis: OpenStack host based volume management.
"""

@@ -1,127 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.encryptors import nop
from oslo_log import log as logging
from oslo_utils import importutils
from oslo_utils import strutils
LOG = logging.getLogger(__name__)
LUKS = "luks"
PLAIN = "plain"
FORMAT_TO_FRONTEND_ENCRYPTOR_MAP = {
    LUKS: 'os_brick.encryptors.luks.LuksEncryptor',
    PLAIN: 'os_brick.encryptors.cryptsetup.CryptsetupEncryptor'
}

LEGACY_PROVIDER_CLASS_TO_FORMAT_MAP = {
    "nova.volume.encryptors.luks.LuksEncryptor": LUKS,
    "nova.volume.encryptors.cryptsetup.CryptsetupEncryptor": PLAIN,
    "nova.volume.encryptors.nop.NoopEncryptor": None,
    "os_brick.encryptors.luks.LuksEncryptor": LUKS,
    "os_brick.encryptors.cryptsetup.CryptsetupEncryptor": PLAIN,
    "os_brick.encryptors.nop.NoopEncryptor": None,
    "LuksEncryptor": LUKS,
    "CryptsetupEncryptor": PLAIN,
    "NoOpEncryptor": None,
}


def get_volume_encryptor(root_helper,
                         connection_info,
                         keymgr,
                         execute=None,
                         *args, **kwargs):
    """Creates a VolumeEncryptor used to encrypt the specified volume.

    :param connection_info: the connection information used to attach
                            the volume
    :returns: the VolumeEncryptor for the volume
    """
    encryptor = nop.NoOpEncryptor(root_helper=root_helper,
                                  connection_info=connection_info,
                                  keymgr=keymgr,
                                  execute=execute,
                                  *args, **kwargs)

    location = kwargs.get('control_location', None)
    if location and location.lower() == 'front-end':  # case insensitive
        provider = kwargs.get('provider')

        # TODO(lyarwood): Remove the following in Queens and raise an
        # ERROR if provider is not a key in SUPPORTED_ENCRYPTION_PROVIDERS.
        # Until then continue to allow both the class name and path to be
        # used.
        if provider in LEGACY_PROVIDER_CLASS_TO_FORMAT_MAP:
            LOG.warning("Use of the in tree encryptor class %(provider)s"
                        " by directly referencing the implementation class"
                        " will be blocked in the Queens release of"
                        " os-brick.", {'provider': provider})
            provider = LEGACY_PROVIDER_CLASS_TO_FORMAT_MAP[provider]

        if provider in FORMAT_TO_FRONTEND_ENCRYPTOR_MAP:
            provider = FORMAT_TO_FRONTEND_ENCRYPTOR_MAP[provider]
        elif provider is None:
            provider = "os_brick.encryptors.nop.NoOpEncryptor"
        else:
            LOG.warning("Use of the out of tree encryptor class "
                        "%(provider)s will be blocked with the Queens "
                        "release of os-brick.", {'provider': provider})

        try:
            encryptor = importutils.import_object(
                provider,
                root_helper,
                connection_info,
                keymgr,
                execute,
                **kwargs)
        except Exception as e:
            LOG.error("Error instantiating %(provider)s: %(exception)s",
                      {'provider': provider, 'exception': e})
            raise

    msg = ("Using volume encryptor '%(encryptor)s' for connection: "
           "%(connection_info)s" %
           {'encryptor': encryptor, 'connection_info': connection_info})
    LOG.debug(strutils.mask_password(msg))

    return encryptor


def get_encryption_metadata(context, volume_api, volume_id, connection_info):
    metadata = {}
    if ('data' in connection_info and
            connection_info['data'].get('encrypted', False)):
        try:
            metadata = volume_api.get_volume_encryption_metadata(context,
                                                                 volume_id)
            if not metadata:
                LOG.warning('Volume %s should be encrypted but there is no '
                            'encryption metadata.', volume_id)
        except Exception as e:
            LOG.error("Failed to retrieve encryption metadata for "
                      "volume %(volume_id)s: %(exception)s",
                      {'volume_id': volume_id, 'exception': e})
            raise

    if metadata:
        msg = ("Using volume encryption metadata '%(metadata)s' for "
               "connection: %(connection_info)s" %
               {'metadata': metadata, 'connection_info': connection_info})
        LOG.debug(strutils.mask_password(msg))

    return metadata
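The provider normalization in `get_volume_encryptor` above can be hard to follow. A minimal sketch, reusing a subset of the two lookup tables from this file (the standalone `resolve_provider` helper is hypothetical, for illustration only), shows how a legacy class name or path is mapped first to a format and then to the current encryptor class path:

```python
# A subset of the lookup tables defined in os_brick/encryptors above.
LUKS = "luks"
PLAIN = "plain"

FORMAT_TO_FRONTEND_ENCRYPTOR_MAP = {
    LUKS: 'os_brick.encryptors.luks.LuksEncryptor',
    PLAIN: 'os_brick.encryptors.cryptsetup.CryptsetupEncryptor',
}

LEGACY_PROVIDER_CLASS_TO_FORMAT_MAP = {
    "nova.volume.encryptors.luks.LuksEncryptor": LUKS,
    "LuksEncryptor": LUKS,
    "CryptsetupEncryptor": PLAIN,
    "NoOpEncryptor": None,
}

def resolve_provider(provider):
    """Hypothetical helper mirroring the normalization in
    get_volume_encryptor: legacy name -> format -> current class path."""
    if provider in LEGACY_PROVIDER_CLASS_TO_FORMAT_MAP:
        provider = LEGACY_PROVIDER_CLASS_TO_FORMAT_MAP[provider]
    if provider in FORMAT_TO_FRONTEND_ENCRYPTOR_MAP:
        return FORMAT_TO_FRONTEND_ENCRYPTOR_MAP[provider]
    if provider is None:
        return "os_brick.encryptors.nop.NoOpEncryptor"
    return provider  # out-of-tree class path: imported as given

print(resolve_provider("LuksEncryptor"))
# os_brick.encryptors.luks.LuksEncryptor
```

Anything that falls through both tables is treated as an out-of-tree class path and handed to `importutils.import_object` unchanged, which is exactly the case the second deprecation warning above covers.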

@@ -1,64 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from os_brick import executor
import six
@six.add_metaclass(abc.ABCMeta)
class VolumeEncryptor(executor.Executor):
    """Base class to support encrypted volumes.

    A VolumeEncryptor provides hooks for attaching and detaching volumes,
    which are called immediately prior to attaching the volume to an instance
    and immediately following detaching the volume from an instance. This
    class performs no actions for either hook.
    """

    def __init__(self, root_helper,
                 connection_info,
                 keymgr,
                 execute=None,
                 *args, **kwargs):
        super(VolumeEncryptor, self).__init__(root_helper,
                                              execute=execute,
                                              *args, **kwargs)
        self._key_manager = keymgr
        self.encryption_key_id = kwargs.get('encryption_key_id')

    def _get_key(self, context):
        """Retrieves the encryption key for the specified volume.

        :param context: the request context used to look up the key
        """
        return self._key_manager.get(context, self.encryption_key_id)

    @abc.abstractmethod
    def attach_volume(self, context, **kwargs):
        """Hook called immediately prior to attaching a volume to an instance.
        """
        pass

    @abc.abstractmethod
    def detach_volume(self, **kwargs):
        """Hook called immediately after detaching a volume from an instance.
        """
        pass

@@ -1,181 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import array
import binascii
import os
from os_brick.encryptors import base
from os_brick import exception
from oslo_concurrency import processutils
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class CryptsetupEncryptor(base.VolumeEncryptor):
"""A VolumeEncryptor based on dm-crypt.
This VolumeEncryptor uses dm-crypt to encrypt the specified volume.
"""
def __init__(self, root_helper,
connection_info,
keymgr,
execute=None,
*args, **kwargs):
super(CryptsetupEncryptor, self).__init__(
root_helper=root_helper,
connection_info=connection_info,
keymgr=keymgr,
execute=execute,
*args, **kwargs)
# Fail if no device_path was set when connecting the volume, e.g. in
# the case of libvirt network volume drivers.
data = connection_info['data']
if not data.get('device_path'):
volume_id = data.get('volume_id') or connection_info.get('serial')
raise exception.VolumeEncryptionNotSupported(
volume_id=volume_id,
volume_type=connection_info['driver_volume_type'])
# the device's path as given to libvirt -- e.g., /dev/disk/by-path/...
self.symlink_path = connection_info['data']['device_path']
# a unique name for the volume -- e.g., the iSCSI participant name
self.dev_name = 'crypt-%s' % os.path.basename(self.symlink_path)
# NOTE(lixiaoy1): This is to import fix for 1439869 from Nova.
# NOTE(tsekiyama): In older version of nova, dev_name was the same
# as the symlink name. Now it has 'crypt-' prefix to avoid conflict
# with multipath device symlink. To enable rolling update, we use the
# old name when the encrypted volume already exists.
old_dev_name = os.path.basename(self.symlink_path)
wwn = data.get('multipath_id')
if self._is_crypt_device_available(old_dev_name):
self.dev_name = old_dev_name
LOG.debug("Using old encrypted volume name: %s", self.dev_name)
elif wwn and wwn != old_dev_name:
# FibreChannel device could be named '/dev/mapper/<WWN>'.
if self._is_crypt_device_available(wwn):
self.dev_name = wwn
LOG.debug("Using encrypted volume name from wwn: %s",
self.dev_name)
# the device's actual path on the compute host -- e.g., /dev/sd_
self.dev_path = os.path.realpath(self.symlink_path)
def _is_crypt_device_available(self, dev_name):
if not os.path.exists('/dev/mapper/%s' % dev_name):
return False
try:
self._execute('cryptsetup', 'status', dev_name, run_as_root=True)
except processutils.ProcessExecutionError as e:
# If /dev/mapper/<dev_name> is a non-crypt block device (such as a
# normal disk or multipath device), exit_code will be 1. In the
# case, we will omit the warning message.
if e.exit_code != 1:
LOG.warning('cryptsetup status %(dev_name)s exited '
'abnormally (status %(exit_code)s): %(err)s',
{"dev_name": dev_name, "exit_code": e.exit_code,
"err": e.stderr})
return False
return True
def _get_passphrase(self, key):
"""Convert raw key to string."""
return binascii.hexlify(key).decode('utf-8')
def _open_volume(self, passphrase, **kwargs):
"""Open the LUKS partition on the volume using passphrase.
:param passphrase: the passphrase used to access the volume
"""
LOG.debug("opening encrypted volume %s", self.dev_path)
# NOTE(joel-coffman): cryptsetup will strip trailing newlines from
# input specified on stdin unless --key-file=- is specified.
cmd = ["cryptsetup", "create", "--key-file=-"]
cipher = kwargs.get("cipher", None)
if cipher is not None:
cmd.extend(["--cipher", cipher])
key_size = kwargs.get("key_size", None)
if key_size is not None:
cmd.extend(["--key-size", key_size])
cmd.extend([self.dev_name, self.dev_path])
self._execute(*cmd, process_input=passphrase,
check_exit_code=True, run_as_root=True,
root_helper=self._root_helper)
def _get_mangled_passphrase(self, key):
"""Convert the raw key into a list of unsigned int's and then a string
"""
# NOTE(lyarwood): This replicates the methods used prior to Newton to
# first encode the passphrase as a list of unsigned int's before
# decoding back into a string. This method strips any leading 0's
# of the resulting hex digit pairs, resulting in a different
# passphrase being returned.
encoded_key = array.array('B', key).tolist()
return ''.join(hex(x).replace('0x', '') for x in encoded_key)
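The difference between the correct passphrase and the mangled one described in the NOTE above can be demonstrated with a short standalone snippet (the key bytes here are purely illustrative):

```python
import array
import binascii

key = bytes([0x01, 0xab, 0x0f])  # illustrative key bytes

# Correct passphrase: zero-padded hex pairs, as _get_passphrase produces.
correct = binascii.hexlify(key).decode('utf-8')

# Mangled passphrase: hex() drops leading zeros, so 0x01 -> '1', 0x0f -> 'f'.
mangled = ''.join(hex(x).replace('0x', '')
                  for x in array.array('B', key).tolist())

print(correct)  # 01ab0f
print(mangled)  # 1abf
```

Any key byte below 0x10 produces a single hex digit instead of a pair, which is why the two passphrases diverge and why bug#1633518 needed this workaround.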
def attach_volume(self, context, **kwargs):
"""Shadow the device and pass an unencrypted version to the instance.
Transparent disk encryption is achieved by mounting the volume via
dm-crypt and passing the resulting device to the instance. The
instance is unaware of the underlying encryption due to modifying the
original symbolic link to refer to the device mounted by dm-crypt.
"""
key = self._get_key(context).get_encoded()
passphrase = self._get_passphrase(key)
try:
self._open_volume(passphrase, **kwargs)
except processutils.ProcessExecutionError as e:
if e.exit_code == 2:
# NOTE(lyarwood): Workaround bug#1633518 by attempting to use
# a mangled passphrase to open the volume.
LOG.info("Unable to open %s with the current passphrase, "
"attempting to use a mangled passphrase to open "
"the volume.", self.dev_path)
self._open_volume(self._get_mangled_passphrase(key), **kwargs)
# modify the original symbolic link to refer to the decrypted device
self._execute('ln', '--symbolic', '--force',
'/dev/mapper/%s' % self.dev_name, self.symlink_path,
root_helper=self._root_helper,
run_as_root=True, check_exit_code=True)
def _close_volume(self, **kwargs):
"""Closes the device (effectively removes the dm-crypt mapping)."""
LOG.debug("closing encrypted volume %s", self.dev_path)
# cryptsetup returns 4 when attempting to destroy a non-active
# dm-crypt device. We ignore this error code so that nova can
# delete the instance successfully.
self._execute('cryptsetup', 'remove', self.dev_name,
run_as_root=True, check_exit_code=True,
root_helper=self._root_helper)
def detach_volume(self, **kwargs):
"""Removes the dm-crypt mapping for the device."""
self._close_volume(**kwargs)


@ -1,187 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.encryptors import cryptsetup
from os_brick.privileged import rootwrap as priv_rootwrap
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
def is_luks(root_helper, device, execute=None):
"""Checks if the specified device uses LUKS for encryption.
:param root_helper: the command prefix for executing as root
:param device: the device to check
:param execute: the execute helper to use (defaults to priv_rootwrap.execute)
:returns: true if the specified device uses LUKS; false otherwise
"""
try:
# check to see if the device uses LUKS: exit status is 0
# if the device is a LUKS partition and non-zero if not
if execute is None:
execute = priv_rootwrap.execute
execute('cryptsetup', 'isLuks', '--verbose', device,
run_as_root=True, root_helper=root_helper,
check_exit_code=True)
return True
except putils.ProcessExecutionError as e:
LOG.warning("isLuks exited abnormally (status %(exit_code)s): "
"%(stderr)s",
{"exit_code": e.exit_code, "stderr": e.stderr})
return False
class LuksEncryptor(cryptsetup.CryptsetupEncryptor):
"""A VolumeEncryptor based on LUKS.
This VolumeEncryptor uses dm-crypt to encrypt the specified volume.
"""
def __init__(self, root_helper,
connection_info,
keymgr,
execute=None,
*args, **kwargs):
super(LuksEncryptor, self).__init__(
root_helper=root_helper,
connection_info=connection_info,
keymgr=keymgr,
execute=execute,
*args, **kwargs)
def _format_volume(self, passphrase, **kwargs):
"""Creates a LUKS header on the volume.
:param passphrase: the passphrase used to access the volume
"""
LOG.debug("formatting encrypted volume %s", self.dev_path)
# NOTE(joel-coffman): cryptsetup will strip trailing newlines from
# input specified on stdin unless --key-file=- is specified.
cmd = ["cryptsetup", "--batch-mode", "luksFormat", "--key-file=-"]
cipher = kwargs.get("cipher", None)
if cipher is not None:
cmd.extend(["--cipher", cipher])
key_size = kwargs.get("key_size", None)
if key_size is not None:
cmd.extend(["--key-size", key_size])
cmd.extend([self.dev_path])
self._execute(*cmd, process_input=passphrase,
check_exit_code=True, run_as_root=True,
root_helper=self._root_helper,
attempts=3)
def _open_volume(self, passphrase, **kwargs):
"""Opens the LUKS partition on the volume using passphrase.
:param passphrase: the passphrase used to access the volume
"""
LOG.debug("opening encrypted volume %s", self.dev_path)
self._execute('cryptsetup', 'luksOpen', '--key-file=-',
self.dev_path, self.dev_name, process_input=passphrase,
run_as_root=True, check_exit_code=True,
root_helper=self._root_helper)
def _unmangle_volume(self, key, passphrase, **kwargs):
"""Workaround for bug#1633518
First identify whether a mangled passphrase is used and, if found,
replace it with the correct unmangled version of the passphrase.
"""
mangled_passphrase = self._get_mangled_passphrase(key)
self._open_volume(mangled_passphrase, **kwargs)
self._close_volume(**kwargs)
LOG.debug("%s correctly opened with a mangled passphrase, replacing "
"this with the original passphrase", self.dev_path)
# NOTE(lyarwood): Now that we are sure that the mangled passphrase is
# used attempt to add the correct passphrase before removing the
# mangled version from the volume.
# luksAddKey currently prompts for the following input:
# Enter any existing passphrase:
# Enter new passphrase for key slot:
# Verify passphrase:
self._execute('cryptsetup', 'luksAddKey', self.dev_path,
process_input=''.join([mangled_passphrase, '\n',
passphrase, '\n', passphrase]),
run_as_root=True, check_exit_code=True,
root_helper=self._root_helper)
# Verify that we can open the volume with the current passphrase
# before removing the mangled passphrase.
self._open_volume(passphrase, **kwargs)
self._close_volume(**kwargs)
# luksRemoveKey only prompts for the key to remove.
self._execute('cryptsetup', 'luksRemoveKey', self.dev_path,
process_input=mangled_passphrase,
run_as_root=True, check_exit_code=True,
root_helper=self._root_helper)
LOG.debug("%s mangled passphrase successfully replaced", self.dev_path)
def attach_volume(self, context, **kwargs):
"""Shadow the device and pass an unencrypted version to the instance.
Transparent disk encryption is achieved by mounting the volume via
dm-crypt and passing the resulting device to the instance. The
instance is unaware of the underlying encryption due to modifying the
original symbolic link to refer to the device mounted by dm-crypt.
"""
key = self._get_key(context).get_encoded()
passphrase = self._get_passphrase(key)
try:
self._open_volume(passphrase, **kwargs)
except putils.ProcessExecutionError as e:
if e.exit_code == 1 and not is_luks(self._root_helper,
self.dev_path,
execute=self._execute):
# the device has never been formatted; format it and try again
LOG.info("%s is not a valid LUKS device;"
" formatting device for first use",
self.dev_path)
self._format_volume(passphrase, **kwargs)
self._open_volume(passphrase, **kwargs)
elif e.exit_code == 2:
# NOTE(lyarwood): Workaround bug#1633518 by replacing any
# mangled passphrases that are found on the volume.
# TODO(lyarwood): Remove workaround during R.
LOG.warning("%s is not usable with the current "
"passphrase, attempting to use a mangled "
"passphrase to open the volume.",
self.dev_path)
self._unmangle_volume(key, passphrase, **kwargs)
self._open_volume(passphrase, **kwargs)
else:
raise
# modify the original symbolic link to refer to the decrypted device
self._execute('ln', '--symbolic', '--force',
'/dev/mapper/%s' % self.dev_name, self.symlink_path,
root_helper=self._root_helper,
run_as_root=True, check_exit_code=True)
def _close_volume(self, **kwargs):
"""Closes the device (effectively removes the dm-crypt mapping)."""
LOG.debug("closing encrypted volume %s", self.dev_path)
self._execute('cryptsetup', 'luksClose', self.dev_name,
run_as_root=True, check_exit_code=True,
root_helper=self._root_helper,
attempts=3)


@ -1,43 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.encryptors import base
class NoOpEncryptor(base.VolumeEncryptor):
"""A VolumeEncryptor that does nothing.
This class exists solely to wrap regular (i.e., unencrypted) volumes so
that they do not require special handling with respect to an encrypted
volume. This implementation performs no action when a volume is attached
or detached.
"""
def __init__(self, root_helper,
connection_info,
keymgr,
execute=None,
*args, **kwargs):
super(NoOpEncryptor, self).__init__(
root_helper=root_helper,
connection_info=connection_info,
keymgr=keymgr,
execute=execute,
*args, **kwargs)
def attach_volume(self, context):
pass
def detach_volume(self):
pass


@ -1,230 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Exceptions for the Brick library."""
from oslo_concurrency import processutils as putils
import six
import traceback
from os_brick.i18n import _
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class BrickException(Exception):
"""Base Brick Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred.")
code = 500
headers = {}
safe = False
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if 'code' not in self.kwargs:
try:
self.kwargs['code'] = self.code
except AttributeError:
pass
if not message:
try:
message = self.message % kwargs
except Exception:
# kwargs doesn't match a variable in the message
# log the issue and the kwargs
LOG.exception("Exception in string format operation. "
"msg='%s'", self.message)
for name, value in kwargs.items():
LOG.error("%(name)s: %(value)s", {'name': name,
'value': value})
# at least get the core message out if something happened
message = self.message
# Put the message in 'msg' so that we can access it. If we have it in
# message it will be overshadowed by the class' message attribute
self.msg = message
super(BrickException, self).__init__(message)
def __unicode__(self):
return six.text_type(self.msg)
class NotFound(BrickException):
message = _("Resource could not be found.")
code = 404
safe = True
class Invalid(BrickException):
message = _("Unacceptable parameters.")
code = 400
# Cannot be templated as the error syntax varies.
# msg needs to be constructed when raised.
class InvalidParameterValue(Invalid):
message = _("%(err)s")
class NoFibreChannelHostsFound(BrickException):
message = _("We are unable to locate any Fibre Channel devices.")
class NoFibreChannelVolumeDeviceFound(BrickException):
message = _("Unable to find a Fibre Channel volume device.")
class VolumeNotDeactivated(BrickException):
message = _('Volume %(name)s was not deactivated in time.')
class VolumeDeviceNotFound(BrickException):
message = _("Volume device not found at %(device)s.")
class VolumePathsNotFound(BrickException):
message = _("Could not find any paths for the volume.")
class VolumePathNotRemoved(BrickException):
message = _("Volume path %(volume_path)s was not removed in time.")
class ProtocolNotSupported(BrickException):
message = _("Connect to volume via protocol %(protocol)s not supported.")
class TargetPortalNotFound(BrickException):
message = _("Unable to find target portal %(target_portal)s.")
class TargetPortalsNotFound(BrickException):
message = _("Unable to find target portal in %(target_portals)s.")
class FailedISCSITargetPortalLogin(BrickException):
message = _("Unable to login to iSCSI Target Portal")
class BlockDeviceReadOnly(BrickException):
message = _("Block device %(device)s is Read-Only.")
class VolumeGroupNotFound(BrickException):
message = _("Unable to find Volume Group: %(vg_name)s")
class VolumeGroupCreationFailed(BrickException):
message = _("Failed to create Volume Group: %(vg_name)s")
class CommandExecutionFailed(BrickException):
message = _("Failed to execute command %(cmd)s")
class VolumeDriverException(BrickException):
message = _('An error occurred while IO to volume %(name)s.')
class InvalidIOHandleObject(BrickException):
message = _('IO handle of %(protocol)s has wrong object '
'type %(actual_type)s.')
class VolumeEncryptionNotSupported(Invalid):
message = _("Volume encryption is not supported for %(volume_type)s "
"volume %(volume_id)s.")
# NOTE(mriedem): This extends ValueError to maintain backward compatibility.
class InvalidConnectorProtocol(ValueError):
pass
class ExceptionChainer(BrickException):
"""A Exception that can contain a group of exceptions.
This exception serves as a container for exceptions, useful when we want to
store all exceptions that happened during a series of steps and then raise
them all together as one.
The representation of the exception will include all exceptions and their
tracebacks.
This class also includes a context manager for convenience, one that will
support both swallowing the exception as if nothing had happened and
raising the exception. In both cases the exception will be stored.
If a message is provided to the context manager it will be formatted and
logged with warning level.
"""
def __init__(self, *args, **kwargs):
self._exceptions = []
self._repr = None
super(ExceptionChainer, self).__init__(*args, **kwargs)
def __repr__(self):
# Since generating the representation can be slow we cache it
if not self._repr:
tracebacks = (
''.join(traceback.format_exception(*e)).replace('\n', '\n\t')
for e in self._exceptions)
self._repr = '\n'.join('\nChained Exception #%s\n\t%s' % (i + 1, t)
for i, t in enumerate(tracebacks))
return self._repr
__str__ = __unicode__ = __repr__
def __nonzero__(self):
# We want to be able to do boolean checks on the exception
return bool(self._exceptions)
__bool__ = __nonzero__ # For Python 3
def add_exception(self, exc_type, exc_val, exc_tb):
# Clear the representation cache
self._repr = None
self._exceptions.append((exc_type, exc_val, exc_tb))
def context(self, catch_exception, msg='', *msg_args):
self._catch_exception = catch_exception
self._exc_msg = msg
self._exc_msg_args = msg_args
return self
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type:
self.add_exception(exc_type, exc_val, exc_tb)
if self._exc_msg:
LOG.warning(self._exc_msg, *self._exc_msg_args)
if self._catch_exception:
return True
class ExecutionTimeout(putils.ProcessExecutionError):
pass
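The ExceptionChainer behaviour above can be sketched with a minimal standalone re-implementation (hypothetical, not the os_brick class itself): exceptions raised inside a `context()` block are recorded, optionally swallowed, and the chainer itself is truthy once anything has been captured.

```python
# Minimal sketch of the ExceptionChainer pattern: record every exception
# raised inside a context() block and expose truthiness for later checks.
class MiniChainer(Exception):
    def __init__(self):
        self._exceptions = []
        self._catch = False
        super().__init__()

    def context(self, catch_exception, msg=''):
        self._catch = catch_exception
        return self

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type:
            self._exceptions.append((exc_type, exc_val, exc_tb))
        # Returning True swallows the exception; False lets it propagate.
        return self._catch

    def __bool__(self):
        return bool(self._exceptions)


exc = MiniChainer()
with exc.context(True, 'step 1 failed'):
    raise ValueError('boom')  # swallowed, but recorded

print(bool(exc), len(exc._exceptions))  # True 1
```

This is the pattern callers use to run a series of cleanup steps, let each one fail independently, and raise the collected failures as one exception at the end.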


@ -1,84 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Generic exec utility that allows us to set the
execute and root_helper attributes for putils.
Some projects need their own execute wrapper
and root_helper settings, so this provides that hook.
"""
import threading
from oslo_concurrency import processutils as putils
from oslo_context import context as context_utils
from oslo_utils import encodeutils
from os_brick.privileged import rootwrap as priv_rootwrap
class Executor(object):
def __init__(self, root_helper, execute=None,
*args, **kwargs):
if execute is None:
execute = priv_rootwrap.execute
self.set_execute(execute)
self.set_root_helper(root_helper)
@staticmethod
def safe_decode(string):
return string and encodeutils.safe_decode(string, errors='ignore')
@classmethod
def make_putils_error_safe(cls, exc):
"""Converts ProcessExecutionError string attributes to unicode."""
for field in ('stderr', 'stdout', 'cmd', 'description'):
value = getattr(exc, field, None)
if value:
setattr(exc, field, cls.safe_decode(value))
def _execute(self, *args, **kwargs):
try:
result = self.__execute(*args, **kwargs)
if result:
result = (self.safe_decode(result[0]),
self.safe_decode(result[1]))
return result
except putils.ProcessExecutionError as e:
self.make_putils_error_safe(e)
raise
def set_execute(self, execute):
self.__execute = execute
def set_root_helper(self, helper):
self._root_helper = helper
class Thread(threading.Thread):
"""Thread class that inherits the parent's context.
This is useful when you are spawning a thread and want LOG entries to
display the right context information, such as the request.
"""
def __init__(self, *args, **kwargs):
# Store the caller's context as a private variable shared among threads
self.__context__ = context_utils.get_current()
super(Thread, self).__init__(*args, **kwargs)
def run(self):
# Store the context in the current thread's request store
if self.__context__:
self.__context__.update_store()
super(Thread, self).run()
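The context-propagating Thread above can be illustrated with a standalone sketch (a hypothetical re-implementation using a plain `threading.local` store instead of oslo_context): the context is captured when the thread object is constructed and re-installed inside `run()`, so code in the new thread sees the caller's context.

```python
import threading

# Hypothetical thread-local store standing in for oslo_context's store.
_store = threading.local()


class ContextThread(threading.Thread):
    """Thread that inherits the constructing thread's context."""

    def __init__(self, *args, **kwargs):
        # Capture the caller's context at construction time.
        self._ctx = getattr(_store, 'ctx', None)
        super().__init__(*args, **kwargs)

    def run(self):
        # Re-install the captured context in the new thread before running.
        if self._ctx is not None:
            _store.ctx = self._ctx
        super().run()


result = []
_store.ctx = 'request-123'
t = ContextThread(target=lambda: result.append(_store.ctx))
t.start()
t.join()
print(result[0])  # request-123
```

Without the capture/re-install step, the target would raise AttributeError, since a fresh thread starts with an empty thread-local store; this is exactly why LOG entries from spawned threads would otherwise lose the request context.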


@ -1,28 +0,0 @@
# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""oslo.i18n integration module.
See http://docs.openstack.org/developer/oslo.i18n/usage.html .
"""
import oslo_i18n as i18n
DOMAIN = 'os-brick'
_translators = i18n.TranslatorFactory(domain=DOMAIN)
# The primary translation function using the well-known name "_"
_ = _translators.primary


@ -1,63 +0,0 @@
# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Brick's Initiator module.
The initiator module contains the capabilities for discovering the initiator
information as well as discovering and removing volumes from a host.
"""
import re
DEVICE_SCAN_ATTEMPTS_DEFAULT = 3
MULTIPATH_ERROR_REGEX = re.compile(r"\w{3} \d+ \d\d:\d\d:\d\d \|.*$")
MULTIPATH_PATH_CHECK_REGEX = re.compile(r"\s+\d+:\d+:\d+:\d+\s+")
PLATFORM_ALL = 'ALL'
PLATFORM_x86 = 'X86'
PLATFORM_S390 = 'S390'
PLATFORM_PPC64 = 'PPC64'
OS_TYPE_ALL = 'ALL'
OS_TYPE_LINUX = 'LINUX'
OS_TYPE_WINDOWS = 'WIN'
S390X = "s390x"
S390 = "s390"
PPC64 = "ppc64"
PPC64LE = "ppc64le"
ISCSI = "ISCSI"
ISER = "ISER"
FIBRE_CHANNEL = "FIBRE_CHANNEL"
AOE = "AOE"
DRBD = "DRBD"
NFS = "NFS"
SMBFS = 'SMBFS'
GLUSTERFS = "GLUSTERFS"
LOCAL = "LOCAL"
HUAWEISDSHYPERVISOR = "HUAWEISDSHYPERVISOR"
HGST = "HGST"
RBD = "RBD"
SCALEIO = "SCALEIO"
SCALITY = "SCALITY"
QUOBYTE = "QUOBYTE"
DISCO = "DISCO"
VZSTORAGE = "VZSTORAGE"
SHEEPDOG = "SHEEPDOG"
VMDK = "VMDK"
GPFS = "GPFS"
VERITAS_HYPERSCALE = "VERITAS_HYPERSCALE"


@ -1,317 +0,0 @@
# Copyright 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Brick Connector objects for each supported transport protocol.
.. module:: connector
The connectors here are responsible for discovering and removing volumes for
each of the supported transport protocols.
"""
import platform
import re
import socket
import sys
from oslo_concurrency import lockutils
from oslo_log import log as logging
from oslo_utils import importutils
from os_brick import exception
from os_brick.i18n import _
from os_brick import initiator
from os_brick import utils
LOG = logging.getLogger(__name__)
synchronized = lockutils.synchronized_with_prefix('os-brick-')
# These constants are being deprecated and moving to the init file.
# Please use the constants there instead.
DEVICE_SCAN_ATTEMPTS_DEFAULT = 3
MULTIPATH_ERROR_REGEX = re.compile(r"\w{3} \d+ \d\d:\d\d:\d\d \|.*$")
MULTIPATH_PATH_CHECK_REGEX = re.compile(r"\s+\d+:\d+:\d+:\d+\s+")
PLATFORM_ALL = 'ALL'
PLATFORM_x86 = 'X86'
PLATFORM_S390 = 'S390'
PLATFORM_PPC64 = 'PPC64'
OS_TYPE_ALL = 'ALL'
OS_TYPE_LINUX = 'LINUX'
OS_TYPE_WINDOWS = 'WIN'
S390X = "s390x"
S390 = "s390"
PPC64 = "ppc64"
PPC64LE = "ppc64le"
ISCSI = "ISCSI"
ISER = "ISER"
FIBRE_CHANNEL = "FIBRE_CHANNEL"
AOE = "AOE"
DRBD = "DRBD"
NFS = "NFS"
GLUSTERFS = "GLUSTERFS"
LOCAL = "LOCAL"
GPFS = "GPFS"
HUAWEISDSHYPERVISOR = "HUAWEISDSHYPERVISOR"
HGST = "HGST"
RBD = "RBD"
SCALEIO = "SCALEIO"
SCALITY = "SCALITY"
QUOBYTE = "QUOBYTE"
DISCO = "DISCO"
VZSTORAGE = "VZSTORAGE"
SHEEPDOG = "SHEEPDOG"
# List of connectors to call when getting
# the connector properties for a host
connector_list = [
'os_brick.initiator.connectors.base.BaseLinuxConnector',
'os_brick.initiator.connectors.iscsi.ISCSIConnector',
'os_brick.initiator.connectors.fibre_channel.FibreChannelConnector',
('os_brick.initiator.connectors.fibre_channel_s390x.'
'FibreChannelConnectorS390X'),
('os_brick.initiator.connectors.fibre_channel_ppc64.'
'FibreChannelConnectorPPC64'),
'os_brick.initiator.connectors.aoe.AoEConnector',
'os_brick.initiator.connectors.remotefs.RemoteFsConnector',
'os_brick.initiator.connectors.rbd.RBDConnector',
'os_brick.initiator.connectors.local.LocalConnector',
'os_brick.initiator.connectors.gpfs.GPFSConnector',
'os_brick.initiator.connectors.drbd.DRBDConnector',
'os_brick.initiator.connectors.huawei.HuaweiStorHyperConnector',
'os_brick.initiator.connectors.hgst.HGSTConnector',
'os_brick.initiator.connectors.scaleio.ScaleIOConnector',
'os_brick.initiator.connectors.disco.DISCOConnector',
'os_brick.initiator.connectors.vmware.VmdkConnector',
'os_brick.initiator.windows.base.BaseWindowsConnector',
'os_brick.initiator.windows.iscsi.WindowsISCSIConnector',
'os_brick.initiator.windows.fibre_channel.WindowsFCConnector',
'os_brick.initiator.windows.smbfs.WindowsSMBFSConnector',
'os_brick.initiator.connectors.vrtshyperscale.HyperScaleConnector',
]
# Mappings used to determine which connector to construct in the factory
_connector_mapping_linux = {
initiator.AOE:
'os_brick.initiator.connectors.aoe.AoEConnector',
initiator.DRBD:
'os_brick.initiator.connectors.drbd.DRBDConnector',
initiator.GLUSTERFS:
'os_brick.initiator.connectors.remotefs.RemoteFsConnector',
initiator.NFS:
'os_brick.initiator.connectors.remotefs.RemoteFsConnector',
initiator.SCALITY:
'os_brick.initiator.connectors.remotefs.RemoteFsConnector',
initiator.QUOBYTE:
'os_brick.initiator.connectors.remotefs.RemoteFsConnector',
initiator.VZSTORAGE:
'os_brick.initiator.connectors.remotefs.RemoteFsConnector',
initiator.ISCSI:
'os_brick.initiator.connectors.iscsi.ISCSIConnector',
initiator.ISER:
'os_brick.initiator.connectors.iscsi.ISCSIConnector',
initiator.FIBRE_CHANNEL:
'os_brick.initiator.connectors.fibre_channel.FibreChannelConnector',
initiator.LOCAL:
'os_brick.initiator.connectors.local.LocalConnector',
initiator.HUAWEISDSHYPERVISOR:
'os_brick.initiator.connectors.huawei.HuaweiStorHyperConnector',
initiator.HGST:
'os_brick.initiator.connectors.hgst.HGSTConnector',
initiator.RBD:
'os_brick.initiator.connectors.rbd.RBDConnector',
initiator.SCALEIO:
'os_brick.initiator.connectors.scaleio.ScaleIOConnector',
initiator.DISCO:
'os_brick.initiator.connectors.disco.DISCOConnector',
initiator.SHEEPDOG:
'os_brick.initiator.connectors.sheepdog.SheepdogConnector',
initiator.VMDK:
'os_brick.initiator.connectors.vmware.VmdkConnector',
initiator.GPFS:
'os_brick.initiator.connectors.gpfs.GPFSConnector',
initiator.VERITAS_HYPERSCALE:
'os_brick.initiator.connectors.vrtshyperscale.HyperScaleConnector',
}
# Mapping for the S390X platform
_connector_mapping_linux_s390x = {
initiator.FIBRE_CHANNEL:
'os_brick.initiator.connectors.fibre_channel_s390x.'
'FibreChannelConnectorS390X',
initiator.DRBD:
'os_brick.initiator.connectors.drbd.DRBDConnector',
initiator.NFS:
'os_brick.initiator.connectors.remotefs.RemoteFsConnector',
initiator.ISCSI:
'os_brick.initiator.connectors.iscsi.ISCSIConnector',
initiator.LOCAL:
'os_brick.initiator.connectors.local.LocalConnector',
initiator.RBD:
'os_brick.initiator.connectors.rbd.RBDConnector',
initiator.GPFS:
'os_brick.initiator.connectors.gpfs.GPFSConnector',
}
# Mapping for the PPC64 platform
_connector_mapping_linux_ppc64 = {
initiator.FIBRE_CHANNEL:
('os_brick.initiator.connectors.fibre_channel_ppc64.'
'FibreChannelConnectorPPC64'),
initiator.DRBD:
'os_brick.initiator.connectors.drbd.DRBDConnector',
initiator.NFS:
'os_brick.initiator.connectors.remotefs.RemoteFsConnector',
initiator.ISCSI:
'os_brick.initiator.connectors.iscsi.ISCSIConnector',
initiator.LOCAL:
'os_brick.initiator.connectors.local.LocalConnector',
initiator.RBD:
'os_brick.initiator.connectors.rbd.RBDConnector',
initiator.GPFS:
'os_brick.initiator.connectors.gpfs.GPFSConnector',
}
# Mapping for the windows connectors
_connector_mapping_windows = {
initiator.ISCSI:
'os_brick.initiator.windows.iscsi.WindowsISCSIConnector',
initiator.FIBRE_CHANNEL:
'os_brick.initiator.windows.fibre_channel.WindowsFCConnector',
initiator.SMBFS:
'os_brick.initiator.windows.smbfs.WindowsSMBFSConnector',
}
# Create aliases to the old names until 2.0.0
# TODO(smcginnis) Remove this lookup once unit test code is updated to
# point to the correct location
for item in connector_list:
_name = item.split('.')[-1]
globals()[_name] = importutils.import_class(item)
@utils.trace
def get_connector_properties(root_helper, my_ip, multipath, enforce_multipath,
host=None, execute=None):
"""Get the connection properties for all protocols.
When the connector wants to use multipath, multipath=True should be
specified. If enforce_multipath=True is specified too, an exception is
thrown when multipathd is not running. Otherwise, it falls back to
multipath=False and only the first path found is used.
For compatibility reasons, even if multipath=False is specified,
some cinder storage drivers may export the target for multipath, which
can be found via sendtargets discovery.
:param root_helper: The command prefix for executing as root.
:type root_helper: str
:param my_ip: The IP address of the local host.
:type my_ip: str
:param multipath: Enable multipath?
:type multipath: bool
:param enforce_multipath: Should we enforce that the multipath daemon is
running? If the daemon isn't running then the
return dict will have multipath as False.
:type enforce_multipath: bool
:param host: hostname.
:param execute: execute helper.
:returns: dict containing all of the collected initiator values.
"""
props = {}
props['platform'] = platform.machine()
props['os_type'] = sys.platform
props['ip'] = my_ip
props['host'] = host if host else socket.gethostname()
for item in connector_list:
connector = importutils.import_class(item)
if (utils.platform_matches(props['platform'], connector.platform) and
utils.os_matches(props['os_type'], connector.os_type)):
props = utils.merge_dict(props,
connector.get_connector_properties(
root_helper,
host=host,
multipath=multipath,
enforce_multipath=enforce_multipath,
execute=execute))
return props
# TODO(walter-boring) We have to keep this class defined here
# so we don't break backwards compatibility
class InitiatorConnector(object):
@staticmethod
def factory(protocol, root_helper, driver=None,
use_multipath=False,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
arch=None,
*args, **kwargs):
"""Build a Connector object based upon protocol and architecture."""
# We do this instead of assigning it in the definition
# to help mocking for unit tests
if arch is None:
arch = platform.machine()
# Set the correct mapping for imports
if sys.platform == 'win32':
_mapping = _connector_mapping_windows
elif arch in (initiator.S390, initiator.S390X):
_mapping = _connector_mapping_linux_s390x
elif arch in (initiator.PPC64, initiator.PPC64LE):
_mapping = _connector_mapping_linux_ppc64
else:
_mapping = _connector_mapping_linux
LOG.debug("Factory for %(protocol)s on %(arch)s",
{'protocol': protocol, 'arch': arch})
protocol = protocol.upper()
# set any special kwargs needed by connectors
if protocol in (initiator.NFS, initiator.GLUSTERFS,
initiator.SCALITY, initiator.QUOBYTE,
initiator.VZSTORAGE):
kwargs.update({'mount_type': protocol.lower()})
elif protocol == initiator.ISER:
kwargs.update({'transport': 'iser'})
# now set all the default kwargs
kwargs.update(
{'root_helper': root_helper,
'driver': driver,
'use_multipath': use_multipath,
'device_scan_attempts': device_scan_attempts,
})
connector = _mapping.get(protocol)
if not connector:
msg = (_("Invalid InitiatorConnector protocol "
"specified %(protocol)s") %
dict(protocol=protocol))
raise exception.InvalidConnectorProtocol(msg)
conn_cls = importutils.import_class(connector)
return conn_cls(*args, **kwargs)
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_concurrency import lockutils
from oslo_log import log as logging
from oslo_service import loopingcall
from os_brick import exception
from os_brick import initiator
from os_brick.initiator.connectors import base
from os_brick import utils
DEVICE_SCAN_ATTEMPTS_DEFAULT = 3
LOG = logging.getLogger(__name__)
class AoEConnector(base.BaseLinuxConnector):
"""Connector class to attach/detach AoE volumes."""
def __init__(self, root_helper, driver=None,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
super(AoEConnector, self).__init__(
root_helper,
driver=driver,
device_scan_attempts=device_scan_attempts,
*args, **kwargs)
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The AoE connector properties."""
return {}
def get_search_path(self):
return '/dev/etherd'
def get_volume_paths(self, connection_properties):
aoe_device, aoe_path = self._get_aoe_info(connection_properties)
volume_paths = []
if os.path.exists(aoe_path):
volume_paths.append(aoe_path)
return volume_paths
def _get_aoe_info(self, connection_properties):
shelf = connection_properties['target_shelf']
lun = connection_properties['target_lun']
aoe_device = 'e%(shelf)s.%(lun)s' % {'shelf': shelf,
'lun': lun}
path = self.get_search_path()
aoe_path = '%(path)s/%(device)s' % {'path': path,
'device': aoe_device}
return aoe_device, aoe_path
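The shelf/LUN pair maps deterministically onto an etherd device name and node path; the computation in `_get_aoe_info` can be exercised standalone:

```python
def aoe_info(target_shelf, target_lun, search_path='/dev/etherd'):
    """Build the AoE device name and device-node path the way the
    connector does: e<shelf>.<lun> under the search path."""
    aoe_device = 'e%(shelf)s.%(lun)s' % {'shelf': target_shelf,
                                         'lun': target_lun}
    aoe_path = '%(path)s/%(device)s' % {'path': search_path,
                                        'device': aoe_device}
    return aoe_device, aoe_path
```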
@utils.trace
@lockutils.synchronized('aoe_control', 'aoe-')
def connect_volume(self, connection_properties):
"""Discover and attach the volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:returns: dict
connection_properties for AoE must include:
target_shelf - shelf id of volume
target_lun - lun id of volume
"""
aoe_device, aoe_path = self._get_aoe_info(connection_properties)
device_info = {
'type': 'block',
'device': aoe_device,
'path': aoe_path,
}
if os.path.exists(aoe_path):
self._aoe_revalidate(aoe_device)
else:
self._aoe_discover()
waiting_status = {'tries': 0}
# NOTE(jbr_): Device path is not always present immediately
def _wait_for_discovery(aoe_path):
if os.path.exists(aoe_path):
raise loopingcall.LoopingCallDone
if waiting_status['tries'] >= self.device_scan_attempts:
raise exception.VolumeDeviceNotFound(device=aoe_path)
LOG.info("AoE volume not yet found at: %(path)s. "
"Try number: %(tries)s",
{'path': aoe_path, 'tries': waiting_status['tries']})
self._aoe_discover()
waiting_status['tries'] += 1
timer = loopingcall.FixedIntervalLoopingCall(_wait_for_discovery,
aoe_path)
timer.start(interval=2).wait()
if waiting_status['tries']:
LOG.debug("Found AoE device %(path)s "
"(after %(tries)s rediscover)",
{'path': aoe_path,
'tries': waiting_status['tries']})
return device_info
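The discovery loop above is a fixed-interval poll with a bounded retry count. The same control flow, sketched without the oslo.service `FixedIntervalLoopingCall` dependency (names here are illustrative, not os-brick APIs):

```python
import time


def wait_for_path(path_exists, rediscover, max_tries, interval=2):
    """Poll until path_exists() returns True, invoking rediscover()
    between attempts; raise RuntimeError after max_tries failures.
    Returns the number of rediscover attempts that were needed."""
    tries = 0
    while True:
        if path_exists():
            return tries
        if tries >= max_tries:
            raise RuntimeError('device not found')
        rediscover()
        tries += 1
        time.sleep(interval)
```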
@utils.trace
@lockutils.synchronized('aoe_control', 'aoe-')
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Detach and flush the volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:param device_info: historical difference, but same as connection_props
:type device_info: dict
connection_properties for AoE must include:
target_shelf - shelf id of volume
target_lun - lun id of volume
"""
aoe_device, aoe_path = self._get_aoe_info(connection_properties)
if os.path.exists(aoe_path):
self._aoe_flush(aoe_device)
def _aoe_discover(self):
(out, err) = self._execute('aoe-discover',
run_as_root=True,
root_helper=self._root_helper,
check_exit_code=0)
LOG.debug('aoe-discover: stdout=%(out)s stderr=%(err)s',
{'out': out, 'err': err})
def _aoe_revalidate(self, aoe_device):
(out, err) = self._execute('aoe-revalidate',
aoe_device,
run_as_root=True,
root_helper=self._root_helper,
check_exit_code=0)
LOG.debug('aoe-revalidate %(dev)s: stdout=%(out)s stderr=%(err)s',
{'dev': aoe_device, 'out': out, 'err': err})
def _aoe_flush(self, aoe_device):
(out, err) = self._execute('aoe-flush',
aoe_device,
run_as_root=True,
root_helper=self._root_helper,
check_exit_code=0)
LOG.debug('aoe-flush %(dev)s: stdout=%(out)s stderr=%(err)s',
{'dev': aoe_device, 'out': out, 'err': err})
def extend_volume(self, connection_properties):
# TODO(walter-boring): is this possible?
raise NotImplementedError
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob
import os
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from os_brick import exception
from os_brick import initiator
from os_brick.initiator import host_driver
from os_brick.initiator import initiator_connector
from os_brick.initiator import linuxscsi
LOG = logging.getLogger(__name__)
class BaseLinuxConnector(initiator_connector.InitiatorConnector):
os_type = initiator.OS_TYPE_LINUX
def __init__(self, root_helper, driver=None, execute=None,
*args, **kwargs):
self._linuxscsi = linuxscsi.LinuxSCSI(root_helper, execute=execute)
if not driver:
driver = host_driver.HostDriver()
self.set_driver(driver)
super(BaseLinuxConnector, self).__init__(root_helper, execute=execute,
*args, **kwargs)
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The generic connector properties."""
multipath = kwargs['multipath']
enforce_multipath = kwargs['enforce_multipath']
props = {}
props['multipath'] = (multipath and
linuxscsi.LinuxSCSI.is_multipath_running(
enforce_multipath, root_helper,
execute=kwargs.get('execute')))
return props
def check_valid_device(self, path, run_as_root=True):
cmd = ('dd', 'if=%(path)s' % {"path": path},
'of=/dev/null', 'count=1')
out, info = None, None
try:
out, info = self._execute(*cmd, run_as_root=run_as_root,
root_helper=self._root_helper)
except putils.ProcessExecutionError as e:
LOG.error("Failed to access the device on the path "
"%(path)s: %(error)s.",
{"path": path, "error": e.stderr})
return False
# If the info is none, the path does not exist.
if info is None:
return False
return True
def get_all_available_volumes(self, connection_properties=None):
volumes = []
path = self.get_search_path()
if path:
# now find all entries in the search path
if os.path.isdir(path):
path_items = [path, '/*']
file_filter = ''.join(path_items)
volumes = glob.glob(file_filter)
return volumes
def _discover_mpath_device(self, device_wwn, connection_properties,
device_name):
"""This method discovers a multipath device.
Discover a multipath device based on a defined connection_property
and a device_wwn and return the multipath_id and path of the multipath
enabled device if there is one.
"""
path = self._linuxscsi.find_multipath_device_path(device_wwn)
device_path = None
multipath_id = None
if path is None:
# find_multipath_device only accept realpath not symbolic path
device_realpath = os.path.realpath(device_name)
mpath_info = self._linuxscsi.find_multipath_device(
device_realpath)
if mpath_info:
device_path = mpath_info['device']
multipath_id = device_wwn
else:
# we didn't find a multipath device.
# so we assume the kernel only sees 1 device
device_path = device_name
LOG.debug("Unable to find multipath device name for "
"volume. Using path %(device)s for volume.",
{'device': device_path})
else:
device_path = path
multipath_id = device_wwn
if connection_properties.get('access_mode', '') != 'ro':
try:
# Sometimes the multipath devices will show up as read only
# initially and need additional time/rescans to get to RW.
self._linuxscsi.wait_for_rw(device_wwn, device_path)
except exception.BlockDeviceReadOnly:
LOG.warning('Block device %s is still read-only. '
'Continuing anyway.', device_path)
return device_path, multipath_id
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
from os_brick.initiator import initiator_connector
class BaseISCSIConnector(initiator_connector.InitiatorConnector):
def _iterate_all_targets(self, connection_properties):
for portal, iqn, lun in self._get_all_targets(connection_properties):
props = copy.deepcopy(connection_properties)
props['target_portal'] = portal
props['target_iqn'] = iqn
props['target_lun'] = lun
for key in ('target_portals', 'target_iqns', 'target_luns'):
props.pop(key, None)
yield props
def _get_all_targets(self, connection_properties):
if all([key in connection_properties for key in ('target_portals',
'target_iqns',
'target_luns')]):
return zip(connection_properties['target_portals'],
connection_properties['target_iqns'],
connection_properties['target_luns'])
return [(connection_properties['target_portal'],
connection_properties['target_iqn'],
connection_properties.get('target_lun', 0))]
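`_get_all_targets` prefers the plural multipath keys and falls back to the singular ones (with `target_lun` defaulting to 0). The selection logic can be exercised standalone:

```python
def get_all_targets(props):
    """Return (portal, iqn, lun) triples: use the plural keys when all
    three are present, otherwise fall back to the singular keys."""
    if all(k in props for k in ('target_portals', 'target_iqns',
                                'target_luns')):
        return list(zip(props['target_portals'], props['target_iqns'],
                        props['target_luns']))
    return [(props['target_portal'], props['target_iqn'],
             props.get('target_lun', 0))]
```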
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob
import os
import socket
import struct
from oslo_concurrency import lockutils
from oslo_log import log as logging
import six
from os_brick import exception
from os_brick.i18n import _
from os_brick import initiator
from os_brick.initiator.connectors import base
from os_brick import utils
LOG = logging.getLogger(__name__)
DEVICE_SCAN_ATTEMPTS_DEFAULT = 3
synchronized = lockutils.synchronized_with_prefix('os-brick-')
class DISCOConnector(base.BaseLinuxConnector):
"""Class implements the connector driver for DISCO."""
DISCO_PREFIX = 'dms'
def __init__(self, root_helper, driver=None,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
"""Init DISCO connector."""
super(DISCOConnector, self).__init__(
root_helper,
driver=driver,
device_scan_attempts=device_scan_attempts,
*args, **kwargs
)
LOG.debug("Init DISCO connector")
self.server_port = None
self.server_ip = None
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The DISCO connector properties."""
return {}
def get_search_path(self):
"""Get directory path where to get DISCO volumes."""
return "/dev"
def get_volume_paths(self, connection_properties):
"""Get config for DISCO volume driver."""
self.get_config(connection_properties)
volume_paths = []
disco_id = connection_properties['disco_id']
disco_dev = '/dev/dms%s' % (disco_id)
device_paths = [disco_dev]
for path in device_paths:
if os.path.exists(path):
volume_paths.append(path)
return volume_paths
def get_all_available_volumes(self, connection_properties=None):
"""Return all DISCO volumes that exist in the search directory."""
path = self.get_search_path()
if os.path.isdir(path):
path_items = [path, '/', self.DISCO_PREFIX, '*']
file_filter = ''.join(path_items)
return glob.glob(file_filter)
else:
return []
def get_config(self, connection_properties):
"""Get config for DISCO volume driver."""
self.server_port = (
six.text_type(connection_properties['conf']['server_port']))
self.server_ip = (
six.text_type(connection_properties['conf']['server_ip']))
disco_id = connection_properties['disco_id']
disco_dev = '/dev/dms%s' % (disco_id)
device_info = {'type': 'block',
'path': disco_dev}
return device_info
@utils.trace
@synchronized('connect_volume')
def connect_volume(self, connection_properties):
"""Connect the volume. Returns xml for libvirt."""
LOG.debug("Enter in DISCO connect_volume")
device_info = self.get_config(connection_properties)
LOG.debug("Device info : %s.", device_info)
disco_id = connection_properties['disco_id']
disco_dev = '/dev/dms%s' % (disco_id)
LOG.debug("Attaching %s", disco_dev)
self._mount_disco_volume(disco_dev, disco_id)
return device_info
@utils.trace
@synchronized('connect_volume')
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Detach the volume from instance."""
disco_id = connection_properties['disco_id']
disco_dev = '/dev/dms%s' % (disco_id)
LOG.debug("detaching %s", disco_dev)
if os.path.exists(disco_dev):
ret = self._send_disco_vol_cmd(self.server_ip,
self.server_port,
2,
disco_id)
if ret is not None:
msg = _("Detach volume failed")
raise exception.BrickException(message=msg)
else:
LOG.info("Volume already detached from host")
def _mount_disco_volume(self, path, volume_id):
"""Send request to mount volume on physical host."""
LOG.debug("Enter in mount disco volume %(port)s "
"and %(ip)s.",
{'port': self.server_port,
'ip': self.server_ip})
if not os.path.exists(path):
ret = self._send_disco_vol_cmd(self.server_ip,
self.server_port,
1,
volume_id)
if ret is not None:
msg = _("Attach volume failed")
raise exception.BrickException(message=msg)
else:
LOG.info("Volume already attached to host")
def _connect_tcp_socket(self, client_ip, client_port):
"""Connect to TCP socket."""
sock = None
for res in socket.getaddrinfo(client_ip,
client_port,
socket.AF_UNSPEC,
socket.SOCK_STREAM):
aff, socktype, proto, canonname, saa = res
try:
sock = socket.socket(aff, socktype, proto)
except socket.error:
sock = None
continue
try:
sock.connect(saa)
except socket.error:
sock.close()
sock = None
continue
break
if sock is None:
LOG.error("Cannot connect TCP socket")
return sock
def _send_disco_vol_cmd(self, client_ip, client_port, op_code, vol_id):
"""Send DISCO client socket command."""
s = self._connect_tcp_socket(client_ip, int(client_port))
if s is not None:
inst_id = 'DEFAULT-INSTID'
pktlen = 2 + 8 + len(inst_id)
LOG.debug("pktlen=%(plen)s op=%(op)s "
"vol_id=%(vol_id)s, inst_id=%(inst_id)s",
{'plen': pktlen, 'op': op_code,
'vol_id': vol_id, 'inst_id': inst_id})
data = struct.pack("!HHQ14s",
pktlen,
op_code,
int(vol_id),
inst_id)
s.sendall(data)
ret = s.recv(4)
s.close()
LOG.debug("Received ret len=%(lenR)d, ret=%(ret)s",
{'lenR': len(repr(ret)), 'ret': repr(ret)})
ret_val = "".join("%02x" % ord(c) for c in ret)
if ret_val != '00000000':
return 'ERROR'
return None
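The DISCO command packet is a fixed big-endian layout: 2-byte payload length, 2-byte opcode, 8-byte volume id, 14-byte instance id (`!HHQ14s`). A sketch of the packing, using bytes for the instance id as Python 3's `struct` requires:

```python
import struct


def pack_disco_cmd(op_code, vol_id, inst_id=b'DEFAULT-INSTID'):
    """Pack a DISCO command packet.

    pktlen counts opcode (2) + volume id (8) + instance id bytes,
    matching the connector's 2 + 8 + len(inst_id) computation."""
    pktlen = 2 + 8 + len(inst_id)
    return struct.pack("!HHQ14s", pktlen, op_code, int(vol_id), inst_id)
```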
def extend_volume(self, connection_properties):
raise NotImplementedError
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import tempfile
from oslo_concurrency import processutils as putils
from os_brick.initiator.connectors import base
from os_brick import utils
class DRBDConnector(base.BaseLinuxConnector):
""""Connector class to attach/detach DRBD resources."""
def __init__(self, root_helper, driver=None,
execute=putils.execute, *args, **kwargs):
super(DRBDConnector, self).__init__(root_helper, driver=driver,
execute=execute, *args, **kwargs)
self._execute = execute
self._root_helper = root_helper
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The DRBD connector properties."""
return {}
def check_valid_device(self, path, run_as_root=True):
"""Verify an existing volume."""
# TODO(linbit): check via drbdsetup first, to avoid blocking/hanging
# in case of network problems?
return super(DRBDConnector, self).check_valid_device(path, run_as_root)
def get_all_available_volumes(self, connection_properties=None):
# NOTE: os.path has no isblk(); check the stat mode with stat.S_ISBLK.
import stat
base = "/dev/"
blkdev_list = []
for e in os.listdir(base):
path = base + e
if stat.S_ISBLK(os.stat(path).st_mode):
blkdev_list.append(path)
return blkdev_list
def _drbdadm_command(self, cmd, data_dict, sh_secret):
# TODO(linbit): Write that resource file to a permanent location?
tmp = tempfile.NamedTemporaryFile(suffix="res", delete=False, mode="w")
try:
kv = {'shared-secret': sh_secret}
tmp.write(data_dict['config'] % kv)
tmp.close()
(out, err) = self._execute('drbdadm', cmd,
"-c", tmp.name,
data_dict['name'],
run_as_root=True,
root_helper=self._root_helper)
finally:
os.unlink(tmp.name)
return (out, err)
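`_drbdadm_command` follows a common pattern: render a config file with the shared secret substituted into a named temporary file, hand its path to an external command, and unlink it in a `finally` block. A minimal sketch of that pattern, with a callable standing in for the `drbdadm` invocation:

```python
import os
import tempfile


def with_temp_res_file(config_template, sh_secret, consume):
    """Render config_template (a %-format template expecting a
    'shared-secret' key) into a temp file, pass the path to consume(),
    and always unlink the file afterwards."""
    tmp = tempfile.NamedTemporaryFile(suffix="res", delete=False, mode="w")
    try:
        tmp.write(config_template % {'shared-secret': sh_secret})
        tmp.close()
        return consume(tmp.name)
    finally:
        os.unlink(tmp.name)
```

`delete=False` plus the explicit `os.unlink` is deliberate: the file must survive `close()` so the external command can open it by name.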
@utils.trace
def connect_volume(self, connection_properties):
"""Attach the volume."""
self._drbdadm_command("adjust", connection_properties,
connection_properties['provider_auth'])
device_info = {
'type': 'block',
'path': connection_properties['device'],
}
return device_info
@utils.trace
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Detach the volume."""
self._drbdadm_command("down", connection_properties,
connection_properties['provider_auth'])
def get_volume_paths(self, connection_properties):
path = connection_properties['device']
return [path]
def get_search_path(self):
# TODO(linbit): is it allowed to return "/dev", or is that too broad?
return None
def extend_volume(self, connection_properties):
# TODO(walter-boring): is this possible?
raise NotImplementedError
# Copyright 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.initiator.connectors import base
from os_brick.initiator.connectors import base_iscsi
class FakeConnector(base.BaseLinuxConnector):
fake_path = '/dev/vdFAKE'
def connect_volume(self, connection_properties):
fake_device_info = {'type': 'fake',
'path': self.fake_path}
return fake_device_info
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
pass
def get_volume_paths(self, connection_properties):
return [self.fake_path]
def get_search_path(self):
return '/dev/disk/by-path'
def extend_volume(self, connection_properties):
return None
def get_all_available_volumes(self, connection_properties=None):
return ['/dev/disk/by-path/fake-volume-1',
'/dev/disk/by-path/fake-volume-X']
class FakeBaseISCSIConnector(FakeConnector, base_iscsi.BaseISCSIConnector):
pass
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_concurrency import lockutils
from oslo_log import log as logging
from oslo_service import loopingcall
import six
from os_brick import exception
from os_brick import initiator
from os_brick.initiator.connectors import base
from os_brick.initiator import linuxfc
from os_brick import utils
synchronized = lockutils.synchronized_with_prefix('os-brick-')
LOG = logging.getLogger(__name__)
class FibreChannelConnector(base.BaseLinuxConnector):
"""Connector class to attach/detach Fibre Channel volumes."""
def __init__(self, root_helper, driver=None,
execute=None, use_multipath=False,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
self._linuxfc = linuxfc.LinuxFibreChannel(root_helper, execute)
super(FibreChannelConnector, self).__init__(
root_helper, driver=driver,
execute=execute,
device_scan_attempts=device_scan_attempts,
*args, **kwargs)
self.use_multipath = use_multipath
def set_execute(self, execute):
super(FibreChannelConnector, self).set_execute(execute)
self._linuxscsi.set_execute(execute)
self._linuxfc.set_execute(execute)
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The Fibre Channel connector properties."""
props = {}
fc = linuxfc.LinuxFibreChannel(root_helper,
execute=kwargs.get('execute'))
wwpns = fc.get_fc_wwpns()
if wwpns:
props['wwpns'] = wwpns
wwnns = fc.get_fc_wwnns()
if wwnns:
props['wwnns'] = wwnns
return props
def get_search_path(self):
"""Where do we look for FC based volumes."""
return '/dev/disk/by-path'
def _get_possible_volume_paths(self, connection_properties, hbas):
ports = connection_properties['target_wwn']
possible_devs = self._get_possible_devices(hbas, ports)
lun = connection_properties.get('target_lun', 0)
host_paths = self._get_host_devices(possible_devs, lun)
return host_paths
def get_volume_paths(self, connection_properties):
volume_paths = []
# first fetch all of the potential paths that might exist
# how the FC fabric is zoned may alter the actual list
# that shows up on the system. So, we verify each path.
hbas = self._linuxfc.get_fc_hbas_info()
device_paths = self._get_possible_volume_paths(
connection_properties, hbas)
for path in device_paths:
if os.path.exists(path):
volume_paths.append(path)
return volume_paths
@utils.trace
@synchronized('extend_volume')
def extend_volume(self, connection_properties):
"""Update the local kernel's size information.
Try and update the local kernel's size information
for an FC volume.
"""
volume_paths = self.get_volume_paths(connection_properties)
if volume_paths:
return self._linuxscsi.extend_volume(volume_paths)
else:
LOG.warning("Couldn't find any volume paths on the host to "
"extend volume for %(props)s",
{'props': connection_properties})
raise exception.VolumePathsNotFound()
@utils.trace
@synchronized('connect_volume')
def connect_volume(self, connection_properties):
"""Attach the volume to instance_name.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:returns: dict
connection_properties for Fibre Channel must include:
target_wwn - World Wide Name
target_lun - LUN id of the volume
"""
LOG.debug("execute = %s", self._execute)
device_info = {'type': 'block'}
hbas = self._linuxfc.get_fc_hbas_info()
host_devices = self._get_possible_volume_paths(
connection_properties, hbas)
if len(host_devices) == 0:
# this is empty because we don't have any FC HBAs
LOG.warning("We are unable to locate any Fibre Channel devices")
raise exception.NoFibreChannelHostsFound()
# The /dev/disk/by-path/... node is not always present immediately
# We only need to find the first device. Once we see the first device
# multipath will have any others.
def _wait_for_device_discovery(host_devices):
tries = self.tries
for device in host_devices:
LOG.debug("Looking for Fibre Channel dev %(device)s",
{'device': device})
if os.path.exists(device) and self.check_valid_device(device):
self.host_device = device
# get the /dev/sdX device. This is used
# to find the multipath device.
self.device_name = os.path.realpath(device)
raise loopingcall.LoopingCallDone()
if self.tries >= self.device_scan_attempts:
LOG.error("Fibre Channel volume device not found.")
raise exception.NoFibreChannelVolumeDeviceFound()
LOG.info("Fibre Channel volume device not yet found. "
"Will rescan & retry. Try number: %(tries)s.",
{'tries': tries})
self._linuxfc.rescan_hosts(hbas,
connection_properties['target_lun'])
self.tries = self.tries + 1
self.host_device = None
self.device_name = None
self.tries = 0
timer = loopingcall.FixedIntervalLoopingCall(
_wait_for_device_discovery, host_devices)
timer.start(interval=2).wait()
tries = self.tries
if self.host_device is not None and self.device_name is not None:
LOG.debug("Found Fibre Channel volume %(name)s "
"(after %(tries)s rescans)",
{'name': self.device_name, 'tries': tries})
# find out the WWN of the device
device_wwn = self._linuxscsi.get_scsi_wwn(self.host_device)
LOG.debug("Device WWN = '%(wwn)s'", {'wwn': device_wwn})
device_info['scsi_wwn'] = device_wwn
# see if the new drive is part of a multipath
# device. If so, we'll use the multipath device.
if self.use_multipath:
(device_path, multipath_id) = (super(
FibreChannelConnector, self)._discover_mpath_device(
device_wwn, connection_properties, self.device_name))
if multipath_id:
# only set the multipath_id if we found one
device_info['multipath_id'] = multipath_id
else:
device_path = self.host_device
device_info['path'] = device_path
LOG.debug("connect_volume returning %s", device_info)
return device_info
def _get_host_devices(self, possible_devs, lun):
host_devices = []
for pci_num, target_wwn in possible_devs:
host_device = "/dev/disk/by-path/pci-%s-fc-%s-lun-%s" % (
pci_num,
target_wwn,
self._linuxscsi.process_lun_id(lun))
host_devices.append(host_device)
return host_devices
def _get_possible_devices(self, hbas, wwnports):
"""Compute the possible fibre channel device options.
:param hbas: available hba devices.
:param wwnports: possible wwn addresses. Can either be string
or list of strings.
:returns: list of (pci_id, wwn) tuples
Given one or more wwn (mac addresses for fibre channel) ports
do the matrix math to figure out a set of pci device, wwn
tuples that are potentially valid (they won't all be). This
provides a search space for the device connection.
"""
# the wwn (think mac addresses for fiber channel devices) can
# either be a single value or a list. Normalize it to a list
# for further operations.
wwns = []
if isinstance(wwnports, list):
for wwn in wwnports:
wwns.append(str(wwn))
elif isinstance(wwnports, six.string_types):
wwns.append(str(wwnports))
raw_devices = []
for hba in hbas:
pci_num = self._get_pci_num(hba)
if pci_num is not None:
for wwn in wwns:
target_wwn = "0x%s" % wwn.lower()
raw_devices.append((pci_num, target_wwn))
return raw_devices
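The "matrix math" in `_get_possible_devices` is a cross product of HBA PCI ids and normalized target WWNs. Sketched standalone (plain `str` replaces `six.string_types` for brevity, and PCI ids are passed in directly rather than parsed from HBAs):

```python
def possible_devices(pci_nums, wwnports):
    """Cross every HBA PCI id with every target WWN.

    wwnports may be a single string or a list; WWNs are normalized
    to lowercase hex with an 0x prefix, as in the FC connector."""
    if isinstance(wwnports, str):
        wwnports = [wwnports]
    wwns = [str(w) for w in wwnports]
    return [(pci, "0x%s" % wwn.lower())
            for pci in pci_nums
            for wwn in wwns]
```

Not every (pci, wwn) pair will correspond to a real device; the result is a search space that `get_volume_paths` later filters with `os.path.exists`.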
@utils.trace
@synchronized('connect_volume')
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Detach the volume from instance_name.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:param device_info: historical difference, but same as connection_props
:type device_info: dict
connection_properties for Fibre Channel must include:
target_wwn - World Wide Name
target_lun - LUN id of the volume
"""
devices = []
wwn = None
volume_paths = self.get_volume_paths(connection_properties)
mpath_path = None
for path in volume_paths:
real_path = self._linuxscsi.get_name_from_path(path)
if self.use_multipath and not mpath_path:
wwn = self._linuxscsi.get_scsi_wwn(path)
mpath_path = self._linuxscsi.find_multipath_device_path(wwn)
if mpath_path:
self._linuxscsi.flush_multipath_device(mpath_path)
device_info = self._linuxscsi.get_device_info(real_path)
devices.append(device_info)
LOG.debug("devices to remove = %s", devices)
self._remove_devices(connection_properties, devices)
def _remove_devices(self, connection_properties, devices):
# There may have been more than 1 device mounted
# by the kernel for this volume. We have to remove
# all of them
for device in devices:
self._linuxscsi.remove_scsi_device(device["device"])
def _get_pci_num(self, hba):
# NOTE(walter-boring)
# device path is in format of (FC and FCoE) :
# /sys/devices/pci0000:00/0000:00:03.0/0000:05:00.3/host2/fc_host/host2
# /sys/devices/pci0000:20/0000:20:03.0/0000:21:00.2/net/ens2f2/ctlr_2
# /host3/fc_host/host3
# we always want the value prior to the host or net value
if hba is not None:
if "device_path" in hba:
device_path = hba['device_path'].split('/')
for index, value in enumerate(device_path):
if value.startswith('net') or value.startswith('host'):
return device_path[index - 1]
return None
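The sysfs parsing in `_get_pci_num` picks the path component immediately before the first `host*` or `net*` component. Extracted as a standalone function:

```python
def get_pci_num(device_path):
    """Return the path component immediately before the first
    'host*' or 'net*' component of an FC HBA sysfs device path,
    or None when no such component exists."""
    parts = device_path.split('/')
    for index, value in enumerate(parts):
        if value.startswith('net') or value.startswith('host'):
            return parts[index - 1]
    return None
```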
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from os_brick import initiator
from os_brick.initiator.connectors import fibre_channel
from os_brick.initiator import linuxfc
LOG = logging.getLogger(__name__)
class FibreChannelConnectorPPC64(fibre_channel.FibreChannelConnector):
"""Connector class to attach/detach Fibre Channel volumes on PPC64 arch."""
platform = initiator.PLATFORM_PPC64
def __init__(self, root_helper, driver=None,
execute=None, use_multipath=False,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
super(FibreChannelConnectorPPC64, self).__init__(
root_helper,
driver=driver,
execute=execute,
device_scan_attempts=device_scan_attempts,
*args, **kwargs)
self._linuxfc = linuxfc.LinuxFibreChannelPPC64(root_helper, execute)
self.use_multipath = use_multipath
def set_execute(self, execute):
super(FibreChannelConnectorPPC64, self).set_execute(execute)
self._linuxscsi.set_execute(execute)
self._linuxfc.set_execute(execute)
def _get_host_devices(self, possible_devs, lun):
host_devices = []
for pci_num, target_wwn in possible_devs:
host_device = "/dev/disk/by-path/fc-%s-lun-%s" % (
target_wwn,
self._linuxscsi.process_lun_id(lun))
host_devices.append(host_device)
return host_devices
def _get_possible_volume_paths(self, connection_properties, hbas):
ports = connection_properties['target_wwn']
it_map = connection_properties['initiator_target_map']
for hba in hbas:
if hba['node_name'] in it_map.keys():
hba['target_wwn'] = it_map.get(hba['node_name'])
possible_devs = self._get_possible_devices(hbas, ports)
lun = connection_properties.get('target_lun', 0)
host_paths = self._get_host_devices(possible_devs, lun)
return host_paths
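The by-path naming used by `_get_host_devices` above can be exercised in isolation. `build_fc_host_devices` is a hypothetical standalone mirror of that logic (the `process_lun_id` call is omitted, so the LUN is used as-is):

```python
def build_fc_host_devices(possible_devs, lun):
    # One /dev/disk/by-path entry per (pci_num, target_wwn) pair;
    # on PPC64 only the target WWN and the LUN appear in the path.
    return [
        "/dev/disk/by-path/fc-%s-lun-%s" % (target_wwn, lun)
        for _pci_num, target_wwn in possible_devs
    ]

print(build_fc_host_devices([("0000:00:01.0", "0x5005076801401b3f")], 0))
# → ['/dev/disk/by-path/fc-0x5005076801401b3f-lun-0']
```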


@@ -1,94 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from os_brick import initiator
from os_brick.initiator.connectors import fibre_channel
from os_brick.initiator import linuxfc
LOG = logging.getLogger(__name__)
class FibreChannelConnectorS390X(fibre_channel.FibreChannelConnector):
"""Connector class to attach/detach Fibre Channel volumes on S390X arch."""
platform = initiator.PLATFORM_S390
def __init__(self, root_helper, driver=None,
execute=None, use_multipath=False,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
super(FibreChannelConnectorS390X, self).__init__(
root_helper,
driver=driver,
execute=execute,
device_scan_attempts=device_scan_attempts,
*args, **kwargs)
LOG.debug("Initializing Fibre Channel connector for S390")
self._linuxfc = linuxfc.LinuxFibreChannelS390X(root_helper, execute)
self.use_multipath = use_multipath
def set_execute(self, execute):
super(FibreChannelConnectorS390X, self).set_execute(execute)
self._linuxscsi.set_execute(execute)
self._linuxfc.set_execute(execute)
def _get_host_devices(self, possible_devs, lun):
host_devices = []
for pci_num, target_wwn in possible_devs:
host_device = self._get_device_file_path(
pci_num,
target_wwn,
lun)
# NOTE(arne_r)
# LUN driver path is the same on all distros, so no need to have
# multiple calls here
self._linuxfc.configure_scsi_device(pci_num, target_wwn,
self._get_lun_string(lun))
host_devices.extend(host_device)
return host_devices
def _get_lun_string(self, lun):
target_lun = 0
if lun <= 0xffff:
target_lun = "0x%04x000000000000" % lun
elif lun <= 0xffffffff:
target_lun = "0x%08x00000000" % lun
return target_lun
def _get_device_file_path(self, pci_num, target_wwn, lun):
# NOTE(arne_r)
# Need to add two possible ways to resolve device paths,
# depending on OS. Since it gets passed to '_get_possible_volume_paths'
# having a mismatch is not a problem
host_device = [
"/dev/disk/by-path/ccw-%s-zfcp-%s:%s" % (
pci_num, target_wwn, self._get_lun_string(lun)),
"/dev/disk/by-path/ccw-%s-fc-%s-lun-%s" % (
pci_num, target_wwn, lun),
]
return host_device
def _remove_devices(self, connection_properties, devices):
hbas = self._linuxfc.get_fc_hbas_info()
ports = connection_properties['target_wwn']
possible_devs = self._get_possible_devices(hbas, ports)
lun = connection_properties.get('target_lun', 0)
target_lun = self._get_lun_string(lun)
for pci_num, target_wwn in possible_devs:
self._linuxfc.deconfigure_scsi_device(pci_num,
target_wwn,
target_lun)
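The zfcp LUN encoding in `_get_lun_string` above is self-contained and can be sketched and sanity-checked on its own:

```python
def get_lun_string(lun):
    # LUNs that fit in 16 bits use the "flat" 4-hex-digit form,
    # zero-padded out to 64 bits; larger LUNs (up to 32 bits) use
    # the 8-hex-digit form. Mirrors _get_lun_string above, including
    # the fall-through return of 0 for out-of-range values.
    target_lun = 0
    if lun <= 0xffff:
        target_lun = "0x%04x000000000000" % lun
    elif lun <= 0xffffffff:
        target_lun = "0x%08x00000000" % lun
    return target_lun

print(get_lun_string(1))        # → 0x0001000000000000
print(get_lun_string(0x12345))  # → 0x0001234500000000
```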


@@ -1,41 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.i18n import _
from os_brick.initiator.connectors import local
from os_brick import utils
class GPFSConnector(local.LocalConnector):
""""Connector class to attach/detach File System backed volumes."""
@utils.trace
def connect_volume(self, connection_properties):
"""Connect to a volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
connection_properties must include:
device_path - path to the volume to be connected
:type connection_properties: dict
:returns: dict
"""
if 'device_path' not in connection_properties:
msg = (_("Invalid connection_properties specified "
"no device_path attribute."))
raise ValueError(msg)
device_info = {'type': 'gpfs',
'path': connection_properties['device_path']}
return device_info


@@ -1,183 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import socket
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from os_brick import exception
from os_brick.i18n import _
from os_brick import initiator
from os_brick.initiator.connectors import base
from os_brick import utils
LOG = logging.getLogger(__name__)
class HGSTConnector(base.BaseLinuxConnector):
"""Connector class to attach/detach HGST volumes."""
VGCCLUSTER = 'vgc-cluster'
def __init__(self, root_helper, driver=None,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
super(HGSTConnector, self).__init__(root_helper, driver=driver,
device_scan_attempts=
device_scan_attempts,
*args, **kwargs)
self._vgc_host = None
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The HGST connector properties."""
return {}
def _log_cli_err(self, err):
"""Dumps the full command output to a logfile in error cases."""
LOG.error("CLI fail: '%(cmd)s' = %(code)s\nout: %(stdout)s\n"
"err: %(stderr)s",
{'cmd': err.cmd, 'code': err.exit_code,
'stdout': err.stdout, 'stderr': err.stderr})
def _find_vgc_host(self):
"""Finds vgc-cluster hostname for this box."""
params = [self.VGCCLUSTER, "domain-list", "-1"]
try:
out, unused = self._execute(*params, run_as_root=True,
root_helper=self._root_helper)
except putils.ProcessExecutionError as err:
self._log_cli_err(err)
msg = _("Unable to get list of domain members, check that "
"the cluster is running.")
raise exception.BrickException(message=msg)
domain = out.splitlines()
params = ["ip", "addr", "list"]
try:
out, unused = self._execute(*params, run_as_root=False)
except putils.ProcessExecutionError as err:
self._log_cli_err(err)
msg = _("Unable to get list of IP addresses on this host, "
"check permissions and networking.")
raise exception.BrickException(message=msg)
nets = out.splitlines()
for host in domain:
try:
ip = socket.gethostbyname(host)
for l in nets:
x = l.strip()
if x.startswith("inet %s/" % ip):
return host
except socket.error:
pass
msg = _("Current host isn't part of HGST domain.")
raise exception.BrickException(message=msg)
def _hostname(self):
"""Returns hostname to use for cluster operations on this box."""
if self._vgc_host is None:
self._vgc_host = self._find_vgc_host()
return self._vgc_host
def get_search_path(self):
return "/dev"
def get_volume_paths(self, connection_properties):
path = ("%(path)s/%(name)s" %
{'path': self.get_search_path(),
'name': connection_properties['name']})
volume_path = None
if os.path.exists(path):
volume_path = path
return [volume_path]
@utils.trace
def connect_volume(self, connection_properties):
"""Attach a Space volume to running host.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
connection_properties for HGST must include:
name - Name of space to attach
:type connection_properties: dict
:returns: dict
"""
if connection_properties is None:
msg = _("Connection properties passed in as None.")
raise exception.BrickException(message=msg)
if 'name' not in connection_properties:
msg = _("Connection properties missing 'name' field.")
raise exception.BrickException(message=msg)
device_info = {
'type': 'block',
'device': connection_properties['name'],
'path': '/dev/' + connection_properties['name']
}
volname = device_info['device']
params = [self.VGCCLUSTER, 'space-set-apphosts']
params += ['-n', volname]
params += ['-A', self._hostname()]
params += ['--action', 'ADD']
try:
self._execute(*params, run_as_root=True,
root_helper=self._root_helper)
except putils.ProcessExecutionError as err:
self._log_cli_err(err)
msg = (_("Unable to set apphost for space %s") % volname)
raise exception.BrickException(message=msg)
return device_info
@utils.trace
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Detach and flush the volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
For HGST must include:
name - Name of space to detach
noremovehost - Host which should never be removed
:type connection_properties: dict
:param device_info: historical difference, but same as connection_props
:type device_info: dict
"""
if connection_properties is None:
msg = _("Connection properties passed in as None.")
raise exception.BrickException(message=msg)
if 'name' not in connection_properties:
msg = _("Connection properties missing 'name' field.")
raise exception.BrickException(message=msg)
if 'noremovehost' not in connection_properties:
msg = _("Connection properties missing 'noremovehost' field.")
raise exception.BrickException(message=msg)
if connection_properties['noremovehost'] != self._hostname():
params = [self.VGCCLUSTER, 'space-set-apphosts']
params += ['-n', connection_properties['name']]
params += ['-A', self._hostname()]
params += ['--action', 'DELETE']
try:
self._execute(*params, run_as_root=True,
root_helper=self._root_helper)
except putils.ProcessExecutionError as err:
self._log_cli_err(err)
msg = (_("Unable to set apphost for space %s") %
connection_properties['name'])
raise exception.BrickException(message=msg)
def extend_volume(self, connection_properties):
# TODO(walter-boring): is this possible?
raise NotImplementedError


@@ -1,193 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_concurrency import lockutils
from oslo_log import log as logging
from os_brick import exception
from os_brick.i18n import _
from os_brick.initiator.connectors import base
from os_brick import utils
LOG = logging.getLogger(__name__)
synchronized = lockutils.synchronized_with_prefix('os-brick-')
class HuaweiStorHyperConnector(base.BaseLinuxConnector):
""""Connector class to attach/detach SDSHypervisor volumes."""
attached_success_code = 0
has_been_attached_code = 50151401
attach_mnid_done_code = 50151405
vbs_unnormal_code = 50151209
not_mount_node_code = 50155007
iscliexist = True
def __init__(self, root_helper, driver=None,
*args, **kwargs):
self.cli_path = os.getenv('HUAWEISDSHYPERVISORCLI_PATH')
if not self.cli_path:
self.cli_path = '/usr/local/bin/sds/sds_cli'
LOG.debug("CLI path is not configured, using default %s.",
self.cli_path)
if not os.path.isfile(self.cli_path):
self.iscliexist = False
LOG.error('SDS CLI file not found, '
'HuaweiStorHyperConnector init failed.')
super(HuaweiStorHyperConnector, self).__init__(root_helper,
driver=driver,
*args, **kwargs)
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The HuaweiStor connector properties."""
return {}
def get_search_path(self):
# TODO(walter-boring): Where is the location on the filesystem to
# look for Huawei volumes to show up?
return None
def get_all_available_volumes(self, connection_properties=None):
# TODO(walter-boring): what to return here for all Huawei volumes ?
return []
def get_volume_paths(self, connection_properties):
volume_path = None
try:
volume_path = self._get_volume_path(connection_properties)
except Exception:
msg = _("Couldn't find a volume.")
LOG.warning(msg)
raise exception.BrickException(message=msg)
return [volume_path]
def _get_volume_path(self, connection_properties):
out = self._query_attached_volume(
connection_properties['volume_id'])
if not out or int(out['ret_code']) != 0:
msg = _("Couldn't find attached volume.")
LOG.error(msg)
raise exception.BrickException(message=msg)
return out['dev_addr']
@utils.trace
@synchronized('connect_volume')
def connect_volume(self, connection_properties):
"""Connect to a volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:returns: dict
"""
LOG.debug("Connect_volume connection properties: %s.",
connection_properties)
out = self._attach_volume(connection_properties['volume_id'])
if not out or int(out['ret_code']) not in (self.attached_success_code,
self.has_been_attached_code,
self.attach_mnid_done_code):
msg = (_("Attach volume failed, "
"error code is %s") % out['ret_code'])
raise exception.BrickException(message=msg)
try:
volume_path = self._get_volume_path(connection_properties)
except Exception:
msg = _("query attached volume failed or volume not attached.")
LOG.error(msg)
raise exception.BrickException(message=msg)
device_info = {'type': 'block',
'path': volume_path}
return device_info
@utils.trace
@synchronized('connect_volume')
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Disconnect a volume from the local host.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:param device_info: historical difference, but same as connection_props
:type device_info: dict
"""
LOG.debug("Disconnect_volume: %s.", connection_properties)
out = self._detach_volume(connection_properties['volume_id'])
if not out or int(out['ret_code']) not in (self.attached_success_code,
self.vbs_unnormal_code,
self.not_mount_node_code):
msg = (_("Disconnect_volume failed, "
"error code is %s") % out['ret_code'])
raise exception.BrickException(message=msg)
def is_volume_connected(self, volume_name):
"""Check if volume already connected to host"""
LOG.debug('Check if volume %s already connected to a host.',
volume_name)
out = self._query_attached_volume(volume_name)
if out:
return int(out['ret_code']) == 0
return False
def _attach_volume(self, volume_name):
return self._cli_cmd('attach', volume_name)
def _detach_volume(self, volume_name):
return self._cli_cmd('detach', volume_name)
def _query_attached_volume(self, volume_name):
return self._cli_cmd('querydev', volume_name)
def _cli_cmd(self, method, volume_name):
LOG.debug("Enter into _cli_cmd.")
if not self.iscliexist:
msg = _("SDS command line doesn't exist, "
"can't execute SDS command.")
raise exception.BrickException(message=msg)
if not method or volume_name is None:
return
cmd = [self.cli_path, '-c', method, '-v', volume_name]
out, clilog = self._execute(*cmd, run_as_root=False,
root_helper=self._root_helper)
analyse_result = self._analyze_output(out)
LOG.debug('%(method)s volume returns %(analyse_result)s.',
{'method': method, 'analyse_result': analyse_result})
if clilog:
LOG.error("SDS CLI output some log: %s.", clilog)
return analyse_result
def _analyze_output(self, out):
LOG.debug("Enter into _analyze_output.")
if out:
analyse_result = {}
out_temp = out.split('\n')
for line in out_temp:
LOG.debug("Line is %s.", line)
if line.find('=') != -1:
key, val = line.split('=', 1)
LOG.debug("%(key)s = %(val)s", {'key': key, 'val': val})
if key in ['ret_code', 'ret_desc', 'dev_addr']:
analyse_result[key] = val
return analyse_result
else:
return None
def extend_volume(self, connection_properties):
# TODO(walter-boring): is this possible?
raise NotImplementedError
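The `key=value` parsing done by `_analyze_output` above reduces to a pure function. This is a sketch with a hypothetical name and the same field whitelist:

```python
def analyze_output(out):
    # Parse the SDS CLI's key=value output lines, keeping only the
    # fields the connector consumes: ret_code, ret_desc, dev_addr.
    # Returns None for empty output, matching _analyze_output.
    if not out:
        return None
    result = {}
    for line in out.split('\n'):
        if '=' in line:
            key, val = line.split('=', 1)
            if key in ('ret_code', 'ret_desc', 'dev_addr'):
                result[key] = val
    return result

print(analyze_output("ret_code=0\ndev_addr=/dev/sdb\nnoise"))
# → {'ret_code': '0', 'dev_addr': '/dev/sdb'}
```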

File diff suppressed because it is too large

@@ -1,79 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.i18n import _
from os_brick.initiator.connectors import base
from os_brick import utils
class LocalConnector(base.BaseLinuxConnector):
""""Connector class to attach/detach File System backed volumes."""
def __init__(self, root_helper, driver=None,
*args, **kwargs):
super(LocalConnector, self).__init__(root_helper, driver=driver,
*args, **kwargs)
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The Local connector properties."""
return {}
def get_volume_paths(self, connection_properties):
path = connection_properties['device_path']
return [path]
def get_search_path(self):
return None
def get_all_available_volumes(self, connection_properties=None):
# TODO(walter-boring): not sure what to return here.
return []
@utils.trace
def connect_volume(self, connection_properties):
"""Connect to a volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
connection_properties must include:
device_path - path to the volume to be connected
:type connection_properties: dict
:returns: dict
"""
if 'device_path' not in connection_properties:
msg = (_("Invalid connection_properties specified "
"no device_path attribute"))
raise ValueError(msg)
device_info = {'type': 'local',
'path': connection_properties['device_path']}
return device_info
@utils.trace
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Disconnect a volume from the local host.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:param device_info: historical difference, but same as connection_props
:type device_info: dict
"""
pass
def extend_volume(self, connection_properties):
# TODO(walter-boring): is this possible?
raise NotImplementedError


@@ -1,252 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import tempfile
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from oslo_utils import fileutils
from oslo_utils import netutils
from os_brick import exception
from os_brick.i18n import _
from os_brick import initiator
from os_brick.initiator.connectors import base
from os_brick.initiator import linuxrbd
from os_brick import utils
LOG = logging.getLogger(__name__)
class RBDConnector(base.BaseLinuxConnector):
""""Connector class to attach/detach RBD volumes."""
def __init__(self, root_helper, driver=None, use_multipath=False,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
super(RBDConnector, self).__init__(root_helper, driver=driver,
device_scan_attempts=
device_scan_attempts,
*args, **kwargs)
self.do_local_attach = kwargs.get('do_local_attach', False)
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The RBD connector properties."""
return {'do_local_attach': kwargs.get('do_local_attach', False)}
def get_volume_paths(self, connection_properties):
# TODO(e0ne): Implement this for local volume.
return []
def get_search_path(self):
# TODO(walter-boring): don't know where the connector
# looks for RBD volumes.
return None
def get_all_available_volumes(self, connection_properties=None):
# TODO(e0ne): Implement this for local volume.
return []
def _sanitize_mon_hosts(self, hosts):
def _sanitize_host(host):
if netutils.is_valid_ipv6(host):
host = '[%s]' % host
return host
return list(map(_sanitize_host, hosts))
def _check_or_get_keyring_contents(self, keyring, cluster_name, user):
try:
if keyring is None:
keyring_path = ("/etc/ceph/%s.client.%s.keyring" %
(cluster_name, user))
with open(keyring_path, 'r') as keyring_file:
keyring = keyring_file.read()
return keyring
except IOError:
msg = (_("Keyring path %s is not readable.") % (keyring_path))
raise exception.BrickException(msg=msg)
def _create_ceph_conf(self, monitor_ips, monitor_ports,
cluster_name, user, keyring):
monitors = ["%s:%s" % (ip, port) for ip, port in
zip(self._sanitize_mon_hosts(monitor_ips), monitor_ports)]
mon_hosts = "mon_host = %s" % (','.join(monitors))
keyring = self._check_or_get_keyring_contents(keyring, cluster_name,
user)
try:
fd, ceph_conf_path = tempfile.mkstemp(prefix="brickrbd_")
with os.fdopen(fd, 'w') as conf_file:
conf_file.writelines([mon_hosts, "\n", keyring, "\n"])
return ceph_conf_path
except IOError:
msg = (_("Failed to write data to %s.") % (ceph_conf_path))
raise exception.BrickException(msg=msg)
def _get_rbd_handle(self, connection_properties):
try:
user = connection_properties['auth_username']
pool, volume = connection_properties['name'].split('/')
cluster_name = connection_properties.get('cluster_name')
monitor_ips = connection_properties.get('hosts')
monitor_ports = connection_properties.get('ports')
keyring = connection_properties.get('keyring')
except (KeyError, ValueError):
msg = _("Connect volume failed, malformed connection properties")
raise exception.BrickException(msg=msg)
conf = self._create_ceph_conf(monitor_ips, monitor_ports,
str(cluster_name), user,
keyring)
try:
rbd_client = linuxrbd.RBDClient(user, pool, conffile=conf,
rbd_cluster_name=str(cluster_name))
rbd_volume = linuxrbd.RBDVolume(rbd_client, volume)
rbd_handle = linuxrbd.RBDVolumeIOWrapper(
linuxrbd.RBDImageMetadata(rbd_volume, pool, user, conf))
except Exception:
fileutils.delete_if_exists(conf)
raise
return rbd_handle
def _get_rbd_args(self, connection_properties):
try:
user = connection_properties['auth_username']
monitor_ips = connection_properties.get('hosts')
monitor_ports = connection_properties.get('ports')
except KeyError:
msg = _("Connect volume failed, malformed connection properties")
raise exception.BrickException(msg=msg)
args = ['--id', user]
if monitor_ips and monitor_ports:
monitors = ["%s:%s" % (ip, port) for ip, port in
zip(
self._sanitize_mon_hosts(monitor_ips),
monitor_ports)]
for monitor in monitors:
args += ['--mon_host', monitor]
return args
@staticmethod
def get_rbd_device_name(pool, volume):
"""Return device name which will be generated by RBD kernel module.
:param pool: RBD pool name.
:type pool: string
:param volume: RBD image name.
:type volume: string
"""
return '/dev/rbd/{pool}/{volume}'.format(pool=pool, volume=volume)
@utils.trace
def connect_volume(self, connection_properties):
"""Connect to a volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:returns: dict
"""
do_local_attach = connection_properties.get('do_local_attach',
self.do_local_attach)
if do_local_attach:
# NOTE(e0ne): sanity check if ceph-common is installed.
cmd = ['which', 'rbd']
try:
self._execute(*cmd)
except putils.ProcessExecutionError:
msg = _("ceph-common package is not installed.")
LOG.error(msg)
raise exception.BrickException(message=msg)
# NOTE(e0ne): map volume to a block device
# via the rbd kernel module.
pool, volume = connection_properties['name'].split('/')
rbd_dev_path = RBDConnector.get_rbd_device_name(pool, volume)
if (not os.path.islink(rbd_dev_path) or
not os.path.exists(os.path.realpath(rbd_dev_path))):
cmd = ['rbd', 'map', volume, '--pool', pool]
cmd += self._get_rbd_args(connection_properties)
self._execute(*cmd, root_helper=self._root_helper,
run_as_root=True)
else:
LOG.debug('volume %(vol)s is already mapped to local'
' device %(dev)s',
{'vol': volume,
'dev': os.path.realpath(rbd_dev_path)})
return {'path': rbd_dev_path,
'type': 'block'}
rbd_handle = self._get_rbd_handle(connection_properties)
return {'path': rbd_handle}
@utils.trace
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Disconnect a volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:param device_info: historical difference, but same as connection_props
:type device_info: dict
"""
do_local_attach = connection_properties.get('do_local_attach',
self.do_local_attach)
if do_local_attach:
pool, volume = connection_properties['name'].split('/')
dev_name = RBDConnector.get_rbd_device_name(pool, volume)
cmd = ['rbd', 'unmap', dev_name]
cmd += self._get_rbd_args(connection_properties)
self._execute(*cmd, root_helper=self._root_helper,
run_as_root=True)
else:
if device_info:
rbd_handle = device_info.get('path', None)
if rbd_handle is not None:
fileutils.delete_if_exists(rbd_handle.rbd_conf)
rbd_handle.close()
def check_valid_device(self, path, run_as_root=True):
"""Verify an existing RBD handle is connected and valid."""
rbd_handle = path
if rbd_handle is None:
return False
original_offset = rbd_handle.tell()
try:
rbd_handle.read(4096)
except Exception as e:
LOG.error("Failed to access RBD device handle: %(error)s",
{"error": e})
return False
finally:
rbd_handle.seek(original_offset, 0)
return True
def extend_volume(self, connection_properties):
# TODO(walter-boring): is this possible?
raise NotImplementedError
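The monitor-address handling above (`_sanitize_mon_hosts` plus the `mon_host` line built in `_create_ceph_conf`) can be sketched as follows; the IPv6 check here is a simplified stand-in for `oslo.utils` `netutils.is_valid_ipv6`, treating any host containing ':' as IPv6:

```python
def sanitize_mon_hosts(hosts):
    # Bracket IPv6 literals so the "host:port" join stays unambiguous.
    # Assumption: ':' in the host means IPv6 (the real code uses
    # netutils.is_valid_ipv6 for a proper check).
    return ['[%s]' % h if ':' in h else h for h in hosts]

def build_mon_host_line(ips, ports):
    # Pair each sanitized monitor IP with its port, comma-joined,
    # as written into the temporary ceph.conf.
    monitors = ['%s:%s' % (ip, port)
                for ip, port in zip(sanitize_mon_hosts(ips), ports)]
    return 'mon_host = %s' % ','.join(monitors)

print(build_mon_host_line(['10.0.0.1', 'fe80::1'], ['6789', '6789']))
# → mon_host = 10.0.0.1:6789,[fe80::1]:6789
```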


@@ -1,121 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from os_brick import initiator
from os_brick.initiator.connectors import base
from os_brick.remotefs import remotefs
from os_brick import utils
LOG = logging.getLogger(__name__)
class RemoteFsConnector(base.BaseLinuxConnector):
"""Connector class to attach/detach NFS and GlusterFS volumes."""
def __init__(self, mount_type, root_helper, driver=None,
execute=None,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
kwargs = kwargs or {}
conn = kwargs.get('conn')
mount_type_lower = mount_type.lower()
if conn:
mount_point_base = conn.get('mount_point_base')
if mount_type_lower in ('nfs', 'glusterfs', 'scality',
'quobyte', 'vzstorage'):
kwargs[mount_type_lower + '_mount_point_base'] = (
kwargs.get(mount_type_lower + '_mount_point_base') or
mount_point_base)
else:
LOG.warning("Connection details not present."
" RemoteFsClient may not initialize properly.")
if mount_type_lower == 'scality':
cls = remotefs.ScalityRemoteFsClient
elif mount_type_lower == 'vzstorage':
cls = remotefs.VZStorageRemoteFSClient
else:
cls = remotefs.RemoteFsClient
self._remotefsclient = cls(mount_type, root_helper, execute=execute,
*args, **kwargs)
super(RemoteFsConnector, self).__init__(
root_helper, driver=driver,
execute=execute,
device_scan_attempts=device_scan_attempts,
*args, **kwargs)
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The RemoteFS connector properties."""
return {}
def set_execute(self, execute):
super(RemoteFsConnector, self).set_execute(execute)
self._remotefsclient.set_execute(execute)
def get_search_path(self):
return self._remotefsclient.get_mount_base()
def _get_volume_path(self, connection_properties):
mnt_flags = []
if connection_properties.get('options'):
mnt_flags = connection_properties['options'].split()
nfs_share = connection_properties['export']
self._remotefsclient.mount(nfs_share, mnt_flags)
mount_point = self._remotefsclient.get_mount_point(nfs_share)
path = mount_point + '/' + connection_properties['name']
return path
def get_volume_paths(self, connection_properties):
path = self._get_volume_path(connection_properties)
return [path]
@utils.trace
def connect_volume(self, connection_properties):
"""Ensure that the filesystem containing the volume is mounted.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
connection_properties must include:
export - remote filesystem device (e.g. '172.18.194.100:/var/nfs')
name - file name within the filesystem
:type connection_properties: dict
:returns: dict
connection_properties may optionally include:
options - options to pass to mount
"""
path = self._get_volume_path(connection_properties)
return {'path': path}
@utils.trace
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""No need to do anything to disconnect a volume in a filesystem.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:param device_info: historical difference, but same as connection_props
:type device_info: dict
"""
def extend_volume(self, connection_properties):
# TODO(walter-boring): is this possible?
raise NotImplementedError
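The mount-flag and path handling in `_get_volume_path` above can be isolated as below; the actual mount call and the client's mount-point derivation are omitted, so `mount_point` is assumed to be already known:

```python
def volume_path(mount_point, name, options=None):
    # Mirror of the flag/path logic in _get_volume_path: an optional
    # 'options' string becomes a list of mount flags, and the volume
    # file lives directly under the share's mount point.
    mnt_flags = options.split() if options else []
    return mnt_flags, mount_point + '/' + name

print(volume_path('/mnt/nfs-share', 'volume-1234', '-o vers=4'))
# → (['-o', 'vers=4'], '/mnt/nfs-share/volume-1234')
```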


@@ -1,492 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import os
import requests
from six.moves import urllib
from oslo_concurrency import lockutils
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from os_brick import exception
from os_brick.i18n import _
from os_brick import initiator
from os_brick.initiator.connectors import base
from os_brick import utils
LOG = logging.getLogger(__name__)
DEVICE_SCAN_ATTEMPTS_DEFAULT = 3
synchronized = lockutils.synchronized_with_prefix('os-brick-')
class ScaleIOConnector(base.BaseLinuxConnector):
"""Class implements the connector driver for ScaleIO."""
OK_STATUS_CODE = 200
VOLUME_NOT_MAPPED_ERROR = 84
VOLUME_ALREADY_MAPPED_ERROR = 81
GET_GUID_CMD = ['/opt/emc/scaleio/sdc/bin/drv_cfg', '--query_guid']
def __init__(self, root_helper, driver=None,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
super(ScaleIOConnector, self).__init__(
root_helper,
driver=driver,
device_scan_attempts=device_scan_attempts,
*args, **kwargs
)
self.local_sdc_ip = None
self.server_ip = None
self.server_port = None
self.server_username = None
self.server_password = None
self.server_token = None
self.volume_id = None
self.volume_name = None
self.volume_path = None
self.iops_limit = None
self.bandwidth_limit = None
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The ScaleIO connector properties."""
return {}
def get_search_path(self):
return "/dev/disk/by-id"
def get_volume_paths(self, connection_properties):
self.get_config(connection_properties)
volume_paths = []
device_paths = [self._find_volume_path()]
for path in device_paths:
if os.path.exists(path):
volume_paths.append(path)
return volume_paths
def _find_volume_path(self):
LOG.info(
"Looking for volume %(volume_id)s, maximum tries: %(tries)s",
{'volume_id': self.volume_id, 'tries': self.device_scan_attempts}
)
# look for the volume in /dev/disk/by-id directory
by_id_path = self.get_search_path()
disk_filename = self._wait_for_volume_path(by_id_path)
full_disk_name = ("%(path)s/%(filename)s" %
{'path': by_id_path, 'filename': disk_filename})
LOG.info("Full disk name is %(full_path)s",
{'full_path': full_disk_name})
return full_disk_name
# NOTE: Usually 3 retries is enough to find the volume.
# If there are network issues, it could take much longer. Set
# the max retries to 15 to make sure we can find the volume.
@utils.retry(exceptions=exception.BrickException,
retries=15,
backoff_rate=1)
def _wait_for_volume_path(self, path):
if not os.path.isdir(path):
msg = (
_("ScaleIO volume %(volume_id)s not found at "
"expected path.") % {'volume_id': self.volume_id}
)
LOG.debug(msg)
raise exception.BrickException(message=msg)
disk_filename = None
filenames = os.listdir(path)
LOG.info(
"Files found in %(path)s path: %(files)s ",
{'path': path, 'files': filenames}
)
for filename in filenames:
if (filename.startswith("emc-vol") and
filename.endswith(self.volume_id)):
disk_filename = filename
break
if not disk_filename:
msg = (_("ScaleIO volume %(volume_id)s not found.") %
{'volume_id': self.volume_id})
LOG.debug(msg)
raise exception.BrickException(message=msg)
return disk_filename
def _get_client_id(self):
request = (
"https://%(server_ip)s:%(server_port)s/"
"api/types/Client/instances/getByIp::%(sdc_ip)s/" %
{
'server_ip': self.server_ip,
'server_port': self.server_port,
'sdc_ip': self.local_sdc_ip
}
)
LOG.info("ScaleIO get client id by ip request: %(request)s",
{'request': request})
r = requests.get(
request,
auth=(self.server_username, self.server_token),
verify=False
)
r = self._check_response(r, request)
sdc_id = r.json()
if not sdc_id:
msg = (_("Client with ip %(sdc_ip)s was not found.") %
{'sdc_ip': self.local_sdc_ip})
raise exception.BrickException(message=msg)
if r.status_code != self.OK_STATUS_CODE and "errorCode" in sdc_id:
msg = (_("Error getting sdc id from ip %(sdc_ip)s: %(err)s") %
{'sdc_ip': self.local_sdc_ip, 'err': sdc_id['message']})
LOG.error(msg)
raise exception.BrickException(message=msg)
LOG.info("ScaleIO sdc id is %(sdc_id)s.",
{'sdc_id': sdc_id})
return sdc_id
def _get_volume_id(self):
volname_encoded = urllib.parse.quote(self.volume_name, '')
volname_double_encoded = urllib.parse.quote(volname_encoded, '')
LOG.debug(_(
"Volume name after double encoding is %(volume_name)s."),
{'volume_name': volname_double_encoded}
)
request = (
"https://%(server_ip)s:%(server_port)s/api/types/Volume/instances"
"/getByName::%(encoded_volume_name)s" %
{
'server_ip': self.server_ip,
'server_port': self.server_port,
'encoded_volume_name': volname_double_encoded
}
)
LOG.info(
"ScaleIO get volume id by name request: %(request)s",
{'request': request}
)
r = requests.get(request,
auth=(self.server_username, self.server_token),
verify=False)
r = self._check_response(r, request)
volume_id = r.json()
if not volume_id:
msg = (_("Volume with name %(volume_name)s wasn't found.") %
{'volume_name': self.volume_name})
LOG.error(msg)
raise exception.BrickException(message=msg)
if r.status_code != self.OK_STATUS_CODE and "errorCode" in volume_id:
msg = (
_("Error getting volume id from name %(volume_name)s: "
"%(err)s") %
{'volume_name': self.volume_name, 'err': volume_id['message']}
)
LOG.error(msg)
raise exception.BrickException(message=msg)
LOG.info("ScaleIO volume id is %(volume_id)s.",
{'volume_id': volume_id})
return volume_id
def _check_response(self, response, request, is_get_request=True,
params=None):
if response.status_code == 401 or response.status_code == 403:
LOG.info("Token is invalid, "
"going to re-login to get a new one")
login_request = (
"https://%(server_ip)s:%(server_port)s/api/login" %
{'server_ip': self.server_ip, 'server_port': self.server_port}
)
r = requests.get(
login_request,
auth=(self.server_username, self.server_password),
verify=False
)
token = r.json()
# repeat request with valid token
LOG.debug(_("Going to perform request %(request)s again "
"with valid token"), {'request': request})
if is_get_request:
res = requests.get(request,
auth=(self.server_username, token),
verify=False)
else:
headers = {'content-type': 'application/json'}
res = requests.post(
request,
data=json.dumps(params),
headers=headers,
auth=(self.server_username, token),
verify=False
)
self.server_token = token
return res
return response
def get_config(self, connection_properties):
self.local_sdc_ip = connection_properties['hostIP']
self.volume_name = connection_properties['scaleIO_volname']
self.volume_id = connection_properties['scaleIO_volume_id']
self.server_ip = connection_properties['serverIP']
self.server_port = connection_properties['serverPort']
self.server_username = connection_properties['serverUsername']
self.server_password = connection_properties['serverPassword']
self.server_token = connection_properties['serverToken']
self.iops_limit = connection_properties['iopsLimit']
self.bandwidth_limit = connection_properties['bandwidthLimit']
device_info = {'type': 'block',
'path': self.volume_path}
return device_info
@utils.trace
@lockutils.synchronized('scaleio', 'scaleio-')
def connect_volume(self, connection_properties):
"""Connect the volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:returns: dict
"""
device_info = self.get_config(connection_properties)
LOG.debug(
_(
"scaleIO Volume name: %(volume_name)s, SDC IP: %(sdc_ip)s, "
"REST Server IP: %(server_ip)s, "
"REST Server username: %(username)s, "
"iops limit:%(iops_limit)s, "
"bandwidth limit: %(bandwidth_limit)s."
), {
'volume_name': self.volume_name,
'volume_id': self.volume_id,
'sdc_ip': self.local_sdc_ip,
'server_ip': self.server_ip,
'username': self.server_username,
'iops_limit': self.iops_limit,
'bandwidth_limit': self.bandwidth_limit
}
)
LOG.info("ScaleIO sdc query guid command: %(cmd)s",
{'cmd': self.GET_GUID_CMD})
try:
(out, err) = self._execute(*self.GET_GUID_CMD, run_as_root=True,
root_helper=self._root_helper)
LOG.info("Map volume %(cmd)s: stdout=%(out)s "
"stderr=%(err)s",
{'cmd': self.GET_GUID_CMD, 'out': out, 'err': err})
except putils.ProcessExecutionError as e:
msg = (_("Error querying sdc guid: %(err)s") % {'err': e.stderr})
LOG.error(msg)
raise exception.BrickException(message=msg)
guid = out
LOG.info("Current sdc guid: %(guid)s", {'guid': guid})
params = {'guid': guid, 'allowMultipleMappings': 'TRUE'}
self.volume_id = self.volume_id or self._get_volume_id()
headers = {'content-type': 'application/json'}
request = (
"https://%(server_ip)s:%(server_port)s/api/instances/"
"Volume::%(volume_id)s/action/addMappedSdc" %
{'server_ip': self.server_ip, 'server_port': self.server_port,
'volume_id': self.volume_id}
)
LOG.info("map volume request: %(request)s", {'request': request})
r = requests.post(
request,
data=json.dumps(params),
headers=headers,
auth=(self.server_username, self.server_token),
verify=False
)
r = self._check_response(r, request, False, params)
if r.status_code != self.OK_STATUS_CODE:
response = r.json()
error_code = response['errorCode']
if error_code == self.VOLUME_ALREADY_MAPPED_ERROR:
LOG.warning(
"Ignoring error mapping volume %(volume_name)s: "
"volume already mapped.",
{'volume_name': self.volume_name}
)
else:
msg = (
_("Error mapping volume %(volume_name)s: %(err)s") %
{'volume_name': self.volume_name,
'err': response['message']}
)
LOG.error(msg)
raise exception.BrickException(message=msg)
self.volume_path = self._find_volume_path()
device_info['path'] = self.volume_path
# Set QoS settings after map was performed
if self.iops_limit is not None or self.bandwidth_limit is not None:
params = {'guid': guid}
if self.bandwidth_limit is not None:
params['bandwidthLimitInKbps'] = self.bandwidth_limit
if self.iops_limit is not None:
params['iopsLimit'] = self.iops_limit
request = (
"https://%(server_ip)s:%(server_port)s/api/instances/"
"Volume::%(volume_id)s/action/setMappedSdcLimits" %
{'server_ip': self.server_ip, 'server_port': self.server_port,
'volume_id': self.volume_id}
)
LOG.info("Set client limit request: %(request)s",
{'request': request})
r = requests.post(
request,
data=json.dumps(params),
headers=headers,
auth=(self.server_username, self.server_token),
verify=False
)
r = self._check_response(r, request, False, params)
if r.status_code != self.OK_STATUS_CODE:
response = r.json()
LOG.info("Set client limit response: %(response)s",
{'response': response})
msg = (
_("Error setting client limits for volume "
"%(volume_name)s: %(err)s") %
{'volume_name': self.volume_name,
'err': response['message']}
)
LOG.error(msg)
return device_info
@utils.trace
@lockutils.synchronized('scaleio', 'scaleio-')
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Disconnect the ScaleIO volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:param device_info: historical difference, but same as connection_properties
:type device_info: dict
"""
self.get_config(connection_properties)
self.volume_id = self.volume_id or self._get_volume_id()
LOG.info(
"ScaleIO disconnect volume in ScaleIO brick volume driver."
)
LOG.debug(
_("ScaleIO Volume name: %(volume_name)s, SDC IP: %(sdc_ip)s, "
"REST Server IP: %(server_ip)s"),
{'volume_name': self.volume_name, 'sdc_ip': self.local_sdc_ip,
'server_ip': self.server_ip}
)
LOG.info("ScaleIO sdc query guid command: %(cmd)s",
{'cmd': self.GET_GUID_CMD})
try:
(out, err) = self._execute(*self.GET_GUID_CMD, run_as_root=True,
root_helper=self._root_helper)
LOG.info(
"Unmap volume %(cmd)s: stdout=%(out)s stderr=%(err)s",
{'cmd': self.GET_GUID_CMD, 'out': out, 'err': err}
)
except putils.ProcessExecutionError as e:
msg = _("Error querying sdc guid: %(err)s") % {'err': e.stderr}
LOG.error(msg)
raise exception.BrickException(message=msg)
guid = out
LOG.info("Current sdc guid: %(guid)s", {'guid': guid})
params = {'guid': guid}
headers = {'content-type': 'application/json'}
request = (
"https://%(server_ip)s:%(server_port)s/api/instances/"
"Volume::%(volume_id)s/action/removeMappedSdc" %
{'server_ip': self.server_ip, 'server_port': self.server_port,
'volume_id': self.volume_id}
)
LOG.info("Unmap volume request: %(request)s",
{'request': request})
r = requests.post(
request,
data=json.dumps(params),
headers=headers,
auth=(self.server_username, self.server_token),
verify=False
)
r = self._check_response(r, request, False, params)
if r.status_code != self.OK_STATUS_CODE:
response = r.json()
error_code = response['errorCode']
if error_code == self.VOLUME_NOT_MAPPED_ERROR:
LOG.warning(
"Ignoring error unmapping volume %(volume_id)s: "
"volume not mapped.", {'volume_id': self.volume_name}
)
else:
msg = (_("Error unmapping volume %(volume_id)s: %(err)s") %
{'volume_id': self.volume_name,
'err': response['message']})
LOG.error(msg)
raise exception.BrickException(message=msg)
def extend_volume(self, connection_properties):
# TODO(walter-boring): is this possible?
raise NotImplementedError
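One non-obvious detail in the ScaleIO connector above is that `_get_volume_id` percent-encodes the volume name twice with `urllib.parse.quote` before placing it in the REST URL (the gateway decodes the path twice). A minimal sketch of just that encoding step, using the stdlib `urllib.parse` instead of the `six.moves` shim and a made-up volume name:

```python
import urllib.parse

def double_encode(volume_name):
    """Percent-encode a ScaleIO volume name twice, as _get_volume_id does."""
    # safe='' means even '/' is encoded, matching quote(volume_name, '')
    once = urllib.parse.quote(volume_name, '')
    return urllib.parse.quote(once, '')

# Hypothetical volume name containing characters that need escaping.
print(double_encode('vol/test 1'))  # vol%252Ftest%25201
```

The second pass re-encodes the `%` signs produced by the first (`%2F` becomes `%252F`), which is why the debug log above calls the result the name "after double encoding".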


@ -1,127 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from os_brick import exception
from os_brick.i18n import _
from os_brick import initiator
from os_brick.initiator.connectors import base
from os_brick.initiator import linuxsheepdog
from os_brick import utils
DEVICE_SCAN_ATTEMPTS_DEFAULT = 3
LOG = logging.getLogger(__name__)
class SheepdogConnector(base.BaseLinuxConnector):
"""Connector class to attach/detach sheepdog volumes."""
def __init__(self, root_helper, driver=None, use_multipath=False,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
super(SheepdogConnector, self).__init__(root_helper, driver=driver,
device_scan_attempts=
device_scan_attempts,
*args, **kwargs)
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The Sheepdog connector properties."""
return {}
def get_volume_paths(self, connection_properties):
# TODO(lixiaoy1): don't know where the connector
# looks for sheepdog volumes.
return []
def get_search_path(self):
# TODO(lixiaoy1): don't know where the connector
# looks for sheepdog volumes.
return None
def get_all_available_volumes(self, connection_properties=None):
# TODO(lixiaoy1): not sure what to return here for sheepdog
return []
def _get_sheepdog_handle(self, connection_properties):
try:
host = connection_properties['hosts'][0]
name = connection_properties['name']
port = connection_properties['ports'][0]
except IndexError:
msg = _("Connect volume failed, malformed connection properties")
raise exception.BrickException(message=msg)
sheepdog_handle = linuxsheepdog.SheepdogVolumeIOWrapper(
host, port, name)
return sheepdog_handle
@utils.trace
def connect_volume(self, connection_properties):
"""Connect to a volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:returns: dict
"""
sheepdog_handle = self._get_sheepdog_handle(connection_properties)
return {'path': sheepdog_handle}
@utils.trace
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Disconnect a volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:param device_info: historical difference, but same as connection_properties
:type device_info: dict
"""
if device_info:
sheepdog_handle = device_info.get('path', None)
self.check_IO_handle_valid(sheepdog_handle,
linuxsheepdog.SheepdogVolumeIOWrapper,
'Sheepdog')
if sheepdog_handle is not None:
sheepdog_handle.close()
def check_valid_device(self, path, run_as_root=True):
"""Verify an existing sheepdog handle is connected and valid."""
sheepdog_handle = path
if sheepdog_handle is None:
return False
original_offset = sheepdog_handle.tell()
try:
sheepdog_handle.read(4096)
except Exception as e:
LOG.error("Failed to access sheepdog device "
"handle: %(error)s",
{"error": e})
return False
finally:
sheepdog_handle.seek(original_offset, 0)
return True
def extend_volume(self, connection_properties):
# TODO(lixiaoy1): is this possible?
raise NotImplementedError
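The `check_valid_device` method above probes the handle by reading 4096 bytes and then restoring the original offset in a `finally` block, so the check never moves the caller's position. The same probe-and-restore pattern, sketched against an in-memory handle rather than a real `SheepdogVolumeIOWrapper`:

```python
import io

def probe_handle(handle, probe_size=4096):
    """Return True if the handle is readable; always restore the offset."""
    if handle is None:
        return False
    original_offset = handle.tell()
    try:
        handle.read(probe_size)
    except Exception:
        return False
    finally:
        # Runs on success and failure alike, mirroring the connector.
        handle.seek(original_offset, 0)
    return True

handle = io.BytesIO(b'x' * 8192)
handle.seek(100)
assert probe_handle(handle) is True
assert handle.tell() == 100  # offset restored by the finally block
```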


@ -1,277 +0,0 @@
# Copyright (c) 2016 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import tempfile
from oslo_log import log as logging
from oslo_utils import fileutils
try:
from oslo_vmware import api
from oslo_vmware import exceptions as oslo_vmw_exceptions
from oslo_vmware import image_transfer
from oslo_vmware.objects import datastore
from oslo_vmware import rw_handles
from oslo_vmware import vim_util
except ImportError:
vim_util = None
import six
from os_brick import exception
from os_brick.i18n import _
from os_brick.initiator import initiator_connector
LOG = logging.getLogger(__name__)
class VmdkConnector(initiator_connector.InitiatorConnector):
"""Connector for volumes created by the VMDK driver.
This connector is only used for backup and restore of Cinder volumes.
"""
TMP_IMAGES_DATASTORE_FOLDER_PATH = "cinder_temp"
def __init__(self, *args, **kwargs):
# Check if oslo.vmware library is available.
if vim_util is None:
message = _("Missing oslo_vmware python module, ensure oslo.vmware"
" library is installed and available.")
raise exception.BrickException(message=message)
super(VmdkConnector, self).__init__(*args, **kwargs)
self._ip = None
self._port = None
self._username = None
self._password = None
self._api_retry_count = None
self._task_poll_interval = None
self._ca_file = None
self._insecure = None
self._tmp_dir = None
self._timeout = None
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
return {}
def check_valid_device(self, path, run_as_root=True):
pass
def get_volume_paths(self, connection_properties):
return []
def get_search_path(self):
return None
def get_all_available_volumes(self, connection_properties=None):
pass
def _load_config(self, connection_properties):
config = connection_properties['config']
self._ip = config['vmware_host_ip']
self._port = config['vmware_host_port']
self._username = config['vmware_host_username']
self._password = config['vmware_host_password']
self._api_retry_count = config['vmware_api_retry_count']
self._task_poll_interval = config['vmware_task_poll_interval']
self._ca_file = config['vmware_ca_file']
self._insecure = config['vmware_insecure']
self._tmp_dir = config['vmware_tmp_dir']
self._timeout = config['vmware_image_transfer_timeout_secs']
def _create_session(self):
return api.VMwareAPISession(self._ip,
self._username,
self._password,
self._api_retry_count,
self._task_poll_interval,
port=self._port,
cacert=self._ca_file,
insecure=self._insecure)
def _create_temp_file(self, *args, **kwargs):
fileutils.ensure_tree(self._tmp_dir)
fd, tmp = tempfile.mkstemp(dir=self._tmp_dir, *args, **kwargs)
os.close(fd)
return tmp
def _download_vmdk(
self, tmp_file_path, session, backing, vmdk_path, vmdk_size):
with open(tmp_file_path, "wb") as tmp_file:
image_transfer.copy_stream_optimized_disk(
None,
self._timeout,
tmp_file,
session=session,
host=self._ip,
port=self._port,
vm=backing,
vmdk_file_path=vmdk_path,
vmdk_size=vmdk_size)
def connect_volume(self, connection_properties):
# Download the volume vmdk from vCenter server to a temporary file
# and return its path.
self._load_config(connection_properties)
session = self._create_session()
tmp_file_path = self._create_temp_file(
suffix=".vmdk", prefix=connection_properties['volume_id'])
backing = vim_util.get_moref(connection_properties['volume'],
"VirtualMachine")
vmdk_path = connection_properties['vmdk_path']
vmdk_size = connection_properties['vmdk_size']
try:
self._download_vmdk(
tmp_file_path, session, backing, vmdk_path, vmdk_size)
finally:
session.logout()
# Save the last modified time of the temporary so that we can decide
# whether to upload the file back to vCenter server during disconnect.
last_modified = os.path.getmtime(tmp_file_path)
return {'path': tmp_file_path, 'last_modified': last_modified}
def _snapshot_exists(self, session, backing):
snapshot = session.invoke_api(vim_util,
'get_object_property',
session.vim,
backing,
'snapshot')
if snapshot is None or snapshot.rootSnapshotList is None:
return False
return len(snapshot.rootSnapshotList) != 0
def _create_temp_ds_folder(self, session, ds_folder_path, dc_ref):
fileManager = session.vim.service_content.fileManager
try:
session.invoke_api(session.vim,
'MakeDirectory',
fileManager,
name=ds_folder_path,
datacenter=dc_ref)
except oslo_vmw_exceptions.FileAlreadyExistsException:
pass
# Note(vbala) remove this method when we implement it in oslo.vmware
def _upload_vmdk(
self, read_handle, host, port, dc_name, ds_name, cookies,
upload_file_path, file_size, cacerts, timeout_secs):
write_handle = rw_handles.FileWriteHandle(host,
port,
dc_name,
ds_name,
cookies,
upload_file_path,
file_size,
cacerts=cacerts)
image_transfer._start_transfer(read_handle, write_handle, timeout_secs)
def _disconnect(self, tmp_file_path, session, ds_ref, dc_ref, vmdk_path):
# The restored volume is in compressed (streamOptimized) format.
# So we upload it to a temporary location in vCenter datastore and copy
# the compressed vmdk to the volume vmdk. The copy operation
# decompresses the disk to a format suitable for attaching to Nova
# instances in vCenter.
dstore = datastore.get_datastore_by_ref(session, ds_ref)
ds_path = dstore.build_path(
VmdkConnector.TMP_IMAGES_DATASTORE_FOLDER_PATH,
os.path.basename(tmp_file_path))
self._create_temp_ds_folder(
session, six.text_type(ds_path.parent), dc_ref)
with open(tmp_file_path, "rb") as tmp_file:
dc_name = session.invoke_api(
vim_util, 'get_object_property', session.vim, dc_ref, 'name')
cookies = session.vim.client.options.transport.cookiejar
cacerts = self._ca_file if self._ca_file else not self._insecure
self._upload_vmdk(
tmp_file, self._ip, self._port, dc_name, dstore.name, cookies,
ds_path.rel_path, os.path.getsize(tmp_file_path), cacerts,
self._timeout)
# Delete the current volume vmdk because the copy operation does not
# overwrite.
LOG.debug("Deleting %s", vmdk_path)
disk_mgr = session.vim.service_content.virtualDiskManager
task = session.invoke_api(session.vim,
'DeleteVirtualDisk_Task',
disk_mgr,
name=vmdk_path,
datacenter=dc_ref)
session.wait_for_task(task)
src = six.text_type(ds_path)
LOG.debug("Copying %(src)s to %(dest)s", {'src': src,
'dest': vmdk_path})
task = session.invoke_api(session.vim,
'CopyVirtualDisk_Task',
disk_mgr,
sourceName=src,
sourceDatacenter=dc_ref,
destName=vmdk_path,
destDatacenter=dc_ref)
session.wait_for_task(task)
# Delete the compressed vmdk at the temporary location.
LOG.debug("Deleting %s", src)
file_mgr = session.vim.service_content.fileManager
task = session.invoke_api(session.vim,
'DeleteDatastoreFile_Task',
file_mgr,
name=src,
datacenter=dc_ref)
session.wait_for_task(task)
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
tmp_file_path = device_info['path']
if not os.path.exists(tmp_file_path):
msg = _("Vmdk: %s not found.") % tmp_file_path
raise exception.NotFound(message=msg)
session = None
try:
# We upload the temporary file to vCenter server only if it is
# modified after connect_volume.
if os.path.getmtime(tmp_file_path) > device_info['last_modified']:
self._load_config(connection_properties)
session = self._create_session()
backing = vim_util.get_moref(connection_properties['volume'],
"VirtualMachine")
# Currently there is no way we can restore the volume if it
# contains redo-log based snapshots (bug 1599026).
if self._snapshot_exists(session, backing):
msg = (_("Backing of volume: %s contains one or more "
"snapshots; cannot disconnect.") %
connection_properties['volume_id'])
raise exception.BrickException(message=msg)
ds_ref = vim_util.get_moref(
connection_properties['datastore'], "Datastore")
dc_ref = vim_util.get_moref(
connection_properties['datacenter'], "Datacenter")
vmdk_path = connection_properties['vmdk_path']
self._disconnect(
tmp_file_path, session, ds_ref, dc_ref, vmdk_path)
finally:
os.remove(tmp_file_path)
if session:
session.logout()
def extend_volume(self, connection_properties):
raise NotImplementedError
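The VMDK connector's `disconnect_volume` above only uploads the temporary file back to vCenter when its mtime is newer than the `last_modified` value recorded by `connect_volume`. That modification check in isolation, with `os.utime` standing in for a real restore writing the file (file name and timestamps are illustrative):

```python
import os
import tempfile

def was_modified(path, last_modified):
    """Mirror the connector's check: upload only if the file changed."""
    return os.path.getmtime(path) > last_modified

fd, tmp = tempfile.mkstemp(suffix='.vmdk')
os.close(fd)
try:
    os.utime(tmp, (1000, 1000))          # pretend connect_volume finished here
    last_modified = os.path.getmtime(tmp)
    assert not was_modified(tmp, last_modified)
    os.utime(tmp, (1000, 2000))          # simulate a restore writing the file
    assert was_modified(tmp, last_modified)
finally:
    os.remove(tmp)
```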


@ -1,160 +0,0 @@
# Copyright (c) 2017 Veritas Technologies LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from oslo_concurrency import lockutils
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from os_brick import exception
from os_brick.i18n import _
from os_brick.initiator.connectors import base
from os_brick import utils
LOG = logging.getLogger(__name__)
synchronized = lockutils.synchronized_with_prefix('os-brick-vrts-hyperscale-')
class HyperScaleConnector(base.BaseLinuxConnector):
"""Class implements the os-brick connector for HyperScale volumes."""
def __init__(self, root_helper, driver=None,
execute=None,
*args, **kwargs):
super(HyperScaleConnector, self).__init__(
root_helper, driver=driver,
execute=execute,
*args, **kwargs)
def get_volume_paths(self, connection_properties):
return []
def get_search_path(self):
return None
def extend_volume(self, connection_properties):
raise NotImplementedError
@staticmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The HyperScale connector properties."""
return {}
@utils.trace
@synchronized('connect_volume')
def connect_volume(self, connection_properties):
"""Connect a volume to an instance."""
out = None
err = None
device_info = {}
volume_name = None
if 'name' in connection_properties.keys():
volume_name = connection_properties['name']
if volume_name is None:
msg = _("Failed to connect volume: invalid volume name.")
raise exception.BrickException(message=msg)
cmd_arg = {'operation': 'connect_volume'}
cmd_arg['volume_guid'] = volume_name
cmdarg_json = json.dumps(cmd_arg)
LOG.debug("HyperScale command hscli: %(cmd_arg)s",
{'cmd_arg': cmdarg_json})
try:
(out, err) = self._execute('hscli', cmdarg_json,
run_as_root=True,
root_helper=self._root_helper)
except putils.ProcessExecutionError as e:
msg = (_("Error executing hscli: %(err)s") % {'err': e.stderr})
raise exception.BrickException(message=msg)
LOG.debug("Result of hscli: stdout=%(out)s "
"stderr=%(err)s",
{'out': out, 'err': err})
if err or out is None or len(out) == 0:
msg = (_("Failed to connect volume with stdout=%(out)s "
"stderr=%(err)s") % {'out': out, 'err': err})
raise exception.BrickException(message=msg)
output = json.loads(out)
payload = output.get('payload')
if payload is None:
msg = _("Failed to connect volume: "
"hscli returned invalid payload")
raise exception.BrickException(message=msg)
if ('vsa_ip' not in payload.keys() or
'refl_factor' not in payload.keys()):
msg = _("Failed to connect volume: "
"hscli returned invalid results")
raise exception.BrickException(message=msg)
device_info['vsa_ip'] = payload.get('vsa_ip')
device_info['path'] = (
'/dev/' + connection_properties['name'][1:32])
refl_factor = int(payload.get('refl_factor'))
device_info['refl_factor'] = str(refl_factor)
if refl_factor > 0:
if 'refl_targets' not in payload.keys():
msg = _("Failed to connect volume: "
"hscli returned inconsistent results")
raise exception.BrickException(message=msg)
device_info['refl_targets'] = (
payload.get('refl_targets'))
return device_info
@utils.trace
@synchronized('connect_volume')
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Disconnect a volume from an instance."""
volume_name = None
if 'name' in connection_properties.keys():
volume_name = connection_properties['name']
if volume_name is None:
msg = _("Failed to disconnect volume: invalid volume name")
raise exception.BrickException(message=msg)
cmd_arg = {'operation': 'disconnect_volume'}
cmd_arg['volume_guid'] = volume_name
cmdarg_json = json.dumps(cmd_arg)
LOG.debug("HyperScale command hscli: %(cmd_arg)s",
{'cmd_arg': cmdarg_json})
try:
(out, err) = self._execute('hscli', cmdarg_json,
run_as_root=True,
root_helper=self._root_helper)
except putils.ProcessExecutionError as e:
msg = (_("Error executing hscli: %(err)s") % {'err': e.stderr})
raise exception.BrickException(message=msg)
if err:
msg = (_("Failed to disconnect volume: stdout=%(out)s "
"stderr=%(err)s") % {'out': out, 'err': err})
raise exception.BrickException(message=msg)
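Both HyperScale methods above drive volume operations through the `hscli` tool by passing a single JSON string as the argument and then validating a `payload` object in the JSON the tool prints. The request/response handling in isolation, without executing the real CLI (the sample response below is fabricated just to exercise the parser):

```python
import json

def build_hscli_arg(operation, volume_guid):
    """Build the one-string JSON argument passed to hscli."""
    return json.dumps({'operation': operation, 'volume_guid': volume_guid})

def parse_connect_output(out):
    """Validate the fields connect_volume expects in the hscli payload."""
    payload = json.loads(out).get('payload')
    if payload is None:
        raise ValueError('hscli returned invalid payload')
    if 'vsa_ip' not in payload or 'refl_factor' not in payload:
        raise ValueError('hscli returned invalid results')
    return payload

arg = build_hscli_arg('connect_volume', 'vol-0001')   # hypothetical GUID
sample = json.dumps({'payload': {'vsa_ip': '10.0.0.5', 'refl_factor': '0'}})
payload = parse_connect_output(sample)
print(payload['vsa_ip'])  # 10.0.0.5
```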


@ -1,36 +0,0 @@
# Copyright 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import errno
import os
class HostDriver(object):
def get_all_block_devices(self):
"""Get the list of all block devices seen in /dev/disk/by-path/."""
dir = "/dev/disk/by-path/"
try:
files = os.listdir(dir)
except OSError as e:
if e.errno == errno.ENOENT:
files = []
else:
raise
devices = []
for file in files:
devices.append(dir + file)
return devices
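`get_all_block_devices` above simply lists `/dev/disk/by-path/` and tolerates the directory being absent (`ENOENT` yields an empty list; any other `OSError` propagates). The same logic pointed at a temporary directory instead of `/dev`, so it can be exercised anywhere; `os.path.join` is used here in place of the original string concatenation:

```python
import errno
import os
import tempfile

def list_devices(path):
    """List entries under path as full paths; an absent dir yields []."""
    try:
        files = os.listdir(path)
    except OSError as e:
        if e.errno == errno.ENOENT:
            files = []
        else:
            raise
    return [os.path.join(path, f) for f in files]

with tempfile.TemporaryDirectory() as d:
    # A fabricated by-path style name, purely for illustration.
    open(os.path.join(d, 'pci-0000:00:1f.2-ata-1'), 'w').close()
    assert len(list_devices(d)) == 1
assert list_devices('/no/such/dir') == []
```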


@ -1,197 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
from os_brick import exception
from os_brick import executor
from os_brick import initiator
@six.add_metaclass(abc.ABCMeta)
class InitiatorConnector(executor.Executor):
# This object can be used on any platform (x86, S390)
platform = initiator.PLATFORM_ALL
# This object can be used on any os type (linux, windows)
os_type = initiator.OS_TYPE_ALL
def __init__(self, root_helper, driver=None, execute=None,
device_scan_attempts=initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT,
*args, **kwargs):
super(InitiatorConnector, self).__init__(root_helper, execute=execute,
*args, **kwargs)
self.device_scan_attempts = device_scan_attempts
def set_driver(self, driver):
"""The driver is used to find used LUNs."""
self.driver = driver
@abc.abstractmethod
def get_connector_properties(root_helper, *args, **kwargs):
"""The generic connector properties."""
pass
@abc.abstractmethod
def check_valid_device(self, path, run_as_root=True):
"""Test to see if the device path is a real device.
:param path: The file system path for the device.
:type path: str
:param run_as_root: run the tests as root user?
:type run_as_root: bool
:returns: bool
"""
pass
@abc.abstractmethod
def connect_volume(self, connection_properties):
"""Connect to a volume.
The connection_properties describes the information needed by
the specific protocol to use to make the connection.
The connection_properties is a dictionary that describes the target
volume. It varies slightly by protocol type (iscsi, fibre_channel),
but the structure is usually the same.
An example for iSCSI:
{'driver_volume_type': 'iscsi',
'data': {
'target_luns': [0, 2],
'target_iqns': ['iqn.2000-05.com.3pardata:20810002ac00383d',
'iqn.2000-05.com.3pardata:21810002ac00383d'],
'target_discovered': True,
'encrypted': False,
'qos_specs': None,
'target_portals': ['10.52.1.11:3260', '10.52.2.11:3260'],
'access_mode': 'rw',
}}
An example for fibre_channel:
{'driver_volume_type': 'fibre_channel',
'data': {
'initiator_target_map': {'100010604b010459': ['21230002AC00383D'],
'100010604b01045d': ['21230002AC00383D']
},
'target_discovered': True,
'encrypted': False,
'qos_specs': None,
'target_lun': 1,
'access_mode': 'rw',
'target_wwn': [
'20210002AC00383D',
'20220002AC00383D',
],
}}
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
:returns: dict
"""
pass
@abc.abstractmethod
def disconnect_volume(self, connection_properties, device_info,
force=False, ignore_errors=False):
"""Disconnect a volume from the local host.
The connection_properties are the same as from connect_volume.
The device_info is returned from connect_volume.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
        :param device_info: The dict returned from connect_volume
                            (kept distinct from connection_properties
                            for historical reasons)
        :type device_info: dict
:param force: Whether to forcefully disconnect even if flush fails.
:type force: bool
:param ignore_errors: When force is True, this will decide whether to
ignore errors or raise an exception once finished
the operation. Default is False.
:type ignore_errors: bool
"""
pass
@abc.abstractmethod
def get_volume_paths(self, connection_properties):
"""Return the list of existing paths for a volume.
The job of this method is to find out what paths in
the system are associated with a volume as described
by the connection_properties.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
"""
pass
@abc.abstractmethod
def get_search_path(self):
"""Return the directory where a Connector looks for volumes.
Some Connectors need the information in the
connection_properties to determine the search path.
"""
pass
@abc.abstractmethod
def extend_volume(self, connection_properties):
"""Update the attached volume's size.
        This method will attempt to update the local host's
volume after the volume has been extended on the remote
system. The new volume size in bytes will be returned.
If there is a failure to update, then None will be returned.
:param connection_properties: The volume connection properties.
:returns: new size of the volume.
"""
pass
@abc.abstractmethod
def get_all_available_volumes(self, connection_properties=None):
"""Return all volumes that exist in the search directory.
At connect_volume time, a Connector looks in a specific
directory to discover a volume's paths showing up.
This method's job is to return all paths in the directory
that connect_volume uses to find a volume.
This method is used in coordination with get_volume_paths()
to verify that volumes have gone away after disconnect_volume
has been called.
:param connection_properties: The dictionary that describes all
of the target volume attributes.
:type connection_properties: dict
"""
pass
def check_IO_handle_valid(self, handle, data_type, protocol):
"""Check IO handle has correct data type."""
if (handle and not isinstance(handle, data_type)):
raise exception.InvalidIOHandleObject(
protocol=protocol,
actual_type=type(handle))
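The abstract interface above is easiest to see with a concrete subclass. The following is a minimal, self-contained sketch of the same pattern: an ABC with abstract volume-lifecycle hooks and a trivial connector that records attachments instead of touching real devices. `BaseConnector` and `FakeConnector` are illustrative stand-ins, not part of os-brick (the real code also uses `six.add_metaclass` for Python 2 compatibility).

```python
import abc


class BaseConnector(abc.ABC):
    """Simplified stand-in for InitiatorConnector's abstract interface."""

    @abc.abstractmethod
    def connect_volume(self, connection_properties):
        """Attach the volume and return a device_info dict."""

    @abc.abstractmethod
    def disconnect_volume(self, connection_properties, device_info,
                          force=False, ignore_errors=False):
        """Detach the volume described by connection_properties."""


class FakeConnector(BaseConnector):
    """Trivial connector that records calls instead of touching devices."""

    def __init__(self):
        self.attached = {}

    def connect_volume(self, connection_properties):
        # Fabricate a device path from the target LUN for illustration.
        path = '/dev/fake-%s' % connection_properties['data']['target_lun']
        self.attached[path] = connection_properties
        return {'type': 'block', 'path': path}

    def disconnect_volume(self, connection_properties, device_info,
                          force=False, ignore_errors=False):
        self.attached.pop(device_info['path'], None)


conn = FakeConnector()
info = conn.connect_volume({'driver_volume_type': 'iscsi',
                            'data': {'target_lun': 1}})
print(info['path'])      # /dev/fake-1
conn.disconnect_volume({}, info)
print(len(conn.attached))  # 0
```

Instantiating `BaseConnector` directly raises `TypeError`, which is how the real class enforces that every protocol connector implements the full lifecycle.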

# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Generic linux Fibre Channel utilities."""
import errno
import os
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from os_brick.initiator import linuxscsi
LOG = logging.getLogger(__name__)
class LinuxFibreChannel(linuxscsi.LinuxSCSI):
    def has_fc_support(self):
        FC_HOST_SYSFS_PATH = '/sys/class/fc_host'
        return os.path.isdir(FC_HOST_SYSFS_PATH)
def _get_hba_channel_scsi_target(self, hba):
"""Try to get the HBA channel and SCSI target for an HBA.
This method only works for Fibre Channel targets that implement a
single WWNN for all ports, so caller should expect us to return either
None or an empty list.
:returns: List or None
"""
# Leave only the number from the host_device field (ie: host6)
host_device = hba['host_device']
if host_device and len(host_device) > 4:
host_device = host_device[4:]
path = '/sys/class/fc_transport/target%s:' % host_device
cmd = 'grep %(wwnn)s %(path)s*/node_name' % {'wwnn': hba['node_name'],
'path': path}
        try:
            out, _err = self._execute(cmd, shell=True)
return [line.split('/')[4].split(':')[1:]
for line in out.split('\n') if line.startswith(path)]
except Exception as exc:
LOG.debug('Could not get HBA channel and SCSI target ID, path: '
'%(path)s, reason: %(reason)s', {'path': path,
'reason': exc})
return None
def rescan_hosts(self, hbas, target_lun):
for hba in hbas:
# Try to get HBA channel and SCSI target to use as filters
cts = self._get_hba_channel_scsi_target(hba)
# If we couldn't get the channel and target use wildcards
if not cts:
cts = [('-', '-')]
for hba_channel, target_id in cts:
LOG.debug('Scanning host %(host)s (wwnn: %(wwnn)s, c: '
'%(channel)s, t: %(target)s, l: %(lun)s)',
{'host': hba['host_device'],
'wwnn': hba['node_name'], 'channel': hba_channel,
'target': target_id, 'lun': target_lun})
self.echo_scsi_command(
"/sys/class/scsi_host/%s/scan" % hba['host_device'],
"%(c)s %(t)s %(l)s" % {'c': hba_channel,
't': target_id,
'l': target_lun})
def get_fc_hbas(self):
"""Get the Fibre Channel HBA information."""
if not self.has_fc_support():
# there is no FC support in the kernel loaded
# so there is no need to even try to run systool
LOG.debug("No Fibre Channel support detected on system.")
return []
out = None
try:
out, _err = self._execute('systool', '-c', 'fc_host', '-v',
run_as_root=True,
root_helper=self._root_helper)
except putils.ProcessExecutionError as exc:
# This handles the case where rootwrap is used
# and systool is not installed
# 96 = nova.cmd.rootwrap.RC_NOEXECFOUND:
if exc.exit_code == 96:
LOG.warning("systool is not installed")
return []
except OSError as exc:
# This handles the case where rootwrap is NOT used
# and systool is not installed
if exc.errno == errno.ENOENT:
LOG.warning("systool is not installed")
return []
# No FC HBAs were found
if out is None:
return []
lines = out.split('\n')
# ignore the first 2 lines
lines = lines[2:]
hbas = []
hba = {}
lastline = None
for line in lines:
line = line.strip()
            # two consecutive blank lines denote a new HBA port
            if line == '' and lastline == '':
if len(hba) > 0:
hbas.append(hba)
hba = {}
else:
val = line.split('=')
if len(val) == 2:
key = val[0].strip().replace(" ", "")
value = val[1].strip()
hba[key] = value.replace('"', '')
lastline = line
return hbas
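The parsing loop in get_fc_hbas() above can be exercised standalone: two consecutive blank lines separate HBA ports, and `key = "value"` pairs are collected into one dict per port (with spaces squeezed out of the keys). The systool-style sample below is fabricated for illustration, not captured from a real system.

```python
# Fabricated systool -c fc_host -v style output.
SAMPLE_SYSTOOL_OUTPUT = """Class = "fc_host"

  Class Device = "host0"
  Class Device path = "/sys/devices/pci0/host0/fc_host/host0"
    node_name  = "0x50014380242b9750"
    port_name  = "0x50014380242b9751"
    port_state = "Online"


  Class Device = "host1"
  Class Device path = "/sys/devices/pci0/host1/fc_host/host1"
    node_name  = "0x50014380242b9752"
    port_name  = "0x50014380242b9753"
    port_state = "Linkdown"

"""


def parse_fc_hbas(out):
    lines = out.split('\n')[2:]   # ignore the first 2 banner lines, as above
    hbas, hba, lastline = [], {}, None
    for line in lines:
        line = line.strip()
        if line == '' and lastline == '':   # blank pair => new HBA port
            if hba:
                hbas.append(hba)
                hba = {}
        else:
            val = line.split('=')
            if len(val) == 2:
                key = val[0].strip().replace(" ", "")
                hba[key] = val[1].strip().replace('"', '')
        lastline = line
    return hbas


hbas = parse_fc_hbas(SAMPLE_SYSTOOL_OUTPUT)
print(len(hbas), hbas[0]['ClassDevice'], hbas[1]['port_state'])
# 2 host0 Linkdown
```

Note how `Class Device path` becomes the `ClassDevicepath` key consumed by get_fc_hbas_info().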
def get_fc_hbas_info(self):
"""Get Fibre Channel WWNs and device paths from the system, if any."""
# Note(walter-boring) modern Linux kernels contain the FC HBA's in /sys
# and are obtainable via the systool app
hbas = self.get_fc_hbas()
hbas_info = []
for hba in hbas:
wwpn = hba['port_name'].replace('0x', '')
wwnn = hba['node_name'].replace('0x', '')
device_path = hba['ClassDevicepath']
device = hba['ClassDevice']
hbas_info.append({'port_name': wwpn,
'node_name': wwnn,
'host_device': device,
'device_path': device_path})
return hbas_info
def get_fc_wwpns(self):
"""Get Fibre Channel WWPNs from the system, if any."""
# Note(walter-boring) modern Linux kernels contain the FC HBA's in /sys
# and are obtainable via the systool app
hbas = self.get_fc_hbas()
wwpns = []
for hba in hbas:
if hba['port_state'] == 'Online':
wwpn = hba['port_name'].replace('0x', '')
wwpns.append(wwpn)
return wwpns
def get_fc_wwnns(self):
"""Get Fibre Channel WWNNs from the system, if any."""
# Note(walter-boring) modern Linux kernels contain the FC HBA's in /sys
# and are obtainable via the systool app
hbas = self.get_fc_hbas()
wwnns = []
for hba in hbas:
if hba['port_state'] == 'Online':
wwnn = hba['node_name'].replace('0x', '')
wwnns.append(wwnn)
return wwnns
class LinuxFibreChannelS390X(LinuxFibreChannel):
def get_fc_hbas_info(self):
"""Get Fibre Channel WWNs and device paths from the system, if any."""
hbas = self.get_fc_hbas()
hbas_info = []
for hba in hbas:
if hba['port_state'] == 'Online':
wwpn = hba['port_name'].replace('0x', '')
wwnn = hba['node_name'].replace('0x', '')
device_path = hba['ClassDevicepath']
device = hba['ClassDevice']
hbas_info.append({'port_name': wwpn,
'node_name': wwnn,
'host_device': device,
'device_path': device_path})
return hbas_info
def configure_scsi_device(self, device_number, target_wwn, lun):
"""Write the LUN to the port's unit_add attribute.
If auto-discovery of Fibre-Channel target ports is
disabled on s390 platforms, ports need to be added to
the configuration.
If auto-discovery of LUNs is disabled on s390 platforms
luns need to be added to the configuration through the
unit_add interface
"""
LOG.debug("Configure lun for s390: device_number=%(device_num)s "
"target_wwn=%(target_wwn)s target_lun=%(target_lun)s",
{'device_num': device_number,
'target_wwn': target_wwn,
'target_lun': lun})
filepath = ("/sys/bus/ccw/drivers/zfcp/%s/%s" %
(device_number, target_wwn))
if not (os.path.exists(filepath)):
zfcp_device_command = ("/sys/bus/ccw/drivers/zfcp/%s/port_rescan" %
(device_number))
LOG.debug("port_rescan call for s390: %s", zfcp_device_command)
try:
self.echo_scsi_command(zfcp_device_command, "1")
except putils.ProcessExecutionError as exc:
LOG.warning("port_rescan call for s390 failed exit"
" %(code)s, stderr %(stderr)s",
{'code': exc.exit_code, 'stderr': exc.stderr})
zfcp_device_command = ("/sys/bus/ccw/drivers/zfcp/%s/%s/unit_add" %
(device_number, target_wwn))
LOG.debug("unit_add call for s390 execute: %s", zfcp_device_command)
try:
self.echo_scsi_command(zfcp_device_command, lun)
except putils.ProcessExecutionError as exc:
LOG.warning("unit_add call for s390 failed exit %(code)s, "
"stderr %(stderr)s",
{'code': exc.exit_code, 'stderr': exc.stderr})
def deconfigure_scsi_device(self, device_number, target_wwn, lun):
"""Write the LUN to the port's unit_remove attribute.
If auto-discovery of LUNs is disabled on s390 platforms
luns need to be removed from the configuration through the
unit_remove interface
"""
LOG.debug("Deconfigure lun for s390: "
"device_number=%(device_num)s "
"target_wwn=%(target_wwn)s target_lun=%(target_lun)s",
{'device_num': device_number,
'target_wwn': target_wwn,
'target_lun': lun})
zfcp_device_command = ("/sys/bus/ccw/drivers/zfcp/%s/%s/unit_remove" %
(device_number, target_wwn))
LOG.debug("unit_remove call for s390 execute: %s", zfcp_device_command)
try:
self.echo_scsi_command(zfcp_device_command, lun)
except putils.ProcessExecutionError as exc:
LOG.warning("unit_remove call for s390 failed exit %(code)s, "
"stderr %(stderr)s",
{'code': exc.exit_code, 'stderr': exc.stderr})
class LinuxFibreChannelPPC64(LinuxFibreChannel):
def _get_hba_channel_scsi_target(self, hba, wwpn):
"""Try to get the HBA channel and SCSI target for an HBA.
This method works for Fibre Channel targets iterating over all the
target wwpn port and finding the c, t, l. so caller should expect us to
return either None or an empty list.
"""
# Leave only the number from the host_device field (ie: host6)
host_device = hba['host_device']
if host_device and len(host_device) > 4:
host_device = host_device[4:]
path = '/sys/class/fc_transport/target%s:' % host_device
cmd = 'grep -l %(wwpn)s %(path)s*/port_name' % {'wwpn': wwpn,
'path': path}
try:
out, _err = self._execute(cmd, shell=True)
return [line.split('/')[4].split(':')[1:]
for line in out.split('\n') if line.startswith(path)]
except Exception as exc:
LOG.error("Could not get HBA channel and SCSI target ID, "
"reason: %s", exc)
return None
def rescan_hosts(self, hbas, target_lun):
for hba in hbas:
# Try to get HBA channel and SCSI target to use as filters
for wwpn in hba['target_wwn']:
cts = self._get_hba_channel_scsi_target(hba, wwpn)
# If we couldn't get the channel and target use wildcards
if not cts:
cts = [('-', '-')]
for hba_channel, target_id in cts:
LOG.debug('Scanning host %(host)s (wwpn: %(wwpn)s, c: '
'%(channel)s, t: %(target)s, l: %(lun)s)',
{'host': hba['host_device'],
'wwpn': hba['target_wwn'],
'channel': hba_channel,
'target': target_id,
'lun': target_lun})
self.echo_scsi_command(
"/sys/class/scsi_host/%s/scan" % hba['host_device'],
"%(c)s %(t)s %(l)s" % {'c': hba_channel,
't': target_id,
'l': target_lun})
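Both _get_hba_channel_scsi_target() variants above turn grep output over `/sys/class/fc_transport/target<H>:*/node_name` (or `port_name`) into (channel, target) pairs with the same list comprehension. A worked example with fabricated sysfs paths shows how the path components map:

```python
# 'target6:0:1' is path component 4; splitting it on ':' and dropping the
# 'target6' prefix leaves ['0', '1'] == [channel, target_id].
path = '/sys/class/fc_transport/target6:'
grep_output = (
    '/sys/class/fc_transport/target6:0:1/node_name:0x50014380242b9750\n'
    '/sys/class/fc_transport/target6:0:3/node_name:0x50014380242b9750\n'
)

cts = [line.split('/')[4].split(':')[1:]
       for line in grep_output.split('\n') if line.startswith(path)]
print(cts)  # [['0', '1'], ['0', '3']]
```

When this lookup fails, rescan_hosts() falls back to the `('-', '-')` wildcards, which makes the sysfs scan echo `- - <lun>` and rescan every channel and target on the host.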

# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
"""Generic RBD connection utilities."""
import io
from oslo_log import log as logging
from os_brick import exception
from os_brick.i18n import _
from os_brick import utils
try:
import rados
import rbd
except ImportError:
rados = None
rbd = None
LOG = logging.getLogger(__name__)
class RBDClient(object):
def __init__(self, user, pool, *args, **kwargs):
self.rbd_user = user
self.rbd_pool = pool
for attr in ['rbd_user', 'rbd_pool']:
val = getattr(self, attr)
if val is not None:
setattr(self, attr, utils.convert_str(val))
# allow these to be overridden for testing
self.rados = kwargs.get('rados', rados)
self.rbd = kwargs.get('rbd', rbd)
if self.rados is None:
raise exception.InvalidParameterValue(
err=_('rados module required'))
if self.rbd is None:
raise exception.InvalidParameterValue(
err=_('rbd module required'))
self.rbd_conf = kwargs.get('conffile', '/etc/ceph/ceph.conf')
self.rbd_cluster_name = kwargs.get('rbd_cluster_name', 'ceph')
self.rados_connect_timeout = kwargs.get('rados_connect_timeout', -1)
self.client, self.ioctx = self.connect()
def __enter__(self):
return self
def __exit__(self, type_, value, traceback):
self.disconnect()
def connect(self):
LOG.debug("opening connection to ceph cluster (timeout=%s).",
self.rados_connect_timeout)
client = self.rados.Rados(rados_id=self.rbd_user,
clustername=self.rbd_cluster_name,
conffile=self.rbd_conf)
try:
if self.rados_connect_timeout >= 0:
client.connect(
timeout=self.rados_connect_timeout)
else:
client.connect()
ioctx = client.open_ioctx(self.rbd_pool)
return client, ioctx
except self.rados.Error:
msg = _("Error connecting to ceph cluster.")
LOG.exception(msg)
# shutdown cannot raise an exception
client.shutdown()
raise exception.BrickException(message=msg)
def disconnect(self):
# closing an ioctx cannot raise an exception
self.ioctx.close()
self.client.shutdown()
class RBDVolume(object):
"""Context manager for dealing with an existing rbd volume."""
def __init__(self, client, name, snapshot=None, read_only=False):
if snapshot is not None:
snapshot = utils.convert_str(snapshot)
try:
self.image = client.rbd.Image(client.ioctx,
utils.convert_str(name),
snapshot=snapshot,
read_only=read_only)
except client.rbd.Error:
LOG.exception("error opening rbd image %s", name)
client.disconnect()
raise
self.client = client
def close(self):
try:
self.image.close()
finally:
self.client.disconnect()
def __enter__(self):
return self
def __exit__(self, type_, value, traceback):
self.close()
def __getattr__(self, attrib):
return getattr(self.image, attrib)
class RBDImageMetadata(object):
"""RBD image metadata to be used with RBDVolumeIOWrapper."""
def __init__(self, image, pool, user, conf):
self.image = image
self.pool = utils.convert_str(pool or '')
self.user = utils.convert_str(user or '')
self.conf = utils.convert_str(conf or '')
class RBDVolumeIOWrapper(io.RawIOBase):
"""Enables LibRBD.Image objects to be treated as Python IO objects.
Calling unimplemented interfaces will raise IOError.
"""
def __init__(self, rbd_volume):
super(RBDVolumeIOWrapper, self).__init__()
self._rbd_volume = rbd_volume
self._offset = 0
def _inc_offset(self, length):
self._offset += length
@property
def rbd_image(self):
return self._rbd_volume.image
@property
def rbd_user(self):
return self._rbd_volume.user
@property
def rbd_pool(self):
return self._rbd_volume.pool
@property
def rbd_conf(self):
return self._rbd_volume.conf
def read(self, length=None):
offset = self._offset
total = self._rbd_volume.image.size()
# NOTE(dosaboy): posix files do not barf if you read beyond their
# length (they just return nothing) but rbd images do so we need to
# return empty string if we have reached the end of the image.
if (offset >= total):
return b''
if length is None:
length = total
if (offset + length) > total:
length = total - offset
self._inc_offset(length)
return self._rbd_volume.image.read(int(offset), int(length))
def write(self, data):
self._rbd_volume.image.write(data, self._offset)
self._inc_offset(len(data))
def seekable(self):
return True
def seek(self, offset, whence=0):
if whence == 0:
new_offset = offset
elif whence == 1:
new_offset = self._offset + offset
elif whence == 2:
new_offset = self._rbd_volume.image.size()
new_offset += offset
else:
raise IOError(_("Invalid argument - whence=%s not supported") %
(whence))
if (new_offset < 0):
raise IOError(_("Invalid argument"))
self._offset = new_offset
def tell(self):
return self._offset
def flush(self):
try:
self._rbd_volume.image.flush()
except AttributeError:
LOG.warning("flush() not supported in this version of librbd")
def fileno(self):
"""RBD does not have support for fileno() so we raise IOError.
Raising IOError is recommended way to notify caller that interface is
not supported - see http://docs.python.org/2/library/io.html#io.IOBase
"""
raise IOError(_("fileno() not supported by RBD()"))
def close(self):
self.rbd_image.close()
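The offset bookkeeping in RBDVolumeIOWrapper above is worth seeing in isolation: read() clamps at the image size (rbd images error past EOF, unlike POSIX files, so the wrapper returns `b''` instead) and seek() supports the three whence modes. The sketch below swaps the librbd image for an in-memory `BytesImage` stand-in; both class names are illustrative only.

```python
import io


class BytesImage(object):
    """Stand-in for rbd.Image backed by a bytes buffer."""

    def __init__(self, data):
        self._data = data

    def size(self):
        return len(self._data)

    def read(self, offset, length):
        return self._data[offset:offset + length]


class FakeRBDIOWrapper(io.RawIOBase):
    def __init__(self, image):
        super().__init__()
        self._image = image
        self._offset = 0

    def read(self, length=None):
        total = self._image.size()
        if self._offset >= total:
            return b''                      # past EOF: empty, not an error
        if length is None or self._offset + length > total:
            length = total - self._offset   # clamp to the image size
        data = self._image.read(self._offset, length)
        self._offset += length
        return data

    def seek(self, offset, whence=0):
        # whence: 0 = absolute, 1 = relative to current, 2 = relative to end
        base = {0: 0, 1: self._offset, 2: self._image.size()}[whence]
        new_offset = base + offset
        if new_offset < 0:
            raise IOError("Invalid argument")
        self._offset = new_offset
        return new_offset


w = FakeRBDIOWrapper(BytesImage(b'0123456789'))
print(w.read(4))   # b'0123'
w.seek(-2, 2)      # two bytes before the end
print(w.read())    # b'89'
print(w.read())    # b'' once past the end
```

Deriving from io.RawIOBase is what lets image-transfer code treat an RBD volume like any other Python file object.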

# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Generic linux scsi subsystem and Multipath utilities.
Note, this is not iSCSI.
"""
import glob
import os
import re
import six
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from os_brick import exception
from os_brick import executor
from os_brick.privileged import rootwrap as priv_rootwrap
from os_brick import utils
LOG = logging.getLogger(__name__)
MULTIPATH_ERROR_REGEX = re.compile(r"\w{3} \d+ \d\d:\d\d:\d\d \|.*$")
MULTIPATH_WWID_REGEX = re.compile(r"\((?P<wwid>.+)\)")
MULTIPATH_DEVICE_ACTIONS = ['unchanged:', 'reject:', 'reload:',
'switchpg:', 'rename:', 'create:',
'resize:']
class LinuxSCSI(executor.Executor):
# As found in drivers/scsi/scsi_lib.c
WWN_TYPES = {'t10.': '1', 'eui.': '2', 'naa.': '3'}
def echo_scsi_command(self, path, content):
"""Used to echo strings to scsi subsystem."""
args = ["-a", path]
kwargs = dict(process_input=content,
run_as_root=True,
root_helper=self._root_helper)
self._execute('tee', *args, **kwargs)
def get_name_from_path(self, path):
"""Translates /dev/disk/by-path/ entry to /dev/sdX."""
name = os.path.realpath(path)
if name.startswith("/dev/"):
return name
else:
return None
def remove_scsi_device(self, device, force=False, exc=None):
"""Removes a scsi device based upon /dev/sdX name."""
path = "/sys/block/%s/device/delete" % device.replace("/dev/", "")
if os.path.exists(path):
exc = exception.ExceptionChainer() if exc is None else exc
# flush any outstanding IO first
with exc.context(force, 'Flushing %s failed', device):
self.flush_device_io(device)
LOG.debug("Remove SCSI device %(device)s with %(path)s",
{'device': device, 'path': path})
with exc.context(force, 'Removing %s failed', device):
self.echo_scsi_command(path, "1")
@utils.retry(exceptions=exception.VolumePathNotRemoved)
def wait_for_volumes_removal(self, volumes_names):
"""Wait for device paths to be removed from the system."""
str_names = ', '.join(volumes_names)
LOG.debug('Checking to see if SCSI volumes %s have been removed.',
str_names)
exist = [volume_name for volume_name in volumes_names
if os.path.exists('/dev/' + volume_name)]
if exist:
LOG.debug('%s still exist.', ', '.join(exist))
raise exception.VolumePathNotRemoved(volume_path=exist)
LOG.debug("SCSI volumes %s have been removed.", str_names)
def get_device_info(self, device):
(out, _err) = self._execute('sg_scan', device, run_as_root=True,
root_helper=self._root_helper)
dev_info = {'device': device, 'host': None,
'channel': None, 'id': None, 'lun': None}
if out:
line = out.strip()
line = line.replace(device + ": ", "")
info = line.split(" ")
for item in info:
if '=' in item:
pair = item.split('=')
dev_info[pair[0]] = pair[1]
elif 'scsi' in item:
dev_info['host'] = item.replace('scsi', '')
return dev_info
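The sg_scan handling in get_device_info() above is a simple token scan: strip the `<device>: ` prefix, then pick up `key=value` pairs and the `scsiN` host token. Extracted as a standalone function with a fabricated sample line:

```python
def parse_sg_scan(device, out):
    """Parse one line of sg_scan output into a device info dict."""
    dev_info = {'device': device, 'host': None,
                'channel': None, 'id': None, 'lun': None}
    line = out.strip().replace(device + ": ", "")
    for item in line.split(" "):
        if '=' in item:
            key, value = item.split('=')
            dev_info[key] = value
        elif 'scsi' in item:
            # 'scsi1' carries the host number
            dev_info['host'] = item.replace('scsi', '')
    return dev_info


info = parse_sg_scan('/dev/sdb', '/dev/sdb: scsi1 channel=0 id=0 lun=2 [em]')
print(info['host'], info['lun'])  # 1 2
```

extend_volume() later joins these four fields into the `host:channel:id:lun` address used to build the sysfs rescan path.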
def get_sysfs_wwn(self, device_names):
"""Return the wwid from sysfs in any of devices in udev format."""
wwid = self.get_sysfs_wwid(device_names)
glob_str = '/dev/disk/by-id/scsi-'
wwn_paths = glob.glob(glob_str + '*')
# If we don't have multiple designators on page 0x83
if wwid and glob_str + wwid in wwn_paths:
return wwid
# If we have multiple designators follow the symlinks
for wwn_path in wwn_paths:
try:
if os.path.islink(wwn_path) and os.stat(wwn_path):
path = os.path.realpath(wwn_path)
if path.startswith('/dev/') and path[5:] in device_names:
return wwn_path[len(glob_str):]
except OSError:
continue
return ''
def get_sysfs_wwid(self, device_names):
"""Return the wwid from sysfs in any of devices in udev format."""
for device_name in device_names:
try:
with open('/sys/block/%s/device/wwid' % device_name) as f:
wwid = f.read().strip()
except IOError:
continue
# The sysfs wwid has the wwn type in string format as a prefix,
# but udev uses its numerical representation as returned by
# scsi_id's page 0x83, so we need to map it
udev_wwid = self.WWN_TYPES.get(wwid[:4], '8') + wwid[4:]
return udev_wwid
return ''
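The WWN_TYPES mapping above exists because sysfs spells the designator type as a string prefix (`t10.`, `eui.`, `naa.`) while udev's scsi_id output uses the numeric page-0x83 code. A worked example of the translation, with a fabricated wwid:

```python
# Same table as LinuxSCSI.WWN_TYPES above.
WWN_TYPES = {'t10.': '1', 'eui.': '2', 'naa.': '3'}


def sysfs_wwid_to_udev(wwid):
    # Unknown prefixes fall back to '8', as in get_sysfs_wwid() above.
    return WWN_TYPES.get(wwid[:4], '8') + wwid[4:]


print(sysfs_wwid_to_udev('naa.6e843b658476b7ed5bc1d4d10d9b1fde'))
# 36e843b658476b7ed5bc1d4d10d9b1fde
```

The translated value is what get_sysfs_wwn() compares against the `/dev/disk/by-id/scsi-*` symlink names.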
def get_scsi_wwn(self, path):
"""Read the WWN from page 0x83 value for a SCSI device."""
(out, _err) = self._execute('/lib/udev/scsi_id', '--page', '0x83',
'--whitelisted', path,
run_as_root=True,
root_helper=self._root_helper)
return out.strip()
@staticmethod
def is_multipath_running(enforce_multipath, root_helper, execute=None):
try:
if execute is None:
execute = priv_rootwrap.execute
execute('multipathd', 'show', 'status',
run_as_root=True, root_helper=root_helper)
except putils.ProcessExecutionError as err:
LOG.error('multipathd is not running: exit code %(err)s',
{'err': err.exit_code})
if enforce_multipath:
raise
return False
return True
def get_dm_name(self, dm):
"""Get the Device map name given the device name of the dm on sysfs.
:param dm: Device map name as seen in sysfs. ie: 'dm-0'
:returns: String with the name, or empty string if not available.
ie: '36e843b658476b7ed5bc1d4d10d9b1fde'
"""
try:
with open('/sys/block/' + dm + '/dm/name') as f:
return f.read().strip()
except IOError:
return ''
def find_sysfs_multipath_dm(self, device_names):
"""Find the dm device name given a list of device names
:param device_names: Iterable with device names, not paths. ie: ['sda']
:returns: String with the dm name or None if not found. ie: 'dm-0'
"""
glob_str = '/sys/block/%s/holders/dm-*'
for dev_name in device_names:
dms = glob.glob(glob_str % dev_name)
if dms:
__, device_name, __, dm = dms[0].rsplit('/', 3)
return dm
return None
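The rsplit in find_sysfs_multipath_dm() above relies on the shape of the holders path: a symlink under `/sys/block/<dev>/holders/` names the device-mapper node stacked on top of that device. With a fabricated example path:

```python
# rsplit('/', 3) splits only the last three slashes, so the device name and
# the dm node fall out as fixed fields regardless of the /sys prefix depth.
dms = ['/sys/block/sda/holders/dm-3']
__, device_name, __, dm = dms[0].rsplit('/', 3)
print(device_name, dm)  # sda dm-3
```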
def remove_connection(self, devices_names, is_multipath, force=False,
exc=None):
"""Remove LUNs and multipath associated with devices names.
:param devices_names: Iterable with real device names ('sda', 'sdb')
:param is_multipath: Whether this is a multipath connection or not
:param force: Whether to forcefully disconnect even if flush fails.
:param exc: ExceptionChainer where to add exceptions if forcing
:returns: Multipath device map name if found and not flushed
"""
if not devices_names:
return
multipath_name = None
exc = exception.ExceptionChainer() if exc is None else exc
LOG.debug('Removing %(type)s devices %(devices)s',
{'type': 'multipathed' if is_multipath else 'single pathed',
'devices': ', '.join(devices_names)})
if is_multipath:
multipath_dm = self.find_sysfs_multipath_dm(devices_names)
multipath_name = multipath_dm and self.get_dm_name(multipath_dm)
if multipath_name:
with exc.context(force, 'Flushing %s failed', multipath_name):
self.flush_multipath_device(multipath_name)
multipath_name = None
for device_name in devices_names:
self.remove_scsi_device('/dev/' + device_name, force, exc)
# Wait until the symlinks are removed
with exc.context(force, 'Some devices remain from %s', devices_names):
try:
self.wait_for_volumes_removal(devices_names)
finally:
# Since we use /dev/disk/by-id/scsi- links to get the wwn we
# must ensure they are always removed.
self._remove_scsi_symlinks(devices_names)
return multipath_name
def _remove_scsi_symlinks(self, devices_names):
devices = ['/dev/' + dev for dev in devices_names]
links = glob.glob('/dev/disk/by-id/scsi-*')
unlink = [link for link in links
if os.path.realpath(link) in devices]
if unlink:
priv_rootwrap.unlink_root(no_errors=True, *unlink)
def flush_device_io(self, device):
"""This is used to flush any remaining IO in the buffers."""
if os.path.exists(device):
try:
# NOTE(geguileo): With 30% connection error rates flush can get
# stuck, set timeout to prevent it from hanging here forever.
# Retry twice after 20 and 40 seconds.
LOG.debug("Flushing IO for device %s", device)
self._execute('blockdev', '--flushbufs', device,
run_as_root=True, attempts=3, timeout=300,
interval=10, root_helper=self._root_helper)
except putils.ProcessExecutionError as exc:
LOG.warning("Failed to flush IO buffers prior to removing "
"device: %(code)s", {'code': exc.exit_code})
raise
def flush_multipath_device(self, device_map_name):
LOG.debug("Flush multipath device %s", device_map_name)
# NOTE(geguileo): With 30% connection error rates flush can get stuck,
# set timeout to prevent it from hanging here forever. Retry twice
# after 20 and 40 seconds.
self._execute('multipath', '-f', device_map_name, run_as_root=True,
attempts=3, timeout=300, interval=10,
root_helper=self._root_helper)
@utils.retry(exceptions=exception.VolumeDeviceNotFound)
def wait_for_path(self, volume_path):
"""Wait for a path to show up."""
LOG.debug("Checking to see if %s exists yet.",
volume_path)
if not os.path.exists(volume_path):
LOG.debug("%(path)s doesn't exists yet.", {'path': volume_path})
raise exception.VolumeDeviceNotFound(
device=volume_path)
else:
LOG.debug("%s has shown up.", volume_path)
@utils.retry(exceptions=exception.BlockDeviceReadOnly, retries=5)
def wait_for_rw(self, wwn, device_path):
"""Wait for block device to be Read-Write."""
LOG.debug("Checking to see if %s is read-only.",
device_path)
out, info = self._execute('lsblk', '-o', 'NAME,RO', '-l', '-n')
LOG.debug("lsblk output: %s", out)
blkdevs = out.splitlines()
for blkdev in blkdevs:
# Entries might look like:
#
# "3624a93709a738ed78583fd120013902b (dm-1) 1"
#
# or
#
# "sdd 0"
#
# We are looking for the first and last part of them. For FC
# multipath devices the name is in the format of '<WWN> (dm-<ID>)'
blkdev_parts = blkdev.split(' ')
ro = blkdev_parts[-1]
name = blkdev_parts[0]
# We must validate that all pieces of the dm-# device are rw,
# if some are still ro it can cause problems.
if wwn in name and int(ro) == 1:
LOG.debug("Block device %s is read-only", device_path)
self._execute('multipath', '-r', check_exit_code=[0, 1, 21],
run_as_root=True, root_helper=self._root_helper)
raise exception.BlockDeviceReadOnly(
device=device_path)
else:
LOG.debug("Block device %s is not read-only.", device_path)
def find_multipath_device_path(self, wwn):
"""Look for the multipath device file for a volume WWN.
Multipath devices can show up in several places on
a linux system.
1) When multipath friendly names are ON:
a device file will show up in
/dev/disk/by-id/dm-uuid-mpath-<WWN>
/dev/disk/by-id/dm-name-mpath<N>
/dev/disk/by-id/scsi-mpath<N>
/dev/mapper/mpath<N>
2) When multipath friendly names are OFF:
/dev/disk/by-id/dm-uuid-mpath-<WWN>
/dev/disk/by-id/scsi-<WWN>
/dev/mapper/<WWN>
"""
LOG.info("Find Multipath device file for volume WWN %(wwn)s",
{'wwn': wwn})
# First look for the common path
wwn_dict = {'wwn': wwn}
path = "/dev/disk/by-id/dm-uuid-mpath-%(wwn)s" % wwn_dict
try:
self.wait_for_path(path)
return path
except exception.VolumeDeviceNotFound:
pass
# for some reason the common path wasn't found
# lets try the dev mapper path
path = "/dev/mapper/%(wwn)s" % wwn_dict
try:
self.wait_for_path(path)
return path
except exception.VolumeDeviceNotFound:
pass
# couldn't find a path
LOG.warning("couldn't find a valid multipath device path for "
"%(wwn)s", wwn_dict)
return None
def find_multipath_device(self, device):
"""Discover multipath devices for a mpath device.
        This uses the slow multipath -l command to find a
        multipath device description, then screen scrapes
        the output to discover the multipath device name
        and its devices.
        """
mdev = None
devices = []
out = None
try:
(out, _err) = self._execute('multipath', '-l', device,
run_as_root=True,
root_helper=self._root_helper)
except putils.ProcessExecutionError as exc:
LOG.warning("multipath call failed exit %(code)s",
{'code': exc.exit_code})
raise exception.CommandExecutionFailed(
cmd='multipath -l %s' % device)
if out:
lines = out.strip()
lines = lines.split("\n")
lines = [line for line in lines
if not re.match(MULTIPATH_ERROR_REGEX, line)]
if lines:
mdev_name = lines[0].split(" ")[0]
if mdev_name in MULTIPATH_DEVICE_ACTIONS:
mdev_name = lines[0].split(" ")[1]
mdev = '/dev/mapper/%s' % mdev_name
# Confirm that the device is present.
try:
os.stat(mdev)
except OSError:
LOG.warning("Couldn't find multipath device %s",
mdev)
return None
wwid_search = MULTIPATH_WWID_REGEX.search(lines[0])
if wwid_search is not None:
mdev_id = wwid_search.group('wwid')
else:
mdev_id = mdev_name
LOG.debug("Found multipath device = %(mdev)s",
{'mdev': mdev})
device_lines = lines[3:]
for dev_line in device_lines:
if dev_line.find("policy") != -1:
continue
dev_line = dev_line.lstrip(' |-`')
dev_info = dev_line.split()
address = dev_info[0].split(":")
dev = {'device': '/dev/%s' % dev_info[1],
'host': address[0], 'channel': address[1],
'id': address[2], 'lun': address[3]
}
devices.append(dev)
if mdev is not None:
info = {"device": mdev,
"id": mdev_id,
"name": mdev_name,
"devices": devices}
return info
return None
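The screen scraping in find_multipath_device() above can be followed step by step against a fabricated `multipath -l` transcript (this sketch skips the os.stat() existence check and the device-action prefix handling, and the topology below is illustrative only):

```python
import re

MULTIPATH_WWID_REGEX = re.compile(r"\((?P<wwid>.+)\)")

sample = """mpatha (36e843b658476b7ed5bc1d4d10d9b1fde) dm-2 VENDOR,PRODUCT
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 6:0:0:1 sdb 8:16 active undef running
  `- 7:0:0:1 sdc 8:32 active undef running
"""

lines = sample.strip().split('\n')
# First token of the header is the map name; the WWID sits in parentheses.
mdev_name = lines[0].split(" ")[0]
wwid_search = MULTIPATH_WWID_REGEX.search(lines[0])
mdev_id = wwid_search.group('wwid') if wwid_search else mdev_name
devices = []
for dev_line in lines[3:]:           # path lines start after the header
    if 'policy' in dev_line:
        continue
    dev_info = dev_line.lstrip(' |-`').split()
    address = dev_info[0].split(":")  # host:channel:id:lun
    devices.append({'device': '/dev/%s' % dev_info[1],
                    'host': address[0], 'channel': address[1],
                    'id': address[2], 'lun': address[3]})
print(mdev_name, mdev_id, [d['device'] for d in devices])
```

The `lstrip(' |-`')` call is what removes the ASCII-art tree prefix in front of each path line before the fields are split.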
def get_device_size(self, device):
"""Get the size in bytes of a volume."""
(out, _err) = self._execute('blockdev', '--getsize64',
device, run_as_root=True,
root_helper=self._root_helper)
var = six.text_type(out.strip())
if var.isnumeric():
return int(var)
else:
return None
def multipath_reconfigure(self):
"""Issue a multipathd reconfigure.
When attachments come and go, the multipathd seems
to get lost and not see the maps. This causes
resize map to fail 100%. To overcome this we have
to issue a reconfigure prior to resize map.
"""
(out, _err) = self._execute('multipathd', 'reconfigure',
run_as_root=True,
root_helper=self._root_helper)
return out
def multipath_resize_map(self, mpath_id):
"""Issue a multipath resize map on device.
This forces the multipath daemon to update its
size information for a particular multipath device.
"""
(out, _err) = self._execute('multipathd', 'resize', 'map', mpath_id,
run_as_root=True,
root_helper=self._root_helper)
return out
def extend_volume(self, volume_paths):
"""Signal the SCSI subsystem to test for volume resize.
This function tries to signal the local system's kernel
that an already attached volume might have been resized.
"""
LOG.debug("extend volume %s", volume_paths)
for volume_path in volume_paths:
device = self.get_device_info(volume_path)
LOG.debug("Volume device info = %s", device)
device_id = ("%(host)s:%(channel)s:%(id)s:%(lun)s" %
{'host': device['host'],
'channel': device['channel'],
'id': device['id'],
'lun': device['lun']})
scsi_path = ("/sys/bus/scsi/drivers/sd/%(device_id)s" %
{'device_id': device_id})
size = self.get_device_size(volume_path)
LOG.debug("Starting size: %s", size)
# now issue the device rescan
rescan_path = "%(scsi_path)s/rescan" % {'scsi_path': scsi_path}
self.echo_scsi_command(rescan_path, "1")
new_size = self.get_device_size(volume_path)
LOG.debug("volume size after scsi device rescan %s", new_size)
scsi_wwn = self.get_scsi_wwn(volume_paths[0])
mpath_device = self.find_multipath_device_path(scsi_wwn)
if mpath_device:
# Force a reconfigure so that resize works
self.multipath_reconfigure()
size = self.get_device_size(mpath_device)
LOG.info("mpath(%(device)s) current size %(size)s",
{'device': mpath_device, 'size': size})
result = self.multipath_resize_map(scsi_wwn)
if 'fail' in result:
LOG.error("Multipathd failed to update the size mapping of "
"multipath device %(scsi_wwn)s volume %(volume)s",
{'scsi_wwn': scsi_wwn, 'volume': volume_paths})
return None
new_size = self.get_device_size(mpath_device)
LOG.info("mpath(%(device)s) new size %(size)s",
{'device': mpath_device, 'size': new_size})
return new_size
def process_lun_id(self, lun_ids):
if isinstance(lun_ids, list):
processed = []
for x in lun_ids:
x = self._format_lun_id(x)
processed.append(x)
else:
processed = self._format_lun_id(lun_ids)
return processed
def _format_lun_id(self, lun_id):
# make sure lun_id is an int
lun_id = int(lun_id)
if lun_id < 256:
return lun_id
else:
return ("0x%04x%04x00000000" %
(lun_id & 0xffff, lun_id >> 16 & 0xffff))
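`_format_lun_id` is a pure function, so its behavior is easy to pin down with a standalone sketch of the same logic:

```python
def format_lun_id(lun_id):
    # LUNs below 256 pass through as plain integers; larger ones are
    # rendered in the 64-bit hexadecimal address form, low 16 bits first.
    lun_id = int(lun_id)
    if lun_id < 256:
        return lun_id
    return "0x%04x%04x00000000" % (lun_id & 0xffff, lun_id >> 16 & 0xffff)
```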
def get_hctl(self, session, lun):
"""Given an iSCSI session return the host, channel, target, and lun."""
glob_str = '/sys/class/iscsi_host/host*/device/session' + session
paths = glob.glob(glob_str + '/target*')
if paths:
__, channel, target = os.path.split(paths[0])[1].split(':')
# Check if we can get the host
else:
target = channel = '-'
paths = glob.glob(glob_str)
if not paths:
LOG.debug('No hctl found on session %s with lun %s', session, lun)
return None
# Extract the host number from the path
host = paths[0][26:paths[0].index('/', 26)]
res = (host, channel, target, lun)
LOG.debug('HCTL %s found on session %s with lun %s', res, session, lun)
return res
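The slice `paths[0][26:...]` above relies on the fixed length of the sysfs prefix: 26 is `len('/sys/class/iscsi_host/host')`. A small sketch of that extraction (session paths invented):

```python
PREFIX = '/sys/class/iscsi_host/host'  # len(PREFIX) == 26, the constant above

def host_from_session_path(path):
    # The host number is everything between the fixed prefix and the next '/'.
    return path[len(PREFIX):path.index('/', len(PREFIX))]
```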
def device_name_by_hctl(self, session, hctl):
"""Find the device name given a session and the hctl.
:param session: A string with the session number
:param hctl: An iterable with the host, channel, target, and lun as
passed to scan. i.e.: ('5', '-', '-', '0')
"""
if '-' in hctl:
hctl = ['*' if x == '-' else x for x in hctl]
path = ('/sys/class/scsi_host/host%(h)s/device/session%(s)s/target'
'%(h)s:%(c)s:%(t)s/%(h)s:%(c)s:%(t)s:%(l)s/block/*' %
{'h': hctl[0], 'c': hctl[1], 't': hctl[2], 'l': hctl[3],
's': session})
# Sort devices and return the first so we don't return a partition
devices = sorted(glob.glob(path))
device = os.path.split(devices[0])[1] if devices else None
LOG.debug('Searching for a device in session %s and hctl %s yielded: %s',
session, hctl, device)
return device
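The glob built above substitutes a shell wildcard for each unknown h/c/t/l position (reported as `'-'`); a sketch of just the pattern construction:

```python
def build_block_glob(session, hctl):
    # '-' means the position is unknown, so it becomes a '*' wildcard.
    if '-' in hctl:
        hctl = ['*' if x == '-' else x for x in hctl]
    return ('/sys/class/scsi_host/host%(h)s/device/session%(s)s/target'
            '%(h)s:%(c)s:%(t)s/%(h)s:%(c)s:%(t)s:%(l)s/block/*' %
            {'h': hctl[0], 'c': hctl[1], 't': hctl[2], 'l': hctl[3],
             's': session})
```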
def scan_iscsi(self, host, channel='-', target='-', lun='-'):
"""Send an iSCSI scan request given the host and optionally the ctl."""
LOG.debug('Scanning host %(host)s c: %(channel)s, '
't: %(target)s, l: %(lun)s)',
{'host': host, 'channel': channel,
'target': target, 'lun': lun})
self.echo_scsi_command('/sys/class/scsi_host/host%s/scan' % host,
'%(c)s %(t)s %(l)s' % {'c': channel,
't': target,
'l': lun})
def multipath_add_wwid(self, wwid):
"""Add a wwid to the list of known multipath wwids.
This has the effect of multipathd being willing to create a dm for a
multipath even when there's only 1 device.
"""
out, err = self._execute('multipath', '-a', wwid,
run_as_root=True,
check_exit_code=False,
root_helper=self._root_helper)
return out.strip() == "wwid '" + wwid + "' added"
def multipath_add_path(self, realpath):
"""Add a path to multipathd for monitoring.
This has the effect of multipathd checking an already checked device
for multipath.
Together with `multipath_add_wwid` we can create a multipath when
there's only 1 path.
"""
stdout, stderr = self._execute('multipathd', 'add', 'path', realpath,
run_as_root=True, timeout=5,
check_exit_code=False,
root_helper=self._root_helper)
return stdout.strip() == 'ok'


@ -1,114 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Generic SheepDog Connection Utilities.
"""
import eventlet
import io
from oslo_concurrency import processutils
from os_brick import exception
from os_brick.i18n import _
class SheepdogVolumeIOWrapper(io.RawIOBase):
"""File-like object with Sheepdog backend."""
def __init__(self, addr, port, volume, snapshot_name=None):
self._addr = addr
self._port = port
self._vdiname = volume
self._snapshot_name = snapshot_name
self._offset = 0
# SheepdogVolumeIOWrapper instance becomes invalid
# if a write error occurs.
self._valid = True
def _execute(self, cmd, data=None):
try:
# NOTE(yamada-h): processutils.execute causes busy waiting
# under eventlet.
# To avoid wasting CPU resources, it should not be used for
# the command which takes long time to execute.
# For workaround, we replace a subprocess module with
# the original one while only executing a read/write command.
_processutils_subprocess = processutils.subprocess
processutils.subprocess = eventlet.patcher.original('subprocess')
return processutils.execute(*cmd, process_input=data)[0]
except (processutils.ProcessExecutionError, OSError):
self._valid = False
raise exception.VolumeDriverException(name=self._vdiname)
finally:
processutils.subprocess = _processutils_subprocess
def read(self, length=None):
if not self._valid:
raise exception.VolumeDriverException(name=self._vdiname)
cmd = ['dog', 'vdi', 'read', '-a', self._addr, '-p', self._port]
if self._snapshot_name:
cmd.extend(('-s', self._snapshot_name))
cmd.extend((self._vdiname, self._offset))
if length:
cmd.append(length)
data = self._execute(cmd)
self._offset += len(data)
return data
def write(self, data):
if not self._valid:
raise exception.VolumeDriverException(name=self._vdiname)
length = len(data)
cmd = ('dog', 'vdi', 'write', '-a', self._addr, '-p', self._port,
self._vdiname, self._offset, length)
self._execute(cmd, data)
self._offset += length
return length
def seek(self, offset, whence=0):
if not self._valid:
raise exception.VolumeDriverException(name=self._vdiname)
if whence == 0:
# SEEK_SET or 0 - start of the stream (the default);
# offset should be zero or positive
new_offset = offset
elif whence == 1:
# SEEK_CUR or 1 - current stream position; offset may be negative
new_offset = self._offset + offset
else:
# SEEK_END or 2 - end of the stream; offset is usually negative
# TODO(yamada-h): Support SEEK_END
raise IOError(_("Invalid argument - whence=%s not supported.") %
whence)
if new_offset < 0:
raise IOError(_("Invalid argument - negative seek offset."))
self._offset = new_offset
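The seek handling above implements only SEEK_SET and SEEK_CUR; a standalone sketch of the offset arithmetic (the real wrapper also invalidates itself after write errors):

```python
import io

def sheepdog_new_offset(current, offset, whence=io.SEEK_SET):
    # SEEK_SET: absolute position; SEEK_CUR: relative; SEEK_END unsupported.
    if whence == io.SEEK_SET:
        candidate = offset
    elif whence == io.SEEK_CUR:
        candidate = current + offset
    else:
        raise IOError("Invalid argument - whence=%s not supported." % whence)
    if candidate < 0:
        raise IOError("Invalid argument - negative seek offset.")
    return candidate
```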
def tell(self):
return self._offset
def flush(self):
pass
def fileno(self):
"""Sheepdog does not have support for fileno so we raise IOError.
Raising IOError is the recommended way to notify the caller that the
interface is not supported - see http://docs.python.org/2/library/io.html#io.IOBase
"""
raise IOError(_("fileno is not supported by SheepdogVolumeIOWrapper"))


@ -1,116 +0,0 @@
# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_win import utilsfactory
from oslo_log import log as logging
from os_brick import exception
from os_brick.i18n import _
from os_brick import initiator
from os_brick.initiator import initiator_connector
from os_brick import utils
LOG = logging.getLogger(__name__)
class BaseWindowsConnector(initiator_connector.InitiatorConnector):
platform = initiator.PLATFORM_ALL
os_type = initiator.OS_TYPE_WINDOWS
DEFAULT_DEVICE_SCAN_INTERVAL = 2
def __init__(self, root_helper=None, *args, **kwargs):
super(BaseWindowsConnector, self).__init__(root_helper,
*args, **kwargs)
self.device_scan_interval = kwargs.pop(
'device_scan_interval', self.DEFAULT_DEVICE_SCAN_INTERVAL)
self._diskutils = utilsfactory.get_diskutils()
@staticmethod
def check_multipath_support(enforce_multipath):
hostutils = utilsfactory.get_hostutils()
mpio_enabled = hostutils.check_server_feature(
hostutils.FEATURE_MPIO)
if not mpio_enabled:
err_msg = _("Using multipath connections for iSCSI and FC disks "
"requires the Multipath IO Windows feature to be "
"enabled. MPIO must be configured to claim such "
"devices.")
LOG.error(err_msg)
if enforce_multipath:
raise exception.BrickException(err_msg)
return False
return True
@staticmethod
def get_connector_properties(*args, **kwargs):
multipath = kwargs['multipath']
enforce_multipath = kwargs['enforce_multipath']
props = {}
props['multipath'] = (
multipath and
BaseWindowsConnector.check_multipath_support(enforce_multipath))
return props
def _get_scsi_wwn(self, device_number):
# NOTE(lpetrut): The Linux connectors use scsi_id to retrieve the
# disk unique id, which prepends the identifier type to the unique id
# retrieved from the page 83 SCSI inquiry data. We'll do the same
# to remain consistent.
disk_uid, uid_type = self._diskutils.get_disk_uid_and_uid_type(
device_number)
scsi_wwn = '%s%s' % (uid_type, disk_uid)
return scsi_wwn
def check_valid_device(self, path, *args, **kwargs):
try:
with open(path, 'r') as dev:
dev.read(1)
except IOError:
LOG.exception(
"Failed to access the device on the path "
"%(path)s", {"path": path})
return False
return True
def get_all_available_volumes(self):
# TODO(lpetrut): query for disks based on the protocol used.
return []
def _check_device_paths(self, device_paths):
if len(device_paths) > 1:
err_msg = _("Multiple volume paths were found: %s. This can "
"occur if multipath is used and MPIO is not "
"properly configured, thus not claiming the device "
"paths. This issue must be addressed urgently as "
"it can lead to data corruption.")
raise exception.BrickException(err_msg % device_paths)
@utils.trace
def extend_volume(self, connection_properties):
volume_paths = self.get_volume_paths(connection_properties)
if not volume_paths:
err_msg = _("Could not find the disk. Extend failed.")
raise exception.NotFound(err_msg)
device_path = volume_paths[0]
device_number = self._diskutils.get_device_number_from_device_name(
device_path)
self._diskutils.refresh_disk(device_number)
def get_search_path(self):
return None


@ -1,131 +0,0 @@
# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import time
from os_win import utilsfactory
from oslo_log import log as logging
from os_brick import exception
from os_brick.initiator.windows import base as win_conn_base
from os_brick import utils
LOG = logging.getLogger(__name__)
class WindowsFCConnector(win_conn_base.BaseWindowsConnector):
def __init__(self, *args, **kwargs):
super(WindowsFCConnector, self).__init__(*args, **kwargs)
self._fc_utils = utilsfactory.get_fc_utils()
@staticmethod
def get_connector_properties(*args, **kwargs):
props = {}
fc_utils = utilsfactory.get_fc_utils()
fc_utils.refresh_hba_configuration()
fc_hba_ports = fc_utils.get_fc_hba_ports()
if fc_hba_ports:
wwnns = []
wwpns = []
for port in fc_hba_ports:
wwnns.append(port['node_name'])
wwpns.append(port['port_name'])
props['wwpns'] = wwpns
props['wwnns'] = list(set(wwnns))
return props
@utils.trace
def connect_volume(self, connection_properties):
volume_paths = self.get_volume_paths(connection_properties)
if not volume_paths:
raise exception.NoFibreChannelVolumeDeviceFound()
device_path = volume_paths[0]
device_number = self._diskutils.get_device_number_from_device_name(
device_path)
scsi_wwn = self._get_scsi_wwn(device_number)
device_info = {'type': 'block',
'path': device_path,
'number': device_number,
'scsi_wwn': scsi_wwn}
return device_info
@utils.trace
def get_volume_paths(self, connection_properties):
# Returns a list containing at most one disk path such as
# \\.\PhysicalDrive4.
#
# If multipath is used and the MPIO service is properly configured
# to claim the disks, we'll still get a single device path, having
# the same format, which will be used for all the IO operations.
disk_paths = set()
for attempt in range(self.device_scan_attempts):
self._diskutils.rescan_disks()
volume_mappings = self._get_fc_volume_mappings(
connection_properties)
LOG.debug("Retrieved volume mappings %(vol_mappings)s "
"for volume %(conn_props)s",
dict(vol_mappings=volume_mappings,
conn_props=connection_properties))
# Because of MPIO, we may not be able to get the device name
# from a specific mapping if the disk was accessed through
# another HBA at that moment. In that case, the device name
# will show up as an empty string.
for mapping in volume_mappings:
device_name = mapping['device_name']
if device_name:
disk_paths.add(device_name)
if disk_paths:
break
time.sleep(self.device_scan_interval)
self._check_device_paths(disk_paths)
return list(disk_paths)
def _get_fc_volume_mappings(self, connection_properties):
# Note(lpetrut): All the WWNs returned by os-win are upper case.
target_wwpns = [wwpn.upper()
for wwpn in connection_properties['target_wwn']]
target_lun = connection_properties['target_lun']
volume_mappings = []
hba_mappings = self._get_fc_hba_mappings()
for node_name in hba_mappings:
target_mappings = self._fc_utils.get_fc_target_mappings(node_name)
for mapping in target_mappings:
if (mapping['port_name'] in target_wwpns
and mapping['lun'] == target_lun):
volume_mappings.append(mapping)
return volume_mappings
def _get_fc_hba_mappings(self):
mappings = collections.defaultdict(list)
fc_hba_ports = self._fc_utils.get_fc_hba_ports()
for port in fc_hba_ports:
mappings[port['node_name']].append(port['port_name'])
return mappings
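`_get_fc_hba_mappings` is a straightforward `defaultdict` grouping of port WWPNs under their node WWNN; sketched standalone (all WWN values below are invented):

```python
import collections

def map_ports_by_node(fc_hba_ports):
    # Group each HBA port's WWPN under its owning node WWNN.
    mappings = collections.defaultdict(list)
    for port in fc_hba_ports:
        mappings[port['node_name']].append(port['port_name'])
    return mappings
```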
@utils.trace
def disconnect_volume(self, connection_properties,
force=False, ignore_errors=False):
pass


@ -1,166 +0,0 @@
# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_win import exceptions as os_win_exc
from os_win import utilsfactory
from oslo_log import log as logging
from os_brick import exception
from os_brick.i18n import _
from os_brick.initiator.connectors import base_iscsi
from os_brick.initiator.windows import base as win_conn_base
from os_brick import utils
LOG = logging.getLogger(__name__)
class WindowsISCSIConnector(win_conn_base.BaseWindowsConnector,
base_iscsi.BaseISCSIConnector):
def __init__(self, *args, **kwargs):
super(WindowsISCSIConnector, self).__init__(*args, **kwargs)
self.use_multipath = kwargs.pop('use_multipath', False)
self.initiator_list = kwargs.pop('initiator_list', [])
self._iscsi_utils = utilsfactory.get_iscsi_initiator_utils()
self.validate_initiators()
def validate_initiators(self):
"""Validates the list of requested initiator HBAs
Validates the list of requested initiator HBAs to be used
when establishing iSCSI sessions.
"""
valid_initiator_list = True
if not self.initiator_list:
LOG.info("No iSCSI initiator was explicitly requested. "
"The Microsoft iSCSI initiator will choose the "
"initiator when establishing sessions.")
else:
available_initiators = self._iscsi_utils.get_iscsi_initiators()
for initiator in self.initiator_list:
if initiator not in available_initiators:
LOG.warning("The requested initiator %(req_initiator)s "
"is not in the list of available initiators: "
"%(avail_initiators)s.",
dict(req_initiator=initiator,
avail_initiators=available_initiators))
valid_initiator_list = False
return valid_initiator_list
def get_initiator(self):
"""Returns the iSCSI initiator node name."""
return self._iscsi_utils.get_iscsi_initiator()
@staticmethod
def get_connector_properties(*args, **kwargs):
iscsi_utils = utilsfactory.get_iscsi_initiator_utils()
initiator = iscsi_utils.get_iscsi_initiator()
return dict(initiator=initiator)
def _get_all_paths(self, connection_properties):
initiator_list = self.initiator_list or [None]
all_targets = self._get_all_targets(connection_properties)
paths = [(initiator_name, target_portal, target_iqn, target_lun)
for target_portal, target_iqn, target_lun in all_targets
for initiator_name in initiator_list]
return paths
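`_get_all_paths` is a cartesian product of initiators and targets, with `[None]` as a fallback so the Microsoft initiator picks the HBA itself. A standalone sketch (portals and IQNs invented):

```python
def get_all_paths(initiator_list, all_targets):
    # Pair every (portal, iqn, lun) target with every requested initiator;
    # with no explicit initiators, a single None lets the OS choose.
    initiators = initiator_list or [None]
    return [(ini, portal, iqn, lun)
            for portal, iqn, lun in all_targets
            for ini in initiators]
```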
@utils.trace
def connect_volume(self, connection_properties):
volume_connected = False
for (initiator_name,
target_portal,
target_iqn,
target_lun) in self._get_all_paths(connection_properties):
try:
LOG.info("Attempting to establish an iSCSI session to "
"target %(target_iqn)s on portal %(target_portal)s "
"accessing LUN %(target_lun)s using initiator "
"%(initiator_name)s.",
dict(target_portal=target_portal,
target_iqn=target_iqn,
target_lun=target_lun,
initiator_name=initiator_name))
self._iscsi_utils.login_storage_target(
target_lun=target_lun,
target_iqn=target_iqn,
target_portal=target_portal,
auth_username=connection_properties.get('auth_username'),
auth_password=connection_properties.get('auth_password'),
mpio_enabled=self.use_multipath,
initiator_name=initiator_name,
ensure_lun_available=False)
self._iscsi_utils.ensure_lun_available(
target_iqn=target_iqn,
target_lun=target_lun,
rescan_attempts=self.device_scan_attempts,
retry_interval=self.device_scan_interval)
if not volume_connected:
(device_number,
device_path) = (
self._iscsi_utils.get_device_number_and_path(
target_iqn, target_lun))
volume_connected = True
if not self.use_multipath:
break
except os_win_exc.OSWinException:
LOG.exception("Could not establish the iSCSI session.")
if not volume_connected:
raise exception.BrickException(
_("Could not connect volume %s.") % connection_properties)
scsi_wwn = self._get_scsi_wwn(device_number)
device_info = {'type': 'block',
'path': device_path,
'number': device_number,
'scsi_wwn': scsi_wwn}
return device_info
@utils.trace
def disconnect_volume(self, connection_properties,
force=False, ignore_errors=False):
# We want to refresh the cached information first.
self._diskutils.rescan_disks()
for (target_portal,
target_iqn,
target_lun) in self._get_all_targets(connection_properties):
luns = self._iscsi_utils.get_target_luns(target_iqn)
# We disconnect the target only if it does not expose other
# luns which may be in use.
if not luns or luns == [target_lun]:
self._iscsi_utils.logout_storage_target(target_iqn)
@utils.trace
def get_volume_paths(self, connection_properties):
device_paths = set()
for (target_portal,
target_iqn,
target_lun) in self._get_all_targets(connection_properties):
(device_number,
device_path) = self._iscsi_utils.get_device_number_and_path(
target_iqn, target_lun)
if device_path:
device_paths.add(device_path)
self._check_device_paths(device_paths)
return list(device_paths)


@ -1,95 +0,0 @@
# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from os_win import utilsfactory
from os_brick.initiator.windows import base as win_conn_base
from os_brick.remotefs import windows_remotefs as remotefs
from os_brick import utils
class WindowsSMBFSConnector(win_conn_base.BaseWindowsConnector):
def __init__(self, *args, **kwargs):
super(WindowsSMBFSConnector, self).__init__(*args, **kwargs)
# If this flag is set, we use the local paths in case of local
# shares. This is in fact mandatory in some cases, for example
# for the Hyper-C scenario.
self._local_path_for_loopback = kwargs.get('local_path_for_loopback',
False)
self._remotefsclient = remotefs.WindowsRemoteFsClient(
mount_type='smbfs',
*args, **kwargs)
self._smbutils = utilsfactory.get_smbutils()
@staticmethod
def get_connector_properties(*args, **kwargs):
# No connector properties updates in this case.
return {}
@utils.trace
def connect_volume(self, connection_properties):
self.ensure_share_mounted(connection_properties)
disk_path = self._get_disk_path(connection_properties)
device_info = {'type': 'file',
'path': disk_path}
return device_info
@utils.trace
def disconnect_volume(self, connection_properties,
force=False, ignore_errors=False):
export_path = self._get_export_path(connection_properties)
self._remotefsclient.unmount(export_path)
def _get_export_path(self, connection_properties):
return connection_properties['export'].replace('/', '\\')
def _get_disk_path(self, connection_properties):
# This is expected to be the share address, as a UNC path.
export_path = self._get_export_path(connection_properties)
mount_base = self._remotefsclient.get_mount_base()
use_local_path = (self._local_path_for_loopback and
self._smbutils.is_local_share(export_path))
disk_dir = export_path
if mount_base:
# This will be a symlink pointing to either the share
# path directly or to the local share path, if requested
# and available.
disk_dir = self._remotefsclient.get_mount_point(
export_path)
elif use_local_path:
share_name = self._remotefsclient.get_share_name(export_path)
disk_dir = self._remotefsclient.get_local_share_path(share_name)
disk_name = connection_properties['name']
disk_path = os.path.join(disk_dir, disk_name)
return disk_path
def get_search_path(self):
return self._remotefsclient.get_mount_base()
@utils.trace
def get_volume_paths(self, connection_properties):
return [self._get_disk_path(connection_properties)]
def ensure_share_mounted(self, connection_properties):
export_path = self._get_export_path(connection_properties)
mount_options = connection_properties.get('options')
self._remotefsclient.mount(export_path, mount_options)
def extend_volume(self, connection_properties):
raise NotImplementedError


@ -1,829 +0,0 @@
# Copyright 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
LVM class for performing LVM operations.
"""
import math
import os
import re
from os_brick import exception
from os_brick import executor
from os_brick.privileged import rootwrap as priv_rootwrap
from os_brick import utils
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from oslo_utils import excutils
from six import moves
LOG = logging.getLogger(__name__)
class LVM(executor.Executor):
"""LVM object to enable various LVM related operations."""
LVM_CMD_PREFIX = ['env', 'LC_ALL=C']
def __init__(self, vg_name, root_helper, create_vg=False,
physical_volumes=None, lvm_type='default',
executor=None, lvm_conf=None):
"""Initialize the LVM object.
The LVM object is based on an LVM VolumeGroup, one instantiation
for each VolumeGroup you have/use.
:param vg_name: Name of existing VG or VG to create
:param root_helper: Execution root_helper method to use
:param create_vg: Indicates the VG doesn't exist
and we want to create it
:param physical_volumes: List of PVs to build VG on
:param lvm_type: VG and Volume type (default, or thin)
:param executor: Execute method to use, None uses
oslo_concurrency.processutils
"""
super(LVM, self).__init__(execute=executor, root_helper=root_helper)
self.vg_name = vg_name
self.pv_list = []
self.vg_size = 0.0
self.vg_free_space = 0.0
self.vg_lv_count = 0
self.vg_uuid = None
self.vg_thin_pool = None
self.vg_thin_pool_size = 0.0
self.vg_thin_pool_free_space = 0.0
self._supports_snapshot_lv_activation = None
self._supports_lvchange_ignoreskipactivation = None
self.vg_provisioned_capacity = 0.0
# Ensure LVM_SYSTEM_DIR has been added to LVM.LVM_CMD_PREFIX
# before the first LVM command is executed, and use the directory
# where the specified lvm_conf file is located as the value.
if lvm_conf and os.path.isfile(lvm_conf):
lvm_sys_dir = os.path.dirname(lvm_conf)
LVM.LVM_CMD_PREFIX = ['env',
'LC_ALL=C',
'LVM_SYSTEM_DIR=' + lvm_sys_dir]
if create_vg and physical_volumes is not None:
self.pv_list = physical_volumes
try:
self._create_vg(physical_volumes)
except putils.ProcessExecutionError as err:
LOG.exception('Error creating Volume Group')
LOG.error('Cmd :%s', err.cmd)
LOG.error('StdOut :%s', err.stdout)
LOG.error('StdErr :%s', err.stderr)
raise exception.VolumeGroupCreationFailed(vg_name=self.vg_name)
if self._vg_exists() is False:
LOG.error('Unable to locate Volume Group %s', vg_name)
raise exception.VolumeGroupNotFound(vg_name=vg_name)
# NOTE: we assume that the VG has been activated outside of Cinder
if lvm_type == 'thin':
pool_name = "%s-pool" % self.vg_name
if self.get_volume(pool_name) is None:
try:
self.create_thin_pool(pool_name)
except putils.ProcessExecutionError:
# Maybe we just lost the race against another copy of
# this driver being in init in parallel - e.g.
# cinder-volume and cinder-backup starting in parallel
if self.get_volume(pool_name) is None:
raise
self.vg_thin_pool = pool_name
self.activate_lv(self.vg_thin_pool)
self.pv_list = self.get_all_physical_volumes(root_helper, vg_name)
def _vg_exists(self):
"""Simple check to see if VG exists.
:returns: True if vg specified in object exists, else False
"""
exists = False
cmd = LVM.LVM_CMD_PREFIX + ['vgs', '--noheadings',
'-o', 'name', self.vg_name]
(out, _err) = self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
if out is not None:
volume_groups = out.split()
if self.vg_name in volume_groups:
exists = True
return exists
def _create_vg(self, pv_list):
cmd = ['vgcreate', self.vg_name, ','.join(pv_list)]
self._execute(*cmd, root_helper=self._root_helper, run_as_root=True)
def _get_vg_uuid(self):
cmd = LVM.LVM_CMD_PREFIX + ['vgs', '--noheadings',
'-o', 'uuid', self.vg_name]
(out, _err) = self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
if out is not None:
return out.split()
else:
return []
def _get_thin_pool_free_space(self, vg_name, thin_pool_name):
"""Returns available thin pool free space.
:param vg_name: the vg where the pool is placed
:param thin_pool_name: the thin pool to gather info for
:returns: Free space in GB (float), calculated using data_percent
"""
cmd = LVM.LVM_CMD_PREFIX + ['lvs', '--noheadings', '--unit=g',
'-o', 'size,data_percent', '--separator',
':', '--nosuffix']
# NOTE(gfidente): data_percent only applies to some types of LV so we
# make sure to append the actual thin pool name
cmd.append("/dev/%s/%s" % (vg_name, thin_pool_name))
free_space = 0.0
try:
(out, err) = self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
if out is not None:
out = out.strip()
data = out.split(':')
pool_size = float(data[0])
data_percent = float(data[1])
consumed_space = pool_size / 100 * data_percent
free_space = pool_size - consumed_space
free_space = round(free_space, 2)
except putils.ProcessExecutionError as err:
LOG.exception('Error querying thin pool about data_percent')
LOG.error('Cmd :%s', err.cmd)
LOG.error('StdOut :%s', err.stdout)
LOG.error('StdErr :%s', err.stderr)
return free_space
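The free-space computation above reduces to a little arithmetic on the `size:data_percent` pair that `lvs` prints; sketched with invented numbers:

```python
def thin_pool_free_space(lvs_line):
    # With --separator ':' and --nosuffix, lvs prints e.g. "9.50:10.00"
    # (size in GiB, then percent of the pool that is consumed).
    size_str, percent_str = lvs_line.strip().split(':')
    pool_size = float(size_str)
    data_percent = float(percent_str)
    consumed = pool_size / 100 * data_percent
    return round(pool_size - consumed, 2)
```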
@staticmethod
def get_lvm_version(root_helper):
"""Static method to get LVM version from system.
:param root_helper: root_helper to use for execute
:returns: version 3-tuple
"""
cmd = LVM.LVM_CMD_PREFIX + ['vgs', '--version']
(out, _err) = priv_rootwrap.execute(*cmd,
root_helper=root_helper,
run_as_root=True)
lines = out.split('\n')
for line in lines:
if 'LVM version' in line:
version_list = line.split()
# NOTE(gfidente): version is formatted as follows:
# major.minor.patchlevel(library API version)[-customisation]
version = version_list[2]
version_filter = r"(\d+)\.(\d+)\.(\d+).*"
r = re.search(version_filter, version)
version_tuple = tuple(map(int, r.group(1, 2, 3)))
return version_tuple
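The version parsing above can be exercised against a canned `vgs --version` line; the sample output below follows the format described in the code's own comment and is otherwise assumed:

```python
import re

def parse_lvm_version(version_output):
    # The line of interest looks like:
    #   LVM version:     2.02.95(2) (2012-03-06)
    for line in version_output.split('\n'):
        if 'LVM version' in line:
            version = line.split()[2]  # e.g. '2.02.95(2)'
            match = re.search(r"(\d+)\.(\d+)\.(\d+).*", version)
            return tuple(map(int, match.group(1, 2, 3)))
```

Tuple comparison is what makes checks like `>= (2, 2, 95)` in `supports_thin_provisioning` work.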
@staticmethod
def supports_thin_provisioning(root_helper):
"""Static method to check for thin LVM support on a system.
:param root_helper: root_helper to use for execute
:returns: True if supported, False otherwise
"""
return LVM.get_lvm_version(root_helper) >= (2, 2, 95)
@property
def supports_snapshot_lv_activation(self):
"""Property indicating whether snap activation changes are supported.
Check for LVM version >= 2.02.91.
(LVM2 git: e8a40f6 Allow to activate snapshot)
:returns: True/False indicating support
"""
if self._supports_snapshot_lv_activation is not None:
return self._supports_snapshot_lv_activation
self._supports_snapshot_lv_activation = (
self.get_lvm_version(self._root_helper) >= (2, 2, 91))
return self._supports_snapshot_lv_activation
@property
def supports_lvchange_ignoreskipactivation(self):
"""Property indicating whether lvchange can ignore skip activation.
Check for LVM version >= 2.02.99.
(LVM2 git: ab789c1bc add --ignoreactivationskip to lvchange)
"""
if self._supports_lvchange_ignoreskipactivation is not None:
return self._supports_lvchange_ignoreskipactivation
self._supports_lvchange_ignoreskipactivation = (
self.get_lvm_version(self._root_helper) >= (2, 2, 99))
return self._supports_lvchange_ignoreskipactivation
@property
def supports_full_pool_create(self):
"""Property indicating whether 100% pool creation is supported.
Check for LVM version >= 2.02.115.
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=998347
"""
if self.get_lvm_version(self._root_helper) >= (2, 2, 115):
return True
else:
return False
@staticmethod
def get_lv_info(root_helper, vg_name=None, lv_name=None):
"""Retrieve info about LVs (all, in a VG, or a single LV).
:param root_helper: root_helper to use for execute
:param vg_name: optional, gathers info for only the specified VG
:param lv_name: optional, gathers info for only the specified LV
:returns: List of Dictionaries with LV info
"""
cmd = LVM.LVM_CMD_PREFIX + ['lvs', '--noheadings', '--unit=g',
'-o', 'vg_name,name,size', '--nosuffix']
if lv_name is not None and vg_name is not None:
cmd.append("%s/%s" % (vg_name, lv_name))
elif vg_name is not None:
cmd.append(vg_name)
try:
(out, _err) = priv_rootwrap.execute(*cmd,
root_helper=root_helper,
run_as_root=True)
except putils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception(reraise=True) as ctx:
if "not found" in err.stderr or "Failed to find" in err.stderr:
ctx.reraise = False
LOG.info("Logical Volume not found when querying "
"LVM info. (vg_name=%(vg)s, lv_name=%(lv)s)",
{'vg': vg_name, 'lv': lv_name})
out = None
lv_list = []
if out is not None:
volumes = out.split()
iterator = moves.zip(*[iter(volumes)] * 3) # pylint: disable=E1101
for vg, name, size in iterator:
lv_list.append({"vg": vg, "name": name, "size": size})
return lv_list
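The regrouping step in `get_lv_info` uses the zip-over-a-single-iterator idiom: because all three positions of the `zip` share one iterator, the flat token stream is consumed three tokens at a time. A minimal standalone sketch with canned `lvs` output:

```python
# Split the whole "lvs" output on whitespace, then regroup the flat
# token list into (vg, name, size) triples, as get_lv_info does above.
out = """
  stack-vg stack-pool 9.51
  stack-vg volume-13380d16 1.00
"""
volumes = out.split()
lv_list = [{"vg": vg, "name": name, "size": size}
           for vg, name, size in zip(*[iter(volumes)] * 3)]
print(lv_list)
```

This only works because LVM names cannot contain whitespace, so every row contributes exactly three tokens.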
def get_volumes(self, lv_name=None):
"""Get all LVs associated with this instantiation (VG).
:returns: List of Dictionaries with LV info
"""
return self.get_lv_info(self._root_helper,
self.vg_name,
lv_name)
def get_volume(self, name):
"""Get reference object of volume specified by name.
:returns: dict representation of Logical Volume if exists
"""
ref_list = self.get_volumes(name)
for r in ref_list:
if r['name'] == name:
return r
return None
@staticmethod
def get_all_physical_volumes(root_helper, vg_name=None):
"""Static method to get all PVs on a system.
:param root_helper: root_helper to use for execute
:param vg_name: optional, gathers info for only the specified VG
:returns: List of Dictionaries with PV info
"""
field_sep = '|'
cmd = LVM.LVM_CMD_PREFIX + ['pvs', '--noheadings',
'--unit=g',
'-o', 'vg_name,name,size,free',
'--separator', field_sep,
'--nosuffix']
(out, _err) = priv_rootwrap.execute(*cmd,
root_helper=root_helper,
run_as_root=True)
pvs = out.split()
if vg_name is not None:
pvs = [pv for pv in pvs if vg_name == pv.split(field_sep)[0]]
pv_list = []
for pv in pvs:
fields = pv.split(field_sep)
pv_list.append({'vg': fields[0],
'name': fields[1],
'size': float(fields[2]),
'available': float(fields[3])})
return pv_list
def get_physical_volumes(self):
"""Get all PVs associated with this instantiation (VG).
:returns: List of Dictionaries with PV info
"""
self.pv_list = self.get_all_physical_volumes(self._root_helper,
self.vg_name)
return self.pv_list
@staticmethod
def get_all_volume_groups(root_helper, vg_name=None):
"""Static method to get all VGs on a system.
:param root_helper: root_helper to use for execute
:param vg_name: optional, gathers info for only the specified VG
:returns: List of Dictionaries with VG info
"""
cmd = LVM.LVM_CMD_PREFIX + ['vgs', '--noheadings',
'--unit=g', '-o',
'name,size,free,lv_count,uuid',
'--separator', ':',
'--nosuffix']
if vg_name is not None:
cmd.append(vg_name)
(out, _err) = priv_rootwrap.execute(*cmd,
root_helper=root_helper,
run_as_root=True)
vg_list = []
if out is not None:
vgs = out.split()
for vg in vgs:
fields = vg.split(':')
vg_list.append({'name': fields[0],
'size': float(fields[1]),
'available': float(fields[2]),
'lv_count': int(fields[3]),
'uuid': fields[4]})
return vg_list
def update_volume_group_info(self):
"""Update VG info for this instantiation.
Used to update member fields of object and
provide a dict of info for caller.
:returns: Dictionaries of VG info
"""
vg_list = self.get_all_volume_groups(self._root_helper, self.vg_name)
if len(vg_list) != 1:
LOG.error('Unable to find VG: %s', self.vg_name)
raise exception.VolumeGroupNotFound(vg_name=self.vg_name)
self.vg_size = float(vg_list[0]['size'])
self.vg_free_space = float(vg_list[0]['available'])
self.vg_lv_count = int(vg_list[0]['lv_count'])
self.vg_uuid = vg_list[0]['uuid']
total_vols_size = 0.0
if self.vg_thin_pool is not None:
# NOTE(xyang): If providing only self.vg_name,
# get_lv_info will output info on the thin pool and all
# individual volumes.
# get_lv_info(self._root_helper, 'stack-vg')
# sudo lvs --noheadings --unit=g -o vg_name,name,size
# --nosuffix stack-vg
# stack-vg stack-pool 9.51
# stack-vg volume-13380d16-54c3-4979-9d22-172082dbc1a1 1.00
# stack-vg volume-629e13ab-7759-46a5-b155-ee1eb20ca892 1.00
# stack-vg volume-e3e6281c-51ee-464c-b1a7-db6c0854622c 1.00
#
# If providing both self.vg_name and self.vg_thin_pool,
# get_lv_info will output only info on the thin pool, but not
# individual volumes.
# get_lv_info(self._root_helper, 'stack-vg', 'stack-pool')
# sudo lvs --noheadings --unit=g -o vg_name,name,size
# --nosuffix stack-vg/stack-pool
# stack-vg stack-pool 9.51
#
# We need info on both the thin pool and the volumes,
# therefore we should provide only self.vg_name, but not
# self.vg_thin_pool here.
for lv in self.get_lv_info(self._root_helper,
self.vg_name):
lvsize = lv['size']
# get_lv_info runs "lvs" command with "--nosuffix".
# This removes "g" from "1.00g" and only outputs "1.00".
# Running "lvs" command without "--nosuffix" will output
# "1.00g" if "g" is the unit.
# Remove the unit if it is in lv['size'].
if not lv['size'][-1].isdigit():
lvsize = lvsize[:-1]
if lv['name'] == self.vg_thin_pool:
self.vg_thin_pool_size = lvsize
tpfs = self._get_thin_pool_free_space(self.vg_name,
self.vg_thin_pool)
self.vg_thin_pool_free_space = tpfs
else:
total_vols_size = total_vols_size + float(lvsize)
total_vols_size = round(total_vols_size, 2)
self.vg_provisioned_capacity = total_vols_size
def _calculate_thin_pool_size(self):
"""Calculates the correct size for a thin pool.
Ideally we would use 100% of the containing volume group and be done.
But the 100%VG notation to lvcreate is not implemented and thus cannot
be used. See https://bugzilla.redhat.com/show_bug.cgi?id=998347
Further, some amount of free space must remain in the volume group for
metadata for the contained logical volumes. The exact amount depends
on how much volume sharing you expect.
:returns: An lvcreate-ready string for the number of calculated bytes.
"""
# make sure volume group information is current
self.update_volume_group_info()
# NOTE: supports_full_pool_create is a property, so it must be read
# via the instance; referencing it on the class yields the property
# object itself, which is always truthy.
if self.supports_full_pool_create:
return ["-l", "100%FREE"]
# leave 5% free for metadata
return ["-L", "%sg" % (self.vg_free_space * 0.95)]
def create_thin_pool(self, name=None):
"""Creates a thin provisioning pool for this VG.
The syntax here is slightly different than the default
lvcreate -T, so we'll just write a custom cmd here
and do it.
:param name: Name to use for pool, default is "<vg-name>-pool"
:returns: The size string passed to the lvcreate command
"""
if not LVM.supports_thin_provisioning(self._root_helper):
LOG.error('Requested to setup thin provisioning, '
'however current LVM version does not '
'support it.')
return None
if name is None:
name = '%s-pool' % self.vg_name
vg_pool_name = '%s/%s' % (self.vg_name, name)
size_args = self._calculate_thin_pool_size()
cmd = LVM.LVM_CMD_PREFIX + ['lvcreate', '-T']
cmd.extend(size_args)
cmd.append(vg_pool_name)
LOG.debug("Creating thin pool '%(pool)s' with size %(size)s of "
"total %(free)sg", {'pool': vg_pool_name,
'size': size_args,
'free': self.vg_free_space})
self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
self.vg_thin_pool = name
return
def create_volume(self, name, size_str, lv_type='default', mirror_count=0):
"""Creates a logical volume on the object's VG.
:param name: Name to use when creating Logical Volume
:param size_str: Size to use when creating Logical Volume
:param lv_type: Type of Volume (default or thin)
:param mirror_count: Use LVM mirroring with specified count
"""
if lv_type == 'thin':
pool_path = '%s/%s' % (self.vg_name, self.vg_thin_pool)
cmd = LVM.LVM_CMD_PREFIX + ['lvcreate', '-T', '-V', size_str, '-n',
name, pool_path]
else:
cmd = LVM.LVM_CMD_PREFIX + ['lvcreate', '-n', name, self.vg_name,
'-L', size_str]
if mirror_count > 0:
cmd.extend(['-m', mirror_count, '--nosync',
'--mirrorlog', 'mirrored'])
terras = int(size_str[:-1]) / 1024.0
if terras >= 1.5:
rsize = int(2 ** math.ceil(math.log(terras) / math.log(2)))
# NOTE(vish): Next power of two for region size. See:
# http://red.ht/U2BPOD
cmd.extend(['-R', str(rsize)])
try:
self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
except putils.ProcessExecutionError as err:
LOG.exception('Error creating Volume')
LOG.error('Cmd :%s', err.cmd)
LOG.error('StdOut :%s', err.stdout)
LOG.error('StdErr :%s', err.stderr)
raise
@utils.retry(putils.ProcessExecutionError)
def create_lv_snapshot(self, name, source_lv_name, lv_type='default'):
"""Creates a snapshot of a logical volume.
:param name: Name to assign to new snapshot
:param source_lv_name: Name of Logical Volume to snapshot
:param lv_type: Type of LV (default or thin)
"""
source_lvref = self.get_volume(source_lv_name)
if source_lvref is None:
LOG.error("Trying to create snapshot by non-existent LV: %s",
source_lv_name)
raise exception.VolumeDeviceNotFound(device=source_lv_name)
cmd = LVM.LVM_CMD_PREFIX + ['lvcreate', '--name', name, '--snapshot',
'%s/%s' % (self.vg_name, source_lv_name)]
if lv_type != 'thin':
size = source_lvref['size']
cmd.extend(['-L', '%sg' % (size)])
try:
self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
except putils.ProcessExecutionError as err:
LOG.exception('Error creating snapshot')
LOG.error('Cmd :%s', err.cmd)
LOG.error('StdOut :%s', err.stdout)
LOG.error('StdErr :%s', err.stderr)
raise
def _mangle_lv_name(self, name):
# Linux LVM reserves names that start with "snapshot", so volumes
# with such names can't be created directly. Mangle the name.
if not name.startswith('snapshot'):
return name
return '_' + name
def _lv_is_active(self, name):
cmd = LVM.LVM_CMD_PREFIX + ['lvdisplay', '--noheading', '-C', '-o',
'Attr', '%s/%s' % (self.vg_name, name)]
out, _err = self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
if out:
out = out.strip()
# An example output might be '-wi-a----'; the 4th index specifies
# the status of the volume. 'a' for active, '-' for inactive.
if (out[4] == 'a'):
return True
return False
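The attribute-string check above can be isolated into a tiny pure function. A hypothetical sketch (name is illustrative) mirroring the same index-4 test:

```python
def lv_is_active(attr_field):
    # lvdisplay -o Attr yields strings like '-wi-a----'; the character
    # at index 4 is the state flag: 'a' for active, '-' for inactive.
    attr_field = attr_field.strip()
    return bool(attr_field) and attr_field[4] == 'a'

print(lv_is_active('-wi-a----'))  # -> True
print(lv_is_active('-wi------'))  # -> False
```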
def deactivate_lv(self, name):
lv_path = self.vg_name + '/' + self._mangle_lv_name(name)
cmd = ['lvchange', '-a', 'n']
cmd.append(lv_path)
try:
self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
except putils.ProcessExecutionError as err:
LOG.exception('Error deactivating LV')
LOG.error('Cmd :%s', err.cmd)
LOG.error('StdOut :%s', err.stdout)
LOG.error('StdErr :%s', err.stderr)
raise
# Wait until lv is deactivated to return in
# order to prevent a race condition.
self._wait_for_volume_deactivation(name)
@utils.retry(exceptions=exception.VolumeNotDeactivated, retries=3,
backoff_rate=1)
def _wait_for_volume_deactivation(self, name):
LOG.debug("Checking to see if volume %s has been deactivated.",
name)
if self._lv_is_active(name):
LOG.debug("Volume %s is still active.", name)
raise exception.VolumeNotDeactivated(name=name)
else:
LOG.debug("Volume %s has been deactivated.", name)
def activate_lv(self, name, is_snapshot=False, permanent=False):
"""Ensure that logical volume/snapshot logical volume is activated.
:param name: Name of LV to activate
:param is_snapshot: whether LV is a snapshot
:param permanent: whether we should drop skipactivation flag
:raises: putils.ProcessExecutionError
"""
# This is a no-op if requested for a snapshot on a version
# of LVM that doesn't support snapshot activation.
# (Assume snapshot LV is always active.)
if is_snapshot and not self.supports_snapshot_lv_activation:
return
lv_path = self.vg_name + '/' + self._mangle_lv_name(name)
# Must pass --yes to activate both the snap LV and its origin LV.
# Otherwise lvchange asks if you would like to do this interactively,
# and fails.
cmd = ['lvchange', '-a', 'y', '--yes']
if self.supports_lvchange_ignoreskipactivation:
cmd.append('-K')
# If permanent=True is specified, drop the skipactivation flag in
# order to make this LV automatically activated after next reboot.
if permanent:
cmd += ['-k', 'n']
cmd.append(lv_path)
try:
self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
except putils.ProcessExecutionError as err:
LOG.exception('Error activating LV')
LOG.error('Cmd :%s', err.cmd)
LOG.error('StdOut :%s', err.stdout)
LOG.error('StdErr :%s', err.stderr)
raise
@utils.retry(putils.ProcessExecutionError)
def delete(self, name):
"""Delete logical volume or snapshot.
:param name: Name of LV to delete
"""
def run_udevadm_settle():
self._execute('udevadm', 'settle',
root_helper=self._root_helper, run_as_root=True,
check_exit_code=False)
# LV removal seems to be a race with other writers or udev in
# some cases (see LP #1270192), so we enable retry deactivation
LVM_CONFIG = 'activation { retry_deactivation = 1} '
try:
self._execute(
'lvremove',
'--config', LVM_CONFIG,
'-f',
'%s/%s' % (self.vg_name, name),
root_helper=self._root_helper, run_as_root=True)
except putils.ProcessExecutionError as err:
LOG.debug('Error reported running lvremove: CMD: %(command)s, '
'RESPONSE: %(response)s',
{'command': err.cmd, 'response': err.stderr})
LOG.debug('Attempting udev settle and retry of lvremove...')
run_udevadm_settle()
# The previous failing lvremove -f might leave behind
# suspended devices; when lvmetad is not available, any
# further lvm command will block forever.
# Therefore we need to skip suspended devices on retry.
LVM_CONFIG += 'devices { ignore_suspended_devices = 1}'
self._execute(
'lvremove',
'--config', LVM_CONFIG,
'-f',
'%s/%s' % (self.vg_name, name),
root_helper=self._root_helper, run_as_root=True)
LOG.debug('Successfully deleted volume: %s after '
'udev settle.', name)
def revert(self, snapshot_name):
"""Revert an LV from snapshot.
:param snapshot_name: Name of snapshot to revert
"""
self._execute('lvconvert', '--merge',
snapshot_name, root_helper=self._root_helper,
run_as_root=True)
def lv_has_snapshot(self, name):
cmd = LVM.LVM_CMD_PREFIX + ['lvdisplay', '--noheading', '-C', '-o',
'Attr', '%s/%s' % (self.vg_name, name)]
out, _err = self._execute(*cmd,
root_helper=self._root_helper,
run_as_root=True)
if out:
out = out.strip()
if (out[0] == 'o') or (out[0] == 'O'):
return True
return False
def extend_volume(self, lv_name, new_size):
"""Extend the size of an existing volume."""
# Volumes with snaps have attributes 'o' or 'O' and will be
# deactivated, but Thin Volumes with snaps have attribute 'V'
# and won't be deactivated because the lv_has_snapshot method looks
# for 'o' or 'O'
if self.lv_has_snapshot(lv_name):
self.deactivate_lv(lv_name)
try:
cmd = LVM.LVM_CMD_PREFIX + ['lvextend', '-L', new_size,
'%s/%s' % (self.vg_name, lv_name)]
self._execute(*cmd, root_helper=self._root_helper,
run_as_root=True)
except putils.ProcessExecutionError as err:
LOG.exception('Error extending Volume')
LOG.error('Cmd :%s', err.cmd)
LOG.error('StdOut :%s', err.stdout)
LOG.error('StdErr :%s', err.stderr)
raise
def vg_mirror_free_space(self, mirror_count):
free_capacity = 0.0
disks = []
for pv in self.pv_list:
disks.append(float(pv['available']))
while True:
disks = sorted([a for a in disks if a > 0.0], reverse=True)
if len(disks) <= mirror_count:
break
# consume the smallest disk
disk = disks[-1]
disks = disks[:-1]
# match extents for each mirror on the largest disks
for index in list(range(mirror_count)):
disks[index] -= disk
free_capacity += disk
return free_capacity
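The mirror free-space loop above is a greedy pairing algorithm: repeatedly consume the smallest remaining PV and subtract matching extents from the `mirror_count` largest PVs. A standalone sketch under the same logic (names are illustrative):

```python
def mirror_free_space(disk_sizes, mirror_count):
    # Greedy sketch of vg_mirror_free_space: pair the smallest PV's
    # extents against the mirror_count largest PVs, accumulating the
    # usable mirrored capacity.
    free = 0.0
    disks = list(disk_sizes)
    while True:
        disks = sorted([d for d in disks if d > 0.0], reverse=True)
        if len(disks) <= mirror_count:
            break
        smallest = disks.pop()          # consume the smallest disk
        for i in range(mirror_count):   # match extents on the largest
            disks[i] -= smallest
        free += smallest
    return free

print(mirror_free_space([10.0, 10.0, 5.0], 2))  # -> 5.0
```

With three PVs of 10, 10, and 5 GiB and two mirrors, only 5 GiB of mirrored capacity fits: once the 5 GiB disk is consumed, only two disks remain, which is not enough to place a two-way mirror plus the original.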
def vg_mirror_size(self, mirror_count):
return (self.vg_free_space / (mirror_count + 1))
def rename_volume(self, lv_name, new_name):
"""Change the name of an existing volume."""
try:
self._execute('lvrename', self.vg_name, lv_name, new_name,
root_helper=self._root_helper,
run_as_root=True)
except putils.ProcessExecutionError as err:
LOG.exception('Error renaming logical volume')
LOG.error('Cmd :%s', err.cmd)
LOG.error('StdOut :%s', err.stdout)
LOG.error('StdErr :%s', err.stderr)
raise

@@ -1,23 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_privsep import capabilities as c
from oslo_privsep import priv_context
# It is expected that most (if not all) os-brick operations can be
# executed with these privileges.
default = priv_context.PrivContext(
__name__,
cfg_section='privsep_osbrick',
pypath=__name__ + '.default',
capabilities=[c.CAP_SYS_ADMIN],
)

@@ -1,220 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Just in case it wasn't clear, this is a massive security back-door.
`execute_root()` (or the same via `execute(run_as_root=True)`) allows
any command to be run as the privileged user (default "root"). This
is intended only as an expedient transition and should be removed
ASAP.
This is not completely unreasonable because:
1. We have no tool/workflow for merging changes to rootwrap filter
configs from os-brick into nova/cinder, which makes it difficult
to evolve these loosely coupled projects.
2. Let's not pretend the earlier situation was any better. The
rootwrap filters config contained several entries like "allow cp as
root with any arguments", etc, and would have posed only a mild
inconvenience to an attacker. At least with privsep we can (in
principle) run the "root" commands as a non-root uid, with
restricted Linux capabilities.
The plan is to switch os-brick to privsep using this module (removing
the urgency of (1)), then work on the larger refactor that addresses
(2) in followup changes.
"""
import os
import signal
import six
import threading
import time
from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from oslo_utils import strutils
from os_brick import exception
from os_brick import privileged
LOG = logging.getLogger(__name__)
def custom_execute(*cmd, **kwargs):
"""Custom execute with additional functionality on top of Oslo's.
Additional features are timeouts and exponential backoff retries.
The exponential backoff retries replace the standard Oslo random sleep
times (ranging from 200ms to 2 seconds) used when attempts is greater
than 1; backoff is disabled if delay_on_retry is passed as a parameter.
Exponential backoff is controlled via interval and backoff_rate parameters,
just like the os_brick.utils.retry decorator.
To use the timeout mechanism to stop the subprocess with a specific signal
after a number of seconds we must pass a non-zero timeout value in the
call.
When using multiple attempts and a timeout at the same time, the method
will only raise the timeout exception to the caller if the last try times out.
Timeout mechanism is controlled with timeout, signal, and raise_timeout
parameters.
:param interval: The multiplier
:param backoff_rate: Base used for the exponential backoff
:param timeout: Timeout defined in seconds
:param signal: Signal to use to stop the process on timeout
:param raise_timeout: Raise an exception on timeout or return the error
as stderr. Defaults to raising if check_exit_code is
not False.
:returns: Tuple with stdout and stderr
"""
# Since python 2 doesn't have nonlocal we use a mutable variable to store
# the previous attempt number, the timeout handler, and the process that
# timed out
shared_data = [0, None, None]
def on_timeout(proc):
sanitized_cmd = strutils.mask_password(' '.join(cmd))
LOG.warning('Stopping %(cmd)s with signal %(signal)s after %(time)ss.',
{'signal': sig_end, 'cmd': sanitized_cmd, 'time': timeout})
shared_data[2] = proc
proc.send_signal(sig_end)
def on_execute(proc):
# Call user's on_execute method
if on_execute_call:
on_execute_call(proc)
# Sleep if this is not the first try and we have a timeout interval
if shared_data[0] and interval:
exp = backoff_rate ** shared_data[0]
wait_for = max(0, interval * exp)
LOG.debug('Sleeping for %s seconds', wait_for)
time.sleep(wait_for)
# Increase the number of tries and start the timeout timer
shared_data[0] += 1
if timeout:
shared_data[2] = None
shared_data[1] = threading.Timer(timeout, on_timeout, (proc,))
shared_data[1].start()
def on_completion(proc):
# This is always called regardless of success or failure
# Cancel the timeout timer
if shared_data[1]:
shared_data[1].cancel()
# Call user's on_completion method
if on_completion_call:
on_completion_call(proc)
# We will be doing the wait ourselves in on_execute
if 'delay_on_retry' in kwargs:
interval = None
else:
kwargs['delay_on_retry'] = False
interval = kwargs.pop('interval', 1)
backoff_rate = kwargs.pop('backoff_rate', 2)
timeout = kwargs.pop('timeout', None)
sig_end = kwargs.pop('signal', signal.SIGTERM)
default_raise_timeout = kwargs.get('check_exit_code', True)
raise_timeout = kwargs.pop('raise_timeout', default_raise_timeout)
on_execute_call = kwargs.pop('on_execute', None)
on_completion_call = kwargs.pop('on_completion', None)
try:
return putils.execute(on_execute=on_execute,
on_completion=on_completion, *cmd, **kwargs)
except putils.ProcessExecutionError:
# proc is only stored if a timeout happened
proc = shared_data[2]
if proc:
sanitized_cmd = strutils.mask_password(' '.join(cmd))
msg = ('Time out on proc %(pid)s after waiting %(time)s seconds '
'when running %(cmd)s' %
{'pid': proc.pid, 'time': timeout, 'cmd': sanitized_cmd})
LOG.debug(msg)
if raise_timeout:
raise exception.ExecutionTimeout(stdout='', stderr=msg,
cmd=sanitized_cmd)
return '', msg
raise
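The retry sleep computed in `on_execute` above is plain exponential backoff. A minimal sketch of just that arithmetic (the helper name is illustrative):

```python
def backoff_wait(previous_attempts, interval=1, backoff_rate=2):
    # Sleep applied before each retry in on_execute above:
    # interval * backoff_rate ** attempts, clamped to be non-negative.
    return max(0, interval * backoff_rate ** previous_attempts)

print([backoff_wait(n) for n in range(1, 5)])  # -> [2, 4, 8, 16]
```

So with the defaults (`interval=1`, `backoff_rate=2`), the second try waits 2 seconds, the third 4, and so on.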
# Entrypoint used for rootwrap.py transition code. Don't use this for
# other purposes, since it will be removed when we think the
# transition is finished.
def execute(*cmd, **kwargs):
"""NB: Raises processutils.ProcessExecutionError on failure."""
run_as_root = kwargs.pop('run_as_root', False)
kwargs.pop('root_helper', None)
try:
if run_as_root:
return execute_root(*cmd, **kwargs)
else:
return custom_execute(*cmd, **kwargs)
except OSError as e:
# Note:
# putils.execute('bogus', run_as_root=True)
# raises ProcessExecutionError(exit_code=1) (because there's a
# "sh -c bogus" involved in there somewhere, but:
# putils.execute('bogus', run_as_root=False)
# raises OSError(not found).
#
# Lots of code in os-brick catches only ProcessExecutionError
# and never encountered the latter when using rootwrap.
# Rather than fix all the callers, we just always raise
# ProcessExecutionError here :(
sanitized_cmd = strutils.mask_password(' '.join(cmd))
raise putils.ProcessExecutionError(
cmd=sanitized_cmd, description=six.text_type(e))
# See comment on `execute`
@privileged.default.entrypoint
def execute_root(*cmd, **kwargs):
"""NB: Raises processutils.ProcessExecutionError/OSError on failure."""
return custom_execute(*cmd, shell=False, run_as_root=False, **kwargs)
@privileged.default.entrypoint
def unlink_root(*links, **kwargs):
"""Unlink system links with sys admin privileges.
By default it will raise an exception if a link does not exist and stop
unlinking remaining links.
This behavior can be modified passing optional parameters `no_errors` and
`raise_at_end`.
:param no_errors: Don't raise an exception on error
"param raise_at_end: Don't raise an exception on first error, try to
unlink all links and then raise a ChainedException
with all the errors that where found.
"""
no_errors = kwargs.get('no_errors', False)
raise_at_end = kwargs.get('raise_at_end', False)
exc = exception.ExceptionChainer()
catch_exception = no_errors or raise_at_end
for link in links:
with exc.context(catch_exception, 'Unlink failed for %s', link):
os.unlink(link)
if not no_errors and raise_at_end and exc:
raise exc

@@ -1,261 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Remote filesystem client utilities."""
import hashlib
import os
import re
import tempfile
from oslo_log import log as logging
import six
from os_brick import exception
from os_brick import executor
from os_brick.i18n import _
LOG = logging.getLogger(__name__)
class RemoteFsClient(executor.Executor):
def __init__(self, mount_type, root_helper,
execute=None, *args, **kwargs):
super(RemoteFsClient, self).__init__(root_helper, execute=execute,
*args, **kwargs)
mount_type_to_option_prefix = {
'nfs': 'nfs',
'cifs': 'smbfs',
'glusterfs': 'glusterfs',
'vzstorage': 'vzstorage',
'quobyte': 'quobyte',
'scality': 'scality'
}
if mount_type not in mount_type_to_option_prefix:
raise exception.ProtocolNotSupported(protocol=mount_type)
self._mount_type = mount_type
option_prefix = mount_type_to_option_prefix[mount_type]
self._mount_base = kwargs.get(option_prefix + '_mount_point_base')
if not self._mount_base:
raise exception.InvalidParameterValue(
err=_('%s_mount_point_base required') % option_prefix)
self._mount_options = kwargs.get(option_prefix + '_mount_options')
if mount_type == "nfs":
self._check_nfs_options()
def get_mount_base(self):
return self._mount_base
def _get_hash_str(self, base_str):
"""Return a string that represents hash of base_str (hex format)."""
if isinstance(base_str, six.text_type):
base_str = base_str.encode('utf-8')
return hashlib.md5(base_str).hexdigest()
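Mount points are derived deterministically by hashing the share string, so the same share always maps to the same directory. A hypothetical standalone sketch of the scheme (names are illustrative):

```python
import hashlib
import os

def get_mount_point(mount_base, device_name):
    # Same scheme as get_mount_point above:
    # <mount_base>/<md5 hexdigest of the share string>.
    digest = hashlib.md5(device_name.encode('utf-8')).hexdigest()
    return os.path.join(mount_base, digest)

share = '172.18.194.100:/var/nfs'
print(get_mount_point('/mnt/nfs', share))
```

md5 is used here purely for directory naming, not for any security purpose.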
def get_mount_point(self, device_name):
"""Get Mount Point.
:param device_name: example 172.18.194.100:/var/nfs
"""
return os.path.join(self._mount_base,
self._get_hash_str(device_name))
def _read_mounts(self):
(out, _err) = self._execute('mount', check_exit_code=0)
lines = out.split('\n')
mounts = {}
for line in lines:
tokens = line.split()
if 2 < len(tokens):
device = tokens[0]
mnt_point = tokens[2]
mounts[mnt_point] = device
return mounts
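`_read_mounts` relies on the fixed token layout of `mount` output: `<device> on <mount point> type <fs> (<options>)`, so the device is token 0 and the mount point is token 2. A standalone sketch with canned output:

```python
def read_mounts(mount_output):
    # Parse "mount" output lines into {mount_point: device}, keying on
    # token 2 (mount point) and token 0 (device), as _read_mounts does.
    mounts = {}
    for line in mount_output.split('\n'):
        tokens = line.split()
        if len(tokens) > 2:
            mounts[tokens[2]] = tokens[0]
    return mounts

sample = "srv:/share on /mnt/nfs type nfs (rw)\n/dev/sda1 on / type ext4 (rw)"
print(read_mounts(sample))
```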
def mount(self, share, flags=None):
"""Mount given share."""
mount_path = self.get_mount_point(share)
if mount_path in self._read_mounts():
LOG.info('Already mounted: %s', mount_path)
return
self._execute('mkdir', '-p', mount_path, check_exit_code=0)
if self._mount_type == 'nfs':
self._mount_nfs(share, mount_path, flags)
else:
self._do_mount(self._mount_type, share, mount_path,
self._mount_options, flags)
def _do_mount(self, mount_type, share, mount_path, mount_options=None,
flags=None):
"""Mounts share based on the specified params."""
mnt_cmd = ['mount', '-t', mount_type]
if mount_options is not None:
mnt_cmd.extend(['-o', mount_options])
if flags is not None:
mnt_cmd.extend(flags)
mnt_cmd.extend([share, mount_path])
self._execute(*mnt_cmd, root_helper=self._root_helper,
run_as_root=True, check_exit_code=0)
def _mount_nfs(self, nfs_share, mount_path, flags=None):
"""Mount nfs share using present mount types."""
mnt_errors = {}
# This loop allows us to first try to mount with NFS 4.1 for pNFS
# support but falls back to mount NFS 4 or NFS 3 if either the client
# or server do not support it.
for mnt_type in sorted(self._nfs_mount_type_opts.keys(), reverse=True):
options = self._nfs_mount_type_opts[mnt_type]
try:
self._do_mount('nfs', nfs_share, mount_path, options, flags)
LOG.debug('Mounted %(sh)s using %(mnt_type)s.',
{'sh': nfs_share, 'mnt_type': mnt_type})
return
except Exception as e:
mnt_errors[mnt_type] = six.text_type(e)
LOG.debug('Failed to do %s mount.', mnt_type)
raise exception.BrickException(_("NFS mount failed for share %(sh)s. "
"Error - %(error)s")
% {'sh': nfs_share,
'error': mnt_errors})
def _check_nfs_options(self):
"""Checks and prepares nfs mount type options."""
self._nfs_mount_type_opts = {'nfs': self._mount_options}
nfs_vers_opt_patterns = ['^nfsvers', '^vers', r'^v[\d]']
for opt in nfs_vers_opt_patterns:
if self._option_exists(self._mount_options, opt):
return
# pNFS requires NFS 4.1. The mount.nfs4 utility does not automatically
# negotiate 4.1 support, we have to ask for it by specifying two
# options: vers=4 and minorversion=1.
pnfs_opts = self._update_option(self._mount_options, 'vers', '4')
pnfs_opts = self._update_option(pnfs_opts, 'minorversion', '1')
self._nfs_mount_type_opts['pnfs'] = pnfs_opts
def _option_exists(self, options, opt_pattern):
"""Checks if the option exists in nfs options and returns position."""
options = [x.strip() for x in options.split(',')] if options else []
pos = 0
for opt in options:
pos = pos + 1
if re.match(opt_pattern, opt, flags=0):
return pos
return 0
def _update_option(self, options, option, value=None):
"""Update option if exists else adds it and returns new options."""
opts = [x.strip() for x in options.split(',')] if options else []
pos = self._option_exists(options, option)
if pos:
opts.pop(pos - 1)
opt = '%s=%s' % (option, value) if value else option
opts.append(opt)
return ",".join(opts) if len(opts) > 1 else opts[0]
class ScalityRemoteFsClient(RemoteFsClient):
def __init__(self, mount_type, root_helper,
execute=None, *args, **kwargs):
super(ScalityRemoteFsClient, self).__init__(mount_type, root_helper,
execute=execute,
*args, **kwargs)
self._mount_type = mount_type
self._mount_base = kwargs.get(
'scality_mount_point_base', "").rstrip('/')
if not self._mount_base:
raise exception.InvalidParameterValue(
err=_('scality_mount_point_base required'))
self._mount_options = None
def get_mount_point(self, device_name):
return os.path.join(self._mount_base,
device_name,
"00")
def mount(self, share, flags=None):
"""Mount the Scality ScaleOut FS.
The `share` argument is ignored because you can't mount several
SOFS at the same time on a single server, but we want to keep the
same method signature for class inheritance purposes.
"""
if self._mount_base in self._read_mounts():
LOG.info('Already mounted: %s', self._mount_base)
return
self._execute('mkdir', '-p', self._mount_base, check_exit_code=0)
super(ScalityRemoteFsClient, self)._do_mount(
'sofs', '/etc/sfused.conf', self._mount_base)
class VZStorageRemoteFSClient(RemoteFsClient):
def _vzstorage_write_mds_list(self, cluster_name, mdss):
tmp_dir = tempfile.mkdtemp(prefix='vzstorage-')
tmp_bs_path = os.path.join(tmp_dir, 'bs_list')
with open(tmp_bs_path, 'w') as f:
for mds in mdss:
f.write(mds + "\n")
conf_dir = os.path.join('/etc/pstorage/clusters', cluster_name)
if os.path.exists(conf_dir):
bs_path = os.path.join(conf_dir, 'bs_list')
self._execute('cp', '-f', tmp_bs_path, bs_path,
root_helper=self._root_helper, run_as_root=True)
else:
self._execute('cp', '-rf', tmp_dir, conf_dir,
root_helper=self._root_helper, run_as_root=True)
self._execute('chown', '-R', 'root:root', conf_dir,
root_helper=self._root_helper, run_as_root=True)
def _do_mount(self, mount_type, vz_share, mount_path,
mount_options=None, flags=None):
        m = re.search(r"(?:(\S+):/)?([a-zA-Z0-9_-]+)(?::(\S+))?", vz_share)
if not m:
msg = (_("Invalid Virtuozzo Storage share specification: %r."
"Must be: [MDS1[,MDS2],...:/]<CLUSTER NAME>[:PASSWORD].")
% vz_share)
raise exception.BrickException(msg)
mdss = m.group(1)
cluster_name = m.group(2)
passwd = m.group(3)
if mdss:
mdss = mdss.split(',')
self._vzstorage_write_mds_list(cluster_name, mdss)
if passwd:
self._execute('pstorage', '-c', cluster_name, 'auth-node', '-P',
process_input=passwd,
root_helper=self._root_helper, run_as_root=True)
mnt_cmd = ['pstorage-mount', '-c', cluster_name]
if flags:
mnt_cmd.extend(flags)
mnt_cmd.extend([mount_path])
self._execute(*mnt_cmd, root_helper=self._root_helper,
run_as_root=True, check_exit_code=0)
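The share-specification regex used by `_do_mount` above can be exercised on its own; the MDS addresses, cluster name, and password below are invented for illustration:

```python
import re

# Same pattern as _do_mount, as a raw string:
# [MDS1[,MDS2],...:/]<CLUSTER NAME>[:PASSWORD]
VZ_SHARE_RE = re.compile(r"(?:(\S+):/)?([a-zA-Z0-9_-]+)(?::(\S+))?")

m = VZ_SHARE_RE.search("10.0.0.1,10.0.0.2:/cluster1:secret")
mdss, cluster_name, passwd = m.groups()
# mdss -> "10.0.0.1,10.0.0.2", cluster_name -> "cluster1", passwd -> "secret"
```

Every part except the cluster name is optional, so a bare `"cluster1"` also matches, with the MDS list and password groups left as `None`.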


@@ -1,129 +0,0 @@
# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Windows remote filesystem client utilities."""
import os
import re
from oslo_log import log as logging
from os_win import utilsfactory
from os_brick import exception
from os_brick.i18n import _
from os_brick.remotefs import remotefs
LOG = logging.getLogger(__name__)
class WindowsRemoteFsClient(remotefs.RemoteFsClient):
_username_regex = re.compile(r'user(?:name)?=([^, ]+)')
_password_regex = re.compile(r'pass(?:word)?=([^, ]+)')
_loopback_share_map = {}
def __init__(self, mount_type, root_helper=None,
execute=None, *args, **kwargs):
mount_type_to_option_prefix = {
'cifs': 'smbfs',
'smbfs': 'smbfs',
}
self._local_path_for_loopback = kwargs.get('local_path_for_loopback',
False)
if mount_type not in mount_type_to_option_prefix:
raise exception.ProtocolNotSupported(protocol=mount_type)
self._mount_type = mount_type
option_prefix = mount_type_to_option_prefix[mount_type]
self._mount_base = kwargs.get(option_prefix + '_mount_point_base')
self._mount_options = kwargs.get(option_prefix + '_mount_options')
self._smbutils = utilsfactory.get_smbutils()
self._pathutils = utilsfactory.get_pathutils()
def get_local_share_path(self, share, expect_existing=True):
local_share_path = self._smbutils.get_smb_share_path(share)
if not local_share_path and expect_existing:
err_msg = _("Could not find the local "
"share path for %(share)s.")
raise exception.VolumePathsNotFound(err_msg % dict(share=share))
return local_share_path
def _get_share_norm_path(self, share):
return share.replace('/', '\\')
def get_share_name(self, share):
return self._get_share_norm_path(share).lstrip('\\').split('\\', 1)[1]
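The two helpers above reduce to straightforward string handling; a minimal standalone sketch (the UNC share below is a made-up example):

```python
# Normalize a forward-slash share path to UNC form, then peel off the
# share name after the host portion.
def share_norm_path(share):
    return share.replace('/', '\\')

def share_name(share):
    return share_norm_path(share).lstrip('\\').split('\\', 1)[1]

# '//192.168.1.1/share_a' -> '\\192.168.1.1\share_a' -> 'share_a'
```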
def mount(self, share, flags=None):
share_norm_path = self._get_share_norm_path(share)
use_local_path = (self._local_path_for_loopback and
self._smbutils.is_local_share(share_norm_path))
if use_local_path:
LOG.info("Skipping mounting local share %(share_path)s.",
dict(share_path=share_norm_path))
else:
mount_options = " ".join(
[self._mount_options or '', flags or ''])
username, password = self._parse_credentials(mount_options)
if not self._smbutils.check_smb_mapping(
share_norm_path):
self._smbutils.mount_smb_share(share_norm_path,
username=username,
password=password)
if self._mount_base:
self._create_mount_point(share, use_local_path)
def unmount(self, share):
self._smbutils.unmount_smb_share(self._get_share_norm_path(share))
def _create_mount_point(self, share, use_local_path):
# The mount point will contain a hash of the share so we're
# intentionally preserving the original share path as this is
# what the caller will expect.
mnt_point = self.get_mount_point(share)
share_norm_path = self._get_share_norm_path(share)
share_name = self.get_share_name(share)
symlink_dest = (share_norm_path if not use_local_path
else self.get_local_share_path(share_name))
if not os.path.isdir(self._mount_base):
os.makedirs(self._mount_base)
if os.path.exists(mnt_point):
if not self._pathutils.is_symlink(mnt_point):
raise exception.BrickException(_("Link path already exists "
"and it's not a symlink"))
else:
self._pathutils.create_sym_link(mnt_point, symlink_dest)
def _parse_credentials(self, opts_str):
if not opts_str:
return None, None
match = self._username_regex.findall(opts_str)
username = match[0] if match and match[0] != 'guest' else None
match = self._password_regex.findall(opts_str)
password = match[0] if match else None
return username, password
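The credential parsing above is just two regex lookups over the flattened mount-option string; here it is as a standalone function (the option string in the test is invented):

```python
import re

# Same patterns as the class attributes above.
username_regex = re.compile(r'user(?:name)?=([^, ]+)')
password_regex = re.compile(r'pass(?:word)?=([^, ]+)')

def parse_credentials(opts_str):
    if not opts_str:
        return None, None
    match = username_regex.findall(opts_str)
    # 'guest' means anonymous access, so report no username for it.
    username = match[0] if match and match[0] != 'guest' else None
    match = password_regex.findall(opts_str)
    password = match[0] if match else None
    return username, password
```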


@@ -1,103 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import testtools
import fixtures
import mock
from oslo_utils import strutils
class TestCase(testtools.TestCase):
"""Test case base class for all unit tests."""
SENTINEL = object()
def setUp(self):
"""Run before each test method to initialize test environment."""
super(TestCase, self).setUp()
test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0)
try:
test_timeout = int(test_timeout)
except ValueError:
# If timeout value is invalid do not set a timeout.
test_timeout = 0
if test_timeout > 0:
self.useFixture(fixtures.Timeout(test_timeout, gentle=True))
self.useFixture(fixtures.NestedTempfile())
self.useFixture(fixtures.TempHomeDir())
environ_enabled = (lambda var_name:
strutils.bool_from_string(os.environ.get(var_name)))
if environ_enabled('OS_STDOUT_CAPTURE'):
stdout = self.useFixture(fixtures.StringStream('stdout')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
if environ_enabled('OS_STDERR_CAPTURE'):
stderr = self.useFixture(fixtures.StringStream('stderr')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
if environ_enabled('OS_LOG_CAPTURE'):
log_format = '%(levelname)s [%(name)s] %(message)s'
if environ_enabled('OS_DEBUG'):
level = logging.DEBUG
else:
level = logging.INFO
self.useFixture(fixtures.LoggerFixture(nuke_handlers=False,
format=log_format,
level=level))
def _common_cleanup(self):
"""Runs after each test method to tear down test environment."""
# Stop any timers
for x in self.injected:
try:
x.stop()
except AssertionError:
pass
# Delete attributes that don't start with _ so they don't pin
# memory around unnecessarily for the duration of the test
# suite
for key in [k for k in self.__dict__.keys() if k[0] != '_']:
del self.__dict__[key]
def log_level(self, level):
"""Set logging level to the specified value."""
log_root = logging.getLogger(None).logger
log_root.setLevel(level)
def mock_object(self, obj, attr_name, new_attr=SENTINEL, **kwargs):
"""Use python mock to mock an object attribute
Mocks the specified objects attribute with the given value.
Automatically performs 'addCleanup' for the mock.
"""
args = [obj, attr_name]
if new_attr is not self.SENTINEL:
args.append(new_attr)
patcher = mock.patch.object(*args, **kwargs)
mocked = patcher.start()
self.addCleanup(patcher.stop)
return mocked
def patch(self, path, *args, **kwargs):
"""Use python mock to mock a path with automatic cleanup."""
patcher = mock.patch(path, *args, **kwargs)
result = patcher.start()
self.addCleanup(patcher.stop)
return result


@@ -1,178 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from castellan.tests.unit.key_manager import fake
import mock
from os_brick import encryptors
from os_brick.tests import base
class VolumeEncryptorTestCase(base.TestCase):
def _create(self):
pass
def setUp(self):
super(VolumeEncryptorTestCase, self).setUp()
self.connection_info = {
"data": {
"device_path": "/dev/disk/by-path/"
"ip-192.0.2.0:3260-iscsi-iqn.2010-10.org.openstack"
":volume-fake_uuid-lun-1",
},
}
self.root_helper = None
self.keymgr = fake.fake_api()
self.encryptor = self._create()
class BaseEncryptorTestCase(VolumeEncryptorTestCase):
def _test_get_encryptor(self, provider, expected_provider_class):
encryption = {'control_location': 'front-end',
'provider': provider}
encryptor = encryptors.get_volume_encryptor(
root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr,
**encryption)
self.assertIsInstance(encryptor, expected_provider_class)
def test_get_encryptors(self):
self._test_get_encryptor('luks',
encryptors.luks.LuksEncryptor)
# TODO(lyarwood): Remove the following in Pike
self._test_get_encryptor('LuksEncryptor',
encryptors.luks.LuksEncryptor)
self._test_get_encryptor('os_brick.encryptors.luks.LuksEncryptor',
encryptors.luks.LuksEncryptor)
self._test_get_encryptor('nova.volume.encryptors.luks.LuksEncryptor',
encryptors.luks.LuksEncryptor)
self._test_get_encryptor('plain',
encryptors.cryptsetup.CryptsetupEncryptor)
# TODO(lyarwood): Remove the following in Pike
self._test_get_encryptor('CryptsetupEncryptor',
encryptors.cryptsetup.CryptsetupEncryptor)
self._test_get_encryptor(
'os_brick.encryptors.cryptsetup.CryptsetupEncryptor',
encryptors.cryptsetup.CryptsetupEncryptor)
self._test_get_encryptor(
'nova.volume.encryptors.cryptsetup.CryptsetupEncryptor',
encryptors.cryptsetup.CryptsetupEncryptor)
self._test_get_encryptor(None,
encryptors.nop.NoOpEncryptor)
# TODO(lyarwood): Remove the following in Pike
self._test_get_encryptor('NoOpEncryptor',
encryptors.nop.NoOpEncryptor)
self._test_get_encryptor('os_brick.encryptors.nop.NoOpEncryptor',
encryptors.nop.NoOpEncryptor)
self._test_get_encryptor('nova.volume.encryptors.nop.NoopEncryptor',
encryptors.nop.NoOpEncryptor)
def test_get_error_encryptors(self):
encryption = {'control_location': 'front-end',
'provider': 'ErrorEncryptor'}
self.assertRaises(ValueError,
encryptors.get_volume_encryptor,
root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr,
**encryption)
@mock.patch('os_brick.encryptors.LOG')
def test_error_log(self, log):
encryption = {'control_location': 'front-end',
'provider': 'TestEncryptor'}
provider = 'TestEncryptor'
try:
encryptors.get_volume_encryptor(
root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr,
**encryption)
except Exception as e:
log.error.assert_called_once_with("Error instantiating "
"%(provider)s: "
"%(exception)s",
{'provider': provider,
'exception': e})
@mock.patch('os_brick.encryptors.LOG')
def test_get_missing_out_of_tree_encryptor_log(self, log):
provider = 'TestEncryptor'
encryption = {'control_location': 'front-end',
'provider': provider}
try:
encryptors.get_volume_encryptor(
root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr,
**encryption)
except Exception as e:
log.error.assert_called_once_with("Error instantiating "
"%(provider)s: "
"%(exception)s",
{'provider': provider,
'exception': e})
log.warning.assert_called_once_with("Use of the out of tree "
"encryptor class %(provider)s "
"will be blocked with the "
"Queens release of os-brick.",
{'provider': provider})
@mock.patch('os_brick.encryptors.LOG')
def test_get_direct_encryptor_log(self, log):
encryption = {'control_location': 'front-end',
'provider': 'LuksEncryptor'}
encryptors.get_volume_encryptor(
root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr,
**encryption)
encryption = {'control_location': 'front-end',
'provider': 'os_brick.encryptors.luks.LuksEncryptor'}
encryptors.get_volume_encryptor(
root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr,
**encryption)
encryption = {'control_location': 'front-end',
'provider': 'nova.volume.encryptors.luks.LuksEncryptor'}
encryptors.get_volume_encryptor(
root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr,
**encryption)
log.warning.assert_has_calls([
mock.call("Use of the in tree encryptor class %(provider)s by "
"directly referencing the implementation class will be "
"blocked in the Queens release of os-brick.",
{'provider': 'LuksEncryptor'}),
mock.call("Use of the in tree encryptor class %(provider)s by "
"directly referencing the implementation class will be "
"blocked in the Queens release of os-brick.",
{'provider':
'os_brick.encryptors.luks.LuksEncryptor'}),
mock.call("Use of the in tree encryptor class %(provider)s by "
"directly referencing the implementation class will be "
"blocked in the Queens release of os-brick.",
{'provider':
'nova.volume.encryptors.luks.LuksEncryptor'})])


@@ -1,187 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import binascii
import copy
import mock
import six
from castellan.common.objects import symmetric_key as key
from castellan.tests.unit.key_manager import fake
from os_brick.encryptors import cryptsetup
from os_brick import exception
from os_brick.tests.encryptors import test_base
from oslo_concurrency import processutils as putils
def fake__get_key(context, passphrase):
raw = bytes(binascii.unhexlify(passphrase))
symmetric_key = key.SymmetricKey('AES', len(raw) * 8, raw)
return symmetric_key
class CryptsetupEncryptorTestCase(test_base.VolumeEncryptorTestCase):
@mock.patch('os.path.exists', return_value=False)
def _create(self, mock_exists):
return cryptsetup.CryptsetupEncryptor(
connection_info=self.connection_info,
root_helper=self.root_helper,
keymgr=self.keymgr)
def setUp(self):
super(CryptsetupEncryptorTestCase, self).setUp()
self.dev_path = self.connection_info['data']['device_path']
self.dev_name = 'crypt-%s' % self.dev_path.split('/')[-1]
self.symlink_path = self.dev_path
@mock.patch('os_brick.executor.Executor._execute')
def test__open_volume(self, mock_execute):
self.encryptor._open_volume("passphrase")
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'create', '--key-file=-', self.dev_name,
self.dev_path, process_input='passphrase',
run_as_root=True,
root_helper=self.root_helper,
check_exit_code=True),
])
@mock.patch('os_brick.executor.Executor._execute')
def test_attach_volume(self, mock_execute):
fake_key = 'e8b76872e3b04c18b3b6656bbf6f5089'
self.encryptor._get_key = mock.MagicMock()
self.encryptor._get_key.return_value = fake__get_key(None, fake_key)
self.encryptor.attach_volume(None)
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'create', '--key-file=-', self.dev_name,
self.dev_path, process_input=fake_key,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
mock.call('ln', '--symbolic', '--force',
'/dev/mapper/%s' % self.dev_name, self.symlink_path,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
])
@mock.patch('os_brick.executor.Executor._execute')
def test__close_volume(self, mock_execute):
self.encryptor.detach_volume()
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'remove', self.dev_name,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
])
@mock.patch('os_brick.executor.Executor._execute')
def test_detach_volume(self, mock_execute):
self.encryptor.detach_volume()
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'remove', self.dev_name,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
])
def test_init_volume_encryption_not_supported(self):
# Tests that creating a CryptsetupEncryptor fails if there is no
# device_path key.
type = 'unencryptable'
data = dict(volume_id='a194699b-aa07-4433-a945-a5d23802043e')
connection_info = dict(driver_volume_type=type, data=data)
exc = self.assertRaises(exception.VolumeEncryptionNotSupported,
cryptsetup.CryptsetupEncryptor,
root_helper=self.root_helper,
connection_info=connection_info,
keymgr=fake.fake_api())
self.assertIn(type, six.text_type(exc))
@mock.patch('os_brick.executor.Executor._execute')
@mock.patch('os.path.exists', return_value=True)
def test_init_volume_encryption_with_old_name(self, mock_exists,
mock_execute):
        # If a crypt device with the old name exists, keep using that name.
old_dev_name = self.dev_path.split('/')[-1]
encryptor = cryptsetup.CryptsetupEncryptor(
root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr)
self.assertFalse(encryptor.dev_name.startswith('crypt-'))
self.assertEqual(old_dev_name, encryptor.dev_name)
self.assertEqual(self.dev_path, encryptor.dev_path)
self.assertEqual(self.symlink_path, encryptor.symlink_path)
mock_exists.assert_called_once_with('/dev/mapper/%s' % old_dev_name)
mock_execute.assert_called_once_with(
'cryptsetup', 'status', old_dev_name, run_as_root=True)
@mock.patch('os_brick.executor.Executor._execute')
@mock.patch('os.path.exists', side_effect=[False, True])
def test_init_volume_encryption_with_wwn(self, mock_exists, mock_execute):
        # If a crypt device named after the wwn exists, dev_name should be the wwn.
old_dev_name = self.dev_path.split('/')[-1]
wwn = 'fake_wwn'
connection_info = copy.deepcopy(self.connection_info)
connection_info['data']['multipath_id'] = wwn
encryptor = cryptsetup.CryptsetupEncryptor(
root_helper=self.root_helper,
connection_info=connection_info,
keymgr=fake.fake_api(),
execute=mock_execute)
self.assertFalse(encryptor.dev_name.startswith('crypt-'))
self.assertEqual(wwn, encryptor.dev_name)
self.assertEqual(self.dev_path, encryptor.dev_path)
self.assertEqual(self.symlink_path, encryptor.symlink_path)
mock_exists.assert_has_calls([
mock.call('/dev/mapper/%s' % old_dev_name),
mock.call('/dev/mapper/%s' % wwn)])
mock_execute.assert_called_once_with(
'cryptsetup', 'status', wwn, run_as_root=True)
@mock.patch('os_brick.executor.Executor._execute')
def test_attach_volume_unmangle_passphrase(self, mock_execute):
fake_key = '0725230b'
fake_key_mangled = '72523b'
self.encryptor._get_key = mock.MagicMock()
self.encryptor._get_key.return_value = fake__get_key(None, fake_key)
mock_execute.side_effect = [
putils.ProcessExecutionError(exit_code=2), # luksOpen
mock.DEFAULT,
mock.DEFAULT,
]
self.encryptor.attach_volume(None)
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'create', '--key-file=-', self.dev_name,
self.dev_path, process_input=fake_key,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
mock.call('cryptsetup', 'create', '--key-file=-', self.dev_name,
self.dev_path, process_input=fake_key_mangled,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
mock.call('ln', '--symbolic', '--force',
'/dev/mapper/%s' % self.dev_name, self.symlink_path,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
])
self.assertEqual(3, mock_execute.call_count)


@@ -1,254 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import binascii
import mock
from castellan.common.objects import symmetric_key as key
from os_brick.encryptors import luks
from os_brick.tests.encryptors import test_cryptsetup
from oslo_concurrency import processutils as putils
class LuksEncryptorTestCase(test_cryptsetup.CryptsetupEncryptorTestCase):
def _create(self):
return luks.LuksEncryptor(root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr)
@mock.patch('os_brick.executor.Executor._execute')
def test_is_luks(self, mock_execute):
luks.is_luks(self.root_helper, self.dev_path, execute=mock_execute)
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'isLuks', '--verbose', self.dev_path,
run_as_root=True, root_helper=self.root_helper,
check_exit_code=True),
], any_order=False)
@mock.patch('os_brick.executor.Executor._execute')
@mock.patch('os_brick.encryptors.luks.LOG')
def test_is_luks_with_error(self, mock_log, mock_execute):
error_msg = "Device %s is not a valid LUKS device." % self.dev_path
mock_execute.side_effect = putils.ProcessExecutionError(
exit_code=1, stderr=error_msg)
luks.is_luks(self.root_helper, self.dev_path, execute=mock_execute)
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'isLuks', '--verbose', self.dev_path,
run_as_root=True, root_helper=self.root_helper,
check_exit_code=True),
])
self.assertEqual(1, mock_log.warning.call_count) # warning logged
@mock.patch('os_brick.executor.Executor._execute')
def test__format_volume(self, mock_execute):
self.encryptor._format_volume("passphrase")
mock_execute.assert_has_calls([
mock.call('cryptsetup', '--batch-mode', 'luksFormat',
'--key-file=-', self.dev_path,
process_input='passphrase',
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True, attempts=3),
])
@mock.patch('os_brick.executor.Executor._execute')
def test__open_volume(self, mock_execute):
self.encryptor._open_volume("passphrase")
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input='passphrase',
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
])
@mock.patch('os_brick.executor.Executor._execute')
def test_attach_volume(self, mock_execute):
fake_key = '0c84146034e747639b698368807286df'
self.encryptor._get_key = mock.MagicMock()
self.encryptor._get_key.return_value = (
test_cryptsetup.fake__get_key(None, fake_key))
self.encryptor.attach_volume(None)
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input=fake_key,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
mock.call('ln', '--symbolic', '--force',
'/dev/mapper/%s' % self.dev_name, self.symlink_path,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
])
@mock.patch('os_brick.executor.Executor._execute')
def test_attach_volume_not_formatted(self, mock_execute):
fake_key = 'bc37c5eccebe403f9cc2d0dd20dac2bc'
self.encryptor._get_key = mock.MagicMock()
self.encryptor._get_key.return_value = (
test_cryptsetup.fake__get_key(None, fake_key))
mock_execute.side_effect = [
putils.ProcessExecutionError(exit_code=1), # luksOpen
putils.ProcessExecutionError(exit_code=1), # isLuks
mock.DEFAULT, # luksFormat
mock.DEFAULT, # luksOpen
mock.DEFAULT, # ln
]
self.encryptor.attach_volume(None)
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input=fake_key,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
mock.call('cryptsetup', 'isLuks', '--verbose', self.dev_path,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
mock.call('cryptsetup', '--batch-mode', 'luksFormat',
'--key-file=-', self.dev_path, process_input=fake_key,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True, attempts=3),
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input=fake_key,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
mock.call('ln', '--symbolic', '--force',
'/dev/mapper/%s' % self.dev_name, self.symlink_path,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
], any_order=False)
@mock.patch('os_brick.executor.Executor._execute')
def test_attach_volume_fail(self, mock_execute):
fake_key = 'ea6c2e1b8f7f4f84ae3560116d659ba2'
self.encryptor._get_key = mock.MagicMock()
self.encryptor._get_key.return_value = (
test_cryptsetup.fake__get_key(None, fake_key))
mock_execute.side_effect = [
putils.ProcessExecutionError(exit_code=1), # luksOpen
mock.DEFAULT, # isLuks
]
self.assertRaises(putils.ProcessExecutionError,
self.encryptor.attach_volume, None)
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input=fake_key,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
mock.call('cryptsetup', 'isLuks', '--verbose', self.dev_path,
root_helper=self.root_helper,
run_as_root=True, check_exit_code=True),
], any_order=False)
@mock.patch('os_brick.executor.Executor._execute')
def test__close_volume(self, mock_execute):
self.encryptor.detach_volume()
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'luksClose', self.dev_name,
root_helper=self.root_helper,
attempts=3, run_as_root=True, check_exit_code=True),
])
@mock.patch('os_brick.executor.Executor._execute')
def test_detach_volume(self, mock_execute):
self.encryptor.detach_volume()
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'luksClose', self.dev_name,
root_helper=self.root_helper,
attempts=3, run_as_root=True, check_exit_code=True),
])
def test_get_mangled_passphrase(self):
# Confirm that a mangled passphrase is provided as per bug#1633518
unmangled_raw_key = bytes(binascii.unhexlify('0725230b'))
symmetric_key = key.SymmetricKey('AES', len(unmangled_raw_key) * 8,
unmangled_raw_key)
unmangled_encoded_key = symmetric_key.get_encoded()
self.assertEqual(self.encryptor._get_mangled_passphrase(
unmangled_encoded_key), '72523b')
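The mangled value checked above comes from hex-encoding each key byte without zero padding, so the leading zero nibble of `0x07` and `0x0b` disappears; that is the essence of bug#1633518. A quick sketch of how `'0725230b'` degrades to `'72523b'`:

```python
import binascii

raw = binascii.unhexlify('0725230b')          # bytes 0x07 0x25 0x23 0x0b
# Unpadded per-byte hex ('%x' instead of '%02x') drops leading zeros.
mangled = ''.join('%x' % b for b in bytearray(raw))
# mangled == '72523b'
```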
@mock.patch('os_brick.executor.Executor._execute')
def test_attach_volume_unmangle_passphrase(self, mock_execute):
fake_key = '0725230b'
fake_key_mangled = '72523b'
self.encryptor._get_key = mock.MagicMock()
self.encryptor._get_key.return_value = \
test_cryptsetup.fake__get_key(None, fake_key)
mock_execute.side_effect = [
putils.ProcessExecutionError(exit_code=2), # luksOpen
mock.DEFAULT, # luksOpen
mock.DEFAULT, # luksClose
mock.DEFAULT, # luksAddKey
mock.DEFAULT, # luksOpen
mock.DEFAULT, # luksClose
mock.DEFAULT, # luksRemoveKey
mock.DEFAULT, # luksOpen
mock.DEFAULT, # ln
]
self.encryptor.attach_volume(None)
mock_execute.assert_has_calls([
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input=fake_key,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input=fake_key_mangled,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
mock.call('cryptsetup', 'luksClose', self.dev_name,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True, attempts=3),
mock.call('cryptsetup', 'luksAddKey', self.dev_path,
process_input=''.join([fake_key_mangled,
'\n', fake_key,
'\n', fake_key]),
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input=fake_key,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
mock.call('cryptsetup', 'luksClose', self.dev_name,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True, attempts=3),
mock.call('cryptsetup', 'luksRemoveKey', self.dev_path,
process_input=fake_key_mangled,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
mock.call('cryptsetup', 'luksOpen', '--key-file=-', self.dev_path,
self.dev_name, process_input=fake_key,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
mock.call('ln', '--symbolic', '--force',
'/dev/mapper/%s' % self.dev_name, self.symlink_path,
root_helper=self.root_helper, run_as_root=True,
check_exit_code=True),
], any_order=False)
self.assertEqual(9, mock_execute.call_count)


@@ -1,30 +0,0 @@
# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.encryptors import nop
from os_brick.tests.encryptors import test_base
class NoOpEncryptorTestCase(test_base.VolumeEncryptorTestCase):
def _create(self):
return nop.NoOpEncryptor(root_helper=self.root_helper,
connection_info=self.connection_info,
keymgr=self.keymgr)
def test_attach_volume(self):
self.encryptor.attach_volume(None)
def test_detach_volume(self):
self.encryptor.detach_volume()


@@ -1,128 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os
from oslo_service import loopingcall
from os_brick import exception
from os_brick.initiator.connectors import aoe
from os_brick.tests.initiator import test_connector
class FakeFixedIntervalLoopingCall(object):
def __init__(self, f=None, *args, **kw):
self.args = args
self.kw = kw
self.f = f
self._stop = False
def stop(self):
self._stop = True
def wait(self):
return self
def start(self, interval, initial_delay=None):
while not self._stop:
try:
self.f(*self.args, **self.kw)
except loopingcall.LoopingCallDone:
return self
except Exception:
raise
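To see the fake in action without pulling in oslo_service, here is a self-contained equivalent where a stand-in `LoopingCallDone` exception ends the loop after a fixed number of polls (all names below are illustrative):

```python
class LoopingCallDone(Exception):
    """Stand-in for oslo_service.loopingcall.LoopingCallDone."""

class FakeLoop(object):
    def __init__(self, f, *args, **kw):
        self.f, self.args, self.kw = f, args, kw
        self._stop = False

    def start(self, interval, initial_delay=None):
        # Unlike the real FixedIntervalLoopingCall, this runs f()
        # back-to-back with no sleeping, until it raises LoopingCallDone.
        while not self._stop:
            try:
                self.f(*self.args, **self.kw)
            except LoopingCallDone:
                return self

calls = []

def poll():
    calls.append(len(calls) + 1)
    if len(calls) == 3:
        raise LoopingCallDone()

FakeLoop(poll).start(interval=0.5)
# calls == [1, 2, 3]: three synchronous attempts, then done
```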
class AoEConnectorTestCase(test_connector.ConnectorTestCase):
"""Test cases for AoE initiator class."""
def setUp(self):
super(AoEConnectorTestCase, self).setUp()
self.connector = aoe.AoEConnector('sudo')
self.connection_properties = {'target_shelf': 'fake_shelf',
'target_lun': 'fake_lun'}
self.mock_object(loopingcall, 'FixedIntervalLoopingCall',
FakeFixedIntervalLoopingCall)
def test_get_search_path(self):
expected = "/dev/etherd"
actual_path = self.connector.get_search_path()
self.assertEqual(expected, actual_path)
@mock.patch.object(os.path, 'exists', return_value=True)
def test_get_volume_paths(self, mock_exists):
expected = ["/dev/etherd/efake_shelf.fake_lun"]
paths = self.connector.get_volume_paths(self.connection_properties)
self.assertEqual(expected, paths)
def test_get_connector_properties(self):
props = aoe.AoEConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
@mock.patch.object(os.path, 'exists', side_effect=[True, True])
def test_connect_volume(self, exists_mock):
"""Ensure that if path exist aoe-revalidate was called."""
aoe_device, aoe_path = self.connector._get_aoe_info(
self.connection_properties)
with mock.patch.object(self.connector, '_execute',
return_value=["", ""]):
self.connector.connect_volume(self.connection_properties)
@mock.patch.object(os.path, 'exists', side_effect=[False, True])
def test_connect_volume_without_path(self, exists_mock):
"""Ensure that if path doesn't exist aoe-discovery was called."""
aoe_device, aoe_path = self.connector._get_aoe_info(
self.connection_properties)
expected_info = {
'type': 'block',
'device': aoe_device,
'path': aoe_path,
}
with mock.patch.object(self.connector, '_execute',
return_value=["", ""]):
volume_info = self.connector.connect_volume(
self.connection_properties)
self.assertDictEqual(volume_info, expected_info)
@mock.patch.object(os.path, 'exists', return_value=False)
def test_connect_volume_could_not_discover_path(self, exists_mock):
_aoe_device, aoe_path = self.connector._get_aoe_info(
self.connection_properties)
with mock.patch.object(self.connector, '_execute',
return_value=["", ""]):
self.assertRaises(exception.VolumeDeviceNotFound,
self.connector.connect_volume,
self.connection_properties)
@mock.patch.object(os.path, 'exists', return_value=True)
def test_disconnect_volume(self, mock_exists):
"""Ensure that if path exist aoe-revaliadte was called."""
aoe_device, aoe_path = self.connector._get_aoe_info(
self.connection_properties)
with mock.patch.object(self.connector, '_execute',
return_value=["", ""]):
self.connector.disconnect_volume(self.connection_properties, {})
def test_extend_volume(self):
self.assertRaises(NotImplementedError,
self.connector.extend_volume,
self.connection_properties)


@ -1,77 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from os_brick.initiator.connectors import base_iscsi
from os_brick.initiator.connectors import fake
from os_brick.tests import base as test_base
class BaseISCSIConnectorTestCase(test_base.TestCase):
def setUp(self):
super(BaseISCSIConnectorTestCase, self).setUp()
self.connector = fake.FakeBaseISCSIConnector(None)
@mock.patch.object(base_iscsi.BaseISCSIConnector, '_get_all_targets')
def test_iterate_all_targets(self, mock_get_all_targets):
# extra_property cannot be a sentinel: a copied sentinel will not be
# identical to the original one.
connection_properties = {
'target_portals': mock.sentinel.target_portals,
'target_iqns': mock.sentinel.target_iqns,
'target_luns': mock.sentinel.target_luns,
'extra_property': 'extra_property'}
mock_get_all_targets.return_value = [(
mock.sentinel.portal, mock.sentinel.iqn, mock.sentinel.lun)]
# The method is a generator that yields dictionaries; list() will
# iterate over all of its items.
list_props = list(
self.connector._iterate_all_targets(connection_properties))
mock_get_all_targets.assert_called_once_with(connection_properties)
self.assertEqual(1, len(list_props))
expected_props = {'target_portal': mock.sentinel.portal,
'target_iqn': mock.sentinel.iqn,
'target_lun': mock.sentinel.lun,
'extra_property': 'extra_property'}
self.assertEqual(expected_props, list_props[0])
def test_get_all_targets(self):
connection_properties = {
'target_portals': [mock.sentinel.target_portals],
'target_iqns': [mock.sentinel.target_iqns],
'target_luns': [mock.sentinel.target_luns]}
all_targets = self.connector._get_all_targets(connection_properties)
expected_targets = zip([mock.sentinel.target_portals],
[mock.sentinel.target_iqns],
[mock.sentinel.target_luns])
self.assertEqual(list(expected_targets), list(all_targets))
def test_get_all_targets_single_target(self):
connection_properties = {
'target_portal': mock.sentinel.target_portal,
'target_iqn': mock.sentinel.target_iqn,
'target_lun': mock.sentinel.target_lun}
all_targets = self.connector._get_all_targets(connection_properties)
expected_target = (mock.sentinel.target_portal,
mock.sentinel.target_iqn,
mock.sentinel.target_lun)
self.assertEqual([expected_target], all_targets)
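The two cases above exercise the connector's target-expansion logic: when the plural keys ('target_portals', 'target_iqns', 'target_luns') are present they are zipped into (portal, iqn, lun) tuples, otherwise the singular keys yield a single tuple. A hedged re-implementation of that logic, for illustration only (the real method lives on BaseISCSIConnector):

```python
def get_all_targets(props):
    # Sketch of the _get_all_targets behavior the tests above verify:
    # prefer the plural multi-target keys when all three are present,
    # else fall back to the single-target keys.
    if all(k in props for k in ('target_portals', 'target_iqns',
                                'target_luns')):
        return list(zip(props['target_portals'], props['target_iqns'],
                        props['target_luns']))
    return [(props['target_portal'], props['target_iqn'],
             props['target_lun'])]


multi = get_all_targets({'target_portals': ['p1', 'p2'],
                         'target_iqns': ['iqn1', 'iqn2'],
                         'target_luns': [1, 2]})
single = get_all_targets({'target_portal': 'p1',
                          'target_iqn': 'iqn1',
                          'target_lun': 1})
```

zip() pairs the lists positionally, which is why the test feeds equal-length lists: each portal lines up with its iqn and lun.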


@ -1,152 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob
import os
from os_brick import exception
from os_brick.initiator.connectors import disco
from os_brick.tests.initiator import test_connector
class DISCOConnectorTestCase(test_connector.ConnectorTestCase):
"""Test cases for DISCO connector."""
# Fake volume information
volume = {
'name': 'a-disco-volume',
'disco_id': '1234567'
}
# Conf for test
conf = {
'ip': test_connector.MY_IP,
'port': 9898
}
def setUp(self):
super(DISCOConnectorTestCase, self).setUp()
self.fake_connection_properties = {
'name': self.volume['name'],
'disco_id': self.volume['disco_id'],
'conf': {
'server_ip': self.conf['ip'],
'server_port': self.conf['port']}
}
self.fake_volume_status = {'attached': True,
'detached': False}
self.fake_request_status = {'success': None,
'fail': 'ERROR'}
self.volume_status = 'detached'
self.request_status = 'success'
# Patch the request and os calls to fake versions
self.mock_object(disco.DISCOConnector,
'_send_disco_vol_cmd',
self.perform_disco_request)
self.mock_object(os.path, 'exists', self.is_volume_attached)
self.mock_object(glob, 'glob', self.list_disco_volume)
# The actual DISCO connector
self.connector = disco.DISCOConnector(
'sudo', execute=self.fake_execute)
def perform_disco_request(self, *cmd, **kwargs):
"""Fake the socket call."""
return self.fake_request_status[self.request_status]
def is_volume_attached(self, *cmd, **kwargs):
"""Fake volume detection check."""
return self.fake_volume_status[self.volume_status]
def list_disco_volume(self, *cmd, **kwargs):
"""Fake the glob call."""
path_dir = self.connector.get_search_path()
volume_id = self.volume['disco_id']
volume_items = [path_dir, '/', self.connector.DISCO_PREFIX, volume_id]
volume_path = ''.join(volume_items)
return [volume_path]
def test_get_connector_properties(self):
props = disco.DISCOConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
def test_get_search_path(self):
"""DISCO volumes should be under /dev."""
expected = "/dev"
actual = self.connector.get_search_path()
self.assertEqual(expected, actual)
def test_get_volume_paths(self):
"""Test to get all the path for a specific volume."""
expected = ['/dev/dms1234567']
self.volume_status = 'attached'
actual = self.connector.get_volume_paths(
self.fake_connection_properties)
self.assertEqual(expected, actual)
def test_connect_volume(self):
"""Attach a volume."""
self.connector.connect_volume(self.fake_connection_properties)
def test_connect_volume_already_attached(self):
"""Make sure that we don't issue the request."""
self.request_status = 'fail'
self.volume_status = 'attached'
self.test_connect_volume()
def test_connect_volume_request_fail(self):
"""Fail the attach request."""
self.volume_status = 'detached'
self.request_status = 'fail'
self.assertRaises(exception.BrickException,
self.test_connect_volume)
def test_disconnect_volume(self):
"""Detach a volume."""
self.connector.disconnect_volume(self.fake_connection_properties, None)
def test_disconnect_volume_attached(self):
"""Detach a volume attached."""
self.request_status = 'success'
self.volume_status = 'attached'
self.test_disconnect_volume()
def test_disconnect_volume_already_detached(self):
"""Ensure that we don't issue the request."""
self.request_status = 'fail'
self.volume_status = 'detached'
self.test_disconnect_volume()
def test_disconnect_volume_request_fail(self):
"""Fail the detach request."""
self.volume_status = 'attached'
self.request_status = 'fail'
self.assertRaises(exception.BrickException,
self.test_disconnect_volume)
def test_get_all_available_volumes(self):
"""Test to get all the available DISCO volumes."""
expected = ['/dev/dms1234567']
actual = self.connector.get_all_available_volumes(None)
self.assertItemsEqual(expected, actual)
def test_extend_volume(self):
self.assertRaises(NotImplementedError,
self.connector.extend_volume,
self.fake_connection_properties)


@ -1,89 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.initiator.connectors import drbd
from os_brick.tests.initiator import test_connector
class DRBDConnectorTestCase(test_connector.ConnectorTestCase):
RESOURCE_TEMPLATE = '''
resource r0 {
on host1 {
}
net {
shared-secret "%(shared-secret)s";
}
}
'''
def setUp(self):
super(DRBDConnectorTestCase, self).setUp()
self.connector = drbd.DRBDConnector(
None, execute=self._fake_exec)
self.execs = []
def _fake_exec(self, *cmd, **kwargs):
self.execs.append(cmd)
# out, err
return ('', '')
def test_get_connector_properties(self):
props = drbd.DRBDConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
def test_connect_volume(self):
"""Test connect_volume."""
cprop = {
'provider_auth': 'my-secret',
'config': self.RESOURCE_TEMPLATE,
'name': 'my-precious',
'device': '/dev/drbd951722',
'data': {},
}
res = self.connector.connect_volume(cprop)
self.assertEqual(cprop['device'], res['path'])
self.assertEqual('adjust', self.execs[0][1])
self.assertEqual(cprop['name'], self.execs[0][4])
def test_disconnect_volume(self):
"""Test the disconnect volume case."""
cprop = {
'provider_auth': 'my-secret',
'config': self.RESOURCE_TEMPLATE,
'name': 'my-precious',
'device': '/dev/drbd951722',
'data': {},
}
dev_info = {}
self.connector.disconnect_volume(cprop, dev_info)
self.assertEqual('down', self.execs[0][1])
def test_extend_volume(self):
cprop = {'name': 'something'}
self.assertRaises(NotImplementedError,
self.connector.extend_volume,
cprop)


@ -1,452 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os
import six
from os_brick import exception
from os_brick.initiator.connectors import base
from os_brick.initiator.connectors import fibre_channel
from os_brick.initiator import linuxfc
from os_brick.initiator import linuxscsi
from os_brick.tests.initiator import test_connector
class FibreChannelConnectorTestCase(test_connector.ConnectorTestCase):
def setUp(self):
super(FibreChannelConnectorTestCase, self).setUp()
self.connector = fibre_channel.FibreChannelConnector(
None, execute=self.fake_execute, use_multipath=False)
self.assertIsNotNone(self.connector)
self.assertIsNotNone(self.connector._linuxfc)
self.assertIsNotNone(self.connector._linuxscsi)
def fake_get_fc_hbas(self):
return [{'ClassDevice': 'host1',
'ClassDevicePath': '/sys/devices/pci0000:00/0000:00:03.0'
'/0000:05:00.2/host1/fc_host/host1',
'dev_loss_tmo': '30',
'fabric_name': '0x1000000533f55566',
'issue_lip': '<store method only>',
'max_npiv_vports': '255',
'maxframe_size': '2048 bytes',
'node_name': '0x200010604b019419',
'npiv_vports_inuse': '0',
'port_id': '0x680409',
'port_name': '0x100010604b019419',
'port_state': 'Online',
'port_type': 'NPort (fabric via point-to-point)',
'speed': '10 Gbit',
'supported_classes': 'Class 3',
'supported_speeds': '10 Gbit',
'symbolic_name': 'Emulex 554M FV4.0.493.0 DV8.3.27',
'tgtid_bind_type': 'wwpn (World Wide Port Name)',
'uevent': None,
'vport_create': '<store method only>',
'vport_delete': '<store method only>'}]
def fake_get_fc_hbas_info(self):
hbas = self.fake_get_fc_hbas()
info = [{'port_name': hbas[0]['port_name'].replace('0x', ''),
'node_name': hbas[0]['node_name'].replace('0x', ''),
'host_device': hbas[0]['ClassDevice'],
'device_path': hbas[0]['ClassDevicePath']}]
return info
def fibrechan_connection(self, volume, location, wwn):
return {'driver_volume_type': 'fibrechan',
'data': {
'volume_id': volume['id'],
'target_portal': location,
'target_wwn': wwn,
'target_lun': 1,
}}
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas')
def test_get_connector_properties(self, mock_hbas):
mock_hbas.return_value = self.fake_get_fc_hbas()
multipath = True
enforce_multipath = True
props = fibre_channel.FibreChannelConnector.get_connector_properties(
'sudo', multipath=multipath,
enforce_multipath=enforce_multipath)
hbas = self.fake_get_fc_hbas()
expected_props = {'wwpns': [hbas[0]['port_name'].replace('0x', '')],
'wwnns': [hbas[0]['node_name'].replace('0x', '')]}
self.assertEqual(expected_props, props)
def test_get_search_path(self):
search_path = self.connector.get_search_path()
expected = "/dev/disk/by-path"
self.assertEqual(expected, search_path)
def test_get_pci_num(self):
hba = {'device_path': "/sys/devices/pci0000:00/0000:00:03.0"
"/0000:05:00.3/host2/fc_host/host2"}
pci_num = self.connector._get_pci_num(hba)
self.assertEqual("0000:05:00.3", pci_num)
hba = {'device_path': "/sys/devices/pci0000:00/0000:00:03.0"
"/0000:05:00.3/0000:06:00.6/host2/fc_host/host2"}
pci_num = self.connector._get_pci_num(hba)
self.assertEqual("0000:06:00.6", pci_num)
hba = {'device_path': "/sys/devices/pci0000:20/0000:20:03.0"
"/0000:21:00.2/net/ens2f2/ctlr_2/host3"
"/fc_host/host3"}
pci_num = self.connector._get_pci_num(hba)
self.assertEqual("0000:21:00.2", pci_num)
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas_info')
def test_get_volume_paths(self, fake_fc_hbas_info,
fake_fc_hbas, fake_exists):
fake_fc_hbas.side_effect = self.fake_get_fc_hbas
fake_fc_hbas_info.side_effect = self.fake_get_fc_hbas_info
name = 'volume-00000001'
vol = {'id': 1, 'name': name}
location = '10.0.2.15:3260'
wwn = '1234567890123456'
connection_info = self.fibrechan_connection(vol, location, wwn)
volume_paths = self.connector.get_volume_paths(
connection_info['data'])
expected = ['/dev/disk/by-path/pci-0000:05:00.2'
'-fc-0x1234567890123456-lun-1']
self.assertEqual(expected, volume_paths)
@mock.patch.object(linuxscsi.LinuxSCSI, 'wait_for_rw')
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(os.path, 'realpath', return_value='/dev/sdb')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas_info')
@mock.patch.object(linuxscsi.LinuxSCSI, 'remove_scsi_device')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_scsi_wwn')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_device_info')
@mock.patch.object(base.BaseLinuxConnector, 'check_valid_device')
def test_connect_volume(self, check_valid_device_mock,
get_device_info_mock,
get_scsi_wwn_mock,
remove_device_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock):
check_valid_device_mock.return_value = True
get_fc_hbas_mock.side_effect = self.fake_get_fc_hbas
get_fc_hbas_info_mock.side_effect = self.fake_get_fc_hbas_info
wwn = '1234567890'
multipath_devname = '/dev/md-1'
devices = {"device": multipath_devname,
"id": wwn,
"devices": [{'device': '/dev/sdb',
'address': '1:0:0:1',
'host': 1, 'channel': 0,
'id': 0, 'lun': 1}]}
get_device_info_mock.return_value = devices['devices'][0]
get_scsi_wwn_mock.return_value = wwn
location = '10.0.2.15:3260'
name = 'volume-00000001'
vol = {'id': 1, 'name': name}
# Should work for string, unicode, and list
wwns = ['1234567890123456', six.text_type('1234567890123456'),
['1234567890123456', '1234567890123457']]
for wwn in wwns:
connection_info = self.fibrechan_connection(vol, location, wwn)
dev_info = self.connector.connect_volume(connection_info['data'])
exp_wwn = wwn[0] if isinstance(wwn, list) else wwn
dev_str = ('/dev/disk/by-path/pci-0000:05:00.2-fc-0x%s-lun-1' %
exp_wwn)
self.assertEqual(dev_info['type'], 'block')
self.assertEqual(dev_info['path'], dev_str)
self.assertNotIn('multipath_id', dev_info)
self.assertNotIn('devices', dev_info)
self.connector.disconnect_volume(connection_info['data'], dev_info)
expected_commands = []
self.assertEqual(expected_commands, self.cmds)
# Should not work for anything other than string, unicode, and list
connection_info = self.fibrechan_connection(vol, location, 123)
self.assertRaises(exception.NoFibreChannelHostsFound,
self.connector.connect_volume,
connection_info['data'])
get_fc_hbas_mock.side_effect = [[]]
get_fc_hbas_info_mock.side_effect = [[]]
self.assertRaises(exception.NoFibreChannelHostsFound,
self.connector.connect_volume,
connection_info['data'])
@mock.patch.object(linuxscsi.LinuxSCSI, 'find_multipath_device_path')
def _test_connect_volume_multipath(self, get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock,
access_mode,
should_wait_for_rw,
find_mp_device_path_mock):
self.connector.use_multipath = True
get_fc_hbas_mock.side_effect = self.fake_get_fc_hbas
get_fc_hbas_info_mock.side_effect = self.fake_get_fc_hbas_info
wwn = '1234567890'
multipath_devname = '/dev/md-1'
devices = {"device": multipath_devname,
"id": wwn,
"devices": [{'device': '/dev/sdb',
'address': '1:0:0:1',
'host': 1, 'channel': 0,
'id': 0, 'lun': 1},
{'device': '/dev/sdc',
'address': '1:0:0:2',
'host': 1, 'channel': 0,
'id': 0, 'lun': 1}]}
get_device_info_mock.side_effect = devices['devices']
get_scsi_wwn_mock.return_value = wwn
location = '10.0.2.15:3260'
name = 'volume-00000001'
vol = {'id': 1, 'name': name}
initiator_wwn = ['1234567890123456', '1234567890123457']
find_mp_device_path_mock.return_value = '/dev/mapper/mpatha'
find_mp_dev_mock.return_value = {"device": "dm-3",
"id": wwn,
"name": "mpatha"}
connection_info = self.fibrechan_connection(vol, location,
initiator_wwn)
connection_info['data']['access_mode'] = access_mode
self.connector.connect_volume(connection_info['data'])
self.assertEqual(should_wait_for_rw, wait_for_rw_mock.called)
self.connector.disconnect_volume(connection_info['data'],
devices['devices'][0])
expected_commands = [
'multipath -f ' + find_mp_device_path_mock.return_value,
'blockdev --flushbufs /dev/sdb',
'tee -a /sys/block/sdb/device/delete',
'blockdev --flushbufs /dev/sdc',
'tee -a /sys/block/sdc/device/delete',
]
self.assertEqual(expected_commands, self.cmds)
return connection_info
@mock.patch.object(linuxscsi.LinuxSCSI, 'find_multipath_device')
@mock.patch.object(linuxscsi.LinuxSCSI, 'wait_for_rw')
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(os.path, 'realpath', return_value='/dev/sdb')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas_info')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_scsi_wwn')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_device_info')
@mock.patch.object(base.BaseLinuxConnector, 'check_valid_device')
def test_connect_volume_multipath_rw(self, check_valid_device_mock,
get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock):
check_valid_device_mock.return_value = True
self._test_connect_volume_multipath(get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock,
'rw',
True)
@mock.patch.object(linuxscsi.LinuxSCSI, 'find_multipath_device')
@mock.patch.object(linuxscsi.LinuxSCSI, 'wait_for_rw')
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(os.path, 'realpath', return_value='/dev/sdb')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas_info')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_scsi_wwn')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_device_info')
@mock.patch.object(base.BaseLinuxConnector, 'check_valid_device')
def test_connect_volume_multipath_no_access_mode(self,
check_valid_device_mock,
get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock):
check_valid_device_mock.return_value = True
self._test_connect_volume_multipath(get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock,
None,
True)
@mock.patch.object(linuxscsi.LinuxSCSI, 'find_multipath_device')
@mock.patch.object(linuxscsi.LinuxSCSI, 'wait_for_rw')
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(os.path, 'realpath', return_value='/dev/sdb')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas_info')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_scsi_wwn')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_device_info')
@mock.patch.object(base.BaseLinuxConnector, 'check_valid_device')
def test_connect_volume_multipath_ro(self, check_valid_device_mock,
get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock):
check_valid_device_mock.return_value = True
self._test_connect_volume_multipath(get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock,
'ro',
False)
@mock.patch.object(base.BaseLinuxConnector, '_discover_mpath_device')
@mock.patch.object(linuxscsi.LinuxSCSI, 'find_multipath_device')
@mock.patch.object(linuxscsi.LinuxSCSI, 'wait_for_rw')
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(os.path, 'realpath', return_value='/dev/sdb')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas_info')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_scsi_wwn')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_device_info')
@mock.patch.object(base.BaseLinuxConnector, 'check_valid_device')
def test_connect_volume_multipath_not_found(self,
check_valid_device_mock,
get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock,
discover_mp_dev_mock):
check_valid_device_mock.return_value = True
discover_mp_dev_mock.return_value = ("/dev/disk/by-path/something",
None)
connection_info = self._test_connect_volume_multipath(
get_device_info_mock, get_scsi_wwn_mock, get_fc_hbas_info_mock,
get_fc_hbas_mock, realpath_mock, exists_mock, wait_for_rw_mock,
find_mp_dev_mock, 'rw', False)
self.assertNotIn('multipathd_id', connection_info['data'])
@mock.patch.object(fibre_channel.FibreChannelConnector, 'get_volume_paths')
def test_extend_volume_no_path(self, mock_volume_paths):
mock_volume_paths.return_value = []
volume = {'id': 'fake_uuid'}
wwn = '1234567890123456'
connection_info = self.fibrechan_connection(volume,
"10.0.2.15:3260",
wwn)
self.assertRaises(exception.VolumePathsNotFound,
self.connector.extend_volume,
connection_info['data'])
@mock.patch.object(linuxscsi.LinuxSCSI, 'extend_volume')
@mock.patch.object(fibre_channel.FibreChannelConnector, 'get_volume_paths')
def test_extend_volume(self, mock_volume_paths, mock_scsi_extend):
fake_new_size = 1024
mock_volume_paths.return_value = ['/dev/vdx']
mock_scsi_extend.return_value = fake_new_size
volume = {'id': 'fake_uuid'}
wwn = '1234567890123456'
connection_info = self.fibrechan_connection(volume,
"10.0.2.15:3260",
wwn)
new_size = self.connector.extend_volume(connection_info['data'])
self.assertEqual(fake_new_size, new_size)
@mock.patch.object(os.path, 'isdir')
def test_get_all_available_volumes_path_not_dir(self, mock_isdir):
mock_isdir.return_value = False
expected = []
actual = self.connector.get_all_available_volumes()
self.assertItemsEqual(expected, actual)
@mock.patch('eventlet.greenthread.sleep', mock.Mock())
@mock.patch.object(linuxscsi.LinuxSCSI, 'find_multipath_device')
@mock.patch.object(linuxscsi.LinuxSCSI, 'wait_for_rw')
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(os.path, 'realpath', return_value='/dev/sdb')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_hbas_info')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_scsi_wwn')
@mock.patch.object(linuxscsi.LinuxSCSI, 'get_device_info')
@mock.patch.object(base.BaseLinuxConnector, 'check_valid_device')
def test_connect_volume_device_not_valid(self, check_valid_device_mock,
get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock):
check_valid_device_mock.return_value = False
self.assertRaises(exception.NoFibreChannelVolumeDeviceFound,
self._test_connect_volume_multipath,
get_device_info_mock,
get_scsi_wwn_mock,
get_fc_hbas_info_mock,
get_fc_hbas_mock,
realpath_mock,
exists_mock,
wait_for_rw_mock,
find_mp_dev_mock,
'rw',
True)


@ -1,43 +0,0 @@
# (c) Copyright 2013 IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from os_brick.initiator.connectors import fibre_channel_ppc64
from os_brick.initiator import linuxscsi
from os_brick.tests.initiator import test_connector
class FibreChannelConnectorPPC64TestCase(test_connector.ConnectorTestCase):
def setUp(self):
super(FibreChannelConnectorPPC64TestCase, self).setUp()
self.connector = fibre_channel_ppc64.FibreChannelConnectorPPC64(
None, execute=self.fake_execute, use_multipath=False)
self.assertIsNotNone(self.connector)
self.assertIsNotNone(self.connector._linuxfc)
self.assertEqual(self.connector._linuxfc.__class__.__name__,
"LinuxFibreChannelPPC64")
self.assertIsNotNone(self.connector._linuxscsi)
@mock.patch.object(linuxscsi.LinuxSCSI, 'process_lun_id', return_value='2')
def test_get_host_devices(self, mock_process_lun_id):
lun = 2
possible_devs = [(3, "0x5005076802232ade"),
(3, "0x5005076802332ade"), ]
devices = self.connector._get_host_devices(possible_devs, lun)
self.assertEqual(2, len(devices))
device_path = "/dev/disk/by-path/fc-0x5005076802232ade-lun-2"
self.assertEqual(devices[0], device_path)
device_path = "/dev/disk/by-path/fc-0x5005076802332ade-lun-2"
self.assertEqual(devices[1], device_path)


@ -1,73 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from os_brick.initiator.connectors import fibre_channel_s390x
from os_brick.initiator import linuxfc
from os_brick.tests.initiator import test_connector
class FibreChannelConnectorS390XTestCase(test_connector.ConnectorTestCase):
def setUp(self):
super(FibreChannelConnectorS390XTestCase, self).setUp()
self.connector = fibre_channel_s390x.FibreChannelConnectorS390X(
None, execute=self.fake_execute, use_multipath=False)
self.assertIsNotNone(self.connector)
self.assertIsNotNone(self.connector._linuxfc)
self.assertEqual(self.connector._linuxfc.__class__.__name__,
"LinuxFibreChannelS390X")
self.assertIsNotNone(self.connector._linuxscsi)
@mock.patch.object(linuxfc.LinuxFibreChannelS390X, 'configure_scsi_device')
def test_get_host_devices(self, mock_configure_scsi_device):
lun = 2
possible_devs = [(3, 5), ]
devices = self.connector._get_host_devices(possible_devs, lun)
mock_configure_scsi_device.assert_called_with(3, 5,
"0x0002000000000000")
self.assertEqual(2, len(devices))
device_path = "/dev/disk/by-path/ccw-3-zfcp-5:0x0002000000000000"
self.assertEqual(devices[0], device_path)
device_path = "/dev/disk/by-path/ccw-3-fc-5-lun-2"
self.assertEqual(devices[1], device_path)
def test_get_lun_string(self):
lun = 1
lunstring = self.connector._get_lun_string(lun)
self.assertEqual(lunstring, "0x0001000000000000")
lun = 0xff
lunstring = self.connector._get_lun_string(lun)
self.assertEqual(lunstring, "0x00ff000000000000")
lun = 0x101
lunstring = self.connector._get_lun_string(lun)
self.assertEqual(lunstring, "0x0101000000000000")
lun = 0x4020400a
lunstring = self.connector._get_lun_string(lun)
self.assertEqual(lunstring, "0x4020400a00000000")
@mock.patch.object(fibre_channel_s390x.FibreChannelConnectorS390X,
'_get_possible_devices', return_value=[(3, 5), ])
@mock.patch.object(linuxfc.LinuxFibreChannelS390X, 'get_fc_hbas_info',
return_value=[])
@mock.patch.object(linuxfc.LinuxFibreChannelS390X,
'deconfigure_scsi_device')
def test_remove_devices(self, mock_deconfigure_scsi_device,
mock_get_fc_hbas_info, mock_get_possible_devices):
connection_properties = {'target_wwn': 5, 'target_lun': 2}
self.connector._remove_devices(connection_properties, devices=None)
mock_deconfigure_scsi_device.assert_called_with(3, 5,
"0x0002000000000000")
mock_get_fc_hbas_info.assert_called_once_with()
mock_get_possible_devices.assert_called_once_with([], 5)
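The `test_get_lun_string` assertions above pin down the zFCP 64-bit LUN encoding. A minimal standalone sketch of that encoding (the helper name here is illustrative; os-brick's real implementation lives on the connector class):

```python
def lun_to_zfcp_string(lun):
    """Encode an integer LUN as the 16-hex-digit string zFCP expects.

    LUNs up to 0xffff fill the first four hex digits; wider LUNs
    (up to 32 bits) fill the first eight. The remainder is zero-padded.
    """
    if lun <= 0xffff:
        return "0x%04x000000000000" % lun
    if lun <= 0xffffffff:
        return "0x%08x00000000" % lun
    raise ValueError("LUN out of range: %s" % lun)
```

This reproduces all four expected values exercised by the tests, e.g. `lun_to_zfcp_string(0x4020400a)` yields `"0x4020400a00000000"`.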

@@ -1,36 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.initiator.connectors import gpfs
from os_brick.tests.initiator.connectors import test_local
class GPFSConnectorTestCase(test_local.LocalConnectorTestCase):
def setUp(self):
super(GPFSConnectorTestCase, self).setUp()
self.connection_properties = {'name': 'foo',
'device_path': '/tmp/bar'}
self.connector = gpfs.GPFSConnector(None)
def test_connect_volume(self):
cprops = self.connection_properties
dev_info = self.connector.connect_volume(cprops)
self.assertEqual(dev_info['type'], 'gpfs')
self.assertEqual(dev_info['path'], cprops['device_path'])
def test_connect_volume_with_invalid_connection_data(self):
cprops = {}
self.assertRaises(ValueError,
self.connector.connect_volume, cprops)
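`GPFSConnectorTestCase` reuses every test from `LocalConnectorTestCase` simply by subclassing it and swapping the connector under test. A toy sketch of that reuse pattern (class names here are illustrative, not os-brick's):

```python
import unittest


class BaseTypeTest(unittest.TestCase):
    """Base case whose tests re-run once per subclass -- the pattern
    GPFSConnectorTestCase uses by extending LocalConnectorTestCase."""
    expected_type = 'local'

    def make_dev_info(self):
        # Stand-in for connect_volume(): report the configured type.
        return {'type': self.expected_type, 'path': '/tmp/bar'}

    def test_type(self):
        self.assertEqual(self.expected_type, self.make_dev_info()['type'])


class GPFSTypeTest(BaseTypeTest):
    expected_type = 'gpfs'   # every inherited test now runs for 'gpfs'
```

Overriding one class attribute is enough; the loader discovers the inherited `test_*` methods on the subclass and runs them against the new value.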

@@ -1,219 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os
from oslo_concurrency import processutils as putils
from os_brick import exception
from os_brick.initiator import connector
from os_brick.initiator.connectors import hgst
from os_brick.tests.initiator import test_connector
class HGSTConnectorTestCase(test_connector.ConnectorTestCase):
"""Test cases for HGST initiator class."""
IP_OUTPUT = """
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 169.254.169.254/32 scope link lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
link/ether 00:25:90:d9:18:08 brd ff:ff:ff:ff:ff:ff
inet6 fe80::225:90ff:fed9:1808/64 scope link
valid_lft forever preferred_lft forever
3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
link/ether 00:25:90:d9:18:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.23/24 brd 192.168.0.255 scope global em2
valid_lft forever preferred_lft forever
inet6 fe80::225:90ff:fed9:1809/64 scope link
valid_lft forever preferred_lft forever
"""
DOMAIN_OUTPUT = """localhost"""
DOMAIN_FAILED = """this.better.not.resolve.to.a.name.or.else"""
SET_APPHOST_OUTPUT = """
VLVM_SET_APPHOSTS0000000395
Request Succeeded
"""
def setUp(self):
super(HGSTConnectorTestCase, self).setUp()
self.connector = hgst.HGSTConnector(
None, execute=self._fake_exec)
self._fail_set_apphosts = False
self._fail_ip = False
self._fail_domain_list = False
def _fake_exec_set_apphosts(self, *cmd):
if self._fail_set_apphosts:
raise putils.ProcessExecutionError(None, None, 1)
else:
return self.SET_APPHOST_OUTPUT, ''
def _fake_exec_ip(self, *cmd):
if self._fail_ip:
# Remove localhost so there is no IP match
return self.IP_OUTPUT.replace("127.0.0.1", "x.x.x.x"), ''
else:
return self.IP_OUTPUT, ''
def _fake_exec_domain_list(self, *cmd):
if self._fail_domain_list:
return self.DOMAIN_FAILED, ''
else:
return self.DOMAIN_OUTPUT, ''
def _fake_exec(self, *cmd, **kwargs):
self.cmdline = " ".join(cmd)
if cmd[0] == "ip":
return self._fake_exec_ip(*cmd)
elif cmd[0] == "vgc-cluster":
if cmd[1] == "domain-list":
return self._fake_exec_domain_list(*cmd)
elif cmd[1] == "space-set-apphosts":
return self._fake_exec_set_apphosts(*cmd)
else:
return '', ''
def test_factory(self):
"""Can we instantiate a HGSTConnector of the right kind?"""
obj = connector.InitiatorConnector.factory('HGST', None, arch='x86_64')
self.assertEqual("HGSTConnector", obj.__class__.__name__)
def test_get_search_path(self):
expected = "/dev"
actual = self.connector.get_search_path()
self.assertEqual(expected, actual)
@mock.patch.object(os.path, 'exists', return_value=True)
def test_get_volume_paths(self, mock_exists):
cprops = {'name': 'space', 'noremovehost': 'stor1'}
path = "/dev/%s" % cprops['name']
expected = [path]
actual = self.connector.get_volume_paths(cprops)
self.assertEqual(expected, actual)
def test_connect_volume(self):
"""Tests that a simple connection succeeds"""
self._fail_set_apphosts = False
self._fail_ip = False
self._fail_domain_list = False
cprops = {'name': 'space', 'noremovehost': 'stor1'}
dev_info = self.connector.connect_volume(cprops)
self.assertEqual('block', dev_info['type'])
self.assertEqual('space', dev_info['device'])
self.assertEqual('/dev/space', dev_info['path'])
def test_get_connector_properties(self):
props = hgst.HGSTConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
def test_connect_volume_nohost_fail(self):
"""This host should not be found, connect should fail."""
self._fail_set_apphosts = False
self._fail_ip = True
self._fail_domain_list = False
cprops = {'name': 'space', 'noremovehost': 'stor1'}
self.assertRaises(exception.BrickException,
self.connector.connect_volume,
cprops)
def test_connect_volume_nospace_fail(self):
"""The space command will fail, exception to be thrown"""
self._fail_set_apphosts = True
self._fail_ip = False
self._fail_domain_list = False
cprops = {'name': 'space', 'noremovehost': 'stor1'}
self.assertRaises(exception.BrickException,
self.connector.connect_volume,
cprops)
def test_disconnect_volume(self):
"""Simple disconnection should pass and disconnect me"""
self._fail_set_apphosts = False
self._fail_ip = False
self._fail_domain_list = False
self._cmdline = ""
cprops = {'name': 'space', 'noremovehost': 'stor1'}
self.connector.disconnect_volume(cprops, None)
exp_cli = ("vgc-cluster space-set-apphosts -n space "
"-A localhost --action DELETE")
self.assertEqual(exp_cli, self.cmdline)
def test_disconnect_volume_nohost(self):
"""Should not run space-set-apphosts because localhost will be the noremovehost."""
self._fail_set_apphosts = False
self._fail_ip = False
self._fail_domain_list = False
self._cmdline = ""
cprops = {'name': 'space', 'noremovehost': 'localhost'}
self.connector.disconnect_volume(cprops, None)
# The last command should be the IP listing, not set apphosts
exp_cli = ("ip addr list")
self.assertEqual(exp_cli, self.cmdline)
def test_disconnect_volume_fails(self):
"""The set-apphosts should fail, exception to be thrown"""
self._fail_set_apphosts = True
self._fail_ip = False
self._fail_domain_list = False
self._cmdline = ""
cprops = {'name': 'space', 'noremovehost': 'stor1'}
self.assertRaises(exception.BrickException,
self.connector.disconnect_volume,
cprops, None)
def test_bad_connection_properties(self):
"""Send in connection_properties missing required fields"""
# Invalid connection_properties
self.assertRaises(exception.BrickException,
self.connector.connect_volume,
None)
# Name required for connect_volume
cprops = {'noremovehost': 'stor1'}
self.assertRaises(exception.BrickException,
self.connector.connect_volume,
cprops)
# Invalid connection_properties
self.assertRaises(exception.BrickException,
self.connector.disconnect_volume,
None, None)
# Name and noremovehost needed for disconnect_volume
cprops = {'noremovehost': 'stor1'}
self.assertRaises(exception.BrickException,
self.connector.disconnect_volume,
cprops, None)
cprops = {'name': 'space'}
self.assertRaises(exception.BrickException,
self.connector.disconnect_volume,
cprops, None)
def test_extend_volume(self):
cprops = {'name': 'space', 'noremovehost': 'stor1'}
self.assertRaises(NotImplementedError,
self.connector.extend_volume,
cprops)
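The HGST case builds its test double as a `_fake_exec` that records the command line and dispatches on the executable and subcommand, with boolean flags to force failure paths. A self-contained sketch of that dispatch pattern (canned outputs here are abbreviated stand-ins for the fixtures above):

```python
class FakeExecute:
    """Command-dispatch test double in the style of _fake_exec above:
    record the full command line, branch on argv[0]/argv[1], and let
    instance flags steer success vs. failure replies."""

    def __init__(self):
        self.cmdline = ''
        self.fail_domain_list = False

    def __call__(self, *cmd, **kwargs):
        self.cmdline = ' '.join(cmd)
        if cmd[0] == 'ip':
            return 'inet 127.0.0.1/8 scope host lo', ''
        if cmd[0] == 'vgc-cluster' and cmd[1] == 'domain-list':
            if self.fail_domain_list:
                return 'this.better.not.resolve', ''
            return 'localhost', ''
        return '', ''
```

The test can then assert on `fake.cmdline` after the code under test runs, exactly as `test_disconnect_volume` above checks the last `vgc-cluster` invocation.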

@@ -1,230 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os
import tempfile
from os_brick import exception
from os_brick.initiator.connectors import huawei
from os_brick.tests.initiator import test_connector
class HuaweiStorHyperConnectorTestCase(test_connector.ConnectorTestCase):
"""Test cases for StorHyper initiator class."""
attached = False
def setUp(self):
super(HuaweiStorHyperConnectorTestCase, self).setUp()
self.fake_sdscli_file = tempfile.mktemp()
self.addCleanup(os.remove, self.fake_sdscli_file)
newefile = open(self.fake_sdscli_file, 'w')
newefile.write('test')
newefile.close()
self.connector = huawei.HuaweiStorHyperConnector(
None, execute=self.fake_execute)
self.connector.cli_path = self.fake_sdscli_file
self.connector.iscliexist = True
self.connector_fail = huawei.HuaweiStorHyperConnector(
None, execute=self.fake_execute_fail)
self.connector_fail.cli_path = self.fake_sdscli_file
self.connector_fail.iscliexist = True
self.connector_nocli = huawei.HuaweiStorHyperConnector(
None, execute=self.fake_execute_fail)
self.connector_nocli.cli_path = self.fake_sdscli_file
self.connector_nocli.iscliexist = False
self.connection_properties = {
'access_mode': 'rw',
'qos_specs': None,
'volume_id': 'volume-b2911673-863c-4380-a5f2-e1729eecfe3f'
}
self.device_info = {'type': 'block',
'path': '/dev/vdxxx'}
HuaweiStorHyperConnectorTestCase.attached = False
def fake_execute(self, *cmd, **kwargs):
method = cmd[2]
self.cmds.append(" ".join(cmd))
if 'attach' == method:
HuaweiStorHyperConnectorTestCase.attached = True
return 'ret_code=0', None
if 'querydev' == method:
if HuaweiStorHyperConnectorTestCase.attached:
return 'ret_code=0\ndev_addr=/dev/vdxxx', None
else:
return 'ret_code=1\ndev_addr=/dev/vdxxx', None
if 'detach' == method:
HuaweiStorHyperConnectorTestCase.attached = False
return 'ret_code=0', None
def fake_execute_fail(self, *cmd, **kwargs):
method = cmd[2]
self.cmds.append(" ".join(cmd))
if 'attach' == method:
HuaweiStorHyperConnectorTestCase.attached = False
return 'ret_code=330151401', None
if 'querydev' == method:
if HuaweiStorHyperConnectorTestCase.attached:
return 'ret_code=0\ndev_addr=/dev/vdxxx', None
else:
return 'ret_code=1\ndev_addr=/dev/vdxxx', None
if 'detach' == method:
HuaweiStorHyperConnectorTestCase.attached = True
return 'ret_code=330155007', None
def test_get_connector_properties(self):
props = huawei.HuaweiStorHyperConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
def test_get_search_path(self):
actual = self.connector.get_search_path()
self.assertIsNone(actual)
@mock.patch.object(huawei.HuaweiStorHyperConnector,
'_query_attached_volume')
def test_get_volume_paths(self, mock_query_attached):
path = self.device_info['path']
mock_query_attached.return_value = {'ret_code': 0,
'dev_addr': path}
expected = [path]
actual = self.connector.get_volume_paths(self.connection_properties)
self.assertEqual(expected, actual)
def test_connect_volume(self):
"""Test the basic connect volume case."""
retval = self.connector.connect_volume(self.connection_properties)
self.assertEqual(self.device_info, retval)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
self.assertEqual(expected_commands, self.cmds)
def test_disconnect_volume(self):
"""Test the basic disconnect volume case."""
self.connector.connect_volume(self.connection_properties)
self.assertEqual(True, HuaweiStorHyperConnectorTestCase.attached)
self.connector.disconnect_volume(self.connection_properties,
self.device_info)
self.assertEqual(False, HuaweiStorHyperConnectorTestCase.attached)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c detach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
self.assertEqual(expected_commands, self.cmds)
def test_is_volume_connected(self):
"""Test if volume connected to host case."""
self.connector.connect_volume(self.connection_properties)
self.assertEqual(True, HuaweiStorHyperConnectorTestCase.attached)
is_connected = self.connector.is_volume_connected(
'volume-b2911673-863c-4380-a5f2-e1729eecfe3f')
self.assertEqual(HuaweiStorHyperConnectorTestCase.attached,
is_connected)
self.connector.disconnect_volume(self.connection_properties,
self.device_info)
self.assertEqual(False, HuaweiStorHyperConnectorTestCase.attached)
is_connected = self.connector.is_volume_connected(
'volume-b2911673-863c-4380-a5f2-e1729eecfe3f')
self.assertEqual(HuaweiStorHyperConnectorTestCase.attached,
is_connected)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c detach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
self.assertEqual(expected_commands, self.cmds)
def test__analyze_output(self):
cliout = 'ret_code=0\ndev_addr=/dev/vdxxx\nret_desc="success"'
analyze_result = {'dev_addr': '/dev/vdxxx',
'ret_desc': '"success"',
'ret_code': '0'}
result = self.connector._analyze_output(cliout)
self.assertEqual(analyze_result, result)
def test_connect_volume_fail(self):
"""Test the fail connect volume case."""
self.assertRaises(exception.BrickException,
self.connector_fail.connect_volume,
self.connection_properties)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
self.assertEqual(expected_commands, self.cmds)
def test_disconnect_volume_fail(self):
"""Test the fail disconnect volume case."""
self.connector.connect_volume(self.connection_properties)
self.assertEqual(True, HuaweiStorHyperConnectorTestCase.attached)
self.assertRaises(exception.BrickException,
self.connector_fail.disconnect_volume,
self.connection_properties,
self.device_info)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c detach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
self.assertEqual(expected_commands, self.cmds)
def test_connect_volume_nocli(self):
"""Test the fail connect volume case."""
self.assertRaises(exception.BrickException,
self.connector_nocli.connect_volume,
self.connection_properties)
def test_disconnect_volume_nocli(self):
"""Test the fail disconnect volume case."""
self.connector.connect_volume(self.connection_properties)
self.assertEqual(True, HuaweiStorHyperConnectorTestCase.attached)
self.assertRaises(exception.BrickException,
self.connector_nocli.disconnect_volume,
self.connection_properties,
self.device_info)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
self.assertEqual(expected_commands, self.cmds)
def test_extend_volume(self):
self.assertRaises(NotImplementedError,
self.connector.extend_volume,
self.connection_properties)
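`test__analyze_output` above fixes the expected parse of the sdscli `key=value` reply format. A minimal parser consistent with that expectation (the function name is illustrative):

```python
def analyze_cli_output(cliout):
    """Parse 'key=value' lines from CLI output into a dict, matching
    what test__analyze_output above expects back. Lines without an
    '=' separator are ignored; values keep any surrounding quotes."""
    result = {}
    for line in cliout.strip().splitlines():
        key, sep, value = line.partition('=')
        if sep:
            result[key] = value
    return result
```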

File diff suppressed because it is too large
@@ -1,58 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_brick.initiator.connectors import local
from os_brick.tests.initiator import test_connector
class LocalConnectorTestCase(test_connector.ConnectorTestCase):
def setUp(self):
super(LocalConnectorTestCase, self).setUp()
self.connection_properties = {'name': 'foo',
'device_path': '/tmp/bar'}
self.connector = local.LocalConnector(None)
def test_get_connector_properties(self):
props = local.LocalConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
def test_get_search_path(self):
actual = self.connector.get_search_path()
self.assertIsNone(actual)
def test_get_volume_paths(self):
expected = [self.connection_properties['device_path']]
actual = self.connector.get_volume_paths(
self.connection_properties)
self.assertEqual(expected, actual)
def test_connect_volume(self):
cprops = self.connection_properties
dev_info = self.connector.connect_volume(cprops)
self.assertEqual(dev_info['type'], 'local')
self.assertEqual(dev_info['path'], cprops['device_path'])
def test_connect_volume_with_invalid_connection_data(self):
cprops = {}
self.assertRaises(ValueError,
self.connector.connect_volume, cprops)
def test_extend_volume(self):
self.assertRaises(NotImplementedError,
self.connector.extend_volume,
self.connection_properties)
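Taken together, these tests pin down a small contract for a local connector: no search path, volume paths echo the caller's `device_path`, and missing `device_path` is a `ValueError`. A sketch of an object satisfying that contract (the class name is illustrative, not os-brick's real implementation):

```python
class LocalConnectorSketch:
    """Minimal object satisfying the contract LocalConnectorTestCase
    exercises above."""

    def get_search_path(self):
        # Local volumes have no device search directory.
        return None

    def get_volume_paths(self, connection_properties):
        return [connection_properties['device_path']]

    def connect_volume(self, connection_properties):
        if 'device_path' not in connection_properties:
            raise ValueError('Invalid connection_properties: '
                             'device_path missing')
        return {'type': 'local',
                'path': connection_properties['device_path']}
```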

@@ -1,269 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ddt
import mock
from os_brick import exception
from os_brick.initiator.connectors import rbd
from os_brick.initiator import linuxrbd
from os_brick.privileged import rootwrap as priv_rootwrap
from os_brick.tests.initiator import test_connector
from os_brick import utils
@ddt.ddt
class RBDConnectorTestCase(test_connector.ConnectorTestCase):
def setUp(self):
super(RBDConnectorTestCase, self).setUp()
self.user = 'fake_user'
self.pool = 'fake_pool'
self.volume = 'fake_volume'
self.clustername = 'fake_ceph'
self.hosts = ['192.168.10.2']
self.ports = ['6789']
self.keyring = "[client.cinder]\n key = test\n"
self.connection_properties = {
'auth_username': self.user,
'name': '%s/%s' % (self.pool, self.volume),
'cluster_name': self.clustername,
'hosts': self.hosts,
'ports': self.ports,
'keyring': self.keyring,
}
def test_get_search_path(self):
rbd_connector = rbd.RBDConnector(None)
path = rbd_connector.get_search_path()
self.assertIsNone(path)
@mock.patch('os_brick.initiator.linuxrbd.rbd')
@mock.patch('os_brick.initiator.linuxrbd.rados')
def test_get_volume_paths(self, mock_rados, mock_rbd):
rbd_connector = rbd.RBDConnector(None)
expected = []
actual = rbd_connector.get_volume_paths(self.connection_properties)
self.assertEqual(expected, actual)
def test_get_connector_properties(self):
props = rbd.RBDConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {'do_local_attach': False}
self.assertEqual(expected_props, props)
@mock.patch('os_brick.initiator.linuxrbd.rbd')
@mock.patch('os_brick.initiator.linuxrbd.rados')
@mock.patch.object(rbd.RBDConnector, '_create_ceph_conf')
@mock.patch('os.path.exists')
def test_connect_volume(self, mock_path, mock_conf, mock_rados, mock_rbd):
"""Test the connect volume case."""
rbd_connector = rbd.RBDConnector(None)
mock_path.return_value = False
mock_conf.return_value = "/tmp/fake_dir/fake_ceph.conf"
device_info = rbd_connector.connect_volume(self.connection_properties)
# Ensure rados is instantiated correctly
mock_rados.Rados.assert_called_once_with(
clustername=self.clustername,
rados_id=utils.convert_str(self.user),
conffile='/tmp/fake_dir/fake_ceph.conf')
# Ensure correct calls to connect to cluster
self.assertEqual(1, mock_rados.Rados.return_value.connect.call_count)
mock_rados.Rados.return_value.open_ioctx.assert_called_once_with(
utils.convert_str(self.pool))
# Ensure rbd image is instantiated correctly
mock_rbd.Image.assert_called_once_with(
mock_rados.Rados.return_value.open_ioctx.return_value,
utils.convert_str(self.volume), read_only=False,
snapshot=None)
# Ensure expected object is returned correctly
self.assertIsInstance(device_info['path'],
linuxrbd.RBDVolumeIOWrapper)
@mock.patch('os_brick.initiator.linuxrbd.rbd')
@mock.patch('os_brick.initiator.linuxrbd.rados')
@mock.patch.object(rbd.RBDConnector, '_create_ceph_conf')
@mock.patch('os.path.exists')
def test_provided_keyring(self, mock_path, mock_conf, mock_rados,
mock_rbd):
conn = rbd.RBDConnector(None)
mock_path.return_value = False
mock_conf.return_value = "/tmp/fake_dir/fake_ceph.conf"
self.connection_properties['keyring'] = self.keyring
conn.connect_volume(self.connection_properties)
mock_conf.assert_called_once_with(self.hosts, self.ports,
self.clustername, self.user,
self.keyring)
def test_keyring_is_none(self):
conn = rbd.RBDConnector(None)
keyring = None
keyring_data = "[client.cinder]\n key = test\n"
mockopen = mock.mock_open(read_data=keyring_data)
mockopen.return_value.__exit__ = mock.Mock()
with mock.patch('os_brick.initiator.connectors.rbd.open', mockopen,
create=True):
self.assertEqual(
conn._check_or_get_keyring_contents(keyring, 'cluster',
'user'), keyring_data)
def test_keyring_raise_error(self):
conn = rbd.RBDConnector(None)
keyring = None
mockopen = mock.mock_open()
mockopen.return_value = ""
with mock.patch('os_brick.initiator.connectors.rbd.open', mockopen,
create=True) as mock_keyring_file:
mock_keyring_file.side_effect = IOError
self.assertRaises(exception.BrickException,
conn._check_or_get_keyring_contents, keyring,
'cluster', 'user')
@ddt.data((['192.168.1.1', '192.168.1.2'],
['192.168.1.1', '192.168.1.2']),
(['3ffe:1900:4545:3:200:f8ff:fe21:67cf',
'fe80:0:0:0:200:f8ff:fe21:67cf'],
['[3ffe:1900:4545:3:200:f8ff:fe21:67cf]',
'[fe80:0:0:0:200:f8ff:fe21:67cf]']),
(['foobar', 'fizzbuzz'], ['foobar', 'fizzbuzz']),
(['192.168.1.1',
'3ffe:1900:4545:3:200:f8ff:fe21:67cf',
'hello, world!'],
['192.168.1.1',
'[3ffe:1900:4545:3:200:f8ff:fe21:67cf]',
'hello, world!']))
@ddt.unpack
def test_sanitize_mon_host(self, hosts_in, hosts_out):
conn = rbd.RBDConnector(None)
self.assertEqual(hosts_out, conn._sanitize_mon_hosts(hosts_in))
@mock.patch('os_brick.initiator.connectors.rbd.tempfile.mkstemp')
def test_create_ceph_conf(self, mock_mkstemp):
mockopen = mock.mock_open()
fd = mock.sentinel.fd
tmpfile = mock.sentinel.tmpfile
mock_mkstemp.return_value = (fd, tmpfile)
with mock.patch('os.fdopen', mockopen, create=True):
rbd_connector = rbd.RBDConnector(None)
conf_path = rbd_connector._create_ceph_conf(
self.hosts, self.ports, self.clustername, self.user,
self.keyring)
self.assertEqual(conf_path, tmpfile)
mock_mkstemp.assert_called_once_with(prefix='brickrbd_')
@mock.patch.object(priv_rootwrap, 'execute', return_value=None)
def test_connect_local_volume(self, mock_execute):
rbd_connector = rbd.RBDConnector(None, do_local_attach=True)
conn = {'name': 'pool/image',
'auth_username': 'fake_user',
'hosts': ['192.168.10.2'],
'ports': ['6789']}
device_info = rbd_connector.connect_volume(conn)
execute_call1 = mock.call('which', 'rbd')
cmd = ['rbd', 'map', 'image', '--pool', 'pool', '--id', 'fake_user',
'--mon_host', '192.168.10.2:6789']
execute_call2 = mock.call(*cmd, root_helper=None, run_as_root=True)
mock_execute.assert_has_calls([execute_call1, execute_call2])
expected_info = {'path': '/dev/rbd/pool/image',
'type': 'block'}
self.assertEqual(expected_info, device_info)
@mock.patch.object(priv_rootwrap, 'execute', return_value=None)
@mock.patch('os.path.exists')
@mock.patch('os.path.islink')
@mock.patch('os.path.realpath')
def test_connect_local_volume_dev_exist(self, mock_realpath, mock_islink,
mock_exists, mock_execute):
rbd_connector = rbd.RBDConnector(None, do_local_attach=True)
conn = {'name': 'pool/image',
'auth_username': 'fake_user',
'hosts': ['192.168.10.2'],
'ports': ['6789']}
mock_realpath.return_value = '/dev/rbd0'
mock_islink.return_value = True
mock_exists.return_value = True
device_info = rbd_connector.connect_volume(conn)
execute_call1 = mock.call('which', 'rbd')
cmd = ['rbd', 'map', 'image', '--pool', 'pool', '--id', 'fake_user',
'--mon_host', '192.168.10.2:6789']
execute_call2 = mock.call(*cmd, root_helper=None, run_as_root=True)
mock_execute.assert_has_calls([execute_call1])
self.assertFalse(execute_call2 in mock_execute.mock_calls)
expected_info = {'path': '/dev/rbd/pool/image',
'type': 'block'}
self.assertEqual(expected_info, device_info)
@mock.patch.object(priv_rootwrap, 'execute', return_value=None)
def test_connect_local_volume_without_mons(self, mock_execute):
rbd_connector = rbd.RBDConnector(None, do_local_attach=True)
conn = {'name': 'pool/image',
'auth_username': 'fake_user'}
device_info = rbd_connector.connect_volume(conn)
execute_call1 = mock.call('which', 'rbd')
cmd = ['rbd', 'map', 'image', '--pool', 'pool', '--id', 'fake_user']
execute_call2 = mock.call(*cmd, root_helper=None, run_as_root=True)
mock_execute.assert_has_calls([execute_call1, execute_call2])
expected_info = {'path': '/dev/rbd/pool/image',
'type': 'block'}
self.assertEqual(expected_info, device_info)
@mock.patch.object(priv_rootwrap, 'execute', return_value=None)
def test_connect_local_volume_without_auth(self, mock_execute):
rbd_connector = rbd.RBDConnector(None, do_local_attach=True)
conn = {'name': 'pool/image',
'hosts': ['192.168.10.2'],
'ports': ['6789']}
self.assertRaises(exception.BrickException,
rbd_connector.connect_volume,
conn)
@mock.patch('os_brick.initiator.linuxrbd.rbd')
@mock.patch('os_brick.initiator.linuxrbd.rados')
@mock.patch.object(linuxrbd.RBDVolumeIOWrapper, 'close')
def test_disconnect_volume(self, volume_close, mock_rados, mock_rbd):
"""Test the disconnect volume case."""
rbd_connector = rbd.RBDConnector(None)
device_info = rbd_connector.connect_volume(self.connection_properties)
rbd_connector.disconnect_volume(
self.connection_properties, device_info)
self.assertEqual(1, volume_close.call_count)
@mock.patch.object(priv_rootwrap, 'execute', return_value=None)
def test_disconnect_local_volume(self, mock_execute):
rbd_connector = rbd.RBDConnector(None, do_local_attach=True)
conn = {'name': 'pool/image',
'auth_username': 'fake_user',
'hosts': ['192.168.10.2'],
'ports': ['6789']}
rbd_connector.disconnect_volume(conn, None)
dev_name = '/dev/rbd/pool/image'
cmd = ['rbd', 'unmap', dev_name, '--id', 'fake_user',
'--mon_host', '192.168.10.2:6789']
mock_execute.assert_called_once_with(*cmd, root_helper=None,
run_as_root=True)
def test_extend_volume(self):
rbd_connector = rbd.RBDConnector(None)
self.assertRaises(NotImplementedError,
rbd_connector.extend_volume,
self.connection_properties)
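The ddt cases for `test_sanitize_mon_host` show the expected behavior: bare IPv6 literals get bracketed so a later `:port` suffix stays unambiguous, while everything else passes through unchanged. A sketch consistent with those cases, where a simple colon-count check stands in for real address parsing:

```python
def sanitize_mon_hosts(hosts):
    """Bracket bare IPv6 literals ('a:b:...:f' -> '[a:b:...:f]') so a
    ':port' suffix can be appended unambiguously; IPv4 addresses,
    hostnames, and already-bracketed entries pass through as-is."""
    sanitized = []
    for host in hosts:
        if host.count(':') > 1 and not host.startswith('['):
            host = '[%s]' % host
        sanitized.append(host)
    return sanitized
```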

@@ -1,77 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from os_brick.initiator.connectors import remotefs
from os_brick.remotefs import remotefs as remotefs_client
from os_brick.tests.initiator import test_connector
class RemoteFsConnectorTestCase(test_connector.ConnectorTestCase):
"""Test cases for Remote FS initiator class."""
TEST_DEV = '172.18.194.100:/var/nfs'
TEST_PATH = '/mnt/test/df0808229363aad55c27da50c38d6328'
TEST_BASE = '/mnt/test'
TEST_NAME = '9c592d52-ce47-4263-8c21-4ecf3c029cdb'
def setUp(self):
super(RemoteFsConnectorTestCase, self).setUp()
self.connection_properties = {
'export': self.TEST_DEV,
'name': self.TEST_NAME}
self.connector = remotefs.RemoteFsConnector(
'nfs', root_helper='sudo',
nfs_mount_point_base=self.TEST_BASE,
nfs_mount_options='vers=3')
@mock.patch('os_brick.remotefs.remotefs.ScalityRemoteFsClient')
def test_init_with_scality(self, mock_scality_remotefs_client):
remotefs.RemoteFsConnector('scality', root_helper='sudo')
self.assertEqual(1, mock_scality_remotefs_client.call_count)
def test_get_connector_properties(self):
props = remotefs.RemoteFsConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
def test_get_search_path(self):
expected = self.TEST_BASE
actual = self.connector.get_search_path()
self.assertEqual(expected, actual)
@mock.patch.object(remotefs_client.RemoteFsClient, 'mount')
def test_get_volume_paths(self, mock_mount):
path = ("%(path)s/%(name)s" % {'path': self.TEST_PATH,
'name': self.TEST_NAME})
expected = [path]
actual = self.connector.get_volume_paths(self.connection_properties)
self.assertEqual(expected, actual)
@mock.patch.object(remotefs_client.RemoteFsClient, 'mount')
@mock.patch.object(remotefs_client.RemoteFsClient, 'get_mount_point',
return_value="something")
def test_connect_volume(self, mount_point_mock, mount_mock):
"""Test the basic connect volume case."""
self.connector.connect_volume(self.connection_properties)
def test_disconnect_volume(self):
"""Nothing should happen here -- make sure it doesn't blow up."""
self.connector.disconnect_volume(self.connection_properties, {})
def test_extend_volume(self):
self.assertRaises(NotImplementedError,
self.connector.extend_volume,
self.connection_properties)
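The `TEST_PATH` fixture above suggests per-export mount points named by a hash of the export string under the mount-point base. A sketch of that path construction; the md5 hexdigest is an assumption inferred from the 32-hex-character fixture value, not confirmed by this file:

```python
import hashlib
import os


def volume_path(mount_point_base, export, name):
    """Build <base>/<digest-of-export>/<name>, the layout the remotefs
    fixtures above imply. The md5 choice is an assumption."""
    digest = hashlib.md5(export.encode('utf-8')).hexdigest()
    return os.path.join(mount_point_base, digest, name)
```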

@@ -1,273 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import mock
import os
import requests
import six
from oslo_concurrency import processutils as putils
from os_brick import exception
from os_brick.initiator.connectors import scaleio
from os_brick.tests.initiator import test_connector
class ScaleIOConnectorTestCase(test_connector.ConnectorTestCase):
"""Test cases for ScaleIO connector."""
# Fake volume information
vol = {
'id': 'vol1',
'name': 'test_volume',
'provider_id': 'vol1'
}
# Fake SDC GUID
fake_guid = 'FAKE_GUID'
def setUp(self):
super(ScaleIOConnectorTestCase, self).setUp()
self.fake_connection_properties = {
'hostIP': test_connector.MY_IP,
'serverIP': test_connector.MY_IP,
'scaleIO_volname': self.vol['name'],
'scaleIO_volume_id': self.vol['provider_id'],
'serverPort': 443,
'serverUsername': 'test',
'serverPassword': 'fake',
'serverToken': 'fake_token',
'iopsLimit': None,
'bandwidthLimit': None
}
# Formatting string for REST API calls
self.action_format = "instances/Volume::{}/action/{{}}".format(
self.vol['id'])
self.get_volume_api = 'types/Volume/instances/getByName::{}'.format(
self.vol['name'])
# Map of REST API calls to responses
self.mock_calls = {
self.get_volume_api:
self.MockHTTPSResponse(json.dumps(self.vol['id'])),
self.action_format.format('addMappedSdc'):
self.MockHTTPSResponse(''),
self.action_format.format('setMappedSdcLimits'):
self.MockHTTPSResponse(''),
self.action_format.format('removeMappedSdc'):
self.MockHTTPSResponse(''),
}
# Default error REST response
self.error_404 = self.MockHTTPSResponse(content=dict(
errorCode=0,
message='HTTP 404',
), status_code=404)
# Patch the request and os calls to fake versions
self.mock_object(requests, 'get', self.handle_scaleio_request)
self.mock_object(requests, 'post', self.handle_scaleio_request)
self.mock_object(os.path, 'isdir', return_value=True)
self.mock_object(os, 'listdir',
return_value=["emc-vol-{}".format(self.vol['id'])])
# The actual ScaleIO connector
self.connector = scaleio.ScaleIOConnector(
'sudo', execute=self.fake_execute)
class MockHTTPSResponse(requests.Response):
"""Mock HTTP Response
Defines the https replies from the mocked calls to do_request()
"""
def __init__(self, content, status_code=200):
super(ScaleIOConnectorTestCase.MockHTTPSResponse,
self).__init__()
self._content = content
self.encoding = 'UTF-8'
self.status_code = status_code
def json(self, **kwargs):
if isinstance(self._content, six.string_types):
return super(ScaleIOConnectorTestCase.MockHTTPSResponse,
self).json(**kwargs)
return self._content
@property
def text(self):
if not isinstance(self._content, six.string_types):
return json.dumps(self._content)
self._content = self._content.encode('utf-8')
return super(ScaleIOConnectorTestCase.MockHTTPSResponse,
self).text
def fake_execute(self, *cmd, **kwargs):
"""Fakes the rootwrap call"""
return self.fake_guid, None
def fake_missing_execute(self, *cmd, **kwargs):
"""Error when trying to call rootwrap drv_cfg"""
raise putils.ProcessExecutionError("Test missing drv_cfg.")
def handle_scaleio_request(self, url, *args, **kwargs):
"""Fake REST server"""
api_call = url.split(':', 2)[2].split('/', 1)[1].replace('api/', '')
if 'setMappedSdcLimits' in api_call:
self.assertNotIn("iops_limit", kwargs['data'])
if "iopsLimit" not in kwargs['data']:
self.assertIn("bandwidthLimitInKbps",
kwargs['data'])
elif "bandwidthLimitInKbps" not in kwargs['data']:
self.assertIn("iopsLimit", kwargs['data'])
else:
self.assertIn("bandwidthLimitInKbps",
kwargs['data'])
self.assertIn("iopsLimit", kwargs['data'])
try:
return self.mock_calls[api_call]
except KeyError:
return self.error_404
def test_get_search_path(self):
expected = "/dev/disk/by-id"
actual = self.connector.get_search_path()
self.assertEqual(expected, actual)
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(scaleio.ScaleIOConnector, '_wait_for_volume_path')
def test_get_volume_paths(self, mock_wait_for_path, mock_exists):
mock_wait_for_path.return_value = "emc-vol-vol1"
expected = ['/dev/disk/by-id/emc-vol-vol1']
actual = self.connector.get_volume_paths(
self.fake_connection_properties)
self.assertEqual(expected, actual)
def test_get_connector_properties(self):
props = scaleio.ScaleIOConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
def test_connect_volume(self):
"""Successful connect to volume"""
self.connector.connect_volume(self.fake_connection_properties)
def test_connect_with_bandwidth_limit(self):
"""Successful connect to volume with bandwidth limit"""
self.fake_connection_properties['bandwidthLimit'] = '500'
self.test_connect_volume()
def test_connect_with_iops_limit(self):
"""Successful connect to volume with iops limit"""
self.fake_connection_properties['iopsLimit'] = '80'
self.test_connect_volume()
def test_connect_with_iops_and_bandwidth_limits(self):
"""Successful connect with iops and bandwidth limits"""
self.fake_connection_properties['bandwidthLimit'] = '500'
self.fake_connection_properties['iopsLimit'] = '80'
self.test_connect_volume()
def test_disconnect_volume(self):
"""Successful disconnect from volume"""
self.connector.disconnect_volume(self.fake_connection_properties, None)
def test_error_id(self):
"""Fail to connect with bad volume name"""
self.fake_connection_properties['scaleIO_volume_id'] = 'bad_id'
self.mock_calls[self.get_volume_api] = self.MockHTTPSResponse(
dict(errorCode='404', message='Test volume not found'), 404)
self.assertRaises(exception.BrickException, self.test_connect_volume)
def test_error_no_volume_id(self):
"""Faile to connect with no volume id"""
self.fake_connection_properties['scaleIO_volume_id'] = None
self.mock_calls[self.get_volume_api] = self.MockHTTPSResponse(
'null', 200)
self.assertRaises(exception.BrickException, self.test_connect_volume)
def test_error_bad_login(self):
"""Fail to connect with bad authentication"""
self.mock_calls[self.get_volume_api] = self.MockHTTPSResponse(
'null', 401)
self.mock_calls['login'] = self.MockHTTPSResponse('null', 401)
self.mock_calls[self.action_format.format(
'addMappedSdc')] = self.MockHTTPSResponse(
dict(errorCode=401, message='bad login'), 401)
self.assertRaises(exception.BrickException, self.test_connect_volume)
def test_error_bad_drv_cfg(self):
"""Fail to connect with missing rootwrap executable"""
self.connector.set_execute(self.fake_missing_execute)
self.assertRaises(exception.BrickException, self.test_connect_volume)
def test_error_map_volume(self):
"""Fail to connect with REST API failure"""
self.mock_calls[self.action_format.format(
'addMappedSdc')] = self.MockHTTPSResponse(
dict(errorCode=self.connector.VOLUME_NOT_MAPPED_ERROR,
message='Test error map volume'), 500)
self.assertRaises(exception.BrickException, self.test_connect_volume)
@mock.patch('time.sleep')
def test_error_path_not_found(self, sleep_mock):
"""Timeout waiting for volume to map to local file system"""
self.mock_object(os, 'listdir', return_value=["emc-vol-no-volume"])
self.assertRaises(exception.BrickException, self.test_connect_volume)
self.assertTrue(sleep_mock.called)
def test_map_volume_already_mapped(self):
"""Ignore REST API failure for volume already mapped"""
self.mock_calls[self.action_format.format(
'addMappedSdc')] = self.MockHTTPSResponse(
dict(errorCode=self.connector.VOLUME_ALREADY_MAPPED_ERROR,
message='Test error map volume'), 500)
self.test_connect_volume()
def test_error_disconnect_volume(self):
"""Fail to disconnect with REST API failure"""
self.mock_calls[self.action_format.format(
'removeMappedSdc')] = self.MockHTTPSResponse(
dict(errorCode=self.connector.VOLUME_ALREADY_MAPPED_ERROR,
message='Test error map volume'), 500)
self.assertRaises(exception.BrickException,
self.test_disconnect_volume)
def test_disconnect_volume_not_mapped(self):
"""Ignore REST API failure for volume not mapped"""
self.mock_calls[self.action_format.format(
'removeMappedSdc')] = self.MockHTTPSResponse(
dict(errorCode=self.connector.VOLUME_NOT_MAPPED_ERROR,
message='Test error map volume'), 500)
self.test_disconnect_volume()
def test_extend_volume(self):
self.assertRaises(NotImplementedError,
self.connector.extend_volume,
self.fake_connection_properties)


@ -1,87 +0,0 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from os_brick import exception
from os_brick.initiator.connectors import sheepdog
from os_brick.initiator import linuxsheepdog
from os_brick.tests.initiator import test_connector
class SheepdogConnectorTestCase(test_connector.ConnectorTestCase):
def setUp(self):
super(SheepdogConnectorTestCase, self).setUp()
self.hosts = ['fake_hosts']
self.ports = ['fake_ports']
self.volume = 'fake_volume'
self.connection_properties = {
'hosts': self.hosts,
'name': self.volume,
'ports': self.ports,
}
def test_get_connector_properties(self):
props = sheepdog.SheepdogConnector.get_connector_properties(
'sudo', multipath=True, enforce_multipath=True)
expected_props = {}
self.assertEqual(expected_props, props)
def test_get_search_path(self):
sd_connector = sheepdog.SheepdogConnector(None)
path = sd_connector.get_search_path()
self.assertIsNone(path)
def test_get_volume_paths(self):
sd_connector = sheepdog.SheepdogConnector(None)
expected = []
actual = sd_connector.get_volume_paths(self.connection_properties)
self.assertEqual(expected, actual)
def test_connect_volume(self):
"""Test the connect volume case."""
sd_connector = sheepdog.SheepdogConnector(None)
device_info = sd_connector.connect_volume(self.connection_properties)
# Ensure expected object is returned correctly
self.assertIsInstance(device_info['path'],
linuxsheepdog.SheepdogVolumeIOWrapper)
@mock.patch.object(linuxsheepdog.SheepdogVolumeIOWrapper, 'close')
def test_disconnect_volume(self, volume_close):
"""Test the disconnect volume case."""
sd_connector = sheepdog.SheepdogConnector(None)
device_info = sd_connector.connect_volume(self.connection_properties)
sd_connector.disconnect_volume(self.connection_properties, device_info)
self.assertEqual(1, volume_close.call_count)
def test_disconnect_volume_with_invalid_handle(self):
"""Test the disconnect volume case with invalid handle."""
sd_connector = sheepdog.SheepdogConnector(None)
device_info = {'path': 'fake_handle'}
self.assertRaises(exception.InvalidIOHandleObject,
sd_connector.disconnect_volume,
self.connection_properties,
device_info)
def test_extend_volume(self):
sd_connector = sheepdog.SheepdogConnector(None)
self.assertRaises(NotImplementedError,
sd_connector.extend_volume,
self.connection_properties)

Some files were not shown because too many files have changed in this diff.