Retire fuxi
This repo is not used anymore; retire it following
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

Depends-On: https://review.openstack.org/602574
Change-Id: I4f7c5a189d894270c7cdd76d62b060169031a35a
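The retirement procedure referenced above amounts to removing all project content and leaving only a retirement notice, which is exactly what this commit does. A minimal sketch of those steps on a throwaway scratch repository (the file names, identity, and commit messages here are illustrative, not the exact commands from the infra manual):

```shell
# Sketch of the retirement steps on a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email ci@example.com
git config user.name ci

# Simulate some pre-existing project content.
echo "code" > fuxi.py
git add . && git commit -qm "project content"

# Retire: delete everything, leave only the retirement README.
git rm -q -r .
cat > README.rst <<'EOF'
This project is no longer maintained.
EOF
git add README.rst
git commit -qm "Retire fuxi"

git ls-files   # only README.rst remains tracked
```

After the final commit the tree contains a single file, mirroring the deletions shown in the diff below.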
This commit is contained in:
parent 8e720cfed8
commit 0f5bfeb879
@@ -1,63 +0,0 @@
*.py[cod]

# C extensions
*.so

# Packages
*.egg*
dist
build
eggs
parts
bin
var
sdist
develop-eggs
lib
lib64
cover

# Installer logs
pip-log.txt

# Unit test / coverage reports
nosetests.xml

# Translations
*.mo

# Complexity
output/*.html
output/*/index.html

# Sphinx
doc/build

# pbr generates these
AUTHORS
ChangeLog

# Editors
*~
*.sw?

# Hidden directories
/.*
!/.coveragerc
!/.gitignore
!/.gitreview
!/.mailmap
!/.pylintrc
!/.testr.conf

contrib/vagrant/.vagrant

# Configuration files
etc/fuxi.conf
etc/fuxi.conf.sample

# Ignore user specific local.conf settings for vagrant
contrib/vagrant/user_local.conf

# Files created by releasenotes build
releasenotes/build
@@ -1,4 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./fuxi/tests/unit} $LISTOPT $IDOPTION | cat
test_id_option=--load-list $IDFILE
test_list_option=--list
@@ -1,17 +0,0 @@
If you would like to contribute to the development of OpenStack, you must
follow the steps on this page:

   https://docs.openstack.org/infra/manual/developers.html

If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow
section of this documentation to learn how changes to OpenStack should be
submitted for review via the Gerrit tool:

   https://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/fuxi
@@ -1,4 +0,0 @@
fuxi Style Commandments
===============================================

Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
LICENSE
@@ -1,176 +0,0 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.
@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview

global-exclude *.pyc
README.rst
@@ -1,31 +1,10 @@
========================
Team and repository tags
========================
This project is no longer maintained.

.. image:: https://governance.openstack.org/badges/fuxi.svg
    :target: https://governance.openstack.org/reference/tags/index.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

.. Change things from this point on

===============================
fuxi
===============================

Enable Docker containers to use Cinder volumes and Manila shares

Fuxi focuses on enabling Docker containers to use Cinder volumes and Manila
shares, so that Docker volumes can reuse the advanced features and numerous
vendor drivers in Cinder and Manila. With Fuxi, Cinder and Manila can be used
as the unified persistent storage provider for virtual machines, bare metal
and Docker containers.

* Free software: Apache license
* Documentation: https://docs.openstack.org/fuxi/latest/
* Source: https://git.openstack.org/cgit/openstack/fuxi
* Bugs: https://bugs.launchpad.net/fuxi
* Blueprints: https://blueprints.launchpad.net/fuxi

Features
--------

* TODO
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
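The retirement notice above tells readers to recover the pre-retirement tree with "git checkout HEAD^1". For anyone unfamiliar with that revision syntax, a short demonstration on a scratch repository (for fuxi itself you would first clone the repository URL given in the README; the names below are illustrative):

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q .
git config user.email dev@example.com
git config user.name dev

# One commit with content, then a "retirement" commit that deletes it.
echo "original" > fuxi.py
git add . && git commit -qm "content"
git rm -q fuxi.py && git commit -qm "retire"

# HEAD^1 is the first parent of the current commit, i.e. the last
# commit before retirement; checking it out restores the old tree.
git checkout -q 'HEAD^1'
cat fuxi.py   # prints: original
```

The quotes around `HEAD^1` guard against shells (such as zsh) that treat `^` specially.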
@@ -1,24 +0,0 @@
[[local|localrc]]

LOGFILE=stack.sh.log
LOG_COLOR=False

DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
ADMIN_PASSWORD=pass

# Install kuryr git master source code by default.
# If you want to use stable kuryr lib, please comment out this line.
LIBS_FROM_GIT=kuryr

# Manila provider options
MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='snapshot_support=True create_share_from_snapshot_support=True revert_to_snapshot_support=True mount_snapshot_support=True'
SHARE_DRIVER=manila.share.drivers.lvm.LVMShareDriver
MANILA_OPTGROUP_generic1_driver_handles_share_servers=False

FUXI_VOLUME_PROVIDERS=cinder,manila
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
enable_plugin fuxi https://git.openstack.org/openstack/fuxi
enable_plugin manila https://git.openstack.org/openstack/manila
@@ -1,111 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Save trace setting
XTRACE=$(set +o | grep xtrace)
set +o xtrace

function check_docker {
    if is_ubuntu; then
        dpkg -s docker-engine > /dev/null 2>&1
    else
        rpm -q docker-engine > /dev/null 2>&1 || rpm -q docker > /dev/null 2>&1
    fi
}

function create_fuxi_account {
    if is_service_enabled fuxi; then
        create_service_user "fuxi" "admin"
        get_or_create_service "fuxi" "fuxi" "Fuxi Service"
    fi
}

function configure_fuxi {
    sudo install -d -o $STACK_USER $FUXI_CONFIG_DIR

    (cd $FUXI_HOME && exec ./tools/generate_config_file_samples.sh)

    cp $FUXI_HOME/etc/fuxi.conf.sample $FUXI_CONFIG

    if is_service_enabled fuxi; then
        configure_auth_token_middleware $FUXI_CONFIG fuxi \
            $FUXI_AUTH_CACHE_DIR cinder
        configure_auth_token_middleware $FUXI_CONFIG fuxi \
            $FUXI_AUTH_CACHE_DIR manila

        iniset $FUXI_CONFIG DEFAULT fuxi_port 7879
        iniset $FUXI_CONFIG DEFAULT my_ip $HOST_IP
        iniset $FUXI_CONFIG DEFAULT volume_providers $FUXI_VOLUME_PROVIDERS
        iniset $FUXI_CONFIG DEFAULT volume_from fuxi
        iniset $FUXI_CONFIG DEFAULT default_volume_size 1
        iniset $FUXI_CONFIG DEFAULT volume_dir /fuxi/data
        iniset $FUXI_CONFIG DEFAULT threaded true
        iniset $FUXI_CONFIG DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL

        iniset $FUXI_CONFIG cinder volume_connector osbrick
        iniset $FUXI_CONFIG cinder multiattach false
        iniset $FUXI_CONFIG cinder fstype ext4
    fi

    write_uwsgi_config "$FUXI_UWSGI_CONF" "$FUXI_UWSGI" "" ":7879"
}


# main loop
if is_service_enabled fuxi; then

    if [[ "$1" == "stack" && "$2" == "install" ]]; then
        if use_library_from_git "kuryr"; then
            git_clone_by_name "kuryr"
            setup_dev_lib "kuryr"
        fi
        setup_develop $FUXI_HOME

    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then

        if [[ ! -d "${FUXI_ACTIVATOR_DIR}" ]]; then
            echo -n "${FUXI_ACTIVATOR_DIR} directory is missing. Creating it... "
            sudo mkdir -p ${FUXI_ACTIVATOR_DIR}
            echo "Done"
        fi

        if [[ ! -f "${FUXI_ACTIVATOR}" ]]; then
            echo -n "${FUXI_ACTIVATOR} is missing. Copying the default one... "
            sudo cp ${FUXI_DEFAULT_ACTIVATOR} ${FUXI_ACTIVATOR}
            echo "Done"
        fi

        create_fuxi_account
        configure_fuxi

        # In case iSCSI client is used
        sudo ln -s /lib/udev/scsi_id /usr/local/bin || true

        if [[ "$USE_PYTHON3" = "True" ]]; then
            # Switch off glance->swift communication as swift fails under py3.x
            iniset /etc/glance/glance-api.conf glance_store default_store file
        fi

    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
        run_process fuxi "$FUXI_BIN_DIR/uwsgi --ini $FUXI_UWSGI_CONF" "" "root"

    fi

    if [[ "$1" == "unstack" ]]; then
        stop_process fuxi
        remove_uwsgi_config "$FUXI_UWSGI_CONF" "$FUXI_UWSGI"
    fi
fi

# Restore xtrace
$XTRACE
@@ -1,29 +0,0 @@
FUXI_HOME=${FUXI_HOME:-$DEST/fuxi}
FUXI_ACTIVATOR_FILENAME=fuxi.spec
FUXI_DEFAULT_ACTIVATOR=${FUXI_HOME}/etc/${FUXI_ACTIVATOR_FILENAME}

# See libnetwork's plugin discovery mechanism:
# https://github.com/docker/docker/blob/c4d45b6a29a91f2fb5d7a51ac36572f2a9b295c6/docs/extend/plugin_api.md#plugin-discovery
FUXI_ACTIVATOR_DIR=${FUXI_ACTIVATOR_DIR:-/usr/lib/docker/plugins/fuxi}
FUXI_ACTIVATOR=${FUXI_ACTIVATOR_DIR}/${FUXI_ACTIVATOR_FILENAME}

FUXI_CONFIG_FILENAME=fuxi.conf
FUXI_DEFAULT_CONFIG=${FUXI_HOME}/etc/${FUXI_CONFIG_FILENAME}
FUXI_CONFIG_DIR=${FUXI_CONFIG_DIR:-/etc/fuxi}
FUXI_CONFIG=${FUXI_CONFIG_DIR}/${FUXI_CONFIG_FILENAME}
FUXI_AUTH_CACHE_DIR=${FUXI_AUTH_CACHE_DIR:-/var/cache/fuxi}

FUXI_DOCKER_ENGINE_PORT=${FUXI_DOCKER_ENGINE_PORT:-2375}
FUXI_VOLUME_PROVIDERS=${FUXI_VOLUME_PROVIDERS:-cinder,manila}

FUXI_BIN_DIR=$(get_python_exec_prefix)
FUXI_UWSGI=$FUXI_BIN_DIR/fuxi-server-wsgi
FUXI_UWSGI_CONF=$FUXI_CONFIG_DIR/fuxi-server-uwsgi.ini

DOCKER_CLUSTER_STORE=${DOCKER_CLUSTER_STORE:-etcd://$SERVICE_HOST:$ETCD_PORT}

GITREPO["kuryr"]=${KURYR_REPO:-${GIT_BASE}/openstack/kuryr.git}
GITBRANCH["kuryr"]=${KURYR_BRANCH:-master}
GITDIR["kuryr"]=$DEST/kuryr

enable_service fuxi etcd3 docker-engine
@@ -1,73 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'oslosphinx'
]

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'fuxi'
copyright = u'2013, OpenStack Foundation'

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]

# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'http://docs.python.org/': None}
@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst
@@ -1,92 +0,0 @@
..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.


Cinder provider
===============

The Cinder volume provider enables Fuxi to create volumes from OpenStack
Cinder and provide them to Docker containers.

Cinder provider configuration settings
--------------------------------------

The following parameters in the `cinder` group need to be set:

- `region_name` = <used to pick the URL from the service catalog>
- `volume_connector` = <the way to connect or disconnect a volume; default
  `osbrick`, chosen from [osbrick, openstack]>
- `fstype` = <the filesystem type for formatting the connected block device;
  default `ext4`>
- `multiattach` = <whether the volume may be attached to multiple hosts;
  default `False`>

.. note::

   * If you want to use keystone v3, set the authtoken configuration in the
     `cinder` group, or in another group marked with `auth_section`.

   * `multiattach` must be set properly according to the volume driver
     backends enabled in Cinder.


Supported connectors
--------------------
- osbrick: fuxi.connector.osbrickconnector.CinderConnector
- openstack: fuxi.connector.cloudconnector.openstack.CinderConnector

Connector osbrick
-----------------
The osbrick connector uses the OpenStack library `os-brick`_ to manage the
connection with the Cinder volume.
With this connector, `fuxi-server` can run on bare metal or in a VM.

Requirements
~~~~~~~~~~~~
- Install the client needed for connecting the Cinder volume,
  e.g. open-iscsi, nfs-common.
- When an iSCSI client is used and `fuxi-server` runs as the root user, a
  link must be made for the executable file `/lib/udev/scsi_id`::

      ln -s /lib/udev/scsi_id /usr/local/bin


Connector openstack
-------------------

This connector is only supported when running the containers inside OpenStack
Nova instances due to its usage of the OpenStack Nova API 'connect' and
'disconnect' verbs.

Usage
-----

An example of creating a volume from Cinder with the Docker volume command::

    docker volume create --driver fuxi --name <vol_name> \
        --opt size=1 \
        --opt fstype=ext4 \
        --opt multiattach=true

Use an existing Cinder volume::

    docker volume create --driver fuxi --name test_vol \
        --opt size=1 \
        --opt volume_id=<volume_id>

.. _os-brick: https://github.com/openstack/os-brick
@@ -1,28 +0,0 @@
..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.


Developer Guide
===============

Volume providers
----------------
.. toctree::
   :maxdepth: 2

   cinder_provider
   manila_provider

* :ref:`genindex`
* :ref:`search`
@@ -1,101 +0,0 @@
..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

Manila provider
===============

The Manila volume provider enables Fuxi to create shares from OpenStack
Manila and provide them to Docker containers.

Requirements
------------
- Install the related client, according to the driver backends that Manila
  uses, for mounting the remote filesystem.


Manila provider configuration settings
--------------------------------------

The following parameters in the `manila` group need to be set:

- `region_name` = <used to pick the URL from the service catalog>

The following configuration parameters are optional:

- `volume_connector` = osbrick
- `share_proto` = <default share protocol used to grant access>
- `proto_access_type_map` = <the mapping of protocol access
  that manila enabled>
- `access_to_for_cert` = <the value of key `access_to` when Manila uses
  `access_type` `CERT` to allow access for visitors>

.. note::

   If you want to use keystone v3, set the authtoken configuration in the
   `manila` group, or in another group marked with `auth_section`.


Usage
-----

Set `volume_providers = manila` in the `DEFAULT` group to use the Manila
volume provider.

For the different backends that Manila enables, different parameters must be
provided to create a volume (share) from Manila.

The following are some examples.

- If using the `generic` driver in Manila, `share_network` should be
  provided::

      docker volume create --driver fuxi --name <vol_name> \
          --opt share_network=<share_network_id>

- If using the `glusterfs` driver in Manila, `share_type` should be
  provided::

      docker volume create --driver fuxi --name <volume_name> \
          --opt share_type=<share_type_id>

- If using the `glusterfs_native` driver in Manila, `share_type` and
  `share_proto` need to be provided::

      docker volume create --driver fuxi --name <vol_name> \
          --opt share_type=<share_type_id> \
          --opt share_proto=glusterfs


Using an existing Manila share::

    docker volume create --driver fuxi --name <vol_name> \
        --opt volume_id=<share_id>

.. note::

   The parameter `--opt volume_provider=manila` is needed if you want to use
   the Manila volume provider when multiple volume providers are enabled and
   `manila` is not the first one.

References
----------

* `Manila share features support mapping`_

.. _Manila share features support mapping: https://docs.openstack.org/manila/latest/devref/share_back_ends_feature_support_mapping.html
@@ -1,50 +0,0 @@
==========================
Run fullstack test locally
==========================

This is a guide for developers who want to run fullstack tests on their local
machine.

Prerequisite
============

You need to deploy Fuxi in a devstack environment.

Clone devstack::

    # Create a root directory for devstack if needed
    sudo mkdir -p /opt/stack
    sudo chown $USER /opt/stack

    git clone https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack

We will run devstack with the minimal local.conf settings required. You can
use the sample local.conf as a quick start::

    git clone https://git.openstack.org/openstack/fuxi /opt/stack/fuxi
    cp /opt/stack/fuxi/devstack/local.conf.sample /opt/stack/devstack/local.conf

Run devstack::

    cd /opt/stack/devstack
    ./stack.sh

**NOTE:** This will take a while to set up the dev environment.

Preparation
===========

Navigate to the fuxi directory::

    cd /opt/stack/fuxi

Source the credentials of the 'fuxi' user::

    source /opt/stack/devstack/openrc fuxi service

Run the test
============

Run this command::

    tox -efullstack
@ -1,34 +0,0 @@
.. fuxi documentation master file, created by
   sphinx-quickstart on Tue Jul 9 22:26:36 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to fuxi's documentation!
================================

Contents:

.. toctree::
   :maxdepth: 2

   readme
   installation
   usage
   contributing
   reno
   fullstack-test

Developer Docs
==============

.. toctree::
   :maxdepth: 1

   devref/index

Indices and tables
==================

* :ref:`genindex`
* :ref:`search`
@ -1,133 +0,0 @@
============
Installation
============

Prerequisites
-------------

* Install the packages that may be required for deploying Fuxi or running
  `fuxi-server`.

  Ubuntu

  ::

    $ sudo apt-get update
    $ sudo apt-get install python-dev git libffi-dev libssl-dev gcc
    $ sudo apt-get install open-iscsi  # Install when using an iSCSI client to connect to remote volumes
    $ sudo apt-get install sysfsutils  # Install when the os_brick package and an iSCSI client are used

  CentOS

  ::

    $ sudo yum -y install python-devel git gcc openssl-devel
    $ sudo yum install iscsi-initiator-utils  # Install when using an iSCSI client to connect to remote volumes
    $ sudo yum install sysfsutils  # Install when the os_brick package and an iSCSI client are used

* Install requirements.

  ::

    $ curl https://bootstrap.pypa.io/get-pip.py | sudo python
    $ git clone https://github.com/openstack/fuxi.git
    $ cd fuxi
    $ sudo pip install -r requirements.txt


If `fuxi-server` runs as a non-root user, that user must be allowed to execute
some Linux commands without a password prompt.

Installing Fuxi
---------------

::

    $ sudo python setup.py install

Configuring Fuxi
----------------

After installing Fuxi, generate the sample config, etc/fuxi.conf.sample, by
running the following:

::

    $ ./tools/generate_config_file_samples.sh

Rename and copy the config file to the required path:

::

    $ sudo cp etc/fuxi.conf.sample /etc/fuxi/fuxi.conf

Then edit it.

* Default section

  ::

    [DEFAULT]
    my_ip = MY_IP  # The IP of the host that Fuxi is deployed on
    volume_providers = cinder  # The enabled volume providers for Fuxi

* Cinder section

  ::

    [cinder]
    region_name = REGION_NAME  # Region name of this node. This is used when picking the URL in the service catalog.
    auth_url = AUTH_URL  # For example, it can be http://127.0.0.1:35357/v3/
    username = ADMIN_USER
    user_domain_name = Default
    password = ADMIN_PASSWORD
    project_name = service
    project_domain_name = Default
    auth_type = password
    volume_connector = VOLUME_CONNECTOR  # The way to connect to the volume. For Cinder, this can be chosen from `[openstack, osbrick]`
    fstype = ext4  # Default filesystem type to format with, if not provided in the request

* Nova section

  ::

    [nova]
    region_name = REGION_NAME  # Region name of this node. This is used when picking the URL in the service catalog.
    auth_url = AUTH_URL  # For example, it can be http://127.0.0.1:35357/v3/
    username = ADMIN_USER
    user_domain_name = Default
    password = ADMIN_PASSWORD
    project_name = service
    project_domain_name = Default
    auth_type = password

Running Fuxi
------------

Fuxi can run with root or non-root user permissions. In order for
`fuxi-server` to work normally, some extra configuration is needed.

For the root user, when an iSCSI client is used

::

    $ ln -s /lib/udev/scsi_id /usr/local/bin

For a non-root user

::

    $ echo "fuxi ALL=(root) NOPASSWD: /usr/local/bin/fuxi-rootwrap /etc/fuxi/rootwrap.conf *" | sudo tee /etc/sudoers.d/fuxi-rootwrap

Here the user `fuxi` should be changed to the user that runs `fuxi-server` on
your host.

Start `fuxi-server`

::

    $ fuxi-server --config-file /etc/fuxi/fuxi.conf

Testing Fuxi
------------

::

    $ docker volume create --driver fuxi --name test_vol -o size=1 -o fstype=ext4 -o multiattach=true
    test_vol
    $ docker volume ls
    DRIVER              VOLUME NAME
    fuxi                test_vol
@ -1 +0,0 @@
.. include:: ../../README.rst
@ -1,59 +0,0 @@
Release Notes
=============

What is reno?
--------------

Fuxi uses `reno <https://docs.openstack.org/reno/latest/user/usage.html>`_ for
providing release notes in-tree. That means that a patch can include a *reno
file* or a series can have a follow-on change containing that file explaining
what the impact is.

A *reno file* is a YAML file written in the releasenotes/notes tree which is
generated using the reno tool this way:

.. code-block:: bash

    $ tox -e venv -- reno new <name-your-file>

where usually ``<name-your-file>`` can be ``bp-<blueprint_name>`` for a
blueprint or ``bug-XXXXXX`` for a bugfix.

Refer to the `reno documentation <https://docs.openstack.org/reno/latest/user/usage.html#editing-a-release-note>`_
for the full list of sections.


When a release note is needed
-----------------------------

A release note is required anytime a reno section is needed. Below are some
examples for each section. Any sections that would be blank should be left out
of the note file entirely. If no section is needed, then you know you don't
need to provide a release note :-)

* ``upgrade``
    * The patch has an `UpgradeImpact <http://docs.openstack.org/infra/manual/developers.html#peer-review>`_ tag
    * A DB change needs some deployer modification (like a migration)
    * A configuration option change (deprecation, removal or modified default)
    * some specific changes that have a `DocImpact <http://docs.openstack.org/infra/manual/developers.html#peer-review>`_ tag
      but require further action from a deployer perspective
    * any patch that requires an action from the deployer in general

* ``security``
    * If the patch fixes a known vulnerability

* ``features``
    * If the patch has an `APIImpact <http://docs.openstack.org/infra/manual/developers.html#peer-review>`_ tag

* ``critical``
    * Bugfixes categorized as Critical in Launchpad *impacting users*

* ``fixes``
    * No clear definition of such bugfixes. Hairy long-standing bugs with high
      importance that have been fixed are good candidates though.


Three sections are left intentionally unexplained (``prelude``, ``issues`` and
``other``). Those are targeted to be filled in close to the release time for
providing details about the soon-ish release. Don't use them unless you know
exactly what you are doing.
@ -1,7 +0,0 @@
========
Usage
========

To use fuxi in a project::

    import fuxi
@ -1,4 +0,0 @@
{
    "Name": "fuxi",
    "Addr": "http://127.0.0.1:7879"
}
@ -1 +0,0 @@
http://127.0.0.1:7879
@ -1,4 +0,0 @@
[DEFAULT]
output_file = etc/fuxi.conf.sample
wrap_width = 79
namespace = fuxi
@ -1,27 +0,0 @@
# Configuration for fuxi-rootwrap
# This file should be owned by (and only writable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writable by root!
filters_path=/etc/fuxi/rootwrap.d

# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to the system PATH environment variable.
# These directories MUST all be only writable by root!
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
@ -1,31 +0,0 @@
# fuxi-rootwrap command filters
# This file should be owned by (and only writable by) the root user

[Filters]
# os-brick library commands
# os_brick.privileged.run_as_root oslo.privsep context
# This line ties the superuser privs with the config files, context name,
# and (implicitly) the actual python code invoked.
privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.*
# The following and any cinder/brick/* entries should all be obsoleted
# by privsep, and may be removed once the os-brick version requirement
# is updated appropriately.
scsi_id: CommandFilter, /lib/udev/scsi_id, root
drbdadm: CommandFilter, drbdadm, root
iscsiadm: CommandFilter, iscsiadm, root
sg_scan: CommandFilter, sg_scan, root
systool: CommandFilter, systool, root
cat: CommandFilter, cat, root

# fuxi/connector/cloudconnector/openstack.py
ln: CommandFilter, ln, root

# fuxi/blockdevice.py
mount: CommandFilter, mount, root
umount: CommandFilter, umount, root
mkfs: CommandFilter, mkfs, root

mkdir: CommandFilter, mkdir, root
tee: CommandFilter, tee, root
ls: CommandFilter, ls, root
rm: CommandFilter, rm, root
@ -1,15 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuxi import utils

app = utils.make_json_app(__name__)
@ -1,35 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import glob

from oslo_log import log as logging
from oslo_utils import units

from fuxi import exceptions

LOG = logging.getLogger(__name__)


class BlockerDeviceManager(object):
    def device_scan(self):
        return glob.glob('/sys/block/*')

    def get_device_size(self, device):
        try:
            # Context managers ensure the sysfs file handles are closed.
            with open(device + '/size') as f:
                nr_sectors = f.read().rstrip('\n')
            with open(device + '/queue/hw_sector_size') as f:
                sect_size = f.read().rstrip('\n')
            return (float(nr_sectors) * float(sect_size)) / units.Gi
        except IOError as e:
            LOG.error("Failed to read device size. %s", str(e))
            raise exceptions.FuxiException(str(e))
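The size computation in `get_device_size` can be sketched in isolation. This is a minimal standalone sketch; the string inputs are hypothetical stand-ins for the contents of the sysfs files (`/sys/block/<dev>/size` and `queue/hw_sector_size`), and `GI` mirrors `oslo_utils.units.Gi`:

```python
# Sketch of the arithmetic in BlockerDeviceManager.get_device_size:
# reported size in GiB = sector_count * sector_size / 2**30.
GI = 2 ** 30  # same value as oslo_utils.units.Gi


def device_size_gib(nr_sectors, sect_size):
    # The sysfs files yield decimal strings, hence the float() conversions.
    return (float(nr_sectors) * float(sect_size)) / GI


# 20971520 sectors of 512 bytes is a 10 GiB device
print(device_size_gib("20971520", "512"))  # → 10.0
```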
@ -1,187 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

from kuryr.lib import config as kuryr_config
from kuryr.lib import opts as kuryr_opts
from oslo_config import cfg
from oslo_log import log as logging

from fuxi.i18n import _
from fuxi.version import version_info

default_opts = [
    cfg.HostAddressOpt('my_ip',
                       help=_('IP address of this machine.')),
    cfg.IntOpt('fuxi_port',
               default=7879,
               help=_('Port for fuxi volume driver server.')),
    cfg.StrOpt('volume_dir',
               default='/fuxi/data',
               help=_('Directory in which the docker volume will be '
                      'created.')),
    cfg.ListOpt('volume_providers',
                help=_('Volume storage backends that provide volumes for '
                       'Docker.')),
    cfg.StrOpt('volume_from',
               default='fuxi',
               help=_('Label set on the created volumes.')),
    cfg.IntOpt('default_volume_size',
               default=1,
               help=_('Default size for volume.')),
    cfg.BoolOpt('threaded',
                default=True,
                help=_('Make this volume plugin run multi-threaded.')),
    cfg.StrOpt('rootwrap_config',
               default='/etc/fuxi/rootwrap.conf',
               help=_('Path to the rootwrap configuration file to use for '
                      'running commands as root.')),
]

keystone_group = cfg.OptGroup(
    'keystone',
    title='Keystone Options',
    help=_('Configuration options for OpenStack Keystone'))

legacy_keystone_opts = [
    cfg.StrOpt('region',
               default=os.environ.get('REGION'),
               help=_('The region that this machine belongs to.'),
               deprecated_for_removal=True),
    cfg.StrOpt('auth_url',
               default=os.environ.get('IDENTITY_URL'),
               help=_('The URL for accessing the identity service.'),
               deprecated_for_removal=True),
    cfg.StrOpt('admin_user',
               default=os.environ.get('SERVICE_USER'),
               help=_('The username to auth with the identity service.'),
               deprecated_for_removal=True),
    cfg.StrOpt('admin_tenant_name',
               default=os.environ.get('SERVICE_TENANT_NAME'),
               help=_('The tenant name to auth with the identity service.'),
               deprecated_for_removal=True),
    cfg.StrOpt('admin_password',
               default=os.environ.get('SERVICE_PASSWORD'),
               help=_('The password to auth with the identity service.'),
               deprecated_for_removal=True),
    cfg.StrOpt('admin_token',
               default=os.environ.get('SERVICE_TOKEN'),
               help=_('The admin token.'),
               deprecated_for_removal=True),
    cfg.StrOpt('auth_ca_cert',
               default=os.environ.get('SERVICE_CA_CERT'),
               help=_('The CA certification file.'),
               deprecated_for_removal=True),
    cfg.BoolOpt('auth_insecure',
                default=True,
                help=_("Turn off verification of the certificate for ssl."),
                deprecated_for_removal=True),
]

cinder_group = cfg.OptGroup(
    'cinder',
    title='Cinder Options',
    help=_('Configuration options for OpenStack Cinder'))

cinder_opts = [
    cfg.StrOpt('region_name',
               default=os.environ.get('REGION'),
               help=_('Region name of this node. This is used when picking'
                      ' the URL in the service catalog.')),
    cfg.StrOpt('volume_connector',
               default='osbrick',
               help=_('Volume connector for attaching a volume to, or '
                      'detaching a volume from, this server.')),
    cfg.StrOpt('availability_zone',
               default=None,
               help=_('AZ in which the current machine was created, '
                      'and in which volumes are going to be created.')),
    cfg.StrOpt('volume_type',
               default=None,
               help=_('Volume type used to create volumes.')),
    cfg.StrOpt('fstype',
               default='ext4',
               help=_('Default filesystem type for volume.')),
    cfg.BoolOpt('multiattach',
                default=False,
                help=_('Allow the volume to be attached to more than '
                       'one instance.')),
    cfg.BoolOpt('all_tenants',
                default=True,
                help=_('Allow access over all tenants with the provided '
                       'auth.'))
]

nova_group = cfg.OptGroup(
    'nova',
    title='Nova Options',
    help=_('Configuration options for OpenStack Nova'))

nova_opts = [
    cfg.StrOpt('region_name',
               default=os.environ.get('REGION'),
               help=_('Region name of this node. This is used when picking'
                      ' the URL in the service catalog.'))
]

manila_group = cfg.OptGroup(
    'manila',
    title='Manila Options',
    help=_('Configuration options for OpenStack Manila'))

manila_opts = [
    cfg.StrOpt('region_name',
               default=os.environ.get('REGION'),
               help=_('Region name of this node. This is used when picking'
                      ' the URL in the service catalog.')),
    cfg.StrOpt('volume_connector',
               default='osbrick',
               help=_('Volume connector for attaching a share to, or '
                      'detaching a share from, this server.')),
    cfg.StrOpt('share_proto',
               default='NFS',
               help=_('Default protocol for manila share.')),
    cfg.DictOpt('proto_access_type_map',
                default={},
                help=_('Set the access type for clients to access shares.')),
    cfg.StrOpt('availability_zone',
               default=None,
               help=_('AZ in which the share is going to be created.')),
    cfg.StrOpt('access_to_for_cert',
               default='',
               help=_('The value to access share for access_type cert.'))
]

CONF = cfg.CONF
CONF.register_opts(default_opts)
CONF.register_opts(legacy_keystone_opts, group=keystone_group.name)
CONF.register_opts(cinder_opts, group=cinder_group.name)
CONF.register_opts(nova_opts, group=nova_group.name)

CONF.register_group(manila_group)
CONF.register_opts(manila_opts, group=manila_group)
kuryr_config.register_keystoneauth_opts(CONF, manila_group.name)

# Setting options for Keystone.
kuryr_config.register_keystoneauth_opts(CONF, cinder_group.name)
CONF.set_default('auth_type', default='password', group=cinder_group.name)

kuryr_config.register_keystoneauth_opts(CONF, nova_group.name)

keystone_auth_opts = kuryr_opts.get_keystoneauth_conf_options()

# Setting oslo.log options for logging.
logging.register_options(CONF)


def init(args, **kwargs):
    cfg.CONF(args=args, project='fuxi',
             version=version_info.release_string(), **kwargs)
@ -1,63 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

VOLUME_FROM = 'volume_from'
DOCKER_VOLUME_NAME = 'docker_volume_name'

# Volume states
UNKNOWN = 'unknown'
NOT_ATTACH = 'not_attach'
ATTACH_TO_THIS = 'attach_to_this'
ATTACH_TO_OTHER = 'attach_to_other'

# If volume_provider is cinder, and a cinder volume is attached to this server
# by Nova, a link file will be created under this directory to match the
# attached volume. Creating the link file reduces the time spent interacting
# with backend providers in some cases.
VOLUME_LINK_DIR = '/dev/disk/by-id/'

# General scanning interval for some operations.
SCAN_INTERVAL = 0.3

# Volume scanning interval
VOLUME_SCAN_TIME_DELAY = 0.3

# Timeout for destroying a volume from the backend provider
DESTROY_VOLUME_TIMEOUT = 300

# Timeout for monitoring volume status
MONITOR_STATE_TIMEOUT = 600

# Device scan interval
DEVICE_SCAN_TIME_DELAY = 0.3

# Timeout for scanning devices
DEVICE_SCAN_TIMEOUT = 10

# Timeout for querying meta-data from localhost
CURL_MD_TIMEOUT = 10

# Manila
# Manila share scanning interval
SHARE_SCAN_INTERVAL = 0.3

# Manila share network scanning interval
SHARE_NETWORK_SCAN_INTERVAL = 0.3

# Timeout for destroying a share from Manila
DESTROY_SHARE_TIMEOUT = 300

# Timeout for destroying a share network from Manila
DESTROY_SHARE_NETWORK_TIMEOUT = 300

# Timeout for revoking access to a Manila share for a host
ACCSS_DENY_TIMEOUT = 300
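A hedged sketch of how interval and timeout constants like these are typically consumed: poll a condition every interval seconds until the timeout elapses. The `wait_for` helper and its `clock`/`sleep` hooks are assumptions for illustration (the hooks only make the sketch testable), not fuxi APIs:

```python
import time

# Stand-ins mirroring the constants above
SCAN_INTERVAL = 0.3
DESTROY_VOLUME_TIMEOUT = 300


def wait_for(check, interval=SCAN_INTERVAL, timeout=DESTROY_VOLUME_TIMEOUT,
             clock=time.time, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns True,
    raising once more than `timeout` seconds have elapsed."""
    start = clock()
    while not check():
        if clock() - start > timeout:
            raise RuntimeError('timed out after %s seconds' % timeout)
        sleep(interval)
    return True
```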
@ -1,152 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuxi import exceptions
from fuxi.i18n import _
from fuxi import utils

from oslo_concurrency import processutils
from oslo_log import log as logging
from oslo_utils import excutils

proc_mounts_path = '/proc/mounts'

LOG = logging.getLogger(__name__)


class MountInfo(object):
    def __init__(self, device, mountpoint, fstype, opts):
        self.device = device
        self.mountpoint = mountpoint
        self.fstype = fstype
        self.opts = opts

    def __repr__(self, *args, **kwargs):
        return str(self.__dict__)


class Mounter(object):
    def make_filesystem(self, devpath, fstype):
        try:
            utils.execute('mkfs', '-t', fstype, '-F', devpath,
                          run_as_root=True)
        except processutils.ProcessExecutionError as e:
            msg = _("Unexpected error while making filesystem. "
                    "Devpath: {0}, "
                    "Fstype: {1}, "
                    "Error: {2}").format(devpath, fstype, e)
            raise exceptions.MakeFileSystemException(msg)

    def mount(self, devpath, mountpoint, fstype=None):
        try:
            if fstype:
                utils.execute('mount', '-t', fstype, devpath, mountpoint,
                              run_as_root=True)
            else:
                utils.execute('mount', devpath, mountpoint,
                              run_as_root=True)
        except processutils.ProcessExecutionError as e:
            msg = _("Unexpected error while mounting block device. "
                    "Devpath: {0}, "
                    "Mountpoint: {1}, "
                    "Error: {2}").format(devpath, mountpoint, e)
            raise exceptions.MountException(msg)

    def unmount(self, mountpoint):
        try:
            utils.execute('umount', mountpoint, run_as_root=True)
        except processutils.ProcessExecutionError as e:
            msg = _("Unexpected error while unmounting block device. "
                    "Mountpoint: {0}, "
                    "Error: {1}").format(mountpoint, e)
            raise exceptions.UnmountException(msg)

    def read_mounts(self, filter_device=(), filter_fstype=()):
        """Read all mounted filesystems.

        Read all mounted filesystems except the filtered ones.

        :param filter_device: Filter for devices; the result will not
                              contain mounts whose device is in it.
        :param filter_fstype: Filter for mount points; the result will not
                              contain mounts whose mount point is in it.
        :return: All mounts.
        """
        try:
            (out, err) = processutils.execute('cat', proc_mounts_path,
                                              check_exit_code=0)
        except processutils.ProcessExecutionError:
            msg = _("Failed to read mounts.")
            raise exceptions.FileNotFound(msg)

        lines = out.split('\n')
        mounts = []
        for line in lines:
            if not line:
                continue
            tokens = line.split()
            if len(tokens) < 4:
                continue
            if tokens[0] in filter_device or tokens[1] in filter_fstype:
                continue
            mounts.append(MountInfo(device=tokens[0], mountpoint=tokens[1],
                                    fstype=tokens[2], opts=tokens[3]))
        return mounts

    def get_mps_by_device(self, devpath):
        """Get all mount points that the device is mounted on.

        :param devpath: The path of the mount device.
        :return: All mount points.
        :rtype: list
        """
        mps = []
        mounts = self.read_mounts()
        for m in mounts:
            if devpath == m.device:
                mps.append(m.mountpoint)
        return mps


def check_already_mounted(devpath, mountpoint):
    """Check whether the mount device is mounted on the specific mount point.

    :param devpath: The path of the mount device.
    :param mountpoint: The path of the mount point.
    :rtype: bool
    """
    mounts = Mounter().read_mounts()
    for m in mounts:
        if devpath == m.device and mountpoint == m.mountpoint:
            return True
    return False


def do_mount(devpath, mountpoint, fstype):
    """Execute a device mount operation.

    :param devpath: The path of the mount device.
    :param mountpoint: The path of the mount point.
    :param fstype: The file system type.
    """
    try:
        if check_already_mounted(devpath, mountpoint):
            return

        mounter = Mounter()
        mounter.mount(devpath, mountpoint, fstype)
    except exceptions.MountException:
        try:
            mounter.make_filesystem(devpath, fstype)
            mounter.mount(devpath, mountpoint, fstype)
        except exceptions.FuxiException as e:
            with excutils.save_and_reraise_exception():
                LOG.error(str(e))
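The `/proc/mounts` parsing done by `read_mounts` can be illustrated with a self-contained sketch. The sample text, the `parse_mounts` name, and the fstype-based filter are illustrative simplifications, not the module's exact behavior:

```python
# Hypothetical /proc/mounts content: device, mountpoint, fstype, opts, dump, pass
SAMPLE = """\
/dev/vda1 / ext4 rw,relatime 0 0
proc /proc proc rw,nosuid 0 0
/dev/vdb1 /fuxi/data ext4 rw,relatime 0 0
"""


def parse_mounts(text, filter_device=(), filter_fstype=()):
    """Split each mounts line into (device, mountpoint, fstype, opts),
    skipping short lines and filtered devices / filesystem types."""
    mounts = []
    for line in text.splitlines():
        tokens = line.split()
        if len(tokens) < 4:
            continue
        if tokens[0] in filter_device or tokens[2] in filter_fstype:
            continue
        mounts.append((tokens[0], tokens[1], tokens[2], tokens[3]))
    return mounts


print([m[1] for m in parse_mounts(SAMPLE, filter_fstype=('proc',))])
# → ['/', '/fuxi/data']
```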
@ -1,126 +0,0 @@
|
|||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import time
|
||||
|
||||
from cinderclient import exceptions as cinder_exception
|
||||
from manilaclient.common.apiclient import exceptions as manila_exception
|
||||
from oslo_log import log as logging
|
||||
|
||||
from fuxi.common import constants
|
||||
from fuxi import exceptions
|
||||
from fuxi.i18n import _
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class StateMonitor(object):
|
||||
"""Monitor the status of Volume.
|
||||
|
||||
Because of some volume operation is asynchronous, such as creating Cinder
|
||||
volume, this volume could be used for next stop util reached an desired
|
    state.
    """

    def __init__(self, client, expected_obj,
                 desired_state,
                 transient_states=(),
                 time_limit=constants.MONITOR_STATE_TIMEOUT,
                 time_delay=1):
        self.client = client
        self.expected_obj = expected_obj
        self.desired_state = desired_state
        self.transient_states = transient_states
        self.time_limit = time_limit
        self.start_time = time.time()
        self.time_delay = time_delay

    def _reached_desired_state(self, current_state):
        if current_state == self.desired_state:
            return True
        elif current_state in self.transient_states:
            idx = self.transient_states.index(current_state)
            if idx > 0:
                self.transient_states = self.transient_states[idx:]
            return False
        else:
            msg = _("Unexpected state while waiting for volume. "
                    "Expected Volume: {0}, "
                    "Expected State: {1}, "
                    "Reached State: {2}").format(self.expected_obj,
                                                 self.desired_state,
                                                 current_state)
            LOG.error(msg)
            raise exceptions.UnexpectedStateException(msg)

    def monitor_cinder_volume(self):
        while True:
            try:
                volume = self.client.volumes.get(self.expected_obj.id)
            except cinder_exception.ClientException:
                elapsed_time = time.time() - self.start_time
                if elapsed_time > self.time_limit:
                    msg = ("Timed out while waiting for volume. "
                           "Expected Volume: {0}, "
                           "Expected State: {1}, "
                           "Elapsed Time: {2}").format(self.expected_obj,
                                                       self.desired_state,
                                                       elapsed_time)
                    LOG.error(msg)
                    raise exceptions.TimeoutException(msg)
                raise

            if self._reached_desired_state(volume.status):
                return volume

            time.sleep(self.time_delay)

    def monitor_manila_share(self):
        while True:
            try:
                share = self.client.shares.get(self.expected_obj.id)
            except manila_exception.ClientException:
                elapsed_time = time.time() - self.start_time
                if elapsed_time > self.time_limit:
                    msg = ("Timed out while waiting for share. "
                           "Expected Share: {0}, "
                           "Expected State: {1}, "
                           "Elapsed Time: {2}").format(self.expected_obj,
                                                       self.desired_state,
                                                       elapsed_time)
                    raise exceptions.TimeoutException(msg)
                raise

            if self._reached_desired_state(share.status):
                return share

            time.sleep(self.time_delay)

    def monitor_share_access(self, access_type, access_to):
        while True:
            try:
                al = self.client.shares.access_list(self.expected_obj.id)
            except manila_exception.ClientException:
                elapsed_time = time.time() - self.start_time
                if elapsed_time > self.time_limit:
                    msg = ("Timed out while waiting for share access. "
                           "Expected State: {0}, "
                           "Elapsed Time: {1}").format(self.desired_state,
                                                       elapsed_time)
                    raise exceptions.TimeoutException(msg)
                raise

            for a in al:
                if a.access_type == access_type and a.access_to == access_to:
                    if self._reached_desired_state(a.state):
                        return self.expected_obj

            time.sleep(self.time_delay)
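The StateMonitor methods above all follow the same poll-with-timeout pattern: fetch the object's state, return on the desired state, keep looping through known transient states, and time out otherwise. A minimal self-contained sketch of that pattern (the `FakeVolume` client and `wait_for_state` helper are hypothetical, for illustration only; they are not part of fuxi):

```python
import time


class FakeVolume(object):
    """Test double whose status walks through a queued list of states."""

    def __init__(self, states):
        self._states = list(states)

    @property
    def status(self):
        # Each poll consumes the next queued state; the last one repeats.
        return self._states.pop(0) if len(self._states) > 1 else self._states[0]


def wait_for_state(volume, desired, transients, time_limit=5, delay=0.01):
    """Poll volume.status until the desired state, StateMonitor-style."""
    start = time.time()
    while True:
        current = volume.status
        if current == desired:
            return current
        if current not in transients:
            # Mirrors UnexpectedStateException in the original code.
            raise RuntimeError("unexpected state: %s" % current)
        if time.time() - start > time_limit:
            # Mirrors TimeoutException in the original code.
            raise RuntimeError("timed out waiting for %s" % desired)
        time.sleep(delay)
```

Compared to the original, this drops the client round-trip and the transient-state pruning, but keeps the three outcomes: success, unexpected state, timeout.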
@@ -1,141 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import time

from cinderclient import exceptions as cinder_exception
from novaclient import exceptions as nova_exception
from oslo_concurrency import lockutils
from oslo_concurrency import processutils
from oslo_log import log as logging

from fuxi.common import blockdevice
from fuxi.common import config
from fuxi.common import constants as consts
from fuxi.common import state_monitor
from fuxi.connector import connector
from fuxi import exceptions
from fuxi.i18n import _
from fuxi import utils

CONF = config.CONF

LOG = logging.getLogger(__name__)


class CinderConnector(connector.Connector):
    def __init__(self):
        super(CinderConnector, self).__init__()
        self.cinderclient = utils.get_cinderclient()
        self.novaclient = utils.get_novaclient()

    @lockutils.synchronized('openstack-attach-volume')
    def connect_volume(self, volume, **connect_opts):
        bdm = blockdevice.BlockerDeviceManager()
        ori_devices = bdm.device_scan()

        # Do the volume-attach.
        try:
            server_id = connect_opts.get('server_id', None)
            if not server_id:
                server_id = utils.get_instance_uuid()

            LOG.info("Start to connect to volume %s", volume)
            nova_volume = self.novaclient.volumes.create_server_volume(
                server_id=server_id,
                volume_id=volume.id,
                device=None)

            volume_monitor = state_monitor.StateMonitor(
                self.cinderclient,
                nova_volume,
                'in-use',
                ('available', 'attaching',))
            attached_volume = volume_monitor.monitor_cinder_volume()
        except nova_exception.ClientException as ex:
            LOG.error("Attaching volume %(vol)s to server %(s)s "
                      "failed. Error: %(err)s",
                      {'vol': volume.id, 's': server_id, 'err': ex})
            raise

        # Get all devices on the host after the volume-attach,
        # and then find the newly attached device.
        LOG.info("After connected to volume, scan the added "
                 "block device on host")
        curr_devices = bdm.device_scan()
        start_time = time.time()
        delta_devices = list(set(curr_devices) - set(ori_devices))
        while not delta_devices:
            time.sleep(consts.DEVICE_SCAN_TIME_DELAY)
            curr_devices = bdm.device_scan()
            delta_devices = list(set(curr_devices) - set(ori_devices))
            if time.time() - start_time > consts.DEVICE_SCAN_TIMEOUT:
                msg = _("Could not detect added device with "
                        "limited time")
                raise exceptions.FuxiException(msg)
        LOG.info("Get extra added block device %s", delta_devices)

        for device in delta_devices:
            if bdm.get_device_size(device) == volume.size:
                device = device.replace('/sys/block', '/dev')
                LOG.info("Find attached device %(dev)s"
                         " for volume %(at)s %(vol)s",
                         {'dev': device, 'at': attached_volume.name,
                          'vol': volume})

                link_path = os.path.join(consts.VOLUME_LINK_DIR, volume.id)
                try:
                    utils.execute('ln', '-s', device,
                                  link_path,
                                  run_as_root=True)
                except processutils.ProcessExecutionError as e:
                    LOG.error("Error happened when create link file for"
                              " block device attached by Nova."
                              " Error: %s", e)
                    raise
                return {'path': link_path}

        LOG.warning("Could not find matched device")
        raise exceptions.NotFound("Not Found Matched Device")

    def disconnect_volume(self, volume, **disconnect_opts):
        try:
            volume = self.cinderclient.volumes.get(volume.id)
        except cinder_exception.ClientException:
            LOG.error("Get Volume %s from Cinder failed", volume.id)
            raise

        try:
            link_path = self.get_device_path(volume)
            utils.execute('rm', '-f', link_path, run_as_root=True)
        except processutils.ProcessExecutionError as e:
            LOG.warning("Error happened when remove docker volume"
                        " mountpoint directory. Error: %s", e)

        try:
            self.novaclient.volumes.delete_server_volume(
                utils.get_instance_uuid(),
                volume.id)
        except nova_exception.ClientException as e:
            LOG.error("Detaching volume %(vol)s failed. Err: %(err)s",
                      {'vol': volume.id, 'err': e})
            raise

        volume_monitor = state_monitor.StateMonitor(self.cinderclient,
                                                    volume,
                                                    'available',
                                                    ('in-use', 'detaching',))
        return volume_monitor.monitor_cinder_volume()

    def get_device_path(self, volume):
        return os.path.join(consts.VOLUME_LINK_DIR, volume.id)
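The new-device detection in `connect_volume` above is a set-difference poll: snapshot the device list, attach, then diff the list against the snapshot until something new appears or a deadline passes. A standalone sketch of just that step (the `scan` callable and `wait_for_new_device` helper are hypothetical stand-ins for `bdm.device_scan()` and the inline loop):

```python
import time


def wait_for_new_device(scan, before, timeout=5, delay=0.01):
    """Return the devices that appear after `before` was snapshotted.

    Raises RuntimeError if no new device shows up within `timeout`
    seconds, mirroring the DEVICE_SCAN_TIMEOUT check in the original.
    """
    start = time.time()
    while True:
        delta = list(set(scan()) - set(before))
        if delta:
            return delta
        if time.time() - start > timeout:
            raise RuntimeError("Could not detect added device in time")
        time.sleep(delay)
```

Note that set difference only finds devices that were absent in the snapshot; like the original, it cannot distinguish two volumes attached concurrently, which is why the original guards the whole method with a lock.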
@@ -1,35 +0,0 @@
# Copyright 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class Connector(object):
    def __init__(self):
        pass

    @abc.abstractmethod
    def connect_volume(self, volume, **connect_opts):
        pass

    @abc.abstractmethod
    def disconnect_volume(self, volume, **disconnect_opts):
        pass

    @abc.abstractmethod
    def get_device_path(self, volume):
        pass
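The abstract base above fixes the three-method contract every fuxi connector backend implements. A toy subclass sketch showing the contract in isolation (using the Python 3 `abc.ABC` spelling rather than the `six` metaclass, and a hypothetical `LoopbackConnector` that is not a real fuxi backend):

```python
import abc


class Connector(abc.ABC):
    """Stand-in for fuxi.connector.connector.Connector (py3 spelling)."""

    @abc.abstractmethod
    def connect_volume(self, volume, **connect_opts):
        pass

    @abc.abstractmethod
    def disconnect_volume(self, volume, **disconnect_opts):
        pass

    @abc.abstractmethod
    def get_device_path(self, volume):
        pass


class LoopbackConnector(Connector):
    """Toy backend: 'connects' by reporting a path under /tmp."""

    def connect_volume(self, volume, **connect_opts):
        # Real backends return {'path': ...} too; see CinderConnector.
        return {'path': self.get_device_path(volume)}

    def disconnect_volume(self, volume, **disconnect_opts):
        return None

    def get_device_path(self, volume):
        return '/tmp/fuxi-demo/%s' % volume
```

Because the three methods are abstract, instantiating `Connector` directly raises `TypeError`, which is how the base class enforces the contract.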
@@ -1,374 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import time

from os_brick.initiator import connector
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import excutils

from cinderclient import exceptions as cinder_exception
from manilaclient.common.apiclient import exceptions as manila_exception

from fuxi.common import constants as consts
from fuxi.common import mount
from fuxi.common import state_monitor
from fuxi.connector import connector as fuxi_connector
from fuxi import exceptions
from fuxi import utils

CONF = cfg.CONF

LOG = logging.getLogger(__name__)


def brick_get_connector_properties(multipath=False, enforce_multipath=False):
    """Wrapper to automatically set root_helper in brick calls.

    :param multipath: A boolean indicating whether the connector can
                      support multipath.
    :param enforce_multipath: If True, it raises exception when multipath=True
                              is specified but multipathd is not running.
                              If False, it falls back to multipath=False
                              when multipathd is not running.
    """

    root_helper = utils.get_root_helper()
    return connector.get_connector_properties(root_helper,
                                              CONF.my_ip,
                                              multipath,
                                              enforce_multipath)


def brick_get_connector(protocol, driver=None,
                        use_multipath=False,
                        device_scan_attempts=3,
                        *args, **kwargs):
    """Wrapper to get a brick connector object.

    This automatically populates the required protocol as well
    as the root_helper needed to execute commands.
    """

    root_helper = utils.get_root_helper()
    if protocol.upper() == "RBD":
        kwargs['do_local_attach'] = True
    return connector.InitiatorConnector.factory(
        protocol, root_helper,
        driver=driver,
        use_multipath=use_multipath,
        device_scan_attempts=device_scan_attempts,
        *args, **kwargs)


class CinderConnector(fuxi_connector.Connector):
    def __init__(self):
        super(CinderConnector, self).__init__()
        self.cinderclient = utils.get_cinderclient()

    def _get_connection_info(self, volume_id):
        LOG.info("Get connection info for osbrick connector and use it to "
                 "connect to volume")
        try:
            conn_info = self.cinderclient.volumes.initialize_connection(
                volume_id,
                brick_get_connector_properties())
            LOG.info("Get connection information %s", conn_info)
            return conn_info
        except cinder_exception.ClientException as e:
            LOG.error("Error happened when initialize connection"
                      " for volume. Error: %s", e)
            raise

    def _connect_volume(self, volume):
        conn_info = self._get_connection_info(volume.id)

        protocol = conn_info['driver_volume_type']
        brick_connector = brick_get_connector(protocol)
        device_info = brick_connector.connect_volume(conn_info['data'])
        LOG.info("Get device_info after connect to "
                 "volume %s", device_info)
        try:
            link_path = os.path.join(consts.VOLUME_LINK_DIR, volume.id)
            utils.execute('ln', '-s', os.path.realpath(device_info['path']),
                          link_path,
                          run_as_root=True)
        except processutils.ProcessExecutionError as e:
            LOG.error("Failed to create link for device. %s", e)
            raise
        return {'path': link_path}

    def _disconnect_volume(self, volume):
        try:
            link_path = self.get_device_path(volume)
            utils.execute('rm', '-f', link_path, run_as_root=True)
        except processutils.ProcessExecutionError as e:
            LOG.warning("Error happened when remove docker volume"
                        " mountpoint directory. Error: %s", e)

        conn_info = self._get_connection_info(volume.id)

        protocol = conn_info['driver_volume_type']
        brick_get_connector(protocol).disconnect_volume(conn_info['data'],
                                                        None)

    def connect_volume(self, volume, **connect_opts):
        mountpoint = connect_opts.get('mountpoint', None)
        host_name = utils.get_hostname()

        try:
            self.cinderclient.volumes.reserve(volume)
        except cinder_exception.ClientException:
            LOG.error("Reserve volume %s failed", volume)
            raise

        try:
            device_info = self._connect_volume(volume)
            self.cinderclient.volumes.attach(volume=volume,
                                             instance_uuid=None,
                                             mountpoint=mountpoint,
                                             host_name=host_name)
            LOG.info("Attach volume to this server successfully")
        except Exception:
            LOG.error("Attach volume %s to this server failed", volume)
            with excutils.save_and_reraise_exception():
                try:
                    self._disconnect_volume(volume)
                except Exception:
                    pass
                self.cinderclient.volumes.unreserve(volume)

        return device_info

    def disconnect_volume(self, volume, **disconnect_opts):
        self._disconnect_volume(volume)

        attachments = volume.attachments
        attachment_uuid = None
        for am in attachments:
            if am['host_name'].lower() == utils.get_hostname().lower():
                attachment_uuid = am['attachment_id']
                break
        try:
            self.cinderclient.volumes.detach(volume.id,
                                             attachment_uuid=attachment_uuid)
            LOG.info("Disconnect volume successfully")
        except cinder_exception.ClientException as e:
            LOG.error("Error happened when detach volume %(vol)s from this"
                      " server. Error: %(err)s",
                      {'vol': volume, 'err': e})
            raise

    def get_device_path(self, volume):
        return os.path.join(consts.VOLUME_LINK_DIR, volume.id)


SHARE_PROTO = (NFS, GLUSTERFS) = ('NFS', 'GLUSTERFS')
SHARE_ACCESS_TYPE = (IP, CERT) = ('ip', 'cert')
# PROTO_ACCESS_TYPE_MAP
#   key: share protocol
#   value: possible supported access types
PROTO_ACCESS_TYPE_MAP = {
    NFS: (IP,),
    GLUSTERFS: (CERT,)
}


class ManilaConnector(fuxi_connector.Connector):
    """Manage share access and mount.

    Share access: ManilaConnector only supports one access_type for
    each share_proto that Fuxi implements. The constant
    PROTO_ACCESS_TYPE_MAP records the supported share_proto values and the
    related possible access_types. In particular, the first access_type is
    used as the default when there is more than one access_type for a
    share_proto; this can also be set in the config file with
    conf.manila.proto_access_type_map.
    """
    def __init__(self, manilaclient=None):
        super(ManilaConnector, self).__init__()
        if not manilaclient:
            manilaclient = utils.get_manilaclient()
        self.manilaclient = manilaclient
        self._set_proto_access_type_map()

    def _set_proto_access_type_map(self):
        conf_proto_at_map = CONF.manila.proto_access_type_map
        conf_proto_at_map = dict((k.upper(), v.lower())
                                 for k, v in conf_proto_at_map.items())
        unable_proto = [k for k in conf_proto_at_map.keys()
                        if k not in PROTO_ACCESS_TYPE_MAP.keys()]
        if unable_proto:
            raise exceptions.InvalidProtocol(
                "Find temporary unable share protocol {0}"
                .format(unable_proto))

        self.proto_access_type_map = dict()
        for key, value in PROTO_ACCESS_TYPE_MAP.items():
            if key in conf_proto_at_map:
                if conf_proto_at_map[key] in value:
                    self.proto_access_type_map[key] = conf_proto_at_map[key]
                else:
                    raise exceptions.InvalidAccessType(
                        "Access type {0} is not enabled for share "
                        "protocol {1}, please chose from {2}"
                        .format(conf_proto_at_map[key],
                                key,
                                PROTO_ACCESS_TYPE_MAP[key]))
            else:
                self.proto_access_type_map[key] = value[0]

    def _get_brick_connector(self, share):
        protocol = share.share_proto
        mount_point_base = os.path.join(CONF.volume_dir, 'manila')
        conn = {'mount_point_base': mount_point_base}
        return brick_get_connector(protocol, conn=conn)

    def _get_access_to(self, access_type):
        if access_type == IP:
            access_to = CONF.my_ip
            if not access_to:
                raise exceptions.InvalidAccessTo(
                    "The my_ip could not be None")
            return access_to
        elif access_type == CERT:
            access_to = CONF.manila.access_to_for_cert
            if not access_to:
                raise exceptions.InvalidAccessTo(
                    "The access_to_for_cert could not be None")
            return CONF.manila.access_to_for_cert
        raise exceptions.InvalidAccessType(
            "The access type %s is not enabled" % access_type)

    @utils.wrap_check_authorized
    def check_access_allowed(self, share):
        access_type = self.proto_access_type_map.get(share.share_proto, None)
        if not access_type:
            LOG.warning("The share_proto %s is not enabled currently",
                        share.share_proto)
            return False

        share_access_list = self.manilaclient.shares.access_list(share)
        for access in share_access_list:
            try:
                if self._get_access_to(access_type) == access.access_to \
                        and access.state == 'active':
                    return True
            except (exceptions.InvalidAccessType, exceptions.InvalidAccessTo):
                pass
        return False

    def _access_allow(self, share):
        share_proto = share.share_proto
        if share_proto not in self.proto_access_type_map.keys():
            raise exceptions.InvalidProtocol(
                "Not enabled share protocol %s" % share_proto)

        try:
            if self.check_access_allowed(share):
                return

            access_type = self.proto_access_type_map[share_proto]
            access_to = self._get_access_to(access_type)
            LOG.info("Allow machine to access share %(shr)s with "
                     "access_type %(type)s and access_to %(to)s",
                     {'shr': share, 'type': access_type, 'to': access_to})
            self.manilaclient.shares.allow(share, access_type, access_to, 'rw')
        except manila_exception.ClientException as e:
            LOG.error("Failed to grant access for server, %s", e)
            raise

        LOG.info("Waiting share %s access to be active", share)
        state_monitor.StateMonitor(
            self.manilaclient, share,
            'active',
            ('new',)).monitor_share_access(access_type, access_to)

    @utils.wrap_check_authorized
    def connect_volume(self, share, **connect_opts):
        self._access_allow(share)

        conn_prop = {
            'export': self.get_device_path(share),
            'name': share.share_proto
        }
        path_info = self._get_brick_connector(share).connect_volume(conn_prop)
        LOG.info("Connect share %(s)s successfully, path_info %(pi)s",
                 {'s': share, 'pi': path_info})
        return {'path': share.export_location}

    def _access_deny(self, share):
        try:
            share_access_list = self.manilaclient.shares.access_list(share)
            share_proto = share.share_proto
            access_type = self.proto_access_type_map.get(share_proto)
            access_to = self._get_access_to(access_type)
            for share_access in share_access_list:
                if share_access.access_type == access_type \
                        and share_access.access_to == access_to:
                    self.manilaclient.shares.deny(share, share_access.id)
                    break
        except manila_exception.ClientException as e:
            LOG.error("Error happened when revoking access for share "
                      "%(s)s. Error: %(err)s", {'s': share, 'err': e})
            raise

    @utils.wrap_check_authorized
    def disconnect_volume(self, share, **disconnect_opts):
        mountpoint = self.get_mountpoint(share)
        mount.Mounter().unmount(mountpoint)

        self._access_deny(share)

        def _check_access_binded(s):
            sal = self.manilaclient.shares.access_list(s)
            share_proto = s.share_proto
            access_type = self.proto_access_type_map.get(share_proto)
            access_to = self._get_access_to(access_type)
            for a in sal:
                if a.access_type == access_type and a.access_to == access_to:
                    if a.state in ('error', 'error_deleting'):
                        raise exceptions.NotMatchedState(
                            "Revoke access {0} failed".format(a))
                    return True
            return False

        start_time = time.time()
        while time.time() - start_time < consts.ACCSS_DENY_TIMEOUT:
            if not _check_access_binded(share):
                LOG.info("Disconnect share %s successfully", share)
                return
            time.sleep(consts.SCAN_INTERVAL)

        raise exceptions.TimeoutException("Disconnect volume timeout")

    def get_device_path(self, share):
        return share.export_location

    def set_client(self):
        self.manilaclient = utils.get_manilaclient()

    @utils.wrap_check_authorized
    def get_mountpoint(self, share):
        if not self.check_access_allowed(share):
            return ''

        conn_prop = {
            'export': self.get_device_path(share),
            'name': share.share_proto
        }
        brick_connector = self._get_brick_connector(share)
        volume_paths = brick_connector.get_volume_paths(conn_prop)
        return volume_paths[0].rsplit('/', 1)[0]
@@ -1,231 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import flask
import os

from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log
from oslo_utils import importutils

from fuxi import app
from fuxi import exceptions
from fuxi.i18n import _
from fuxi import utils

CONF = cfg.CONF

LOG = log.getLogger(__name__)

CINDER = 'cinder'
MANILA = 'manila'

volume_providers_conf = {
    CINDER: 'fuxi.volumeprovider.cinder.Cinder',
    MANILA: 'fuxi.volumeprovider.manila.Manila', }


def init_app_conf():
    # Init volume providers.
    volume_providers = CONF.volume_providers
    if not volume_providers:
        raise Exception(_("Must define volume providers in "
                          "configuration file"))

    app.volume_providers = collections.OrderedDict()
    for provider in volume_providers:
        if provider in volume_providers_conf:
            app.volume_providers[provider] = importutils\
                .import_class(volume_providers_conf[provider])()
            LOG.info("Load volume provider: %s", provider)
        else:
            LOG.warning("Could not find volume provider: %s", provider)
    if not app.volume_providers:
        raise Exception(_("Not provide at least one effective "
                          "volume provider"))

    # Init volume store directory.
    try:
        volume_dir = CONF.volume_dir
        if not os.path.exists(volume_dir) or not os.path.isdir(volume_dir):
            utils.execute('mkdir', '-p', '-m=700', volume_dir,
                          run_as_root=True)
    except processutils.ProcessExecutionError:
        raise


def get_docker_volume(docker_volume_name):
    for provider in app.volume_providers.values():
        try:
            return provider.show(docker_volume_name)
        except exceptions.NotFound:
            pass
    return None


@app.route('/Plugin.Activate', methods=['POST'])
def plugin_activate():
    LOG.info("/Plugin.Activate")
    return flask.jsonify(Implements=[u'VolumeDriver'])


@app.route('/VolumeDriver.Create', methods=['POST'])
def volumedriver_create():
    json_data = flask.request.get_json(force=True)
    LOG.info("Received JSON data %s for /VolumeDriver.Create", json_data)

    docker_volume_name = json_data.get('Name', None)
    volume_opts = json_data.get('Opts', None) or {}
    if not docker_volume_name:
        msg = _("Request /VolumeDriver.Create need parameter 'Name'")
        LOG.error(msg)
        raise exceptions.InvalidInput(msg)
    if not isinstance(volume_opts, dict):
        msg = _("Request parameter 'Opts' must be dict type")
        LOG.error(msg)
        raise exceptions.InvalidInput(msg)

    volume_provider_type = volume_opts.get('volume_provider', None)
    if not volume_provider_type:
        volume_provider_type = list(app.volume_providers.keys())[0]

    if volume_provider_type not in app.volume_providers:
        msg_fmt = _("Could not find a handler for %(volume_provider_type)s "
                    "volume") % {'volume_provider_type': volume_provider_type}
        LOG.error(msg_fmt)
        return flask.jsonify(Err=msg_fmt)

    # If a volume with the same name already exists in another volume
    # provider backend, raise an error.
    for vpt, provider in app.volume_providers.items():
        if volume_provider_type != vpt \
                and provider.check_exist(docker_volume_name):
            msg_fmt = _("The volume with the same name already exists in "
                        "other volume provider backend")
            LOG.error(msg_fmt)
            return flask.jsonify(Err=msg_fmt)

    # Create the volume if it does not exist, or attach it to this server
    # if it already exists in the related volume provider.
    app.volume_providers[volume_provider_type].create(docker_volume_name,
                                                      volume_opts)

    return flask.jsonify(Err=u'')


@app.route('/VolumeDriver.Remove', methods=['POST'])
def volumedriver_remove():
    json_data = flask.request.get_json(force=True)
    LOG.info("Received JSON data %s for /VolumeDriver.Remove", json_data)

    docker_volume_name = json_data.get('Name', None)
    if not docker_volume_name:
        msg = _("Request /VolumeDriver.Remove need parameter 'Name'")
        LOG.error(msg)
        raise exceptions.InvalidInput(msg)

    for provider in app.volume_providers.values():
        if provider.delete(docker_volume_name):
            return flask.jsonify(Err=u'')

    return flask.jsonify(Err=u'')


@app.route('/VolumeDriver.Mount', methods=['POST'])
def volumedriver_mount():
    json_data = flask.request.get_json(force=True)
    LOG.info("Receive JSON data %s for /VolumeDriver.Mount", json_data)

    docker_volume_name = json_data.get('Name', None)
    if not docker_volume_name:
        msg = _("Request /VolumeDriver.Mount need parameter 'Name'")
        LOG.error(msg)
        raise exceptions.InvalidInput(msg)

    for provider in app.volume_providers.values():
        if provider.check_exist(docker_volume_name):
            mountpoint = provider.mount(docker_volume_name)
            return flask.jsonify(Mountpoint=mountpoint, Err=u'')

    return flask.jsonify(Err=u'Mount Failed')


@app.route('/VolumeDriver.Path', methods=['POST'])
def volumedriver_path():
    json_data = flask.request.get_json(force=True)
    LOG.info("Receive JSON data %s for /VolumeDriver.Path", json_data)

    docker_volume_name = json_data.get('Name', None)
    if not docker_volume_name:
        msg = _("Request /VolumeDriver.Path need parameter 'Name'")
        LOG.error(msg)
        raise exceptions.InvalidInput(msg)

    volume = get_docker_volume(docker_volume_name)
    if volume is not None:
        mountpoint = volume.get('Mountpoint', '')
        LOG.info("Get mountpoint %(mp)s for docker volume %(name)s",
                 {'mp': mountpoint, 'name': docker_volume_name})
        return flask.jsonify(Mountpoint=mountpoint, Err=u'')

    LOG.warning("Can't find mountpoint for docker volume %(name)s",
                {'name': docker_volume_name})
    return flask.jsonify(Err=u'Mountpoint Not Found')


@app.route('/VolumeDriver.Unmount', methods=['POST'])
def volumedriver_unmount():
    json_data = flask.request.get_json(force=True)
    LOG.info('Receive JSON data %s for VolumeDriver.Unmount', json_data)
    return flask.jsonify(Err=u'')


@app.route('/VolumeDriver.Get', methods=['POST'])
def volumedriver_get():
    json_data = flask.request.get_json(force=True)
    LOG.info("Receive JSON data %s for /VolumeDriver.Get", json_data)

    docker_volume_name = json_data.get('Name', None)
    if not docker_volume_name:
        msg = _("Request /VolumeDriver.Get need parameter 'Name'")
        LOG.error(msg)
        raise exceptions.InvalidInput(msg)

    volume = get_docker_volume(docker_volume_name)
    if volume is not None:
        LOG.info("Get docker volume: %s", volume)
        return flask.jsonify(Volume=volume, Err=u'')

    LOG.warning("Can't find volume %s from every provider",
                docker_volume_name)
    return flask.jsonify(Err=u'Volume Not Found')


@app.route('/VolumeDriver.List', methods=['POST'])
def volumedriver_list():
    LOG.info("/VolumeDriver.List")
    docker_volumes = []
    for provider in app.volume_providers.values():
        vs = provider.list()
        if vs:
            docker_volumes.extend(vs)

    LOG.info("Get volumes from volume providers. Volumes: %s",
             docker_volumes)
    return flask.jsonify(Err=u'', Volumes=docker_volumes)


@app.route('/VolumeDriver.Capabilities', methods=['POST'])
def volumedriver_capabilities():
    return flask.jsonify(Capabilities={'Scope': 'global'})
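Each route above handles one call of Docker's volume plugin JSON protocol: a POST with a JSON body, answered by a JSON object whose `Err` field is empty on success. A self-contained sketch of the `/VolumeDriver.Create` request validation, with plain dicts standing in for Flask and the provider objects (`handle_create` and the list-backed `providers` mapping are hypothetical, for illustration only):

```python
import json


def handle_create(raw_body, providers):
    """Validate a /VolumeDriver.Create body the way volumedriver_create does.

    `providers` maps provider name -> list of created volume names,
    a stand-in for app.volume_providers and provider.create().
    """
    data = json.loads(raw_body)
    name = data.get('Name')
    opts = data.get('Opts') or {}
    if not name:
        return {'Err': "Request /VolumeDriver.Create need parameter 'Name'"}
    if not isinstance(opts, dict):
        return {'Err': "Request parameter 'Opts' must be dict type"}
    # Fall back to the first configured provider, as the route does.
    provider = opts.get('volume_provider') or next(iter(providers))
    if provider not in providers:
        return {'Err': 'Could not find a handler for %s volume' % provider}
    providers[provider].append(name)  # stand-in for provider.create()
    return {'Err': ''}
```

The real route raises `InvalidInput` for missing parameters rather than returning `Err`; the sketch folds both outcomes into the response dict to stay self-contained.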
@@ -1,72 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
|
||||
class FuxiException(Exception):
|
||||
"""Default Fuxi exception"""
|
||||
|
||||
|
||||
class TimeoutException(FuxiException):
|
||||
"""A timeout on waiting for volume to reach destination end state."""
|
||||
|
||||
|
||||
class UnexpectedStateException(FuxiException):
|
||||
"""Unexpected volume state appeared"""
|
||||
|
||||
|
||||
class LoopExceeded(FuxiException):
|
||||
"""Raised when ``loop_until`` looped too many times."""
|
||||
|
||||
|
||||
class NotFound(FuxiException):
|
||||
"""The resource could not be found"""
|
||||
|
||||
|
||||
class TooManyResources(FuxiException):
|
||||
"""Find too many resources."""
|
||||
|
||||
|
||||
class InvalidInput(FuxiException):
|
||||
"""Request data is invalidate"""
|
||||
|
||||
|
||||
class NotMatchedState(FuxiException):
|
||||
"""Current state not match to expected state"""
|
||||
message = "Current state not match to expected state."
|
||||
|
||||
|
||||
class MakeFileSystemException(FuxiException):
|
||||
"""Unexpected error while make file system."""
|
||||
|
||||
|
||||
class MountException(FuxiException):
|
||||
"""Unexpected error while mount device."""
|
||||
|
||||
|
||||
class UnmountException(FuxiException):
|
||||
"""Unexpected error while do umount"""
|
||||
|
||||
|
||||
class FileNotFound(FuxiException):
|
||||
"""The expected file not exist"""
|
||||
|
||||
|
||||
class InvalidProtocol(FuxiException):
|
||||
"""The given protocol is invalid"""
|
||||
|
||||
|
||||
class InvalidAccessType(FuxiException):
|
||||
"""The given access type is invalid"""
|
||||
|
||||
|
||||
class InvalidAccessTo(FuxiException):
|
||||
"""The given access type in invalid"""
|
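Because every error above subclasses `FuxiException`, callers can catch the whole family with one clause while still raising precise types. A minimal sketch of that pattern (the two classes mirror the module above; the `wait_for_volume` helper is invented for illustration):

```python
class FuxiException(Exception):
    """Default Fuxi exception"""


class TimeoutException(FuxiException):
    """A timeout on waiting for volume to reach destination end state."""


def wait_for_volume():
    # Illustrative: a real caller would poll the volume's status here.
    raise TimeoutException("volume never reached 'available'")


try:
    wait_for_volume()
except FuxiException as e:
    # One except clause handles every fuxi-specific error type.
    handled = str(e)
```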
fuxi/i18n.py
@@ -1,32 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import oslo_i18n

DOMAIN = "fuxi"

_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)

# The primary translation function using the well-known name "_"
_ = _translators.primary

# The contextual translation function using the name "_C"
_C = _translators.contextual_form

# The plural translation function using the name "_P"
_P = _translators.plural_form


def get_available_languages():
    return oslo_i18n.get_available_languages(DOMAIN)
fuxi/opts.py
@@ -1,33 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import itertools

from fuxi.common import config

__all__ = [
    'list_fuxi_opts',
]


def list_fuxi_opts():
    return [
        ('DEFAULT', itertools.chain(config.default_opts,)),
        (config.keystone_group.name,
         itertools.chain(config.legacy_keystone_opts,)),
        (config.cinder_group.name,
         itertools.chain(config.cinder_opts, config.keystone_auth_opts)),
        (config.nova_group.name,
         itertools.chain(config.nova_opts, config.keystone_auth_opts,)),
        (config.manila_group.name,
         itertools.chain(config.manila_opts, config.keystone_auth_opts))
    ]
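`list_fuxi_opts` exists so `oslo-config-generator` can discover the project's options and emit a sample config file. The wiring lives in `setup.cfg` as an entry point; reproduced from memory, so treat the exact names as illustrative:

```ini
[entry_points]
oslo.config.opts =
    fuxi = fuxi.opts:list_fuxi_opts
```

With that entry point registered, `oslo-config-generator --namespace fuxi` walks the `(group, options)` pairs returned above to produce `etc/fuxi.conf.sample`.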
@@ -1,31 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from fuxi import app
from fuxi.common import config
from fuxi import controllers

from oslo_log import log as logging


def start():
    config.init(sys.argv[1:])
    logging.setup(config.CONF, 'fuxi')

    controllers.init_app_conf()

    port = config.CONF.fuxi_port
    app.run("0.0.0.0", port,
            debug=config.CONF.debug,
            threaded=config.CONF.threaded)
@@ -1,21 +0,0 @@
#!/usr/bin/env bash

set -ex

VENV=${1:-"fullstack"}

GATE_DEST=$BASE/new
DEVSTACK_PATH=$GATE_DEST/devstack

export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin manila git://git.openstack.org/openstack/manila"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='snapshot_support=True create_share_from_snapshot_support=True revert_to_snapshot_support=True mount_snapshot_support=True'"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"SHARE_DRIVER=manila.share.drivers.lvm.LVMShareDriver"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"MANILA_OPTGROUP_generic1_driver_handles_share_servers=False"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"FUXI_VOLUME_PROVIDERS=cinder,manila"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-account"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-container"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-object"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-proxy"

$BASE/new/devstack-gate/devstack-vm-gate.sh
@@ -1,56 +0,0 @@
#!/usr/bin/env bash

set -xe

FUXI_DIR="$BASE/new/fuxi"
TEMPEST_DIR="$BASE/new/tempest"
SCRIPTS_DIR="/usr/os-testr-env/bin/"

venv=${1:-"fullstack"}

function generate_test_logs {
    local path="$1"
    # Compress all $path/*.txt files and move the directories holding those
    # files to /opt/stack/logs. Files with .log suffix have their
    # suffix changed to .txt (so browsers will know to open the compressed
    # files and not download them).
    if [[ -d "$path" ]] ; then
        sudo find $path -iname "*.log" -type f -exec mv {} {}.txt \; -exec gzip -9 {}.txt \;
        sudo mv $path/* /opt/stack/logs/
    fi
}

function generate_testr_results {
    # Give job user rights to access tox logs
    sudo -H -u $owner chmod o+rw .
    sudo -H -u $owner chmod o+rw -R .testrepository
    if [[ -f ".testrepository/0" ]] ; then
        .tox/$venv/bin/subunit-1to2 < .testrepository/0 > ./testrepository.subunit
        $SCRIPTS_DIR/subunit2html ./testrepository.subunit testr_results.html
        gzip -9 ./testrepository.subunit
        gzip -9 ./testr_results.html
        sudo mv ./*.gz /opt/stack/logs/
    fi

    if [[ "$venv" == fullstack* ]] ; then
        generate_test_logs "/tmp/${venv}-logs"
    fi
}

owner=stack

# Set owner permissions according to job's requirements.
cd $FUXI_DIR
sudo chown -R $owner:stack $FUXI_DIR

# Run tests
echo "Running Fuxi $venv fullstack tests"
set +e
sudo -H -u $owner tox -e $venv
testr_exit_code=$?
set -e

# Collect and parse results
generate_testr_results
exit $testr_exit_code
@@ -1,132 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import docker
import os

from cinderclient.v2 import client
from keystoneauth1 import identity
from keystoneauth1 import session as ks
from manilaclient import client as manila_client
import os_client_config
from oslo_log import log
from oslotest import base


LOG = log.getLogger(__name__)


def get_cinder_client_from_env():
    # A KeyError here means the openrc file has not been sourced or
    # configured; callers should catch it and fall back.
    auth_url = os.environ['OS_AUTH_URL']
    username = os.environ['OS_USERNAME']
    password = os.environ['OS_PASSWORD']
    project_name = os.environ['OS_PROJECT_NAME']

    # Either project(user)_domain_name or project(user)_domain_id
    # would be acceptable.
    project_domain_name = os.environ.get("OS_PROJECT_DOMAIN_NAME")
    project_domain_id = os.environ.get("OS_PROJECT_DOMAIN_ID")
    user_domain_name = os.environ.get("OS_USER_DOMAIN_NAME")
    user_domain_id = os.environ.get("OS_USER_DOMAIN_ID")

    auth = identity.Password(auth_url=auth_url,
                             username=username,
                             password=password,
                             project_name=project_name,
                             project_domain_id=project_domain_id,
                             project_domain_name=project_domain_name,
                             user_domain_id=user_domain_id,
                             user_domain_name=user_domain_name)
    session = ks.Session(auth=auth)
    return client.Client(session=session)


def get_manila_client_from_env():
    # A KeyError here means the openrc file has not been sourced or
    # configured; callers should catch it and fall back.
    auth_url = os.environ['OS_AUTH_URL']
    username = os.environ['OS_USERNAME']
    password = os.environ['OS_PASSWORD']
    project_name = os.environ['OS_PROJECT_NAME']

    # Either project(user)_domain_name or project(user)_domain_id
    # would be acceptable.
    project_domain_name = os.environ.get("OS_PROJECT_DOMAIN_NAME")
    project_domain_id = os.environ.get("OS_PROJECT_DOMAIN_ID")
    user_domain_name = os.environ.get("OS_USER_DOMAIN_NAME")
    user_domain_id = os.environ.get("OS_USER_DOMAIN_ID")

    auth = identity.Password(auth_url=auth_url,
                             username=username,
                             password=password,
                             project_name=project_name,
                             project_domain_id=project_domain_id,
                             project_domain_name=project_domain_name,
                             user_domain_id=user_domain_id,
                             user_domain_name=user_domain_name)
    session = ks.Session(auth=auth)
    return manila_client.Client(session=session, client_version='2')


def _get_cloud_config_auth_data(cloud='devstack-admin'):
    """Retrieves Keystone auth data to run functional tests

    Credentials are either read via os-client-config from the environment
    or from a config file ('clouds.yaml'). Environment variables override
    those from the config file.

    devstack produces a clouds.yaml with two named clouds - one named
    'devstack' which has user privs and one named 'devstack-admin' which
    has admin privs. This function will default to getting the devstack-admin
    cloud as that is the current expected behavior.
    """
    cloud_config = os_client_config.OpenStackConfig().get_one_cloud(cloud)
    return cloud_config.get_auth(), cloud_config.get_session()


def get_cinder_client_from_creds():
    auth_plugin, session = _get_cloud_config_auth_data()
    return client.Client(session=session, auth=auth_plugin)


def get_manila_client_from_creds():
    auth_plugin, session = _get_cloud_config_auth_data()
    return manila_client.Client(session=session, auth=auth_plugin,
                                client_version='2')


class FuxiBaseTest(base.BaseTestCase):
    """Basic class for Fuxi fullstack testing

    This class has common code shared for Fuxi fullstack testing
    including the various clients (docker, cinder) and common
    setup/cleanup code.
    """
    def setUp(self):
        super(FuxiBaseTest, self).setUp()
        self.docker_client = docker.APIClient(
            base_url='tcp://0.0.0.0:2375')
        try:
            self.cinder_client = get_cinder_client_from_env()
            self.manila_client = get_manila_client_from_env()
        except Exception as e:
            # The openrc file may be missing or not sourced.
            message = ('Missing environment variable %s. Please add it, '
                       'check for other missing environment variables, '
                       'and then source the openrc file. '
                       'Trying credentials from DevStack clouds.yaml ...')
            LOG.warning(message, e.args[0])
            self.cinder_client = get_cinder_client_from_creds()
            self.manila_client = get_manila_client_from_creds()
@@ -1,89 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from fuxi.tests.fullstack import fuxi_base
from fuxi import utils


class VolumeTest(fuxi_base.FuxiBaseTest):
    """Test Volumes operation

    Test volumes creation/deletion from docker to Cinder
    """
    def test_create_delete_volume_with_fuxi_driver(self):
        """Create and Delete docker volume with Fuxi

        This method creates a docker volume with the Fuxi driver
        and tests that it was created in Cinder.
        It then deletes the docker volume and tests that it was
        deleted from Cinder.
        """
        driver_opts = {
            'size': '1',
            'fstype': 'ext4',
        }
        vol_name = utils.get_random_string(8)
        self.docker_client.create_volume(name=vol_name, driver='fuxi',
                                         driver_opts=driver_opts)
        try:
            volumes = self.cinder_client.volumes.list(
                search_opts={'all_tenants': 1, 'name': vol_name})
        except Exception as e:
            self.docker_client.remove_volume(vol_name)
            message = ("Failed to list cinder volumes: %s")
            self.fail(message % str(e))
        self.assertEqual(1, len(volumes))
        self.docker_client.remove_volume(vol_name)
        volumes = self.cinder_client.volumes.list(
            search_opts={'all_tenants': 1, 'name': vol_name})
        self.assertEqual(0, len(volumes))

    def test_create_delete_volume_without_fuxi_driver(self):
        """Create and Delete docker volume without Fuxi

        This method creates a docker volume with the default
        docker driver. It tests that it was created correctly, but
        not added to Cinder.
        """
        vol_name = utils.get_random_string(8)
        self.docker_client.create_volume(name=vol_name)
        volumes = self.cinder_client.volumes.list(
            search_opts={'all_tenants': 1, 'name': vol_name})
        self.assertEqual(0, len(volumes))
        docker_volumes = self.docker_client.volumes()['Volumes']
        volume_found = False
        for docker_vol in docker_volumes:
            if docker_vol['Name'] == vol_name:
                volume_found = True
        self.assertTrue(volume_found)
        self.docker_client.remove_volume(vol_name)

    def test_create_delete_volume_with_manila_provider(self):
        driver_opts = {
            'volume_provider': 'manila',
        }
        vol_name = utils.get_random_string(8)
        self.docker_client.create_volume(name=vol_name, driver='fuxi',
                                         driver_opts=driver_opts)
        try:
            volumes = self.manila_client.shares.list(
                search_opts={'all_tenants': 1, 'name': vol_name})
        except Exception as e:
            self.docker_client.remove_volume(vol_name)
            message = ("Failed to list manila shares: %s")
            self.fail(message % str(e))
        self.assertEqual(1, len(volumes))
        self.docker_client.remove_volume(vol_name)
        volumes = self.manila_client.shares.list(
            search_opts={'all_tenants': 1, 'name': vol_name})
        self.assertEqual(0, len(volumes))
@@ -1,23 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslotest import base


class TestCase(base.BaseTestCase):

    """Test case base class for all unit tests."""
    def setUp(self):
        super(TestCase, self).setUp()
@@ -1,120 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuxi.common import mount
from fuxi import exceptions
from fuxi.tests.unit import base


class FakeMounter(object):
    def __init__(self, mountinfo=None):
        self.mountinfo = ("/dev/0 /path/to/0 type0 flags 0 0\n"
                          "/dev/1 /path/to/1 type1 flags 0 0\n"
                          "/dev/2 /path/to/2 type2 flags,1,2=3 0 0\n"
                          if not mountinfo else mountinfo)

    def mount(self, devpath, mountpoint, fstype=None):
        if not fstype:
            fstype = 'ext4'
        self.mountinfo += ' '.join([devpath, mountpoint, fstype,
                                    'flags', '0', '0\n'])

    def unmount(self, mountpoint):
        mounts = self.read_mounts()
        ori_len = len(mounts)
        for m in mounts:
            if m.mountpoint == mountpoint:
                mounts.remove(m)
        if ori_len != len(mounts):
            self.mountinfo = ''.join([' '.join([m.device, m.mountpoint,
                                                m.fstype, m.opts,
                                                '0', '0\n'])
                                      for m in mounts])
        else:
            raise exceptions.UnmountException()

    def read_mounts(self, filter_device=(), filter_fstype=()):
        lines = self.mountinfo.split('\n')
        mounts = []
        for line in lines:
            if not line:
                continue
            tokens = line.split()
            if len(tokens) < 4:
                continue
            # tokens[2] is the fstype field in /proc/mounts format.
            if tokens[0] in filter_device or tokens[2] in filter_fstype:
                continue
            mounts.append(mount.MountInfo(device=tokens[0],
                                          mountpoint=tokens[1],
                                          fstype=tokens[2], opts=tokens[3]))
        return mounts

    def get_mps_by_device(self, devpath):
        mps = []
        mounts = self.read_mounts()
        for m in mounts:
            if devpath in m.device:
                mps.append(m.mountpoint)
        return mps


def check_already_mounted(devpath, mountpoint):
    mounts = FakeMounter().read_mounts()
    for m in mounts:
        if m.device == devpath and m.mountpoint == mountpoint:
            return True
    return False


class TestMounter(base.TestCase):
    def setUp(self):
        super(TestMounter, self).setUp()

    def test_mount(self):
        fake_devpath = '/dev/3'
        fake_mp = '/path/to/3'
        fake_fstype = 'ext4'
        fake_mounter = FakeMounter()
        fake_mounter.mount(fake_devpath, fake_mp, fake_fstype)
        fake_mountinfo = "/dev/0 /path/to/0 type0 flags 0 0\n" \
                         "/dev/1 /path/to/1 type1 flags 0 0\n" \
                         "/dev/2 /path/to/2 type2 flags,1,2=3 0 0\n" \
                         "/dev/3 /path/to/3 ext4 flags 0 0\n"
        self.assertEqual(fake_mountinfo, fake_mounter.mountinfo)

    def test_unmount(self):
        fake_mp = '/path/to/2'
        fake_mounter = FakeMounter()
        fake_mounter.unmount(fake_mp)
        fake_mountinfo = "/dev/0 /path/to/0 type0 flags 0 0\n" \
                         "/dev/1 /path/to/1 type1 flags 0 0\n"
        self.assertEqual(fake_mountinfo, fake_mounter.mountinfo)

    def test_read_mounts(self):
        fake_mounts = [str(mount.MountInfo('/dev/0', '/path/to/0',
                                           'type0', 'flags')),
                       str(mount.MountInfo('/dev/1', '/path/to/1',
                                           'type1', 'flags')),
                       str(mount.MountInfo('/dev/2', '/path/to/2',
                                           'type2', 'flags,1,2=3'))]
        mounts = [str(m) for m in FakeMounter().read_mounts()]
        self.assertEqual(len(fake_mounts), len(mounts))
        for m in mounts:
            self.assertIn(m, fake_mounts)

    def test_get_mps_by_device(self):
        self.assertEqual(['/path/to/0'],
                         FakeMounter().get_mps_by_device('/dev/0'))

    def test_check_already_mounted(self):
        self.assertTrue(check_already_mounted('/dev/0', '/path/to/0'))
        self.assertFalse(check_already_mounted('/dev/0', '/path/to/1'))
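`FakeMounter.read_mounts` above mimics parsing `/proc/mounts`-style lines (`device mountpoint fstype opts dump pass`). The core split-and-filter logic can be sketched standalone, without fuxi's `MountInfo` class (a namedtuple stands in for it here):

```python
from collections import namedtuple

# Stand-in for fuxi.common.mount.MountInfo.
MountInfo = namedtuple('MountInfo', 'device mountpoint fstype opts')


def read_mounts(mountinfo, filter_device=(), filter_fstype=()):
    """Parse /proc/mounts-style text into MountInfo records,
    skipping filtered devices and filesystem types."""
    mounts = []
    for line in mountinfo.split('\n'):
        tokens = line.split()
        if len(tokens) < 4:
            continue
        # tokens: device, mountpoint, fstype, opts (then dump/pass).
        if tokens[0] in filter_device or tokens[2] in filter_fstype:
            continue
        mounts.append(MountInfo(tokens[0], tokens[1],
                                tokens[2], tokens[3]))
    return mounts


text = ("/dev/0 /path/to/0 ext4 rw 0 0\n"
        "/dev/1 /path/to/1 tmpfs rw 0 0\n")
all_mounts = read_mounts(text)
no_tmpfs = read_mounts(text, filter_fstype=('tmpfs',))
```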
@@ -1,197 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from cinderclient import exceptions as cinder_exception
from manilaclient.common.apiclient import exceptions as manila_exception

from fuxi.common import state_monitor
from fuxi import exceptions
from fuxi.tests.unit import base, fake_client, fake_object


class TestStateMonitor(base.TestCase):
    def setUp(self):
        super(TestStateMonitor, self).setUp()

    def test_monitor_cinder_volume(self):
        fake_cinder_client = fake_client.FakeCinderClient()
        fake_cinder_volume = fake_object.FakeCinderVolume(status='available')
        fake_desired_state = 'in-use'
        fake_transient_states = ('in-use',)
        fake_time_limit = 0
        fake_state_monitor = state_monitor.StateMonitor(fake_cinder_client,
                                                        fake_cinder_volume,
                                                        fake_desired_state,
                                                        fake_transient_states,
                                                        fake_time_limit)

        fake_desired_volume = fake_object.FakeCinderVolume(status='in-use')
        with mock.patch.object(fake_client.FakeCinderClient.Volumes, 'get',
                               return_value=fake_desired_volume):
            self.assertEqual(fake_desired_volume,
                             fake_state_monitor.monitor_cinder_volume())

    def test_monitor_cinder_volume_get_failed(self):
        fake_cinder_client = fake_client.FakeCinderClient()
        fake_cinder_volume = fake_object.FakeCinderVolume(status='available')

        with mock.patch('fuxi.tests.unit.fake_client.FakeCinderClient.Volumes'
                        '.get',
                        side_effect=cinder_exception.ClientException(404)):
            fake_state_monitor = state_monitor.StateMonitor(fake_cinder_client,
                                                            fake_cinder_volume,
                                                            None, None, -1)
            self.assertRaises(exceptions.TimeoutException,
                              fake_state_monitor.monitor_cinder_volume)

        with mock.patch('fuxi.tests.unit.fake_client.FakeCinderClient.Volumes'
                        '.get',
                        side_effect=cinder_exception.ClientException(404)):
            fake_state_monitor = state_monitor.StateMonitor(fake_cinder_client,
                                                            fake_cinder_volume,
                                                            None, None)
            self.assertRaises(cinder_exception.ClientException,
                              fake_state_monitor.monitor_cinder_volume)

    def test_monitor_cinder_volume_unexpected_state(self):
        fake_cinder_client = fake_client.FakeCinderClient()
        fake_cinder_volume = fake_object.FakeCinderVolume(status='available')
        fake_desired_state = 'in-use'
        fake_transient_states = ('in-use',)
        fake_time_limit = 0

        fake_state_monitor = state_monitor.StateMonitor(fake_cinder_client,
                                                        fake_cinder_volume,
                                                        fake_desired_state,
                                                        fake_transient_states,
                                                        fake_time_limit)
        fake_desired_volume = fake_object.FakeCinderVolume(status='attaching')

        with mock.patch.object(fake_client.FakeCinderClient.Volumes, 'get',
                               return_value=fake_desired_volume):
            self.assertRaises(exceptions.UnexpectedStateException,
                              fake_state_monitor.monitor_cinder_volume)

    def test_monitor_manila_share(self):
        fake_manila_client = fake_client.FakeManilaClient()
        fake_manila_share = fake_object.FakeManilaShare(status='creating')
        fake_desired_state = 'available'
        fake_transient_states = ('creating',)
        fake_state_monitor = state_monitor.StateMonitor(fake_manila_client,
                                                        fake_manila_share,
                                                        fake_desired_state,
                                                        fake_transient_states,
                                                        0)

        fake_desired_share = fake_object.FakeManilaShare(status='available')
        with mock.patch.object(fake_client.FakeManilaClient.Shares, 'get',
                               return_value=fake_desired_share):
            self.assertEqual(fake_desired_share,
                             fake_state_monitor.monitor_manila_share())

    def test_monitor_manila_share_get_failed(self):
        fake_manila_client = fake_client.FakeManilaClient()
        fake_manila_share = fake_object.FakeManilaShare(status='creating')

        with mock.patch('fuxi.tests.unit.fake_client'
                        '.FakeManilaClient.Shares.get',
                        side_effect=manila_exception.ClientException(404)):
            fake_state_monitor = state_monitor.StateMonitor(fake_manila_client,
                                                            fake_manila_share,
                                                            None, None, -1)
            self.assertRaises(exceptions.TimeoutException,
                              fake_state_monitor.monitor_manila_share)

        with mock.patch('fuxi.tests.unit.fake_client'
                        '.FakeManilaClient.Shares.get',
                        side_effect=manila_exception.ClientException(404)):
            fake_state_monitor = state_monitor.StateMonitor(fake_manila_client,
                                                            fake_manila_share,
                                                            None, None)
            self.assertRaises(manila_exception.ClientException,
                              fake_state_monitor.monitor_manila_share)

    def test_monitor_manila_share_unexpected_state(self):
        fake_manila_client = fake_client.FakeManilaClient()
        fake_manila_share = fake_object.FakeManilaShare(status='creating')

        fake_state_monitor = state_monitor.StateMonitor(fake_manila_client,
                                                        fake_manila_share,
                                                        'available',
                                                        ('creating',),
                                                        0)
        fake_desired_share = fake_object.FakeManilaShare(status='unknown')

        with mock.patch.object(fake_client.FakeManilaClient.Shares, 'get',
                               return_value=fake_desired_share):
            self.assertRaises(exceptions.UnexpectedStateException,
                              fake_state_monitor.monitor_manila_share)

    def test_monitor_share_access(self):
        fake_manila_client = fake_client.FakeManilaClient()
        fake_manila_share = fake_object.FakeManilaShare()
        fake_state_monitor = state_monitor.StateMonitor(fake_manila_client,
                                                        fake_manila_share,
                                                        'active',
                                                        ('new',),
                                                        0)

        fake_desired_sl = [fake_object.FakeShareAccess(
            access_type='ip', access_to='192.168.0.1', state='active')]
        with mock.patch.object(fake_client.FakeManilaClient.Shares,
                               'access_list',
                               return_value=fake_desired_sl):
            self.assertEqual(fake_manila_share,
                             fake_state_monitor.monitor_share_access(
                                 'ip', '192.168.0.1'))

    def test_monitor_share_access_list_failed(self):
        fake_manila_client = fake_client.FakeManilaClient()
        fake_manila_share = fake_object.FakeManilaShare()
        with mock.patch('fuxi.tests.unit.fake_client.FakeManilaClient.Shares'
                        '.access_list',
                        side_effect=manila_exception.ClientException(404)):
            fake_state_monitor = state_monitor.StateMonitor(fake_manila_client,
                                                            fake_manila_share,
                                                            None, None, -1)
            self.assertRaises(exceptions.TimeoutException,
                              fake_state_monitor.monitor_share_access,
                              'ip', '192.168.0.1')

        with mock.patch('fuxi.tests.unit.fake_client.FakeManilaClient.Shares'
                        '.access_list',
                        side_effect=manila_exception.ClientException(404)):
            fake_state_monitor = state_monitor.StateMonitor(fake_manila_client,
                                                            fake_manila_share,
                                                            None, None)
            self.assertRaises(manila_exception.ClientException,
                              fake_state_monitor.monitor_share_access,
                              'ip', '192.168.0.1')

    def test_monitor_share_access_unexpected_state(self):
        fake_manila_client = fake_client.FakeManilaClient()
        fake_manila_share = fake_object.FakeManilaShare()

        fake_state_monitor = state_monitor.StateMonitor(fake_manila_client,
                                                        fake_manila_share,
                                                        'active',
                                                        ('new',),
                                                        0)
        fake_desired_sl = [fake_object.FakeShareAccess(
            access_type='ip', access_to='192.168.0.1', state='unknown')]
        with mock.patch.object(fake_client.FakeManilaClient.Shares,
                               'access_list', return_value=fake_desired_sl):
            self.assertRaises(exceptions.UnexpectedStateException,
                              fake_state_monitor.monitor_share_access,
                              'ip', '192.168.0.1')
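The `StateMonitor` under test implements a bounded poll-until-state loop: return on the desired state, tolerate transient states until a time limit, and fail on anything else. In outline (a sketch of the technique, not fuxi's exact implementation or signatures):

```python
import time


class TimeoutException(Exception):
    """Waiting for the desired state exceeded the time limit."""


def monitor(get_status, desired, transient, time_limit, interval=0.0):
    """Poll get_status() until it returns `desired`.

    Transient states are tolerated until `time_limit` seconds elapse,
    then TimeoutException is raised; any other state raises immediately.
    """
    deadline = time.time() + time_limit
    while True:
        status = get_status()
        if status == desired:
            return status
        if status not in transient:
            raise ValueError('unexpected state: %s' % status)
        if time.time() > deadline:
            raise TimeoutException()
        time.sleep(interval)


# Simulate a resource that becomes available on the third poll.
states = iter(['creating', 'creating', 'available'])
result = monitor(lambda: next(states), 'available', ('creating',), 5)
```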
@ -1,109 +0,0 @@
|
|||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import os

from fuxi.common import constants
from fuxi.common import state_monitor
from fuxi.connector.cloudconnector import openstack
from fuxi import utils
from fuxi.tests.unit import base, fake_client, fake_object

from cinderclient import exceptions as cinder_exception
from novaclient import exceptions as nova_exception


def mock_list_with_attach_to_this(cls, search_opts=None):
    if search_opts is None:
        search_opts = {}
    attachments = [{u'server_id': u'123',
                    u'attachment_id': u'123',
                    u'attached_at': u'2016-05-20T09:19:57.000000',
                    u'host_name': None,
                    u'device': None,
                    u'id': u'123'}]
    return [fake_object.FakeCinderVolume(name='fake-vol1',
                                         attachments=attachments)]


def mock_list_with_attach_to_other(cls, search_opts=None):
    if search_opts is None:
        search_opts = {}
    attachments = [{u'server_id': u'1234',
                    u'attachment_id': u'123',
                    u'attached_at': u'2016-05-20T09:19:57.000000',
                    u'host_name': None,
                    u'device': None,
                    u'id': u'123'}]
    return [fake_object.FakeCinderVolume(name='fake-vol1',
                                         attachments=attachments)]


def mock_get_mountpoint_for_device(devpath, mountpoint):
    return ''


class TestCinderConnector(base.TestCase):
    def setUp(self):
        base.TestCase.setUp(self)
        self.connector = openstack.CinderConnector()
        self.connector.cinderclient = fake_client.FakeCinderClient()
        self.connector.novaclient = fake_client.FakeNovaClient()

    def test_connect_volume(self):
        pass

    @mock.patch.object(utils, 'get_instance_uuid', return_value='fake-123')
    @mock.patch.object(utils, 'execute')
    @mock.patch.object(state_monitor.StateMonitor, 'monitor_cinder_volume',
                       return_value=None)
    def test_disconnect_volume(self, mock_inst_id, mock_execute, mock_monitor):
        fake_cinder_volume = fake_object.FakeCinderVolume()
        result = self.connector.disconnect_volume(fake_cinder_volume)
        self.assertIsNone(result)

    @mock.patch('fuxi.tests.unit.fake_client.FakeCinderClient.Volumes.get',
                side_effect=cinder_exception.ClientException(404))
    @mock.patch.object(utils, 'execute')
    @mock.patch.object(state_monitor.StateMonitor,
                       'monitor_cinder_volume')
    def test_disconnect_volume_for_not_found(self, mock_get, mock_execute,
                                             mock_monitor):
        fake_cinder_volume = fake_object.FakeCinderVolume()
        self.assertRaises(cinder_exception.ClientException,
                          self.connector.disconnect_volume,
                          fake_cinder_volume)

    @mock.patch('fuxi.tests.unit.fake_client.FakeNovaClient.Volumes'
                '.delete_server_volume',
                side_effect=nova_exception.ClientException(500))
    @mock.patch.object(utils, 'get_instance_uuid', return_value='fake-123')
    @mock.patch.object(utils, 'execute')
    @mock.patch.object(state_monitor.StateMonitor,
                       'monitor_cinder_volume')
    def test_disconnect_volume_for_delete_server_volume_failed(self,
                                                               mock_delete,
                                                               mock_inst_id,
                                                               mock_execute,
                                                               mock_monitor):
        fake_cinder_volume = fake_object.FakeCinderVolume()
        self.assertRaises(nova_exception.ClientException,
                          self.connector.disconnect_volume,
                          fake_cinder_volume)

    def test_get_device_path(self):
        fake_cinder_volume = fake_object.FakeCinderVolume()
        fake_devpath = os.path.join(constants.VOLUME_LINK_DIR,
                                    fake_cinder_volume.id)
        self.assertEqual(fake_devpath,
                         self.connector.get_device_path(fake_cinder_volume))
@@ -1,281 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import os
import platform
import socket
import sys

from cinderclient import exceptions as cinder_exception
from manilaclient.common.apiclient import exceptions as manila_exception
from oslo_concurrency import processutils

from fuxi.common import constants
from fuxi.common import mount
from fuxi.common import state_monitor
from fuxi.connector import osbrickconnector
from fuxi.tests.unit import base, fake_client, fake_object
from fuxi import exceptions
from fuxi import utils


def mock_get_connector_properties(multipath=False, enforce_multipath=False):
    props = {}
    props['host'] = socket.gethostname()
    props['initiator'] = 'iqn.1993-08.org.debian:01:b57cc344932'
    props['platform'] = platform.machine()
    props['os_type'] = sys.platform
    return props


def mock_list_with_attach_to_this(cls, search_opts=None):
    if search_opts is None:
        search_opts = {}
    attachments = [{u'server_id': u'123',
                    u'attachment_id': u'123',
                    u'attached_at': u'2016-05-20T09:19:57.000000',
                    u'host_name': utils.get_hostname(),
                    u'device': None,
                    u'id': u'123'}]
    return [fake_object.FakeCinderVolume(name='fake-vol1',
                                         attachments=attachments)]


def mock_list_with_attach_to_other(cls, search_opts=None):
    if search_opts is None:
        search_opts = {}
    attachments = [{u'server_id': u'123',
                    u'attachment_id': u'123',
                    u'attached_at': u'2016-05-20T09:19:57.000000',
                    u'host_name': utils.get_hostname() + u'other',
                    u'device': None,
                    u'id': u'123'}]
    return [fake_object.FakeCinderVolume(name='fake-vol1',
                                         attachments=attachments)]


def mock_get_mountpoint_for_device(devpath, mountpoint):
    return ''


class TestCinderConnector(base.TestCase):
    def setUp(self):
        base.TestCase.setUp(self)
        self.connector = osbrickconnector.CinderConnector()
        self.connector.cinderclient = fake_client.FakeCinderClient()

    def test_connect_volume(self):
        fake_cinder_volume = fake_object.FakeCinderVolume()
        self.connector._connect_volume = mock.MagicMock()
        self.connector.connect_volume(fake_cinder_volume)
        self.assertEqual(1, len(fake_cinder_volume.attachments))

    @mock.patch.object(osbrickconnector, 'brick_get_connector',
                       return_value=fake_client.FakeOSBrickConnector())
    @mock.patch.object(utils, 'execute')
    def test_disconnect_volume(self, mock_brick_connector, mock_execute):
        attachments = [{u'server_id': u'123',
                        u'attachment_id': u'123',
                        u'attached_at': u'2016-05-20T09:19:57.000000',
                        u'host_name': utils.get_hostname(),
                        u'device': None,
                        u'id': u'123'}]
        fake_cinder_volume = \
            fake_object.FakeCinderVolume(attachments=attachments)

        self.connector._get_connection_info = mock.MagicMock()
        self.connector.cinderclient.volumes.detach = mock.MagicMock()
        self.assertIsNone(self.connector.disconnect_volume(fake_cinder_volume))

    @mock.patch.object(osbrickconnector, 'brick_get_connector_properties',
                       mock_get_connector_properties)
    @mock.patch.object(utils, 'execute')
    @mock.patch('fuxi.tests.unit.fake_client.FakeCinderClient.Volumes'
                '.initialize_connection',
                side_effect=cinder_exception.ClientException(500))
    def test_disconnect_volume_no_connection_info(self, mock_execute,
                                                  mock_init_conn):
        attachments = [{u'server_id': u'123',
                        u'attachment_id': u'123',
                        u'attached_at': u'2016-05-20T09:19:57.000000',
                        u'host_name': utils.get_hostname(),
                        u'device': None,
                        u'id': u'123'}]
        fake_cinder_volume = \
            fake_object.FakeCinderVolume(attachments=attachments)
        self.assertRaises(cinder_exception.ClientException,
                          self.connector.disconnect_volume,
                          fake_cinder_volume)

    @mock.patch.object(osbrickconnector, 'brick_get_connector',
                       return_value=fake_client.FakeOSBrickConnector())
    @mock.patch.object(osbrickconnector.CinderConnector,
                       '_get_connection_info',
                       return_value={'driver_volume_type': 'fake_proto',
                                     'data': {'path': '/dev/0'}})
    @mock.patch.object(utils, 'execute')
    @mock.patch('fuxi.tests.unit.fake_client.FakeOSBrickConnector'
                '.disconnect_volume',
                side_effect=processutils.ProcessExecutionError())
    def test_disconnect_volume_osbrick_disconnect_failed(self, mock_connector,
                                                         mock_init_conn,
                                                         mock_execute,
                                                         mock_disconnect_vol):
        attachments = [{u'server_id': u'123',
                        u'attachment_id': u'123',
                        u'attached_at': u'2016-05-20T09:19:57.000000',
                        u'host_name': utils.get_hostname(),
                        u'device': None,
                        u'id': u'123'}]
        fake_cinder_volume = \
            fake_object.FakeCinderVolume(attachments=attachments)
        self.assertRaises(processutils.ProcessExecutionError,
                          self.connector.disconnect_volume,
                          fake_cinder_volume)

    @mock.patch('fuxi.tests.unit.fake_client.FakeCinderClient.Volumes.detach',
                side_effect=cinder_exception.ClientException(500))
    @mock.patch.object(osbrickconnector, 'brick_get_connector',
                       return_value=fake_client.FakeOSBrickConnector())
    @mock.patch.object(utils, 'execute')
    @mock.patch.object(osbrickconnector.CinderConnector,
                       '_get_connection_info',
                       return_value={'driver_volume_type': 'fake_proto',
                                     'data': {'path': '/dev/0'}})
    def test_disconnect_volume_detach_failed(self, mock_detach,
                                             mock_brick_connector,
                                             mock_execute,
                                             mock_conn_info):
        attachments = [{u'server_id': u'123',
                        u'attachment_id': u'123',
                        u'attached_at': u'2016-05-20T09:19:57.000000',
                        u'host_name': utils.get_hostname(),
                        u'device': None,
                        u'id': u'123'}]
        fake_cinder_volume = \
            fake_object.FakeCinderVolume(attachments=attachments)
        self.assertRaises(cinder_exception.ClientException,
                          self.connector.disconnect_volume,
                          fake_cinder_volume)

    def test_get_device_path(self):
        fake_cinder_volume = fake_object.FakeCinderVolume()
        self.assertEqual(os.path.join(constants.VOLUME_LINK_DIR,
                                      fake_cinder_volume.id),
                         self.connector.get_device_path(fake_cinder_volume))


class TestManilaConnector(base.TestCase):
    def setUp(self):
        base.TestCase.setUp(self)
        self._set_connector()

    @mock.patch.object(utils, 'get_manilaclient',
                       return_value=fake_client.FakeManilaClient())
    def _set_connector(self, mock_client):
        self.connector = osbrickconnector.ManilaConnector()
        self.connector.manilaclient = fake_client.FakeManilaClient()
        self.connector._get_brick_connector = mock.MagicMock()
        self.connector._get_brick_connector.return_value \
            = fake_client.FakeOSBrickConnector()

    def test_check_access_allowed(self):
        fake_share = fake_object.FakeManilaShare(share_proto='UNKNOWN')
        self.assertFalse(self.connector.check_access_allowed(fake_share))

        fake_share = fake_object.FakeManilaShare(share_proto='NFS')
        self.assertFalse(self.connector.check_access_allowed(fake_share))

        fake_al = [fake_object.FakeShareAccess(access_type='ip',
                                               access_to='192.168.0.1',
                                               state='active')]
        with mock.patch('fuxi.tests.unit.fake_client.FakeManilaClient.Shares'
                        '.access_list',
                        return_value=fake_al):
            with mock.patch.object(self.connector, '_get_access_to',
                                   return_value='192.168.0.1'):
                self.assertTrue(
                    self.connector.check_access_allowed(fake_share))

    def test_connect_volume(self):
        fake_share = fake_object.FakeManilaShare(share_proto='NFS')
        self.connector._get_access_to = mock.MagicMock()
        self.connector._get_access_to.return_value = '192.168.0.2'
        with mock.patch.object(state_monitor.StateMonitor,
                               'monitor_share_access'):
            self.assertEqual(fake_share.export_location,
                             self.connector.connect_volume(fake_share)['path'])

    def test_connect_volume_failed(self):
        fake_share = fake_object.FakeManilaShare(share_proto='NFS')
        self.connector._get_access_to = mock.MagicMock()
        self.connector._get_access_to.return_value = '192.168.0.2'
        with mock.patch('fuxi.tests.unit.fake_client.FakeManilaClient'
                        '.Shares.allow',
                        side_effect=manila_exception.ClientException(500)):
            self.assertRaises(manila_exception.ClientException,
                              self.connector.connect_volume,
                              fake_share)

    def test_connect_volume_invalid_proto(self):
        fake_share = fake_object.FakeManilaShare(share_proto='invalid_proto')
        self.assertRaises(exceptions.InvalidProtocol,
                          self.connector.connect_volume,
                          fake_share)

    def test_connect_volume_invalid_access_type(self):
        fake_share = fake_object.FakeManilaShare(share_proto='NFS')
        self.connector.proto_access_type_map = {'NFS': 'invalid_type'}
        self.assertRaises(exceptions.InvalidAccessType,
                          self.connector.connect_volume,
                          fake_share)

    def test_connect_volume_invalid_access_to(self):
        fake_share = fake_object.FakeManilaShare(share_proto='GLUSTERFS')
        fake_al = [fake_object.FakeShareAccess(access_type='cert',
                                               access_to='test@local',
                                               state='active')]

        with mock.patch('fuxi.tests.unit.fake_client.FakeManilaClient.Shares'
                        '.access_list',
                        return_value=fake_al):
            self.assertRaises(exceptions.InvalidAccessTo,
                              self.connector.connect_volume,
                              fake_share)

    @mock.patch.object(mount.Mounter, 'unmount')
    def test_disconnect_volume(self, mock_unmount):
        fake_share = fake_object.FakeManilaShare(share_proto='NFS')
        self.connector._get_access_to = mock.MagicMock()
        self.connector._get_access_to.return_value = '192.168.0.2'
        self.assertIsNone(self.connector.disconnect_volume(fake_share))

    def test_get_device_path(self):
        fake_manila_share = fake_object.FakeManilaShare()
        self.assertEqual(fake_manila_share.export_location,
                         self.connector.get_device_path(fake_manila_share))

    def test_get_mountpoint(self):
        fake_manila_share = fake_object.FakeManilaShare()
        with mock.patch.object(self.connector, 'check_access_allowed',
                               return_value=False):
            self.assertEqual('',
                             self.connector.get_mountpoint(fake_manila_share))
        with mock.patch.object(self.connector, 'check_access_allowed',
                               return_value=True):
            with mock.patch.object(fake_client.FakeOSBrickConnector,
                                   'get_volume_paths',
                                   return_value=['/fuxi/data/fake-vol/nfs']):
                self.assertEqual('/fuxi/data/fake-vol',
                                 self.connector.get_mountpoint(
                                     fake_manila_share))
@@ -1,134 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuxi.tests.unit import fake_object

from cinderclient import exceptions as cinder_exception


class FakeCinderClient(object):
    class Volumes(object):
        def get(self, volume_id):
            return fake_object.FakeCinderVolume(id=volume_id)

        def list(self, search_opts=None):
            if search_opts is None:
                search_opts = {}
            return [fake_object.FakeCinderVolume(name='fake-vol1')]

        def create(self, *args, **kwargs):
            return fake_object.FakeCinderVolume(**kwargs)

        def delete(self, volume_id):
            return

        def attach(self, volume, instance_uuid, mountpoint, host_name):
            if not instance_uuid and not host_name:
                raise cinder_exception.ClientException

            attachment = {u'server_id': instance_uuid,
                          u'attachment_id': u'123',
                          u'attached_at': u'2016-05-20T09:19:57.000000',
                          u'host_name': host_name,
                          u'device': None,
                          u'id': u'123'}

            volume.attachments.append(attachment)
            return volume

        def detach(self, volume_id, attachment_uuid):
            pass

        def initialize_connection(self, volume, connector):
            return {'data': {}}

        def reserve(self, volume):
            return

        def update(self, volume, **kwargs):
            for key, value in kwargs.items():
                if hasattr(volume, key):
                    setattr(volume, key, value)

        def set_metadata(self, volume, metadata):
            md = volume.metadata
            md.update(metadata)

        def __getattr__(self, item):
            return None

    def __init__(self):
        self.volumes = self.Volumes()


class FakeNovaClient(object):
    class Volumes(object):
        def create_server_volume(self, volume_id):
            pass

        def delete_server_volume(self, server_id, volume_id):
            return None

    def __init__(self):
        self.volumes = self.Volumes()


class FakeOSBrickConnector(object):
    def connect_volume(self, connection_properties):
        return {'path': 'fake-path'}

    def disconnect_volume(self, connection_properties, device_info):
        pass

    def get_volume_paths(self, connection_properties):
        return ['/fuxi/data/fake-vol']


class FakeManilaClient(object):
    class Shares(object):
        def get(self, share):
            try:
                return fake_object.FakeManilaShare(id=share.id)
            except AttributeError:
                return fake_object.FakeManilaShare(id=share)

        def create(self, *args, **kwargs):
            pass

        def list(self):
            return []

        def allow(self, share, access_type, access, access_level):
            pass

        def deny(self, share, share_access_id):
            pass

        def access_list(self, share):
            return []

        def update(self, **kwargs):
            pass

        def update_all_metadata(self, share, metadata):
            share.metadata.update(**metadata)

    class ShareNetworks(object):
        def list(self):
            return []

        def create(self):
            pass

    def __init__(self):
        self.shares = self.Shares()
        self.share_networks = self.ShareNetworks()
@@ -1,83 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

DEFAULT_VOLUME_ID = 'efd46583-4bf7-40d5-a027-2ee3dbe74f56'
DEFAULT_VOLUME_NAME = 'fake_vol'

base_cinder_volume = {
    'attachments': [],
    'availability_zone': 'nova',
    'id': DEFAULT_VOLUME_ID,
    'size': 15,
    'display_name': DEFAULT_VOLUME_NAME,
    'metadata': {
        'readonly': 'False',
        'volume_from': 'fuxi',
        'fstype': 'ext4',
    },
    'status': 'available',
    'multiattach': 'false',
    'volume_type': 'lvmdriver-1',
}


class FakeCinderVolume(object):
    def __init__(self, **kwargs):
        if 'name' in kwargs:
            kwargs['display_name'] = kwargs.pop('name')
        volume = copy.deepcopy(base_cinder_volume)
        volume.update(kwargs)

        for key, value in volume.items():
            setattr(self, key, value)

    def get_name(self):
        return self.display_name

    def set_name(self, name):
        self.display_name = name

    name = property(get_name, set_name)


fake_share = {
    'id': DEFAULT_VOLUME_ID,
    'name': DEFAULT_VOLUME_NAME,
    'export_location': '192.168.0.1:/tmp/share',
    'share_proto': 'NFS'
}


class FakeManilaShare(object):
    def __init__(self, **kwargs):
        share = copy.deepcopy(fake_share)
        share.update(kwargs)
        for key, value in share.items():
            setattr(self, key, value)


fake_share_access = {
    'share_id': 'efd46583-4bf7-40d5-a027-2ee3dbe74f56',
    'access_type': 'ip',
    'access_to': '192.168.0.2',
    'access_level': 'rw'
}


class FakeShareAccess(object):
    def __init__(self, **kwargs):
        share_access = copy.deepcopy(fake_share_access)
        share_access.update(kwargs)
        for key, value in share_access.items():
            setattr(self, key, value)
@@ -1,336 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_fuxi
----------------------------------

Tests for `fuxi` module.
"""
import collections
import mock
import unittest

from fuxi import app
from fuxi.common import config
from fuxi.controllers import volume_providers_conf
from fuxi import exceptions
from fuxi.tests.unit import base

from oslo_serialization import jsonutils


def fake_mountpoint(name):
    volume_dir = config.CONF.volume_dir.rstrip('/')
    return ''.join((volume_dir, name))


def fake_volume(name):
    volume_dir = config.CONF.volume_dir.rstrip('/')
    return {'Name': name, 'Mountpoint': ''.join((volume_dir, name))}


class FakeProvider(object):
    def __init__(self, volume_provider_type):
        self.volume_provider_type = volume_provider_type

    def create(self, docker_volume_name, volume_opts):
        pass

    def delete(self, docker_volume_name):
        pass

    def list(self):
        pass

    def path(self, docker_volume_name):
        pass

    def show(self, docker_volume_name):
        pass

    def mount(self, docker_volume_name):
        pass

    def unmount(self, docker_volume_name):
        pass

    def check_exist(self, docker_volume_name):
        return False


class TestFuxi(base.TestCase):
    def setUp(self):
        super(TestFuxi, self).setUp()
        app.config['DEBUG'] = True
        app.config['TESTING'] = True
        self.app = app.test_client()

    def volume_providers_setup(self, volume_provider_types):
        if not volume_provider_types:
            raise Exception

        app.volume_providers = collections.OrderedDict()
        for vpt in volume_provider_types:
            if vpt in volume_providers_conf:
                app.volume_providers[vpt] = FakeProvider(vpt)

    def test_plugin_activate(self):
        response = self.app.post('/Plugin.Activate')
        fake_response = {
            u'Implements': [u'VolumeDriver']
        }
        self.assertEqual(200, response.status_code)
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_create(self):
        self.volume_providers_setup(['cinder'])
        fake_request = {
            u'Name': u'test-vol',
            u'Opts': {u'size': u'1'},
        }
        for provider in app.volume_providers.values():
            provider.check_exist = mock.MagicMock()
            provider.check_exist.return_value = False
            provider.create = mock.MagicMock()

        response = self.app.post('/VolumeDriver.Create',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Err': u''
        }

        self.assertEqual(200, response.status_code)
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_create_without_name(self):
        self.volume_providers_setup(['cinder'])
        fake_request = {u'Opts': {}}
        response = self.app.post('VolumeDriver.Create',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        self.assertEqual(500, response.status_code)
        self.assertIsNotNone(jsonutils.loads(response.data))

    def test_volumedriver_create_with_invalid_opts(self):
        self.volume_providers_setup(['cinder'])
        fake_request = {u'Name': u'test-vol', u'Opts': u'invalid'}
        response = self.app.post('VolumeDriver.Create',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        self.assertEqual(500, response.status_code)
        self.assertIsNotNone(jsonutils.loads(response.data))

    def test_volumedriver_create_invalid_volume_provider(self):
        self.volume_providers_setup(['cinder'])
        fake_request = {
            u'Name': u'test-vol',
            u'Opts': {u'size': u'1',
                      u'volume_provider': u'provider'}}
        for provider in app.volume_providers.values():
            provider.check_exist = mock.MagicMock()
            provider.check_exist.return_value = False
            provider.create = mock.MagicMock()

        response = self.app.post('VolumeDriver.Create',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Err': u''
        }
        self.assertEqual(200, response.status_code)
        self.assertNotEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_remove(self):
        self.volume_providers_setup(['cinder'])
        fake_request = {
            u'Name': u'test-vol'
        }
        for provider in app.volume_providers.values():
            provider.delete = mock.MagicMock()
            provider.delete.return_value = True

        response = self.app.post('/VolumeDriver.Remove',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Err': u''
        }
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_remove_with_volume_not_exist(self):
        self.volume_providers_setup(['cinder'])
        fake_request = {
            u'Name': u'test-vol',
        }
        for provider in app.volume_providers.values():
            provider.delete = mock.MagicMock()
            provider.delete.return_value = False

        response = self.app.post('/VolumeDriver.Remove',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Err': u''
        }
        self.assertEqual(200, response.status_code)
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_mount(self):
        self.volume_providers_setup(['cinder'])
        fake_name = u'test-vol'
        fake_request = {
            u'Name': fake_name
        }

        for provider in app.volume_providers.values():
            provider.check_exist = mock.MagicMock()
            provider.check_exist.return_value = True
            provider.mount = mock.MagicMock()
            provider.mount.return_value = fake_mountpoint(fake_name)

        response = self.app.post('/VolumeDriver.Mount',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Mountpoint': fake_mountpoint(fake_name),
            u'Err': u''
        }
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_mount_with_volume_not_exist(self):
        self.volume_providers_setup(['cinder'])
        fake_name = u'test-vol'
        fake_request = {
            u'Name': fake_name,
        }
        for provider in app.volume_providers.values():
            provider.check_exist = mock.MagicMock()
            provider.check_exist.return_value = False
        response = self.app.post('/VolumeDriver.Mount',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Mountpoint': fake_mountpoint(fake_name),
            u'Err': u''
        }
        self.assertEqual(200, response.status_code)
        self.assertNotEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_path(self):
        self.volume_providers_setup(['cinder'])
        fake_name = u'test-vol'
        fake_request = {
            u'Name': fake_name
        }
        for provider in app.volume_providers.values():
            provider.show = mock.MagicMock()
            provider.show.return_value = fake_volume(fake_name)

        response = self.app.post('/VolumeDriver.Path',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Mountpoint': fake_mountpoint(fake_name),
            u'Err': u''
        }
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_path_with_volume_not_exist(self):
        self.volume_providers_setup(['cinder'])
        fake_docker_volume_name = u'test-vol'
        fake_request = {
            u'Name': fake_docker_volume_name
        }
        for provider in app.volume_providers.values():
            provider.show = mock.MagicMock(side_effect=exceptions.NotFound)

        response = self.app.post('/VolumeDriver.Path',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Err': u'Mountpoint Not Found'
        }
        self.assertEqual(200, response.status_code)
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_unmount(self):
        self.volume_providers_setup(['cinder'])
        fake_request = {
            u'Name': u'test-vol'
        }
        response = self.app.post('/VolumeDriver.Unmount',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Err': u''
        }
        self.assertEqual(200, response.status_code)
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_get(self):
        self.volume_providers_setup(['cinder'])
        fake_name = u'test-vol'
        fake_request = {
            u'Name': fake_name
        }
        for provider in app.volume_providers.values():
            provider.show = mock.MagicMock()
            provider.show.return_value = fake_volume(fake_name)

        response = self.app.post('/VolumeDriver.Get',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Volume': {u'Name': fake_name,
                        u'Mountpoint': fake_mountpoint(fake_name)},
            u'Err': u''
        }
        self.assertEqual(200, response.status_code)
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_get_with_volume_not_exist(self):
        self.volume_providers_setup(['cinder'])
        fake_docker_volume_name = u'test-vol'
        fake_request = {
            u'Name': fake_docker_volume_name
        }
        for provider in app.volume_providers.values():
            provider.show = mock.MagicMock(side_effect=exceptions.NotFound())

        response = self.app.post('/VolumeDriver.Get',
                                 content_type='application/json',
                                 data=jsonutils.dumps(fake_request))
        fake_response = {
            u'Err': u'Volume Not Found'
        }
        self.assertEqual(200, response.status_code)
        self.assertEqual(fake_response, jsonutils.loads(response.data))

    def test_volumedriver_list(self):
        self.volume_providers_setup(['cinder'])
        for provider in app.volume_providers.values():
            provider.list = mock.MagicMock()
            provider.list.return_value = []

        response = self.app.post('/VolumeDriver.List',
                                 content_type='application/json')

        fake_response = {
            u'Volumes': [],
            u'Err': u''
        }
        self.assertEqual(fake_response, jsonutils.loads(response.data))


if __name__ == '__main__':
    unittest.main()
@ -1,442 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from mock import mock
import os
import tempfile

from fuxi.common import config
from fuxi.common import constants as consts
from fuxi.common import mount
from fuxi.common import state_monitor
from fuxi import exceptions
from fuxi.tests.unit import base, fake_client, fake_object
from fuxi import utils
from fuxi.volumeprovider import cinder

from cinderclient import exceptions as cinder_exception

volume_link_dir = consts.VOLUME_LINK_DIR
DEFAULT_VOLUME_ID = fake_object.DEFAULT_VOLUME_ID

CONF = config.CONF


class FakeCinderConnector(object):
    def __init__(self):
        pass

    def connect_volume(self, volume, **connect_opts):
        return {'path': os.path.join(volume_link_dir, volume.id)}

    def disconnect_volume(self, volume, **disconnect_opts):
        pass

    def get_device_path(self, volume):
        return os.path.join(volume_link_dir, volume.id)


def mock_connector(cls):
    return FakeCinderConnector()


def mock_monitor_cinder_volume(cls):
    cls.expected_obj.status = cls.desired_state
    return cls.expected_obj


def mock_device_path_for_delete(cls, volume):
    return volume.id


class TestCinder(base.TestCase):
    volume_provider_type = 'cinder'

    def setUp(self):
        base.TestCase.setUp(self)
        self.cinderprovider = cinder.Cinder()
        self.cinderprovider.cinderclient = fake_client.FakeCinderClient()

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(None, consts.UNKNOWN))
    def test_create_with_volume_not_exist(self, mock_docker_volume):
        self.assertEqual(os.path.join(volume_link_dir, DEFAULT_VOLUME_ID),
                         self.cinderprovider.create('fake-vol', {})['path'])

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(
                           status='unknown'), consts.UNKNOWN))
    @mock.patch.object(state_monitor.StateMonitor, 'monitor_cinder_volume',
                       mock_monitor_cinder_volume)
    def test_create_from_volume_id(self, mock_docker_volume):
        fake_volume_name = 'fake_vol'
        fake_volume_opts = {'volume_id': DEFAULT_VOLUME_ID}
        result = self.cinderprovider.create(fake_volume_name,
                                            fake_volume_opts)
        self.assertEqual(os.path.join(consts.VOLUME_LINK_DIR,
                                      DEFAULT_VOLUME_ID),
                         result['path'])

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(
                           status='unknown'), consts.UNKNOWN))
    @mock.patch('fuxi.tests.unit.fake_client.FakeCinderClient.Volumes.get',
                side_effect=cinder_exception.ClientException(404))
    def test_create_from_volume_id_with_volume_not_exist(self,
                                                         mock_docker_volume,
                                                         mock_volume_get):
        fake_volume_name = 'fake_vol'
        fake_volume_opts = {'volume_id': DEFAULT_VOLUME_ID}
        self.assertRaises(cinder_exception.ClientException,
                          self.cinderprovider.create,
                          fake_volume_name,
                          fake_volume_opts)

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(
                           status='unknown'), consts.UNKNOWN))
    def test_create_from_volume_id_with_unexpected_status_1(
            self, mock_docker_volume):
        fake_volume_name = 'fake_vol'
        fake_volume_args = {'volume_id': DEFAULT_VOLUME_ID,
                            'status': 'attaching'}
        fake_cinder_volume = fake_object.FakeCinderVolume(**fake_volume_args)
        self.cinderprovider._get_docker_volume = mock.MagicMock()
        self.cinderprovider._get_docker_volume.return_value \
            = (fake_cinder_volume,
               consts.UNKNOWN)
        self.cinderprovider.cinderclient.volumes.get = mock.MagicMock()
        self.cinderprovider.cinderclient.volumes.get.return_value = \
            fake_cinder_volume
        self.assertRaises(exceptions.FuxiException,
                          self.cinderprovider.create,
                          fake_volume_name,
                          {'volume_id': DEFAULT_VOLUME_ID})

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    def test_create_from_volume_id_with_unexpected_status_2(self):
        fake_server_id = 'fake_server_123'
        fake_host_name = 'attached_to_other'
        fake_volume_name = 'fake_vol'
        fake_volume_args = {'volume_id': DEFAULT_VOLUME_ID,
                            'status': 'in-use',
                            'multiattach': False,
                            'attachments': [{'server_id': fake_server_id,
                                             'host_name': fake_host_name}]}
        fake_cinder_volume = fake_object.FakeCinderVolume(**fake_volume_args)
        self.cinderprovider._get_docker_volume = mock.MagicMock()
        self.cinderprovider._get_docker_volume.return_value \
            = (fake_cinder_volume,
               consts.UNKNOWN)
        self.cinderprovider.cinderclient.volumes.get = mock.MagicMock()
        self.cinderprovider.cinderclient.volumes.get.return_value = \
            fake_cinder_volume
        self.assertRaises(exceptions.FuxiException,
                          self.cinderprovider.create,
                          fake_volume_name,
                          {'volume_id': DEFAULT_VOLUME_ID})

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    def test_create_with_volume_attach_to_this(self):
        fake_server_id = 'fake_server_123'
        fake_host_name = 'attached_to_this'
        fake_volume_args = {'id': DEFAULT_VOLUME_ID,
                            'status': 'in-use',
                            'attachments': [{'server_id': fake_server_id,
                                             'host_name': fake_host_name}]
                            }
        fake_cinder_volume = fake_object.FakeCinderVolume(**fake_volume_args)
        self.cinderprovider._get_docker_volume = mock.MagicMock()
        self.cinderprovider._get_docker_volume.return_value \
            = (fake_cinder_volume,
               consts.ATTACH_TO_THIS)
        self.cinderprovider.cinderclient.volumes.get = mock.MagicMock()
        self.cinderprovider.cinderclient.volumes.get.return_value = \
            fake_cinder_volume
        fake_result = self.cinderprovider.create('fake-vol', {})
        self.assertEqual(os.path.join(volume_link_dir, DEFAULT_VOLUME_ID),
                         fake_result['path'])

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    def test_create_with_volume_no_attach(self):
        fake_cinder_volume = fake_object.FakeCinderVolume()
        self.cinderprovider._get_docker_volume = mock.MagicMock()
        self.cinderprovider._get_docker_volume.return_value \
            = (fake_cinder_volume,
               consts.NOT_ATTACH)
        fake_result = self.cinderprovider.create('fake-vol', {})
        self.assertEqual(os.path.join(volume_link_dir, DEFAULT_VOLUME_ID),
                         fake_result['path'])

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(
                           multiattach=True), consts.ATTACH_TO_OTHER))
    def test_create_with_multiable_vol_attached_to_other(self,
                                                         mock_docker_volume):
        self.assertEqual(os.path.join(volume_link_dir,
                                      fake_object.DEFAULT_VOLUME_ID),
                         self.cinderprovider.create('fake-vol', {})['path'])

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(
                           multiattach=False), consts.ATTACH_TO_OTHER))
    def test_create_with_volume_attached_to_other(self, mock_docker_volume):
        self.assertRaises(exceptions.FuxiException,
                          self.cinderprovider.create,
                          'fake-vol',
                          {})

    def test_create_with_multi_matched_volumes(self):
        fake_vol_name = 'fake-vol'
        fake_vols = [fake_object.FakeCinderVolume(name=fake_vol_name),
                     fake_object.FakeCinderVolume(name=fake_vol_name)]
        with mock.patch.object(fake_client.FakeCinderClient.Volumes, 'list',
                               return_value=fake_vols):
            self.assertRaises(exceptions.TooManyResources,
                              self.cinderprovider.create,
                              fake_vol_name,
                              {})

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(utils, 'execute')
    @mock.patch.object(FakeCinderConnector,
                       'get_device_path',
                       mock_device_path_for_delete)
    def test_delete(self, mock_execute):
        fd, tmpfname = tempfile.mkstemp()
        attachments = [{u'server_id': u'123',
                        u'attachment_id': u'123',
                        u'attached_at': u'2016-05-20T09:19:57.000000',
                        u'host_name': utils.get_hostname(),
                        u'device': None,
                        u'id': u'123'}]

        self.cinderprovider._get_docker_volume = mock.MagicMock()
        self.cinderprovider._get_docker_volume.return_value = (
            fake_object.FakeCinderVolume(id=tmpfname,
                                         attachments=attachments),
            consts.ATTACH_TO_THIS)
        self.cinderprovider._delete_volume = mock.MagicMock()

        self.assertTrue(self.cinderprovider.delete('fake-vol'))

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(),
                                     consts.NOT_ATTACH))
    def test_delete_not_attach(self, mock_docker_volume):
        self.cinderprovider._delete_volume = mock.MagicMock()
        self.assertTrue(self.cinderprovider.delete('fake-vol'))

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(),
                                     consts.ATTACH_TO_OTHER))
    def test_delete_attach_to_other(self, mock_docker_volume):
        self.assertTrue(self.cinderprovider.delete('fake-vol'))

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(status=None),
                                     None))
    def test_delete_not_match_state(self, mock_docker_volume):
        self.assertRaises(exceptions.NotMatchedState,
                          self.cinderprovider.delete,
                          'fake-vol')

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(utils, 'execute')
    @mock.patch.object(FakeCinderConnector,
                       'get_device_path',
                       mock_device_path_for_delete)
    @mock.patch('fuxi.tests.unit.fake_client.FakeCinderClient.Volumes.delete',
                side_effect=cinder_exception.ClientException(500))
    def test_delete_failed(self, mock_execute, mock_delete):
        fd, tmpfname = tempfile.mkstemp()
        attachments = [{u'server_id': u'123',
                        u'attachment_id': u'123',
                        u'attached_at': u'2016-05-20T09:19:57.000000',
                        u'host_name': utils.get_hostname(),
                        u'device': None,
                        u'id': u'123'}]

        self.cinderprovider._get_docker_volume = mock.MagicMock()
        self.cinderprovider._get_docker_volume.return_value = (
            fake_object.FakeCinderVolume(id=tmpfname,
                                         attachments=attachments),
            consts.ATTACH_TO_THIS)

        self.assertRaises(cinder_exception.ClientException,
                          self.cinderprovider.delete,
                          'fake-vol')

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(utils, 'execute')
    @mock.patch.object(FakeCinderConnector,
                       'get_device_path',
                       mock_device_path_for_delete)
    def test_delete_timeout(self, mock_execute):
        consts.DESTROY_VOLUME_TIMEOUT = 4
        fd, tmpfname = tempfile.mkstemp()
        attachments = [{u'server_id': u'123',
                        u'attachment_id': u'123',
                        u'attached_at': u'2016-05-20T09:19:57.000000',
                        u'host_name': utils.get_hostname(),
                        u'device': None,
                        u'id': u'123'}]

        self.cinderprovider._get_docker_volume = mock.MagicMock()
        self.cinderprovider._get_docker_volume.return_value = (
            fake_object.FakeCinderVolume(id=tmpfname,
                                         attachments=attachments),
            consts.ATTACH_TO_THIS)

        self.assertRaises(exceptions.TimeoutException,
                          self.cinderprovider.delete,
                          'fake-vol')

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    def test_list(self):
        fake_vols = [fake_object.FakeCinderVolume(name='fake-vol1')]
        with mock.patch.object(fake_client.FakeCinderClient.Volumes, 'list',
                               return_value=fake_vols):
            self.assertEqual([{'Name': 'fake-vol1', 'Mountpoint': ''}],
                             self.cinderprovider.list())

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch('fuxi.tests.unit.fake_client.FakeCinderClient.Volumes.list',
                side_effect=cinder_exception.ClientException(500))
    def test_list_failed(self, mock_list):
        self.assertRaises(cinder_exception.ClientException,
                          self.cinderprovider.list)

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(utils, 'execute')
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(),
                                     consts.ATTACH_TO_THIS))
    def test_show_state_attach_to_this(self, mock_execute, mock_docker_volume):
        self.assertEqual({'Name': 'fake-vol', 'Mountpoint': ''},
                         self.cinderprovider.show('fake-vol'))

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(
                           status='unknown'), consts.UNKNOWN))
    def test_show_state_unknown(self, mock_docker_volume):
        self.assertRaises(exceptions.NotFound,
                          self.cinderprovider.show,
                          'fake-vol')

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(status=None),
                                     None))
    def test_show_state_not_match(self, mock_docker_volume):
        self.assertRaises(exceptions.FuxiException,
                          self.cinderprovider.show,
                          'fake-vol')

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(
                           name='fake-vol',
                           status='in-use'), consts.ATTACH_TO_THIS))
    @mock.patch.object(cinder.Cinder, '_create_mountpoint')
    @mock.patch.object(mount, 'do_mount')
    def test_mount(self, mock_docker_volume, mock_create_mp, mock_do_mount):
        fd, fake_devpath = tempfile.mkstemp()
        fake_link_path = fake_devpath
        fake_mountpoint = 'fake-mount-point/'
        with mock.patch.object(FakeCinderConnector, 'get_device_path',
                               return_value=fake_link_path):
            with mock.patch.object(cinder.Cinder, '_get_mountpoint',
                                   return_value=fake_mountpoint):
                self.assertEqual(fake_mountpoint,
                                 self.cinderprovider.mount('fake-vol'))

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(status=None),
                                     None))
    def test_mount_state_not_match(self, mock_docker_volume):
        self.assertRaises(exceptions.NotMatchedState,
                          self.cinderprovider.mount,
                          'fake-vol')

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_get_docker_volume',
                       return_value=(fake_object.FakeCinderVolume(),
                                     consts.NOT_ATTACH))
    @mock.patch.object(cinder.Cinder, '_create_mountpoint')
    @mock.patch.object(mount, 'do_mount')
    def test_mount_state_not_attach(self, mock_docker_volume,
                                    mock_create_mp, mock_do_mount):
        fd, fake_devpath = tempfile.mkstemp()
        fake_link_path = fake_devpath
        fake_mountpoint = 'fake-mount-point/'
        with mock.patch.object(FakeCinderConnector, 'get_device_path',
                               return_value=fake_link_path):
            with mock.patch.object(cinder.Cinder, '_get_mountpoint',
                                   return_value=fake_mountpoint):
                self.assertEqual(fake_mountpoint,
                                 self.cinderprovider.mount('fake-vol'))

    @mock.patch.object(cinder.Cinder, '_get_connector', mock_connector)
    @mock.patch.object(cinder.Cinder, '_create_mountpoint')
    @mock.patch.object(mount, 'do_mount')
    def test_mount_state_attach_to_other(self, mock_create_mp, mock_do_mount):
        fd, fake_devpath = tempfile.mkstemp()
        fake_link_path = fake_devpath
        fake_mountpoint = 'fake-mount-point/'
        with mock.patch.object(FakeCinderConnector, 'get_device_path',
                               return_value=fake_link_path):
            with mock.patch.object(cinder.Cinder, '_get_mountpoint',
                                   return_value=fake_mountpoint):
                fake_c_vol = fake_object.FakeCinderVolume(multiattach=True)
                with mock.patch.object(cinder.Cinder, '_get_docker_volume',
                                       return_value=(fake_c_vol,
                                                     consts.ATTACH_TO_OTHER)):
                    self.assertEqual(fake_mountpoint,
                                     self.cinderprovider.mount('fake-vol'))

                fake_c_vol = fake_object.FakeCinderVolume(multiattach=False)
                with mock.patch.object(cinder.Cinder, '_get_docker_volume',
                                       return_value=(fake_c_vol,
                                                     consts.ATTACH_TO_OTHER)):
                    self.assertRaises(exceptions.FuxiException,
                                      self.cinderprovider.mount, 'fake-vol')

    def test_unmount(self):
        self.assertIsNone(self.cinderprovider.unmount('fake-vol'))

    def test_check_exists(self):
        self.cinderprovider._get_docker_volume = mock.MagicMock()
        self.cinderprovider._get_docker_volume.return_value = (
            None,
            consts.UNKNOWN)

        result = self.cinderprovider.check_exist('fake-vol')
        self.assertFalse(result)

        self.cinderprovider._get_docker_volume.return_value = (
            fake_object.FakeCinderVolume(),
            consts.NOT_ATTACH)

        result = self.cinderprovider.check_exist('fake-vol')
        self.assertTrue(result)
@ -1,204 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from manilaclient.common.apiclient import exceptions as manila_exception

from fuxi.common import constants as consts
from fuxi.common import mount
from fuxi.common import state_monitor
from fuxi import exceptions
from fuxi.tests.unit import base, fake_client, fake_object
from fuxi import utils
from fuxi.volumeprovider import manila


class FakeManilaConnector(object):
    def __init__(self):
        pass

    def connect_volume(self, share, **connect_opts):
        return {'path': share.export_location}

    def disconnect_volume(self, share, **disconnect_opts):
        pass

    def get_device_path(self, share):
        return share.export_location

    def get_mountpoint(self, share):
        return share.name


def mock_monitor_manila_share(cls):
    cls.expected_obj.status = cls.desired_state
    return cls.expected_obj


class TestManila(base.TestCase):
    def setUp(self):
        super(TestManila, self).setUp()
        self._set_up_provider()

    @mock.patch.object(utils, 'get_manilaclient',
                       return_value=fake_client.FakeManilaClient())
    def _set_up_provider(self, mock_client):
        self.provider = manila.Manila()
        self.provider.manilaclient = fake_client.FakeManilaClient()
        self.provider.connector = FakeManilaConnector()

    def test_create_exist(self):
        fake_share = fake_object.FakeManilaShare(
            name='fake-vol', id='fake-id',
            export_location='fake-el')

        for status in [consts.NOT_ATTACH, consts.ATTACH_TO_THIS]:
            with mock.patch.object(manila.Manila, '_get_docker_volume',
                                   return_value=(fake_share, status)):
                self.assertEqual('fake-el',
                                 self.provider.create('fake-vol', {})['path'])

    @mock.patch('fuxi.volumeprovider.manila.Manila._get_docker_volume',
                side_effect=exceptions.NotFound())
    def test_create_from_id(self, mock_docker_volume):
        fake_vol_opts = {'volume_id': 'fake-id'}
        fake_share = fake_object.FakeManilaShare(
            name='fake-vol', id='fake-id',
            export_location='fake-el', status='available', metadata={})
        with mock.patch.object(fake_client.FakeManilaClient.Shares, 'get',
                               return_value=fake_share):
            self.assertEqual('fake-el',
                             self.provider.create('fake-vol',
                                                  fake_vol_opts)['path'])

    @mock.patch('fuxi.volumeprovider.manila.Manila._get_docker_volume',
                side_effect=exceptions.NotFound())
    def test_create_not_exist(self, mock_docker_volume):
        fake_vol_opts = {'share_network': 'fake-share-network'}
        fake_share = fake_object.FakeManilaShare(
            name='fake-vol', id='fake-id',
            export_location='fake-el', status='creating')
        with mock.patch.object(fake_client.FakeManilaClient.Shares, 'create',
                               return_value=fake_share):
            fake_share.status = 'available'
            with mock.patch.object(state_monitor.StateMonitor,
                                   'monitor_manila_share',
                                   return_value=fake_share):
                self.assertEqual('fake-el',
                                 self.provider.create('fake-vol',
                                                      fake_vol_opts)['path'])

    @mock.patch.object(utils, 'execute')
    @mock.patch.object(mount.Mounter, 'get_mps_by_device',
                       return_value=[])
    def test_delete(self, mock_execute, mock_mps):
        fake_share = fake_object.FakeManilaShare(
            name='fake-vol', id='fake-id',
            export_location='fake-el')

        with mock.patch.object(manila.Manila, '_get_docker_volume',
                               return_value=(fake_share,
                                             consts.ATTACH_TO_THIS)):
            with mock.patch.object(manila.Manila, '_delete_share'):
                self.assertTrue(self.provider.delete('fake-vol'))

    def test_mount(self):
        fake_share = fake_object.FakeManilaShare(
            name='fake-vol', id='fake-id',
            export_location='fake-el', share_proto='nfs')

        with mock.patch.object(manila.Manila, '_get_docker_volume',
                               return_value=(fake_share,
                                             consts.ATTACH_TO_THIS)):
            self.assertEqual('fake-vol',
                             self.provider.mount('fake-vol'))

    def test_unmount(self):
        self.assertIsNone(self.provider.unmount('fake-vol'))

    def test_show(self):
        fake_vol = fake_object.DEFAULT_VOLUME_NAME
        with mock.patch.object(manila.Manila, '_get_docker_volume',
                               return_value=(fake_object.FakeManilaShare(),
                                             consts.ATTACH_TO_THIS)):
            self.assertEqual({'Name': fake_vol,
                              'Mountpoint': fake_vol},
                             self.provider.show(fake_vol))

    @mock.patch('fuxi.tests.unit.fake_client.FakeManilaClient.Shares.list',
                side_effect=manila_exception.ClientException(500))
    def test_show_list_failed(self, mock_list):
        self.assertRaises(manila_exception.ClientException,
                          self.provider.show, 'fake-vol')

    @mock.patch.object(fake_client.FakeManilaClient.Shares, 'list',
                       return_value=[])
    def test_show_no_share(self, mock_list):
        self.assertRaises(exceptions.NotFound, self.provider.show, 'fake-vol')

    @mock.patch.object(fake_client.FakeManilaClient.Shares, 'list',
                       return_value=[fake_object.FakeManilaShare(id='1'),
                                     fake_object.FakeManilaShare(id='2')])
    def test_show_too_many_shares(self, mock_list):
        self.assertRaises(exceptions.TooManyResources,
                          self.provider.show, 'fake-vol')

    @mock.patch.object(manila.Manila, '_get_docker_volume',
                       return_value=(fake_object.FakeManilaShare(),
                                     consts.NOT_ATTACH))
    def test_show_not_attach(self, mock_docker_volume):
        fake_vol = fake_object.DEFAULT_VOLUME_NAME
        self.assertEqual({'Name': fake_vol, 'Mountpoint': fake_vol},
                         self.provider.show(fake_vol))

    @mock.patch.object(manila.Manila, '_get_docker_volume',
                       return_value=(fake_object.FakeManilaShare(),
                                     consts.ATTACH_TO_THIS))
    def test_show_not_mount(self, mock_docker_volume):
        fake_vol = fake_object.DEFAULT_VOLUME_NAME
        self.assertEqual({'Name': fake_vol,
                          'Mountpoint': fake_vol},
                         self.provider.show(fake_vol))

    def test_list(self):
        share_dict = [
            {'id': 'fake-id1', 'name': 'fake-name1',
             'export_location': 'fake-el1'},
            {'id': 'fake-id2', 'name': 'fake-name2',
             'export_location': 'fake-el2'}
        ]
        fake_shares = [fake_object.FakeManilaShare(**s) for s in share_dict]
        fake_volumes = [{'Name': 'fake-name1', 'Mountpoint': 'fake-name1'},
                        {'Name': 'fake-name2', 'Mountpoint': 'fake-name2'}]
        with mock.patch.object(fake_client.FakeManilaClient.Shares, 'list',
                               return_value=fake_shares):
            with mock.patch.object(mount.Mounter, 'get_mps_by_device',
                                   return_value=[]):
                self.assertEqual(fake_volumes, self.provider.list())

    def test_list_failed(self):
        with mock.patch('fuxi.tests.unit.fake_client.FakeManilaClient'
                        '.Shares.list',
                        side_effect=manila_exception.ClientException):
            self.assertRaises(manila_exception.ClientException,
                              self.provider.list)

    def test_check_exist(self):
        with mock.patch('fuxi.volumeprovider.manila.Manila._get_docker_volume',
                        side_effect=exceptions.NotFound()):
            self.assertFalse(self.provider.check_exist('fake-vol'))

        with mock.patch.object(manila.Manila, '_get_docker_volume',
                               return_value=(fake_object.FakeManilaShare(),
                                             consts.ATTACH_TO_THIS)):
            self.assertTrue(self.provider.check_exist('fake-vol'))
227
fuxi/utils.py
@ -1,227 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import flask
import os
import random
import requests
import socket
import string
import traceback

from cinderclient import client as cinder_client
from cinderclient import exceptions as cinder_exception
from keystoneauth1 import exceptions as ka_exception
from keystoneauth1.session import Session
from keystoneclient.auth import get_plugin_class
from kuryr.lib import utils as kuryr_utils
from manilaclient import client as manila_client
from manilaclient.common.apiclient import exceptions as manila_exception
from novaclient import client as nova_client
from novaclient import exceptions as nova_exception
from os_brick import exception as brick_exception
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import importutils
from oslo_utils import uuidutils
from werkzeug import exceptions as w_exceptions

from fuxi.common import config
from fuxi.common import constants
from fuxi import exceptions

cloud_init_conf = '/var/lib/cloud/instances'

CONF = cfg.CONF

LOG = logging.getLogger(__name__)


def get_hostname():
    return socket.gethostname()


def get_instance_uuid():
    try:
        inst_uuid = ''
        inst_uuid_count = 0
        dirs = os.listdir(cloud_init_conf)
        for uuid_dir in dirs:
            if uuidutils.is_uuid_like(uuid_dir):
                inst_uuid = uuid_dir
                inst_uuid_count += 1

        # If cloud-init did not yield exactly one instance_uuid,
        # fall back to querying the metadata server instead.
        if inst_uuid_count == 1:
            return inst_uuid
    except Exception:
        LOG.warning("Get instance_uuid from cloud-init failed")

    try:
        resp = requests.get('http://169.254.169.254/openstack',
                            timeout=constants.CURL_MD_TIMEOUT)
        metadata_api_versions = resp.text.split()
        metadata_api_versions.sort(reverse=True)
    except Exception as e:
        LOG.error("Get metadata apis failed. Error: %s", e)
        raise exceptions.FuxiException("Metadata API Not Found")

    for api_version in metadata_api_versions:
        metadata_url = ''.join(['http://169.254.169.254/openstack/',
                                api_version,
                                '/meta_data.json'])
        try:
            resp = requests.get(metadata_url,
                                timeout=constants.CURL_MD_TIMEOUT)
            metadata = resp.json()
            if metadata.get('uuid', None):
                return metadata['uuid']
        except Exception as e:
            LOG.warning("Get instance_uuid from metadata server"
                        " %(md_url)s failed. Error: %(err)s",
                        {'md_url': metadata_url, 'err': e})
            continue

    raise exceptions.FuxiException("Instance UUID Not Found")


# Return all errors as JSON. From http://flask.pocoo.org/snippets/83/
def make_json_app(import_name, **kwargs):
    app = flask.Flask(import_name, **kwargs)

    @app.errorhandler(exceptions.FuxiException)
    @app.errorhandler(cinder_exception.ClientException)
    @app.errorhandler(nova_exception.ClientException)
    @app.errorhandler(manila_exception.ClientException)
    @app.errorhandler(processutils.ProcessExecutionError)
    @app.errorhandler(brick_exception.BrickException)
    def make_json_error(ex):
        LOG.error("Unexpected error happened: %s",
                  traceback.format_exc())
        response = flask.jsonify({"Err": str(ex)})
        response.status_code = w_exceptions.InternalServerError.code
        if isinstance(ex, w_exceptions.HTTPException):
            response.status_code = ex.code
        content_type = 'application/vnd.docker.plugins.v1+json; charset=utf-8'
        response.headers['Content-Type'] = content_type
        return response

    for code in w_exceptions.default_exceptions:
        app.register_error_handler(code, make_json_error)

    return app


def driver_dict_from_config(named_driver_config, *args, **kwargs):
    driver_registry = dict()

    for driver_str in named_driver_config:
        driver_type, _sep, driver = driver_str.partition('=')
        driver_class = importutils.import_class(driver)
        driver_registry[driver_type] = driver_class(*args, **kwargs)
    return driver_registry


def _openstack_auth_from_config(**config):
|
||||
if config.get('username') and config.get('password'):
|
||||
plugin_class = get_plugin_class('password')
|
||||
else:
|
||||
plugin_class = get_plugin_class('token')
|
||||
plugin_options = plugin_class.get_options()
|
||||
plugin_kwargs = {}
|
||||
for option in plugin_options:
|
||||
if option.dest in config:
|
||||
plugin_kwargs[option.dest] = config[option.dest]
|
||||
return plugin_class(**plugin_kwargs)
|
||||
|
||||
|
||||
def get_legacy_keystone_session(**kwargs):
|
||||
keystone_conf = CONF.keystone
|
||||
config = {}
|
||||
config['auth_url'] = keystone_conf.auth_url
|
||||
config['username'] = keystone_conf.admin_user
|
||||
config['password'] = keystone_conf.admin_password
|
||||
config['tenant_name'] = keystone_conf.admin_tenant_name
|
||||
config['token'] = keystone_conf.admin_token
|
||||
config.update(kwargs)
|
||||
|
||||
if keystone_conf.auth_insecure:
|
||||
verify = False
|
||||
else:
|
||||
verify = keystone_conf.auth_ca_cert
|
||||
|
||||
return Session(auth=_openstack_auth_from_config(**config), verify=verify)
|
||||
|
||||
|
||||
def get_keystone_session(conf_group, **kwargs):
|
||||
try:
|
||||
auth_plugin = kuryr_utils.get_auth_plugin(conf_group)
|
||||
session = kuryr_utils.get_keystone_session(conf_group, auth_plugin)
|
||||
return session, auth_plugin
|
||||
except ka_exception.MissingRequiredOptions:
|
||||
return get_legacy_keystone_session(**kwargs), None
|
||||
|
||||
|
||||
def get_cinderclient(*args, **kwargs):
|
||||
session, auth_plugin = get_keystone_session(config.cinder_group.name)
|
||||
return cinder_client.Client(session=session,
|
||||
auth=auth_plugin,
|
||||
region_name=CONF.cinder.region_name,
|
||||
version=2)
|
||||
|
||||
|
||||
def get_novaclient(*args, **kwargs):
|
||||
session, auth_plugin = get_keystone_session(config.nova_group.name)
|
||||
return nova_client.Client(session=session,
|
||||
auth=auth_plugin,
|
||||
region_name=CONF.nova.region_name,
|
||||
version=2)
|
||||
|
||||
|
||||
def get_manilaclient(*args, **kwargs):
|
||||
session, auth_plugin = get_keystone_session(config.manila_group.name)
|
||||
return manila_client.Client(session=session,
|
||||
auth=auth_plugin,
|
||||
region_name=CONF.manila.region_name,
|
||||
client_version='2')
|
||||
|
||||
|
||||
def get_root_helper():
|
||||
return 'sudo fuxi-rootwrap %s' % CONF.rootwrap_config
|
||||
|
||||
|
||||
def execute(*cmd, **kwargs):
|
||||
if 'run_as_root' in kwargs and 'root_helper' not in kwargs:
|
||||
kwargs['root_helper'] = get_root_helper()
|
||||
|
||||
return processutils.execute(*cmd, **kwargs)
|
||||
|
||||
|
||||
def get_random_string(n=10):
|
||||
return ''.join(random.choice(string.ascii_lowercase) for _ in range(n))
|
||||
|
||||
|
||||
def wrap_check_authorized(f):
|
||||
"""If token is expired, then build a new client, and try again.
|
||||
|
||||
This method required the related object(cls) has method set_client().
|
||||
method set_client() is used to reset OpenStack *client.
|
||||
"""
|
||||
def func(cls, *args, **kwargs):
|
||||
try:
|
||||
return f(cls, *args, **kwargs)
|
||||
except manila_exception.Unauthorized:
|
||||
cls.set_client()
|
||||
return f(cls, *args, **kwargs)
|
||||
return func
|
|
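The wrap_check_authorized decorator above implements a common retry-on-unauthorized pattern: on an auth failure, rebuild the client once and replay the call. A minimal standalone sketch of the same pattern (the `_Unauthorized` exception and `Provider` class here are illustrative stand-ins, not part of fuxi):

```python
class _Unauthorized(Exception):
    """Stand-in for manila_exception.Unauthorized."""


def wrap_check_authorized(f):
    # On an auth failure, reset the client once and retry the call.
    def func(cls, *args, **kwargs):
        try:
            return f(cls, *args, **kwargs)
        except _Unauthorized:
            cls.set_client()
            return f(cls, *args, **kwargs)
    return func


class Provider:
    def __init__(self):
        self.token_valid = False  # first call will fail with _Unauthorized

    def set_client(self):
        self.token_valid = True  # simulate rebuilding an authenticated client

    @wrap_check_authorized
    def list_shares(self):
        if not self.token_valid:
            raise _Unauthorized()
        return ['share-a', 'share-b']


result = Provider().list_shares()
```

Note the retry happens exactly once; a second consecutive `Unauthorized` still propagates to the caller, which matches the behavior of the decorator in the source.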
@@ -1,17 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import pbr.version

version_info = pbr.version.VersionInfo('fuxi')
@@ -1,524 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import time

from cinderclient import exceptions as cinder_exception
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import importutils
from oslo_utils import strutils

from fuxi.common import constants as consts
from fuxi.common import mount
from fuxi.common import state_monitor
from fuxi import exceptions
from fuxi.i18n import _
from fuxi import utils
from fuxi.volumeprovider import provider

CONF = cfg.CONF
cinder_conf = CONF.cinder

# Volume states
UNKNOWN = consts.UNKNOWN
NOT_ATTACH = consts.NOT_ATTACH
ATTACH_TO_THIS = consts.ATTACH_TO_THIS
ATTACH_TO_OTHER = consts.ATTACH_TO_OTHER

OPENSTACK = 'openstack'
OSBRICK = 'osbrick'

volume_connector_conf = {
    OPENSTACK: 'fuxi.connector.cloudconnector.openstack.CinderConnector',
    OSBRICK: 'fuxi.connector.osbrickconnector.CinderConnector'}

LOG = logging.getLogger(__name__)


def get_cinder_volume_kwargs(docker_volume_name, docker_volume_opt):
    """Retrieve parameters for creating a Cinder volume.

    Retrieve the required parameters and remove unsupported arguments from
    the client input. These parameters are used to create a Cinder volume.

    :param docker_volume_name: Name for Cinder volume
    :type docker_volume_name: str
    :param docker_volume_opt: Optional parameters for Cinder volume
    :type docker_volume_opt: dict
    :rtype: dict
    """
    options = ['size', 'consistencygroup_id', 'snapshot_id', 'source_volid',
               'description', 'volume_type', 'user_id', 'project_id',
               'availability_zone', 'scheduler_hints', 'source_replica',
               'multiattach']
    kwargs = {}

    if 'size' in docker_volume_opt:
        try:
            size = int(docker_volume_opt.pop('size'))
        except ValueError:
            msg = _("Volume size must be convertible to int type")
            LOG.error(msg)
            raise exceptions.InvalidInput(msg)
    else:
        size = CONF.default_volume_size
        LOG.info("Volume size was not provided, so use"
                 " default size %sG", size)
    kwargs['size'] = size

    for key, value in docker_volume_opt.items():
        if key in options:
            kwargs[key] = value

    if not kwargs.get('availability_zone', None):
        kwargs['availability_zone'] = cinder_conf.availability_zone

    if not kwargs.get('volume_type', None):
        kwargs['volume_type'] = cinder_conf.volume_type

    kwargs['name'] = docker_volume_name
    kwargs['metadata'] = {consts.VOLUME_FROM: CONF.volume_from,
                          'fstype': kwargs.pop('fstype', cinder_conf.fstype)}

    req_multiattach = kwargs.pop('multiattach', cinder_conf.multiattach)
    kwargs['multiattach'] = strutils.bool_from_string(req_multiattach,
                                                      strict=True)

    return kwargs


def get_host_id():
    """Get a value that could represent this server."""
    host_id = None
    volume_connector = cinder_conf.volume_connector
    if volume_connector == OPENSTACK:
        host_id = utils.get_instance_uuid()
    elif volume_connector == OSBRICK:
        host_id = utils.get_hostname().lower()
    return host_id


class Cinder(provider.Provider):
    volume_provider_type = 'cinder'

    def __init__(self):
        super(Cinder, self).__init__()
        self.cinderclient = utils.get_cinderclient()

    def _get_connector(self):
        connector = cinder_conf.volume_connector
        if not connector or connector not in volume_connector_conf:
            msg = _("Must provide a valid volume connector")
            LOG.error(msg)
            raise exceptions.FuxiException(msg)
        return importutils.import_class(volume_connector_conf[connector])()

    def _get_docker_volume(self, docker_volume_name):
        try:
            search_opts = {'name': docker_volume_name,
                           'metadata': {consts.VOLUME_FROM: CONF.volume_from}}
            if cinder_conf.all_tenants:
                search_opts.update({'all_tenants': "true"})
            vols = self.cinderclient.volumes.list(search_opts=search_opts)
        except cinder_exception.ClientException as ex:
            LOG.error("Error happened while getting volume list "
                      "information from Cinder. Error: %s", ex)
            raise

        vol_num = len(vols)
        if vol_num == 1:
            docker_volume = vols[0]
            if docker_volume.attachments:
                volume_connector = cinder_conf.volume_connector
                host_id = get_host_id()
                for am in docker_volume.attachments:
                    if volume_connector == OPENSTACK:
                        if am['server_id'] == host_id:
                            return docker_volume, ATTACH_TO_THIS
                    elif volume_connector == OSBRICK:
                        if (am['host_name'] or '').lower() == host_id:
                            return docker_volume, ATTACH_TO_THIS
                return docker_volume, ATTACH_TO_OTHER
            else:
                return docker_volume, NOT_ATTACH
        elif vol_num == 0:
            return None, UNKNOWN
        else:
            raise exceptions.TooManyResources(
                "find too many volumes with search_opts=%s" % search_opts)

    def _check_attached_to_this(self, cinder_volume):
        host_id = get_host_id()
        vol_conn = cinder_conf.volume_connector
        for am in cinder_volume.attachments:
            if vol_conn == OPENSTACK and am['server_id'] == host_id:
                return True
            elif vol_conn == OSBRICK and am['host_name'] \
                    and am['host_name'].lower() == host_id:
                return True
        return False

    def _create_volume(self, docker_volume_name, volume_opts):
        LOG.info("Start to create docker volume %s from Cinder",
                 docker_volume_name)

        cinder_volume_kwargs = get_cinder_volume_kwargs(docker_volume_name,
                                                        volume_opts)

        try:
            volume = self.cinderclient.volumes.create(**cinder_volume_kwargs)
        except cinder_exception.ClientException as e:
            LOG.error("Error happened when creating volume %(vol)s from"
                      " Cinder. Error: %(err)s",
                      {'vol': docker_volume_name, 'err': e})
            raise

        LOG.info("Waiting for volume %s to be available", volume)
        volume_monitor = state_monitor.StateMonitor(
            self.cinderclient,
            volume,
            'available',
            ('creating',),
            time_delay=consts.VOLUME_SCAN_TIME_DELAY)
        volume = volume_monitor.monitor_cinder_volume()

        LOG.info("Created docker volume %(d_v)s %(vol)s from Cinder "
                 "successfully",
                 {'d_v': docker_volume_name, 'vol': volume})
        return volume

    def _create_from_existing_volume(self, docker_volume_name,
                                     cinder_volume_id,
                                     volume_opts):
        try:
            cinder_volume = self.cinderclient.volumes.get(cinder_volume_id)
        except cinder_exception.ClientException as e:
            msg = ("Failed to get volume %(vol_id)s from Cinder. "
                   "Error: %(err)s")
            LOG.error(msg, {'vol_id': cinder_volume_id, 'err': e})
            raise

        status = cinder_volume.status
        if status not in ('available', 'in-use'):
            LOG.error("Current volume %(vol)s status %(status)s not in "
                      "desired states",
                      {'vol': cinder_volume, 'status': status})
            raise exceptions.NotMatchedState('Cinder volume is unavailable')
        elif status == 'in-use' and not cinder_volume.multiattach:
            if not self._check_attached_to_this(cinder_volume):
                msg = ("Current volume %(vol)s status %(status)s not "
                       "in desired states")
                LOG.error(msg, {'vol': cinder_volume, 'status': status})
                raise exceptions.NotMatchedState(
                    'Cinder volume is unavailable')

        if cinder_volume.name != docker_volume_name:
            LOG.error("Provided volume name %(d_name)s does not match "
                      "existing Cinder volume name %(c_name)s",
                      {'d_name': docker_volume_name,
                       'c_name': cinder_volume.name})
            raise exceptions.InvalidInput('Volume name does not match')

        fstype = volume_opts.pop('fstype', cinder_conf.fstype)
        vol_fstype = cinder_volume.metadata.get('fstype',
                                                cinder_conf.fstype)
        if fstype != vol_fstype:
            LOG.error("Volume already exists with fstype %(c_fstype)s, "
                      "but currently provided fstype is %(fstype)s; they "
                      "do not match",
                      {'c_fstype': vol_fstype, 'fstype': fstype})
            raise exceptions.InvalidInput('FSType does not match')

        try:
            metadata = {consts.VOLUME_FROM: CONF.volume_from,
                        'fstype': fstype}
            self.cinderclient.volumes.set_metadata(cinder_volume, metadata)
        except cinder_exception.ClientException as e:
            LOG.error("Failed to update volume %(vol)s information. "
                      "Error: %(err)s",
                      {'vol': cinder_volume_id, 'err': e})
            raise
        return cinder_volume

    def create(self, docker_volume_name, volume_opts):
        if not volume_opts:
            volume_opts = {}

        connector = self._get_connector()
        cinder_volume, state = self._get_docker_volume(docker_volume_name)
        LOG.info("Get docker volume %(d_v)s %(vol)s with state %(st)s",
                 {'d_v': docker_volume_name, 'vol': cinder_volume,
                  'st': state})

        device_info = {}
        if state == ATTACH_TO_THIS:
            LOG.warning("The volume %(d_v)s %(vol)s already exists "
                        "and is attached to this server",
                        {'d_v': docker_volume_name, 'vol': cinder_volume})
            device_info = {'path': connector.get_device_path(cinder_volume)}
        elif state == NOT_ATTACH:
            LOG.warning("The volume %(d_v)s %(vol)s already exists "
                        "but is not attached",
                        {'d_v': docker_volume_name, 'vol': cinder_volume})
            device_info = connector.connect_volume(cinder_volume)
        elif state == ATTACH_TO_OTHER:
            if cinder_volume.multiattach:
                fstype = volume_opts.get('fstype', cinder_conf.fstype)
                vol_fstype = cinder_volume.metadata.get('fstype',
                                                        cinder_conf.fstype)
                if fstype != vol_fstype:
                    LOG.error(
                        ("Volume already exists with fstype %(v_fs)s, but "
                         "currently provided fstype is %(fs)s; they do "
                         "not match"),
                        {'v_fs': vol_fstype, 'fs': fstype})
                    raise exceptions.FuxiException('FSType Not Match')
                device_info = connector.connect_volume(cinder_volume)
            else:
                msg = _("The volume {0} {1} is already attached to another "
                        "server").format(docker_volume_name, cinder_volume)
                LOG.error(msg)
                raise exceptions.FuxiException(msg)
        elif state == UNKNOWN:
            if 'volume_id' in volume_opts:
                cinder_volume = self._create_from_existing_volume(
                    docker_volume_name,
                    volume_opts.pop('volume_id'),
                    volume_opts)
                if self._check_attached_to_this(cinder_volume):
                    device_info = {
                        'path': connector.get_device_path(cinder_volume)}
                else:
                    device_info = connector.connect_volume(cinder_volume)
            else:
                cinder_volume = self._create_volume(docker_volume_name,
                                                    volume_opts)
                device_info = connector.connect_volume(cinder_volume)

        return device_info

    def _delete_volume(self, volume):
        try:
            self.cinderclient.volumes.delete(volume)
        except cinder_exception.NotFound:
            return
        except cinder_exception.ClientException as e:
            LOG.error("Error happened when deleting volume from Cinder."
                      " Error: %s", e)
            raise

        start_time = time.time()
        # Wait until the volume is not there or until the operation times out
        while (time.time() - start_time < consts.DESTROY_VOLUME_TIMEOUT):
            try:
                self.cinderclient.volumes.get(volume.id)
            except cinder_exception.NotFound:
                return
            time.sleep(consts.VOLUME_SCAN_TIME_DELAY)

        # If the volume is not deleted, raise an exception
        msg_ft = _("Timed out while waiting for volume. "
                   "Expected Volume: {0}, "
                   "Expected State: {1}, "
                   "Elapsed Time: {2}").format(volume,
                                               None,
                                               time.time() - start_time)
        raise exceptions.TimeoutException(msg_ft)

    def delete(self, docker_volume_name):
        cinder_volume, state = self._get_docker_volume(docker_volume_name)
        LOG.info("Get docker volume %(d_v)s %(vol)s with state %(st)s",
                 {'d_v': docker_volume_name, 'vol': cinder_volume,
                  'st': state})

        if state == ATTACH_TO_THIS:
            link_path = self._get_connector().get_device_path(cinder_volume)
            if not link_path or not os.path.exists(link_path):
                msg = _(
                    "Could not find device link path for volume {0} {1} "
                    "in host").format(docker_volume_name, cinder_volume)
                LOG.error(msg)
                raise exceptions.FuxiException(msg)

            devpath = os.path.realpath(link_path)
            if not os.path.exists(devpath):
                msg = ("Could not find device path for volume {0} {1} in "
                       "host").format(docker_volume_name, cinder_volume)
                LOG.error(msg)
                raise exceptions.FuxiException(msg)

            mounter = mount.Mounter()
            mps = mounter.get_mps_by_device(devpath)
            ref_count = len(mps)
            if ref_count > 0:
                mountpoint = self._get_mountpoint(docker_volume_name)
                if mountpoint in mps:
                    mounter.unmount(mountpoint)

                    self._clear_mountpoint(mountpoint)

                    # If this volume is still mounted on another mount
                    # point, then return.
                    if ref_count > 1:
                        return True
                else:
                    return True

            # Detach device from this server.
            self._get_connector().disconnect_volume(cinder_volume)

            available_volume = self.cinderclient.volumes.get(cinder_volume.id)
            # If this volume is not used by any other server anymore,
            # then delete it from Cinder.
            if not available_volume.attachments:
                LOG.warning(
                    ("No other servers use this volume %(d_v)s"
                     " %(vol)s any more, so delete it from Cinder"),
                    {'d_v': docker_volume_name, 'vol': cinder_volume})
                self._delete_volume(available_volume)
            return True
        elif state == NOT_ATTACH:
            self._delete_volume(cinder_volume)
            return True
        elif state == ATTACH_TO_OTHER:
            msg = "Volume %s is still in use, cannot delete it"
            LOG.warning(msg, cinder_volume)
            return True
        elif state == UNKNOWN:
            return False
        else:
            msg = ("Volume %(vol_name)s %(c_vol)s "
                   "state %(state)s is invalid")
            LOG.error(msg, {'vol_name': docker_volume_name,
                            'c_vol': cinder_volume,
                            'state': state})
            raise exceptions.NotMatchedState()

    def list(self):
        LOG.info("Start to retrieve all docker volumes from Cinder")

        docker_volumes = []
        try:
            search_opts = {'metadata': {consts.VOLUME_FROM: CONF.volume_from}}
            if cinder_conf.all_tenants:
                search_opts.update({'all_tenants': "true"})
            for vol in self.cinderclient.volumes.list(search_opts=search_opts):
                docker_volume_name = vol.name
                if not docker_volume_name:
                    continue

                mountpoint = self._get_mountpoint(vol.name)
                devpath = os.path.realpath(
                    self._get_connector().get_device_path(vol))
                mps = mount.Mounter().get_mps_by_device(devpath)
                mountpoint = mountpoint if mountpoint in mps else ''
                docker_vol = {'Name': docker_volume_name,
                              'Mountpoint': mountpoint}
                docker_volumes.append(docker_vol)
        except cinder_exception.ClientException as e:
            LOG.error("Retrieving volume list failed. Error: %s", e)
            raise

        LOG.info("Retrieved docker volumes %s from Cinder "
                 "successfully", docker_volumes)
        return docker_volumes

    def show(self, docker_volume_name):
        cinder_volume, state = self._get_docker_volume(docker_volume_name)
        LOG.info("Get docker volume %(d_v)s %(vol)s with state %(st)s",
                 {'d_v': docker_volume_name, 'vol': cinder_volume,
                  'st': state})

        if state == ATTACH_TO_THIS:
            devpath = os.path.realpath(
                self._get_connector().get_device_path(cinder_volume))
            mp = self._get_mountpoint(docker_volume_name)
            LOG.info(
                ("Expected devpath: %(dp)s and mountpoint: %(mp)s for"
                 " volume: %(d_v)s %(vol)s"),
                {'dp': devpath, 'mp': mp,
                 'd_v': docker_volume_name, 'vol': cinder_volume})
            mounter = mount.Mounter()
            return {"Name": docker_volume_name,
                    "Mountpoint": mp if mp in mounter.get_mps_by_device(
                        devpath) else ''}
        elif state in (NOT_ATTACH, ATTACH_TO_OTHER):
            return {'Name': docker_volume_name, 'Mountpoint': ''}
        elif state == UNKNOWN:
            msg = _("Could not find volume '{0}' in "
                    "Cinder").format(docker_volume_name)
            LOG.warning(msg)
            raise exceptions.NotFound(msg)
        else:
            msg = _("Volume '{0}' exists but is not attached to this server, "
                    "and its current state is {1}").format(docker_volume_name,
                                                           state)
            raise exceptions.NotMatchedState(msg)

    def mount(self, docker_volume_name):
        cinder_volume, state = self._get_docker_volume(docker_volume_name)
        LOG.info("Get docker volume %(d_v)s %(vol)s with state %(st)s",
                 {'d_v': docker_volume_name, 'vol': cinder_volume,
                  'st': state})

        connector = self._get_connector()
        if state == NOT_ATTACH:
            connector.connect_volume(cinder_volume)
        elif state == ATTACH_TO_OTHER:
            if cinder_volume.multiattach:
                connector.connect_volume(cinder_volume)
            else:
                msg = _("Volume {0} {1} is not shareable").format(
                    docker_volume_name, cinder_volume)
                raise exceptions.FuxiException(msg)
        elif state != ATTACH_TO_THIS:
            msg = _("Volume %(vol_name)s %(c_vol)s is not in a correct "
                    "state, current state is %(state)s")
            LOG.error(msg, {'vol_name': docker_volume_name,
                            'c_vol': cinder_volume,
                            'state': state})
            raise exceptions.NotMatchedState()

        link_path = connector.get_device_path(cinder_volume)
        if not os.path.exists(link_path):
            LOG.warning("Could not find device link file, "
                        "so rebuild it")
            connector.disconnect_volume(cinder_volume)
            connector.connect_volume(cinder_volume)

        devpath = os.path.realpath(link_path)
        if not devpath or not os.path.exists(devpath):
            msg = _("Could not find volume device path")
            LOG.error(msg)
            raise exceptions.FuxiException(msg)

        mountpoint = self._get_mountpoint(docker_volume_name)
        self._create_mountpoint(mountpoint)

        fstype = cinder_volume.metadata.get('fstype', cinder_conf.fstype)

        mount.do_mount(devpath, mountpoint, fstype)

        return mountpoint

    def unmount(self, docker_volume_name):
        return

    def check_exist(self, docker_volume_name):
        _, state = self._get_docker_volume(docker_volume_name)
        LOG.info("Get docker volume %(d_v)s with state %(st)s",
                 {'d_v': docker_volume_name, 'st': state})

        if state == UNKNOWN:
            return False
        return True
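get_cinder_volume_kwargs above whitelists the Docker `-o` options Cinder accepts, coerces the size, and fills in defaults. A condensed standalone sketch of that filtering logic (the option set and default size here are illustrative, not fuxi's configuration):

```python
ALLOWED = {'size', 'volume_type', 'availability_zone', 'multiattach'}
DEFAULT_SIZE = 1  # illustrative default, in GiB


def volume_kwargs(name, opts):
    # Coerce size, keep only supported options, and attach the name.
    try:
        size = int(opts.pop('size', DEFAULT_SIZE))
    except ValueError:
        raise ValueError("Volume size must be convertible to int type")
    kwargs = {k: v for k, v in opts.items() if k in ALLOWED}
    kwargs['size'] = size
    kwargs['name'] = name
    return kwargs


# Unsupported options such as 'bogus' are silently dropped,
# matching the behavior of the source function.
kw = volume_kwargs('vol1', {'size': '2', 'volume_type': 'ssd', 'bogus': 'x'})
```

As in the original, unknown options are discarded rather than rejected, so a typo in an option name does not fail the create call.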
@@ -1,304 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Volume provider for OpenStack Manila.

Currently supported and verified Manila share protocols (share drivers):
    NFS (Generic)
    NFS (Glusterfs)
    GLUSTERFS (GlusterfsNative)
"""

import time

from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import importutils

from manilaclient.common.apiclient import exceptions as manila_exception

from fuxi.common import constants as consts
from fuxi.common import state_monitor
from fuxi import exceptions
from fuxi.i18n import _
from fuxi import utils
from fuxi.volumeprovider import provider

CONF = cfg.CONF
manila_conf = CONF.manila

NOT_ATTACH = consts.NOT_ATTACH
ATTACH_TO_THIS = consts.ATTACH_TO_THIS

OSBRICK = 'osbrick'

volume_connector_conf = {
    OSBRICK: 'fuxi.connector.osbrickconnector.ManilaConnector'}

LOG = logging.getLogger(__name__)


def extract_share_kwargs(docker_volume_name, docker_volume_opts):
    """Extract parameters for creating a Manila share.

    Retrieve the required parameters and remove unsupported arguments from
    the client input. These parameters are used to create a Manila share.

    :param docker_volume_name: Name for Manila share.
    :param docker_volume_opts: Optional parameters for Manila share.
    :rtype: dict
    """
    options = ['share_proto', 'size', 'snapshot_id', 'description',
               'share_network', 'share_type', 'is_public',
               'availability_zone', 'consistency_group_id']

    kwargs = {}
    if 'size' in docker_volume_opts:
        try:
            size = int(docker_volume_opts.pop('size'))
        except ValueError:
            msg = _("Volume size must be convertible to int type")
            LOG.error(msg)
            raise exceptions.InvalidInput(msg)
    else:
        size = CONF.default_volume_size
        LOG.info("Volume size was not provided, so use "
                 "default size %sG", size)
    kwargs['size'] = size

    share_proto = docker_volume_opts.pop('share_proto', None) \
        or manila_conf.share_proto
    kwargs['share_proto'] = share_proto

    for key, value in docker_volume_opts.items():
        if key in options:
            kwargs[key] = value

    kwargs['name'] = docker_volume_name
    kwargs['metadata'] = {consts.VOLUME_FROM: CONF.volume_from}

    return kwargs


class Manila(provider.Provider):
    volume_provider_type = 'manila'

    def __init__(self):
        super(Manila, self).__init__()
        self.manilaclient = utils.get_manilaclient()

        conn_conf = manila_conf.volume_connector
        if not conn_conf or conn_conf not in volume_connector_conf:
            msg = _("Must provide a valid volume connector")
            LOG.error(msg)
            raise exceptions.InvalidInput(msg)
        self.connector = importutils.import_object(
            volume_connector_conf[conn_conf],
            manilaclient=self.manilaclient)

    def set_client(self):
        self.manilaclient = utils.get_manilaclient()

    def _get_docker_volume(self, docker_volume_name):
        search_opts = {'name': docker_volume_name,
                       'metadata': {consts.VOLUME_FROM: CONF.volume_from}}
        try:
            docker_shares = self.manilaclient.shares.list(
                search_opts=search_opts)
        except manila_exception.ClientException as e:
            LOG.error("Could not retrieve Manila share list. Error: %s", e)
            raise

        if not docker_shares:
            raise exceptions.NotFound("Could not find share with "
                                      "search_opts: {0}".format(search_opts))
        elif len(docker_shares) > 1:
            raise exceptions.TooManyResources(
                "Found too many shares with search_opts: {0}, while "
                "Fuxi expects to get exactly one share with the provided "
                "search_opts".format(search_opts))

        docker_share = docker_shares[0]
        if self.connector.check_access_allowed(docker_share):
            return docker_share, ATTACH_TO_THIS
        else:
            return docker_share, NOT_ATTACH

    def _create_share(self, docker_volume_name, share_opts):
        share_kwargs = extract_share_kwargs(docker_volume_name,
                                            share_opts)

        try:
            LOG.debug("Start to create share from Manila")
            share = self.manilaclient.shares.create(**share_kwargs)
        except manila_exception.ClientException as e:
            LOG.error("Creating Manila share failed. Error: %s", e)
            raise

        LOG.info("Waiting for share %s status to be available", share)
        share_monitor = state_monitor.StateMonitor(self.manilaclient,
                                                   share,
                                                   'available',
                                                   ('creating',))
        share = share_monitor.monitor_manila_share()
        LOG.info("Created share %s successfully", share)
        return share

    def _create_from_existing_share(self, docker_volume_name,
                                    share_id, share_opts):
        try:
            share = self.manilaclient.shares.get(share_id)
        except manila_exception.NotFound:
            LOG.error("Could not find share %s", share_id)
            raise

        if share.status != 'available':
            raise exceptions.UnexpectedStateException(
                "Manila share is unavailable")

        if share.name != docker_volume_name:
            LOG.error("Provided volume name %(d_name)s does not match "
                      "existing share name %(s_name)s",
                      {'d_name': docker_volume_name,
                       's_name': share.name})
            raise exceptions.InvalidInput('Volume name does not match')

        metadata = {consts.VOLUME_FROM: CONF.volume_from}
        self.manilaclient.shares.update_all_metadata(share, metadata)

        return share

    @utils.wrap_check_authorized
    def create(self, docker_volume_name, volume_opts):
        try:
            share, state = self._get_docker_volume(docker_volume_name)
            if share:
                LOG.warning("Volume %(vol)s already exists in Manila, and "
                            "the related Manila share is %(share)s",
                            {'vol': docker_volume_name, 'share': share})

                if state == NOT_ATTACH:
                    return self.connector.connect_volume(share)
                else:
                    return {'path': self.connector.get_device_path(share)}
        except exceptions.NotFound:
            pass

        if 'volume_id' in volume_opts:
            share = self._create_from_existing_share(
                docker_volume_name,
                volume_opts.pop('volume_id'),
                volume_opts)
        else:
            share = self._create_share(docker_volume_name, volume_opts)

        return self.connector.connect_volume(share)

    def _delete_share(self, share):
        try:
            share_access_list = self.manilaclient.shares.access_list(share)
            if len(share_access_list) > 0:
                LOG.warning("Share %s is still used by another server, so "
                            "it should not be deleted.", share)
                return

            self.manilaclient.shares.delete(share)
        except manila_exception.ClientException as e:
            LOG.error("Error happened when deleting Volume %(vol)s (Manila "
                      "share: %(share)s). Error: %(err)s",
                      {'vol': share.name, 'share': share, 'err': e})
raise
|
||||
|
||||
start_time = time.time()
|
||||
while True:
|
||||
try:
|
||||
self.manilaclient.shares.get(share.id)
|
||||
except manila_exception.NotFound:
|
||||
break
|
||||
|
||||
if time.time() - start_time > consts.DESTROY_SHARE_TIMEOUT:
|
||||
raise exceptions.TimeoutException
|
||||
|
||||
time.sleep(consts.SHARE_SCAN_INTERVAL)
|
||||
|
||||
LOG.debug("Delete share %s from Manila successfully", share)
|
||||
|
||||
@utils.wrap_check_authorized
|
||||
def delete(self, docker_volume_name):
|
||||
try:
|
||||
share, state = self._get_docker_volume(docker_volume_name)
|
||||
if state == NOT_ATTACH:
|
||||
self._delete_share(share)
|
||||
return True
|
||||
except exceptions.NotFound:
|
||||
return False
|
||||
|
||||
mountpoint = self.connector.get_mountpoint(share)
|
||||
self.connector.disconnect_volume(share)
|
||||
self._clear_mountpoint(mountpoint)
|
||||
|
||||
self._delete_share(share)
|
||||
return True
|
||||
|
||||
@utils.wrap_check_authorized
|
||||
def mount(self, docker_volume_name):
|
||||
share, state = self._get_docker_volume(docker_volume_name)
|
||||
if state == NOT_ATTACH:
|
||||
LOG.warning("Find share %s, but not attach to this server, "
|
||||
"so connect it", share)
|
||||
self.connector.connect_volume(share)
|
||||
|
||||
mountpoint = self.connector.get_mountpoint(share)
|
||||
if not mountpoint:
|
||||
self.connector.connect_volume(share)
|
||||
return mountpoint
|
||||
|
||||
def unmount(self, docker_volume_name):
|
||||
return
|
||||
|
||||
@utils.wrap_check_authorized
|
||||
def show(self, docker_volume_name):
|
||||
share, state = self._get_docker_volume(docker_volume_name)
|
||||
mountpoint = self.connector.get_mountpoint(share)
|
||||
return {'Name': docker_volume_name, 'Mountpoint': mountpoint}
|
||||
|
||||
def _get_docker_volumes(self, search_opts=None):
|
||||
try:
|
||||
docker_shares = self.manilaclient.shares.list(
|
||||
search_opts=search_opts)
|
||||
except manila_exception.ClientException as e:
|
||||
LOG.error('Could not retrieve Manila shares. Error: %s', e)
|
||||
raise
|
||||
|
||||
docker_volumes = []
|
||||
|
||||
for share in docker_shares:
|
||||
docker_volumes.append(
|
||||
{'Name': share.name,
|
||||
'Mountpoint': self.connector.get_mountpoint(share)})
|
||||
LOG.info("Retrieve docker volumes %s from Manila "
|
||||
"successfully", docker_volumes)
|
||||
return docker_volumes
|
||||
|
||||
@utils.wrap_check_authorized
|
||||
def list(self):
|
||||
search_opts = {'metadata': {consts.VOLUME_FROM: CONF.volume_from}}
|
||||
return self._get_docker_volumes(search_opts)
|
||||
|
||||
@utils.wrap_check_authorized
|
||||
def check_exist(self, docker_volume_name):
|
||||
try:
|
||||
self._get_docker_volume(docker_volume_name)
|
||||
except exceptions.NotFound:
|
||||
return False
|
||||
return True
|
|
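The delete path above polls `manilaclient.shares.get` until Manila raises `NotFound`, giving up once `DESTROY_SHARE_TIMEOUT` elapses. The same poll-until-gone pattern can be sketched in isolation; every name below (`wait_for_deletion`, `fetch`, the exception class) is illustrative and not part of the Fuxi or manilaclient API:

```python
import time


class TimeoutException(Exception):
    """Raised when the resource does not disappear within the deadline."""


def wait_for_deletion(fetch, timeout=10.0, interval=0.01):
    """Poll ``fetch()`` until it raises LookupError (resource gone).

    Mirrors the loop in ``_delete_share``: success is the lookup
    failing, and exceeding ``timeout`` raises ``TimeoutException``.
    """
    start = time.time()
    while True:
        try:
            fetch()
        except LookupError:
            return  # the resource is gone; deletion finished
        if time.time() - start > timeout:
            raise TimeoutException()
        time.sleep(interval)
```

A backend that reports the resource gone on the third lookup would make `wait_for_deletion` return after exactly three calls to `fetch`.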
@@ -1,114 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import os
import six

from fuxi import exceptions
from fuxi import utils

from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging

CONF = cfg.CONF

LOG = logging.getLogger(__name__)


@six.add_metaclass(abc.ABCMeta)
class Provider(object):
    """Base class for each volume provider.

    A Provider implements the Docker volume operations on top of a
    backend volume service, such as Cinder or Manila.
    """
    volume_provider_type = None

    def __init__(self):
        pass

    @abc.abstractmethod
    def create(self, docker_volume_name, volume_opts):
        pass

    @abc.abstractmethod
    def delete(self, docker_volume_name):
        pass

    @abc.abstractmethod
    def list(self):
        pass

    @abc.abstractmethod
    def show(self, docker_volume_name):
        pass

    @abc.abstractmethod
    def mount(self, docker_volume_name):
        pass

    @abc.abstractmethod
    def unmount(self, docker_volume_name):
        pass

    @abc.abstractmethod
    def check_exist(self, docker_volume_name):
        pass

    def _get_mountpoint(self, docker_volume_name):
        """Generate a mount point for the volume.

        :param docker_volume_name: The name of the Docker volume.
        :rtype: str
        """
        if not docker_volume_name:
            LOG.error("Volume name must not be None")
            raise exceptions.FuxiException("Volume name must not be None")
        if self.volume_provider_type:
            return os.path.join(CONF.volume_dir,
                                self.volume_provider_type,
                                docker_volume_name)
        else:
            return os.path.join(CONF.volume_dir,
                                docker_volume_name)

    def _create_mountpoint(self, mountpoint):
        """Create the mount point directory for a Docker volume.

        :param mountpoint: The path of the Docker volume.
        """
        try:
            if not os.path.exists(mountpoint) or not os.path.isdir(mountpoint):
                utils.execute('mkdir', '-p', '-m=755', mountpoint,
                              run_as_root=True)
                LOG.info("Created mountpoint %s successfully", mountpoint)
        except processutils.ProcessExecutionError as e:
            LOG.error("Error happened when creating the volume "
                      "directory. Error: %s", e)
            raise

    def _clear_mountpoint(self, mountpoint):
        """Clear the mount point directory if it is no longer used.

        :param mountpoint: The path of the Docker volume.
        """
        if os.path.exists(mountpoint) and os.path.isdir(mountpoint):
            try:
                utils.execute('rm', '-r', mountpoint, run_as_root=True)
                LOG.info("Cleared mountpoint %s successfully", mountpoint)
            except processutils.ProcessExecutionError as e:
                LOG.error("Error happened when clearing the mountpoint. "
                          "Error: %s", e)
                raise
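A backend plugs into this contract by subclassing `Provider` and filling in the abstract methods. The sketch below trims the base class to two methods and implements them with an in-memory dict; it uses Python 3's `abc.ABC` in place of `six.add_metaclass`, and `InMemoryProvider` is a hypothetical backend, not part of Fuxi:

```python
import abc


class Provider(abc.ABC):
    """Trimmed-down version of the base class above."""
    volume_provider_type = None

    @abc.abstractmethod
    def create(self, docker_volume_name, volume_opts):
        pass

    @abc.abstractmethod
    def delete(self, docker_volume_name):
        pass


class InMemoryProvider(Provider):
    """Hypothetical backend that keeps volumes in a dict."""
    volume_provider_type = 'memory'

    def __init__(self):
        self._volumes = {}

    def create(self, docker_volume_name, volume_opts):
        self._volumes[docker_volume_name] = volume_opts
        # Real providers return the mount path from their connector.
        return {'path': '/fake/' + docker_volume_name}

    def delete(self, docker_volume_name):
        return self._volumes.pop(docker_volume_name, None) is not None
```

Leaving any abstract method unimplemented makes instantiation fail with `TypeError`, which is how the base class enforces the set of Docker volume operations.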
fuxi/wsgi.py
@@ -1,28 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from fuxi import app
from fuxi.common import config
from fuxi import controllers

from oslo_log import log as logging


def init_application():
    config.init(sys.argv[1:])
    logging.setup(config.CONF, 'fuxi')

    controllers.init_app_conf()

    return app
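The `wsgi_scripts` entry point `fuxi-server-wsgi` points at `init_application`, so a WSGI server imports the module and calls the returned application for each request. As a minimal sketch of that handshake (the trivial app below is illustrative only; the real function returns Fuxi's Flask app):

```python
def init_application():
    """Stand-in for fuxi.wsgi.init_application returning a bare WSGI app."""
    def app(environ, start_response):
        # A WSGI callable receives the request environ and a
        # start_response callback, and returns an iterable body.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok']
    return app


# What mod_wsgi/uWSGI effectively do with the entry point:
application = init_application()
```

Invoking `application(environ, start_response)` directly, as a server would, yields the status through the callback and the body as an iterable of bytes.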
@@ -1,4 +0,0 @@
---
other:
  - Introduce a Fuxi devstack plugin. This enables developers to use devstack
    with the Fuxi devstack plugin to quickly set up the development environment.
@@ -1,12 +0,0 @@
---
features:
  - |
    Support creating Docker volumes from existing Cinder volumes.
    To use this feature, users need to pass the ID of an existing Cinder
    volume when creating a volume in Docker. For example,

      $ docker volume create --driver fuxi --name test --opt volume_id=<id>

    If a volume_id is given, Fuxi will look up the Cinder volume by the given
    ID and use it as the created Docker volume (instead of creating a new
    volume in Cinder).
@@ -1,6 +0,0 @@
---
features:
  - Enable Docker to use Manila (as an alternative to Cinder) to provide
    volumes to Docker containers. Manila supports multiple back ends and
    share protocols. In this release, the NFS share_proto is supported;
    support for other share protocols will be added in the future.
@@ -1,3 +0,0 @@
---
other:
  - Add fullstack testing and set up the CI to run the tests.
@@ -1,6 +0,0 @@
---
features:
  - Add support for cluster mode. In particular, users can create a Docker
    volume on one node and look it up from other nodes. Before this feature,
    each node managed its own set of volumes independently, and sharing
    volumes across different nodes in a cluster was impossible.
@@ -1,5 +0,0 @@
---
features:
  - Implement the Docker volume plugin API for providing Cinder volumes to
    Docker containers. Support the usage of the Docker native API to create,
    remove, list, get, mount, and unmount Cinder volumes.
@@ -1,275 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Fuxi Release Notes documentation build configuration file.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'openstackdocstheme',
    'reno.sphinxext',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Fuxi Release Notes'
copyright = u'2017, Fuxi developers'

# Release notes do not need a version number in the title, they
# cover multiple releases.
# The short X.Y version.
version = ''
# The full version, including alpha/beta/rc tags.
release = ''

repository_name = 'openstack/fuxi'
bug_project = 'fuxi'
bug_tag = ''

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
html_last_updated_fmt = '%Y-%m-%d %H:%M'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'FuxiReleaseNotesdoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'FuxiReleaseNotes.tex', u'Fuxi Release Notes Documentation',
     u'2016, Fuxi developers', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'fuxireleasenotes', u'Fuxi Release Notes Documentation',
     [u'2017, Fuxi developers'], 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'FuxiReleaseNotes', u'Fuxi Release Notes Documentation',
     u'2017, Fuxi developers', 'FuxiReleaseNotes',
     'One line description of project.', 'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
@@ -1,21 +0,0 @@
.. Fuxi Release Notes documentation master file.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Fuxi Release Notes's documentation!
==============================================

Contents:

.. toctree::
   :maxdepth: 2

   unreleased
   pike

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -1,6 +0,0 @@
===================================
Pike Series Release Notes
===================================

.. release-notes::
   :branch: stable/pike
@@ -1,5 +0,0 @@
============================
Current Series Release Notes
============================

.. release-notes::
@@ -1,23 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

pbr!=2.1.0,>=2.0.0 # Apache-2.0
pytz>=2013.6 # MIT
Babel!=2.4.0,>=2.3.4 # BSD
Flask!=0.11,<1.0,>=0.10 # BSD
keystoneauth1>=3.2.0 # Apache-2.0
kuryr-lib>=0.5.0 # Apache-2.0
oslo.rootwrap>=5.8.0 # Apache-2.0
oslo.concurrency>=3.20.0 # Apache-2.0
oslo.config>=4.6.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.30.0 # Apache-2.0
oslo.utils>=3.28.0 # Apache-2.0
os-brick>=1.15.2 # Apache-2.0
python-cinderclient>=3.2.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-manilaclient>=1.16.0 # Apache-2.0
requests>=2.14.2 # Apache-2.0
six>=1.9.0 # MIT
setup.cfg
@@ -1,62 +0,0 @@
[metadata]
name = fuxi
summary = Enable Docker containers to use Cinder volumes and Manila shares
description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.5

[entry_points]
oslo.config.opts =
    fuxi = fuxi.opts:list_fuxi_opts

console_scripts =
    fuxi-server = fuxi.server:start
    fuxi-rootwrap = oslo_rootwrap.cmd:main
wsgi_scripts =
    fuxi-server-wsgi = fuxi.wsgi:init_application

[files]
packages =
    fuxi
data_files =
    /etc/fuxi =
        etc/rootwrap.conf
    /etc/fuxi/rootwrap.d =
        etc/rootwrap.d/fuxi.filters
    /usr/lib/docker/plugins/fuxi =
        etc/fuxi.spec

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = fuxi/locale
domain = fuxi

[update_catalog]
domain = fuxi
output_dir = fuxi/locale
input_file = fuxi/locale/fuxi.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = fuxi/locale/fuxi.pot
setup.py
@@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)
@@ -1,17 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0

coverage!=4.4,>=4.0 # Apache-2.0
docker>=2.4.2 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx>=1.6.2 # BSD
openstackdocstheme>=1.17.0 # Apache-2.0
oslosphinx>=4.7.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
reno>=2.5.0 # Apache-2.0
@@ -1,54 +0,0 @@
#!/bin/sh
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

set -e

GEN_CMD=oslo-config-generator
SCRIPT_PATH=$(dirname "$(readlink -f "$0")")
DIST_PATH=$(dirname "$SCRIPT_PATH")

prerequisites() (
    if ! command -v "$GEN_CMD" > /dev/null; then
        echo "ERROR: $GEN_CMD not installed on the system."
        return 1
    fi

    if ! [ -f "${DIST_PATH}/fuxi.egg-info/entry_points.txt" ]; then
        curr_dir=$(pwd)
        cd "${DIST_PATH}"
        python setup.py egg_info  # Generate entry points for config generation
        cd "${curr_dir}"
    fi

    return 0
)

generate() (
    curr_dir=$(pwd)
    cd "${DIST_PATH}"
    # Set PYTHONPATH so that the generated egg-info is used
    PYTHONPATH=. find "etc/oslo-config-generator" -type f -exec "$GEN_CMD" --config-file="{}" \;
    cd "${curr_dir}"
)


prerequisites
rc=$?
if [ $rc -ne 0 ]; then
    exit $rc
fi

generate

set -x
tox.ini
@@ -1,56 +0,0 @@
[tox]
minversion = 2.0
envlist = py35,py27,pep8
skipsdist = True

[testenv]
usedevelop = True
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'
whitelist_externals = reno

[testenv:pep8]
commands = flake8 {posargs}

[testenv:venv]
commands = {posargs}

[testenv:cover]
commands = python setup.py test --coverage --testr-args='{posargs}'

[testenv:docs]
commands = python setup.py build_sphinx

[testenv:debug]
commands = oslo_debug_helper {posargs}

[testenv:debug-py27]
basepython = python2.7
commands = oslo_debug_helper {posargs}

[testenv:debug-py34]
basepython = python3.4
commands = oslo_debug_helper {posargs}

[testenv:fullstack]
basepython = python2.7
setenv = OS_TEST_PATH=./fuxi/tests/fullstack

[flake8]
show-source = True
enable-extensions = H106,H203,H904
builtins = _
exclude = .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,releasenotes

[hacking]
import_exceptions = fuxi.tests
local-check-factory = neutron_lib.hacking.checks.factory

[testenv:genconfig]
commands = oslo-config-generator --config-file=etc/oslo-config-generator/fuxi-config-generator.conf

[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html