Change README to RST format

parent b872dfbc1e
commit e91c0c6e7c
@@ -0,0 +1,36 @@
*.pyc
*.log
.glance-venv
.venv
.testrepository/
.tox
.coverage*
cover/*
covhtml
nosetests.xml
coverage.xml
glance.sqlite
AUTHORS
ChangeLog
build
doc/source/api
dist
*.egg
glance.egg-info
tests.sqlite
glance/versioninfo

# Swap files range from .saa to .swp
*.s[a-w][a-p]

# Files created by doc build
doc/source/api

# IDE files
.project
.pydevproject
.idea
.e4p
.eric5project/
.issues/
.ropeproject
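The `*.s[a-w][a-p]` entry above covers editor swap files from `.saa` through `.swp` using two character classes. A quick way to sanity-check a glob like this is Python's `fnmatch` (a sketch only; git does its own ignore matching, but the character-class syntax behaves the same here):

```python
from fnmatch import fnmatch

# The swap-file glob from the ignore list above.
pattern = "*.s[a-w][a-p]"

# Vim swap files such as .swp/.swo fall inside the two ranges; regular
# source files do not.
candidates = ["main.py.swp", "main.py.swo", "main.py.saa", "main.py", "notes.txt"]
matches = [name for name in candidates if fnmatch(name, pattern)]
```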
@@ -0,0 +1,8 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
             ${PYTHON:-python} -m subunit.run discover -t ./ ./glance/tests $LISTOPT $IDOPTION

test_id_option=--load-list $IDFILE
test_list_option=--list
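The `${VAR:-default}` expansions in `test_command` fall back to a default when the variable is unset or empty, so the timeout is 160 seconds unless `OS_TEST_TIMEOUT` is exported. The same resolution logic, sketched in Python for illustration:

```python
import os

def resolve(name, default):
    # Shell ${NAME:-default}: use the default when NAME is unset or empty.
    value = os.environ.get(name, "")
    return value if value != "" else default

# Unset -> the default from the config applies.
os.environ.pop("OS_TEST_TIMEOUT", None)
timeout = int(resolve("OS_TEST_TIMEOUT", "160"))

# Exported -> the environment wins.
os.environ["OS_TEST_TIMEOUT"] = "30"
overridden = int(resolve("OS_TEST_TIMEOUT", "160"))
```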
@@ -0,0 +1,16 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps documented at:

   http://docs.openstack.org/infra/manual/developers.html#development-workflow

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:

   http://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/glance
@@ -0,0 +1,25 @@
glance Style Commandments
=========================

- Step 1: Read the OpenStack Style Commandments
  http://docs.openstack.org/developer/hacking/
- Step 2: Read on

glance Specific Commandments
----------------------------

- [G316] Change assertTrue(isinstance(A, B)) by optimal assert like
  assertIsInstance(A, B)
- [G317] Change assertEqual(type(A), B) by optimal assert like
  assertIsInstance(A, B)
- [G318] Change assertEqual(A, None) or assertEqual(None, A) by optimal assert like
  assertIsNone(A)
- [G319] Validate that debug level logs are not translated
- [G320] For python 3 compatibility, use six.text_type() instead of unicode()
- [G321] Validate that LOG messages, except debug ones, have translations
- [G322] Validate that LOG.info messages use _LI.
- [G323] Validate that LOG.exception messages use _LE.
- [G324] Validate that LOG.error messages use _LE.
- [G325] Validate that LOG.critical messages use _LC.
- [G326] Validate that LOG.warning messages use _LW.
- [G327] Prevent use of deprecated contextlib.nested
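The assertion rules G316-G318 can be illustrated with a small `unittest` case (an illustrative sketch, not part of the commit):

```python
import unittest

class StyleExamples(unittest.TestCase):
    """Shows the preferred assertions named in rules G316-G318."""

    def test_g316_isinstance(self):
        # Preferred over assertTrue(isinstance("abc", str)).
        self.assertIsInstance("abc", str)

    def test_g317_type(self):
        # Preferred over assertEqual(type([1]), list).
        self.assertIsInstance([1], list)

    def test_g318_none(self):
        # Preferred over assertEqual(value, None).
        value = {}.get("missing")
        self.assertIsNone(value)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(StyleExamples)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The dedicated asserts produce clearer failure messages than the generic forms they replace.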
@@ -0,0 +1,176 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.
@@ -0,0 +1,17 @@
include run_tests.sh ChangeLog
include README.rst builddeb.sh
include MANIFEST.in pylintrc
include AUTHORS
include run_tests.py
include HACKING.rst
include LICENSE
include ChangeLog
include babel.cfg tox.ini
include openstack-common.conf
include searchlight/openstack/common/README
graft doc
graft etc
graft searchlight/locale
graft searchlight/tests
graft tools
global-exclude *.pyc
@@ -1,2 +1,5 @@
# searchlight
===========
Searchlight
===========

To provide advanced and scalable search across multi-tenant cloud resources
@@ -0,0 +1,8 @@
[DEFAULT]
output_file = etc/searchlight-api.conf.sample
namespace = searchlight.api
namespace = oslo.concurrency
namespace = oslo.messaging
namespace = oslo.policy
namespace = keystoneclient.middleware.auth_token
namespace = oslo.log
@@ -0,0 +1,61 @@
{
    "context_is_admin": "role:admin",
    "default": "",

    "add_image": "",
    "delete_image": "",
    "get_image": "",
    "get_images": "",
    "modify_image": "",
    "publicize_image": "role:admin",
    "copy_from": "",

    "download_image": "",
    "upload_image": "",

    "delete_image_location": "",
    "get_image_location": "",
    "set_image_location": "",

    "add_member": "",
    "delete_member": "",
    "get_member": "",
    "get_members": "",
    "modify_member": "",

    "manage_image_cache": "role:admin",

    "get_task": "",
    "get_tasks": "",
    "add_task": "",
    "modify_task": "",

    "deactivate": "",
    "reactivate": "",

    "get_metadef_namespace": "",
    "get_metadef_namespaces": "",
    "modify_metadef_namespace": "",
    "add_metadef_namespace": "",

    "get_metadef_object": "",
    "get_metadef_objects": "",
    "modify_metadef_object": "",
    "add_metadef_object": "",

    "list_metadef_resource_types": "",
    "get_metadef_resource_type": "",
    "add_metadef_resource_type_association": "",

    "get_metadef_property": "",
    "get_metadef_properties": "",
    "modify_metadef_property": "",
    "add_metadef_property": "",

    "get_metadef_tag": "",
    "get_metadef_tags": "",
    "modify_metadef_tag": "",
    "add_metadef_tag": "",
    "add_metadef_tags": ""
}
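These rules are evaluated by oslo.policy: an empty string always passes, while `role:admin` requires that role in the caller's context. A toy evaluator for just these two rule forms (the real library supports many more check types and operators):

```python
def check(rule, credentials):
    # "" -> always allowed; "role:<name>" -> caller must carry that role.
    if rule == "":
        return True
    kind, _, value = rule.partition(":")
    if kind == "role":
        return value in credentials.get("roles", [])
    # Other check types (user:, tenant:, and/or expressions) are out of
    # scope for this sketch.
    return False

admin = {"roles": ["admin"]}
member = {"roles": ["member"]}
```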
@@ -0,0 +1,34 @@
# property-protections-policies.conf.sample
#
# This file is an example config file for when
# property_protection_rule_format=policies is enabled.
#
# Specify regular expression for which properties will be protected in []
# For each section, specify CRUD permissions. You may refer to policies defined
# in policy.json.
# The property rules will be applied in the order specified. Once
# a match is found the remaining property rules will not be applied.
#
# WARNING:
# * If the reg ex specified below does not compile, then
# the glance-api service fails to start. (Guide for reg ex python compiler
# used:
# http://docs.python.org/2/library/re.html#regular-expression-syntax)
# * If an operation(create, read, update, delete) is not specified or misspelt
# then the glance-api service fails to start.
# So, remember, with GREAT POWER comes GREAT RESPONSIBILITY!
#
# NOTE: Only one policy can be specified per action. If multiple policies are
# specified, then the glance-api service fails to start.

[^x_.*]
create = default
read = default
update = default
delete = default

[.*]
create = context_is_admin
read = context_is_admin
update = context_is_admin
delete = context_is_admin
@@ -0,0 +1,32 @@
# property-protections-roles.conf.sample
#
# This file is an example config file for when
# property_protection_rule_format=roles is enabled.
#
# Specify regular expression for which properties will be protected in []
# For each section, specify CRUD permissions.
# The property rules will be applied in the order specified. Once
# a match is found the remaining property rules will not be applied.
#
# WARNING:
# * If the reg ex specified below does not compile, then
# glance-api service will not start. (Guide for reg ex python compiler used:
# http://docs.python.org/2/library/re.html#regular-expression-syntax)
# * If an operation(create, read, update, delete) is not specified or misspelt
# then the glance-api service will not start.
# So, remember, with GREAT POWER comes GREAT RESPONSIBILITY!
#
# NOTE: Multiple roles can be specified for a given operation. These roles must
# be comma separated.

[^x_.*]
create = admin,member
read = admin,member
update = admin,member
delete = admin,member

[.*]
create = admin
read = admin
update = admin
delete = admin
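Rules are applied top to bottom and the first section whose regex matches the property name wins, which is why the catch-all `[.*]` must come last. A sketch of that first-match evaluation (illustrative only; the real parser lives in the glance property-protection code and handles per-operation role lists):

```python
import re

# (pattern, roles allowed for every CRUD operation), in file order, mirroring
# the two sections of the sample above.
RULES = [
    (re.compile(r"^x_.*"), {"admin", "member"}),
    (re.compile(r".*"), {"admin"}),
]

def can_access(prop, roles):
    for pattern, permitted in RULES:
        if pattern.match(prop):          # first match wins; stop here
            return bool(set(roles) & permitted)
    return False
```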
@@ -0,0 +1,8 @@
{
    "context_is_admin": "role:admin",
    "default": "",

    "catalog_index": "role:admin",
    "catalog_search": "",
    "catalog_plugins": ""
}
@@ -0,0 +1,23 @@
# Use this pipeline for no auth - DEFAULT
[pipeline:searchlight]
pipeline = unauthenticated-context rootapp

[pipeline:searchlight-keystone]
pipeline = authtoken context rootapp

[composite:rootapp]
paste.composite_factory = searchlight.api:root_app_factory
/v1: apiv1app

[app:apiv1app]
paste.app_factory = searchlight.api.v1.router:API.factory

[filter:unauthenticated-context]
paste.filter_factory = searchlight.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
delay_auth_decision = true

[filter:context]
paste.filter_factory = searchlight.api.middleware.context:ContextMiddleware.factory
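The `[composite:rootapp]` section maps URL prefixes to apps: `root_app_factory` receives the `/v1: apiv1app` mapping and dispatches requests by path prefix. A toy version of that dispatch (the real factory is typically built on `paste.urlmap`; all names here are illustrative, not Searchlight's actual code):

```python
def make_root_app(mapping, apps):
    # mapping: {'/v1': 'apiv1app'}; apps: app name -> callable taking a path.
    def root_app(path):
        for prefix, name in mapping.items():
            if path == prefix or path.startswith(prefix + "/"):
                return apps[name](path)
        return "404"                      # no prefix matched
    return root_app

root = make_root_app({"/v1": "apiv1app"},
                     {"apiv1app": lambda path: "v1:" + path})
```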
@@ -0,0 +1,116 @@
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
#verbose = False

# Show debugging output in logs (sets DEBUG log level output)
debug = True

# Address to bind the GRAFFITI server
bind_host = 0.0.0.0

# Port to bind the server to
bind_port = 9393

# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
log_file = /var/log/glance/search.log

# Backlog requests when creating socket
backlog = 4096

# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle = 600

# Property Protections config file
# This file contains the rules for property protections and the roles/policies
# associated with it.
# If this config value is not specified, by default, property protections
# won't be enforced.
# If a value is specified and the file is not found, then the glance-api
# service will not start.
#property_protection_file =

# Specify whether 'roles' or 'policies' are used in the
# property_protection_file.
# The default value for property_protection_rule_format is 'roles'.
#property_protection_rule_format = roles

# http_keepalive option. If False, server will return the header
# "Connection: close", If True, server will return "Connection: Keep-Alive"
# in its responses. In order to close the client socket connection
# explicitly after the response is sent and read successfully by the client,
# you simply have to set this option to False when you create a wsgi server.
#http_keepalive = True

# ================= Syslog Options ============================

# Send logs to syslog (/dev/log) instead of to file specified
# by `log_file`
#use_syslog = False

# Facility to use. If unset defaults to LOG_USER.
#syslog_log_facility = LOG_LOCAL0

# ================= SSL Options ===============================

# Certificate file to use when starting API server securely
#cert_file = /path/to/certfile

# Private key file to use when starting API server securely
#key_file = /path/to/keyfile

# CA certificate file to use to verify connecting clients
#ca_file = /path/to/cafile

# =============== Policy Options ==================================

# The JSON file that defines policies.
policy_file = search-policy.json

# Default rule. Enforced when a requested rule is not found.
#policy_default_rule = default

# Directories where policy configuration files are stored.
# They can be relative to any directory in the search path
# defined by the config_dir option, or absolute paths.
# The file defined by policy_file must exist for these
# directories to be searched.
#policy_dirs = policy.d

[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
# config_file = glance-search-paste.ini

# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-registry-keystone], you would configure the flavor below
# as 'keystone'.
#flavor=
#

[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>

[keystone_authtoken]
identity_uri = http://127.0.0.1:35357
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
revocation_cache_time = 10

# =============== ElasticSearch Options =======================

[elasticsearch]
# List of nodes where Elasticsearch instances are running. A single node
# should be defined as an IP address and port number.
# The default is ['127.0.0.1:9200']
#hosts = ['127.0.0.1:9200']
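The file follows oslo.config/INI conventions, with commented `#option = value` lines documenting the defaults. As a rough illustration (oslo.config itself does the real parsing, with type registration and deprecation handling), a fragment of it can be read with the standard library:

```python
import configparser

# A trimmed fragment of the config above, inlined for demonstration.
SAMPLE = """
[DEFAULT]
debug = True
bind_host = 0.0.0.0
bind_port = 9393

[elasticsearch]
# hosts defaults to ['127.0.0.1:9200'] when left commented out
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
port = cfg.getint("DEFAULT", "bind_port")
debug = cfg.getboolean("DEFAULT", "debug")
```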
@@ -0,0 +1,8 @@
[DEFAULT]

# The list of modules to copy from oslo-incubator
module=install_venv_common
module=service

# The base module to hold the copy of openstack.common
base=searchlight
@@ -0,0 +1,27 @@
[Messages Control]
# W0511: TODOs in code comments are fine.
# W0142: *args and **kwargs are fine.
# W0622: Redefining id is fine.
disable-msg=W0511,W0142,W0622

[Basic]
# Variable names can be 1 to 31 characters long, with lowercase and underscores
variable-rgx=[a-z_][a-z0-9_]{0,30}$

# Argument names can be 2 to 31 characters long, with lowercase and underscores
argument-rgx=[a-z_][a-z0-9_]{1,30}$

# Method names should be at least 3 characters long
# and be lowercased with underscores
method-rgx=[a-z_][a-z0-9_]{2,50}$

# Module names matching nova-* are ok (files in bin/)
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+)|(nova-[a-z0-9_-]+))$

# Don't require docstrings on tests.
no-docstring-rgx=((__.*__)|([tT]est.*)|setUp|tearDown)$

[Design]
max-public-methods=100
min-public-methods=0
max-args=6
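The naming regexes above anchor only at the end (`$`); pylint applies them from the start of the name, which `re.match` also does. Checking a few names against `variable-rgx` (an illustrative sketch of the matching semantics, not pylint itself):

```python
import re

# variable-rgx from the [Basic] section: a lowercase letter or underscore,
# then up to 30 more lowercase/digit/underscore characters -> 1 to 31 total.
variable_rgx = re.compile(r"[a-z_][a-z0-9_]{0,30}$")

def is_valid_variable(name):
    # re.match anchors at the start; the trailing $ anchors the end.
    return variable_rgx.match(name) is not None
```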
@@ -0,0 +1,64 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

pbr>=0.11,<2.0
#
# The greenlet package must be compiled with gcc and needs
# the Python.h headers. Make sure you install the python-dev
# package to get the right headers...
greenlet>=0.3.2

# < 0.8.0/0.8 does not work, see https://bugs.launchpad.net/bugs/1153983
SQLAlchemy>=0.9.7,<=0.9.99
anyjson>=0.3.3
eventlet>=0.17.3
PasteDeploy>=1.5.0
Routes>=1.12.3,!=2.0
WebOb>=1.2.3
sqlalchemy-migrate>=0.9.5
httplib2>=0.7.5
kombu>=3.0.7
pycrypto>=2.6
iso8601>=0.1.9
oslo.config>=1.11.0  # Apache-2.0
oslo.concurrency>=1.8.0  # Apache-2.0
oslo.context>=0.2.0  # Apache-2.0
oslo.utils>=1.4.0  # Apache-2.0
stevedore>=1.3.0  # Apache-2.0
taskflow>=0.7.1
keystonemiddleware>=1.5.0
WSME>=0.6
# For openstack/common/lockutils
posix_ipc

# For Swift storage backend.
python-swiftclient>=2.2.0

# For VMware storage backend.
oslo.vmware>=0.11.1  # Apache-2.0

# For paste.util.template used in keystone.common.template
Paste

jsonschema>=2.0.0,<3.0.0
python-keystoneclient>=1.3.0
pyOpenSSL>=0.11
# Required by openstack.common libraries
six>=1.9.0

oslo.db>=1.7.0  # Apache-2.0
oslo.i18n>=1.5.0  # Apache-2.0
oslo.log>=1.0.0  # Apache-2.0
oslo.messaging>=1.8.0  # Apache-2.0
oslo.policy>=0.3.1  # Apache-2.0
oslo.serialization>=1.4.0  # Apache-2.0

retrying>=1.2.3,!=1.3.0  # Apache-2.0
osprofiler>=0.3.0  # Apache-2.0

# Glance Store
glance_store>=0.3.0  # Apache-2.0

# Artifact repository
semantic_version>=2.3.1
@@ -0,0 +1,251 @@
#!/bin/bash

set -eu

function usage {
  echo "Usage: $0 [OPTION]..."
  echo "Run Glance's test suite(s)"
  echo ""
  echo "  -V, --virtual-env           Always use virtualenv. Install automatically if not present"
  echo "  -N, --no-virtual-env        Don't use virtualenv. Run tests in local environment"
  echo "  -s, --no-site-packages      Isolate the virtualenv from the global Python environment"
  echo "  -f, --force                 Force a clean re-build of the virtual environment. Useful when dependencies have been added."
  echo "  -u, --update                Update the virtual environment with any newer package versions"
  echo "  -p, --pep8                  Just run PEP8 and HACKING compliance check"
  echo "  -8, --pep8-only-changed     Just run PEP8 and HACKING compliance check on files changed since HEAD~1"
  echo "  -P, --no-pep8               Don't run static code checks"
  echo "  -c, --coverage              Generate coverage report"
  echo "  -d, --debug                 Run tests with testtools instead of testr. This allows you to use the debugger."
  echo "  -h, --help                  Print this usage message"
  echo "  --virtual-env-path <path>   Location of the virtualenv directory"
  echo "                              Default: \$(pwd)"
  echo "  --virtual-env-name <name>   Name of the virtualenv directory"
  echo "                              Default: .venv"
  echo "  --tools-path <dir>          Location of the tools directory"
  echo "                              Default: \$(pwd)"
  echo "  --concurrency <concurrency> How many processes to use when running the tests. A value of 0 autodetects concurrency from your CPU count"
  echo "                              Default: 0"
  echo ""
  echo "Note: with no options specified, the script will try to run the tests in a virtual environment,"
  echo "      If no virtualenv is found, the script will ask if you would like to create one. If you "
  echo "      prefer to run tests NOT in a virtual environment, simply pass the -N option."
  exit
}

function process_options {
  i=1
  while [ $i -le $# ]; do
    case "${!i}" in
      -h|--help) usage;;
      -V|--virtual-env) always_venv=1; never_venv=0;;
      -N|--no-virtual-env) always_venv=0; never_venv=1;;
      -s|--no-site-packages) no_site_packages=1;;
      -f|--force) force=1;;
      -u|--update) update=1;;
      -p|--pep8) just_pep8=1;;
      -8|--pep8-only-changed) just_pep8_changed=1;;
      -P|--no-pep8) no_pep8=1;;
      -c|--coverage) coverage=1;;
      -d|--debug) debug=1;;
      --virtual-env-path)
        (( i++ ))
        venv_path=${!i}
        ;;
      --virtual-env-name)
        (( i++ ))
        venv_dir=${!i}
        ;;
      --tools-path)
        (( i++ ))
        tools_path=${!i}
        ;;
      --concurrency)
        (( i++ ))
        concurrency=${!i}
        ;;
      -*) testropts="$testropts ${!i}";;
      *) testrargs="$testrargs ${!i}"
    esac
    (( i++ ))
  done
}

tool_path=${tools_path:-$(pwd)}
venv_path=${venv_path:-$(pwd)}
venv_dir=${venv_name:-.venv}
with_venv=tools/with_venv.sh
always_venv=0
never_venv=0
force=0
no_site_packages=0
installvenvopts=
testrargs=
testropts=
wrapper=""
just_pep8=0
just_pep8_changed=0
no_pep8=0
coverage=0
debug=0
update=0
concurrency=0

LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_ALL=C

process_options $@
# Make our paths available to other scripts we call
export venv_path
export venv_dir
export venv_name
export tools_dir
export venv=${venv_path}/${venv_dir}

if [ $no_site_packages -eq 1 ]; then
  installvenvopts="--no-site-packages"
fi

function run_tests {
  # Cleanup *pyc
  ${wrapper} find . -type f -name "*.pyc" -delete

  if [ $debug -eq 1 ]; then
    if [ "$testropts" = "" ] && [ "$testrargs" = "" ]; then
      # Default to running all tests if specific test is not
      # provided.
      testrargs="discover ./glance/tests"
    fi
    ${wrapper} python -m testtools.run $testropts $testrargs

    # Short circuit because all of the testr and coverage stuff
    # below does not make sense when running testtools.run for
    # debugging purposes.
    return $?
  fi

  if [ $coverage -eq 1 ]; then
    TESTRTESTS="$TESTRTESTS --coverage"
  else
    TESTRTESTS="$TESTRTESTS"
  fi

  # Just run the test suites in current environment
  set +e
  testrargs=`echo "$testrargs" | sed -e's/^\s*\(.*\)\s*$/\1/'`
  TESTRTESTS="$TESTRTESTS --testr-args='--subunit --concurrency $concurrency $testropts $testrargs'"
  if [ setup.cfg -nt glance.egg-info/entry_points.txt ]
  then
    ${wrapper} python setup.py egg_info
  fi
  echo "Running \`${wrapper} $TESTRTESTS\`"
  if ${wrapper} which subunit-2to1 2>&1 > /dev/null
  then
    # subunit-2to1 is present, testr subunit stream should be in version 2
    # format. Convert to version one before colorizing.
    bash -c "${wrapper} $TESTRTESTS | ${wrapper} subunit-2to1 | ${wrapper} tools/colorizer.py"
  else
    bash -c "${wrapper} $TESTRTESTS | ${wrapper} tools/colorizer.py"
  fi
  RESULT=$?
  set -e

  copy_subunit_log

  if [ $coverage -eq 1 ]; then
    echo "Generating coverage report in covhtml/"
    # Don't compute coverage for common code, which is tested elsewhere
    ${wrapper} coverage combine
    ${wrapper} coverage html --include='glance/*' --omit='glance/openstack/common/*' -d covhtml -i
  fi

  return $RESULT
}

function copy_subunit_log {
  LOGNAME=`cat .testrepository/next-stream`
  LOGNAME=$(($LOGNAME - 1))
  LOGNAME=".testrepository/${LOGNAME}"
  cp $LOGNAME subunit.log
}

function warn_on_flake8_without_venv {
  if [ $never_venv -eq 1 ]; then
    echo "**WARNING**:"
    echo "Running flake8 without virtual env may miss OpenStack HACKING detection"
  fi
}

function run_pep8 {
  echo "Running flake8 ..."
  warn_on_flake8_without_venv
  bash -c "${wrapper} flake8"
|
||||
}
|
||||
|
||||
|
||||
TESTRTESTS="lockutils-wrapper python setup.py testr"
|
||||
|
||||
if [ $never_venv -eq 0 ]
|
||||
then
|
||||
# Remove the virtual environment if --force used
|
||||
if [ $force -eq 1 ]; then
|
||||
echo "Cleaning virtualenv..."
|
||||
rm -rf ${venv}
|
||||
fi
|
||||
if [ $update -eq 1 ]; then
|
||||
echo "Updating virtualenv..."
|
||||
python tools/install_venv.py $installvenvopts
|
||||
fi
|
||||
if [ -e ${venv} ]; then
|
||||
wrapper="${with_venv}"
|
||||
else
|
||||
if [ $always_venv -eq 1 ]; then
|
||||
# Automatically install the virtualenv
|
||||
python tools/install_venv.py $installvenvopts
|
||||
wrapper="${with_venv}"
|
||||
else
|
||||
echo -e "No virtual environment found...create one? (Y/n) \c"
|
||||
read use_ve
|
||||
if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
|
||||
# Install the virtualenv and run the test suite in it
|
||||
python tools/install_venv.py $installvenvopts
|
||||
wrapper=${with_venv}
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# Delete old coverage data from previous runs
|
||||
if [ $coverage -eq 1 ]; then
|
||||
${wrapper} coverage erase
|
||||
fi
|
||||
|
||||
if [ $just_pep8 -eq 1 ]; then
|
||||
run_pep8
|
||||
exit
|
||||
fi
|
||||
|
||||
if [ $just_pep8_changed -eq 1 ]; then
|
||||
# NOTE(gilliard) We want use flake8 to check the entirety of every file that has
|
||||
# a change in it. Unfortunately the --filenames argument to flake8 only accepts
|
||||
# file *names* and there are no files named (eg) "nova/compute/manager.py". The
|
||||
# --diff argument behaves surprisingly as well, because although you feed it a
|
||||
# diff, it actually checks the file on disk anyway.
|
||||
files=$(git diff --name-only HEAD~1 | tr '\n' ' ')
|
||||
echo "Running flake8 on ${files}"
|
||||
warn_on_flake8_without_venv
|
||||
bash -c "diff -u --from-file /dev/null ${files} | ${wrapper} flake8 --diff"
|
||||
exit
|
||||
fi
|
||||
|
||||
run_tests
|
||||
|
||||
# NOTE(sirp): we only want to run pep8 when we're running the full-test suite,
|
||||
# not when we're running tests individually. To handle this, we need to
|
||||
# distinguish between options (testropts), which begin with a '-', and
|
||||
# arguments (testrargs).
|
||||
if [ -z "$testrargs" ]; then
|
||||
if [ $no_pep8 -eq 0 ]; then
|
||||
run_pep8
|
||||
fi
|
||||
fi
|
|
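The option loop in `process_options` above leans on Bash indirect expansion (`${!i}`) to read the positional parameter at index `i`, advancing `i` past an option to pick up its argument. A minimal standalone sketch of that pattern (the `parse` function and option name are illustrative, not part of the script):

```shell
#!/usr/bin/env bash
# Walk positional parameters by index; ${!i} expands to the value of
# the parameter whose position is stored in i -- the same trick
# process_options uses to consume "--concurrency 4".
parse() {
  concurrency=0
  local i=1
  while [ $i -le $# ]; do
    case "${!i}" in
      --concurrency)
        (( i++ ))            # advance to the option's value
        concurrency=${!i}
        ;;
    esac
    (( i++ ))
  done
}

parse --concurrency 4
echo "$concurrency"          # prints 4
```

The same indexing style is why each option body above ends with `(( i++ ))` before the loop's own increment.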
@@ -0,0 +1,20 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import paste.urlmap


def root_app_factory(loader, global_conf, **local_conf):
    return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf)
@@ -0,0 +1,68 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import re

from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import excutils
from oslo_utils import units

from searchlight.common import exception
from searchlight.common import wsgi
from searchlight import i18n

LOG = logging.getLogger(__name__)
_ = i18n._
_LE = i18n._LE
_LI = i18n._LI
_LW = i18n._LW
CONF = cfg.CONF

_CACHED_THREAD_POOL = {}


def memoize(lock_name):
    def memoizer_wrapper(func):
        @lockutils.synchronized(lock_name)
        def memoizer(lock_name):
            if lock_name not in _CACHED_THREAD_POOL:
                _CACHED_THREAD_POOL[lock_name] = func()

            return _CACHED_THREAD_POOL[lock_name]

        return memoizer(lock_name)

    return memoizer_wrapper


def get_thread_pool(lock_name, size=1024):
    """Initializes an eventlet thread pool.

    If a thread pool is present in the cache, return it; otherwise
    create a new pool, store it in the cache, and return the newly
    created pool.

    @param lock_name: Name of the lock.
    @param size: Size of the eventlet pool.

    @return: eventlet pool
    """
    @memoize(lock_name)
    def _get_thread_pool():
        return wsgi.get_asynchronous_eventlet_pool(size=size)

    return _get_thread_pool
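The `memoize` decorator above has an unusual shape: it evaluates the factory immediately under a named lock and binds the cached *object* (not a function) to the decorated name. A dependency-free sketch of the same idea, using `threading.Lock` in place of `oslo_concurrency.lockutils` (all names here are hypothetical):

```python
import threading

_CACHED = {}
_LOCKS = {}
_LOCKS_GUARD = threading.Lock()


def memoize(name):
    """Return a decorator that calls func() at most once per name.

    As in the module above, the decorator runs eagerly and binds the
    cached object itself, not a callable.
    """
    with _LOCKS_GUARD:
        lock = _LOCKS.setdefault(name, threading.Lock())

    def wrapper(func):
        with lock:
            if name not in _CACHED:
                _CACHED[name] = func()
        return _CACHED[name]

    return wrapper


calls = []


@memoize('pool')
def make_pool():
    calls.append(1)
    return {'size': 1024}


@memoize('pool')
def make_pool_again():
    calls.append(1)
    return {'size': 2048}

# Only the first factory ran; the second decoration hit the cache,
# so both names are bound to the same {'size': 1024} object.
```

This mirrors why `get_thread_pool` can simply `return _get_thread_pool`: after decoration, that name already holds the pool.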
@@ -0,0 +1,138 @@
# Copyright 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
import webob.exc

from searchlight.api import policy
from searchlight.common import wsgi
import searchlight.context
from searchlight import i18n

_ = i18n._

context_opts = [
    cfg.BoolOpt('owner_is_tenant', default=True,
                help=_('When true, this option sets the owner of an image '
                       'to be the tenant. Otherwise, the owner of the '
                       'image will be the authenticated user issuing the '
                       'request.')),
    cfg.StrOpt('admin_role', default='admin',
               help=_('Role used to identify an authenticated user as '
                      'administrator.')),
    cfg.BoolOpt('allow_anonymous_access', default=False,
                help=_('Allow unauthenticated users to access the API with '
                       'read-only privileges. This only applies when using '
                       'ContextMiddleware.')),
]

CONF = cfg.CONF
CONF.register_opts(context_opts)

LOG = logging.getLogger(__name__)


class BaseContextMiddleware(wsgi.Middleware):
    def process_response(self, resp):
        try:
            request_id = resp.request.context.request_id
        except AttributeError:
            LOG.warn(_('Unable to retrieve request id from context'))
        else:
            resp.headers['x-openstack-request-id'] = 'req-%s' % request_id
        return resp


class ContextMiddleware(BaseContextMiddleware):
    def __init__(self, app):
        self.policy_enforcer = policy.Enforcer()
        super(ContextMiddleware, self).__init__(app)

    def process_request(self, req):
        """Convert authentication information into a request context

        Generate a searchlight.context.RequestContext object from the
        available authentication headers and store it on the 'context'
        attribute of the req object.

        :param req: wsgi request object that will be given the context object
        :raises webob.exc.HTTPUnauthorized: when the value of the
            X-Identity-Status header is not 'Confirmed' and anonymous
            access is disallowed
        """
        if req.headers.get('X-Identity-Status') == 'Confirmed':
            req.context = self._get_authenticated_context(req)
        elif CONF.allow_anonymous_access:
            req.context = self._get_anonymous_context()
        else:
            raise webob.exc.HTTPUnauthorized()

    def _get_anonymous_context(self):
        kwargs = {
            'user': None,
            'tenant': None,
            'roles': [],
            'is_admin': False,
            'read_only': True,
            'policy_enforcer': self.policy_enforcer,
        }
        return searchlight.context.RequestContext(**kwargs)

    def _get_authenticated_context(self, req):
        # NOTE(bcwaldon): X-Roles is a csv string, but we need to parse
        # it into a list to be useful
        roles_header = req.headers.get('X-Roles', '')
        roles = [r.strip().lower() for r in roles_header.split(',')]

        # NOTE(bcwaldon): This header is deprecated in favor of X-Auth-Token
        deprecated_token = req.headers.get('X-Storage-Token')

        service_catalog = None
        if req.headers.get('X-Service-Catalog') is not None:
            try:
                catalog_header = req.headers.get('X-Service-Catalog')
                service_catalog = jsonutils.loads(catalog_header)
            except ValueError:
                raise webob.exc.HTTPInternalServerError(
                    _('Invalid service catalog json.'))

        kwargs = {
            'user': req.headers.get('X-User-Id'),
            'tenant': req.headers.get('X-Tenant-Id'),
            'roles': roles,
            'is_admin': CONF.admin_role.strip().lower() in roles,
            'auth_token': req.headers.get('X-Auth-Token', deprecated_token),
            'owner_is_tenant': CONF.owner_is_tenant,
            'service_catalog': service_catalog,
            'policy_enforcer': self.policy_enforcer,
            'request_id': req.headers.get('X-Openstack-Request-ID'),
        }

        return searchlight.context.RequestContext(**kwargs)


class UnauthenticatedContextMiddleware(BaseContextMiddleware):
    def process_request(self, req):
        """Create a context without an authorized user."""
        kwargs = {
            'user': None,
            'tenant': None,
            'roles': [],
            'is_admin': True,
        }

        req.context = searchlight.context.RequestContext(**kwargs)
@@ -0,0 +1,66 @@
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Use gzip compression if the client accepts it.
"""

import re

from oslo_log import log as logging

from searchlight.common import wsgi
from searchlight import i18n

LOG = logging.getLogger(__name__)
_LI = i18n._LI


class GzipMiddleware(wsgi.Middleware):

    re_zip = re.compile(r'\bgzip\b')

    def __init__(self, app):
        LOG.info(_LI("Initialized gzip middleware"))
        super(GzipMiddleware, self).__init__(app)

    def process_response(self, response):
        request = response.request
        accept_encoding = request.headers.get('Accept-Encoding', '')

        if self.re_zip.search(accept_encoding):
            # NOTE(flaper87): Webob removes the content-md5 when
            # app_iter is called. We'll keep it and reset it later
            checksum = response.headers.get("Content-MD5")

            # NOTE(flaper87): We'll use lazy for images so
            # that they can be compressed without reading
            # the whole content in memory. Notice that using
            # lazy will set response's content-length to 0.
            content_type = response.headers["Content-Type"]
            lazy = content_type == "application/octet-stream"

            # NOTE(flaper87): Webob takes care of the compression
            # process, it will replace the body either with a
            # compressed body or a generator - used for lazy
            # compression - depending on the lazy value.
            #
            # Webob itself will set the Content-Encoding header.
            response.encode_content(lazy=lazy)

            if checksum:
                response.headers['Content-MD5'] = checksum

        return response
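The middleware above gates compression on a word-boundary match of `gzip` in the Accept-Encoding header. A small sketch of how that check behaves against typical header values (the helper function name is illustrative):

```python
import re

# Same word-boundary pattern the middleware compiles as re_zip.
re_zip = re.compile(r'\bgzip\b')


def client_accepts_gzip(accept_encoding):
    """Return True if the Accept-Encoding value names gzip as a token."""
    return bool(re_zip.search(accept_encoding or ''))
```

The `\b` anchors accept `gzip` alongside other tokens and q-values (`"gzip, deflate"`, `"deflate;q=1.0, gzip;q=0.5"`) while rejecting values where `gzip` is merely a substring of a longer word.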
@@ -0,0 +1,109 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
A filter middleware that inspects the requested URI for a version string
and/or Accept headers and attempts to negotiate an API controller to
return.
"""

from oslo_config import cfg
from oslo_log import log as logging

from searchlight.api import versions
from searchlight.common import wsgi
from searchlight import i18n

CONF = cfg.CONF

LOG = logging.getLogger(__name__)
_ = i18n._
_LW = i18n._LW


class VersionNegotiationFilter(wsgi.Middleware):

    def __init__(self, app):
        self.versions_app = versions.Controller()
        super(VersionNegotiationFilter, self).__init__(app)

    def process_request(self, req):
        """Try to find a version first in the accept header, then the URL"""
        msg = _("Determining version of request: %(method)s %(path)s"
                " Accept: %(accept)s")
        args = {'method': req.method, 'path': req.path, 'accept': req.accept}
        LOG.debug(msg % args)

        # If the request is for /versions, just return the versions container
        # TODO(bcwaldon): deprecate this behavior
        if req.path_info_peek() == "versions":
            return self.versions_app

        accept = str(req.accept)
        if accept.startswith('application/vnd.openstack.images-'):
            LOG.debug("Using media-type versioning")
            token_loc = len('application/vnd.openstack.images-')
            req_version = accept[token_loc:]
        else:
            LOG.debug("Using url versioning")
            # Remove version in url so it doesn't conflict later
            req_version = self._pop_path_info(req)

        try:
            version = self._match_version_string(req_version)
        except ValueError:
            LOG.warn(_LW("Unknown version. Returning version choices."))
            return self.versions_app

        req.environ['api.version'] = version
        req.path_info = ''.join(('/v', str(version), req.path_info))
        LOG.debug("Matched version: v%d", version)
        LOG.debug('new path %s', req.path_info)
        return None

    def _match_version_string(self, subject):
        """
        Given a string, tries to match a major and/or
        minor version number.

        :param subject: The string to check
        :returns: version found in the subject
        :raises ValueError: if no acceptable version could be found
        """
        if subject in ('v1', 'v1.0', 'v1.1') and CONF.enable_v1_api:
            major_version = 1
        elif subject in ('v2', 'v2.0', 'v2.1', 'v2.2') and CONF.enable_v2_api:
            major_version = 2
        else:
            raise ValueError()

        return major_version

    def _pop_path_info(self, req):
        """
        'Pops' off the next segment of PATH_INFO, returns the popped
        segment. Do NOT push it onto SCRIPT_NAME.
        """
        path = req.path_info
        if not path:
            return None
        while path.startswith('/'):
            path = path[1:]
        idx = path.find('/')
        if idx == -1:
            idx = len(path)
        r = path[:idx]
        req.path_info = path[idx:]
        return r
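`_pop_path_info` is pure string manipulation on PATH_INFO: strip leading slashes, take everything up to the next slash as the version segment, and leave the remainder for re-dispatch. A standalone sketch of that split, without the webob request object (the function name here is illustrative):

```python
def pop_first_segment(path):
    """Split '/v2/search' into ('v2', '/search'), mirroring how the
    version filter removes the version segment from PATH_INFO."""
    if not path:
        return None, path
    while path.startswith('/'):
        path = path[1:]
    idx = path.find('/')
    if idx == -1:
        idx = len(path)
    return path[:idx], path[idx:]
```

After a successful match, the middleware re-prefixes the remainder with the normalized `/v<major>` so downstream routers see a canonical path.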
@@ -0,0 +1,118 @@
# Copyright (c) 2011 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Policy Engine For Searchlight"""

import copy

from oslo_config import cfg
from oslo_log import log as logging
from oslo_policy import policy

from searchlight.common import exception
from searchlight import i18n


LOG = logging.getLogger(__name__)
CONF = cfg.CONF

DEFAULT_RULES = policy.Rules.from_dict({
    'context_is_admin': 'role:admin',
    'default': '@',
    'manage_image_cache': 'role:admin',
})

_ = i18n._
_LI = i18n._LI
_LW = i18n._LW


class Enforcer(policy.Enforcer):
    """Responsible for loading and enforcing rules"""

    def __init__(self):
        if CONF.find_file(CONF.oslo_policy.policy_file):
            kwargs = dict(rules=None, use_conf=True)
        else:
            kwargs = dict(rules=DEFAULT_RULES, use_conf=False)
        super(Enforcer, self).__init__(CONF, overwrite=False, **kwargs)

    def add_rules(self, rules):
        """Add new rules to the Rules object"""
        self.set_rules(rules, overwrite=False, use_conf=self.use_conf)

    def enforce(self, context, action, target):
        """Verifies that the action is valid on the target in this context.

        :param context: Searchlight request context
        :param action: String representing the action to be checked
        :param target: Dictionary representing the object of the action.
        :raises: `searchlight.common.exception.Forbidden`
        :returns: A non-False value if access is allowed.
        """
        credentials = {
            'roles': context.roles,
            'user': context.user,
            'tenant': context.tenant,
        }
        return super(Enforcer, self).enforce(action, target, credentials,
                                             do_raise=True,
                                             exc=exception.Forbidden,
                                             action=action)

    def check(self, context, action, target):
        """Verifies that the action is valid on the target in this context.

        :param context: Searchlight request context
        :param action: String representing the action to be checked
        :param target: Dictionary representing the object of the action.
        :returns: A non-False value if access is allowed.
        """
        credentials = {
            'roles': context.roles,
            'user': context.user,
            'tenant': context.tenant,
        }
        return super(Enforcer, self).enforce(action, target, credentials)

    def check_is_admin(self, context):
        """Check if the given context is associated with an admin role,
        as defined via the 'context_is_admin' RBAC rule.

        :param context: Searchlight request context
        :returns: A non-False value if the context role is admin.
        """
        return self.check(context, 'context_is_admin', context.to_dict())


class CatalogSearchRepoProxy(object):

    def __init__(self, search_repo, context, search_policy):
        self.context = context
        self.policy = search_policy
        self.search_repo = search_repo

    def search(self, *args, **kwargs):
        self.policy.enforce(self.context, 'catalog_search', {})
        return self.search_repo.search(*args, **kwargs)

    def plugins_info(self, *args, **kwargs):
        self.policy.enforce(self.context, 'catalog_plugins', {})
        return self.search_repo.plugins_info(*args, **kwargs)

    def index(self, *args, **kwargs):
        self.policy.enforce(self.context, 'catalog_index', {})
        return self.search_repo.index(*args, **kwargs)
@@ -0,0 +1,126 @@
# Copyright 2013 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from searchlight.common import exception
import searchlight.domain.proxy


class ProtectedImageFactoryProxy(searchlight.domain.proxy.ImageFactory):

    def __init__(self, image_factory, context, property_rules):
        self.image_factory = image_factory
        self.context = context
        self.property_rules = property_rules
        kwargs = {'context': self.context,
                  'property_rules': self.property_rules}
        super(ProtectedImageFactoryProxy, self).__init__(
            image_factory,
            proxy_class=ProtectedImageProxy,
            proxy_kwargs=kwargs)

    def new_image(self, **kwargs):
        extra_props = kwargs.pop('extra_properties', {})

        extra_properties = {}
        for key in extra_props.keys():
            if self.property_rules.check_property_rules(key, 'create',
                                                        self.context):
                extra_properties[key] = extra_props[key]
            else:
                raise exception.ReservedProperty(property=key)
        return super(ProtectedImageFactoryProxy, self).new_image(
            extra_properties=extra_properties, **kwargs)


class ProtectedImageRepoProxy(searchlight.domain.proxy.Repo):

    def __init__(self, image_repo, context, property_rules):
        self.context = context
        self.image_repo = image_repo
        self.property_rules = property_rules
        proxy_kwargs = {'context': self.context}
        super(ProtectedImageRepoProxy, self).__init__(
            image_repo, item_proxy_class=ProtectedImageProxy,
            item_proxy_kwargs=proxy_kwargs)

    def get(self, image_id):
        return ProtectedImageProxy(self.image_repo.get(image_id),
                                   self.context, self.property_rules)

    def list(self, *args, **kwargs):
        images = self.image_repo.list(*args, **kwargs)
        return [ProtectedImageProxy(image, self.context, self.property_rules)
                for image in images]


class ProtectedImageProxy(searchlight.domain.proxy.Image):

    def __init__(self, image, context, property_rules):
        self.image = image
        self.context = context
        self.property_rules = property_rules

        self.image.extra_properties = ExtraPropertiesProxy(
            self.context,
            self.image.extra_properties,
            self.property_rules)
        super(ProtectedImageProxy, self).__init__(self.image)


class ExtraPropertiesProxy(searchlight.domain.ExtraProperties):

    def __init__(self, context, extra_props, property_rules):
        self.context = context
        self.property_rules = property_rules
        extra_properties = {}
        for key in extra_props.keys():
            if self.property_rules.check_property_rules(key, 'read',
                                                        self.context):
                extra_properties[key] = extra_props[key]
        super(ExtraPropertiesProxy, self).__init__(extra_properties)

    def __getitem__(self, key):
        if self.property_rules.check_property_rules(key, 'read', self.context):
            return dict.__getitem__(self, key)
        else:
            raise KeyError

    def __setitem__(self, key, value):
        # NOTE(isethi): Exceptions are raised only for actions update, delete
        # and create, where the user proactively interacts with the properties.
        # A user cannot request to read a specific property, hence reads do
        # raise an exception
        try:
            if self.__getitem__(key) is not None:
                if self.property_rules.check_property_rules(key, 'update',
                                                            self.context):
                    return dict.__setitem__(self, key, value)
                else:
                    raise exception.ReservedProperty(property=key)
        except KeyError:
            if self.property_rules.check_property_rules(key, 'create',
                                                        self.context):
                return dict.__setitem__(self, key, value)
            else:
                raise exception.ReservedProperty(property=key)

    def __delitem__(self, key):
        if key not in super(ExtraPropertiesProxy, self).keys():
            raise KeyError

        if self.property_rules.check_property_rules(key, 'delete',
                                                    self.context):
            return dict.__delitem__(self, key)
        else:
            raise exception.ReservedProperty(property=key)
@@ -0,0 +1,66 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from searchlight.api.v1 import search
from searchlight.common import wsgi


class API(wsgi.Router):

    """WSGI router for Searchlight Catalog Search v1 API requests."""

    def __init__(self, mapper):

        reject_method_resource = wsgi.Resource(wsgi.RejectMethodController())

        search_catalog_resource = search.create_resource()
        mapper.connect('/search',
                       controller=search_catalog_resource,
                       action='search',
                       conditions={'method': ['GET']})
        mapper.connect('/search',
                       controller=search_catalog_resource,
                       action='search',
                       conditions={'method': ['POST']})
        mapper.connect('/search',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, POST',
                       conditions={'method': ['PUT', 'DELETE',
                                              'PATCH', 'HEAD']})

        mapper.connect('/search/plugins',
                       controller=search_catalog_resource,
                       action='plugins_info',
                       conditions={'method': ['GET']})
        mapper.connect('/search/plugins',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET',
                       conditions={'method': ['POST', 'PUT', 'DELETE',
                                              'PATCH', 'HEAD']})

        mapper.connect('/index',
                       controller=search_catalog_resource,
                       action='index',
                       conditions={'method': ['POST']})
        mapper.connect('/index',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='POST',
                       conditions={'method': ['GET', 'PUT', 'DELETE',
                                              'PATCH', 'HEAD']})

        super(API, self).__init__(mapper)
@ -0,0 +1,379 @@
|
|||
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json

from oslo_config import cfg
from oslo_log import log as logging
import six
import webob.exc

from searchlight.api import policy
from searchlight.common import exception
from searchlight.common import utils
from searchlight.common import wsgi
import searchlight.elasticsearch
import searchlight.gateway
from searchlight import i18n

LOG = logging.getLogger(__name__)
_ = i18n._
_LE = i18n._LE

CONF = cfg.CONF


class SearchController(object):
    def __init__(self, plugins=None, es_api=None, policy_enforcer=None):
        self.es_api = es_api or searchlight.elasticsearch.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.gateway = searchlight.gateway.Gateway(
            es_api=self.es_api,
            policy_enforcer=self.policy)
        self.plugins = plugins or []

    def search(self, req, query, index, doc_type=None, fields=None, offset=0,
               limit=10):
        if fields is None:
            fields = []

        try:
            search_repo = self.gateway.get_catalog_search_repo(req.context)
            result = search_repo.search(index,
                                        doc_type,
                                        query,
                                        fields,
                                        offset,
                                        limit,
                                        True)

            for plugin in self.plugins:
                result = plugin.obj.filter_result(result, req.context)

            return result
        except exception.Forbidden as e:
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(utils.exception_to_str(e))
            raise webob.exc.HTTPInternalServerError()

    def plugins_info(self, req):
        try:
            search_repo = self.gateway.get_catalog_search_repo(req.context)
            return search_repo.plugins_info()
        except exception.Forbidden as e:
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(utils.exception_to_str(e))
            raise webob.exc.HTTPInternalServerError()

    def index(self, req, actions, default_index=None, default_type=None):
        try:
            search_repo = self.gateway.get_catalog_search_repo(req.context)
            success, errors = search_repo.index(
                default_index,
                default_type,
                actions)
            return {
                'success': success,
                'failed': len(errors),
                'errors': errors,
            }

        except exception.Forbidden as e:
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(utils.exception_to_str(e))
            raise webob.exc.HTTPInternalServerError()


class RequestDeserializer(wsgi.JSONRequestDeserializer):
    _disallowed_properties = ['self', 'schema']

    def __init__(self, plugins, schema=None):
        super(RequestDeserializer, self).__init__()
        self.plugins = plugins

    def _get_request_body(self, request):
        output = super(RequestDeserializer, self).default(request)
        if 'body' not in output:
            msg = _('Body expected in request.')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return output['body']

    @classmethod
    def _check_allowed(cls, query):
        for key in cls._disallowed_properties:
            if key in query:
                msg = _("Attribute '%s' is read-only.") % key
                raise webob.exc.HTTPForbidden(explanation=msg)

    def _get_available_indices(self):
        return list(set([p.obj.get_index_name() for p in self.plugins]))

    def _get_available_types(self):
        return list(set([p.obj.get_document_type() for p in self.plugins]))

    def _validate_index(self, index):
        available_indices = self._get_available_indices()

        if index not in available_indices:
            msg = _("Index '%s' is not supported.") % index
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return index

    def _validate_doc_type(self, doc_type):
        available_types = self._get_available_types()

        if doc_type not in available_types:
            msg = _("Document type '%s' is not supported.") % doc_type
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return doc_type

    def _validate_offset(self, offset):
        try:
            offset = int(offset)
        except ValueError:
            msg = _("offset param must be an integer")
            raise webob.exc.HTTPBadRequest(explanation=msg)

        if offset < 0:
            msg = _("offset param must not be negative")
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return offset

    def _validate_limit(self, limit):
        try:
            limit = int(limit)
        except ValueError:
            msg = _("limit param must be an integer")
            raise webob.exc.HTTPBadRequest(explanation=msg)

        if limit < 1:
            msg = _("limit param must be positive")
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return limit

    def _validate_actions(self, actions):
        if not actions:
            msg = _("actions param cannot be empty")
            raise webob.exc.HTTPBadRequest(explanation=msg)

        output = []
        allowed_action_types = ['create', 'update', 'delete', 'index']
        for action in actions:
            action_type = action.get('action', 'index')
            document_id = action.get('id')
            document_type = action.get('type')
            index_name = action.get('index')
            data = action.get('data', {})
            script = action.get('script')

            if index_name is not None:
                index_name = self._validate_index(index_name)

            if document_type is not None:
                document_type = self._validate_doc_type(document_type)

            if action_type not in allowed_action_types:
                msg = _("Invalid action type: '%s'") % action_type
                raise webob.exc.HTTPBadRequest(explanation=msg)
            elif (action_type in ['create', 'update', 'index'] and
                    not any([data, script])):
                msg = (_("Action type '%s' requires data or script param.") %
                       action_type)
                raise webob.exc.HTTPBadRequest(explanation=msg)
            elif action_type in ['update', 'delete'] and not document_id:
                msg = (_("Action type '%s' requires ID of the document.") %
                       action_type)
                raise webob.exc.HTTPBadRequest(explanation=msg)

            bulk_action = {
                '_op_type': action_type,
                '_id': document_id,
                '_index': index_name,
                '_type': document_type,
            }

            if script:
                data_field = 'params'
                bulk_action['script'] = script
            elif action_type == 'update':
                data_field = 'doc'
            else:
                data_field = '_source'

            bulk_action[data_field] = data

            output.append(bulk_action)
        return output

    def _get_query(self, context, query, doc_types):
        is_admin = context.is_admin
        if is_admin:
            query_params = {
                'query': {
                    'query': query
                }
            }
        else:
            filtered_query_list = []
            for plugin in self.plugins:
                try:
                    doc_type = plugin.obj.get_document_type()
                    rbac_filter = plugin.obj.get_rbac_filter(context)
                except Exception as e:
                    LOG.error(_LE("Failed to retrieve RBAC filters "
                                  "from search plugin "
                                  "%(ext)s: %(e)s") %
                              {'ext': plugin.name, 'e': e})
                    # Skip plugins that fail to provide RBAC filters;
                    # doc_type and rbac_filter would be undefined below.
                    continue

                if doc_type in doc_types:
                    filter_query = {
                        "query": query,
                        "filter": rbac_filter
                    }
                    filtered_query = {
                        'filtered': filter_query
                    }
                    filtered_query_list.append(filtered_query)

            query_params = {
                'query': {
                    'query': {
                        "bool": {
                            "should": filtered_query_list
                        },
                    }
                }
            }

        return query_params

    def search(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        query = body.pop('query', None)
        indices = body.pop('index', None)
        doc_types = body.pop('type', None)
        fields = body.pop('fields', None)
        offset = body.pop('offset', None)
        limit = body.pop('limit', None)
        highlight = body.pop('highlight', None)

        if not indices:
            indices = self._get_available_indices()
        elif not isinstance(indices, (list, tuple)):
            indices = [indices]

        if not doc_types:
            doc_types = self._get_available_types()
        elif not isinstance(doc_types, (list, tuple)):
            doc_types = [doc_types]

        query_params = self._get_query(request.context, query, doc_types)
        query_params['index'] = [self._validate_index(index)
                                 for index in indices]
        query_params['doc_type'] = [self._validate_doc_type(doc_type)
                                    for doc_type in doc_types]

        if fields is not None:
            query_params['fields'] = fields

        if offset is not None:
            query_params['offset'] = self._validate_offset(offset)

        if limit is not None:
            query_params['limit'] = self._validate_limit(limit)

        if highlight is not None:
            query_params['query']['highlight'] = highlight

        return query_params

    def index(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)

        default_index = body.pop('default_index', None)
        if default_index is not None:
            default_index = self._validate_index(default_index)

        default_type = body.pop('default_type', None)
        if default_type is not None:
            default_type = self._validate_doc_type(default_type)

        actions = self._validate_actions(body.pop('actions', None))
        if not all([default_index, default_type]):
            for action in actions:
                if not any([action['_index'], default_index]):
                    msg = (_("Action index is missing and no default "
                             "index has been set."))
                    raise webob.exc.HTTPBadRequest(explanation=msg)

                if not any([action['_type'], default_type]):
                    msg = (_("Action document type is missing and no default "
                             "type has been set."))
                    raise webob.exc.HTTPBadRequest(explanation=msg)

        query_params = {
            'default_index': default_index,
            'default_type': default_type,
            'actions': actions,
        }
        return query_params


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, schema=None):
        super(ResponseSerializer, self).__init__()
        self.schema = schema

    def search(self, response, query_result):
        body = json.dumps(query_result, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def plugins_info(self, response, query_result):
        body = json.dumps(query_result, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def index(self, response, query_result):
        body = json.dumps(query_result, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'


def create_resource():
    """Search resource factory method"""
    plugins = utils.get_search_plugins()
    deserializer = RequestDeserializer(plugins)
    serializer = ResponseSerializer()
    controller = SearchController(plugins)
    return wsgi.Resource(controller, deserializer, serializer)
@@ -0,0 +1,85 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import httplib

from oslo_config import cfg
from oslo_serialization import jsonutils
import webob.dec

from searchlight.common import wsgi
from searchlight import i18n

_ = i18n._

versions_opts = [
    cfg.StrOpt('public_endpoint', default=None,
               help=_('Public url to use for versions endpoint. The default '
                      'is None, which will use the request\'s host_url '
                      'attribute to populate the URL base. If Glance is '
                      'operating behind a proxy, you will want to change '
                      'this to represent the proxy\'s URL.')),
]

CONF = cfg.CONF
CONF.register_opts(versions_opts)


class Controller(object):

    """A wsgi controller that reports which API versions are supported."""

    def index(self, req):
        """Respond to a request for all OpenStack API versions."""
        def build_version_object(version, path, status):
            url = CONF.public_endpoint or req.host_url
            return {
                'id': 'v%s' % version,
                'status': status,
                'links': [
                    {
                        'rel': 'self',
                        'href': '%s/%s/' % (url, path),
                    },
                ],
            }

        version_objs = []
        if CONF.enable_v2_api:
            version_objs.extend([
                build_version_object(2.3, 'v2', 'CURRENT'),
                build_version_object(2.2, 'v2', 'SUPPORTED'),
                build_version_object(2.1, 'v2', 'SUPPORTED'),
                build_version_object(2.0, 'v2', 'SUPPORTED'),
            ])
        if CONF.enable_v1_api:
            version_objs.extend([
                build_version_object(1.1, 'v1', 'SUPPORTED'),
                build_version_object(1.0, 'v1', 'SUPPORTED'),
            ])

        response = webob.Response(request=req,
                                  status=httplib.MULTIPLE_CHOICES,
                                  content_type='application/json')
        response.body = jsonutils.dumps(dict(versions=version_objs))
        return response

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        return self.index(req)


def create_resource(conf):
    return wsgi.Resource(Controller())
@@ -0,0 +1,53 @@
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import sys

import oslo_utils.strutils as strutils

from searchlight import i18n

try:
    import dns  # NOQA
except ImportError:
    dnspython_installed = False
else:
    dnspython_installed = True


def fix_greendns_ipv6():
    if dnspython_installed:
        # All of this is because if dnspython is present in your environment
        # then eventlet monkeypatches socket.getaddrinfo() with an
        # implementation which doesn't work for IPv6. What we're checking here
        # is that the magic environment variable was set when the import
        # happened.
        nogreendns = 'EVENTLET_NO_GREENDNS'
        flag = os.environ.get(nogreendns, '')
        if 'eventlet' in sys.modules and not strutils.bool_from_string(flag):
            msg = i18n._("It appears that the eventlet module has been "
                         "imported prior to setting %s='yes'. It is currently "
                         "necessary to disable eventlet.greendns "
                         "if using ipv6 since eventlet.greendns currently "
                         "breaks with ipv6 addresses. Please ensure that "
                         "eventlet is not imported prior to this being set.")
            raise ImportError(msg % (nogreendns))

        os.environ[nogreendns] = 'yes'


i18n.enable_lazy()
fix_greendns_ipv6()
@@ -0,0 +1,30 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from searchlight import listener
from searchlight.openstack.common import service as os_service
from searchlight import service


def main():
    service.prepare_service()
    launcher = os_service.ProcessLauncher()
    launcher.launch_service(
        listener.ListenerService(),
        workers=service.get_workers('listener'))
    launcher.wait()


if __name__ == "__main__":
    main()
@@ -0,0 +1,93 @@
#!/usr/bin/env python

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Searchlight API Server
"""

import os
import sys

import eventlet

from searchlight.common import utils

# Monkey patch socket, time, select, threads
eventlet.patcher.monkey_patch(socket=True, time=True, select=True,
                              thread=True, os=True)

# If ../searchlight/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'searchlight', '__init__.py')):
    sys.path.insert(0, possible_topdir)

from oslo_config import cfg
from oslo_log import log as logging
import osprofiler.notifier
import osprofiler.web

from searchlight.common import config
from searchlight.common import exception
from searchlight.common import wsgi
from searchlight import notifier

CONF = cfg.CONF
CONF.import_group("profiler", "searchlight.common.wsgi")
logging.register_options(CONF)

KNOWN_EXCEPTIONS = (RuntimeError,
                    exception.WorkerCreationFailure)


def fail(e):
    global KNOWN_EXCEPTIONS
    return_code = KNOWN_EXCEPTIONS.index(type(e)) + 1
    sys.stderr.write("ERROR: %s\n" % utils.exception_to_str(e))
    sys.exit(return_code)


def main():
    try:
        config.parse_args()
        wsgi.set_eventlet_hub()
        logging.setup(CONF, 'searchlight')

        if cfg.CONF.profiler.enabled:
            _notifier = osprofiler.notifier.create("Messaging",
                                                   notifier.messaging, {},
                                                   notifier.get_transport(),
                                                   "searchlight", "search",
                                                   cfg.CONF.bind_host)
            osprofiler.notifier.set(_notifier)
        else:
            osprofiler.web.disable()

        server = wsgi.Server()
        server.start(config.load_paste_app('searchlight'),
                     default_port=9393)
        server.wait()
    except KNOWN_EXCEPTIONS as e:
        fail(e)


if __name__ == '__main__':
    main()
@@ -0,0 +1,411 @@
# Copyright (c) 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Helper script for starting/stopping/reloading Glance server programs.
Thanks for some of the code, Swifties ;)
"""

from __future__ import print_function
from __future__ import with_statement

import argparse
import fcntl
import os
import resource
import signal
import subprocess
import sys
import tempfile
import time

# If ../searchlight/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'searchlight', '__init__.py')):
    sys.path.insert(0, possible_topdir)

from oslo_config import cfg
from oslo_utils import units
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from searchlight.common import config
from searchlight import i18n

_ = i18n._

CONF = cfg.CONF

ALL_COMMANDS = ['start', 'status', 'stop', 'shutdown', 'restart',
                'reload', 'force-reload']
ALL_SERVERS = ['api', 'registry', 'scrubber']
RELOAD_SERVERS = ['searchlight-api']
GRACEFUL_SHUTDOWN_SERVERS = ['searchlight-api']
MAX_DESCRIPTORS = 32768
MAX_MEMORY = 2 * units.Gi  # 2 GB
USAGE = """%(prog)s [options] <SERVER> <COMMAND> [CONFPATH]

Where <SERVER> is one of:

    all, {0}

And command is one of:

    {1}

And CONFPATH is the optional configuration file to use.""".format(
    ', '.join(ALL_SERVERS), ', '.join(ALL_COMMANDS))

exitcode = 0


def gated_by(predicate):
    def wrap(f):
        def wrapped_f(*args):
            if predicate:
                return f(*args)
            else:
                return None
        return wrapped_f
    return wrap


def pid_files(server, pid_file):
    pid_files = []
    if pid_file:
        if os.path.exists(os.path.abspath(pid_file)):
            pid_files = [os.path.abspath(pid_file)]
    else:
        if os.path.exists('/var/run/searchlight/%s.pid' % server):
            pid_files = ['/var/run/searchlight/%s.pid' % server]
    for pid_file in pid_files:
        pid = int(open(pid_file).read().strip())
        yield pid_file, pid


def do_start(verb, pid_file, server, args):
    if verb != 'Respawn' and pid_file == CONF.pid_file:
        for pid_file, pid in pid_files(server, pid_file):
            if os.path.exists('/proc/%s' % pid):
                print(_("%(serv)s appears to already be running: %(pid)s") %
                      {'serv': server, 'pid': pid_file})
                return
            else:
                print(_("Removing stale pid file %s") % pid_file)
                os.unlink(pid_file)

    try:
        resource.setrlimit(resource.RLIMIT_NOFILE,
                           (MAX_DESCRIPTORS, MAX_DESCRIPTORS))
        resource.setrlimit(resource.RLIMIT_DATA,
                           (MAX_MEMORY, MAX_MEMORY))
    except ValueError:
        print(_('Unable to increase file descriptor limit. '
                'Running as non-root?'))
    os.environ['PYTHON_EGG_CACHE'] = '/tmp'

    def write_pid_file(pid_file, pid):
        with open(pid_file, 'w') as fp:
            fp.write('%d\n' % pid)

    def redirect_to_null(fds):
        with open(os.devnull, 'r+b') as nullfile:
            for desc in fds:  # close fds
                try:
                    os.dup2(nullfile.fileno(), desc)
                except OSError:
                    pass

    def redirect_to_syslog(fds, server):
        log_cmd = 'logger'
        log_cmd_params = '-t "%s[%d]"' % (server, os.getpid())
        process = subprocess.Popen([log_cmd, log_cmd_params],
                                   stdin=subprocess.PIPE)
        for desc in fds:  # pipe to logger command
            try:
                os.dup2(process.stdin.fileno(), desc)
            except OSError:
                pass

    def redirect_stdio(server, capture_output):
        input = [sys.stdin.fileno()]
        output = [sys.stdout.fileno(), sys.stderr.fileno()]

        redirect_to_null(input)
        if capture_output:
            redirect_to_syslog(output, server)
        else:
            redirect_to_null(output)

    @gated_by(CONF.capture_output)
    def close_stdio_on_exec():
        fds = [sys.stdin.fileno(), sys.stdout.fileno(), sys.stderr.fileno()]
        for desc in fds:  # set close on exec flag
            fcntl.fcntl(desc, fcntl.F_SETFD, fcntl.FD_CLOEXEC)

    def launch(pid_file, conf_file=None, capture_output=False, await_time=0):
        args = [server]
        if conf_file:
            args += ['--config-file', conf_file]
            msg = (_('%(verb)sing %(serv)s with %(conf)s') %
                   {'verb': verb, 'serv': server, 'conf': conf_file})
        else:
            msg = (_('%(verb)sing %(serv)s') % {'verb': verb, 'serv': server})
        print(msg)

        close_stdio_on_exec()

        pid = os.fork()
        if pid == 0:
            os.setsid()
            redirect_stdio(server, capture_output)
            try:
                os.execlp('%s' % server, *args)
            except OSError as e:
                msg = (_('unable to launch %(serv)s. Got error: %(e)s') %
                       {'serv': server, 'e': e})
                sys.exit(msg)
            sys.exit(0)
        else:
            write_pid_file(pid_file, pid)
            await_child(pid, await_time)
            return pid

    @gated_by(CONF.await_child)
    def await_child(pid, await_time):
        bail_time = time.time() + await_time
        while time.time() < bail_time:
            reported_pid, status = os.waitpid(pid, os.WNOHANG)
            if reported_pid == pid:
                global exitcode
                exitcode = os.WEXITSTATUS(status)
                break
            time.sleep(0.05)

    conf_file = None
    if args and os.path.exists(args[0]):
        conf_file = os.path.abspath(os.path.expanduser(args[0]))

    return launch(pid_file, conf_file, CONF.capture_output, CONF.await_child)


def do_check_status(pid_file, server):
    if os.path.exists(pid_file):
        with open(pid_file, 'r') as pidfile:
            pid = pidfile.read().strip()
        print(_("%(serv)s (pid %(pid)s) is running...") %
              {'serv': server, 'pid': pid})
    else:
        print(_("%s is stopped") % server)


def get_pid_file(server, pid_file):
    pid_file = (os.path.abspath(pid_file) if pid_file else
                '/var/run/searchlight/%s.pid' % server)
    dir, file = os.path.split(pid_file)

    if not os.path.exists(dir):
        try:
            os.makedirs(dir)
        except OSError:
            pass

    if not os.access(dir, os.W_OK):
        fallback = os.path.join(tempfile.mkdtemp(), '%s.pid' % server)
        msg = (_('Unable to create pid file %(pid)s. Running as non-root?\n'
                 'Falling back to a temp file, you can stop %(service)s '
                 'service using:\n'
                 '  %(file)s %(server)s stop --pid-file %(fb)s') %
               {'pid': pid_file,
                'service': server,
                'file': __file__,
                'server': server,
                'fb': fallback})
        print(msg)
        pid_file = fallback

    return pid_file


def do_reload(pid_file, server):
    if server not in RELOAD_SERVERS:
        msg = (_('Reload of %(serv)s not supported') % {'serv': server})
        sys.exit(msg)

    pid = None
    if os.path.exists(pid_file):
        with open(pid_file, 'r') as pidfile:
            pid = int(pidfile.read().strip())
    else:
        msg = (_('Server %(serv)s is stopped') % {'serv': server})
        sys.exit(msg)

    sig = signal.SIGHUP
    try:
        print(_('Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)')
              % {'serv': server, 'pid': pid, 'sig': sig})
        os.kill(pid, sig)
    except OSError:
        print(_("Process %d not running") % pid)


def do_stop(server, args, graceful=False):
    if graceful and server in GRACEFUL_SHUTDOWN_SERVERS:
        sig = signal.SIGHUP
    else:
        sig = signal.SIGTERM

    did_anything = False
    pfiles = pid_files(server, CONF.pid_file)
    for pid_file, pid in pfiles:
        did_anything = True
        try:
            os.unlink(pid_file)
        except OSError:
            pass
        try:
            print(_('Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)')
                  % {'serv': server, 'pid': pid, 'sig': sig})
            os.kill(pid, sig)
        except OSError:
            print(_("Process %d not running") % pid)
    for pid_file, pid in pfiles:
        for _junk in range(150):  # 15 seconds
            if not os.path.exists('/proc/%s' % pid):
                break
            time.sleep(0.1)
        else:
            print(_('Waited 15 seconds for pid %(pid)s (%(file)s) to die;'
                    ' giving up') % {'pid': pid, 'file': pid_file})
    if not did_anything:
        print(_('%s is already stopped') % server)


def add_command_parsers(subparsers):
    cmd_parser = argparse.ArgumentParser(add_help=False)
    cmd_subparsers = cmd_parser.add_subparsers(dest='command')
    for cmd in ALL_COMMANDS:
        parser = cmd_subparsers.add_parser(cmd)
        parser.add_argument('args', nargs=argparse.REMAINDER)

    for server in ALL_SERVERS:
        full_name = 'searchlight-' + server

        parser = subparsers.add_parser(server, parents=[cmd_parser])
        parser.set_defaults(servers=[full_name])

        parser = subparsers.add_parser(full_name, parents=[cmd_parser])
        parser.set_defaults(servers=[full_name])

    parser = subparsers.add_parser('all', parents=[cmd_parser])
    parser.set_defaults(servers=['searchlight-' + s for s in ALL_SERVERS])


def main():
    global exitcode

    opts = [
        cfg.SubCommandOpt('server',
                          title='Server types',
                          help='Available server types',
                          handler=add_command_parsers),
        cfg.StrOpt('pid-file',
                   metavar='PATH',
                   help='File to use as pid file. Default: '
                        '/var/run/searchlight/$server.pid.'),
        cfg.IntOpt('await-child',
                   metavar='DELAY',
                   default=0,
                   help='Period to wait for service death '
                        'in order to report exit code '
                        '(default is to not wait at all).'),
        cfg.BoolOpt('capture-output',
                    default=False,
                    help='Capture stdout/err in syslog '
                         'instead of discarding it.'),
        cfg.BoolOpt('respawn',
                    default=False,
                    help='Restart service on unexpected death.'),
    ]
    CONF.register_cli_opts(opts)

    config.parse_args(usage=USAGE)

    @gated_by(CONF.await_child)
    @gated_by(CONF.respawn)
    def mutually_exclusive():
        sys.stderr.write('--await-child and --respawn are mutually exclusive')
        sys.exit(1)

    mutually_exclusive()

    @gated_by(CONF.respawn)
    def anticipate_respawn(children):
        while children:
            pid, status = os.wait()
            if pid in children:
                (pid_file, server, args) = children.pop(pid)
                running = os.path.exists(pid_file)
                one_second_ago = time.time() - 1
                bouncing = (running and
                            os.path.getmtime(pid_file) >= one_second_ago)
                if running and not bouncing:
                    args = (pid_file, server, args)
                    new_pid = do_start('Respawn', *args)
                    children[new_pid] = args
                else:
                    rsn = 'bouncing' if bouncing else 'deliberately stopped'
                    print(_('Suppressed respawn as %(serv)s was %(rsn)s.')
                          % {'serv': server, 'rsn': rsn})

    if CONF.server.command == 'start':
        children = {}
        for server in CONF.server.servers:
            pid_file = get_pid_file(server, CONF.pid_file)
            args = (pid_file, server, CONF.server.args)
            pid = do_start('Start', *args)
            children[pid] = args

        anticipate_respawn(children)

    if CONF.server.command == 'status':
        for server in CONF.server.servers:
            pid_file = get_pid_file(server, CONF.pid_file)
            do_check_status(pid_file, server)

    if CONF.server.command == 'stop':
        for server in CONF.server.servers:
            do_stop(server, CONF.server.args)

    if CONF.server.command == 'shutdown':
        for server in CONF.server.servers:
            do_stop(server, CONF.server.args, graceful=True)

    if CONF.server.command == 'restart':
|
||||
for server in CONF.server.servers:
|
||||
do_stop(server, CONF.server.args)
|
||||
for server in CONF.server.servers:
|
||||
pid_file = get_pid_file(server, CONF.pid_file)
|
||||
do_start('Restart', pid_file, server, CONF.server.args)
|
||||
|
||||
if CONF.server.command in ('reload', 'force-reload'):
|
||||
for server in CONF.server.servers:
|
||||
pid_file = get_pid_file(server, CONF.pid_file)
|
||||
do_reload(pid_file, server)
|
||||
|
||||
sys.exit(exitcode)
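The `add_command_parsers()` hook above attaches one shared set of command subparsers to every server name via argparse's `parents` mechanism, so `searchlight-control api start` and `searchlight-control all stop` share identical command grammar. A minimal, standalone sketch of that pattern (server and command names here are illustrative, not the real option set):

```python
import argparse

ALL_COMMANDS = ['start', 'status', 'stop']
ALL_SERVERS = ['api', 'listener']


def build_parser(prefix='searchlight-'):
    # Shared parser holding the command subparsers; add_help=False is
    # required for a parser that is only ever used through `parents`.
    cmd_parser = argparse.ArgumentParser(add_help=False)
    cmd_subparsers = cmd_parser.add_subparsers(dest='command')
    for cmd in ALL_COMMANDS:
        sub = cmd_subparsers.add_parser(cmd)
        sub.add_argument('args', nargs=argparse.REMAINDER)

    top = argparse.ArgumentParser()
    subparsers = top.add_subparsers(dest='server')
    for server in ALL_SERVERS:
        # Each server name inherits the full command grammar.
        p = subparsers.add_parser(server, parents=[cmd_parser])
        p.set_defaults(servers=[prefix + server])
    p = subparsers.add_parser('all', parents=[cmd_parser])
    p.set_defaults(servers=[prefix + s for s in ALL_SERVERS])
    return top


ns = build_parser().parse_args(['api', 'start'])
print(ns.servers, ns.command)  # ['searchlight-api'] start
```

The `all` pseudo-server simply expands to the full server list, which is why the command dispatch in `main()` can always iterate over `CONF.server.servers`.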
@@ -0,0 +1,50 @@
# Copyright 2015 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from oslo_config import cfg
from oslo_log import log as logging
import stevedore

from searchlight.common import config
from searchlight import i18n


CONF = cfg.CONF
LOG = logging.getLogger(__name__)
_LE = i18n._LE


def main():
    try:
        logging.register_options(CONF)
        cfg_files = cfg.find_config_files(project='searchlight',
                                          prog='searchlight-api')
        config.parse_args(default_config_files=cfg_files)
        logging.setup(CONF, 'searchlight')

        namespace = 'searchlight.index_backend'
        ext_manager = stevedore.extension.ExtensionManager(
            namespace, invoke_on_load=True)
        for ext in ext_manager.extensions:
            try:
                ext.obj.setup()
            except Exception as e:
                LOG.error(_LE("Failed to setup index extension "
                              "%(ext)s: %(e)s") % {'ext': ext.name,
                                                   'e': e})
    except RuntimeError as e:
        sys.exit("ERROR: %s" % e)
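The loop in `main()` above isolates each index backend: `ext.obj.setup()` is wrapped in its own try/except so one failing plugin logs an error rather than aborting the rest. Stevedore handles the actual discovery via setuptools entry points; the error-isolation pattern itself can be sketched with stdlib only (backend classes below are illustrative stand-ins):

```python
class GoodBackend:
    """Stand-in for a plugin whose setup() succeeds."""
    def setup(self):
        return 'indexed'


class BadBackend:
    """Stand-in for a plugin whose setup() raises."""
    def setup(self):
        raise RuntimeError('elasticsearch unreachable')


def setup_all(extensions):
    # Mirror the loop in main(): call setup() on every extension,
    # recording failures instead of propagating them.
    results = {}
    for name, obj in extensions.items():
        try:
            results[name] = obj.setup()
        except Exception as e:
            # Corresponds to the LOG.error branch: note the failure, continue.
            results[name] = 'failed: %s' % e
    return results


print(setup_all({'good': GoodBackend(), 'bad': BadBackend()}))
```

Catching broad `Exception` here is deliberate: a plugin's failure mode is unknown to the host program, and the goal is that every other backend still gets its index created.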
@@ -0,0 +1,292 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
This auth module is intended to allow OpenStack client-tools to select from a
variety of authentication strategies, including NoAuth (the default), and
Keystone (an identity management system).

    > auth_plugin = AuthPlugin(creds)

    > auth_plugin.authenticate()

    > auth_plugin.auth_token
    abcdefg

    > auth_plugin.management_url
    http://service_endpoint/
"""
import httplib2
from oslo_log import log as logging
from oslo_serialization import jsonutils
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
import six.moves.urllib.parse as urlparse

from searchlight.common import exception
from searchlight import i18n


LOG = logging.getLogger(__name__)
_ = i18n._


class BaseStrategy(object):
    def __init__(self):
        self.auth_token = None
        # TODO(sirp): Should expose selecting public/internal/admin URL.
        self.management_url = None

    def authenticate(self):
        raise NotImplementedError

    @property
    def is_authenticated(self):
        raise NotImplementedError

    @property
    def strategy(self):
        raise NotImplementedError


class NoAuthStrategy(BaseStrategy):
    def authenticate(self):
        pass

    @property
    def is_authenticated(self):
        return True

    @property
    def strategy(self):
        return 'noauth'


class KeystoneStrategy(BaseStrategy):
    MAX_REDIRECTS = 10

    def __init__(self, creds, insecure=False, configure_via_auth=True):
        self.creds = creds
        self.insecure = insecure
        self.configure_via_auth = configure_via_auth
        super(KeystoneStrategy, self).__init__()

    def check_auth_params(self):
        # Ensure that supplied credential parameters are as required
        for required in ('username', 'password', 'auth_url',
                         'strategy'):
            if self.creds.get(required) is None:
                raise exception.MissingCredentialError(required=required)
        if self.creds['strategy'] != 'keystone':
            raise exception.BadAuthStrategy(expected='keystone',
                                            received=self.creds['strategy'])
        # For v2.0 also check tenant is present
        if self.creds['auth_url'].rstrip('/').endswith('v2.0'):
            if self.creds.get("tenant") is None:
                raise exception.MissingCredentialError(required='tenant')

    def authenticate(self):
        """Authenticate with the Keystone service.

        There are a few scenarios to consider here:

        1. Which version of Keystone are we using? v1 which uses headers to
           pass the credentials, or v2 which uses a JSON encoded request body?

        2. Keystone may respond back with a redirection using a 305 status
           code.

        3. We may attempt a v1 auth when v2 is what's called for. In this
           case, we rewrite the url to contain /v2.0/ and retry using the v2
           protocol.
        """
        def _authenticate(auth_url):
            # If OS_AUTH_URL is missing a trailing slash add one
            if not auth_url.endswith('/'):
                auth_url += '/'
            token_url = urlparse.urljoin(auth_url, "tokens")
            # 1. Check Keystone version
            is_v2 = auth_url.rstrip('/').endswith('v2.0')
            if is_v2:
                self._v2_auth(token_url)
            else:
                self._v1_auth(token_url)

        self.check_auth_params()
        auth_url = self.creds['auth_url']
        for _ in range(self.MAX_REDIRECTS):
            try:
                _authenticate(auth_url)
            except exception.AuthorizationRedirect as e:
                # 2. Keystone may redirect us
                auth_url = e.url
            except exception.AuthorizationFailure:
                # 3. In some configurations nova makes redirection to
                # v2.0 keystone endpoint. Also, new location does not
                # contain real endpoint, only hostname and port.
                if 'v2.0' not in auth_url:
                    auth_url = urlparse.urljoin(auth_url, 'v2.0/')
            else:
                # If we successfully auth'd, then memorize the correct
                # auth_url for future use.
                self.creds['auth_url'] = auth_url
                break
        else:
            # Guard against a redirection loop
            raise exception.MaxRedirectsExceeded(redirects=self.MAX_REDIRECTS)

    def _v1_auth(self, token_url):
        creds = self.creds

        headers = {}
        headers['X-Auth-User'] = creds['username']
        headers['X-Auth-Key'] = creds['password']

        tenant = creds.get('tenant')
        if tenant:
            headers['X-Auth-Tenant'] = tenant

        resp, resp_body = self._do_request(token_url, 'GET', headers=headers)

        def _management_url(self, resp):
            for url_header in ('x-image-management-url',
                               'x-server-management-url',
                               'x-searchlight'):
                try:
                    return resp[url_header]
                except KeyError as e:
                    not_found = e
            raise not_found

        if resp.status in (200, 204):
            try:
                if self.configure_via_auth:
                    self.management_url = _management_url(self, resp)
                self.auth_token = resp['x-auth-token']
            except KeyError:
                raise exception.AuthorizationFailure()
        elif resp.status == 305:
            raise exception.AuthorizationRedirect(uri=resp['location'])
        elif resp.status == 400:
            raise exception.AuthBadRequest(url=token_url)
        elif resp.status == 401:
            raise exception.NotAuthenticated()
        elif resp.status == 404:
            raise exception.AuthUrlNotFound(url=token_url)
        else:
            raise Exception(_('Unexpected response: %s') % resp.status)

    def _v2_auth(self, token_url):

        creds = self.creds

        creds = {
            "auth": {
                "tenantName": creds['tenant'],
                "passwordCredentials": {
                    "username": creds['username'],
                    "password": creds['password']
                }
            }
        }

        headers = {}
        headers['Content-Type'] = 'application/json'
        req_body = jsonutils.dumps(creds)

        resp, resp_body = self._do_request(
            token_url, 'POST', headers=headers, body=req_body)

        if resp.status == 200:
            resp_auth = jsonutils.loads(resp_body)['access']
            creds_region = self.creds.get('region')
            if self.configure_via_auth:
                endpoint = get_endpoint(resp_auth['serviceCatalog'],
                                        endpoint_region=creds_region)
                self.management_url = endpoint
            self.auth_token = resp_auth['token']['id']
        elif resp.status == 305:
            raise exception.RedirectException(resp['location'])
        elif resp.status == 400:
            raise exception.AuthBadRequest(url=token_url)
        elif resp.status == 401:
            raise exception.NotAuthenticated()
        elif resp.status == 404:
            raise exception.AuthUrlNotFound(url=token_url)
        else:
            raise Exception(_('Unexpected response: %s') % resp.status)

    @property
    def is_authenticated(self):
        return self.auth_token is not None

    @property
    def strategy(self):
        return 'keystone'

    def _do_request(self, url, method, headers=None, body=None):
        headers = headers or {}
        conn = httplib2.Http()
        conn.force_exception_to_status_code = True
        conn.disable_ssl_certificate_validation = self.insecure
        headers['User-Agent'] = 'searchlight-client'
        resp, resp_body = conn.request(url, method, headers=headers,
                                       body=body)
        return resp, resp_body


def get_plugin_from_strategy(strategy, creds=None, insecure=False,
                             configure_via_auth=True):
    if strategy == 'noauth':
        return NoAuthStrategy()
    elif strategy == 'keystone':
        return KeystoneStrategy(creds, insecure,
                                configure_via_auth=configure_via_auth)
    else:
        raise Exception(_("Unknown auth strategy '%s'") % strategy)


def get_endpoint(service_catalog, service_type='image', endpoint_region=None,
                 endpoint_type='publicURL'):
    """
    Select an endpoint from the service catalog

    We search the full service catalog for services
    matching both type and region. If the client
    supplied no region then any 'image' endpoint
    is considered a match. There must be one -- and
    only one -- successful match in the catalog,
    otherwise we will raise an exception.
    """
    endpoint = None
    for service in service_catalog:
        s_type = None
        try:
            s_type = service['type']
        except KeyError:
            msg = _('Encountered service with no "type": %s') % s_type
            LOG.warn(msg)
            continue

        if s_type == service_type:
            for ep in service['endpoints']:
                if endpoint_region is None or endpoint_region == ep['region']:
                    if endpoint is not None:
                        # This is a second match, abort
                        raise exception.RegionAmbiguity(
                            region=endpoint_region)
                    endpoint = ep
    if endpoint and endpoint.get(endpoint_type):
        return endpoint[endpoint_type]
    else:
        raise exception.NoServiceEndpoint()
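The selection rules in `get_endpoint()`'s docstring (match on type and region, treat a missing region as a wildcard, fail on a second match) can be exercised against a hand-written catalog. This is a self-contained sketch using plain exceptions in place of the searchlight ones; the catalog structure mirrors a Keystone v2 `serviceCatalog` entry:

```python
def pick_endpoint(catalog, service_type='image', region=None,
                  endpoint_type='publicURL'):
    # Scan every service of the requested type; a region of None
    # matches any endpoint, mirroring get_endpoint() above.
    match = None
    for service in catalog:
        if service.get('type') != service_type:
            continue
        for ep in service['endpoints']:
            if region is None or region == ep['region']:
                if match is not None:
                    # Second match: ambiguous, same as RegionAmbiguity.
                    raise ValueError('ambiguous region: %s' % region)
                match = ep
    if match and match.get(endpoint_type):
        return match[endpoint_type]
    # Same role as NoServiceEndpoint.
    raise LookupError('no matching endpoint')


catalog = [{'type': 'image',
            'endpoints': [{'region': 'RegionOne',
                           'publicURL': 'http://glance.example:9292'}]}]
print(pick_endpoint(catalog, region='RegionOne'))
```

Note the ambiguity check fires only after a first match is recorded, so a catalog with two `image` endpoints in different regions is fine as long as the caller names a region.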
@@ -0,0 +1,594 @@
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# HTTPSClientAuthConnection code comes courtesy of ActiveState website:
# http://code.activestate.com/recipes/
# 577548-https-httplib-client-connection-with-certificate-v/

import collections
import copy
import errno
import functools
import httplib
import os
import re

try:
    from eventlet.green import socket
    from eventlet.green import ssl
except ImportError:
    import socket
    import ssl

import osprofiler.web

try:
    import sendfile  # noqa
    SENDFILE_SUPPORTED = True
except ImportError:
    SENDFILE_SUPPORTED = False

from oslo_log import log as logging
from oslo_utils import encodeutils
import six
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
import six.moves.urllib.parse as urlparse

from searchlight.common import auth
from searchlight.common import exception
from searchlight.common import utils
from searchlight import i18n

LOG = logging.getLogger(__name__)
_ = i18n._

# common chunk size for get and put
CHUNKSIZE = 65536

VERSION_REGEX = re.compile(r"/?v[0-9\.]+")


def handle_unauthenticated(func):
    """
    Wrap a function to re-authenticate and retry.
    """
    @functools.wraps(func)
    def wrapped(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except exception.NotAuthenticated:
            self._authenticate(force_reauth=True)
            return func(self, *args, **kwargs)
    return wrapped


def handle_redirects(func):
    """
    Wrap the _do_request function to handle HTTP redirects.
    """
    MAX_REDIRECTS = 5

    @functools.wraps(func)
    def wrapped(self, method, url, body, headers):
        for _ in range(MAX_REDIRECTS):
            try:
                return func(self, method, url, body, headers)
            except exception.RedirectException as redirect:
                if redirect.url is None:
                    raise exception.InvalidRedirect()
                url = redirect.url
        raise exception.MaxRedirectsExceeded(redirects=MAX_REDIRECTS)
    return wrapped


class HTTPSClientAuthConnection(httplib.HTTPSConnection):
    """
    Class to make a HTTPS connection, with support for
    full client-based SSL Authentication

    :see http://code.activestate.com/recipes/
            577548-https-httplib-client-connection-with-certificate-v/
    """

    def __init__(self, host, port, key_file, cert_file,
                 ca_file, timeout=None, insecure=False):
        httplib.HTTPSConnection.__init__(self, host, port, key_file=key_file,
                                         cert_file=cert_file)
        self.key_file = key_file
        self.cert_file = cert_file
        self.ca_file = ca_file
        self.timeout = timeout
        self.insecure = insecure

    def connect(self):
        """
        Connect to a host on a given (SSL) port.
        If ca_file is pointing somewhere, use it to check Server Certificate.

        Redefined/copied and extended from httplib.py:1105 (Python 2.6.x).
        This is needed to pass cert_reqs=ssl.CERT_REQUIRED as parameter to
        ssl.wrap_socket(), which forces SSL to check server certificate
        against our client certificate.
        """
        sock = socket.create_connection((self.host, self.port), self.timeout)
        if self._tunnel_host:
            self.sock = sock
            self._tunnel()
        # Check CA file unless 'insecure' is specified
        if self.insecure is True:
            self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file,
                                        cert_reqs=ssl.CERT_NONE)
        else:
            self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file,
                                        ca_certs=self.ca_file,
                                        cert_reqs=ssl.CERT_REQUIRED)


class BaseClient(object):

    """A base client class"""

    DEFAULT_PORT = 80
    DEFAULT_DOC_ROOT = None
    # Standard CA file locations for Debian/Ubuntu, RedHat/Fedora,
    # Suse, FreeBSD/OpenBSD
    DEFAULT_CA_FILE_PATH = ('/etc/ssl/certs/ca-certificates.crt:'
                            '/etc/pki/tls/certs/ca-bundle.crt:'
                            '/etc/ssl/ca-bundle.pem:'
                            '/etc/ssl/cert.pem')

    OK_RESPONSE_CODES = (
        httplib.OK,
        httplib.CREATED,
        httplib.ACCEPTED,
        httplib.NO_CONTENT,
    )

    REDIRECT_RESPONSE_CODES = (
        httplib.MOVED_PERMANENTLY,
        httplib.FOUND,
        httplib.SEE_OTHER,
        httplib.USE_PROXY,
        httplib.TEMPORARY_REDIRECT,
    )

    def __init__(self, host, port=None, timeout=None, use_ssl=False,
                 auth_token=None, creds=None, doc_root=None, key_file=None,
                 cert_file=None, ca_file=None, insecure=False,
                 configure_via_auth=True):
        """
        Creates a new client to some service.

        :param host: The host where service resides
        :param port: The port where service resides
        :param timeout: Connection timeout.
        :param use_ssl: Should we use HTTPS?
        :param auth_token: The auth token to pass to the server
        :param creds: The credentials to pass to the auth plugin
        :param doc_root: Prefix for all URLs we request from host
        :param key_file: Optional PEM-formatted file that contains the
                         private key.
                         If use_ssl is True, and this param is None (the
                         default), then an environ variable
                         GLANCE_CLIENT_KEY_FILE is looked for. If no such
                         environ variable is found, ClientConnectionError
                         will be raised.
        :param cert_file: Optional PEM-formatted certificate chain file.
                          If use_ssl is True, and this param is None (the
                          default), then an environ variable
                          GLANCE_CLIENT_CERT_FILE is looked for. If no such
                          environ variable is found, ClientConnectionError
                          will be raised.
        :param ca_file: Optional CA cert file to use in SSL connections
                        If use_ssl is True, and this param is None (the
                        default), then an environ variable
                        GLANCE_CLIENT_CA_FILE is looked for.
        :param insecure: Optional. If set then the server's certificate
                         will not be verified.
        :param configure_via_auth: Optional. Defaults to True. If set, the
                         URL returned from the service catalog for the image
                         endpoint will **override** the URL supplied to in
                         the host parameter.
        """
        self.host = host
        self.port = port or self.DEFAULT_PORT
        self.timeout = timeout
        # A value of '0' implies never timeout
        if timeout == 0:
            self.timeout = None
        self.use_ssl = use_ssl
        self.auth_token = auth_token
        self.creds = creds or {}
        self.connection = None
        self.configure_via_auth = configure_via_auth
        # doc_root can be a nullstring, which is valid, and why we
        # cannot simply do doc_root or self.DEFAULT_DOC_ROOT below.
        self.doc_root = (doc_root if doc_root is not None
                         else self.DEFAULT_DOC_ROOT)

        self.key_file = key_file
        self.cert_file = cert_file
        self.ca_file = ca_file
        self.insecure = insecure
        self.auth_plugin = self.make_auth_plugin(self.creds, self.insecure)
        self.connect_kwargs = self.get_connect_kwargs()

    def get_connect_kwargs(self):
        connect_kwargs = {}

        # Both secure and insecure connections have a timeout option
        connect_kwargs['timeout'] = self.timeout

        if self.use_ssl:
            if self.key_file is None:
                self.key_file = os.environ.get('GLANCE_CLIENT_KEY_FILE')
            if self.cert_file is None:
                self.cert_file = os.environ.get('GLANCE_CLIENT_CERT_FILE')
            if self.ca_file is None:
                self.ca_file = os.environ.get('GLANCE_CLIENT_CA_FILE')

            # Check that key_file/cert_file are either both set or both unset
            if self.cert_file is not None and self.key_file is None:
                msg = _("You have selected to use SSL in connecting, "
                        "and you have supplied a cert, "
                        "however you have failed to supply either a "
                        "key_file parameter or set the "
                        "GLANCE_CLIENT_KEY_FILE environ variable")
                raise exception.ClientConnectionError(msg)

            if self.key_file is not None and self.cert_file is None:
                msg = _("You have selected to use SSL in connecting, "
                        "and you have supplied a key, "
                        "however you have failed to supply either a "
                        "cert_file parameter or set the "
                        "GLANCE_CLIENT_CERT_FILE environ variable")
                raise exception.ClientConnectionError(msg)

            if (self.key_file is not None and
                    not os.path.exists(self.key_file)):
                msg = _("The key file you specified %s does not "
                        "exist") % self.key_file
                raise exception.ClientConnectionError(msg)
            connect_kwargs['key_file'] = self.key_file

            if (self.cert_file is not None and
                    not os.path.exists(self.cert_file)):
                msg = _("The cert file you specified %s does not "
                        "exist") % self.cert_file
                raise exception.ClientConnectionError(msg)
            connect_kwargs['cert_file'] = self.cert_file

            if (self.ca_file is not None and
                    not os.path.exists(self.ca_file)):
                msg = _("The CA file you specified %s does not "
                        "exist") % self.ca_file
                raise exception.ClientConnectionError(msg)

            if self.ca_file is None:
                for ca in self.DEFAULT_CA_FILE_PATH.split(":"):
                    if os.path.exists(ca):
                        self.ca_file = ca
                        break

            connect_kwargs['ca_file'] = self.ca_file
            connect_kwargs['insecure'] = self.insecure

        return connect_kwargs

    def configure_from_url(self, url):
        """
        Setups the connection based on the given url.

        The form is:

            <http|https>://<host>:port/doc_root
        """
        LOG.debug("Configuring from URL: %s", url)
        parsed = urlparse.urlparse(url)
        self.use_ssl = parsed.scheme == 'https'
        self.host = parsed.hostname
        self.port = parsed.port or 80
        self.doc_root = parsed.path.rstrip('/')

        # We need to ensure a version identifier is appended to the doc_root
        if not VERSION_REGEX.match(self.doc_root):
            if self.DEFAULT_DOC_ROOT:
                doc_root = self.DEFAULT_DOC_ROOT.lstrip('/')
                self.doc_root += '/' + doc_root
                msg = ("Appending doc_root %(doc_root)s to URL %(url)s" %
                       {'doc_root': doc_root, 'url': url})
                LOG.debug(msg)

        # ensure connection kwargs are re-evaluated after the service catalog
        # publicURL is parsed for potential SSL usage
        self.connect_kwargs = self.get_connect_kwargs()

    def make_auth_plugin(self, creds, insecure):
        """
        Returns an instantiated authentication plugin.
        """
        strategy = creds.get('strategy', 'noauth')
        plugin = auth.get_plugin_from_strategy(strategy, creds, insecure,
                                               self.configure_via_auth)
        return plugin

    def get_connection_type(self):
        """
        Returns the proper connection type
        """
        if self.use_ssl:
            return HTTPSClientAuthConnection
        else:
            return httplib.HTTPConnection

    def _authenticate(self, force_reauth=False):
        """
        Use the authentication plugin to authenticate and set the auth token.

        :param force_reauth: For re-authentication to bypass cache.
        """
        auth_plugin = self.auth_plugin

        if not auth_plugin.is_authenticated or force_reauth:
            auth_plugin.authenticate()

        self.auth_token = auth_plugin.auth_token

        management_url = auth_plugin.management_url
        if management_url and self.configure_via_auth:
            self.configure_from_url(management_url)

    @handle_unauthenticated
    def do_request(self, method, action, body=None, headers=None,
                   params=None):
        """
        Make a request, returning an HTTP response object.

        :param method: HTTP verb (GET, POST, PUT, etc.)
        :param action: Requested path to append to self.doc_root
        :param body: Data to send in the body of the request
        :param headers: Headers to send with the request
        :param params: Key/value pairs to use in query string

        :returns: HTTP response object
        """
        if not self.auth_token:
            self._authenticate()

        url = self._construct_url(action, params)
        # NOTE(ameade): We need to copy these kwargs since they can be altered
        # in _do_request but we need the originals if handle_unauthenticated
        # calls this function again.
        return self._do_request(method=method, url=url,
                                body=copy.deepcopy(body),
                                headers=copy.deepcopy(headers))

    def _construct_url(self, action, params=None):
        """
        Create a URL object we can use to pass to _do_request().
        """
        action = urlparse.quote(action)
        path = '/'.join([self.doc_root or '', action.lstrip('/')])
        scheme = "https" if self.use_ssl else "http"
        netloc = "%s:%d" % (self.host, self.port)

        if isinstance(params, dict):
            for (key, value) in params.items():
                if value is None:
                    del params[key]
                    continue
                if not isinstance(value, six.string_types):
                    value = str(value)
                params[key] = encodeutils.safe_encode(value)
            query = urlparse.urlencode(params)
        else:
            query = None

        url = urlparse.ParseResult(scheme, netloc, path, '', query, '')
        log_msg = _("Constructed URL: %s")
        LOG.debug(log_msg, url.geturl())
        return url

    def _encode_headers(self, headers):
        """
        Encodes headers.

        Note: This should be used right before
        sending anything out.

        :param headers: Headers to encode
        :returns: Dictionary with encoded headers'
                  names and values
        """
        to_str = encodeutils.safe_encode
        return dict([(to_str(h), to_str(v)) for h, v in
                     six.iteritems(headers)])

    @handle_redirects
    def _do_request(self, method, url, body, headers):
        """
        Connects to the server and issues a request. Handles converting
        any returned HTTP error status codes to OpenStack/Glance exceptions
        and closing the server connection. Returns the result data, or
        raises an appropriate exception.

        :param method: HTTP method ("GET", "POST", "PUT", etc...)
        :param url: urlparse.ParsedResult object with URL information
        :param body: data to send (as string, filelike or iterable),
                     or None (default)
        :param headers: mapping of key/value pairs to add as headers

        :note

        If the body param has a read attribute, and method is either
        POST or PUT, this method will automatically conduct a chunked-transfer
        encoding and use the body as a file object or iterable, transferring
        chunks of data using the connection's send() method. This allows large
        objects to be transferred efficiently without buffering the entire
        body in memory.
        """
        if url.query:
            path = url.path + "?" + url.query
        else:
            path = url.path

        try:
            connection_type = self.get_connection_type()
            headers = self._encode_headers(headers or {})
            headers.update(osprofiler.web.get_trace_id_headers())

            if 'x-auth-token' not in headers and self.auth_token:
                headers['x-auth-token'] = self.auth_token

            c = connection_type(url.hostname, url.port, **self.connect_kwargs)

            def _pushing(method):
                return method.lower() in ('post', 'put')

            def _simple(body):
                return body is None or isinstance(body, six.string_types)

            def _filelike(body):
                return hasattr(body, 'read')

            def _sendbody(connection, iter):
                connection.endheaders()
                for sent in iter:
                    # iterator has done the heavy lifting
                    pass

            def _chunkbody(connection, iter):
                connection.putheader('Transfer-Encoding', 'chunked')
                connection.endheaders()
                for chunk in iter:
                    connection.send('%x\r\n%s\r\n' % (len(chunk), chunk))
                connection.send('0\r\n\r\n')

            # Do a simple request or a chunked request, depending
            # on whether the body param is file-like or iterable and
            # the method is PUT or POST
            #
            if not _pushing(method) or _simple(body):
                # Simple request...
                c.request(method, path, body, headers)
            elif _filelike(body) or self._iterable(body):
                c.putrequest(method, path)

                use_sendfile = self._sendable(body)

                # According to HTTP/1.1, Content-Length and Transfer-Encoding
                # conflict.
                for header, value in headers.items():
                    if use_sendfile or header.lower() != 'content-length':
                        c.putheader(header, str(value))

                iter = utils.chunkreadable(body)

                if use_sendfile:
                    # send actual file without copying into userspace
                    _sendbody(c, iter)
                else:
                    # otherwise iterate and chunk
                    _chunkbody(c, iter)
            else:
                raise TypeError('Unsupported image type: %s' % body.__class__)

            res = c.getresponse()

            def _retry(res):
                return res.getheader('Retry-After')

            status_code = self.get_status_code(res)
            if status_code in self.OK_RESPONSE_CODES:
                return res
            elif status_code in self.REDIRECT_RESPONSE_CODES:
                raise exception.RedirectException(res.getheader('Location'))
            elif status_code == httplib.UNAUTHORIZED:
                raise exception.NotAuthenticated(res.read())
            elif status_code == httplib.FORBIDDEN:
                raise exception.Forbidden(res.read())
            elif status_code == httplib.NOT_FOUND:
                raise exception.NotFound(res.read())
            elif status_code == httplib.CONFLICT:
                raise exception.Duplicate(res.read())
            elif status_code == httplib.BAD_REQUEST:
                raise exception.Invalid(res.read())
            elif status_code == httplib.MULTIPLE_CHOICES:
                raise exception.MultipleChoices(body=res.read())
            elif status_code == httplib.REQUEST_ENTITY_TOO_LARGE:
                raise exception.LimitExceeded(retry=_retry(res),
                                              body=res.read())
            elif status_code == httplib.INTERNAL_SERVER_ERROR:
                raise exception.ServerError()
            elif status_code == httplib.SERVICE_UNAVAILABLE:
                raise exception.ServiceUnavailable(retry=_retry(res))
            else:
                raise exception.UnexpectedStatus(status=status_code,
                                                 body=res.read())

        except (socket.error, IOError) as e:
            raise exception.ClientConnectionError(e)
|
||||
|
||||
def _seekable(self, body):
|
||||
# pipes are not seekable, avoids sendfile() failure on e.g.
|
||||
# cat /path/to/image | searchlight add ...
|
||||
# or where add command is launched via popen
|
||||
try:
|
||||
os.lseek(body.fileno(), 0, os.SEEK_CUR)
|
||||
return True
|
||||
except OSError as e:
|
||||
return (e.errno != errno.ESPIPE)
|
||||
|
||||
def _sendable(self, body):
|
||||
return (SENDFILE_SUPPORTED and
|
||||
hasattr(body, 'fileno') and
|
||||
self._seekable(body) and
|
||||
not self.use_ssl)
|
||||
|
||||
def _iterable(self, body):
|
||||
return isinstance(body, collections.Iterable)
|
||||
|
||||
def get_status_code(self, response):
|
||||
"""
|
||||
Returns the integer status code from the response, which
|
||||
can be either a Webob.Response (used in testing) or httplib.Response
|
||||
"""
|
||||
if hasattr(response, 'status_int'):
|
||||
return response.status_int
|
||||
else:
|
||||
return response.status
|
||||
|
||||
def _extract_params(self, actual_params, allowed_params):
|
||||
"""
|
||||
Extract a subset of keys from a dictionary. The filters key
|
||||
will also be extracted, and each of its values will be returned
|
||||
as an individual param.
|
||||
|
||||
:param actual_params: dict of keys to filter
|
||||
:param allowed_params: list of keys that 'actual_params' will be
|
||||
reduced to
|
||||
:retval subset of 'params' dict
|
||||
"""
|
||||
try:
|
||||
# expect 'filters' param to be a dict here
|
||||
result = dict(actual_params.get('filters'))
|
||||
except TypeError:
|
||||
result = {}
|
||||
|
||||
for allowed_param in allowed_params:
|
||||
if allowed_param in actual_params:
|
||||
result[allowed_param] = actual_params[allowed_param]
|
||||
|
||||
return result
|
|
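The `_chunkbody` helper above emits each chunk as its hexadecimal length, CRLF, the chunk data, CRLF, and terminates the body with a zero-length chunk, per HTTP/1.1 chunked transfer coding. A standalone sketch of that framing (the `chunk_frames` name is illustrative, not part of the client):

```python
def chunk_frames(chunks):
    """Yield HTTP/1.1 chunked transfer-coding frames for an iterable of
    byte strings, mirroring what _chunkbody writes to the connection."""
    for chunk in chunks:
        # each frame: <hex size>\r\n<data>\r\n
        yield b'%x\r\n%s\r\n' % (len(chunk), chunk)
    # a zero-length chunk terminates the body
    yield b'0\r\n\r\n'
```

For example, `b''.join(chunk_frames([b'hello', b'world!']))` produces a complete chunked body a server can parse without knowing the total length up front, which is why this path skips the Content-Length header.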
@@ -0,0 +1,180 @@
#!/usr/bin/env python

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Routines for configuring Searchlight
"""

import logging
import logging.config
import logging.handlers
import os
import tempfile

from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_policy import policy
from paste import deploy

from searchlight import i18n
from searchlight.version import version_info as version

_ = i18n._


paste_deploy_opts = [
    cfg.StrOpt('flavor',
               help=_('Partial name of a pipeline in your paste configuration '
                      'file with the service name removed. For example, if '
                      'your paste section name is '
                      '[pipeline:searchlight-api-keystone] use the value '
                      '"keystone"')),
    cfg.StrOpt('config_file',
               help=_('Name of the paste configuration file.')),
]

common_opts = [
    cfg.IntOpt('limit_param_default', default=25,
               help=_('Default value for the number of items returned by a '
                      'request if not specified explicitly in the request')),
    cfg.IntOpt('api_limit_max', default=1000,
               help=_('Maximum permissible number of items that could be '
                      'returned by a request')),
    cfg.StrOpt('pydev_worker_debug_host',
               help=_('The hostname/IP of the pydev process listening for '
                      'debug connections')),
    cfg.IntOpt('pydev_worker_debug_port', default=5678,
               help=_('The port on which a pydev process is listening for '
                      'connections.')),
    cfg.StrOpt('metadata_encryption_key', secret=True,
               help=_('AES key for encrypting store \'location\' metadata. '
                      'This includes, if used, Swift or S3 credentials. '
                      'Should be set to a random string of length 16, 24 or '
                      '32 bytes')),
    cfg.StrOpt('digest_algorithm', default='sha1',
               help=_('Digest algorithm which will be used for digital '
                      'signature. The default is sha1, kept from Kilo for a '
                      'smooth upgrade process; it will be updated to sha256 '
                      'in the next release (L). Use the command '
                      '"openssl list-message-digest-algorithms" to get the '
                      'available algorithms supported by the version of '
                      'OpenSSL on the platform. Examples are "sha1", '
                      '"sha256", "sha512", etc.')),
]

CONF = cfg.CONF
CONF.register_opts(paste_deploy_opts, group='paste_deploy')
CONF.register_opts(common_opts)
policy.Enforcer(CONF)


def parse_args(args=None, usage=None, default_config_files=None):
    if "OSLO_LOCK_PATH" not in os.environ:
        lockutils.set_defaults(tempfile.gettempdir())

    CONF(args=args,
         project='searchlight',
         version=version.cached_version_string(),
         usage=usage,
         default_config_files=default_config_files)


def parse_cache_args(args=None):
    config_files = cfg.find_config_files(project='searchlight',
                                         prog='searchlight-cache')
    parse_args(args=args, default_config_files=config_files)


def _get_deployment_flavor(flavor=None):
    """
    Retrieve the paste_deploy.flavor config item, formatted appropriately
    for appending to the application name.

    :param flavor: if specified, use this setting rather than the
                   paste_deploy.flavor configuration setting
    """
    if not flavor:
        flavor = CONF.paste_deploy.flavor
    return '' if not flavor else ('-' + flavor)


def _get_paste_config_path():
    paste_suffix = '-paste.ini'
    conf_suffix = '.conf'
    if CONF.config_file:
        # Assume paste config is in a paste.ini file corresponding
        # to the last config file
        path = CONF.config_file[-1].replace(conf_suffix, paste_suffix)
    else:
        path = CONF.prog + paste_suffix
    return CONF.find_file(os.path.basename(path))


def _get_deployment_config_file():
    """
    Retrieve the deployment_config_file config item, formatted as an
    absolute pathname.
    """
    path = CONF.paste_deploy.config_file
    if not path:
        path = _get_paste_config_path()
    if not path:
        msg = _("Unable to locate paste config file for %s.") % CONF.prog
        raise RuntimeError(msg)
    return os.path.abspath(path)


def load_paste_app(app_name, flavor=None, conf_file=None):
    """
    Builds and returns a WSGI app from a paste config file.

    We assume the last config file specified in the supplied ConfigOpts
    object is the paste config file, if conf_file is None.

    :param app_name: name of the application to load
    :param flavor: name of the variant of the application to load
    :param conf_file: path to the paste config file

    :raises RuntimeError when config file cannot be located or application
            cannot be loaded from config file
    """
    # append the deployment flavor to the application name,
    # in order to identify the appropriate paste pipeline
    app_name += _get_deployment_flavor(flavor)

    if not conf_file:
        conf_file = _get_deployment_config_file()

    try:
        logger = logging.getLogger(__name__)
        logger.debug("Loading %(app_name)s from %(conf_file)s",
                     {'conf_file': conf_file, 'app_name': app_name})

        app = deploy.loadapp("config:%s" % conf_file, name=app_name)

        # Log the options used when starting if we're in debug mode...
        if CONF.debug:
            CONF.log_opt_values(logger, logging.DEBUG)

        return app
    except (LookupError, ImportError) as e:
        msg = (_("Unable to load %(app_name)s from "
                 "configuration file %(conf_file)s."
                 "\nGot: %(e)r") % {'app_name': app_name,
                                    'conf_file': conf_file,
                                    'e': e})
        logger.error(msg)
        raise RuntimeError(msg)
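The flavor handling above simply dashes the configured flavor onto the app name, so `searchlight-api` plus flavor `keystone` selects the `[pipeline:searchlight-api-keystone]` paste section. A minimal sketch of that name composition (standalone, without oslo.config; the function name is illustrative):

```python
def deployment_app_name(app_name, flavor=None):
    """Append the paste_deploy flavor to an app name, as load_paste_app
    does, to pick the matching [pipeline:...] paste section."""
    suffix = '' if not flavor else '-' + flavor
    return app_name + suffix
```

With no flavor configured, the bare pipeline name is used unchanged.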
@@ -0,0 +1,69 @@
#!/usr/bin/env python

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Routines for URL-safe encrypting/decrypting
"""

import base64

from Crypto.Cipher import AES
from Crypto import Random
from Crypto.Random import random
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range


def urlsafe_encrypt(key, plaintext, blocksize=16):
    """
    Encrypts plaintext. Resulting ciphertext will contain URL-safe characters.

    :param key: AES secret key
    :param plaintext: Input text to be encrypted
    :param blocksize: Non-zero integer multiple of AES blocksize in bytes (16)

    :returns: Resulting ciphertext
    """
    def pad(text):
        """
        Pads text to be encrypted
        """
        pad_length = (blocksize - len(text) % blocksize)
        sr = random.StrongRandom()
        pad = ''.join(chr(sr.randint(1, 0xFF)) for i in range(pad_length - 1))
        # We use chr(0) as a delimiter between text and padding
        return text + chr(0) + pad

    # random initial 16 bytes for CBC
    init_vector = Random.get_random_bytes(16)
    cypher = AES.new(key, AES.MODE_CBC, init_vector)
    padded = cypher.encrypt(pad(str(plaintext)))
    return base64.urlsafe_b64encode(init_vector + padded)


def urlsafe_decrypt(key, ciphertext):
    """
    Decrypts URL-safe base64 encoded ciphertext.

    :param key: AES secret key
    :param ciphertext: The encrypted text to decrypt

    :returns: Resulting plaintext
    """
    # Cast from unicode
    ciphertext = base64.urlsafe_b64decode(str(ciphertext))
    cypher = AES.new(key, AES.MODE_CBC, ciphertext[:16])
    padded = cypher.decrypt(ciphertext[16:])
    return padded[:padded.rfind(chr(0))]
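The `pad` helper above fills the final AES block with random non-zero bytes after a `chr(0)` delimiter, so decryption can strip the padding with `rfind`. The scheme in isolation (pure Python, no AES; the `pad`/`unpad` pair here is a sketch of the module's logic, not its API):

```python
import random


def pad(text, blocksize=16):
    """Pad up to a multiple of blocksize: a NUL delimiter followed by
    random non-zero filler bytes, as urlsafe_encrypt's inner pad does."""
    pad_length = blocksize - len(text) % blocksize
    # filler is drawn from 1..0xFF so the NUL delimiter stays unique
    filler = ''.join(chr(random.randint(1, 0xFF))
                     for _ in range(pad_length - 1))
    return text + chr(0) + filler


def unpad(padded):
    """Strip everything from the last NUL on, as urlsafe_decrypt does."""
    return padded[:padded.rfind(chr(0))]
```

Because the filler bytes are never NUL, the last `chr(0)` in the padded string is always the delimiter, which is what makes the `rfind`-based strip in `urlsafe_decrypt` safe.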
@@ -0,0 +1,290 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Searchlight exception subclasses"""

import six
import six.moves.urllib.parse as urlparse

from searchlight import i18n

_ = i18n._

_FATAL_EXCEPTION_FORMAT_ERRORS = False


class RedirectException(Exception):
    def __init__(self, url):
        self.url = urlparse.urlparse(url)


class SearchlightException(Exception):
    """
    Base Searchlight Exception

    To correctly use this class, inherit from it and define
    a 'message' property. That message will get printf'd
    with the keyword arguments provided to the constructor.
    """
    message = _("An unknown exception occurred")

    def __init__(self, message=None, *args, **kwargs):
        if not message:
            message = self.message
        try:
            if kwargs:
                message = message % kwargs
        except Exception:
            if _FATAL_EXCEPTION_FORMAT_ERRORS:
                raise
            else:
                # at least get the core message out if something happened
                pass
        self.msg = message
        super(SearchlightException, self).__init__(message)

    def __unicode__(self):
        # NOTE(flwang): By default, self.msg is an instance of Message, which
        # can't be converted by str(). Based on the definition of
        # __unicode__, it should return unicode always.
        return six.text_type(self.msg)


class MissingCredentialError(SearchlightException):
    message = _("Missing required credential: %(required)s")


class BadAuthStrategy(SearchlightException):
    message = _("Incorrect auth strategy, expected \"%(expected)s\" but "
                "received \"%(received)s\"")


class NotFound(SearchlightException):
    message = _("An object with the specified identifier was not found.")


class BadStoreUri(SearchlightException):
    message = _("The Store URI was malformed.")


class Duplicate(SearchlightException):
    message = _("An object with the same identifier already exists.")


class Conflict(SearchlightException):
    message = _("An object with the same identifier is currently being "
                "operated on.")


class AuthBadRequest(SearchlightException):
    message = _("Connect error/bad request to Auth service at URL %(url)s.")


class AuthUrlNotFound(SearchlightException):
    message = _("Auth service at URL %(url)s not found.")


class AuthorizationFailure(SearchlightException):
    message = _("Authorization failed.")


class NotAuthenticated(SearchlightException):
    message = _("You are not authenticated.")


class UploadException(SearchlightException):
    message = _('Image upload problem: %s')


class Forbidden(SearchlightException):
    message = _("You are not authorized to complete this action.")


class Invalid(SearchlightException):
    message = _("Data supplied was not valid.")


class InvalidSortKey(Invalid):
    message = _("Sort key supplied was not valid.")


class InvalidSortDir(Invalid):
    message = _("Sort direction supplied was not valid.")


class InvalidPropertyProtectionConfiguration(Invalid):
    message = _("Invalid configuration in property protection file.")


class InvalidFilterRangeValue(Invalid):
    message = _("Unable to filter using the specified range.")


class InvalidOptionValue(Invalid):
    message = _("Invalid value for option %(option)s: %(value)s")


class ReadonlyProperty(Forbidden):
    message = _("Attribute '%(property)s' is read-only.")


class ReservedProperty(Forbidden):
    message = _("Attribute '%(property)s' is reserved.")


class AuthorizationRedirect(SearchlightException):
    message = _("Redirecting to %(uri)s for authorization.")


class ClientConnectionError(SearchlightException):
    message = _("There was an error connecting to a server")


class ClientConfigurationError(SearchlightException):
    message = _("There was an error configuring the client.")


class MultipleChoices(SearchlightException):
    message = _("The request returned a 300 Multiple Choices. This generally "
                "means that you have not included a version indicator in a "
                "request URI.\n\nThe body of response returned:\n%(body)s")


class LimitExceeded(SearchlightException):
    message = _("The request returned a 413 Request Entity Too Large. This "
                "generally means that rate limiting or a quota threshold was "
                "breached.\n\nThe response body:\n%(body)s")

    def __init__(self, *args, **kwargs):
        self.retry_after = (int(kwargs['retry']) if kwargs.get('retry')
                            else None)
        super(LimitExceeded, self).__init__(*args, **kwargs)


class ServiceUnavailable(SearchlightException):
    message = _("The request returned 503 Service Unavailable. This "
                "generally occurs on service overload or other transient "
                "outage.")

    def __init__(self, *args, **kwargs):
        self.retry_after = (int(kwargs['retry']) if kwargs.get('retry')
                            else None)
        super(ServiceUnavailable, self).__init__(*args, **kwargs)


class ServerError(SearchlightException):
    message = _("The request returned 500 Internal Server Error.")


class UnexpectedStatus(SearchlightException):
    message = _("The request returned an unexpected status: %(status)s."
                "\n\nThe response body:\n%(body)s")


class InvalidContentType(SearchlightException):
    message = _("Invalid content type %(content_type)s")


class BadRegistryConnectionConfiguration(SearchlightException):
    message = _("Registry was not configured correctly on API server. "
                "Reason: %(reason)s")


class BadDriverConfiguration(SearchlightException):
    message = _("Driver %(driver_name)s could not be configured correctly. "
                "Reason: %(reason)s")


class MaxRedirectsExceeded(SearchlightException):
    message = _("Maximum redirects (%(redirects)s) was exceeded.")


class InvalidRedirect(SearchlightException):
    message = _("Received invalid HTTP redirect.")


class NoServiceEndpoint(SearchlightException):
    message = _("Response from Keystone does not contain a Glance endpoint.")


class RegionAmbiguity(SearchlightException):
    message = _("Multiple 'image' service matches for region %(region)s. This "
                "generally means that a region is required and you have not "
                "supplied one.")


class WorkerCreationFailure(SearchlightException):
    message = _("Server worker creation failed: %(reason)s.")


class SchemaLoadError(SearchlightException):
    message = _("Unable to load schema: %(reason)s")


class InvalidObject(SearchlightException):
    message = _("Provided object does not match schema "
                "'%(schema)s': %(reason)s")


class UnsupportedHeaderFeature(SearchlightException):
    message = _("Provided header feature is unsupported: %(feature)s")


class SIGHUPInterrupt(SearchlightException):
    message = _("System SIGHUP signal received.")


class RPCError(SearchlightException):
    message = _("%(cls)s exception was raised in the last rpc call: %(val)s")


class DuplicateLocation(Duplicate):
    message = _("The location %(location)s already exists")


class ImageDataNotFound(NotFound):
    message = _("No image data could be found")


class InvalidParameterValue(Invalid):
    message = _("Invalid value '%(value)s' for parameter '%(param)s': "
                "%(extra_msg)s")


class InvalidImageStatusTransition(Invalid):
    message = _("Image status transition from %(cur_status)s to"
                " %(new_status)s is not allowed")


class InvalidVersion(Invalid):
    message = _("Version is invalid: %(reason)s")


class JsonPatchException(SearchlightException):
    message = _("Invalid jsonpatch request")


class InvalidJsonPatchBody(JsonPatchException):
    message = _("The provided body %(body)s is invalid "
                "under given schema: %(schema)s")


class InvalidJsonPatchPath(JsonPatchException):
    message = _("The provided path '%(path)s' is invalid: %(explanation)s")

    def __init__(self, message=None, *args, **kwargs):
        self.explanation = kwargs.get("explanation")
        super(InvalidJsonPatchPath, self).__init__(message, *args, **kwargs)
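The base class above formats its class-level `message` attribute with the constructor's keyword arguments, falling back to the raw message when formatting fails. The pattern in miniature (a hypothetical `BaseError`/`NotFound` pair, not the module's own classes):

```python
class BaseError(Exception):
    """Format a class-level message with keyword arguments, swallowing
    formatting errors so the core message still surfaces."""
    message = "An unknown exception occurred"

    def __init__(self, message=None, **kwargs):
        if not message:
            message = self.message
        try:
            if kwargs:
                message = message % kwargs
        except Exception:
            # fall back to the unformatted message
            pass
        self.msg = message
        super(BaseError, self).__init__(message)


class NotFound(BaseError):
    message = "An object with identifier %(id)s was not found."
```

Subclasses then only declare a `message` template; `raise NotFound(id='abc123')` yields a fully interpolated error string, while a missing or mistyped keyword degrades to the template rather than raising a secondary `KeyError`.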
@@ -0,0 +1,122 @@
# Copyright 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


"""
A mixin that validates the given body for jsonpatch-compatibility.
The methods supported are limited to those listed in ALLOWED.
"""

import re

import jsonschema

import searchlight.common.exception as exc
from searchlight.openstack.common._i18n import _


class JsonPatchValidatorMixin(object):
    # a list of methods allowed according to RFC 6902
    ALLOWED = ["replace", "test", "remove", "add", "copy"]
    PATH_REGEX_COMPILED = re.compile("^/[^/]+(/[^/]+)*$")

    def __init__(self, methods_allowed=["replace", "remove"]):
        self.schema = self._gen_schema(methods_allowed)
        self.methods_allowed = [m for m in methods_allowed
                                if m in self.ALLOWED]

    @staticmethod
    def _gen_schema(methods_allowed):
        """
        Generates a jsonschema for jsonpatch request based on methods_allowed
        """
        # op remove needs no 'value' param, so needs a special schema if
        # present in methods_allowed
        basic_schema = {
            "type": "array",
            "items": {"properties": {"op": {"type": "string",
                                            "enum": methods_allowed},
                                     "path": {"type": "string"},
                                     "value": {"type": ["string",
                                                        "object",
                                                        "integer",
                                                        "array",
                                                        "boolean"]}
                                     },
                      "required": ["op", "path", "value"],
                      "type": "object"},
            "$schema": "http://json-schema.org/draft-04/schema#"
        }
        if "remove" in methods_allowed:
            methods_allowed.remove("remove")
            no_remove_op_schema = {
                "type": "object",
                "properties": {
                    "op": {"type": "string", "enum": methods_allowed},
                    "path": {"type": "string"},
                    "value": {"type": ["string", "object",
                                       "integer", "array", "boolean"]}
                },
                "required": ["op", "path", "value"]}
            op_remove_only_schema = {
                "type": "object",
                "properties": {
                    "op": {"type": "string", "enum": ["remove"]},
                    "path": {"type": "string"}
                },
                "required": ["op", "path"]}

            basic_schema = {
                "type": "array",
                "items": {
                    "oneOf": [no_remove_op_schema, op_remove_only_schema]},
                "$schema": "http://json-schema.org/draft-04/schema#"
            }
        return basic_schema

    def validate_body(self, body):
        try:
            jsonschema.validate(body, self.schema)
            # now make sure everything is ok with path
            return [{"path": self._decode_json_pointer(e["path"]),
                     "value": e.get("value", None),
                     "op": e["op"]} for e in body]
        except jsonschema.ValidationError:
            raise exc.InvalidJsonPatchBody(body=body, schema=self.schema)

    def _check_for_path_errors(self, pointer):
        if not re.match(self.PATH_REGEX_COMPILED, pointer):
            msg = _("Json path should start with a '/', "
                    "end with no '/', no 2 subsequent '/' are allowed.")
            raise exc.InvalidJsonPatchPath(path=pointer, explanation=msg)
        if re.search('~[^01]', pointer) or pointer.endswith('~'):
            msg = _("Pointer contains '~' which is not part of"
                    " a recognized escape sequence [~0, ~1].")
            raise exc.InvalidJsonPatchPath(path=pointer, explanation=msg)

    def _decode_json_pointer(self, pointer):
        """Parses a json pointer. Returns a pointer as a string.

        Json Pointers are defined in
        http://tools.ietf.org/html/draft-pbryan-zyp-json-pointer .
        The pointers use '/' for separation between object attributes.
        A '/' character in an attribute name is encoded as "~1" and
        a '~' character is encoded as "~0".
        """
        self._check_for_path_errors(pointer)
        ret = []
        for part in pointer.lstrip('/').split('/'):
            ret.append(part.replace('~1', '/').replace('~0', '~').strip())
        return '/'.join(ret)
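The pointer decoding above applies the RFC 6901 escape rules: `~1` becomes `/` and then `~0` becomes `~`, in that order, so that `~01` correctly decodes to `~1`. A standalone sketch of the decode step (mirroring `_decode_json_pointer` minus its path validation; names here are illustrative):

```python
import re

# same shape as the mixin's PATH_REGEX_COMPILED: leading '/', no empty segments
PATH_REGEX = re.compile(r"^/[^/]+(/[^/]+)*$")


def decode_json_pointer(pointer):
    """Decode a JSON pointer's escape sequences per RFC 6901:
    '~1' -> '/', then '~0' -> '~', segment by segment."""
    parts = []
    for part in pointer.lstrip('/').split('/'):
        parts.append(part.replace('~1', '/').replace('~0', '~'))
    return '/'.join(parts)
```

So `decode_json_pointer('/a~1b/c~0d')` yields `a/b/c~d`: the escaped slash inside the first segment is restored without being confused with the `/` separators the regex enforces.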
@@ -0,0 +1,203 @@
# Copyright 2013 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from collections import OrderedDict
import ConfigParser
import re

from oslo_config import cfg
from oslo_log import log as logging
from oslo_policy import policy

import searchlight.api.policy
from searchlight.common import exception
from searchlight import i18n

# NOTE(bourke): The default dict_type is collections.OrderedDict in py27, but
# we must set manually for compatibility with py26
CONFIG = ConfigParser.SafeConfigParser(dict_type=OrderedDict)
LOG = logging.getLogger(__name__)
_ = i18n._
_LE = i18n._LE

property_opts = [
    cfg.StrOpt('property_protection_file',
               help=_('The location of the property protection file. '
                      'This file contains the rules for property protections '
                      'and the roles/policies associated with it. If this '
                      'config value is not specified, by default, property '
                      'protections won\'t be enforced. If a value is '
                      'specified and the file is not found, then the '
                      'searchlight-api service will not start.')),
    cfg.StrOpt('property_protection_rule_format',
               default='roles',
               choices=('roles', 'policies'),
               help=_('This config value indicates whether "roles" or '
                      '"policies" are used in the property protection file.')),
]

CONF = cfg.CONF
CONF.register_opts(property_opts)

# NOTE (spredzy): Due to the particularly lengthy name of the exception
# and the number of occurrences it is raised in this file, a variable
# is created
InvalidPropProtectConf = exception.InvalidPropertyProtectionConfiguration


def is_property_protection_enabled():
    if CONF.property_protection_file:
        return True
    return False


class PropertyRules(object):

    def __init__(self, policy_enforcer=None):
        self.rules = []
        self.prop_exp_mapping = {}
        self.policies = []
        self.policy_enforcer = (policy_enforcer or
                                searchlight.api.policy.Enforcer())
        self.prop_prot_rule_format = CONF.property_protection_rule_format
        self.prop_prot_rule_format = self.prop_prot_rule_format.lower()
        self._load_rules()

    def _load_rules(self):
        try:
            conf_file = CONF.find_file(CONF.property_protection_file)
            CONFIG.read(conf_file)
        except Exception as e:
            msg = (_LE("Couldn't find property protection file %(file)s: "
                       "%(error)s.") % {'file': CONF.property_protection_file,
                                        'error': e})
            LOG.error(msg)
            raise InvalidPropProtectConf()

        if self.prop_prot_rule_format not in ['policies', 'roles']:
            msg = _LE("Invalid value '%s' for "
                      "'property_protection_rule_format'. "
                      "The permitted values are "
                      "'roles' and 'policies'") % self.prop_prot_rule_format
            LOG.error(msg)
            raise InvalidPropProtectConf()

        operations = ['create', 'read', 'update', 'delete']
        properties = CONFIG.sections()
        for property_exp in properties:
            property_dict = {}
            compiled_rule = self._compile_rule(property_exp)

            for operation in operations:
                permissions = CONFIG.get(property_exp, operation)
                if permissions:
                    if self.prop_prot_rule_format == 'policies':
                        if ',' in permissions:
                            LOG.error(
                                _LE("Multiple policies '%s' not allowed "
                                    "for a given operation. Policies can be "
                                    "combined in the policy file"),
                                permissions)
                            raise InvalidPropProtectConf()
                        self.prop_exp_mapping[compiled_rule] = property_exp
                        self._add_policy_rules(property_exp, operation,
                                               permissions)
                        permissions = [permissions]
                    else:
                        permissions = [permission.strip() for permission in
                                       permissions.split(',')]
                        if '@' in permissions and '!' in permissions:
                            msg = (_LE(
                                "Malformed property protection rule in "
                                "[%(prop)s] %(op)s=%(perm)s: '@' and '!' "
                                "are mutually exclusive") %
                                dict(prop=property_exp,
                                     op=operation,
                                     perm=permissions))
                            LOG.error(msg)
                            raise InvalidPropProtectConf()
                    property_dict[operation] = permissions
                else:
                    property_dict[operation] = []
                    LOG.warn(
                        _('Property protection on operation %(operation)s'
                          ' for rule %(rule)s is not found. No role will be'
                          ' allowed to perform this operation.') %
                        {'operation': operation,
                         'rule': property_exp})

            self.rules.append((compiled_rule, property_dict))

    def _compile_rule(self, rule):
        try:
            return re.compile(rule)
        except Exception as e:
            msg = (_LE("Encountered a malformed property protection rule"
                       " %(rule)s: %(error)s.") % {'rule': rule,
                                                   'error': e})
            LOG.error(msg)
            raise InvalidPropProtectConf()

    def _add_policy_rules(self, property_exp, action, rule):
        """Add policy rules to the policy enforcer.

        For example, if the file listed as property_protection_file has:
        [prop_a]
        create = searchlight_creator
        then the corresponding policy rule would be:
        "prop_a:create": "rule:searchlight_creator"
        where searchlight_creator is defined in policy.json. For example:
        "searchlight_creator": "role:admin or role:searchlight_create_user"
        """
        rule = "rule:%s" % rule
        rule_name = "%s:%s" % (property_exp, action)
|
||||
rule_dict = policy.Rules.from_dict({
|
||||
rule_name: rule
|
||||
})
|
||||
self.policy_enforcer.add_rules(rule_dict)
|
||||
|
||||
def _check_policy(self, property_exp, action, context):
|
||||
try:
|
||||
action = ":".join([property_exp, action])
|
||||
self.policy_enforcer.enforce(context, action, {})
|
||||
except exception.Forbidden:
|
||||
return False
|
||||
return True
|
||||
|
||||
def check_property_rules(self, property_name, action, context):
|
||||
roles = context.roles
|
||||
if not self.rules:
|
||||
return True
|
||||
|
||||
if action not in ['create', 'read', 'update', 'delete']:
|
||||
return False
|
||||
|
||||
for rule_exp, rule in self.rules:
|
||||
if rule_exp.search(str(property_name)):
|
||||
break
|
||||
else: # no matching rules
|
||||
return False
|
||||
|
||||
rule_roles = rule.get(action)
|
||||
if rule_roles:
|
||||
if '!' in rule_roles:
|
||||
return False
|
||||
elif '@' in rule_roles:
|
||||
return True
|
||||
if self.prop_prot_rule_format == 'policies':
|
||||
prop_exp_key = self.prop_exp_mapping[rule_exp]
|
||||
return self._check_policy(prop_exp_key, action,
|
||||
context)
|
||||
if set(roles).intersection(set(rule_roles)):
|
||||
return True
|
||||
return False
|
|
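The matching logic in `check_property_rules` can be sketched standalone: config section names are treated as regular expressions, and the first rule whose pattern matches the property name decides access, with `!` denying everyone and `@` allowing everyone. The rule data below is hypothetical, not taken from a real property-protection file:

```python
import re

# Hypothetical rules in the same shape PropertyRules builds:
# a list of (compiled_regex, {operation: [roles]}) tuples.
rules = [
    (re.compile('^x_owner_.*'), {'read': ['admin', 'member'],
                                 'create': ['admin'],
                                 'update': ['!'],       # '!' denies everyone
                                 'delete': ['!']}),
    (re.compile('.*'), {'read': ['@'],                  # '@' allows everyone
                        'create': ['admin'],
                        'update': ['admin'],
                        'delete': ['admin']}),
]


def check(property_name, action, roles):
    # The first rule whose regex matches the property name wins.
    for rule_exp, rule in rules:
        if rule_exp.search(str(property_name)):
            break
    else:  # no matching rules
        return False
    rule_roles = rule.get(action)
    if rule_roles:
        if '!' in rule_roles:
            return False
        if '@' in rule_roles:
            return True
        if set(roles) & set(rule_roles):
            return True
    return False
```

Note that rule order matters: a catch-all `.*` section placed first would shadow every more specific rule after it.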
@@ -0,0 +1,278 @@
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
RPC Controller
"""
import datetime
import traceback

from oslo_config import cfg
from oslo_log import log as logging
import oslo_utils.importutils as imp
from oslo_utils import timeutils
import six
from webob import exc

from searchlight.common import client
from searchlight.common import exception
from searchlight.common import utils
from searchlight.common import wsgi
from searchlight import i18n

LOG = logging.getLogger(__name__)
_ = i18n._
_LE = i18n._LE


rpc_opts = [
    # NOTE(flaper87): Shamelessly copied
    # from oslo rpc.
    cfg.ListOpt('allowed_rpc_exception_modules',
                default=['searchlight.common.exception',
                         'exceptions',
                         ],
                help='Modules of exceptions that are permitted to be '
                     'recreated upon receiving exception data from an '
                     'rpc call.'),
]

CONF = cfg.CONF
CONF.register_opts(rpc_opts)


class RPCJSONSerializer(wsgi.JSONResponseSerializer):

    def _sanitizer(self, obj):
        def to_primitive(_type, _value):
            return {"_type": _type, "_value": _value}

        if isinstance(obj, datetime.datetime):
            return to_primitive("datetime", timeutils.strtime(obj))

        return super(RPCJSONSerializer, self)._sanitizer(obj)


class RPCJSONDeserializer(wsgi.JSONRequestDeserializer):

    def _to_datetime(self, obj):
        return timeutils.parse_strtime(obj)

    def _sanitizer(self, obj):
        try:
            _type, _value = obj["_type"], obj["_value"]
            return getattr(self, "_to_" + _type)(_value)
        except (KeyError, AttributeError):
            return obj


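The serializer/deserializer pair implements a simple tagged-primitive convention: datetimes are encoded as `{"_type": "datetime", "_value": ...}` and decoded by dispatching on `"_type"`. A minimal standalone sketch of that round trip (the timestamp format here is an assumption; the real code delegates to oslo's `timeutils`):

```python
import datetime

FORMAT = "%Y-%m-%dT%H:%M:%S.%f"  # assumed wire format


def sanitize(obj):
    # Encode non-JSON-native types as tagged primitives.
    if isinstance(obj, datetime.datetime):
        return {"_type": "datetime", "_value": obj.strftime(FORMAT)}
    return obj


def desanitize(obj):
    # Decode tagged primitives; pass everything else through unchanged.
    try:
        _type, _value = obj["_type"], obj["_value"]
    except (KeyError, TypeError):
        return obj
    if _type == "datetime":
        return datetime.datetime.strptime(_value, FORMAT)
    return obj
```

Dispatching on the tag (here a plain `if`, `getattr(self, "_to_" + _type)` in the real code) keeps the scheme extensible to other non-JSON types.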
class Controller(object):
    """
    Base RPCController.

    This is the base controller for RPC based APIs. Commands
    handled by this controller respect the following form:

        [{
            'command': 'method_name',
            'kwargs': {...}
        }]

    The controller is capable of processing more than one command
    per request and will always return a list of results.

    :param raise_exc: Boolean that specifies whether to raise
                      exceptions instead of "serializing" them.
    """

    def __init__(self, raise_exc=False):
        self._registered = {}
        self.raise_exc = raise_exc

    def register(self, resource, filtered=None, excluded=None, refiner=None):
        """
        Exports methods through the RPC Api.

        :param resource: Resource's instance to register.
        :param filtered: List of methods that *can* be registered. Read
                         as "Method must be in this list".
        :param excluded: List of methods to exclude.
        :param refiner: Callable to use as filter for methods.

        :raises AssertionError: If refiner is not callable.
        """

        funcs = filter(lambda x: not x.startswith("_"), dir(resource))

        if filtered:
            funcs = [f for f in funcs if f in filtered]

        if excluded:
            funcs = [f for f in funcs if f not in excluded]

        if refiner:
            assert callable(refiner), "Refiner must be callable"
            funcs = filter(refiner, funcs)

        for name in funcs:
            meth = getattr(resource, name)

            if not callable(meth):
                continue

            self._registered[name] = meth

    def __call__(self, req, body):
        """
        Executes the command
        """

        if not isinstance(body, list):
            msg = _("Request must be a list of commands")
            raise exc.HTTPBadRequest(explanation=msg)

        def validate(cmd):
            if not isinstance(cmd, dict):
                msg = _("Bad Command: %s") % str(cmd)
                raise exc.HTTPBadRequest(explanation=msg)

            command, kwargs = cmd.get("command"), cmd.get("kwargs")

            if (not command or not isinstance(command, six.string_types) or
                    (kwargs and not isinstance(kwargs, dict))):
                msg = _("Wrong command structure: %s") % (str(cmd))
                raise exc.HTTPBadRequest(explanation=msg)

            method = self._registered.get(command)
            if not method:
                # Just raise 404 if the user tries to
                # access a private method. No need for
                # 403 here since logically the command
                # is not registered to the rpc dispatcher
                raise exc.HTTPNotFound(explanation=_("Command not found"))

            return True

        # If more than one command was sent then they might be
        # intended to be executed sequentially; therefore, let's
        # first verify they're all valid before executing them.
        commands = filter(validate, body)

        results = []
        for cmd in commands:
            # kwargs is not required
            command, kwargs = cmd["command"], cmd.get("kwargs", {})
            method = self._registered[command]
            try:
                result = method(req.context, **kwargs)
            except Exception as e:
                if self.raise_exc:
                    raise

                cls, val = e.__class__, utils.exception_to_str(e)
                msg = (_LE("RPC Call Error: %(val)s\n%(tb)s") %
                       dict(val=val, tb=traceback.format_exc()))
                LOG.error(msg)

                # NOTE(flaper87): Don't propagate all exceptions
                # but the ones allowed by the user.
                module = cls.__module__
                if module not in CONF.allowed_rpc_exception_modules:
                    cls = exception.RPCError
                    val = six.text_type(exception.RPCError(cls=cls, val=val))

                cls_path = "%s.%s" % (cls.__module__, cls.__name__)
                result = {"_error": {"cls": cls_path, "val": val}}
            results.append(result)
        return results


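A minimal, framework-free sketch of the dispatch shape the controller implements: public callables of a resource are registered by name, then each command in the request list is looked up and invoked, with results collected in order. The `Calculator` resource and its methods are illustrative, not from the source:

```python
class MiniDispatcher(object):
    def __init__(self):
        self._registered = {}

    def register(self, resource):
        # Export every public callable of the resource, as the real
        # Controller.register does (minus the filter/exclude/refiner knobs).
        for name in dir(resource):
            if name.startswith("_"):
                continue
            meth = getattr(resource, name)
            if callable(meth):
                self._registered[name] = meth

    def __call__(self, body):
        if not isinstance(body, list):
            raise ValueError("Request must be a list of commands")
        results = []
        for cmd in body:
            command, kwargs = cmd["command"], cmd.get("kwargs", {})
            method = self._registered[command]
            results.append(method(**kwargs))
        return results


class Calculator(object):
    def add(self, a, b):
        return a + b

    def _secret(self):
        return "hidden"
```

With this sketch, `MiniDispatcher()` registered against a `Calculator()` instance maps `[{'command': 'add', 'kwargs': {'a': 1, 'b': 2}}]` to `[3]`, and `_secret` is never exported.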
class RPCClient(client.BaseClient):

    def __init__(self, *args, **kwargs):
        self._serializer = RPCJSONSerializer()
        self._deserializer = RPCJSONDeserializer()

        self.raise_exc = kwargs.pop("raise_exc", True)
        self.base_path = kwargs.pop("base_path", '/rpc')
        super(RPCClient, self).__init__(*args, **kwargs)

    @client.handle_unauthenticated
    def bulk_request(self, commands):
        """
        Execute multiple commands in a single request.

        :param commands: List of commands to send. Commands
                         must respect the following form:

            {
                'command': 'method_name',
                'kwargs': method_kwargs
            }
        """
        body = self._serializer.to_json(commands)
        response = super(RPCClient, self).do_request('POST',
                                                     self.base_path,
                                                     body)
        return self._deserializer.from_json(response.read())

    def do_request(self, method, **kwargs):
        """
        Simple do_request override. This method serializes
        the outgoing body and builds the command that will
        be sent.

        :param method: The remote python method to call
        :param kwargs: Dynamic parameters that will be
                       passed to the remote method.
        """
        content = self.bulk_request([{'command': method,
                                      'kwargs': kwargs}])

        # NOTE(flaper87): Return the first result if
        # a single command was executed.
        content = content[0]

        # NOTE(flaper87): Check if content is an error
        # and re-raise it if raise_exc is True. Before
        # checking if content contains the '_error' key,
        # verify if it is an instance of dict - since the
        # RPC call may have returned something different.
        if self.raise_exc and (isinstance(content, dict)
                               and '_error' in content):
            error = content['_error']
            try:
                exc_cls = imp.import_class(error['cls'])
                raise exc_cls(error['val'])
            except ImportError:
                # NOTE(flaper87): The exception
                # class couldn't be imported, using
                # a generic exception.
                raise exception.RPCError(**error)
        return content

    def __getattr__(self, item):
        """
        This method returns a method_proxy that
        will execute the rpc call in the registry
        service.
        """
        if item.startswith('_'):
            raise AttributeError(item)

        def method_proxy(**kw):
            return self.do_request(item, **kw)

        return method_proxy
@@ -0,0 +1,739 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2014 SoftLayer Technologies, Inc.
# Copyright 2015 Mirantis, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
System-level utilities and helper functions.
"""

import errno

try:
    from eventlet import sleep
except ImportError:
    from time import sleep
from eventlet.green import socket

import functools
import os
import platform
import re
import stevedore
import subprocess
import sys
import uuid

from OpenSSL import crypto
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
from oslo_utils import excutils
from oslo_utils import netutils
from oslo_utils import strutils
import six
from webob import exc

from searchlight.common import exception
from searchlight import i18n

CONF = cfg.CONF

LOG = logging.getLogger(__name__)
_ = i18n._
_LE = i18n._LE

FEATURE_BLACKLIST = ['content-length', 'content-type', 'x-image-meta-size']

# Whitelist of v1 API headers of form x-image-meta-xxx
IMAGE_META_HEADERS = ['x-image-meta-location', 'x-image-meta-size',
                      'x-image-meta-is_public', 'x-image-meta-disk_format',
                      'x-image-meta-container_format', 'x-image-meta-name',
                      'x-image-meta-status', 'x-image-meta-copy_from',
                      'x-image-meta-uri', 'x-image-meta-checksum',
                      'x-image-meta-created_at', 'x-image-meta-updated_at',
                      'x-image-meta-deleted_at', 'x-image-meta-min_ram',
                      'x-image-meta-min_disk', 'x-image-meta-owner',
                      'x-image-meta-store', 'x-image-meta-id',
                      'x-image-meta-protected', 'x-image-meta-deleted',
                      'x-image-meta-virtual_size']

GLANCE_TEST_SOCKET_FD_STR = 'GLANCE_TEST_SOCKET_FD'


def chunkreadable(iter, chunk_size=65536):
    """
    Wrap a readable iterator with a reader yielding chunks of
    a preferred size, otherwise leave iterator unchanged.

    :param iter: an iter which may also be readable
    :param chunk_size: maximum size of chunk
    """
    return chunkiter(iter, chunk_size) if hasattr(iter, 'read') else iter


def chunkiter(fp, chunk_size=65536):
    """
    Return an iterator to a file-like obj which yields fixed size chunks

    :param fp: a file-like object
    :param chunk_size: maximum size of chunk
    """
    while True:
        chunk = fp.read(chunk_size)
        if chunk:
            yield chunk
        else:
            break


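`chunkiter` works with any file-like object; for example, a 10-byte buffer read in 4-byte chunks yields three chunks, with the last one short:

```python
import io


def chunkiter(fp, chunk_size=65536):
    # Yield fixed-size chunks until the file-like object is exhausted.
    while True:
        chunk = fp.read(chunk_size)
        if chunk:
            yield chunk
        else:
            break


chunks = list(chunkiter(io.BytesIO(b"aaaaaaaaaa"), chunk_size=4))
# chunks == [b"aaaa", b"aaaa", b"aa"]
```

This is the shape `chunkreadable` relies on: anything with a `read()` method gets wrapped this way, while plain iterators pass through untouched.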
def cooperative_iter(iter):
    """
    Return an iterator which schedules after each
    iteration. This can prevent eventlet thread starvation.

    :param iter: an iterator to wrap
    """
    try:
        for chunk in iter:
            sleep(0)
            yield chunk
    except Exception as err:
        with excutils.save_and_reraise_exception():
            msg = _LE("Error: cooperative_iter exception %s") % err
            LOG.error(msg)


def cooperative_read(fd):
    """
    Wrap a file descriptor's read with a partial function which schedules
    after each read. This can prevent eventlet thread starvation.

    :param fd: a file descriptor to wrap
    """
    def readfn(*args):
        result = fd.read(*args)
        sleep(0)
        return result
    return readfn


MAX_COOP_READER_BUFFER_SIZE = 134217728  # 128M seems like a sane buffer limit


class CooperativeReader(object):
    """
    An eventlet thread friendly class for reading in image data.

    When accessing data either through the iterator or the read method
    we perform a sleep to allow a co-operative yield. When there is more than
    one image being uploaded/downloaded this prevents eventlet thread
    starvation, i.e. it allows all threads to be scheduled periodically rather
    than having the same thread be continuously active.
    """
    def __init__(self, fd):
        """
        :param fd: Underlying image file object
        """
        self.fd = fd
        self.iterator = None
        # NOTE(markwash): if the underlying supports read(), overwrite the
        # default iterator-based implementation with cooperative_read which
        # is more straightforward
        if hasattr(fd, 'read'):
            self.read = cooperative_read(fd)
        else:
            self.iterator = None
            self.buffer = ''
            self.position = 0

    def read(self, length=None):
        """Return the requested amount of bytes, fetching the next chunk of
        the underlying iterator when needed.

        This is replaced with cooperative_read in __init__ if the underlying
        fd already supports read().
        """
        if length is None:
            if len(self.buffer) - self.position > 0:
                # if no length specified but some data exists in buffer,
                # return that data and clear the buffer
                result = self.buffer[self.position:]
                self.buffer = ''
                self.position = 0
                return str(result)
            else:
                # otherwise read the next chunk from the underlying iterator
                # and return it as a whole. Reset the buffer, as subsequent
                # calls may specify the length
                try:
                    if self.iterator is None:
                        self.iterator = self.__iter__()
                    return self.iterator.next()
                except StopIteration:
                    return ''
                finally:
                    self.buffer = ''
                    self.position = 0
        else:
            result = bytearray()
            while len(result) < length:
                if self.position < len(self.buffer):
                    to_read = length - len(result)
                    chunk = self.buffer[self.position:self.position + to_read]
                    result.extend(chunk)

                    # This check is here to prevent potential OOM issues if
                    # this code is called with unreasonably high values of
                    # read size. Currently it is only called from the HTTP
                    # clients of Glance backend stores, which use httplib for
                    # data streaming, which has readsize hardcoded to 8K, so
                    # this check should never fire. Regardless, it is still
                    # worth making the check, as the code may be reused
                    # somewhere else.
                    if len(result) >= MAX_COOP_READER_BUFFER_SIZE:
                        raise exception.LimitExceeded()
                    self.position += len(chunk)
                else:
                    try:
                        if self.iterator is None:
                            self.iterator = self.__iter__()
                        self.buffer = self.iterator.next()
                        self.position = 0
                    except StopIteration:
                        self.buffer = ''
                        self.position = 0
                        return str(result)
            return str(result)

    def __iter__(self):
        return cooperative_iter(self.fd.__iter__())


class LimitingReader(object):
    """
    Reader designed to fail when reading image data past the configured
    allowable amount.
    """
    def __init__(self, data, limit):
        """
        :param data: Underlying image data object
        :param limit: maximum number of bytes the reader should allow
        """
        self.data = data
        self.limit = limit
        self.bytes_read = 0

    def __iter__(self):
        for chunk in self.data:
            self.bytes_read += len(chunk)
            if self.bytes_read > self.limit:
                raise exception.ImageSizeLimitExceeded()
            else:
                yield chunk

    def read(self, i):
        result = self.data.read(i)
        self.bytes_read += len(result)
        if self.bytes_read > self.limit:
            raise exception.ImageSizeLimitExceeded()
        return result


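The limit check above counts bytes as they stream past and aborts as soon as the running total exceeds the cap. A standalone sketch of the iterator path (plain `ValueError` stands in for `exception.ImageSizeLimitExceeded` here):

```python
class LimitingReader(object):
    """Iterate over underlying data, failing once `limit` bytes are passed."""

    def __init__(self, data, limit):
        self.data = data
        self.limit = limit
        self.bytes_read = 0

    def __iter__(self):
        for chunk in self.data:
            self.bytes_read += len(chunk)
            if self.bytes_read > self.limit:
                # The real code raises exception.ImageSizeLimitExceeded.
                raise ValueError("limit exceeded")
            yield chunk


reader = LimitingReader(iter([b"aaaa", b"bbbb"]), limit=6)
try:
    consumed = list(reader)
except ValueError:
    consumed = "too big"
# consumed == "too big": the second 4-byte chunk pushes past the 6-byte limit
```

Because the check happens per chunk, at most one chunk beyond the limit is ever read from the source before the error is raised.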
def image_meta_to_http_headers(image_meta):
    """
    Converts a mapping of image metadata into a dict
    of HTTP headers that can be fed to either a Webob
    Request object or an httplib.HTTP(S)Connection object

    :param image_meta: Mapping of image metadata
    """
    headers = {}
    for k, v in image_meta.items():
        if v is not None:
            if k == 'properties':
                for pk, pv in v.items():
                    if pv is not None:
                        headers["x-image-meta-property-%s"
                                % pk.lower()] = six.text_type(pv)
            else:
                headers["x-image-meta-%s" % k.lower()] = six.text_type(v)
    return headers


def get_image_meta_from_headers(response):
    """
    Processes HTTP headers from a supplied response that
    match the x-image-meta and x-image-meta-property
    patterns and returns a mapping of image metadata and
    properties

    :param response: Response to process
    """
    result = {}
    properties = {}

    if hasattr(response, 'getheaders'):  # httplib.HTTPResponse
        headers = response.getheaders()
    else:  # webob.Response
        headers = response.headers.items()

    for key, value in headers:
        key = str(key.lower())
        if key.startswith('x-image-meta-property-'):
            field_name = key[len('x-image-meta-property-'):].replace('-', '_')
            properties[field_name] = value or None
        elif key.startswith('x-image-meta-'):
            field_name = key[len('x-image-meta-'):].replace('-', '_')
            if 'x-image-meta-' + field_name not in IMAGE_META_HEADERS:
                msg = _("Bad header: %(header_name)s") % {'header_name': key}
                raise exc.HTTPBadRequest(msg, content_type="text/plain")
            result[field_name] = value or None
    result['properties'] = properties

    for key, nullable in [('size', False), ('min_disk', False),
                          ('min_ram', False), ('virtual_size', True)]:
        if key in result:
            try:
                result[key] = int(result[key])
            except ValueError:
                if nullable and result[key] == str(None):
                    result[key] = None
                else:
                    extra = (_("Cannot convert image %(key)s '%(value)s' "
                               "to an integer.")
                             % {'key': key, 'value': result[key]})
                    raise exception.InvalidParameterValue(value=result[key],
                                                          param=key,
                                                          extra_msg=extra)
            if result[key] is not None and result[key] < 0:
                extra = (_("Image %(key)s must be >= 0 "
                           "('%(value)s' specified).")
                         % {'key': key, 'value': result[key]})
                raise exception.InvalidParameterValue(value=result[key],
                                                      param=key,
                                                      extra_msg=extra)

    for key in ('is_public', 'deleted', 'protected'):
        if key in result:
            result[key] = strutils.bool_from_string(result[key])
    return result


def create_mashup_dict(image_meta):
    """
    Returns a dictionary-like mashup of the image core properties
    and the image custom properties from given image metadata.

    :param image_meta: metadata of image with core and custom properties
    """

    def get_items():
        for key, value in six.iteritems(image_meta):
            if isinstance(value, dict):
                for subkey, subvalue in six.iteritems(
                        create_mashup_dict(value)):
                    if subkey not in image_meta:
                        yield subkey, subvalue
            else:
                yield key, value

    return dict(get_items())


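The header prefix stripping in `get_image_meta_from_headers` can be sketched in isolation: `x-image-meta-property-*` keys feed a nested `properties` dict, other `x-image-meta-*` keys become top-level fields, and hyphens become underscores (the whitelist check and integer/boolean coercion are omitted here for brevity):

```python
def meta_from_headers(headers):
    # headers: iterable of (key, value) pairs, as from an HTTP response.
    result = {}
    properties = {}
    for key, value in headers:
        key = key.lower()
        if key.startswith('x-image-meta-property-'):
            field = key[len('x-image-meta-property-'):].replace('-', '_')
            properties[field] = value or None
        elif key.startswith('x-image-meta-'):
            field = key[len('x-image-meta-'):].replace('-', '_')
            result[field] = value or None
    result['properties'] = properties
    return result


meta = meta_from_headers([('x-image-meta-name', 'cirros'),
                          ('x-image-meta-property-kernel-id', 'abc'),
                          ('x-image-meta-size', '42')])
# meta == {'name': 'cirros', 'size': '42',
#          'properties': {'kernel_id': 'abc'}}
```

This is the inverse of `image_meta_to_http_headers`, which lowercases keys and re-adds the same prefixes on the way out.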
def safe_mkdirs(path):
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise


def safe_remove(path):
    try:
        os.remove(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise


class PrettyTable(object):
    """Creates an ASCII art table for use in bin/searchlight

    Example:

        ID  Name              Size         Hits
        --- ----------------- ------------ -----
        122 image                       22     0
    """
    def __init__(self):
        self.columns = []

    def add_column(self, width, label="", just='l'):
        """Add a column to the table

        :param width: number of characters wide the column should be
        :param label: column heading
        :param just: justification for the column, 'l' for left,
                     'r' for right
        """
        self.columns.append((width, label, just))

    def make_header(self):
        label_parts = []
        break_parts = []
        for width, label, _ in self.columns:
            # NOTE(sirp): headers are always left justified
            label_part = self._clip_and_justify(label, width, 'l')
            label_parts.append(label_part)

            break_part = '-' * width
            break_parts.append(break_part)

        label_line = ' '.join(label_parts)
        break_line = ' '.join(break_parts)
        return '\n'.join([label_line, break_line])

    def make_row(self, *args):
        row = args
        row_parts = []
        for data, (width, _, just) in zip(row, self.columns):
            row_part = self._clip_and_justify(data, width, just)
            row_parts.append(row_part)

        row_line = ' '.join(row_parts)
        return row_line

    @staticmethod
    def _clip_and_justify(data, width, just):
        # clip field to column width
        clipped_data = str(data)[:width]

        if just == 'r':
            # right justify
            justified = clipped_data.rjust(width)
        else:
            # left justify
            justified = clipped_data.ljust(width)

        return justified


def get_terminal_size():

    def _get_terminal_size_posix():
        import fcntl
        import struct
        import termios

        height_width = None

        try:
            height_width = struct.unpack(
                'hh', fcntl.ioctl(sys.stderr.fileno(),
                                  termios.TIOCGWINSZ,
                                  struct.pack('HH', 0, 0)))
        except Exception:
            pass

        if not height_width:
            try:
                p = subprocess.Popen(['stty', 'size'],
                                     shell=False,
                                     stdout=subprocess.PIPE,
                                     stderr=open(os.devnull, 'w'))
                result = p.communicate()
                if p.returncode == 0:
                    return tuple(int(x) for x in result[0].split())
            except Exception:
                pass

        return height_width

    def _get_terminal_size_win32():
        try:
            from ctypes import create_string_buffer
            from ctypes import windll
            handle = windll.kernel32.GetStdHandle(-12)
            csbi = create_string_buffer(22)
            res = windll.kernel32.GetConsoleScreenBufferInfo(handle, csbi)
        except Exception:
            return None
        if res:
            import struct
            unpack_tmp = struct.unpack("hhhhHhhhhhh", csbi.raw)
            (bufx, bufy, curx, cury, wattr,
             left, top, right, bottom, maxx, maxy) = unpack_tmp
            height = bottom - top + 1
            width = right - left + 1
            return (height, width)
        else:
            return None

    def _get_terminal_size_unknownOS():
        raise NotImplementedError

    func = {'posix': _get_terminal_size_posix,
            'win32': _get_terminal_size_win32}

    height_width = func.get(platform.os.name, _get_terminal_size_unknownOS)()

    if height_width is None:
        raise exception.Invalid()

    for i in height_width:
        if not isinstance(i, int) or i <= 0:
            raise exception.Invalid()

    return height_width[0], height_width[1]


def mutating(func):
    """Decorator to enforce read-only logic"""
    @functools.wraps(func)
    def wrapped(self, req, *args, **kwargs):
        if req.context.read_only:
            msg = "Read-only access"
            LOG.debug(msg)
            raise exc.HTTPForbidden(msg, request=req,
                                    content_type="text/plain")
        return func(self, req, *args, **kwargs)
    return wrapped


def setup_remote_pydev_debug(host, port):
|
||||
error_msg = _LE('Error setting up the debug environment. Verify that the'
|
||||
' option pydev_worker_debug_host is pointing to a valid '
|
||||
'hostname or IP on which a pydev server is listening on'
|
||||
' the port indicated by pydev_worker_debug_port.')
|
||||
|
||||
try:
|
||||
try:
|
||||
from pydev import pydevd
|
||||
except ImportError:
|
||||
import pydevd
|
||||
|
||||
pydevd.settrace(host,
|
||||
port=port,
|
||||
stdoutToServer=True,
|
||||
stderrToServer=True)
|
||||
return True
|
||||
except Exception:
|
||||
with excutils.save_and_reraise_exception():
|
||||
LOG.exception(error_msg)
|
||||
|
||||
|
||||
def validate_key_cert(key_file, cert_file):
|
||||
try:
|
||||
error_key_name = "private key"
|
||||
error_filename = key_file
|
||||
with open(key_file, 'r') as keyfile:
|
||||
key_str = keyfile.read()
|
||||
key = crypto.load_privatekey(crypto.FILETYPE_PEM, key_str)
|
||||
|
||||
error_key_name = "certificate"
|
||||
error_filename = cert_file
|
||||
with open(cert_file, 'r') as certfile:
|
||||
cert_str = certfile.read()
|
||||
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_str)
|
||||
except IOError as ioe:
|
||||
raise RuntimeError(_("There is a problem with your %(error_key_name)s "
|
||||
"%(error_filename)s. Please verify it."
|
||||
" Error: %(ioe)s") %
|
||||
{'error_key_name': error_key_name,
|
||||
'error_filename': error_filename,
|
||||
'ioe': ioe})
|
||||
except crypto.Error as ce:
|
||||
raise RuntimeError(_("There is a problem with your %(error_key_name)s "
|
||||
"%(error_filename)s. Please verify it. OpenSSL"
|
||||
" error: %(ce)s") %
|
||||
{'error_key_name': error_key_name,
|
||||
'error_filename': error_filename,
|
||||
'ce': ce})
|
||||
|
||||
try:
|
||||
data = str(uuid.uuid4())
|
||||
digest = CONF.digest_algorithm
|
||||
if digest == 'sha1':
|
||||
LOG.warn('The FIPS (FEDERAL INFORMATION PROCESSING STANDARDS)'
|
||||
' state that the SHA-1 is not suitable for'
|
||||
' general-purpose digital signature applications (as'
|
||||
' specified in FIPS 186-3) that require 112 bits of'
|
||||
' security. The default value is sha1 in Kilo for a'
|
||||
' smooth upgrade process, and it will be updated'
|
||||
' with sha256 in next release(L).')
|
||||
out = crypto.sign(key, data, digest)
|
||||
crypto.verify(cert, out, data, digest)
|
||||
except crypto.Error as ce:
|
||||
raise RuntimeError(_("There is a problem with your key pair. "
|
||||
"Please verify that cert %(cert_file)s and "
|
||||
"key %(key_file)s belong together. OpenSSL "
|
||||
"error %(ce)s") % {'cert_file': cert_file,
|
||||
'key_file': key_file,
|
||||
'ce': ce})
|
||||
|
||||
|
||||
def get_test_suite_socket():
    global GLANCE_TEST_SOCKET_FD_STR
    if GLANCE_TEST_SOCKET_FD_STR in os.environ:
        fd = int(os.environ[GLANCE_TEST_SOCKET_FD_STR])
        sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)
        sock = socket.SocketType(_sock=sock)
        sock.listen(CONF.backlog)
        del os.environ[GLANCE_TEST_SOCKET_FD_STR]
        os.close(fd)
        return sock
    return None


def is_uuid_like(val):
    """Returns validation of a value as a UUID.

    For our purposes, a UUID is a canonical form string:
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
    """
    try:
        return str(uuid.UUID(val)) == val
    except (TypeError, ValueError, AttributeError):
        return False
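# Illustrative comment (not part of the original module): only the canonical
# lowercase hyphenated form passes, since str(uuid.UUID(...)) lower-cases
# before the equality check.
#
#     >>> is_uuid_like('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa')
#     True
#     >>> is_uuid_like('AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA')
#     False
#     >>> is_uuid_like(42)   # TypeError from uuid.UUID is caught
#     False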


def is_valid_hostname(hostname):
    """Verify whether a hostname (not an FQDN) is valid."""
    return re.match('^[a-zA-Z0-9-]+$', hostname) is not None


def is_valid_fqdn(fqdn):
    """Verify whether a host is a valid FQDN."""
    return re.match(r'^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$', fqdn) is not None


def parse_valid_host_port(host_port):
    """
    Given a "host:port" string, attempts to parse it as intelligently as
    possible to determine if it is valid. This includes IPv6 [host]:port form,
    IPv4 ip:port form, and hostname:port or fqdn:port form.

    Invalid inputs will raise a ValueError, while valid inputs will return
    a (host, port) tuple where the port will always be of type int.
    """

    try:
        try:
            host, port = netutils.parse_host_port(host_port)
        except Exception:
            raise ValueError(_('Host and port "%s" is not valid.') % host_port)

        if not netutils.is_valid_port(port):
            raise ValueError(_('Port "%s" is not valid.') % port)

        # First check for valid IPv6 and IPv4 addresses, then a generic
        # hostname. Failing those, if the host includes a period, then this
        # should pass a very generic FQDN check. The FQDN check for letters at
        # the tail end will weed out any hilariously absurd IPv4 addresses.

        if not (netutils.is_valid_ipv6(host) or netutils.is_valid_ipv4(host) or
                is_valid_hostname(host) or is_valid_fqdn(host)):
            raise ValueError(_('Host "%s" is not valid.') % host)

    except Exception as ex:
        raise ValueError(_('%s '
                           'Please specify a host:port pair, where host is an '
                           'IPv4 address, IPv6 address, hostname, or FQDN. If '
                           'using an IPv6 address, enclose it in brackets '
                           'separately from the port (i.e., '
                           '"[fe80::a:b:c]:9876").') % ex)

    return (host, int(port))
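# Illustrative comment (not part of the original module) showing the accepted
# forms; the expected results assume oslo.utils' netutils.parse_host_port
# strips the brackets from an IPv6 literal:
#
#     parse_valid_host_port('127.0.0.1:9292')      # -> ('127.0.0.1', 9292)
#     parse_valid_host_port('[fe80::a:b:c]:9876')  # -> ('fe80::a:b:c', 9876)
#     parse_valid_host_port('glance:not-a-port')   # raises ValueError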


def exception_to_str(exc):
    try:
        error = six.text_type(exc)
    except UnicodeError:
        try:
            error = str(exc)
        except UnicodeError:
            error = ("Caught '%(exception)s' exception." %
                     {"exception": exc.__class__.__name__})
    return encodeutils.safe_encode(error, errors='ignore')


try:
    REGEX_4BYTE_UNICODE = re.compile(u'[\U00010000-\U0010ffff]')
except re.error:
    # UCS-2 build case
    REGEX_4BYTE_UNICODE = re.compile(u'[\uD800-\uDBFF][\uDC00-\uDFFF]')


def no_4byte_params(f):
    """
    Check that no 4-byte unicode characters appear in
    dict keys/values or in string parameters.
    """
    def wrapper(*args, **kwargs):

        def _is_match(some_str):
            return (isinstance(some_str, unicode) and
                    REGEX_4BYTE_UNICODE.findall(some_str) != [])

        def _check_dict(data_dict):
            # a dict of dicts has to be checked recursively
            for key, value in data_dict.iteritems():
                if isinstance(value, dict):
                    _check_dict(value)
                else:
                    if _is_match(key):
                        msg = _("Property names can't contain 4 byte unicode.")
                        raise exception.Invalid(msg)
                    if _is_match(value):
                        msg = (_("%s can't contain 4 byte unicode characters.")
                               % key.title())
                        raise exception.Invalid(msg)

        for data_dict in [arg for arg in args if isinstance(arg, dict)]:
            _check_dict(data_dict)
        # now check args for str values
        for arg in args:
            if _is_match(arg):
                msg = _("Param values can't contain 4 byte unicode.")
                raise exception.Invalid(msg)
        # check kwargs as well, as params are passed as kwargs via
        # registry calls
        _check_dict(kwargs)
        return f(*args, **kwargs)
    return wrapper
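# Typical usage (illustrative comment, not in the original module): decorate a
# registry-facing call so 4-byte unicode in names or values is rejected before
# it reaches a MySQL utf8 (3-byte) column. The function below is hypothetical:
#
#     @no_4byte_params
#     def update_image_metadata(image_id, **properties):
#         ...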


def stash_conf_values():
    """
    Make a copy of some of the current global CONF's settings.
    Allows determining if any of these values have changed
    when the config is reloaded.
    """
    conf = {}
    conf['bind_host'] = CONF.bind_host
    conf['bind_port'] = CONF.bind_port
    conf['tcp_keepidle'] = CONF.tcp_keepidle
    conf['backlog'] = CONF.backlog
    conf['key_file'] = CONF.key_file
    conf['cert_file'] = CONF.cert_file

    return conf


def get_search_plugins():
    namespace = 'searchlight.index_backend'
    ext_manager = stevedore.extension.ExtensionManager(
        namespace, invoke_on_load=True)
    return ext_manager.extensions
@ -0,0 +1,901 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2010 OpenStack Foundation
# Copyright 2014 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Utility methods for working with WSGI servers
"""
from __future__ import print_function

import errno
import functools
import os
import signal
import sys
import time

import eventlet
from eventlet.green import socket
from eventlet.green import ssl
import eventlet.greenio
import eventlet.wsgi
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_log import loggers
from oslo_serialization import jsonutils
import routes
import routes.middleware
import six
import webob.dec
import webob.exc
from webob import multidict

from searchlight.common import exception
from searchlight.common import utils
from searchlight import i18n


_ = i18n._
_LE = i18n._LE
_LI = i18n._LI
_LW = i18n._LW

bind_opts = [
    cfg.StrOpt('bind_host', default='0.0.0.0',
               help=_('Address to bind the server. Useful when '
                      'selecting a particular network interface.')),
    cfg.IntOpt('bind_port',
               help=_('The port on which the server will listen.')),
]

socket_opts = [
    cfg.IntOpt('backlog', default=4096,
               help=_('The backlog value that will be used when creating the '
                      'TCP listener socket.')),
    cfg.IntOpt('tcp_keepidle', default=600,
               help=_('The value for the socket option TCP_KEEPIDLE. This is '
                      'the time in seconds that the connection must be idle '
                      'before TCP starts sending keepalive probes.')),
    cfg.StrOpt('ca_file', help=_('CA certificate file to use to verify '
                                 'connecting clients.')),
    cfg.StrOpt('cert_file', help=_('Certificate file to use when starting API '
                                   'server securely.')),
    cfg.StrOpt('key_file', help=_('Private key file to use when starting API '
                                  'server securely.')),
]

eventlet_opts = [
    cfg.IntOpt('workers', default=processutils.get_worker_count(),
               help=_('The number of child process workers that will be '
                      'created to service requests. The default will be '
                      'equal to the number of CPUs available.')),
    cfg.IntOpt('max_header_line', default=16384,
               help=_('Maximum line size of message headers to be accepted. '
                      'max_header_line may need to be increased when using '
                      'large tokens (typically those generated by the '
                      'Keystone v3 API with big service catalogs).')),
    cfg.BoolOpt('http_keepalive', default=True,
                help=_('If False, the server will return the header '
                       '"Connection: close". If True, the server will return '
                       '"Connection: Keep-Alive" in its responses. To close '
                       'the client socket connection explicitly after the '
                       'response is sent and read successfully by the client, '
                       'simply set this option to False when you create a '
                       'wsgi server.')),
]

profiler_opts = [
    cfg.BoolOpt("enabled", default=False,
                help=_('If False, fully disable the profiling feature.')),
    cfg.BoolOpt("trace_sqlalchemy", default=False,
                help=_("If False, don't trace SQL requests."))
]


LOG = logging.getLogger(__name__)

CONF = cfg.CONF
CONF.register_opts(bind_opts)
CONF.register_opts(socket_opts)
CONF.register_opts(eventlet_opts)
CONF.register_opts(profiler_opts, group="profiler")

ASYNC_EVENTLET_THREAD_POOL_LIST = []


def get_bind_addr(default_port=None):
    """Return the host and port to bind to."""
    return (CONF.bind_host, CONF.bind_port or default_port)


def ssl_wrap_socket(sock):
    """
    Wrap an existing socket in SSL

    :param sock: non-SSL socket to wrap

    :returns: An SSL wrapped socket
    """
    utils.validate_key_cert(CONF.key_file, CONF.cert_file)

    ssl_kwargs = {
        'server_side': True,
        'certfile': CONF.cert_file,
        'keyfile': CONF.key_file,
        'cert_reqs': ssl.CERT_NONE,
    }

    if CONF.ca_file:
        ssl_kwargs['ca_certs'] = CONF.ca_file
        ssl_kwargs['cert_reqs'] = ssl.CERT_REQUIRED

    return ssl.wrap_socket(sock, **ssl_kwargs)


def get_socket(default_port):
    """
    Bind socket to bind ip:port in conf

    note: Mostly comes from Swift with a few small changes...

    :param default_port: port to bind to if none is specified in conf

    :returns: a socket object as returned from socket.listen or
              ssl.wrap_socket if conf specifies cert_file
    """
    bind_addr = get_bind_addr(default_port)

    # TODO(jaypipes): eventlet's greened socket module does not actually
    # support IPv6 in getaddrinfo(). We need to get around this in the
    # future or monitor upstream for a fix
    address_family = [
        addr[0] for addr in socket.getaddrinfo(bind_addr[0],
                                               bind_addr[1],
                                               socket.AF_UNSPEC,
                                               socket.SOCK_STREAM)
        if addr[0] in (socket.AF_INET, socket.AF_INET6)
    ][0]

    use_ssl = CONF.key_file or CONF.cert_file
    if use_ssl and (not CONF.key_file or not CONF.cert_file):
        raise RuntimeError(_("When running server in SSL mode, you must "
                             "specify both a cert_file and key_file "
                             "option value in your configuration file"))

    sock = utils.get_test_suite_socket()
    retry_until = time.time() + 30

    while not sock and time.time() < retry_until:
        try:
            sock = eventlet.listen(bind_addr,
                                   backlog=CONF.backlog,
                                   family=address_family)
        except socket.error as err:
            if err.args[0] != errno.EADDRINUSE:
                raise
            eventlet.sleep(0.1)
    if not sock:
        raise RuntimeError(_("Could not bind to %(host)s:%(port)s after"
                             " trying for 30 seconds") %
                           {'host': bind_addr[0],
                            'port': bind_addr[1]})

    return sock


def set_eventlet_hub():
    try:
        eventlet.hubs.use_hub('poll')
    except Exception:
        try:
            eventlet.hubs.use_hub('selects')
        except Exception:
            msg = _("Neither the eventlet 'poll' hub nor the 'selects' hub "
                    "is available on this platform")
            raise exception.WorkerCreationFailure(
                reason=msg)


def get_asynchronous_eventlet_pool(size=1000):
    """Return eventlet pool to caller.

    Also store pools created in global list, to wait on
    it after getting signal for graceful shutdown.

    :param size: eventlet pool size
    :returns: eventlet pool
    """
    global ASYNC_EVENTLET_THREAD_POOL_LIST

    pool = eventlet.GreenPool(size=size)
    # Add pool to global ASYNC_EVENTLET_THREAD_POOL_LIST
    ASYNC_EVENTLET_THREAD_POOL_LIST.append(pool)

    return pool


class Server(object):
    """Server class to manage multiple WSGI sockets and applications."""

    def __init__(self, threads=1000):
        os.umask(0o27)  # ensure files are created with the correct privileges
        self._logger = logging.getLogger("eventlet.wsgi.server")
        self._wsgi_logger = loggers.WritableLogger(self._logger)
        self.threads = threads
        self.children = set()
        self.stale_children = set()
        self.running = True
        self.pgid = os.getpid()
        try:
            # NOTE(flaper87): Make sure this process
            # runs in its own process group.
            os.setpgid(self.pgid, self.pgid)
        except OSError:
            # NOTE(flaper87): When running searchlight-control,
            # (searchlight's functional tests, for example)
            # setpgid fails with EPERM as searchlight-control
            # creates a fresh session, of which the newly
            # launched service becomes the leader (session
            # leaders may not change process groups)
            #
            # Running searchlight-api is safe and
            # shouldn't raise any error here.
            self.pgid = 0

    def hup(self, *args):
        """
        Reloads configuration files with zero down time
        """
        signal.signal(signal.SIGHUP, signal.SIG_IGN)
        raise exception.SIGHUPInterrupt

    def kill_children(self, *args):
        """Kills the entire process group."""
        signal.signal(signal.SIGTERM, signal.SIG_IGN)
        signal.signal(signal.SIGINT, signal.SIG_IGN)
        self.running = False
        os.killpg(self.pgid, signal.SIGTERM)

    def start(self, application, default_port):
        """
        Run a WSGI server with the given application.

        :param application: The application to be run in the WSGI server
        :param default_port: Port to bind to if none is specified in conf
        """
        self.application = application
        self.default_port = default_port
        self.configure()
        self.start_wsgi()

    def start_wsgi(self):
        if CONF.workers == 0:
            # Useful for profiling, test, debug etc.
            self.pool = self.create_pool()
            self.pool.spawn_n(self._single_run, self.application, self.sock)
            return
        else:
            LOG.info(_LI("Starting %d workers") % CONF.workers)
            signal.signal(signal.SIGTERM, self.kill_children)
            signal.signal(signal.SIGINT, self.kill_children)
            signal.signal(signal.SIGHUP, self.hup)
            while len(self.children) < CONF.workers:
                self.run_child()

    def create_pool(self):
        return eventlet.GreenPool(size=self.threads)

    def _remove_children(self, pid):
        if pid in self.children:
            self.children.remove(pid)
            LOG.info(_LI('Removed dead child %s') % pid)
        elif pid in self.stale_children:
            self.stale_children.remove(pid)
            LOG.info(_LI('Removed stale child %s') % pid)
        else:
            LOG.warn(_LW('Unrecognised child %s') % pid)

    def _verify_and_respawn_children(self, pid, status):
        if len(self.stale_children) == 0:
            LOG.debug('No stale children')
        if os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0:
            LOG.error(_LE('Not respawning child %d, cannot '
                          'recover from termination') % pid)
            if not self.children and not self.stale_children:
                LOG.info(
                    _LI('All workers have terminated. Exiting'))
                self.running = False
        else:
            if len(self.children) < CONF.workers:
                self.run_child()

    def wait_on_children(self):
        while self.running:
            try:
                pid, status = os.wait()
                if os.WIFEXITED(status) or os.WIFSIGNALED(status):
                    self._remove_children(pid)
                    self._verify_and_respawn_children(pid, status)
            except OSError as err:
                if err.errno not in (errno.EINTR, errno.ECHILD):
                    raise
            except KeyboardInterrupt:
                LOG.info(_LI('Caught keyboard interrupt. Exiting.'))
                break
            except exception.SIGHUPInterrupt:
                self.reload()
                continue
        eventlet.greenio.shutdown_safe(self.sock)
        self.sock.close()
        LOG.debug('Exited')

    def configure(self, old_conf=None, has_changed=None):
        """
        Apply configuration settings

        :param old_conf: Cached old configuration settings (if any)
        :param has_changed: callable to determine if a parameter has changed
        """
        eventlet.wsgi.MAX_HEADER_LINE = CONF.max_header_line
        self.configure_socket(old_conf, has_changed)

    def reload(self):
        """
        Reload and re-apply configuration settings

        Existing child processes are sent a SIGHUP signal
        and will exit after completing existing requests.
        New child processes, which will have the updated
        configuration, are spawned. This prevents any
        interruption to the service.
        """
        def _has_changed(old, new, param):
            old = old.get(param)
            new = getattr(new, param)
            return (new != old)

        old_conf = utils.stash_conf_values()
        has_changed = functools.partial(_has_changed, old_conf, CONF)
        CONF.reload_config_files()
        os.killpg(self.pgid, signal.SIGHUP)
        self.stale_children = self.children
        self.children = set()

        # Ensure any logging config changes are picked up
        logging.setup(CONF, 'searchlight')

        self.configure(old_conf, has_changed)
        self.start_wsgi()

    def wait(self):
        """Wait until all servers have completed running."""
        try:
            if self.children:
                self.wait_on_children()
            else:
                self.pool.waitall()
        except KeyboardInterrupt:
            pass

    def run_child(self):
        def child_hup(*args):
            """Shuts down child processes; existing requests are handled."""
            signal.signal(signal.SIGHUP, signal.SIG_IGN)
            eventlet.wsgi.is_accepting = False
            self.sock.close()

        pid = os.fork()
        if pid == 0:
            signal.signal(signal.SIGHUP, child_hup)
            signal.signal(signal.SIGTERM, signal.SIG_DFL)
            # ignore the interrupt signal to avoid a race whereby
            # a child worker receives the signal before the parent
            # and is respawned unnecessarily as a result
            signal.signal(signal.SIGINT, signal.SIG_IGN)
            # The child has no need to stash the unwrapped
            # socket, and the reference prevents a clean
            # exit on sighup
            self._sock = None
            self.run_server()
            LOG.info(_LI('Child %d exiting normally') % os.getpid())
            # self.pool.waitall() is now called in wsgi's server so
            # it's safe to exit here
            sys.exit(0)
        else:
            LOG.info(_LI('Started child %s') % pid)
            self.children.add(pid)

    def run_server(self):
        """Run a WSGI server."""
        if cfg.CONF.pydev_worker_debug_host:
            utils.setup_remote_pydev_debug(cfg.CONF.pydev_worker_debug_host,
                                           cfg.CONF.pydev_worker_debug_port)

        eventlet.wsgi.HttpProtocol.default_request_version = "HTTP/1.0"
        self.pool = self.create_pool()
        try:
            eventlet.wsgi.server(self.sock,
                                 self.application,
                                 log=self._wsgi_logger,
                                 custom_pool=self.pool,
                                 debug=False,
                                 keepalive=CONF.http_keepalive)
        except socket.error as err:
            if err[0] != errno.EINVAL:
                raise

        # waiting on async pools
        if ASYNC_EVENTLET_THREAD_POOL_LIST:
            for pool in ASYNC_EVENTLET_THREAD_POOL_LIST:
                pool.waitall()

    def _single_run(self, application, sock):
        """Start a WSGI server in a new green thread."""
        LOG.info(_LI("Starting single process server"))
        eventlet.wsgi.server(sock, application, custom_pool=self.pool,
                             log=self._wsgi_logger,
                             debug=False,
                             keepalive=CONF.http_keepalive)

    def configure_socket(self, old_conf=None, has_changed=None):
        """
        Ensure a socket exists and is appropriately configured.

        This function is called on start up, and can also be
        called in the event of a configuration reload.

        When called for the first time a new socket is created.
        If reloading and either bind_host or bind_port have been
        changed the existing socket must be closed and a new
        socket opened (laws of physics).

        In all other cases (bind_host/bind_port have not changed)
        the existing socket is reused.

        :param old_conf: Cached old configuration settings (if any)
        :param has_changed: callable to determine if a parameter has changed
        """
        # Do we need a fresh socket?
        new_sock = (old_conf is None or (
                    has_changed('bind_host') or
                    has_changed('bind_port')))
        # Will we be using https?
        use_ssl = not (not CONF.cert_file or not CONF.key_file)
        # Were we using https before?
        old_use_ssl = (old_conf is not None and not (
                       not old_conf.get('key_file') or
                       not old_conf.get('cert_file')))
        # Do we now need to perform an SSL wrap on the socket?
        wrap_sock = use_ssl is True and (old_use_ssl is False or new_sock)
        # Do we now need to perform an SSL unwrap on the socket?
        unwrap_sock = use_ssl is False and old_use_ssl is True

        if new_sock:
            self._sock = None
            if old_conf is not None:
                self.sock.close()
            _sock = get_socket(self.default_port)
            _sock.setsockopt(socket.SOL_SOCKET,
                             socket.SO_REUSEADDR, 1)
            # sockets can hang around forever without keepalive
            _sock.setsockopt(socket.SOL_SOCKET,
                             socket.SO_KEEPALIVE, 1)
            self._sock = _sock

        if wrap_sock:
            self.sock = ssl_wrap_socket(self._sock)

        if unwrap_sock:
            self.sock = self._sock

        if new_sock and not use_ssl:
            self.sock = self._sock

        # Pick up newly deployed certs
        if old_conf is not None and use_ssl is True and old_use_ssl is True:
            if has_changed('cert_file') or has_changed('key_file'):
                utils.validate_key_cert(CONF.key_file, CONF.cert_file)
            if has_changed('cert_file'):
                self.sock.certfile = CONF.cert_file
            if has_changed('key_file'):
                self.sock.keyfile = CONF.key_file

        if new_sock or (old_conf is not None and has_changed('tcp_keepidle')):
            # This option isn't available in the OS X version of eventlet
            if hasattr(socket, 'TCP_KEEPIDLE'):
                self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE,
                                     CONF.tcp_keepidle)

        if old_conf is not None and has_changed('backlog'):
            self.sock.listen(CONF.backlog)


class Middleware(object):
    """
    Base WSGI middleware wrapper. These classes require an application to be
    initialized that will be called next. By default the middleware will
    simply call its wrapped app, or you can override __call__ to customize its
    behavior.
    """

    def __init__(self, application):
        self.application = application

    @classmethod
    def factory(cls, global_conf, **local_conf):
        def filter(app):
            return cls(app)
        return filter

    def process_request(self, req):
        """
        Called on each request.

        If this returns None, the next application down the stack will be
        executed. If it returns a response then that response will be returned
        and execution will stop here.
        """
        return None

    def process_response(self, response):
        """Do whatever you'd like to the response."""
        return response

    @webob.dec.wsgify
    def __call__(self, req):
        response = self.process_request(req)
        if response:
            return response
        response = req.get_response(self.application)
        response.request = req
        try:
            return self.process_response(response)
        except webob.exc.HTTPException as e:
            return e
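# Illustrative comment (not part of the original module): a subclass only
# needs to override the hook it cares about, e.g. a response-stamping filter.
# The class and header name below are hypothetical examples.
#
#     class StampHeader(Middleware):
#         def process_response(self, response):
#             response.headers['X-Hypothetical-Stamp'] = 'seen'
#             return response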


class Debug(Middleware):
    """
    Helper class that can be inserted into any WSGI application chain
    to get information about the request and response.
    """

    @webob.dec.wsgify
    def __call__(self, req):
        print(("*" * 40) + " REQUEST ENVIRON")
        for key, value in req.environ.items():
            print(key, "=", value)
        print('')
        resp = req.get_response(self.application)

        print(("*" * 40) + " RESPONSE HEADERS")
        for (key, value) in six.iteritems(resp.headers):
            print(key, "=", value)
        print('')

        resp.app_iter = self.print_generator(resp.app_iter)

        return resp

    @staticmethod
    def print_generator(app_iter):
        """
        Iterator that prints the contents of a wrapper string iterator
        when iterated.
        """
        print(("*" * 40) + " BODY")
        for part in app_iter:
            sys.stdout.write(part)
            sys.stdout.flush()
            yield part
        print()


class APIMapper(routes.Mapper):
    """
    Handle route matching when url is '' because routes.Mapper returns
    an error in this case.
    """

    def routematch(self, url=None, environ=None):
        if url == "":
            result = self._match("", environ)
            return result[0], result[1]
        return routes.Mapper.routematch(self, url, environ)


class RejectMethodController(object):
    def reject(self, req, allowed_methods, *args, **kwargs):
        LOG.debug("The method %s is not allowed for this resource" %
                  req.environ['REQUEST_METHOD'])
        raise webob.exc.HTTPMethodNotAllowed(
            headers=[('Allow', allowed_methods)])


class Router(object):
    """
    WSGI middleware that maps incoming requests to WSGI apps.
    """

    def __init__(self, mapper):
        """
        Create a router for the given routes.Mapper.

        Each route in `mapper` must specify a 'controller', which is a
        WSGI app to call. You'll probably want to specify an 'action' as
        well and have your controller be a wsgi.Controller, who will route
        the request to the action method.

        Examples:
          mapper = routes.Mapper()
          sc = ServerController()

          # Explicit mapping of one route to a controller+action
          mapper.connect(None, "/svrlist", controller=sc, action="list")

          # Actions are all implicitly defined
          mapper.resource("server", "servers", controller=sc)

          # Pointing to an arbitrary WSGI app. You can specify the
          # {path_info:.*} parameter so the target app can be handed just that
          # section of the URL.
          mapper.connect(None, "/v1.0/{path_info:.*}", controller=BlogApp())
        """
        mapper.redirect("", "/")
        self.map = mapper
        self._router = routes.middleware.RoutesMiddleware(self._dispatch,
                                                          self.map)

    @classmethod
    def factory(cls, global_conf, **local_conf):
        return cls(APIMapper())

    @webob.dec.wsgify
    def __call__(self, req):
        """
        Route the incoming request to a controller based on self.map.
        If no match, return either a 404 (Not Found) or 501 (Not Implemented).
        """
        return self._router

    @staticmethod
    @webob.dec.wsgify
    def _dispatch(req):
        """
        Called by self._router after matching the incoming request to a route
        and putting the information into req.environ. Either returns 404,
        501, or the routed WSGI app's response.
        """
        match = req.environ['wsgiorg.routing_args'][1]
        if not match:
            implemented_http_methods = ['GET', 'HEAD', 'POST', 'PUT',
                                        'DELETE', 'PATCH']
            if req.environ['REQUEST_METHOD'] not in implemented_http_methods:
                return webob.exc.HTTPNotImplemented()
            else:
                return webob.exc.HTTPNotFound()
        app = match['controller']
        return app


class Request(webob.Request):
    """Add some OpenStack API-specific logic to the base webob.Request."""

    def best_match_content_type(self):
        """Determine the requested response content-type."""
        supported = ('application/json',)
        bm = self.accept.best_match(supported)
        return bm or 'application/json'

    def get_content_type(self, allowed_content_types):
        """Determine content type of the request body."""
        if "Content-Type" not in self.headers:
            raise exception.InvalidContentType(content_type=None)

        content_type = self.content_type

        if content_type not in allowed_content_types:
            raise exception.InvalidContentType(content_type=content_type)
        else:
            return content_type

    def best_match_language(self):
        """Determines best available locale from the Accept-Language header.

        :returns: the best language match or None if the 'Accept-Language'
                  header was not available in the request.
        """
        if not self.accept_language:
            return None
        langs = i18n.get_available_languages('searchlight')
        return self.accept_language.best_match(langs)

    def get_content_range(self):
        """Return the `Range` in a request."""
        range_str = self.headers.get('Content-Range')
        if range_str is not None:
            range_ = webob.byterange.ContentRange.parse(range_str)
            if range_ is None:
                msg = _('Malformed Content-Range header: %s') % range_str
                raise webob.exc.HTTPBadRequest(explanation=msg)
            return range_


class JSONRequestDeserializer(object):
    valid_transfer_encoding = frozenset(['chunked', 'compress', 'deflate',
                                         'gzip', 'identity'])

    def has_body(self, request):
        """
        Returns whether a Webob.Request object will possess an entity body.

        :param request: Webob.Request object
        """
        request_encoding = request.headers.get('transfer-encoding', '').lower()
        is_valid_encoding = request_encoding in self.valid_transfer_encoding
        if is_valid_encoding and request.is_body_readable:
            return True
        elif request.content_length > 0:
            return True

        return False

    @staticmethod
    def _sanitizer(obj):
        """Sanitizer method that will be passed to jsonutils.loads."""
        return obj

    def from_json(self, datastring):
        try:
            return jsonutils.loads(datastring, object_hook=self._sanitizer)
        except ValueError:
            msg = _('Malformed JSON in request body.')
            raise webob.exc.HTTPBadRequest(explanation=msg)

    def default(self, request):
        if self.has_body(request):
            return {'body': self.from_json(request.body)}
        else:
            return {}
|
||||
|
||||
|
||||
class JSONResponseSerializer(object):
|
||||
|
||||
def _sanitizer(self, obj):
|
||||
"""Sanitizer method that will be passed to jsonutils.dumps."""
|
||||
if hasattr(obj, "to_dict"):
|
||||
return obj.to_dict()
|
||||
if isinstance(obj, multidict.MultiDict):
|
||||
return obj.mixed()
|
||||
return jsonutils.to_primitive(obj)
|
||||
|
||||
def to_json(self, data):
|
||||
return jsonutils.dumps(data, default=self._sanitizer)
|
||||
|
||||
def default(self, response, result):
|
||||
response.content_type = 'application/json'
|
||||
response.body = self.to_json(result)
|
||||
|
||||
|
||||
def translate_exception(req, e):
|
||||
"""Translates all translatable elements of the given exception."""
|
||||
|
||||
# The RequestClass attribute in the webob.dec.wsgify decorator
|
||||
# does not guarantee that the request object will be a particular
|
||||
# type; this check is therefore necessary.
|
||||
if not hasattr(req, "best_match_language"):
|
||||
return e
|
||||
|
||||
locale = req.best_match_language()
|
||||
|
||||
if isinstance(e, webob.exc.HTTPError):
|
||||
e.explanation = i18n.translate(e.explanation, locale)
|
||||
e.detail = i18n.translate(e.detail, locale)
|
||||
if getattr(e, 'body_template', None):
|
||||
e.body_template = i18n.translate(e.body_template, locale)
|
||||
return e
|
||||
|
||||
|
||||
class Resource(object):
|
||||
"""
|
||||
WSGI app that handles (de)serialization and controller dispatch.
|
||||
|
||||
Reads routing information supplied by RoutesMiddleware and calls
|
||||
the requested action method upon its deserializer, controller,
|
||||
and serializer. Those three objects may implement any of the basic
|
||||
controller action methods (create, update, show, index, delete)
|
||||
along with any that may be specified in the api router. A 'default'
|
||||
method may also be implemented to be used in place of any
|
||||
non-implemented actions. Deserializer methods must accept a request
|
||||
argument and return a dictionary. Controller methods must accept a
|
||||
request argument. Additionally, they must also accept keyword
|
||||
arguments that represent the keys returned by the Deserializer. They
|
||||
may raise a webob.exc exception or return a dict, which will be
|
||||
serialized by requested content type.
|
||||
"""
|
||||
|
||||
def __init__(self, controller, deserializer=None, serializer=None):
|
||||
"""
|
||||
:param controller: object that implement methods created by routes lib
|
||||
:param deserializer: object that supports webob request deserialization
|
||||
through controller-like actions
|
||||
:param serializer: object that supports webob response serialization
|
||||
through controller-like actions
|
||||
"""
|
||||
self.controller = controller
|
||||
self.serializer = serializer or JSONResponseSerializer()
|
||||
self.deserializer = deserializer or JSONRequestDeserializer()
|
||||
|
||||
@webob.dec.wsgify(RequestClass=Request)
|
||||
def __call__(self, request):
|
||||
"""WSGI method that controls (de)serialization and method dispatch."""
|
||||
action_args = self.get_action_args(request.environ)
|
||||
action = action_args.pop('action', None)
|
||||
|
||||
try:
|
||||
deserialized_request = self.dispatch(self.deserializer,
|
||||
action, request)
|
||||
action_args.update(deserialized_request)
|
||||
action_result = self.dispatch(self.controller, action,
|
||||
request, **action_args)
|
||||
except webob.exc.WSGIHTTPException as e:
|
||||
exc_info = sys.exc_info()
|
||||
raise translate_exception(request, e), None, exc_info[2]
|
||||
|
||||
try:
|
||||
response = webob.Response(request=request)
|
||||
self.dispatch(self.serializer, action, response, action_result)
|
||||
return response
|
||||
except webob.exc.WSGIHTTPException as e:
|
||||
return translate_exception(request, e)
|
||||
except webob.exc.HTTPException as e:
|
||||
return e
|
||||
# return unserializable result (typically a webob exc)
|
||||
except Exception:
|
||||
return action_result
|
||||
|
||||
def dispatch(self, obj, action, *args, **kwargs):
|
||||
"""Find action-specific method on self and call it."""
|
||||
try:
|
||||
method = getattr(obj, action)
|
||||
except AttributeError:
|
||||
method = getattr(obj, 'default')
|
||||
|
||||
return method(*args, **kwargs)
|
||||
|
||||
def get_action_args(self, request_environment):
|
||||
"""Parse dictionary created by routes library."""
|
||||
try:
|
||||
args = request_environment['wsgiorg.routing_args'][1].copy()
|
||||
except Exception:
|
||||
return {}
|
||||
|
||||
try:
|
||||
del args['controller']
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
try:
|
||||
del args['format']
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
return args
|
|
@@ -0,0 +1,70 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from datetime import datetime

from oslo_utils import timeutils
from wsme import types as wsme_types


class WSMEModelTransformer(object):

    def to_dict(self):
        # Return the wsme_attributes names:values as a dict
        my_dict = {}
        for attribute in self._wsme_attributes:
            value = getattr(self, attribute.name)
            if value is not wsme_types.Unset:
                my_dict.update({attribute.name: value})
        return my_dict

    @classmethod
    def to_wsme_model(model, db_entity, self_link=None, schema=None):
        # Build a wsme model from the named attributes of a db entity
        names = []
        for attribute in model._wsme_attributes:
            names.append(attribute.name)

        values = {}
        for name in names:
            value = getattr(db_entity, name, None)
            if value is not None:
                if type(value) == datetime:
                    iso_datetime_value = timeutils.isotime(value)
                    values.update({name: iso_datetime_value})
                else:
                    values.update({name: value})

        if schema:
            values['schema'] = schema

        model_object = model(**values)

        # 'self' kwarg is used in wsme.types.Base.__init__(self, ..) and
        # conflicts during initialization. self_link is a proxy field to self.
        if self_link:
            model_object.self = self_link

        return model_object

    @classmethod
    def get_mandatory_attrs(cls):
        return [attr.name for attr in cls._wsme_attributes if attr.mandatory]


def _get_value(obj):
    if obj is not wsme_types.Unset:
        return obj
    else:
        return None
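The `to_dict` filtering above relies on wsme's `Unset` sentinel to distinguish "not provided" from falsy values. A dependency-free sketch of the same pattern, with a local `_UNSET` sentinel standing in for `wsme.types.Unset`:

```python
class _Unset(object):
    """Stand-in sentinel for wsme.types.Unset ('attribute not provided')."""

_UNSET = _Unset()


def to_dict(obj, attribute_names):
    # Mirrors WSMEModelTransformer.to_dict: include only attributes
    # that were actually set, using identity against the sentinel so
    # falsy values like 0 or '' are still included.
    result = {}
    for name in attribute_names:
        value = getattr(obj, name, _UNSET)
        if value is not _UNSET:
            result[name] = value
    return result


class Image(object):
    # Illustrative model: 'checksum' is deliberately left unset.
    name = 'cirros'
    size = 12345

print(to_dict(Image(), ['name', 'size', 'checksum']))  # {'name': 'cirros', 'size': 12345}
```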
@@ -0,0 +1,60 @@
# Copyright 2011-2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_context import context

from searchlight.api import policy


class RequestContext(context.RequestContext):
    """Stores information about the security context.

    Stores how the user accesses the system, as well as additional request
    information.

    """

    def __init__(self, roles=None,
                 owner_is_tenant=True, service_catalog=None,
                 policy_enforcer=None, **kwargs):
        super(RequestContext, self).__init__(**kwargs)
        self.roles = roles or []
        self.owner_is_tenant = owner_is_tenant
        self.service_catalog = service_catalog
        self.policy_enforcer = policy_enforcer or policy.Enforcer()
        if not self.is_admin:
            self.is_admin = self.policy_enforcer.check_is_admin(self)

    def to_dict(self):
        d = super(RequestContext, self).to_dict()
        d.update({
            'roles': self.roles,
            'service_catalog': self.service_catalog,
        })
        return d

    @classmethod
    def from_dict(cls, values):
        return cls(**values)

    @property
    def owner(self):
        """Return the owner to correlate with an image."""
        return self.tenant if self.owner_is_tenant else self.user

    @property
    def can_see_deleted(self):
        """Admins can see deleted by default"""
        return self.show_deleted or self.is_admin
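The `owner` property above simply selects between the tenant and the user depending on how ownership is configured. A minimal stand-alone sketch of that selection (the class and values here are illustrative, not the real `RequestContext`):

```python
class RequestContextSketch(object):
    """Illustrative stand-in showing the owner_is_tenant selection."""

    def __init__(self, tenant, user, owner_is_tenant=True):
        self.tenant = tenant
        self.user = user
        self.owner_is_tenant = owner_is_tenant

    @property
    def owner(self):
        # Mirrors RequestContext.owner: the 'owner' used for RBAC
        # correlation is either the tenant or the individual user.
        return self.tenant if self.owner_is_tenant else self.user


print(RequestContextSketch('tenant-1', 'user-1').owner)                          # tenant-1
print(RequestContextSketch('tenant-1', 'user-1', owner_is_tenant=False).owner)   # user-1
```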
@@ -0,0 +1,77 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import elasticsearch
from elasticsearch import helpers
from oslo_config import cfg

from searchlight.common import utils


search_opts = [
    cfg.ListOpt('hosts', default=['127.0.0.1:9200'],
                help='List of nodes where Elasticsearch instances are '
                     'running. A single node should be defined as an IP '
                     'address and port number.'),
]

CONF = cfg.CONF
CONF.register_opts(search_opts, group='elasticsearch')


def get_api():
    es_hosts = CONF.elasticsearch.hosts
    es_api = elasticsearch.Elasticsearch(hosts=es_hosts)
    return es_api


class CatalogSearchRepo(object):

    def __init__(self, context, es_api):
        self.context = context
        self.es_api = es_api
        self.plugins = utils.get_search_plugins() or []
        self.plugins_info_dict = self._get_plugin_info()

    def search(self, index, doc_type, query, fields, offset, limit,
               ignore_unavailable=True):
        return self.es_api.search(
            index=index,
            doc_type=doc_type,
            body=query,
            _source_include=fields,
            from_=offset,
            size=limit,
            ignore_unavailable=ignore_unavailable)

    def index(self, default_index, default_type, actions):
        return helpers.bulk(
            client=self.es_api,
            index=default_index,
            doc_type=default_type,
            actions=actions)

    def plugins_info(self):
        return self.plugins_info_dict

    def _get_plugin_info(self):
        plugin_info = dict()
        plugin_info['plugins'] = []
        for plugin in self.plugins:
            info = dict()
            info['type'] = plugin.obj.get_document_type()
            info['index'] = plugin.obj.get_index_name()
            plugin_info['plugins'].append(info)
        return plugin_info
@@ -0,0 +1,140 @@
# Copyright 2015 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc

from elasticsearch import helpers
import six

import searchlight.elasticsearch


@six.add_metaclass(abc.ABCMeta)
class IndexBase(object):
    chunk_size = 200

    def __init__(self):
        self.engine = searchlight.elasticsearch.get_api()
        self.index_name = self.get_index_name()
        self.document_type = self.get_document_type()

    def setup(self):
        """Comprehensively install search engine index and put data into it."""
        self.setup_index()
        self.setup_mapping()
        self.setup_data()

    def setup_index(self):
        """Create the index if it doesn't exist and update its settings."""
        index_exists = self.engine.indices.exists(self.index_name)
        if not index_exists:
            self.engine.indices.create(index=self.index_name)

        index_settings = self.get_settings()
        if index_settings:
            self.engine.indices.put_settings(index=self.index_name,
                                             body=index_settings)

        return index_exists

    def setup_mapping(self):
        """Update index document mapping."""
        index_mapping = self.get_mapping()

        if index_mapping:
            self.engine.indices.put_mapping(index=self.index_name,
                                            doc_type=self.document_type,
                                            body=index_mapping)

    def setup_data(self):
        """Insert all objects from database into search engine."""
        object_list = self.get_objects()
        documents = []
        for obj in object_list:
            document = self.serialize(obj)
            documents.append(document)

        self.save_documents(documents)

    def save_documents(self, documents, id_field='id'):
        """Send list of serialized documents into search engine."""
        actions = []
        for document in documents:
            action = {
                '_id': document.get(id_field),
                '_source': document,
            }

            actions.append(action)

        helpers.bulk(
            client=self.engine,
            index=self.index_name,
            doc_type=self.document_type,
            chunk_size=self.chunk_size,
            actions=actions)

    @abc.abstractmethod
    def get_objects(self):
        """Get list of all objects which will be indexed into search engine."""

    @abc.abstractmethod
    def serialize(self, obj):
        """Serialize database object into valid search engine document."""

    @abc.abstractmethod
    def get_index_name(self):
        """Get name of the index."""

    @abc.abstractmethod
    def get_document_type(self):
        """Get name of the document type."""

    @abc.abstractmethod
    def get_rbac_filter(self, request_context):
        """Get the RBAC filter as an ES JSON filter DSL."""

    def filter_result(self, result, request_context):
        """Filter the outgoing search result."""
        return result

    def get_settings(self):
        """Get the index settings."""
        return {}

    def get_mapping(self):
        """Get the index mapping."""
        return {}

    def get_notification_handler(self):
        """Get the notification handler which implements NotificationBase."""
        return None

    def get_notification_supported_events(self):
        """Get the list of supported event types."""
        return []


@six.add_metaclass(abc.ABCMeta)
class NotificationBase(object):

    def __init__(self, engine, index_name, document_type):
        self.engine = engine
        self.index_name = index_name
        self.document_type = document_type

    @abc.abstractmethod
    def process(self, ctxt, publisher_id, event_type, payload, metadata):
        """Process the incoming notification message."""
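`save_documents` above wraps each serialized document in the action shape that `elasticsearch.helpers.bulk` expects before sending. The wrapping step alone can be sketched without an Elasticsearch client:

```python
def build_bulk_actions(documents, id_field='id'):
    # Mirrors the loop in IndexBase.save_documents: each document
    # becomes a bulk action carrying its id and source body.
    actions = []
    for document in documents:
        actions.append({
            '_id': document.get(id_field),
            '_source': document,
        })
    return actions


docs = [{'id': '1', 'name': 'cirros'}, {'id': '2', 'name': 'fedora'}]
print(build_bulk_actions(docs))
```

The index, document type, and chunk size are supplied separately to `helpers.bulk`, so the per-document actions stay minimal.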
@@ -0,0 +1,155 @@
# Copyright 2015 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_utils import timeutils

from searchlight.api import policy
from searchlight.common import property_utils
from searchlight.elasticsearch.plugins import base
from searchlight.elasticsearch.plugins import images_notification_handler


class ImageIndex(base.IndexBase):
    def __init__(self, policy_enforcer=None):
        super(ImageIndex, self).__init__()
        self.policy = policy_enforcer or policy.Enforcer()
        if property_utils.is_property_protection_enabled():
            self.property_rules = property_utils.PropertyRules(self.policy)
        self._image_base_properties = [
            'checksum', 'created_at', 'container_format', 'disk_format', 'id',
            'min_disk', 'min_ram', 'name', 'size', 'virtual_size', 'status',
            'tags', 'updated_at', 'visibility', 'protected', 'owner',
            'members']

    def get_index_name(self):
        return 'glance'

    def get_document_type(self):
        return 'image'

    def get_mapping(self):
        return {
            'dynamic': True,
            'properties': {
                'id': {'type': 'string', 'index': 'not_analyzed'},
                'name': {'type': 'string'},
                'description': {'type': 'string'},
                'tags': {'type': 'string'},
                'disk_format': {'type': 'string'},
                'container_format': {'type': 'string'},
                'size': {'type': 'long'},
                'virtual_size': {'type': 'long'},
                'status': {'type': 'string'},
                'visibility': {'type': 'string'},
                'checksum': {'type': 'string'},
                'min_disk': {'type': 'long'},
                'min_ram': {'type': 'long'},
                'owner': {'type': 'string', 'index': 'not_analyzed'},
                'protected': {'type': 'boolean'},
                'members': {'type': 'string', 'index': 'not_analyzed'},
                "created_at": {'type': 'date'},
                "updated_at": {'type': 'date'}
            },
        }

    def get_rbac_filter(self, request_context):
        return [
            {
                "and": [
                    {
                        'or': [
                            {
                                'term': {
                                    'owner': request_context.owner
                                }
                            },
                            {
                                'term': {
                                    'visibility': 'public'
                                }
                            },
                            {
                                'term': {
                                    'members': request_context.tenant
                                }
                            }
                        ]
                    },
                    {
                        'type': {
                            'value': self.get_document_type()
                        }
                    }
                ]
            }
        ]

    def filter_result(self, result, request_context):
        if property_utils.is_property_protection_enabled():
            hits = result['hits']['hits']
            for hit in hits:
                if hit['_type'] == self.get_document_type():
                    source = hit['_source']
                    for key in source.keys():
                        if key not in self._image_base_properties:
                            if not self.property_rules.check_property_rules(
                                    key, 'read', request_context):
                                del hit['_source'][key]
        return result

    def get_objects(self):
        # TODO: Get objects from Glance API.
        return images

    def serialize(self, obj):
        visibility = 'public' if obj.is_public else 'private'
        members = []
        for member in obj.members:
            if member.status == 'accepted' and member.deleted == 0:
                members.append(member.member)

        document = {
            'id': obj.id,
            'name': obj.name,
            'tags': obj.tags,
            'disk_format': obj.disk_format,
            'container_format': obj.container_format,
            'size': obj.size,
            'virtual_size': obj.virtual_size,
            'status': obj.status,
            'visibility': visibility,
            'checksum': obj.checksum,
            'min_disk': obj.min_disk,
            'min_ram': obj.min_ram,
            'owner': obj.owner,
            'protected': obj.protected,
            'members': members,
            'created_at': timeutils.isotime(obj.created_at),
            'updated_at': timeutils.isotime(obj.updated_at)
        }
        for image_property in obj.properties:
            document[image_property.name] = image_property.value

        return document

    def get_notification_handler(self):
        return images_notification_handler.ImageHandler(
            self.engine,
            self.get_index_name(),
            self.get_document_type()
        )

    def get_notification_supported_events(self):
        return ['image.create', 'image.update', 'image.delete']
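The RBAC filter returned by `get_rbac_filter` above is plain filter-DSL data: a document is visible if it is owned by the requester, public, or shared with the requester's tenant, and it must be of the right document type. Rebuilt as a stand-alone function (the function and argument names are illustrative), the disjunction is easier to see:

```python
def image_rbac_filter(owner, tenant, document_type='image'):
    # Same shape as ImageIndex.get_rbac_filter: an 'and' of
    # (owner OR public OR shared-with-tenant) and the document type.
    return [{
        'and': [
            {'or': [
                {'term': {'owner': owner}},
                {'term': {'visibility': 'public'}},
                {'term': {'members': tenant}},
            ]},
            {'type': {'value': document_type}},
        ]
    }]


filter_dsl = image_rbac_filter('owner-1', 'tenant-1')
print(filter_dsl[0]['and'][0]['or'][0])  # {'term': {'owner': 'owner-1'}}
```

Because the filter is just nested dicts and lists, it can be unit-tested without a running Elasticsearch cluster.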
@@ -0,0 +1,83 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_log import log as logging
import oslo_messaging

from searchlight.common import utils
from searchlight.elasticsearch.plugins import base

LOG = logging.getLogger(__name__)


class ImageHandler(base.NotificationBase):

    def __init__(self, *args, **kwargs):
        super(ImageHandler, self).__init__(*args, **kwargs)
        self.image_delete_keys = ['deleted_at', 'deleted',
                                  'is_public', 'properties']

    def process(self, ctxt, publisher_id, event_type, payload, metadata):
        try:
            actions = {
                "image.create": self.create,
                "image.update": self.update,
                "image.delete": self.delete
            }
            actions[event_type](payload)
            return oslo_messaging.NotificationResult.HANDLED
        except Exception as e:
            LOG.error(utils.exception_to_str(e))

    def create(self, payload):
        id = payload['id']
        payload = self.format_image(payload)
        self.engine.create(
            index=self.index_name,
            doc_type=self.document_type,
            body=payload,
            id=id
        )

    def update(self, payload):
        id = payload['id']
        payload = self.format_image(payload)
        doc = {"doc": payload}
        self.engine.update(
            index=self.index_name,
            doc_type=self.document_type,
            body=doc,
            id=id
        )

    def delete(self, payload):
        id = payload['id']
        self.engine.delete(
            index=self.index_name,
            doc_type=self.document_type,
            id=id
        )

    def format_image(self, payload):
        visibility = 'public' if payload['is_public'] else 'private'
        payload['visibility'] = visibility

        payload.update(payload.get('properties', {}))

        for key in payload.keys():
            if key in self.image_delete_keys:
                del payload[key]

        return payload
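`format_image` above derives `visibility` from the notification payload, hoists the custom image properties to the top level, and drops the notification-only keys before indexing. A dependency-free sketch of that transformation (working on a copy so the input payload stays intact):

```python
IMAGE_DELETE_KEYS = ('deleted_at', 'deleted', 'is_public', 'properties')


def format_image(payload, delete_keys=IMAGE_DELETE_KEYS):
    # Mirrors ImageHandler.format_image on a copy of the payload:
    # derive visibility, flatten the nested 'properties' dict into the
    # top level, then strip keys that should not be indexed.
    payload = dict(payload)
    payload['visibility'] = 'public' if payload['is_public'] else 'private'
    # Default must be a dict so update() is a no-op when absent.
    payload.update(payload.get('properties', {}))
    for key in list(payload):
        if key in delete_keys:
            del payload[key]
    return payload


print(format_image({'id': '1', 'is_public': True,
                    'properties': {'os_distro': 'ubuntu'}}))
```

Iterating over `list(payload)` makes the deletion loop safe on Python 3, where the original Python 2 `payload.keys()` idiom would raise during mutation.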
@@ -0,0 +1,225 @@
# Copyright 2015 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

import six

from searchlight.elasticsearch.plugins import base
from searchlight.elasticsearch.plugins import metadefs_notification_handler


class MetadefIndex(base.IndexBase):
    def __init__(self):
        super(MetadefIndex, self).__init__()

    def get_index_name(self):
        return 'glance'

    def get_document_type(self):
        return 'metadef'

    def get_mapping(self):
        property_mapping = {
            'dynamic': True,
            'type': 'nested',
            'properties': {
                'property': {'type': 'string', 'index': 'not_analyzed'},
                'type': {'type': 'string'},
                'title': {'type': 'string'},
                'description': {'type': 'string'},
            }
        }
        mapping = {
            '_id': {
                'path': 'namespace',
            },
            'properties': {
                'display_name': {'type': 'string'},
                'description': {'type': 'string'},
                'namespace': {'type': 'string', 'index': 'not_analyzed'},
                'owner': {'type': 'string', 'index': 'not_analyzed'},
                'visibility': {'type': 'string', 'index': 'not_analyzed'},
                'resource_types': {
                    'type': 'nested',
                    'properties': {
                        'name': {'type': 'string'},
                        'prefix': {'type': 'string'},
                        'properties_target': {'type': 'string'},
                    },
                },
                'objects': {
                    'type': 'nested',
                    'properties': {
                        'id': {'type': 'string', 'index': 'not_analyzed'},
                        'name': {'type': 'string'},
                        'description': {'type': 'string'},
                        'properties': property_mapping,
                    }
                },
                'properties': property_mapping,
                'tags': {
                    'type': 'nested',
                    'properties': {
                        'name': {'type': 'string'},
                    }
                }
            },
        }
        return mapping

    def get_rbac_filter(self, request_context):
        # TODO(krykowski): Define base get_rbac_filter in IndexBase class
        # which will provide some common subset of query pieces.
        # Something like:
        # def get_common_context_pieces(self, request_context):
        #     return [{'term': {'owner': request_context.owner,
        #                       'type': {'value': self.get_document_type()}}]
        return [
            {
                "and": [
                    {
                        'or': [
                            {
                                'term': {
                                    'owner': request_context.owner
                                }
                            },
                            {
                                'term': {
                                    'visibility': 'public'
                                }
                            }
                        ]
                    },
                    {
                        'type': {
                            'value': self.get_document_type()
                        }
                    }
                ]
            }
        ]

    def get_objects(self):
        # TODO: Use Glance API instead of db
        return namespaces

    def get_namespace_resource_types(self, namespace_id, resource_types):
        # TODO: Use Glance API instead of db
        return resource_associations

    def get_namespace_properties(self, namespace_id):
        # TODO: Use Glance API instead of db
        return list(properties)

    def get_namespace_objects(self, namespace_id):
        # TODO: Use Glance API instead of db
        return list(namespace_objects)

    def get_namespace_tags(self, namespace_id):
        # TODO: Use Glance API instead of db
        return list(namespace_tags)

    def serialize(self, obj):
        object_docs = [self.serialize_object(ns_obj) for ns_obj in obj.objects]
        property_docs = [self.serialize_property(prop.name, prop.json_schema)
                         for prop in obj.properties]
        resource_type_docs = [self.serialize_namespace_resource_type(rt)
                              for rt in obj.resource_types]
        tag_docs = [self.serialize_tag(tag) for tag in obj.tags]
        namespace_doc = self.serialize_namespace(obj)
        namespace_doc.update({
            'objects': object_docs,
            'properties': property_docs,
            'resource_types': resource_type_docs,
            'tags': tag_docs,
        })
        return namespace_doc

    def serialize_namespace(self, namespace):
        return {
            'namespace': namespace.namespace,
            'display_name': namespace.display_name,
            'description': namespace.description,
            'visibility': namespace.visibility,
            'protected': namespace.protected,
            'owner': namespace.owner,
        }

    def serialize_object(self, obj):
        obj_properties = obj.json_schema
        property_docs = []
        for name, schema in six.iteritems(obj_properties):
            property_doc = self.serialize_property(name, schema)
            property_docs.append(property_doc)

        document = {
            'name': obj.name,
            'description': obj.description,
            'properties': property_docs,
        }
        return document

    def serialize_property(self, name, schema):
        document = copy.deepcopy(schema)
        document['property'] = name

        if 'default' in document:
            document['default'] = str(document['default'])
        if 'enum' in document:
            document['enum'] = map(str, document['enum'])

        return document

    def serialize_namespace_resource_type(self, ns_resource_type):
        return {
            'name': ns_resource_type['name'],
            'prefix': ns_resource_type['prefix'],
            'properties_target': ns_resource_type['properties_target']
        }

    def serialize_tag(self, tag):
        return {
            'name': tag.name
        }

    def get_notification_handler(self):
        return metadefs_notification_handler.MetadefHandler(
            self.engine,
            self.get_index_name(),
            self.get_document_type()
        )

    def get_notification_supported_events(self):
        return [
            "metadef_namespace.create",
            "metadef_namespace.update",
            "metadef_namespace.delete",
            "metadef_object.create",
            "metadef_object.update",
            "metadef_object.delete",
            "metadef_property.create",
            "metadef_property.update",
            "metadef_property.delete",
            "metadef_tag.create",
            "metadef_tag.update",
            "metadef_tag.delete",
            "metadef_resource_type.create",
            "metadef_resource_type.delete",
            "metadef_namespace.delete_properties",
            "metadef_namespace.delete_objects",
            "metadef_namespace.delete_tags"
        ]
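`serialize_property` above copies the property's JSON schema, records the property name, and stringifies defaults and enum values so they index uniformly. A stand-alone sketch of the same transformation (using a list comprehension where the Python 2 code uses `map`, so the result is a concrete list on Python 3 as well):

```python
import copy


def serialize_property(name, schema):
    # Mirrors MetadefIndex.serialize_property: deep-copy the schema so
    # the caller's dict is untouched, tag it with the property name,
    # and coerce 'default' and 'enum' members to strings.
    document = copy.deepcopy(schema)
    document['property'] = name

    if 'default' in document:
        document['default'] = str(document['default'])
    if 'enum' in document:
        document['enum'] = [str(value) for value in document['enum']]

    return document


print(serialize_property('vcpus', {'type': 'integer', 'default': 1,
                                   'enum': [1, 2, 4]}))
```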
@@ -0,0 +1,251 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import six

from oslo_log import log as logging
import oslo_messaging

from searchlight.common import utils
from searchlight.elasticsearch.plugins import base

LOG = logging.getLogger(__name__)


class MetadefHandler(base.NotificationBase):

    def __init__(self, *args, **kwargs):
        super(MetadefHandler, self).__init__(*args, **kwargs)
        self.namespace_delete_keys = ['deleted_at', 'deleted', 'created_at',
                                      'updated_at', 'namespace_old']
        self.property_delete_keys = ['deleted', 'deleted_at',
                                     'name_old', 'namespace', 'name']

    def process(self, ctxt, publisher_id, event_type, payload, metadata):
        try:
            actions = {
                "metadef_namespace.create": self.create_ns,
                "metadef_namespace.update": self.update_ns,
                "metadef_namespace.delete": self.delete_ns,
                "metadef_object.create": self.create_obj,
                "metadef_object.update": self.update_obj,
                "metadef_object.delete": self.delete_obj,
                "metadef_property.create": self.create_prop,
                "metadef_property.update": self.update_prop,
                "metadef_property.delete": self.delete_prop,
                "metadef_resource_type.create": self.create_rs,
                "metadef_resource_type.delete": self.delete_rs,
                "metadef_tag.create": self.create_tag,
                "metadef_tag.update": self.update_tag,
                "metadef_tag.delete": self.delete_tag,
                "metadef_namespace.delete_properties": self.delete_props,
                "metadef_namespace.delete_objects": self.delete_objects,
                "metadef_namespace.delete_tags": self.delete_tags
            }
            actions[event_type](payload)
            return oslo_messaging.NotificationResult.HANDLED
        except Exception as e:
            LOG.error(utils.exception_to_str(e))

    def run_create(self, id, payload):
        self.engine.create(
            index=self.index_name,
            doc_type=self.document_type,
|
||||
body=payload,
|
||||
id=id
|
||||
)
|
||||
|
||||
def run_update(self, id, payload, script=False):
|
||||
if script:
|
||||
self.engine.update(
|
||||
index=self.index_name,
|
||||
doc_type=self.document_type,
|
||||
body=payload,
|
||||
id=id)
|
||||
else:
|
||||
doc = {"doc": payload}
|
||||
self.engine.update(
|
||||
index=self.index_name,
|
||||
doc_type=self.document_type,
|
||||
body=doc,
|
||||
id=id)
|
||||
|
||||
def run_delete(self, id):
|
||||
self.engine.delete(
|
||||
index=self.index_name,
|
||||
doc_type=self.document_type,
|
||||
id=id
|
||||
)
|
||||
|
||||
def create_ns(self, payload):
|
||||
id = payload['namespace']
|
||||
self.run_create(id, self.format_namespace(payload))
|
||||
|
||||
def update_ns(self, payload):
|
||||
id = payload['namespace_old']
|
||||
self.run_update(id, self.format_namespace(payload))
|
||||
|
||||
def delete_ns(self, payload):
|
||||
id = payload['namespace']
|
||||
self.run_delete(id)
|
||||
|
||||
def create_obj(self, payload):
|
||||
id = payload['namespace']
|
||||
object = self.format_object(payload)
|
||||
self.create_entity(id, "objects", object)
|
||||
|
||||
def update_obj(self, payload):
|
||||
id = payload['namespace']
|
||||
object = self.format_object(payload)
|
||||
self.update_entity(id, "objects", object,
|
||||
payload['name_old'], "name")
|
||||
|
||||
def delete_obj(self, payload):
|
||||
id = payload['namespace']
|
||||
self.delete_entity(id, "objects", payload['name'], "name")
|
||||
|
||||
def create_prop(self, payload):
|
||||
id = payload['namespace']
|
||||
property = self.format_property(payload)
|
||||
self.create_entity(id, "properties", property)
|
||||
|
||||
def update_prop(self, payload):
|
||||
id = payload['namespace']
|
||||
property = self.format_property(payload)
|
||||
self.update_entity(id, "properties", property,
|
||||
payload['name_old'], "property")
|
||||
|
||||
def delete_prop(self, payload):
|
||||
id = payload['namespace']
|
||||
self.delete_entity(id, "properties", payload['name'], "property")
|
||||
|
||||
def create_rs(self, payload):
|
||||
id = payload['namespace']
|
||||
resource_type = dict()
|
||||
resource_type['name'] = payload['name']
|
||||
if payload['prefix']:
|
||||
resource_type['prefix'] = payload['prefix']
|
||||
if payload['properties_target']:
|
||||
resource_type['properties_target'] = payload['properties_target']
|
||||
|
||||
self.create_entity(id, "resource_types", resource_type)
|
||||
|
||||
def delete_rs(self, payload):
|
||||
id = payload['namespace']
|
||||
self.delete_entity(id, "resource_types", payload['name'], "name")
|
||||
|
||||
def create_tag(self, payload):
|
||||
id = payload['namespace']
|
||||
tag = dict()
|
||||
tag['name'] = payload['name']
|
||||
|
||||
self.create_entity(id, "tags", tag)
|
||||
|
||||
def update_tag(self, payload):
|
||||
id = payload['namespace']
|
||||
tag = dict()
|
||||
tag['name'] = payload['name']
|
||||
|
||||
self.update_entity(id, "tags", tag, payload['name_old'], "name")
|
||||
|
||||
def delete_tag(self, payload):
|
||||
id = payload['namespace']
|
||||
self.delete_entity(id, "tags", payload['name'], "name")
|
||||
|
||||
def delete_props(self, payload):
|
||||
self.delete_field(payload, "properties")
|
||||
|
||||
def delete_objects(self, payload):
|
||||
self.delete_field(payload, "objects")
|
||||
|
||||
def delete_tags(self, payload):
|
||||
self.delete_field(payload, "tags")
|
||||
|
||||
def create_entity(self, id, entity, entity_data):
|
||||
script = ("if (ctx._source.containsKey('%(entity)s'))"
|
||||
"{ctx._source.%(entity)s += entity_item }"
|
||||
"else {ctx._source.%(entity)s=entity_list};" %
|
||||
{"entity": entity})
|
||||
|
||||
params = {
|
||||
"entity_item": entity_data,
|
||||
"entity_list": [entity_data]
|
||||
}
|
||||
payload = {"script": script, "params": params}
|
||||
self.run_update(id, payload=payload, script=True)
|
||||
|
||||
def update_entity(self, id, entity, entity_data, entity_id, field_name):
|
||||
entity_id = entity_id.lower()
|
||||
script = ("obj=null; for(entity_item :ctx._source.%(entity)s)"
|
||||
"{if(entity_item['%(field_name)s'].toLowerCase() "
|
||||
" == entity_id ) obj=entity_item;};"
|
||||
"if(obj!=null)ctx._source.%(entity)s.remove(obj);"
|
||||
"if (ctx._source.containsKey('%(entity)s'))"
|
||||
"{ctx._source.%(entity)s += entity_item; }"
|
||||
"else {ctx._source.%(entity)s=entity_list;}" %
|
||||
{"entity": entity, "field_name": field_name})
|
||||
params = {
|
||||
"entity_item": entity_data,
|
||||
"entity_list": [entity_data],
|
||||
"entity_id": entity_id
|
||||
}
|
||||
payload = {"script": script, "params": params}
|
||||
self.run_update(id, payload=payload, script=True)
|
||||
|
||||
def delete_entity(self, id, entity, entity_id, field_name):
|
||||
entity_id = entity_id.lower()
|
||||
script = ("obj=null; for(entity_item :ctx._source.%(entity)s)"
|
||||
"{if(entity_item['%(field_name)s'].toLowerCase() "
|
||||
" == entity_id ) obj=entity_item;};"
|
||||
"if(obj!=null)ctx._source.%(entity)s.remove(obj);" %
|
||||
{"entity": entity, "field_name": field_name})
|
||||
params = {
|
||||
"entity_id": entity_id
|
||||
}
|
||||
payload = {"script": script, "params": params}
|
||||
self.run_update(id, payload=payload, script=True)
|
||||
|
||||
def delete_field(self, payload, field):
|
||||
id = payload['namespace']
|
||||
script = ("if (ctx._source.containsKey('%(field)s'))"
|
||||
"{ctx._source.remove('%(field)s')}") % {"field": field}
|
||||
payload = {"script": script}
|
||||
self.run_update(id, payload=payload, script=True)
|
||||
|
||||
def format_namespace(self, payload):
|
||||
for key in self.namespace_delete_keys:
|
||||
if key in payload.keys():
|
||||
del payload[key]
|
||||
return payload
|
||||
|
||||
def format_object(self, payload):
|
||||
formatted_object = dict()
|
||||
formatted_object['name'] = payload['name']
|
||||
formatted_object['description'] = payload['description']
|
||||
if payload['required']:
|
||||
formatted_object['required'] = payload['required']
|
||||
formatted_object['properties'] = []
|
||||
for property in payload['properties']:
|
||||
formatted_property = self.format_property(property)
|
||||
formatted_object['properties'].append(formatted_property)
|
||||
return formatted_object
|
||||
|
||||
def format_property(self, payload):
|
||||
prop_data = dict()
|
||||
prop_data['property'] = payload['name']
|
||||
for key, value in six.iteritems(payload):
|
||||
if key not in self.property_delete_keys and value:
|
||||
prop_data[key] = value
|
||||
return prop_data
|
|
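The `process` method above routes each incoming notification through a plain dict lookup from event-type string to bound method. A minimal standalone sketch of that dispatch pattern (the handler names and payloads here are illustrative, not the Searchlight classes):

```python
class MiniHandler:
    """Toy dispatcher mirroring MetadefHandler.process: one dict maps
    event-type strings to bound methods; unknown events are not handled."""

    def __init__(self):
        self.log = []  # record of handled events, for demonstration

    def create_ns(self, payload):
        self.log.append(('create', payload['namespace']))

    def delete_ns(self, payload):
        self.log.append(('delete', payload['namespace']))

    def process(self, event_type, payload):
        actions = {
            "metadef_namespace.create": self.create_ns,
            "metadef_namespace.delete": self.delete_ns,
        }
        try:
            actions[event_type](payload)
            return "handled"
        except KeyError:
            # Event type has no registered handler.
            return None


h = MiniHandler()
h.process("metadef_namespace.create", {"namespace": "OS::Compute"})
```

The dict is rebuilt on every call in the original code as well; hoisting it into `__init__` would be a micro-optimization, not a behavior change.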
@@ -0,0 +1,38 @@
# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging

from searchlight.api import policy
from searchlight.common import exception
import searchlight.elasticsearch

LOG = logging.getLogger(__name__)


class Gateway(object):
    def __init__(self, policy_enforcer=None, es_api=None):
        self.policy = policy_enforcer or policy.Enforcer()
        if es_api:
            self.es_api = es_api
        else:
            self.es_api = searchlight.elasticsearch.get_api()

    def get_catalog_search_repo(self, context):
        search_repo = searchlight.elasticsearch.CatalogSearchRepo(
            context, self.es_api)
        policy_search_repo = policy.CatalogSearchRepoProxy(
            search_repo, context, self.policy)
        return policy_search_repo
@@ -0,0 +1,31 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_i18n import *  # noqa

_translators = TranslatorFactory(domain='searchlight')

# The primary translation function using the well-known name "_"
_ = _translators.primary

# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical
@@ -0,0 +1,90 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
import stevedore

from searchlight import i18n
from searchlight.openstack.common import service as os_service

LOG = logging.getLogger(__name__)
_ = i18n._
_LE = i18n._LE


class NotificationEndpoint(object):

    def __init__(self):
        self.plugins = get_plugins()
        self.notification_target_map = dict()
        for plugin in self.plugins:
            try:
                event_list = plugin.obj.get_notification_supported_events()
                for event in event_list:
                    self.notification_target_map[event.lower()] = plugin.obj
            except Exception as e:
                LOG.error(_LE("Failed to retrieve supported notification"
                              " events from search plugins "
                              "%(ext)s: %(e)s") %
                          {'ext': plugin.name, 'e': e})

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        event_type_l = event_type.lower()
        if event_type_l in self.notification_target_map:
            plugin = self.notification_target_map[event_type_l]
            handler = plugin.get_notification_handler()
            handler.process(
                ctxt,
                publisher_id,
                event_type,
                payload,
                metadata)


class ListenerService(os_service.Service):
    def __init__(self, *args, **kwargs):
        super(ListenerService, self).__init__(*args, **kwargs)
        self.listeners = []

    def start(self):
        super(ListenerService, self).start()
        transport = oslo_messaging.get_transport(cfg.CONF)
        targets = [
            oslo_messaging.Target(topic="notifications", exchange="glance")
        ]
        endpoints = [
            NotificationEndpoint()
        ]
        listener = oslo_messaging.get_notification_listener(
            transport,
            targets,
            endpoints)
        listener.start()
        self.listeners.append(listener)

    def stop(self):
        for listener in self.listeners:
            listener.stop()
            listener.wait()
        super(ListenerService, self).stop()


def get_plugins():
    namespace = 'searchlight.search.index_backend'
    ext_manager = stevedore.extension.ExtensionManager(
        namespace, invoke_on_load=True)
    return ext_manager.extensions
@@ -0,0 +1,167 @@
# Copyright 2011, OpenStack Foundation
# Copyright 2012, Red Hat, Inc.
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc

from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_utils import excutils
from oslo_utils import timeutils
import six
import webob

from searchlight.common import exception
from searchlight.common import utils
from searchlight import i18n

_ = i18n._
_LE = i18n._LE

notifier_opts = [
    cfg.StrOpt('default_publisher_id', default="image.localhost",
               help='Default publisher_id for outgoing notifications.'),
    cfg.ListOpt('disabled_notifications', default=[],
                help='List of disabled notifications. A notification can be '
                     'given either as a notification type to disable a single '
                     'event, or as a notification group prefix to disable all '
                     'events within a group. Example: if this config option '
                     'is set to ["image.create", "metadef_namespace"], then '
                     '"image.create" notification will not be sent after '
                     'image is created and none of the notifications for '
                     'metadefinition namespaces will be sent.'),
]

CONF = cfg.CONF
CONF.register_opts(notifier_opts)

LOG = logging.getLogger(__name__)

_ALIASES = {
    'searchlight.openstack.common.rpc.impl_kombu': 'rabbit',
    'searchlight.openstack.common.rpc.impl_qpid': 'qpid',
    'searchlight.openstack.common.rpc.impl_zmq': 'zmq',
}


def get_transport():
    return oslo_messaging.get_transport(CONF, aliases=_ALIASES)


class Notifier(object):
    """Uses a notification strategy to send out messages about events."""

    def __init__(self):
        publisher_id = CONF.default_publisher_id
        self._transport = get_transport()
        self._notifier = oslo_messaging.Notifier(self._transport,
                                                 publisher_id=publisher_id)

    def warn(self, event_type, payload):
        self._notifier.warn({}, event_type, payload)

    def info(self, event_type, payload):
        self._notifier.info({}, event_type, payload)

    def error(self, event_type, payload):
        self._notifier.error({}, event_type, payload)


def _get_notification_group(notification):
    return notification.split('.', 1)[0]


def _is_notification_enabled(notification):
    disabled_notifications = CONF.disabled_notifications
    notification_group = _get_notification_group(notification)

    notifications = (notification, notification_group)
    for disabled_notification in disabled_notifications:
        if disabled_notification in notifications:
            return False

    return True


def _send_notification(notify, notification_type, payload):
    if _is_notification_enabled(notification_type):
        notify(notification_type, payload)


class NotificationBase(object):
    def get_payload(self, obj):
        return {}

    def send_notification(self, notification_id, obj, extra_payload=None):
        payload = self.get_payload(obj)
        if extra_payload is not None:
            payload.update(extra_payload)

        _send_notification(self.notifier.info, notification_id, payload)


@six.add_metaclass(abc.ABCMeta)
class NotificationProxy(NotificationBase):
    def __init__(self, repo, context, notifier):
        self.repo = repo
        self.context = context
        self.notifier = notifier

        super_class = self.get_super_class()
        super_class.__init__(self, repo)

    @abc.abstractmethod
    def get_super_class(self):
        pass


@six.add_metaclass(abc.ABCMeta)
class NotificationRepoProxy(NotificationBase):
    def __init__(self, repo, context, notifier):
        self.repo = repo
        self.context = context
        self.notifier = notifier
        proxy_kwargs = {'context': self.context, 'notifier': self.notifier}

        proxy_class = self.get_proxy_class()
        super_class = self.get_super_class()
        super_class.__init__(self, repo, proxy_class, proxy_kwargs)

    @abc.abstractmethod
    def get_super_class(self):
        pass

    @abc.abstractmethod
    def get_proxy_class(self):
        pass


@six.add_metaclass(abc.ABCMeta)
class NotificationFactoryProxy(object):
    def __init__(self, factory, context, notifier):
        kwargs = {'context': context, 'notifier': notifier}

        proxy_class = self.get_proxy_class()
        super_class = self.get_super_class()
        super_class.__init__(self, factory, proxy_class, kwargs)

    @abc.abstractmethod
    def get_super_class(self):
        pass

    @abc.abstractmethod
    def get_proxy_class(self):
        pass
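The `disabled_notifications` option matches either a full event type or its group prefix (the part before the first dot). That filtering rule from `_is_notification_enabled` can be sketched standalone, without the oslo.config plumbing:

```python
def is_notification_enabled(notification, disabled):
    """Return False if the event type itself or its group prefix
    (text before the first '.') appears in the disabled list."""
    group = notification.split('.', 1)[0]
    return not any(d in (notification, group) for d in disabled)


disabled = ["image.create", "metadef_namespace"]
is_notification_enabled("image.create", disabled)             # False: exact match
is_notification_enabled("metadef_namespace.delete", disabled)  # False: group prefix
is_notification_enabled("image.update", disabled)              # True: no match
```

This mirrors the documented example in the option's help text: disabling `"metadef_namespace"` suppresses every `metadef_namespace.*` event at once.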
@@ -0,0 +1,16 @@
oslo-incubator
--------------

A number of modules from oslo-incubator are imported into this project.
You can clone the oslo-incubator repository using the following URL:

    git://git.openstack.org/openstack/oslo-incubator

These modules are "incubating" in oslo-incubator and are kept in sync
with the help of oslo-incubator's update.py script. See:

    https://wiki.openstack.org/wiki/Oslo#Syncing_Code_from_Incubator

The copy of the code should never be directly modified here. Please
always update oslo-incubator first and then run the script to copy
the changes across.
@@ -0,0 +1,45 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""oslo.i18n integration module.

See http://docs.openstack.org/developer/oslo.i18n/usage.html

"""

try:
    import oslo_i18n

    # NOTE(dhellmann): This reference to o-s-l-o will be replaced by the
    # application name when this module is synced into the separate
    # repository. It is OK to have more than one translation function
    # using the same domain, since there will still only be one message
    # catalog.
    _translators = oslo_i18n.TranslatorFactory(domain='searchlight')

    # The primary translation function using the well-known name "_"
    _ = _translators.primary

    # Translators for log levels.
    #
    # The abbreviated names are meant to reflect the usual use of a short
    # name like '_'. The "L" is for "log" and the other letter comes from
    # the level.
    _LI = _translators.log_info
    _LW = _translators.log_warning
    _LE = _translators.log_error
    _LC = _translators.log_critical
except ImportError:
    # NOTE(dims): Support for cases where a project wants to use
    # code from oslo-incubator, but is not ready to be internationalized
    # (like tempest)
    _ = _LI = _LW = _LE = _LC = lambda x: x
@@ -0,0 +1,151 @@
# Copyright (c) 2012 OpenStack Foundation.
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from __future__ import print_function

import copy
import errno
import gc
import logging
import os
import pprint
import socket
import sys
import traceback

import eventlet.backdoor
import greenlet
from oslo_config import cfg

from searchlight.openstack.common._i18n import _LI

help_for_backdoor_port = (
    "Acceptable values are 0, <port>, and <start>:<end>, where 0 results "
    "in listening on a random tcp port number; <port> results in listening "
    "on the specified port number (and not enabling backdoor if that port "
    "is in use); and <start>:<end> results in listening on the smallest "
    "unused port number within the specified range of port numbers. The "
    "chosen port is displayed in the service's log file.")
eventlet_backdoor_opts = [
    cfg.StrOpt('backdoor_port',
               help="Enable eventlet backdoor. %s" % help_for_backdoor_port)
]

CONF = cfg.CONF
CONF.register_opts(eventlet_backdoor_opts)
LOG = logging.getLogger(__name__)


def list_opts():
    """Entry point for oslo-config-generator.
    """
    return [(None, copy.deepcopy(eventlet_backdoor_opts))]


class EventletBackdoorConfigValueError(Exception):
    def __init__(self, port_range, help_msg, ex):
        msg = ('Invalid backdoor_port configuration %(range)s: %(ex)s. '
               '%(help)s' %
               {'range': port_range, 'ex': ex, 'help': help_msg})
        super(EventletBackdoorConfigValueError, self).__init__(msg)
        self.port_range = port_range


def _dont_use_this():
    print("Don't use this, just disconnect instead")


def _find_objects(t):
    return [o for o in gc.get_objects() if isinstance(o, t)]


def _print_greenthreads():
    for i, gt in enumerate(_find_objects(greenlet.greenlet)):
        print(i, gt)
        traceback.print_stack(gt.gr_frame)
        print()


def _print_nativethreads():
    for threadId, stack in sys._current_frames().items():
        print(threadId)
        traceback.print_stack(stack)
        print()


def _parse_port_range(port_range):
    if ':' not in port_range:
        start, end = port_range, port_range
    else:
        start, end = port_range.split(':', 1)
    try:
        start, end = int(start), int(end)
        if end < start:
            raise ValueError
        return start, end
    except ValueError as ex:
        raise EventletBackdoorConfigValueError(port_range, ex,
                                               help_for_backdoor_port)


def _listen(host, start_port, end_port, listen_func):
    try_port = start_port
    while True:
        try:
            return listen_func((host, try_port))
        except socket.error as exc:
            if (exc.errno != errno.EADDRINUSE or
                    try_port >= end_port):
                raise
            try_port += 1


def initialize_if_enabled():
    backdoor_locals = {
        'exit': _dont_use_this,      # So we don't exit the entire process
        'quit': _dont_use_this,      # So we don't exit the entire process
        'fo': _find_objects,
        'pgt': _print_greenthreads,
        'pnt': _print_nativethreads,
    }

    if CONF.backdoor_port is None:
        return None

    start_port, end_port = _parse_port_range(str(CONF.backdoor_port))

    # NOTE(johannes): The standard sys.displayhook will print the value of
    # the last expression and set it to __builtin__._, which overwrites
    # the __builtin__._ that gettext sets. Let's switch to using pprint
    # since it won't interact poorly with gettext, and it's easier to
    # read the output too.
    def displayhook(val):
        if val is not None:
            pprint.pprint(val)
    sys.displayhook = displayhook

    sock = _listen('localhost', start_port, end_port, eventlet.listen)

    # In the case of backdoor port being zero, a port number is assigned by
    # listen(). In any case, pull the port number out here.
    port = sock.getsockname()[1]
    LOG.info(
        _LI('Eventlet backdoor listening on %(port)s for process %(pid)d') %
        {'port': port, 'pid': os.getpid()}
    )
    eventlet.spawn_n(eventlet.backdoor.backdoor_server, sock,
                     locals=backdoor_locals)
    return port
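The `backdoor_port` option accepts `0`, a single port, or a `start:end` range, and `_parse_port_range` normalizes all three into a `(start, end)` tuple. The parsing rule can be exercised standalone (reimplemented here with a plain `ValueError` instead of the config-specific exception):

```python
def parse_port_range(port_range):
    """'8000' -> (8000, 8000); '8000:9000' -> (8000, 9000).
    Raises ValueError for non-numeric input or a reversed range,
    mirroring _parse_port_range above without the oslo.config wrapper."""
    if ':' not in port_range:
        start, end = port_range, port_range
    else:
        start, end = port_range.split(':', 1)
    start, end = int(start), int(end)
    if end < start:
        raise ValueError("end < start in %r" % port_range)
    return start, end


parse_port_range("0")          # (0, 0): listen on a random port
parse_port_range("8000:9000")  # (8000, 9000): first free port in range
```

With a single port, start and end collapse to the same value, so `_listen` tries exactly one port and raises if it is in use, which is the "not enabling backdoor if that port is in use" behavior described in the help text.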
@ -0,0 +1,149 @@
|
|||
# Copyright 2011 OpenStack Foundation.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import contextlib
|
||||
import errno
|
||||
import logging
|
||||
import os
|
||||
import stat
|
||||
import tempfile
|
||||
|
||||
from oslo_utils import excutils
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
_FILE_CACHE = {}
|
||||
DEFAULT_MODE = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO
|
||||
|
||||
|
||||
def ensure_tree(path, mode=DEFAULT_MODE):
|
||||
"""Create a directory (and any ancestor directories required)
|
||||
|
||||
:param path: Directory to create
|
||||
:param mode: Directory creation permissions
|
||||
"""
|
||||
try:
|
||||
os.makedirs(path, mode)
|
||||
except OSError as exc:
|
||||
if exc.errno == errno.EEXIST:
|
||||
if not os.path.isdir(path):
|
||||
raise
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
def read_cached_file(filename, force_reload=False):
|
||||
"""Read from a file if it has been modified.
|
||||
|
||||
:param force_reload: Whether to reload the file.
|
||||
:returns: A tuple with a boolean specifying if the data is fresh
|
||||
or not.
|
||||
"""
|
||||
global _FILE_CACHE
|
||||
|
||||
if force_reload:
|
||||
delete_cached_file(filename)
|
||||
|
||||
reloaded = False
    mtime = os.path.getmtime(filename)
    cache_info = _FILE_CACHE.setdefault(filename, {})

    if not cache_info or mtime > cache_info.get('mtime', 0):
        LOG.debug("Reloading cached file %s" % filename)
        with open(filename) as fap:
            cache_info['data'] = fap.read()
        cache_info['mtime'] = mtime
        reloaded = True
    return (reloaded, cache_info['data'])


def delete_cached_file(filename):
    """Delete cached file if present.

    :param filename: filename to delete
    """
    global _FILE_CACHE

    if filename in _FILE_CACHE:
        del _FILE_CACHE[filename]


def delete_if_exists(path, remove=os.unlink):
    """Delete a file, but ignore file not found error.

    :param path: File to delete
    :param remove: Optional function to remove passed path
    """

    try:
        remove(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise


@contextlib.contextmanager
def remove_path_on_error(path, remove=delete_if_exists):
    """Protect code that wants to operate on PATH atomically.
    Any exception will cause PATH to be removed.

    :param path: File to work with
    :param remove: Optional function to remove passed path
    """

    try:
        yield
    except Exception:
        with excutils.save_and_reraise_exception():
            remove(path)


def file_open(*args, **kwargs):
    """Open file

    see built-in open() documentation for more details

    Note: The reason this is kept in a separate module is to easily
    be able to provide a stub module that doesn't alter system
    state at all (for unit tests)
    """
    return open(*args, **kwargs)


def write_to_tempfile(content, path=None, suffix='', prefix='tmp'):
    """Create temporary file or use existing file.

    This util is needed for creating temporary file with
    specified content, suffix and prefix. If path is not None,
    it will be used for writing content. If the path doesn't
    exist it'll be created.

    :param content: content for temporary file.
    :param path: same as parameter 'dir' for mkstemp
    :param suffix: same as parameter 'suffix' for mkstemp
    :param prefix: same as parameter 'prefix' for mkstemp

    For example: it can be used in database tests for creating
    configuration files.
    """
    if path:
        ensure_tree(path)

    (fd, path) = tempfile.mkstemp(suffix=suffix, dir=path, prefix=prefix)
    try:
        os.write(fd, content)
    finally:
        os.close(fd)
    return path
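The `write_to_tempfile` helper above boils down to `tempfile.mkstemp` plus a guarded write. A minimal, dependency-free sketch (using stdlib `os.makedirs` as a stand-in for the module's `ensure_tree`, which is an assumption for self-containment):

```python
import os
import tempfile


def write_to_tempfile(content, path=None, suffix='', prefix='tmp'):
    # mirrors the helper above: mkstemp, write, close, return the path
    if path:
        os.makedirs(path, exist_ok=True)  # stand-in for ensure_tree()
    fd, path = tempfile.mkstemp(suffix=suffix, dir=path, prefix=prefix)
    try:
        os.write(fd, content)
    finally:
        os.close(fd)
    return path


p = write_to_tempfile(b'[DEFAULT]\nverbose = True\n', suffix='.conf')
with open(p, 'rb') as f:
    data = f.read()
os.unlink(p)
```

Closing the raw file descriptor in a `finally` block is what keeps the helper leak-free even when `os.write` fails.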
@@ -0,0 +1,45 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Local storage of variables using weak references"""

import threading
import weakref


class WeakLocal(threading.local):
    def __getattribute__(self, attr):
        rval = super(WeakLocal, self).__getattribute__(attr)
        if rval:
            # NOTE(mikal): this bit is confusing. What is stored is a weak
            # reference, not the value itself. We therefore need to lookup
            # the weak reference and return the inner value here.
            rval = rval()
        return rval

    def __setattr__(self, attr, value):
        value = weakref.ref(value)
        return super(WeakLocal, self).__setattr__(attr, value)


# NOTE(mikal): the name "store" should be deprecated in the future
store = WeakLocal()

# A "weak" store uses weak references and allows an object to fall out of scope
# when it falls out of scope in the code that uses the thread local storage. A
# "strong" store will hold a reference to the object so that it never falls out
# of scope.
weak_store = WeakLocal()
strong_store = threading.local()
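The weak/strong distinction described in the comment can be seen in a few lines. A self-contained sketch (the `Context` class is a hypothetical stand-in for a request-context object; note that the stored value must be weak-referenceable, so plain ints and strings would not work):

```python
import gc
import threading
import weakref


class WeakLocal(threading.local):
    # same idea as above: store weak references, dereference on read
    def __getattribute__(self, attr):
        rval = super(WeakLocal, self).__getattribute__(attr)
        if rval:
            rval = rval()  # dereference the stored weakref
        return rval

    def __setattr__(self, attr, value):
        return super(WeakLocal, self).__setattr__(attr, weakref.ref(value))


class Context(object):
    """Hypothetical request-context object; must be weak-referenceable."""


store = WeakLocal()
ctx = Context()
store.context = ctx
alive = store.context is ctx   # reads hand back the original object
del ctx                        # drop the only strong reference
gc.collect()                   # belt-and-braces; CPython refcounting suffices
gone = store.context is None   # the weak reference is now dead
```

With `strong_store = threading.local()` the second read would still return the object, since the store itself keeps it alive.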
@@ -0,0 +1,147 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import sys
import time

from eventlet import event
from eventlet import greenthread

from searchlight.openstack.common._i18n import _LE, _LW

LOG = logging.getLogger(__name__)

# NOTE(zyluo): This lambda function was declared to avoid mocking collisions
#              with time.time() called in the standard logging module
#              during unittests.
_ts = lambda: time.time()


class LoopingCallDone(Exception):
    """Exception to break out and stop a LoopingCallBase.

    The poll-function passed to LoopingCallBase can raise this exception to
    break out of the loop normally. This is somewhat analogous to
    StopIteration.

    An optional return-value can be included as the argument to the exception;
    this return-value will be returned by LoopingCallBase.wait()

    """

    def __init__(self, retvalue=True):
        """:param retvalue: Value that LoopingCallBase.wait() should return."""
        self.retvalue = retvalue


class LoopingCallBase(object):
    def __init__(self, f=None, *args, **kw):
        self.args = args
        self.kw = kw
        self.f = f
        self._running = False
        self.done = None

    def stop(self):
        self._running = False

    def wait(self):
        return self.done.wait()


class FixedIntervalLoopingCall(LoopingCallBase):
    """A fixed interval looping call."""

    def start(self, interval, initial_delay=None):
        self._running = True
        done = event.Event()

        def _inner():
            if initial_delay:
                greenthread.sleep(initial_delay)

            try:
                while self._running:
                    start = _ts()
                    self.f(*self.args, **self.kw)
                    end = _ts()
                    if not self._running:
                        break
                    delay = end - start - interval
                    if delay > 0:
                        LOG.warn(_LW('task %(func_name)r run outlasted '
                                     'interval by %(delay).2f sec'),
                                 {'func_name': self.f, 'delay': delay})
                    greenthread.sleep(-delay if delay < 0 else 0)
            except LoopingCallDone as e:
                self.stop()
                done.send(e.retvalue)
            except Exception:
                LOG.exception(_LE('in fixed duration looping call'))
                done.send_exception(*sys.exc_info())
                return
            else:
                done.send(True)

        self.done = done

        greenthread.spawn_n(_inner)
        return self.done


class DynamicLoopingCall(LoopingCallBase):
    """A looping call which sleeps until the next known event.

    The function called should return how long to sleep for before being
    called again.
    """

    def start(self, initial_delay=None, periodic_interval_max=None):
        self._running = True
        done = event.Event()

        def _inner():
            if initial_delay:
                greenthread.sleep(initial_delay)

            try:
                while self._running:
                    idle = self.f(*self.args, **self.kw)
                    if not self._running:
                        break

                    if periodic_interval_max is not None:
                        idle = min(idle, periodic_interval_max)
                    LOG.debug('Dynamic looping call %(func_name)r sleeping '
                              'for %(idle).02f seconds',
                              {'func_name': self.f, 'idle': idle})
                    greenthread.sleep(idle)
            except LoopingCallDone as e:
                self.stop()
                done.send(e.retvalue)
            except Exception:
                LOG.exception(_LE('in dynamic looping call'))
                done.send_exception(*sys.exc_info())
                return
            else:
                done.send(True)

        self.done = done

        greenthread.spawn(_inner)
        return self.done
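The `LoopingCallDone` control flow deserves a note: the polled function stops the loop by raising the exception, and the exception's `retvalue` travels back to whoever waits on the call. An eventlet-free sketch of just that mechanism (names `run_until_done`/`poll` are illustrative, not part of the module):

```python
class LoopingCallDone(Exception):
    # trimmed copy of the class above, minus the docstring
    def __init__(self, retvalue=True):
        self.retvalue = retvalue


def run_until_done(f, max_iterations=100):
    # eventlet-free stand-in for the _inner() loop: call f until it
    # raises LoopingCallDone, then hand its retvalue back to the caller
    for _ in range(max_iterations):
        try:
            f()
        except LoopingCallDone as e:
            return e.retvalue
    return True


calls = []


def poll():
    calls.append(1)
    if len(calls) == 3:
        raise LoopingCallDone(retvalue='finished')


result = run_until_done(poll)
```

In the real classes the value is delivered asynchronously via `done.send(e.retvalue)` and collected with `wait()`, but the exception-as-signal shape is the same.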
@@ -0,0 +1,495 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Generic Node base class for all workers that run on hosts."""

import errno
import logging
import os
import random
import signal
import sys
import time

try:
    # Importing just the symbol here because the io module does not
    # exist in Python 2.6.
    from io import UnsupportedOperation  # noqa
except ImportError:
    # Python 2.6
    UnsupportedOperation = None

import eventlet
from eventlet import event
from oslo_config import cfg

from searchlight.openstack.common import eventlet_backdoor
from searchlight.openstack.common._i18n import _LE, _LI, _LW
from searchlight.openstack.common import systemd
from searchlight.openstack.common import threadgroup


CONF = cfg.CONF
LOG = logging.getLogger(__name__)


def _sighup_supported():
    return hasattr(signal, 'SIGHUP')


def _is_daemon():
    # The process group for a foreground process will match the
    # process group of the controlling terminal. If those values do
    # not match, or ioctl() fails on the stdout file handle, we assume
    # the process is running in the background as a daemon.
    # http://www.gnu.org/software/bash/manual/bashref.html#Job-Control-Basics
    try:
        is_daemon = os.getpgrp() != os.tcgetpgrp(sys.stdout.fileno())
    except OSError as err:
        if err.errno == errno.ENOTTY:
            # Assume we are a daemon because there is no terminal.
            is_daemon = True
        else:
            raise
    except UnsupportedOperation:
        # Could not get the fileno for stdout, so we must be a daemon.
        is_daemon = True
    return is_daemon


def _is_sighup_and_daemon(signo):
    if not (_sighup_supported() and signo == signal.SIGHUP):
        # Avoid checking if we are a daemon, because the signal isn't
        # SIGHUP.
        return False
    return _is_daemon()


def _signo_to_signame(signo):
    signals = {signal.SIGTERM: 'SIGTERM',
               signal.SIGINT: 'SIGINT'}
    if _sighup_supported():
        signals[signal.SIGHUP] = 'SIGHUP'
    return signals[signo]


def _set_signals_handler(handler):
    signal.signal(signal.SIGTERM, handler)
    signal.signal(signal.SIGINT, handler)
    if _sighup_supported():
        signal.signal(signal.SIGHUP, handler)


class Launcher(object):
    """Launch one or more services and wait for them to complete."""

    def __init__(self):
        """Initialize the service launcher.

        :returns: None

        """
        self.services = Services()
        self.backdoor_port = eventlet_backdoor.initialize_if_enabled()

    def launch_service(self, service):
        """Load and start the given service.

        :param service: The service you would like to start.
        :returns: None

        """
        service.backdoor_port = self.backdoor_port
        self.services.add(service)

    def stop(self):
        """Stop all services which are currently running.

        :returns: None

        """
        self.services.stop()

    def wait(self):
        """Waits until all services have been stopped, and then returns.

        :returns: None

        """
        self.services.wait()

    def restart(self):
        """Reload config files and restart service.

        :returns: None

        """
        cfg.CONF.reload_config_files()
        self.services.restart()


class SignalExit(SystemExit):
    def __init__(self, signo, exccode=1):
        super(SignalExit, self).__init__(exccode)
        self.signo = signo


class ServiceLauncher(Launcher):
    def _handle_signal(self, signo, frame):
        # Allow the process to be killed again and die from natural causes
        _set_signals_handler(signal.SIG_DFL)
        raise SignalExit(signo)

    def handle_signal(self):
        _set_signals_handler(self._handle_signal)

    def _wait_for_exit_or_signal(self, ready_callback=None):
        status = None
        signo = 0

        LOG.debug('Full set of CONF:')
        CONF.log_opt_values(LOG, logging.DEBUG)

        try:
            if ready_callback:
                ready_callback()
            super(ServiceLauncher, self).wait()
        except SignalExit as exc:
            signame = _signo_to_signame(exc.signo)
            LOG.info(_LI('Caught %s, exiting'), signame)
            status = exc.code
            signo = exc.signo
        except SystemExit as exc:
            status = exc.code
        finally:
            self.stop()

        return status, signo

    def wait(self, ready_callback=None):
        systemd.notify_once()
        while True:
            self.handle_signal()
            status, signo = self._wait_for_exit_or_signal(ready_callback)
            if not _is_sighup_and_daemon(signo):
                return status
            self.restart()


class ServiceWrapper(object):
    def __init__(self, service, workers):
        self.service = service
        self.workers = workers
        self.children = set()
        self.forktimes = []


class ProcessLauncher(object):
    def __init__(self):
        """Constructor."""

        self.children = {}
        self.sigcaught = None
        self.running = True
        rfd, self.writepipe = os.pipe()
        self.readpipe = eventlet.greenio.GreenPipe(rfd, 'r')
        self.handle_signal()

    def handle_signal(self):
        _set_signals_handler(self._handle_signal)

    def _handle_signal(self, signo, frame):
        self.sigcaught = signo
        self.running = False

        # Allow the process to be killed again and die from natural causes
        _set_signals_handler(signal.SIG_DFL)

    def _pipe_watcher(self):
        # This will block until the write end is closed when the parent
        # dies unexpectedly
        self.readpipe.read()

        LOG.info(_LI('Parent process has died unexpectedly, exiting'))

        sys.exit(1)

    def _child_process_handle_signal(self):
        # Setup child signal handlers differently
        def _sigterm(*args):
            signal.signal(signal.SIGTERM, signal.SIG_DFL)
            raise SignalExit(signal.SIGTERM)

        def _sighup(*args):
            signal.signal(signal.SIGHUP, signal.SIG_DFL)
            raise SignalExit(signal.SIGHUP)

        signal.signal(signal.SIGTERM, _sigterm)
        if _sighup_supported():
            signal.signal(signal.SIGHUP, _sighup)
        # Block SIGINT and let the parent send us a SIGTERM
        signal.signal(signal.SIGINT, signal.SIG_IGN)

    def _child_wait_for_exit_or_signal(self, launcher):
        status = 0
        signo = 0

        # NOTE(johannes): All exceptions are caught to ensure this
        # doesn't fallback into the loop spawning children. It would
        # be bad for a child to spawn more children.
        try:
            launcher.wait()
        except SignalExit as exc:
            signame = _signo_to_signame(exc.signo)
            LOG.info(_LI('Child caught %s, exiting'), signame)
            status = exc.code
            signo = exc.signo
        except SystemExit as exc:
            status = exc.code
        except BaseException:
            LOG.exception(_LE('Unhandled exception'))
            status = 2
        finally:
            launcher.stop()

        return status, signo

    def _child_process(self, service):
        self._child_process_handle_signal()

        # Reopen the eventlet hub to make sure we don't share an epoll
        # fd with parent and/or siblings, which would be bad
        eventlet.hubs.use_hub()

        # Close write to ensure only parent has it open
        os.close(self.writepipe)
        # Create greenthread to watch for parent to close pipe
        eventlet.spawn_n(self._pipe_watcher)

        # Reseed random number generator
        random.seed()

        launcher = Launcher()
        launcher.launch_service(service)
        return launcher

    def _start_child(self, wrap):
        if len(wrap.forktimes) > wrap.workers:
            # Limit ourselves to one process a second (over the period of
            # number of workers * 1 second). This will allow workers to
            # start up quickly but ensure we don't fork off children that
            # die instantly too quickly.
            if time.time() - wrap.forktimes[0] < wrap.workers:
                LOG.info(_LI('Forking too fast, sleeping'))
                time.sleep(1)

            wrap.forktimes.pop(0)

        wrap.forktimes.append(time.time())

        pid = os.fork()
        if pid == 0:
            launcher = self._child_process(wrap.service)
            while True:
                self._child_process_handle_signal()
                status, signo = self._child_wait_for_exit_or_signal(launcher)
                if not _is_sighup_and_daemon(signo):
                    break
                launcher.restart()

            os._exit(status)

        LOG.info(_LI('Started child %d'), pid)

        wrap.children.add(pid)
        self.children[pid] = wrap

        return pid

    def launch_service(self, service, workers=1):
        wrap = ServiceWrapper(service, workers)

        LOG.info(_LI('Starting %d workers'), wrap.workers)
        while self.running and len(wrap.children) < wrap.workers:
            self._start_child(wrap)

    def _wait_child(self):
        try:
            # Block while any of child processes have exited
            pid, status = os.waitpid(0, 0)
            if not pid:
                return None
        except OSError as exc:
            if exc.errno not in (errno.EINTR, errno.ECHILD):
                raise
            return None

        if os.WIFSIGNALED(status):
            sig = os.WTERMSIG(status)
            LOG.info(_LI('Child %(pid)d killed by signal %(sig)d'),
                     dict(pid=pid, sig=sig))
        else:
            code = os.WEXITSTATUS(status)
            LOG.info(_LI('Child %(pid)s exited with status %(code)d'),
                     dict(pid=pid, code=code))

        if pid not in self.children:
            LOG.warning(_LW('pid %d not in child list'), pid)
            return None

        wrap = self.children.pop(pid)
        wrap.children.remove(pid)
        return wrap

    def _respawn_children(self):
        while self.running:
            wrap = self._wait_child()
            if not wrap:
                continue
            while self.running and len(wrap.children) < wrap.workers:
                self._start_child(wrap)

    def wait(self):
        """Loop waiting on children to die and respawning as necessary."""

        systemd.notify_once()
        LOG.debug('Full set of CONF:')
        CONF.log_opt_values(LOG, logging.DEBUG)

        try:
            while True:
                self.handle_signal()
                self._respawn_children()
                # No signal means that stop was called. Don't clean up here.
                if not self.sigcaught:
                    return

                signame = _signo_to_signame(self.sigcaught)
                LOG.info(_LI('Caught %s, stopping children'), signame)
                if not _is_sighup_and_daemon(self.sigcaught):
                    break

                for pid in self.children:
                    os.kill(pid, signal.SIGHUP)
                self.running = True
                self.sigcaught = None
        except eventlet.greenlet.GreenletExit:
            LOG.info(_LI("Wait called after thread killed. Cleaning up."))

        self.stop()

    def stop(self):
        """Terminate child processes and wait on each."""
        self.running = False
        for pid in self.children:
            try:
                os.kill(pid, signal.SIGTERM)
            except OSError as exc:
                if exc.errno != errno.ESRCH:
                    raise

        # Wait for children to die
        if self.children:
            LOG.info(_LI('Waiting on %d children to exit'), len(self.children))
            while self.children:
                self._wait_child()


class Service(object):
    """Service object for binaries running on hosts."""

    def __init__(self, threads=1000):
        self.tg = threadgroup.ThreadGroup(threads)

        # signal that the service is done shutting itself down:
        self._done = event.Event()

    def reset(self):
        # NOTE(Fengqian): docs for Event.reset() recommend against using it
        self._done = event.Event()

    def start(self):
        pass

    def stop(self, graceful=False):
        self.tg.stop(graceful)
        self.tg.wait()
        # Signal that service cleanup is done:
        if not self._done.ready():
            self._done.send()

    def wait(self):
        self._done.wait()


class Services(object):

    def __init__(self):
        self.services = []
        self.tg = threadgroup.ThreadGroup()
        self.done = event.Event()

    def add(self, service):
        self.services.append(service)
        self.tg.add_thread(self.run_service, service, self.done)

    def stop(self):
        # wait for graceful shutdown of services:
        for service in self.services:
            service.stop()
            service.wait()

        # Each service has performed cleanup, now signal that the run_service
        # wrapper threads can now die:
        if not self.done.ready():
            self.done.send()

        # reap threads:
        self.tg.stop()

    def wait(self):
        self.tg.wait()

    def restart(self):
        self.stop()
        self.done = event.Event()
        for restart_service in self.services:
            restart_service.reset()
            self.tg.add_thread(self.run_service, restart_service, self.done)

    @staticmethod
    def run_service(service, done):
        """Service start wrapper.

        :param service: service to run
        :param done: event to wait on until a shutdown is triggered
        :returns: None

        """
        service.start()
        done.wait()


def launch(service, workers=1):
    if workers is None or workers == 1:
        launcher = ServiceLauncher()
        launcher.launch_service(service)
    else:
        launcher = ProcessLauncher()
        launcher.launch_service(service, workers=workers)

    return launcher
@@ -0,0 +1,105 @@
# Copyright 2012-2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Helper module for systemd service readiness notification.
"""

import logging
import os
import socket
import sys


LOG = logging.getLogger(__name__)


def _abstractify(socket_name):
    if socket_name.startswith('@'):
        # abstract namespace socket
        socket_name = '\0%s' % socket_name[1:]
    return socket_name


def _sd_notify(unset_env, msg):
    notify_socket = os.getenv('NOTIFY_SOCKET')
    if notify_socket:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            sock.connect(_abstractify(notify_socket))
            sock.sendall(msg)
            if unset_env:
                del os.environ['NOTIFY_SOCKET']
        except EnvironmentError:
            LOG.debug("Systemd notification failed", exc_info=True)
        finally:
            sock.close()


def notify():
    """Send notification to Systemd that service is ready.

    For details see
    http://www.freedesktop.org/software/systemd/man/sd_notify.html
    """
    _sd_notify(False, 'READY=1')


def notify_once():
    """Send notification once to Systemd that service is ready.

    Systemd sets NOTIFY_SOCKET environment variable with the name of the
    socket listening for notifications from services.
    This method removes the NOTIFY_SOCKET environment variable to ensure
    notification is sent only once.
    """
    _sd_notify(True, 'READY=1')


def onready(notify_socket, timeout):
    """Wait for systemd style notification on the socket.

    :param notify_socket: local socket address
    :type notify_socket:  string
    :param timeout:       socket timeout
    :type timeout:        float
    :returns:             0 service ready
                          1 service not ready
                          2 timeout occurred
    """
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(_abstractify(notify_socket))
    try:
        msg = sock.recv(512)
    except socket.timeout:
        return 2
    finally:
        sock.close()
    if 'READY=1' in msg:
        return 0
    else:
        return 1


if __name__ == '__main__':
    # simple CLI for testing
    if len(sys.argv) == 1:
        notify()
    elif len(sys.argv) >= 2:
        timeout = float(sys.argv[1])
        notify_socket = os.getenv('NOTIFY_SOCKET')
        if notify_socket:
            retval = onready(notify_socket, timeout)
            sys.exit(retval)
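The `_abstractify` helper handles one systemd convention worth spelling out: abstract-namespace socket addresses arrive from systemd with a leading `@`, but the socket API expects them with a leading NUL byte instead of a filesystem path. A standalone copy (the address strings are illustrative, not real NOTIFY_SOCKET values):

```python
def _abstractify(socket_name):
    # copy of the helper above: systemd passes abstract-namespace socket
    # addresses with a leading '@', which the socket API expects as a
    # leading NUL byte; filesystem paths pass through unchanged
    if socket_name.startswith('@'):
        socket_name = '\0%s' % socket_name[1:]
    return socket_name


abstract = _abstractify('@/org/freedesktop/systemd1/notify')
plain = _abstractify('/run/user/1000/notify')
```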
@@ -0,0 +1,149 @@
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import threading

import eventlet
from eventlet import greenpool

from searchlight.openstack.common import loopingcall


LOG = logging.getLogger(__name__)


def _thread_done(gt, *args, **kwargs):
    """Callback function to be passed to GreenThread.link() when we spawn()
    Calls the :class:`ThreadGroup` to notify if.

    """
    kwargs['group'].thread_done(kwargs['thread'])


class Thread(object):
    """Wrapper around a greenthread, that holds a reference to the
    :class:`ThreadGroup`. The Thread will notify the :class:`ThreadGroup` when
    it has done so it can be removed from the threads list.
    """
    def __init__(self, thread, group):
        self.thread = thread
        self.thread.link(_thread_done, group=group, thread=self)

    def stop(self):
        self.thread.kill()

    def wait(self):
        return self.thread.wait()

    def link(self, func, *args, **kwargs):
        self.thread.link(func, *args, **kwargs)


class ThreadGroup(object):
    """The point of the ThreadGroup class is to:

    * keep track of timers and greenthreads (making it easier to stop them
      when need be).
    * provide an easy API to add timers.
    """
    def __init__(self, thread_pool_size=10):
        self.pool = greenpool.GreenPool(thread_pool_size)
        self.threads = []
        self.timers = []

    def add_dynamic_timer(self, callback, initial_delay=None,
                          periodic_interval_max=None, *args, **kwargs):
        timer = loopingcall.DynamicLoopingCall(callback, *args, **kwargs)
        timer.start(initial_delay=initial_delay,
                    periodic_interval_max=periodic_interval_max)
        self.timers.append(timer)

    def add_timer(self, interval, callback, initial_delay=None,
                  *args, **kwargs):
        pulse = loopingcall.FixedIntervalLoopingCall(callback, *args, **kwargs)
        pulse.start(interval=interval,
                    initial_delay=initial_delay)
        self.timers.append(pulse)

    def add_thread(self, callback, *args, **kwargs):
        gt = self.pool.spawn(callback, *args, **kwargs)
        th = Thread(gt, self)
        self.threads.append(th)
        return th

    def thread_done(self, thread):
        self.threads.remove(thread)

    def _stop_threads(self):
        current = threading.current_thread()

        # Iterate over a copy of self.threads so thread_done doesn't
        # modify the list while we're iterating
        for x in self.threads[:]:
            if x is current:
                # don't kill the current thread.
                continue
            try:
                x.stop()
            except eventlet.greenlet.GreenletExit:
                pass
            except Exception as ex:
                LOG.exception(ex)

    def stop_timers(self):
        for x in self.timers:
            try:
                x.stop()
            except Exception as ex:
                LOG.exception(ex)
        self.timers = []

    def stop(self, graceful=False):
        """stop function has the option of graceful=True/False.

        * In case of graceful=True, wait for all threads to be finished.
          Never kill threads.
        * In case of graceful=False, kill threads immediately.
        """
        self.stop_timers()
        if graceful:
            # In case of graceful=True, wait for all threads to be
            # finished, never kill threads
            self.wait()
        else:
            # In case of graceful=False(Default), kill threads
            # immediately
            self._stop_threads()

    def wait(self):
        for x in self.timers:
            try:
                x.wait()
            except eventlet.greenlet.GreenletExit:
                pass
            except Exception as ex:
                LOG.exception(ex)
        current = threading.current_thread()

        # Iterate over a copy of self.threads so thread_done doesn't
        # modify the list while we're iterating
        for x in self.threads[:]:
            if x is current:
                continue
            try:
                x.wait()
            except eventlet.greenlet.GreenletExit:
                pass
            except Exception as ex:
                LOG.exception(ex)
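Both `_stop_threads` and `wait` iterate over `self.threads[:]` rather than `self.threads`, because `thread_done` removes entries from the same list as threads finish. A quick eventlet-free demonstration of why the snapshot copy matters:

```python
# The "[:]" copy matters: removing elements from a list while iterating
# the same list directly skips every other element.
items = [1, 2, 3, 4]
seen = []
for x in items[:]:      # snapshot, as in _stop_threads()/wait()
    items.remove(x)     # mimics thread_done() mutating self.threads
    seen.append(x)

# For contrast, the buggy direct iteration:
buggy = [1, 2, 3, 4]
skipped = []
for x in buggy:
    buggy.remove(x)
    skipped.append(x)
```

With the snapshot every element is visited; with direct iteration the loop's internal index and the shrinking list fall out of step, so half the elements are never seen.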
|
@ -0,0 +1,226 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import jsonschema
import six

from searchlight.common import exception
from searchlight.common import utils
from searchlight import i18n

_ = i18n._


class Schema(object):

    def __init__(self, name, properties=None, links=None, required=None,
                 definitions=None):
        self.name = name
        if properties is None:
            properties = {}
        self.properties = properties
        self.links = links
        self.required = required
        self.definitions = definitions

    def validate(self, obj):
        try:
            jsonschema.validate(obj, self.raw())
        except jsonschema.ValidationError as e:
            raise exception.InvalidObject(schema=self.name,
                                          reason=utils.exception_to_str(e))

    def filter(self, obj):
        filtered = {}
        for key, value in six.iteritems(obj):
            if self._filter_func(self.properties, key):
                filtered[key] = value
        return filtered

    @staticmethod
    def _filter_func(properties, key):
        return key in properties

    def merge_properties(self, properties):
        # Ensure custom props aren't attempting to override base props
        original_keys = set(self.properties.keys())
        new_keys = set(properties.keys())
        intersecting_keys = original_keys.intersection(new_keys)
        conflicting_keys = [k for k in intersecting_keys
                            if self.properties[k] != properties[k]]
        if conflicting_keys:
            props = ', '.join(conflicting_keys)
            reason = _("custom properties (%(props)s) conflict "
                       "with base properties")
            raise exception.SchemaLoadError(reason=reason % {'props': props})

        self.properties.update(properties)

    def raw(self):
        raw = {
            'name': self.name,
            'properties': self.properties,
            'additionalProperties': False,
        }
        if self.definitions:
            raw['definitions'] = self.definitions
        if self.required:
            raw['required'] = self.required
        if self.links:
            raw['links'] = self.links
        return raw

    def minimal(self):
        minimal = {
            'name': self.name,
            'properties': self.properties
        }
        if self.definitions:
            minimal['definitions'] = self.definitions
        if self.required:
            minimal['required'] = self.required
        return minimal


class PermissiveSchema(Schema):
    @staticmethod
    def _filter_func(properties, key):
        return True

    def raw(self):
        raw = super(PermissiveSchema, self).raw()
        raw['additionalProperties'] = {'type': 'string'}
        return raw

    def minimal(self):
        minimal = super(PermissiveSchema, self).raw()
        return minimal


class CollectionSchema(object):

    def __init__(self, name, item_schema):
        self.name = name
        self.item_schema = item_schema

    def raw(self):
        definitions = None
        if self.item_schema.definitions:
            definitions = self.item_schema.definitions
            self.item_schema.definitions = None
        raw = {
            'name': self.name,
            'properties': {
                self.name: {
                    'type': 'array',
                    'items': self.item_schema.raw(),
                },
                'first': {'type': 'string'},
                'next': {'type': 'string'},
                'schema': {'type': 'string'},
            },
            'links': [
                {'rel': 'first', 'href': '{first}'},
                {'rel': 'next', 'href': '{next}'},
                {'rel': 'describedby', 'href': '{schema}'},
            ],
        }
        if definitions:
            raw['definitions'] = definitions
            self.item_schema.definitions = definitions

        return raw

    def minimal(self):
        definitions = None
        if self.item_schema.definitions:
            definitions = self.item_schema.definitions
            self.item_schema.definitions = None
        minimal = {
            'name': self.name,
            'properties': {
                self.name: {
                    'type': 'array',
                    'items': self.item_schema.minimal(),
                },
                'schema': {'type': 'string'},
            },
            'links': [
                {'rel': 'describedby', 'href': '{schema}'},
            ],
        }
        if definitions:
            minimal['definitions'] = definitions
            self.item_schema.definitions = definitions

        return minimal


class DictCollectionSchema(Schema):
    def __init__(self, name, item_schema):
        self.name = name
        self.item_schema = item_schema

    def raw(self):
        definitions = None
        if self.item_schema.definitions:
            definitions = self.item_schema.definitions
            self.item_schema.definitions = None
        raw = {
            'name': self.name,
            'properties': {
                self.name: {
                    'type': 'object',
                    'additionalProperties': self.item_schema.raw(),
                },
                'first': {'type': 'string'},
                'next': {'type': 'string'},
                'schema': {'type': 'string'},
            },
            'links': [
                {'rel': 'first', 'href': '{first}'},
                {'rel': 'next', 'href': '{next}'},
                {'rel': 'describedby', 'href': '{schema}'},
            ],
        }
        if definitions:
            raw['definitions'] = definitions
            self.item_schema.definitions = definitions

        return raw

    def minimal(self):
        definitions = None
        if self.item_schema.definitions:
            definitions = self.item_schema.definitions
            self.item_schema.definitions = None
        minimal = {
            'name': self.name,
            'properties': {
                self.name: {
                    'type': 'object',
                    'additionalProperties': self.item_schema.minimal(),
                },
                'schema': {'type': 'string'},
            },
            'links': [
                {'rel': 'describedby', 'href': '{schema}'},
            ],
        }
        if definitions:
            minimal['definitions'] = definitions
            self.item_schema.definitions = definitions

        return minimal
@ -0,0 +1,107 @@
#!/usr/bin/env python
#
# Copyright 2012-2014 eNovance <licensing@enovance.com>
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import socket
import sys

from oslo_config import cfg
import oslo_i18n
from oslo_log import log
import oslo_messaging

CONF = cfg.CONF

OPTS = [
    cfg.StrOpt('host',
               default=socket.gethostname(),
               help='Name of this node, which must be valid in an AMQP '
                    'key. Can be an opaque identifier. For ZeroMQ only, must '
                    'be a valid host name, FQDN, or IP address.'),
    cfg.IntOpt('listener_workers',
               default=1,
               help='Number of workers for notification service. A single '
                    'notification agent is enabled by default.'),
    cfg.IntOpt('http_timeout',
               default=600,
               help='Timeout seconds for HTTP requests. Set it to None to '
                    'disable timeout.'),
]
CONF.register_opts(OPTS)

CLI_OPTS = [
    cfg.StrOpt('os-username',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_USERNAME', 'searchlight'),
               help='User name to use for OpenStack service access.'),
    cfg.StrOpt('os-password',
               deprecated_group="DEFAULT",
               secret=True,
               default=os.environ.get('OS_PASSWORD', 'admin'),
               help='Password to use for OpenStack service access.'),
    cfg.StrOpt('os-tenant-id',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_TENANT_ID', ''),
               help='Tenant ID to use for OpenStack service access.'),
    cfg.StrOpt('os-tenant-name',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_TENANT_NAME', 'admin'),
               help='Tenant name to use for OpenStack service access.'),
    cfg.StrOpt('os-cacert',
               default=os.environ.get('OS_CACERT'),
               help='Certificate chain for SSL validation.'),
    cfg.StrOpt('os-auth-url',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_AUTH_URL',
                                      'http://localhost:5000/v2.0'),
               help='Auth URL to use for OpenStack service access.'),
    cfg.StrOpt('os-region-name',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_REGION_NAME'),
               help='Region name to use for OpenStack service endpoints.'),
    cfg.StrOpt('os-endpoint-type',
               default=os.environ.get('OS_ENDPOINT_TYPE', 'publicURL'),
               help='Type of endpoint in Identity service catalog to use for '
                    'communication with OpenStack services.'),
    cfg.BoolOpt('insecure',
                default=False,
                help='Disables X.509 certificate validation when an '
                     'SSL connection to Identity Service is established.'),
]
CONF.register_cli_opts(CLI_OPTS, group="service_credentials")

LOG = log.getLogger(__name__)
_DEFAULT_LOG_LEVELS = ['keystonemiddleware=WARN', 'stevedore=WARN']


class WorkerException(Exception):
    """Exception for errors relating to service workers."""


def get_workers(name):
    return 1


def prepare_service(argv=None):
    oslo_i18n.enable_lazy()
    log.set_defaults(_DEFAULT_LOG_LEVELS)
    log.register_options(CONF)
    if argv is None:
        argv = sys.argv
    CONF(argv[1:], project='searchlight')
    log.setup(cfg.CONF, 'searchlight')
    oslo_messaging.set_transport_defaults('searchlight')
@ -0,0 +1,33 @@
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import searchlight.cmd as searchlight_cmd


searchlight_cmd.fix_greendns_ipv6()

# See http://code.google.com/p/python-nose/issues/detail?id=373
# The code below enables tests to work with i18n _() blocks
import six.moves.builtins as __builtin__
setattr(__builtin__, '_', lambda x: x)

# Set up logging to output debugging
import logging
logger = logging.getLogger()
hdlr = logging.FileHandler('run_tests.log', 'w')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
@ -0,0 +1,7 @@
{
    "context_is_admin": "role:admin",
    "default": "",

    "catalog_index": "role:admin",
    "catalog_search": ""
}
@ -0,0 +1,59 @@
[spl_creator_policy]
create = searchlight_creator
read = searchlight_creator
update = context_is_admin
delete = context_is_admin

[spl_default_policy]
create = context_is_admin
read = default
update = context_is_admin
delete = context_is_admin

[^x_all_permitted.*]
create = @
read = @
update = @
delete = @

[^x_none_permitted.*]
create = !
read = !
update = !
delete = !

[x_none_read]
create = context_is_admin
read = !
update = !
delete = !

[x_none_update]
create = context_is_admin
read = context_is_admin
update = !
delete = context_is_admin

[x_none_delete]
create = context_is_admin
read = context_is_admin
update = context_is_admin
delete = !

[x_foo_matcher]
create = context_is_admin
read = context_is_admin
update = context_is_admin
delete = context_is_admin

[x_foo_*]
create = @
read = @
update = @
delete = @

[.*]
create = context_is_admin
read = context_is_admin
update = context_is_admin
delete = context_is_admin
@ -0,0 +1,95 @@
[^x_owner_.*]
create = admin,member
read = admin,member
update = admin,member
delete = admin,member

[spl_create_prop]
create = admin,spl_role
read = admin,spl_role
update = admin
delete = admin

[spl_read_prop]
create = admin,spl_role
read = admin,spl_role
update = admin
delete = admin

[spl_read_only_prop]
create = admin
read = admin,spl_role
update = admin
delete = admin

[spl_update_prop]
create = admin,spl_role
read = admin,spl_role
update = admin,spl_role
delete = admin

[spl_update_only_prop]
create = admin
read = admin
update = admin,spl_role
delete = admin

[spl_delete_prop]
create = admin,spl_role
read = admin,spl_role
update = admin
delete = admin,spl_role

[spl_delete_empty_prop]
create = admin,spl_role
read = admin,spl_role
update = admin
delete = admin,spl_role

[^x_all_permitted.*]
create = @
read = @
update = @
delete = @

[^x_none_permitted.*]
create = !
read = !
update = !
delete = !

[x_none_read]
create = admin,member
read = !
update = !
delete = !

[x_none_update]
create = admin,member
read = admin,member
update = !
delete = admin,member

[x_none_delete]
create = admin,member
read = admin,member
update = admin,member
delete = !

[x_foo_matcher]
create = admin
read = admin
update = admin
delete = admin

[x_foo_*]
create = @
read = @
update = @
delete = @

[.*]
create = admin
read = admin
update = admin
delete = admin
@ -0,0 +1,655 @@
# Copyright 2015 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime

import mock

from oslo_utils import timeutils

from searchlight.search.plugins import images as images_plugin
from searchlight.search.plugins import metadefs as metadefs_plugin
import searchlight.tests.unit.utils as unit_test_utils
import searchlight.tests.utils as test_utils


DATETIME = datetime.datetime(2012, 5, 16, 15, 27, 36, 325355)
DATE1 = timeutils.isotime(DATETIME)

# General
USER1 = '54492ba0-f4df-4e4e-be62-27f4d76b29cf'

TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'
TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8'
TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4'

# Images
UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
UUID2 = 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc'
UUID3 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7'
UUID4 = '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86'

CHECKSUM = '93264c3edf5972c9f1cb309543d38a5c'

# Metadefinitions
NAMESPACE1 = 'namespace1'
NAMESPACE2 = 'namespace2'

PROPERTY1 = 'Property1'
PROPERTY2 = 'Property2'
PROPERTY3 = 'Property3'

OBJECT1 = 'Object1'
OBJECT2 = 'Object2'
OBJECT3 = 'Object3'

RESOURCE_TYPE1 = 'ResourceType1'
RESOURCE_TYPE2 = 'ResourceType2'
RESOURCE_TYPE3 = 'ResourceType3'

TAG1 = 'Tag1'
TAG2 = 'Tag2'
TAG3 = 'Tag3'


class DictObj(object):
    def __init__(self, **entries):
        self.__dict__.update(entries)


def _image_fixture(image_id, **kwargs):
    image_members = kwargs.pop('members', [])
    extra_properties = kwargs.pop('extra_properties', {})

    obj = {
        'id': image_id,
        'name': None,
        'is_public': False,
        'properties': {},
        'checksum': None,
        'owner': None,
        'status': 'queued',
        'tags': [],
        'size': None,
        'virtual_size': None,
        'locations': [],
        'protected': False,
        'disk_format': None,
        'container_format': None,
        'deleted': False,
        'min_ram': None,
        'min_disk': None,
        'created_at': DATETIME,
        'updated_at': DATETIME,
    }
    obj.update(kwargs)
    image = DictObj(**obj)
    image.tags = set(image.tags)
    image.properties = [DictObj(name=k, value=v)
                        for k, v in extra_properties.items()]
    image.members = [DictObj(**m) for m in image_members]
    return image


def _db_namespace_fixture(**kwargs):
    obj = {
        'namespace': None,
        'display_name': None,
        'description': None,
        'visibility': True,
        'protected': False,
        'owner': None
    }
    obj.update(kwargs)
    return DictObj(**obj)


def _db_property_fixture(name, **kwargs):
    obj = {
        'name': name,
        'json_schema': {"type": "string", "title": "title"},
    }
    obj.update(kwargs)
    return DictObj(**obj)


def _db_object_fixture(name, **kwargs):
    obj = {
        'name': name,
        'description': None,
        'json_schema': {},
        'required': '[]',
    }
    obj.update(kwargs)
    return DictObj(**obj)


def _db_resource_type_fixture(name, **kwargs):
    obj = {
        'name': name,
        'protected': False,
    }
    obj.update(kwargs)
    return DictObj(**obj)


def _db_namespace_resource_type_fixture(name, prefix, **kwargs):
    obj = {
        'properties_target': None,
        'prefix': prefix,
        'name': name,
    }
    obj.update(kwargs)
    return obj


def _db_tag_fixture(name, **kwargs):
    obj = {
        'name': name,
    }
    obj.update(**kwargs)
    return DictObj(**obj)


class TestImageLoaderPlugin(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImageLoaderPlugin, self).setUp()
        self.db = unit_test_utils.FakeDB()
        self.db.reset()

        self._create_images()

        self.plugin = images_plugin.ImageIndex()

    def _create_images(self):
        self.simple_image = _image_fixture(
            UUID1, owner=TENANT1, checksum=CHECKSUM, name='simple', size=256,
            is_public=True, status='active'
        )
        self.tagged_image = _image_fixture(
            UUID2, owner=TENANT1, checksum=CHECKSUM, name='tagged', size=512,
            is_public=True, status='active', tags=['ping', 'pong'],
        )
        self.complex_image = _image_fixture(
            UUID3, owner=TENANT2, checksum=CHECKSUM, name='complex', size=256,
            is_public=True, status='active',
            extra_properties={'mysql_version': '5.6', 'hypervisor': 'lxc'}
        )
        self.members_image = _image_fixture(
            UUID3, owner=TENANT2, checksum=CHECKSUM, name='complex', size=256,
            is_public=True, status='active',
            members=[
                {'member': TENANT1, 'deleted': False, 'status': 'accepted'},
                {'member': TENANT2, 'deleted': False, 'status': 'accepted'},
                {'member': TENANT3, 'deleted': True, 'status': 'accepted'},
                {'member': TENANT4, 'deleted': False, 'status': 'pending'},
            ]
        )

        self.images = [self.simple_image, self.tagged_image,
                       self.complex_image, self.members_image]

    def test_index_name(self):
        self.assertEqual('glance', self.plugin.get_index_name())

    def test_document_type(self):
        self.assertEqual('image', self.plugin.get_document_type())

    def test_image_serialize(self):
        expected = {
            'checksum': '93264c3edf5972c9f1cb309543d38a5c',
            'container_format': None,
            'disk_format': None,
            'id': 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d',
            'members': [],
            'min_disk': None,
            'min_ram': None,
            'name': 'simple',
            'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
            'protected': False,
            'size': 256,
            'status': 'active',
            'tags': set([]),
            'virtual_size': None,
            'visibility': 'public',
            'created_at': DATE1,
            'updated_at': DATE1
        }
        serialized = self.plugin.serialize(self.simple_image)
        self.assertEqual(expected, serialized)

    def test_image_with_tags_serialize(self):
        expected = {
            'checksum': '93264c3edf5972c9f1cb309543d38a5c',
            'container_format': None,
            'disk_format': None,
            'id': 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc',
            'members': [],
            'min_disk': None,
            'min_ram': None,
            'name': 'tagged',
            'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
            'protected': False,
            'size': 512,
            'status': 'active',
            'tags': set(['ping', 'pong']),
            'virtual_size': None,
            'visibility': 'public',
            'created_at': DATE1,
            'updated_at': DATE1
        }
        serialized = self.plugin.serialize(self.tagged_image)
        self.assertEqual(expected, serialized)

    def test_image_with_properties_serialize(self):
        expected = {
            'checksum': '93264c3edf5972c9f1cb309543d38a5c',
            'container_format': None,
            'disk_format': None,
            'hypervisor': 'lxc',
            'id': '971ec09a-8067-4bc8-a91f-ae3557f1c4c7',
            'members': [],
            'min_disk': None,
            'min_ram': None,
            'mysql_version': '5.6',
            'name': 'complex',
            'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
            'protected': False,
            'size': 256,
            'status': 'active',
            'tags': set([]),
            'virtual_size': None,
            'visibility': 'public',
            'created_at': DATE1,
            'updated_at': DATE1
        }
        serialized = self.plugin.serialize(self.complex_image)
        self.assertEqual(expected, serialized)

    def test_image_with_members_serialize(self):
        expected = {
            'checksum': '93264c3edf5972c9f1cb309543d38a5c',
            'container_format': None,
            'disk_format': None,
            'id': '971ec09a-8067-4bc8-a91f-ae3557f1c4c7',
            'members': ['6838eb7b-6ded-434a-882c-b344c77fe8df',
                        '2c014f32-55eb-467d-8fcb-4bd706012f81'],
            'min_disk': None,
            'min_ram': None,
            'name': 'complex',
            'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
            'protected': False,
            'size': 256,
            'status': 'active',
            'tags': set([]),
            'virtual_size': None,
            'visibility': 'public',
            'created_at': DATE1,
            'updated_at': DATE1
        }
        serialized = self.plugin.serialize(self.members_image)
        self.assertEqual(expected, serialized)

    def test_setup_data(self):
        with mock.patch.object(self.plugin, 'get_objects',
                               return_value=self.images) as mock_get:
            with mock.patch.object(self.plugin, 'save_documents') as mock_save:
                self.plugin.setup_data()

        mock_get.assert_called_once_with()
        mock_save.assert_called_once_with([
            {
                'status': 'active',
                'tags': set([]),
                'container_format': None,
                'min_ram': None,
                'visibility': 'public',
                'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
                'members': [],
                'min_disk': None,
                'virtual_size': None,
                'id': 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d',
                'size': 256,
                'name': 'simple',
                'checksum': '93264c3edf5972c9f1cb309543d38a5c',
                'disk_format': None,
                'protected': False,
                'created_at': DATE1,
                'updated_at': DATE1
            },
            {
                'status': 'active',
                'tags': set(['pong', 'ping']),
                'container_format': None,
                'min_ram': None,
                'visibility': 'public',
                'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
                'members': [],
                'min_disk': None,
                'virtual_size': None,
                'id': 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc',
                'size': 512,
                'name': 'tagged',
                'checksum': '93264c3edf5972c9f1cb309543d38a5c',
                'disk_format': None,
                'protected': False,
                'created_at': DATE1,
                'updated_at': DATE1
            },
            {
                'status': 'active',
                'tags': set([]),
                'container_format': None,
                'min_ram': None,
                'visibility': 'public',
                'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
                'members': [],
                'min_disk': None,
                'virtual_size': None,
                'id': '971ec09a-8067-4bc8-a91f-ae3557f1c4c7',
                'size': 256,
                'name': 'complex',
                'checksum': '93264c3edf5972c9f1cb309543d38a5c',
                'mysql_version': '5.6',
                'disk_format': None,
                'protected': False,
                'hypervisor': 'lxc',
                'created_at': DATE1,
                'updated_at': DATE1
            },
            {
                'status': 'active',
                'tags': set([]),
                'container_format': None,
                'min_ram': None,
                'visibility': 'public',
                'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81',
                'members': ['6838eb7b-6ded-434a-882c-b344c77fe8df',
                            '2c014f32-55eb-467d-8fcb-4bd706012f81'],
                'min_disk': None,
                'virtual_size': None,
                'id': '971ec09a-8067-4bc8-a91f-ae3557f1c4c7',
                'size': 256,
                'name': 'complex',
                'checksum': '93264c3edf5972c9f1cb309543d38a5c',
                'disk_format': None,
                'protected': False,
                'created_at': DATE1,
                'updated_at': DATE1
            }
        ])


class TestMetadefLoaderPlugin(test_utils.BaseTestCase):
    def setUp(self):
        super(TestMetadefLoaderPlugin, self).setUp()
        self.db = unit_test_utils.FakeDB()
        self.db.reset()

        self._create_resource_types()
        self._create_namespaces()
        self._create_namespace_resource_types()
        self._create_properties()
        self._create_tags()
        self._create_objects()

        self.plugin = metadefs_plugin.MetadefIndex()

    def _create_namespaces(self):
        self.namespaces = [
            _db_namespace_fixture(namespace=NAMESPACE1,
                                  display_name='1',
                                  description='desc1',
                                  visibility='private',
                                  protected=True,
                                  owner=TENANT1),
            _db_namespace_fixture(namespace=NAMESPACE2,
                                  display_name='2',
                                  description='desc2',
                                  visibility='public',
                                  protected=False,
                                  owner=TENANT1),
        ]

    def _create_properties(self):
        self.properties = [
            _db_property_fixture(name=PROPERTY1),
            _db_property_fixture(name=PROPERTY2),
            _db_property_fixture(name=PROPERTY3)
        ]

        self.namespaces[0].properties = [self.properties[0]]
        self.namespaces[1].properties = self.properties[1:]

    def _create_objects(self):
        self.objects = [
            _db_object_fixture(name=OBJECT1,
                               description='desc1',
                               json_schema={'property1': {
                                   'type': 'string',
                                   'default': 'value1',
                                   'enum': ['value1', 'value2']
                               }}),
            _db_object_fixture(name=OBJECT2,
                               description='desc2'),
            _db_object_fixture(name=OBJECT3,
                               description='desc3'),
        ]

        self.namespaces[0].objects = [self.objects[0]]
        self.namespaces[1].objects = self.objects[1:]

    def _create_resource_types(self):
        self.resource_types = [
            _db_resource_type_fixture(name=RESOURCE_TYPE1,
                                      protected=False),
            _db_resource_type_fixture(name=RESOURCE_TYPE2,
                                      protected=False),
            _db_resource_type_fixture(name=RESOURCE_TYPE3,
                                      protected=True),
        ]

    def _create_namespace_resource_types(self):
        self.namespace_resource_types = [
            _db_namespace_resource_type_fixture(
                prefix='p1',
                name=self.resource_types[0].name),
            _db_namespace_resource_type_fixture(
                prefix='p2',
                name=self.resource_types[1].name),
            _db_namespace_resource_type_fixture(
                prefix='p2',
                name=self.resource_types[2].name),
        ]
        self.namespaces[0].resource_types = self.namespace_resource_types[:1]
        self.namespaces[1].resource_types = self.namespace_resource_types[1:]

    def _create_tags(self):
        self.tags = [
            _db_resource_type_fixture(name=TAG1),
            _db_resource_type_fixture(name=TAG2),
            _db_resource_type_fixture(name=TAG3),
        ]
        self.namespaces[0].tags = self.tags[:1]
        self.namespaces[1].tags = self.tags[1:]

    def test_index_name(self):
        self.assertEqual('glance', self.plugin.get_index_name())

    def test_document_type(self):
        self.assertEqual('metadef', self.plugin.get_document_type())

    def test_namespace_serialize(self):
        metadef_namespace = self.namespaces[0]
        expected = {
            'namespace': 'namespace1',
            'display_name': '1',
            'description': 'desc1',
            'visibility': 'private',
            'protected': True,
            'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df'
        }
        serialized = self.plugin.serialize_namespace(metadef_namespace)
        self.assertEqual(expected, serialized)

    def test_object_serialize(self):
        metadef_object = self.objects[0]
        expected = {
            'name': 'Object1',
            'description': 'desc1',
            'properties': [{
                'default': 'value1',
                'enum': ['value1', 'value2'],
                'property': 'property1',
                'type': 'string'
            }]
        }
        serialized = self.plugin.serialize_object(metadef_object)
        self.assertEqual(expected, serialized)

    def test_property_serialize(self):
        metadef_property = self.properties[0]
        expected = {
            'property': 'Property1',
            'type': 'string',
            'title': 'title',
        }
        serialized = self.plugin.serialize_property(
            metadef_property.name, metadef_property.json_schema)
        self.assertEqual(expected, serialized)

    def test_complex_serialize(self):
        metadef_namespace = self.namespaces[0]
        expected = {
            'namespace': 'namespace1',
            'display_name': '1',
            'description': 'desc1',
            'visibility': 'private',
            'protected': True,
            'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
            'objects': [{
                'description': 'desc1',
                'name': 'Object1',
                'properties': [{
                    'default': 'value1',
                    'enum': ['value1', 'value2'],
                    'property': 'property1',
                    'type': 'string'
                }]
            }],
            'resource_types': [{
                'prefix': 'p1',
                'name': 'ResourceType1',
                'properties_target': None
            }],
            'properties': [{
                'property': 'Property1',
                'title': 'title',
                'type': 'string'
            }],
            'tags': [{'name': 'Tag1'}],
        }
        serialized = self.plugin.serialize(metadef_namespace)
        self.assertEqual(expected, serialized)

    def test_setup_data(self):
        with mock.patch.object(self.plugin, 'get_objects',
                               return_value=self.namespaces) as mock_get:
            with mock.patch.object(self.plugin, 'save_documents') as mock_save:
                self.plugin.setup_data()

        mock_get.assert_called_once_with()
        mock_save.assert_called_once_with([
            {
                'display_name': '1',
                'description': 'desc1',
                'objects': [
                    {
                        'name': 'Object1',
                        'description': 'desc1',
                        'properties': [{
                            'default': 'value1',
                            'property': 'property1',
                            'enum': ['value1', 'value2'],
                            'type': 'string'
                        }],
                    }
                ],
                'namespace': 'namespace1',
                'visibility': 'private',
                'protected': True,
                'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
                'properties': [{
                    'property': 'Property1',
                    'type': 'string',
                    'title': 'title'
                }],
                'resource_types': [{
                    'prefix': 'p1',
                    'name': 'ResourceType1',
                    'properties_target': None
                }],
                'tags': [{'name': 'Tag1'}],
            },
            {
                'display_name': '2',
                'description': 'desc2',
                'objects': [
                    {
                        'properties': [],
                        'name': 'Object2',
                        'description': 'desc2'
                    },
                    {
                        'properties': [],
                        'name': 'Object3',
                        'description': 'desc3'
                    }
                ],
                'namespace': 'namespace2',
                'visibility': 'public',
                'protected': False,
                'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df',
                'properties': [
                    {
                        'property': 'Property2',
                        'type': 'string',
                        'title': 'title'
                    },
                    {
                        'property': 'Property3',
                        'type': 'string',
                        'title': 'title'
                    }
                ],
                'resource_types': [
                    {
                        'name': 'ResourceType2',
|
||||
'prefix': 'p2',
|
||||
'properties_target': None,
|
||||
},
|
||||
{
|
||||
'name': 'ResourceType3',
|
||||
'prefix': 'p2',
|
||||
'properties_target': None,
|
||||
}
|
||||
],
|
||||
'tags': [
|
||||
{'name': 'Tag2'},
|
||||
{'name': 'Tag3'},
|
||||
],
|
||||
}
|
||||
])
|
@ -0,0 +1,989 @@
# Copyright 2015 Hewlett-Packard Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_serialization import jsonutils
import webob.exc

from searchlight.api.v1 import search
from searchlight.common import exception
from searchlight.common import utils
import searchlight.gateway
import searchlight.search
from searchlight.tests.unit import base
import searchlight.tests.unit.utils as unit_test_utils
import searchlight.tests.utils as test_utils


def _action_fixture(op_type, data, index=None, doc_type=None, _id=None,
                    **kwargs):
    action = {
        'action': op_type,
        'id': _id,
        'index': index,
        'type': doc_type,
        'data': data,
    }
    if kwargs:
        action.update(kwargs)

    return action


def _image_fixture(op_type, _id=None, index='glance', doc_type='image',
                   data=None, **kwargs):
    image_data = {
        'name': 'image-1',
        'disk_format': 'raw',
    }
    if data is not None:
        image_data.update(data)

    return _action_fixture(op_type, image_data, index, doc_type, _id, **kwargs)


class TestSearchController(base.IsolatedUnitTest):

    def setUp(self):
        super(TestSearchController, self).setUp()
        self.search_controller = search.SearchController()

    def test_search_all(self):
        request = unit_test_utils.get_fake_request()
        self.search_controller.search = mock.Mock(return_value="{}")

        query = {"match_all": {}}
        index = "glance"
        doc_type = "metadef"
        fields = None
        offset = 0
        limit = 10
        self.search_controller.search(
            request, query, index, doc_type, fields, offset, limit)
        self.search_controller.search.assert_called_once_with(
            request, query, index, doc_type, fields, offset, limit)

    def test_search_all_repo(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.search = mock.Mock(return_value="{}")
        query = {"match_all": {}}
        index = "glance"
        doc_type = "metadef"
        fields = []
        offset = 0
        limit = 10
        self.search_controller.search(
            request, query, index, doc_type, fields, offset, limit)
        repo.search.assert_called_once_with(
            index, doc_type, query, fields, offset, limit, True)

    def test_search_forbidden(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.search = mock.Mock(side_effect=exception.Forbidden)

        query = {"match_all": {}}
        index = "glance"
        doc_type = "metadef"
        fields = []
        offset = 0
        limit = 10

        self.assertRaises(
            webob.exc.HTTPForbidden, self.search_controller.search,
            request, query, index, doc_type, fields, offset, limit)

    def test_search_not_found(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.search = mock.Mock(side_effect=exception.NotFound)

        query = {"match_all": {}}
        index = "glance"
        doc_type = "metadef"
        fields = []
        offset = 0
        limit = 10

        self.assertRaises(
            webob.exc.HTTPNotFound, self.search_controller.search, request,
            query, index, doc_type, fields, offset, limit)

    def test_search_duplicate(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.search = mock.Mock(side_effect=exception.Duplicate)

        query = {"match_all": {}}
        index = "glance"
        doc_type = "metadef"
        fields = []
        offset = 0
        limit = 10

        self.assertRaises(
            webob.exc.HTTPConflict, self.search_controller.search, request,
            query, index, doc_type, fields, offset, limit)

    def test_search_internal_server_error(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.search = mock.Mock(side_effect=Exception)

        query = {"match_all": {}}
        index = "glance"
        doc_type = "metadef"
        fields = []
        offset = 0
        limit = 10

        self.assertRaises(
            webob.exc.HTTPInternalServerError, self.search_controller.search,
            request, query, index, doc_type, fields, offset, limit)

    def test_index_complete(self):
        request = unit_test_utils.get_fake_request()
        self.search_controller.index = mock.Mock(return_value="{}")
        actions = [{'action': 'create', 'index': 'myindex', 'id': 10,
                    'type': 'MyTest', 'data': '{"name": "MyName"}'}]
        default_index = 'glance'
        default_type = 'image'

        self.search_controller.index(
            request, actions, default_index, default_type)
        self.search_controller.index.assert_called_once_with(
            request, actions, default_index, default_type)

    def test_index_repo_complete(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.index = mock.Mock(return_value="{}")
        actions = [{'action': 'create', 'index': 'myindex', 'id': 10,
                    'type': 'MyTest', 'data': '{"name": "MyName"}'}]
        default_index = 'glance'
        default_type = 'image'

        self.search_controller.index(
            request, actions, default_index, default_type)
        repo.index.assert_called_once_with(
            default_index, default_type, actions)

    def test_index_repo_minimal(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.index = mock.Mock(return_value="{}")
        actions = [{'action': 'create', 'index': 'myindex', 'id': 10,
                    'type': 'MyTest', 'data': '{"name": "MyName"}'}]

        self.search_controller.index(request, actions)
        repo.index.assert_called_once_with(None, None, actions)

    def test_index_forbidden(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.index = mock.Mock(side_effect=exception.Forbidden)
        actions = [{'action': 'create', 'index': 'myindex', 'id': 10,
                    'type': 'MyTest', 'data': '{"name": "MyName"}'}]

        self.assertRaises(
            webob.exc.HTTPForbidden, self.search_controller.index,
            request, actions)

    def test_index_not_found(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.index = mock.Mock(side_effect=exception.NotFound)
        actions = [{'action': 'create', 'index': 'myindex', 'id': 10,
                    'type': 'MyTest', 'data': '{"name": "MyName"}'}]

        self.assertRaises(
            webob.exc.HTTPNotFound, self.search_controller.index,
            request, actions)

    def test_index_duplicate(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.index = mock.Mock(side_effect=exception.Duplicate)
        actions = [{'action': 'create', 'index': 'myindex', 'id': 10,
                    'type': 'MyTest', 'data': '{"name": "MyName"}'}]

        self.assertRaises(
            webob.exc.HTTPConflict, self.search_controller.index,
            request, actions)

    def test_index_exception(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.index = mock.Mock(side_effect=Exception)
        actions = [{'action': 'create', 'index': 'myindex', 'id': 10,
                    'type': 'MyTest', 'data': '{"name": "MyName"}'}]

        self.assertRaises(
            webob.exc.HTTPInternalServerError, self.search_controller.index,
            request, actions)

    def test_plugins_info(self):
        request = unit_test_utils.get_fake_request()
        self.search_controller.plugins_info = mock.Mock(return_value="{}")
        self.search_controller.plugins_info(request)
        self.search_controller.plugins_info.assert_called_once_with(request)

    def test_plugins_info_repo(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.plugins_info = mock.Mock(return_value="{}")
        self.search_controller.plugins_info(request)
        repo.plugins_info.assert_called_once_with()

    def test_plugins_info_forbidden(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.plugins_info = mock.Mock(side_effect=exception.Forbidden)

        self.assertRaises(
            webob.exc.HTTPForbidden, self.search_controller.plugins_info,
            request)

    def test_plugins_info_not_found(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.plugins_info = mock.Mock(side_effect=exception.NotFound)

        self.assertRaises(webob.exc.HTTPNotFound,
                          self.search_controller.plugins_info, request)

    def test_plugins_info_internal_server_error(self):
        request = unit_test_utils.get_fake_request()
        repo = searchlight.search.CatalogSearchRepo
        repo.plugins_info = mock.Mock(side_effect=Exception)

        self.assertRaises(webob.exc.HTTPInternalServerError,
                          self.search_controller.plugins_info, request)


class TestSearchDeserializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestSearchDeserializer, self).setUp()
        self.deserializer = search.RequestDeserializer(
            utils.get_search_plugins()
        )

    def test_single_index(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': 'glance',
        })

        output = self.deserializer.search(request)
        self.assertEqual(['glance'], output['index'])

    def test_single_doc_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'type': 'image',
        })

        output = self.deserializer.search(request)
        self.assertEqual(['image'], output['doc_type'])

    def test_empty_request(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({})

        output = self.deserializer.search(request)
        self.assertEqual(['glance'], output['index'])
        self.assertEqual(sorted(['image', 'metadef']),
                         sorted(output['doc_type']))

    def test_empty_request_admin(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({})
        request.context.is_admin = True

        output = self.deserializer.search(request)
        self.assertEqual(['glance'], output['index'])
        self.assertEqual(sorted(['image', 'metadef']),
                         sorted(output['doc_type']))

    def test_invalid_index(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': 'invalid',
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_invalid_doc_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'type': 'invalid',
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_forbidden_schema(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'schema': {},
        })

        self.assertRaises(webob.exc.HTTPForbidden, self.deserializer.search,
                          request)

    def test_forbidden_self(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'self': {},
        })

        self.assertRaises(webob.exc.HTTPForbidden, self.deserializer.search,
                          request)

    def test_fields_restriction(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': ['glance'],
            'type': ['metadef'],
            'query': {'match_all': {}},
            'fields': ['description'],
        })

        output = self.deserializer.search(request)
        self.assertEqual(['glance'], output['index'])
        self.assertEqual(['metadef'], output['doc_type'])
        self.assertEqual(['description'], output['fields'])

    def test_highlight_fields(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': ['glance'],
            'type': ['metadef'],
            'query': {'match_all': {}},
            'highlight': {'fields': {'name': {}}}
        })

        output = self.deserializer.search(request)
        self.assertEqual(['glance'], output['index'])
        self.assertEqual(['metadef'], output['doc_type'])
        self.assertEqual({'name': {}}, output['query']['highlight']['fields'])

    def test_invalid_limit(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': ['glance'],
            'type': ['metadef'],
            'query': {'match_all': {}},
            'limit': 'invalid',
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.search,
                          request)

    def test_negative_limit(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': ['glance'],
            'type': ['metadef'],
            'query': {'match_all': {}},
            'limit': -1,
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.search,
                          request)

    def test_invalid_offset(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': ['glance'],
            'type': ['metadef'],
            'query': {'match_all': {}},
            'offset': 'invalid',
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.search,
                          request)

    def test_negative_offset(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': ['glance'],
            'type': ['metadef'],
            'query': {'match_all': {}},
            'offset': -1,
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.search,
                          request)

    def test_limit_and_offset(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'index': ['glance'],
            'type': ['metadef'],
            'query': {'match_all': {}},
            'limit': 1,
            'offset': 2,
        })

        output = self.deserializer.search(request)
        self.assertEqual(['glance'], output['index'])
        self.assertEqual(['metadef'], output['doc_type'])
        self.assertEqual(1, output['limit'])
        self.assertEqual(2, output['offset'])


class TestIndexDeserializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestIndexDeserializer, self).setUp()
        self.deserializer = search.RequestDeserializer(
            utils.get_search_plugins()
        )

    def test_empty_request(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({})

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_empty_actions(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'default_index': 'glance',
            'default_type': 'image',
            'actions': [],
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_missing_actions(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'default_index': 'glance',
            'default_type': 'image',
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_invalid_operation_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('invalid', '1')]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_invalid_default_index(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'default_index': 'invalid',
            'actions': [_image_fixture('create', '1')]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_invalid_default_doc_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'default_type': 'invalid',
            'actions': [_image_fixture('create', '1')]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_empty_operation_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('', '1')]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_missing_operation_type(self):
        action = _image_fixture('', '1')
        action.pop('action')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': 'glance',
                '_op_type': 'index',
                '_source': {'disk_format': 'raw', 'name': 'image-1'},
                '_type': 'image'
            }],
            'default_index': None,
            'default_type': None
        }
        self.assertEqual(expected, output)

    def test_create_single(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('create', '1')]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': 'glance',
                '_op_type': 'create',
                '_source': {'disk_format': 'raw', 'name': 'image-1'},
                '_type': 'image'
            }],
            'default_index': None,
            'default_type': None
        }
        self.assertEqual(expected, output)

    def test_create_multiple(self):
        actions = [
            _image_fixture('create', '1'),
            _image_fixture('create', '2', data={'name': 'image-2'}),
        ]

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': actions,
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [
                {
                    '_id': '1',
                    '_index': 'glance',
                    '_op_type': 'create',
                    '_source': {'disk_format': 'raw', 'name': 'image-1'},
                    '_type': 'image'
                },
                {
                    '_id': '2',
                    '_index': 'glance',
                    '_op_type': 'create',
                    '_source': {'disk_format': 'raw', 'name': 'image-2'},
                    '_type': 'image'
                },
            ],
            'default_index': None,
            'default_type': None
        }
        self.assertEqual(expected, output)

    def test_create_missing_data(self):
        action = _image_fixture('create', '1')
        action.pop('data')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_create_with_default_index(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'default_index': 'glance',
            'actions': [_image_fixture('create', '1', index=None)]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': None,
                '_op_type': 'create',
                '_source': {'disk_format': 'raw', 'name': 'image-1'},
                '_type': 'image'
            }],
            'default_index': 'glance',
            'default_type': None
        }
        self.assertEqual(expected, output)

    def test_create_with_default_doc_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'default_type': 'image',
            'actions': [_image_fixture('create', '1', doc_type=None)]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': 'glance',
                '_op_type': 'create',
                '_source': {'disk_format': 'raw', 'name': 'image-1'},
                '_type': None
            }],
            'default_index': None,
            'default_type': 'image'
        }
        self.assertEqual(expected, output)

    def test_create_with_default_index_and_doc_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'default_index': 'glance',
            'default_type': 'image',
            'actions': [_image_fixture('create', '1', index=None,
                                       doc_type=None)]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': None,
                '_op_type': 'create',
                '_source': {'disk_format': 'raw', 'name': 'image-1'},
                '_type': None
            }],
            'default_index': 'glance',
            'default_type': 'image'
        }
        self.assertEqual(expected, output)

    def test_create_missing_id(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('create')]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': None,
                '_index': 'glance',
                '_op_type': 'create',
                '_source': {'disk_format': 'raw', 'name': 'image-1'},
                '_type': 'image'
            }],
            'default_index': None,
            'default_type': None,
        }
        self.assertEqual(expected, output)

    def test_create_empty_id(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('create', '')]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '',
                '_index': 'glance',
                '_op_type': 'create',
                '_source': {'disk_format': 'raw', 'name': 'image-1'},
                '_type': 'image'
            }],
            'default_index': None,
            'default_type': None
        }
        self.assertEqual(expected, output)

    def test_create_invalid_index(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('create', index='invalid')]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_create_invalid_doc_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('create', doc_type='invalid')]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_create_missing_index(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('create', '1', index=None)]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_create_missing_doc_type(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('create', '1', doc_type=None)]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_update_missing_id(self):
        action = _image_fixture('update')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_update_missing_data(self):
        action = _image_fixture('update', '1')
        action.pop('data')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_update_using_data(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [_image_fixture('update', '1')]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': 'glance',
                '_op_type': 'update',
                '_type': 'image',
                'doc': {'disk_format': 'raw', 'name': 'image-1'}
            }],
            'default_index': None,
            'default_type': None
        }
        self.assertEqual(expected, output)

    def test_update_using_script(self):
        action = _image_fixture('update', '1', script='<sample script>')
        action.pop('data')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': 'glance',
                '_op_type': 'update',
                '_type': 'image',
                'params': {},
                'script': '<sample script>'
            }],
            'default_index': None,
            'default_type': None,
        }
        self.assertEqual(expected, output)

    def test_update_using_script_and_data(self):
        action = _image_fixture('update', '1', script='<sample script>')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': 'glance',
                '_op_type': 'update',
                '_type': 'image',
                'params': {'disk_format': 'raw', 'name': 'image-1'},
                'script': '<sample script>'
            }],
            'default_index': None,
            'default_type': None,
        }
        self.assertEqual(expected, output)

    def test_delete_missing_id(self):
        action = _image_fixture('delete')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action]
        })

        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index,
                          request)

    def test_delete_single(self):
        action = _image_fixture('delete', '1')
        action.pop('data')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action]
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [{
                '_id': '1',
                '_index': 'glance',
                '_op_type': 'delete',
                '_source': {},
                '_type': 'image'
            }],
            'default_index': None,
            'default_type': None
        }
        self.assertEqual(expected, output)

    def test_delete_multiple(self):
        action_1 = _image_fixture('delete', '1')
        action_1.pop('data')
        action_2 = _image_fixture('delete', '2')
        action_2.pop('data')

        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dumps({
            'actions': [action_1, action_2],
        })

        output = self.deserializer.index(request)
        expected = {
            'actions': [
                {
                    '_id': '1',
                    '_index': 'glance',
                    '_op_type': 'delete',
                    '_source': {},
                    '_type': 'image'
                },
                {
                    '_id': '2',
                    '_index': 'glance',
                    '_op_type': 'delete',
                    '_source': {},
                    '_type': 'image'
                },
            ],
            'default_index': None,
            'default_type': None
        }
        self.assertEqual(expected, output)


class TestResponseSerializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestResponseSerializer, self).setUp()
        self.serializer = search.ResponseSerializer()

    def test_plugins_info(self):
        expected = {
            "plugins": [
                {
                    "index": "glance",
                    "type": "image"
                },
                {
                    "index": "glance",
                    "type": "metadef"
                }
            ]
        }

        request = webob.Request.blank('/v0.1/search')
        response = webob.Response(request=request)
        result = {
            "plugins": [
                {
                    "index": "glance",
                    "type": "image"
                },
                {
                    "index": "glance",
                    "type": "metadef"
                }
            ]
        }
        self.serializer.search(response, result)
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

    def test_search(self):
        expected = [{
            'id': '1',
            'name': 'image-1',
            'disk_format': 'raw',
        }]

        request = webob.Request.blank('/v0.1/search')
        response = webob.Response(request=request)
        result = [{
            'id': '1',
            'name': 'image-1',
            'disk_format': 'raw',
        }]
        self.serializer.search(response, result)
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

    def test_index(self):
        expected = {
            'success': '1',
            'failed': '0',
            'errors': [],
        }

        request = webob.Request.blank('/v0.1/index')
        response = webob.Response(request=request)
        result = {
            'success': '1',
            'failed': '0',
            'errors': [],
        }
        self.serializer.index(response, result)
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)
@ -0,0 +1,546 @@
|
|||
# Copyright 2010-2011 OpenStack Foundation
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""Common utilities used in testing"""
|
||||
|
||||
import BaseHTTPServer
|
||||
import errno
|
||||
import functools
|
||||
import os
|
||||
import shlex
|
||||
import shutil
|
||||
import socket
|
||||
import subprocess
|
||||
|
||||
import fixtures
|
||||
from oslo_config import cfg
|
||||
from oslo_serialization import jsonutils
|
||||
from oslo_utils import timeutils
|
||||
from oslotest import moxstubout
|
||||
import six
|
||||
import testtools
|
||||
import webob
|
||||
|
||||
from searchlight.common import config
|
||||
from searchlight.common import exception
|
||||
from searchlight.common import property_utils
|
||||
from searchlight.common import utils
|
||||
from searchlight.common import wsgi
|
||||
from searchlight import context
|
||||
|
||||
CONF = cfg.CONF
|
||||
|
||||
|
||||
class BaseTestCase(testtools.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
super(BaseTestCase, self).setUp()
|
||||
|
||||
# NOTE(bcwaldon): parse_args has to be called to register certain
|
||||
# command-line options - specifically we need config_dir for
|
||||
# the following policy tests
|
||||
config.parse_args(args=[])
|
||||
self.addCleanup(CONF.reset)
|
||||
mox_fixture = self.useFixture(moxstubout.MoxStubout())
|
||||
self.stubs = mox_fixture.stubs
|
||||
self.stubs.Set(exception, '_FATAL_EXCEPTION_FORMAT_ERRORS', True)
|
||||
self.test_dir = self.useFixture(fixtures.TempDir()).path
|
||||
self.conf_dir = os.path.join(self.test_dir, 'etc')
|
||||
utils.safe_mkdirs(self.conf_dir)
|
||||
self.set_policy()
|
||||
|
||||
def set_policy(self):
|
||||
conf_file = "policy.json"
|
||||
self.policy_file = self._copy_data_file(conf_file, self.conf_dir)
|
||||
self.config(policy_file=self.policy_file, group='oslo_policy')
|
||||
|
||||
def set_property_protections(self, use_policies=False):
|
||||
self.unset_property_protections()
|
||||
conf_file = "property-protections.conf"
|
||||
if use_policies:
|
||||
conf_file = "property-protections-policies.conf"
|
||||
self.config(property_protection_rule_format="policies")
|
||||
self.property_file = self._copy_data_file(conf_file, self.test_dir)
|
||||
self.config(property_protection_file=self.property_file)
|
||||
|
||||
def unset_property_protections(self):
|
||||
for section in property_utils.CONFIG.sections():
|
||||
property_utils.CONFIG.remove_section(section)
|
||||
|
||||
def _copy_data_file(self, file_name, dst_dir):
|
||||
src_file_name = os.path.join('searchlight/tests/etc', file_name)
|
||||
shutil.copy(src_file_name, dst_dir)
|
||||
dst_file_name = os.path.join(dst_dir, file_name)
|
||||
return dst_file_name
|
||||
|
||||
def set_property_protection_rules(self, rules):
|
||||
with open(self.property_file, 'w') as f:
|
||||
for rule_key in rules.keys():
|
||||
f.write('[%s]\n' % rule_key)
|
||||
for operation in rules[rule_key].keys():
|
||||
roles_str = ','.join(rules[rule_key][operation])
|
||||
f.write('%s = %s\n' % (operation, roles_str))
|
||||
|
||||
def config(self, **kw):
|
||||
"""
|
||||
Override some configuration values.
|
||||
|
||||
The keyword arguments are the names of configuration options to
|
||||
override and their values.
|
||||
|
||||
If a group argument is supplied, the overrides are applied to
|
||||
the specified configuration option group.
|
||||
|
||||
All overrides are automatically cleared at the end of the current
|
||||
test by the fixtures cleanup process.
|
||||
"""
|
||||
group = kw.pop('group', None)
|
||||
for k, v in six.iteritems(kw):
|
||||
CONF.set_override(k, v, group)
|
||||
|
||||
|
||||
class requires(object):
|
||||
"""Decorator that initiates additional test setup/teardown."""
|
||||
def __init__(self, setup=None, teardown=None):
|
||||
self.setup = setup
|
||||
self.teardown = teardown
|
||||
|
||||
def __call__(self, func):
|
||||
def _runner(*args, **kw):
|
||||
if self.setup:
|
||||
self.setup(args[0])
|
||||
func(*args, **kw)
|
||||
if self.teardown:
|
||||
self.teardown(args[0])
|
||||
_runner.__name__ = func.__name__
|
||||
_runner.__doc__ = func.__doc__
|
||||
return _runner
|
||||
|
||||
|
||||
class depends_on_exe(object):
|
||||
"""Decorator to skip test if an executable is unavailable"""
|
||||
def __init__(self, exe):
|
||||
self.exe = exe
|
||||
|
||||
def __call__(self, func):
|
||||
def _runner(*args, **kw):
|
||||
cmd = 'which %s' % self.exe
|
||||
exitcode, out, err = execute(cmd, raise_error=False)
|
||||
if exitcode != 0:
|
||||
args[0].disabled_message = 'test requires exe: %s' % self.exe
|
||||
args[0].disabled = True
|
||||
func(*args, **kw)
|
||||
_runner.__name__ = func.__name__
|
||||
_runner.__doc__ = func.__doc__
|
||||
return _runner
|
||||
|
||||
|
||||
def skip_if_disabled(func):
|
||||
"""Decorator that skips a test if test case is disabled."""
|
||||
@functools.wraps(func)
|
||||
def wrapped(*a, **kwargs):
|
||||
func.__test__ = False
|
||||
test_obj = a[0]
|
||||
message = getattr(test_obj, 'disabled_message',
|
||||
'Test disabled')
|
||||
if getattr(test_obj, 'disabled', False):
|
||||
test_obj.skipTest(message)
|
||||
func(*a, **kwargs)
|
||||
return wrapped
|
||||
|
||||
|
||||
def fork_exec(cmd,
|
||||
exec_env=None,
|
||||
logfile=None):
|
||||
"""
|
||||
Execute a command using fork/exec.
|
||||
|
||||
This is needed when executing programs that need path
|
||||
searching but cannot have a shell as their parent process, for
|
||||
example: searchlight-api. When searchlight-api starts it sets itself as
|
||||
the parent process for its own process group. Thus the pid that
|
||||
a Popen process would have is not the right pid to use for killing
|
||||
the process group. This patch gives the test env direct access
|
||||
to the actual pid.
|
||||
|
||||
:param cmd: Command to execute as an array of arguments.
|
||||
:param exec_env: A dictionary representing the environment with
|
||||
which to run the command.
|
||||
:param logfile: A path to a file which will hold the stdout/err of
|
||||
the child process.
|
||||
"""
|
||||
env = os.environ.copy()
|
||||
if exec_env is not None:
|
||||
for env_name, env_val in exec_env.items():
|
||||
if callable(env_val):
|
||||
env[env_name] = env_val(env.get(env_name))
|
||||
else:
|
||||
env[env_name] = env_val
|
||||
|
||||
pid = os.fork()
|
||||
if pid == 0:
|
||||
if logfile:
|
||||
fds = [1, 2]
|
||||
with open(logfile, 'r+b') as fptr:
|
||||
for desc in fds:  # redirect stdout/stderr to the logfile
|
||||
try:
|
||||
os.dup2(fptr.fileno(), desc)
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
args = shlex.split(cmd)
|
||||
os.execvpe(args[0], args, env)
|
||||
else:
|
||||
return pid
|
||||
|
||||
|
||||
def wait_for_fork(pid,
|
||||
raise_error=True,
|
||||
expected_exitcode=0):
|
||||
"""
|
||||
Wait for a process to complete
|
||||
|
||||
This function will wait for the given pid to complete. If the
|
||||
exit code does not match that of the expected_exitcode an error
|
||||
is raised.
|
||||
"""
|
||||
|
||||
rc = 0
|
||||
try:
|
||||
(pid, rc) = os.waitpid(pid, 0)
|
||||
rc = os.WEXITSTATUS(rc)
|
||||
if rc != expected_exitcode:
|
||||
raise RuntimeError('The exit code %d is not %d'
|
||||
% (rc, expected_exitcode))
|
||||
except Exception:
|
||||
if raise_error:
|
||||
raise
|
||||
|
||||
return rc
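The fork/wait semantics wrapped by fork_exec and wait_for_fork above can be sketched in isolation. This is a minimal illustrative sketch, not part of this module: the child would normally call os.execvpe() (here it just exits with a known code), and the parent decodes the raw wait status with os.WEXITSTATUS.

```python
# Minimal sketch of the fork/wait pattern used by fork_exec and
# wait_for_fork: the child stands in for an os.execvpe() call and
# exits with a known code; the parent decodes the raw wait status.
import os

pid = os.fork()
if pid == 0:
    # Child process: placeholder for os.execvpe(args[0], args, env)
    os._exit(3)

_, status = os.waitpid(pid, 0)
rc = os.WEXITSTATUS(status)
```

Because the child is its own process, the parent gets back the real pid to wait on, which is the property the docstring above calls out.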
|
||||
|
||||
|
||||
def execute(cmd,
|
||||
raise_error=True,
|
||||
no_venv=False,
|
||||
exec_env=None,
|
||||
expect_exit=True,
|
||||
expected_exitcode=0,
|
||||
context=None):
|
||||
"""
|
||||
Executes a command in a subprocess. Returns a tuple
|
||||
of (exitcode, out, err), where out is the string output
|
||||
from stdout and err is the string output from stderr when
|
||||
executing the command.
|
||||
|
||||
:param cmd: Command string to execute
|
||||
:param raise_error: If returncode is not 0 (success), then
|
||||
raise a RuntimeError. (Default: True)
|
||||
:param no_venv: Disable the virtual environment
|
||||
:param exec_env: Optional dictionary of additional environment
|
||||
variables; values may be callables, which will
|
||||
be passed the current value of the named
|
||||
environment variable
|
||||
:param expect_exit: Optional flag true iff timely exit is expected
|
||||
:param expected_exitcode: expected exitcode from the launcher
|
||||
:param context: additional context for error message
|
||||
"""
|
||||
|
||||
env = os.environ.copy()
|
||||
if exec_env is not None:
|
||||
for env_name, env_val in exec_env.items():
|
||||
if callable(env_val):
|
||||
env[env_name] = env_val(env.get(env_name))
|
||||
else:
|
||||
env[env_name] = env_val
|
||||
|
||||
# If we're asked to omit the virtualenv, and if one is set up,
|
||||
# restore the various environment variables
|
||||
if no_venv and 'VIRTUAL_ENV' in env:
|
||||
# Clip off the first element of PATH
|
||||
env['PATH'] = env['PATH'].split(os.pathsep, 1)[-1]
|
||||
del env['VIRTUAL_ENV']
|
||||
|
||||
# Make sure that we use the programs in the
|
||||
# current source directory's bin/ directory.
|
||||
path_ext = [os.path.join(os.getcwd(), 'bin')]
|
||||
|
||||
# Also jack in the path cmd comes from, if it's absolute
|
||||
args = shlex.split(cmd)
|
||||
executable = args[0]
|
||||
if os.path.isabs(executable):
|
||||
path_ext.append(os.path.dirname(executable))
|
||||
|
||||
env['PATH'] = ':'.join(path_ext) + ':' + env['PATH']
|
||||
process = subprocess.Popen(args,
|
||||
stdin=subprocess.PIPE,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE,
|
||||
env=env)
|
||||
if expect_exit:
|
||||
result = process.communicate()
|
||||
(out, err) = result
|
||||
exitcode = process.returncode
|
||||
else:
|
||||
out = ''
|
||||
err = ''
|
||||
exitcode = 0
|
||||
|
||||
if exitcode != expected_exitcode and raise_error:
|
||||
msg = ("Command %(cmd)s did not succeed. Returned an exit "
|
||||
"code of %(exitcode)d."
|
||||
"\n\nSTDOUT: %(out)s"
|
||||
"\n\nSTDERR: %(err)s" % {'cmd': cmd, 'exitcode': exitcode,
|
||||
'out': out, 'err': err})
|
||||
if context:
|
||||
msg += "\n\nCONTEXT: %s" % context
|
||||
raise RuntimeError(msg)
|
||||
return exitcode, out, err
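The core of execute() is a plain Popen round-trip. A self-contained sketch of the success path (assuming a POSIX `echo` is available):

```python
# Sketch of execute()'s success path: run a command with all three
# standard streams piped, capture stdout/stderr, and pair the exit
# code with both outputs.
import subprocess

process = subprocess.Popen(['echo', 'hello'],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
out, err = process.communicate()
exitcode = process.returncode
```

The helper above layers PATH manipulation, virtualenv stripping, and error reporting on top of this basic pattern.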
|
||||
|
||||
|
||||
def find_executable(cmdname):
|
||||
"""
|
||||
Searches the path for a given cmdname. Returns an absolute
|
||||
filename if an executable with the given name exists in the path,
|
||||
or None if one does not.
|
||||
|
||||
:param cmdname: The bare name of the executable to search for
|
||||
"""
|
||||
|
||||
# Keep an eye out for the possibility of an absolute pathname
|
||||
if os.path.isabs(cmdname):
|
||||
return cmdname
|
||||
|
||||
# Get a list of the directories to search
|
||||
path = ([os.path.join(os.getcwd(), 'bin')] +
|
||||
os.environ['PATH'].split(os.pathsep))
|
||||
|
||||
# Search through each in turn
|
||||
for elem in path:
|
||||
full_path = os.path.join(elem, cmdname)
|
||||
if os.access(full_path, os.X_OK):
|
||||
return full_path
|
||||
|
||||
# No dice...
|
||||
return None
|
||||
|
||||
|
||||
def get_unused_port():
|
||||
"""
|
||||
Returns an unused port on localhost.
|
||||
"""
|
||||
port, s = get_unused_port_and_socket()
|
||||
s.close()
|
||||
return port
|
||||
|
||||
|
||||
def get_unused_port_and_socket():
|
||||
"""
|
||||
Returns an unused port on localhost and the open socket
|
||||
from which it was created.
|
||||
"""
|
||||
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
s.bind(('localhost', 0))
|
||||
addr, port = s.getsockname()
|
||||
return (port, s)
|
||||
|
||||
|
||||
def xattr_writes_supported(path):
|
||||
"""
|
||||
Returns True if we can write a file to the supplied
|
||||
path and subsequently write an xattr to that file.
|
||||
"""
|
||||
try:
|
||||
import xattr
|
||||
except ImportError:
|
||||
return False
|
||||
|
||||
def set_xattr(path, key, value):
|
||||
xattr.setxattr(path, "user.%s" % key, str(value))
|
||||
|
||||
# We do a quick attempt to write a user xattr to a temporary file
|
||||
# to check that the filesystem is even enabled to support xattrs
|
||||
fake_filepath = os.path.join(path, 'testing-checkme')
|
||||
result = True
|
||||
with open(fake_filepath, 'wb') as fake_file:
|
||||
fake_file.write("XXX")
|
||||
fake_file.flush()
|
||||
try:
|
||||
set_xattr(fake_filepath, 'hits', '1')
|
||||
except IOError as e:
|
||||
if e.errno == errno.EOPNOTSUPP:
|
||||
result = False
|
||||
else:
|
||||
# Cleanup after ourselves...
|
||||
if os.path.exists(fake_filepath):
|
||||
os.unlink(fake_filepath)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def minimal_headers(name, public=True):
|
||||
headers = {
|
||||
'Content-Type': 'application/octet-stream',
|
||||
'X-Image-Meta-Name': name,
|
||||
'X-Image-Meta-disk_format': 'raw',
|
||||
'X-Image-Meta-container_format': 'ovf',
|
||||
}
|
||||
if public:
|
||||
headers['X-Image-Meta-Is-Public'] = 'True'
|
||||
return headers
|
||||
|
||||
|
||||
def minimal_add_command(port, name, suffix='', public=True):
|
||||
visibility = 'is_public=True' if public else ''
|
||||
return ("bin/searchlight.--port=%d add %s"
|
||||
" disk_format=raw container_format=ovf"
|
||||
" name=%s %s" % (port, visibility, name, suffix))
|
||||
|
||||
|
||||
def start_http_server(image_id, image_data):
|
||||
def _get_http_handler_class(fixture):
|
||||
class StaticHTTPRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
|
||||
def do_GET(self):
|
||||
self.send_response(200)
|
||||
self.send_header('Content-Length', str(len(fixture)))
|
||||
self.end_headers()
|
||||
self.wfile.write(fixture)
|
||||
return
|
||||
|
||||
def do_HEAD(self):
|
||||
self.send_response(200)
|
||||
self.send_header('Content-Length', str(len(fixture)))
|
||||
self.end_headers()
|
||||
return
|
||||
|
||||
def log_message(self, *args, **kwargs):
|
||||
# Override this method to prevent debug output from going
|
||||
# to stderr during testing
|
||||
return
|
||||
|
||||
return StaticHTTPRequestHandler
|
||||
|
||||
server_address = ('127.0.0.1', 0)
|
||||
handler_class = _get_http_handler_class(image_data)
|
||||
httpd = BaseHTTPServer.HTTPServer(server_address, handler_class)
|
||||
port = httpd.socket.getsockname()[1]
|
||||
|
||||
pid = os.fork()
|
||||
if pid == 0:
|
||||
httpd.serve_forever()
|
||||
else:
|
||||
return pid, port
|
||||
|
||||
|
||||
class FakeAuthMiddleware(wsgi.Middleware):
|
||||
|
||||
def __init__(self, app, is_admin=False):
|
||||
super(FakeAuthMiddleware, self).__init__(app)
|
||||
self.is_admin = is_admin
|
||||
|
||||
def process_request(self, req):
|
||||
auth_token = req.headers.get('X-Auth-Token')
|
||||
user = None
|
||||
tenant = None
|
||||
roles = []
|
||||
if auth_token:
|
||||
user, tenant, role = auth_token.split(':')
|
||||
if tenant.lower() == 'none':
|
||||
tenant = None
|
||||
roles = [role]
|
||||
req.headers['X-User-Id'] = user
|
||||
req.headers['X-Tenant-Id'] = tenant
|
||||
req.headers['X-Roles'] = role
|
||||
req.headers['X-Identity-Status'] = 'Confirmed'
|
||||
kwargs = {
|
||||
'user': user,
|
||||
'tenant': tenant,
|
||||
'roles': roles,
|
||||
'is_admin': self.is_admin,
|
||||
'auth_token': auth_token,
|
||||
}
|
||||
|
||||
req.context = context.RequestContext(**kwargs)
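FakeAuthMiddleware expects fake tokens of the form 'user:tenant:role'. A hypothetical sketch of that parsing (the token value is illustrative):

```python
# Illustration of the fake X-Auth-Token format consumed by
# FakeAuthMiddleware: 'user:tenant:role', where a tenant of the
# literal string 'none' (any case) maps to no tenant at all.
auth_token = 'user1:none:admin'  # hypothetical token value
user, tenant, role = auth_token.split(':')
if tenant.lower() == 'none':
    tenant = None
roles = [role]
```

Tests drive authenticated requests simply by setting such a header, with no real Keystone involved.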
|
||||
|
||||
|
||||
class FakeHTTPResponse(object):
|
||||
def __init__(self, status=200, headers=None, data=None, *args, **kwargs):
|
||||
data = data or 'I am a teapot, short and stout\n'
|
||||
self.data = six.StringIO(data)
|
||||
self.read = self.data.read
|
||||
self.status = status
|
||||
self.headers = headers or {'content-length': len(data)}
|
||||
|
||||
def getheader(self, name, default=None):
|
||||
return self.headers.get(name.lower(), default)
|
||||
|
||||
def getheaders(self):
|
||||
return self.headers or {}
|
||||
|
||||
def read(self, amt):
|
||||
return self.data.read(amt)
|
||||
|
||||
|
||||
class Httplib2WsgiAdapter(object):
|
||||
def __init__(self, app):
|
||||
self.app = app
|
||||
|
||||
def request(self, uri, method="GET", body=None, headers=None):
|
||||
req = webob.Request.blank(uri, method=method, headers=headers)
|
||||
req.body = body
|
||||
resp = req.get_response(self.app)
|
||||
return Httplib2WebobResponse(resp), resp.body
|
||||
|
||||
|
||||
class Httplib2WebobResponse(object):
|
||||
def __init__(self, webob_resp):
|
||||
self.webob_resp = webob_resp
|
||||
|
||||
@property
|
||||
def status(self):
|
||||
return self.webob_resp.status_code
|
||||
|
||||
def __getitem__(self, key):
|
||||
return self.webob_resp.headers[key]
|
||||
|
||||
def get(self, key):
|
||||
return self.webob_resp.headers[key]
|
||||
|
||||
@property
|
||||
def allow(self):
|
||||
return self.webob_resp.allow
|
||||
|
||||
@allow.setter
|
||||
def allow(self, allowed):
|
||||
if type(allowed) is not str:
|
||||
raise TypeError('Allow header should be a str')
|
||||
|
||||
self.webob_resp.allow = allowed
|
||||
|
||||
|
||||
class HttplibWsgiAdapter(object):
|
||||
def __init__(self, app):
|
||||
self.app = app
|
||||
self.req = None
|
||||
|
||||
def request(self, method, url, body=None, headers=None):
|
||||
if headers is None:
|
||||
headers = {}
|
||||
self.req = webob.Request.blank(url, method=method, headers=headers)
|
||||
self.req.body = body
|
||||
|
||||
def getresponse(self):
|
||||
response = self.req.get_response(self.app)
|
||||
return FakeHTTPResponse(response.status_code, response.headers,
|
||||
response.body)
|
|
@ -0,0 +1,21 @@
|
|||
-----BEGIN CERTIFICATE-----
|
||||
MIIDiTCCAnGgAwIBAgIJAMj+Lfpqc9lLMA0GCSqGSIb3DQEBCwUAMFsxCzAJBgNV
|
||||
BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMRIwEAYDVQQKDAlPcGVuU3RhY2sx
|
||||
DzANBgNVBAsMBkdsYW5jZTESMBAGA1UEAwwJR2xhbmNlIENBMB4XDTE1MDEzMTA1
|
||||
MzAyNloXDTI1MDEyODA1MzAyNlowWzELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNv
|
||||
bWUtU3RhdGUxEjAQBgNVBAoMCU9wZW5TdGFjazEPMA0GA1UECwwGR2xhbmNlMRIw
|
||||
EAYDVQQDDAlHbGFuY2UgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
|
||||
AQDcW4cRtw96/ZYsx3UB1jWWT0pAlsMQ03En7dueh9o4UZYChY2NMqTJ3gVqy1vf
|
||||
4wyRU1ROb/N5L4KdQiJARH/ARbV+qrWoRvkcWBfg9w/4uZ9ZFhCBbaa2cAtTIGzV
|
||||
ta6HP9UPeyfXrS+jgjqU2QN3bcc0ZCMAiQbtW7Vpw8RNr0NvTJDaSCzmpGQ7TQtB
|
||||
0jXm1nSG7FZUbojUCYB6TBGd01Cg8GzAai3ngXDq6foVJEwfmaV2Zapb0A4FLquX
|
||||
OzebskY5EL/okQGPofSRCu/ar+HV4HN3+PgIIrfa8RhDDdlv6qE1iEuS6isSH1s+
|
||||
7BA2ZKfzT5t8G/8lSjKa/r2pAgMBAAGjUDBOMB0GA1UdDgQWBBT3M/WuigtS7JYZ
|
||||
QD0XJEDD8JSZrTAfBgNVHSMEGDAWgBT3M/WuigtS7JYZQD0XJEDD8JSZrTAMBgNV
|
||||
HRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQCWOhC9kBZAJalQhAeNGIiiJ2bV
|
||||
HpvzSCEXSEAdh3A0XDK1KxoMHy1LhNGYrMmN2a+2O3SoX0FLB4p9zOifq4ACwaMD
|
||||
CjQeB/whsfPt5s0gV3mGMCR+V2b8r5H/30KRbIzQGXmy+/r6Wfe012jcVVXsQawW
|
||||
Omd4d+Bduf5iiL1OCKEMepqjQLu7Yg41ucRpUewBA+A9hoKp7jpwSnzSALX7FWEQ
|
||||
TBJtJ9jEnZl36S81eZJvOXSzeptHyomSAt8eGFCVuPB0dZCXuBNLu4Gsn+dIhfyj
|
||||
NwK4noYZXMndPwGy92KDhjxVnHzd9HwImgr6atmWhPPz5hm50BrA7sv06Nto
|
||||
-----END CERTIFICATE-----
|
|
@ -0,0 +1,28 @@
|
|||
-----BEGIN PRIVATE KEY-----
|
||||
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDcW4cRtw96/ZYs
|
||||
x3UB1jWWT0pAlsMQ03En7dueh9o4UZYChY2NMqTJ3gVqy1vf4wyRU1ROb/N5L4Kd
|
||||
QiJARH/ARbV+qrWoRvkcWBfg9w/4uZ9ZFhCBbaa2cAtTIGzVta6HP9UPeyfXrS+j
|
||||
gjqU2QN3bcc0ZCMAiQbtW7Vpw8RNr0NvTJDaSCzmpGQ7TQtB0jXm1nSG7FZUbojU
|
||||
CYB6TBGd01Cg8GzAai3ngXDq6foVJEwfmaV2Zapb0A4FLquXOzebskY5EL/okQGP
|
||||
ofSRCu/ar+HV4HN3+PgIIrfa8RhDDdlv6qE1iEuS6isSH1s+7BA2ZKfzT5t8G/8l
|
||||
SjKa/r2pAgMBAAECggEABeoS+v+906BAypzj4BO+xnUEWi1xuN7j951juqKM0dwm
|
||||
uZSaEwMb9ysVXCNvKNgwOypQZfaNQ2BqEgx3XOA5yZBVabvtOkIFZ6RZp7kZ3aQl
|
||||
yb9U3BR0WAsz0pxZL3c74vdsoYi9rgVA9ROGvP4CIM96fEZ/xgDnhbFjch5GA4u2
|
||||
8XQ/kJUwLl0Uzxyo10sqGu3hgMwpM8lpaRW6d5EQ628rJEtA/Wmy5GpyCUhTD/5B
|
||||
jE1IzhjT4T5LqiPjA/Dsmz4Sa0+MyKRmA+zfSH6uS4szSaj53GVMHh4K+Xg2/EeD
|
||||
6I3hGOtzZuYp5HBHE6J8VgeuErBQf32CCglHqN/dLQKBgQD4XaXa+AZtB10cRUV4
|
||||
LZDB1AePJLloBhKikeTboZyhZEwbNuvw3JSQBAfUdpx3+8Na3Po1Tfy3DlZaVCU2
|
||||
0PWh2UYrtwA3dymp8GCuSvnsLz1kNGv0Q7WEYaepyKRO8qHCjrTDUFuGVztU+H6O
|
||||
OWPHRd4DnyF3pKN7K4j6pU76HwKBgQDjIXylwPb6TD9ln13ijJ06t9l1E13dSS0B
|
||||
+9QU3f4abjMmW0K7icrNdmsjHafWLGXP2dxB0k4sx448buH+L8uLjC8G80wLQMSJ
|
||||
NAKpxIsmkOMpPUl80ks8bmzsqztmtql6kAgSwSW84vftJyNrFnp2kC2O4ZYGwz1+
|
||||
8rj3nBrfNwKBgQDrCJxCyoIyPUy0yy0BnIUnmAILSSKXuV97LvtXiOnTpTmMa339
|
||||
8pA4dUf/nLtXpA3r98BkH0gu50d6tbR92mMI5bdM+SIgWwk3g33KkrNN+iproFwk
|
||||
zMqC23Mx7ejnuR6xIiEXz/y89eH0+C+zYcX1tz1xSe7+7PO0RK+dGkDR2wKBgHGR
|
||||
L+MtPhDfCSAF9IqvpnpSrR+2BEv+J8wDIAMjEMgka9z06sQc3NOpL17KmD4lyu6H
|
||||
z3L19fK8ASnEg6l2On9XI7iE9HP3+Y1k/SPny3AIKB1ZsKICAG6CBGK+J6BvGwTW
|
||||
ecLu4rC0iCUDWdlUzvzzkGQN9dcBzoDoWoYsft83AoGAAh4MyrM32gwlUgQD8/jX
|
||||
8rsJlKnme0qMjX4A66caBomjztsH2Qt6cH7DIHx+hU75pnDAuEmR9xqnX7wFTR9Y
|
||||
0j/XqTVsTjDINRLgMkrg7wIqKtWdicibBx1ER9LzwfNwht/ZFeMLdeUUUYMNv3cg
|
||||
cMSLxlxgFaUggYj/dsF6ypQ=
|
||||
-----END PRIVATE KEY-----
|
|
@ -0,0 +1,92 @@
|
|||
# > openssl x509 -in searchlight/tests/var/certificate.crt -noout -text
|
||||
# Certificate:
|
||||
# Data:
|
||||
# Version: 1 (0x0)
|
||||
# Serial Number: 1 (0x1)
|
||||
# Signature Algorithm: sha1WithRSAEncryption
|
||||
# Issuer: C=AU, ST=Some-State, O=OpenStack, OU=Glance, CN=Glance CA
|
||||
# Validity
|
||||
# Not Before: Feb 2 20:22:13 2015 GMT
|
||||
# Not After : Jan 31 20:22:13 2024 GMT
|
||||
# Subject: C=AU, ST=Some-State, O=OpenStack, OU=Glance, CN=127.0.0.1
|
||||
# Subject Public Key Info:
|
||||
# Public Key Algorithm: rsaEncryption
|
||||
# RSA Public Key: (4096 bit)
|
||||
# Modulus (4096 bit):
|
||||
# 00:9f:44:13:51:de:e9:5a:f7:ac:33:2a:1a:4c:91:
|
||||
# a1:73:bc:f3:a6:d3:e6:59:ae:e8:e2:34:68:3e:f4:
|
||||
# 40:c1:a1:1a:65:9a:a3:67:e9:2c:b9:79:9c:00:b1:
|
||||
# 7c:c1:e6:9e:de:47:bf:f1:cb:f2:73:d4:c3:62:fe:
|
||||
# 82:90:6f:b4:75:ca:7e:56:8f:99:3d:06:51:3c:40:
|
||||
# f4:ff:74:97:4f:0d:d2:e6:66:76:8d:97:bf:89:ce:
|
||||
# fe:b2:d7:89:71:f2:a0:d9:f5:26:7c:1a:7a:bf:2b:
|
||||
# 8f:72:80:e7:1f:4d:4a:40:a3:b9:9e:33:f6:55:e0:
|
||||
# 40:2b:1e:49:e4:8c:71:9d:11:32:cf:21:41:e1:13:
|
||||
# 28:c6:d6:f6:e0:b3:26:10:6d:5b:63:1d:c3:ee:d0:
|
||||
# c4:66:63:38:89:6b:8f:2a:c2:bd:4f:e4:bc:03:8f:
|
||||
# a2:f2:5c:1d:73:11:9c:7b:93:3d:d6:a3:d1:2d:cd:
|
||||
# 64:23:24:bc:65:3c:71:20:28:60:a0:ea:fe:77:0e:
|
||||
# 1d:95:36:76:ad:e7:2f:1c:27:62:55:e3:9d:11:c1:
|
||||
# fb:43:3e:e5:21:ac:fd:0e:7e:3d:c9:44:d2:bd:6f:
|
||||
# 89:7e:0f:cb:88:54:57:fd:8d:21:c8:34:e1:47:01:
|
||||
# 28:0f:45:a1:7e:60:1a:9c:4c:0c:b8:c1:37:2d:46:
|
||||
# ab:18:9e:ca:49:d3:77:b7:92:3a:d2:7f:ca:d5:02:
|
||||
# f1:75:81:66:39:51:aa:bc:d7:f0:91:23:69:e8:71:
|
||||
# ae:44:76:5e:87:54:eb:72:fc:ac:fd:60:22:e0:6a:
|
||||
# e4:ad:37:b7:f6:e5:24:b4:95:2c:26:0e:75:a0:e9:
|
||||
# ed:57:be:37:42:64:1f:02:49:0c:bd:5d:74:6d:e6:
|
||||
# f2:da:5c:54:82:fa:fc:ff:3a:e4:1a:7a:a9:3c:3d:
|
||||
# ee:b5:df:09:0c:69:c3:51:92:67:80:71:9b:10:8b:
|
||||
# 20:ff:a2:5e:c5:f2:86:a0:06:65:1c:42:f9:91:24:
|
||||
# 54:29:ed:7e:ec:db:4c:7b:54:ee:b1:25:1b:38:53:
|
||||
# ae:01:b6:c5:93:1e:a3:4d:1b:e8:73:47:50:57:e8:
|
||||
# ec:a0:80:53:b1:34:74:37:9a:c1:8c:14:64:2e:16:
|
||||
# dd:a1:2e:d3:45:3e:2c:46:62:20:2a:93:7a:92:4c:
|
||||
# b2:cc:64:47:ad:63:32:0b:68:0c:24:98:20:83:08:
|
||||
# 35:74:a7:68:7a:ef:d6:84:07:d1:5e:d7:c0:6c:3f:
|
||||
# a7:4a:78:62:a8:70:75:37:fb:ce:1f:09:1e:7c:11:
|
||||
# 35:cc:b3:5a:a3:cc:3f:35:c9:ee:24:6f:63:f8:54:
|
||||
# 6f:7c:5b:b4:76:3d:f2:81:6d:ad:64:66:10:d0:c4:
|
||||
# 0b:2c:2f
|
||||
# Exponent: 65537 (0x10001)
|
||||
# Signature Algorithm: sha1WithRSAEncryption
|
||||
# 5f:e8:a8:93:20:6c:0f:12:90:a6:e2:64:21:ed:63:0e:8c:e0:
|
||||
# 0f:d5:04:13:4d:2a:e9:a5:91:b7:e4:51:94:bd:0a:70:4b:94:
|
||||
# c7:1c:94:ed:d7:64:95:07:6b:a1:4a:bc:0b:53:b5:1a:7e:f1:
|
||||
# 9c:12:59:24:5f:36:72:34:ca:33:ee:28:46:fd:21:e6:52:19:
|
||||
# 0c:3d:94:6b:bd:cb:76:a1:45:7f:30:7b:71:f1:84:b6:3c:e0:
|
||||
# ac:af:13:81:9c:0e:6e:3c:9b:89:19:95:de:8e:9c:ef:70:ac:
|
||||
# 07:ae:74:42:47:35:50:88:36:ec:32:1a:55:24:08:f2:44:57:
|
||||
# 67:fe:0a:bb:6b:a7:bd:bc:af:bf:2a:e4:dd:53:84:6b:de:1d:
|
||||
# 2a:28:21:38:06:7a:5b:d8:83:15:65:31:6d:61:67:00:9e:1a:
|
||||
# 61:85:15:a2:4c:9a:eb:6d:59:8e:34:ac:2c:d5:24:4e:00:ff:
|
||||
# 30:4d:a3:d5:80:63:17:52:65:ac:7f:f4:0a:8e:56:a4:97:51:
|
||||
# 39:81:ae:e8:cb:52:09:b3:47:b4:fd:1b:e2:04:f9:f2:76:e3:
|
||||
# 63:ef:90:aa:54:98:96:05:05:a9:91:76:18:ed:5d:9e:6e:88:
|
||||
# 50:9a:f7:2c:ce:5e:54:ba:15:ec:62:ff:5d:be:af:35:03:b1:
|
||||
# 3f:32:3e:0e
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIEKjCCAxICAQEwDQYJKoZIhvcNAQEFBQAwWzELMAkGA1UEBhMCQVUxEzARBgNV
|
||||
BAgMClNvbWUtU3RhdGUxEjAQBgNVBAoMCU9wZW5TdGFjazEPMA0GA1UECwwGR2xh
|
||||
bmNlMRIwEAYDVQQDDAlHbGFuY2UgQ0EwHhcNMTUwMjAyMjAyMjEzWhcNMjQwMTMx
|
||||
MjAyMjEzWjBbMQswCQYDVQQGEwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTESMBAG
|
||||
A1UEChMJT3BlblN0YWNrMQ8wDQYDVQQLEwZHbGFuY2UxEjAQBgNVBAMTCTEyNy4w
|
||||
LjAuMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAJ9EE1He6Vr3rDMq
|
||||
GkyRoXO886bT5lmu6OI0aD70QMGhGmWao2fpLLl5nACxfMHmnt5Hv/HL8nPUw2L+
|
||||
gpBvtHXKflaPmT0GUTxA9P90l08N0uZmdo2Xv4nO/rLXiXHyoNn1Jnwaer8rj3KA
|
||||
5x9NSkCjuZ4z9lXgQCseSeSMcZ0RMs8hQeETKMbW9uCzJhBtW2Mdw+7QxGZjOIlr
|
||||
jyrCvU/kvAOPovJcHXMRnHuTPdaj0S3NZCMkvGU8cSAoYKDq/ncOHZU2dq3nLxwn
|
||||
YlXjnRHB+0M+5SGs/Q5+PclE0r1viX4Py4hUV/2NIcg04UcBKA9FoX5gGpxMDLjB
|
||||
Ny1GqxieyknTd7eSOtJ/ytUC8XWBZjlRqrzX8JEjaehxrkR2XodU63L8rP1gIuBq
|
||||
5K03t/blJLSVLCYOdaDp7Ve+N0JkHwJJDL1ddG3m8tpcVIL6/P865Bp6qTw97rXf
|
||||
CQxpw1GSZ4BxmxCLIP+iXsXyhqAGZRxC+ZEkVCntfuzbTHtU7rElGzhTrgG2xZMe
|
||||
o00b6HNHUFfo7KCAU7E0dDeawYwUZC4W3aEu00U+LEZiICqTepJMssxkR61jMgto
|
||||
DCSYIIMINXSnaHrv1oQH0V7XwGw/p0p4YqhwdTf7zh8JHnwRNcyzWqPMPzXJ7iRv
|
||||
Y/hUb3xbtHY98oFtrWRmENDECywvAgMBAAEwDQYJKoZIhvcNAQEFBQADggEBAF/o
|
||||
qJMgbA8SkKbiZCHtYw6M4A/VBBNNKumlkbfkUZS9CnBLlMcclO3XZJUHa6FKvAtT
|
||||
tRp+8ZwSWSRfNnI0yjPuKEb9IeZSGQw9lGu9y3ahRX8we3HxhLY84KyvE4GcDm48
|
||||
m4kZld6OnO9wrAeudEJHNVCINuwyGlUkCPJEV2f+Crtrp728r78q5N1ThGveHSoo
|
||||
ITgGelvYgxVlMW1hZwCeGmGFFaJMmuttWY40rCzVJE4A/zBNo9WAYxdSZax/9AqO
|
||||
VqSXUTmBrujLUgmzR7T9G+IE+fJ242PvkKpUmJYFBamRdhjtXZ5uiFCa9yzOXlS6
|
||||
Fexi/12+rzUDsT8yPg4=
|
||||
-----END CERTIFICATE-----
|
|
@ -0,0 +1,51 @@
|
|||
-----BEGIN RSA PRIVATE KEY-----
|
||||
MIIJKAIBAAKCAgEAn0QTUd7pWvesMyoaTJGhc7zzptPmWa7o4jRoPvRAwaEaZZqj
|
||||
Z+ksuXmcALF8weae3ke/8cvyc9TDYv6CkG+0dcp+Vo+ZPQZRPED0/3SXTw3S5mZ2
|
||||
jZe/ic7+steJcfKg2fUmfBp6vyuPcoDnH01KQKO5njP2VeBAKx5J5IxxnREyzyFB
|
||||
4RMoxtb24LMmEG1bYx3D7tDEZmM4iWuPKsK9T+S8A4+i8lwdcxGce5M91qPRLc1k
|
||||
IyS8ZTxxIChgoOr+dw4dlTZ2recvHCdiVeOdEcH7Qz7lIaz9Dn49yUTSvW+Jfg/L
|
||||
iFRX/Y0hyDThRwEoD0WhfmAanEwMuME3LUarGJ7KSdN3t5I60n/K1QLxdYFmOVGq
|
||||
vNfwkSNp6HGuRHZeh1Trcvys/WAi4GrkrTe39uUktJUsJg51oOntV743QmQfAkkM
|
||||
vV10beby2lxUgvr8/zrkGnqpPD3utd8JDGnDUZJngHGbEIsg/6JexfKGoAZlHEL5
|
||||
kSRUKe1+7NtMe1TusSUbOFOuAbbFkx6jTRvoc0dQV+jsoIBTsTR0N5rBjBRkLhbd
|
||||
oS7TRT4sRmIgKpN6kkyyzGRHrWMyC2gMJJgggwg1dKdoeu/WhAfRXtfAbD+nSnhi
|
||||
qHB1N/vOHwkefBE1zLNao8w/NcnuJG9j+FRvfFu0dj3ygW2tZGYQ0MQLLC8CAwEA
|
||||
AQKCAgBL4IvvymqUu0CgE6P57LvlvxS522R4P7uV4W/05jtfxJgl5fmJzO5Q4x4u
|
||||
umB8pJn1vms1EHxPMQNxS1364C0ynSl5pepUx4i2UyAmAG8B680ZlaFPrgdD6Ykw
|
||||
vT0vO2/kx0XxhFAMef1aiQ0TvaftidMqCwmGOlN393Mu3rZWJVZ2lhqj15Pqv4lY
|
||||
3iD5XJBYdVrekTmwqf7KgaLwtVyqDoiAjdMM8lPZeX965FhmxR8oWh0mHR9gf95J
|
||||
etMmdy6Km//+EbeS/HxWRnE0CD/RsQA7NmDFnXvmhsB6/j4EoHn5xB6ssbpGAxIg
|
||||
JwlY4bUrKXpaEgE7i4PYFb1q5asnTDdUZYAGAGXSBbDiUZM2YOe1aaFB/SA3Y3K2
|
||||
47brnx7UXhAXSPJ16EZHejSeFbzZfWgj2J1t3DLk18Fpi/5AxxIy/N5J38kcP7xZ
|
||||
RIcSV1QEasYUrHI9buhuJ87tikDBDFEIIeLZxlyeIdwmKrQ7Vzny5Ls94Wg+2UtI
|
||||
XFLDak5SEugdp3LmmTJaugF+s/OiglBVhcaosoKRXb4K29M7mQv2huEAerFA14Bd
|
||||
dp2KByd8ue+fJrAiSxhAyMDAe/uv0ixnmBBtMH0YYHbfUIgl+kR1Ns/bxrJu7T7F
|
||||
kBQWZV4NRbSRB+RGOG2/Ai5jxu0uLu3gtHMO4XzzElWqzHEDoQKCAQEAzfaSRA/v
|
||||
0831TDL8dmOCO61TQ9GtAa8Ouj+SdyTwk9f9B7NqQWg7qdkbQESpaDLvWYiftoDw
|
||||
mBFHLZe/8RHBaQpEAfbC/+DO6c7O+g1/0Cls33D5VaZOzFnnbHktT3r5xwkZfVBS
|
||||
aPPWl/IZOU8TtNqujQA+mmSnrJ7IuXSsBVq71xgBQT9JBZpUcjZ4eQducmtC43CP
|
||||
GqcSjq559ZKc/sa3PkAtNlKzSUS1abiMcJ86C9PgQ9gOu7y8SSqQ3ivZkVM99rxm
|
||||
wo8KehCcHOPOcIUQKmx4Bs4V3chm8rvygf3aanUHi83xaMeFtIIuOgAJmE9wGQeo
|
||||
k0UGvKBUDIenfwKCAQEAxfVFVxMBfI4mHrgTj/HOq7GMts8iykJK1PuELU6FZhex
|
||||
XOqXRbQ5dCLsyehrKlVPFqUENhXNHaOQrCOZxiVoRje2PfU/1fSqRaPxI7+W1Fsh
|
||||
Fq4PkdJ66NJZJkK5NHwE8SyQf+wpLdL3YhY5LM3tWdX5U9Rr6N8qelE3sLPssAak
|
||||
1km4/428+rkp1BlCffr3FyL0KJmOYfMiAr8m6hRZWbhkvm5YqX1monxUrKdFJ218
|
||||
dxzyniqoS1yU5RClY6783dql1UO4AvxpzpCPYDFIwbEb9zkUo0przhmi4KzyxknB
|
||||
/n/viMWzSnsM9YbakH6KunDTUteme1Dri3Drrq9TUQKCAQAVdvL7YOXPnxFHZbDl
|
||||
7azu5ztcQAfVuxa/1kw/WnwwDDx0hwA13NUK+HNcmUtGbrh/DjwG2x032+UdHUmF
|
||||
qCIN/mHkCoF8BUPLHiB38tw1J3wPNUjm4jQoG96AcYiFVf2d/pbHdo2AHplosHRs
|
||||
go89M+UpELN1h7Ppy4qDuWMME86rtfa7hArqKJFQbdjUVC/wgLkx1tMzJeJLOGfB
|
||||
bgwqiS8jr7CGjsvcgOqfH/qS6iU0glpG98dhTWQaA/OhE9TSzmgQxMW41Qt0eTKr
|
||||
2Bn1pAhxQ2im3Odue6ou9eNqJLiUi6nDqizUjKakj0SeCs71LqIyGZg58OGo2tSn
|
||||
kaOlAoIBAQCE/fO4vQcJpAJOLwLNePmM9bqAcoZ/9auKjPNO8OrEHPTGZMB+Tscu
|
||||
k+wa9a9RgICiyPgcUec8m0+tpjlAGo+EZRdlZqedWUMviCWQC74MKrD/KK9DG3IB
|
||||
ipfkEX2VmiBD2tm1Z3Z+17XlSuLci/iCmzNnM1XP3GYQSRIt/6Lq23vQjzTfU1z7
|
||||
4HwOh23Zb0qjW5NG12sFuS9HQx6kskkY8r2UBlRAggP686Z7W+EkzPSKnYMN6cCo
|
||||
6KkLf3RtlPlDHwq8TUOJlgSLhykbyeCEaDVOkSWhUnU8wJJheS+dMZ5IGbFWZOPA
|
||||
DQ02woOCAdG30ebXSBQL0uB8DL/52sYRAoIBAHtW3NomlxIMqWX8ZYRJIoGharx4
|
||||
ikTOR/jeETb9t//n6kV19c4ICiXOQp062lwEqFvHkKzxKECFhJZuwFc09hVxUXxC
|
||||
LJjvDfauHWFHcrDTWWbd25CNeZ4Sq79GKf+HJ+Ov87WYcjuBFlCh8ES+2N4WZGCn
|
||||
B5oBq1g6E4p1k6xA5eE6VRiHPuFH8N9t1x6IlCZvZBhuVWdDrDd4qMSDEUTlcxSY
|
||||
mtcAIXTPaPcdb3CjdE5a38r59x7dZ/Te2K7FKETffjSmku7BrJITz3iXEk+sn8ex
|
||||
o3mdnFgeQ6/hxvMGgdK2qNb5ER/s0teFjnfnwHuTSXngMDIDb3kLL0ecWlQ=
|
||||
-----END RSA PRIVATE KEY-----
|
|
@@ -0,0 +1,18 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


import pbr.version

version_info = pbr.version.VersionInfo('searchlight')
@@ -0,0 +1,53 @@
[metadata]
name = searchlight
version = 2015.1
summary = OpenStack Search Service
description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7

[entry_points]
console_scripts =
    searchlight-api = searchlight.cmd.api:main
    searchlight-control = searchlight.cmd.control:main
    searchlight-index = searchlight.cmd.index:main
oslo.config.opts =
    searchlight.api = searchlight.opts:list_api_opts
searchlight.index_backend =
    image = searchlight.elasticsearch.plugins.images:ImageIndex
    metadef = searchlight.elasticsearch.plugins.metadefs:MetadefIndex

[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source

[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0

[compile_catalog]
directory = searchlight/locale
domain = searchlight

[update_catalog]
domain = searchlight
output_dir = searchlight/locale
input_file = searchlight/locale/searchlight.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = searchlight/locale/searchlight.pot
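The `console_scripts` entries above follow setuptools' `name = module.path:attr` form; at install time each becomes a wrapper executable that imports the module and calls the named callable. A minimal sketch of that resolution (illustrative only; real installs resolve entry points through `pkg_resources` / `importlib.metadata`, and `load_entry_point` here is a hypothetical helper):

```python
import importlib


def load_entry_point(spec):
    """Resolve a 'name = module.path:attr' entry-point line the way a
    generated console-script wrapper would (toy version)."""
    name, target = (part.strip() for part in spec.split('='))
    module_path, attr = target.split(':')
    module = importlib.import_module(module_path)
    return name, getattr(module, attr)


# "searchlight-api = searchlight.cmd.api:main" would import
# searchlight.cmd.api and call main(); demonstrated here with a
# stdlib target since searchlight itself is not installed:
name, func = load_entry_point('myjoin = os.path:join')
```

The same mechanism powers the `oslo.config.opts` and `searchlight.index_backend` groups: consumers iterate the group and load each target lazily.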
@@ -0,0 +1,30 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)
@@ -0,0 +1,34 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

# Hacking already pins down pep8, pyflakes and flake8
hacking>=0.10.0,<0.11

# For translations processing
Babel>=1.3

# Needed for testing
coverage>=3.6
discover
fixtures>=0.3.14
mox3>=0.7.0
mock>=1.0
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
requests>=2.5.2
testrepository>=0.0.18
testtools>=0.9.36,!=1.2.0
psutil>=1.1.1,<2.0.0
oslotest>=1.5.1  # Apache-2.0
# Optional packages that should be installed when testing
MySQL-python
psycopg2
pysendfile>=2.0.0
qpid-python
xattr>=0.4

# Documentation
oslosphinx>=2.5.0  # Apache-2.0

# Glance catalog index
elasticsearch>=1.3.0
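Pins like `testtools>=0.9.36,!=1.2.0` are comma-separated clauses that must all hold for a candidate version. A toy checker for the plain numeric subset of that syntax (pre-release tags such as `1.3b1` are out of scope here; real resolution is pip's job):

```python
def satisfies(version, spec):
    """Evaluate a comma-separated pin such as '>=0.9.36,!=1.2.0'
    against a dotted version string (numeric components only)."""
    def parse(v):
        return tuple(int(part) for part in v.split('.'))

    actual = parse(version)
    for clause in spec.split(','):
        clause = clause.strip()
        # Longer operators first so '>=' is not misread as '>'.
        for op in ('>=', '<=', '!=', '==', '>', '<'):
            if clause.startswith(op):
                target = parse(clause[len(op):])
                passed = {'>=': actual >= target, '<=': actual <= target,
                          '!=': actual != target, '==': actual == target,
                          '>': actual > target, '<': actual < target}[op]
                if not passed:
                    return False
                break
    return True
```

For example, `satisfies('1.2.0', '>=0.9.36,!=1.2.0')` is `False` because the exclusion clause fails even though the lower bound holds.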
@@ -0,0 +1,330 @@
#!/usr/bin/env python

# Copyright (c) 2013, Nebula, Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Colorizer Code is borrowed from Twisted:
# Copyright (c) 2001-2010 Twisted Matrix Laboratories.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

"""Display a subunit stream through a colorized unittest test runner."""

import heapq
import sys
import unittest

import subunit
import testtools


class _AnsiColorizer(object):
    """A colorizer is an object that loosely wraps around a stream.

    That allows callers to write text to the stream in a particular color.
    Colorizer classes must implement C{supported()} and C{write(text, color)}.
    """
    _colors = dict(black=30, red=31, green=32, yellow=33,
                   blue=34, magenta=35, cyan=36, white=37)

    def __init__(self, stream):
        self.stream = stream

    @staticmethod
    def supported(stream=sys.stdout):
        """Method that checks if the current terminal supports coloring.

        Returns True or False.
        """
        if not stream.isatty():
            return False  # auto color only on TTYs
        try:
            import curses
        except ImportError:
            return False
        else:
            try:
                try:
                    return curses.tigetnum("colors") > 2
                except curses.error:
                    curses.setupterm()
                    return curses.tigetnum("colors") > 2
            except Exception:
                # guess false in case of error
                return False

    def write(self, text, color):
        """Write the given text to the stream in the given color.

        @param text: Text to be written to the stream.

        @param color: A string label for a color. e.g. 'red', 'white'.

        """
        color = self._colors[color]
        self.stream.write('\x1b[%s;1m%s\x1b[0m' % (color, text))


class _Win32Colorizer(object):
    """See _AnsiColorizer docstring."""
    def __init__(self, stream):
        import win32console
        red, green, blue, bold = (win32console.FOREGROUND_RED,
                                  win32console.FOREGROUND_GREEN,
                                  win32console.FOREGROUND_BLUE,
                                  win32console.FOREGROUND_INTENSITY)
        self.stream = stream
        self.screenBuffer = win32console.GetStdHandle(
            win32console.STD_OUT_HANDLE)
        self._colors = {
            'normal': red | green | blue,
            'red': red | bold,
            'green': green | bold,
            'blue': blue | bold,
            'yellow': red | green | bold,
            'magenta': red | blue | bold,
            'cyan': green | blue | bold,
            'white': red | green | blue | bold
        }

    @staticmethod
    def supported(stream=sys.stdout):
        try:
            import win32console
            screenBuffer = win32console.GetStdHandle(
                win32console.STD_OUT_HANDLE)
        except ImportError:
            return False
        import pywintypes
        try:
            screenBuffer.SetConsoleTextAttribute(
                win32console.FOREGROUND_RED |
                win32console.FOREGROUND_GREEN |
                win32console.FOREGROUND_BLUE)
        except pywintypes.error:
            return False
        else:
            return True

    def write(self, text, color):
        color = self._colors[color]
        self.screenBuffer.SetConsoleTextAttribute(color)
        self.stream.write(text)
        self.screenBuffer.SetConsoleTextAttribute(self._colors['normal'])


class _NullColorizer(object):
    """See _AnsiColorizer docstring."""
    def __init__(self, stream):
        self.stream = stream

    @staticmethod
    def supported(stream=sys.stdout):
        return True

    def write(self, text, color):
        self.stream.write(text)


def get_elapsed_time_color(elapsed_time):
    if elapsed_time > 1.0:
        return 'red'
    elif elapsed_time > 0.25:
        return 'yellow'
    else:
        return 'green'


class SubunitTestResult(testtools.TestResult):
    def __init__(self, stream, descriptions, verbosity):
        super(SubunitTestResult, self).__init__()
        self.stream = stream
        self.showAll = verbosity > 1
        self.num_slow_tests = 10
        self.slow_tests = []  # this is a fixed-sized heap
        self.colorizer = None
        # NOTE(vish): reset stdout for the terminal check
        stdout = sys.stdout
        sys.stdout = sys.__stdout__
        for colorizer in [_Win32Colorizer, _AnsiColorizer, _NullColorizer]:
            if colorizer.supported():
                self.colorizer = colorizer(self.stream)
                break
        sys.stdout = stdout
        self.start_time = None
        self.last_time = {}
        self.results = {}
        self.last_written = None

    def _writeElapsedTime(self, elapsed):
        color = get_elapsed_time_color(elapsed)
        self.colorizer.write("  %.2f" % elapsed, color)

    def _addResult(self, test, *args):
        try:
            name = test.id()
        except AttributeError:
            name = 'Unknown.unknown'
        test_class, test_name = name.rsplit('.', 1)

        elapsed = (self._now() - self.start_time).total_seconds()
        item = (elapsed, test_class, test_name)
        if len(self.slow_tests) >= self.num_slow_tests:
            heapq.heappushpop(self.slow_tests, item)
        else:
            heapq.heappush(self.slow_tests, item)

        self.results.setdefault(test_class, [])
        self.results[test_class].append((test_name, elapsed) + args)
        self.last_time[test_class] = self._now()
        self.writeTests()

    def _writeResult(self, test_name, elapsed, long_result, color,
                     short_result, success):
        if self.showAll:
            self.stream.write('    %s' % str(test_name).ljust(66))
            self.colorizer.write(long_result, color)
            if success:
                self._writeElapsedTime(elapsed)
            self.stream.writeln()
        else:
            self.colorizer.write(short_result, color)

    def addSuccess(self, test):
        super(SubunitTestResult, self).addSuccess(test)
        self._addResult(test, 'OK', 'green', '.', True)

    def addFailure(self, test, err):
        if test.id() == 'process-returncode':
            return
        super(SubunitTestResult, self).addFailure(test, err)
        self._addResult(test, 'FAIL', 'red', 'F', False)

    def addError(self, test, err):
        super(SubunitTestResult, self).addError(test, err)
        self._addResult(test, 'ERROR', 'red', 'E', False)

    def addSkip(self, test, reason=None, details=None):
        super(SubunitTestResult, self).addSkip(test, reason, details)
        self._addResult(test, 'SKIP', 'blue', 'S', True)

    def startTest(self, test):
        self.start_time = self._now()
        super(SubunitTestResult, self).startTest(test)

    def writeTestCase(self, cls):
        if not self.results.get(cls):
            return
        if cls != self.last_written:
            self.colorizer.write(cls, 'white')
            self.stream.writeln()
        for result in self.results[cls]:
            self._writeResult(*result)
        del self.results[cls]
        self.stream.flush()
        self.last_written = cls

    def writeTests(self):
        time = self.last_time.get(self.last_written, self._now())
        if not self.last_written or (self._now() - time).total_seconds() > 2.0:
            diff = 3.0
            while diff > 2.0:
                classes = self.results.keys()
                oldest = min(classes, key=lambda x: self.last_time[x])
                diff = (self._now() - self.last_time[oldest]).total_seconds()
                self.writeTestCase(oldest)
        else:
            self.writeTestCase(self.last_written)

    def done(self):
        self.stopTestRun()

    def stopTestRun(self):
        for cls in list(self.results.iterkeys()):
            self.writeTestCase(cls)
        self.stream.writeln()
        self.writeSlowTests()

    def writeSlowTests(self):
        # Pare out 'fast' tests
        slow_tests = [item for item in self.slow_tests
                      if get_elapsed_time_color(item[0]) != 'green']
        if slow_tests:
            slow_total_time = sum(item[0] for item in slow_tests)
            slow = ("Slowest %i tests took %.2f secs:"
                    % (len(slow_tests), slow_total_time))
            self.colorizer.write(slow, 'yellow')
            self.stream.writeln()
            last_cls = None
            # sort by name
            for elapsed, cls, name in sorted(slow_tests,
                                             key=lambda x: x[1] + x[2]):
                if cls != last_cls:
                    self.colorizer.write(cls, 'white')
                    self.stream.writeln()
                last_cls = cls
                self.stream.write('    %s' % str(name).ljust(68))
                self._writeElapsedTime(elapsed)
                self.stream.writeln()

    def printErrors(self):
        if self.showAll:
            self.stream.writeln()
        self.printErrorList('ERROR', self.errors)
        self.printErrorList('FAIL', self.failures)

    def printErrorList(self, flavor, errors):
        for test, err in errors:
            self.colorizer.write("=" * 70, 'red')
            self.stream.writeln()
            self.colorizer.write(flavor, 'red')
            self.stream.writeln(": %s" % test.id())
            self.colorizer.write("-" * 70, 'red')
            self.stream.writeln()
            self.stream.writeln("%s" % err)


test = subunit.ProtocolTestCase(sys.stdin, passthrough=None)

if sys.version_info[0:2] <= (2, 6):
    runner = unittest.TextTestRunner(verbosity=2)
else:
    runner = unittest.TextTestRunner(
        verbosity=2, resultclass=SubunitTestResult)

if runner.run(test).wasSuccessful():
    exit_code = 0
else:
    exit_code = 1
sys.exit(exit_code)
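The `slow_tests` attribute in `SubunitTestResult._addResult` above is a fixed-size min-heap: once it holds `num_slow_tests` entries, `heapq.heappushpop` pushes the new elapsed time and immediately evicts the smallest survivor, so only the largest N times remain. A standalone sketch of the same pattern:

```python
import heapq


def track_slowest(durations, n=3):
    """Keep only the n largest elapsed times seen so far, mirroring
    the fixed-size heap used by SubunitTestResult._addResult."""
    heap = []
    for elapsed in durations:
        if len(heap) >= n:
            # Push the new value, then pop (discard) the smallest.
            heapq.heappushpop(heap, elapsed)
        else:
            heapq.heappush(heap, elapsed)
    return sorted(heap, reverse=True)
```

This keeps the bookkeeping O(log n) per test instead of sorting the full result list at the end.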
@@ -0,0 +1,73 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Copyright 2010 OpenStack Foundation
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Installation script for Glance's development virtualenv
"""

from __future__ import print_function

import os
import sys

import install_venv_common as install_venv  # noqa


def print_help():
    help = """
Glance development environment setup is complete.

Glance development uses virtualenv to track and manage Python dependencies
while in development and testing.

To activate the Glance virtualenv for the extent of your current shell session
you can run:

$ source .venv/bin/activate

Or, if you prefer, you can run commands in the virtualenv on a case by case
basis by running:

$ tools/with_venv.sh <your command>

Also, make test will automatically use the virtualenv.
"""
    print(help)


def main(argv):
    root = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
    venv = os.path.join(root, '.venv')
    pip_requires = os.path.join(root, 'requirements.txt')
    test_requires = os.path.join(root, 'test-requirements.txt')
    py_version = "python%s.%s" % (sys.version_info[0], sys.version_info[1])
    project = 'Glance'
    install = install_venv.InstallVenv(root, venv, pip_requires, test_requires,
                                       py_version, project)
    options = install.parse_args(argv)
    install.check_python_version()
    install.check_dependencies()
    install.create_virtualenv(no_site_packages=options.no_site_packages)
    install.install_dependencies()
    install.run_command([os.path.join(venv, 'bin/python'),
                         'setup.py', 'develop'])
    print_help()

if __name__ == '__main__':
    main(sys.argv)
@@ -0,0 +1,172 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Provides methods needed by installation script for OpenStack development
virtual environments.

Since this script is used to bootstrap a virtualenv from the system's Python
environment, it should be kept strictly compatible with Python 2.6.

Synced in from openstack-common
"""

from __future__ import print_function

import optparse
import os
import subprocess
import sys


class InstallVenv(object):

    def __init__(self, root, venv, requirements,
                 test_requirements, py_version,
                 project):
        self.root = root
        self.venv = venv
        self.requirements = requirements
        self.test_requirements = test_requirements
        self.py_version = py_version
        self.project = project

    def die(self, message, *args):
        print(message % args, file=sys.stderr)
        sys.exit(1)

    def check_python_version(self):
        if sys.version_info < (2, 6):
            self.die("Need Python Version >= 2.6")

    def run_command_with_code(self, cmd, redirect_output=True,
                              check_exit_code=True):
        """Runs a command in an out-of-process shell.

        Returns the output of that command. Working directory is self.root.
        """
        if redirect_output:
            stdout = subprocess.PIPE
        else:
            stdout = None

        proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout)
        output = proc.communicate()[0]
        if check_exit_code and proc.returncode != 0:
            self.die('Command "%s" failed.\n%s', ' '.join(cmd), output)
        return (output, proc.returncode)

    def run_command(self, cmd, redirect_output=True, check_exit_code=True):
        return self.run_command_with_code(cmd, redirect_output,
                                          check_exit_code)[0]

    def get_distro(self):
        if (os.path.exists('/etc/fedora-release') or
                os.path.exists('/etc/redhat-release')):
            return Fedora(
                self.root, self.venv, self.requirements,
                self.test_requirements, self.py_version, self.project)
        else:
            return Distro(
                self.root, self.venv, self.requirements,
                self.test_requirements, self.py_version, self.project)

    def check_dependencies(self):
        self.get_distro().install_virtualenv()

    def create_virtualenv(self, no_site_packages=True):
        """Creates the virtual environment and installs PIP.

        Creates the virtual environment and installs PIP only into the
        virtual environment.
        """
        if not os.path.isdir(self.venv):
            print('Creating venv...', end=' ')
            if no_site_packages:
                self.run_command(['virtualenv', '-q', '--no-site-packages',
                                  self.venv])
            else:
                self.run_command(['virtualenv', '-q', self.venv])
            print('done.')
        else:
            print("venv already exists...")
            pass

    def pip_install(self, *args):
        self.run_command(['tools/with_venv.sh',
                          'pip', 'install', '--upgrade'] + list(args),
                         redirect_output=False)

    def install_dependencies(self):
        print('Installing dependencies with pip (this can take a while)...')

        # First things first, make sure our venv has the latest pip and
        # setuptools and pbr
        self.pip_install('pip>=1.4')
        self.pip_install('setuptools')
        self.pip_install('pbr')

        self.pip_install('-r', self.requirements, '-r', self.test_requirements)

    def parse_args(self, argv):
        """Parses command-line arguments."""
        parser = optparse.OptionParser()
        parser.add_option('-n', '--no-site-packages',
                          action='store_true',
                          help="Do not inherit packages from global Python "
                               "install.")
        return parser.parse_args(argv[1:])[0]


class Distro(InstallVenv):

    def check_cmd(self, cmd):
        return bool(self.run_command(['which', cmd],
                                     check_exit_code=False).strip())

    def install_virtualenv(self):
        if self.check_cmd('virtualenv'):
            return

        if self.check_cmd('easy_install'):
            print('Installing virtualenv via easy_install...', end=' ')
            if self.run_command(['easy_install', 'virtualenv']):
                print('Succeeded')
                return
            else:
                print('Failed')

        self.die('ERROR: virtualenv not found.\n\n%s development'
                 ' requires virtualenv, please install it using your'
                 ' favorite package management tool' % self.project)


class Fedora(Distro):
    """This covers all Fedora-based distributions.

    Includes: Fedora, RHEL, CentOS, Scientific Linux
    """

    def check_pkg(self, pkg):
        return self.run_command_with_code(['rpm', '-q', pkg],
                                          check_exit_code=False)[1] == 0

    def install_virtualenv(self):
        if self.check_cmd('virtualenv'):
            return

        if not self.check_pkg('python-virtualenv'):
            self.die("Please install 'python-virtualenv'.")

        super(Fedora, self).install_virtualenv()
@@ -0,0 +1,119 @@
#!/usr/bin/env python
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

import keystoneclient.v2_0.client
from oslo_config import cfg
from oslo_log import log as logging

import glance.context
import glance.db.sqlalchemy.api as db_api
from glance import i18n
import glance.registry.context

_ = i18n._
_LC = i18n._LC
_LE = i18n._LE
_LI = i18n._LI

LOG = logging.getLogger(__name__)
LOG.addHandler(logging.StreamHandler())
LOG.setLevel(logging.DEBUG)


def get_owner_map(ksclient, owner_is_tenant=True):
    if owner_is_tenant:
        entities = ksclient.tenants.list()
    else:
        entities = ksclient.users.list()
    # build mapping of (user or tenant) name to id
    return dict([(entity.name, entity.id) for entity in entities])


def build_image_owner_map(owner_map, db, context):
    image_owner_map = {}
    for image in db.image_get_all(context):
        image_id = image['id']
        owner_name = image['owner']

        if not owner_name:
            LOG.info(_LI('Image %s has no owner. Skipping.') % image_id)
            continue

        try:
            owner_id = owner_map[owner_name]
        except KeyError:
            msg = _LE('Image "%(image)s" owner "%(owner)s" was not found. '
                      'Skipping.')
            LOG.error(msg, {'image': image_id, 'owner': owner_name})
            continue

        image_owner_map[image_id] = owner_id

        msg = _LI('Image "%(image)s" owner "%(owner)s" -> "%(owner_id)s"')
        LOG.info(msg, {'image': image_id, 'owner': owner_name,
                       'owner_id': owner_id})

    return image_owner_map


def update_image_owners(image_owner_map, db, context):
    for (image_id, image_owner) in image_owner_map.items():
        db.image_update(context, image_id, {'owner': image_owner})
        LOG.info(_LI('Image %s successfully updated.') % image_id)


if __name__ == "__main__":
    config = cfg.CONF
    extra_cli_opts = [
        cfg.BoolOpt('dry-run',
                    help='Print output but do not make db changes.'),
        cfg.StrOpt('keystone-auth-uri',
                   help='Authentication endpoint'),
        cfg.StrOpt('keystone-admin-tenant-name',
                   help='Administrative user\'s tenant name'),
        cfg.StrOpt('keystone-admin-user',
                   help='Administrative user\'s id'),
        cfg.StrOpt('keystone-admin-password',
                   help='Administrative user\'s password',
                   secret=True),
    ]
    config.register_cli_opts(extra_cli_opts)
    config(project='glance', prog='glance-registry')

    db_api.configure_db()

    context = glance.context.RequestContext(is_admin=True)

    auth_uri = config.keystone_auth_uri
    admin_tenant_name = config.keystone_admin_tenant_name
    admin_user = config.keystone_admin_user
    admin_password = config.keystone_admin_password

    if not (auth_uri and admin_tenant_name and admin_user and admin_password):
        LOG.critical(_LC('Missing authentication arguments'))
        sys.exit(1)

    ks = keystoneclient.v2_0.client.Client(username=admin_user,
                                           password=admin_password,
                                           tenant_name=admin_tenant_name,
                                           auth_url=auth_uri)

    owner_map = get_owner_map(ks, config.owner_is_tenant)
    image_updates = build_image_owner_map(owner_map, db_api, context)
    if not config.dry_run:
        update_image_owners(image_updates, db_api, context)
@@ -0,0 +1,7 @@
#!/bin/bash
TOOLS_PATH=${TOOLS_PATH:-$(dirname $0)}
VENV_PATH=${VENV_PATH:-${TOOLS_PATH}}
VENV_DIR=${VENV_NAME:-/../.venv}
TOOLS=${TOOLS_PATH}
VENV=${VENV:-${VENV_PATH}/${VENV_DIR}}
source ${VENV}/bin/activate && "$@"
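`with_venv.sh` leans on the `${VAR:-default}` expansion so every path can be overridden from the environment while still working out of the box. A quick demonstration of that expansion rule:

```shell
# ${VAR:-default} yields the default only when VAR is unset or empty;
# an explicitly set value always wins.
unset VENV_PATH
echo "${VENV_PATH:-tools}"     # falls back to the default
VENV_PATH=/opt/venvs
echo "${VENV_PATH:-tools}"     # the explicit value wins
```

This is why `VENV=${VENV:-${VENV_PATH}/${VENV_DIR}}` lets a caller point at any pre-built virtualenv by exporting `VENV` before running the script.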
@@ -0,0 +1,56 @@
[tox]
minversion = 1.6
envlist = py27,py33,py34,pep8
skipsdist = True

[testenv]
setenv = VIRTUAL_ENV={envdir}
usedevelop = True
install_command = pip install -U {opts} {packages}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = lockutils-wrapper python setup.py testr --slowest --testr-args='{posargs}'
whitelist_externals = bash

[tox:jenkins]
downloadcache = ~/cache/pip

[testenv:pep8]
commands =
  flake8 {posargs}
  # Check that .po and .pot files are valid:
  bash -c "find glance -type f -regex '.*\.pot?' -print0|xargs -0 -n 1 msgfmt --check-format -o /dev/null"

[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
commands = python setup.py testr --coverage --testr-args='^(?!.*test.*coverage).*$'

[testenv:venv]
commands = {posargs}

[testenv:genconfig]
commands =
  oslo-config-generator --config-file etc/oslo-config-generator/glance-api.conf
  oslo-config-generator --config-file etc/oslo-config-generator/glance-registry.conf
  oslo-config-generator --config-file etc/oslo-config-generator/glance-scrubber.conf
  oslo-config-generator --config-file etc/oslo-config-generator/glance-cache.conf
  oslo-config-generator --config-file etc/oslo-config-generator/glance-manage.conf
  oslo-config-generator --config-file etc/oslo-config-generator/glance-search.conf

[testenv:docs]
commands = python setup.py build_sphinx

[flake8]
# TODO(dmllr): Analyze or fix the warnings blacklisted below
# E711  comparison to None should be 'if cond is not None:'
# E712  comparison to True should be 'if cond is True:' or 'if cond:'
# H302  import only modules
# H402  one line docstring needs punctuation.
# H404  multi line docstring should start with a summary
# H405  multi line docstring summary not separated with an empty line
# H904  Wrap long lines in parentheses instead of a backslash
ignore = E711,E712,H302,H402,H404,H405,H904
exclude = .venv,.git,.tox,dist,doc,etc,*glance/locale*,*openstack/common*,*lib/python*,*egg,build

[hacking]
local-check-factory = glance.hacking.checks.factory